Categories
(Modulate This) Interviews Sound Design TBT

#TBT Interview with Composer Reinhold Heil on His Work on The Helix TV Series on SyFy


I’m enjoying watching season 1 of the SyFy series Helix on Netflix for a second time. Back in 2014 I interviewed the show’s composer Reinhold Heil about his amazing score. I thought it would be fun to post a link to the interview here as a “Throwback Thursday”. Since I interviewed Reinhold he’s gone on to compose for:

Read the original interview here

https://modulatethis.com/2014/05/02/modulate-this-interview-composer-reinhold-heil-helix-tv-series-syfy-music/

Enjoy,

Mark Mosher
Synthesist, Electronic Musician, Producer
Boulder, CO
ModulateThis.com


If you found this article inspiring or informative click here.

 

Categories
(Mark Mosher Music News) (Modulate This) Featured Interviews Helix Interviews Reinhold Heil Soundtracks

Modulate This! Interview with Composer Reinhold Heil on His Work on The Helix TV Series on SyFy

I’m a HUGE fan of the television show Helix and especially love the score which is composed by Golden Globe-nominated composer Reinhold Heil. Here is a brief bio from Reinhold’s official web site.

A multi-instrumentalist with a broad musical range, he first came to prominence as the keyboarder of the legendary German punk band, the Nina Hagen Band, and as a producer of international pop stars. His film and television credits include Run Lola Run, One Hour Photo, The International, Perfume: The Story of a Murderer, Deadwood, Without a Trace, and the epic adventure-drama Cloud Atlas. He is currently scoring Helix for Syfy. He lives and works in downtown Los Angeles.

I googled around a bit and found no interviews on Reinhold’s work on Helix, so I reached out to him with some questions about his work on the show. He was kind enough to take time out of his busy schedule to answer them and offer a fascinating behind-the-scenes look at his work on the show. Note: one of his answers contains spoilers, so I wrapped it with **** Begin Spoiler Alert **** and **** End Spoiler Alert ****.


Mark Mosher: How did you get involved with Helix?

Reinhold Heil: My agent asked me to submit a demo and I did. I put a lot of effort in that because I love the genre and really wanted to show what I have to offer. Apparently they liked the demo and gave me the job without an interview. They must have been swamped with the shoot that had just started in Montreal, so most of them weren’t even in town. As it turned out they were all wonderful to work with and I had a lot of fun doing the series.

Watch Season 1 Trailer

Mark Mosher: When did you start working on the show?

Reinhold Heil: On Helix I [started] developing material in August 2013, while they were assembling the first episode. So there was definitely an early involvement, but it was already inspired by the look of the show and the characters.

Mark Mosher: There are some very happy – dare I say – “elevator music” style and old Wurlitzer organ/drum machine stylings in the show. Do you use vintage gear for these cues (and if so, what gear), or are you using virtual instruments or libraries?

Reinhold Heil: Funnily enough, most people don’t understand that I have mostly nothing to do with the elevator music. It becomes very obvious when they are using classics like “Road to San José” or “Fever”, but the only easy-listening pieces I actually contributed to Helix are the main and the end title. And I did the adaptations of the two pieces from Tchaikovsky’s Nutcracker that happen in episode 6.

I’m not involved in the selection, but check out the two transitions into “Fever”. They are pretty smooth and I did work hard on those. I tried to have the score segue seamlessly into the source pieces as often as I could. Some of them are exceptionally well chosen and used to great effect, but the guys in the writers’ room, showrunner Steve Maeda, and producer Stephen Welke are the people to credit for that.

Mark Mosher:  These twisted “happy” cues are so great and act as an emotional signal to viewers that very bad things are just about to happen. It’s such a clever idea and it makes the hair stand up on the back of my neck when I hear any happy music in the show – lol. How did this idea for using happy and lounge sounding orchestrations come about and evolve?

Categories
(Modulate This) Featured Interviews Gary Numan Interviews TBT

Modulate This! Interview with Gary Numan

Gary Numan will be playing Denver tonight (April 4, 2014) at the Gothic Theatre. The show starts at 8pm. He’s currently on tour supporting his fantastic album Splinter (Songs from a Broken Mind). I caught the show at the Mountain Oasis Festival 2013 and will be at the show tonight. It’s an incredible show, so if you are in the Denver area, come on down.

For those readers not in Denver, and for those who have not yet bought the album, visit http://www.numan.co.uk for more information on the album and tour.

Gary was kind enough to take time out of his schedule to answer a few questions for Modulate This!


Mark Mosher: I love the amazing amount of sonic space and dynamic range in the mix of Splinter. I especially like how your vocals are right up front and you can hear the amazing detail in the music and sound design. Even when songs like “Who Are You” are running at full tilt, the mix has enough sonic space so you can make out interesting sound elements like scraping metallic noises and such. Can you shed some light on the overall development of Splinter and your collaborative process with producer Ade Fenton to create an album with such drive and emotion without losing all the sonic detail?

Click here to listen on Spotify

Gary Numan: No special tricks or processes were employed to get the album to sound the way it does, just a lot of attention to detail and care. Ade worked very closely with Nathan Boddy on the mixing at their respective studios in the UK, and those mixes were sent over to me for feedback. There was a lot of communication and discussion obviously as things progressed. The songwriting part of it is fairly simple. I start with a piano and work out the melody and structure. When I’m happy with that I turn to the technology and begin to flesh the song out, building the dynamics and mood. A rough guide vocal without real words follows so that I can get the phrasing exactly right without trying to squeeze in lyrics that don’t really fit, then, when I’m happy with that, the actual lyrics, then the final vocal. At this stage I will have a fairly well developed demo that gives Ade the guidance he needs to know where I see the song going. Those files are then sent to Ade and the production part of it begins. From then on it’s a lot of to and fro as we move the song forward. We do argue but it’s rarely angry, we’re always trying to get the song as good as it can be rather than win a contest between us. It’s quite difficult to comment on the way we work as being anything unusual because it really isn’t. I write the songs and create reasonably high-quality demos, Ade makes them sound much better and, quite often, will take the song in a new direction. Sometimes that works, sometimes not, but I’m always happy to try out his ideas and see where they take us.

Mark Mosher: My favorite track on the album is “A Shadow Falls on Me”. It has such an interesting arrangement: the non-vocal elements of the song are conjured up in a wake behind your vocals. The end result is that you really pull the listener along and make them anticipate what’s coming next. Was pulling the listener along with the vocals and melody a conscious idea from the beginning, or something that happened as you developed the song?

Gary Numan: Yes, pretty much. The idea was to build the song with each new vocal section, increasing the level of emotion and power at each step. Ade came up with a huge drum part that was great and changed things considerably but it was just too much to have running from start to finish so we adapted it and used the idea to build an even bigger series of steps, following the original idea but with a greater shift in power and emotion each time. Interestingly the vocal line started out as my first attempt to collaborate with the band Battles. They weren’t too keen on my first vocal idea for their My Machines song so I used it on A Shadow Falls On Me instead.

Mark Mosher: There are some amazing textures and sound elements on Splinter. What’s your creative process for creating unique sounds to support your songwriting?

Gary Numan: Sounds can come from anywhere. Walking around the street with a recorder kicking things, slamming things, scraping, dragging, whatever. Using software packages like Omnisphere and Massive, whispering words and phrases and then manipulating those sounds beyond recognition, recording journeys, trains, cars, absolutely anything and everything, and then finding ways to mess with those source sounds until you have something you’ve never heard before. There is no process as such, just a real pleasure from finding new ways to create new sounds.

Mark Mosher: There is a fantastic video on the Nine Inch Nails YouTube channel where you make a surprise appearance and perform “Metal” (https://www.youtube.com/watch?v=ehMqEXUspfs) back in 2009. You’ve gone on to share the bill with NIN for a series of concerts, and NIN guitarist Robin Finck both plays on Splinter and has played with your touring band. Can you tell us more about the Gary Numan-Trent Reznor-NIN connection and perhaps how this connection has deepened since you moved to LA?

Gary Numan: Trent came to see us at a show I was playing in Baton Rouge many years ago, this was when he was making The Fragile. He brought with him a copy of a song of mine that he had covered called Metal which was fantastic. After that, whenever NIN played in London I would go and see them and we would meet up briefly for a chat. Then in 2009 I was invited to join them on stage at their O2 gig in London, then to do the same thing when they played the last four shows of that version of NIN in Los Angeles that same year. When I moved to Los Angeles Trent wrote the first of my testimonial letters for the US authorities which really helped. As soon as we moved to the US he invited us to his house a couple of times and made us feel very welcome, then the recent shows and some other social things. He’s been a good friend, in his own, quiet way, on several levels and I’m very grateful to him.

Mark Mosher: I was in attendance at your interview at the Mountain Oasis 2013 Festival in Asheville, NC with Geary Yelton for Keyboard Magazine. In that interview, you mentioned that since you’ve now moved to Los Angeles, that you were hoping to get involved with some film soundtracks. Can you give us an update on any developments in this area of your career?

Gary Numan: It’s a very cautious thing for me. The musical side of that idea is very exciting but the political side of it, or at least the horror stories I’ve heard about it, is really quite daunting, so I’m not sure whether it will suit me or not. I’m just finishing my first film score, which I co-wrote with Ade Fenton on this occasion, for an animated movie called From Inside. A grim and heavy story about a pregnant girl’s journey on a mysterious train after the world has been destroyed. It has been a gentle first step into writing scores for both of us and again, I’m very grateful to the people involved, John Bergin the Director, and Brian McNellis the Producer, for giving me the opportunity and for making it a stress free project. We’ll see where it goes from here.

Mark Mosher: Rather than fall back on “nostalgia” you have really pushed the envelope to try new ideas throughout your career. Do you have any advice for Modulate This readers on how to take the “long view” of their craft and their music careers?

Gary Numan: I’ve always been aware that everything you do today will stick to you in the future so you must be very careful. You need to think about how today’s actions will be perceived in the coming years. Will they hurt your reputation, weaken your fan base? Are you doing things now for short term gain that might kill your career growth in the coming years? I’ve made some terrible mistakes over the years but the thing that has always been important to me is never to rely or dwell on past glories, no matter how big they might be. Try to move forward musically with every album, don’t be afraid to try new things, constantly, and avoid nostalgia at all costs. Of course, if you just want to be rich then milk the nostalgia route for all it’s worth. Plenty of people make very good livings by simply repeating things they did decades ago but I think that’s a pretty empty way to look at creativity. Write music because you genuinely love what you are doing, not because you think it might get you on the radio or keep the record label happy. I went through a period of writing ‘strategically’ and the music suffered and I did nothing that I’m proud of or still play today. It was soul destroying actually and almost ruined my career. For the first part of my career, and certainly for the last 20 years, I’ve written songs with no thoughts at all about how they might achieve commercial success. I want that of course, but you must NOT try to design your music to achieve it. Write what’s in your heart, what you love, and then hope for the best as far as commercial success is concerned.


Special thanks to the fantastic photographer and musician Rod Tanaka for coordinating this interview.

Mark Mosher
Electronic Musician Boulder, CO
www.ModulateThis.com
www.MarkMosherMusic.com
MarkMosher.Bandcamp.com

Categories
(Modulate This) Interviews Music Monday

Music Monday: An Interview with Kent Barton (aka SEVEN7HWAVE) on His New Concept Album “CYBERIA”


Denver artist Kent Barton (aka SEVEN7HWAVE) just released a new concept album called CYBERIA. I first met Kent at the Ableton Colorado User Group a few years back, when he was just starting down the path to creating this album, so I thought it would be interesting to hear about his creative process. Oh, and Kent is also a member of the new Boulder Synthesizer Meetup.

First I’ll offer some links to the album, then the interview, followed by Kent’s social links. Kent is offering this new album “name your price” over on Bandcamp, and as always I encourage buying it to show your support.

CYBERIA Album

Hong Kong: 2050 A.D. You're about to inject a dose of mind-altering nanobots. This is the soundtrack to your trip.

http://bandcamp.com/EmbeddedPlayer/v=2/album=1415150382/size=grande3/bgcol=FFFFFF/linkcol=4285BB/

Concept and Production: Kent Barton
Mastering: Tarekith at Inner Portal Studio
Vocals on Brain Zaps: Brittany Patterson
Field Recordings on No Passengers, Kowloon Bay, and Brain Zaps: swuing
Field Recording on 0100000101001001: James Tobin
Muse: Brittany Patterson
Creative Inspiration: Mark Mosher, Marc Wei, Matt Stampfle, and the Denver Ableton User Group

Interview with Kent Barton

Mark: Tell us a little bit about your musical background. What instruments do you play and how did you first get interested in electronic music?

Kent: I had some formal classical training on the violin as a child. Even though I got tired of the instrument by middle school, it did a good job of wiring my brain for music. Fast-forward to the start of college, and I decided to pick up the guitar to emulate my metal heroes. That was my introduction to the world of songwriting, bands, live shows, and the search for the perfect tone.

Back around 2004, I discovered the Trance station on Shoutcast (!). A year or two later I got my first proper introduction to electronic music, clubs, and raves, with artists like Ferry Corsten, Junkie XL, and Infected Mushroom. Eventually my waning interest in playing intricate guitar riffs was replaced by a newfound lust for producing music.

Mark: What inspired you to create an album about “mind altering nanobots” in 2050 A.D.?

Kent: I’ve always been a sci-fi freak with part of my brain permanently lodged in the future. Blade Runner was an obvious inspiration here, along with the cyberpunk movement. But it’s also a commentary on where we are today, and where we could be headed. Technology is a double-edged sword; it can liberate us or imprison us. The internet connects us all, but it’s also a giant Big Brother machine. These two opposing forces will be even more important in the future, as computers get smaller, faster, and implanted into our bodies.

Creatively, I was inspired by Reboot and I Hear Your Signals (editor’s note – I did not bribe Kent to say this :^) ). The idea of a badass album telling a story has been around for a long time (Operation: Mindcrime, I’m looking in your direction), but it never dawned on me to use the same technique for electronica until hearing these two albums.
 
Mark: What role did Ableton Live play in your creative and production process?

Kent: Occasionally I’d go lo-fi and hammer out a melody or chord progression on my guitar. But other than that, Ableton was the centerpiece of everything, from sketching out ideas to recording to arrangement to mixing. People keep bitching about when Live 9 is coming out. I honestly don’t care; the current version is powerful enough to do everything I want to do.
 
Mark: What were your go-to synthesizers for this project and what is it you like about them?

Kent: My mainstays were…

Sylenth1: When I think bass, I think old-school West Coast hip-hop smooth-ass warm sub. That’s what I was aiming for, and Sylenth delivered. I also used it for the pad sound on “Vimanas,” which was my obligatory nod to Vangelis.

Peach: This freeware synth from Tweakbench had exactly the chiptune sound I wanted for this album. Pure NES awesomeness… and it sounds even better with some spatial FX slathered on.

Plogue Chipsounds: I re-sampled Chipsounds for a lot of FX, as well as the main bleep lead on 0100000101001001. It’s an 8-bit emulation powerhouse.

Mark: There is a consistent palette throughout the album which helps give listeners a sense of the “universe” the story takes place in.  Did you have a sense of the palette from the beginning, or did this evolve as the production progressed?

Kent: Early on, I stumbled across a collection of incredible field recordings someone made while traveling in Hong Kong. This inspired the setting of the album. As I was writing, these served as the “glue” between each track. I also started with a simple equation that I thought might yield awesome results: Chiptune + Strings + Guitar – fusing the organic and electronic. But as the album evolved, I found myself downplaying the guitar element and bringing in more synth.
 
Mark: I love how you modulated the arpeggiator speeds in “No Passengers” and also changed the glitch speeds in “Brain Zaps”. Did you record real-time automation for this or use automation envelopes?

Kent: The changing arp speed on “No Passengers” was recorded in one pass. I like to limit myself to one or two takes to capture the moment and avoid endless tweaking. “Brain Zaps” was one of those cases where I forgot to connect a controller while arranging the track. Rather than stop the workflow, I just drew in the glitches by hand.
 
Mark: How do you feel composing against a story line helped you keep the project moving to completion?

Kent: Having a storyline was incredibly helpful. It created a common thread throughout the songs, and added visual elements to the creative process. Sometimes I felt like a movie director, rather than a producer. Creating an environment and living in it was also a huge help – especially when I was stuck and didn’t know where to go next.

I can’t recommend this enough. If you’re a producer looking for inspiration that can drive an entire collection of songs, try thinking of a story to tell. You don’t need a deep plot or characters. Just a simple concept is enough to fuel that creative spark.

Mark: What is your next musical project?

Kent: I’m working with an incredible animator/visual artist on a video for “No Passengers.” I’ll be releasing that shortly.

I freak out if I’m not writing, so I’m also cobbling together the building blocks for my next album. I feel like I’ve found my own sound with CYBERIA. Now I’m excited to evolve it and take it in new directions.

Links for SEVEN7HWAVE

Mark Mosher
Electronic Musician, Boulder CO
http://www.ModulateThis.com
www.MarkMosherMusic.com

Categories
(Modulate This) Guest Appearances, Interviews, Awards Interviews

The Tables Are Turned. My Interview on Sound Design Live Podcast “Turning Technology Into Performance”


I was recently interviewed by Berkeley, CA sound designer Nathan Lively on his podcast Sound Design Live, which has over 10,000 followers! This was a nice turn, as I’m normally the one doing the interviewing :^).

You can read the original post “Turning Technology Into Performance” here, listen to the full 26-minute interview using the player below, or download the MP3 here.

In the interview I offer a lot of behind-the-scenes notes on how I got started with electronic music, the Modulate This! blog, sound design work on Patchlab, working with tangible interfaces such as AudioCubes, social media for artists, and my open-knowledge 9 Box method, which is being used as the basis for a K-6 electronic music lab in a Denver Metro school.

Turning Technology Into Performance by Sound Design Live

Thanks to Nathan for taking the time to interview me and for making me LOL multiple times during the interview. It was a blast! If you want to learn more about Nathan, visit http://www.nathanlively.com.

Mark Mosher
Electronic Music Artist, Boulder, CO
Official Web Site: www.MarkMosherMusic.com
Listen/Download Albums: www.MarkMosherMusic.com/music.html

Categories
(Modulate This) Events Interviews Videos

A Behind-The-Scenes Look at Robert Edgar’s Simultaneous Opposites Engine in Max/MSP/Jitter


Have you ever wanted to look over the shoulder of someone who's spent multiple years programming and refining a custom application with Max/MSP/Jitter? Well, you are in luck, as artist Robert Edgar kindly agreed to let me record a behind-the-scenes walkthrough of his "Simultaneous Opposites Engine" app when I was visiting Sunnyvale last April.

He describes the app as "a performance/navigation system for real-time traversal of existing video files, sorting through the audio and video a single frame at a time, in an arrhythmic spiraling motion". While this is not a commercial application, watching the interview will not only give you insights into Robert's innovative work, but will also give you a sense of why people choose to program their own apps and how they use technologies like Cycling '74's to solve problems and express their art.

After the interview video I've embedded a few example videos to show some of Robert's recent work. I've included a link to his Vimeo channel at the bottom of the post, which you should definitely check out, as it's the home of over 60 videos created by Robert over the last few years, allowing you to see the progression of the technology and the art. I recommend you view all videos full screen and in HD.

Watch embedded video

 

Flower Beds: Simultaneous Opposites #62 from Robert Edgar on Vimeo.

Simultaneous Opposites #59: Dirty Bubbles from Robert Edgar on Vimeo.

 

Links

Mark Mosher
Electronic Music Artist, Boulder, CO
Synthesist | Composer | Keyboardist | Performer

http://modulatethis.com/
Official Web Site: www.MarkMosherMusic.com
Listen/Download Albums: www.MarkMosherMusic.com/music.html


Categories
(Modulate This) Contollerism Interviews Space Palette Theremin Tim Thompson Videos

An Exclusive First Look at Tim Thompson’s Kinect-Based Instrument: MultiMultiTouchTouch


Tim Thompson is a software engineer, musician, and installation artist. He was recently mentioned in Roger Linn’s post “Research Project: LinnStrument — A New Musical Instrument Concept”, where Roger credits Tim with writing a program that “translates the TouchCo's proprietary USB messages into TUIO messages sent over OSC.”

I met Tim at my recent concert at the Art Institute of California/Sunnyvale and he was kind enough to invite me over to see his latest development project, the MultiMultiTouchTouch. This custom solution offers players any number of arbitrarily-shaped multitouch areas with three-dimensional spatial control. Interaction with this space allows users to control and play virtual synthesizers using nothing but a Microsoft Kinect as the controller.

Ironically, the concept shown in Moog Music’s April Fools video “Introducing the Moog Polyphonic Theremin” is not only a reality, but Tim has one-upped this idea by providing polyphonic spatial control in multiple “frames”, AND more granular control than a Theremin with finger blob detection. In short MultiMultiTouchTouch is like having a polyphonic/multitimbral Theremin that can not only detect hand movements, but finger movements as well – from multiple players!!!

Luckily I brought my video camera along and recorded Tim describing and demoing the technology. I also give the MultiMultiTouchTouch a try at the end of the video. So, without further ado, I present the video “An Exclusive First Look at Tim Thompson's Kinect-Based Instrument: MultiMultiTouchTouch”

Watch embedded video in HD

An Exclusive First Look at Tim Thompson’s Kinect-Based Instrument: MultiMultiTouchTouch

Components
In summary, Tim developed with the following components:

The raw output of this controller is OSC messages formatted using the TUIO (multitouch) standard format. Parameters of the software can be controlled with JSON-formatted messages.
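To make those two formats concrete, here is a minimal sketch of what receiving a TUIO cursor message and building a JSON control message might look like. The `/tuio/2Dcur` "set" fields follow the public TUIO 1.1 specification; the JSON parameter names (`set_param`, `depth_threshold`) are purely hypothetical, as the article doesn't document Tim's actual control API.

```python
import json

def parse_tuio_set(address, args):
    """Parse a TUIO 1.1 /tuio/2Dcur 'set' message into a dict.

    Per the TUIO spec, a 'set' message carries: session id,
    normalized x/y position, x/y velocity, and motion acceleration.
    """
    if address != "/tuio/2Dcur" or not args or args[0] != "set":
        raise ValueError("not a 2Dcur 'set' message")
    sid, x, y, vx, vy, accel = args[1:7]
    return {"session": sid, "pos": (x, y), "vel": (vx, vy), "accel": accel}

def make_param_message(name, value):
    """Build a JSON control message (parameter names here are hypothetical)."""
    return json.dumps({"api": "set_param", "name": name, "value": value})

# A finger "blob" at normalized position (0.42, 0.77), currently at rest.
cursor = parse_tuio_set("/tuio/2Dcur", ["set", 3, 0.42, 0.77, 0.0, 0.0, 0.0])
print(cursor["pos"])  # (0.42, 0.77)
print(make_param_message("depth_threshold", 0.5))
```

In a real setup the `(address, args)` pair would arrive over the network via an OSC library rather than being constructed by hand as above.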

Events
If you're near Silicon Valley, you can play with this controller on April 10 at the Stanford DIY Musical Instrument Tailgate Party, or at the Kinect Hackathon at Hacker Dojo. Tim will also be using it in installations at Lightning in a Bottle and Burning Man this year.

Pass It On
I want to reiterate: this is real and NOT a late April Fools’ joke. Incredible work, Tim! Congrats. I can’t wait to see where Tim takes this, and I look forward to the possibility of doing some MultiMultiTouchTouch compositions and performances myself. To help Tim promote his work, share this video.

Links

 

Mark Mosher
Electronic Music Artist & Synthesist, Boulder, CO
www.markmoshermusic.com
www.modulatethis.com

Categories
(Modulate This) Featured Interviews Interviews

Interview With Alan Pollard – Keyboard Technician for Goldfrapp, Björk, The Human League, The Cure, ELP…

Alan Pollard has been a keyboard technician for over 20 years, working with major artists such as Björk, Queen, The Human League, Goldfrapp, The Cure, Stevie Wonder, Paul Weller, Annie Lennox, and Emerson, Lake & Palmer, to name a few. These days, he specializes in programming and running Mac-based sequencing software for touring live shows. He also designs and builds reliable installation, live, and studio keyboard/computer systems.

Alan was kind enough to take some time out of his busy schedule to answer some questions about his work and offer some insights for those who want to pursue projects of this nature.

Mark Mosher: Your website mentions you “specialize in programming and running Mac based sequencing software for touring live shows.” What is your “go to” software for sequencing live shows and why?

Alan Pollard: On most tours I work on, I use Apple’s Logic, mainly because I tend to end up syncing various instruments, sending program changes, etc. Logic’s integration of audio and MIDI tracks makes the most sense to me.


Mark Mosher: How did you first get started in this line of work?

Alan Pollard: I worked in a music shop when MIDI was first introduced and set up some early sequencing set-ups for studios. When I left I pretty much went straight onto a tour doing the same thing as I had been doing in the store.

Mark Mosher: How has performance and stability improvements in laptop technology changed your strategy for live performance rig design and what new challenges have laptops introduced?

Alan Pollard: Using laptops makes for a smaller set-up than having a racked-up desktop, and it is also easier to have backups; I often set up the show on my personal MacBook Pro to use in emergencies. I still think that if a tour set list is pretty much set in stone and the band are not likely to change much, then a hard drive playback system is more stable. But you often still need the computers with you in case songs get added or changed. The ability to work on a track back in the hotel room with just a laptop makes some things a lot easier, as trying to sync up drum parts at the side of a noisy stage at a muddy festival site is not always the best way 🙂


Mark Mosher: You are currently touring with Goldfrapp. Their latest album “Head First” is loaded with synth including some classic analog sounds. What is your role on this tour and what approaches will you be taking to reproduce these sounds live?

Alan Pollard: I worked closely with Will Gregory for a couple of months before the tour, getting all the sounds we needed from the album stems and sampling other original parts. Angie Pollock plays all her keyboard sounds from within Logic. We are using a mixture of EXS samples and soft synths (e.g. FM8 and Surge) to reproduce the parts needed. I have backing parts on hard drives but also run Logic to send program changes to Angie and the electronic drums (Akais), as well as to start the two hard disk recorders in sync.

Mark Mosher: I read that on the Björk tour that you were using Reactable. How do you think tangible interfaces such as Reactable change the way you perform music and connect with audiences?


Alan Pollard: That particular Björk tour was about having the electronic aspect controlled in a very tactile way, so that the audience could see things were happening and weren’t just faced with someone hunched over a laptop. The Reactable was obviously a big visual synth. But we also used Tenori-ons, Lemurs, Korg Kaoss pads, and various fader banks, which all meant that there wasn’t too much mousing and you could see that the guys on stage were performing.

Mark Mosher: Do you have any tips for musicians performing with laptops on how to harden or “crash proof” their rigs?

Alan Pollard: As I mentioned before, I always start by asking: do you really need it running from a laptop, or could it be played back from another source, i.e. hard disk, etc.?

Then I would say you need backups; lots of backups. And not all in the same place: leave a drive with a clone of your machine at home, and carry one with you away from the rest of the gear.

Next, test your show in order, all the way through. You can never be sure that one song won’t throw up a problem for another when played back to back. And if anything ever goes wrong, don’t say it was “just one of those things”; find the problem or reason, as there almost always is one. Also, if you can, have a clean laptop just for music, and don’t go overboard with every plug-in and soft synth; just have the ones you need for the show.


Mark Mosher: Are there hardware synths or controllers that you tend to bring out on every tour and do you bring a backup for each hardware synth?

Alan Pollard: Yes, I always try to have a spare of everything, but I have to work with the band and their budget 🙂 My off-stage rack tends to have similar things in it, but it depends on the tour.


Mark Mosher: How much time do you usually have to create a rig and program a live show?

Alan Pollard: It really varies. Usually you know and can start planning a month or so before, i.e. ordering equipment and locating masters, etc. Sometimes you might just come in at the start of rehearsals, a couple of weeks before the first show, and you just have to make do with the band’s equipment and get it into a road-worthy state as best you can.

Mark Mosher: You’ve worked with the likes of Björk, Queen, The Human League, The Cure, Stevie Wonder, and Annie Lennox, to name a few. What was the most technically complex and challenging show you’ve worked on?

Alan Pollard: They all have their own challenges, but I guess the last Björk tour had the most going on for me: my playback rig off stage; Damian Taylor with laptop and keys, tactile interfaces, and a mixer with feeds from the other players; Mark Bell running Ableton Live and various effects; the Reactable; a real harpsichord; all synced together with MIDI metronomes for the brass section!

Mark Mosher: Do you have any words of advice you can give to Modulate This readers who might want to pursue a career in programming for touring live shows?

Alan Pollard: Tricky one… listen to people; everyone’s got something they can teach you and a valid opinion. I’ve learned a lot from touring with some very accomplished players and crew. Also, it should be fun. Remember, it’s for people’s entertainment, and that’s the bottom line.

Links

For more information see Alan’s web site – www.alanpollard.co.uk.


Mark Mosher
Electronic Music Artist, Boulder, CO
www.MarkMosherMusic.com
www.ModulateThis.com

Categories
(Modulate This) Interviews

Modulate This Interview with Imagine Research CEO Jay LeBoeuf


I recently had a chance to meet Jay LeBoeuf, the CEO and founder of the San Francisco-based company Imagine Research, and learn more about his past and current work. Imagine Research is working on next-generation intelligent signal processing and machine learning technologies. I thought the work was fascinating, and Jay graciously agreed to take time out of his busy schedule to share some insights on his work and the field in general. He also has some suggestions on how you can get involved with helping to solve real-world problems in the digital audio and music realm.

_____________________________________________________

Mark Mosher:  How long have you been involved in R&D work and how did you get started?

Jay LeBoeuf: I've always had a passion for music and technology. In undergrad (Cornell University), I was an electrical engineer with a minor in music, and gigged with my band on weekends.  Everything suddenly made sense when I did a Master's at CCRMA (Stanford University).  If you understand audio software and technology at its lowest levels, you have this immense appreciation for the tools that our industry uses.  You also develop this urge to make new tools and help bring new experimental technologies to market… which is how I ended up at Digi.

MM: Prior to founding Imagine Research, you were at Digidesign doing R&D on Pro Tools. What Pro Tools features that Modulate This readers might use daily did you have a hand in creating?

JL: Digi was such an amazing place and opportunity. I was one of the first team members on Pro Tools' transition from OS 9 to OS X.  I was on design and test teams for the D-Control / ICON mixing console, the HD Accel Card, the integration of M-Audio into the Pro Tools product line, and Pro Tools software releases 5.1.1 through 7.4.  In my later years, I researched techniques for intelligent audio analysis, the field that I'm most excited about.

MM:  Do you feel that being an independent research firm allows you to work more on the "bleeding edge" than if you were doing the research from within a company?

JL: Absolutely.  Imagine Research was founded because this "bleeding edge" technology needs a helping hand into industry.  Most companies, especially in the MI space, keep their focus on their incremental features, compatibility, and bug fixes – and applied research is inherently difficult and risky to productize.

The U.S. National Science Foundation has been a great partner in helping us bring innovative, high-risk-high-reward technologies to market.  We've received several Small Business Innovation Research (SBIR) grants to address the feasibility and commercialization challenges of music information retrieval / intelligent audio analysis technologies.  I encourage all entrepreneurs to look into the SBIR program.

MM:  How does Imagine Research help companies leverage emerging and disruptive technologies yet build practical solutions?

JL: Close collaborations are key during the entire technology evaluation process.  We focus on end-user problems and the workflows enabled by technology.  The solution is what's important, and we try not to geek out and use unnecessarily sophisticated technology when a simpler solution works fine.  That said, the more disruptive technologies tend to spawn new ideas, features, and products, and you need a long-term partnership to capitalize on them!

MM: According to your web site,  Imagine Research is working on a platform for “machine learning”. Can you briefly tell us what machine learning is and offer some examples of how machine learning could be applied to change how composers and sound designers create?

JL: In short, machine learning algorithms allow a computer to be trained to recognize or predict something.  One way to train a machine learning algorithm to make predictions is to provide it with lots of positive and negative examples.  You can then reinforce its behavior by correcting it, or having your end-users correct its mistakes. 

In our case, we use machine learning to enable machine hearing.  Our platform, MediaMined™, listens to a sound and understands what it is listening to, much as human listeners can identify sounds.

When software or hardware is capable of understanding what it is listening to, an enormous array of creative possibilities opens up: DAWs that are aware of each track's contents, search engines that listen to loops and sound effects and find similar-sounding content, and intelligent signal-processing devices.  I'm confident that this will enable unprecedented access to content, faster and more creative workflows, and lower barriers to entry for novice musicians.
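To make the training idea above concrete, here is a toy sketch (in no way Imagine Research's actual MediaMined system) of training a classifier on labeled positive and negative examples and then asking it to label a new sound. The feature values and class names are invented for illustration; a real system would extract descriptors such as spectral centroid from audio.

```python
# Toy nearest-centroid classifier: "train" on labeled feature vectors,
# then label new examples by the closest class centroid.

def train(examples):
    """Compute a per-class mean feature vector from (features, label) pairs."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc] for label, acc in sums.items()}

def predict(model, features):
    """Label a new example by the closest class centroid (squared distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist(model[label], features))

# Hypothetical training data: two made-up features per sound
# (think: normalized spectral centroid and zero-crossing rate).
training = [
    ([0.9, 0.8], "snare"), ([0.85, 0.75], "snare"),
    ([0.2, 0.1], "kick"),  ([0.25, 0.15], "kick"),
]
model = train(training)
print(predict(model, [0.8, 0.7]))  # a bright, noisy hit -> "snare"
```

Correcting mistakes, as Jay describes, would amount to adding the mislabeled example back into the training set with the right label and retraining.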

MM: Are there non-musical applications for your platform?

JL: Absolutely.  Our platform was designed for sound-object recognition – so while I frequently discuss analyzing music loops, music samples, and sound effects, we can also understand any real-world sounds.  We're working on applying our techniques to video analysis, as well as exploratory projects involving biomedical signal processing (heart and breath sound analysis), security/surveillance, interactive games, and more than enough to keep us busy!

MM: How can app developers leverage your platform?

JL: While the specific platform details are still under wraps, I'd really enjoy talking with desktop, mobile, and web-based app developers.  We really welcome input at this early stage.  I'm happy to discuss at "info at imagine-research dot com".  For general information, announcements, and updates, please follow us on Twitter (@imagine-research).

MM: Imagine Research also creates "intelligent" algorithms for consumer audio and video products. Can you give us some examples of products that might be utilizing your algorithms?

JL:  Sure – check out JamLegend (think: Guitar Hero but online, free, social-networked, and it's one of the only music games where you can upload and play YOUR OWN music).  We developed the technology for users to play interactive Guitar Hero-style games with any MP3s.  So far, over 1.1 million tracks have been analyzed. 

We have a number of exciting partnerships with our MediaMined platform to be announced.  These applications directly aid musicians and creative professionals. 
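JamLegend's actual MP3 analysis isn't public, but the general idea of turning a track into playable note events can be sketched with simple energy-based onset detection: find the frames where loudness jumps, and place notes there. Everything below (frame size, threshold, the synthetic signal) is an illustrative assumption.

```python
# Simplified onset detection: compute per-frame RMS energy, then flag frames
# whose energy jumps well past the previous frame's.
import math

def frame_energies(samples, frame_size=512):
    """RMS energy per fixed-size frame."""
    return [
        math.sqrt(sum(s * s for s in samples[i:i + frame_size]) / frame_size)
        for i in range(0, len(samples) - frame_size + 1, frame_size)
    ]

def detect_onsets(energies, threshold=1.5):
    """Indices of frames whose energy exceeds `threshold` x the previous frame."""
    return [
        i for i in range(1, len(energies))
        if energies[i] > threshold * max(energies[i - 1], 1e-9)
    ]

# Synthetic "track": silence, a loud hit, silence, another hit.
samples = [0.01] * 2048 + [0.8] * 512 + [0.01] * 2048 + [0.8] * 512
onsets = detect_onsets(frame_energies(samples))
print(onsets)  # -> [4, 9]: frame indices where a note would be placed
```

A production system would work on decoded MP3 audio and use far more robust detection, but the frame-then-threshold structure is the core of the approach.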

MM: How do you think that the growth in cloud computing and the explosion of Smartphone processor power will change the la
ndscape of digital audio?

JL: The most exciting thing to me is unparalleled access to content – we'll be able to access Terabytes of user-generated content, mash-ups, and manufacturer/content-provider material (loops, production music, samples, SFX),  online from any device. 

Music creation can now occur anywhere.  Smartphones provide a means to record / compose wherever and whenever the muse strikes.  With cloud-based access to every loop, sample, sound effect, and music track ever created, how do you begin to find that "killer loop" or sample in a massive cloud-based collection — and — on a mobile device?!?  Don’t worry, there’s some disruptive technology for that. 
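One common shape for that kind of "find the killer loop" technology is query-by-example: describe each loop as a feature vector, then rank the library by similarity to the vector of a loop you already like. The sketch below uses cosine similarity; the loop names and feature values are invented, and this is only a generic illustration, not MediaMined itself.

```python
# Query-by-example loop search: rank a library of loops by cosine similarity
# between precomputed feature vectors.
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def search(library, query, top_k=2):
    """Return the top_k loop names most similar to the query vector."""
    ranked = sorted(library, key=lambda item: cosine(item[1], query), reverse=True)
    return [name for name, _ in ranked[:top_k]]

# Hypothetical library: (loop name, feature vector).
library = [
    ("dark_dub_bass", [0.1, 0.9, 0.3]),
    ("bright_arp",    [0.9, 0.1, 0.8]),
    ("glitch_perc",   [0.8, 0.2, 0.9]),
]
print(search(library, [0.9, 0.1, 0.8]))  # -> ['bright_arp', 'glitch_perc']
```

At cloud scale, the linear scan would be replaced by an approximate nearest-neighbor index, but the ranking idea is the same.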

MM: Do you have any words of advice you can give to Modulate This readers who might want to pursue a career in audio R&D?

JL: Full-time corporate R&D gigs typically require a graduate degree in music technology or music/audio signal processing (such as Stanford's CCRMA, UCSB's MAT program, NYU, etc.).  But let's talk about the most untapped resource for research: industry-academic collaboration.  The academics have boundless creativity and technical knowledge, but might not know the current real-world problems that need solving.  I'd encourage readers to reach out to professors and graduate students doing audio work that they find interesting.  Think big; the hardest problems are the ones worth solving.

____________________________________________________________

Links:

Mark Mosher
Electronic Musician, Music Tech & Technique Blogger, Boulder CO
www.MarkMosherMusic.com
www.ModulateThis.com

Categories
(Modulate This) AudioCubes Interviews

Interview with Bert Schiettecatte Inventor of Percussa AudioCubes


I recently conducted a phone interview with Percussa founder and AudioCube inventor Bert Schiettecatte. I think music artists, visual artists, sound designers, those interested in tangible interfaces for installations, and music technology fans will all enjoy this interview – even if you are not in the market for a tangible interface. Below is a brief context-setting introduction. If you want to jump straight to the interview click here.

Introduction
If you’ve been following Modulate This, you know I’ve been using AudioCubes, a tangible interface made by Percussa. As I started using the cubes, I began contacting Percussa with questions. Percussa is a small company in Belgium, and Bert Schiettecatte, the founder and inventor of AudioCubes, is himself happy to talk with customers directly, which I found quite refreshing.

I have to say that prior to my experience with AudioCubes, I didn’t know much about tangible interfaces, and the more I talked with Bert, the more I began to understand how big an innovation Percussa AudioCubes actually are.

Most tangible interfaces are comprised of an infrastructure of components that include tables with special surfaces, cameras, projectors, software, and computers. In most cases they are very, very expensive, not very portable, and require a lot of calibration if they are moved. In other words, tangible interfaces are out of reach for most artists.

Bert formed Percussa with the radical goal of producing an affordable portable self-contained tangible interface that you could throw in a backpack and that eliminated the infrastructure. The result is the AudioCube. Each cube is a wireless, battery powered, autonomous computer that can be used as a performance interface to music software.

Below is a recent phone interview I conducted with Bert. In this interview Bert discusses his time at the CCRMA lab at Stanford and the founding of Percussa. He also offers an introduction to tangible interfaces and a detailed run-down on Percussa AudioCubes: their function, their electronics, and how they compare with other tangible interfaces. He goes on to discuss some of the FREE apps that Percussa provides AudioCube users. Note: I originally planned on a 5-10 minute interview, but after editing I ended up at around 24 minutes. Bert had a lot of interesting things to say, so I decided to offer all 24 minutes.

Interview

0:19 – Stanford and Laser Harps
1:21 – Founding Percussa
2:13 – What are Tangible Interfaces?
3:23 – AudioCubes Explained
5:35 – An Overview of the LED System
6:37 – Overview of the FREE apps That Work with AudioCubes
11:48 – Where Do People Go to Get the Apps?
12:26 – OS Platforms, drivers and AudioCube fabrication
13:43 – How do AudioCubes compare to other tangible interfaces?
17:16 – What are typical uses of AudioCubes and who is using them?
18:11 – Art installations
19:50 – Packaging, where to buy and shipping
21:28 – Where to go to learn more
23:03 – Thanks Bert

Links

Mark Mosher
Electronic Music Artist, Composer, Sound Designer
Louisville/Denver/Boulder

http://www.modulatethis.com
http://www.markmoshermusic.com
http://www.twitter.com/markmosher



Download/Buy my album REBOOT on Bandcamp
Buy on iTunes

Categories
(Modulate This) Interviews Synths & Instruments (Virtual)

Modulate This! Interview With Smule’s Dr. Ge Wang (Maker of iPhone Ocarina)


Want to know what one of the leading iPhone developers has on his mind?

I recently had the opportunity to interview Dr. Ge Wang, CTO and Co-founder of Smule.com. Smule are the makers of extremely popular and innovative iPhone applications such as Sonic Lighter and Ocarina. Dr. Wang is also an assistant professor at Stanford University, at the Center for Computer Research in Music and Acoustics (CCRMA). He holds a PhD in Computer Science from Princeton University and a BS in Computer Science from Duke University. Ge is the creator and chief architect of the ChucK audio programming language, and the founding director of the Stanford Laptop Orchestra (SLOrk).

I asked a wide variety of questions in this interview – so – whether you are a musician, a developer, an iPhone user, or an entrepreneur, I hope you find this interview interesting and enlightening.

I’ve provided this audio interview in YouTube (for computer or iPhone users), and in MP3 formats*.

Part 1 – Dr. Wang discusses the iPhone as an application
platform, how constraint leads to innovation, and his vision for using
technology to bring people together. Watch on YouTube or Download MP3.

Part 2 – Dr. Wang discusses how people are using Ocarina and how Ocarina has brought music to the disabled. He also discusses the future of the Ocarina and Smule, and what it’s like to be “Smulian”. Watch on YouTube or Download MP3.

Links:

Mark Mosher
www.modulatethis.com
www.markmoshermusic.com

—- Production Notes —
Audio was taken from a phone conversation between myself and Dr. Wang. I originally intended to publish a transcribed text version but felt the tone of the conversation would be lost, so I instead published an audio version. I decided to present both sides of our conversation at phone quality to preserve the feel of the conversation. Note that for a short time in the beginning of Part 1, Dr. Wang was on a mobile phone with some signal drop-out; the quality improves as the conversation continues. In addition to MP3 format, I’ve provided a YouTube version so you can easily listen from an iPhone or web browser.

Categories
(Modulate This) Artist News Featured Interviews Interviews

Modulate This! Interview with Ramin Sakurai of the Supreme Beings of Leisure

The Supreme Beings of Leisure are an electronic/trip-hop band based out of LA, California. According to their entry on Wikipedia “The release of the first Supreme Beings of Leisure album sold over 250,000 units with very little promotional touring. Instead, SBL opted to use the internet to market and promote the album, being the first band to ever do a “Virtual Internet Tour”, and among the very first to use Flash animation for their videos. The “Supreme Beings of Leisure” peaked at 47 on the Billboard Heatseekers chart according to allmusic.com, and is in the top 100 of the Trip-Hop Dance & DJ music category according to Amazon.com sales ranking.”

The band is currently a duo made up of original members Geri Soriano-Lightwood (singer/songwriter) and Ramin Sakurai (multi-instrumentalist, programmer).  In addition to working with SBL, Ramin produces, remixes, and composes music for artists, television, and movies.  After a 5-year extended break, SBL released its third major studio album, 11i, on February 12, 2008 with Rykodisc Records.

I recently had an opportunity to ask Ramin a series of questions about the making of 11i and about the effect of technology on his process for making music.

Enjoy the interview and visit www.myspace.com/supremebeingsofleisure for more info on the band.

Mark Mosher
www.modulatethis.com
www.markmoshermusic.com


Mark Mosher: What was the primary music production software you used in the creation of 11i?
Ramin Sakurai:  That would’ve been Pro Tools, versions 6 and 7. Some of the songs started off in other programs like Live or Reason, but they always end up in Pro Tools.

Mark Mosher: Can you give us brief overview of your studio rig?
Ramin Sakurai: The main studio rig consists of a Pro Tools HD3 with two 192 I/Os and a dual 2.5 GHz G5. I run a 3.1 GHz PC for Gigastudio. I use quite a bit of outboard gear as well. I believe a record can be recorded and mixed entirely in the box, but you need a little help from some analog gear. I use Avalon and multiple API and Neve-type preamps, along with various compressors.

Mark Mosher:  How has the advancement of music production software and ability to produce from a laptop changed your workflow for this album?