Meng Qi

Sound + Process # 13 features musician, designer, programmer and teacher Meng Qi. A pioneering interface theorist, Meng Qi is perhaps best known for opening new dimensions of control and interaction with Peter Blasser’s Ciat-Lonbarde circuits. His earliest module, Voltage Memory, was the first synth module ever to be both designed and manufactured in China. Over the years, Meng Qi has released a wide spectrum of music with these and other instruments. His experiments with feedback and frequency modulation are only enhanced by explorations of tonality — the resulting songs are uniquely beautiful in both timbre and emotion.

Since Meng Qi’s audiences span many countries and languages, I’ve chosen to present his episode in two different formats — one is this transcript annotated with audio and the other is a standalone album with a downloadable document at https://soundandprocess.bandcamp.com. Visit one, visit both, but as always feel free to join the conversation on lines — https://llllllll.co.

 

“One single person.”

Since I was a child, I’ve been interested in making music.

I started out playing instruments, but soon I realized that as an individual I couldn’t [make] a whole track, because there are so many instruments involved — it’s not realistic to practice every instrument that I need in my music. So I started using synthesizers and computers, because it seemed like a good way to make a whole sound, just by one single person. That’s how I started using synthesizers — and soon I realized that there are unlimited possibilities for expression, control, interface and timbre. With electronic technology, we have a lot of freedom in the interface and in the sound itself. That kept me attracted along the way.

Expression

Around 2007, I started making synthesizers because, I think, with different instruments (or different workflows) — even if we have the same motive — we can have different expressions. So I aimed to build something different, something for myself. The first goal was to optimize my old workflow; the second was to find something new with the interface. I’ve always been interested in the interface side of electronic musical instruments.

I was trying to adapt Peter Blasser’s designs into my own workflow. Peter Blasser’s instruments are certainly unique, [which is] the first thing that attracted me to them — and they sound beautiful, too. He has some innovative ideas about instruments, so I just build based on them. He also designs some primitive oscillating circuits that produce the so-called ‘rhythm’, and by patching those primitive circuits together, you get a lot of variations. They’re also touch sensitive — because they are in a primitive state, they can be easily influenced by the human body.

I found, while designing instruments, that if the difficulty of mastering or practicing — of getting to know — an instrument is above a certain threshold for the player, the character of the instrument will dominate their expression. So there’s very little personal expression in the music they can make — it will just be the musical instrument.

Gesture and Output

Some of Peter’s instruments [are] actually impossible to practice, because they produce different results with the same gesture or playing style — every time. You can’t build a solid relationship between your playing gesture and the sound output. So [in] that way, it’s not actually a musical instrument — it’s more on the sound-art side, which may give some of your pieces some character but you can’t play a score. Even if you just repeat the same gesture on the same instrument, it will be different. And it’s always very hard to maintain the pitches.

So, the first Eurorack module I designed was the Voltage Memory — a six-track voltage programmer and sequencer — which memorizes the voltages you set up so you can recall them with various methods like manual recall, sequential arpeggiation, and so on. It was born [for] Peter’s instrument Plumbutter — a drum machine — because I wanted to make some melodies on it and it wasn’t obeying any voltage standard like 1V/octave. I could only tune each note by hand, so that’s why Voltage Memory was born — it’s me trying to adapt the instrument into my workflow.
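ed.: for readers who think in code, here is a rough Python sketch of what a voltage programmer like this does conceptually: store a few voltages per track, then recall them directly or by stepping through them in order. It is only an illustration; the class name, slot count and recall methods are my own assumptions, not the module’s firmware or feature set.

```python
# Hypothetical sketch of a six-track voltage programmer/sequencer.
# This is NOT the Voltage Memory firmware, just the concept: store
# voltages per track, then recall them manually or step through them.

class VoltageProgrammer:
    def __init__(self, tracks=6, slots=8):
        self.memory = [[0.0] * slots for _ in range(tracks)]  # stored voltages
        self.positions = [0] * tracks                         # per-track step position

    def store(self, track, slot, volts):
        self.memory[track][slot] = volts

    def recall(self, track, slot):
        # "manual recall": jump straight to a stored voltage
        return self.memory[track][slot]

    def step(self, track):
        # sequential recall: advance through the stored voltages in order
        volts = self.memory[track][self.positions[track]]
        self.positions[track] = (self.positions[track] + 1) % len(self.memory[track])
        return volts

vp = VoltageProgrammer()
vp.store(track=0, slot=0, volts=1.0)    # 1 V = one octave up on a 1V/oct oscillator
vp.store(track=0, slot=1, volts=1.25)   # another stored pitch, a quarter volt higher
print(vp.step(0), vp.step(0))           # recalls 1.0 then 1.25
```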

I think there are so many degrees of [reliability]. If it’s totally reliable, like most traditional instruments — if you play the instrument with the same velocity and the same precision — it will almost always sound the same, and the nuances and differences come from the emotions. With electronic musical instruments, we have a lot of freedom in deciding how reliable this relationship can be. It depends on the usage, or the principle of the instrument, when it’s designed. When the designer wants the instrument to be learnable, there needs to be a certain amount of reliable connection between gesture and sound output. I think one very important thing is to decide when there will be sound. Basically, it’s about the loudness and the pitch of the sound. If we can play with these two factors of sound, there’s a lot of reliable connection between gesture and sound output. With timbre, it’s more free. So, if we have the same melody or the same rhythm of a phrase, [we can] alter the timbre.

Because each individual has different methods, different approaches to handling the emotions within themselves, they build different connections between emotion and the actual sound. You can’t actually determine how your player will play your instrument, [or] how the same emotions will sound with the same instruments, because the player will make different connections between them in the middle — they have a different way of playing it.

Motive

When I have the will, the motive, for a piece of music, I start by playing my most-used instrument. Maybe it’s a keyboard, maybe it’s a drum — I start from there. The second approach is more technical: when I want to try out a specific technique or a specific setup, I just mess around with it until something strikes me emotionally. Then I can start from there.

I always record a video of my playing. For me, playing music is actually the most important thing. Normally, I don’t edit the music after I record it — I just put on some basic EQ, some reverb, and that’s it. It’s also normally just two tracks of [direct] recording, because I want to show the direct relationship between me and the setup I use. I will show the whole process of a track.

With instruments, most people are doing exactly the same thing, but the output can vary. For me, when I start with a new piece of equipment, like anybody else, I try out basic functions and combinations. I think the key here is not to find a ‘new way’ to play; the key is to find beauty in the sound output you’re playing, so that there are some connections between you playing the instrument and the sound you hear from it. That’s what makes a player happy, I think. Technically, most of the things we do have already been done before by someone else. But we can make totally different music.

“So-called ‘errors’.”

With feedback, I try to limit the frequencies and the range of sound. I try to make sure most of the sounds in the track are pleasing to myself, especially with acoustic feedback. The modular box acts like a big microphone that picks up the sound from the speakers and reacts to it. It’s actually very hard to control — there are too many sweet spots to make sure that each and every sound it outputs is pleasing. No matter how hard I try, there are always some so-called ‘errors’, because there is always some sound that is unpleasing to me at some point — it’s overloaded, [but] if I turn the volume down too far, there won’t be any feedback at all. So, I accept all the errors in it, because it’s not a totally refined instrument for this kind of feedback playing. And every time I play, it will be in a different room, so it’s very hard to make something custom for every situation that comes up — to make ‘fewer errors’, to make it ‘perfect’. So all I can do is accept the possibility of the wrong note, or the wrong sound…and do the best I can to make the piece beautiful.

Musician / Designer / Programmer / Teacher

In our schools, I teach Max (graphical programming), and with music students I teach basic sound synthesis. I don’t know if it’s a problem specifically in China, [but] the students in the art domain are not actually good at math or logic. So the first thing is to actually teach them how logic works, how to turn real-world problems into relationships between numbers. That’s the best, that’s the main goal. In high school, we were separated into two classes — one class studying Math, Physics and Chemistry, the other studying Literature and History. Most of my students are from the Literature class. It’s a problem that they’re not good at Math. I’m not good at either; I just know a little on each side.

As first-comers to instrument building, [students] are not building something from their own imagination. They work on something that’s already designed — I design basic building blocks, and during the process they have some options to customize things a bit. [When] designing an instrument, there are a lot of technical limits one needs to obey — but the students don’t know about them, so I have to start with some kind of limits for them, to make sure they can achieve something playable, something finished, by the end of the semester. If I give them too much freedom on the technical side, they will just get lost. I think the best thing, even with the limits, [is that] there’s always some character in the instruments they build. No two instruments are the same, and they always have something unique to the builder. Most of the time, I wouldn’t have thought of the changes they make.

I just want to experiment with a particular interface that’s wrapping a sound engine, or a particular controller idea of mine. I didn’t think about making them commercially available. The first thing is, when making something one-off, you don’t need to make it finished. There are a lot of aspects that aren’t finished in lots of my one-off builds. If you want them to be playable, if you want most players to accept the instrument, it needs to be a finished instrument — nothing too awkward. If it requires some reprogramming to play, it will be impossible for the masses. There are a lot of things to consider for a ‘finished’ instrument. For me, the thing that interests me the most is the reaction when I play it. The reaction between gestures and sound — to see if this interface, if this combination, can inspire me. If I can get that, the mission is done. So, I’m not actually interested in selling them. The work involved in making a finished instrument is so much that if I made each of my instruments finished, I wouldn’t be doing so many of them. One other very realistic point is that most of the instruments I build are based on Peter’s PCBs, so I actually can’t reproduce them because they’re his design. I use his sound engine with my interface, so for me it’s just pure experimentation. If I find something that’s more complete, something worth repeating, I may build a few, but it won’t be many. I have some products, mostly the Eurorack modules — it’s a good format to have a product in.

“One very important point of playing music is to make one happy.”

The reason I love Mannequins is that their designs are very innovative and not currently available from other brands. For example, one of my favorite utilities is the Cold Mac — it’s one knob controlling a set of analog utilities like a cross-fader and analog logic. But they’re all interconnected, so if you use it like a normal utility module and then patch your sound back into the knob [Survey], you suddenly have your cross-fader and analog logic FM’d with the sound. This opens up a lot of very nice possibilities for experimentation. That’s why I love Mannequins — they’re very well designed, with unique features. They’re very completely designed. monome also makes some of the best things on Earth, so the combination can’t be wrong.

The best thing about Teletype is that it simplifies the code. There’s always a learning curve — a distance between learning the code and making valuable output out of your code. Teletype is the shortest possible path on Earth [between] learning the code and making something musical. The other thing is that it limits you to very short pieces of code, because you can’t exceed 7 lines per script and the length of each row is also limited. Everything is on the same screen — that actually helps a lot with the workflow. With traditional coding interfaces, like when someone codes in Python for music, they have unlimited room for the code. That allows another workflow, but if freedom is given in one aspect, the possibilities in other aspects will shrink. When people were writing music for harpsichord, which has literally no velocity — all the notes you play sound the same — there were some very complex, great structures of music. With something like Chinese traditional music, like guqin, sometimes it’s just one note ringing for five minutes. The player manipulates the sound of that note. There’s no melody, no rhythm, no harmony. So, if we have a lot of freedom in one aspect, it will reduce the expression in all other aspects. As humans, we’re very limited animals. We can only focus on a few things when we’re playing music, when we’re making music. The limitations are actually very good.

One very important point of playing music is to make one happy. Whether it’s a piano, harmonica, triangle or modular synthesizer — this goal is the same. Just to make the player happy.

Possibility

The big instruments (ed.: modular systems) are only better if one is not using them. It’s like saying it’s good to have as many books as I can, even if I’m not reading them. If you have too many books, you can’t possibly read all of them, or you can’t possibly read one of them multiple times to get a better understanding [of that book]. So if you have a bigger system, it will actually reduce your expression. One possibility for making a big system usable is for modular synthesis to become the new ‘grooveboxes’, constructed with drum noises and sequencers…and clock dividers, which make them actually usable as a very big system.

I think one sentence in an advertisement is very true — I remember one from Make Noise’s Shared System: “There’s more destinations than you can ever go to in a lifetime.” And that’s just a 7U system — that’s very small!

With the older systems (like Serge and Buchla) or the newer ones like Mannequins and Make Noise, the modularity is very high — they are actually synthesizer systems that you can use in literally unlimited ways. Even if you have a very small system, you can still use it to produce a lot of different types of sound. A drum module, in comparison, is more limited. To maintain playability in a larger system, it will naturally lean toward the ‘groovebox’ side. It will become a very big musical workstation that’s connected with triggers — generation, conditioning, multiplying/dividing, and random — to trigger the drum noises. It’s another way to use modular synths, but if you’re going the synthesis route, a big system doesn’t help at all. A small system is pretty much enough for one to explore, even in a lifetime, because there are so many ways to connect the cables. The combinations are literally endless. A bigger system is the workstation route, [but with] a smaller system I go the synthesis route — you can focus on the system. With the bigger systems, you’re actually picking particular voices for the sound you’re building, because sometimes this bass drum fits better than the other one — but they’re bass drums, so you can’t perform classic synthesis with them.

Reward

As modular synthesis goes mainstream, users need to be rewarded to keep using it. But synthesis is actually not very rewarding — it’s not a fast path from learning to reward. [Often], you spend a lot of time on it and it still sounds like shit. Not everyone knows where all the harmonic partials lie [when] doing an FM pair or making modulations. A lot of time is required in experimenting, to find the sound that suits the player. It’s more like building your custom instrument, right? You don’t want to buy a ready-made desktop synth; rather, you want this sound generator paired with that filter, maybe some effects too. One can use modular synths as a way to assemble their very own custom instrument. That’s another way to use a modular synth — there’s nothing wrong with it, there are just so many hours of frustration waiting for you. But if you want to pick the easier route, pick up a 4-voice chord module which will basically make all the chord types and inversions for you! It’s like when the automatic camera came out — did it do badly? I don’t think so. It’s a good thing to make things easier and more approachable. But at the same time, it’s not as free as synthesis.
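ed.: the question of where the harmonic partials lie in an FM pair has a clean answer on paper. A carrier at fc modulated by fm produces sidebands at fc ± n·fm, with weights given by Bessel functions of the modulation index. A minimal Python sketch, assuming SciPy is available (the frequencies and index below are arbitrary examples, not values from the interview):

```python
# Where the partials of a simple FM pair land: carrier fc, modulator fm,
# sidebands at fc +/- n*fm, weighted by Bessel functions of the index.
from scipy.special import jv  # Bessel function of the first kind

fc, fm, index = 440.0, 110.0, 2.0  # arbitrary example values

for n in range(6):
    weight = abs(jv(n, index))
    upper, lower = fc + n * fm, fc - n * fm
    # negative "lower" frequencies fold back into the audible range
    print(f"n={n}: {upper:7.1f} Hz / {lower:7.1f} Hz, weight {weight:.3f}")
```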

It’s always two-fold. A lot of the time I use the easy approach — programming some good scales into the Teletype and making a pointer run through the scales, to make some pleasing sounds. That’s also good! I’m not doing multiple feedback or partial feedback all the time, because most of the time it sounds like shit! I need to unplug and re-patch! It’s not rewarding. But sometimes it makes great results — and by this method, I learn a lot. That’s the advantage of it.
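ed.: the ‘scale plus pointer’ idea needs only a few lines of logic. Here is a small Python sketch of the same approach; the pentatonic scale and the random-walk pointer are my own assumptions, not Meng Qi’s actual Teletype script.

```python
# The "easy approach": pre-programme a pleasing scale, then let a simple
# pointer wander through it. Scale and walk are assumptions for illustration.
import random

scale = [0, 2, 4, 7, 9]   # major pentatonic, in semitones above the root
pointer = 0

for _ in range(16):
    pointer = (pointer + random.choice([-1, 0, 1])) % len(scale)  # drunk walk
    semitones = scale[pointer]
    volts = semitones / 12.0   # pitch CV on a 1V/octave standard
    print(f"{semitones:2d} semitones -> {volts:.3f} V")
```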

Sometimes, I totally go apeshit on a synth — just patching every hole. That way [though], the patch isn’t actually traceable. You can’t trace the signal route. If that happens and it sounds like shit no matter how I turn the knobs, I just give up and unplug all the cables. Have some rest, maybe play a game. If there’s a specific synthesis routine that I’m practicing — FM, AM or something combining a few of these techniques — I can try to alter the connections, apply some attenuation or inversion to tame the sound. That way, I can probably learn better than the first time.

I never force myself to do anything — sound and the possibilities of expression with sound still interest me a lot. So, I never need to find the will-power to continue to do this, it just happens naturally. It’s just part of my life, like my entertainment.

“Let me just stop talking…” (live performance)

This instrument specifically combines two parts, both of Peter’s design. The first is the Sidrassi, a seven-voice triangle wave synthesizer — each voice can be modulated by the one before it and modulates the one after it. That makes a circle of modulations, which will turn into noise, because this type of FM results in noise. The other part is the Rollz, from Rollz-5, which are primitive pulsing circuits that emit rhythms and some noises as well. They’re very sensitive to touch, so on this instrument I brought out the nodes of the Rollz circuit as banana jacks as well as copper ribbons, so I can touch them while playing. That makes it a lot more versatile.
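ed.: to picture the topology described here, a closed circle of triangle voices modulating one another, the following Python/NumPy sketch may help. It is my own construction, not the Sidrassi circuit: each voice’s frequency is pushed around by the voice before it, the last voice feeds the first, and raising the modulation depth smears the loop toward noise.

```python
# Seven triangle-wave voices in a modulation ring: voice i is frequency-
# modulated by voice i-1, and voice 0 by voice 6. A numerical illustration
# of why a closed FM loop slides toward noise, not the actual circuit.
import numpy as np

sr = 48000                      # sample rate
base_freqs = np.array([110.0, 130.0, 160.0, 196.0, 246.0, 293.0, 329.0])  # assumed tuning
depth = 300.0                   # modulation depth in Hz; raise it and the loop gets noisier
phases = np.zeros(7)
outputs = np.zeros(7)

def triangle(phase):
    # triangle wave in [-1, 1] from a phase in [0, 1)
    return 4.0 * np.abs(phase - 0.5) - 1.0

mix = np.zeros(sr)              # one second of audio
for n in range(sr):
    prev = np.roll(outputs, 1)              # each voice hears the one before it
    freqs = base_freqs + depth * prev       # frequency modulation around the base pitch
    phases = (phases + freqs / sr) % 1.0
    outputs = triangle(phases)
    mix[n] = outputs.mean()
```

Writing mix to a file and listening should confirm the point: at low depth it is a chord of triangle waves, and as the depth increases the mutual modulation collapses toward noise.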

I just touch some nodes and it reacts.

Just by patching and touching, it will actually emit some type of rhythm.

One tip when you come to this type of instrument: if you tune it to some kind of ear-pleasing scale, it will normally produce very acceptable results. Using that as a starting point, you can find some nuances of dynamics in the sound — it almost sounds like envelopes modulating the timbres, but there isn’t actually any particular type of synthesis going on. It’s just primitive circuits! It produces some unpredictable results and sometimes sounds really beautiful.

Download the whole episode on Bandcamp

Playlist:

Great River

Nabra Snow

Custom Sidrassi

lights are from a window

span (live at cafa)

Beautiful Error 美丽错误

glittering skys go dark

ooolo

晚风 wanfeng

live sidrolz performance
