Modern Approaches, Hip-Hop: Mixing

Some of hip-hop’s most exciting producers share insights on the mixing process

June 20, 2018

A credit of “producer” can mean many different things – ranging from “DJ” to “ghostwriter” to “financier” – depending on who is carrying the title and what sort of record label their credit is printed on. There is a certain essential continuity, however, to the role of producer that runs through different decades, genres and personalities. For the rock generation, the producer was the Man in the Booth – and it was almost always a man – a shadowy, Oz-like figure just visible beyond the glass of the control room and the seemingly endless yardage of the mixing board, teasing sliders and twiddling knobs, eliciting a coherent, harmonious work of art from the various recorded elements of a session the way a conductor conjured a symphony out of the various sections of an orchestra.

In the era of modern hip-hop, a producer is as likely to be a teenage girl on a laptop as a gnome-like rock sage behind a huge console. But that sense of wizardry is still central to the job title, even if the instruments to be orchestrated were cut to wax by musicians in different decades on different continents, rather than in different takes at the same session. When the songwriting and arranging are done and all the sounds captured, it’s partially the producer’s job to make them gel, using all the various tools of the trade to make them speak to each other meaningfully. An engineer can make a recording sound clean, but it’s the producer who decides which voice, from an array of hundreds of sounds, must take center stage to carry the storyline of a particular passage. The producer puts the various raw materials into their proper place in the mix to carve out the sonics of a song – and then invites the listener into it. That’s just as true of a sonic traphouse as it is of a virtual symphony hall, so once again we’ve assembled some of the most respected and innovative producers currently working in hip-hop to share how they tease their own distinctive sense of space out of the mix.

Onra (Paris – All City, Nothing But Net)

I mix everything in the MPC as I’m making the beat, so once I feel like I’m done with this track for now, the mix is usually almost where I want it to be. When I’m making the track, I treat it as one, and I have no real choice ’cause they’re all in the MPC. I can only apply a total of two different effects per song, so it’s very limiting. When I take the tracks to the studio, sometimes I keep it as a whole, sometimes I separate – it just depends if it already sounds good or not… It might be easier to fix some issues if you process sounds separately, but then I might lose the balance I had going on in the MPC.

It’s funny, ’cause people [have listened to my beats] and thought I was side-chaining. I have read that in comments many times, and I didn’t even know what it meant. But I’m actually not doing this on most of my past productions. I’ve only recently tried this technique on my latest record, to try to emulate the sound that I was getting with “just the MPC.” I’m not 100% convinced of the result – there’s pros and cons.

When I take my tracks to the studio, I go really in-depth with it, and of course I have to think about frequency bands and everything else. I have worked with only one sound engineer through my career (on approximately seven albums), and that’s Blanka from Kasablanka Studios in Paris. He quickly understands whatever I’m trying to do, even if I put it in my own words, and he also taught me a lot about these things. My ear is the final judge, though. My ear will tell me which frequency needs to be attenuated or turned up or fixed.

Supah Mario (Columbia, South Carolina – Drake, Young Thug, 2 Chainz)

The sound’s cool to me if it fits in the space. I mix each sound individually – I don’t group anything. I don’t feel like drums should be all on one track, because there’s different frequencies. You don’t want to put an 808 on the same track as a hi-hat. In certain cases where I haven’t really mixed at all and I’ve just done level-work and EQ and stuff, I will leave things grouped together just for time’s sake, and then track it, or just give it to the engineer. But if I’m left with the job to finish the entire track and polish it up? I do every single instrument in its own mixing channel.

I’ll add compression and distortion to drums as I’m going [rather than waiting until the final mixdown] ’cause I like to know what the track’s going to sound like before I end it. Before I end it, I know whether or not I’ll need distortion, so I’ll just add it, because if I add distortion at the end it might not come out the way I expected.

Karriem Riggins (Detroit/Los Angeles – Common, Erykah Badu, J Dilla)

I mix as I go. When the song is complete, we’re sending it off for mastering. I know that there’s automatic side-chaining with compression and everything, but I like to do a lot of that stuff live; live automation. I work with an SSL Nucleus. It’s a new Solid State Logic board that just came out. And yeah, I like to automate. I automate everything, man. If you see one of my songs, it looks crazy. It’s a lot of hands-on automation, and a lot of CPU being used, a lot of RAM being used.

I don’t do a lot of side-chaining but sometimes, depending on the sample, you have to dip the sample, because the drums will just sound like a big mess if everything is at one level. So definitely dip in the sample for the kick to knock, things like that. Sometimes the bassline may be in a loop and you might have to flesh that out by doing some effects things, with compression and things like that.

And then you got a lot of low-end music nowadays. So to try to find those frequencies, but still incorporate what I do, I experiment. I’m trying different things in different frequencies, and not going to the same thing for every song, because it just makes a project very interesting, to have those different sounds. So I always try to encourage people to find it, just experiment, just find what you like. I don’t really have a specific frequency or a set of dBs I’m aiming for – I do it until I feel it’s there.


Mndsgn (Los Angeles – Stones Throw)

When I think a track is done, a lot of times I have to give myself time and space away from it so that I can experience it objectively. When I’m in a space – and when enough time has passed – that I can do that, then there’s a simple question: whether or not I’m feeling what’s being expressed. And it involves not thinking about, like, “Oh, how do I mix in drums?” or about how loud certain things are – not really thinking of the technical things, just feeling it. And if I feel it, then that’s the answer right there. Because it’s thinking technically that really holds me back a lot of the times, like, “Oh no, this needs to be perfect,” or “That sound isn’t right…” There’s a certain line you have to draw. Sometimes things have to be EQed a certain way, because maybe it hurts your ears, but as long as it sounds pleasant and the message is there, then it’s done for me.

For the mixing process, I’ll just take the tracks, turn them down so I have maybe 6 dB of headroom. Then I’ll crank my monitors so it sounds loud, but there’s also a bunch of headroom for the mastering engineer. Then I’ll just take it from there, trying to give everything dynamic space, letting everything breathe. Going one track at a time, or even soloing two tracks at a time and seeing how they work together. One approach would be just starting with drums and bass, and that could be thought of as the foundation of the song, some songs… Well, most songs. You start from there, you marry those two together. You make that sound union and maybe bring in some chords, some keys, and try to marry that to the other two. I mostly do that channel by channel, but I’ve experimented with gating the bass to the drums so whenever the kick drum hits, it gates the bass – like, ducks really fast – so it doesn’t interfere with the low frequencies that are within the kick drum as well.
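
What Mndsgn describes at the end – the kick momentarily pushing the bass out of the way – is, in signal terms, sidechain ducking. A minimal sketch of the idea in Python/NumPy, purely for illustration (the threshold, floor and release values are assumptions, not his settings):

```python
import numpy as np

def duck_bass_to_kick(bass, kick, sr, threshold=0.2, floor=0.2, release_ms=60):
    """Pull the bass down quickly whenever the kick is audible.

    bass, kick : mono float arrays at sample rate sr
    floor      : gain applied to the bass while ducked (0.2 is roughly -14 dB)
    """
    # One-pole envelope follower on the kick: rectify, then let it decay
    coeff = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env, level = np.zeros_like(kick), 0.0
    for i, x in enumerate(np.abs(kick)):
        level = max(x, level * coeff)   # instant attack, exponential release
        env[i] = level

    # While the kick envelope is hot, drop the bass to the floor gain
    gain = np.where(env > threshold, floor, 1.0)
    return bass * gain

# e.g. low_end = duck_bass_to_kick(bass, kick, sr=44100) + kick
```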

Suzi Analogue (Baltimore/Brooklyn – Never Normal Records)

I do a little pre-mixing to keep everything clean so I get the total effect when I’m listening back, because the loops and the tracks that I’m creating, I play them ongoing, for hours at a time. I listen to what I should be doing to try to clean up the sound, so I need to at least get it to the point where I like it and I can hear everything clearly, the way I need to.

Other than that, most of the mixing I just leave for the very end, because I don’t want to stop. My process is just a straight-through process. I don’t want to stop the creative process just because I’m like “Oh, this drum needs to hit like this.” It’s a little counter-productive. I’ve been there, and as a producer we’ve all kind of struggled with that. “When do we start to mix?” But I kind of just save it for the end. As long as you can hear everything cleanly, I would say just make your process happen in chunks so you can not feel inhibited with the creativity. I don’t want my creativity inhibited in any way. And even if that means that it doesn’t sound perfect, that’s just where it is at that point.

I do like to listen to [the rough mix] in cars. I don’t own a car currently, but I’ll have my files, and if I get a ride with someone or take a Lyft, if they’re like, “You want the aux cord?” I’m like, “Yes, I do!” I love to hear my music in cars, because I grew up within that culture – you know, you’ve got your subs-in-the-trunk kind of culture. Everybody had to hook up their car systems, and that’s how I actually fell in love with music, listening to it in cars on long drives. I grew up outside the city, so those long rides definitely were meaningful to me as far as my relationship with music. I like to take road trips, and when I gig, sometimes I’ll drive. If something is not too far, like “Oh, it’s two hours away,” I might just drive instead of taking a bus or something, because I want to listen to music and hear the newest tracks, mine and other people’s as well.

Other than that, I’ll listen to it out of the Mac speakers, the general speakers, try to listen to it in different headphones, to get a sense of the mix. I recently was at a hotel and we had this pool, it was this huge, open space. It had a lot of reverb, and my friend started playing one of the tracks from Zonez Vol. 3, “Wildflower,” and the reverb made it sound crazy! I was like “Oh, wow. That’s something to take into account,” like, I might want to turn the volume of this one down if I play it out in, say, a warehouse that has metal all around. So playing it out, playing things live – I’ll play tracks out before I drop them, just to give them a test run, and not even to see how the crowd reacts, but to see how the mix sounds in the rooms that I’m in. Because with each room, the sound changes, so I try to measure the tracks up against a lot of different scenarios to get a good idea of where I’m at with it.




I don’t want any of my sounds hidden. Even if they’re sonically supposed to be a surprise, I don’t want to hide them. I would go to a gate when, say, if you have a sound that is just supposed to be a supportive sound, like a line of synths – it’s not the main line of synths, but it’s a supportive sound – better put a gate on that synth because it will make it stand out. You can play around with turning the frequency up or down on the gate, which will make it pop out and sit in a really good place in the mix. Gating is a really cool thing to add. Sometimes, people go to compression before anything else, because they’re like “It’s too quiet. This is too quiet.” Well, you could get a lot done with just gating, so I would totally recommend gates as, like, the underdog of mixing.
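
For readers who have not reached for a gate before: the basic move is to pass the supportive part only when it rises above a threshold, so its quiet tails and bleed disappear and what remains feels more deliberate. A bare-bones downward gate in Python/NumPy, offered only as a sketch (the threshold and hold time are placeholder values):

```python
import numpy as np

def simple_gate(signal, sr, threshold_db=-35.0, hold_ms=30):
    """Mute a track whenever it falls below the threshold."""
    threshold = 10 ** (threshold_db / 20.0)        # dB -> linear amplitude
    hold = int(sr * hold_ms / 1000.0)              # samples to stay open

    loud = np.abs(signal) > threshold              # where the part is confident
    # Keep the gate open for `hold` samples after each loud moment
    open_mask = np.convolve(loud.astype(float), np.ones(hold), mode="full")[: len(signal)] > 0
    return signal * open_mask
```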

For side-chaining, say I have an ARP – I love the sound of ARPs, that’s just typical for me – and then I have, like, a steady beat. I would side-chain my ARP to my beat. If it’s a kick drum, or whatever I want to tell the story – understanding the concept of drums telling a story – I use my side-chaining compression to bump up my rhythmic aspects. So I might have a rhythmic melody and I have a rhythmic drum pattern and I want them to answer one another – like a call-and-response kind of thing – but they’re playing at the same time, so I would side-chain them to one another so they can both stand out in the mix, support one another versus clashing with one another. Even if it’s a tone that’s played throughout, I will still side-chain it to the drum track, like a kick… so it’s very listenable. I use side-chaining as a way to make the crazier elements that I’m adding more listenable and more enjoyable.
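
Her side-chaining is the ducking-compressor version of the same idea: the drum bus acts as the detector, and the gain reduction it generates is applied to the arp, so the two rhythms interlock instead of masking each other. A compact sketch, with illustrative threshold and ratio values rather than anything she specifies:

```python
import numpy as np

def sidechain_compress(arp, drums, sr, threshold_db=-18.0, ratio=4.0, release_ms=80):
    """Compress the arp using the drums as the detector signal."""
    coeff = np.exp(-1.0 / (sr * release_ms / 1000.0))

    # Smoothed level of the drums, converted to dB
    env, level = np.empty_like(drums), 0.0
    for i, x in enumerate(np.abs(drums)):
        level = max(x, level * coeff)
        env[i] = level
    env_db = 20.0 * np.log10(env + 1e-9)

    # Standard compressor law: above threshold, cut by (1 - 1/ratio) per dB of overshoot
    over = np.maximum(env_db - threshold_db, 0.0)
    gain_db = -over * (1.0 - 1.0 / ratio)
    return arp * 10.0 ** (gain_db / 20.0)

# e.g. mix = sidechain_compress(arp, drums, sr=44100) + drums
```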

I do group instruments, but I group them based on what the action is in the track. It’s less of a section thing, like an orchestral kind of thing, and more like, “These two are really ramping up in this part.” They might not fit in the same section if this was an orchestra, these two instruments, but they are both at this same party, dancing at the same part of the dancefloor, and I want them to dance together. I don’t want them to feel like there’s not enough space for either one of them to dance, you know? Maybe I just think of it more like a dancefloor and less like an orchestra.

Linafornia (Los Angeles – Dome of Doom)

When I mix I will group the instruments. I’ll definitely put the drum pattern on one channel and then I’ll put a sound sample on another. I’ll even put the kick pattern on one channel and the snare pattern on the other, the hi-hat on another, and then I EQ it that way. If it’s not sitting in the mix right I might go to some effects, but not too much. [For effects] I keep each drum part individual. Sometimes I may add my outer reverb to a snare or something like that, but I don’t do too much to the kicks. Maybe I’ll add more bass to a kick, because I like a lot of bass. I usually put that upfront, basslines and things like that. I like it a lot ’cause it gives it a warmth, really. Because a lot of my music sounds like what most people might consider “lo-fi.” It just sounds warmer and the fidelity isn’t so crisp either, it just gives it a warmer feel. I think it’s the bass that makes it warm, the bass and then the fidelity of the sample.

eevee (Dordrecht – Inner Ocean Records)

I need to mix and EQ elements as I write/arrange a new track. For example, if I want to make a bassline for a sample, I EQ the bass out of the sample so there is room for a new bass. Also, if I put my kick, snare and hi-hats in the beat, I prefer to mix them beforehand because it sounds better. If I have a really hard kick and I didn’t mix it before, it sounds really harsh, and because of that I can’t get into the feel of the beat.
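
eevee’s first move – EQing the low end out of a sample to leave room for a new bassline – amounts to high-passing the sample before the bass is written. A tiny SciPy sketch of that carve (the 120 Hz corner frequency is an arbitrary example, not her setting):

```python
from scipy.signal import butter, sosfilt

def clear_room_for_bass(sample, sr, corner_hz=120, order=4):
    """High-pass the sample so a new bassline owns everything below the corner."""
    sos = butter(order, corner_hz, btype="highpass", fs=sr, output="sos")
    return sosfilt(sos, sample)

# e.g. beat = clear_room_for_bass(sample, sr=44100) + new_bassline
```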

I prefer to keep every sound separate in a track and let it find its own way in the mix. I put every sound on its own channel so I can mix it separately. I prefer to mix by ear because sometimes you can’t see all the sounds in the waveform. I use Peak Controller often on my kicks or snares or sometimes Bold. I also use Gross Beat in FL Studio sometimes on my samples or synths to give it extra effects, and sometimes I use free filters on the drums/sample or the master.

Sufyvn (Khartoum – Indie)

The detailed work of the mixing is something I do later, especially with something like compression. I don’t even use compression that much, so when I do use it, I leave it for later. Distortion is something that I don’t even go near. Not that there’s anything wrong with it, but when I use it I wouldn’t even call it distortion. I use effects like the bitcrusher. Distortion itself I haven’t even tried before, but the bitcrusher is something that’s cool. Sometimes I use it in small amounts to add something – like, I would send a lot of light percussion sounds to one channel bus and add a small little bitcrusher to it, and that can really add a flavor.

For samples – I mean the melody, not the drums, [because] for the drums I use a lot of basic stuff: the filters, the EQs – but when it comes to the oud, the melody, I process it through a lot of things: through the granulizer, the reverbs, bitcrushers, then sometimes I would run it through a virtual instrument. Something like Kontakt – Native Instruments also has something called Reaktor – I would add a sample to it, and they have a few effects that I can’t mimic otherwise. A lot of stuff I add a lot of delay to.

I use side-chain for everything. Not the drums, but every single melody in every beat that I ever put out has side-chain on it. Especially my song “Whispers,” from the Ascension EP – I put a lot of side-chain on it, to the point where it was way too obvious, in an annoying way, but I just went with it.

Clams Casino (New Jersey – Vince Staples, A$AP Rocky, Mac Miller)

I don’t mixdown at the end, I usually do it as I go. It takes me a long time to make things that I love, so I’m spending a lot of time with it, mixing it little by little as I go. I’m never really able to just sit down and pull up a session and say, “I’m going to mix this now.” Because usually by the time I’m done making a beat, it’s pretty close to how I want it. As I’m producing it, I’m slowly tweaking it and getting the mix how I think it’s going to be in the end.

As far as dealing with mixing engineers, most of the time it’s just telling them “Don’t do too much!” Don’t clean it up too much, you know? Like, they might hear something – especially if they’re coming from the more pop side of things – and immediately what they want to do is clean it up. They don’t understand that most of the time I spent working on it is to make it sound like that, putting things in that they think are mistakes… If anything, what I want mixing engineers to do is just make everything have its own spot, make it hit hard enough, but don’t clean it up too much. Just make everything come through and make the drums more punchy, if possible, and hard. Some people just don’t really understand what I want and what I’m trying to do, so it is a little tough sometimes, and that’s why I like to be as involved as I can in the final mix.

Crystal Caines (Harlem – A$AP Ferg, Baauer)

When it comes to mixing and getting the sound right, I do it at the beginning, because sometimes that might be the whole feel of the record. When I go in there, I focus on the sample first and how I want the sample to sound, before I put other instruments around it. I like to try different things every time. One thing I do use a lot is the Filter Freak ’cause it’s so many options within that VST. I use it on whatever sounds good – I use it on my vocals or I use it on the drums or sometimes just the snare or the main melody – all depending on which option I use within that VST.

Signal processing and effects for drum sounds I save until the end. I just duplicate it – like, I will duplicate the drum track probably twice and then I will try an effect on one to see if it makes the drum hit different. In terms of keeping the rhythm section all together to put one effect on it, it just depends. Sometimes I put different VSTs, different presets, ’cause when I use Logic they have different presets for kicks, presets for snares, so sometimes I’ll go in and just manually move it into what sounds good for me. It’s not always what I think it would be, but I go in and mess with it so it sounds like something different, something that I never did before. That’s why I can never make the same thing twice! ’Cause I don’t write down what I do and everything’s based off of feeling to me.


Harry Fraud (Brooklyn – Surf School)

I’m a mix-as-I-go type of guy, because I have enough of a background in engineering where I understand what I want sonically, not only composition-wise, but how it’s gonna occupy the spectrum. So I will definitely play with saturation, distortion, crushing bitrate, crushing sample rate, running something out. I have a bunch of Universal Audio tube pre-amps and a bunch of guitar amps and keyboard amps, and I might pre-amp something and re-record it in, or send it out to one of the tubes, where you can push the saturation and get natural distortion, or use a distortion plug-in. You want to stay away from digital distortion, meaning clipping and things like that, but if you can get any type of analog distortion, I really love those. I think it’s great and important to use. I also think playing with bitrates and sample rates is really important ’cause it can really do a lot for you, making things kind of fit in with samples. Using samples and keyboards together, for instance, crushing the bitrate or sample rate of your keyboard can really help it fit in with your sample, ’cause if you took the sample you probably took it at 16-bit.
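
The bit- and sample-rate crushing Fraud mentions is easy to picture: quantize the keyboard part to fewer levels and hold each value for several frames, so its texture roughens toward that of an older, lower-resolution sample. A rough sketch, where the 12-bit depth and 4x hold factor are arbitrary illustration values:

```python
import numpy as np

def crush(signal, bits=12, downsample=4):
    """Reduce bit depth and fake a lower sample rate via sample-and-hold."""
    # Bit depth: snap each sample to the nearest of 2**bits levels
    step = 2.0 / (2 ** bits)                 # signal assumed to sit in [-1, 1]
    crushed = np.round(signal / step) * step

    # Sample rate: keep every Nth sample and hold it for N frames
    held = np.repeat(crushed[::downsample], downsample)
    return held[: len(signal)]

# Blend to taste, e.g. keys = 0.7 * keys + 0.3 * crush(keys)
```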

I usually process drum sounds individually. The main thing with drum sounds for me is EQ. I think compression is important, obviously, and I use it, but a compressor for drums works better to me with live drums that are hitting at different dynamic levels and different velocities. You wanna use a compressor to kinda tame everything, bring it to more of a constant level. When you’re using a programmed drum, you can alter the velocity within the program that you’re using, but that drum sound is going to hit at the same level and velocity if you just leave it, right? So unless you’re trying to bring out a dynamic element of the particular sound, what is the compressor doing?


I think people just lay compressors on kicks and snares because that’s what they’re taught is the traditional way. But that traditional way was developed for drums that are live. So for me, if I’m using a compressor on a kick or snare, it’s to bring out or highlight a specific dynamic element of that kick or snare. Maybe I want to control the attack, maybe I want to control the release. Maybe I want to compress this kick at a super-duper-duper high ratio so that the smallest, the least heard, aspect of the kick now becomes a more present aspect of the kick. Dynamics, that’s where compression comes in, but I think sometimes people blindly use compressors ’cause they think they’re supposed to on a kick or snare. But if you’re just blanket putting compressors on things, I don’t know that they’re always necessarily effective. A lot of times, people putting compression on a particular drum just remove the life from that drum, because they squash the fuck out of it.
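
His “super-duper-duper high ratio” trick is easiest to see in numbers: heavy compression shrinks the gap between the loud and quiet parts of the hit, and makeup gain then lifts everything back up, so the formerly buried tail lands much closer to the transient. A quick worked example in Python – the -6/-30 dB levels, -36 dB threshold and 10:1 ratio are made-up illustration values, not his:

```python
def compress_db(level_db, threshold_db=-36.0, ratio=10.0):
    """Static compressor curve: above threshold, levels rise only 1/ratio as fast."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

transient, tail = -6.0, -30.0                # kick attack vs. its quiet tail
out_transient = compress_db(transient)       # -33.0 dB
out_tail = compress_db(tail)                 # -35.4 dB
makeup = transient - out_transient           # +27 dB to put the attack back

print(out_transient + makeup)   # -6.0: the transient ends up where it started
print(out_tail + makeup)        # -8.4: the tail jumps from -30 dB to about -8 dB
```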

Now, I will bus everything to, say, a kick and snare bus and give that a little touch of analog compression to give it that color… Meaning a plug-in of a famous analog compressor, to give it the color of that past compressor, give it a little bit of fire. But I’m not hitting it super hard. The gain reduction is probably between 3 and 6 dB, it rarely goes higher than that and usually it’s less. I just kiss it, you know what I’m saying? Nothin’ crazy.

I’m totally into busses. Bussing is really important, and to me I’ll always set up a music bus so I can control the overall level of my music, set up a drum bus to control the overall level of my drums. Obviously, set up multiple vocal busses so that if we’re doing a song together, your vox are going to go through your bus, my vox are going to go through my bus – why would we treat our vocals the same? We have different voices, so they would not call for the same treatment. When you’re dealing with really huge mixes, you know, a hundred tracks, in order to keep yourself sane it’s important to dial in on your busses and know what elements can live together and what elements need to be kept separate from each other, how those elements are going to interact. I would say I look at a DAW like a real traditional mixing board setup, where you have your tracks, you treat your tracks separately, but then you bus groups of tracks to where they can live together.

I wouldn’t say I lean on side-chaining. I lean on a lot of frequency carving, especially for kicks and bass, making sure your bass can hit clearly. I think side-chaining can be really helpful to duck certain elements and let other elements shine through, especially when you got a pumping-ass kick and you got an 808 that’s washing over your whole shit, you know? If you can, side-chain and figure out “Oh, let me trigger this to duck when the kick hits.” I definitely wouldn’t call myself a master of that, though. I’ve used it in basic stages, whereas someone like Jaycen Joshua is a side-chain master.

When it comes to keeping each sound very clean or sometimes letting them bleed together, I’m a little of both. I don’t have a problem with things laying on top of each other to create their own textures, but at the same time I think you have to decide: “What elements of this track do I want to sit upfront?” And right now in hip-hop, we’re at a stage where we’re very big on drums and vocals sitting way out front. If you listen to a Migos record, they sit out front because a Migos vocal is almost like an instrument. They interact with the drums in an interesting way, and the interaction of the drums and vocal is so important to their music. So I don’t think there’s a problem letting things be a stew sometimes, but you as the producer or the mixer have to identify the element that you want to be recognized and then, frequency-wise, go figure out how to bring it out. Or subtract frequencies around them, to make them have their own space.

Lunice (Montréal – LuckyMe, Warp Records)

When it comes to mixing, everything has always been done by ear, but I’ve recently been leaning more towards thinking about it mathematically in the studio sessions I’ve been doing. Usually I make sure that I never spend too long EQing one element as I’m working on a track, because it’ll get me out of my flow state. I’ve found that I’m the most productive when I have every track separate, so I can find a place for each sound in the mix.

I used to do a lot of side-chaining in my early productions, like on “Stacker Upper” and “One Hunned.” But after those two records I stopped using it, just to challenge myself towards a new creative direction and see what comes of it.

When testing a mixdown I’ll use car stereos, computer speakers, club systems, boomboxes and especially soundcheck for playback. I’ve also tested many tracks in the middle of a live set to get an honest first impression from the crowd.

Header image © Johannes Ammler
