Refinery v. Redesign
On mastering and mix bus processing
A colleague recently asked a small group of us mastering engineers how much mid/side processing we're doing these days. Our answers were rather uniform: not a ton. Maybe one out of every four or five songs for me.
He then mentioned a friend who does a lot of mid/side work at the mastering stage: detailed mid/side processing combined with extensive automation of the involved parameters. He added that his friend's masters sound full and polished. And they do.
Different strokes for different folks, after all. But the conversation kept going, and it arrived at a place more interesting than where it had started: the difference between mastering and mix bus processing. The more I thought about it afterward, the more I realized these two processes get conflated in practice in ways that matter. I also realized that most of the confusion comes not from disagreement about technique but from a failure to define terms.
On Mastering
Mastering is the act of making a finished mix the best version of itself. The key word is itself. Each song arrives at a mastering desk with an identity – its tone, density, width, spectral and vocal balance, and the totality of aesthetic commitments that the artist and mix engineer have made. On a technical level, this identity includes details like the native sample rate, word length, noise floor, and so on.
My role in mastering is to respect those commitments while ensuring that they translate faithfully across a wide variety of listening environments.
It follows that there is no such thing as predetermined mastering processing. No processing that occurs by default. If the final approved mix already sounds fully representative of the production team's vision, that final approved mix is the production master. My role in such a case is simply to present the song in the required formats for various distribution channels. This may sound trivial. It is not.
Even restricted to the digital domain, format delivery involves decisions that affect what the listener hears. Reducing word length from 32-bit floating point to 24-bit or 16-bit integer guarantees a loss of amplitude resolution. That loss ought to be auditioned, and the dither noise or word length reduction processing should be selected intentionally rather than left to a default setting. Similarly, sample rate conversion, from 96,000 down to 48,000 or 44,100 samples per second, reduces the representable bandwidth and is a destructive process whose artifacts ought to be auditioned and, where necessary, compensated for. And these are the straightforward cases. Consider a final approved mix that sports intersample peaks which are legitimately intentional on the part of the production team. How does one ensure that lossy codecs do not clip as a result of the overages while preserving the transient response those overages provide? These are mastering problems, and solving them well requires as much care and attention as any EQ move or compression setting.
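To make the word length reduction step concrete, here is a minimal sketch of TPDF (triangular probability density function) dither applied while quantizing floating-point samples to a fixed word length. This is an illustrative toy, not a mastering-grade dither stage: it omits noise shaping, headroom management, and everything else a real tool would offer, and the function name and interface are my own invention for the example.

```python
import random

def dither_and_quantize(samples, bits=16):
    """Reduce word length with simple TPDF dither.

    `samples` are floats in roughly [-1.0, 1.0); returns integer sample
    values at the target word length. Illustrative sketch only: no noise
    shaping, no headroom logic.
    """
    q = 2 ** (bits - 1)  # quantization steps per polarity (32768 for 16-bit)
    out = []
    for x in samples:
        # Triangular-PDF dither spanning +/- 1 LSB: the sum of two
        # independent uniform values in [-0.5, 0.5) LSB.
        tpdf = (random.random() - 0.5) + (random.random() - 0.5)
        v = round(x * q + tpdf)
        # Clamp to the integer range of the target word length.
        v = max(-q, min(q - 1, v))
        out.append(v)
    return out
```

The point of the dither is that the quantization error becomes uncorrelated noise rather than distortion that tracks the signal, which is why the choice of dither, and whether to apply it at all, deserves an intentional audition rather than a default.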
The point is that mastering begins with the premise that the song is already what it should be. Everything that follows, whether it's format conversion, spectral refinement, dynamics management, or simply the act of listening and confirming, serves the goal of protecting and translating that identity. Not changing it. Not improving upon its fundamental character. Ensuring that what the production team heard in their room is what listeners hear in theirs.
On Mix Bus Processing
Mix bus processing is an entirely different discipline. While often employed in conjunction with or at the mastering stage, mix bus processing reshapes the internal balance of a song. It changes relationships between elements, altering the song's identity. One can pull the vocal forward or tuck it back, enhance or relax a song's groove, impart harmonic saturation, widen the sides, fatten up or thin out the mid channel. These are creative decisions that change what the song is, not attempts to refine or reveal what the song already was.
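For readers unfamiliar with the mid/side terms used above, the encoding itself is just sum-and-difference arithmetic: the mid channel is the average of left and right, the side channel is half their difference, and the transform inverts exactly. A sketch, with a hypothetical `widen` helper showing how scaling only the side channel widens or narrows the stereo image:

```python
def ms_encode(left, right):
    """Encode an L/R sample pair into mid (sum) and side (difference)."""
    mid = (left + right) / 2.0
    side = (left - right) / 2.0
    return mid, side

def ms_decode(mid, side):
    """Decode mid/side back to left/right; inverts ms_encode exactly."""
    return mid + side, mid - side

def widen(left, right, side_gain=1.5):
    """Scale only the side channel, leaving the mid (center) untouched.

    side_gain > 1 widens the image; side_gain < 1 narrows it toward mono.
    Toy example: a real tool would do this per-band and watch for
    mono-compatibility problems.
    """
    mid, side = ms_encode(left, right)
    return ms_decode(mid, side * side_gain)
```

Because the encode/decode round trip is lossless, mid/side is purely a different view of the same stereo signal; whether working in that view constitutes mastering or mix bus processing depends, as argued below, on what happens to the song's identity.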
The distinction between these two processes is not about which tools are employed. Equalizers and dynamics processors (compressors, expanders, limiters) are fundamental to both mastering and mix bus processing. The question is what happens to the song's identity when the processing is complete. If the production team listens to the master and hears the same song, refined and translated and more intentional, that's mastering. If they hear that a quality of the mix has been added, removed, or obviously changed, that's mix bus processing. It is not the tool, but rather the outcome, that determines which process one is performing.
This is inherently a judgment call that deserves a bit of attention. A mixer might notice a change that the artist would never catch. Two mastering engineers might draw the line in different places on the same song. The sensitivity of the listener, the monitoring environment in which the comparison between final mix and in-progress master takes place, the genre conventions that inform what qualifies as a "noticeable" change: all of these introduce subjectivity. That is acceptable and expected. The problem isn't disagreement about where the line falls. The problem is ignorance of the line’s existence.
When the distinction gets lost, things go wrong in subtle ways.
A mastering engineer develops a process-first workflow, reaching for the same mid/side processors, the same automation patterns, regardless of whether the song calls for it, and stops hearing each song on its own terms. Or the mix engineer gets a master back and the vocal balance has shifted in a direction they never intended, and now there's a difficult conversation that could have happened before the work was done instead of after. Or worse still, the artist approved a mix they loved and receives a master that sounds different in ways they can't articulate, and their trust in the production process erodes.
None of these are catastrophic outcomes, but all of them are avoidable.
In Practice
Both mastering and mix bus processing are legitimate stages of the production process. The trouble starts when you do one while thinking you're doing the other.
Oftentimes, the listening environment in which the production team has been working is acoustically compromised, sometimes considerably so. A mixing room with deep nulls at 55Hz and 110Hz, for example, makes it extremely difficult for any engineer working in the space to build a strong and balanced bass response when competing low-frequency sources are present in a mix. The mixer fights the room to get the low end to feel right, and that fight is evident in the mix. What sounds correct in a compromised space (and maybe on a couple of small speakers or in the couple of cars available to a production team) may fall apart across the hundreds of different systems through which the audience will actually hear the song. The result is a mix that is not fully representative of what the production team wants listeners to hear.
Other times, the mixing engineer or artist has never heard their work at a release-ready loudness. The song may not hold up at competitive levels without spectral or dynamics intervention, and that can come as a real surprise.
So a mix arrives at my desk, and it needs help. The question is what kind of help.
The instinct, and it's a natural one, is to start reshaping on my end: widen the sides, rebalance the center image to tilt forward or back, change the spectral profile of the mid and side channels independently for maximum control. Sometimes that's exactly what the song requires. But the mastering engineer should recognize when the work has crossed from refinery into redesign. That crossing isn't a failure on anyone's part; it is a different kind of work that deserves to be acknowledged as such.
The best opening move is a conversation.
Before reaching for any tool, speak with the artist, the producer, and the mix engineer. Understand how they expect listeners to experience the song. Which qualities of the mix are intentional? What would they change if they could, and what would they leave untouched?
If the consensus is that the mix is close, that the identity is right and it just needs to translate and compete, then the mastering engineer does mastering work. The kind of work where, when it's done well, you can't quite point to what changed. The song simply sounds more like itself.
If the consensus is that the mix needs its internal balance reshaped, then the mastering engineer applies some degree of mix bus processing. Sometimes this happens because returning to the mix stage isn't practical. Sometimes it happens because the production team wants a higher level of intervention because the mixing environment available to them is fundamentally compromised. In all cases, everyone involved should know that's what's happening. It should be a deliberate choice, not an unexamined default.
And so we find ourselves back at the original question about mid/side processing.
Mid/side processing is a tool like any other. The question was never about how often we use it. It was a question about what kind of work we’re doing when we reach for it, and the answer depends not on the tool but on what the song sounds like when we’re finished. Mid/side EQ, detailed automation, multiband compression or expansion: all of these can be mastering processes if the song's identity passes through them intact. And all of them become mix bus processing the moment a quality of the mix has been perceptibly added, removed, or altered.
The line between refinery and redesign is not drawn by the technique. It is drawn by the result. Different engineers will draw it in different places. It is a line that ought to be drawn conscientiously.