Here’s another one of those posts I started one day and then decided the next day that it doesn’t really say anything, and doesn’t contain any valuable content except the one link to Scopique’s post. But per the new 2019 blogging rules, I’m posting it anyway.
I saw Scopique’s post on audio settings for streaming, which gives me an excuse to write about one of my favorite things: Audio engineering! I could write thousands and thousands and thousands of words on the inside baseball minutia of all the work I put into the audio tracks on my YouTube videos. (“See how different it sounds when you move the microphone two inches *that* way?? See how much different it sounds with a 4.25:1 compression ratio instead of a 4.5:1 ratio??? Isn’t that the coolest thing ever???”)
But I don’t really have to, because Scopique’s post covers pretty much everything you need to know. A decent microphone, a compressor, a noise gate, and ducking are the four main components of good-quality sound on an audio track.
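For the curious, those last three effects are conceptually just gain adjustments applied per sample. Here’s a minimal numpy sketch of each, purely illustrative; real-world implementations (OBS’s filters, outboard hardware) use envelope followers, attack/release times, and soft knees that this toy version skips:

```python
import numpy as np

def noise_gate(x, threshold=0.02):
    """Mute samples whose absolute level falls below the threshold."""
    return np.where(np.abs(x) < threshold, 0.0, x)

def compressor(x, threshold=0.5, ratio=4.0):
    """Above the threshold, reduce gain so level rises at 1/ratio (4:1 here)."""
    level = np.maximum(np.abs(x), 1e-12)  # avoid divide-by-zero
    over = level - threshold
    gain = np.where(over > 0, (threshold + over / ratio) / level, 1.0)
    return x * gain

def duck(music, voice, amount=0.2, threshold=0.05):
    """Lower the music bed wherever the voice signal is active."""
    return np.where(np.abs(voice) > threshold, music * amount, music)
```

Note that all three are simple arithmetic on sample amplitudes, which is why they’re so cheap to run in real time.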
The only thing I might add is that if your room is so noisy you have to use a noise suppression filter, you’re probably pushing a very large rock uphill trying to fix that in post-processing. Good sound quality always begins with a good room, then a good microphone, then microphone placement, then processing, in roughly that order. Noise suppression is probably a CPU-intensive operation, too. I imagine it operates in the frequency domain, which means math-intensive real-time conversions, whereas the other three are simple volume operations. But admittedly, the last time I used noise suppression technology I was removing hiss and background noise from cassette tape transfers using a program called CoolEdit, which unfortunately no longer exists. And admittedly you’d have to be a super audiophile nerd like me to notice noise suppression artifacts on an audio track in the first place.
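To illustrate why the frequency domain is more expensive: a classic (and crude) approach is spectral subtraction, which requires an FFT, per-bin math, and an inverse FFT on every frame. This is just a toy sketch of the idea, not how any particular filter actually works:

```python
import numpy as np

def spectral_subtract(frame, noise_mag, floor=0.02):
    """Crude spectral subtraction: FFT the frame, subtract an estimated
    noise magnitude from each frequency bin (clamped to a small floor
    to avoid negative magnitudes), then transform back to time domain."""
    spectrum = np.fft.rfft(frame)
    mag = np.abs(spectrum)
    phase = np.angle(spectrum)
    cleaned = np.maximum(mag - noise_mag, floor * mag)
    return np.fft.irfft(cleaned * np.exp(1j * phase), n=len(frame))
```

Compare that to a noise gate, which only looks at one number (the sample’s level), and the CPU-cost difference is obvious. Over-aggressive subtraction is also where the telltale “underwater” artifacts come from.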
But… if you really can’t get rid of the noisy thing in the room, hang something between the noise source and the microphone, such as drapes. And/or make sure the noise source is behind, or at least to the side of, the microphone.
One of these days I keep meaning to write a post about my current audio setup, because as I mentioned above I think it’s the most fascinating topic in the universe. Also I would like to write it all down for posterity, just in case I need to take it apart and put it back together someday. I have a fairly unconventional setup, you might say, not the kind of thing most people would want to try to replicate. It’s like how people put custom engines in cars or whatever it is weird car people do with their cars when they’re bored. I use a Behringer outboard compressor on the microphone, for example, so I don’t have to use OBS’s effects. And my microphone is a “vintage” Australian Rode NT-1 I bought in the 90s (vintage only in the sense that they don’t make it anymore; the current NT-1 is gray, while mine is the silver color of the current NT-1A) plugged into a little Mackie 1202 mixer.
But before I get sidetracked into all that, I’ll just reinforce what Scopique said and point out that the sound quality of the audio in a stream or video is quite literally the first and almost the only thing that I notice about it. I almost never sit and *watch* game videos or streams, unless I am studying to learn a particular technique or strategy. I usually listen to them and occasionally glance at the screen. That means I couldn’t care less about your overlays or web cams or chat scroll; I need to hear what’s going on, either from the game sounds themselves, or from the streamer narrating what they’re doing, or preferably both. Otherwise I’m probably not going to stay very long. (Not that anyone should be streaming to my tastes, since I’m way far into the margins of the target demographics of streaming viewership.)