Submitted by syahniel t3_11uitr1 in headphones

For example, Spotify sounds more inside my head, and YouTube (not all videos, but some, such as guitar covers) sounds like it's outside my head. I don't know how to explain it in more detail, but that is how it feels to me.

Which one is actually better? I like the YouTube sound more, because it's wider rather than in-head.

To answer a question that might come up in the comments: no, mono audio is not turned on. It's not the mono kind of in-head sound. It's just that YouTube seems wider.

0

Comments


No-Context5479 t1_jcolcwp wrote

YouTube compresses audio so it sounds louder... Your brain can trick you into thinking louder is better... Spotify has a set loudness level so that going from one song to another doesn't sound jarring just because someone compressed the shit out of their music. Spotify streams 320 kbps Ogg Vorbis, which is great.

3

kazuviking t1_jcor72o wrote

It's equivalent, but not 320 kbit/s, and only with Premium; free users are stuck at 160 kbit/s. YouTube, on the other hand, uses 256 kbit/s.

1

thebirdman9999 t1_jctcopq wrote

The *normalize volume* setting should be set to off to get the best quality out of Spotify.

Using the normalize volume setting is not great for the dynamic range.

Another thing I find strange is that you say YouTube sounds louder; I'm a bit confused by that. If I don't touch the volume and compare the same songs between YouTube and Spotify, Spotify is louder for sure. I would guess the normalize volume setting has something to do with that, or maybe another setting.

1

No-Context5479 t1_jctdr7u wrote

u/thebirdman9999, Huh.... Normalisation doesn't impact dynamic range if it's set to Normal or Quiet... Where did you hear such lies... Normalisation doesn't compress the file; it just lowers the volume to a set standard LUFS across the board, so worse, more compressed songs don't sound jarring playing right after songs with good dynamic range and acceptable LUFS.

Please don't go around saying Normalisation reduces dynamic range.

Lemme link this wall of text:

Spotify goes into detail on this page (https://artists.spotify.com/faq/mastering-and-loudness#can-users-adjust-the-levels-of-my-music) about what each setting does.

> When we receive your audio file, we transcode it to delivery formats Ogg/Vorbis and AAC. At the same time, we calculate the loudness level and store that information as metadata in the transcoded formats of your track.

> Playback levels are not adjusted when transcoding tracks. Tracks are delivered to the app with their original volume levels, and positive/negative gain compensation is only applied to a track while it’s playing. This gives users the option to adjust the Loudness Normalization if they want to.

> Negative gain is applied to louder masters so the loudness level is at ca - 14 dB LUFS. This process only decreases the volume in comparison to the master; no additional distortion occurs.

> Positive gain is applied to softer masters so that the loudness level is at ca - 14 dB LUFS. A limiter is also applied, set to engage at -1 dB (sample values), with a 5 ms attack time and a 100 ms decay time. This will prevent any distortion or clipping from soft but dynamic tracks.

> The gain is constant throughout the whole track, and calculated to match our desired output loudness level.

> Premium users can choose between the following volume normalization levels in their app settings:

> Loud - equalling ca -11 dB LUFS (+6 dB gain multiplied to ReplayGain)

> Normal (default) - equalling ca -14 dB LUFS (+3 dB gain multiplied to ReplayGain)

> Quiet - equalling ca - 23 dB LUFS (-5 dB gain multiplied to ReplayGain) This is to compensate for where playback isn’t loud enough (e.g. in a noisy environment) or dynamic enough (e.g. in a quiet environment).

Emphasis mine -- basically Spotify's system is just normalization in most cases, and it only behaves as a compressor to prevent clipping. I suspect that if you set it to the "Loud" setting you'd hit that behavior on some tracks, but in general their approach seems designed to avoid it. Personally I'd recommend users enable audio normalization and use the "Quiet" or "Normal" settings.
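The logic Spotify describes above can be sketched in a few lines. This is a hypothetical illustration, not Spotify's actual code: the target LUFS values come from their FAQ, and the function names are my own. A constant gain moves each track toward the target for the chosen setting, and only tracks that receive *positive* gain need the -1 dBFS limiter, since attenuation can never clip.

```python
# Hypothetical sketch of Spotify-style loudness normalization
# (illustrative only; function names and structure are assumptions).

TARGETS_LUFS = {"loud": -11.0, "normal": -14.0, "quiet": -23.0}

def normalization_gain_db(track_lufs: float, setting: str = "normal") -> float:
    """Constant gain (dB) applied for the whole track's duration."""
    return TARGETS_LUFS[setting] - track_lufs

def needs_limiter(track_lufs: float, setting: str = "normal") -> bool:
    """Only positively-gained (soft) tracks can clip, so only they
    get the limiter that engages at -1 dBFS."""
    return normalization_gain_db(track_lufs, setting) > 0.0

# A loud master (-8 LUFS) on the Normal setting is simply turned down:
print(normalization_gain_db(-8.0, "normal"))   # -6.0 dB, no limiter

# A quiet, dynamic master (-20 LUFS) gets +6 dB plus limiter protection:
print(normalization_gain_db(-20.0, "normal"))  # +6.0 dB
print(needs_limiter(-20.0, "normal"))          # True
```

Note how the loud master only ever gets attenuated, which is why a well-mastered track never loses dynamic range under this scheme.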

To directly answer your questions:

> Does Spotify's volume normalization use compression?

Yes, but only if clipping is detected, and only for the duration that clipping occurs. Since tracks are lowered in volume when this feature is turned on, this should happen exceedingly rarely.

> Why does Spotify (and the internet) say that dynamic range is most preserved on quiet normalization?

Since the compression only kicks in if clipping (or near-clipping) is detected, it's fairly unlikely to engage on the Normal setting, and basically guaranteed not to happen on the Quiet setting. It might engage in some cases on the Loud setting, but from their documentation it sounds like that would only happen if a track was very quiet to begin with.

So it only compresses if the track was mastered terribly, with an extremely loud LUFS... A well-mastered song is never hit with a dynamic range penalty... This is good: it has helped us move away from the era when people brickwalled their audio so their CDs sounded louder than other acts', thinking that would lead to more plays. Nowadays, if you fuck up your own mix, you get the same volume level as songs mastered with tons of dynamic range, and your track ends up sounding like trash...

TLDR: Sorry for the long, winding text, but no, Spotify doesn't touch the dynamic range of a song. It just lowers the volume so all songs on the platform play at the same level.

1

thebirdman9999 t1_jctenmm wrote

All good :) You seem to know much more than me on this subject. I was just sharing what most people say about it: if you google *spotify normalize volume*, the results on Reddit and Head-Fi mostly point toward turning it off for better sound quality.

1

No-Context5479 t1_jctf04e wrote

That was debunked years ago by Spotify themselves, and people verified it with ABX tests.

So I advise putting it back on and using Quiet or Normal. It really makes things easy on the ears for long listening sessions. Loud is not it at all for me... it cranks the volume, and I'm trying to avoid anything beyond 75 dB...

Also, all good... We're all still learning in this space... I learned something new the other day about vents and IEMs, which I knew nothing about. 👍🏾

1

thebirdman9999 t1_jctfvma wrote

I prefer to leave it turned off; I don't feel the need to use it. The loudness difference between tracks is really minor to me. I can't recall being surprised by a big change in loudness from one track to another, though I know it can happen sometimes. I guess I'm just lucky.

1

No-Context5479 t1_jctgabm wrote

Yeah, I listen to a vast range of genres, each with its own "standard" industry LUFS. Classical tends to be the most dynamic-range-filled genre, so its LUFS never goes beyond -14... Pop, and genres like electronic music, sometimes tend to crank their compression. A sudden change between songs is very audible and sometimes jarring, and I never want to be distracted, so smooth transitions all around... But I get you. Preferences, preferences 🤝🏾

1

kazuviking t1_jcorbwc wrote

If you don't have Spotify Premium, then YouTube is higher bitrate. On the other hand, it depends on the recording and the effects used. Stereo mics will always sound better than mono ones.

2

syahniel OP t1_jcs41jz wrote

I have Premium. And I guess it's because of the videos too. Maybe they intentionally made the audio more stereo, like you said, with the mics.

1

SilentRain2496 t1_jcpkmok wrote

The format may be relevant, but I think it's mostly just the recording setup, like binaural recording.

Spotify: Ogg (Vorbis), AAC

YouTube: Opus, M4A (AAC)

From https://en.wikipedia.org/wiki/Opus_(audio_format):

> Opus replaces both Vorbis and Speex for new applications, and several blind listening tests have ranked it higher-quality than any other standard audio format at any given bitrate until transparency is reached, including MP3, AAC, and HE-AAC.

2

wagninger t1_jcqgmsq wrote

YouTube does less compression; it's actually one of the most dynamic non-lossless "music streaming services". That might influence staging. Spotify also uses a codec that is less good at stereo imaging. Sorry that I don't have specific sources for this: the first part I heard on the mastering podcast with Ian Shepherd, and the second part I remember from my audio technician training.

1

dimesian t1_jcqybce wrote

The better one is the one you enjoy the most. Just last night I listened to a much-loved track on my old iPod, which I hadn't used for several years. I think I bought it on iTunes, so it's possibly a 128 kbps track. I then tried the hi-res version on Tidal and didn't enjoy it as much, though it didn't sound bad. I tried it on YouTube Music (too much free time) and it sounded great; someone else might prefer the hi-res version.

1

thebirdman9999 t1_jct90wz wrote

The high frequencies on YouTube are less defined and clear than on Spotify, so the sound on YouTube is, in other words, warmer/smoother. Since it's smoother, you can probably turn the volume up a bit more, resulting in a wider stage, simple as that.

That's my observation from comparing the same songs between Spotify Premium and *regular* YouTube music videos of them.

I do use YouTube to listen to music sometimes when I want to relax late at night (: even if I can get higher quality on Spotify.

1

bearstakemanhattan t1_jctfl6z wrote

YouTube and Spotify have different loudness normalization standards. Spotify normalizes to roughly -14 LUFS, whereas YouTube normalizes to -12, iirc. I noticed this too, but if you level-match, it doesn't sound different. It could also have to do with the psychology of having a music video to go with the music, but I have no evidence to back that theory.
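The level-matching arithmetic is simple enough to sketch. Assuming the targets above (Spotify ≈ -14 LUFS, YouTube ≈ -12 LUFS, which are this commenter's recollection, not official figures), the ~2 dB difference alone can make YouTube sound "bigger" until you compensate:

```python
# Rough level-matching arithmetic between two loudness targets.
# The -12 / -14 LUFS figures are assumptions taken from the comment above.

def db_difference(lufs_a: float, lufs_b: float) -> float:
    """How many dB hotter target A is than target B."""
    return lufs_a - lufs_b

def db_to_amplitude_ratio(db: float) -> float:
    """Convert a dB difference to a linear amplitude ratio."""
    return 10 ** (db / 20)

diff = db_difference(-12.0, -14.0)   # YouTube is ~2 dB hotter
ratio = db_to_amplitude_ratio(diff)  # ~1.26x linear amplitude
print(diff, ratio)
```

So to level-match for a fair comparison, you would turn the YouTube source down by about 2 dB (or the Spotify source up by the same amount) before judging soundstage.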

1