
rhalf t1_itr5m38 wrote

What you call technicalities we engineers call nonlinearities. Frequency response is also called linear distortion: it's the kind of distortion that doesn't change with drive level (how much power you're putting in). Nonlinearities are very important in any audio transducer, and you want them as low as possible, because music is a continually changing drive level. Things get quieter and louder all the time. That's where the fun is.

The most important nonlinearities are the ones that affect the loudness of a particular tone, also known as amplitude modulation. Your hearing system is very sensitive to that. Amplitude modulation suffers the most from compression, which is an inherent property of any audio transducer. Basically, the further the diaphragm moves from its resting position, the more tension builds in its suspension and the weaker the motor gets. This means that the stronger the wobbles, the less accurate the driver becomes and the less sensitive it is. This is very bad for sound quality and it's a universal mark of a driver's limited ability to properly reproduce sound.
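To make the compression idea concrete, here's a rough toy sketch in Python with made-up numbers (not a model of any real driver): the effective gain drops as the diaphragm gets further from rest, so loud passages come out quieter than the ideal linear response would predict.

```python
import numpy as np

# Toy model, not a real driver: the motor gets weaker and the suspension
# stiffer as the diaphragm moves away from rest, so the effective gain
# drops at high excursion (hypothetical numbers throughout).
def effective_gain(x, x_max=1.0):
    return 1.0 / (1.0 + (x / x_max) ** 2)

def driver_output(drive_level, n=1000):
    t = np.linspace(0, 1, n, endpoint=False)
    x = drive_level * np.sin(2 * np.pi * 5 * t)   # intended excursion
    return effective_gain(x) * x                  # actual, compressed output

for level in (0.1, 0.5, 1.0, 2.0):
    y = driver_output(level)
    print(f"drive {level:>3}: ideal peak {level:.2f}, actual peak {y.max():.2f}")
# Small drive tracks the input almost perfectly; large drive compresses badly.
```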

Compression can suddenly get tragically worse with phase issues. Famous examples are the Sennheiser HD820 and the Buchardt S400. Both have severe compression at resonances that cause phase cancellation and, consequently, notches in the frequency response.

The last thing is the time delay spectrum. It's the frequency response taken as a 'lump'. Basically, one very smart guy discovered that our auditory system catches short sounds like transients as lumps of sound. You can't hear how long they last; instead, your hearing tells you that a longer sound is higher in intensity than a short one. This led to the development of time delay spectrometry, known today in forms such as waterfall plots. They are particularly useful in the medium and high frequencies, because they let us find ringing that doesn't show up in frequency response measurements. It's a crucial driver behaviour and measurement for sound quality and spatial effects.
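A crude sketch of the waterfall idea, using a synthetic impulse response with numbers made up for illustration: you look at the spectrum of whatever is still ringing after each start time, and a stored resonance shows up as a ridge that refuses to decay.

```python
import numpy as np

fs = 48000
t = np.arange(0, 0.02, 1 / fs)

# Synthetic impulse response, made up for illustration: a broadband click
# plus a 3 kHz resonance that keeps ringing after the stimulus has stopped.
ir = np.zeros_like(t)
ir[0] = 1.0
ir += 0.3 * np.sin(2 * np.pi * 3000 * t) * np.exp(-t / 0.004)

# Crude cumulative spectral decay ("waterfall"): the spectrum of whatever is
# still ringing after each start time. Real tools use smoother windowing.
freqs = np.fft.rfftfreq(len(ir), 1 / fs)
bin_3k = np.argmin(np.abs(freqs - 3000))
for start_ms in (0.0, 1.0, 2.0, 4.0):
    tail = ir[int(start_ms * 1e-3 * fs):]
    spec = np.abs(np.fft.rfft(tail, n=len(ir)))
    print(f"t = {start_ms} ms: level at 3 kHz = {20 * np.log10(spec[bin_3k] + 1e-12):.1f} dB")
# The 3 kHz ridge decays slowly across the slices; that stored energy never
# shows up on a plain frequency response sweep.
```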

90

rhalf t1_itr8fij wrote

The reason why peeps talk about frequency response and frequency response only is threefold:

First, headphones are tragically bad at it. Tuning a headphone is a nightmare as opposed to tuning a loudspeaker. They are often off by something like 10dB and it's not considered weird, that's how difficult it is to get that line flat. By comparison a reviewer's standard for a recommended loudspeaker is +/-1.5dB.

Secondly, it's the only feature of sound that you, the user, can affect. You can't change compression or ringing without comprehensive training in engineering, but you can push sliders to make an EQ curve.
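For the curious, each slider on a parametric EQ is essentially a peaking filter. Here's a minimal sketch using the well-known RBJ "Audio EQ Cookbook" peaking biquad; the band frequencies and gains are hypothetical.

```python
import numpy as np
from scipy.signal import freqz

def peaking_eq(fs, f0, gain_db, q):
    """One EQ 'slider': an RBJ Audio EQ Cookbook peaking biquad."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

# Hypothetical correction: tame a 3 kHz peak by 4 dB, lift a 150 Hz dip by 3 dB.
fs = 48000
for f0, gain, q in [(3000, -4.0, 2.0), (150, +3.0, 1.0)]:
    b, a = peaking_eq(fs, f0, gain, q)
    w, h = freqz(b, a, worN=4096, fs=fs)
    at_center = 20 * np.log10(abs(h[np.argmin(abs(w - f0))]))
    print(f"{f0:>5} Hz band: gain at center = {at_center:+.1f} dB")
```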

Lastly, it has been shown that frequency response is very important. It's the most fundamental measurement we have, and our hearing agrees with its results to the highest degree. Not 100%, but still more than with anything else.

As a bonus, I'll add that it's the easiest thing to measure. It doesn't require any special equipment or knowledge, and the simple graph is easy to understand. If I posted a waterfall graph instead, a regular person reading it would be lost and have no idea what to make of it.

42

Technical_City t1_itrac83 wrote

>First, headphones are tragically bad at it. Tuning a headphone is a nightmare as opposed to tuning a loudspeaker. They are often off by something like 10dB and it's not considered weird, that's how difficult it is to get that line flat. By comparison a reviewer's standard for a recommended loudspeaker is +/-1.5dB.

This is really interesting. As a non-engineer, can you explain why it is that tuning headphones is so practically difficult? Intuitively I know it to be true (hence all the substandard headphones, etc.), but have no sense of why it's so difficult.

11

rhalf t1_itrjv5j wrote

In order to understand the difference, you need to know how a loudspeaker is tuned. A hi-fi loudspeaker has bare drivers on a flat baffle. There is nothing affecting the sound between the diaphragm and a microphone, and the loudspeaker designer then has a full arsenal of tools to shape it. It's like working in a chemical lab, where everything is in clear glass with no contamination and there are tools to precisely dose chemicals. If you want to alter a driver, you make a precise virtual model in software and the software spits out its response. You know for sure that this response will be very close to the real thing. You can, for example, put the tweeter in a waveguide, and an app will help you iterate the design 20 times before you get all the lines parallel. You don't need to build the thing 20 times.

You're done with the baffle and the thing is still not flat? That's OK, you can compensate for any problems in the electrical domain. You open another program that lets you build a filter out of electronic parts. It simulates a notch filter here and a shelf there, and with 20 parts you have finally linearized the response. The whole thing weighs a ton and barely fits in the enclosure, but hey, it sounds great. Oh, did I mention that AI does that last part for you? Yup, there is an app that does just that. You give it a measurement and it spits out a circuit diagram.
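If you want a feel for the "fix it in the electrical domain" part, here's a minimal sketch with made-up component values: a damped parallel LC tank in series with the driver (simplified here to a plain 8-ohm resistor) carves a notch where the tank resonates.

```python
import numpy as np

# Made-up component values, driver simplified to an 8-ohm resistor: a damped
# parallel LC tank in series with the driver. At the tank's resonance its
# impedance rises, less voltage reaches the driver, and you get a notch.
f = np.logspace(2, 4.3, 400)              # 100 Hz .. 20 kHz
w = 2 * np.pi * f

L, C, R = 1.0e-3, 8.2e-6, 22.0            # hypothetical values, ~1.8 kHz notch
Re = 8.0                                  # crude stand-in for the driver

z_tank = 1 / (1 / R + 1 / (1j * w * L) + 1j * w * C)
response_db = 20 * np.log10(np.abs(Re / (Re + z_tank)))   # voltage divider

f0 = 1 / (2 * np.pi * np.sqrt(L * C))
print(f"notch centred near {f0:.0f} Hz, depth {response_db.min():.1f} dB")
```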

Now, headphones... Remember when I said that you can design a waveguide? With headphones you are already forced to work with one. The ear cup is a waveguide, a terrible one, and there's not much you can do about it. A headphone's ear cup is like a room that cannot be detached from the loudspeaker. You can't take it to an anechoic chamber; you need to work with the mess it creates. Sound is like light, except everything is a mirror. A lightbulb in a torch illuminates the surroundings differently than a lightbulb in a chandelier. A headphone is a lightbulb in a crumpled tin can. The result is a mess.

Headphones are tuned by covering holes on the driver with lossy materials. You poke a hole and see what happens. Each time you change something, something drastic happens, but it's difficult to understand what. I personally have no idea what tools or software the big guys like Sennheiser have to aid them with it; I guess a headphone designer would have to shed some light on that, not me. But even if you get the driver tuning right, you change the earpads and it sounds different. More than that: put it on someone with curly hair and glasses and that makes a difference as well :D That's because the enclosure is lossy. Very lossy. I personally don't understand why we can't add passive filters to headphones. It used to be a problem of source output impedance, but now that everything is measured and reviewed, we can predict how a filter would sound. So it would be cool to see filter PCBs for headphones. It won't fix all issues, because the most appalling resonances in headphones are destructive, meaning they cannot be fixed without a physical change, but it could be used to attack broad valleys and bumps.

48

The_D0lph1n t1_itrurpc wrote

There's a site called DIYAudioHeaven that does provide schematics for analog filters for certain headphones. And I've seen speculation that the Dan Clark Audio Expanse uses a passive filter to produce its bass shelf that would normally be impossible on an open-back planar-magnetic headphone. So it's not unheard of to use passive filters, but certainly not commonplace. I suppose people who use high-output-impedance amps on headphones with highly variable impedance curves are doing some passive filtering too.
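A rough back-of-the-envelope sketch of that last point, with assumed numbers rather than a real headphone: the amp's output impedance and the headphone's impedance curve form a voltage divider, so an impedance peak at the bass resonance turns into a frequency response bump when the output impedance is high.

```python
import numpy as np

# Assumed numbers, not a real headphone: impedance sits around 300 ohm but
# peaks near 600 ohm at the bass resonance. The amp's output impedance and
# the headphone impedance form a voltage divider, which is itself a filter.
f = np.logspace(1, 4, 300)
z_phone = 300 + 300 / (1 + ((f - 100) / 40) ** 2)   # fake peak around 100 Hz

for z_out in (0.5, 120, 300):
    v = z_phone / (z_phone + z_out)        # fraction of the amp voltage delivered
    tilt_db = 20 * np.log10(v.max() / v.min())
    print(f"output impedance {z_out:>5} ohm -> response varies by {tilt_db:.1f} dB")
# Near-zero output impedance: basically flat. 120 or 300 ohm: the impedance
# peak turns into an audible bass bump.
```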

23

rhalf t1_its457p wrote

Hey that's cool. I'll check it out.

5

ThatsAlotOfBeanz t1_itswgfo wrote

Great response, just had to say your statement “a headphone is a lightbulb in a crumpled can” is perfect 😂 sincerely - a headphone designer after a long day.

To elaborate slightly on the tools the various "big guys" use: there are a handful of mathematical methods you can use to predict and understand what changes to make to the variables of a headphone, but it is quite messy. And a lot of the more boutique headphone places follow the "poke a hole and see what happens" approach, which isn't a bad thing, just one of many design methodologies.

13

Technical_City t1_itsf97l wrote

Thank you so much for writing this out. This is what reddit is great for. Very interesting.

3

gr8john6 t1_iufeok1 wrote

This is why back in the 90's people tried to make the smallest well-tuned loudspeakers they could and hang them over the ear. ;P Bypass all the problems that come with human imperfections.

1

knvngy t1_itrs83l wrote

> By comparison a reviewer's standard for a recommended loudspeaker is +/-1.5dB.

In real life you almost never get a loudspeaker that produces a flat frequency response within +/-1.5dB. That's fantasy unless you listen to very high-end speakers in a perfect anechoic chamber, which is not happening. In real life it's more like +/-10dB. In that sense both headphones and IEMs can produce a smoother frequency response than most loudspeakers.

5

The_D0lph1n t1_itrgmk8 wrote

Thanks for the info on Time Delay Spectrometry. I did some searching on the keyword and found a paper by Richard Heyser (1973) and another later one by Mark Fitzgerald (1989). Is the Heyser paper the one that started the practice? Or is there another one that came earlier?

The psychoacoustic effect of a longer chirp being interpreted as louder by our brains helps explain a few cases I've experienced where a part of the spectrum sounded louder to my ears than the graph would suggest, but was fixable via EQ.

I wonder if there is a way to incorporate the auditory effects of "lumping" into the FR graph, like adding level to the parts of the spectrum where there is ringing to represent the audible effect of that ringing. It would no longer be a pure FR graph, but it might be helpful in some cases.

2

rhalf t1_itrpvx2 wrote

It does make sense intuitively, doesn't it? I'd love to see it made. Heyser was the pioneer. A real genius. Of the guys I've heard of, maybe Tom Danley is comparable today. Not that I've read much.

Heyser basically found what everybody interested in audio wants to know: the connection between pleasure and objective data. TDS is basically asking a driver to shut up and seeing how it complies. Spoiler alert: it doesn't. Complex diaphragms and motors have resonances that store energy and release it when there is no stimulus. These resonances rob us of silence! No other measurement finds that. There is a lecture on Heyser on YouTube that captures all you need to know about the guy.

Most famously waterfall plots help us understand the smoothness of tweeter sound. Select people with enough money or DIY patience know that ribbon tweeters sound smooth and domes are harsh despite the fact that they're made of the same material. For a long time there was no graph to capture that, but waterfall makes it clear.

6

xstreamstorm t1_itua2ye wrote

from an engineer's perspective what would you say are the brands that actually seem to know what they're doing, contrary to a lot of the hype or otherwise?

2

rhalf t1_itwxxuk wrote

This is an interesting question. I was thinking about it for a while and came to the conclusion that I haven't heard many headphones outside of mainstream hi-fi. The ones I have heard were way more deliberate than I could ever make them, except for Grado. The Grados I heard years ago were like a hearing aid. I guess people with hearing impairment can have their hi-fi too.

The most pointless products I've bought, however, must be some multi-driver Chi-fi earphones. Really badly executed products from TRN and KZ. I keep them stored in the hope that some day I'll retune them.

I was also disappointed with Shure. Not bad, not terrible, just uninteresting. You can see how many popular products in hi-fi were iterated even 10 times before reaching high status. Sennheiser is an example. I used to have their HD545, which was to a degree a precursor to the HD600, and you can see a clear direction of development there. They not only know what they are doing; they persisted for a long time. The same applies to KEF in the speaker world. Refinement takes time.

Most small manufacturers either make planar drivers or use the free-edge dynamic drivers found in Denon, Fostex and Creative headphones. They're available online. You can have fun with them too and have a no-bullshit set of cans :) I'm personally happy with a modded Fostex T40RP. Nothing too fancy, but it gets the work done.

1

D1visor t1_ittqdvt wrote

Really nice explanation even if I can't quite visualize it or understand it perfectly.

So I don't understand the whole phase thing, but the part about phase cancellation and the consequent dips made me go "aha, I see", because I have the AKG K371 and HD560S, which both have notches where they get really quiet but one side also gets louder (the K371 is much worse though), and I can't fix it with EQ.

1

rhalf t1_itumlxi wrote

You are not alone. No one understands phase. I think Scott Hinson wrote an article on this, but I have yet to read it. Phase is a very abstract term that means time. But it's not time measured in arbitrary units like seconds; it's measured in waves. A wave can have any propagation time, so saying "0.5 ms delay" doesn't tell you much. However, if you say "the length of a 2 kHz wave, which is at the same time half the length of a 1 kHz wave", then you can start imagining what that delay is going to do to your frequency response, simply by knowing how waves combine.

As I said, it's an abstract term, and consequently we can apply it to different situations. For example, phase cancellation suggests that there are two sources. One makes positive pressure, the other makes negative pressure, and they end up working hard and achieving nothing. These two things can be anything, for example the two halves of a diaphragm. Grab a sheet of paper and hold it flat in one hand. Move it up and down slowly. The paper should flex a little but generally move with your hand; the suspended part of your diaphragm is in phase with your hand. Now as you speed up, it starts to bend. At a certain very specific pace, the end of the sheet will flap up when your hand goes down and vice versa. It'll be out of phase. When your hand makes positive pressure, the suspended paper makes negative pressure. When these pressure regions propagate, they expand into each other and to a large degree cancel. Here your hand is the driver's motor and the far end is the diaphragm's edge or suspension.

Other examples of a phase issue are waves bouncing back and forth between the driver and the back of the enclosure, effectively impeding its movement, or some part of the enclosure ringing out of phase with the driver. In these cases the sound coming back is the second source. Phase issues are deeply connected with resonances. A resonance can be out of phase with its energy source, and engineers often use that to their advantage: for example, a Helmholtz resonator is a phase-reversing device used to extend bass in a bass reflex enclosure. The air in the port is out of phase with the back of the driver inside the enclosure, and consequently in phase with its front. We like when this happens :)

The last phase issue that comes to my mind is quite tricky. It comes from the driver being big relative to the wave it makes. It's called directivity, because depending on the angle from which you listen to the speaker, its frequency response will change. It changes because there are travel-distance differences between various points on the driver. The far part of the driver makes pressure that has to travel further to reach you, and by the time it reaches the pressure from the near part of the driver, they are out of phase (at some specific frequency or frequencies). This is especially a problem on expensive headphones with big drivers. In speakers we fix that with so-called phase plugs: obstacles that force selected parts of the wave to take longer paths.
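Here's a tiny numerical sketch of the two-source cancellation, with assumed numbers: a direct sound plus a slightly weaker copy that travelled 5 cm further, like a reflection. Wherever that extra path is half a wavelength, the two arrive out of phase and a notch appears.

```python
import numpy as np

# Assumed numbers: a direct sound plus a slightly weaker copy that travelled
# 5 cm further (a reflection off the back of the enclosure, say). Where the
# extra path equals half a wavelength, the two arrive out of phase and cancel.
c = 343.0                     # speed of sound in air, m/s
extra_path = 0.05             # 5 cm longer path for the second source
delay = extra_path / c

f = np.linspace(100, 20000, 2000)
direct = 1.0
reflected = 0.8 * np.exp(-1j * 2 * np.pi * f * delay)
combined_db = 20 * np.log10(np.abs(direct + reflected))

first_notch = c / (2 * extra_path)        # half-wavelength condition
print(f"first cancellation expected near {first_notch:.0f} Hz")
print(f"deepest dip found: {combined_db.min():.1f} dB at {f[np.argmin(combined_db)]:.0f} Hz")
```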

5

knvngy t1_itrqnhy wrote

If you can hear non-linear distortion then the transducer is very low quality, excessive power is applied beyond what the transducer was designed for, or both. Most of this distortion is usually concentrated at very low frequencies. Decent transducers do not produce audible distortion other than the frequency response itself at normal listening levels.

Since most people who talk about technicalities neither measure nor discuss how the transducer creates distortion, nor any technical aspect associated with it such as levels, their talk about "technicalities" can be dismissed as gibberish.

> waterfall plots

These plots, which are nothing but a fancy and convoluted way to plot resonances, can be very misleading and are not very useful for meaningfully interpreting data. They're even more useless for headphones and IEMs.

−8

rhalf t1_its15a1 wrote

I think you may have just demonstrated your lack of understanding of how a driver works. I touched on it in my first post, so I'll begin where I left off. A transducer reproduces sound most accurately near its resting state. This means that small-amplitude vibrations are clean. The further the diaphragm gets from the center, the less linear it behaves and consequently the more the sound distorts. I think so far we are on the same page. So here's where your logic fails: it's a fullrange driver. At the same time as it plays the lows that push it far from its comfort zone, it simultaneously plays the highs, which are being reproduced in and out of that comfort zone. The small, high-frequency vibrations are subjected to the same modulation of forces as the bass. This often causes audible distortion throughout the range.

The reason opinions like the above circulate is that many people learn from basic theory and from looking at graphs instead of using reasoning and insight. The distortion that you see in reviews is so-called THD, or total harmonic distortion. Let's break down this cluster. HD, harmonic distortion, is a measurement. It's not a physical property of a driver. It's an oversimplified measurement procedure that is older than sound reproduction itself. It wasn't adapted to our psychoacoustic model, and neither does it accurately describe the troubles of reproducing music. It is simply playing a SINGLE tone and seeing what other tones come out. Total HD is simply a way of presenting this data in an even simpler form. No wonder you don't know how a headphone works; you're basing your knowledge on a simplification of an oversimplification. A real distortion test that's representative of sound quality is a multitone intermodulation torture test plus a set of compression curves as known from Klippel. There are probably plenty of other flaws in your concept, but let's just stop and digest this. IMO the OP makes a great point. The basic measurements we use don't fully describe sound quality. It may be enough for you, but that's just, like, your opinion, dude.
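To illustrate the difference, here's a minimal sketch where the "driver" is just a memoryless cubic nonlinearity (my own stand-in, not any real transducer): with a single tone the distortion lands neatly on harmonics, but add a bass tone and the same nonlinearity sprays sidebands around the midrange tone. That's intermodulation, and a single-tone HD sweep never shows it.

```python
import numpy as np

# The 'driver' here is just a memoryless cubic nonlinearity, my own stand-in
# and not a model of any real transducer.
fs = 48000
t = np.arange(fs) / fs

def nonlinear(x):
    return x - 0.1 * x ** 3

def levels_db(signal, freqs_of_interest):
    spec = 2 * np.abs(np.fft.rfft(signal)) / len(signal)
    bins = np.fft.rfftfreq(len(signal), 1 / fs)
    return {f: round(20 * np.log10(spec[np.argmin(np.abs(bins - f))] + 1e-12), 1)
            for f in freqs_of_interest}

# Single 1 kHz tone: the distortion lands neatly on harmonics (3 kHz here).
single = nonlinear(np.sin(2 * np.pi * 1000 * t))
print("single tone:", levels_db(single, [1000, 3000]))

# Bass + midrange together: the same nonlinearity also puts sidebands around
# the 2 kHz tone (2000 +/- 120 Hz). That's the "vocals distort while the bass
# plays" case, and a single-tone harmonic sweep never shows it.
two_tone = nonlinear(0.8 * np.sin(2 * np.pi * 60 * t) + 0.2 * np.sin(2 * np.pi * 2000 * t))
print("two tones:  ", levels_db(two_tone, [2000, 1880, 2120]))
```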

9

knvngy t1_its801k wrote

> multitone intermodulation torture test

I think you are missing the point here.

Even if it is true that the 'real and true' measurement of audible distortion is the 'multitone intermodulation torture test', absolutely nobody uses it to review headphones or IEMs or to talk about 'technicalities' or 'sound quality', except perhaps some obscure nerdy gnome in a cave.

Secondly, that 'multitone waterboarding test' is kinda silly, because the overwhelming majority of the distortion is usually concentrated at very low frequencies, since that's where the driver has to move the most to displace air. That's where the non-linear distortions rear their ugly heads first, and that's something that can be more easily identified with a normal total harmonic distortion measurement. If the transducer can't pass that simple test at decent loudness, I don't see the point of continuing with more exotic tests.

> The basic measurements that we use don't fully describe sound quality

If people measure the headphone at 150 dB and it's distorting like a death metal guitar, then of course the frequency response is rather useless.

But if the transducer is not significantly distorting, then I don't see what point you are trying to convey here. If it matters, then measure it and report at what level the distortion becomes audible. Then, obviously, do not measure the frequency response when that is the case. Such a silly and moot point...

−3

rhalf t1_itsbj1a wrote

I feel like you didn't really read what I said and only implied an insult, either to me or the OP, with a malevolently worded ad populum. The fact that most people use THD instead of a more adequate form of measurement has nothing to do with your disdain and hateful view of this community. It's simply the easier thing to compare: THD is a standard, and multitone tests are custom, so you can't compare results between users who use different test procedures. The problem I'm pointing at, however, is that you're not using THD for that. You use it to draw ignorant conclusions about the working principles of transducers. Your reasoning only applies to multi-way devices. A fullrange driver distorts across its entire bandwidth. This is an obvious fact known to everyone who designs audio, and a primary reason why it's worth building multi-way speakers. If a fullrange driver is playing bass, the vocals distort. That's just how it works. HD plots don't display that; multitone tests do, because unlike HD they were specifically designed for testing music reproduction.

10

knvngy t1_itsicmw wrote

Just show that the distortion is significant/audible beyond a certain level, then do not measure/publish the frequency response beyond that level. At that point the whole discussion about the "multitone intermodulation torture test" becomes utterly irrelevant to the frequency response. It is that simple.

1

JivanP t1_iuiox2o wrote

> beyond certain level

At what frequency/frequencies?

1