Submitted by SupOrSalad t3_z715hp in headphones
Comments
Taraxian t1_iy4clfd wrote
A lot of audiophile woo can be dismissed by fully internalizing that human hearing is a purely "one-dimensional" sense -- at any given moment the only thing an eardrum perceives is an increase or decrease in air pressure, a single number going up or down. That one number varying over time is what makes the waveform whose frequency content we hear as sound
Taraxian t1_iy4e3pi wrote
I guess technically this makes hearing in one ear a "zero dimensional" sense -- all you hear is a presence or absence of varying intensity with no "position" data -- and stereo sound is "1D"
I.e. all you have is two eardrums that let you judge how far a sound source is to the right or left of you by whether it's louder in your right or left ear
The other two dimensions of "3D audio", front/back and up/down, are illusory information your brain calculates via deduction, based on your personal HRTF -- clever little hacks based on stuff like the shell shape of your outer ear causing subtle changes in high treble frequencies depending on exactly where a sound is coming from (treble sounds coming from behind you get blocked while ones coming from in front get amplified) and making little judgments based on subtle 3D movements and rotations of your head
Which is the whole reason binaural audio works even though it's just recorded with two mics and played with two speakers in your headphones, and why it's so much more instantly convincing when it includes head tracking the way AirPods do
It's also why, even though your ability to hear positional sound is very convincing to the brain, it cannot be relied on without vision backing it up and why the game of "Marco Polo" is surprisingly difficult
Username_Taken_65 t1_iy5qoo0 wrote
Directional hearing is not only about the volume differential; it actually has more to do with the slight delay between the ears caused by the finite speed of sound.
[deleted] t1_iy7u956 wrote
[deleted]
[deleted] t1_iy7uyfh wrote
[deleted]
Username_Taken_65 t1_iy82owz wrote
Yeah, because if something is to your left it will take slightly longer for it to reach your right ear because the speed of sound isn't that fast
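The delay itself is easy to estimate: the maximum interaural time difference is roughly the ear-to-ear distance divided by the speed of sound. A minimal sketch, assuming typical values for both constants:

```python
# Rough upper bound on interaural time difference (ITD): the extra
# distance sound travels to the far ear, divided by the speed of sound.
# Both constants are assumed typical values, not measurements.
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C
HEAD_WIDTH = 0.18       # m, approximate ear-to-ear distance

max_itd_ms = HEAD_WIDTH / SPEED_OF_SOUND * 1000
print(f"max ITD: {max_itd_ms:.2f} ms")  # about half a millisecond
```

Half a millisecond is tiny, but the auditory system resolves interaural differences far finer than that, which is why the cue works at all.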
ScheduleExpress t1_iy85ybl wrote
How are 2 people able to agree where a sound comes from?
Boba_Fett_boii t1_iy5tas2 wrote
That's interesting
gooftrupe t1_iy5ugmi wrote
Can it not also perceive frequency as well as change in sound pressure? I’m not sure that’s really one dimensional.
Taraxian t1_iy5uz36 wrote
Frequency IS an increase in sound pressure, it's literally a measure of how fast sound pressure changes back and forth (a vibration)
Taraxian t1_iy5w05v wrote
Like, frequency is how fast the magnitude of sound pressure changes, amplitude is how far up it goes before it goes back down, but both those numbers are just derived from one number that's going up and down over time
Failing to understand this is where a lot of audiophile woo sneaks in, like this is why "high-resolution" sound files just means files that can record higher frequency sounds, these two concepts are the same thing
This is the principle behind how a DSD file works and why it has a "bitrate of 1" -- at any given timestamp there's just a 1 or 0 telling you whether the magnitude is currently increasing or decreasing (as opposed to PCM, which directly encodes the 2D image of the waveform we look at: there's a 16-, 32- or 64-bit number telling you what the sound pressure is at any given timestamp)
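The "just a 1 or 0 for up or down" idea can be sketched as plain delta modulation. Real DSD actually uses sigma-delta modulation with noise shaping, so this is only a toy illustration of the one-bit principle, with an arbitrary step size and test tone:

```python
import math

def delta_encode(samples, step=0.05):
    """Toy 1-bit encoder: each output bit says 'step up' (1) or 'step down' (0)."""
    bits, level = [], 0.0
    for s in samples:
        bit = 1 if s > level else 0
        bits.append(bit)
        level += step if bit else -step
    return bits

def delta_decode(bits, step=0.05):
    """Rebuild the staircase approximation from the bit stream."""
    out, level = [], 0.0
    for bit in bits:
        level += step if bit else -step
        out.append(level)
    return out

# A heavily oversampled sine wave -- 1-bit formats depend on very high
# sample rates so the staircase can keep up with the signal's slope
sine = [math.sin(2 * math.pi * n / 200) for n in range(1000)]
approx = delta_decode(delta_encode(sine))
worst_error = max(abs(a - b) for a, b in zip(sine, approx))
print(worst_error)  # stays small: the staircase tracks the sine closely
```

Note how "silence" here is not the absence of bits but a rapid 1/0/1/0 alternation around the signal level, which is exactly the high-frequency idle tone the commenter below guesses at.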
THEOTHERONE9991 t1_iy6fmgt wrote
I don't know what DSD is, but a bitrate of 1 per ear I assume? Otherwise it would have to be mono, I suppose. Wait, actually even with that I don't see how a decent result could be produced... I mean, maybe. Sorry, this is interesting to me, I'd like to know more. This would also require very high sample rates so it can do really weird stepping up and down to reproduce frequency? If it can only go up and down there would be some singular ideal frequency and amplitude (still subject to sample rate loss), and anything else, as it gets farther from that, causes more and more loss... also it wouldn't be able to produce silence... But I suppose it could produce a very high-pitched sound that's above the drivers / human hearing instead. Okay... I want to read more about this now haha, I'm probably overlooking something, but I can't imagine how a bitrate of 1 could work.
NahbImGood t1_iy87zhy wrote
*bit depth of 1, bitrate of 1 would be 1 bit per channel per second, which wouldn’t sound too good
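The distinction is easy to check with arithmetic: bitrate is sample rate times bit depth times channel count, so a 1-bit format still reaches a high bitrate by running the sample rate very high:

```python
# Bitrate = sample rate x bit depth x channels, so a "bit depth of 1"
# can still mean a high bitrate when the sample rate is enormous.
def bitrate_bps(sample_rate_hz, bit_depth, channels):
    return sample_rate_hz * bit_depth * channels

cd = bitrate_bps(44_100, 16, 2)     # 16-bit stereo PCM (CD audio)
dsd = bitrate_bps(2_822_400, 1, 2)  # DSD64: 1 bit at 64 x 44.1 kHz
print(cd, dsd)  # 1411200 5644800 -- DSD64's bitrate is 4x CD's
```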
gooftrupe t1_iy5vfyz wrote
Yes of course but the sound pressure + time is two dimensional
Taraxian t1_iy5wdfo wrote
Yeah but all our senses include time
gooftrupe t1_iy5z296 wrote
Right and therefore those senses are two dimensional
13Zero t1_iy6aujl wrote
It’s one dimensional. One input (time) and one output (pressure). When you add stereo, it becomes two dimensional because there are two outputs (right pressure and left pressure).
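That framing (one input, one or two outputs) can be made concrete. A minimal sketch, with an arbitrary 440 Hz tone and a simple linear pan law assumed purely for illustration:

```python
import math

FS = 48_000  # sample rate (Hz); arbitrary choice for the sketch

# Mono: one pressure value per instant -- a function of time alone
mono = [math.sin(2 * math.pi * 440 * n / FS) for n in range(FS)]

# Stereo: each instant carries TWO pressures (left, right). Panning just
# weights the one waveform differently per channel (linear pan law
# assumed here for illustration only).
pan = 0.25  # 0 = hard left, 1 = hard right
stereo = [((1 - pan) * s, pan * s) for s in mono]

print(len(mono), len(stereo[0]))  # one number vs. a (left, right) pair per sample
```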
petalmasher t1_iy8shj1 wrote
Frequency and amplitude can be graphed with one line, can't it? http://www.sengpielaudio.com/calculator-wavegraphs.htm
gooftrupe t1_iyb3z8m wrote
A one-dimensional line would never change direction or oscillate because it has no other dimensions besides length. An x-y plot has 2 dimensions: x and y. Similarly, a sound wave has two dimensions: amplitude (sound pressure) and time.
petalmasher t1_iyf44ym wrote
Why are you talking about sound in terms of spatial dimensions?
gooftrupe t1_iyf4hvb wrote
My comments were in reply to a parent comment regarding the ear and hearing, which are spatial
[deleted] t1_iyb0r8r wrote
[deleted]
gooftrupe t1_iyb16u1 wrote
How is frequency an increase in sound pressure? I can alter frequency while maintaining constant sound pressure, as well as vice versa. Frequency is independent of sound pressure. Again two dimensional
gooftrupe t1_iyb3pvk wrote
I think you might be conflating sound pressure (amplitude) and wavelength? Frequency is kind of like the speed of a wave, yes, but it's not a product of the sound pressure at all. They're independent. If you haven't heard of Sengpiel Audio, I highly recommend it. It explains some of these things pretty well. I use the site for looking up acoustic calculations a lot.
alesimula97 t1_iy68xw0 wrote
If that were true, wouldn't I be unable to know where a sound is coming from? (Except for left and right)
Taraxian t1_iy69ma5 wrote
>The other two dimensions of "3D audio", front/back and up/down, are illusory information your brain calculates via deduction, based on your personal HRTF -- clever little hacks based on stuff like the shell shape of your outer ear causing subtle changes in high treble frequencies depending on exactly where a sound is coming from (treble sounds coming from behind you get blocked while ones coming from in front get amplified) and making little judgments based on subtle 3D movements and rotations of your head
alesimula97 t1_iy6al5u wrote
So how does that matter? Whether the dimensionality of sound is measured or "virtually" calculated, at the end of the day it is something we perceive (pretty accurately, I'd say)
So how would audio equipment taking advantage of this be "fluff"?
After all, what matters is that we can discern between two sound waves coming from different directions
Emmerson_Biggons t1_iy7lh98 wrote
You seem to have missed the point
alesimula97 t1_iy7lkj0 wrote
Then please enlighten me
Emmerson_Biggons t1_iy7lu7o wrote
The point was that a lot of audiophile mumbo jumbo can effectively be ignored as fantasy or just absurd nonsense. Which is true: audio is a very complicated mixture of tech for a very basic form of hearing that is easily tricked.
alesimula97 t1_iy7nz4w wrote
That is the exact point I was arguing against, maybe you should read my comment again
Our hearing is not one-dimensional. Whether the dimension of sound is measured exactly by our ears or interpreted by our brain does not matter; what matters is that we DO perceive 3-dimensional sound, and that makes audio equipment that plays sound from different directions produce a noticeable effect
Now, you say that due to its "interpreted" nature it is easily tricked, but since it varies from person to person due to our personal HRTF, you cannot trick it both "accurately" and universally; it would require personalised tuning to achieve a result identical to sound actually coming from a specific direction
So wouldn't that make a "smart" headset that can adapt to its user or allow them to calibrate 3D sound, and a headset that actually plays 3D sound, both valid options?
And wouldn't the latter be the most accessible option, since it works ootb?
Emmerson_Biggons t1_iy89iwg wrote
Our hearing is objectively mono in each ear individually; one dimension. Our ability to tell direction is an evolutionary trait built on categorizing the frequencies and the timing of the sound between the ears. Our ears are incapable of discerning direction without our brain telling itself what specific sounds in a specific order mean. Our brain is easily tricked by simply recreating those specific frequencies or adjusting when/if each ear hears a sound.
There isn't a substantial enough difference between people's ability to discern direction to need significant personalized changes to achieve a given 3D effect. Hardly anyone's hearing is truly accurate to direction, especially up and down, anyway.
All that aside, a "smart headset" is a really interesting way for someone to label audio processing. Adaptable audio already exists and is not practical for headphones. As for actual physically 3D sound, I'm pretty sure that exists, and they're called "speakers", specifically surround sound setups. For headphones, binaural audio mixing is vastly more practical and highly effective. Even to the point that people, mostly audiophiles, make up nonsense terms and attribute weird capabilities to headphones.
If your audio mastering is shit and you didn't do it right, it doesn't matter how special your headphones are; it won't sound good. But if you do it right, then you can get a convincing 3D effect out of shitty decade-old $5 earbuds you found in the back of your closet.
MathiasAudio t1_iy8jnmp wrote
> A lot of audiophile woo can be dismissed by fully internalizing that human hearing is a purely "one-dimensional" sense
I don't really think that dismisses anything tbh. Also, saying it is one-dimensional is just factually incorrect. There's a reason we represent audio waveforms on a two-dimensional graph; our perception of sound is dependent on time.
liosan53 t1_iy7y6oh wrote
So there’s no perceivable benefit to having a 2 or 3 way headphone?
MathiasAudio t1_iy8imo8 wrote
It depends. The reasoning behind using multiple drivers in headphones and speakers is the same, which is that different size drivers are more capable of reproducing certain frequency ranges. In theory a well made multi-driver headphone would probably sound better than even the best single-driver headphone, but the reason this hasn't been done is largely practical. In order to tune the headphone properly you'd either need separate amplification for each driver, or some sort of attenuator inside the headphone for the smaller driver(s). Given that we think of headphones as passive devices, I don't think this would be well received.
ryukin631 t1_iy48p17 wrote
There is something special when you close your eyes and try to pin-point every instrument being played and where they are in the room.
Dangerous-Ad5282 t1_iy4mojr wrote
What headphones does that?
ryukin631 t1_iy4n5ae wrote
Usually a set that has a wide Soundstage. It's pretty cool hearing that.
MorsTactica t1_iy5pjag wrote
Soundstage =/= imaging
Parvaty t1_iy5ku3l wrote
That's a misconception in my opinion. Soundstage alone doesn't do much for locating sounds; imaging capabilities matter far more. My HD560s are far better at it than the X2HR I had, even though the X2HR has a massive soundstage.
klogg4 t1_iy4u3fd wrote
Most headphones do when you use cross-feed plugin. The ones with more spacious sound do it better of course.
KiyPhi t1_iy5v138 wrote
That indicates a good mix, though a good headphone can play that mix more accurately than a non good one.
imahawki t1_iy6563x wrote
How can you hear multiple drivers at a time with just one eardrum? Checkmate!
SupOrSalad OP t1_iy670na wrote
Asking the real questions
ritzk9 t1_iy7dhj6 wrote
There are thousands of ear hairs that vibrate at different frequencies in response to the sound received, and these vibrations signal the nerve cells of that hair to send information to the brain. We need an individual driver for each ear hair for true hi-fi sound
keinstresskochen t1_iy4y5hs wrote
angry headphone noises
kazuviking t1_iy44gjw wrote
Well it kinda does.
FrenchieSmalls t1_iy69vk4 wrote
It absolutely does. I don't understand what this meme is trying to say.
SupOrSalad OP t1_iy6brcj wrote
It's just a play on how transducers make sound. The speaker itself is only moving back and forth to generate a single waveform. It's just that the waveform is a combination of many frequencies, which our ears and brain are able to decipher as individual sounds
FamousBear t1_iy7fsd0 wrote
Fourier series in a nutshell
FrenchieSmalls t1_iy91xwz wrote
But the Fourier series is "multiple sound waves at the same time with just one driver". That's precisely my point!
FrenchieSmalls t1_iy6cv4y wrote
If you consider digital reconstruction, though, that single wave is literally the combination of many simple sine waves.
SupOrSalad OP t1_iy6dnq6 wrote
Yeah. It's just a meme based on semantics, that's all. It's producing multiple sounds, just in one waveform. Nothing to take seriously
FrenchieSmalls t1_iy6dtva wrote
I mean, I guess. It just doesn't really make much sense how it's worded.
alpacasb4llamas t1_iy6kwms wrote
That's the best part, it doesn't
Marathalayan t1_iy88nxp wrote
You almost understood it, but I think if you look deeply at superposition once again you would understand how this meme is meaningful and deeply true. I had a hard time understanding this for a long time, and one day it clicked!!
Beor_TheOld t1_iyco6tg wrote
The transducer only has one degree of freedom, but the diaphragm has infinitely many 3D (axial, radial, displacement) vibrational modes, arguably 4D since different input frequencies generate different vibrational modes. There are some really pretty gifs at the end of this article https://en.wikipedia.org/wiki/Vibrations_of_a_circular_membrane
WikiSummarizerBot t1_iyco8kz wrote
Vibrations of a circular membrane
>A two-dimensional elastic membrane under tension can support transverse vibrations. The properties of an idealized drumhead can be modeled by the vibrations of a circular membrane of uniform thickness, attached to a rigid frame. Due to the phenomenon of resonance, at certain vibration frequencies, its resonant frequencies, the membrane can store vibrational energy, the surface moving in a characteristic pattern of standing waves. This is called a normal mode.
Mr-Zero-Fucks t1_iy5vo57 wrote
Everything is in the mix, and that includes all the trickery used to emulate an actual acoustic dimension. A single speaker can only reproduce a single sound; what matters is how sophisticated that single sound is.
JustAu69 t1_iy4rdde wrote
Too factual to be a meme
iopjsdqe t1_iy46v3n wrote
Dark magic
Ashamed_Assistant477 t1_iy4u5fr wrote
But speakers do. What is this?
neon_overload t1_iy6o8b6 wrote
Wtf, how does the author of this think that audio works?
SupOrSalad OP t1_iy6p8wq wrote
It's just a play on sound waves and wave forms. Music has multiple frequencies, but they're all combined together into a single waveform that the driver follows to create a pressure wave. Then your ear hears that pressure wave and extracts the individual frequencies again
Formal-Cut-334 t1_iy654p4 wrote
Pork chop sandwiches!
AKlBA t1_iy79lil wrote
oh shit!
Lead_191 t1_iy7uce0 wrote
Normal mode of oscillation enters the chat
simurg3 t1_iy89e3q wrote
I don't quite understand the push back against the multiple drivers. Here are my observations:
-
I have never heard a sound reproduction system that comes even close to the original sound of an orchestra. During reproduction, the timbre of the instruments and the fidelity of multiple instruments producing sound together are lost. The more complex the source of the sounds, the worse the divergence gets.
-
A single driver cannot produce all frequencies at the same magnitude. That's why we have subwoofers. Low-frequency sounds from a smaller driver cannot have the same magnitude as from a larger driver, just like a large drum sounds louder than a small drum at the same frequency.
-
A driver cannot immediately change its movement, due to inertia. There are technologies like planar and electrostatic drivers that reduce the challenge, but each solution comes with its compromises. This is also why we have equalizers: to compensate for the physical limitations of the driver.
Now, going from a single driver to multiple drivers introduces more noise as we add more processing, yet it also adds more flexibility to reproduce sound by using a specialized driver for a given target frequency range.
We also need to understand the source of the sound. For orchestral sound like classical music or jazz, the challenge is higher, as the goal is to reproduce the original sound. For pop and rock, the sound is already engineered: there is no original sound, only multiple tracks mixed by an engineer for optimal listening pleasure on a target configuration. For the latter, two drivers are sufficient if the sound engineer targeted headphone-based listening.
In a world with perfect microphones and perfect drivers, yes, we would only need two drivers, but they are not perfect. The music is also not always mixed and prepared for two drivers.
therealrydan t1_iy9f5ew wrote
> For orchestral sound like classic music, jazz music, the challenge is higher as the goal to reproduce original sound
I think this is incorrect. It's not any more difficult to correctly reproduce a recording of an orchestra than of pop/rock or even entirely DSP-generated sound. The challenge is probably rather in recording the music in the first place.
If you recorded an orchestra with a high-quality binaural rig, ran some 3D scanning + DSP to correct for the shape of your head and ears, and listened through high-quality headphones, you would probably not be able to tell the difference (at least not if we assume a truly blind comparison, that is, you couldn't see whether it was the orchestra playing, and you wouldn't feel whether you had headphones on...).
You only have two ears after all, and they just react to sound pressure level changes in a very small volume of air.
simurg3 t1_iyaxlxz wrote
I agree that the bigger challenge with reproduction could be at the recording stage. We must still recognize the challenges the speakers face in perfectly reproducing the sound.
I don't agree with orchestral music and rock/pop/digital music having the same challenge.
I was actually going to ask what prevents recording companies from producing binaural records. It cannot be technical, as production costs for such music cannot be too high. Music consumption is more and more through headphones or IEMs, especially for the critical audience. Yet there are almost no records coming with binaural recordings. What am I missing?
therealrydan t1_iyc1sbv wrote
Binaural doesn’t translate well to speakers. (And it’s slightly flawed in that it still doesn’t model your specific head/ears, so without significant wizardry it won’t be 100%.)
But yes, as more and more music is listened to through headphones, I’m kind of wondering this as well…
There is a lot going on with stuff like Atmos though, and virtual studio simulations for mixing in headphones, so we might see interesting things in the future.
Re reproduction of different kinds of music, I think the challenge is exactly the same. The challenge is to reproduce a signal with flat frequency response, correct transient and phase response, and low distortion. As long as you do that, you reproduce every kind of music well. There may be tradeoffs that are more important in some kinds of music. Soundstage/positioning is perhaps more important than sub bass in jazz/chamber music but (perhaps) not in EDM, for instance. But I still think the challenge is the same. Have a good enough system and everything will sound good on it. Or be correctly reproduced at least, which may not be what sounds the best, but that’s a whole other can of worms.
That can of worms may also be part of it, because we might for the most part not actually want the recorded music to sound like the real thing; we want larger than life. I have an anecdote from a former colleague who's worked a lot on recording classical music for radio and TV. He had this story where he worked on an audiophile symphonic recording. They used a small set of really high-quality microphones, set up as a Blumlein pair as the main source, plus some more microphones to capture and be able to adjust tonal balance, width and room in the final result. Apart from slight corrective EQ, they weren't supposed to process the sound at all; it should be all natural. Compression and artificial reverb were strictly prohibited. They did several different mixes with different mic balances, but the producer wasn't satisfied and still thought it sounded unnatural. So, without telling anyone, they sent the mix through a Manley VariMU compressor, just compressing a few dB at most. That's the version that got released...
simurg3 t1_iydgqp5 wrote
I was hoping that recording companies sell both versions of the same album. One for speakers and one for headphones.
Your anecdote resonates with my understanding. The music we listen to is engineered to sound good and optimal. For pop/rock music, engineering is part of the creative process. As you mentioned, classical music recordings are supposed to adhere to the source, but they also get edited. Thanks for sharing
therealrydan t1_iy985pm wrote
It’s a question of amplitude. In theory, a single driver is optimal, since it’s a point source. Creating a driver capable of loudspeaker sound pressure levels that can reproduce the entire human hearing frequency range with high resolution and low distortion is tricky (almost impossible). Doing the same at headphone audio levels is much, much more doable, probably even easier than overcoming the problems with multiple drivers at extremely short listening distances. With multiple drivers you also have crossovers, with their problems and added distortion and phase issues.
Also, you have technologies such as planar magnetics or electrostatics, which are very difficult to use well at loudspeaker volumes but extremely viable at headphone levels. (Electrostatic designs are also an example of a seriously high-performing single-driver loudspeaker, albeit one that requires BIG speakers if you want full-range bass reproduction at somewhat respectable levels.)
In the loudspeaker world, some companies are jumping through a substantial amount of hoops just to get speakers to behave like a single point source even though constrained by the need to use multiple drivers, like Genelec TheOnes (three-way coaxial point-source design with substantial audio and DSP trickery...).
Most (all?) high-end headphones are single-driver designs. If there were substantial advantages to using multiple drivers instead, I'm sure many high-end headphone designs would, but they don't...
TillFragrant t1_iy9yzgq wrote
Sound can sometimes just be a weirdo
Prophetarier t1_iy4tb3w wrote
It absolutely does produce multiple sound waves simultaneously
Phoenix-Anima23 t1_iy5adff wrote
Not really, it produces the sum of various sound waves added up together. It's just one sound wave
WoodenSporkAudio t1_iy5e08x wrote
So when the cone is moving at 80 Hz while also vibrating at 2 kHz on and off, it's not making more than one sound? The source is two sounds, and the driver plays both at once as a single motion... and that's to be considered only one sound? Semantics is weird.
tummo t1_iy5fatz wrote
I think to take this any further you'd need a technical definition of what "a sound" is
farmyardcat t1_iy6i5an wrote
The mark of any quality meme is that it leads to disagreement about the definitions of basic nouns that compose reality
Beor_TheOld t1_iycoqx6 wrote
You can’t describe reality in words because it isn’t words, it’s rings bell
Marathalayan t1_iy8a1br wrote
How can one thing vibrate 2000 times a second and 80 times a second at the same time? The answer is: it does not.
Try superposing two waves of 80 Hz and 2000 Hz and you get a pattern of vibration that doesn't look like 80 or 2000 but is the resultant of both. In some parts the amplitudes add (at time slices where both components are in the positive part of their cycle). Read up on superposition to understand exactly how this pattern is formed.
Then this pattern of vibration reaches your ear, where different hair cells deflect differently, as they have different thresholds, to produce an auditory signal.
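The 80 Hz + 2000 Hz case is easy to verify numerically: superpose the two sines into one single-valued waveform, then correlate it against candidate frequencies to see that both components are recoverable. A minimal sketch (the sample rate and duration are arbitrary choices):

```python
import math, cmath

FS = 8_000  # sample rate (Hz); one second of samples
N = FS

# One single-valued waveform that is the superposition of two tones
x = [math.sin(2 * math.pi * 80 * n / FS) + math.sin(2 * math.pi * 2000 * n / FS)
     for n in range(N)]

def tone_strength(signal, freq, fs):
    """Magnitude of one DFT bin: correlate the signal with a tone at `freq`."""
    return abs(sum(s * cmath.exp(-2j * math.pi * freq * n / fs)
                   for n, s in enumerate(signal))) / len(signal)

print(round(tone_strength(x, 80, FS), 2))    # 0.5 -> the 80 Hz tone is in there
print(round(tone_strength(x, 2000, FS), 2))  # 0.5 -> so is the 2 kHz tone
print(round(tone_strength(x, 440, FS), 2))   # 0.0 -> nothing at 440 Hz
```

So at each instant the driver (and the eardrum) holds a single pressure value, yet the frequency content of both tones survives intact in the combined waveform.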
WoodenSporkAudio t1_iybh2ts wrote
It vibrates quickly while also oscillating slower: back and forth 80 times per second while making smaller, quicker back-and-forth motions riding the bigger, slower wave. Of course it is a mashup of the two, but it makes two tones, even if it is one combined output at the transducer. It's all in the semantics of how you want to approach and define these things, really.
SupOrSalad OP t1_iy5sabv wrote
Disregard my original comment, I misread the comment above, and mine is a poor and incorrect explanation: >!Yeah, it all combines together through the Fourier transform. The movement of the driver is the sum of its frequencies, and even if the driver seems to be moving back and forth in a simple pattern, it is doing that as a result of the different frequencies all adding together. Your ear is able to take that sound and, through a reverse of the same Fourier transform, separate and hear each individual frequency!<
20EYES t1_iy5vip2 wrote
That's not exactly what a Fourier transform means but you are on the right track for sure.
SupOrSalad OP t1_iy5ykpb wrote
Thanks. I know it's a math equation that I really don't understand. Hoping to learn as much as possible, but yeah my understanding is definitely limited
20EYES t1_iy62k8y wrote
If you are curious, I highly recommend these 2 videos. The first one gives a high-level explanation and the second one dives in a bit deeper. You absolutely do not need to fully understand the math to understand the concept in general btw.
SupOrSalad OP t1_iy62xv9 wrote
Will definitely when I get off work. Thanks
SupOrSalad OP t1_iy6z48m wrote
Watched them now. Really informative and actually enjoyable to watch. Thanks for sharing that!
Marathalayan t1_iy8a6m5 wrote
Exactly this
nutyo t1_iy5j0pl wrote
It doesn't matter how many upvotes this gets, it is wrong. A single driver absolutely does produce multiple frequencies, or sound waves, at once. It is just that at any single point in time the sound pressure it produces is a summation of those frequencies.
EDIT. My first sentence was too harsh.
Phoenix-Anima23 t1_iy5jvy8 wrote
How I see it is like in an instrument. Do you think of the sound it produces as 1 sound or multiple sounds that ultimately add up to what you hear?
nutyo t1_iy5lba3 wrote
You could absolutely say that the summation of all the frequencies a driver produces is also a soundwave. It would just be irregular and ever changing.
And a driver is far more capable than a single instrument, seeing as it can reproduce an entire orchestra, choir and band's worth of sounds all at once. The sound it produces is a summation of all those sounds, and our experience of listening to it is definitely better described as multiple sounds, since your brain can easily pick apart a violin and a drum playing at the same time.
Phoenix-Anima23 t1_iy5ljo8 wrote
Yes, that was my argument from the beginning
20EYES t1_iy5vp43 wrote
This is really semantics. IMO it produces multiple frequencies but not multiple waveforms.
Turtvaiz t1_iy5tmzd wrote
So are two speakers playing different sounds at once. It gets summed up physically just the same
WoodenSporkAudio t1_iy5el29 wrote
Both can be correct depending on how you want to define the parameters of the semantics. The meme isn't funny to me, but it makes sense; it can also be considered as you said.
[deleted] t1_iy5lupd wrote
[removed]
headphones-ModTeam t1_iy6bact wrote
This comment has been removed. Please note the following rule:
>Rule 1: Be most excellent towards your fellow redditors > > And by "be most excellent" we mean no personal attacks, threats, bullying, trolling, baiting, flaming, hate speech, racism, sexism, or other behavior that makes humanity look like scum.
But they're wrong!
Disagreeing with someone is fine, being toxic is not.
Don't impede reasonable discussion or vilify based on what you or the other person believes or knows to be true.
Look at what they said!
Responding to a person breaking Rule 1 does not grant a pass to break the same rule. Everyone is responsible for their own participation on r/headphones.
Violations may result in a temporary or permanent ban.
20EYES t1_iy5wfd4 wrote
It's really an argument about semantics more than anything but to me a single sound source can only make one sound at once. Two speakers are making two sounds even if each is playing the same frequency.
NFTOxaile t1_iy4yrst wrote
Any sufficiently advanced technology is indistinguishable from magic.
...Except that this technology isn't very advanced and most people aside from op seem to understand it.
RB181 t1_iy46re1 wrote
The best part: you don't have to, because your ears would perceive it all the same.