Binaural Audio

👷 Work-in-progress. Information here is subject to change.
By  3DJ 

3D audio in a nutshell

Humans can pinpoint the location of sounds in any direction thanks to the brain's ability to interpret subtle cues (like volume, frequency and delay) that are perceived differently by the left and right ear. This sound modification (called HRTF, head-related transfer function) is the result of sound waves bouncing differently off the ears and head into each eardrum, depending on the listener's physical shape and the sound's location. This ability is called binaural localization, because it requires two ears, and it can be simulated on headphones in up to 3 dimensions:
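The delay cue in particular can be approximated with a classic textbook model. A minimal sketch in Python (the spherical-head formula and the 8.75 cm head radius are standard textbook values, not something measured for this guide):

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Approximate interaural time difference (seconds) for a source at a
    given azimuth (0 deg = straight ahead, 90 deg = directly to one side),
    using Woodworth's spherical-head model: ITD = r/c * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return head_radius_m / speed_of_sound * (theta + math.sin(theta))

# A source directly to one side arrives roughly 0.66 ms earlier at the near ear
side_delay_ms = woodworth_itd(90) * 1000
```

The brain resolves delays this small (well under a millisecond) into direction, which is why latency between the two ear signals matters so much for localization.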

1D




2D




3D




Software

👂 Binaural profile (HRTF)

HRTF is unique to every person (because of different ear shapes, a sound coming from a certain angle may seem to come from a different angle to someone else), but it can be personalized for accuracy by using algorithms or by recording how sound is transformed when it reaches the ears of dummy or human subjects. However, this requires expensive equipment, so most binaural software includes only one generic HRTF profile (HRIR) for an "average head", while some can load custom HRTFs that offer better sound quality and accuracy with profiles that match people's ears more closely.
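Under the hood, applying an HRIR is a convolution of the source signal with one impulse response per ear. A hedged sketch of the idea (the 4-sample "HRIRs" below are invented toys; real ones are hundreds of samples long and measured per direction):

```python
import numpy as np

def binauralize(mono, hrir_left, hrir_right):
    """Render a mono signal at the direction encoded by an HRIR pair by
    convolving it with the left- and right-ear impulse responses.
    Returns a (samples, 2) stereo array."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=1)

# Toy HRIRs: the right ear hears the impulse later and quieter,
# roughly what a source on the listener's left would produce.
hrir_l = np.array([1.0, 0.0, 0.0, 0.0])
hrir_r = np.array([0.0, 0.0, 0.6, 0.0])
out = binauralize(np.array([1.0, 0.5]), hrir_l, hrir_r)
```

Swapping in a personalized HRIR set changes nothing in this pipeline; only the impulse-response data differs, which is why software that supports loading custom profiles can improve accuracy without any other changes.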

For best results, HRTFs ideally should have:
🌟 High accuracy (pinpoints the exact angle of the sound source; different for everyone)
Clear externalization (a sense of depth between your head and the sound location)
Minimal coloration (shouldn't sound bassy, muffled or tinny)
No reverberation (echo should come from the content's environment, not the HRTF)
Open compatibility (should be available in multiple, standard formats)
Here you can listen to and compare HRTFs:





⭕ Virtual Surround (2D)

Software that converts Channel-based Audio (like 5.1 and 7.1 Surround Sound) into binaural audio using HRTF, which allows hearing sounds coming from 360 degrees at ear level (2D) using just a pair of headphones or speakers.
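Conceptually, a virtual surround renderer convolves each speaker channel with the HRIR pair for that speaker's fixed position and sums the results into two ear signals. A simplified sketch, assuming equal-length HRIRs and a plain dict layout (both assumptions are for illustration, not any particular product's API):

```python
import numpy as np

def virtual_surround(channels, hrirs):
    """Fold a multichannel (e.g. 7.1) mix down to binaural stereo.

    channels: dict of channel name -> 1-D signal
    hrirs:    dict of channel name -> (hrir_left, hrir_right), equal lengths
    Each speaker channel is convolved with the HRIR pair for that speaker's
    fixed position, then everything is summed per ear."""
    n = max(len(sig) + len(hrirs[name][0]) - 1 for name, sig in channels.items())
    out = np.zeros((n, 2))
    for name, sig in channels.items():
        hl, hr = hrirs[name]
        out[:len(sig) + len(hl) - 1, 0] += np.convolve(sig, hl)
        out[:len(sig) + len(hr) - 1, 1] += np.convolve(sig, hr)
    return out
```

Because the virtual speaker positions are fixed at ear level, the result is inherently 2D: sounds can come from any angle around you, but never above or below.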





🔴 Spatial Audio (3D)

Software that converts Ambisonics / Object-based Audio (sources like video games that allow applying HRTF to sounds individually, using 3D coordinates) into binaural audio, which allows hearing sounds all around, including behind and overhead, using headphones or speakers.
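The core extra step over virtual surround is turning each object's 3D coordinates into a direction used to pick an HRTF filter. A sketch under an assumed right-handed convention (y up, z forward; nothing here mirrors any specific engine's API):

```python
import math

def source_direction(listener_pos, listener_yaw_deg, source_pos):
    """Convert a 3-D source position into the (azimuth, elevation) pair
    used to select an HRTF filter. Assumed convention: x right, y up,
    z forward; azimuth 0 = straight ahead, positive = to the right."""
    dx = source_pos[0] - listener_pos[0]
    dy = source_pos[1] - listener_pos[1]
    dz = source_pos[2] - listener_pos[2]
    azimuth = math.degrees(math.atan2(dx, dz)) - listener_yaw_deg
    azimuth = (azimuth + 180) % 360 - 180                  # wrap to [-180, 180)
    elevation = math.degrees(math.atan2(dy, math.hypot(dx, dz)))
    return azimuth, elevation
```

Because elevation is part of the result, an object can be rendered overhead or below — the dimension that channel-based virtual surround cannot represent.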



🔧 Binaural Audio Manager (WIP)

Tool that installs Spatial Audio and Virtual Surround by simply dragging and dropping games/programs.





Hardware

🎧 Headphones

Binaural audio can work on any stereo headphones, but for best results, we recommend:
⭐ Neutral signature (flat-ish frequency response/equalization)
Accurate imaging (balanced and accurate stereo localization)
Minimal soundstage (avoid virtual/passive soundstage)
Low latency (avoid wired/wireless delay)
Here's a list of some known headphones that adhere to all of the above criteria:





🔊 Speakers

While binaural audio is more common and generally better on headphones, speakers can also be used thanks to crosstalk cancellation (a filter that minimizes the audio meant for one ear "leaking" into the other, which is required for the effect to work properly).
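In idealized form, crosstalk cancellation inverts the 2x2 matrix of acoustic paths from the two speakers to the two ears. A per-frequency sketch for a symmetric setup (real systems also need regularization, delay modeling and often head tracking, all ignored in this toy):

```python
import numpy as np

def crosstalk_canceller(h_same, h_cross):
    """Build the inverse-filter gains for a symmetric stereo speaker pair.

    h_same:  frequency response from a speaker to the same-side ear
    h_cross: frequency response from a speaker to the opposite ear (the leak)
    Returns per-frequency gains (c_same, c_cross): the left speaker plays
    c_same*L + c_cross*R (mirrored on the right), so each ear receives
    only its intended channel. This is the inverse of [[s, c], [c, s]]."""
    det = h_same**2 - h_cross**2
    return h_same / det, -h_cross / det
```

The cancellation only holds for one listening position (the "sweet spot"), which is why speaker-based binaural audio is more finicky than headphones.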




📻 Sound Card

Chances are your PC has an integrated  sound card  (hardware that outputs audio) that's good enough; but for best results, make sure it's capable of:





Content

Configuration depends on the content (games, applications, media, etc.). Here you can search for the best known setup guides:

🎮 Video games

Here's a list of demos sorted by HRTF rating and user review score:





💻  Applications 





🎥 Video

Usually, DVD and Blu-ray movies and some streaming services offer 5.1 and 7.1 Surround Sound, both of which can be used with 🎧 Virtual Surround.
Additionally, some Ultra HD Blu-ray discs and streaming services offer Object-based Audio for spatial audio when using compatible software or hardware.




🎼 Music

There are rare 5.1 Surround Sound albums released on Super Audio CD, DVD-Audio and Blu-ray discs that can be used with 🎧 Virtual Surround on compatible players, as well as 🔊 Virtual Surround Sensaura mixes. Nowadays, though, it's more common to find music mastered in Dolby Atmos, binaural mixes or "8D audio".





🎤 Other

Binaural Recordings

These are made using 2  special microphones  that capture audio in a way that's similar to how human ears would.



 Frequently Asked Questions (FAQ) 

Common questions

 A) What are the best games with binaural audio? 

Lists sorted by HRTF rating and user review score:


 B) What is the best HRTF/HRIR? 

While HRTF is subjective (accuracy of the expected sound source angles varies from person to person), some are  overall better  than others.


 C) Can I use Virtual Surround on regular stereo content? 

Yes, but it won't be nearly as good as using surround content.
If you just want to simulate a pair of stereo speakers in front of you, then any Virtual Surround software can do just that. Alternatively, there's software like HeSuVi capable of  Crossfeed  and/or upmixing stereo (2.0) into 7.1 virtual surround with varying results, depending on the content and HRTF used.
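Crossfeed itself is a simple idea: feed a delayed, attenuated copy of each channel into the opposite ear, the way both ears naturally hear both speakers in a room. A bare-bones sketch (real crossfeed filters such as HeSuVi's also low-pass the crossfed signal, omitted here; the gain and delay values are illustrative):

```python
import numpy as np

def crossfeed(left, right, gain=0.3, delay_samples=12):
    """Mix a delayed, attenuated copy of each channel into the other.
    delay_samples ~ 12 at 44.1 kHz roughly matches the ~0.27 ms it takes
    a speaker's sound to wrap around the head to the far ear."""
    pad = np.zeros(delay_samples)
    l_delayed = np.concatenate([pad, left])
    r_delayed = np.concatenate([pad, right])
    out_l = np.concatenate([left, pad]) + gain * r_delayed
    out_r = np.concatenate([right, pad]) + gain * l_delayed
    return out_l, out_r
```

This reduces the "sound inside your head" fatigue of hard-panned stereo on headphones, but unlike HRTF processing it adds no front/back or elevation cues.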


 D) Can I use multiple headphone binaural effects at once? 

No, you shouldn't.
Stacking multiple headphone HRTFs (like Virtual Surround on top of Spatial Audio) will alter audio frequencies, muddy pinpoint accuracy, and possibly add unnecessary reverb, so if you're listening to audio that's already processed with HRTF, turn off any other audio effects.


 E) How much CPU load does HRTF use? 

Depends, but in general, very little.
Surround virtualizers like HeSuVi are lightweight, and even OpenAL Soft has been  tested  to have almost no impact on performance.
However, there are methods that require high-end hardware to perform complex calculations to generate realistic environmental reverb in real-time using hardware-accelerated wavetracing, such as  NVIDIA's VRWorks Audio path-tracing , or  unoptimized implementations of Steam Audio .


 F) How much audio delay/latency does HRTF add? 

Depends, but in most cases, not too much.
HeSuVi has been  tested  to add merely a few milliseconds under optimal conditions, and there's a  WASAPI exclusive fork of OpenAL Soft  that offers ultra low latency Spatial Audio, even though regular OpenAL Soft doesn't add significant lag to begin with. On the other hand, using a virtual audio device to forward binauralized audio to the actual sound card is known to yield noticeable delay.

 G) Can I get banned for using virtual surround/spatial audio in online games? 

Depends.
While it does provide an advantage over stereo, Virtual Surround software like HeSuVi should be safe since it doesn't require modifying/adding game files and just uses the game's audio mix. Spatial Audio (3D) is safe if it's built into the game. If it requires modified DLL files, those might get blocked or even flagged by anti-cheat engines which might lead to a ban.
So rule of thumb: if you're playing online/with anticheat, use built-in spatial audio and if that's not available, use virtual surround.

 H) Is "8D/9D/xD" music any good? 

Depends on who you ask, but it's just a low-effort, gimmicky trend.
It flattens a stereo track (1D) into mono (0D) and makes it constantly spin around your head (2D). If it were at least 3D, you would hear sounds above or below you. For decent results, check out virtualized music originally mixed in surround sound, music mixed in Spatial Audio (3D) like Reality 360 / Dolby Atmos, or at least semi-decent unofficial binaural mixes, which pan individual instrument/vocal tracks in 3D using HRTF.
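The "spinning" itself is trivial to reproduce: a mono signal swept around with a plain equal-power pan, no HRTF involved. A sketch of the gimmick (the rotation speed and sample rate are arbitrary illustrative values):

```python
import math

def eight_d_pan(mono, sample_rate=44100, rotations_per_sec=0.2):
    """The "8D audio" trick in a nutshell: take a (already mono-collapsed)
    signal and sweep it left-right with an equal-power pan. A real binaural
    mix would use HRTFs per direction instead of two simple gains."""
    left, right = [], []
    for n, x in enumerate(mono):
        pan = math.sin(2 * math.pi * rotations_per_sec * n / sample_rate)  # -1..1
        theta = (pan + 1) * math.pi / 4                                    # 0..pi/2
        left.append(x * math.cos(theta))
        right.append(x * math.sin(theta))
    return left, right
```

Since only left/right gains change, the motion stays on a flat circle at ear level; nothing in the signal encodes elevation, which is why calling it "8D" is marketing rather than dimensionality.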

Expert AMA

 1) What PC games used Sensaura technology? 

All of them? Pretty much every game used DirectSound3D and so Sensaura was supported. For reverb, the API supported I3DL2 and EAX 1.0 and 2.0 (I think Creative made the EAX 3.0+ API proprietary so we couldn’t support that), which most games that implemented reverb used. For the specific extensions that we had, it’s hard to say. The API was freely available, so there was no way to really know who was actually using what features unless they explicitly said so.
- /u/fromwithin (ex-Sensaura employee)


 2) What were the mid/long term plans for Sensaura tech that never saw the light of day before being bought by Creative? 

There wasn’t anything particularly groundbreaking in development at that point. It was all evolutionary. The focus was really just to keep licensing as much as possible and get a strong foothold into console games, so most people were working on GameCODA rather than researching new stuff. There was a definite move towards motherboard audio and we were in an excellent position to completely dominate that space with our software driver, but after we were bought, few companies wanted to pay a license to their biggest audio competitor, Creative. We designed a PS3 soundchip that had all of our tech and multiple reverbs on it for one of the chip manufacturers. They pitched it to Sony, but Sony went the software route on the PS3 instead. We had some 3DS Max and Maya plugins to setup geometry for occlusion and obstruction as part of GameCODA, so we probably would have expanded those to encompass new stuff as we came up with it. Personally, I would have liked to start getting our stuff licensed into TVs and Hifi equipment. I really wanted there to be TVs and amplifiers with our cross-talk cancellation built into them so that the 3D audio could work seamlessly on speakers or headphones with a flick of a switch. From there, it’s a much easier sell to get other media to mix in 3D. However, we didn’t really have any marketing people or budget to go down that route, so it was a bit disheartening when everything started getting crappy SRS WOW added into it.
- /u/fromwithin (ex-Sensaura employee)

 3) How much of the lost technology was used in Creative's CMSS or similar technologies? 

None of it as far as I remember. After we had integrated into Creative, there was a re-evaluation of the 3D audio. Every employee was encouraged to download and run an application that played a series of blind audio clips, each of which used a different 3D technology that Creative owned. I strongly suspected that it was deliberately fixed because, surprise surprise, Creative’s new internal CMSS was voted the best. I say it was fixed because I knew perfectly well what ours should have sounded like and it was way better than Creative’s own, yet I explicitly remember that the one that was revealed as Sensaura sounded dreadful. I remember being shocked to discover how bad Creative’s 3D stuff had been up until that point. I think it only used 8 HRTFs positioned around the listener and panned between them.
- /u/fromwithin (ex-Sensaura employee)


 4) Have any of the employees considered working on an alternative from scratch after the Creative fiasco? Were they allowed to? How much did they have to give up in terms of patents and technology? 

Not that I know of. The original founders who did most of the groundwork at Sensaura when it was still an EMI research project did actually leave before the buy-out to form another company called Sonaptic. They licensed the Sensaura technology for their products and didn’t start from scratch. I’m pretty sure that they knew that they couldn’t beat what they’d already done at Sensaura. Sonaptic did other clever new stuff instead and worked a lot in the Japanese phone market. One of their things was calculating and designing manifolds for mobile phones that enhanced the fidelity of the output. Basically they would calculate the shape of the speaker port in a way that would enhance or reduce certain frequencies to give the phone a flatter frequency response. They also worked on noise cancellation and eventually got bought by Wolfson Microelectronics.
- /u/fromwithin (ex-Sensaura employee)


 5) Is it possible to convincingly emulate far sound distance to go along with the fixed 3D spherical plane (redundant, I know) limitation of most -if not all- spatial audio engines (volume/reverb aside)? Would it require layers of bigger/smaller spherical HRIR recordings or can those be extrapolated from a single HRIR set? 

What you hear as distant sound has nothing really to do with the mechanics of the head and ears. HRIRs need to be recorded in an anechoic chamber and under those conditions, there’s no concept of distance other than volume level and filtering due to air absorption, which is not perceptible in a small room. If you didn’t record in an anechoic chamber, the HRIRs would always include some effect of the environment in which they were recorded, which you could never get rid of. So recording multiple layers could only ever work to simulate one single static environment. The perception of distance is almost all to do with the environmental reflections and if distant enough, wind. I think the real problem is the scale of possible complexity of the reflections the further away a sound is. There are so many tiny cues that contribute to the perception of distance that it’s very difficult to come up with a simple way to emulate it that will cover all cases convincingly without a comprehensive realtime audio wavetracing system, and we’re a very long way off from that. An air absorption low-pass filter is really the best that we have at the moment.
- /u/fromwithin (ex-Sensaura employee)
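The air-absorption low-pass mentioned above can be sketched as a one-pole filter whose cutoff drops with distance (the cutoff curve below is invented purely for illustration; real air absorption follows measured frequency/humidity tables):

```python
import math

def distance_lowpass(signal, distance_m, sample_rate=44100):
    """Crude distance cue: a one-pole low-pass whose cutoff falls with
    distance, mimicking how air absorbs high frequencies over long paths.
    The cutoff-vs-distance curve here is an assumed toy, not measured data."""
    cutoff_hz = max(500.0, 20000.0 / (1.0 + distance_m / 10.0))
    alpha = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    out, y = [], 0.0
    for x in signal:
        y += alpha * (x - y)   # one-pole IIR low-pass step
        out.append(y)
    return out
```

As the answer notes, this filtering plus volume is about all current engines do for distance; the rich reflection cues that really convey it need full environmental simulation.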


 6) Were any open-source alternatives to proprietary spatial audio ever at risk of being taken down by Creative or any other company? 

No, I don’t think so. Creative were very Sound Blaster-focused. That brand name was their big cash cow, so the only things that they would have been interested in stopping were anything that could dilute their branding. Although they did a truly terrible thing when they killed Aureal, that was at a time when the Sound Blaster brand hadn’t really cemented itself in the same way as it had a few years later. Aureal was a genuine threat to their business model as they had a strong brand and technology that Creative couldn’t compete with. With Aureal out of the picture, the marketing of the Sound Blaster branding was far more important than the minor tech upgrades that happened every now and then, so alternative technologies were not really seen as a problem. The company’s business model was largely based on being second-best and copying what was already successful rather than being an innovator and as long as the latest Sound Blaster had enough marketable features for the user base to buy the new one, they were happy. 3D audio for them was just a means to sell soundcards and as long as alternative options didn’t dent their sales, they couldn’t have cared less.
- /u/fromwithin (ex-Sensaura employee)


 7) Could recording proprietary HRIRs (Atmos for Headphones, CMSS, etc) and distributing it for free like in  HeSuVi  put the makers at legal risk? 

I’d have to say...probably. There’s a similar precedent somewhere in the jurisdiction of the USA to do with synthesizers. I can’t find the actual reference, but I remember that there was a case that resulted in the conclusion that selling a sample library of sounds from analog synthesizers was fine, but selling a sample library of sounds from a ROMpler (a synthesizer with samples in its ROM) wasn’t. The reason is because analog sounds are generated in real-time and so can’t be copyrighted because there’s no recorded data to infringe upon. With a ROMpler, if you record its sounds, you’re recording a copy of the samples in its ROM. Those samples are recordings and so are copyrightable. You’ll usually find some licensing clause somewhere that explicitly states that you’re only allowed to use the sounds as part of a musical production, meaning that you can’t record them dry and re-sell them or give them away. The HRIRs are copyrightable data; either as the binary data that exists as part of the software or as a series of audio recordings. If you make a copy of them, either by extracting data from the source, or by recording them as an audio stream, you’re infringing. The question then becomes, “Are the owners of the HRIR data bothered enough to do something about it?”
- /u/fromwithin (ex-Sensaura employee)


 8) What kind of headphones had the best spatial sound accuracy back in the day? 

I honestly couldn’t tell you. It’s not really something that we were bothered with and I only had my own headphones to go from. Spatial accuracy is not really anything to do with the headphones specifically. It’s all about the frequency response of the headphones matched with your brain’s perception. For example, a certain headphone might have a -3dB reduction at 2KHz. This might wreck everything for one person, but make almost no difference to someone else because their brain is wired up to respond to their own pinnae, which don’t have much effect at 2KHz. The best thing to do is just try to get headphones with as flat a response as possible. - /u/fromwithin (ex-Sensaura employee)

The debate about headphones sounds like there is a problem of perspective. It's like asking the question, "My mixes always sound bad. What monitor speakers should I buy to improve them?". And the correct answer to that would be, "Fix your room". If somebody is talking about "wider soundstage" when describing headphones, then I'd suggest that they have little idea about how the ear works and how the brain perceives sound. Whether a headphone is open-back or closed-back has no bearing on whether the headphones themselves are good quality. The important thing is to have a flat frequency response. That's it. It's possible that the people who claim that open-back headphones are better are just unnerved by the isolation that closed-back headphones give you. Some people experience dizziness from being in an anechoic chamber, some may feel claustrophobic. There could be physical effects that make them feel uncomfortable in a way that they can't describe and so just come up with: "Closed-back = bad. Open-back = good". Open-back headphones just let in more of the ambient noise. If you want your audio to sound like it's in the room with you, then fine, use open-back. But if you're playing a game that's using 3D audio, I would think that you'd want full immersion. You'd want to feel that the sounds are around you in the game, not around you in the room in which you are playing, so you want to remove as much ambient noise as possible from what you hear.
- /u/fromwithin (ex-Sensaura employee)

 9) Back in the day which games, in your opinion, made the best use of 3D audio? 

I can’t really think of anything that went out of its way to showcase 3D audio. It was more that there were games with which 3D audio naturally worked extremely well, such as Deus Ex. Good 3D audio made a huge difference in stealth games because you could hide, but still hear exactly where the baddies were, allowing you to know exactly when they would be in the best position for you to step out and give them a whack on the head (or tase them in Deus Ex). One game I always liked was Wild Metal Country. It’s quite a barren game, so the sounds are very strongly defined. I clearly remember getting bombarded by one of the baddies and hearing the projectiles whizzing right past my shoulders or just above my head. It was great.
- /u/fromwithin (ex-Sensaura employee)


 10) Can I get a download of the original Sensaura startup sound? 

Not from me. That was created before I joined Sensaura. Who knows if it even exists anywhere anymore.
- /u/fromwithin (ex-Sensaura employee)


 11) What type of headphone do you think offers the most accurate positional accuracy with HRTF? Closed-back, open-back, or in-ear monitor? 

Whatever gives you the purest result, which would be closed-back that are as isolating as possible. But it’s not just the technology, it’s how well it is used and how closely the HRTFs match your own head.
- /u/fromwithin (ex-Sensaura employee)


 12) Why do some games have more pinpoint sound accuracy than others with the same audio API and HRIR data? For instance, Frontlines: Fuel of War (rated A+) compared to SWAT 4 (rated D-). 

Listening to those two games, it’s clear that the implementation in SWAT 4 is terrible. There are various things that contribute to that, but fundamentally it’s because the programmer didn’t know what they were doing and either the sound designer was not technical enough to educate them or that the audio was not deemed important enough for the team to bother listening to the sound designer. It sounds like none of the sounds have any positioning relative to their root game object. That would suggest that the audio designer gave a bunch of sounds to a programmer and the programmer just used the most rudimentary way to attach them to the appropriate object: If the origin point of an NPC is positioned at [100, 120, 30] then all of its sounds, the footsteps, the gun sounds, the clothing sounds, the voices, are all also at [100, 120, 30].

That’s bad enough but when you apply the same mechanism to the player’s object, things are much worse. The reason is because the listener would also be attached to the player at the same position as its sounds. When the audio engine updates, it has to determine where all of the playing sounds are in relation to the listener. If the sounds and the listener are in exactly the same place, there is no direction. Depending on how the engine approaches this, it can result in the audio equivalent of z-fighting ( https://youtu.be/CjckWVwd2ek ), where the sounds jump around randomly. The other possibility is that the listener and emitters are not updated in the audio engine in the same way and at the same time. It could be that the listener, being the most important object, is guaranteed to be updated immediately, but the sounds are not. This could result in positional latency between the listener and emitter, so as the player moves, its sounds are always one frame behind. The player’s own sounds will emanate in the opposite direction to the direction of the player’s movement and will only catch up when the player stops. Or maybe it’s just that the player’s team members keep banging into things and moving around unnaturally when you can’t see them. The game is also set in very confined spaces, which means that every sound’s perceived direction changes very rapidly. If the game was set outdoors and an object was distant, the relative direction might only go from 87° to 93° when the player walked forwards a few metres. But if the player moves past an emitter that is very close, the emitter’s direction might go from 10° to 170° within the same motion. This makes the point-source nature of all of the sounds much more obvious, especially because you can easily see the graphical size of the object that the sound represents and there’s no reverb in the game to soften the positioning.
And then you’ve got poor source samples that exacerbate the poor implementation. Listen to the door opening at 0:34. The origin of the door object will be at the hinge because that’s how the door object can rotate to open correctly. The sound is naively positioned there at the hinge. The door opening sound has reverb baked into it, so when the door opens, you hear a door sound with an inappropriate reverb collapsed into mono and positioned at the hinge of the door. Dreadful. The footsteps don’t take the surfaces into account and there’s hardly any variation, so all you hear is a constant “dof dof dof dof dof” from every character at random points around you. And finally, there’s no effort whatsoever put into playback logic. The player team voices just fire off whenever a simple event occurs (like NPC < 5m away) with no context nor anything better than “play any voice sample at random”. The cacophony that it produces detracts from any immersion that could have been there.
- /u/fromwithin (ex-Sensaura employee)

 13) Sensaura had a technology called ZoomFX that simulated volumetric sound sources back in the early 2000s. Several plugins for Unity have come out over the years, but none have been used in any games. Why do you think volumetric sound failed to catch on despite fixing one of the biggest problems with audio in gaming? 

It all comes down to the cost of implementation. Unreal and Unity today have superb editors with complete integration of everything from shaders to game logic to audio. If one of them added a ZoomFX-type feature by default (for free), it would be pretty easy for the audio person to directly set the properties of the effect themselves for each object it needed to apply to and so would have the potential to catch on. Third-party features such as SabreGSC are largely irrelevant unless they can integrate seamlessly across all platforms with your chosen middleware with extremely minimal cost in time and budget.

Volumetric sound also suffers from poor ability to collapse down to the lowest common-denominator. You might have a single volumetric sound that is set to cover a large area that sounds great. But then the next platform doesn't support volumetric sound, so your large sound becomes a single point source and the audio for the whole scene is destroyed. So then you have to do a different implementation for the lesser platform that consists of multiple emitters spaced across the large area. Well if you have to do that anyway, what's the point in using a volumetric sound on the other platform in the first place?
Back in the day, pretty much every company had their own game engine and the tools for audio would likely be non-existent. The audio designer would determine the necessary sounds, create them, then give them to a programmer with instructions on how and where to implement them. To add ZoomFX would be a very manual process of the programmer having to specify the ZoomFX extents that apply to very specific objects in the game. If the game object changed during development, it would have meant the programmer having to go and change the values for that specific object. Also, there weren’t that many actual audio programmers back then who specialised in audio implementation. Sounds would just largely be implemented by whoever was available, whoever had a vague interest in audio, or by the junior programmer because nobody else wanted to take responsibility for it. These days, Wwise has unfortunately become the de facto option for AAA games. It has a feature called focus that you can use to do a type of volumetric effect. Basically, as you approach a point source, you can reduce the focus, which causes Wwise to create virtual sound sources that are decorrelated from the root sound and positioned around the listener. It’s really only useful for sounds that represent an area that you can go into, rather than a shaped sound emitter, but it’s something at least. If Wwise had a properly volumetric feature, it would definitely get used.
- /u/fromwithin (ex-Sensaura employee)

 14) What is your opinion on games such as Euro Truck Simulator 2 that swap out fully-fledged 3D audio engines like OpenAL in favor of closed-source middlewares that offer no spatialization like FMOD? 

I love Euro Truck Simulator 2. It’s very cathartic. :) It’s not really FMOD’s fault that Brett hasn’t spent a load of money creating his own HRTFs to be able to provide it for free. FMOD has the option to use spatialisation plugins, so companies can always do that if they think that it’s worth it. I was a Technical Director at an independent developer for 10 years, so I know very well the corners that have to be cut when there isn’t enough time or money to do things “properly”. It is what it is. If you can afford it, you do it. If you can’t, you don’t. If they swapped out OpenAL, there could be any number of reasons why they switched away from it. Maybe their audio programmer left and the new one had no clue how to use it, but was familiar with FMOD? I’d like all games to have great 3D audio, but that’s just not how the world works.
- /u/fromwithin (ex-Sensaura employee)


 15) Why do you think 3D audio on early 2000s consoles (QSound on Dreamcast, Nvidia SoundStorm APU on Xbox, Sony S-Force for PS2 etc.) failed to take the gaming public's notice? 

Mostly because of TVs. You can’t notice the difference when the audio is playing through the mono speaker on a television. Rarely would anyone plug headphones into their TV, and if they did, they'd probably get the buzzing of interference whenever anything white appeared on the CRT screen. Another contribution was the constant increase in fidelity of the source samples that masked the decreasing quality of 3D positioning.
- /u/fromwithin (ex-Sensaura employee)


 16) What is your opinion on the current gen of spatial audio tech (Dolby Atmos, Steam Audio, Resonance Audio, Sony Tempest, OpenAL Soft etc.) and how do you think it compares to technology from 1998-2002? 

I haven’t tried them so I can really only go on their purported feature set, but I’d be very happy if Dolby Atmos went and died in a ditch. I’m currently working on sound propagation for a large AAA franchise game, but they’re using Wwise and I don’t know what they plan on using for the actual 3D. The last 3D audio thing that I did was a VR game demo that never got signed by Sony. For that I did an evaluation of what was available for Unity and went with DearVR. Their positioning was the best of what I tried, but it really was just good HRTFs and little else. The one thing that any technology needs for proper adoption is a comprehensive toolset and support on all platforms. Steam Audio and Resonance Audio are impressive, but there are no console versions. OpenAL Soft requires too much programmer intervention. Companies want the path of least resistance. Sony will use Tempest; we'll see how well it is supported by middleware, but it's still limited to one platform.
- /u/fromwithin (ex-Sensaura employee)


 17) What do you think needs to be done for 3D audio to become fully mainstream in the gaming public, and will it happen within this new decade? 


Dolby really needs to be removed from the equation. That company has done far more to damage consumer perception of 3D audio than any other. That’s not going to happen of course, and if they have their own way, they’ll completely take over the whole thing so that Dolby becomes synonymous with binaural 3D audio. Then they’ll use that to make people buy more speakers. I honestly can’t see 3D audio becoming mainstream. So much would have to happen. It would have to become so widespread that it gets taken for granted like it did in the early 2000s. It would have to be the case that in the same way that music is not mixed in mono because stereo is the default, games would have to use 3D audio because what kind of weird luddite wouldn't?
As someone who travelled the world trying to get people to license Sensaura, I can tell you from experience that it’s incredibly difficult to even explain what 3D audio is to someone who doesn’t already know. Its best feature is also its worst: it produces audio that sounds natural. If you play someone a 3D recording and tell them nothing about it, they won’t understand what you’re trying to demonstrate because it sounds perfectly natural. They’re just hearing a transparent effect that they’re used to hearing all day, every day. Sony has done a great thing with the PS5 by having it supported in hardware, not allowing Dolby to get its claws in, and making a big song and dance about it, but it really needs a lot more companies doing that. All of them, in fact. EA, Ubisoft, etc. need to make a big deal out of “not Dolby, much better, and you don’t need 12 speakers in your living room”. But they won’t. These are companies that use Wwise because the zeitgeist tells them that that’s what AAA games are supposed to use, even though it’s a horrible system. They’ll keep supporting whatever Dolby’s, Microsoft’s, or Sony’s marketing money tells them to use. - /u/fromwithin (ex-Sensaura employee)

Got questions?

Join our Discord community for further discussion:  https://kutt.it/BinauralDiscord