I would also say that a lot of people expect unrealistic audio when they think of realistic audio. Just like a lot of people say headphones are good just because they can hear the bass.
@LAxemann 5 months ago
Yess! Literally the topic of my previous video! :)
@Shack263 5 months ago
This is exactly the kind of video I've wanted to see for a while now. I'm very interested in advances in games that I think get overlooked, such as audio and animation.
@luckyankraj 5 months ago
This was a great watch. I honestly enjoy the workarounds more than absolute realism; it gives each game its sonic signature. And the various techniques used to achieve the same goals in different games are a treat to listen to and observe.
@LAxemann 5 months ago
100% agree! :)
@Lippeth 5 months ago
6:26 was not expecting to see the house from House Party used to demonstrate sound blocking and diffraction.
@ПётрПавловский-щ1х 5 months ago
It's an asset you can buy.
@wile123456 5 months ago
Back when sound cards were needed for surround sound and physics-based audio, there was a period where advanced sound was pushed in order to sell these sound cards. But now that these sound cards are all niche and don't provide tangible upgrades over the motherboard's built-in audio, there isn't much of a push. The PS5 is the main exception, with raytracing cores of the GPU being dedicated to sound rendering, and the built-in binaural audio for headsets in the SDK.
@MurderCrowAwdio 5 months ago
Top-shelf tutoring, my friend!
@MrShrion 5 months ago
Totally agree with this. While I couldn't put my finger on what the problem was, I really thought most games have a sound design that is a little bit off, like in "The Last of Us", where you would exit a room and couldn't hear the NPCs talking anymore. Glad that sound gets more attention now that the graphics are smooth ;) A little question that came up in my head: explosions, gunshots and talking people have a clear beginning and end. What about something like bird tweets and stuff like that? How can you loop this audio in games so that it feels natural? Great video again! Keep them coming.
@LAxemann 5 months ago
Glad you liked it! As for things like ambient wildlife: this falls more into the "systems" category, and is nowadays usually solved by creating functionality that dynamically places sound emitters at fitting locations, e.g. bird sounds on trees that trigger randomly. Our audio lead for Arma Reforger made an in-depth video explaining the game's system: kzbin.info/www/bejne/pHKce5-aqZtoers
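A minimal sketch of what such a dynamic emitter system might look like; every name, distance and timing below is an illustrative assumption, not Arma Reforger's actual implementation:

```python
import random
import time

# Sketch of a dynamic ambient-emitter system: attach emitters to nearby
# "tree" positions and let each one trigger a random bird call on its own
# randomized cooldown. Values and sound names are invented for illustration.

BIRD_SOUNDS = ["bird_chirp_a", "bird_chirp_b", "bird_song_short"]

class AmbientEmitter:
    def __init__(self, position):
        self.position = position
        self.next_trigger = time.time() + random.uniform(2.0, 10.0)

    def update(self, now):
        # Fire a random one-shot when the cooldown expires, then reschedule.
        if now >= self.next_trigger:
            sound = random.choice(BIRD_SOUNDS)
            print(f"play {sound} at {self.position}")  # the engine's play call would go here
            self.next_trigger = now + random.uniform(4.0, 15.0)

def place_emitters(tree_positions, player_pos, max_distance=50.0, budget=8):
    """Attach emitters only to trees near the player, up to a fixed budget."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearby = [p for p in tree_positions if dist(p, player_pos) <= max_distance]
    random.shuffle(nearby)
    return [AmbientEmitter(p) for p in nearby[:budget]]

# Usage: emitters = place_emitters(trees, player_pos), then call
# emitter.update(time.time()) for each emitter every game tick.
```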
@MrShrion 5 months ago
@@LAxemann Thank you for the answer and the video link.
@rafaeltmanso 5 months ago
I love your videos, man! They're really insightful.
@jamieleesounds 5 months ago
You're an AMAZING teacher. Thanks so much for sharing this video!! I learned a ton.
@LAxemann 5 months ago
Thank you, Jamie! People learning new stuff is all I could wish for. :)
@gambetta_ 5 months ago
I think the real question is: why would we want to emulate every molecule of sound or weather condition if we know what it is like in its completed state? It's like emulating a tree by rendering every atom of the tree.
@LAxemann 5 months ago
That in and of itself is a fair point. The reverse question is: how do we get to a point at which we can reliably get a 100% (or at least 95%) accurate recreation of the "completed" state without doing these checks?
@rampantporcupineandfriends3793 5 months ago
I wondered about this very topic on a past video of yours, so I was super excited to see the title of this video pop up in my subscription feed! 😊 And you did not disappoint with the quality of the analysis. I especially enjoyed the parallels you drew between lighting tech and audio tech; it definitely helps put things in perspective for me. Your excitement about the subject makes me super excited as well! Recently I saw news about Cyberpunk 2077 getting an upgrade to its audio systems and I was really excited to see it. Before watching your videos, I probably wouldn't have taken much note of it. I would love your thoughts on the quality of the sound design they are showing off here: kzbin.info/www/bejne/hYm6ZoV9r8Shm5Y The one thing I think I hear is the reduction of higher frequencies when the sound source (a gunshot) is behind a wall. This effect does not seem to be as pronounced, or perhaps isn't present at all, in the original audio system.
@LAxemann 5 months ago
Glad you liked it! The demo in the video is not directly related to obstruction, but to "spatial audio". The TLDR is that it applies certain effects to the sounds based on their relative location to make locating them easier. I guess that works, but at the same time I think it destroys the audio fidelity most of the time, as things start to sound very unnatural. Not a fan personally.
@rampantporcupineandfriends3793 5 months ago
@LAxemann :0 Thanks for the explanation. I guess I got ahead of myself. If I understand correctly, this means that spatial audio is on the trickery side of things rather than the pure simulation side.
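For readers wondering what "applying effects based on relative location" can mean at its simplest, here is a rough sketch of constant-power panning plus a crude interaural time difference (the Woodworth approximation). Real spatial-audio systems use far more elaborate HRTF filtering; the head radius here is just a common rough figure:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s
HEAD_RADIUS = 0.0875    # m, rough average head radius

def simple_spatialize(azimuth_rad):
    """Return (left_gain, right_gain, itd_seconds) for a source at the given
    azimuth: 0 = straight ahead, +pi/2 = hard right, -pi/2 = hard left."""
    # Constant-power pan keeps perceived loudness steady as the source moves.
    pan = (math.sin(azimuth_rad) + 1.0) / 2.0        # 0 = full left, 1 = full right
    left_gain = math.cos(pan * math.pi / 2.0)
    right_gain = math.sin(pan * math.pi / 2.0)
    # Woodworth's interaural time difference: the delay applied to the far ear.
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (abs(azimuth_rad) + math.sin(abs(azimuth_rad)))
    return left_gain, right_gain, itd

# A gunshot 30 degrees to the right: louder in the right ear,
# reaching the far (left) ear roughly 0.26 ms later.
print(simple_spatialize(math.radians(30)))
```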
@MFKitten 5 months ago
It's so much easier to fool your ears than it is to fool your eyes.
@MFKitten 5 months ago
I wonder if it's possible to use the data from graphics ray tracing to calculate audio stuff. Occlusion and reflection.
@LAxemann 5 months ago
@@MFKitten Huh, that's an interesting idea! A large limitation might be the fact that, at least currently, raytracing/pathtracing has a finite number of light bounces before the simulation stops (at least from my understanding), while sound, in turn, would spread in pretty much every direction and reflect a huge number of times. Also, it'd only apply to the viewable surroundings of the player: if e.g. someone shoots a gun 500 m away behind some buildings, there'd be no raytraced info about it available.
@MFKitten 5 months ago
@@LAxemann you could use it for early reflections around the player, changing as you move around a space.
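As a rough illustration of that idea (not how any shipping engine does it): cast a handful of rays outward from the listener and turn each surface hit into a delayed, attenuated echo "tap". The ray count, absorption value and `trace_ray` hook are hypothetical placeholders:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s

def early_reflection_taps(listener_pos, trace_ray, num_rays=16, absorption=0.3):
    """Cast a horizontal fan of rays from the listener; each hit becomes one
    early-reflection tap (delay in seconds, gain). trace_ray(origin, direction)
    stands in for the engine's ray trace and should return the hit distance in
    meters, or None if nothing was hit."""
    taps = []
    for i in range(num_rays):
        angle = 2.0 * math.pi * i / num_rays
        direction = (math.cos(angle), 0.0, math.sin(angle))
        hit_distance = trace_ray(listener_pos, direction)
        if hit_distance is None:
            continue
        round_trip = 2.0 * hit_distance                 # out to the surface and back
        delay = round_trip / SPEED_OF_SOUND
        gain = (1.0 - absorption) / (1.0 + round_trip)  # crude distance + absorption falloff
        taps.append((delay, gain))
    return taps

# Example with a fake tracer that pretends every ray hits a wall 5 m away:
print(early_reflection_taps((0, 0, 0), lambda origin, direction: 5.0))
```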
@morgan0 5 months ago
It might work okay for higher frequencies, but lower frequencies are much more affected by wave properties and resonances in the room. You basically have to either do a lot of tricks to emulate what the resonances will do, or run a 3D wave simulation in real time, and the source and listener positions affect this. You could probably get away with only doing this for low frequencies and using a separate method for high frequencies, which would allow a lower sample rate in time and space without significant loss in quality.
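A bare-bones illustration of that split, assuming the two rendering paths already exist: run the signal through a complementary one-pole crossover so the low band can be fed to a wave-based path and the high band to a ray-based path. The crossover frequency is a placeholder; a real implementation would likely use a steeper Linkwitz-Riley crossover:

```python
import math

def crossover_split(samples, sample_rate=48000, crossover_hz=250.0):
    """Split a mono signal into (low_band, high_band) using a one-pole
    low-pass and its complement. low + high reconstructs the input exactly."""
    dt = 1.0 / sample_rate
    rc = 1.0 / (2.0 * math.pi * crossover_hz)
    alpha = dt / (rc + dt)
    low, high = [], []
    state = 0.0
    for x in samples:
        state += alpha * (x - state)   # one-pole low-pass
        low.append(state)
        high.append(x - state)         # complementary high-pass
    return low, high

# Example: a 50 Hz sine ends up mostly in the low band.
sine = [math.sin(2 * math.pi * 50 * n / 48000) for n in range(4800)]
low, high = crossover_split(sine)
print(max(map(abs, low)), max(map(abs, high)))
```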
@VambraceMusic 5 months ago
Such a great video, I'm glad I found your channel!
@rickyalan1227 4 months ago
Great insights about sound diffraction. If you don't mind me asking, how was the visualization of the clip @5:37 created? Thanks for your videos!
@LAxemann 4 months ago
Hellooo! Thanks for the kind words! :) The clip in the video is not mine, but I'm pretty sure it was done like this:
- Define points to the left and right of the object relative to the player's rotation
- Do line traces from each point to the player (or the other way around)
- Visualize the traces as green lines, and change their color to red if they hit an object
- Tadaaaa
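A tiny sketch of that recipe; `line_trace` and `draw_debug_line` stand in for whatever your engine actually provides (a physics raycast and a debug-draw call) and are not real APIs:

```python
# Sketch of the debug visualization described above: probe points to the left
# and right of the obstacle, trace back to the player, color by hit result.

GREEN, RED = (0, 255, 0), (255, 0, 0)

def add(a, b):   return tuple(x + y for x, y in zip(a, b))
def scale(v, s): return tuple(x * s for x in v)

def draw_diffraction_probes(player_pos, player_right, obstacle_pos,
                            line_trace, draw_debug_line, offset=1.5):
    """player_right is the unit vector pointing to the player's right, so the
    probe points sit left/right of the obstacle relative to the player's view."""
    probes = [
        add(obstacle_pos, scale(player_right, +offset)),  # right-side probe
        add(obstacle_pos, scale(player_right, -offset)),  # left-side probe
    ]
    for probe in probes:
        blocked = line_trace(probe, player_pos)           # True if something is in the way
        draw_debug_line(probe, player_pos, RED if blocked else GREEN)

# Example with dummy engine hooks that never block and just print the lines:
draw_diffraction_probes((0, 0, 0), (1, 0, 0), (0, 0, 5),
                        line_trace=lambda start, end: False,
                        draw_debug_line=lambda start, end, color: print(start, end, color))
```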
@rickyalan1227 4 months ago
@@LAxemann Thanks brother! Keep it up.
@Noone-of-your-Business 5 months ago
10:13 - And this is what I don't get: I had my first _Soundblaster Live_ card back in the last millennium, and it boasted a huge number of audio effects, especially reverbs. It basically had its own digital sound FX unit on board to alleviate *_all_* the issues mentioned here. It seems the EAX standard it introduced did not catch on, did it?
@LAxemann 5 months ago
Nope, that's a common misconception. Those dedicated sound cards were made at a time in which a) the (physical) audio processing of computers wasn't very good and was prone to artifacts, and b) computers were a lot less powerful. Those Soundblaster cards didn't do any of the complex stuff mentioned in the video; they simply did simple, synthetic reverbs and other audio-related tasks we consider basic by today's standards. But at the time, such a reverb effect was expensive to compute and had to be done on these dedicated sound cards. Think of the transition from using an external alarm clock to using the one in your mobile phone. Nowadays, most mainboards have perfectly fine/clean audio processing and the calculations for reverb can easily be done on the CPU, so the need for external sound cards simply vanished for most consumers. That's not to say a "modern soundcard" for audio-related calculations couldn't be a thing in the future, but I'd say we're currently at a stage in which we'd first need to do some more R&D on standardized acoustic wave propagation in order to determine what sort of hardware we'd need to create, just like Nvidia's RTX models for graphics.
@LAxemann 5 months ago
Found this one which I think summarizes it well and quickly: kzbin.info/www/bejne/embOeIt_bduIgck
@Noone-of-your-Business 5 months ago
@@LAxemann Soooo... EAX is still being used? I am aware of the difference between hardware-based "rendering" and software - VST plugins do the same, and they do it well, some even with very little CPU usage. But my question was also whether the standard has become obsolete or whether it is still being used to trigger software FX instead of addressing specific hardware. Do you know anything about that?
@LAxemann 5 months ago
@@Noone-of-your-Business It's become obsolete, basically, yup
@MrMaidenHell 5 months ago
That was an amazing tutorial, thanks!
@LAxemann 5 months ago
Thank you, dude! Happy about every share to people who might be interested! :D
@jeesobeeso 5 months ago
Accidentally thumbed this down, but I reversed it! Great video!
@r3vo830 5 months ago
Very interesting again!
@andrej_novosad 5 months ago
another banger!
@LAxemann 5 months ago
No u! :D
@fkeyzuwu 5 months ago
10:48 So... what's that if not the most realistic result we can get out of reverb audio? Programmatic reverb anywhere (not just in games) acts the same way; what more can you ask of it at the moment? By the way, if you didn't already see it, I would look at the new Zelda: Tears of the Kingdom GDC talk, some nice ideas there.
@LAxemann 5 months ago
That's the point - it acts the same way everywhere at the moment but is far off from actually doing a full-on calculation of emitted sound spreading omnidirectionally, diffracting, and all the other stuff talked about in the video. It's mostly still just getting a rough idea of the surroundings and tweaking the parameters of a somewhat arbitrary reverb algorithm with it. Far from any "true" sound propagation. Not the best comparison, but it's like comparing early rasterization to raytracing in graphics.
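To make "getting a rough idea of the surroundings" concrete, here is a toy version of that approach: probe the space with rays and map the average hit distance and the share of escaping rays onto a couple of generic reverb knobs. The mapping constants and the `trace_distance` hook are invented for illustration:

```python
import math

def estimate_reverb_params(listener_pos, trace_distance, num_rays=32, max_range=60.0):
    """Probe the surroundings with a fan of rays and derive rough reverb
    parameters. trace_distance(origin, direction) is a placeholder engine hook
    returning the hit distance in meters, or None if nothing is hit in range."""
    distances, misses = [], 0
    for i in range(num_rays):
        angle = 2.0 * math.pi * i / num_rays
        direction = (math.cos(angle), 0.0, math.sin(angle))
        d = trace_distance(listener_pos, direction)
        if d is None:
            misses += 1
        else:
            distances.append(d)
    openness = misses / num_rays                            # 1.0 = open field, 0.0 = fully enclosed
    avg_dist = sum(distances) / len(distances) if distances else max_range
    room_size = min(avg_dist / max_range, 1.0)              # 0..1 "size" knob
    decay_time = 0.3 + 4.0 * room_size * (1.0 - openness)   # seconds, invented mapping
    return {"room_size": room_size, "decay_time": decay_time, "openness": openness}

# Example: pretend every ray hits a wall 8 m away (a mid-sized room).
print(estimate_reverb_params((0, 0, 0), lambda origin, direction: 8.0))
```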
@Kynatosh 5 months ago
Video games have been very realistic without ray tracing; it was all smoke and mirrors too. And it still is, or nothing would be runnable :)
@Lord_Alhaitham 5 months ago
noice
@a_grin_without_a_cat 5 months ago
Physically correct (raytraced, extensively simulated) does not and should not equal photorealistic; I don't think video games will start to look the same at some point. There's a great Acerola video on the topic -- kzbin.info/www/bejne/gZyynKtme857eqM
@LAxemann 5 months ago
I get the point and it's generally valid, but I think my arguments still hold up: if everything is based on the same methods of rendering/lighting, the only thing that changes the final result is a change in art style and/or lighting setup - artistic things. And I'd say that e.g. Pixar movies indeed do look very much alike in terms of lighting; the only difference is aesthetics and the way the scenes are lit. The visual trickeries mentioned in the video added another layer that contributed to the overall, final image, which was/is being taken out of the equation.
@a_grin_without_a_cat 5 months ago
@@LAxemann >The visual trickeries mentioned in the video added another layer that contributed to the overall, final image, which was/is being taken out of the equation.
I think that is the whole point. Physically correct light / fluid / physics / sound etc. can be used as a foundation and a framework to create art that is beyond realistic, but not constrained by realism. It just allows developers to not reinvent the wheel in each new game but build their vision upon existing physically correct tech. And, when you have the whole correct equation, you can freely take pieces out of it and/or change them, creating artistic derivatives.