I’m a musician who has played loud music live for over fifty years, and I hear the A and B examples 24/7. Thanks for identifying the frequency of my tinnitus!
@julesc8054 · 3 years ago
Lmao. My tinnitus is a little higher.
@alexatkin · 3 years ago
Never been able to figure out where mine is as it doesn't seem to be a single tone.
@wado1942 · 3 years ago
7.3 kHz in my right ear 😢
@MrSpeakerCone · 3 years ago
Engineer here. This is a good explanation and easily the best visualisation of aliasing I've seen. Nice!
@brayoungful · 3 years ago
Wave scientist here (not audio). I agree this is an excellent demonstration of aliasing. However, I think this video is primarily an argument for *mastering* above 44.1 kHz, particularly if you're generating a lot of synthetic sounds, rather than for recording or playing back audio above 44.1 kHz. I wouldn't expect human voices or musical instruments to produce much power at frequencies above the human range of hearing, so you're probably not going to get a lot of audible aliasing if you record audio at 44.1 kHz. And if that aliasing hasn't been baked into your digital audio file to begin with, then you won't be hearing it. An exception I could imagine would be recording in a noisy environment where the "noise" isn't Gaussian - in which case, perhaps you could get some beat-like pattern of "noise" in your audible range. Edit: the other caveat would be that if you have high-fidelity audio and you're playing it back at a lower sampling rate, it's anyone's guess how that downsampling/resampling algorithm works. It might introduce its own wonkiness. And if you're trying to drive speakers at a frequency beyond which they've been tested, you might get non-linear weirdness, too.
@FilmmakerIQ · 3 years ago
I'm really focusing on the capture side; my interest is in the "making" side of things. Some cymbals can shimmer up in the high range of human hearing - I've heard tell of some loss of the "brightness" of cymbals because recording at 44.1 kHz, compounded with a bunch of low-pass filters, causes that upper range to just lose power... but that's theoretical to me. And I really never answer the question in the title, "What is the Optimal Sampling Rate"... haha. I think my purpose was really to understand what the argument is, not necessarily to advocate for it. That's the impression I had when I was trying to work through Lavry's paper.
@brayoungful · 3 years ago
@@FilmmakerIQ this is an interesting discussion and has me sitting here on a Saturday afternoon tinkering with MATLAB and Audacity... :-) So what I just tried: I created a 7 kHz square wave sampled at 44.1 kHz. It looks like a typical square wave, and sounds like the one in your video. Then I generated a 7 kHz square wave in a 192 kHz track and applied a bunch of aggressive 22.05 kHz lowpass filters so it has no frequency content above 22.05 kHz. Then I made another 7 kHz square wave in a 192 kHz track and just told Audacity to resample it to 44.1 kHz. The two square waves generated in 192 kHz tracks - one lowpass filtered, one resampled - sound like square waves, and sound almost the same. The 7 kHz square wave generated in the 44.1 kHz space has dozens of overtones and sounds totally different. I don't have an easy way to generate a square sweep, but this would be an interesting experiment for you to try on a square-wave sweep to see what happens.
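For anyone who wants to reproduce this experiment without Audacity, here's a rough numpy sketch. The "properly filtered" version is approximated here by additive synthesis of only the partials below Nyquist (for a 7 kHz square wave, that's just 7 kHz and 21 kHz) rather than by an actual lowpass filter:

```python
import numpy as np

fs, f0, N = 44_100, 7_000, 44_100   # one second of audio; FFT bins are 1 Hz apart
n = np.arange(N)

# Naive square wave: sampled with no band-limiting, so harmonics above
# Nyquist (22.05 kHz) fold back into the audible band.
naive = np.sign(np.sin(2 * np.pi * f0 * n / fs))

# Band-limited square wave: only the partials below Nyquist, which for
# 7 kHz means just the fundamental and the 3rd harmonic at 21 kHz.
bandlimited = (4 / np.pi) * (np.sin(2 * np.pi * 7_000 * n / fs)
                             + np.sin(2 * np.pi * 21_000 * n / fs) / 3)

spec_naive = np.abs(np.fft.rfft(naive)) / (N / 2)
spec_bl = np.abs(np.fft.rfft(bandlimited)) / (N / 2)

# Look at everything except DC and the two legitimate partials:
mask = np.ones(len(spec_naive), dtype=bool)
mask[[0, 7_000, 21_000]] = False
print(spec_naive[mask].max())   # substantial aliased energy off the harmonic grid
print(spec_bl[mask].max())      # essentially zero
```

The first number comes out large (the 35 kHz harmonic alone folds down to 9.1 kHz at a quite audible level), while the band-limited version has nothing but the two real partials.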
@FilmmakerIQ · 3 years ago
So here's the deal with a 7 kHz square wave and any flavor of 48 kHz (96 kHz, 192 kHz) - they will all sound the same! Remember in my video where I did the frequency analysis? There was the fundamental at 7 kHz, that first harmonic at 21 kHz, and then aliases separated by 2 kHz, starting at 1 kHz and going up. That works out because the reflections caused by the 24 kHz Nyquist limit cycle around and around on odd numbers. Doubling the sample rate to 96 or quadrupling it to 192 doesn't change the cycle, it just changes where the cycle ends and picks up again! And since a square wave is supposed to be an infinite series, it doesn't matter where the cycle picks up... Change the fundamental frequency of that square wave to 7001 Hz and it won't cycle around like that, and you'll hear the difference between 48 and 192 kHz.
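The folding arithmetic behind this can be sketched in a few lines of Python (a simple illustration of the mirroring, not code from the video):

```python
fs = 48_000  # sample rate in Hz, so the Nyquist limit is 24 kHz

def alias(f, fs=fs):
    """Fold a frequency back into the audible band [0, fs/2]."""
    f = f % fs
    return fs - f if f > fs / 2 else f

# Odd harmonics of a 7 kHz square wave: 7, 21, 35, 49, 63, ... kHz
harmonics = [7_000 * (2 * k + 1) for k in range(12)]
print(sorted(alias(f) for f in harmonics))
# The folds land exactly on the odd-kHz grid: 1, 3, 5, ..., 23 kHz

# Shift the fundamental to 7001 Hz and the tidy cycle breaks:
print(sorted(alias(7_001 * (2 * k + 1)) for k in range(12)))
```

With a 7 kHz fundamental the folded harmonics tile the same odd-kHz grid no matter how far the series extends, which is why doubling the sample rate doesn't change which frequencies appear; at 7001 Hz the folds scatter instead.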
@DamjanB52 · 3 years ago
@@FilmmakerIQ "aliases separated by 2 kHz, starting at 1 kHz and going up. That works out because the reflections caused by the 24 kHz Nyquist limit cycle around and around on odd numbers" - sorry, I don't understand this: where does the 2 kHz separation come from? How do you get from 7x5, 7x7, 7x9, ... to all those frequencies?
@MrBitflipper · 3 years ago
I have no problem letting the "Hz"/"KHz" misstep slide, as it's a common-enough slip-up. But sorry, I gotta call you out on repeatedly referring to a low-pass filter as a "limiter". Otherwise, this explanation - like all of your tutorials - strikes a comfortable balance between technical accuracy and accessibility to a broad audience. Well done! Btw, be forewarned: you just know somebody's going to link to this video in a Gearslutz post.
@henri.witteveen · 3 years ago
I used to work in sonar engineering in which we used digital signal processing. To avoid aliasing the first 'processing' step was an analog filter which would cut off frequencies that could cause trouble because of this aliasing.
@yaakoubberrgio5271 · 3 years ago
Hello 👋 👋 👋 I need a course on signal processing. Can you help me? Thanks in advance.
@Lantertronics · 3 years ago
@@yaakoubberrgio5271 I've put up the lectures from my ECE3084: Signals and Systems course at Georgia Tech: kzbin.info/www/bejne/jKW2naCaqM2kqKs
@kensmith5694 · 3 years ago
A well-designed delta-sigma ADC does part of the filtering for you. A part like the AD7768 uses a modestly high-order integrator, so you get more than one pole of anti-aliasing for free.
@Lantertronics · 3 years ago
@@kensmith5694 I've never been able to fully wrap my head around delta-sigma converters. Like... I can sort of follow the math line by line, but I can't really develop an intuition for the "heart" of how they work.
@henri.witteveen · 3 years ago
@@kensmith5694 When I mentioned working in sonar engineering, I was talking about 1979 and 1980. We had to construct our own processing unit using a 5 MHz 32-bit 'high speed' multiplier as the heart of the system.
@croolis · 3 years ago
Excellent and interesting video. I would like to add one term here: 'oversampling'. When digitising an analog waveform it is quite normal to have a relatively tame analog filter, but run the sampling at a much higher frequency than the output requires; 8x or 16x oversampling is common. The next step is to have a digital filter operating on this high-frequency sampled signal, and then downsample to the required frequency, e.g. 44.1 kHz. The 10 kHz square wave has audible undertones because it was simply generated mathematically - there is no oversampling or anti-aliasing going on at all. If the signal had been filtered properly before being recorded, the 10 kHz square wave and the 10 kHz sine wave would, of course, sound exactly the same (since the next harmonic is not captured).
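That oversample - digital filter - downsample pipeline can be sketched with numpy. This is a toy model: an ideal FFT brick-wall stands in for the real decimation filter chain, and the numbers (8x oversampling, a 10 kHz square wave) are just illustrative:

```python
import numpy as np

fs_out, ratio = 44_100, 8
fs_hi = fs_out * ratio            # 352.8 kHz "oversampled" rate
f0 = 10_000
N_hi = fs_hi                      # one second, so FFT bins are 1 Hz apart
n = np.arange(N_hi)

x = np.sign(np.sin(2 * np.pi * f0 * n / fs_hi))  # 10 kHz square at the high rate

# Digital anti-alias filter: brick-wall everything above the output Nyquist.
X = np.fft.rfft(x)
X[fs_out // 2 + 1:] = 0
filtered = np.fft.irfft(X, N_hi)

y = filtered[::ratio]             # decimate to 44.1 kHz
spec = np.abs(np.fft.rfft(y)) / (len(y) / 2)

# For comparison: sampling the square wave directly at 44.1 kHz, where the
# 30 kHz harmonic folds down to an audible 14.1 kHz alias.
direct = np.sign(np.sin(2 * np.pi * f0 * np.arange(fs_out) / fs_out))
spec_direct = np.abs(np.fft.rfft(direct)) / (fs_out / 2)

print(spec[14_100], spec_direct[14_100])  # alias gone vs. alias clearly present
```

The oversampled path still carries faint residue from harmonics that fold below Nyquist at the high rate, but they come from far weaker harmonics, so they are much quieter than the aliases produced by direct 44.1 kHz sampling.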
@Lantertronics · 3 years ago
I'm a professor of Electrical and Computer Engineering at Georgia Tech, and I have taught courses in signal processing for 20 years. Besides an excellent tutorial by Dan Worrall, this is the only video on the topic I've seen on YouTube that doesn't make me cringe. In fact, your video is superb. :)
@stephenwong9723 · 3 years ago
What you explained comes down to this: either have a good recorder (A/D converter) that does a good job of filtering out signals above the Nyquist frequency, or record at a higher sampling rate (96 kHz, for example, is much more than enough) and then downsample to 44.1 kHz/48 kHz when you do your post. In the digital domain you can do (very close to) exact calculation, and at the end save a few bytes on the final product without jeopardizing quality. However, those guys who insist on getting so-called hi-res files for PLAYBACK are just crazy - forget them!
@harrison00xXx · 3 years ago
I mean, even cheap equipment like $100 DVD players in 2007 already had 192 kHz DACs, avoiding problems like this entirely. But for the final media, more than 44.1 kHz doesn't make much sense, since most released music is still 44.1 kHz/16-bit anyway. Even most (or all!) vinyl records are made from 44.1 kHz samples. Tidal even dares to upsample/"remaster" 44.1 kHz/16-bit originals to expand their "hi-res" collection... Since every piece of hi-fi gear has a low-pass filter cutting anything above 20 kHz anyway, combined with internal 96 kHz+ processing (more like 384 kHz nowadays), no - 44.1 is just fine. More is acceptable for digitized vinyl records, or, sure, why not. 44.1/48 vs 96+ is like comparing 4K vs 8K... it doesn't make practical sense, maybe a little under perfect circumstances... but hey, it's possible. That's why my AVR has 9 (or 11, I don't know) 384 kHz/32-bit (32-bit!!! wtf?!) DACs, by the numbers even better than my high-end stereo gear with "only" 192 kHz/24-bit Wolfson DACs. Only in recording and mastering is more than 44 kHz needed, and those are done at 96 kHz+ anyway, since it's possible. I don't get people when they complain about "only CD quality"/44.1 kHz... damn! That's at least completely uncompressed, not like the lossy (!) MQA garbage, for example. In fact (and already proven...) CD quality is better and more accurate than MQA (which is another compression format like MP3 - but worse, and with high license fees, haha). Some of my friends are completely addicted to hi-res and/or Tidal/MQA, just because they see a blue light or 96/192 kHz on their receiver's screen... despite it having absolutely the same sound as a 44.1 kHz CD with the same mastering. Damn, they use soundbars, garbage "hi-fi" gear, and BT headphones, and they dare to complain about 44.1 kHz! I also prefer hi-res source material, but mostly because of the different masterings - less loudness, more dynamic, mastered for the "demanding" listeners.
@arsenicjones9125 · 3 years ago
@@harrison00xXx I believe you are incorrect about vinyl masters. The master for vinyl is separate from the master for CD. Professional mastering engineers want to work with the highest-quality mix, which means NOT 44.1 kHz/16-bit. And most likely the vinyl press wants to make its master from the highest-quality version available - at least for major-label artists. Independent artists, well, ya know, they get what they pay for, and they can't reasonably be used to make statements about what's used to make vinyl records.
@harrison00xXx · 3 years ago
@@arsenicjones9125 Of course it's mastered differently for vinyl, but still, the samples used to make the "negative" are, for probably 99.9% of (non-quadraphonic) records, 44.1 kHz/16-bit - CD quality. That was my point. As if CD quality is "bad"... c'mon, that's the most accurate and "lossless" quality standard we ever got. Of course there is now "hi-res", but that's more voodoo/overkill...
@arsenicjones9125 · 3 years ago
@@harrison00xXx no, I'm afraid you're incorrect again. For major studio albums they regularly record at high sample rates, then downsample to 48 kHz/24-bit to edit and mix. Some major studios do all their editing and mixing work at 96 kHz/32-bit floating point. Then it gets downsampled again after mastering. Again, we can dismiss what independents do, because they don't work in any standardized format. CD quality is not the most accurate, lossless standard available. 🤦‍♂️🤣 An original recording made in a 96 kHz/32-bit WAV file is a more accurate representation of the analog signal. If there are more samples with greater bit depth, it MUST be more accurate than a lower sample rate and bit depth. Just because you cannot discern a difference in every piece of music you hear doesn't mean there is no difference, or that there is no difference that affects the experience. Just to be clear, I don't think CD quality is bad - just that it's not without flaws either. Upsampling won't increase fidelity in any way, but a recording made at a higher sample rate is higher fidelity.
@harrison00xXx · 3 years ago
@@arsenicjones9125 So you have proof that the source material for making vinyl is more than 44.1 kHz? Sure, they edit and master at higher rates, but the end result is mostly sampled at 44.1 kHz/16-bit. This is probably slowly changing with hi-res for consumers, but it's a known fact that 44.1 kHz was used for vinyl FOR DECADES at least.
@Goodmanperson55 · 3 years ago
4:50 a tiny bit of correction on this part. If you activate the "stats for nerds" option, you'll see that YouTube actually uses a much newer audio compression format called Opus, developed by the same Xiph foundation that Monty himself works for. What's interesting about this codec is that the developers decided to restrict the sampling frequency to 48 kHz (44.1 kHz sources get upsampled on conversion, hi-res sources get downsampled, and 48 kHz sources essentially pass through untouched). The reason is exactly the same one you mentioned a few seconds earlier: the math is just easier that way. You will only get 44.1 kHz if, for whatever reason, your device asks YouTube to fall back to the old AAC or Vorbis codecs for compatibility - which will almost never happen, especially if you're watching from a web browser or an Android phone. But since Opus is still a lossy format, it's still gonna cut off any frequency above 20 kHz anyway.
@nickwallette6201 · 3 years ago
There's a lot that gets said about "YouTube compression" and how it affects audio. Generally, the degree to which it affects the sound of any given audio demo is negligible. These days, few of us hear _anything_ that hasn't already passed through a perceptual audio encoder of some sort (MP3, AAC, Bluetooth audio codecs, Netflix / Hulu / YT, and so on...), and nearly all of those codecs brick-wall filter the highest of the high frequencies to avoid wasting bandwidth on stuff only our pets will hear anyway. The exception to this rule is the rare fabricated audio example, like in this video, which uses signals you'll rarely encounter in a typical audio presentation. Yep, those are affected by compression, sure enough. But most of the time, when somebody compares a direct feed of a source audio file with one picked up through a lavalier microphone from sound played through a 3" cube smart speaker, and then says "you won't get the full impact of this because of YouTube audio compression," I just roll my eyes. Haha, I _think_ that 128 kbps Ogg stream can adequately capture the sonic differences you were trying to convey - don't you worry about that.
@laurenpinschannels · 3 years ago
Don't underestimate the degree to which lossy compression might actually be doing a better job of preserving the signal than you think - e.g., check out Dan Worrall's "WTF is Dither". It's a long video and I don't remember exactly where, but somewhere in the middle he compares MP3 to 16-bit WAV in a situation where the MP3 *unequivocally destroys the WAV* in terms of which one represents the data better. The WAV was more lossy than the MP3. That's because naively quantizing to 16-bit integers actually introduces more noise than MP3 compression, if your signal is simple enough. It's all about what bitrate MP3 or Ogg needs in order to near-losslessly compress a given section; and Ogg Vorbis uses an MDCT with switchable window sizes rather than MP3's hybrid filter bank, which is why Vorbis can handle certain kinds of phasing sounds much better than MP3. So, yeah - as long as you're in a high enough quality mode that the compression noise is in the -100 dB range, you'll probably be able to hear whatever -70 dB effect they're trying to show. It's only when you turn down to 240p and your codec noise is at -10 dB that we have a serious problem from audio compression. Now, video on the other hand... :D
@alexatkin · 3 years ago
In my experience, watching a movie/TV show on Netflix vs. on Blu-ray is usually a night-and-day difference. It's not so much that you obviously lose highs; you seem to lose dynamic range - it sounds flat and dull. Of course it's not always enough to spoil the experience, but sometimes it definitely is. Same with the picture quality.
@jhoughjr1 · 3 years ago
Disagree completely. YT audio is highly compressed, and I can tell the difference between songs on YT vs Apple Music. No contest.
@alexatkin · 3 years ago
@@jhoughjr1 Music videos, strangely, are often the worst offenders, whereas some youtubers use music and it sounds fine. I'm very sensitive to lossy codecs too - hated Bluetooth audio until LDAC and Samsung Scalable came along.
@erewrw1906 · 3 years ago
@@jhoughjr1 I don't know exactly what you're talking about, but I've heard YouTube uses the AAC codec. IMHO, for certain bass-heavy genres YouTube is miserable - bass just doesn't translate well on it. Guitars are OK, but I still prefer my MP3s. Apple Music also uses AAC, I've heard, but I found it a bit better; I don't know if it's a specialized AAC version they use. Other than that, I've seen a test video that compared waveforms to look for dynamics compression (audio-plugin compression), and nothing was found.
@4i20 · 3 years ago
great content, thank you 💚
@jmitzenmacher5 · 3 years ago
So here's the thing: YouTube does support 48 kHz audio, and it does support frequencies higher than 16 kHz... sometimes. Every time you upload a video to YouTube, the encoder creates about six different versions of the audio with different codecs, sample rates, bitrates, etc. On playback, it automatically chooses the audio based on your network, decoding capabilities, etc. Just because the audio was ruined in the download you checked, that doesn't mean it would have been ruined for all listeners. Really it's YouTube's technical inconsistency you have to worry about (I think that might also be true for your video about cutting the video 1 frame early). TL;DR: Your description of YouTube's capabilities wasn't strictly true, but you were still right to cater to the worst-case scenario. Very interesting video!
@taragwendolyn · 3 years ago
Love the deliberate error 🥳 I also thought my hearing was failing during the sine sweep, until you pointed out that YouTube hard-cuts at 16 kHz. I'm one of those weirdos in their 40s who can still hear when shopping malls have a mosquito device... or could during the before times, at least... haven't been to a mall in 2 years.
@timbeaton5045 · 3 years ago
@@MyRackley Hmm, sadly I know mine doesn't at 65, but then I've played in too many bands with overloud guitarists and, in one case, a drummer who overhit his cymbals all the time, where we rehearsed in a small room. I still have a low level of tinnitus in my right ear, but luckily it's not really noticeable unless things are really quiet, and I guess I've become quite good (or at least my brain has!) at filtering it out of consciousness.
@eddievhfan1984 · 3 years ago
An exceptional video, sir, especially for going the extra mile and looking into YouTube's own codec shenanigans with your own examples. I regret to say I didn't hear much difference in the 7 kHz files, but considering I'm getting older, and adults lose top end in their hearing range over time, I'm not surprised. (I can barely hear CRT yoke noise anymore, which I definitely could as a kid.) Aside from pure monophonic sound, I think higher sampling rates have a dedicated purpose when doing any kind of stereo/surround real-time recording, or any audio processing involving pitch/duration manipulation. In the first case, human hearing becomes more sensitive to phase differences between the ears as frequency increases, and such differences in phase and arrival time contribute to our sense of the physical space the audio is occurring in. (Worth noting here that the Nyquist-Shannon sampling theorem assumes a linear, time-invariant process, where it doesn't matter how much or how little the signal is delayed from any arbitrary start point - human hearing, however, is definitely NOT a time-invariant process.) When dealing with sampled audio at higher frequencies, the number of discrete phases a wave can take drops off considerably: assuming a wave at exactly half the sampling frequency, you can have it however loud you want (within the limits of bit depth), but you can only have two phases of the signal (0° and 180°). One octave down, you only have 4 available phases (0, 90, 180, 270), and so on. This might contribute to the sense of "sterility" and "coldness" associated with older digital recordings that didn't take this into account. So if you're mixing audio that relies heavily on original recordings of live, reverberant spaces (a drum kit distant-miked in a big room, an on-set XY pair, etc.), it's an advantage to use the highest sample rate you can afford when recording/mixing, then downsample your audio for mastering/publishing if needed.
This way, you can preserve as much detail as possible and give your audio the best shot at being considered realistic. In the second case, having extra audio samples helps when you want to pitch audio up/down or time compress/stretch. Since some of the algorithms for these techniques involve deleting arbitrary samples, or otherwise bring normally inaudible frequencies into the audible range, having that extra information can be a benefit for cleaner processing, depending on your artistic intent.
@FilmmakerIQ · 3 years ago
Yes, I haven't factored in pitch alterations.
@nickwallette6201 · 3 years ago
That's not entirely true, actually. The Xiph video mentioned here covers the waveform-phase topic as well. The reconstruction filter after the DAC is essentially performing band-limited (sinc) interpolation on the discrete samples. Just sliding the sampled points around on the X/Y axes (if X is the sample index and Y is the word value - i.e., the amplitude of an individual sample) will alter the resulting wave's phase. Another way to think of this is to imagine using a strobe light to capture an object moving in a circle. If the speed of the object rotating about the circumference were aligned with the flashing frequency such that there are exactly two flashes per revolution, it would look like the object appears in one spot, then in another spot 180 degrees from the first, repeating indefinitely. This is basically the Nyquist frequency. From that, you could construct a perfect circle, because you have the diameter. Now imagine altering the "phase" of that object so that the strobed captures place it at different points around the circumference. You can still construct a perfect circle. Same with audio samples: it doesn't matter if the phase changes. As the Xiph video says (I'm paraphrasing, because it has been a while since I watched it), there is one and only one solution to the waveform created by a series of samples, _provided that the input waveform and output waveform have both been band-limited to below the Nyquist frequency._
@eddievhfan1984 · 3 years ago
@@nickwallette6201 Well, yes, for any arbitrary signal you can still reconstruct it with sampling, but I was mostly thinking psychoacoustically, where delay and phase variations between the ears play such a big role in stereo sound. And one of the side effects of sampling is that you get phase constraints, like I described above. For example, with a signal at exactly the Nyquist frequency, how do you distinguish between a full-amplitude sine wave at 45° and a cosine at -3 dB intensity, when they both share the exact same sample representation (alternating between .707 and -.707)? Since that phase information can spell the difference between a centered (in-phase) or diffused (out-of-phase) stereo sound space, preserving phase and delay information is super important, and with finite sample intervals there are only so many phase states you can have at high frequencies. I also acknowledge, however, that band-limiting filters induce their own phase delays, which can have a significant effect on the perceived audio - hence one of the other advantages of a higher sample rate is to relax the requirements on the band-limiting and reconstruction filters, minimizing their coloration of the audio.
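That ambiguity at exactly the Nyquist frequency is easy to check numerically (a quick numpy sketch, not from the thread):

```python
import numpy as np

fs = 48_000
f = fs / 2                    # exactly the Nyquist frequency (24 kHz)
n = np.arange(16)             # a few sample instants

# A full-amplitude sine with 45 degrees of phase...
full_sine = np.sin(2 * np.pi * f * n / fs + np.pi / 4)
# ...and a -3 dB cosine with no phase shift...
quiet_cos = np.sqrt(0.5) * np.cos(2 * np.pi * f * n / fs)

# ...produce the exact same samples (+/-0.707 alternating),
# so amplitude and phase can no longer be told apart.
print(np.allclose(full_sine, quiet_cos))  # True
```

This is the critical-sampling edge case: the sampling theorem requires the signal to be strictly below half the sample rate, and at exactly half, amplitude and phase collapse into a single degree of freedom.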
@FilmmakerIQ · 3 years ago
Delay is not an issue with the sample rate. Sample rate does not affect the precision of the timing of the wave in any respect.
@nickwallette6201 · 3 years ago
@@eddievhfan1984 With two samples per cycle, you can reconstruct a waveform with any phase you want. You could indeed have in-phase and anti-phase waveforms at 20 kHz with a 44.1 kHz sample rate. Try it: use an audio editor to create a 20 kHz sine, then invert the phase. Zoom in to the sample level and look at the waveform it draws - this is a representation of what the reconstruction filter does. I think it would be an academic exercise though, as 1) who's going to be able to determine relative phase between channels at the theoretical threshold of human hearing? And 2) that's going to be in the knee of the low-pass filter curve, where any passive components on the output are going to affect the signal. It would not be unlikely to have a mismatch between the L and R channels. High-end stuff might try to match capacitors to 1% or so, but there's plenty of gear out there (even respectable gear) that uses electrolytics rated at +/-20%. There's a lot of concern over perfection that is not at all practically relevant.
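The "try it" suggestion can also be done in numpy: sample a 20 kHz tone (just below the 22.05 kHz Nyquist limit) at two different phases and recover the phase difference from the samples alone (an illustrative sketch, with arbitrarily chosen lengths):

```python
import numpy as np

fs = 44_100
f = 20_000                      # below Nyquist (22.05 kHz)
N = 4_410                       # 0.1 s -> 10 Hz bins, so 20 kHz sits exactly on a bin
n = np.arange(N)

a = np.sin(2 * np.pi * f * n / fs)              # reference tone
b = np.sin(2 * np.pi * f * n / fs + np.pi / 2)  # same tone, shifted 90 degrees

k = f * N // fs                 # FFT bin holding the 20 kHz component (= 2000)
delta = np.angle(np.fft.rfft(b)[k]) - np.angle(np.fft.rfft(a)[k])
print(np.degrees(delta) % 360)  # ~90: the phase shift survives sampling intact
```

Below Nyquist, the samples pin down frequency, amplitude, and phase exactly; the "discrete phases" intuition only bites at exactly the Nyquist frequency.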
@JAmediaUK · 3 years ago
The problem with the 1st group mentioned (44.1 vs 48, etc.) reminded me of "complex problems have simple, easy-to-understand, wrong answers." The same is true for flat-earthers, young-earth creationists, etc. They have a very simple solution that seems to work because [the majority of] the people they are talking to don't understand the complexities. The problem Group 3, the audio engineers, have is that the majority don't understand the solution as presented mathematically, and say "that is just your opinion!" - no more important than their own opinion... You see a lot of this these days. It is great to have videos like this one that go far enough to explain the problem simply for the majority, without going off into deep (group 3) audio-engineer geek speak and MSc maths.
@FilmmakerIQ · 3 years ago
That is really an insightful way to look at it.
@JAmediaUK · 3 years ago
@@FilmmakerIQ Hi John, you call me "insightful" again and I will sue! :-)
@FilmmakerIQ · 3 years ago
Need to put a low pass filter on that comment.
@LocalAitch · 3 years ago
You switched it up between A and B, lmao. Interestingly, the frequency of the harmonic you used is really close to the NTSC horizontal refresh rate (15,734 Hz), which a CRT's flyback makes audible as it deflects the electron gun left to right and back. I'm 41, and so far I've always been able to hear 15 kHz flyback whine.
@FilmmakerIQ · 3 years ago
Yep
@GoodOlKuro · 3 years ago
So that's why you can hear that high-pitched noise from CRT TVs?
@sivalley · 3 years ago
39, and oh gods do I NOT miss working on TVs and that wretched noise. I can only imagine how horrific that noise must be to cats and dogs. We practically used to torture our pets with those damnable things.
@jhoughjr1 · 3 years ago
Yep. As a kid I could hear if a TV was on even if the screen was dark.
@ClosestNearUtopia · 3 years ago
I remember as a kid freaking wanting to smash all the school TVs - what trash they let us watch in the first place, and then the fucking beep, which I think I can still hear even now. I sometimes ran out of the classroom and told the teacher to blast herself with that ear-piercing beep! She was like: what beep!? Bitch... the older the CRT, the better the chance you can use it to drive vermin out of your garden...
@wado1942 · 3 years ago
Another great video. One thing about your sine/square test: you can simulate what would happen in a real-world situation by generating your waves at a sample rate like 3,072 kHz (64 x 48 kHz) and converting to 48 kHz to listen. That's because all modern ADCs sample at at least 64fs, often 128 or 256fs, filter out everything above 20 kHz, then downsample to your capture rate. Another experiment I ran a few years ago was recording a series of sweep tones to my blackface ADAT, which allows the sample rate to be continuously varied from about 40 kHz to 53 kHz. At 53 kHz, aliasing is *almost* eliminated, whereas it's quite audible at 40 kHz. Yes, those converters are out of date, but it's still a valuable learning tool. That said, I'm a huge proponent of 96 kHz in digital mixers, where the ADCs are working in low-latency mode. At 48 kHz, an unacceptable amount of aliasing is allowed through to keep latency in the mixer below, say, 1 ms (not a problem in analogue mixers). At 96 kHz, the converters can run in low-latency mode and have no audible aliasing. When I'm working in the box on material that was captured by dedicated recording devices (where latency is not an issue), 48 kHz is fine.
@FlamingChickenG · 3 years ago
I think it's interesting how many people rag on CD quality. CDs sound pretty good, and I think most people have a colored memory of them. It's the same thing Techmoan talks about in his video about cassettes: most people were not listening on quality equipment, and I know for my generation we mostly used CDs that we burned from MP3s, which are lower quality than CD audio. Spotify only recently got "CD quality" audio, but people don't complain about its quality.
@lamecasuelas2 · 3 years ago
CDs rule, baby!
@Carlos-M · 3 years ago
My earliest memories from the early 90s regarding CDs are that a) they sounded really, really good, and b) my mom would get REALLY mad if we played with her discs (they were expensive)! My dad had a Panasonic component stereo setup - nothing high-end or audiophile-grade, but at least half-decent. He had some Type II cassettes too, which sounded really good on that player. By the mid-to-late 90s, CDs were starting to replace cassettes as the on-the-go medium for portable players, boomboxes, and car audio, which tended to sound bad to start with - but no matter how good your system is, all of these are frankly crappy listening environments. Vinyl, by contrast, was never a portable medium, so even now if you had a vinyl player you'd probably have it in a dedicated listening room at the very least.
@peteblazar5515 · 3 years ago
1st harmonic at 3 times the fundamental frequency? Where is the harmonic at 2 times the frequency?
@Carlos-M · 3 years ago
@@peteblazar5515 a square wave is the sum of infinitely many _odd_ harmonics. So after the fundamental, the next component is at 3x the fundamental frequency, then 5x, then 7x, etc. - the even harmonics are simply absent.
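You can watch that odd-harmonics-only series converge to a square wave with a few lines of numpy (a quick illustration, not from the comment):

```python
import numpy as np

t = np.linspace(0, 1, 2000, endpoint=False)  # one period of a 1 Hz wave

# Square wave as a Fourier series: (4/pi) * sum over odd n of sin(2*pi*n*t)/n
approx = np.zeros_like(t)
for k in range(50):                # first 50 odd harmonics: n = 1, 3, ..., 99
    n_h = 2 * k + 1
    approx += (4 / (np.pi * n_h)) * np.sin(2 * np.pi * n_h * t)

target = np.sign(np.sin(2 * np.pi * t))      # the ideal square wave
rms_err = np.sqrt(np.mean((approx - target) ** 2))
print(rms_err)  # small, and it shrinks as more odd harmonics are added
```

Each added odd harmonic sharpens the edges; adding any even harmonic would instead break the half-wave symmetry that makes a square wave a square wave.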
@negirno · 3 years ago
I wouldn't rag on MP3s either. Unless the bitrate is really low, or it's encoded with an old encoder, I just can't tell the difference.
@TheAnimeist · 3 years ago
19:07 "I just want to cover some interesting notes" - clever... John, thanks for sending me down the rabbit hole. It took me 5 days to finish your video. Your instruction is always good because of the practical examples you provide. Your videos inspire conversations outside of YouTube and outside of filmmaking. Thanks for that too. Edit: sorry, wrong time stamp; could not find the original ...
@ThisSteveGuy · 3 years ago
As soon as you mentioned Monty, I knew that you got it right.
@DrakiniteOfficial · 3 years ago
My electrical communication systems prof literally just covered the sampling theorem in class today, and by chance I saw this in my recommendations. This video is an EXCELLENT demonstration of aliasing - thanks so much for making it. BTW: I can totally hear the difference between A and B on YT, but I can't tell the difference on the 7 kHz one. That could be my Bluetooth headphones, though; I'll edit this comment when I get home and try my corded headphones/speakers.
@TurboBaldur · 3 years ago
Another thing to consider is that at exactly the Nyquist limit, the samples contain no information whatsoever about the phase of the signal, so if you had a 90-degree phase shift between the left and right channels (or between multiple channels in a multi-track recording), that information would not register correctly in the audio samples. This may not matter much when just listening, as our hearing is not very sensitive to the phase of such short wavelengths, but if you start adding channels together, or doing other signal processing where the channels interact, the same signals oversampled vs. sampled at the Nyquist limit can produce a different-sounding result, even after the result has been downsampled back to the Nyquist limit.
@ABaumstumpf3 жыл бұрын
Nyquist will accurately reproduce the sound. If you THEN add extra modifications on top of that, it in no way implies that Nyquist is not 100% correct.
@TurboBaldur3 жыл бұрын
@@ABaumstumpf Nyquist is correct about the absolute minimum sampling rate, but there are benefits in oversampling.
@ABaumstumpf3 жыл бұрын
@@TurboBaldur Yes, of course, but that in no way has any effect on what we humans can actually hear, and there the 44kHz 16-bit is enough. If the mastering of the audio is done poorly, that is not the fault of the medium, nor does it make Nyquist any less correct.
@TurboBaldur3 жыл бұрын
@@ABaumstumpf Exactly - if the sampling is being done for playback to a human only, then 44.1k is fine. But if you plan to edit the audio it makes sense to get more samples, even if the final export is to 44.1k.
@peetiegonzalez18453 жыл бұрын
This is a great point, and I believe it may be why many digital recordings made in the early 90s sound "flat" compared to late-generation analog recordings. Too many engineers just relied blindly on the digital technology without thinking of consequences like this. Nowadays of course studios work with much higher bitrates and bit-depths for processing and mastering before producing the 44.1kHz or 48kHz files for release.
@MovieMongerHZ3 жыл бұрын
So in depth. Thank you so much!
@toddhisattva3 жыл бұрын
The Fourier Transform tells you how loud each sine wave in your signal is - a spectrogram, if you plot it. It also can tell you the phase, so all 3 parameters - frequency, amplitude, and phase - of a sine wave are covered. The Inverse Fourier Transform puts all those sine waves back together. In computers we use Discrete Fourier Transforms, and usually a "fast" implementation known as an FFT for "Fast Fourier Transform." (Which BTW is one of the top 3-5 hacks in all of computer science.)
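As a rough illustration of what a (slow, naive) DFT does - recovering the amplitude and phase of a sine from its samples - here is a pure-Python sketch (not an FFT implementation; the names are mine):

```python
import cmath, math

def dft_bin(samples, k):
    """Correlate the signal with the k-th complex sinusoid (one DFT bin)."""
    N = len(samples)
    return sum(x * cmath.exp(-2j * math.pi * k * n / N)
               for n, x in enumerate(samples))

# A 5-cycle sine with amplitude 0.8 and a 30-degree phase offset.
N = 64
phase = math.pi / 6
samples = [0.8 * math.sin(2 * math.pi * 5 * n / N + phase) for n in range(N)]

X = dft_bin(samples, 5)
amplitude = 2 * abs(X) / N                      # recovered amplitude
recovered_phase = cmath.phase(X) + math.pi / 2  # shift: sin vs cos convention
```

Both the amplitude (0.8) and the phase (30 degrees) come back exactly, since the tone lands on an integer bin; an FFT computes the same bins, just far faster.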
@FilmmakerIQ3 жыл бұрын
Yes but the how gets way more complicated
@moddquad83623 жыл бұрын
Aliasing is pretty much a non-issue when going through a modern codec. The generated square wave example was not filtered, as it would be on any DAC. If you recorded that wave and then displayed it, it would sound the same but no longer look square - it would look like 2 sines mixed together. Codecs sample at a much higher rate (>1MHz) with fewer bits of resolution, then downsample using a CIC filter and multiple halfband filters. Through the magic of polyphase filtering, an 18th-order elliptical halfband filter is only 4 multiplies to drop the rate by 2 with a very steep cutoff. You chain multiple halfbands together, maybe a 3- or 5-phase if needed, to drop down to a 44.1 or 48K rate. It's pretty easy to knock out any audible aliasing with a chain of tuned 18th-order filters.
@FilmmakerIQ3 жыл бұрын
This video isn't about codecs
@moddquad83623 жыл бұрын
@@FilmmakerIQ Then congrats on demonstrating why an anti-aliasing filter is important and what happens without one.
@wngimageanddesign95463 жыл бұрын
Double blind tests of Redbook 16-bit 44.1kHz digital audio vs. high-res 24-bit, 96kHz digital audio, played for average listeners, audiophiles, and high-res audio 'experts'... all couldn't accurately pick out the high-res files. The average listeners had a 50/50 probability, while the rest of the audiophiles/experts scored even lower! As an EE and music lover, I've always stressed the importance of the master recording being the great deciding factor in quality. Quality in, quality out. No amount of oversampling, upscaling, or bit rate will improve a crappy initial master source.
@noop9k3 жыл бұрын
This is about extra noise introduced during processing of the audio. Not about the output format really.
@davidasher223 жыл бұрын
Omg! So glad you mentioned the hard cut-off YT does at 16kHz. I thought I was losing my hearing during those sine wave sweeps.
@squidcaps43083 жыл бұрын
Project and storage sample rate at 48k, with each processing stage using oversampling, has been proven to be optimal. You would have to increase the project sample rate to 384kHz to get the same result. The trick is in the oversampling: allowing for wider bandwidth while processing to reduce artifacts, then filtering the unnecessary frequencies out, keeps it cleaner. 48k is not enough for some signal processing, while it is plenty for other tasks. A gain change can be done at 48k, but compressing, or anything that modifies the phase or time domain in any way, has to be oversampled to decrease overall aliasing. The strangest thing is that despite having an additional filtering stage at each processing block (for example, each plugin in a project) and converting back and forth, it is less CPU intensive. Higher sample rates by far most of the time run "empty" signal - the entire bandwidth is processed at each stage, while oversampling is not needed for linear operations. This is not a well-known thing, which is a bit odd in my opinion. You can test this at any point: devise anti-aliasing stress tests and compare a 192k project rate to the same processing done at a 48k base with oversampling. The latter has fewer artifacts.
@RJasonKlein Жыл бұрын
Excellent video. You dealt with complex issues in an easy to understand and fun way - nice job, man.
@cjc3636363 жыл бұрын
This is so cool. As a former TV audio mixer, this just rocks. And, by the way, the square wave sweep reminded me of some unknown 60s-era Saul Bass movie credit animation.
@tiarkrezar3 жыл бұрын
So, after you showed the example at 4:40, my first thought was, "well, what if you instead choose a frequency that exactly divides the sampling rate?". So I opened up audacity, made sure both my audio device and the project were set to 48KHz, and tried generating a 12KHz tone - in that case, a square wave sounds just like a sine, but slightly louder. It's easy to make sense of it if you think about it in terms of generated samples - you just get two high ones followed by two low ones, and that pattern repeats *exactly* at a rate of 12KHz. If you choose a frequency that doesn't cleanly divide your sampling rate, you have to resort to an approximation - some runs of high/low samples will be longer, some shorter, so that over a longer period, they average out to the frequency that you're trying to achieve. But in that case, you're essentially creating a longer pattern of samples that takes more time before it repeats, which creates a bunch of other spurious (aliased) frequencies in your signal. I think the real takeaway here is that mathematically ideal square waves are awkward and don't work out that great in reality. Sines are way nicer.
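The commenter's observation is easy to reproduce numerically (a hypothetical Python sketch, values mine): a 12 kHz square wave sampled at 48 kHz yields a repeating two-high/two-low sample pattern, because the frequency divides the sample rate exactly.

```python
import math

fs, f = 48000, 12000      # the frequency divides the sample rate: fs / f = 4
phase = math.pi / 4       # offset so no sample lands exactly on a zero crossing

def square_sample(n):
    """Ideal square wave (sign of a sine) evaluated at sample index n."""
    return 1 if math.sin(2 * math.pi * f * n / fs + phase) >= 0 else -1

pattern = [square_sample(n) for n in range(8)]
# Two high samples, then two low samples, repeating exactly at 12 kHz.
```

Those same four samples are also exactly what a 12 kHz sine of slightly larger amplitude would produce, which is why the playback sounds like a sine - only slightly louder.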
@FilmmakerIQ3 жыл бұрын
You chose a special case: a square wave with a frequency of the sample rate divided by four! There are two ways to think about that: either the mathematical sum as you described, or as a visual graph. Only one sinusoidal wave can fit the given samples... Instead of the samples defining the top of the square wave, they define each side of the crest and trough of a sine wave with greater amplitude!
@Photovintageguy3 жыл бұрын
Sound people that slow down sounds for sound effects etc, say they need more room like 192k. It's kinda like slowing down 120fps to 25fps in video.
@Lantertronics3 жыл бұрын
I've heard that too -- but unless they have special scientific microphones designed to capture frequencies above human hearing, I'm not sure it matters.
@Photovintageguy3 жыл бұрын
I don't think it's about frequency width. It's about stretching the entire recording. When you stretch, it makes everything thinner, like pulling a rubber band. The signal would get less resolution - fewer data points - through the entire range of frequencies.
@Photovintageguy3 жыл бұрын
This guy talks about pitch and time stretching using 96k recording. Sound effects for movies. kzbin.info/www/bejne/aaCQhJhupraXes0
@FilmmakerIQ3 жыл бұрын
Problem is the rubber band analogy doesn't work, because Nyquist does not work that way. Using 24kHz, the audio would be EXACTLY the same as 48kHz in every respect BUT only up to 12kHz. So it's not that you have more data points - that doesn't matter when the audio is sent back to analog in the speakers. I suspect the reason 96kHz would be used for slowed-down effects is the same reason I discussed in the video: headroom. With 96kHz there's about an octave and change you can maneuver around in without running into a Nyquist limit that dips into the perceivable range.
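The folding behavior behind that headroom argument can be sketched with the standard alias formula (the function name is mine): any content above the Nyquist limit reflects back into the band below it.

```python
def aliased_frequency(f, fs):
    """Frequency at which a tone `f` appears after sampling at rate `fs`."""
    f = f % fs                           # sampling is periodic in fs
    return f if f <= fs / 2 else fs - f  # fold across the Nyquist limit

# At 48 kHz, a 30 kHz component folds down to an audible 18 kHz;
# at 96 kHz, the same 30 kHz component is captured as-is.
folded_48k = aliased_frequency(30000, 48000)
folded_96k = aliased_frequency(30000, 96000)
```

This is the headroom in numbers: at 96 kHz, content slowed down or shifted near 30 kHz has room to move before anything folds into the perceivable range.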
@geoffstrickler3 жыл бұрын
When you first brought up harmonics and square waves, I thought about posting a correction because it sounded like you were about to make a big mistake by ignoring band-limiting filtering, but I watched the rest of the video… and you handled it all. Well done, including your edit post-KZbin processing. Yes, I did hear a tiny difference between your 5.2kHz sine wave and the 5.2/15.6kHz additive square wave construction. I do have exceptionally good high-frequency hearing for a 55-year-old. However, it's also important to note that music is never a pure sine wave, nor a square wave, so you would never hear even the tiny differences I heard (barely noticeable even to excellent hearing, and only because it was a pure note of extended duration) in an actual piece of music. The important part, as others have pointed out, is that your waveform must have an appropriate low-pass filter applied. That could be a 20kHz analog filter with sampling at 48kHz or higher, a 20-24kHz filter before 57.6kHz, a 20-25kHz filter before 60kHz, or a 20-35kHz analog filter and sampling at 88.2kHz or higher. And it's always good to lower the noise floor by recording at 20- or 24-bit depth. Do all your editing and mixing at something above 48kHz and above 20-bit depth, then master for 44.1/48 at 16/18/20 bit. Sure, you can master for 24-bit depth, but no one will actually be able to tell the difference.
@PatrickPoet3 жыл бұрын
John, this is the worst explanation of the connection between aperture, circle of confusion, and infinite focusing I've ever seen!
@FilmmakerIQ3 жыл бұрын
I agree.
@Frisenette3 жыл бұрын
These concepts are connected however.
@wngimageanddesign95463 жыл бұрын
LOL!
@KK-pq6lu3 жыл бұрын
Hey John, I've been doing digital signal processing since 1980 - 41 years - including spatial digital signals. Nyquist can be grasped by knowing one concept: that sampling at the Nyquist frequency, there is no phase information. Phase information is restored as the sample rate is increased above Nyquist. To differentiate a square wave from a sine wave, both still have to be faithfully reproduced, including the phase information. At 10 kHz, a 44.1kHz sample rate only produces 4 samples per sine wave, partially preserving the phase of the signal. Since a square wave is made up of more than one frequency, the phase information becomes important, as it affects the sound, not just the amplitude of the sound. 44.1 kHz works because most of what we listen to is under 8kHz. If you want to preserve phase up to 15kHz, you really should sample above 60kHz. Now, if you are listening to stereo, you really want to preserve more phase information, so it makes even more sense to go 60kHz or higher. Even though to me 44.1 kHz seems fine enough. I always wanted to make a spatial audio standard that recorded phase information as well as sampling information - a transformation rather than sledgehammer sampling. This has been done commercially outside the audio industry for over 35 years.
@trulahn3 жыл бұрын
You are totally ignoring the sound reproduction equipment's role in this. Sure, at 10 kHz a 44.1 kHz sample rate only produces about 4 samples per cycle. So? The signal the DAC recreates from those samples and sends to the vibrating membrane or paper cone of your headphones or speakers is plenty. 60 kHz may be useful during mastering of the original, but at the consumer level we don't benefit from it with proper noise cancelling and anti-aliasing applied.
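Whether sub-sample timing survives sampling can be checked numerically. A rough pure-Python sketch (my own, with assumed values): a 10 kHz sine delayed by 3 µs - a small fraction of the ~22.7 µs sample period at 44.1 kHz - is sampled, then reconstructed between samples with truncated sinc (Whittaker-Shannon) interpolation. The reconstruction still matches the *delayed* analog signal.

```python
import math

fs, f = 44100.0, 10000.0
delay = 3e-6            # 3 microseconds, far below the 22.7 us sample period
N = 400

# Samples of a delayed sine, band-limited well below Nyquist (22.05 kHz).
x = [math.sin(2 * math.pi * f * (n / fs - delay)) for n in range(N)]

def reconstruct(t):
    """Truncated Whittaker-Shannon sinc interpolation from the samples."""
    total = 0.0
    for n in range(N):
        u = math.pi * (t * fs - n)
        total += x[n] * (1.0 if abs(u) < 1e-12 else math.sin(u) / u)
    return total

# An arbitrary instant between samples, away from the window edges:
t = 200 / fs + 5e-6
err = abs(reconstruct(t) - math.sin(2 * math.pi * f * (t - delay)))
```

The residual error here comes only from truncating the sinc sum, not from any loss of timing information - the microsecond-scale delay is fully encoded in the sample values.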
@wimdouwe3 жыл бұрын
Nice video and thanks for the link to Monty Montgomery's explanation
@peregreena90463 жыл бұрын
I remember some article in an audiophile magazine about a study in the early days of CDs. A recording company recorded a classic orchestra on both reel-to-reel tape and a PCM processor. When played back to an audience, there was no clear line between the media. Depending on the piece played, the majority preferred one or the other. The conclusion at this point was that each recording added some specific artifacts to the music, which might benefit one piece, but not the other. After this, they went to analogue and digital mastered vinyl records and high end tape cassettes on one hand, CD on the other. All of the same performance. Oddly enough, here the lines were defined more clearly. The digital camp voted for the CD, the analogue camp for vinyl and cassette. Then one of the technicians had an idea: They went back to the master recordings, but added noise from a blank vinyl record or a blank tape. The result was that everyone voted for their favoured medium. Vinyl enthusiasts picked up on the clicking noise from the blank record, the tape guys picked up the tape noise. So either consciously or subconsciously, they confirmed their bias. I wish I could find that study online, maybe someone reading this can help? Different sample rates, compression methods and bitrates affect music recordings. The artifacts become part of the music and some will prefer the sound of one type over another. A lot of it also depends on how much care has been taken during production, from recording to mastering to compression of the publishing file. The audible difference between low and high sample rate might be minuscule, but because more care has been taken to produce the high end recording, the result sounds better. Now throw in confirmation bias, and everyone will say they are right because ...
@a1guitarmaker3 жыл бұрын
One time you said the right words "four ninety-three" while the numbers on screen said "439". I was not expecting to hear the difference between 440 and 439!
@FilmmakerIQ3 жыл бұрын
Yeah, I dyslexia
@kensmith56943 жыл бұрын
A small error: The Nyquist frequency is the first one you can't reproduce, not the last one you can. Imagine a sine wave that happens to cross through zero right at each sampling point and you will see why.
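Ken's thought experiment is easy to verify numerically (a minimal sketch of my own):

```python
import math

fs = 48000
nyq = fs / 2  # the Nyquist frequency itself

# A sine at exactly the Nyquist frequency, phased to cross zero at each
# sample instant: every sample is sin(pi * n), which is zero for integer n.
samples = [math.sin(2 * math.pi * nyq * n / fs) for n in range(16)]
```

The tone vanishes entirely from the samples, which is the intuition for why reproducible frequencies must be strictly *below* half the sample rate.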
@FilmmakerIQ3 жыл бұрын
Yes that is correct
@mhoover3 жыл бұрын
As usual a very thorough and clear exposition.
@c.augustin3 жыл бұрын
Interesting! I never thought about *not* having a low-pass filter (to cut out higher frequencies) in front of an AD converter - because it would sound really, really ugly! (There are some tricks to get away with weak analog filters, but they involve oversampling and digital filtering, aka signal processing.) As an engineer it was always clear that you would need this high-cut filter. And on your 5.2 kHz demonstration - I can only hear the switching itself. There's a discontinuity in many switching events, but when the switching was continuous (on crossing the zero-line I'd guess) I couldn't hear it at all. Yes, my hearing is already that bad (but nearing 60 this is quite normal). Where it does make sense to use higher sampling rates (and 24 bit) is in audio processing, because higher "resolution" (in amplitude and time) makes it easier to manipulate signals. Same as in image processing: It makes perfect sense to use 16 bit per channel (or even 32 bit float) images in high resolution when doing advanced image editing, but the end result could be distributed in much lower resolution with just 8 bpc (this is common practice); yes, there's still a chance that you run into issues with color management, but there are ways to deal with that on the "output" side.
@boozydaboozer3 жыл бұрын
I'm 47, suffer from tinnitus and use $25 wireless Logitech headphones but even I could hear the difference between the two 5.2kHz samples. The aliased one sounds 'dirty' to me. Not sure what this proves though.
@FilmmakerIQ3 жыл бұрын
Maybe it proves your wireless Logitech headphones aren't very good? Try it on speakers...
@milasudril3 жыл бұрын
Yup, for audio processing it always makes sense to use float. You get:
* A higher dynamic range (145 dB vs 96 dB), which gives you more headroom before clipping
* Simpler (and possibly faster on anything newer than a Pentium II) code when working with the normalized range -1 to 1
For image editing, it depends on your purpose, but VFX requires the higher dynamic range of 16 or 32 bits per channel. Editing for a website or printer may work with less headroom.
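The headroom point can be illustrated with a toy comparison (hypothetical helper names, deliberately simplified clipping model):

```python
INT16_MAX = 32767

def boost_int16(sample, gain):
    """16-bit integer path: values beyond full scale clip irreversibly."""
    return max(-32768, min(INT16_MAX, int(sample * gain)))

def boost_float(sample, gain):
    """Float path: values beyond 1.0 survive and can be scaled back later."""
    return sample * gain

hot = 30000                               # a sample near 16-bit full scale
clipped = boost_int16(hot, 2.0)           # stuck at 32767: information lost
kept = boost_float(hot / 32768.0, 2.0)    # ~1.83: above "full scale" but intact
recovered = kept / 2.0                    # undo the gain; original value back
```

In the integer path the over-full-scale value is gone for good; in the float path lowering the gain later recovers the original sample exactly.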
@butson893 жыл бұрын
Always the best videos!
@SianaGearz3 жыл бұрын
Curiously, it's possible to hear frequencies up to at least 50kHz; it has been demonstrated with bone conduction experiments all the way back in the 50s or so. However, it was also demonstrated that they basically don't really matter: it's impossible to tell apart frequencies above approximately 16.5kHz, they all sound the same, and there is some hard anatomical reason for that which I forget. So you may perhaps actually want to capture a little ultrasonic energy, but you can fold it back into the band above 17-ish kHz. Band-limited synthesis of the square wave is a solved issue. I think the simplest way is additive synthesis from sines, which you cover right in this video. Since Adobe has ignored this well-known insight, one can consider their square wave synthesizer buggy by design; maybe they made it this way to look good to amateurs, since a band-limited square wave always looks like it's heavily ringing, even though it's not. Unfortunately a lot of algorithms and plug-ins have some aliasing or other sampling-rate-related issues such as "EQ cramping", either due to limited computational budget or by oversight. So high sample rate intermediates are sometimes good, though they should be ever more rarely needed as far as DAWs, their built-in effects and generators, and higher-end plugins are concerned. Audition probably doesn't have quite that professional an ambition for a silly effect. Something to keep in mind is that most recording devices don't truly have a configurable sampling rate at the lowest hardware level. The reason is that the analogue filter that would reject aliasing needs to be tuned to the sampling frequency, and you don't want to include the same hardware several times, plus yet more hardware to switch between those variants - not only for cost, but also for the noise and other degradation that ensues. So the internal sampling rate can be, for example, 384kHz, and often the analogue anti-aliasing filter will have a corner frequency somewhere just north of 20kHz.
So you have over 3 octaves of filter room; with a 36dB/oct filter, that's like 110dB of suppression for all the junk. Then the ADC will have internal downsampling to something more palatable, like 48/96/192kHz, and these are easily aliasing-free. This isn't entirely how modern ADCs work (keyword: delta-sigma modulation), but it's not too unfair a simplified representation. If 44.1/88.2kHz are desired, resampling happens elsewhere downstream, in a DSP or software, and of course then it's a question of how much you trust that particular implementation to be low-aliasing. Just 12 years ago it was not uncommon to find fairly low-quality sample rate conversion in a major DAW! It's not entirely trivial, and fairly computationally taxing, to get right. Things have gotten a lot better since. But for a given audio interface, you shouldn't expect 48kHz mode to introduce any aliasing that you can avoid by recording at 96/192. Besides aliasing, the other potential resampler behaviour trait is phase shift, which nominally isn't audible, but under some circumstances can be.
@VioletGiraffe3 жыл бұрын
I bet it's not 50 kHz sound you can hear but lower harmonics from that vibration exciting stuff in your body.
@SianaGearz3 жыл бұрын
@@VioletGiraffe Harmonics are always above the fundamental, not below. But indeed it has shown that there are no auditory hair cells that correspond to higher frequencies than about 16.5 KHz. And yet there is apparently a mechanism to excite them with a higher frequency signal.
@laurenpinschannels3 жыл бұрын
yeah subharmonic resonances would make that possible, it's the same sort of thing where humans can (pretty easily) distinguish phenomena above 60hz - despite that when *staring dead-on at a screen*, your eyes can't tell the difference. but your cochlea has actual pitch-specific resonators; the hairs float in the resonator bit, and the resonator bit definitely does not have a 50khz band. so, yeah, it makes sense that you could identify the presence of sound in your environment that was generated by a 50khz emitter, but there is actually no possible way your brain could receive it as 50khz sound, it would be like seeing gamma rays or being touched above the top of your head - you can hallucinate the experience due to a real phenomenon in your environment, but it's not really representing reality correctly.
@TheTechnoPilot3 жыл бұрын
This was FABULOUS as always John! Amazing description!
@muizzsiddique3 жыл бұрын
For me the 16kHz limit is fine because my hearing ends sooner than that :( Definitely wasn't the case 10-15 years ago.
@ABaumstumpf3 жыл бұрын
It is a similar story with image resolution, where people claim that a 4K TV is way better than their old 1080p TV - but the difference was not really due to resolution but size. You need a rather large screen at a close distance for any visual difference between 1080p and 4K, and now with 8K... you need like a 60" monitor at 1m distance for there to be any visual difference. 44 kHz 16 bit is enough for humans - for us that can be called "perfect". There has not been a single human who has ever been shown to accurately hear anything above 21kHz. For the bit depth - kinda debatable, as without noise shaping, dithering or anything like that this is "only" ~96 dB SNR - so from the faintest sound perceivable (you'd need to be literally dead to not have the sound of blood flowing through your veins) up to sound levels that cause permanent hearing damage with just half an hour of exposure per day. You could literally have an audio track with the drop of a needle and being on a busy road - and both things would be fully captured. Doing ANYTHING but listening to the audio is a different beast. Just imagine taking a photo with a resolution just high enough that it looks perfect to you (doesn't even matter what actual size/resolution) - ok. Now take the same image and stretch it to, say, 5 times the size - oh, it suddenly is no longer perfect. When you want to manipulate any data, be it image, sound, or anything else, you end up introducing distortions and losing some precision, so you'd better make sure that the initial data you get is way more than you actually want to deliver at the end, and do all your manipulations with as much USEFUL data as possible. With audio that often means capturing >20 bits of depth at 96 kHz - which allows you to squeeze and stretch the sound a lot before any unwanted distortions become audible.
Useful as in what this video is showing: the problem of aliasing. You do NOT want that in your data, so you'd better just use >96kHz during manipulation and then filter all the high-frequency stuff out before it ends up getting folded into the audible range. Because once it is there, you are not getting rid of it anymore.
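The ~96 dB figure quoted in this thread comes from the standard rule of thumb for an ideal quantizer driven by a full-scale sine, roughly 6 dB per bit. A quick sketch (the function name is mine):

```python
def quantization_snr_db(bits):
    """Theoretical SNR of an ideal b-bit quantizer with a full-scale sine."""
    return 6.02 * bits + 1.76

snr_16 = quantization_snr_db(16)  # ~98 dB, usually rounded to "about 96 dB"
snr_24 = quantization_snr_db(24)  # ~146 dB
```

Each extra bit halves the quantization step and buys about 6 dB of noise floor, which is the arithmetic behind recording at 20 or 24 bits for editing headroom.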
@nathan430823 жыл бұрын
As someone who has repeatedly defended digital audio, including debunking false claims, I've been posting that Monty video for years. Great stuff that. Dan Lavry's White Paper has also been quite informative. I own a Lavry AD11 as well as a DA10 and record at 24/96 kHz most of the time for my songs, a handful of which you can find on Soundcloud. You can almost make out the AD11 under the desk behind my guitars in this video: kzbin.info/www/bejne/fqixf5ewnq-VmNk.
@dodgingrain36953 жыл бұрын
As a mixing engineer for over a decade I'm glad to see you got this right. I'm also glad that at over 50 years old I can still hear the difference between waves A and B. And for the vast majority of people listening to audio on crappy playback systems it doesn't matter one bit.
@shootinbruin36143 жыл бұрын
When you said you loved Monty's video, I didn't realize you loved it so much you'd make your own (also very informative) video adding to the topic! Makes me glad I shared the links! In regards to gaining a perceptible increase in audio quality, I personally believe that data is better spent increasing the bit depth of the digital recording. Doing so would improve the noise floor, but even this would only make a real difference in the highest end headphone setups or a dedicated speaker room (then again, the people who own these things are generally the ones debating this topic to begin with, right?)
@FilmmakerIQ3 жыл бұрын
I'm firmly in the increase bit depth camp as well - but from my perspective, it's just buying insurance. I shoot a lot of stuff where I can't really monitor the audio - I'm just capturing everything - and with 24 bits have a LOT more room for error in the volume.
@shootinbruin36143 жыл бұрын
@@FilmmakerIQ The nice thing is that relative processing and storage cost is constantly going down, and there's always that consumer who's willing to pay for "the best." Who knows, maybe in our lifetimes 480kHz 64 bit will become mainstream haha
@nickwallette62013 жыл бұрын
As was said here, it's cheap insurance, so why not. But, TBH, even good 16-bit converters are already near, at, or better than the noise floor of the analog signal chains on either side. (Especially when you start digging into the technical details of dithering.) Even if you think about the absolute top-shelf reproduction chain some crazy audiophile may have, with a home mortgage poured into their Class A amplifiers and directional speaker cables held up by non-resonant cable guides.... The studio was still combining a dozen tracks together (combining each of their noise floors) through a sound board with a gazillion passive components and a ton of make-up gain after the summing stage, sourcing each of those channels from pre-amps, EQs, and compressors built in the 1960s for that warm analog sound.... Did ANY of that equipment have a -120dB noise floor? God no. When combined, do you think there's any chance that a good CD player's DAC is the bottleneck? :-) About the only thing 24-bit (or higher) DACs can do is handle those digitally-generated fade-outs with a little more accuracy. Again, in the recording chain, there is actually incentive to use higher bit depths: To provide margin for error. In most editing suites, all source material will be converted to 64-bit floating point values on-the-fly anyway, and only re-quantized to integer samples for playback or bouncing to the master files. But still...
@shootinbruin36143 жыл бұрын
@@nickwallette6201 I didn't even realize directional cables were a thing. How does that work?
@nickwallette62013 жыл бұрын
@@shootinbruin3614 Your guess is as good as mine. Probably about as well as using hospital grade AC outlets, or coloring the edge of CDs with Sharpie to prevent light refraction.
@yuan-jia3 жыл бұрын
Hey John, this is great seeing you do some new technical and concise teaching videos. Your work is so helpful for anyone digging in a bit in the subjects you tackle, so thank you for that!
@nocturnus0093 жыл бұрын
PS, I'm that guy that noticed FFT was one of the math problems in the SETI@home work-unit analysis & immediately deep-dived into it. Glad I did, because it was later covered in a Numerical Methods class… good times.
@davewestner2 жыл бұрын
Thanks man....really useful info, but the main reason I wanted to leave a comment is that I really dig your set! Looks cool!
@AdrianBacon3 жыл бұрын
Another case for generally higher sample rates has less to do with being able to accurately reproduce frequencies and more to do with channel-to-channel temporal precision. In short, when dealing with more than one audio channel, like stereo, or a theater sound system, higher sample rates tend to produce a superior-sounding stereo field. Each ear may not be able to hear much above 16kHz, but both ears combined are extremely good at distinguishing differences between each ear. This is pretty straightforward to check out for yourself. Get a CD-quality recording of a live performance, then get the same recording but remastered at 24/96 or 24/192 (as long as the recording was originally recorded to analog tape), and listen to each one on a reasonably good stereo system. They should technically be audibly indistinguishable… except that they aren't… The higher sample rate version has a superior stereo sound field. It's like night and day, and you can't unhear it once you hear it. Now… this doesn't mean we should universally use higher sample rates. Not everything has a stereo sound field that will benefit from that, but… for the stuff that does, man, what a difference.
@FilmmakerIQ3 жыл бұрын
That's not quite true - Again, Monty's video explains: kzbin.info/www/bejne/mXq0anyOiLqtq68
@AdrianBacon3 жыл бұрын
@@FilmmakerIQ That's not exactly what I was referring to, but yes, the part you linked to is technically correct, but with a few things that he doesn't address (the sins of omission that he refers to in the epilogue). This is why a higher sample rate stereo recording does in fact have an audibly better-sounding stereo sound field. As usual, there's more to it than that, and yes, I have watched the original video in full. Multiple times. It's a classic.
@FilmmakerIQ3 жыл бұрын
I don't see the argument for why there would be a better stereo field if the timing is already perfectly established. Your exercise of listening to a CD master and then a remaster is also just an example of bad science. Of course the remaster is going to sound better - it's remastered... that's not apples to apples.
@AdrianBacon3 жыл бұрын
@@FilmmakerIQ *sigh* The timing he demonstrated is correct *as long as the frequencies involved (and the differences in frequencies involved) are within the bandwidth imposed by the sample rate* (one of his sins of omission). The point I was attempting to make was that humans can hear complex high-frequency differences between two different sound sources in space at much higher timing precision and frequencies than can be accurately represented at commonly used lower sample rates. The example I gave with the remastered version was a simplified attempt. You can start with the high bit rate remastered version and generate a low sample rate version from it and get the same audible effect. Just be careful about what recording you start with, because very few recordings actually contain enough high-frequency content. I'll leave it at that. I'm happy to have a productive conversation, but I'm not particularly interested in trying to change anybody's position on this, and you clearly have one that differs from mine. BTW, I'm not attacking the content of your video; it's actually quite good.
@FilmmakerIQ3 жыл бұрын
Got it. See, my frustration was that you were basically making a claim with absolutely no argument to back it up other than that it wasn't mentioned. Lots of people who really don't know what they're talking about do that, so I'm already wary. Thank you for explaining the reasoning. I'm still not really buying the stereo argument though. As someone that can't really sense the high frequencies, I could not tell the timing of high frequencies that are outside the band limit anyway. And it would need equipment far superior to what I have to actually even have a chance of hearing it. That, and I'm so unbelievably jaded about placebo effects at this point that I don't trust anything that's "slightly different".
@mattstegner3 жыл бұрын
The 16k cut-off is probably the encoding setting KZbin picked for the codec, not some hard filter they applied. Most perceptual encoders (AAC, MP3) will throw away high-frequency content. I mean, it probably wasn't a nefarious decision by KZbin.
@FilmmakerIQ 3 years ago
Of course it wasn't nefarious... But it was one annoying obstacle in trying to demonstrate this concept. And then it's only on SOME of the streams, not all...
@aarongrooves 3 years ago
This is such a fun and informative video! Two quick thoughts: 1) Are you messing with us?? 😂 At 17:16 "WAVE A" is the sine, & "WAVE B" is the square, but then at 17:21, it switches (I can clearly hear the difference yay lol). If it's just a mistake, then no worries; but if this is intentional, you are hilarious, my friend! 2) I discovered this issue the hard way, working on a distorted guitar sound for a youtube animation. I was getting horrible artifacts, and it drove me crazy. Ultimately, I hypothesized that frequencies were interfering oddly at 44.1k, so I rendered at 48k, which helped. But after seeing your video, I now understand what was going on. Unfortunately you can still hear the artifacts. If you're curious, it's here: kzbin.info/www/bejne/mWHImJZ7p9CUppI with clear artifacting from about 5:03-5:08 (I queued it up). Anyway, you rock! Thank you so much for this incredible video!
@FilmmakerIQ 3 years ago
It's intentional to get rid of one more bias to prove you could actually hear it! ;)
@aarongrooves 3 years ago
@@FilmmakerIQ I see. Clever! Btw, I downloaded the 7kHz file and listened, and I could hardly tell a difference, but I DID hear a consistent difference and was able to discern the square wave. It just feels/sounds a little louder. When I looked at the wave to verify, it appears louder. So I adjusted them to have the same level in the meter, but the square STILL sounds louder to me. I even made the square softer, and it still sounds louder. For them to sound the same, I had to make the square about 10% softer than the sine. So maybe I really am picking up that 21kHz sound. This is really interesting. Thanks for the adventure!
@FilmmakerIQ 3 years ago
When I constructed the two sine wave combination, I left the power of the waves equal - but since there are 2 waves instead of one, it should show up in the waveform as slightly more powerful (because after all, there are two wave energies being added to the signal). Could also be there's some intermodulation between the two tones - that's a subject I'm not so clear on yet. It would take a lot more studying and reading to see where that inquiry goes... Try generating a 21 kHz tone and see if you can hear just that. But this could be a case why a higher sampling frequency for playback isn't a good idea. You have stuff you can't hear affecting what you can hear. BTW - how old are you?
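The "slightly more powerful" point is easy to check numerically. A sketch that builds the 7 kHz sine and the 7 kHz + 21 kHz pair at equal per-tone amplitude and compares their RMS levels (the 192 kHz working rate is an arbitrary choice so the 21 kHz partial is represented cleanly):

```python
import math

RATE = 192_000   # working sample rate for the sketch (arbitrary, well above 2x21 kHz)
N = RATE         # exactly one second of samples

def rms(samples):
    """Root-mean-square level of a sample list."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

t = [n / RATE for n in range(N)]
sine = [math.sin(2 * math.pi * 7_000 * x) for x in t]
pair = [math.sin(2 * math.pi * 7_000 * x) + math.sin(2 * math.pi * 21_000 * x)
        for x in t]

print(f"7 kHz alone:    RMS = {rms(sine):.3f}")   # ~0.707
print(f"7 kHz + 21 kHz: RMS = {rms(pair):.3f}")   # ~1.000, i.e. about +3 dB
```

The two tones are orthogonal over a whole second, so their powers simply add: two equal-amplitude tones carry twice the power (+3 dB) of one, which matches the "slightly more powerful" waveform observation.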
@timbeaton5045 3 years ago
@@FilmmakerIQ Haven't checked your download yet, but as to the "volume/loudness" difference between the sine and the two-sine combination, as mentioned by Aaron: you say you left the power of the two waves equal? Surely, and correct me if I am wrong (yes, it does sometimes happen!😁), to accurately add the overtone you should have calculated the amplitude of the first partial (the 21 kHz sine) according to the Fourier transform of an ideal 7kHz square wave. I don't have the math to do this, but I would expect that each successive tone in the Fourier series (yes, in theory, an infinite series) would decrease in level, presumably converging to zero at infinity* (sorry, told you maths wasn't my strong suit!). So the first odd partial should be multiplied by the coefficient as calculated by the Fourier series expansion, whatever that turns out to be, but you simply added the partial at the same amplitude. So that probably explains the audible difference between the two waves. PS found this on the Khan Academy site, wherein the actual derivation of the odd numbered partials (and their amplitude coefficients) becomes apparent. www.khanacademy.org/science/electrical-engineering/ee-signals/ee-fourier-series/v/ee-fourier-coefficients-for-square-wave *Presumably due to the fact that if the series DOESN'T converge to zero, then the infinite series will blow up to infinity, sort of like the famed Ultra Violet Catastrophe from the theory of black body radiation.
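For reference, the Fourier-series amplitudes of an ideal unit square wave are b_n = 4/(nπ) for odd n (even harmonics vanish), so the first overtone is indeed one third the fundamental's amplitude. A quick sketch of the first few partials of a 7 kHz square wave:

```python
import math

def square_wave_partial(n):
    """Amplitude of the nth harmonic of an ideal unit square wave: 4/(n*pi) for odd n."""
    if n % 2 == 0:
        return 0.0   # even harmonics vanish for a square wave
    return 4 / (n * math.pi)

fundamental = 7_000  # Hz, as in the video's example
for n in (1, 3, 5, 7):
    print(f"{n * fundamental:>6} Hz  amplitude {square_wave_partial(n):.3f}")
```

So the 21 kHz partial of a true square wave would sit about 9.5 dB below the fundamental, rather than at equal amplitude.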
@FilmmakerIQ 3 years ago
I left them the same power because the higher overtone would have been outside the range of my hearing. Visually you should see the combination of tones to be more powerful, but the question is: can you hear that additional tone?
@steveg219 5 months ago
Good explanations of complex information, great demo ideas
@mikethek5494 3 years ago
The higher the sample rate, the bigger the file: practical limitations come into play. That's why everyone uses MP3s. A 160kbps MP3 sounds as good as an uncompressed CD to most people.
@kernelpickle 3 years ago
I’ve been recording and mixing for years, and the only time sample rate matters is on the recording. 24-bit audio @ 192KHz is indistinguishable from analog tape, and if you can record your audio at that sample rate, that will give you the option to master it for any format you want, with the least amount of degradation to the sound. For folks that understand how film and video work, it’s similar to folks that are shooting video in 4k if they plan on making 1080p content or in 8k if they’re planning to release something in 4k, because even though they never plan to release anything at that higher resolution, it gives them more options for cropping the footage and doing other stuff that you wouldn’t be able to if you shot video at the intended output resolution of the finished product. Applying high, low or bandpass filtering to audio is essentially the same as cropping an image, and the more detail you have to crop, the better it’s going to look or sound. Just think about an image, if it’s the size of the file you want the final output to be, and you decide to trim off the edges to reframe the photo, and then if you increase the image size so it matches the output resolution you started with, then you’re gonna be looking at something larger and far less detailed and blurry than you would have if the image had started out at a much higher resolution. I will be the first to admit my recordings are all at 44.1KHz or 48Khz, but that’s because I couldn’t afford the hardware (or it didn’t exist when I made the recordings) so the end results that I got with those mixes never sounded as clear or crisp as the stuff you hear that’s been stamped with the official “Mastered for iTunes” label. Another interesting topic I think that builds on this lesson would be to discuss the process of dithering when mastering audio. 
Some folks might be surprised to find out that the best sounding digital masters deliberately introduce white noise into the file as part of the mastering process, especially when downsampling from something like 192KHz audio to 44.1KHz.
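The dithering mentioned above can be sketched in a few lines - a minimal, illustrative TPDF (triangular probability density function) dither applied when reducing 24-bit samples to 16 bits. The function name and scaling here are assumptions for the sketch, not any particular mastering tool's algorithm:

```python
import random

def dither_to_16bit(sample_24bit):
    """Quantize one 24-bit sample (range +/- 2**23) to 16 bits with TPDF dither.

    Summing two uniform noises gives triangular-PDF dither of +/- 1 LSB
    (one 16-bit LSB = 256 in 24-bit units), which decorrelates the
    quantization error from the signal at the cost of a faint noise floor.
    """
    lsb = 256  # one 16-bit step, measured in 24-bit units
    dither = random.uniform(-lsb / 2, lsb / 2) + random.uniform(-lsb / 2, lsb / 2)
    quantized = round((sample_24bit + dither) / lsb)
    return max(-32768, min(32767, quantized))  # clamp to the 16-bit range
```

Noise-shaped dither, as used in serious mastering, additionally filters this added noise toward less audible frequencies; that part is omitted here.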
@FilmmakerIQ 3 years ago
Okay, I've gotta do a video on this because that analogy is completely wrong. Also, analog tape has worse specs than 16-bit 44.1.
@kernelpickle 3 years ago
@@FilmmakerIQ analog tape has way more dynamic range and headroom than 16-bit audio at 44.1KHz. That's why everyone was still recording to analog tape, well after the CD, DAT and other forms of digital audio were invented. Believe me, they didn't do it because it was easier or saved money. Maintaining an analog recording studio with massive tape reels was an expensive and fiddly endeavor, so anyone running a studio back in the day would've jumped on the latest technology if it would've simplified that process. It wasn't until the 2000's that everyone eventually converted to digital recordings, when sample rates and quality of studio gear were high enough to record 24-bit audio at sample rates well above 44.1KHz. You don't have to like my analogy, because it's not exactly perfect, but people know more about editing photos and videos these days than they do about audio--and they just need something they can wrap their heads around, to know why people choose to record at higher sample rates than what we hear as the finished product. However, my explanation and analogy are not wrong--let alone completely wrong. I not only studied digital audio in college, I also worked in radio, and even helped teach a class on digital audio production. The professor wasn't the most skilled at recording and editing, because he came up in the analog era, and just used the computer like a tape deck and did everything old school. So, I helped him teach students one-on-one, how to actually use a DAW in one of the studios, so that they could record their assignments. I still record, mix and produce music for myself and others in my spare time, so I might not be a YouTuber but I know what I'm talking about, and I'm not sure you know what I'm talking about, because if you did, you wouldn't call me "wrong" and use that as the catalyst for making a video to correct me. 
I have no idea what your credentials are or experience in this field is, but I got the impression that you're someone who has some technical understanding, and just learned all of this shit in the process of making your video, and you really don't have more than a decade of actual knowledge. It's funny, because this video was actually lacking some pretty basic information about the topic. You didn't even explain why someone would want to record anything at 44.1KHz, when there are much higher sample rates. You brought up using 48KHz as the sample rate, but didn't explain where that comes from. I think your viewers are even more ignorant than you on the subject, and might not know that CDs happen to use 16-bit @ 44.1KHz, and that DVD audio uses 48KHz. For anyone else reading this that actually cares to learn something, CD's compromised on the sound quality, because they couldn't make players that played back compressed audio without making them super expensive, and that was the highest quality sound they could use and still fit an entire symphony onto a single disc. (Audiophiles are historically fans of classical music, and when you're launching a new music format that's only going to be affordable to the wealthy and/or those with "discerning taste", you kinda want to make sure you can cater to them a bit. It was a huge selling point for anyone sick of flipping albums to hear the second half of the performance, and I'm sure that without the support of those snooty weirdos, CDs might never have taken off.) DVD's used 48KHz because it was the base sample rate used by DAT, which was one of the original digital recording formats, and because it was what people were using in studios, it got adopted by MPEG-2, DVD and digital broadcast formats. It only sounds slightly better, and it's almost imperceptible if someone uses proper dithering when creating the final audio file. 
It was simply a matter of compatibility with existing pro-audio equipment, which also supported higher sample rates like 96KHz. Good studios would record at the higher sample rate, and then downsample their work for the finished product. DVD-A used 24-bit audio @ 48KHz, because they were purely an audio experience, so they could use up more of the space on the disc for higher quality sound. Newer formats like BD (and the now dead HD DVD) used 96KHz, again, because of the larger amount of space available. Which is still really good sounding, but it's still only half the sample rate of the highest quality digital recordings, which is 24-bit @ 192KHz. There may eventually come a time when there's equipment that can capture audio at a higher sample rate, but even the obnoxious audiophile community that would typically support anything that's higher quality, just for the sake of it being measurably better (even if it wasn't perceptibly better) hasn't been pushing for anything higher. Turns out, even they can't tell the difference between 24-bit audio @ 192KHz and a super clean analog recording from a well maintained deck with Dolby noise reduction. If you don't overdrive the tape, or have it distort in the upper frequencies, and you play it back on equipment that doesn't have any ground hum, it sounds fucking amazing--and so does 24-bit audio @ 192KHz, which I guarantee you've never heard in your life. Unless you're in a legit recording studio with high end gear to hear the difference, you can't tell. You can absolutely hear the difference between analog tape and the much lower quality audio used by CDs, because the dynamic range is reduced to 96 dB (which is a non-trivial 48 dB less than 24-bit audio) and more importantly, it's less than the 110 dB range of analog tape when recorded using a Dolby SR noise reduction system. 
32-bit audio hasn't really taken off, because 24-bit audio is already overkill with a wide dynamic range of 144 dB, which is higher than the theoretical dynamic range of human hearing, which taps out at 140 dB--so 192 dB is just needlessly wasting storage space. That said, 16-bit audio with proper noise shaped dithering can have a perceived dynamic range of 120 dB, but again pure analog tape also has an effectively infinite sample rate, so that combined with the actually greater dynamic range makes it sound better than CD audio. Honestly, I'm not even sure what the point of your video even was, because YouTube isn't a platform capable of even showing the subtle differences between audio using sample rates of 44.1KHz and 48KHz, especially when YouTube already filters out everything over 15KHz. You may not be able to hear sounds over 15KHz, but I still can, and at this point if your hearing is already damaged enough to the point you can't even hear a sine wave between 15-20KHz, then you're clearly not the guy who should even care, because those sounds aren't for you, and I would agree that you shouldn't invest in anything better than CD audio, because it's completely lost on you. For those of us that actually understand digital audio, and have fully functional ears that can hear everything from 20Hz to 20KHz, there's plenty of reasons to record or listen to music that's using a higher sample rate and bit depth than CD audio. Of course, that's just a simplified explanation of some of the vast amounts of information your video was lacking, because I didn't even discuss the bit rate of digital audio (mostly because we were discussing uncompressed digital audio, and it's only when compressing audio files that bit rate becomes an issue, because that's where the sound quality gets drastically reduced.) But hey, you're just a guy who doesn't really have a background in this stuff, so I don't expect you to talk shop on the fine points of all this. 
Those of us who work with this stuff for real actually need to know how our recording medium actually works, and we have to know how audio works, so that when we're mixing it for your consumption, it sounds right--so we don't expect laypeople to know how the Fletcher-Munson curve affects our hearing during the process of recording and mixing, or on playback over a sound system of any kind. So, while the title of your video isn't wrong--the work you showed to get to the right answer is, because nobody in the history of the music and recording industry, or tangentially film and television, ever said 44.1KHz was optimal. The reason it's not optimal is because the low pass filter is still attenuating frequencies within the audible range. So when Harry Nyquist figured all this shit out, he was merely pointing out the bare minimum that audio had to be sampled at to reproduce the full range of human hearing. He wasn't wrong, it's just that there's no perfect low pass filter that exists, capable of attenuating frequencies outside the range of human hearing, without attenuating audible signals. So, even with the best possible filter, you're still going to cut things off well above what we can hear, just to make sure nothing gets cut. In the real world, I typically don't allow my mixes to contain very much above 15KHz, because as you've noted, it's not supported by YouTube, and most people won't hear that stuff anyway. However, I do allow reverb to contain as much high end content or "air" as we call it in the business, because those are the subtle things your ears will detect and miss if it's unnaturally chopped. It's like bad lighting in a poorly edited photo, or CGI--you have to be an expert to know what you're looking for to see it, but we instinctively know when those subtleties are lost and it will seem wrong or fake. Anyway, good luck with your channel. 
Hopefully you spend some time learning and doing some research before you go off and make something that's going to confuse or misinform your viewers.
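For what it's worth, the bit-depth figures thrown around in this exchange can be checked against the standard quantizer SNR formula for a full-scale sine, roughly 6.02·N + 1.76 dB for N bits (the round 96/144/192 dB numbers quoted above come from the simpler 6-dB-per-bit rule of thumb):

```python
def sqnr_db(bits):
    """Theoretical signal-to-quantization-noise ratio of an N-bit quantizer
    driven by a full-scale sine wave: 6.02*N + 1.76 dB."""
    return 6.02 * bits + 1.76

for bits in (16, 24, 32):
    print(f"{bits}-bit: ~{sqnr_db(bits):.0f} dB")
```

So 16-bit works out to about 98 dB and 24-bit to about 146 dB; noise shaping and dither move the *perceived* floor around, which is where the larger quoted figures come from.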
@FilmmakerIQ 3 years ago
I'm not reading this novel, especially when you start with a completely false statement that tape has more dynamic range... There's no point when you're so off base from the start.
@kernelpickle 3 years ago
@@FilmmakerIQ Maybe if you read what I wrote, you'd actually learn something, smart guy. Feel free to look it up. Analog tape recorded with Dolby SR noise reduction, which was the standard in professional studios, had a dynamic range of 110dB, while 16-bit digital audio has a dynamic range of 96dB. I'm not talking about cassette tapes here bud, I'm talking about 1/2-inch tape used in professional studios to make multi-track recordings. So, please just STOP with your nonsense, because you don't know what the hell you're even talking about. You looked some things up on Wikipedia, and think that you're a professional because you make YouTube videos. How many professional studios have you been in that actually had 1/2-inch tape machines? I guarantee you've never even seen a 1/2 inch tape in your life, let alone heard one played back over the studio monitors in a real studio. Clearly, you seem to fancy yourself a "Filmmaker" and not a recording engineer, or producer--so why don't you go make your silly little videos about lenses, or light meters, because you don't know shit about digital audio or recording.
@Drysart 3 years ago
Man, I can't believe you confused hertz with kilohertz. What a rookie mistake. Also I don't believe this, I'm just commenting to help juice the YouTube algorithm. Engagement!
@FilmmakerIQ 3 years ago
Love it
@stucorbishley 3 years ago
This was amazing, fantastic explanation! I've been curious about this for a long time..
@BThings 3 years ago
My dog does not like this video.
@FilmmakerIQ 3 years ago
Give the poor thing a hug....
@ScottGrammer 3 years ago
In the mid-70's I used to read all the stereo magazines regularly. Julian Hirsch reviewed stereo equipment for many of them. He once reviewed a Harman Kardon amplifier that had an advertised frequency response of 50 kHz. He tested it, and found that it did indeed go that high. He said of this, "Only your cat can hear that high, let him buy his own stereo."
@CORVUSMAXYMUS 7 months ago
YOU KNOW WHY U DOG DOESNT LIKE THIS VIDEO? BECAUSE YOUR DOG IN FACT IS YOU. IS LOGIC BECAUSE YOUR DOG DOESNT KNOW TO THINK.
@mmf-d7i 3 years ago
Great video, thanks! FWIW (and that's not much) at 5:00 you say that YouTube resamples everything to 44.1. But actually, YouTube uses the Opus codec for the audio channels of videos, and that format is locked to 48. I think a few older vids might also have ogg or m4a which may be in 44.1, but "most" are sent in 48. It's certainly not substantive for the point you're making, more just trivia. Thanks!
@FilmmakerIQ 3 years ago
AAC is used for Apple devices which is locked to 44.1. It also happens to be what they use for the download file option in YT's creator studio.
@mmf-d7i 3 years ago
Aha. Interesting. Using youtube-dl, here are the streams available for your video (limited to audio):
249 webm audio only tiny  52k, webm_dash container, opus @ 52k (48000Hz),  7.13MiB
250 webm audio only tiny  61k, webm_dash container, opus @ 61k (48000Hz),  8.37MiB
251 webm audio only tiny 108k, webm_dash container, opus @108k (48000Hz), 14.81MiB
140 m4a  audio only tiny 129k, m4a_dash container, mp4a.40.2@129k (44100Hz), 17.71MiB
I'm on a Mac here (but not an iOS device); in Firefox, the YouTube web app uses stream #251 (as visible in the "stats for nerds" right-click menu); in Safari it uses #140, so you are indeed correct! Again, thanks for the excellent video.
@itsjusterthought7941 3 years ago
Comparing a 10kHz sine wave to a 10kHz square wave only proves you have 10 more kHz of headroom left in which to hear the aliasing artefacts of the higher overtones that you cannot hear. You are not hearing the 30kHz overtone. You are hearing the aliasing artefacts at 20kHz. That's what is making the slightly crisper sound. Harmonic distortion. One of the reasons why people prefer analogue sound over digital is because it sounds warmer and more organic. But that's not how the real music sounded. The digital version is a perfect replica of the original sound. The analogue version suffers harmonic distortion which is pleasing to the ear. The Minimoog analogue synth had such a great sound because Bob Moog miscalculated the resistor values in the filter, causing the sound to distort in a pleasing way.
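Where an out-of-band overtone actually lands after sampling can be computed by folding it around the sample rate. A small sketch (note that for the 30 kHz overtone of a 10 kHz square wave, the folded image depends on which sample rate is assumed):

```python
def aliased_frequency(f_hz, sample_rate_hz):
    """Apparent frequency of a tone after sampling, folded into 0..Nyquist."""
    f = f_hz % sample_rate_hz
    return min(f, sample_rate_hz - f)

# 3rd harmonic (30 kHz) of a 10 kHz square wave at common rates:
print(aliased_frequency(30_000, 44_100))  # 14100 -> folds into the audible band
print(aliased_frequency(30_000, 48_000))  # 18000 -> still audible to young ears
print(aliased_frequency(30_000, 96_000))  # 30000 -> no aliasing at all
```

This is only the folding arithmetic; whether the aliased image survives into the file also depends on the anti-aliasing filter applied before sampling.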
@mikes9939 3 years ago
A truly great video about this complex subject, with an appropriate amount of humor concerning the state of commenting on YouTube in these times. Thank you for your efforts, they are well appreciated.
@pokepress 3 years ago
I do know that MP3 compression cuts out at 16kHz because of the way the standard was designed. Also, I think some devices start to roll off frequencies in the last octave or so, so even if you have speakers and ears that can reproduce and perceive those frequencies, your hardware may be reducing their amplitude.
@Liam3072 3 years ago
Not all MP3 encoders cut at 16kHz though. The LAME encoder does not, beyond a certain bitrate. And anyway, YouTube does not use MP3 compression. It uses either AAC or Opus.
@tommccaff 3 years ago
Thank you for this excellent explanation. I am an audio engineer for a living; for many years I used a digital mixing console (a Panasonic Ramsa WR-DA7) which can operate at both 44.1k and 48k. I was always able to hear the difference between the two, even when only recording voiceover, which I've done a lot of. I also have read Lavry's work in the past, when he previously insisted that there was no difference whatsoever between the two sampling rates and no need to ever use above 44.1k, and knew something had to be wrong. I also have used high sample rates, particularly 96k, and agree that they require a LOT of processing power, which translates into a lower track count and fewer native plugins that can be used, which makes those high rates inconvenient at best, at least for now. Coincidentally, it always seemed to me that the best compromise between computing power and the audio problems I was hearing would be a sample rate of 64kHz (since in computing we like to use powers of 2 as factors, mostly because it's easy to clock-divide by 2 or 4, etc.). It's interesting that Lavry's proposed sample rate of 60k is very close to my own thoughts, and personally I'm glad to see that he has come around from his prior position that 44.1k was just fine. I also knew that when using wave generation software, just like you illustrated in Adobe Audition, when generating a 16k sine wave at a 48k sampling rate, the result is a wave with only three data points per cycle: one at zero, one near the peak, and one near the trough - which is of course a 16k TRIANGLE wave, not a sine wave, albeit a somewhat oblique one. Yes, those overtones are outside the range of hearing, and yet you could hear that something was wrong - it definitely was not a sine wave that was playing back. 
Aliasing is exactly the problem - there was no anti-aliasing applied to the data generated by Audition or any other similar program, or any anti-aliasing generated by the WR-DA7 that was outputting it and that the computer was digitally connected to - and there still isn't today on most high-end professional equipment. So there's just no question that the VAST majority of digital playback equipment out there simply applies no anti-aliasing filtering at all and never did. To my trained ear, this has been quite annoying indeed. I also remember the very early days of CDs, and the first CD player I bought, a Sony. I didn't like it, because the top end sounded "brittle", which was a common complaint in those days. And in fact it wasn't until CD players introduced "oversampling" that the problem went away - basically moving the aliasing frequencies so they are all hypersonic, by extrapolating and outputting one or three "samples between the samples" caused later generation CD players to sound significantly better. The bottom line is that Nyquist really doesn't handle the concept of aliasing very well, as you aptly point out. And what is needed, particularly for audio production, is a sampling rate that allows all of the alias frequencies to be moved above the 20kHz threshold of hearing. Computing power is a temporary problem, so I have a feeling that in the not too distant future all professional audio production will be done at 96k, even though we don't really need it to be quite that high. Thank you for what I believe settles this issue hopefully for good.
@FilmmakerIQ 3 years ago
Sorry, but three sample points do not produce a sawtooth wave, it produces a sine wave. You don't connect the dots with straight lines, you draw a sine wave through the dots. A sawtooth wave has integer harmonics; it would need to be constructed with many sine waves, which would probably be above Nyquist if the wave is only 3 samples wide. Lastly, I don't think you understand why Lavry suggests 60. He stated in the paper that 44.1 is, if not perfect, close to perfect.
@tommccaff 3 years ago
@@FilmmakerIQ I think you misunderstood what I said - "triangle", not "sawtooth". And I wasn't referring to an actual triangle wave, I was only referring to the shape created by the three points if you connect them, which isn't exactly what's going to happen in the DAC anyway, because DACs don't transition from one point to the next in any smooth way, they simply jump to the next value. The bottom line is that for a 16kHz sine wave, only three data points are created, and only three data points are going to be output by a DAC. The DAC itself is not going to "draw a sine wave through the dots". It's just going to output stairsteps at three data points and that's it (unless of course we're talking about oversampling, which would instead use spline interpolation or some similar approach to approximate where the additional samples would be. But to my knowledge no production hardware - such as Pro Tools or UAD Apollo etc. - utilizes oversampling on output). For example, if you create a 16kHz 24-bit sine wave at -3.0dB, each cycle will have exactly three points - one at zero, one at -4.2 dB above zero (sample value 5,143,049) and one at -4.2 dB below zero (sample value -5,143,049). The DAC isn't going to transition smoothly between those points, it's simply going to output a zero for 20.83 microseconds, followed by a sample value of 5,143,049 for 20.83 µs, and then a sample value of -5,143,049 for 20.83 µs. If DACs did indeed "draw a sine wave through the dots", then aliasing wouldn't be a problem, because the DAC itself would be reacting perfectly to the INTENTION of the data - just as analog tape used to do. But the problem is of course, as with many things computer-related, DACs simply don't do that. They just output a voltage corresponding to a number for a specified number of microseconds as dictated by the sampling rate. It is of course this behavior that causes the alias frequencies to result, as you have very correctly and articulately described. 
As for Lavry's 60, correct me if I'm wrong, but my understanding is that the advantage here is twofold: 1) it pushes the vast majority of alias frequencies into the supersonic range, making them a non-problem, and 2) it provides more headroom for creating anti-aliasing filters, should a playback hardware developer choose to do so, which sadly, very few ever seem to. My point was merely to essentially agree with Lavry, but I'm suggesting that when taking into account the fact that digital hardware designers prefer to do things in powers of 2, that a better choice for "optimal sampling rate" should be 64kHz specifically. Personally, I wish hardware developers provided that option in addition to 48k and 96k because that's what I would use for production instead of 48k or 96k. It would be quite a good compromise.
@FilmmakerIQ 3 years ago
That's completely incorrect. Yes, the DAC does draw a sine wave, because it's converting it back to analog. The speaker's cone is a physical object and it moves through space with inertia; it can't just jump to each sample point and hold for the next one. So if you produced three samples you will not get a triangle, you will get a sine wave. Watch Monty's video in my description. Samples are not stair steps, they define the points of a sinusoidal wave. This is the key to the Fourier transform and the Nyquist theorem. Aliasing has nothing to do with stair steps (because there aren't any stairsteps). Aliasing is the result of frequencies that are higher than half the sampling frequency. Your understanding of Lavry's 60 is incorrect as well. It doesn't push alias frequencies into the ultrasonic... You don't push alias frequencies... it provides enough headroom for anti-aliasing filters to work without affecting the audible range. Lastly, clock speed has zip to do with binary. 64kHz is meaningless because time is an arbitrary construct. Look at the history of computing; you will not see any clock speed correlating with any binary numbers... because that's simply not how it works... Also, 64kHz isn't a binary number. The closest is 2^16, which is 65.536kHz.
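The "draw a sine wave through the dots" claim is the Whittaker-Shannon interpolation formula, which a DAC's reconstruction filter approximates. A minimal, idealized sketch showing that three samples per cycle of a 16 kHz sine at 48 kHz still reconstruct the sine at points between the samples (the buffer is finite, so this is an approximation, not a model of any real DAC):

```python
import math

def sinc(x):
    """Normalized sinc function, the ideal reconstruction kernel."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

RATE, FREQ = 48_000, 16_000   # three samples per cycle, as in the discussion
N = 480                        # 10 ms buffer of samples
samples = [math.sin(2 * math.pi * FREQ * n / RATE) for n in range(N)]

def reconstruct(t):
    """Whittaker-Shannon interpolation through the stored samples."""
    return sum(s * sinc(t * RATE - n) for n, s in enumerate(samples))

# Evaluate midway between two samples, deep inside the buffer:
t = (N // 2 + 0.5) / RATE
ideal = math.sin(2 * math.pi * FREQ * t)
print(f"reconstructed {reconstruct(t):+.3f}   ideal sine {ideal:+.3f}")
```

Halfway between samples, where a stairstep or triangle picture would be badly wrong, the interpolated value matches the continuous 16 kHz sine to within the truncation error of the finite buffer.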
@darrenlucas804 3 years ago
Well done, brilliantly explained
@35milesoflead 3 years ago
Nice video. There's an interesting tidbit that I have noticed with the whole 44.1 v 48 - you need to be consistent even though it doesn't matter. If you play back a 44.1 file inside a 48 project (or vice versa) you get pitch drift phenomena. This is why consistency is key even though sampling rate doesn't matter. The real key is "mastering for your platform" as it were. Understanding the playback limitations of YouTube and making sure you sort your audio for playback. Tis redundant to do all your audio at 88/-8dB if YouTube is going to downsample to 44.1/-15dB.
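The pitch drift from playing a file at the wrong rate is just the ratio of the two rates. A quick sketch of the resulting pitch error in semitones (assuming naive playback with no sample rate conversion, which is the failure mode described above):

```python
import math

def pitch_shift_semitones(file_rate, project_rate):
    """Pitch error, in semitones, when samples recorded at file_rate are
    played back naively at project_rate (no resampling)."""
    return 12 * math.log2(project_rate / file_rate)

print(f"{pitch_shift_semitones(44_100, 48_000):+.2f} semitones")  # +1.47
print(f"{pitch_shift_semitones(48_000, 44_100):+.2f} semitones")  # -1.47
```

About a semitone and a half either way, which is why a DAW that doesn't resample mismatched files sounds obviously wrong rather than subtly degraded.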
@audio.paisajes 1 year ago
from Argentina I say THANKS! YOUR CONTENT IS BRILLIANT!
@HansBaier 3 years ago
It's the other way round. The Gibbs phenomenon shows up when there is NO aliasing. It is the result of running through the anti-aliasing filter in a DAC. The anti-aliasing filter rolls off all frequencies above 20kHz and the result is the squiggles around the edges. A perfect square wave has an infinite number of overtones, and when you cut those off with the anti-aliasing filter, the result is a band-limited square wave, which exhibits the Gibbs phenomenon.
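The band-limiting described above is easy to reproduce: summing a square wave's odd partials only up to a cutoff leaves ripples whose peak overshoots the flat top by roughly 9% of the jump, no matter where the cutoff sits. A minimal sketch:

```python
import math

def bandlimited_square(t, fundamental, cutoff):
    """Square wave built from its odd sine partials up to `cutoff` Hz."""
    total, n = 0.0, 1
    while n * fundamental <= cutoff:
        total += (4 / (n * math.pi)) * math.sin(2 * math.pi * n * fundamental * t)
        n += 2
    return total

# Peak of a 1 kHz square band-limited to 20 kHz: the ideal flat top is 1.0,
# but the truncated series overshoots near the edges (Gibbs phenomenon).
peak = max(bandlimited_square(t / 1_000_000, 1_000, 20_000)
           for t in range(0, 500))  # scan the first half-cycle at 1 us steps
print(f"peak = {peak:.3f}")
```

The peak lands around 1.18 instead of 1.0, i.e. the classic ~9%-of-the-jump Gibbs overshoot, which is exactly the squiggling at the edges of a band-limited square wave.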
@FilmmakerIQ 3 years ago
Yeah, I overstated the Gibbs part
@michaelkreitzer1369 3 years ago
This was great! Thank you. It seems that ultimately this matters for producing, but not at all for listening. By the time I get it to listen to, those high frequencies should have been long filtered out. However, I wonder how many PC sound systems (Windows, ALSA, OpenAL, etc.) bother to apply a low-pass filter to signals they downsample in order to avoid aliasing?
@MiddleMalcolm 3 years ago
Glad to see you dug in a little more to check out the difference between the theoretical "ideal", and what actually works in practice. There are still, of course, many other variables, but the answer to "which sample rate?" is always "it depends". Jumping back to the last video, my comment was only that I found it interesting that the original concept sample rate being 60K was almost a happy accident of ending up with that ideal range suggested by folks like Dan Lavry. It would likely have radically changed the course of digital audio development as we all know it.
@napalmhardcore 3 years ago
I'm so happy I watched this video, because a while back I watched a video which contained a sweep up to 20kHz and noticed that the sound cut off abruptly at 16kHz. I was unsure whether the culprit was YouTube, some other link in my audio chain, or if the limit of human hearing is experienced as a hard limit (intuitively, this didn't seem right). I really need to have my hearing properly tested. I'm 38 now and I was "still in the game" comfortably up to 16kHz, and I can definitely hear below 20Hz (I think it was somewhere around 16-18Hz when I stopped experiencing it as sound when I tested a while back). My mother told me that when I had my hearing tested as a kid by my school, they said my hearing was above average and that I could hear tones most couldn't. The funny thing is, the reason my hearing was being tested was because they thought I was deaf. My brother used to throw tantrums at home and I learned to "tune out" sounds I found annoying. Turns out I found the teachers annoying too.
@leonardhindmarsh2352 3 years ago
48kHz is enough for playback. The low-pass filter's transition band usually sits at about 45 to 55% of the output sample rate, which gives almost non-existent phase errors and ripple within the maximum audible range. 96kHz can provide benefits for pitch-shifting high-frequency information and for lower latency. Softer filters can also be used at 96kHz, with possibly less ringing and fewer phase shifts, but it is rare for different filters to be applied at different sampling frequencies; in addition, higher sampling frequencies often introduce extra component instability. Basically all DACs and ADCs use delta-sigma modulation with multiple bits (often 2-6 bits). This involves a sampling frequency of several MHz, but with a more efficient type of modulation: the analogue input is continuously compared against a feedback signal by differential circuits, producing a 1-bit pulse-density/pulse-width bitstream whose high-frequency pulses add or remove energy in particular frequency bands. The quantization noise is in this way pushed up into higher frequency bands and reduced in lower ones, carried out in several stages by several circuits for more effective noise shaping while maintaining stability. After this comes demodulation and decimation: the 1-bit PDM bitstreams are combined into one 24-bit PCM stream by applying digital filters and downsampling.
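The noise-shaping idea in that loop can be illustrated with a toy first-order, 1-bit delta-sigma modulator in Python (real converters use higher-order, multi-bit loops and proper decimation filters, as the comment above describes; the oversampling ratio and tone here are arbitrary):

```python
import numpy as np

def delta_sigma_1bit(x):
    """First-order 1-bit delta-sigma: integrate the input-minus-feedback
    error and quantize the integrator to +/-1. The feedback loop pushes
    quantization noise toward high frequencies (noise shaping)."""
    y = np.empty_like(x)
    integ = 0.0
    fb = 0.0
    for i, s in enumerate(x):
        integ += s - fb
        y[i] = 1.0 if integ >= 0.0 else -1.0
        fb = y[i]
    return y

osr = 64                                   # oversampling ratio
fs = 44100 * osr
t = np.arange(1 << 16) / fs
x = 0.5 * np.sin(2 * np.pi * 1000.0 * t)   # 1 kHz tone at half scale
bits = delta_sigma_1bit(x)

# Even a crude decimation filter (boxcar average over each OSR window)
# recovers the tone from the raw +/-1 bitstream, because the shaped
# quantization noise lives mostly above the audio band.
decimated = bits.reshape(-1, osr).mean(axis=1)
target = x.reshape(-1, osr).mean(axis=1)
rms_err = np.sqrt(np.mean((decimated - target) ** 2))
print(f"RMS error after decimation: {rms_err:.4f}")
```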
@DavidLindes 11 months ago
Is anyone else not able to hear the 10kHz sine wave at all, and the 7kHz sine wave only barely? I really hope it's something in my hardware configuration, rather than me having lost that much hearing. 😢 (FWIW, I'm on a Framework laptop on Ubuntu GNU/Linux... could probably go into more details on what audio system, but don't know off-hand.) Edit: P.S. In the sweep, the audio cuts out for me at about 7:46, so whatever frequency that is.
@Wegetsignal 3 years ago
Very informative and clearly a ton of research went into this!
@johnjacquard863 3 years ago
The issue has more to do with the fundamental frequencies of the instruments we use and the way we construct music. Up in the high frequencies we mostly only have drums, or the transients of vocal sibilance.
@johnjacquard863 3 years ago
we don't need to hear anything above 10kHz (except harmonics)
@GaryFerrao 3 years ago
Thank you for also talking about and checking the audio uploaded to YT. Years ago, some science documentary on NatGeo or Discovery was being broadcast on TV, explaining how adults can't hear above 16kHz and to test it out with your friendly adult (or parent) nearby. To my shock, i myself couldn't hear the 16kHz wave they were "playing". Not wanting to age so quickly (and good thing i had a computer as well), i generated a 16 kHz sine wave, and i was _so relieved_ to know that i could hear it lol. And sadly the TV didn't have a comment section like here to complain. Rant: Then, wanting to check "how old i was", i tried with higher frequencies, and found out that i couldn't hear more than 18 kHz. Still not wanting to age so quickly, i was sure something was amiss. Then i found out. My speaker system itself had a frequency response range from 18 Hz to 18 kHz. argh lol. I bought better speakers with response up to 20 kHz and sure enough, i could hear it. This just makes me wonder. Do we really "age" out of this frequency or do we just "waste it away" because we don't use it any more? I still practise hearing 18 kHz (with good speakers/earphones) every now and then. i also have a saved file on my phone to test out earbuds before i buy them, so they don't make me lose my hearing range. P.S: i couldn't hear a 20 kHz sine wave. I don't know if it's my limitation or the speaker's. Until i can get a volunteer who can blind test, i'll still be searching. (i'm not sure earbuds/speakers produce enough power anyway at the 20 kHz frequency, to use resonance on other objects.)
@FilmmakerIQ 3 years ago
It has to do with the hair cells in the cochlea of the ear. The ones responsible for the highest frequencies are in the smallest part of the cochlea (they have to vibrate the fastest). As we age, the cochlea becomes more rigid and inflexible at those high frequencies, and that's why we lose the high range.
@GaryFerrao 3 years ago
@@FilmmakerIQ oh my…
@The_Dingus_Boi 3 years ago
Absolutely fantastic work as usual. I'd love to see how this whole thing compares to analog sound though. I've only ever worked digitally before, but I've always been fascinated by the physical manifestation of sound and its analog recordings.
@LeutnantJoker 3 years ago
I once studied electrical engineering with a bit of signal processing, but then went into energy (the big kilovolt stuff) and finally computer science. And in computer graphics I was right back in the Fourier transform again, because yep, it's exactly the same thing in computer graphics. While all the theory was ages ago so I really need a refresher myself, I find this discussion everywhere: this debate about higher sampling rates, completely ignoring aliasing, is going on in graphics just as well. Just look at all the "graphics mods" for games that upload huge textures for absolutely everything and then change the engine settings so bigger textures are sampled for small objects, then wonder why performance goes down the toilet while aliasing artifacts appear and make things look worse instead of better. It's almost as if game and engine developers know about these engineering principles and optimize for them. Like... as if they know what they're doing :D Same goes for mesh level of detail too, btw. Rendering a triangulated mesh is nothing but sampling; the sampling rate is your screen resolution. If you make an insanely detailed mesh that will show up small on your screen, you'll get mesh aliasing, which will also look like crap. People always think smaller textures, mipmaps, and LODs are only used for performance, and that if their PC is kick-ass they should always load everything at the biggest size (bigger/more is better), completely ignoring signal processing principles and aliasing.
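The texture case can be shown in a few lines of numpy (a made-up 1-texel checkerboard standing in for a detailed texture): point-sampled minification lands on only one phase of the pattern and aliases badly, while a mipmap-style box filter averages the detail away correctly before sampling.

```python
import numpy as np

# High-frequency "texture": a checkerboard with 1-texel stripes (true mean 0.5).
tex = (np.indices((256, 256)).sum(axis=0) % 2).astype(float)

# Naive 8:1 minification (point sampling): hits only one phase of the
# pattern -- the classic texture-aliasing failure.
naive = tex[::8, ::8]

# Mipmap-style minification: box-filter each 8x8 block, i.e. prefilter
# before sampling, exactly as with audio anti-aliasing.
mip = tex.reshape(32, 8, 32, 8).mean(axis=(1, 3))

print(naive.mean(), mip.mean())   # -> 0.0 0.5: pure alias vs correct average
```

The point-sampled result is solid black even though the texture is 50% white; the prefiltered one reports the correct average brightness.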
@FilmmakerIQ 3 years ago
Fascinating analogy
@LeutnantJoker 3 years ago
@@FilmmakerIQ at the end of the day it's all about sampling at a limited frequency. Doesn't matter what the data is
@overheardatthepub1238 3 years ago
Crazy technical and interesting. I learned more about audio encoding than I ever knew. And I learned how little I know.
@GodmanchesterGoblin 3 years ago
Fun fact... People who lived with older TVs with noisy line-output transformers may have developed notches in their hearing at 15734Hz (NTSC) or 15625Hz (PAL), although if they are that old they may not now hear much above 12kHz or so anyway (that's me at 63). I remembered this when you picked 5.2 and 15.6kHz for the demonstration. I also wondered how hard that 16kHz wall that YouTube applies is, and would probably have gone with 5 and 15kHz or even 4 and 12kHz. If interested, it's also instructive to construct square waves visually using a graphing calculator to help with understanding how each odd harmonic improves the squareness of the waveform, although I guess Audition can do that as well. Great video, too, by the way.
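That graphing-calculator exercise takes only a few lines of numpy. Each extra odd harmonic squares up the wave, the mid-plateau value creeps toward 1, and the roughly 9% overshoot at the edges (the Gibbs phenomenon) never goes away:

```python
import numpy as np

def square_partial(t, f, n_harmonics):
    """Fourier series of a square wave truncated to the first n odd harmonics:
    (4/pi) * sum_k sin(2*pi*(2k+1)*f*t) / (2k+1)."""
    x = np.zeros_like(t)
    for k in range(n_harmonics):
        h = 2 * k + 1                      # odd harmonics: 1, 3, 5, ...
        x += np.sin(2 * np.pi * h * f * t) / h
    return 4.0 / np.pi * x

t = np.linspace(0.0, 1.0, 2000, endpoint=False)
for n in (1, 5, 50):
    approx = square_partial(t, 1.0, n)
    mid = approx[500]          # t = 0.25, middle of the positive half-cycle
    print(n, round(mid, 3), round(approx.max(), 3))
```

With 50 harmonics the plateau is essentially flat at 1, but the peak near each transition still overshoots past 1.08.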
@k7iq 3 years ago
GREAT video and explanation ! I won't mention the 439 vs. 493 😃 BTW, I used to work with Dan. Very smart guy. Nice guy too :)
@adrianstephens56 3 years ago
Another engineer here. In my laziness I was edging into camp 2. Thank you for showing me the error of my ways, and reminding me of what I knew 40 years ago. Nyquist's sampling theorem is correct, and it assumes a perfectly band-limited signal. You band-limit a wider bandwidth signal using a low-pass (anti-aliasing) filter. Precision analogue filters can be expensive and difficult to create. Further, if you have a sharp transition in the filter, you introduce artefacts which are visible on transients in the signal, and might be audible, although I really don't know. To allow an easy-to-implement gentle roll-off filter without attenuating your wanted signal in the passband, you need a lot of headroom. BTW, to me this is all theoretical. As somebody of retirement age, with loud tinnitus, a 20 kHz sampling rate would be just fine.
@m13253 3 years ago
4:51 YouTube actually converts to 48kHz. The reason is that the developers of the Opus audio codec decided to support 48kHz but not 44.1kHz. (They have an FAQ for this.) But if you watch YouTube on an Apple device, YouTube will deliver an MP4 format with the AAC audio codec, which will be either 44.1kHz or 48kHz.
@FilmmakerIQ 3 years ago
Well, when I download the video from my own YouTube Studio it's 44.1, so I think most everything is delivered at that sample rate, and it conforms with everything I've read so far.
@m13253 3 years ago
@@FilmmakerIQ That's probably because you are downloading it in MP4 format (H.264+AAC). For Chrome / Firefox / Edge streaming, YouTube defaults to WebM format (VP9+Opus), which uses a 48kHz sample rate.
@FilmmakerIQ 3 years ago
Ah
@kaneltube 3 years ago
Great video. Equally entertaining and informative as always!
@shiraga0516 3 years ago
It’s a great video! Many thanks.
@ezgarrth4555 3 years ago
Oh, that makes perfect sense. It's kind of frustrating that there's still so much confusion when the explanation is pretty graspable.
@AndrewAliferis 3 years ago
Great job. Thank you for this interesting info.
@peto348 3 years ago
Any video that links to Monty's video is a good video 👍
@xtrct7303 3 years ago
Signal engineer here. You also just made a brilliant explanation of the Gibbs phenomenon in less than a minute too!
@quantum_ocean 11 days ago
The video "Samplerates: the higher the better, right?" on the FabFilter channel by Dan Worrall is one of the best.
@EAST49 3 years ago
What I do, and it works flawlessly, is use 48k/16-bit with a limiter ceiling of -0.1 dB and 2-3 dB of compression, and I do not get any further compression on my tracks; it's actually below the loudness maximum. It seems like it actually counteracts the effects from YT.
@TheJediJoker 3 years ago
You skipped an important point: the steeper a filter, the greater the phase shift introduced to the signal. You can get around this using different types of filters, but those introduce other temporal artifacts (such as pre-ringing with linear-phase filters). And crucially, just as an anti-aliasing filter is needed at the analog input to a digital system, a reconstruction filter is needed at the analog output from a digital system. Therefore, the primary advantage of higher sampling rates in audio is that one can use less steep anti-aliasing and reconstruction filters starting at higher frequencies well outside the audible range, but well below the Nyquist frequency, all while generating fewer artifacts within the audible range.
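One way to see the steepness-vs-ringing tradeoff without any DSP library: build two linear-phase windowed-sinc low-pass filters in plain numpy (the cutoffs and tap counts are illustrative, not anything a real converter uses). The symmetric impulse response is what gives constant group delay, but the near-brick-wall filter needs a far longer impulse response, so it rings (including pre-ringing) over a much longer span around every transient.

```python
import numpy as np

def lowpass_fir(cutoff_hz, fs, numtaps):
    """Linear-phase windowed-sinc low-pass FIR (Hamming window)."""
    n = np.arange(numtaps) - (numtaps - 1) / 2.0
    fc = cutoff_hz / fs                      # normalized cutoff
    h = 2.0 * fc * np.sinc(2.0 * fc * n)     # ideal low-pass impulse response
    h *= np.hamming(numtaps)                 # taper the truncated sinc
    return h / h.sum()                       # unity gain at DC

fs = 96000
gentle = lowpass_fir(24000, fs, 31)    # soft roll-off: short impulse response
steep = lowpass_fir(21000, fs, 511)    # near brick-wall: long pre/post ringing

for name, h in (("gentle", gentle), ("steep", steep)):
    symmetric = np.allclose(h, h[::-1])      # symmetry => constant group delay
    print(name, len(h), symmetric)
```

Convolving a click with each filter would show the same thing in the time domain: the steep filter smears energy both before and after the click.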
@brantisonfire 3 years ago
The DCC (digital compact cassette) standard was 48 kHz. I wonder why they added that ability, maybe to be like “this is better than CD by this many kHz!” I did a test to see what my hearing range was and I couldn’t get above 12-13 kHz before it was silent.
@stephenwong9723 3 years ago
Same with DAT: on a proper DAT recorder, 32kHz, 44.1kHz, and 48kHz are supported and can be properly recorded and played back.
@PhilippedeBersuder 3 years ago
It (48k) has something to do with the frame rates in the USA (30 fps) and Europe (25 fps); I don't remember the details...
@FilmmakerIQ 3 years ago
That was my previous video: kzbin.info/www/bejne/o2bFpGiqiamVr5I
@4Nanook 3 years ago
I'm glad someone GETS IT regarding aliasing. I've had this argument with so many tone-deaf wannabe engineers who do not understand why percussion sampled at 44 kHz sounds like so much white noise but sampled at 192 kHz sounds like percussion instruments.
@nitram419 3 years ago
>> percussion sampled at 44 Khz sounds like so much white noise but sampled at 192 Khz sounds like percussion instruments...
@aggibson74 3 years ago
Is the difference between sine and square waves sounding different due to the non-perfect reproduction of the square wave? Both in how the speaker generates it and how your ear responds to it?
@FilmmakerIQ 3 years ago
Not in the case I'm demonstrating. But yes, your speaker and eardrum are also limited to a certain range of frequencies. So at a high enough frequency, the sine wave and square wave will sound the same (without aliasing).
@GaryFerrao 3 years ago
Wow, this video is so interesting!~ I was (I thought) sure it's not just twice the frequency, because when i downsampled some audio files to just 22.05 kHz (after checking that the treble was well below 10kHz) to save space on my CDs, it just didn't sound right, almost like sandpaper trebles. Well, now i know, thanks to your helpful explanations. Harmonics do affect the timbre of the sound, even though we can't hear them directly.
@mirakel64 3 years ago
What is your opinion about the loudness war?
@nonsomega_official 3 years ago
Alright 😂😂, I think I'm that very one person he talked about at the end of the video 😂😂😂. I had already written two previous comments before seeing this bit 😂. Verify your edit and tell me that you didn't screw up the edit 😂. Much love from Nigeria 😂
@paulsangiorgio3093 3 years ago
I'm probably missing something big, but wouldn't a high sample rate be useful for slowed-down audio? Kind of like shooting in high-res for legroom when it comes to editing and zooming in in post?
@FilmmakerIQ 3 years ago
I think that would depend entirely on how the audio is slowed down. The thing to remember with Nyquist is that higher sampling rates do not actually give us more information in the frequencies covered under the Nyquist limit; they give us a higher Nyquist limit.
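That point can be sketched in pure numpy (the tone and rates are arbitrary examples): Whittaker-Shannon (sinc) reconstruction of a 48 kHz capture of a 10 kHz tone reproduces, away from the window edges, essentially the same waveform you would have gotten by sampling at 96 kHz in the first place. The lower-rate capture already contains all the in-band information.

```python
import numpy as np

f = 10000.0                     # tone well below both Nyquist limits
fs_lo, fs_hi = 48000, 96000
dur = 0.01                      # 10 ms window

t_lo = np.arange(int(dur * fs_lo)) / fs_lo
t_hi = np.arange(int(dur * fs_hi)) / fs_hi
x_lo = np.sin(2 * np.pi * f * t_lo)
x_hi = np.sin(2 * np.pi * f * t_hi)

# Whittaker-Shannon interpolation of the 48 kHz samples onto the 96 kHz grid:
# x(t) = sum_k x[k] * sinc(fs * (t - k/fs)).
recon = np.array([np.dot(x_lo, np.sinc(fs_lo * (tt - t_lo))) for tt in t_hi])

# Compare away from the edges (the finite window truncates the sinc tails;
# an infinite signal would match to machine precision).
mid = slice(len(t_hi) // 4, 3 * len(t_hi) // 4)
err = np.max(np.abs(recon[mid] - x_hi[mid]))
print(f"max interior error: {err:.4f}")
```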
@paulsangiorgio3093 3 years ago
@@FilmmakerIQ Thanks for the response. I need to go learn more 😅.
@Lantertronics 3 years ago
Perhaps, if you had not-very-good resampling algorithms. But nowadays even a laptop or tablet can run high-quality resampling algorithms.
@aaronsmith4746 3 years ago
Very informative, thanks John. Reminds me of my electrical engineering classes back in school :)