2021's Biggest Breakthroughs in Math and Computer Science

1,105,035 views

Quanta Magazine

A day ago

It was a big year. Researchers found a way to idealize deep neural networks using kernel machines, an important step toward opening these black boxes. There were major developments toward an answer about the nature of infinity. And a mathematician finally managed to model quantum gravity.
Read the articles in full at Quanta Magazine: www.quantamagazine.org/the-ye...
- VISIT our Website: www.quantamagazine.org
- LIKE us on Facebook: / quantanews
- FOLLOW us on Twitter: / quantamagazine
Quanta Magazine is an editorially independent publication supported by the Simons Foundation www.simonsfoundation.org/

Comments: 824
@QuantaScienceChannel 2 years ago
Read the articles in full at Quanta Magazine: www.quantamagazine.org/the-year-in-math-and-computer-science-20211223/
@naturemc2 2 years ago
Your last few videos on this channel are killing it. Need it. Much needed ❤️
@zfyl 2 years ago
I think the opposite. All I see here is just mathematicians coming up with new approaches to existing problems (made by previous mathematicians) and publishing new approaches. These are not results, and I feel like these are practically useless. So sad to see that the education system embraces pointless research in such overly sophisticated, yet never applied, fields of science! What a shame, as it happens against the background of a world on fire, looking for help... and what is given? ...some over-engineered half solution for made-up problems...
@antoniussugianto7973 2 years ago
Please, Riemann hypothesis progress updates...
@EmperorZelos 2 years ago
Uh, yeah, no, I have to correct you. The continuum hypothesis is UNDECIDABLE in ZFC, meaning there is no way to decide it. There is nothing to SOLVE there; there is nothing unanswered. It was resolved and understood many, many decades ago. We KNOW it is independent, and we cannot say c = Aleph_1. We can assume it axiomatically if we so want, or assume its negation, and both are EQUALLY valid. What you're talking about here is adding an axiom to create a NEW axiomatic system where we CAN say it, but that does not mean it was "resolved" or anything, because we already knew the answer.
@eeemotion 2 years ago
Thanks for sparing me the trouble of watching. As anything significant could be buried in such an annal. The only real breakthrough in lamestream science is how to get them to shield for a plasma environment while still thinking almost exclusively in terms of 'heat'. The almost being the novelty. Electricity still being a dirty word in space. Hence its smell at first described from the suits after a spacewalk as that of electric soldering was then peppered with burnt chicken and BBQ insinuations to make for the usual clumsy narrative reminiscent of the sticky tape on the supposed lunar landing module. Ah, who knows what's in the peel of an onion? It's a slow boil to get to the truth and for the cluttered cosmogony of the believers it seems all too much useless toil...
@ruchirkadam8510 2 years ago
Man, loving these 'breakthrough' videos! It feels fulfilling to see the progress being made! I mean, finally modelling quantum gravity? Jeez!
@Djfmdotcom 2 years ago
Same! I think in no small part it's because we have all these YouTube channels focusing on them! I'd much rather watch videos about science, exploration, and learning than MSM garbage that divides us. Science brings us together!
@v2ike6udik 2 years ago
BS. Gravity (as a separate force) is a hoax. It has been done for a reason.
@irs4486 2 years ago
cringe bruh, stop commenting, ratio + yb better
@sublimejourney3384 2 years ago
I love these videos too!!
@The.Golden.Door. 2 years ago
Quantum gravity is far simpler to calculate than what modern-day physicists have known to be true.
@MargaretSpintz 2 years ago
Slight correction. The infinite limit of shallow neural networks as kernel machines (specifically Gaussian processes) was established in 1994 (Radford Neal). This was updated for 'ReLU' non-linearities in 2009 (Cho & Saul). In 2017 Lee & Bahri showed this result could be extended to deep neural networks. Not sure this counts as "2021's biggest breakthrough", though it is a cool result, so happy to have it publicised. 👍
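For readers who want the kernel-machine side of this correspondence made concrete, below is a minimal kernel ridge regression sketch in Python (numpy assumed; the RBF kernel is an illustrative stand-in, since the actual neural-network correspondence uses an architecture-derived kernel such as the NTK):

```python
import numpy as np

def rbf_kernel(X1, X2, length_scale=1.0):
    # k(x, x') = exp(-||x - x'||^2 / (2 * length_scale^2))
    sq = (np.sum(X1**2, axis=1)[:, None]
          + np.sum(X2**2, axis=1)[None, :]
          - 2.0 * X1 @ X2.T)
    return np.exp(-sq / (2.0 * length_scale**2))

def fit_predict(X_train, y_train, X_test, reg=1e-6):
    # Kernel ridge regression: alpha = (K + reg*I)^{-1} y,
    # prediction f(x) = sum_i alpha_i k(x, x_i) -- a "kernel machine".
    K = rbf_kernel(X_train, X_train)
    alpha = np.linalg.solve(K + reg * np.eye(len(X_train)), y_train)
    return rbf_kernel(X_test, X_train) @ alpha

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X).ravel() + 0.1 * rng.normal(size=40)
X_new = np.linspace(-3, 3, 5)[:, None]
print(fit_predict(X, y, X_new))
```

The point is that fitting the kernel machine is a single linear solve, which is what makes the infinite-width picture analytically tractable.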
@PythonPlusPlus 2 years ago
I was thinking the same thing
@lexusmaxus 2 years ago
Since there are no physical infinite machines, must there be mathematical operators that eliminate these infinities?
@hayeder 2 years ago
Was about to post something similar. The recent famous paper in this area is Jacot et al. with the NTK in 2018. It's also not clear to what extent this explains practice; e.g., see the work of Chizat and Bach on lazy training.
@ramkitty 2 years ago
@@lexusmaxus or is infinity an inversion in some way
@Ef554rgcc 2 years ago
Obviously
@OneDayIMay91Bil 2 years ago
Glad to have been a contributing member of this field; had my first peer-reviewed paper published in IEEE this year :)
@kf10147 2 years ago
Congratulations!
@thatkindcoder7510 2 years ago
What's the paper?
@zfyl 2 years ago
Too bad IEEE is just an international conglomerate of science-paper resellers. I, and everybody else on this planet, want to know why you are writing these papers, and what your contributed progress is. Sorry for the negative tone, and congrats on the publishing 😉
@sampadmohanty8573 2 years ago
@@zfyl Exactly. Why is everyone writing these papers? And if it is for the advancement of science, why is it not accessible to the general public? Is science a business? It is, but many intellectuals do not want to see it as such, because they want to believe that they do it for "a bigger cause" while in reality they do it selfishly, which sometimes might accidentally do good without the original intent being so. Please do not point to Arxiv.
@dougaltolan3017 2 years ago
@@sampadmohanty8573 Don't you just have to pay for access?
@MarcelBornancin 2 years ago
I appreciate the effort in trying to make these heavily technical subjects understandable to the general public. Thank you all : )
@primenumberbuster404 2 years ago
Mathematics is like the wind your sailboat needs to move way ahead on your journey. This was so heartwarming to watch. There is really a thin line between maths and magic! Thanks a lot, Quanta Magazine, for this beautiful summary! Loved it!
@jackgallahan9669 2 years ago
wtf
@criscrix3 2 years ago
Some bot stole your comment and slightly reworded it lmao
@michaelblankenau6598 6 months ago
That's a funny-looking cat.
@williamzame3708 2 years ago
Also: Aleph 1 is *by definition* the smallest cardinal bigger than Aleph 0. The question is whether the size of the continuum is Aleph 1 or something bigger ...
@alexantone5532 2 years ago
The continuum of natural numbers?
@LeBartoshe 2 years ago
@@alexantone5532 Continuum is just a nickname for the cardinality of the real numbers.
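In symbols, the statement this thread is circling can be written as follows (standard set-theory notation, added here for reference rather than quoted from the video):

```latex
% The continuum hypothesis (CH): the cardinality of the reals
% ("the continuum") is the smallest uncountable cardinal.
\[
  \mathfrak{c} \;=\; 2^{\aleph_0} \;=\; |\mathbb{R}|,
  \qquad \text{CH:}\quad 2^{\aleph_0} = \aleph_1 .
\]
```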
@whataboutthis10 2 years ago
And the new result makes it seem less likely that the continuum is aleph_1, which was Cantor's guess and the one that seemed most plausible for many years.
@EM-qr4kz 2 years ago
If you take an infinite number of line segments, one centimeter each, then you have an infinite line. This set of line segments is aleph_0 in size, and the line is a one-dimensional object. But! If you take a square, one square centimeter in size, the parallel straight segments that make up this square are infinite, and the set of them is aleph_1 in size, and the square is a two-dimensional object. Could that be the key to dimensions, especially when we have fractal objects to describe?
@moerkx1304 2 years ago
@@EM-qr4kz I'm not sure if you have some typos or I'm not exactly understanding what you're trying to say. But your analogy of a straight line being the natural numbers, and then extending it to a square, seems to me like Cantor's proof that the rational numbers are countable and hence of the same cardinality as the natural numbers.
@hansolo9892 2 years ago
I have been using these kernel vector spaces for QML recently and this is one of those mathemagics I honestly adore!
@WsciekleMleko 2 years ago
Hi, I could take two fists of shrooms and it would still make as much sense to me as it does right now. I'm glad you are happy though.
@joshlewis575 2 years ago
@@WsciekleMleko Yeah, but just a few years ago you could've eaten two ounces in your example. That's some crazy advancement; only a matter of time.
@RexGalilae 2 years ago
Yo, I worked on QML too back in college! I used to devour papers by Anatole Lilienfeld and Matthias Rupp coz of how interesting they were. Gaussian and Laplacian kernels were the bread and butter of my kernel ridge regression models, and I was pleasantly surprised to see kernel vector spaces here lol. It's one of the dark horses of ML.
@gregparrott 2 years ago
Just discovered 'Quanta Magazine'. Your articles on Physics, Math and Biology are all top notch! Subscribed
@midas2092 2 years ago
These videos last year introduced me to this channel, and yet I still have the same excitement when I see the new ones
@quentingallea166 2 years ago
You know the channel is pretty good when you watch the full-length video while understanding about half of the content.
@szymonbaranowski8184 1 year ago
No, it means it still sucks half of the time. And in this case I bet it sucks much more than half. And it means it's useless to watch, since you end up in the same spot you started, but fooled and getting more arrogant, having the opposite feeling.
@quentingallea166 1 year ago
@@szymonbaranowski8184 When I was a teenager, I was reading Hawking, Brian Greene, etc. and understood maybe 10% the first time. I would read the pages and chapters again and again to understand more each time. The world is a complex place. As a scientific researcher, I face this complexity every day. Over-simplifying is possible and useful; Kurzgesagt is a pretty neat example. However, in some cases, in my opinion, if you still want to go far, you can't explain it in 10 minutes simply. But well, you are perfectly free to disagree.
@Levi_Ackerman_7 2 years ago
We really love watching breakthroughs in science and technology.
@Geosquare8128 2 years ago
Hadn't realized that SVMs were being applied to DNNs like that.
@alany4004 2 years ago
Geosquare the GOAT
@marcelo55869 2 years ago
Support vector machines are somehow equivalent to neural networks?? Who knew!?! I would love to see the proof. I might lack the fundamentals to understand everything, but it might be interesting anyway...
@cyanimpostor6971 2 years ago
This has actually been around for three decades now, since the 1990s in fact.
@nabeelhasan6593 2 years ago
Thanks to the RBF kernel.
@varunnayyar3138 2 years ago
yeah me too
@saiparepally 2 years ago
I really hope you guys continue to publish these every year
@Epoch11 2 years ago
These are really great and I hope you do more of them. Hopefully we don't have to wait till the end of the year to get more videos that talk about breakthroughs.
@whataboutthis10 2 years ago
this lol, give us more breakthroughs!
@aayankhan6734 2 years ago
One of the few joys of the end of the year is watching these types of videos... Loved it!
@AdlerMow 2 years ago
Quanta Magazine is incredible! Their style makes everything accessible to the interested layman, and it grips you; you can start with any video or article and see for yourself! So thank you, all the Quanta team and writers!
@bolducfrancis 2 years ago
The animation at 5:12 is the last piece I needed to finally understand the diagonal proof. Thank you so much for this!
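For anyone chasing the same intuition, here is a tiny Python sketch of the diagonal argument (an illustration only, with "reals" simplified to infinite binary sequences represented as functions from n to {0, 1}):

```python
# A minimal sketch of Cantor's diagonal argument. Any claimed enumeration
# of infinite binary sequences misses the "diagonal flip" sequence.
def diagonal(enumeration):
    # enumeration(i) returns the i-th listed sequence (itself a function).
    # Flipping the i-th digit of the i-th sequence yields a sequence that
    # differs from every listed one in at least one position.
    return lambda i: 1 - enumeration(i)(i)

# Example: a countable list whose i-th entry is the sequence n -> (i + n) % 2.
listed = lambda i: (lambda n: (i + n) % 2)
d = diagonal(listed)
print([d(i) for i in range(8)])          # differs from listed(i) at position i
print([listed(3)(n) for n in range(8)])  # one of the listed sequences
```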
@markusheimerl8735 2 years ago
Love these videos. Gotta say, as much as I wow'ed at the bubbles around our supermassive black hole in the physics video, I just have an especially warm spot in my heart for mathematics :)
@zight123 2 years ago
Same. I know jack about math, but it's so fascinating.
@szymonbaranowski8184 1 year ago
You believe in black holes? Seriously?
@MrMann163 2 years ago
It's crazy how much stuff from uni started flowing back watching this. The fact that I'm actually able to understand all this complicated math is crazy but exciting.
@matthewtang1489 2 years ago
I was like, damn... I knew all of these ideas while I was watching it. I guess I can finally taste the fruits of my university education.
@MrMann163 2 years ago
@@matthewtang1489 They told me the quadratic formula would be important, but no one said I'd ever need to know set theory. Oh such ripe fruits .-.
@kevinvanhorn2193 2 years ago
Radford Neal explored this same idea of expanding the width of a neural net to infinity over a quarter-century ago, in his 1995 dissertation, Bayesian Learning for Neural Networks. He found that what you get is a Gaussian Process.
@zfyl 2 years ago
Does this single-handedly make all this breakthrough just a simple revisiting of an existing conclusion?
@Luizfernando-dm2rf 2 years ago
the real MVP
@daviddodelson8870 2 years ago
@Gergely Kovács: no. Neal's work dealt with neural networks with a single hidden layer; this breakthrough studies the limit of width for deep neural networks, i.e., many hidden layers.
@kevinvanhorn2193 2 years ago
@@daviddodelson8870 Thanks for the clarification. Strange, though, that it took 25 years to take that next step.
@yakuzzi35 2 years ago
That's what I love about maths: lots of times, something that started out as a game or a fun curiosity turns out to be extremely applicable, and equivalent to something unpredictable, decades later.
@primorock8141 2 years ago
It's crazy that we've been able to do so much with deep neural networks and we are only now starting to figure out how they work
@ajaykumar-ve5oq 2 years ago
We made machines but we don't know how they perform tasks? Sounds counterintuitive.
@jakomeister8159 2 years ago
Ever done a task that just works, you don’t know how, it just works? Yeah this is it. It’s actually pretty cool
@balazsh2 2 years ago
@@ajaykumar-ve5oq More like we can measure how well they perform tasks, so we don't care about the whys :) Transparent statistical methods exist and are widely used; it's just that for AI, black-box methods perform better most of the time.
@jirrking3461 2 years ago
this video is idiotic, since we do know how they work and we have been visualizing them for ages now
@Elrog3 2 years ago
Saying we don't know how neural networks work is a stretch to the same caliber of saying we don't know how cars work.
@binman5753 2 years ago
Watching this and not understanding anything makes these videos all the more magical 💫
@AnthonyBecker9 2 years ago
Hmm, I'm not sure how the neural net to kernel machine model is a breakthrough. Maybe that was left out. But the idea that a neural net divides data points with hyperplanes in high-D space goes back decades.
@PedroContipelli2 2 years ago
Kernel machines are linear, whereas neural networks are, generally, non-linear. Showing that an infinite-width network can be reduced to linear essentially raises suspicion about whether finite neural networks can be simplified in some novel way as well. The consequences could be groundbreaking.
@satishkpradhan 2 years ago
@@PedroContipelli2 Aren't all layers of a neural network just linear functions of the previous layer? So technically, isn't it possible that under some conditions a multi-layer neural network can be a linear function?
@PedroContipelli2 2 years ago
@@satishkpradhan The activation function of each layer (sigmoid, tanh, relu, etc) is usually where the non-linearity is introduced.
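A quick numeric check of the point made in this thread, sketched in Python (numpy assumed): without an activation, stacked layers collapse to a single linear map, while inserting a ReLU breaks the collapse.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(2, 4))
x = rng.normal(size=3)

two_linear_layers = W2 @ (W1 @ x)     # no activation between layers...
one_linear_layer = (W2 @ W1) @ x      # ...equals a single linear layer
print(np.allclose(two_linear_layers, one_linear_layer))   # True

relu = lambda z: np.maximum(z, 0)
with_activation = W2 @ relu(W1 @ x)   # ReLU makes the composition nonlinear
print(np.allclose(with_activation, one_linear_layer))     # False (generically)
```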
@lolgamez9171 2 years ago
@@PedroContipelli2 analog artificial intelligence
@joshuascholar3220 2 years ago
I stopped at the "nobody knows how neural networks work" and "billions of hidden layers" sentences. MY GOD, why did they have some moron who has no idea what he's talking about write this? And another one read it? MY GOD.
@Pramerios 2 years ago
Bravo!! This was SUCH an awesome video! Definitely saving and coming back!
@mathman274 2 years ago
Interesting. When I was in school, many decades ago, 'we' always had the idea that there's no reason something couldn't exist between aleph-0 (size of N) and aleph-1 (size of R); however, a "finger was never put on it". There were wild speculations about fractal dimensions, but that was just a fashionable thing to look at, at the time. Interesting where this is going.
@ferdinandkraft857 2 years ago
This question was answered in 1964 by Paul Cohen and Kurt Gödel. The Continuum Hypothesis (CH) is _independent_ of the Zermelo-Fraenkel axioms (plus the axiom of choice). In other words, standard mathematics can prove neither it nor its negation. You can, however, extend standard mathematics to include CH or some other axioms. David Asperó et al.'s "breakthrough" doesn't use only standard math. They only proved the equivalence of two axioms that are known to imply one particular hypothesis that is incompatible with CH... The video is unfortunately very superficial and gives the false idea of an "answer" to a problem that, in my opinion, is already answered.
@mathman274 2 years ago
Well... the keyword 'H' being hypothesis, of course. There's also the "incompleteness theorem", and extending the "axioms" might lead to inconsistency. Indeed "standard math" can't touch it; however, including CH might be a little too much. Maybe I was just too "classically" educated, but still... interesting, as was the video, I think.
@Noname-67 2 years ago
@@ferdinandkraft857 Its being independent from ZFC doesn't mean that it's neither true nor false. The axiom of pairing, the axiom of infinity, the axiom of union, etc. are all independent from each other, and we all know they are true. If anything non-standard were just a convention, there wouldn't be ZFC as we know it, only ZF. Gödel himself believed that the continuum hypothesis was wrong; without proving or disproving it rigorously, we can still use logical deduction and reasoning to get an agreeable answer.
@viliml2763 1 year ago
@@Noname-67 "The axiom of pairing, the axiom of infinity, the axiom of union, etc. are all independent from each other and we all know they are true." Define "true". None of them describe the physical universe; there's no reason someone can't say they're false and work with that.
@johnwick2018 2 years ago
I didn't understand a single thing but it is awesome.
@KimTiger777 2 years ago
Math is art, as one needs creativity to arrive at new solutions. Big WOW!
@zfyl 2 years ago
Okay, this is actually a fair point. Totally agree.
@Rotem_S 2 years ago
Also because it's (sometimes) beautiful and can engage deeply
@bobsanders2145 2 years ago
That's everything though, not just math.
@warpdrive9229 2 years ago
I wait for this video eagerly every year! Much love from India :)
@AUniqueName 1 year ago
These videos are severely underrated. Thank you for the knowledge you share, and hopefully millions of people will be watching these per week. It's so good for people to know about these things.
@jordanweir7187 2 years ago
I love how you guys don't leave out the gory details; that's what we all wanna see hehe. Also great to have an update each year.
@warpdrive9229 2 years ago
This was just awesome! See you guys next year again. Much love from India :)
@KeertiGautam 2 years ago
I don't understand much, but I feel happy that good science is happening. It means there's still some sense and logic alive in this world 😄
@Irrazzo 2 years ago
1:01 "What happens inside their billions of hidden layers". I think you confused layers with parameters, or weights, here. The largest GPT-3 version for instance has 96 layers and 175 billion parameters.
@shambhav9534 2 years ago
Parameters are whatever the starting nodes pick up and layers are layers, right? Or are parameters the starting nodes themselves?
@Irrazzo 2 years ago
@@shambhav9534 In a simple feed-forward neural network like a multilayer perceptron, you can represent a neuron/node by the equation y = h(w*x + b). x is what goes into the layer that neuron belongs to (if it's the first hidden layer, x is just the unchanged input feature vector), and y is what goes out. w are the weights (all the edges) connecting all the neurons in the previous layer with the neuron we're currently looking at; b is a bias. '*' is a dot product. h is a nonlinear activation function. The union of all weights and biases of all neurons across all the layers are the parameters, which are learned during training.
@shambhav9534 2 years ago
@@Irrazzo Okay I get it now.
@Irrazzo 2 years ago
Just one more thing about layers: instead of thinking of layers in terms of the nodes of which they consist, you can also think of them in terms of the data that flows through your network (the x's and y's). Then, layers are different, increasingly abstract representations of your data, connected via transformations, or functions. And the complexity, the 'billions', are due to the enormous size of the function space of the overall function (transformation) which the network approximates by a series (or rather, composition) of functions which only slightly differ from one to the next.
@shambhav9534 2 years ago
@@Irrazzo I understood nothing of that, but I do think I understand layers. They're layers which modify the starting input, and at the end that input becomes the output. I tried (just tried) to make a neural network back in the day; I think I know the basics.
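To make the layers-versus-parameters distinction from this thread concrete, here is a toy parameter count for a small multilayer perceptron (illustrative sizes, nothing like GPT-3's):

```python
# Each layer owns a weight matrix plus a bias vector, so even a few
# layers can hold many parameters.
layer_widths = [784, 512, 512, 10]  # input, two hidden layers, output

params = sum(
    w_in * w_out + w_out            # weight matrix + bias vector per layer
    for w_in, w_out in zip(layer_widths[:-1], layer_widths[1:])
)
print(f"{len(layer_widths) - 1} layers, {params:,} parameters")
# -> 3 layers, 669,706 parameters
```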
@jman997700 2 years ago
This is the best news I've heard all year. People want to know about the good news too.
@zfyl 2 years ago
What is good about these things? Whom will this benefit?
@nullbeyondo 2 years ago
@@zfyl If you want a really accurate answer, then it is "what" will this benefit, which is mainly all of our technology. And only if they're used right will they improve the quality of life overall; but there's no guarantee on human behavior.
@NovaWarrior77 2 years ago
These are awesome! I'm glad we don't just have to look back to textbooks to see cutting-edge advances!
@frankferdi1927 1 year ago
What I dislike is that many videos, this one included at some points, reward before there is proof, stimulating excitement in the viewers. Generating publicity is important, I do know that.
@aniksamiurrahman6365 2 years ago
What what what what what? Finally, such a result in continuum hypothesis! Unbelievable.
@dylanparker130 2 years ago
I love these videos & QM's articles too!
@ChocolateMilkCultLeader 2 years ago
Thanks for making these. Very important.
@srivatsavakasibhatla823 2 years ago
The last one made me remember what David Hilbert implied: "Physics is too complicated to be left to physicists alone."
@miguelriesco466 2 years ago
Hey it was pretty nice! Just to clear things up, the continuum hypothesis is whether aleph 1 is the cardinality or size of the real numbers. By definition it is the smallest infinity greater than aleph 0.
@IvanGrozev 2 years ago
We don't know the size of the set of real numbers; we just know it's bigger than aleph_0. It can be aleph_1, aleph_2... it can even be monstrously big, like aleph_{omega_1}, etc. And in the current state of the most widely accepted axiomatization of mathematics, called ZFC, it is impossible to solve the continuum hypothesis. Someone watching this video gets the impression that the real numbers are aleph_1 in size, which is not true.
@sweetspiderling 2 years ago
@@IvanGrozev Yeah, this video is all wrong.
@richardfredlund3802 1 year ago
That equivalence between infinite-width NNs and kernel machines is really a very surprising and interesting result.
@nichtrichtigrum 2 years ago
With only a high-school maths background, I couldn't understand any of the concepts in the video. I'd be very happy if you could explain in more detail what a Liouville field actually is, what a Gaussian free field is, and so on.
@badalism 2 years ago
We have known for a while that an infinite-width neural network + SGD is equivalent to a Gaussian process.
@zfyl 2 years ago
Thanks for single-handedly eradicating the breakthrough level of that paper 😅
@Bruno-el1jl 2 years ago
Not for DNNs, though.
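For the curious, the Gaussian-process side of that claim can be illustrated in a few lines of Python (numpy assumed; the RBF covariance is only a stand-in for the architecture-derived kernel the NN-GP correspondence actually uses):

```python
import numpy as np

xs = np.linspace(-2, 2, 50)
# Covariance matrix of the process under an RBF kernel on a grid of inputs.
K = np.exp(-0.5 * (xs[:, None] - xs[None, :]) ** 2)

# Each draw is one random "function" -- the kind of object a wide,
# randomly initialized network is claimed to converge to.
rng = np.random.default_rng(1)
fs = rng.multivariate_normal(np.zeros(len(xs)), K + 1e-8 * np.eye(len(xs)), size=3)
print(fs.shape)  # (3, 50): three sampled functions evaluated on the grid
```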
@robertschlesinger1342 2 years ago
Very interesting, informative and worthwhile video. Be sure to read the linked articles.
@Psychonaut165 1 year ago
Out of all the science channels I understand nothing about, this is one of my favorites.
@edgedg 2 years ago
My favourite videos of every year!
@YouChube3 2 years ago
Natural numbers, floating points, and that third set I couldn't bear even to try to explain. Thank you, narrator?
@RegiKusumaatmadja 1 year ago
Superb explanation! Thank you for the video
@chilling00000 2 years ago
Isn't the equivalence of wide NNs and kernels known for a long time already…?
@satishkpradhan 2 years ago
Even I thought so... but as I saw all the comments of people in amazement, I was confused. Thank God someone else also thinks so... else I'd have thought to reread everything I had learned, or revisit my analytical thinking.
@StratosFair 2 years ago
It is in fact (part of) what my Master's thesis was about and I am quite confused because indeed this has been known for some time already
@David-rb9lh 2 years ago
It's about DNNs here.
@StratosFair 2 years ago
@@David-rb9lh I did a bit of digging, and it turns out that the paper which introduces the result (wide deep neural networks are equivalent to kernel machines) was in fact written in 2017. Now don't get me wrong, this is a very nice result, but by no means a 2021 breakthrough, unfortunately.
@David-rb9lh 2 years ago
@@StratosFair I agree with you. I haven't dug too much into the details, to be honest.
@akshaysingh11990 2 years ago
I wish I could live a million years and watch all the content ever created.
@J3Compton 1 year ago
Love this! It would be nice to have the URLs of the papers here, if possible.
@mdoerkse 2 years ago
Interesting that all three breakthroughs have to do with connections between different theories and 2 of them are mapping something useful to something easy to compute.
@zfyl 2 years ago
What useful thing?
@mdoerkse 2 years ago
@@zfyl Deep neural nets and quantum physics/gravity.
@seenaman96 2 years ago
I learned about kernels back in 2017 when using SVM... How are kernels breakthroughs? If you have inputs that are not activated in 1 dimension, exploding to a higher dimension will not include them... So it's fine to skip the work, DUH
@mdoerkse 2 years ago
@@seenaman96 I'm not a mathematician and I don't know anything about kernels, but the video wasn't saying that kernels are the breakthrough. It's saying they are the old, easily computable thing that neural nets can be mapped to. The mapping is the breakthrough.
@pvic6959 2 years ago
I love how Google showed up in both the physics and the math/comp-sci breakthrough videos. It shows how much they're doing and how much they're pushing humanity forward, little by little. Love them or hate them, it's so cool to see science being done!
@martinschulze5399 2 years ago
Google is not altruistic ;)
@LA-eq4mm 2 years ago
@@martinschulze5399 as long as someone is doing something
@willlowtree 2 years ago
I have great respect for the scientists working at Google, but as a company it is inevitable that their goals are not always aligned with humanity's interests.
@pvic6959 2 years ago
@@willlowtree Yeah, my comment wasn't about goals or anything, just that they're doing so much science and sharing a lot of it with the world.
@baronvonbeandip 2 years ago
@@martinschulze5399 Water is wet. Nothing is altruistic.
@monad_tcp 2 years ago
So they proved the equivalence between convolution kernels and neural networks. As someone who does research in computer graphics, I always had the feeling that they were very close, as you could use them together and sometimes even replace one with the other.
@szymonbaranowski8184 1 year ago
Doesn't seem like any great or surprising breakthrough, then.
@elmaruchiha6641 2 years ago
Great! I love the video, the animations, and the topic!
@quicksilver0311 2 years ago
Am I the only one who was totally clueless for all 11 minutes? This video literally gives me "What am I doing with my life?" vibes and I love it. XD
@tetomissio8716 2 years ago
Fantastic set of videos
@scifithoughts3611 2 years ago
Great video series!
@droro8197 2 years ago
Talking about the continuum hypothesis without mentioning the results of Cohen and Gödel is pretty much a crime. Basically, the continuum hypothesis is independent from the rest of the set theory axioms and can be assumed to be true or false. I guess the real problem here is talking about a very heavy math problem in a 10-minute video…
@MadScientyst 1 year ago
I'd sum this up with a reference to the title of a book by Eric Temple Bell: 'Mathematics: Queen and Servant of Science'... brilliant read & exposition, as per this Quanta snippet!!
@raajjann 2 years ago
Great exposition!
@user-ei8yd3tm9l 2 years ago
Towards the end of the video, I was like: this is pretty much why my naive thought of majoring in pure math got crushed after first-year university... math before university is nowhere close to real hard-core math, which is a different beast altogether.
@piercevaughn7000 2 years ago
Excellent intro. Edit: excellent everything. I'm pretty clueless on all of this, but this was awesome.
@mobjwez 2 years ago
would be nice to see how these theories and works can be applied to real-world situations, cheers
@josueibarra4718 1 year ago
Gotta love how Gauss still somehow manages to butt into present-day groundbreaking discoveries.
@gettingdatasciencedone 1 year ago
I love these intro videos that try to convey the complexity of recent advances. One small problem with this video is that the opening line is not, strictly speaking, true. The 1950s neural networks did not use the same learning rules as the human brain; they were very simplified models based on a bunch of assumptions.
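As a concrete reminder of how simplified those 1950s models were, here is a sketch of a Rosenblatt-style perceptron update in Python (the toy data is invented for illustration):

```python
def train_perceptron(samples, epochs=10, lr=1.0):
    # samples: list of (features, label) pairs with label in {-1, +1}.
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in samples:
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                # Misclassified: nudge the separating hyperplane toward x.
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# Linearly separable AND-like toy data; the rule converges on it.
data = [((0, 0), -1), ((0, 1), -1), ((1, 0), -1), ((1, 1), 1)]
print(train_perceptron(data))
```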
@andraspongracz5996 2 years ago
Got halfway through the video and stopped. I wonder if the creators ever asked the scientists in the video (or any expert, really) to check the final version of the narration. It is full of inconsistencies, and in the case of the second segment (continuum hypothesis) just completely off. We have known that the continuum hypothesis is independent from ZFC (the standard system of axioms of set theory) for nearly 60 years. It was famously Paul Cohen who proved this, and he was the one who developed the technique of forcing (in order to prove this result and others). He even got a Fields Medal for his work. I'm not sure about the relevance of the Asperó-Schindler theorem ("Martin's Maximum++ implies Woodin's axiom (∗)"), as I'm not a set theorist, but it must be much more subtle than what the video suggests. It has been well understood for decades what the possible aleph indices of the continuum can be. In particular, it is not necessarily aleph_1, as suggested early on in this video and contradicted later. The video has very nice graphics and catchy phrases, but the content is just wrong. It was quite cringey to listen to it, really.
@pingdingdongpong 2 years ago
Yea, I agree. I know enough set theory (and it ain't much) to know that this is a bunch of hogwash.
@Macieks300 2 years ago
Yes, I agree. Set theory basics are easy enough for undergraduates to understand, so it's the most approachable subject among all of these, but hearing how wrong their explanation is, I now must wonder how wrong their explanations of the other discoveries are.
@Quwertyn007 2 years ago
6:33 Saying an axiom is "likely true" makes no sense, unless it were to follow from other axioms and thus be unnecessary. Axioms are what you start with; you can start with whatever assumptions you want, and the best they can do is not contradict each other and lead to interesting/useful mathematics. Math doesn't take the physical world into account; it is only based on axioms. Maybe you could make an argument about this axiom likely being related to the physical world in some way, which in some non-mathematical sense would make it "true", but that seems rather difficult.
@Quwertyn007 2 years ago
@FriedIcecreamIsAReality I think you make a good point, but I don't think many people would understand "likely true" as "intuitively making sense". That's just not what "true" means.
@Quwertyn007 2 years ago
@FriedIcecreamIsAReality I'm still just a mathematics student, so I'm not in the best position to judge whether it really is used this way, but this video isn't aimed at professors, so I think the phrasing is at least misleading
@kravandal 2 years ago
Omg, I can't wait for next year's video.
@Rawi888 2 years ago
Thanks for making me feel smart.
@peterb9481 2 years ago
Wow, all so interesting. Good video.
@nateb3277 2 years ago
I discovered Quanta only a few months ago but already love coming back to them for this kind of quality content on new developments in science and tech :) Like it's well written, well animated, and easily understood *chef's kiss*
@caracasmihai01 2 years ago
My brain had a meltdown when watching this video.
@viniciush.6540 2 years ago
"This enables us to compute things that physicists don't know how to compute." Oh man, how I love this phrase lol.
@charlesvanderhoog7056 2 years ago
Kernel machines, new? We used variance analysis in multiple dimensions as far back as the 1970s, and it was developed into what is called positioning in marketing. These techniques enable the researcher to extract immense amounts of data from small samples.
@deantoth 2 years ago
I've watched several of these breakthrough videos, and although they're extremely interesting, you simplify each concept so much that rather than clarifying the topic, you make it more opaque. And just when I think you are about to provide some insight, you move on to the next segment. You could spend a few more minutes on each topic, OR make a full video per topic, please! Thank you for your hard work.
@Amir_404 2 years ago
A bit of a nitpick, but "neural networks" in computer science (or at least the ones people use to solve problems) are not comparable to the neural networks in the brain. The two fundamental differences are that the computer ones are "feed-forward" and synchronous; in English, every layer fires at the same time and there are no loops. It is not that we can't make a neural network more similar to a brain (there is a lot of interesting research going on), but nobody has found an effective way of training those types of networks.
@thanhtunghoang3448 2 years ago
The first breakthrough is called Neural Tangent Kernels, first introduced in 2018 by Arthur Jacot at EPFL. He was, at that time, not a Google employee. Attributing this breakthrough to Google is unfair and misleading.
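For reference, the object Jacot et al. introduced, the neural tangent kernel, is usually written as below (standard notation; a summary rather than a quote from the paper):

```latex
% Neural tangent kernel of a network f_theta at parameter setting theta:
\[
  \Theta_\theta(x, x') \;=\;
  \big\langle \nabla_\theta f_\theta(x),\; \nabla_\theta f_\theta(x') \big\rangle .
\]
% In the infinite-width limit this kernel stays (nearly) constant during
% training, so gradient descent on f behaves like a kernel method.
```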
@WilliamParkerer 2 years ago
No one's attributing it to this Google employee
@Ashallmusica 2 years ago
I'm the least educated person watching this (I only completed junior school), now 21 years old. I just get curious about different things, and clicking on this video got me to learn a new word: aleph. It's amazing for me; I still didn't understand much here, but I love this.
@JustNow42 1 year ago
If you would like to crack anything, try group theory. Split observations into groups and then use groups of groups, etc.
@SilBu3n0 2 years ago
incredible video!
@saugatbhattarai9826 2 years ago
I like your explanation... and thank you for the updates...
@cobywhitw5748 1 year ago
Does anyone know where I can read the paper about the Deep Neural Networks shown in the video??
@SolaceEasy 2 years ago
Man, math's mysterious.
@Fan-fb4tz 2 years ago
great videos always!
@UsamaThakurr 2 years ago
Thank you
@domdubz7037 2 years ago
2021 and Gauss is still with us
@lebiquo8501 2 years ago
God, I would love a "breakthroughs in chemistry" video.
@goldensnitch1614 2 years ago
Great vid! BTW, is the Simons Foundation at 11:08 founded by the guy who made Renaissance Technologies?
@diegonayalazo 2 years ago
Thanks for sharing
@a.movement 2 years ago
Appreciate this!
@deleted-something 1 year ago
I knew the moment they started speaking about the continuum hypothesis that this was gonna be interesting.
@kamabokogonpachiro6797 2 years ago
"When you watch a video, you get the sensation of understanding, but you never actually learn anything" ~ Veritasium
@bobSeigar 2 years ago
John Conway started my love for math. Rest in peace.
@Po0pypoopy 1 year ago
I wish I were smart enough to contribute to humanity like these people. I would feel so fulfilled in life :/
@EM-qr4kz 2 years ago
You have a square with vertices A, B, C, D. Take all the parallel straight segments from side AB to side CD. This set of line segments is aleph_1 in size, greater than the set of straight segments that make up an infinite line... This is my observation. I do not know if it is true, but it is interesting that we could say when a body is one-dimensional or two-dimensional, not in terms of geometry but through set theory.
@nicholasb1471 2 years ago
This video makes me want to do my calculus 3 homework. If only it wasn't winter break right now.