How AI Discovered a Faster Matrix Multiplication Algorithm

  1,344,203 views

Quanta Magazine

A day ago

Researchers at Google's research lab DeepMind trained an AI system called AlphaTensor to find new, faster algorithms for an age-old math problem: matrix multiplication. Advances in matrix multiplication could lead to breakthroughs in physics, engineering and computer science.
AlphaTensor quickly rediscovered - and in some cases surpassed - the reigning algorithm discovered by German mathematician Volker Strassen in 1969. Mathematicians then took inspiration from the results of the game-playing neural network to make advances of their own.
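For anyone curious what Strassen's trick actually looks like, here is a minimal pure-Python sketch of his seven-multiplication scheme for 2x2 matrices, checked against the eight-multiplication schoolbook method (variable names are illustrative):

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices with 7 multiplications (Strassen, 1969)."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]

def naive_2x2(A, B):
    """Schoolbook method: 8 multiplications."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
assert strassen_2x2(A, B) == naive_2x2(A, B)
```

Applied recursively to the blocks of larger matrices, saving that one multiplication per level is what lowers the overall cost.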
Read the full article at Quanta Magazine: www.quantamagazine.org/ai-rev...
Correction: At 2:53 in the video, the text previously read "67% less" but has been changed to "67%" for accuracy.
00:00 What is matrix multiplication?
01:06 The standard algorithm for multiplying matrices
02:06 Strassen's faster algorithm for matrix multiplication
03:55 DeepMind AlphaGo beats a human
04:28 DeepMind uses AI system AlphaTensor to search for new algorithms
05:18 A computer helps prove the four color theorem
06:17 What is a tensor?
07:16 Tensor decomposition explained
08:48 AlphaTensor discovers new, faster matrix multiplication algorithms
11:09 Mathematician Manuel Kauers improves on AlphaTensor's results
- VISIT our Website: www.quantamagazine.org
- LIKE us on Facebook: / quantanews
- FOLLOW us on Twitter: / quantamagazine
Quanta Magazine is an editorially independent publication supported by the Simons Foundation: www.simonsfoundation.org/
#math #AlphaTensor #matrices

Comments: 1,000
@alexanderkvenvolden4067 11 months ago
So to make matrix multiplication more efficient, they used machine learning, which relies heavily on matrix multiplication. Very cool!
@aritroshome8480 11 months ago
The earth is flat.
@cyancoyote7366 11 months ago
@@aritroshome8480 Your brain is flat, my friend.
@SuperMaDBrothers 11 months ago
dont read into it chief
@pyropulseIXXI 11 months ago
this video is pure hyperbole. They said seasoned mathematicians haven't mastered matrix multiplication. So fcking stupid
@whannabi 11 months ago
To launch a rocket stage filled with fuel even further you indeed need another stage that also contains fuel in the first place.
@sheevys 11 months ago
One cool thing from the paper which didn't make it into the video: instead of using the number of multiplications as your cost function, one can use the actual timings from running each algorithm. This makes it possible to find a fast multiplication algorithm tailored to the device it's running on. Interestingly, they showed that for different hardware platforms, different algorithms indeed performed better.
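The timing-as-cost-function idea can be sketched with the standard library's timeit; the two candidate routines below are stand-ins of my own, not the paper's algorithms:

```python
import timeit

def matmul_ijk(A, B):
    """Textbook triple loop."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

def matmul_ikj(A, B):
    """Same arithmetic, different loop order / memory-access pattern."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for k in range(n):
            aik, row_b, row_c = A[i][k], B[k], C[i]
            for j in range(n):
                row_c[j] += aik * row_b[j]
    return C

n = 64
A = [[(i * j) % 7 for j in range(n)] for i in range(n)]
B = [[(i + j) % 5 for j in range(n)] for i in range(n)]

# Cost signal = measured wall time on *this* machine, not operation count.
candidates = {"ijk": matmul_ijk, "ikj": matmul_ikj}
timings = {name: timeit.timeit(lambda f=f: f(A, B), number=3)
           for name, f in candidates.items()}
best = min(timings, key=timings.get)
print(f"fastest on this device: {best}")
```

Which candidate wins genuinely depends on the interpreter and hardware, which is exactly the point the comment makes.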
@AleXander-eo3iz 11 months ago
Naturally, one might ask what platform would provide the fastest algorithm and why.
@beargiles4062 11 months ago
I don't know the details, but there are now some logic designs where addition is significantly slower than multiplication! If I had to guess, it's because multiplication of 2 n-bit numbers will always fit within 2n bits. In many common scenarios you'll use all 2n bits of the result. In contrast, with addition of 2 n-bit numbers plus a carry bit you'll get, at most, an n-bit number plus a carry bit. (Ignoring two's complement and that it would be a 'borrow' bit.) That's a tremendous waste if you're working with 64-bit words, much less 128- or even 256-bit words. I know - dedicated hardware can use 65-bit words internally. Or you could write a library that quietly converts the data to 63-bit words, plus a carry, for calculations and then converts it back to the standard notation at the end. You'll lose a little bit of range, but it might be an acceptable tradeoff. (With floating point numbers you would perform a shift, so you would just lose a little bit of precision.) It's some work up front, but it might be worth it if you can keep everything in the innermost cache until you've finished your calculations. But that's all just guesswork based on some classes that are now decades out of date...
@alphalunamare 11 months ago
I think that is just incidental dust. Consider, for example, that the elements of the matrices are objects defined in a C++ class, say. Then you have no access to architectural bias. The complexity of the object itself overrides any platform consideration.
@Henrix1998 11 months ago
​@@AleXander-eo3iz ASIC
@gubgubbubbub 11 months ago
More likely, it's because platforms have different performance profiles for addition, multiplication, and memory accesses, and in the case of different ISAs maybe even different numbers of registers to accumulate dot products into. For example, if computer A has memory access that's four times faster than computer B's, it's probably worth spending more time on memory-intensive algorithms which use fewer multiplication/addition operations.
@Wooksley 11 months ago
I finished university 4 years ago and I still get nightmares about my diploma being revoked because they’ve discovered that I failed linear algebra exam. Doing matrix multiplication by hand is a literal nightmare for me.
@francogonz 11 months ago
If the mnemonic is too hard for you, try it by definition 🤗 The sigma-sum form makes the method a lot easier to memorize
@shoam2103 11 months ago
For a lot of cases there are arguably better alternatives. But numerical computations are better left for computers anyway..
@Wooksley 11 months ago
@@francogonz I’m just really bad at mental arithmetic. And even so, I didn’t actually fail it…
@Penrose707 10 months ago
@@Wooksley I personally find matrix multiplication by cofactor expansion to be the easiest method to compute by hand
@Wooksley 10 months ago
@@Penrose707 I find the best way to do matrix multiplication is to write some python code. Why would I ever do it by hand outside of that dreadful exam? It’s not the 17th century…
@ABaumstumpf 5 months ago
For matrix multiplications, Strassen is the best overall - it works for any size and any form of matrix. But there are of course faster methods for specific cases: algorithms that are faster for large matrices, ones that are better for diagonalised matrices, etc. They found an algorithm for matrices containing only 1s and 0s that is slightly faster - single-digit percentages faster. It is an achievement and does help, but the bigger benefit might just be that with specialised hardware it can use fewer multiplication units - thus less power.
@razorblade413 5 months ago
So based on the video, the discovery from AlphaTensor doesn't work for real matrix multiplication because it's only for 0s and 1s. But in 3D games, for rotating or translating an object, you need numbers other than zeros and ones when updating in a game loop.
@ryanfeng 4 months ago
Multiplication by the Fourier matrix only needs N·log N operations, so for some specific matrices it is definitely possible to reduce the complexity. But maybe it is not generic.
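The N·log N claim refers to the fast Fourier transform: multiplying a length-N vector by the N x N Fourier matrix directly costs N² operations, while the Cooley-Tukey recursion produces the same result in N·log N. A stdlib-only sketch (N assumed to be a power of two):

```python
import cmath

def dft_direct(x):
    """Multiply x by the N x N Fourier matrix: N^2 operations."""
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * j * k / n) for j in range(n))
            for k in range(n)]

def fft(x):
    """Cooley-Tukey radix-2 FFT: N log N operations (N a power of two)."""
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

x = [1.0, 2.0, 3.0, 4.0, 0.0, -1.0, 2.5, 1.5]
assert all(abs(a - b) < 1e-9 for a, b in zip(fft(x), dft_direct(x)))
```

So the speedup comes from the structure of this particular matrix; as the comment says, it does not transfer to generic matrices.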
@achdetoni5008 4 months ago
It kind of is pretty dumb. Multiplication of binary numbers is very fast; it's not like that algorithm is really needed there anyway. The general case is way more interesting.
@webgpu 2 months ago
best overall? is there anything "particularly best" ? 🤣
@ABaumstumpf 2 months ago
@@webgpu "best overall? is there anything "particularly best" ? 🤣" I mean - no - why would there be? Why would any sane person assume that a solution that is the best in general would be the best for every specific scenario? Modern desktop CPUs are the best for general computation. Are they the best at matrix multiplication? No. The best at encryption? No. Best at image recognition? No. Best at network traffic? No. There are specialisations when you need those. Hopefully that was simple enough for you to comprehend.
@QuantenMagier 11 months ago
I read the corresponding papers and this is a pretty good recap video on the topic. Kudos for visualizing the algorithms in an understandable way and even giving some background information from the researchers! 🧑‍🎓
@missionaryrav628 10 months ago
Tbh I didn't understand shit from the visuals... yes, of course it seemed legitimate and got the vibe going... but it wasn't understandable to a high schooler
@foncbi 2 months ago
​@@missionaryrav628 git gud kid
@alphalunamare 11 months ago
I devised an algorithm in 1977 that utilised 24 multiplications and 69 additions for the 3x3 case but didn't get much further as it was equivalent to Strassen's in the 2x2 case. I did it with pencil and paper. After seeing this video I don't feel so bad now about my failure :-)
@evamkaushik5392 11 months ago
I'd like to learn from you, sir.
@alphalunamare 11 months ago
I'll be totally honest and say that I took Strassen's lead and tried it out in the 3x3 case... hence it was inevitable that it would reduce to his case. There was no spark of a new idea, just a plodding through the algebra; it took me about 12 hours and a few hundred feet of line printer paper. When I opened my door the next morning, Mike's (Atkinson) room opposite seemed open, so I walked in and asked him to look at 'this', and he was overjoyed. He smiled so much I felt that I had done something useful. He asked if he could use it in his class and I said 'Yes! Of course', and then I left Cardiff. Henry and I had shared our office... I am afraid he was probably quite unhappy that I had left the paper all over the place. He was a good mate though :-)
@shoam2103 11 months ago
You 'just' had to continue a 'little' bit 😬
@justingolden21 10 months ago
That's AWESOME
@taranjotsingh3981 9 months ago
That's amazing nonetheless!
@RepChris 11 months ago
I mean _technically_ we've had "faster" algorithms than the Strassen one for quite a while. We're down from an exponent of ~2.81 to ~2.37 for very large matrices. Although an actually usable algorithm that's faster even in relatively special cases is always nice to see.
@NoNameAtAll2 11 months ago
Even recursive Strassen-like algorithms have had better versions than the 2x2 one: rectangular base cases (3x5, iirc) still scale better than DeepMind's 4x4.
@FerrisMcLauren 11 months ago
technically, your comment is useless and provides nothing to the conversation.
@tunafllsh 11 months ago
They are not talking about algorithm complexity for large matrices. This is about the fastest way to compute a 2x2, 3x3, 4x4 and 5x5. Algorithms with "faster" complexity end up having larger overhead and perform worse than Strassen on smaller matrices. With this new algorithm for 4x4, you can use your favorite low-complexity algorithm to break the multiplication down to 4x4 matrices, and then use the decomposition found by AlphaTensor.
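The block recursion described above can be sketched as follows. This illustration uses Strassen's 2x2 pattern as the base scheme rather than AlphaTensor's 4x4 one (whose 47 products are too long to reproduce here), and assumes square matrices whose size is a power of two:

```python
def split(M):
    """Split a matrix into its four quadrant blocks."""
    n = len(M) // 2
    return ([row[:n] for row in M[:n]], [row[n:] for row in M[:n]],
            [row[:n] for row in M[n:]], [row[n:] for row in M[n:]])

def add(X, Y): return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
def sub(X, Y): return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def strassen(A, B):
    """Recursive Strassen multiply; size must be a power of two."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    a, b, c, d = split(A)
    e, f, g, h = split(B)
    # 7 recursive block products instead of 8.
    m1 = strassen(add(a, d), add(e, h))
    m2 = strassen(add(c, d), e)
    m3 = strassen(a, sub(f, h))
    m4 = strassen(d, sub(g, e))
    m5 = strassen(add(a, b), h)
    m6 = strassen(sub(c, a), add(e, f))
    m7 = strassen(sub(b, d), add(g, h))
    top = [r1 + r2 for r1, r2 in zip(add(sub(add(m1, m4), m5), m7),
                                     add(m3, m5))]
    bot = [r1 + r2 for r1, r2 in zip(add(m2, m4),
                                     add(sub(add(m1, m3), m2), m6))]
    return top + bot

def naive(A, B):
    """Schoolbook multiply, for checking the recursion."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[i * 4 + j + 1 for j in range(4)] for i in range(4)]
B = [[(i + 1) * (j + 2) for j in range(4)] for i in range(4)]
assert strassen(A, B) == naive(A, B)
```

Swapping the 2x2 base pattern for a 4x4 one with 47 products is structurally the same move; only the block bookkeeping grows.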
@RepChris 11 months ago
@@tunafllsh Yes, I'm aware. Due to the ambiguity of their wording, they said that Strassen's algorithm was unbeaten, which is technically not true, since there are algorithms with much better asymptotic time, albeit not practical ones. A small nitpick for my entertainment and an opportunity to share some trivia with those who might not know.
@lukealadeen7836 11 months ago
​@@RepChris did you get a kick out of that? How fast is your algorithm?
@KnakuanaRka 5 months ago
The way you described the tensor decomposition problem as a game of removing tensors from the original to reduce it to all zeros as fast as possible, and how you showed it with the Strassen example, reminds me a lot of the Lights Out game where you press a light to flip it and all adjacent to make all the lights dark. The moves in tensor decomposition are a lot more complex of course, but seeing it as a game makes a lot of sense.
@silentblackhole 11 months ago
I didn't follow along with some of the more detailed parts but overall found it to be really interesting. Nice work to everyone involved in this, directly or otherwise.
@heyhoe168 5 months ago
They managed to put AI to work juggling complex algebraic compositions/decompositions and applied the results to optimizing computer calculation tasks. Nothing mind-blowing, but an important step towards self-improving algorithms.
@LordofDorkness94 11 months ago
Minor error at 2:53. Shouldn't it just say 67% rather than 67% less?
@QuantaScienceChannel 11 months ago
Good catch! We'll add a correction in the YouTube description.
@kamel3d 11 months ago
I was confused
@viliml2763 11 months ago
heh they blurred the word "less"
@jerecakes1 10 months ago
​@@viliml2763 good enough :D
@Alexandre-ur1oi 11 months ago
For the sake of completeness: in practical applications one should keep in mind that reducing the number of basic operations is not the main bottleneck for matrix multiplication efficiency; moving data is (bus ~500 MHz, CPU 1-2 GHz). However, without new ideas on this (last not-understood) basic operation (polynomial and integer multiplication are mastered), brute computer force (cleverly used in the present case) is welcome.
@btf_flotsam478 10 months ago
Surely, fewer calculations means fewer movements of data?
@pmnt_ 9 months ago
@@btf_flotsam478 Not necessarily. On hardware, you don't access single numbers; you always access multiple bytes that happen to be adjacent in memory at once (e.g. if the data bus is 128 bits wide, you access 2 or 4 floating point numbers at once, depending on precision). A naive algorithm that uses matrix elements in order can basically halve or even quarter the number of memory accesses. On the other hand, if the optimized algorithm makes heavy use of out-of-order access to the matrix (the technical term is random access), it might "waste" some of the transferred numbers, just because the adjacent numbers are needed at a later step of the algorithm. An asymptotically fastest algorithm might not be the fastest algorithm in practice. That's especially true for applications that are already ridiculously optimized for specific hardware, such as matrix multiplication.
@tappetmanifolds7024 7 months ago
@@btf_flotsam478 Not always. Bott periodicity in 8 dimensions has values 16 times as big. Even though lines of code can be n-strings short, the amount of complexity embedded in a single character can be infinitely large.
@JaxVideos 2 months ago
The caching behavior on nearly every processor will determine the absolute speed limit for MM. One reason Strassen-style partitioning works is that it continuously tessellates the large matrices, which brings down the working-set size and reduces losses from cache overflow. Similar considerations apply to parallel implementations.
@Dismythed 10 months ago
The video never explained how AlphaTensor achieved the matrix calculations with fewer steps than previous algorithms (49 down to 47, or 98 down to 96).
@danielhaacke8641 2 months ago
Indeed! Left me gravely unsatisfied.
@OtakuSanel 11 months ago
Matrix math is just one of those things where, while more efficient equations might be good on the theory side, there are far greater gains from specialized hardware for that kind of math. Whether something takes 8 vs 7 multiplications might be a factor if you are forced to do them consecutively; when you have hardware that does it all at once, one more "lane" of computation for that 8th figure is a moot point. Back in the day there used to be math co-processors, and we have come back full swing: now the GPU is the math co-processor once more, with things like CUDA and tensor cores that are specifically designed for that kind of math. The big question is what other kind of specialized hardware would be of high value, and whether to implement that rather than pushing the limits of math theory to find efficiency.
@ronald3836 11 months ago
If you have to deal with NxN matrices where N grows arbitrarily large and a multiplication takes N^k operations, then faster hardware can't compete with an algorithmic improvement that lowers the exponent k. The point of the 8->7 trick in the Strassen algorithm is that it lowers the exponent k from 3 to 2.808 for larger matrices. However, if N is fixed and the improved algorithm is very complicated, then it may well be faster to use a simpler algorithm. According to a 2016 paper, on then-modern hardware the Strassen algorithm was 20% faster for N=512 (and for larger N the gain will be even larger).
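The effect of lowering the exponent can be checked by counting multiplications; a small sketch (full recursion down to 1x1, which real implementations would cut off much earlier):

```python
import math

def naive_mults(n):
    """Schoolbook multiplication count for an n x n matrix."""
    return n ** 3

def strassen_mults(n):
    """Strassen recursion: 7 subproblems of size n/2 (n a power of two)."""
    return 7 * strassen_mults(n // 2) if n > 1 else 1

for n in (64, 512):
    print(n, naive_mults(n), strassen_mults(n))

# The exponent k drops from 3 to log2(7) ~ 2.807.
assert math.isclose(math.log2(7), 2.807, abs_tol=1e-3)
assert strassen_mults(512) < naive_mults(512)
```

At N=512 full recursion would use 7^9, roughly 40 million multiplications, versus 512³, roughly 134 million; the extra additions and bookkeeping are why the measured gain in practice is far smaller than that ratio suggests.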
@MrMctastics 11 months ago
@@ronald3836 my man here with that DATA. Dayum
@geekswithfeet9137 11 months ago
Wow you’re short sighted, like you have no idea what you’re talking about it’s so wrong
@xghqhj7910 11 months ago
​​@@ronald3836 can u mention a real life application where we'd need a 512 * 512 matrix? Edit: I just remembered photos are basically matrices of pixels. But would still love to hear about any other applications?
@promethuser 11 months ago
⁠@@ronald3836can you cite the paper? Not calling you out or anything but no modern BLAS implementation makes use of strassen's. I'm sure OP understands algorithmic complexity, and they're still right - hardware is the deciding factor for all practical implementations of Matrix mult.
@GrimJerr 2 months ago
The power of AI isn't intelligence, it is a feature of Iteration.
@naysay02 10 months ago
thanks for the accessible yet detailed explanation. really cool
@vardhmansingh3463 11 months ago
Quanta 👌🏻👌🏻 always makes these awesome and informative videos. Some enlightenment every time!
@sayyamzahid7312 10 months ago
Punit paaal Singh 😅
@QuantaScienceChannel 11 months ago
You can read more about this work on Quanta's website: www.quantamagazine.org/ai-reveals-new-possibilities-in-matrix-multiplication-20221123/ Correction: At 2:53 in the video, the text previously read "67% less" but has been changed to "67%" for accuracy.
@simplicitas5113 11 months ago
Error in the percentage of reduction at around 3 minutes. You meant a 33% reduction for 8 rows.
@LucenProject 11 months ago
11:50 What is the second word on the screen?
@Flovus 11 months ago
@@LucenProject It's the initials of the team that published the paper "Discovering faster matrix multiplication algorithms with reinforcement learning".
@Flovus 11 months ago
3:47 Why is the matrix broken into four 4x4 matrices instead of 16 2x2 matrices? When I perform the new algorithm recursively on an arbitrarily large (N x N) matrix, I'd break it into 4*4=16 matrices of dimension (N/4)x(N/4). These "blocks" are then combined exactly the way the scalars in a 4x4 matrix are combined with 47 multiplications, only here the multiplications are performed recursively. This means that the input size N is divided by four in each step of the recursion, and the runtime is N^{log_4(47)} ~ N^{2.777} by the Master theorem. Doing it the other way around, as shown in the video, would be performing Strassen on the topmost call (7 multiplications of problem size N/2). Am I wrong?
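The two exponents in this comment are easy to verify numerically (variable names are mine):

```python
import math

# Master-theorem exponents for the two block decompositions:
# 2x2 blocks with 7 products (Strassen) vs. 4x4 blocks with 47 products
# (AlphaTensor's mod-2 result).
strassen_exp = math.log(7, 2)      # ~2.807
alphatensor_exp = math.log(47, 4)  # ~2.777

assert alphatensor_exp < strassen_exp < 3
```

Two levels of Strassen on 4x4 blocks would give 49 products and the exponent log_4(49) = log_2(7), so 47 products is precisely what pushes the exponent below Strassen's.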
@LucenProject 11 months ago
@@Flovus thanks!
@miloweising9781 11 months ago
Okay but do we know yet if this type of multiplication algorithm will work in any context other than Z/2Z? It is not surprising to think that there are extra tricks to multiplying matrices in Z/2Z that one could use which are useless over the reals.
@davidcotham1939 11 months ago
Good point. Seems like there are some P vs NP implications. If matrices with only binary elements trouble the Google DeepMind team, then doing this with matrices over any non-binary element set would probably be too convoluted.
@caspermadlener4191 11 months ago
The records for ℤ/2ℤ are of course better than those for ℂ, where ab=ba, although this is not as important as the records where ab≠ba, like matrices. But the records are really close to each other.
@SterileNeutrino 11 months ago
@@davidcotham1939 My dear colleague, I fail to see how this has anything to do with P vs NP - this is not search nor is the complexity exponential.
@SterileNeutrino 11 months ago
No, it seems one can apply the improvements to any floating point number (or complex number, for that matter). For ℤ/2ℤ, however, the article says: "In a few cases, AlphaTensor even beat existing records. Its most surprising discoveries happened in modulo 2 arithmetic, where it found a new algorithm for multiplying 4-by-4 matrices in 47 multiplication steps, an improvement over the 49 steps required for two iterations of Strassen’s algorithm. It also beat the best-known algorithm for 5-by-5 modulo 2 matrices, reducing the number of required multiplications from the previous record of 98 to 96. (But this new record still lags behind the 91 steps that would be required to beat Strassen’s algorithm using 5-by-5 matrices.)"
@jakobm.4183 11 months ago
It seems very likely that 49 is optimal over the reals. One way a 47-step algorithm could arise is if two rank-one tensors which are distinct over the reals become equal over ℤ/2ℤ and cancel (since addition and subtraction are the same in ℤ/2ℤ). Also, for some other matrix formats where better algorithms are known for ℤ/2ℤ, those beat the best known algorithms for the real case by exactly 2 multiplication steps.
@Techmagus76 11 months ago
To explain the AlphaTensor result it would be better to show and explain the Karatsuba algorithm, as AlphaTensor found something similar for 4x4 to what Karatsuba did for 2x2. Sure, Strassen beats Karatsuba on 2x2, so it should be mentioned, but for the didactics of the solution, Karatsuba gives more insight into the result.
@tappetmanifolds7024 7 months ago
@Techmagus76 You are on the right track. Karatsuba discovered n^1.58. The improvement was n log n.
@akisuihkonen9056 5 months ago
These are completely different problems. Karatsuba and its extensions like Toom-3 are for polynomial (or big-int) multiplications, whereas this is about matrix multiplications. One source of the mistake might be that after Karatsuba's discovery there was also an algorithm by Strassen (and Schönhage) performing the polynomial multiplication (or convolution) using the Number Theoretic Transform recursively.
@artieschmidt3039 11 months ago
Thank you for your work! Best math and tech channel ❤
@rashid8646 11 months ago
Math just goes over my head, but it is cool that we can automate the trial-and-error aspect of mathematical models.
@dekippiesip 6 months ago
It's better than trial and error, but still kind of mindless. It's like evolution's survival of the fittest.
@ilikegeorgiabutiveonlybeen6705 5 months ago
yeah its greedy with extra steps
@vivekdabholkar5965 11 months ago
Truly amazing solution to centuries-old problem
@debblez 11 months ago
this video finally helped me understand tensors. also super cool
@icekoolkr5046 11 months ago
Love this channel
@Omena0MC 2 months ago
Bruh ai literally made itself faster 💀💀💀
@raheelrehmatbc 14 days ago
14:56
@user-francescob 11 months ago
Have any of these researchers looked at the impact of this new algorithm on specialized matrix libraries such as TensorFlow, and on GPUs? It's possible that the 50-year-old algorithm will execute faster than the new DeepMind algorithm if, for example, there's a significant advantage to using specialized functions tuned for GPU instruction sets. It would be important to know: #1 - the answer; #2 - where the cross-over occurs; #3 - which existing libraries will adopt this new algorithm, and when.
@KabelkowyJoe 5 months ago
I guess - none, because it can only do 1 bit computations :> 10:45
@asicdathens 5 months ago
The first multimedia extensions for CPUs that Intel made, the MMX instruction set, were basically matrix operations, mostly multiplications.
@Iaotle 11 months ago
Already knew about the AI, but didn't know that we managed another reduction! Cool stuff!
@niks660097 5 months ago
You should make a video on constraint-based solvers. It's just amazing that you can bypass all the physics and still do real physical simulation with just a few linear equations, which in turn just use matrix multiplication.
@ictogon 1 month ago
It's not really physics, it's more geometry.
@MathematicFanatic 5 months ago
Both this result and this presentation of it are absolutely spectacular
@estebanmarin002 8 months ago
I love the explanations
@cbunix23 10 months ago
For very large matrices you can get an easy 40% speed-up over the naive algorithm if you order the multiplication and addition operations in a cache-coherent order. I'm skeptical it would be faster than Strassen's, though.
@jimwinchester339 5 months ago
Agreed, but these are considered implementation optimizations, not a different and superior algorithm. The criterion used for a better type of algorithm is a reduction of the overall power to which the matrix dimension is raised; that is, the "big O" [order of complexity] of the algorithm.
@cbunix23 5 months ago
@@jimwinchester339 Yup, I took that class decades ago. ;)
@YouRich92 5 months ago
Algorithms that reorder operations to achieve better computation locality are not simple implementation optimizations. They are genuinely different algorithms. For instance, one can use them to decompose matrices into sub-blocks, which allows better parallelism. Or they can enable you to break your original problem up in a different way, which can give you speed-ups orders of magnitude better than a pure better-multiplication algorithm. Another thing is that algorithms with a better O may be much more difficult to implement in a "computer friendly" way: for instance, they make it difficult to take advantage of long cache-line memory fetches, or poor locality leads to cache thrashing. So, OK, they are more satisfying on paper but worse in real life.
@cbunix23 5 months ago
@@YouRich92 A small follow-up. Suppose we're multiplying two large square matrices (A x B = C). First flip B about the main diagonal; then both A and B can easily be traversed in a cache-coherent order, instead of only A. I'll have to experiment with this to see how well it works.
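The flip-B idea can be illustrated in pure Python; the cache effect itself won't show up at the interpreter level, but the access-pattern change does (names are illustrative):

```python
def matmul_naive(A, B):
    """Textbook multiply: walks B column by column (cache-hostile)."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matmul_transposed(A, B):
    """Flip B about its main diagonal once; then every inner product walks
    two rows left-to-right, the cache-friendly order on real hardware."""
    n = len(B)
    Bt = [[B[k][j] for k in range(n)] for j in range(n)]
    return [[sum(a * b for a, b in zip(row_a, row_bt)) for row_bt in Bt]
            for row_a in A]

A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
B = [[9, 8, 7], [6, 5, 4], [3, 2, 1]]
assert matmul_transposed(A, B) == matmul_naive(A, B)
```

The one-time O(N²) transpose is quickly amortized against the O(N³) multiply, which is why this is a standard trick in optimized BLAS-style kernels.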
@shannonbarber6161 18 days ago
@@jimwinchester339 Many theoretically superior algorithms run much slower than optimized "inferior" implementations - any linked-list case being the poster child. Cache coherency is king.
@Tony-dp1rl 11 months ago
Um .. 4x4 of ones and zeros ... I would just pre-store a cache of combinations of the row results ... 4 bits per row, 16 bits for the second matrix, 4 bit result row = 512KB of nicely cached memory. 4 lookups, done. ;)
@enterprisesoftwarearchitect 11 months ago
Great article!
@peterlindner3283 10 months ago
Thank you Dr Simon for funding this
@PS3PCDJ 5 months ago
Love the recursive nature of computer science
@chrisf1600 4 months ago
Love the recursive nature of the recursive nature of computer science
@macronencer 5 months ago
This is amazing stuff. Just one point of curiosity: why are modulo 2 matrices useful? Is there some real world application that uses just 0 and 1 in a matrix, which can benefit from a speed increase?
@Robbe1912 5 months ago
Everything computer-related can pretty much be broken down into binary computations as far as I know - complex boolean logic algorithms. I hope somebody comments with a real-world application; I am interested too.
@Western_ENT 4 months ago
Thank you for keeping me posted on new discoveries in Maths. Hope u do Physics as well. Subscribed!
@meccamiles7816 5 months ago
Truly extraordinary!
@kin0cho 11 months ago
Well done!
@johnchessant3012 11 months ago
Awesome! But do these results extend to real number matrices? I believe most applications use real number matrices rather than mod 2
@nickfaire 11 months ago
No computer works with real numbers. Computers work with finite fields, such as F2 (which is the same as working mod 2).
@myxail0 11 months ago
​@@nickfaire floating point numbers if you wanna be that picky
@nickfaire 11 months ago
@@myxail0 First, I wasn't being picky. I just explained in a few words that real-number matrices are not used by computers, but matrices over finite fields, and hence these results are helpful. And second, floating-point arithmetic allows you to represent real numbers while working with finite fields. What I said is as "picky" as you can be. What you said adds extra info, but does not correct me, because there is nothing to be corrected in what I said.
@myxail0 11 months ago
@@nickfaire What you said wasn't helpful at all. There's a high chance the commenter used "real numbers" to mean floating-point numbers. It's nothing outlandish; it's what they are for, modeling real numbers. On the other hand, it's a huge step algorithmically from binary matrices to floating-point matrices, which is not trivial at all. Hence why the question you answered is interesting, and your answer isn't. In my opinion.
@shoam2103 11 months ago
@@myxail0 I think you're both correct, but from different perspectives. Being based on binary (even floating points!), the actual efficient and fast algorithms used are binary or a power of 2. I realize you wanted to know from a practical perspective, in terms of real-world applications? The situation can wildly vary between processor architectures. But discounting that, the real bottleneck is memory and bandwidth. Maybe this could help in theory, but I don't think so at all, as long as the input and output sizes of the multiplication don't change.
@homerp.hendelbergenheinzel6649 11 months ago
"The biggest threat that arises from AI is that some people will conclude too early that they have fully understood it."
@jondo7680 5 months ago
You are probably talking about general AI; this thing can do just one thing: find better algorithms for the multiplication.
@homerp.hendelbergenheinzel6649 5 months ago
@@jondo7680 Really? Oh, thanks for the explanation, I didn't know that, because I thought it was a Skynet prototype. What would we have done without your crystal-clear and precise analysis. Please delete your stupid comment and I will delete this one, which I would appreciate. Maybe you can stay away from the Internet for a couple of days as well.
@beepboop6212 5 months ago
@@homerp.hendelbergenheinzel6649 what the hell did he do to you, damn, you really have nothing better to do than diss internet strangers
@jondo7680 5 months ago
@@homerp.hendelbergenheinzel6649 the only thing stupid here is your 🧠 my friend.
@backwashjoe7864 11 months ago
“Don’t praise the machine.” The Simpsons, S05E17 I hear a giant “for now” after each reassurance that we are not being replaced by these software.
@Soupie62 5 months ago
Circuit analysis using SPICE involves converting nodes in electronic circuits into values in a matrix. The node values change with frequency. While this sounds good, limiting a matrix to values of zero and one would impact the ability to model frequency response.
@arifmammadov736 11 months ago
Holy crap, I just understood matrices at a much deeper level than what I was taught in university.
@SockPuppet69 3 months ago
I have zero clue what matrix multiplication is
@luthandomdadane 19 days ago
It's when you multiply a matrix
@MichaelKolczynski 19 days ago
It's when one set of robots that make batteries out of people themselves become batteries for another set of robots
@davidwilkie9551 5 months ago
Not a criticism of what looks like a great systematic approach to a necessarily exhaustive methodology, to which we just add that the Universe of Logarithmic Time Duration Timing is obviously self-defining probability, and will evolve such solutions as demonstrated by Euler's pure-math function, ie it's QM-TIME Completeness cause-effect holography in the rough. (But a proper Mathemagical analysis will deduce the best smoothness choice of processes, fun to imagine for Mathematicians)
@2Oldcoots 2 months ago
Fantastic advance!!!
@NataliaBazj 4 months ago
«Wow! This algorithm uses fewer multiplications!» - «Aha, and it works only when multipliers are zeros or ones.» 😉
@achdetoni5008 4 months ago
It's funny cause it for sure adds more additions than the amount of multiplications it saves. This should make it slower than the regular algorithm?!?!
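A minimal sketch (Python, illustrative only; not from the video) of the trade-off described above: Strassen's 2x2 scheme spends 7 multiplications but 18 additions/subtractions, versus 8 multiplications and 4 additions for the textbook method, so it only pays off when multiplications cost much more than additions.

```python
# Compare the two ways to multiply 2x2 matrices.
# Standard method: 8 multiplications, 4 additions.
# Strassen's method: 7 multiplications, 18 additions/subtractions.

def strassen_2x2(A, B):
    """Multiply two 2x2 matrices with 7 scalar multiplications (Strassen, 1969)."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

def standard_2x2(A, B):
    """Textbook method: 8 scalar multiplications."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
assert strassen_2x2(A, B) == standard_2x2(A, B)  # both give [[19, 22], [43, 50]]
```

On big matrices the scheme is applied recursively to blocks, which is where saving one multiplication per level compounds.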
@warpdrive9229 11 months ago
It blew my mind when Deepmind discovered Strassen's Algorithm in just minutes.
@warpdrive9229 11 months ago
@MorTobXD Good for you, sir!
@conquererme 5 months ago
I think I've found a channel to binge watch.
@arpitgaur4310 2 months ago
Now I see how important it is for researchers to create YouTube videos of their research, because it feels very daunting to read research papers. YouTube videos give more reach and readability, which will increase citations.
@foo0815 11 months ago
Remember that matrix multiplication is the basis of neural nets, so AI is actually improving itself already!
@tolkienfan1972 5 months ago
Not yet. The multiply it improved is over GF(2). The multiplies used in AI are finite-precision "reals", aka floating point. 😁
@AndreiFierbinteanu 5 months ago
Thanos: I used matrix multiplication to improve matrix multiplication
@vorpal22 5 months ago
I was going to say exactly what @tolkienfan1972 did... This only functions with entries in F_2 or GF(2) (same thing, different convention: F_2 means the unique finite field of two elements up to isomorphism, whereas GF(2) means the Galois field of two elements, and they are the same). In neural networks, the matrices used don't have entries from F_2: they have entries that are floating-point numbers, and thus this improvement isn't applicable to neural nets.
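To illustrate the distinction drawn above, a minimal sketch (Python; `matmul` is a hypothetical helper, not from any library): the same pair of matrices multiplied over the integers/reals versus reduced mod 2, i.e. over F_2.

```python
def matmul(A, B, mod=None):
    """Naive matrix product; if mod is given, reduce entries (e.g. mod=2 for F_2)."""
    n, m, p = len(A), len(B), len(B[0])
    C = [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
         for i in range(n)]
    if mod is not None:
        C = [[x % mod for x in row] for row in C]
    return C

A = [[1, 1], [0, 1]]
B = [[1, 0], [1, 1]]
print(matmul(A, B))          # over the integers/reals: [[2, 1], [1, 1]]
print(matmul(A, B, mod=2))   # over F_2: [[0, 1], [1, 1]]
```

The 2 in the first result wraps to 0 in the second: that wrap-around is exactly what makes an F_2-only speedup inapplicable to floating-point neural-network matrices.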
@dieSpinnt 5 months ago
@@AndreiFierbinteanu Theranos: We used only one drop of your blood ... But here, look at this nice and colorful presentation about matrix multiplication. ... oh and btw. we also burned all of your investment capital for that. Good bye!
@xl000 5 months ago
You should have a look at how compilers compile earlier versions of themselves, and the tricks they have to use in terms of the feature set of the first compilers. This is too long to explain in a comment, and it's well documented in the Dragon book (a book on compilers).
@joaowiciuk 11 months ago
The final comment on mathematicians being replaced by computers was very cliché
@myxail0 11 months ago
it was a cliché question
@shannonbarber6161 18 days ago
There once was a job called "Computer".
@shrdshld 4 months ago
Quaternions are used for 3D rotation: 16 multiplies versus 27. The problem with matrices is floating point; composing matrices increases the level of imprecision quickly.
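A sketch of the count cited above (Python; `quat_mul` is a hypothetical helper name): the Hamilton product composes two rotations with 16 scalar multiplications, whereas composing two 3x3 rotation matrices takes 27.

```python
def quat_mul(q, r):
    """Hamilton product of two quaternions (w, x, y, z): 16 scalar multiplications."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

# Composing the same two rotations as 3x3 matrices needs 3*3*3 = 27 multiplications.
identity = (1.0, 0.0, 0.0, 0.0)
q = (0.0, 1.0, 0.0, 0.0)  # 180-degree rotation about the x axis
assert quat_mul(q, identity) == q
assert quat_mul(q, q) == (-1.0, 0.0, 0.0, 0.0)  # -q and q encode the same rotation
```

Unit quaternions also drift less under repeated composition, since renormalizing 4 numbers is cheaper and better behaved than re-orthogonalizing a 3x3 matrix.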
@stabgan 11 months ago
This video makes me so happy:)
@cjburian1 5 months ago
It's mind-boggling that Strassen's method wasn't discovered a century ago or more. Mathematicians seem to have been expanding and reducing and playing with equations since antiquity. I would never have assumed that multiplying two 2x2 matrices cannot be done with fewer than 8 multiplications, but simply assumed that the standard way had been found by consensus to be objectively or subjectively the 'best' way.
@voidwalker9939 5 months ago
It's always a case of what's more intuitive to teach. The "classic" method is neat and understandable, and efficient for teaching purposes (no exam will ask a 5x5 * 5x5 matrix multiplication). Machines don't have an instinct, they execute code so what matters is only the code itself, not how understandable it is to a human. The strength of this method of finding things is that they can just tune the parameters to whatever specific case they need and the model will, hopefully, eventually converge to a solution or to a path to the solution, which is incredibly fascinating.
@janekschleicher9661 5 months ago
Also, most mathematicians of past centuries were often extremely good at calculating things in their heads, and probably more often than not savant-like. They had to do all kinds of calculations either in their heads or by hand, with close to no support from machines. And in that environment, there is not much to gain from reducing multiplications and having more additions instead. For smaller numbers it's close to the same effort for (very skilled) humans, and for bigger numbers they'd usually just approximate the calculation with precalculated logarithmic tables, and then it's just an addition again. It's just computers that can do additions much faster than multiplications or lookups in (big) precomputed logarithmic tables. What I miss in the video is why the results for matrices \in [-1, 0, 1]^2n matter. Shouldn't a multiplication of these 3 numbers take the same computation time as an addition? I guess we can use the algorithm to adapt to R^n spaces, but that's somehow unmentioned. Or maybe I don't understand it.
@niallrussell7184 5 months ago
AlphaGo found new strategies in the game, which has been played by humans for 4500 years. The human brain has its limits.
@gehirndoper 5 months ago
There was never a use for Strassen's method before the use of digital computing. If you have to multiply matrices by hand (or abacus) it's more important to not make mistakes and to not get ridiculous intermediate values. Strassen is also not as numerically stable.
@alphalunamare 5 months ago
@@gehirndoper It is as numerically stable as the normal algorithm. You are right about the digital computer and Abacus though :-)
@VideoNOLA 11 months ago
Reminds me of the famous Seven Queens problem in Chess.
@tappetmanifolds7024 5 months ago
Eight Queens problem has a solution.
@noelwass4738 1 month ago
Strassen's algorithm was the work of a genius because he paved the way and showed it was possible. Without that breakthrough nobody would have tried. I have no idea how he came up with his method, in particular because the additions needed before the multiplications are done are not at all obvious. The graphics here showed nicely how the 8 3d tensors could be used to represent the operations needed for the matrix multiplication of two 2 x 2 matrices in the standard algorithm and similarly for Strassen's algorithm.
@StephenC555 5 months ago
If it is only for some cases, it may not be really useful in application. It is like adding an "if" condition to check whether a case should use the old approach or the new approach to get a relatively faster result; that checking logic may itself be time-consuming, and the time consumed could instead be spent solving the calculation directly. Also, this only adds complexity to the straightforward calculation.
@cheesySamar 4 months ago
Me, who hasn't even graduated high school, watching the video and reading all the comments/arguments like I know better
@gg.3812 11 months ago
As long as people do not start calling computer programs 'proofs' with no formal error analysis, I'm fine with machines helping out. What I do not like is calling something a theorem or a proof when it is clearly not. And it is already happening in many areas.
@MrDomiko 2 months ago
"within minutes it rediscovered Strassen's algorithm" 😳
@ip-dj7hb 11 months ago
So nice that google invests money in such fundamental research. Still, I think the video title is kind of click-bait because the method is only applicable to very specific matrices.
@Rocksite1 5 months ago
The day will probably come when AI mathematicians can go over the possibilities so much faster and so much more creatively that it'd be like asking a five-year-old to advance the state of mathematics now. Wolfram has an AI that has been coming up with new theorems for some time. I'm not fooled when people say "It won't replace ...". We'll have significant, useful input to add for a while. The time when we won't, I think, isn't nearly as far off as we think. Now, imagine Malthusian elites directing AI, while they still can ...
@MarsCorporations 5 months ago
If the matrices only contain 0 or 1, you can build the whole naive algorithm in hardware, where you multiply 2 matrices in a few ticks (I don't know how many layers that algorithm has... should be around 1 to 4... multiplying and adding mod 2 are trivial bit operations which can be executed in parallel). There are probably ASICs for exactly that situation, where you have "zero" multiplications per 4x4 matmul mod 2.
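A software sketch (Python; illustrative of what such hardware would hardwire, and `gf2_matmul` is a hypothetical helper): over GF(2) a multiply is a bitwise AND and an add is an XOR, so each output entry is one AND followed by a parity check.

```python
def gf2_matmul(A, B, n):
    """Multiply n x n matrices over GF(2), each row packed into an n-bit integer.
    Entry (i, j) is the parity of (row i of A) AND (column j of B):
    multiply mod 2 is AND, add mod 2 is XOR, both trivial bit operations."""
    # Repack the columns of B as n-bit integers so rows of A can be AND-ed with them.
    cols = [sum(((B[i] >> (n - 1 - j)) & 1) << (n - 1 - i) for i in range(n))
            for j in range(n)]
    out = []
    for row in A:
        bits = 0
        for j in range(n):
            bits = (bits << 1) | (bin(row & cols[j]).count('1') & 1)  # parity of AND
        out.append(bits)
    return out

# [[1, 1], [0, 1]] * [[1, 0], [1, 1]] over GF(2) = [[0, 1], [1, 1]]
assert gf2_matmul([0b11, 0b01], [0b10, 0b11], 2) == [0b01, 0b11]
```

In an ASIC the AND/parity tree per entry collapses into a couple of gate levels, which is the "few ticks" the comment describes.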
@robosergTV 5 months ago
Do you really think no one thought of that? Of course, there have been companies building such tensor accelerators for deep learning for years. Google "TPU".
@unkn0vvnmystery 29 days ago
2:23 Quanta Magazine: Going from eight down to seven multiplications may seem trivial. me thinking: for 2x2 sure but for something like 8x8 it is significant. 2:29 *shows 8x8* me: lol
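The point above in numbers (a sketch; `mult_count` is a hypothetical helper): applied recursively, each doubling of the matrix size multiplies the count by 7 (Strassen) instead of 8 (standard), so an 8x8 product already takes 343 instead of 512 scalar multiplications.

```python
def mult_count(n, base_mults):
    """Scalar multiplications for an n x n product (n a power of 2),
    recursing with base_mults multiplications per 2x2 block step."""
    k = n.bit_length() - 1  # log2(n)
    return base_mults ** k

print(mult_count(8, 8))  # standard: 8^3 = 512
print(mult_count(8, 7))  # Strassen: 7^3 = 343
```

That exponent swap is the whole game: asymptotically the cost drops from n^3 to n^log2(7) ≈ n^2.81.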
@qafmbr 22 days ago
I so respect guys like Fawzi. Amazing. Imagine what is going to happen within the next 30 years?
@sahilgoel4777 11 months ago
One can only imagine the success of Srinivasa Ramanujan if he had had access to these types of computers. I wish he were still alive and contributing to our society. 😢
@Leon_George 11 months ago
Correct me if I'm wrong, but I've always considered society to be pushed on by a few brilliant people, and then a lot of very intelligent people having their minds 'opened' let's say by a breakthrough that the likes of Da Vinci or Einstein developed. For example, as a very basic example, many secondary school pupils can get a good understanding of integration, but without Newton, none of them could ever have been able to perform such math. in this way, a 'normal' or 'bright' person is accelerated by the one-in-a-century geniuses that pop-up every now and then. Could AI act as a faster replacement for these geniuses, providing these new ideas and breakthroughs that accelerates what a 'bright' person could do? Are we replacing our geniuses? (not that this is necessarily bad, just are we)?
@okhsunrog 4 months ago
AI is still just a tool and can't be compared to the human mind. Not yet. A lot of work is done by the humans who stand behind this discovery.
@user-cm1mu6ce9y 1 month ago
dude without farmers and food producers nobody could make any ai
@valiile 4 months ago
The quaternion multiplication algorithm gets its compactness in a way that would be available to matrix multiplication if DeepMind's AlphaTensor used the same path: projecting the results onto the hypercube and back to the 3D cube. I bet a 6-pack of beer that it would improve the current best result.
@4Gehe2 2 months ago
I hate the whole "There are more options than atoms in the universe..." framing, because whenever that is brought up, it tends to be that basically everything but a few options gets eliminated very quickly once basic rules are introduced into the system. If you have a huge floor and need to make a mosaic on it, your possibilities are absurdly huge if you just fill it with random tiles, but as soon as you set any kind of rules or boundaries, the possibilities start to shrink quickly, until a limit between "there is one solution" and "there is no solution" is reached.
@minuspi8372 11 months ago
This specific kind of technique isn't going to replace mathematicians, but other techniques, some that already work and just need to be scaled up, almost definitely will.
@alt3241 10 months ago
Since I do not use these methods because of the obvious bottlenecks, this presentation shows me a root aspect of the tunnel vision plaguing the industry. This motivates me to work out an explanation for a presentation of my methods.
@wc447 4 months ago
"The right way to look at this ..." seems like a very short sighted / unimaginative statement
@xxytxx 11 months ago
The narration says (2:55) that Strassen's method requires "a third less" computation while the video contradicts this by saying "67% less". 343 is 67% of 512 so it is 33% (a third) less than 512.
@kossboss 1 month ago
They meant 67% of the original value, but yes, if one is being pedantic then you are correct, and in this world one can never be too pedantic or descriptive, in my opinion.
@achatinaslak742 11 months ago
Very interesting! It went from 2D to 3D. Is it possible to use a 4D space or higher to reduce the number of steps even more?
@shoam2103 11 months ago
A 3d tensor is used because it's multiplication of 2 2d tensors. So for higher dimensions you need to multiply higher order objects, or do something else. 4d space is naturally suited for 3d rotations for example..
@shoam2103 11 months ago
You might be interested in geometric algebra and projective geometry too
@theepicturtleman 10 months ago
@@shoam2103 I'm a bit confused. Are you saying that when you multiply 2 matrices in R^2 you get a matrix in R^3? I don't believe that's correct.
@pyropulseIXXI 11 months ago
This is pure hyperbole; seasoned mathematicians haven't mastered matrix multiplication? Pure absurdity
@deendeon10 11 months ago
Maybe what the author means to say is that they have yet to find the most effective method of multiplying matrices.
@ErikS- 4 months ago
Nice video... I think, though, that the word "tensor" is being hyped. In 25+ years of using math, it always comes down to vectors (1D matrices) or matrices (of any dimension you like). The only time I saw it was in a "tensor product" of matrices.
@quantum_ocean 5 months ago
How do you make these animations? Very well done
@johanlarsson9805 11 months ago
Yeah, I learned about this a few days after it happened. This, along with AI speeding up the design process for the lenses that print silicon chips, as well as the improvements it makes to transistor layout... I'm thinking we can already say it is improving itself rapidly. These improvements will then lead to other improvements shortly. Here we go, singularity!
@FractalBob 11 months ago
Will a computer make mathematicians obsolete? I'll believe it when an AI resolves the Riemann Hypothesis.
@NeovanGoth 11 months ago
Then you'd still need mathematicians who are able to understand both the algorithmically generated proof as well as the algorithms themselves. AI will always automate some tasks while at the same time creating new tasks. The outcome won't be less human work, but higher efficiency for the same or even a higher amount of human work.
@argandzero0 11 months ago
I'd say what Strassen did is more impressive, but kudos to the team over at Google. ;)
@ahmadal-diofy8319 4 months ago
Very nice, I love this. What is the name of the graphics software, please?
@user-francescob 11 months ago
Nice work DeepMind, and the humans who collaborated with he/her. While matrices of 1s and 0s are common in many situations, did you ask DM to work on a general solution for all matrices? I'm hoping you'll put DM on the Kronecker matrix multiplication next. Its resultant tensor grows much faster than the matrix cross-product's: linear vs. square. Hence, a [4 x 4] x [4 x 4] cross product only results in a [4 x 4] resultant matrix, whereas a [4 x 4] x [4 x 4] Kronecker results in a [16 x 16] resultant matrix! We use this a lot in physics.
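The growth described above, sketched with NumPy (assuming NumPy is available): an ordinary 4x4 product stays 4x4, while the Kronecker product of two 4x4 matrices is 16x16.

```python
import numpy as np

A = np.eye(4)                     # 4x4 identity
B = np.arange(16).reshape(4, 4)   # arbitrary 4x4 matrix

print((A @ B).shape)        # ordinary product: (4, 4)
print(np.kron(A, B).shape)  # Kronecker product: (16, 16)
```

In general, `np.kron` of an (m, n) and a (p, q) matrix is (m*p, n*q), which is why the objects blow up so fast in physics applications.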
@piyh3962 11 months ago
Deep mind is a company and isn't gendered
@Anonymous-df8it 10 months ago
It's either "them" (the epicene) or "it" (the neuter)
@saliherenyuzbasoglu5819 11 months ago
Your videos are great! A lot of science videos on YouTube are bullshit filled with stock photos that don't even explain the concept the title promises.
@riccardosilli9461 5 months ago
what font do u use?? nice video btw
@mikeycomics 11 months ago
Great video
@MathOrient 11 months ago
Wow, this video is truly mind-blowing! Training the AI system AlphaTensor to explore faster algorithms for matrix multiplication is a remarkable feat with far-reaching implications. The potential impact on various disciplines is truly exciting to envision. Thank you for sharing this fascinating glimpse into the future of mathematical research and its potential real-world applications!
@novakonstant 11 months ago
get out of here with this goofy ass chatgpt comment
@tomkhxde3495 11 months ago
Correction: the Strassen algorithm has 33% fewer operations, not 67%, at 2:57. If 512 is 100%, then 343 is 67%, so it's 33% less.
@johanastborg8066 1 month ago
I was thinking about stochastic multiplication using graphs via Laplacian a few years ago…
@Blackwhite2277 11 months ago
It empowers top-notch people to do more. Everyone else is kind of screwed
@ariadame102 11 months ago
Nothing new. Only top-notch people bother to study enough of the cutting-edge to be empowered by it, the rest don't. Tale as old as time. Nothing has changed, despite easy access to massive amount of information.
@profdc9501 11 months ago
If one is just multiplying two 2x2 matrices with ones or zeros, there are only 16 possibilities for each, so you can just use a 256-entry, 4-bit lookup table and get the answer in O(n^2) time. Then you just divide into blocks. It doesn't seem like the case of elements being zero or one is that interesting.
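A sketch (Python; all helper names are illustrative) of the proposed table: packing each 2x2 binary matrix into 4 bits gives 16 matrices and 16 x 16 = 256 pairs, so every mod-2 product can be precomputed once.

```python
def pack(M):
    """Pack a 2x2 binary matrix into 4 bits."""
    return (M[0][0] << 3) | (M[0][1] << 2) | (M[1][0] << 1) | M[1][1]

def unpack(b):
    return [[(b >> 3) & 1, (b >> 2) & 1], [(b >> 1) & 1, b & 1]]

def mul_mod2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) % 2 for j in range(2)]
            for i in range(2)]

# 256-entry table: index = (packed A << 4) | packed B, value = packed product.
TABLE = [pack(mul_mod2(unpack(a), unpack(b)))
         for a in range(16) for b in range(16)]

def lookup_mul(A, B):
    return unpack(TABLE[(pack(A) << 4) | pack(B)])

A = [[1, 1], [0, 1]]
B = [[1, 0], [1, 1]]
assert lookup_mul(A, B) == mul_mod2(A, B)  # [[0, 1], [1, 1]]
```

Each block-level product then becomes one table lookup instead of any arithmetic at all.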
@achdetoni5008 4 months ago
You don't need lookup tables; this can be implemented directly in hardware, and most ALUs (CPUs) should have it.
@-_Nuke_- 11 months ago
wow! WOW! The significance of this is unfathomable!
@alecepting1371 4 months ago
The caveat is that it works only with binary elements. What's interesting, though, is that there's a loss of the rigor that comes from mathematical proof whenever you use an experimental approach (try and see if it works; if not, try something else, but make sure you don't try anything close to what didn't work before). That is why the mathematicians were able to find a better solution that the machine learning algorithm missed. These two caveats suggest that the method (of the machine learning algorithm) is not a "proof" in the purest sense, but rather a "proof" in the pragmatic sense ("the proof's in the pudding"). It's typically what engineers do. I remember when I studied digital signal processing in the early days, prior to the advancement of discrete mathematics. Mathematicians claimed that the sampling theorem was wrong, and they were right in the purist sense, but that doesn't mean it can't be used in practice, because the ear cannot perceive the errors if you sample beyond the Nyquist rate. At some point, the mathematics journals will come to accept machine learning experiments as "proof", and the meaning of the word, and mathematics itself (as a language), will lose the precision it's known for, which comes from the rigor it has traditionally enforced.
@hai-duynguyen8429 11 months ago
I freaking love this company and the work they produce; all thought-provoking and incredibly intuitive. And for all the hate the company might get because of the CEO, SpaceX is the other one I love as well.
@Atomchild 11 months ago
It stands to reason that this discovery should be applied to the study of file compression.
@user-ul5pt1yb8z 5 months ago
Thanks a lot
@none7378 11 months ago
When AIs start making theories, you know they are either going to soon replace us, or something will be done. I say AIs should be allowed for most things digital (e.g. Photoshop), or making a character's mouth move better in a different language of a cartoon: busywork things that were created by the internet, so it doesn't interfere with the human pursuit of happiness.
@MrMctastics 11 months ago
The first lesson you should learn to truly enjoy your life is that nothing you ever do will be truly unique, and that the satisfaction of doing something should only come from within.
@marc-io 11 months ago
This channel became just a Google propaganda vehicle lately.
@adiaphoros6842 11 months ago
If you discover anything of note, this channel will also feature you.
@huehuehue0425 11 months ago
Well, don't just let google win this war, be the person to beat google! You can do it!
@jinjunliu2401 11 months ago
I don't see any other recent videos of theirs that's about something of Google lol
@deendeon10 11 months ago
@letmekillstephen This isn't a war. Humans may be competing, but they are building on the same foundation.
@marc-io 11 months ago
@@jinjunliu2401 maybe the other amazing breakthrough video got lost inside a wormhole?