Are we going to ignore the animations? You'd better pay your animators, because illustrating these complex concepts is incredible work.
@udaysingh-wr2kwАй бұрын
Are they not paying their animators?
@jamesfrancese6091Ай бұрын
@@udaysingh-wr2kw They do, Quanta is not some YouTube channel lol they're a reputable science journalism publisher funded in part by the Simons Foundation, and their graphic designers make upwards of $110,000/yr
@KuKoVisualsАй бұрын
They don't. They get interns
@ShenanigamblingАй бұрын
"Yeah do you mind animating all combinations of 6 learned skills from a universe of 100 possible skills?"
@pirateonmeleeisland2973Ай бұрын
no joke?
@kenankaneki4310Ай бұрын
I love this channel, this is more exciting than Spotify Wrapped
@notgaybear5544Ай бұрын
Spotify Wrapped was so good. I wish I could have that wrap-up for all my content consumption
@kenankaneki4310Ай бұрын
@@notgaybear5544 I can give all my data to GPT (or any model) for this
@oswach8706Ай бұрын
Intellectual human race wrapped
@james-calАй бұрын
@@oswach8706 more like science wrapped
@Nex_AddoАй бұрын
Not a single link to the actual paper discussed, just a link to your article that also has no links to the paper.
@dennisfast9331Ай бұрын
Google -> skill mix paper -> paper found!
@quantumskull2045Ай бұрын
It's not very clearly highlighted but they do generally have links to the original paper somewhere in the article. In the article on skill-mix it's in the middle in one of the links.
@kingmidasthagoatАй бұрын
@@dennisfast9331 zip it, Dennis
@Michael-rk3xgАй бұрын
Maybe you can't read. That's why you can't find the links in the article
@penguinpatrollerАй бұрын
could one of you post the title of the paper? i could not find it either.
@tghuy8384Ай бұрын
No one remembered Kolmogorov-Arnold Networks? LLMs are cool and all (though not my favourite/specialty area), but I'd love to see people pay more attention to the scientific machine learning field (e.g. physics-informed models, interpretable models like KAN). I kinda feel like so many other fascinating uses of machine learning are being outshone by LLMs nowadays...
@a_soulsparkАй бұрын
not much came out of KANs yet, that's probably why. (with emphasis on **yet** )
@dropbitАй бұрын
Unfortunately, in computer science (and not only in CS), the more profitable and money-making a piece of technology or idea is, the more it will be studied and trend
@mb2776Ай бұрын
afaik from my fairly limited understanding, physics-informed models do need differential equations which are super hard to implement in higher dimensions.
@demetriusmichaelАй бұрын
@@mb2776 All NNs do, as they rely on the chain rule to update params.
@pelegshilo6358Ай бұрын
I think your examples also fall into LLM-related hype. The KAN paper is full of marketing language and proves very little with regards to either theory or practical performance. Physics-inspired models are cool ideas that people have been trying for years, and we really need to judge them case by case. Just having a model inspired by Physics does not achieve much
@michaelpeeler7030Ай бұрын
No mention of BB(5) being proven? That’s perhaps the biggest CS news of the year, especially since there’s a fairly good chance we’ll never again prove a busy beaver number for a two-symbol Turing machine.
@holdthatlforluigiАй бұрын
Yeah that's sad
@DistortedV12Ай бұрын
haha you are doing their job for them
@dddddddddddkkkkkkkkkkkkkАй бұрын
Bro, that has no AI in its name, it won't even make it to notable mentions
@dashadowerАй бұрын
No buzzwords, just theoretical jargon that the average Joe doesn't care about
@herghamoo3242Ай бұрын
Imo, computing busy beaver numbers is like mathematical trivia. Am I impressed they managed to do it? Absolutely, but there's nothing you can do with this knowledge. Also, the busy beaver numbers are so dependent on the specific definition of a Turing machine that I don't find them that interesting, to be honest.
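For readers wondering what's being discussed: BB(n) is the maximum number of steps an n-state, 2-symbol Turing machine can take before halting, started on a blank tape. A minimal sketch simulating the long-known 2-state champion (so BB(2) = 6 steps, leaving 4 ones on the tape) illustrates the definition:

```python
# Tiny Turing-machine simulator running the standard 2-state, 2-symbol
# busy beaver champion: it halts after exactly 6 steps with 4 ones on
# the tape, i.e. BB(2) = 6 steps (and Sigma(2) = 4 ones).
TABLE = {
    ("A", 0): (1, +1, "B"),  # write 1, move right, go to state B
    ("A", 1): (1, -1, "B"),  # write 1, move left,  go to state B
    ("B", 0): (1, -1, "A"),  # write 1, move left,  go to state A
    ("B", 1): (1, +1, "H"),  # write 1, move right, halt
}

def run(table, start="A", halt="H"):
    tape, head, state, steps = {}, 0, start, 0
    while state != halt:
        write, move, state = table[(state, tape.get(head, 0))]
        tape[head] = write
        head += move
        steps += 1
    return steps, sum(tape.values())

steps, ones = run(TABLE)
```

The newly proven BB(5) result concerns the same function, just for 5-state machines, where the search space is astronomically larger.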
@garyz904Ай бұрын
Is the "skill-mix" paper what we are talking about in the first part? I don't see why this paper is a breakthrough, much less the biggest breakthrough this year. BTW, where exactly is the "mathematically provable argument" presented in the paper? You cannot simply call a paper with equations theoretical.
@KeinGescheiterUserАй бұрын
It sounds like it - and skill-mix is mentioned in a linked article in the "the year in cs" article accompanying the video. But there are a lot more things wrong with framing the paper in that way. Overstating results is already bad - but the paper does not even convincingly show that models have these skills. Grading models with other models defeats the purpose, their argument that it is more reliable than humans is weak at best. Also: what about prompt variations? Does this observed behavior for these "skills" really generalize? As an NLP researcher, I'm disappointed in quanta magazine for this one.
@NeekuАй бұрын
It really seems like the paper wasn't properly digested or compared to current and past norms in NNs before being included here. It's nothing more than feature learning.
@acmhfmggruАй бұрын
🎯
@Sheldonsheldon109Ай бұрын
At 7:31 the chess board in the background isn't set up correctly! The color complexes are flipped, i.e. the board is rotated 90° from what it should be!
@user-qw9yf6zs9tАй бұрын
thats why we need a 9x9 chess board
@NuncNuncNuncNuncАй бұрын
He's solving symmetry problems in a higher dimension. He dreams Pauli matrices.
@nicksamek12Ай бұрын
It's funny that it's set up to make him look smarter / more sophisticated, but then he goes and sets it up wrong and sends the absolute opposite message.
@TheBooker66Ай бұрын
@@nicksamek12 Exactly my thought process.
@aineshbakshi5556Ай бұрын
It’s isomorphic
@joaoedu1917Ай бұрын
AI company employee publishes a paper that "proves" that its LLM is even more awesome than we all thought it was. Solid.
@theresalwaysanotherway3996Ай бұрын
if you have a criticism of the actual work or method then bring it up. But dismissing it simply because of the conflict of interest (which they're not trying to hide) doesn't get anyone anywhere.
@xCromlechxАй бұрын
@theresalwaysanotherway3996 but this has nothing to do with "breaking through". It should (!) be dismissed until actual progress is made. I am pretty sure there are other 2024 breakthrough achievements in CS that would be more appropriate to cover...
@Alpha_GameDev-wq5cc18 күн бұрын
@theresalwaysanotherway3996 Here's criticism of the actual work: they have done no work to show how their methodology has any relation to their claims; that part of the paper is a "trust me bro", and very cunningly so, because it's a charade. Not uncommon in today's world of exponential paper publication and stagnant innovation. They either don't know how LLMs work internally or they are pretending not to. I'll put it like this: they've taken the core property behind the function of LLMs, simply tested whether that core function works, and called it research into some futuristic feature. It's like testing whether a car can move without a human inside, and then saying that it's very likely the car can exhibit self-driving behaviors. NO, that's hilariously misleading.
@EvanMildenbergerАй бұрын
When researchers are so obsessed with AI and quantum, I believe it leaves much less incentive for more classical/pragmatic computer science. If you've worked in industry, there are so many systems that are duct-taped together and have so much bloat that it's more cost-effective to just add new code rather than touch the old. The fact that programs require thousands of times more space to run than years ago but are not generally thousands of times faster is actually disappointing. And the fact that many things, like the banking systems of the world using COBOL, will have practically no one with sufficient experience is worrisome. A genuine solution for managing complexity and safely transpiling an entire legacy system into a modern one seems like a pressing need, but I think academics all want to be the next Turing or Church rather than solve a more down-to-earth but important problem.
@eSKAone-Ай бұрын
Ai could help with that
@eternaldoorman5228Ай бұрын
Yes, to see "2024s Biggest Breakthroughs in Computer Science" headlined by a system designed to "distil" knowledge from bad practice is pretty depressing!
@ai_outlineАй бұрын
I see where you are coming from, but AI is a classical subfield of Computer Science. Quantum not so much, but I believe it will become very important in the future.
@FiniteSimpleFoxАй бұрын
For each of the issues you have highlighted the problem is practical, not conceptual. In theory we have all the tools we need, it's just that big tech and other institutions are not making a meaningful effort to solve them. By definition, academics deal in coming up with new ideas so I do not think it would make sense for them to work on this sort of stuff; this is really a problem for software engineers. That being said I don't want to rule out the potential for research into frameworks for how to better manage large interdependent systems of software although I am not knowledgeable about this sort of thing if it exists.
@morenorueda946Ай бұрын
Just ask an LLM to translate from whatever old language banks use to a newer one. So hard
@AlfredoMahnsАй бұрын
A question for somebody who knows these topics, because I struggle to understand: the researchers made a bipartite graph, with one part being skills. The flaw I see is: how can you prove the skills are different/not equivalent? Because you need to prove this for it to work.
@mtw2457Ай бұрын
Try to ask LLMs about alternative interpretations of sequences in the OEIS (Online Encyclopedia of Integer Sequences), or to summarize a given sequence. They cannot even get the proper ID matched with the proper sequence description (in my case sequence ID A000371). In other words, they cannot even regurgitate what is already in the OEIS comments area. The alternative interpretations - if they can come up with anything at all - sound authoritative but are completely WRONG! LLMs are authoritative-sounding noise.
@mb2776Ай бұрын
Yeah, we all know that they just repeat what they already learned. You can brick GPT-4 just by asking for a multiply nested exponential function with a positive x value. It WILL produce code with an overflow. Test it.
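The overflow failure mode described here is easy to reproduce in plain Python (a minimal sketch, not tied to any particular model's output): IEEE-754 doubles overflow in `math.exp` once the argument exceeds roughly 709.78, so naive nested exponentials blow up almost immediately.

```python
import math

# exp(exp(2)) is about 1618 -- still representable as a double
inner = math.exp(math.exp(2.0))

# exp(1618) exceeds the double-precision maximum (~1.8e308),
# so math.exp raises OverflowError instead of returning a value.
try:
    result = math.exp(inner)
except OverflowError:
    result = math.inf  # one common fix: clamp to infinity instead of crashing
```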
@acmhfmggruАй бұрын
A handy tip for lazy students: just ask an LLM to generate citations for your papers. 👹😫
@rubennijhuisАй бұрын
Been waiting for this one the most
@zacharykosove9048Ай бұрын
me too
@ccriztoffАй бұрын
everyone has bro
@s1lliАй бұрын
welp i assume u're disappointed
@gamalalejandroabdulsalam904Ай бұрын
"Minimize the training loss... that is called Emergence 💫" Bruh anyone that's actually trained a model understands that's just hype-talk
@mb2776Ай бұрын
Emergence is unexpected behaviour. Minimizing the training loss is just using an optimizer like Adam.
@gamalalejandroabdulsalam904Ай бұрын
@@mb2776 The exact quote from the video is: "[...] they minimize training loss, or make fewer errors. This sudden increase in performance produces new behaviors - a phenomenon called Emergence 💫" All the while, the typical training loss curve is shown going from untrained to converged. Yeah you don't say, the more they train on the data the better they get. Now that I think about it, they probably were going with "convergence", but decided that wasn't enough hype-talk for the video.
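For what it's worth, "minimizing training loss" on its own really is just ordinary optimization. A minimal gradient-descent sketch (toy quadratic loss, invented purely for illustration) shows the familiar decreasing loss curve with nothing "emergent" about it:

```python
# Toy loss L(w) = (w - 3)^2 with gradient dL/dw = 2(w - 3).
# Gradient descent just walks w toward the minimum at w = 3,
# and the recorded loss decreases monotonically -- convergence, not emergence.
def loss(w):
    return (w - 3.0) ** 2

def grad(w):
    return 2.0 * (w - 3.0)

w, lr = 0.0, 0.1
losses = []
for _ in range(50):
    losses.append(loss(w))
    w -= lr * grad(w)
```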
@mrgoodmood28Ай бұрын
So, what are we actually looking at here, guys? How brains also “might” be working under different circumstances so that’s why we have different psyche’s, culture, beliefs, and such? Skillsets we learn based on nature + nurture produces very unique individuals? Say, that our parameters are very large?
@vidal9747Ай бұрын
@@gamalalejandroabdulsalam904 Emergence is the emergence of behaviours not directly trained. It is not just having your loss function minimized. Let's not be overhyped, but correct.
@acmhfmggruАй бұрын
@@mrgoodmood28 bro they just recently found like 15 new organelles that we didn't know existed. We don't know how much we don't know (but it's obviously a lot). We most certainly do not understand how our brains work, much less how consciousness and cognition work. Beware the childish graphics and graduate students...
@ai_outlineАй бұрын
For the people in the back: Computer Science is not programming. It is way more. It is Algorithms, Artificial Intelligence, Quantum Computing, Computer Graphics, etc
@DextrostatАй бұрын
I think universities need to separate out Computer Science, as it feels like it's only a degree to get a job. Computer Science is supposed to be a theoretical field on software, focusing on optimization, the math behind it, and new research into algorithms. Software Engineering should be a derivative major of CS but with more focus on industry software, infrastructure, and programming for businesses. Less math (you don't really need calc, only discrete) and more practical applications.
@ai_outlineАй бұрын
@@Dextrostat completely agree. CS degrees should teach all the mathematics behind CS/Informatics. It’s the fundamentals! Never be too focused on frameworks and technical details that are hot today but obsolete tomorrow.
@keyboardtaskforcephi-3689Ай бұрын
Nah, the movies told me all you need to do is code really fast to generate a universe
@t3lesph0reАй бұрын
Spot on!
@XxRiseagainstfanxXАй бұрын
As a mathematician I fail to see why the first one has anything to do with computer science or a scientific breakthrough, but I'm not enticed enough to look up the original paper.
@ai_outlineАй бұрын
@@XxRiseagainstfanxX why? It’s about AI, so it’s computer science.
@ai_outlineАй бұрын
But I do understand the lack of analytical details. It’s weird to call it a CS breakthrough without much math and formality.
@EvanMildenbergerАй бұрын
I agree. That doesn't mean their research isn't useful for testing and improving LLMs, but it doesn't seem to be that much of a breakthrough. People forget that AI is not a new field, even though practical natural language processing models are recent since ChatGPT. But LLMs are just a part of NLP, which is just a part of machine learning, which is just a part of AI. AI has become a vague buzzword when it used to be an intersection between neuroscience and computer science, focused more on what it means to think or be intelligent than on language tasks. Generating human-quality text was an intractable problem for deterministic programming, so LLM advancements *are* very useful. But believing language is the sole path to intelligence seems to forget the importance of math, and history has shown that language without math/logic has generally led to dogmatic thinking rather than advancement.
@Tau-qr7fАй бұрын
Can we get a list of the biggest breakthrough in Computer Science without mentioning ai? ai should have a separate video
@karthikgarimella2131Ай бұрын
Why though? It's part math and part computer science. It started in that domain and it should be part of that domain no? In that case, quantum problems also shouldn't be part of computer science then as mentioned in the video?
@rohanbuch2344Ай бұрын
AI = computer science
@ai_outlineАй бұрын
@@Tau-qr7f a separate video? What? AI is literal computer science… ffs 🤦🏻♂️
@ai_outlineАй бұрын
@@Tau-qr7f do you also want a separate video for electromagnetism (physics)? A separate video for mechanics (physics)? A separate video for topology (math)? Jesus Christ…
@Tau-qr7fАй бұрын
@@rohanbuch2344 No, it's not. There are many other interesting topics in CS besides AI. They have received negligible coverage since the release of ChatGPT, although LLMs existed long before that.
@DistortedV12Ай бұрын
The REAL list: o1 (now o3), quantum computing breakthroughs, veo2, successful diffusion world models, Arc challenge success, and robotics advances starting to work with ML. I would be curious one of these for causality and ML research.
@mmmmmrto958Ай бұрын
Also, idk if you mentioned it because I forgot the name, but I saw a video where a development in using photons to connect CPUs, and even to do actual computations, was made this year. (Although I may be wrong, as I did not look into it a lot)
@R4Y7Ай бұрын
Our yearly update :D
@ccriztoffАй бұрын
for those curious the real important stuff starts here 9:05
@ai_outlineАй бұрын
Computer science may be more fundamental to the universe than we know!
@deeplearningpartnershipАй бұрын
Computation, when it intersects with physics.
@christhurman2350Ай бұрын
Only if computer science keeps evolving
@memecachedАй бұрын
After physics, the goal is to combine computation and biology. Imagine biological computers with human brain like efficiency and memory.
@christhurman2350Ай бұрын
@ you just described a fungus -
@ai_outlineАй бұрын
@@christhurman2350 it obviously will. There are still endless algorithms and computational solutions to be discovered yet!
@kevinmilner2072Ай бұрын
man, this is what I got a degree in CS for, not *shudders* working in salesforce.
@fr3847Ай бұрын
Yee, most CS majors just end up writing Java or JavaScript in web dev, probably backend. Sad, it limits potential, but that's where all the jobs are, and the few hardcore CS jobs are only for geniuses
@henrycgsАй бұрын
computer science starts at 6:13
@sleepynoob1000Ай бұрын
May I ask why?
@RS-kb2mfАй бұрын
Thanks for the heads up! :)
@paxcoderАй бұрын
Things that can be applied to programming do not start
@ai_outlineАй бұрын
What do you mean?
@sleepynoob1000Ай бұрын
@paxcoder CS isn't restricted to things you can program? Every program starts from theory you know.
@Ctrl_Alt_SupАй бұрын
I said to myself "wow finally an interesting video"... I hadn't seen that I was on Quanta Magazine!
@Caliban314Ай бұрын
These videos are awesome, you guys always cover complex topics with such ease! As an aspiring computer science researcher (and science enthusiast in general), this is exactly the kind of content I want. One suggestion - I always want to learn more about a breakthrough but cannot find citations to the original papers. Even the Quanta articles don't have them. It would be great to go directly to the source!
@dennisfast9331Ай бұрын
Google -> skill mix paper -> paper found!
@absence9443Ай бұрын
2024 had such sick neural network breakthroughs it's relatively irrelevant what was shown here
@Techmagus76Ай бұрын
I would argue: if generalization skills occur in LLMs, why do they struggle so hard when a task is changed only very slightly, giving very different answers? That is very hard to explain in the realm of generalization, but it fits quite well with the stochastic parrot.
@nodavoodАй бұрын
Excellent work. But the scaling law curve was wrong. That's not a scaling law curve.
@KshitijPawar77Ай бұрын
Me( being an Indian) feeling proud seeing so many Indian origin scientists in this video 😊
@ArthasMalАй бұрын
It's more likely for brilliant people to be born from nation that fucks a lot :D
@neils3ji28 күн бұрын
Our parents are also watching disappointed that it isn’t us in the video (I’m Indian too) 😂
@KshitijPawar7726 күн бұрын
@neils3ji 😆 never show these videos to Indian parents
@_bigman8593Ай бұрын
Why not update the foundation code, or whatever it runs off, to add an explanation function for how the AI came to that conclusion - like a log, created on a user-accepted basis with major correct analyses
@sam_o_chess25 күн бұрын
Where can I find the sound or the music at the end of the video? It's very motivating
@anonymouse5217Ай бұрын
Ugh, it's really bad that Quanta calls these the biggest breakthroughs in computer science. You will be hard pressed to find a single researcher who thinks this Deepmind paper is a breakthrough. The Hamiltonian learning paper is a nice achievement, but hardly makes sense to single out as 'the biggest breakthrough of the year'. Do better Quanta -- you are losing our trust!
@purplewine7362Ай бұрын
Oh no, they're going to lose a self-described expert in the audience. The horror!
@noobvsprominecraftbuildАй бұрын
I LOVE THE PUBLIC SECTOR!!! I TRUST MY GOVERNMENT AND CENTRAL BANK!!!!!
@samuelbucher5189Ай бұрын
Could you link me to some better candidates? Or just tell me about them, if YouTube doesn't like the links.
@CybitClubАй бұрын
Nah buddy. If you were in tune with the breakthroughs in computer science you'd find it pretty absurd to make half of the video about LLMs. Is it cool? Yes. Were there some breakthroughs? Yes. But it doesn't really deserve half the spot, unlike the Hamiltonian one. @@purplewine7362
@marcomartins3563Ай бұрын
can you recommend a better list then?
@Elrond_Hubbard_1Ай бұрын
I am in no way an expert in any of this, but I have a wild idea that I thought of a while back. I did do a BSc and majored in molecular bio, but I didn't really end up using it professionally. But I've always held up my interest and fandom for science. Anyway, my idea came when the ChatGPT thing came out, and I remembered that the human brain is compartmentalised. There is, presumably, I'm not a neurologist, some specific chunk of throbbing brain matter inside of your skull that specialises in language comprehension. Kind of like a ChatGPT AI. But then I thought, well, maybe if you trained one AI to learn language like the ChatGPT, and you taught another AI to learn physics, and you taught another to learn how to analyse visual data, and on and on, and then you somehow bundled them all together, would that not resemble a brain? A bunch of specialised compartment intelligences that combine to make an overall intelligence, or a general AI?
@rustprogrammerАй бұрын
What about BB(5)?
@NGBigfieldАй бұрын
08:40 he sounds like a very smart version of Jihaniou (the Good Place) 😅
@isolatedsushi5996Ай бұрын
That chess board being set up wrong at 7:34 annoyed me more than it should have
@johnskarha357526 күн бұрын
Same here! Very distracting. Does not help with the credibility of what is being said...
@bitik984712 күн бұрын
It is perfect. Tell us you don't know chess, or increase the video to 4K
@andrwsk23Ай бұрын
Emergence is complex behavior created by combining simple behaviors.
@XenoCrimson-uv8uzАй бұрын
I may be wrong here, but shouldn't the math at 3:32 be 100!/(100 - 4)! instead of 100^4 so it doesn't include the same skill multiple times? For example: ( Metaphor, Metaphor, Metaphor, Metaphor ) ? so the answer is 94,109,400 instead of 100 million.
@AlSteve7Ай бұрын
I think it's sampling with replacement
@nessiecz2006Ай бұрын
I'm pretty sure it's 100 choose 4, which is 100!/(4!*(100-4)!)
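The three counts debated in this thread differ only in whether order matters and repeats are allowed; Python's `math` module computes all three directly (a quick sketch of the arithmetic, not a claim about which one the video intended):

```python
from math import comb, perm

n, k = 100, 4

ordered_with_repeats = n ** k      # 100^4: order matters, repeats allowed
ordered_no_repeats = perm(n, k)    # 100!/(100-4)!: order matters, no repeats
unordered_no_repeats = comb(n, k)  # 100!/(4!*96!): order ignored, no repeats

print(ordered_with_repeats)   # 100000000
print(ordered_no_repeats)     # 94109400
print(unordered_no_repeats)   # 3921225
```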
@edwardgongsky8540Ай бұрын
I'm amazed at the pace of this progress, scary to think how much our robot overlords will be smarter than us.
@KohctaАй бұрын
I love how these "scientists" forget everything about the scientific method when they rush to explain LLMs. Instead of trying to disprove their hypothesis, they're doing whatever this thing is. But those guys are not biased at all, because Google has nothing to gain from AI hype. It's like GitHub releasing a "scientific" paper about how great GitHub Copilot is
@purplewine7362Ай бұрын
You can say the same thing about protein folding, which also uses DeepMind. But I guess assuming ulterior motives behind everything makes you sound smart.
@petrospaulos7736Ай бұрын
AI bubble vs Quantum bubble.... which will burst first???
@molybd3num823Ай бұрын
AI
@rebellion112Ай бұрын
Not everything is a bubble.
@salihalbayrak-es8kyАй бұрын
neither.
@kylebowles9820Ай бұрын
Quantum, at least AI can do something
@Rockyzach88Ай бұрын
From an economic stance? AI definitely because there's no quantum bubble right now and "AI" is basically dominating the stock market atm.
@WalkerSlavichАй бұрын
4:15 This is a joke, this is what it spits out when I tell it to write my English essay
@weroleoifyАй бұрын
Hamiltonian Algorithm representation looks like a Loom, could be a nice name for a future quantum computer.
@johnnyBrwnАй бұрын
It's not emergence @1:53, Apple literally refuted this. They are parrots. Apple demonstrated that the more weights, the larger the memory. The more data, especially diverse data, the more likely it is that a sample in the training data set shows up in the test set and benchmarks.
@sheggleАй бұрын
One paper a fact does not make
@deeXaeedАй бұрын
Please share paper.
@NickforGreekАй бұрын
The chessboard in the back is wrong...
@soundofbombingАй бұрын
Calling the first one a CS breakthrough is insulting to all of the hard technical work Computer Scientists have done this year.
@rohitagarwal9520Ай бұрын
@3:38 Shouldn't it be 100_Choose_4 ways to combine 4 skills to understand a text? Since 100^4 would include cases like choosing the same skill 4 times which seems redundant for the context to understand a text.
@kylebowles9820Ай бұрын
Omg you guys, more than 1 thing happened this year. The reality distortion is insane
@henrycgsАй бұрын
this video really annoyed me.
@mattlopez487Ай бұрын
AI AI AI AI AI AI AI HEY GUYS DID YOU HEAR ABOUT AI
@mqb3gofjzkko7nzx38Ай бұрын
@@mattlopez487 I did it's pretty neat stuff.
@jamesfrancese6091Ай бұрын
The LLM results were just one of four research topics (all different) that they discussed for their 2024 CS “roundup”, which they also do for numerous other disciplines (physics, biology, mathematics…) - I guess you didn’t watch the whole 10min video lol but you could read their articles if you prefer. But tbh I wouldn’t, they don’t actually talk about “breakthroughs” and as we’ve been forced to learn most “scientific progress” is extremely unpredictable and heavily retrospective in nature rather than prescient, as professional scientists are now required to sound in grant applications Nobel prizes are awarded decades after the work they recognize for a reason - the idea of an “annual breakthrough” story for all major scientific disciplines is a totally artificial emulation of the modern 24/7 news cycle model
@kylebowles9820Ай бұрын
@@jamesfrancese6091I did watch the video all the way through but I already keep up in real-time so it's always interesting to comment on the selection process. Some of Quantas stuff, the long form articles, can be really good though
@oneparticle19 күн бұрын
This year there's gonna be a new image and video lossless compression algorithm that compresses gigabytes worth of footage into literal bytes
@xCromlechxАй бұрын
I don't think that LLM research broke through anything this year...
@rudihoffman2817Ай бұрын
Interesting to see the second guessing on this beautifully presented info…easier to criticize than create real content…my opinion, I could be wrong, but I don’t think so.
@hoenchiomaАй бұрын
4:13 Did ChatGPT just make a Gojo reference? xD
@drxydАй бұрын
Tired of ML
@udaysingh-wr2kwАй бұрын
Get ready to live the rest of your life with it
@raphaelfrey9061Ай бұрын
ML already existed before language models. How do you think weather is predicted? Or how voice is perceived by Siri?
@marijn6009Ай бұрын
You can choose to be tired of it, or to be inspired by it
@fugamantewАй бұрын
The structure that lets us all express our disgust towards it
@jimmynoosetron6518Ай бұрын
lets go back to banging rocks together
@IkedaBC25 күн бұрын
So the first breakthrough is just a group of computer scientists learning how to properly prompt a large language model to do a specific task? Nice
@ArthasMalАй бұрын
1:28 well placed Oatly ad
@metylenАй бұрын
the background music is totally unnecessary, it's a distraction for someone trying to understand the content...
@TriPham-j3bАй бұрын
Energy state equation could predict large body enough to safety and error trapping
@th_bessaАй бұрын
Can you do one for Material Science?
@siphosethudlomoАй бұрын
LLMs end-point is predicting the future?
@Sharol-b3lАй бұрын
I’ve been waiting for this video!! 😂
@manmohanmahapatra6040Ай бұрын
AI isn’t just computer science-it’s interdisciplinary. It’s rooted in math (like calculus and probability), shaped by psychology, neuroscience, and linguistics, and powered by hardware innovations from electrical and mechanical engineering. Computers are just tools; AI draws from many fields, and calling it purely computer science ignores the huge contributions of others. Let’s give credit where it’s due!
@paaabl0.Ай бұрын
Kinda bullshit - the only test for a model presenting each skill was "asking it about it", not actually solving the novel task. It's like a reversed argument of the "Chinese room" or "Mary's room" from philosophy. Empirically, anyone who's using LLMs on a daily basis knows how dumb they actually are.
@abbaquantum431Ай бұрын
Picky, picky. This research goes one step at a time, trying stuff out. No need to be rude. Ewin Tang is on to something big. Keep up the good work.
@bumpyfx4900Ай бұрын
I think this problem helped Google release Willow, which can reduce the error rate by adding more qubits
@mastergluexАй бұрын
Predictive AI is programming the animal kingdom's interpersonal relations and intrapersonal communication to improve prediction. One way to predict the destiny is by being the architect of the destiny.
@Computer-v5eАй бұрын
Smart channel
@spakecdkАй бұрын
Can't wait to see what exciting news quantum computing brings in the near future. Seeing it break some older cryptography will be very interesting; then we can move on to discussing how vulnerable Bitcoin is to quantum computers.
@jordanwyltk4569Ай бұрын
New title- "Google's research in basic machine learning"
@tanthiennguyen9308Ай бұрын
Many thanks to all the doctoral students & people... So far, 1 error has been reported
@luvon1114Ай бұрын
Something has gone wrong if you've got computer scientists talking about consciousness and understanding - a field completely different from their own.
@javen9693Ай бұрын
I hope the Princeton man properly cited the oat milk in his reference library.
@GenaroCameleАй бұрын
The whole first part about LLMs is a lie; there is nothing like "emergence". LLMs predict the next word, and that's it. People who develop these models, like Yann LeCun, dismissed reasoning in transformers long ago. This kind of misinformation detracts from the credibility of the channel and only serves the hype of the moment.
@elwendyrofys5944Ай бұрын
Yann LeCun will say what Yann LeCun will say, but he's not the only person in the world working on those things. On top of that, can you prove to me that you are not "just a stochastic parrot"?
@BigSourcesАй бұрын
Weird to say "one expert said this so it's true", while casually not acknowledging dozens of other experts saying the opposite thing. LLMs are based on a neural network. That's the thing we have in our brain. If all they do is predict the next word, then what exactly are we doing?
@CuriousKushАй бұрын
They just need content 😂😂
@severaux682Ай бұрын
Calling it a lie is a bit strong imho. They proved that larger LLMs use a greater set of skills. Whether that is emergence or not is down to semantics that AI companies used out of context for marketing purposes. The more interesting follow-up questions are: how many skills does a human use at a time, how far off are LLMs, and how much scaling was needed to increment the number of skills?
@Homerow1Ай бұрын
Emergence just means simple rules combining to form complex behavior, usually in such complexity that while a person can understand one part, maybe a group of parts, the way the whole works is beyond them.
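That definition has a classic, fully inspectable illustration outside of ML (a sketch for intuition, not from the video): elementary cellular automaton Rule 110, whose three-cell update rule is trivial, yet whose global behavior is complex enough to be Turing-complete.

```python
# Rule 110: each cell's next state depends only on itself and its two
# neighbors, yet the global pattern is famously complex (Turing-complete).
RULE = 110

def step(cells):
    n = len(cells)
    out = []
    for i in range(n):
        # Pack the 3-cell neighborhood into a number 0..7 (wrap-around edges),
        # then look up the corresponding bit of the rule number.
        idx = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((RULE >> idx) & 1)
    return out

cells = [0] * 31
cells[15] = 1            # start from a single live cell in the middle
history = [cells]
for _ in range(15):
    cells = step(cells)
    history.append(cells)
```

Printing each row of `history` shows the growing, intricately patterned triangle that no individual table entry "contains".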
@curious_one1156Ай бұрын
3:01 - Why does OpenAI not give access ?
@ericchang0704Ай бұрын
You should do Electrical Engineering
@uddhavachikaraАй бұрын
I think first we need to give them a sense to perceive their surroundings, unlike just processing electricity in terms of 1s & 0s. Only this may increase the probability of making them evaluate things by themselves. Btw, our lives are just getting interesting, and it makes me sad to miss out on the far future, but that's another kind of pleasure, or maybe this pleasure is just the emergence of my helplessness against the end... ✨
@daniloruffino9093Ай бұрын
Could you add Italian audio?
@nessiecz2006Ай бұрын
3:35 is supposed to be 100 choose 4: 100!/(4!*96!) = 3,921,225.
@moroteseoinageАй бұрын
These stochastic parrots regurgitate outputs that are indistinguishable to the layman from human made outputs.
@jadetermig2085Ай бұрын
Which is not the flex you (maybe?) imply it is. However, it does make it a brilliant way to scam people who are illiterate in that specific area, which is why the marketing hype around AI still hasn't died down. Because enough people haven't realized the scam yet.
@moroteseoinageАй бұрын
@ I agree with you.
@TriPham-j3bАй бұрын
The new breakthrough in data science is like a Rubik's cube: everything inside a cube, and changing color is information, and time resolution is the types, class, prototype, so x difference from y is Planck length in times
@SayutiHinaАй бұрын
Visualising tricks of computer science
@aryanmusicacademy94414 күн бұрын
Just another ML/AI video telling us LLMs are getting good and all. CS is not only about ML/AI; there are many fields in CS. Please think before writing a title, since this one is just clickbait. I am not undermining your work. Great video, and this is just feedback.
@bandr-devАй бұрын
animations are crazy
@pain2.14 күн бұрын
I mostly think this is content aimed at a popular-science audience, and not at science in fact
@AnneRavenStar16 күн бұрын
"The sky is"... "falling" has more probability than "blue"
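The joke is exactly how next-token prediction works: pick the continuation with the highest conditional probability in the training data. A toy bigram sketch (the corpus is invented here purely for illustration):

```python
from collections import Counter

# Toy corpus, invented for illustration, in which "falling" follows
# "is" more often than "blue" does.
corpus = "the sky is falling . the sky is falling . the sky is blue .".split()

# Count what follows the word "is" -- the simplest possible next-word model
continuations = Counter(nxt for cur, nxt in zip(corpus, corpus[1:]) if cur == "is")
total = sum(continuations.values())

p_falling = continuations["falling"] / total  # 2/3
p_blue = continuations["blue"] / total        # 1/3
```

On this corpus, "The sky is" really would continue with "falling".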
@haydencoolidge8679Ай бұрын
My man blinked 21 times in 16 seconds... 5:14
@raxneffАй бұрын
2 breakthroughs? One is just an explanation of LLMs, another an optimization? It doesn't seem like a lot has happened in 2024...
@SherlockHolmes-dx7do15 күн бұрын
Can we get 2024 in engineering also, please? And nuclear science too, if possible
@weili-ey9zmАй бұрын
Is there any related article?
@TheSpiritualCollective444Ай бұрын
It’s so much better when spontaneously sparked
@eSKAone-Ай бұрын
Humans are also "just" stochastic parrots.
@jadetermig2085Ай бұрын
By your logic humans are also "just" bananas because both are "just" atoms.
@mehdizangiabadi-iw6tnАй бұрын
👍🙏 thanks
@coeusuniversityАй бұрын
It should be obvious by now that the genius and talent behind AI isn't coming from any of these Western organizations; rather, it's Eastern/Asian brilliance at play in all of them
@asimwilliams861Ай бұрын
please do this every year
@gauravghodinde2949Ай бұрын
Basically no breakthroughs
@LiamDirk-u8vАй бұрын
How did you forget to include Willow, Google's first quantum chip?
@jlee-mp4Ай бұрын
random question, but does anyone know the name of the painting at 8:54?
@leoprice4685Ай бұрын
nighthawks by edward hopper
@4115steveАй бұрын
I was trying to think of a way to describe quantum entanglement in a simple way. I'd describe it as seeing the plane in the sky but there not being a shadow. There is no shadow because light bends around the plane and reforms to the path of least resistance. Light particles are attracted to other light particles and they reform to their original state because it's projected path. Like wave oscillation, eventually things sync back together
@user-powerzikoАй бұрын
Can you do one about Earth science?
@genobohez6374Ай бұрын
Every language has so many underlayers that no system will ever be able to compute this; every ancient culture says language was a gift from the gods. Language even evolves and changes faster than you can predict with your straitjacket computer idiocy