
Beyond the Hype: A Realistic Look at Large Language Models • Jodie Burchell • GOTO 2024

102,235 views

GOTO Conferences

1 day ago

Comments: 155
@tiger0jp • 1 month ago
Excellent talk. The USING LLMs section was great, but quite a few people would already be familiar with that material given the massive focus on GenAI. It was the first 30 minutes that was really useful, especially regarding our responsibility as technologists. Hearing the categorisation of intelligence, and where current LLMs sit within it, set the context for where they should and shouldn't be used, e.g. fighting wars.
@etunimenisukunimeni1302 • 26 days ago
This is the best talk I've seen on any subject in months. Very well put together, super informative even in this short time. You can tell there would've been more where that came from, and the knowledge and experience of the speaker shows. No over/underhyping, just how, where and why LLMs work the way they do.
@TechTalksWeekly • 1 month ago
This is an excellent introduction to LLMs and Jodie is a brilliant speaker. That's why this talk has been featured in the latest issue of the Tech Talks Weekly newsletter 🎉 Congrats!
@dwiss2556 • 1 month ago
This has been one of the best, if not the best, demonstrations of what AI is actually capable of. Thank you for a great talk and, most of all, for keeping it at a level that is understandable even for non-gurus!
@Crawdaddy_Ro • 1 month ago
She isn't taking exponential growth into consideration. It's the same reason many researchers didn't see the current AI boom coming, yet others did. If you understand exponential growth, you'll also understand that, even though true AGI is a long way off, it will only take a few more years. Look up the Law of Accelerating Returns.
@dwiss2556 • 1 month ago
@@Crawdaddy_Ro Exponential growth is as limited as any other growth. Just because there are more transformers does not automatically increase the actual quality of the outcome, which is still the major reason the label 'hype' is very apt in this context. We can make cars drive insanely fast, but that doesn't mean we can actually get them to be driven safely on public roads at that speed. This very much translates to AI: the energy consumption at the current stage doesn't correlate with the outcome it provides. Any advantage in time saving is currently eaten up by other negative factors.
@desrochesf • 3 days ago
@@Crawdaddy_Ro Which exponential growth is she not considering? Even training counts haven't been exponential. Transistor counts / Moore's law hasn't been exponential since 2010, if not sooner, and is about to run out as the returns from transistor size shrinks come to an end. Current LLMs aren't much different from the first 'big L' version pre-2000. The AI "boom" currently taking place is largely consumer grift and investor marketing.
@seanendapower • 1 month ago
This is the clearest explanation of how this works that I've come across.
@alaad1009 • 1 month ago
Jodie, if you're reading this, you're amazing!
@mikemaldanado6015 • 1 month ago
ChatGPT measured its performance on the bar exam against a set of people who had taken and failed the exam at least once. Research shows that people who have failed once have a high probability of failing again, i.e. do not trust results from research funded by the vendor itself; independent research disproves a lot of the GPT claims. For example, independent research found that GPT-3.5 gave better answers than 4.0, just not as fast. Thank you for this "no hype" talk; it should be the norm when it comes to discussing LLMs.
@andrewprahst2529 • 13 days ago
I like when she says "so". I would be sad if Australia stopped existing.
@samsonabanni9562 • 4 days ago
She's a great teacher
@ankurbrdwj • 8 days ago
Thank you for such a great talk - the best I have seen for getting an introduction to the current state. Really great, no-BS, no-hype talk. That's why everyone should study psychology, not moral science.
@ManuelBasiri • 1 month ago
I wish we could mandate watching this talk for all of those over-excited business decision-makers.
@aishni6851 • 1 month ago
Jodie you are a great speaker! Amazing talk, very insightful ❤
@sayanmukherjee1216 • 9 days ago
Loved it! She is such a wonderful narrator and presenter. I have a question regarding #generalintelligence #ai #agi #llms: what if we link AIs, and one with "regional agency" forms a network with the others?
@prasad_yt • 1 month ago
Great presentation - concise and loaded. Removing the hype and capturing the essence.
@yeezythabest • 1 month ago
The bare mention of hallucinations is the weak point of this presentation, especially in the RAG part, but it was very interesting.
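For readers unfamiliar with the RAG pipeline referenced above: retrieval-augmented generation retrieves documents and stuffs them into the prompt, and hallucination can still slip back in when retrieval misses. A minimal sketch, assuming a toy bag-of-words retriever and a hypothetical llm() call rather than any particular framework:

```python
# Minimal RAG sketch (illustrative only; the toy retriever, DOCS, and the
# hypothetical llm() call are assumptions, not anything shown in the talk).
from collections import Counter
import math

DOCS = [
    "GOTO is a software development conference series.",
    "Retrieval-augmented generation grounds answers in retrieved documents.",
    "LLMs can still hallucinate when retrieval returns nothing relevant.",
]

def bow(text: str) -> Counter:
    """Very crude bag-of-words vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    return sorted(DOCS, key=lambda d: cosine(bow(query), bow(d)), reverse=True)[:k]

def build_prompt(query: str) -> str:
    context = "\n".join(retrieve(query))
    # If the retrieved context is irrelevant, the model can still answer
    # fluently but wrongly -- this is where hallucination sneaks back in.
    return f"Answer using ONLY this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("Why do RAG systems still hallucinate?"))
# answer = llm(build_prompt(...))  # hypothetical call to a hosted model
```

A production system would swap the bag-of-words retriever for an embedding model and a vector index, but the failure mode stays the same: garbage context in, confident garbage out.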
@apksriva • 25 days ago
Woah! Brilliant talk. Very well constructed; I was immersed in the talk till the end.
@VahidMasrour • 1 month ago
Great talk! The best introduction to LLMs I've seen so far.
@fabiodeoliveiraribeiro1602 • 1 month ago
Last year I created an ancient philosophy quiz ("To pithanon ecypyrosis apeiron") and submitted it to ChatGPT. OpenAI's AI did very badly: it calculated the answer by giving too much weight to one word in the sentence, which led it to attribute the sentence to the philosopher Heraclitus, whose work emphatically rejects another philosophical concept present in the sentence ("apeiron" was coined by Anaximander and used by Xenophanes, a philosopher ridiculed by Heraclitus). Some time later, I applied the same test to another AI and the result was surprising. The same mistake was made, but this time the AI cited the ChatGPT test result that I had previously published on the Internet. People in the field of philosophy do not make similar mistakes, especially if they intend to maintain their credibility. And yes, generative text AIs don't just infringe copyright; they do so by mixing good content with bullshit they invent themselves and inappropriate responses provided by other AIs.
@antonystringfellow5152 • 1 month ago
Yes, this is what's referred to as "contamination". It's a growing problem for models that are trained on publicly available data from the internet.
@trinleywangmo • 21 days ago
@@antonystringfellow5152 And in a day and age when facts and truth mean so little.
@dp29117 • 22 days ago
Thank you, Jodie, for the very nice explanations.
@MaciejHajduk • 24 days ago
She makes it so clear. Best introduction to LLMs and "AI" I have seen ❤
@kehoste • 1 month ago
Excellent talk, thanks for recording and sharing!
@InsolentDrummer • 25 days ago
17:04 Jodie, your remark about scientists being well established in physics, mathematics, computer science etc. but not in psychology is rather important, but a bit incorrect. I've been following the development of LLMs loosely, and still, everyone seems to be missing the most important point: how many linguists were involved in such endeavours? Natural language does not boil down to just learning strings of characters by heart and generating new strings from them. Unless we consult those who study natural language as it is, LLMs are doomed to be just T9 on steroids.
@mortenthorpe • 22 days ago
You are correct in what you write, but this is just one subset of the main issue with AI… it needs to know the context if it is to actually solve a problem (with the outcome being useful, somewhat correct, and repeatable). Since no one can communicate context exhaustively when tasked with feeding it into an AI generator, this is impossible… AI for generating solutions remains impossible.
@GregoryMcCarthy123 • 1 month ago
Excellent talk! Would like to see more from Jodie
@serakshiferaw • 1 month ago
Fantastic speech. Now I think AI is at a stage where kids do what parents do without knowing why - just imitating.
@samvirtuel7583 • 1 month ago
Disagree. Humans also simply obey the functioning of their network of neurons. Reflection and understanding are emergent properties; these properties also emerge from LLMs and will become more and more precise.
@sUmEgIaMbRuS • 1 month ago
@@samvirtuel7583 Counter disagree. Human neurons are non-linear, which makes them way more versatile than digital neurons. And a human brain also constantly evolves and adapts its own structure to problems it encounters. These are both fundamental properties that will never emerge from simply scaling up the number of parameters in linear pre-trained NNs.
@samvirtuel7583 • 1 month ago
@@sUmEgIaMbRuS This is why I talk about precision; this precision makes it possible to make the LLM less myopic. Hallucination is linked to this myopia, which is due to the lack of precision. But I remain convinced that LLMs 'understand' in the same way that we understand.
@sUmEgIaMbRuS • 1 month ago
@@samvirtuel7583 GPTs are pre-trained, i.e. they never learn, they're completely static. Reasoning is sequential. You can't get sequentiality out of a static system by just making it bigger (or more "precise" as you prefer to say). They are also transformers, i.e. their entire thing is taking some text as input and pushing out some other text as output. Compilers do the same, they transform C code to x86 assembly for example. They even "optimize" their output by applying certain transformations that don't affect the observable behavior of the program. But this doesn't mean they "understand" the program in any way. I'm not saying we'll never make an AGI. I'm saying that if we do, it will probably be very different from today's LLMs.
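To make the "pre-trained and static" point above concrete, here is a toy sketch of generation as a frozen mapping from input text to output text. The bigram model, the tiny corpus and the greedy next_token() rule are all illustrative assumptions; real transformers are vastly larger, but they are equally fixed at inference time.

```python
# Toy illustration of "pre-trained and static": the model below is a frozen
# lookup table built once from a tiny corpus; generation never updates it.
from collections import defaultdict, Counter

CORPUS = "the cat sat on the mat . the dog sat on the rug .".split()

# "Pre-training": count which token follows which (a crude bigram model).
counts: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(CORPUS, CORPUS[1:]):
    counts[prev][nxt] += 1

def next_token(token: str) -> str:
    """Frozen next-token function: most frequent continuation seen in training."""
    return counts[token].most_common(1)[0][0] if counts[token] else "."

def generate(prompt: str, max_new_tokens: int = 5) -> str:
    tokens = prompt.split()
    for _ in range(max_new_tokens):
        tokens.append(next_token(tokens[-1]))  # the "weights" never change
    return " ".join(tokens)

print(generate("the dog"))  # deterministic output of a static mapping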
@samvirtuel7583 • 1 month ago
@@sUmEgIaMbRuS This is precisely the magic of these systems, and this is why even scientists do not fully understand them; we simply know that properties or behaviors emerge that go beyond what the system is supposed to do. It is not a question of a programmed expert system based on statistics and a knowledge base; it is more a sort of holographic database formed by a network of information fragments. If you understand what these systems actually do, you will understand that LLMs will soon be able to reason like us. I agree with Geoffrey Hinton and Ilya Sutskever: it's just a question of scale.
@Anna-mc3ll • 27 days ago
Thank you very much for sharing this interesting and detailed presentation! Kind regards, Anna
@santhanamss • 19 days ago
Excellent talk, very concise.
@AnthatiKhasim-i1e • 23 days ago
"As a curious AI enthusiast, I'm fascinated by the potential of SmythOS to make collaborative AI accessible to businesses of all sizes. The ability to visually design and deploy teams of AI agents is a game-changer. What use cases are you most excited about?"
@Flylikea • 15 days ago
15:51 I genuinely believe this part is very clear and eloquent, but will still confuse a lot of people (partially due to these people's difficulty coping with the notion of raw ability in other humans). Train an LLM on dictionaries and grammar books and then ask it to write The Odyssey. If that cannot help people understand intelligence and its difference to how ML/AI models (statistical models on steroids) work, I don't think we can help anyone here. It's a great tech. It's not revolutionary (though I can see how it can be used to trigger a revolution), and it is helpful to move faster through repetitive or repeatable components in a task.
@jamesreilly7684 • 1 month ago
All of this can be summarized in the statement that AGI will not exist until AI systems can learn by the Socratic method as well as they can teach it.
@miketag4499 • 1 month ago
Absolutely lovely talk
@stevensvideosonyoutube • 13 days ago
That was very interesting. Thank you.
@rflorian86 • 1 month ago
God damn... I will have to listen multiple times, thank you.
@jmonsch • 28 days ago
Great talk! Thank you!
@trapkat8213 • 25 days ago
Great presentation.
@mioszdaek1583 • 1 month ago
Great talk. Thanks, Jodie.
@davidporter6041 • 1 month ago
Jodie rocks generally, but also this is exactly the kind of talk we need.
@jodieburchell • 1 month ago
Thanks so much David!
@NostraDavid2 • 1 month ago
Note that gpt-3.5-turbo has been replaced by gpt-4o-mini, its successor. The latter was likely not live when this talk was given.
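For anyone following along with the examples, swapping models is just a change of one argument. A minimal sketch, assuming the openai>=1.0 Python client, an OPENAI_API_KEY environment variable, and whatever model names your account currently has access to (they change over time):

```python
# Sketch of swapping the model identifier with the OpenAI Python client (v1.x).
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str, model: str = "gpt-4o-mini") -> str:
    # Moving from "gpt-3.5-turbo" to "gpt-4o-mini" is just a change of this argument.
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(ask("Summarise the talk's distinction between skill and intelligence."))
```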
@ytdlgandalf • 14 days ago
Such a clear talker!
@NuncNuncNuncNunc • 23 days ago
As long as LLMs get the answer to questions like "Three towels dry on a line in three hours. How long will it take for nine towels to dry on three lines?"* wrong, I am not too worried about AGI. LLMs are basically cliché machines that happen to know a lot of clichés in a lot of different domains. * Gemini provides this reasoning: If it takes 3 hours to dry 3 towels, it means it takes 1 hour to dry 1 towel (assuming consistent drying conditions). If you have 9 towels, and each towel takes 1 hour to dry, then it will take 9 hours to dry all 9 towels.
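For comparison, the intended reasoning is that drying happens in parallel, so only the number of batches matters. A tiny worked version, under the riddle's usual assumptions (each line holds three towels at once and every batch takes the same three hours):

```python
# Worked version of the towel riddle: drying happens in parallel, so what matters
# is how many "batches" are needed, not the total number of towels.
import math

def drying_time(towels: int, lines: int, towels_per_line: int = 3, hours_per_batch: int = 3) -> int:
    # Assumes each line holds three towels at once (the riddle's usual reading)
    # and every batch dries in the same 3 hours regardless of how full it is.
    capacity = lines * towels_per_line
    batches = math.ceil(towels / capacity)
    return batches * hours_per_batch

print(drying_time(3, 1))  # 3 hours: the original statement
print(drying_time(9, 3))  # 3 hours: nine towels fit on three lines in one batch
```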
@dimitriostragoudaras8682 • 1 month ago
OK, the content is superb, the presentation is top notch, but OMG I would like to be able to replicate her accent (especially when she says DATA).
@davidsault9698 • 24 days ago
She's scarily intelligent, combined with the ability to speak - which is not necessarily intelligence-based.
@emralcanan9556 • 20 days ago
Nice talk
@BasudevChaudhuri • 1 month ago
That was a fantastic presentation! Absolutely loved it!
@shikida • 18 days ago
Great presentation.
@AIhyp • 21 days ago
Just wow!
@toreon1978 • 1 month ago
30:17 Did you forget embedding, context and prompt engineering?
@hcubill • 1 month ago
Jodie is awesome. Really cool presentation!
@Hank-ry9bz • 1 month ago
42:50 Still, it's impressive in its own way. Admittedly not AGI though (whatever that means).
@shoeshiner9027 • 1 month ago
Agree. This video's contents are the same as what I have thought before. 😊
@sirishkumar-m5z • 23 days ago
Exciting news about META's new open-source model! SmythOS is perfect for integrating and experimenting with the latest AI models. #SmythOS #OpenSourceAI
@generischgesichtslosgeneri3781 • 22 days ago
Don't call it hallucinating - it fabulates.
@BoomTechnology • 15 days ago
AI solving problems that are irrelevant to us 42:49 > Like bad humans, = Delete human genetic error 🤖;
@charlessmyth • 23 days ago
Good talk :-)
@seenox_ • 1 month ago
Very comprehensive and informative, thanks.
@campbellmorrison8540 • 23 days ago
Excellent talk, thank you. My primary concern is the suggestion that extreme generalization is at the human level. While I'm sure it is, I'm equally sure it doesn't apply to a very large percentage of humanity. It seems to me a very large part of humanity needs to be trained, and they too would fail when given problems they have not seen before. That suggests to me that there is a very large percentage of the population who, in employment terms, are only at or below the level of current LLMs and hence very likely to be replaced by AI. As a result I don't think it's unreasonable to think that AI as it stands is going to be a threat to human work and hence income. However, my real fear is that as the models get larger and the volume of information input becomes wider, the less anybody will be able to predict what the output from AI will be. While that may not be a problem in a fixed role, the more we give AI control of infrastructure, and especially military and scientific realms, the less we will actually be able to control these, as we will not be able to predict problems before they happen. From what I am seeing, LLMs are not my real concern, as I agree they are about natural language and so suit applications that require language manipulation; but what about systems that really appear to have little to do with language, such as things that manipulate images along with geographic data - the sort of thing you might need for a self-driving car or missile? Tell me I'm wrong, please.
@soulsearch4077 • 1 month ago
I really enjoyed this. I actually gained some extra knowledge, and it kind of aligned with my suspicions about the current state of AGI.
@jonchicoine • 20 days ago
If you're new to AI and Python, good luck getting the example notebook working on Windows. It appears to me that more than one package doesn't support Windows.
@lancemarchetti8673 • 1 month ago
Facts
@ahmedeldeeb6893 • 14 days ago
Good talk all in all, but I found the section on "are LLMs intelligent" to be less than coherent. The placement of current LLMs on the chart is completely subjective, and the classification of generalization levels is relevant but wasn't really brought to bear. The method of generating a "skill program" is only one preferred way of designing a system and by no means the only way, so why bring it up?
@SimonHuggins • 26 days ago
Hmmm. But you can encode lots of data as though it were language - more efficient tokenization of symbolic representations of different modalities will most likely get us a long way too. And finding generalizations from this may well help LLMs speed further towards AGI. I think the origin of LLMs hides their potential outside this space. But yeah, there's a lot of… ahem, attention on this problem.
@pmiddlet72 • 1 month ago
Generating the ultra-generalized model-of-everything doesn't even match the variation in humans that could remotely constitute these vague notions of AGI. So more domain-specific models would appear, to an extent, more worth solving for. Generating the 'Renaissance superintelligence' isn't, IMO, a reasonable goal, for many reasons - a large part of which are philosophical/ethical. What's the point, unless what we're generating is a reasonable model for better understanding ourselves - specifically, how the human brain works and processes the world around it (a notoriously complicated area of study)? Conducting scientific research simply 'because we can' (or more likely, because it brings in the Benjamins), while it may generate some new insights, has more often than not driven historical bad actors (and I'm being QUITE nice here) to engage in some ethically horrendous activities. So the hype in this regard, and the droves of misunderstanding created around it, are important to look at with healthy, skeptical eyes. Sometimes 'excitement' over an idea, and over-promising its various facets of value, can run rampant enough to drive this seeming dichotomy between 'accelerationists' and 'doomers', as if there's NO SUCH thing as a spectrum of thought nor the existence of a middle ground. Is it important to have watchdogs over big tech? You bet. We wouldn't expect any less of watchdogs over big finance, big oil, big watermelon growers - you get the idea.
@musicbuff81 • 1 month ago
Really wonderful talk. Thank you!
@Darhan62 • 1 month ago
I think there is reasoning going on in LLMs - what can only be called a form of reasoning, and by that token (no pun intended) a form of intelligence. It may be just reasoning based on language, but it is reasoning. Also, what about multi-modal models that can look at a photo and give you a text description of what's in it, or can analyze a piece of music and tell you the genre, or give you a text description of it?
@MaxMustermann-vu8ir • 1 month ago
Today I asked MS Copilot, aka GPT-4, to return a list of African capitals starting with a vowel. It returned 20 results, some of them repetitions, and 13 out of 20 did NOT start with a vowel. I'm sure AGI is near 😀
@jan7356 • 1 month ago
I asked GPT-4o to do the same. It gave back 7, all starting with a vowel, all different. Only one of them wasn't a capital (but it is a former capital and the biggest city in its country). It needed 2 seconds for something I couldn't have done. I'm sure AGI is near 😀
@MaxMustermann-vu8ir • 1 month ago
@jan7356 I will try it out. But it's still not correct. And you could have done it by yourself - not in 2 seconds, but you would have checked whether the result provided by the LLM was correct.
@arnavprakash7991 • 1 month ago
@@MaxMustermann-vu8ir It doesn't break words down into individual letters like we do, so it will struggle on tasks like that. Copilot's GPT-4 is also not as good as normal ChatGPT GPT-4, and normal GPT-4 is now surpassed by Claude 3.5 Sonnet.
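The point about letters is easy to see with a tokenizer. A small sketch using the tiktoken package (an assumption - install it separately; the exact subword split depends on the encoding, so don't read too much into any particular output):

```python
# Sketch of why letter-level tasks are hard for LLMs: the model sees subword
# token IDs, not characters. Assumes `pip install tiktoken`.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-3.5/GPT-4-era models

word = "strawberry"
token_ids = enc.encode(word)
pieces = [enc.decode([t]) for t in token_ids]

print(token_ids)        # a handful of integer IDs
print(pieces)           # subword chunks, not individual letters
print(word.count("r"))  # counting letters is trivial once you work on characters
```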
@TheRealUsername • 1 month ago
Yeah, of course, AGI is near, thanks to mathematical algorithms which don't have any of the components required for intelligence. But I'm sure statistical patterns within a data distribution are sufficient to outperform the human brain, never mind that the data has to be mathematically readable (tokenization). You can surely get AGI from text and (encoded) images, even though the model isn't building a unified representation of the world, even though it isn't capable of extrapolation, abstraction and extreme generalization and is therefore unable to create novel patterns or be creative, even though it can't detect its own mistakes while inferencing. AGI is near? Thanks to mathematical algorithms.
@markmonfort29 • 26 days ago
It doesn't do math, so getting an AI model to do math or counts or sums etc. is not great. That's why it can't properly count how many Rs there are in the word "strawberry". However, it could if it's told to use function calling - that's how ChatGPT can pull in an Excel file and work on it. It turns your query into code and then runs that. Not sure if Copilot can do function calling, but if you type the following into ChatGPT: "Using function calling, tell me all the African capitals that start with vowels", the response is: The African capitals that start with vowels are: Abuja, Accra, Addis Ababa, Algiers, Antananarivo, Asmara, Ouagadougou.
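Roughly the kind of code such a code-execution step would generate and run for that question looks like the sketch below. The capitals list is a small hand-typed sample for illustration, not a complete or authoritative dataset:

```python
# Illustrative stand-in for the code a code-interpreter / function-calling step
# might generate for the vowel-capitals question. The list is a partial sample.
AFRICAN_CAPITALS = [
    "Abuja", "Accra", "Addis Ababa", "Algiers", "Antananarivo", "Asmara",
    "Cairo", "Dakar", "Kampala", "Nairobi", "Ouagadougou", "Tunis",
]

def starts_with_vowel(name: str) -> bool:
    return name[0].upper() in "AEIOU"

vowel_capitals = [c for c in AFRICAN_CAPITALS if starts_with_vowel(c)]
print(vowel_capitals)
# ['Abuja', 'Accra', 'Addis Ababa', 'Algiers', 'Antananarivo', 'Asmara', 'Ouagadougou']
```

Once the question is turned into deterministic code like this, the counting and filtering are exact; the LLM's job reduces to writing the code and reading back the result.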
@vasvalstan • 28 days ago
She should have added the new research from DeepMind and what they did, not just Kasparov and chess.
@InfiniteQuest86 • 1 month ago
Thank you. She's one of the few rational people left on Earth. You would not believe the kind of hate-filled, pure-rage, angry arguing I get when I mention that a language model should be used for language, and that if we want to do something else we should use whatever tool is suited to that task. How are we living in a time when people think this is a controversial idea?
@tsilikitrikis • 1 month ago
Bro, saying that GPT-4, a human-level language understanding system, has to work only on translation-like tasks is like saying that a man has to work only as a translator 🤣🤣
@TheRealUsername • 1 month ago
@@tsilikitrikis "Human-level"?? Do you think GPT-x can think like you? Can reason?
@tsilikitrikis • 1 month ago
@@TheRealUsername Why do you ignore the rest of the sentence? It can understand language at a human level. The other things are the results of this understanding. If you cannot distinguish it from a human and it can do work like a human... what is it?
@seanys • 21 days ago
“FORTY TWO!” Universal intelligence solved.
@kevinamiri909 • 20 days ago
GPT-3.5 is not 355B parameters.
@dennisestenson7820 • 1 month ago
Artificial general intelligence will be built from components that are algorithmic, systematic, and not intelligent at all. Same as us.
@TheRealUsername • 1 month ago
I think you should learn biology. Our brain is incredibly complex, even more so the neocortex; it couldn't be farther from mathematical algorithms. All ML models are statistical pattern learners; they can only learn patterns rather than actual data, because that's what is mathematically possible, and it requires the whole dataset to be mathematically readable.
@sdmarlow3926 • 1 month ago
Is anyone else distracted by the person taking a pic of every slide? Pro tip: announce where the slides can be found online before starting a talk.
@millax-ev6yz • 1 month ago
Excellent video... although I had to stop my mind from thinking about what would happen if I mixed Foster's and Victoria Bitter together, because of the accent. That's my own neural net working against me.
@richardnunziata3221 • 1 month ago
Test data in the training data is a rookie mistake... I have to wonder whether that is true or whether there is a misunderstanding here.
@aaabbbccc176 • 1 month ago
This is what I think: you might try asking GPT-4o who won the gold medal in the 100m dash at the Paris Olympic Games RIGHT AFTER the race. If it answers "I do not know," or it answers wrong, then you know whether there is a rookie mistake. The answers to MOST of the questions people ask GPT are indeed in the training data, somewhere. GPT is just smart enough to extract (probabilistically) the right answer and put it in well-organized natural language sentences.
@suisinghoraceho2403 • 1 month ago
@@richardnunziata3221 When you have bots automatically crawling internet data and OpenAI being very opaque about how their models are trained, this is actually quite difficult to avoid. You can only try to validate whether the test data is in the training data after the fact.
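One common after-the-fact check is to measure n-gram overlap between a benchmark item and suspected training text. A crude sketch - the n-gram size, the 0.5 threshold, and the toy strings are all illustrative assumptions, not a standard from any particular paper:

```python
# Crude after-the-fact contamination check: measure n-gram overlap between a
# benchmark question and a chunk of (suspected) training text.
def ngrams(text: str, n: int = 8) -> set[tuple[str, ...]]:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(benchmark_item: str, training_chunk: str, n: int = 8) -> float:
    bench = ngrams(benchmark_item, n)
    if not bench:
        return 0.0
    return len(bench & ngrams(training_chunk, n)) / len(bench)

question = "three towels dry on a line in three hours how long will nine towels take"
suspect  = "three towels dry on a line in three hours how long will nine towels take on three lines"

ratio = overlap_ratio(question, suspect)
print(f"{ratio:.2f}")  # close to 1.0 -> the item likely leaked into the training data
print(ratio > 0.5)     # flag as contaminated under an arbitrary threshold
```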
@mortenthorpe • 22 days ago
Notice that you can literally substitute the term AI with statistics, and the content and message remain the same… What does this mean, semantic fun aside? Well, for starters it means that generative AI delivers statistically predictable results, which is the crux and the reason to completely reject AI for generation - it will never deliver correct solutions! The solutions completely rely on the quality of the data input and on knowing the context; neither is trivial, or achievable in any meaningful sense… and you don't even have to be a programmer or technical to know this - the mere foundation of AI as a concept relies on these factors… In brief, for anything truly meaningful, AI is and remains useless, forever!
@stratfanstl • 21 days ago
@mortenthorpe EXACTLY. There is no "intelligence" in such systems, only statistical probabilities concerning the next likely token to appear given the "context" of a set of prior tokens ("the prompt"), based on everything the model has been supplied to calculate its statistics ("its training"). If you prompt such a system for "energy as a function of mass," it might spit out e = mc^2. But since it is only representing the likelihood of next tokens based on prior tokens, if the world were filled with a million idiots who all believed e = mc^3, blogged about it 20 times per day, and responded to other blogs on the topic five more times each day reiterating their belief that e = mc^3, that scientifically INCORRECT content would eventually distort the probabilities in an LLM to the point where the INCORRECT formula would become increasingly likely to appear as output. These models have zero means to weight probabilities based on TRUTH. They are solely capable of weighting based on frequency of appearance. That's not intelligence.
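The frequency-versus-truth point can be shown with a toy counter: flood the corpus with a popular but wrong claim and the "most likely" completion flips. The corpus below is made up purely to demonstrate the effect and is not how a real LLM stores text:

```python
# Toy illustration that next-token probabilities track frequency, not truth.
from collections import Counter

def most_likely_completion(corpus: list[str], prompt: str) -> str:
    """Return the most frequent continuation of `prompt` seen in the corpus."""
    continuations = Counter(
        line[len(prompt):].strip()
        for line in corpus
        if line.startswith(prompt)
    )
    return continuations.most_common(1)[0][0]

corpus = ["energy as a function of mass: e = mc^2"] * 10
print(most_likely_completion(corpus, "energy as a function of mass:"))  # e = mc^2

# Now flood the corpus with a popular but wrong claim...
corpus += ["energy as a function of mass: e = mc^3"] * 1000
print(most_likely_completion(corpus, "energy as a function of mass:"))  # e = mc^3
```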
@afterthesmash • 16 days ago
"I actually got my PhD in psychology." Immediate translation, from her next comment: I'm a widget produced by the Concern Industrial Complex. It's sad that the world we now live in automatically translates "watching with a lot of concern" into "oh, you must have a recent humanities degree", but there it is.
@afterthesmash • 16 days ago
Having now finished the video: she did a good job of factoring the world as a practicing data scientist after this brief but worrying moment.
@Theodorus5 • 1 month ago
OK for folks who know something about the subject, but a woefully inadequate introduction for those who may not.
@NostraDavid2 • 1 month ago
"GOTO" is a software development conference, so the target audience was developers, which makes sense.
@nikjs • 1 month ago
Sentience is not a prerequisite for the destruction of civilization. For that, the more primitive the better.
@marccawood • 1 month ago
Sorry? At 11:20 she says you can use an LLM to generate training data?? I'm gonna call BS on that claim. If you're not learning from real-world data you're pissing in the wind.
1 month ago
@@marccawood What she meant was that NLP models are really good at extracting the parameters you need from raw texts/reports - something that can be very time-consuming when setting up and training ML models.
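The pattern being described is usually LLM-assisted labeling: let the large model annotate raw text, then train a small conventional model on those synthetic labels. A sketch of that idea - label_with_llm() is a hypothetical stub standing in for a call to a hosted model, and the report snippets and labels are made up for illustration:

```python
# Sketch of LLM-generated training data: an LLM labels raw text, then a small,
# cheap classifier is trained on those synthetic labels. Assumes scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

RAW_REPORTS = [
    "Pump pressure dropped below threshold during the night shift.",
    "Routine inspection completed, no anomalies found.",
    "Coolant leak detected near valve 7, maintenance requested.",
    "All systems nominal after the scheduled restart.",
]

def label_with_llm(text: str) -> str:
    # Hypothetical: in practice this would prompt an LLM, e.g.
    # "Label this report as 'incident' or 'normal'." Hard-coded here so the
    # sketch runs offline.
    return "incident" if any(w in text.lower() for w in ("dropped", "leak")) else "normal"

labels = [label_with_llm(r) for r in RAW_REPORTS]

# Train a cheap, fast classifier on the LLM-generated labels.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(RAW_REPORTS, labels)

print(clf.predict(["Unexpected pressure leak reported in sector 2."]))
```

Whether the resulting model is any good still depends on the LLM's labels being checked against real-world data, which is the commenter's point above.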
@MarkArcher1 • 1 month ago
I enjoyed the talk, but it's a bit of a red flag that the speaker isn't familiar with the difference between AGI and ASI.
@janicewolk6492 • 4 days ago
Do you think the emergence of these ideas is contributing to the significant drop in birth rates worldwide? As in, who wants to expose their child to neural nets? Maybe this partially explains the rise of right-wing anti-intellectual political movements? As in, who benefits from all of this? It certainly seems as if this is a lovely intellectual game that, like social media, has seriously significant consequences. I have a Master's degree in Slavic Linguistics. Am I now extraneous? Am I just supposed to say, oh well? Is the speed worth the social upheaval? No doubt the response of the espousers of these ideas is that determining outcomes isn't their responsibility. By the way, who watches computer chess games? I love the way the speaker refers to "humans". Who is her constituency?
@irasthewarrior • 19 days ago
AI is sophisticated degeneracy.
@afterthesmash • 16 days ago
Intelligence is _not_ controversial. It's divergent. It's the same as religion. There is no controversy between Christianity and the Muslim faith. But there are definitely major points of divergence. To call perspectives on IQ controversial rather than divergent gives far too much voice to the squeaky wheels.
@afterthesmash • 16 days ago
What I just did there is a mode of generalization - that of rising above brainless cliché - that I would dearly _love_ to see manifest in my future chatbot companions.
@pristine_joe • 1 month ago
Just thinking: LLMs have been a subject of research for the last few decades and are limited by the computing capacity our technology has thus far produced. Human intelligence is backed by training through evolution, and has perfected the art of passing itself down to the next generation through DNA, the community, etc. Could we be in the very early stages of trying to replicate our consciousness, and maybe it may eventually emerge if we overcome the limitations we currently face? 🐝🌻
@TheRealUsername • 1 month ago
LLMs and all ML models are pattern learners and only work when the data is mathematically readable (tokenization), whereas biological intelligence is firstly omnimodal and secondly relies mostly on abstraction, constant reasoning, intuition and continual learning (neuroplasticity). LLMs aren't a form of intelligence; they're sophisticated mathematical algorithms.
@afterthesmash • 16 days ago
"It's been an overwhelming flood." Really? An overwhelming flood of sensationalist headlines that you could tune out completely if you wished to, and that barely impacted anything in your day-to-day life? At least not yet. And possibly not ever.
@afterthesmash • 16 days ago
Having now finished the video, to Jodie's credit, this was a passing turn of phrase and the rest of the talk never went here again.
@askurdija • 1 month ago
GPT's performance was measured on various intelligence benchmarks and tasks outside of its training data. She doesn't explain what's wrong with these measurements, and she doesn't give a concrete proposal for how to measure intelligence in a better way. Human intelligence is measured on tests that are similar or identical to the ones given to LLMs. Instead of engaging with this whole body of research, she just shows an anecdotal counterexample (the Codeforces problems).
@rob99roy • 17 days ago
This presentation is going to age very badly. Let's not underestimate how quickly AI will progress. I suggest you revisit this talk in a year.
@nikjs • 1 month ago
Where's the code?
@Klayhamn • 1 month ago
Anyone who has spent enough time with GPT-4 and GPT-4o would easily know the presenter is wrong. Good enough LLMs are capable of REASONING and not just "text generation". I have myself crafted arbitrary problems that require math and logic to solve, and had GPT-3.5 fail at them and GPT-4 SUCCEED in solving them, and there is no way it had "seen this problem before" because I made it up on the spot. So I think the title of "emerging AGI" is perfectly fine. You have no idea what the path to AGI will be, and just like we "unintentionally" discovered LLMs by trying to solve something else, we might end up creating AGI out of LLMs without directly trying to instill some kind of "general intelligence" into them. The main thing I believe LLMs are currently missing is the ability to LEARN - i.e. dynamic plasticity in real time. They also lack memory, for the very same reason. So I think the KEY to achieving AGI would be to grant them memory and plasticity (in some form) - this would probably be the stepping stone that takes us to AGI levels. Even if they don't START with AGI capabilities from the outset, they might EVOLVE to have AGI capabilities, just like babies grow up, learn more about the world and gain skills and mental faculties.
@muhammeryesil3331 • 1 month ago
Definitely agree with you.
@ASmith2024 • 1 month ago
lol
@jpphoton • 27 days ago
hmmm
@tsilikitrikis • 1 month ago
It got 10 problems from its training period and got them all right, and then got 10 problems of the SAME difficulty from after that period and got them... all wrong?? No way this is right, guys.
@tsilikitrikis • 1 month ago
Also, understanding language is much broader than cracking the game of chess. You learn something of the real world, so it brings you closer to a general entity!
@jan7356 • 1 month ago
I am sure this is completely outdated. This was done on the first version of GPT-4. Coding abilities and generalization abilities have gotten much better with later versions.
@InfiniteQuest86 • 1 month ago
Lol I hope you are being sarcastic. That's always been my experience. These companies are lying about training on the test data. It's actually pretty sad that they don't score perfectly having trained on it. That's pretty pathetic actually.
@InfiniteQuest86 • 1 month ago
@@jan7356 On anything I've asked GPT-4 and GPT-4o, the original 4 was far better. So this statement doesn't really hold.
@tsilikitrikis • 1 month ago
Bro, you understand nothing about this technology. Read the paper "Sparks of AGI". You say that they train them on test data 🤣🤣. I hope you have nothing to do with software.
@Peter.F.C • 24 days ago
What we have here is a lazy person who doesn't even do basic research and doesn't know what they are talking about. Take, for example, her description of the 1997 Kasparov match against Deep Blue. In that match, Kasparov did lose the second game - at least she got that right. But he did not lose the third game, and he did not lose the fourth game; both of those games were draws. He did lose the match, and that was despite still being stronger than the chess engine at that point in time. This is information on the match that she could easily have checked if she'd bothered. The chess engine's play had had a psychological effect on him, which is why he lost the match despite being the stronger player. But it was very well understood at the time that the chess engine that had defeated him had the intelligence of a cockroach. As it is now, chess engines are far beyond the strongest humans, but they still only possess a cockroach level of intelligence - and they are irrelevant anyway in a discussion of the capabilities of these LLMs. This talk sheds no light on the subject matter.
@PACotnoir1 • 17 days ago
It's interesting to see that she can't accept that compressing information into a trillion parameters constitutes an elaborate form of intelligence - alien to us, but still with "cognitive abilities" - and that reducing it to simple maths correlations is like reducing human intelligence to chemical reactions. It just forgets that emergent properties arise in complex systems.
@odiseezall • 1 month ago
The speaker presented 0 (zero) evidence regarding the timeline of increasing generalization of AI... She's saying "there's a long way to go", but there is no proof to support that conclusion.
@larsfaye292 • 1 month ago
@@odiseezall Because it's self-evident...
@TheRealUsername • 1 month ago
I believe LLMs can mimic a certain form of understanding of certain parts and aspects of their training data. It won't achieve AGI, but it can still be useful for certain tasks.
@ASmith2024 • 1 month ago
bananums.
@ggrthemostgodless8713 • 17 days ago
GROK will rule them all... Elon has been at it for only two years... and look!!
@user-wr4yl7tx3w • 1 month ago
Basic info
@tyc00n • 25 days ago
First half was great, second half was just terrible.
@sblowes • 19 days ago
Wow, this is so off! AGI didn't become ASI because it's sexier; it's a different class of AI. LLMs already have a base level of reasoning at the ChatGPT-4 level, and the _general consensus_ among current leading AI researchers is that we can expect a higher level of reasoning once we multiply the size again.
@raiumair7494 • 1 month ago
Nothing new at all - in fact boring to some degree - except a short take on why current LLMs are not on their way to AGI, which I agree with.
@antikras666 • 1 month ago
babka