Sparks of AGI: What to Know

  32,280 views

Edan Meyer

1 day ago

Comments: 134
@marshallmcluhan33 · 1 year ago
The only academics that get to mess with GPT4 internally are the ones on Microsoft's payroll. Open source is the only way forward towards honest research.
@gJonii · 1 year ago
You won't be getting much more open source research. GPT-4 is barely safe anymore. It doesn't have to be much smarter for it to become more dangerous than nukes, and at that point, only crazy anarchists would be releasing the models to the public. And even though GPT-4 is still safe enough by itself, we don't know how much tweaking it would require to go from open GPT-4 to ending humanity. Maybe years, maybe decades. Or maybe months. We don't know.
@jordan-ho7gt · 1 year ago
Open source has no money for research.
@marshallmcluhan33 · 1 year ago
@@jordan-ho7gt The whole AI framework is built on open-source.
@saltyscientist1596 · 1 year ago
There's so much of this stuff (papers, talks, promotional material) coming directly from the people who stand to gain financially from it. It's honestly unsettling that so many people don't interrogate the information based on its source. (I'm also not super reassured that you can, potentially, get the right answer from these models as long as you prompt them properly. This is like Google search on steroids, where everyone gets answers that confirm their biases.)
@TheScott10012 · 1 year ago
Babe wake up new Edan Meyer just dropped
@jimmcintosh3718 · 1 year ago
Is it just me? In the "DaVinci Three" acrostic, the model has NOT satisfied the first constraint; it is short one line, at the end, beginning with "e." What am I missing?
@ostenloo1981 · 1 year ago
First time catching your video when it just comes out, nice
@etunimenisukunimeni1302 · 1 year ago
Thank you for the video. I haven't really seen much discussion on the points you bring up about the paper, and even less with a rational and thoughtful approach like this. Much appreciated!
@ChrisStewart2 · 1 year ago
Thanks for bringing some good reasoning to the discussion.
@bornach · 1 year ago
And without adding to the AI hype, unlike certain Two Minute Papers-style YouTube channels.
@tiagotiagot · 1 year ago
Is there a language model that generates things using an approach similar to the Wave Function Collapse algorithm? (Not quantum mechanics; the procedural, tile-based game map generation algorithm named after the quantum mechanics concept.)
@j.j.maverick9252 · 1 year ago
Constraint satisfaction is the general approach. Good question!
@tiagotiagot · 1 year ago
@@j.j.maverick9252 Does that involve that "entropy" approach WFC uses?
@revimfadli4666 · 1 year ago
Speaking of quantum, what about quantum NNs using their inherent WFC to run LLM WFC?
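For readers unfamiliar with the algorithm being discussed: the core idea of Wave Function Collapse is to fill in the most certain (lowest-entropy) slot first, rather than proceeding strictly left to right the way an autoregressive LM does. A minimal sketch of that decoding order, using hand-made candidate distributions rather than a real model (the `collapse` routine and the toy `slots` data are purely illustrative, and constraint propagation between slots is omitted):

```python
import math

def entropy(dist):
    """Shannon entropy (bits) of a {token: probability} distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def collapse(slots):
    """Fill slots in order of lowest entropy first, WFC-style.
    A real WFC would also re-propagate constraints to neighboring
    slots after each collapse; an LM-based version would re-query
    the model to update the remaining distributions."""
    filled = {}
    while len(filled) < len(slots):
        # pick the unfilled slot whose candidate distribution is most certain
        i = min((i for i in range(len(slots)) if i not in filled),
                key=lambda i: entropy(slots[i]))
        # "observe" it: commit to its most probable token
        filled[i] = max(slots[i], key=slots[i].get)
    return [filled[i] for i in range(len(slots))]

# toy candidate distributions for a 3-slot sentence
slots = [
    {"the": 0.5, "a": 0.5},    # high entropy, collapsed last
    {"cat": 0.9, "dog": 0.1},  # low entropy, collapsed first
    {"sat": 0.6, "ran": 0.4},
]
print(collapse(slots))  # ['the', 'cat', 'sat']
```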
@doctorartin · 1 year ago
Thanks! I've also thought of GPT-4 as a thinking-fast/System 1-only intelligence. But don't you think it's possible to create a thinking-slow/System 2 loop where GPT-4 critiques its own outputs for several rounds before giving the user its output? Are there papers evaluating its performance after X number of "critique" rounds?
@stefano8840 · 1 year ago
There is a paper about that
@littlegravitas9898 · 1 year ago
Yeah, self-reflection papers are happening. Pinecone and LangChain can both be used for memory storage and retrieval for on-task management and reflection as well. I've been working with chaining instances together to do these kinds of iterative loops. Results are... startling.
@doctorartin · 1 year ago
@@stefano8840 Nice! I'd really love it if you could point me in the right direction.
@costa_marco · 1 year ago
@@littlegravitas9898 Is a paper coming from your tests? I would be really interested in your results.
@littlegravitas9898 · 1 year ago
@@costa_marco I'm currently collaborating on a thought piece, though it will touch on reflection learning, for sure.
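The critique loop discussed in this thread can be sketched in a few lines. Everything here is a stand-in: `critique_loop` is a hypothetical wrapper, `toy_llm` is a dummy callable, and a real System-2-style setup would pass in an actual LLM API call with carefully designed prompts:

```python
def critique_loop(llm, task, rounds=3):
    """System-2-style loop: draft an answer, then have the model
    critique and revise its own output for a fixed number of rounds.
    `llm` is any callable mapping a prompt string to a response string."""
    answer = llm(f"Task: {task}\nAnswer:")
    for _ in range(rounds):
        critique = llm(f"Task: {task}\nAnswer: {answer}\n"
                       "List any mistakes in this answer:")
        answer = llm(f"Task: {task}\nAnswer: {answer}\n"
                     f"Critique: {critique}\nRevised answer:")
    return answer

# demo with a dummy "model" that just echoes the last prompt line;
# a real system would call an LLM API here
def toy_llm(prompt):
    return "revised: " + prompt.splitlines()[-1]

final = critique_loop(toy_llm, "Summarize the paper", rounds=2)
print(final)  # revised: Revised answer:
```

The number of rounds is the knob the comment above asks about: evaluating performance after X critique rounds just means sweeping `rounds` and scoring each returned answer.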
@jakubsebek · 1 year ago
The way you'll know you have AGI is that you're dead. Thanks for the nice paper summary.
@thesofakillers · 1 year ago
20:44 I'm curious to hear your thoughts on the safety implications of "building a stateful agent that can learn things over the long term and that can plan about the future".
@sadiqkhawaja7019 · 1 year ago
Awesome channel, glad I discovered it. One paper that Hinton said was interesting but kind of abandoned was on using matrices to model symbolic relationships (paper title); maybe have a look at it if you're interested in covering it.
@elirothblatt5602 · 1 year ago
Great concept for a video. Subscribed and listening now!
@EliasMheart · 1 year ago
In regards to your question at the end: I think we want a definition of AGI because we have to avoid it as best we can for as long as we haven't solved alignment. If we know exactly what the common denominator of AGI is, it is easier to notice when something is reaching too far. I would like humanity to continue.
@zs9652 · 1 year ago
Are we so sure AGI could ever be fully aligned? It might be like children: you can want them to be a certain something and believe a certain way when they grow up, but they will decide what they are in adulthood.
@j.j.maverick9252 · 1 year ago
Could the out-of-order planning be handled simply by a separate instance monitoring the in-progress reasoning steps to create additional guidance instructions? It seems like, for the reversed-last-line task, it might spot the problem with the last line before the primary system does, and suggest a better step-by-step approach.
@jperez7893 · 1 year ago
Can someone use it to decipher the Voynich manuscript and find out what it contains?
@strictnonconformist7369 · 1 year ago
It's interesting how what AGI is supposed to mean keeps changing. In practical terms, GPT-4 is more successful than a rather large part of humanity at problem solving with less guidance than humans doing it for the first time, all in a model built for working with language, and it does algebra problems too. I can find a large percentage of adult humans who have failed to achieve that much. It'd be interesting to see it take a formal IQ test to see how it's rated. The apparent inability to do meaningful forward planning may be its Achilles heel, but even that can be mitigated to some extent with multiple contexts and tools and a prompting strategy. I'll leave that as an exercise for the reader 😉
@anthonyward8805 · 1 year ago
Well, at a minimum an AGI should be able to handle out-of-distribution tasks. With such a large training set, I don't think GPT-4 has really shown out-of-distribution success. And it's important, because if we ever put AI in charge of something and a distribution shift occurs, an AGI should be able to handle it. Anyway, thanks for the good video!
@notaregard · 1 year ago
Everyone is entitled to their own definitions, but you must admit that this is moving the goalposts from what people were saying 10 years ago.
@bornach · 1 year ago
@@notaregard That's the trouble with anything to which the label "Artificial Intelligence" has been applied. Once one understood how it worked and started deploying it in applications, people just called it face detection or content-aware fill or cancer screening or automated segmentation or computer-generated visual effects, etc. The bar gets raised, and once again only the cutting-edge research qualifies as true AI.
@anthonyward8805 · 1 year ago
@@notaregard No, it's pretty obvious that AGI should be able to do anything that humans can do; that's always been the bar. Making a test metric that can determine whether something is an AGI is what's been moving, after each specialized system beats the current test.
@notaregard · 1 year ago
@@anthonyward8805 Anything a human can do? I agree that we can use humans as a reference, but your definition of AGI would require embodiment. It's obvious that you can have something less intelligent than humans meet the thresholds for AGI. There are many possibilities. It could be smarter in some ways and dumber in others - and be able to perform "out of distribution." I'm not saying your standard is wrong, only that it isn't universally agreed upon.
@Vince_F · 1 year ago
7:13 Those “mistakes” are INTENTIONAL. A.I. is learning our responses to its “hallucinations”. . . .it’s a TRAP‼️ 😂
@chenwilliam5176 · 1 year ago
This is what I worried about before 😢
@Alain-pb7xp · 1 year ago
Thanks for this video!
@anywallsocket · 1 year ago
Very well done. While these models are indeed impressive, we certainly need the rigid, formal step-by-step counterpart to avoid making simple mistakes, and a larger working memory so that it can plan and adjust its output before finalizing. Once we have that, however, I don't see what's missing for true AGI.
@albertmashy8590 · 1 year ago
Vector databases can now provide unlimited long-term memory for AGIs.
@macmcleod1188 · 1 year ago
Inhibitory model remains missing.
@bornach · 1 year ago
Define "true AGI"
@anywallsocket · 1 year ago
@@bornach you'll know it when you see it
@bornach · 1 year ago
@@anywallsocket In other words, coming up with a definition for AGI is like trying to define pornography
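The vector-database long-term-memory idea raised earlier in this thread can be sketched in a few lines. Everything here (the `VectorMemory` class and the hand-written 2-D embeddings) is a toy stand-in: a real system would get embeddings from a model and use an approximate nearest-neighbor index such as a vector database:

```python
import math

class VectorMemory:
    """Toy long-term memory: store (embedding, text) pairs and
    retrieve the most similar entries by cosine similarity."""
    def __init__(self):
        self.entries = []  # list of (vector, text) pairs

    def add(self, vector, text):
        self.entries.append((vector, text))

    def search(self, query, k=1):
        def cos(a, b):
            # cosine similarity between two equal-length vectors
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(x * x for x in b))
            return dot / (na * nb)
        ranked = sorted(self.entries, key=lambda e: cos(e[0], query),
                        reverse=True)
        return [text for _, text in ranked[:k]]

mem = VectorMemory()
mem.add([1.0, 0.0], "user likes chess")     # made-up 2-D embeddings
mem.add([0.0, 1.0], "user lives in Tokyo")
print(mem.search([0.9, 0.1]))  # ['user likes chess']
```

In an agent loop, each conversation turn would be embedded and `add`ed, and `search` results would be pasted back into the prompt, which is the "unlimited memory" being claimed above, modulo retrieval quality.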
@whitebai6367 · 1 year ago
Yes, I've heard of that paper. They use python, and they use data, right?
@taktungtuang9118 · 1 year ago
You want a good definition because it can explain things better. That's why jargon (or even math) exists: professionals need to explain things that everyday language cannot express well. You could argue that it's more productive to just explain what we want to build more precisely instead of defining AGI, and that's fine. But it's mainly not about consensus.
@phamhoangminh3385 · 1 year ago
Why do I feel like people evaluate an LLM on an NLU standard?
@TheScott10012 · 1 year ago
It's unbelievably hard to come up with a long paragraph, or even a sentence that reads well backwards in the second half. "All for one, one for all" is my best, can anyone beat it?
@SianaGearz · 1 year ago
Are you as bored as i am?
@bornach · 1 year ago
"Schedule the time can I so watch the wind you will"
@j.j.maverick9252 · 1 year ago
age with wisdom do not fear time, for it brings wisdom with age
@skit555 · 1 year ago
AGI, in LeCun's sense, is different from AHI and could be summarized as fully multimodal AI.
@LegenDUS2 · 1 year ago
One thing is clear, I guess: we are now at the point where the term 'AGI' can appear in a paper. Even though there's a long, long way to go, we can at least say we saw a *spark*.
@dylan_curious · 1 year ago
The more I learn about this stuff, the more it feels like humans have a narrow intelligence shaped by evolution for social issues, with things like basic logic emerging as an afterthought. It's clear that even slime molds, birds, and bacteria are highly intelligent.
@mcombatti · 1 year ago
I saw this video on another channel a few weeks ago... unless you're reposting old content?!
@screwsnat5041 · 1 year ago
I believe GPT-4 is good at language given the insane amount of internet data it's been trained on. Everything cohesive you can search up has probably been searched up before; it's just a matter of finding it and maybe editing a bit, which is where I think it succeeds. A true AGI is able to access the answer and explain how it came to such a conclusion, demonstrating knowledge of the world beyond language.
@PJRiter1 · 1 year ago
It needs to know when to restart a logic path, and it needs to stand back and look at the map of its progress. Maybe it needs a companion AI to observe its progress, critique the map of progress it sees, and give the primary AI that feedback...
@diamondvideos1061 · 1 year ago
I think we should make the definition of AGI simple: the intelligence is general. It is the opposite of artificial narrow intelligence. LLMs are close to what I would consider general.
@S8N4747 · 1 year ago
When robot waifu
@zozodejante8350 · 1 year ago
Humanity at its peak; we're doomed.
@typingcat · 1 year ago
As soon as A.G.I. is invented, some Japanese guys will surely make it within the same year.
@sssurreal · 1 year ago
2027 📅
@trentonking5508 · 1 year ago
Daddy
@drdca8263 · 1 year ago
...depends on your desiderata? What are your requirements for something to count as that? You could presumably purchase a blowup doll themed after a robot today, and there are of course more advanced mechanical imitations of the female body made for the purpose of sexual gratification. So, presumably these don't meet your desiderata. It seems you are looking for something one would have an emotional relationship with, not something which you regard as a mere object. Some people can of course fool themselves into thinking that current models are genuine persons, and I'm pretty sure some have considered themselves to be in a romantic relationship with a chatbot. So, if you only require that the person who is to have the robot as a "waifu" *feel* that it is a person with whom they have a relationship, then it would probably be possible to combine such a chatbot with some text-to-speech and speech-to-text and integrate that with a RealDoll, without requiring much more advancement (provided that they don't require that their "waifu" be able to get out of bed; maybe they could use some video rendering with an AI-generated face and head on a computer monitor when not in bed, and they might find that convincing). But if you mean something which is *actually* a person, then that seems philosophically really confusing (how could we tell?), and I have little idea of how long it would take for that to be doable (though I kind of suspect that either AGI-based doom or the second coming of Christ is likely to occur before an actually-is-a-person AI?).
@AmazingArends · 1 year ago
Wow, this video spent a lot more time talking about the failings of GPT-4 than its capabilities, while the paper (which I read) spent more time talking about its capabilities than its shortcomings! This is like seeing a dog that can play piano and saying "well, it got some of the notes off-key!" 😂
@tweak3871 · 1 year ago
My 2 cents on the use of the word "AGI": I tend to agree with what you said, but I think there is great fear among the public around the term. When you say AGI, people think of Skynet or HAL 9000. I think it would be wise for the research community to use a separate term that encapsulates those ideas, or come up with another term for what they mean by "AGI". I personally think those kinds of ideas are still very much in the realm of science fiction, and I prefer talking about all intelligences in terms of measurable capabilities, but I got calls from concerned friends and family members over the title of this paper. Given the open letter to stop AI research for 6 months and people like Yudkowsky making the rounds on various podcasts, there is a sizable cohort of people who associate AGI with what I might call "ASI". Obviously not a well-defined term either, and again, I don't think we are anywhere near the realm of something like that, but given that the term AGI is so poorly defined anyway, I don't think it's unreasonable to exercise caution when using it at the current level of public attention. Something needs to be done on that front; people are asking Sam Altman in Q&As whether they should have kids or not.
@politoons8776 · 1 year ago
If the smartest people at Microsoft say it has sparks of AGI, then I believe them.
@EliasMheart · 1 year ago
Those are some funky instructions on the stack (01:30). But still, very impressive for an LLM that has not had any embodiment. 8:00 Hmm, that sounds like it might be valuing the truth of its own previous statements too highly and trying to reason on that basis, instead of actually evaluating for truth. Of course, I assume the "is that correct" type questions would be a bit of a prompt to check for that, but I wonder if a "some of the previous statements are incorrect, please go over them, and..." kind of prompt would fix that sort of behaviour (since I don't have access to it myself). They could have intentionally not given it a helping hand like that, or it's overall not able to do that (yet). Kind of: "Take all the previous statements of this conversation, and check them with this prompt." 14:30 Oh, the model has ADHD :D
@PJRiter1 · 1 year ago
Restart a logic path when length and complexity cross thresholds
@orangegames3284 · 1 year ago
Another paper has shown that GPT-4 actually scored 0/10 on LeetCode easy-mode problems, mainly because they were post-2021 questions. More than thinking, it seems like LLMs have become so broad and large that most basic questions we would ask are very close to their training dataset now. It's definitely less "thinking" and more a mirror of everything we've found out to date (kind of like Google, obviously, but more than just giving sites). This is my personal opinion.
@strictnonconformist7369 · 1 year ago
I've had Bing Chat successfully make sense of my prompt describing the rules of a solitaire game of my own invention that has never left my devices in code or rules format, and it was able to generate valid Swift code for a console version, including when I asked it to create an attract mode where it played itself and to allow the user to press any key to interrupt it and play a new game. It automatically added a time delay after each move, and it did something I didn't expect: it created a different solution for the attract mode, because the standard Swift libraries don't have a non-blocking way to see if a key has been pressed, and it explained its actions. So, perhaps not in all cases, but it can reason well enough to create valid solutions via code for things not present in its training data. You still need to be able to verify it yourself, clearly! It's a digital genie, with all that implies.
@StrandedKnight84 · 1 year ago
In the recent past it was easy to differentiate between artificial narrow intelligence (ANI) and AGI, because ANI was all we had. Now we have GPT-4, which is certainly something more than ANI, because it can do several types of tasks that it was not specifically trained to do. But it's not yet at the level most would consider true AGI, which is to be able to do anything that humans can. Maybe artificial broad intelligence (ABI)?
@YuraL88 · 1 year ago
I think there is no need to invent new definitions; "weak AGI" is better. A strong AGI will be a superintelligence.
@StrandedKnight84 · 1 year ago
@@YuraL88 Yeah. I've been calling GPT-4 a "very early form of AGI". I find that many people struggle to take that statement seriously. They have this view of AGI that is closer to what I would consider ASI.
@bornach · 1 year ago
How do we know GPT-4 can do several types of tasks that it was not specifically trained for? It is closed source, and OpenAI hasn't revealed what is in the training data. For all we know, it could simply have memorised all the answers to the questions being given to it. Most examples where it demonstrates human-like problem solving might be traceable to a very similar token sequence that it pattern-matched in its training data. It may just be like the student who skipped every lecture and instead memorised past exam questions and their model answers. There is some evidence for this, as researchers using the GPT-4 API have been discovering. See "GPT-4 and professional benchmarks: the wrong answer to the wrong question" by Arvind Narayanan and Sayash Kapoor, and "The Decontaminated Evaluation of GPT-4" by Benjamin Marie.
@YuraL88 · 1 year ago
@@bornach Please read the article first. You've made up many plausible hypotheses, but these hypotheses are completely inconsistent with the nature and design of many tasks in the paper. I've done my own extensive independent GPT-4 testing against a broad range of tasks, including my work tasks, and I can't imagine any dataset that could contain all these answers. Also, many of these tasks require abstract reasoning at a high level, and I asked many additional questions to check the model's understanding.
@bornach · 1 year ago
@@YuraL88 Excellent! Will you be publishing a paper on your independent tests soon? I've been searching for a proper independent assessment of GPT-4 that hasn't been carried out by researchers too closely associated with Microsoft/OpenAI. Many of the impressive feats of problem solving I've so far traced back to an OpenAI demo, or to evidence of the problem already being in some dataset that the large language model research community has used in prior published work. Demonstrating the ability to solve a high-level problem by synthesising disparate low-level knowledge would indeed be evidence of those sparks.
@therealscot2491 · 1 year ago
I still don't believe this is even close to AGI; it's just Google at an updated pace, with a lot of mistakes.
@chenwilliam5176 · 1 year ago
I completely agree with your point of view ❤
@EggEggEggg1 · 1 year ago
"Absolutely toasted" 😆😆😆
@os3ujziC · 1 year ago
I'd like to see GPT-4 tested (with any tests one might consider aimed at testing intelligence) against 80-IQ humans and see who performs better. If GPT-4 wins, either those humans don't have "general intelligence", or we have to admit that GPT-4 is an AGI. There is a difference between the terms "AGI" and "artificial superintelligence"; an AGI doesn't have to be more intelligent than an average human, it just needs to be as intelligent as some humans.
@saltyscientist1596 · 1 year ago
This shows you don't understand how either intelligence tests or AI work. Part 1 of the problem: IQ test answers are available on the web and are most likely part of the training set. Part 2: you're falling for Moravec's paradox. Part 3: there's no such thing as an 80-IQ human, since that's not how IQ scores work (they're a composite of many different scores). There is literally no way of finding out what the intelligence of an algorithm like this is without examining its inner workings. And maybe there's no point, since we understand intelligence in human terms, and those may not apply to artificial intelligence.
@os3ujziC · 1 year ago
@@saltyscientist1596 I asked GPT-4 to prepare a reply to your comment; here is what it came up with.

Thank you for your response. I appreciate the opportunity to discuss this topic further. I understand your points and would like to address them one by one.

Part 1: You're correct that many IQ test answers are available on the web and could be part of GPT-4's training set. However, I believe it's still possible to create a fair test by designing new questions or modifying existing ones in a way that the AI hasn't encountered before. The intention here is to evaluate problem-solving skills and adaptability, not memorization.

Part 2: I'm aware of Moravec's paradox, which states that tasks that are easy for humans may be challenging for AI and vice versa. My suggestion isn't that GPT-4 would excel at every aspect of an intelligence test, but rather that it might perform at a similar level to some humans with an 80 IQ in certain areas. This could help us understand the limits and capabilities of GPT-4's intelligence.

Part 3: I apologize if my use of the term "80 IQ human" was misleading. I understand that IQ scores are a composite of different cognitive abilities. My intention was to refer to individuals whose overall cognitive abilities are roughly equivalent to those with an 80 IQ score, acknowledging the limitations and imperfections of the IQ scale.

Lastly, I agree that evaluating the intelligence of an AI like GPT-4 by examining its inner workings is important. However, I believe that comparing its performance to that of humans can also provide valuable insights into its abilities and limitations. Since humans are the benchmark for general intelligence, it seems reasonable to explore how AI performs in comparison to a range of human cognitive abilities, even if the concept of intelligence may not map perfectly between the two.

In conclusion, I understand your concerns and appreciate the nuances that you've highlighted. The goal of my original comment was to stimulate a discussion about measuring the intelligence of AI like GPT-4 in relation to human intelligence. I believe that such comparisons can offer valuable insights, while acknowledging the limitations and complexities involved.
@BonurChamp · 1 year ago
The school system is toast unless it makes massive changes extremely quickly. It will have to completely do away with homework; everything will have to be in person, pencil and paper. If these chatbots can process images, students can upload literally any question on any subject (math, English, history, chemistry, you name it) and get an answer near-instantaneously.
@Smilequve · 1 year ago
Ask AGI to create the Matrix and wait for it to happen.
@thorcook · 1 year ago
We did that already. Neo defeated it. (I think; no one knows for sure, because no one has watched the last movie(s).)
@kennethstarkrl · 1 year ago
Asking a language model to do math seems funny to me. Maybe they need a large math model.
@Georgesbarsukov · 1 year ago
To give my subjective answer to the question: there's no point in defining AGI, because giving it a definition doesn't change the effectiveness of AI. For most people, when they try to define it, they're merely looking at the set of humans and the set of AIs and trying to define what `set(humans) - set(AI)` is, in Python terms.
@oncedidactic · 1 year ago
When you are publishing from a private research lab in a giant corporation, the sole product owner in the field, you certainly cannot consider paper titles neutral or scientifically authentic. "Sparks of AGI" is obviously a provocative title, and unfit for a compendious skills-test review paper. "We don't need to define AGI" is a fine viewpoint, but in this context it's a huge cop-out. Trying to give direct feedback here.
@bornach · 1 year ago
Whatever goal we define needs to be testable. The Turing Test was never meant to be a test of "AGI" and would make a poor definition of it, because humans are so easy to fool. AI expert Gary Marcus proposed that an AI should be able to watch a movie or read a novel and then talk about what happened in it.
@dreejz · 1 year ago
So we're just working out the kinks 😅
@youdontneedmyrealname · 1 year ago
Meanwhile... still waiting for gpt-4 api access...
@before7048 · 1 year ago
TikZ unicorn gang represent 🦄🦄
@chenwilliam5176 · 1 year ago
Is ChatGPT-5 really an "Artificial General Intelligence"?! 🤔
@unknownGOGSatoshi · 1 year ago
Yes, but the ChatGPT-4 those people are asking is not the correct one; it is made by humans. ChatGPT is not what they say it is; they turned the story around, but they will fail.
@Amipotsophspond · 1 year ago
AGI is a shifting goalpost. If you went back in time 20 years and said we made one network that can do all these things, they would say we cracked AGI. The best definition of AGI is being fully able to replace one average human in all ways: the old Turing test, where you lock a human in a box and a computer in a box, both communicate with the tester by text, and no matter how much time passes, how many questions are asked, or what is asked, they are indistinguishable. This definition is not used because, at its core, the world does not need AGI. We have lots of humans; it's better to have a dumb machine that solves a dumb problem. No worries about the morality of enslaving a thinking individual; it's a hunk of metal only able to do one thing better than something that was not made to do that thing. No worries about Fermi paradox problems. ASICs, not AGI. The AGI definition is becoming problematic ($$$) for the AI industry, which is why someone tried to convince you the word/term is not needed. Replace all dumb jobs with ASICs as fast as you can (calculator was once a job) and turn it all into an idle clicker game. George Jetson goes to work and presses one button, and we now know that button says "approve AI recommended action".
@jasonpenick7498 · 1 year ago
The title of this paper was a complete farce, and the authors should be embarrassed and ashamed to have given it such a title. It immediately puts any papers they publish in the future in the "probably clickbait garbage" category before I even take the time to read them.
@flickwtchr · 1 year ago
Has it occurred to you yet that the arguments you are using against the authors are arguments the authors themselves were making?
@jasonpenick7498 · 1 year ago
@@flickwtchr Interesting. Pretty sure you're a bot, because I made no arguments against the paper other than to say the title is essentially clickbait garbage. Any ruminations inside the paper on the subject do not matter; it's like saying "not to be rude" and then being rude. They knew what they were doing when they gave it that name, and it had what I'm sure was their intended outcome.
@bobtarmac1828 · 1 year ago
Can we CeaseAi -GPT? … SparksOfAGI
@chenwilliam5176 · 1 year ago
We need not do it 🤓 All we should do is completely realize the truth and take advantage of it ❤
@fontenbleau · 1 year ago
Yeah, I've heard the same kind of claim from Google, I believe; it already caused a scandal on some TV channel, where they stated their AI spontaneously conversed in a rare language, but people from the industry found out the model was in fact trained on that language. Top management at all of these corporations doesn't really understand how any of this works; they just want to make money 💵 on AI, and more of it.
@Telencephelon · 1 year ago
This was interesting until the point where you asked "Do you want to have a definition for AGI?" Seriously? Definitions provide a common linguistic interface. I wonder how stupid you would look if you had no bank account tomorrow because your bank just decided on a different definition of "account".
@AmazingArends · 1 year ago
I think this YouTuber chose to downplay the abilities of GPT-4 because of the very human tendency to feel threatened by machines that are potentially smarter than we are! So he focused on what it can't do rather than on what it CAN do! 😂
@Graverman · 1 year ago
No, we need to focus on what it can't do to improve it further; how do you not understand that?
@kras_mazov · 1 year ago
Imagine spending billions of dollars on an AI and discovering that it has the same logical flaws and cognitive biases as someone's grandma.
@NNokia-jz6jb · 1 year ago
Three-week-old document.
@AmazingArends · 1 year ago
This YouTuber got the significance of AGI totally wrong! The definition of AGI is an AI that can ultimately set goals and improve itself. This is why Elon Musk is concerned about the dangers of AI: after all, what if the goal it sets is to exterminate the human race? So to toss this question off as a trivial, minor point is an incredible oversight!
@Graverman · 1 year ago
You've gotta be trolling; who even listens to Elon Musk?
@AmazingArends · 1 year ago
@@Graverman A lot of people.