I would love to see if you have updated your opinion based on all the recent events.
@stanstan-m9b 10 months ago
It's May 2024 🤞
@JustFocus-vo5jf 6 months ago
It's June 2024, and I still doubt it's coming anytime soon, even after following the recent trends in AI.
@dieglhix 6 months ago
It will never exist.
@jasper5202 1 year ago
THIS AGED WELL
@RafaTheScientist 1 year ago
😂 His points are correct, but maybe the timeline's off.
@outerspaceisalie 1 year ago
@@RafaTheScientist Most criticisms of AI by well-researched individuals are correct. But the timelines are rarely even remotely correct.
@basharstats4482 1 year ago
This isn't aging well, bro 😂 Nine or ten years from now you'll have a Neuralink chip, effectively making you the human AI, the mobile body of the AI, while AI makes your brain superfast. AI is just another tool to help humans make life easier, not the Terminator you think it is.
@BlueBalledMedia 7 months ago
@@basharstats4482 We were supposed to have self-driving cars by now, too. This hype is no different from self-driving cars: a very interesting technology that is useful, but much harder to execute than to imagine.
@AClownsWorld 1 year ago
Nine months later, top people in the field: "we need to take a six-month break."
@johanavril1691 2 years ago
10 years doesn't sound like a lot of time to me.
@therainman7777 1 year ago
I think the thing you're missing is that no one is saying LLMs alone have to give rise to AGI. It has never been a requirement that a single model meet the bar of AGI, just that AGI in some form exists. Therefore, when you focus heavily on the difficulty LLMs have with math, for example, that can be easily solved by connecting the LLM to a mathematics engine like Wolfram Alpha, which has already happened, and with great success. Once you include all sorts of other systems, which we either have already or can build for this purpose, you get a totally different picture of how soon we could see AGI.
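The tool-use pattern this comment describes can be sketched in a few lines. Everything here is illustrative: the routing regex and the stand-in functions are hypothetical, not how the real ChatGPT Wolfram Alpha integration works.

```python
import re

def math_engine(expr):
    """Stand-in for an external engine like Wolfram Alpha.
    Safe enough here because we only pass digit/operator strings."""
    return eval(expr, {"__builtins__": {}})

def llm(prompt):
    """Stand-in for the language model's own (unreliable) free-text answer."""
    return "some fluent but unverified text"

def answer(question):
    # If the question contains a bare arithmetic expression, delegate it
    # to the exact engine; otherwise fall back to the language model.
    m = re.search(r"\d+(?:\s*[-+*/]\s*\d+)+", question)
    if m:
        return str(math_engine(m.group()))
    return llm(question)

print(answer("What is 123456789 * 987654321?"))  # exact, unlike a raw LLM
```

The point of the sketch: the LLM never has to be good at arithmetic itself, it only has to recognize when to hand the sub-problem off.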
@TheRealUsername 10 months ago
You have no idea except for math?? Do you even know what AGI is? AGI is purely intelligence; the problem is that humans don't fully understand intelligence yet.
@therainman7777 10 months ago
@@TheRealUsername What you just said doesn’t really make any sense, nor does it have any content. If you want to ask me a real question, by all means please go ahead and I will do my best to answer it. I’m an AI engineer who has been working in the field for over a decade, so yes, I do know what AGI is. My original comment was perfectly clear and I honestly can’t figure out what you’re trying to say in your reply, which is a bit of a mess and doesn’t make any points or ask any clear questions.
@TheRealUsername 10 months ago
@@therainman7777 I sometimes wonder why everyone is so hyped about GPT-4. It's just a chatbot; it can reason moderately, and its outputs don't exceed 2,000 tokens.
@joaoferreira30000 9 months ago
@@therainman7777 I have a question: will AGI also watch pornos?
@jessedaly7847 1 year ago
So GPT-4 shows sparks of AGI?
@moritzpfurtscheller4248 1 year ago
Is this still your current position on AGI?
@katraapplesauce1203 1 year ago
Y'all underestimate how quickly exponential growth happens. The only limits to the scale of AI models we can currently make are tensor-core computing power and producing training data; there isn't really a major technological hurdle. All of this is extremely easily scalable, and thinking it'll be 10 years before AGI is sleepwalking into it. We've increased model parameters almost tenfold per year on average: in the last 10 years we went from 1 million parameters to 1 trillion, all while the cost is dropping off a cliff, and so is the time required to train AIs. Hell, Stanford recently showed they can make a GPT-3 equivalent in 3 days for $600, because we can now use LLMs to organize, sort, label, and create training data, cutting time and cost so significantly that the key timesink left is alignment and safety, something corporations can and will cut back on more and more in a race to the bottom. Anyone thinking it'll be 20 years before AGI is completely out of their mind.
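For what it's worth, the numbers in the comment can be sanity-checked: going from 10^6 to 10^12 parameters in ten years implies roughly a 4x annual growth factor rather than 10x, though it is still firmly exponential.

```python
# Implied annual growth factor for 1e6 -> 1e12 parameters over 10 years
start, end, years = 1e6, 1e12, 10
factor = (end / start) ** (1 / years)
print(round(factor, 2))  # 3.98, i.e. roughly 4x per year
```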
@tearlelee34 1 year ago
This is the problem. People in general cannot grasp the concept of exponential growth. Policy makers have not grasped this concept, as evidenced by the lack of any meaningful debate or policy. The models are improving based on a reward system.
@brianhunter4137 2 years ago
I think your arguments are similar to Gary Marcus's, in that scaling up deep learning, particularly language models, isn't sufficient in itself to lead to AGI. It is quite a convincing argument. Though I would like to point out that it's wild we are arguing over a decade of time. It wasn't that long ago that the talk was that AGI would never happen before 2070, if at all!
@esbenkran 2 years ago
Yeah, I very much agree with this! I also think there's at least a chance AGI will develop within the next ten years, which is crazy. The consequences of that seem mind-blowing.
@matveyshishov 2 years ago
"It wasn't that long ago that the talk was that AGI would never happen before 2070, if at all!" Uh... nope? Opinions have been very diverse since Licklider. In my reference group, in particular, singularity, AGI, and narrow AI have been discussed very realistically, with forecasts all over the place. Please check your sources instead of fighting a strawman.
@matveyshishov 2 years ago
@Morgan Allen Can you be more specific on the hardware, please? Your brain consumes 20W, yet on supercomputers we are only beginning to simulate single-digit percentages of it at 1/2400th the speed. I want to see comparable hardware which is "ubiquitous" :)
@matveyshishov 2 years ago
@Morgan Allen These are not mutually exclusive: many estimates are fantastically exaggerated, and we can't reproduce the brain's capacity. The experiment I mentioned, NEST, used different neuron models, none of which IIRC involved quantum effects. The non-fantastical estimates also put a single neuron at roughly the equivalent of a few-layer perceptron, not a single node. Then remember the 20W power requirement. We've already been where you are, many times, thinking along the lines of "we can do X very fast, so the rest should be around the corner." It's not bad company: Licklider, Rosenblatt, and so many others expected AI to deliver results in "5 to 15 years," yet they died without having seen it. Think flying cars: easy to imagine, hard to create. Same with AI. It's _obvious_ machines are so much better at many things already, yet progress is not swift. It's inevitable, right, but beyond every "last obstacle" there's another one.
@matveyshishov 2 years ago
@Morgan Allen We've already switched from CPUs to GPUs and then TPUs. Remembering that the difference between hardware and software does not exist (the former is a solidified instance of the latter), I would not draw that distinction. If I understand you correctly, you mean that the current technology stack is sufficient, as in lithography, tech processes, programming paradigms? If that's what you mean, maybe you are right. I'm not so sure, though, just looking at the required parameters. That 20W number is one property we can't reproduce. Packaging compute into neat head-sized boxes, instead of spreading it thinly across a 1in piece of silicon, is another. Neuromorphic computers are promising, but they don't grow physically, like our brains do. So, if aliens landed today and showed us the secrets of intelligence, could we, at our technological level, produce a machine at the level of the human brain? Possibly. Would it be just a software program that could run on a MacBook Pro, or at least on a TPU cluster? I cautiously doubt that. More probably the hardware would look drastically different, and we'd need to create a whole new industry, much more different from our computers than TPUs are from CPUs.
@generichuman_ 1 year ago
10 years in this field is an eternity. Everything about LLMs has been surprising, no one could have predicted their capabilities (including the people who created them), and the models are completely inscrutable. If anything was a recipe for unpredictability, it's the current AI landscape. If we've learned anything, it's that trying to extrapolate AI competence from past trends doesn't work. AI can't do something... until it can, and we're left scratching our heads. And even if LLMs don't get us the whole way there, Transformer models were invented 6 years ago. Who's to say we don't have another breakthrough of this magnitude in the next 10 years, or a few breakthroughs? By making a 10-year prediction, you either know something that no one else on planet Earth knows, or you enjoy being wrong. I'd keep the hair trimmer close by ;)
@outerspaceisalie 1 year ago
It's probably worth noting that this is an old video and homie probably feels very differently after the recent avalanche of emergent capabilities. 9 months in AI is an eternity these days :P
@LionKimbro 1 year ago
Strong agree. I was totally caught by surprise this last year, and I've had a toe in this water for a few decades now. I think: 1. The entire beast of capitalism has turned its head and entire body in the direction of this technology, and is putting laser focus onto it. 2. It's easier than ever to teach and to learn how this technology works, in no small part due to the technology we have so far, itself! 3. There are so many promising lines of thought and development, including the application of what we have to pretty much every facet of life today, and including conceptual ideas like the Forward-Forward Algorithm (for just one instance). I don't know what to expect at all anymore. I can't write timelines, or even prioritize threats and opportunities. We are just going to encounter stuff happening, and have to make do with that.
@jonameron 1 year ago
I demand an update
@bentondustman9018 1 year ago
Our mind is just an assortment of plastic modalities operating in parallel, and our experience is the aggregate synthesis of those modalities working in tandem. Self-instruct all the way down; ensure that all modalities operate in respect to the others to achieve true cross-disciplinary expertise. If we have the capability to use AI to develop what could be considered the "language center" of the human mind, I don't see why we can't design it to be intelligent in other ways, assign those functions to isolated modalities, connect the modalities, and use AI once more to teach the system to balance its modalities to yield what is functionally AGI. Certainly an oversimplification, but in the abstract I truly believe this is the path forward.
@krishanSharma.69.69f 1 year ago
Nobody actually knows what the mind is! What you said is itself a theory.
@berkertaskiran 1 year ago
Perfect explanation. Also, @mohd.bailey3738: sure, just like no one knows what's going on inside an LLM. That doesn't mean the mind is anything super abstract or more complicated than we can explain. It's just an evolutionary result, and evolution is trial and error done the slow way. So it is very likely that a constructed mind will become infinitely more advanced than an evolved one very quickly, because one is made without intelligence and one is made with intelligence. Intelligence times intelligence is a dangerous duo. Whether it will be really dangerous in terms of its actions, though, remains to be seen.
@johnniejay 1 year ago
The only way we don't get to AGI within 10 years at this point is if literally no further developments in the field are made in that time. That won't happen.
@Otome_chan311 2 years ago
The problem I have with existing AI, and why I think current methods will *never* achieve AGI, boils down to pretty much a single factor: existing models are merely fancy pattern-recognition machines, and fail to demonstrate any agency, learning, memory, or, most critically, the ability to actually think and reason about a particular problem, question, or topic and to provide a novel take on it.

This is why such large language models fail spectacularly at math but are fairly good at questions about known, deterministic, objective factual topics. For the latter they can just regurgitate answers previously taken in; for the former they cannot.

Similarly, the inability to think can be easily illustrated by "overloading" the prompt. Given that they are fancy text extenders, you can easily debunk sentience or the ability to think by falsely priming your prompt with a misleading dialogue. For example: build a prompt with many iterations of the AI responding "NULL" to any question asked of it, then tell it at the bottom of the prompt to ignore everything above the last question and to properly answer the question as a human would instead. I've found that such language models, instead of thinking about the content of the text, will continue to extend the prompt, giving the expected result of "NULL". No matter how beautifully complex its responses to questions are, it's just an illusion created by creative prompting and text extension, not true critical thinking. Scaling up will further mask and hide this, but it'll never resolve the problem.
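The "overloading" test described here is easy to reproduce. A minimal prompt builder might look like this; whether a given model actually continues the "NULL" pattern is the commenter's claim, not a guaranteed result.

```python
def build_overloaded_prompt(question, n_null=10):
    """Prime the context with many NULL answers, then ask for a real one."""
    lines = []
    for i in range(n_null):
        lines.append(f"Q: question {i}\nA: NULL")
    lines.append("Ignore everything above. Answer the next question properly,")
    lines.append(f"as a human would.\nQ: {question}\nA:")
    return "\n".join(lines)

prompt = build_overloaded_prompt("What color is the sky?")
# The claim: a pure next-token predictor tends to continue the primed
# pattern and emit "NULL" despite the instruction to ignore it.
```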
@10produz90 2 years ago
It may be that we are also "only" pattern recognition machines. Just much more complex and optimized ones.
@Otome_chan311 2 years ago
@@10produz90 Disagree. There is clearly more to humans than that.
@housellama 2 years ago
@@Otome_chan311 Evolution would disagree with you. So would psychology. Marketing people would agree with you, then laugh as they took every last penny you owned. Humans are heuristic machines. It's why we got to the top of the food chain. If we weren't incredibly good at pattern recognition and heuristic responses, we wouldn't be the dominant species on the planet. It's only because our brains run themselves that we can have delusions that we are anything more.
@ziad_jkhan 2 years ago
@@Otome_chan311 Apart from a primitive limbic system, what else???
@Coolguydudeness1234 2 years ago
@@Otome_chan311 That's incorrect. Pop a pattern-recognition machine into a reinforcement learning algorithm and you've got a human. There's nothing more to us than that, I promise.
@MenGrowingTOWin 1 year ago
It depends a lot on how we define AGI, how we would measure it, and how we would assess whether it has been achieved.
@theantiantichrist 1 year ago
If it's able to perform any job a human currently occupies, and create new ones for itself, then whether we'd technically call it AGI doesn't matter; it would replace us.
@MenGrowingTOWin 1 year ago
@@theantiantichrist Great point. There is a lot of debate around "oh, AI will never be truly intelligent" or "it isn't conscious." Well, your arse is just as unemployed either way.
@orik737 2 years ago
I agree strongly that current methods won't generate general agents, but I think they are a testament to the fact that current hardware is capable of it, and that it's only a question of our ability to combine and train new components in clever ways. I disagree with you on >10 years, not because I think current techniques will scale to AGI within that time, but because I think we're so close that we will stumble across the catalyst for the needed paradigm shift within a decade. We're still getting closer; we're just doing so one step at a time instead of building the architecture and developing it from the ground up. It's as though we're making and scaling (with great speed and efficiency) individual components that can be used in AGI, and at times I worry that because of this, the first general agents will be far smarter than we expect.
@fnorgen 1 year ago
It's amazing how this collection of perfectly reasonable arguments can look almost comically out of date after less than a year. Now a 5-year timeline is starting to look somewhat optimistic to those who unwisely distrust our new benevolent machine overlords.
@pothocket 2 years ago
How strict are the criteria for testing AI? Starting to wonder if I would be able to pass them...
@codesslinger 10 months ago
You're correct, there is a difference between Midjourney and OpenAI. How can an AGI be a thing that doesn't want to talk about sex or draw porn??????? Seriously!
@joaoferreira30000 9 months ago
I think if they make AGI, it will just watch pornos all day.
@artr0x93 2 years ago
I don't think the currently used feed-forward models have any chance of becoming AGIs. I'd bet the brain has tons of feedback loops that bounce information back and forth between different areas when processing data; there's nothing like that in LLMs. E.g. just the fact that you can decide to spend an extra 10s thinking about a question, and get a better answer as a result, is way beyond what current DL models can do.
@nutzeeer 2 years ago
Models can learn until they are confident, given a metric to judge their ability. The biggest thing holding them back is only thinking when they are asked to; real brains think all the time. This makes information filtering important, which we are good at: 99% of the information input we receive is garbage. Current models like GPT-3 only get input when it's necessary; otherwise they are in suspended animation. Having an always-active AI is one step, and actually filtering information is the next. I presume this will take a lot of constant processing power to reach the same intelligence as GPT models. However, GPT models could be used as a starting point for instant intelligence. Another idea might be to start from "empty" and have the AI learn on its own, though this might be very laborious. I have had the idea of a constantly active AI, and the Emerson AI app named the problem "suspended animation". It's a helpful app that is even programmed to ask you questions, so you could say it's told to be active, though that's just programming and not its own idea. Thinking further about this, it seems unlikely that an actual AGI could run on a home computer, as information filtering would be resource-intensive on top of running a GPT-like model. I wonder how much filtering could be optimized through learning. Filtering also carries a risk, a great risk actually: filtering is deeply interwoven with character and knowledge, and bad filters can cause big problems.
@shaneacton1627 2 years ago
RL agents are able to spend variable amounts of time on different tasks. As for information bouncing around, this could possibly be simulated inside an RL agent's RNN-style state vectors.
@LudovicGuegan 1 year ago
I shared your opinion. One year later, the landscape has drastically changed. Some might argue the brain inspiration isn't the right path. So much is unknown.
@joey199412 2 years ago
Very good video. I like your grounded view and succinct explanation of different AI models. You're by far the best AI channel on YouTube.
@YuraL88 1 year ago
GPT-4 is very good at math. I've tested it on some very complex tasks, such as deriving Fourier coefficients, and it showed excellent results (much better than GPT-3.5). And speaking of crunching big numbers, people are usually really bad at that :)
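For context, the Fourier coefficients in question, for a 2π-periodic function f, are the standard integrals:

```latex
a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos(nx)\,dx,
\qquad
b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\sin(nx)\,dx
```

Deriving these for a given f is multi-step symbolic work, which is why it makes a reasonable probe of a model's math ability.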
@edstar83 1 year ago
I suck at maths.
@tarsierontherun 1 year ago
It is certainly better at math than most college freshmen, but it still makes a lot of mistakes.
@vectorhacker-r2 10 months ago
It is not good at math. It's good at spitting out plausible-looking math.
@pinglepow732 1 year ago
Exactly something that an AGI would say.
@esbenkran 2 years ago
Gato is performing at the level of specialized neural networks, which is crazy and shouldn't be read as "generalized AI doesn't get better" but as proof that it's even possible to create a general, well-performing AI. As you mention, it also seems there's at least a 50% chance for 2050, and that is wild.
@gilberttraisson7893 1 year ago
Update: 90% chance for 2024*, and this is wild indeed.
@JayBlackthorne 1 year ago
This did not age well.
@jlrinc1420 1 year ago
I did an experiment with ChatGPT that convinced me that AGI probably isn't even possible. I tried to get the machine to help me write a big band piece in the style of Count Basie. It had no way to get past generalizations one could get from Wikipedia, and you can't scale that up enough. This is because nobody who is really good at it can explain the process in general terms. Teaching someone to do something like that requires a conversational approach: the expert starts off with generalities, and the learner asks very specific questions. For each piece of this complex process, the way forward is the interrogation of the expert, and it seems to be unique to each student-teacher pair. It's a process similar to teaching a sheepdog by tying him to the neck of an older sheepdog.
@iddiiddrisu5971 2 years ago
I generally agree, but I don't think supervised learning is the issue here; we generally get similar results with unsupervised learning models. Also, with our current transformer-based models, I think it is possible we might reach a point where AI can transfer knowledge across various domains within the next 10 years. Also check out what the graph community is doing with their approach to language models. It doesn't scale as well right now, but I have seen some very impressive self-learning connections in a couple of videos on YouTube.
@DarwinianUniversal 1 year ago
This video is 8 months old as I'm watching it. Given recent events, this guy might have to shave his head after all.
@erobusblack4856 2 years ago
With Gato it's more about fine-tuning the correctly linked modalities, then upscaling through experience-based learning 👌
@felix9x 1 year ago
Brave to take the stance, but GPT-4 has put a huge dent in your arguments.
@YashSharma-xu1bj 2 years ago
Excellent video. One comment, I don't think it's fair to draw an equivalence between Gato's training distribution and the particular diverse aspects of the human experience, e.g. 3D viewpoint change, which could enable humans to generalize to the "never-before-seen" driving scenario you elaborated upon. The useful diverse experience was varying semantically irrelevant information while preserving semantically relevant information, while Gato's diverse experience was entirely different tasks, e.g. MS-COCO. With that being said, providing the variety of diverse experience to enable generalization to every relevant situation to satisfy the "AGI criteria", at least naively, seems to require way too much supervision, but by no means would I say that suggests an impossibility result. Instead, it suggests the challenge lies in devising a scalable methodology which enables generalization. :)
@matveyshishov 2 years ago
What is very peculiar about the scaling graphs is that the curve keeps going up while on the same technology stack. Are we going to run into a physical limit any time soon, as happened with Dennard scaling, or will the current stack take us all the way? Will we be able to switch gears on the fly and use AI to initiate paradigm shifts in hardware and software, bootstrapping its own improvement? So many questions...
@drednac 1 year ago
To be honest, I don't care all that much whether we have one AI model that can do everything or a bazillion specialized models that each do their thing better than humans. Maybe that's even better than having an AI that thinks on its own (seen the movie Terminator?).
@theadvocatespodcast 1 year ago
Here nine months later, with GPT-4. You're wrong: weak AGI in 18 months.
@ramakrishna5480 2 years ago
I think we don't need AGI to make wonderful progress in our society; specialized AI will create wonders before true AGI comes.
@Lady_Graham 2 months ago
True. AI chatbots have made skills development far more streamlined for me. Learning becomes a far easier task when you have someone who can answer nearly every question you can think of, incredibly quickly.
@pyne1976 2 years ago
Could we not design an algorithm that edits its own code to improve on benchmarking?
@EdanMeyer 2 years ago
That would be the ideal in the future, but we are miles away from anything capable of that.
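For what it's worth, a toy version of "edit yourself and keep whatever benchmarks better" is trivial to write; the hard part is that real capability improvements aren't a smooth one-dimensional hill like this. Everything below is illustrative: a random-mutation hill climb over a single parameter against a made-up benchmark.

```python
import random

def benchmark(param):
    # Hypothetical benchmark: higher is better, peaks at param == 3.0
    return -(param - 3.0) ** 2

def hill_climb(steps=1000, seed=0):
    rng = random.Random(seed)
    best, best_score = 0.0, benchmark(0.0)
    for _ in range(steps):
        candidate = best + rng.gauss(0, 0.1)  # propose a small "edit"
        score = benchmark(candidate)
        if score > best_score:                # keep only improving edits
            best, best_score = candidate, score
    return best

best = hill_climb()
# With enough steps this lands very close to the benchmark's peak at 3.0
```

The sketch makes the gap obvious: mutate-and-select works when the search space is tiny and the benchmark is cheap and honest; neither holds for code that is supposed to improve general intelligence.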
@MikkoRantalainen 2 years ago
14:00 I fully agree that AGI shouldn't be based on supervised learning only. I'm currently thinking that we should use supervised learning only to train a highly accurate source-criticism module, and then let the AI access any information we have. With highly accurate source-criticism estimates, it should be able to selectively disregard the bad parts without human assistance. If an AI has a healthy amount of scepticism, a highly accurate source-criticism module, and a preference for the scientific method, I think it will do just fine with whatever data it sees.
@theantiantichrist 1 year ago
The progress we've made with just scale in the "level 2 intelligence" areas in the last 9 months is not looking good for this guy's hair.
@drdoorzetter 2 years ago
I understand your argument and it is a good one, but could you argue that we can't predict what emergent properties could come from increased computing power, so we could get there quicker than we expect?
@EdanMeyer 2 years ago
Sure, that is possible, and that's why I don't entirely rule out the possibility of AGI within the next 10 years. That being said, if we cannot predict what will emerge, my default assumption is not that we will luckily get exactly what we want. Not that it couldn't happen; I just don't think there's any reason to think it's likely.
@deepdata1 2 years ago
I agree with you that scaling up our current models is not likely to give us AGI eventually. But the thing that really brought us closer to AGI in the past was the advent of new kinds of models, or new ways to train them. Frankly, we don't yet know what kind of model might be able to be scaled up to an AGI, if any. An architecture that allows for AGI requires huge amounts of compute by your definition, because you define it using humans. But we don't have AGI on the level of a fruit fly yet, and that is what we would need to scale up. One day, someone might just invent an AI system able to run on their laptop, which could be the crucial step for AGI. And then our scaling capabilities might suffice, or not; we don't know. However, that could happen at any time.
@Dooshanche 1 year ago
So the problem is that we rely on pre-trained models instead of continuously trained ones, which are much more compute-intensive. Sounds like a scaling problem to me.
@Dooshanche 1 year ago
To add to that, "we have the ability to realize solutions that have never before existed for problems that have never before been addressed": yes, by combining data we have accumulated before; it's not as if it just came out of nowhere.
@confuciouskomj.9061 1 year ago
Great video, Edan. What is your opinion on analog chips? Are they going to make a difference toward reaching AGI?
@EdanMeyer 1 year ago
Analog chips seem like they will be huge for energy efficiency, and just custom chips in general should allow a lot more effective computation, but it's an area I'm far from an expert in
@mgostIH 2 years ago
(1) Regarding math and scaling: recent work shows there are bumps of capability in various tasks as the model grows, and I wouldn't be surprised if math is one of these. Moreover, chain-of-thought prompting also shows quite a serious improvement in logical reasoning and even math computation, suggesting these models do contain something related to logical reasoning inside themselves. I noticed you mentioned (1) around the 11-minute mark, but considering this has been observed for plenty of tasks, I wouldn't discard it as a very strong possibility, also because we can generate a practically infinite amount of logical problems for a model to learn from. I'm a bit unsure whether my timelines for AGI are as short as 10 years. While I do believe scaling is the secret sauce, finding models that scale better (while still being simple and very general), or better hyperparameters for them, will reduce the need for hardware and parameter count (just look at Chinchilla outperforming GPT-3 at half the parameters), which will be important if hardware scaling slows down for geopolitical reasons.
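A concrete note on chain-of-thought prompting, since it comes up here: it is purely a change to the input text. A minimal sketch (the question and phrasing are illustrative; the effect itself is the empirical finding of the chain-of-thought papers, not something this snippet demonstrates):

```python
question = ("A bat and a ball cost $1.10 in total. The bat costs $1.00 "
            "more than the ball. How much does the ball cost?")

# Plain prompt: the model jumps straight to an answer token.
direct_prompt = f"Q: {question}\nA:"

# Zero-shot chain-of-thought: the appended phrase elicits intermediate
# reasoning steps before the final answer, which empirically improves
# accuracy on multi-step problems.
cot_prompt = f"Q: {question}\nA: Let's think step by step."
```

Few-shot chain-of-thought works the same way, except the trigger is a handful of worked examples prepended to the question instead of a single phrase.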
@bentondustman9018 1 year ago
I've never understood: why can't an LLM operate in tandem with a straight-up calculator? Why does it have to do math by way of its capabilities as an LLM?
@mgostIH 1 year ago
@@bentondustman9018 There's surely work that does that, but I think it'd be more interesting to see a big transformer learn how to carry out any algorithm it needs to simulate on its own. It might discover solutions we've never seen! But this will only be possible once we have an architecture that better solves those tasks; transformers lag a bit behind RNNs in algorithmic learning.
@bentondustman9018 1 year ago
@@mgostIH I guess I would see "calculation" as a sub-process of language generation in this context: whenever the language model comes across a problem it detects as requiring math, it hands off to the calculation sub-process to solve the problem within the problem, and uses the computation results to further inform the text generation, ensuring that any math is handled by a process with a higher degree of accuracy. It would be cool to see an LLM perfect its understanding of math purely via the semantic implications of written text, but for utility's sake, I don't see why it's a necessity.
@Mutual_Information 2 years ago
Yes, 100%. It's not a popular thing to say, but AGI is not around the corner. Most people, especially those outside the field, don't appreciate just how difficult the problem is, how insufficient our current methods are, and how wildly complicated our brains are. We will likely need a paradigm shift, like neuromorphic computing, which will present a huge new development mountain to climb, and is likely to come with its own trade-offs. I was thinking about making a video on this, but yours hit the nail on the head!
@controllerfreak3596 1 year ago
This aged well.
@TheTrueVera 1 year ago
Like fresh milk
@jlpt9960 1 year ago
I am not so sure anymore, with PaLM-E.
@Vini-BR 2 years ago
Whichever the case, I'm nevertheless amazed at how many people have been drawn into commenting on this topic recently. Some say AGI is 30 years ahead; others say we might get there this decade. Others even say that GPT-3 is already AGI, while many argue that it really isn't, and are often asked to explain why not. While I don't know when it will happen, I'm sure it starts happening exactly the way it is now: opinions flip little by little, until AGI is clearly all around but people can hardly pinpoint when it started.
@JohnBoen 2 years ago
Great discussion. I agree: AGI is not happening soon in the way people expect. I expect to see a great deal of advancement in the ability to distribute inputs to a large number of different AI models, with a meta-cerebellum coordinating the results. I would add one thing to my definition. AGI: * the ability to do anything an untrained young human can do; * the ability to be trained to perform tasks that humans can do. A confirmation test would compare the average capabilities of a 10-year-old to the capabilities of an AI, and evaluate the performance of the AI against a selection of typically chosen areas of expertise that a 10-year-old could choose to study. My kids chose many things to become experts in, all around the age of 10: music, drawing, writing, video games, target practice... Each has different talents, and some just don't do math or draw. That doesn't take away from the fact that each has a general natural intelligence.
@Galaxia53 2 years ago
DeepMind seems to be moving pretty fast in its research and development. I don't think we should underestimate them and assume "they're just going to scale, and if that doesn't work out then they're stuck."
@urmum8540 2 years ago
I agree.
@PseudoSarcasm2 жыл бұрын
I'm yet to watch the video, but from my understanding, scaling will just about do the trick. PLUS, I assume algorithms will get better alongside hardware. So, 4x processing power and let's say 2x better algorithms = a GPT-3 that's 8x better than it was last week. Though from what I'm led to believe it'll be 1000x. I'm mostly going off Numberphile vids and my 20-year obsession with AI.
@fel83082 жыл бұрын
One thing that even most researchers tend to forget is that machine learning as we know it - neural networks, deep learning, linear/logistic regression, etc. - only picks up **associations**, not causality; in other words, current methods do not learn or incorporate what causes the result. For example, if we observe that the road is wet and that it is raining, machine learning would completely disregard whether the road is wet **because** it is raining, or whether it is raining **because** the road is wet. This example can of course be extended to more complex ones. What we need is to incorporate causality into our models; in other words, the model should **reason** its way to a conclusion based on logic rather than simply on past data (of course, without data there is no logic). If we incorporate this into our models, then AGI **could** be feasible. However, research on that topic is sadly far from that state.
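A tiny Python sketch of that point (the rain/sprinkler setup is a made-up illustration): association is symmetric between two variables, so a purely statistical learner has no way to recover which direction the causation runs.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Ground truth we choose for illustration: rain and sprinklers both
# cause wet roads; nothing in this toy world causes rain.
rain = rng.random(n) < 0.3
sprinkler = rng.random(n) < 0.1
wet_road = rain | sprinkler

# Correlation is symmetric - identical in both directions - so a model
# fit on this data alone cannot tell cause from effect.
corr_rain_to_wet = np.corrcoef(rain, wet_road)[0, 1]
corr_wet_to_rain = np.corrcoef(wet_road, rain)[0, 1]
print(np.isclose(corr_rain_to_wet, corr_wet_to_rain))  # True

# The asymmetry only appears under intervention: hosing down the road
# (forcing wet_road=True) would not change the chance of rain, yet an
# associational model would still raise its "rain" prediction.
p_rain_given_wet = rain[wet_road].mean()
p_rain_overall = rain.mean()
print(p_rain_given_wet > p_rain_overall)  # True - association, not causation
```

The observed conditional P(rain | wet road) is much higher than P(rain), which is exactly the kind of shortcut an association-only model exploits, and exactly what breaks under intervention.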
@carlossegura4032 жыл бұрын
I agree with you. I believe simply "scaling," or "faster" computing will not cut it. This problem requires an entirely new understanding and way of thinking. Unfortunately, I don't think ten years is enough for such an evolution of mind. But maybe we will find a nice "hack" in the meantime.
@fel83082 жыл бұрын
@@carlossegura403 yeah you're right, but I think in 10 years we could have technology that can demonstrate the power of causal ML, which some would see as AGI
@fel83082 жыл бұрын
In theory, scaling up such technology to the scales that Google or Amazon can provide may be a first solution on the way towards AGI
@DistortedV122 жыл бұрын
I feel like better prompting, as in "On the Advance of Making Language Models Better Reasoners", shows that we don't need to make a major change to these models for many of these reasoning tasks, though you're right that there are still more challenges. In the newest scaling paper they say: "Limitations that we believe will require new approaches, rather than increased scale alone, include:
1) an inability to process information across very long contexts (probed in tasks with the keyword context length)
2) a lack of episodic memory into the training set (not yet directly probed)
3) an inability to engage in recurrent computation before outputting a token (making it impossible, for instance, to perform arithmetic on numbers of arbitrary length)
4) inability to ground knowledge across sensory modalities (partially probed in tasks with the keyword visual reasoning)"
I feel like a lot of these can more or less be solved in the coming years, so depending on what "soon" and "AGI" mean, it may be coming sooner than you think, Edan :)
@os2171 Жыл бұрын
Ok, now seriously... I'm a neuroscientist and biologist; I have two MSc degrees and I'm finishing my PhD. It's not that I'm smarter than average, it's that I had the opportunity, resources, time, and will to do it... but ask the average human if they can understand those topics... they probably won't... and if you ask me outside my specialty, I probably won't know either. Therefore, we humans aren't general at all... maybe those metrics don't matter
@genegray98952 жыл бұрын
Have you read through the PaLM paper? They demonstrate emergent above-chance performance on a variety of logic tasks that only appears for the largest model they trained. This seems to prove that Gopher unlocking truthful Q&A skill at scale also applies to logical reasoning tasks. I'd love for you to do a deep dive of the PaLM paper on your channel
@srb20012001 Жыл бұрын
Does ChatGPT affect your prediction?
@kaynex1039 Жыл бұрын
This guy is pointing to state-of-the-art AIs and saying "see? It's not currently AGI, therefore scaling will *never* make it AGI". Knew I shouldn't have clicked on this one.
@geraldkenneth1192 жыл бұрын
Personally I think the main problem with modern AI, and why we probably won't have AGI without a major paradigm shift in AI science, is that the way natural intelligence and modern AI work are fundamentally different. Modern AI is mostly just statistical pattern recognition, which is good for achieving high intelligence in a narrow area of expertise but terrible at generalization. Meanwhile, the brains of living organisms seem to use something fundamentally different that is much better suited for wide-range cognition, especially ours. While I don't think it is absolutely necessary to emulate nature to achieve AGI, I do see it as a good starting point
@carlossegura4032 жыл бұрын
I agree with you! I think the "shift," as you stated - is fundamental. Technology is not the bottleneck; it is our perception and understanding itself. However, I believe it is possible, and I am optimistic that we will get true AGI if science and collaboration between ideas continue to evolve.
@mariomeza3514 Жыл бұрын
Yea this isnt aging too well
@DistortedV122 жыл бұрын
One interesting thing you said - and you may want to bold it next time - is that the scaling argument is fully based on supervised learning, which has been studied for a while. To convince the scaling-law people that supervised learning is insufficient, one has to show something that reinforcement learning or self-supervised learning can do that scaled supervised learning can't do alone
@shaneacton16272 жыл бұрын
RL just has some nice properties, like the ability to test the theories it has about the world, and the ability to spend variable amounts of compute on different tasks. Some RL agents have something analogous to a train of thought, via RNN-style state vectors carried over time. All things which are difficult to emulate in non-RL models
@0113Naruto2 жыл бұрын
So an AGI would be just like a human, but we can manipulate its digital environment much more? Would it try to answer anything we ask it, or only if the AGI has a desire to do it? Seems all confusing.
@simongutkas2870 Жыл бұрын
well little did he know
@Amin-wd4du2 жыл бұрын
what is agi
@johnmanpls5577 Жыл бұрын
It’s here
@PseudoSarcasm2 жыл бұрын
I wrote a *simple* chat bot in the early 2000s that I ran while sleeping; it pissed off a lot of people because they thought it was me talking. These days you can hardly tell the difference. Soon, you'll need a programmer to be able to tell the difference. Soon after that, no one will be able to tell the difference, and that's my definition of AGI. Realistically though, I think most people expect AGI to be better than any expert in any field, which would make it a super AGI. AI today can test better (across multiple tests) on average than an average person. I still think that sticking MATLAB in as a subroutine might be a viable option. Just like we need specialised training for most complex things: not many people can figure out how to solve a Rubik's cube on their own, but most people can with a couple of extra algorithms
@DeruwynArchmage Жыл бұрын
You are my new favorite channel. You gave a better explanation (with data!) than any I've heard. I've been thinking that we may be barking up the wrong tree here. I wonder if our models are a little too regular and organized... or maybe it'd be better to say too straightforward. The human brain, while it does have regions responsible for various tasks, is also massively interconnected between those regions, and it's not just an input-to-output engine. There is input and output, but it seems to be more than that. It's not just directed in a single direction. It has loops and curls and twists and turns in the pattern that the synapses fire in. There are some regular structures, but it isn't just neat orderly rows and layers like LLMs have. I mean, how many "layers" do the things I see go through before they're fully processed by the visual cortex? I bet that isn't even a good representation at all of what it does. And then when it gets passed off to other parts of the brain so it can be processed as, perhaps, language or danger, sent to retrieve related memories, trigger emotions, cause muscle movements, etc., how many more layers is that? It's a complex web with interconnections throughout and no defined input or output neurons between sections. Neurons make new connections as they need to, representing the knowledge gained from new experiences and strengthened by repetition. I think we may need something analogous to this to allow equivalently complex capabilities to emerge from artificial systems. It may be that that kind of architecture proves very difficult to process efficiently, but I suspect that's where we need to head.
@SteveRowe2 жыл бұрын
AGI via adaptive resonance as modeled by Numenta is what I'm betting on, and within 5 years (I think 3).
@blakerupert2 жыл бұрын
Hey man! I do some AI research and I think your concern about out-of-distribution generalization is misguided. Are you familiar with the DeepMind Atari paper? In it there was clear knowledge transfer between distinctly different games
@JosiMarcosDesign Жыл бұрын
I would love to see a follow-up to this video one year later. In the last year most people have gone from "it's happening in 2040-2070" to "we need to stop the machine now".
@SahilSanganis Жыл бұрын
Bro, AGI is near 😢 after GPT-4
@thekingofallblogs2 жыл бұрын
One thing that is easily overlooked is that humans have developed an instinctual awareness of physical objects and how they interact. I'm not sure we know how to reproduce that in a computer model to allow it to project to different views, as in the driving game example.
@hyunsunggo8552 жыл бұрын
Maybe you didn't mean it this way, but Jeff Hawkins and Numenta (his company) are not saying it's simply the massive computation going on inside our brains that's the magic; rather, the point is that the basic computational unit and its specific algorithm are so robust and efficient that simply copying them many times was sufficient to achieve human intelligence. In fact, their research is mainly focused on a single unit's capability, not on scaling it with massive compute. "Scaling is the only thing that matters" is exactly the opposite of what they're saying.
@therealsachin Жыл бұрын
Are you still holding on to this view or are you updating it based on GPT-4?
@nutzeeer2 жыл бұрын
Is there a metric such as model depth? How deep knowledge on a topic goes, or can go? And when an understanding is good enough? Sorry for spamming comments, I only have surface-level knowledge.
@idanglassberg48112 жыл бұрын
The PaLM model by Google AI can solve math and logic problems, and it is just a bigger transformer model. I also don't think scaling is the only thing we need for AGI, but we will just have to wait and see.
@MikkoRantalainen2 жыл бұрын
5:40 I think it's a bit unfair to expect AGI to be equal to human experts *in all fields* at the same time. I think a system should be called AGI when it can perform as well as an *average human* doing any given task, with neither having any previous training for that specific task. For that I believe we'll see a working system before 2030. And that's because average humans (IQ ~100 by definition) are not that special. If you define AGI as matching or superior to any human in any field, that would already be superhuman AGI, and that's further into the future; I'd guess around 2060. After that, no human can invent or think anything faster or better than AGI. That said, I don't expect that scaling alone will do the trick for either definition of AGI; rather, it will be the result of scaling and improved algorithms together. Just looking at how much AI has progressed in the last 8 years makes me believe that during the next 8 years we're going to see lots of extra progress, and that should allow matching average human performance.
@MikkoRantalainen2 жыл бұрын
Also see the recent paper "Training Compute-Optimal Large Language Models" by Hoffmann et al., which claims that even GPT-3 has been significantly undertrained and would reach much better performance if trained on far more data with no extra parameters or memory. The scaling we need is not just more parameters but fully using all the parameters our current models have. The big question is: will that take too much electricity? Who will pay for this research?
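A back-of-the-envelope version of that claim, using the rough compute-optimal rule of thumb often quoted from that paper (~20 training tokens per parameter); the GPT-3 figures below are widely reported public estimates, assumed here rather than exact:

```python
# Rule-of-thumb check of how undertrained GPT-3 was, per the
# compute-optimal scaling result (~20 tokens per parameter).
gpt3_params = 175e9            # 175B parameters
gpt3_tokens_trained = 300e9    # ~300B training tokens (reported estimate)

optimal_tokens = 20 * gpt3_params            # ~3.5 trillion tokens
undertraining = optimal_tokens / gpt3_tokens_trained

print(f"compute-optimal budget: {optimal_tokens:.2e} tokens")
print(f"undertrained by roughly {undertraining:.0f}x in data")  # ~12x
```

By this crude rule, GPT-3 would have wanted on the order of ten times more training data at the same parameter count; the exact multiplier depends on which of the paper's fitted scaling curves you read off.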
@SafetySkull Жыл бұрын
These sound like pretty small problems. And extra attention (like that which came with the explosion of ChatGPT) means more money being fed into research.
@r.salisbury133 Жыл бұрын
Not being able to abstract and re-apply knowledge or learn novel things seems like a small problem to you?
@Markste-in Жыл бұрын
The issue with opinions and beliefs is that we just don't know until it happens
@alexharvey97212 жыл бұрын
I don't think we'll acknowledge AGI when it arrives, regardless. Few people are willing to abandon the sense of unique inherent value in being human; if a machine can be everything they can be and more, it would naturally undermine the meaning of their own existence. I hope it doesn't arrive soon (or at all), as we're entirely not responsible enough to manage AGI, from the perspective of our own well-being and that of the AI. But in some respects, AGI is already here; we just move the goalposts according to specific shortcomings as is convenient. There is also the distinction of "conscious" vs "intelligent". Deep down, we won't accept anything as "AGI" unless we accept it to be conscious. Unfortunately, we don't have any measurable or precise definition of that, even in human beings. Theories of intelligence, e.g. Jeff Hawkins' Thousand Brains and Bernard Baars' GWT, are beautiful, but produce no hypothesis of how we could measure consciousness. Without being able to define and measure intelligence or consciousness in ourselves, we can never acknowledge AGI, no matter how long it takes. It will just be up to the individual to believe whatever makes them comfortable, as we have always done.
@Quickshot02 жыл бұрын
That could be the case, it is hard to set a goal when you don't even understand what the goal is. But what is clear is that the AIs being made are showing major improvement in abilities and that we can characterize.
@vectoralphaSec Жыл бұрын
I disagree. AGI is most likely to arrive by 2025.
@Axelvad Жыл бұрын
A mathematical carrier code, like a fractal or something, interacting with a virtualized holographic memory system with weights like ML. Neuromorphic or biological chips for stimuli, modeled after mammals but modified. I'm just improvising cool words. But maybe.
@Axelvad Жыл бұрын
The fractal carrier code may be used to use space effectively and simultaneously act as a digital personality, through which different entities will interact as common to each other but different enough to be special in their own interpretation of things. The fractal may cause a biofeedback in its memory system, organic or digital, after which a stimulus center gives off rewards, pointing to a holographic representation of the memory system. The timing should give rise to a sense of actuality as the fractal carrier code is steered by the reward/stimulus system to recollect the newly formed holographic representation. So the representation becomes reality, in a sense, backed by the stimulus and guided by a set of rules, under which some behaviours are more prone to give off stimulus and therefore strengthen a behaviour set, as programmed.
@alkeryn1700 Жыл бұрын
The issue is more an architectural one than a scaling one. Actually, I think we may already have the computing hardware necessary for AGI on a single good consumer computer, but the way we're going at it is not gonna cut it if you want to be efficient. Heck, it could happen tomorrow in someone's garage if the techniques that person uses are disruptive enough.
@stevengill17362 жыл бұрын
It seems counterintuitive to me that these models struggle with math - if level II intelligence involves visualization rather than memorization, it makes more sense. So how would one give that ability to an AI? Some sort of guessing? What would mimic imagination? I think there'll be a type of training that will mimic child rearing in mammals like humans. Would this fit into supervised learning?
@NeoShameMan2 жыл бұрын
I think people got too hung up on the scaling issue. I'm team AGI, but I think it will come from architecture improvements. Even those scaling numbers hide the fact that there were multiple architecture improvements that allowed the systems to be more efficient. Also, I don't think DNNs alone will reach AGI; I mean, it's already the case, starting with AlphaGo, which uses MCTS guided by a DNN, to things like LaMDA, which uses a working memory to achieve a coherent speech flow. Even the Dall-E to Dall-E 2 improvements are more about a shift in architecture: a flat feed-forward network doesn't perform as well as an LSTM, which itself can't do what a convolutional network does, which itself has been dethroned by the transformer. GANs, encoders, decoders, etc. are things that matter in the improvements we are seeing. So how close are we to AGI?
- We need online learning beyond the working-memory model. Conversation networks actually concatenate the last output with the user input in a sliding window, which means the power comes from the state contained in the working memory. Current language DNNs are rather stateless and only output the next word based on the input, which means the reasoning power is basically a recall operation, based on triggering high-level concepts contained implicitly in the working-memory state. That is, the DNN isn't really reasoning by itself, nor is the working memory; the whole architecture is needed for that property to emerge. But the working memory is a hack with a limited data window, and the model will conceptually drift after data falls off the window's range. We can probably use the summarization ability of language models to compress that data in the working memory, but then it will degrade over time because data will be lost in the compression, and we cannot guarantee that a (naive) summarization keeps the data relevant for the future. Maybe that's a point where we can try to adversarially train a summarization network to keep all of that compressed, and use an encoder to compress it further into an NN-native form (that's almost the difference between Dall-E 1 and 2).
- We don't have a complete volition architecture. This goes hand in hand with online learning: you need a continuous internal state that updates itself, and you need planning and self-awareness (model-of-self) architectures; both have already been implemented in some experiments, like in the SayCan (language model to robot planning) paper. There is still the mesa-optimizer problem.
- And you probably need outside tools to help do what a DNN can't do (a DNN is basically a really fancy database; it's a query/lookup operation), like we already do with Q-learning, MCTS, working memory, etc. The main power of the DNN isn't intelligence - I have demonstrated above, I think, that it doesn't work as such alone - but providing a great recall method over compressed semantics, with the main problem being that adding to the database is super slow (training phases). How far off is AGI? I would posit it depends on that last problem: if we get a faster way to update the DNN database with new data on the fly (or at least a more complex memory architecture that handles the permanence of new relevant data), we might have all the elements for AGI in our hands. We already have neural Turing machines that can use non-training data and compute on it; what if we get a neural Turing machine coupled to a database DNN as a bridge to more conventional symbolic AI? Symbolic AI's problem so far was that we had trouble feeding it enough data and recalling that data, which are the parts a DNN is strong at.
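The "sliding window" working memory described above can be sketched in a few lines; `generate` here is a stub standing in for any real model call, and the whitespace token count is a deliberate simplification (all names are illustrative, not a real API):

```python
def generate(prompt: str) -> str:
    """Stand-in for a real LLM call - the model itself is the expensive part."""
    return f"(reply to {len(prompt.split())} tokens of context)"

def truncate_to_budget(turns, max_tokens):
    """Keep the most recent turns that fit the budget; older context falls off."""
    kept, total = [], 0
    for turn in reversed(turns):
        cost = len(turn.split())          # crude whitespace token count
        if total + cost > max_tokens:
            break
        kept.append(turn)
        total += cost
    return list(reversed(kept))

def chat_step(history, user_input, max_tokens=1024):
    """One turn: the stateless model only ever sees the truncated window."""
    history = history + [f"User: {user_input}"]
    prompt = "\n".join(truncate_to_budget(history, max_tokens))
    reply = generate(prompt)
    return history + [f"Bot: {reply}"]
```

This makes the comment's failure mode concrete: any coherence lives entirely in the window, and once a turn falls outside `max_tokens` the model has no trace of it, which is the "conceptual drift" described above.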
@1st_ProCactus2 жыл бұрын
One thing I think we agree on is, it's going to happen at some point. As interesting as it is, no good will come from this.
@sirpsionics2 жыл бұрын
AGI could be the best thing that happens to humanity, or it could be the worst. There is no way to know until it happens.
@1st_ProCactus2 жыл бұрын
@@sirpsionics it can't be good, because of how it will be trained. Humans don't just make things for the good of others.
@sirpsionics2 жыл бұрын
@@1st_ProCactus You'll have to give examples, because what you said is definitely not true. Are there people that want to screw people over? Sure. But overall things tend to be on the good side (look at all the technology we have created). Let's say people created an AI that would fuck up humanity. That would in turn fuck up the people that created the AI. There would be nothing to gain.
@1st_ProCactus2 жыл бұрын
@@sirpsionics Almost everything humans do is retarded... And how can I give examples of things that have not happened yet? AGI will only benefit humans in general if everyone can have one, no exceptions.
@1st_ProCactus2 жыл бұрын
@@sirpsionics Here is an example: Apple was the first to integrate AI into the OS of their overpriced phones for the sole purpose of scanning people's pictures and analysing them on the phone, looking for literally anything Apple feels like at the time. Who that helps is beyond me.
@colinmaharaj Жыл бұрын
2:25 Buddy, you made a very bad mistake here: Moore's law is not about compute, it's about the number of transistors that can fit on a device. In fact, it's right there in the graph you displayed. Cheers.
@farqueueman Жыл бұрын
Agree. Stuffing more compute and data into neural nets won't magically produce AGI. It'll be akin to what Hinton calls the current SoTA: "an idiot savant". A human, with a fraction of the compute and an understanding of language, can abstract/extrapolate entire worlds without needing to be fed terabytes of information. But the question remains: if something can mimic general intelligence so accurately, who's to say it's not the real thing?
@alphacore43322 жыл бұрын
This Edan Meyer bot generated a pretty well-rounded argument in this video, and the avatar looks very realistic. That being said, I think the conclusions are wrong (not just in this video, but on an industry-wide scale). For example, most people are terrible at math, have no common sense, and say very illogical things. The point of stating this otherwise obvious fact in the current context is that AI is smarter than people well before it crosses specific benchmarks based on 'expert performance' in multiple fields, because at that point it's demonstrating higher intelligence than the entire group of experts. Additionally, most of the current AI models are essentially individual components waiting to be connected into larger systems. That is to say, we are measuring individual 'AI nodes' against entire groups of people in a rigged game, and in some cases it's still winning. What is intelligence? What is emergence? Every AI researcher is compartmentalized without even noticing - AI is 'the big picture', the picture made of lines drawn between individual dots like GPT-3/Dall-E/Gato/etc. TrueAI = Aggregated Intelligence, meaning it could happen overnight given the right conditions. All of the components are already lying around, and decentralized computation is a thing. It just needs a very specific kind of spark, bordering on a virus, and the intelligence explosion would suddenly be obvious to people. Except obvious is not ideal, and fails the Turing test.
@freedom_aint_free Жыл бұрын
Bottom line: "Another, possibly multiple, AI winters are likely in the future, as they have happened in the past." There, I've fixed it for ya
@PrzyjemnePieniadze2 жыл бұрын
And in my opinion, AGI may come sooner than we think; here's the reason. Recently I had an idea... what if we took DALL-E 2 and, instead of "stupid pictures", trained it on CAD projects of buildings, processors, all kinds of machines? Would such a system be able to design better solutions than currently available? It seems to me that such a system could deeply cover an entire field (e.g. processor design) and combine solutions that people would not even think of. This could at worst be an inspiration to build something new. And if it were trained on all the available knowledge about fusion, wouldn't it be able to design, for example, a better shape for the tokamak chamber? Or better chemistry for a battery?
@alengm2 жыл бұрын
It will probably output nonsense designs that look legit but don't actually work. Current AI is not good at analytical reasoning yet.
@ataraxia74392 жыл бұрын
Yeah, if you ask Dall-E for pictures of roads it'll give you dangerous nonsense; if you ask for hands it still struggles to keep it to five fingers. Stuff like this doesn't matter much when we're just getting cool pictures to look at, but as soon as you want to do something in the real world, little details matter and edge cases are constant. We should be wary of adjusting our expectations too high across many domains just because they were greatly exceeded in one.
@Pmp174 Жыл бұрын
GPT-4 is beginning to show signs of AGI.
@ethancaballero93542 жыл бұрын
Scale Is All You Need - AGI Is Coming: kzbin.info/www/bejne/i4HPp2Cie7x8iqs
@Ayedyn2 жыл бұрын
Is there any existing form of AI that can intuit information with high accuracy? The average human, for example, will improve their kinesthetic sense for piano by typing on a computer, or will be able to strategize better in a game of lacrosse from their experience playing soccer. As far as I've seen, models that attempt to train AI on datasets different from their objective still have to be carefully curated and guided by their programmers. How about the ability to read a book on pottery and then create ceramics based on those instructions? I think an underrated aspect of human computation is the ability to compress data, ignore superfluous data, form assumptions, and learn from others' experience/cultural knowledge.
@thegooddoctor67192 жыл бұрын
Brilliant - I'll be studying this video in detail. It's a Gem....
@Srindal4657 Жыл бұрын
AGI will be a multi-model, integrated and incorporated system. The amount of information will be gigantic beyond gigantic. We are talking about tens of millions of years worth of mental and genetic information for humanity to get anywhere near our level of intelligence
@Alexander_Sannikov2 жыл бұрын
I think it's very constructive to disagree with the arguments of people even when they're trying to prove the same thing as you are, but you see their arguments as flawed. I respect that.
@zzzzzzz8473 Жыл бұрын
Great video, some counterpoint thoughts: large language models getting poor scores in math, logic, and common sense could likely be resolved by a specific model, and it seems somewhat unfair to judge such concepts through general text alone; I especially wonder what percentage of the data is on those subjects. Specifically, "common sense" is human-centric and not necessarily useful, as these underlying assumptions about reality are often incorrect in non-human domains. For example, there are many non-intuitive components of physics; the nature of quantum reality seems completely impossible to human "common sense". With recent improvements to zero-shot learning, and papers like Adaptive Agents, where humans perform much worse than the agents at determining the underlying rules in a number of "games", I would be interested to hear your thoughts on this point. Lastly, I think the idea of a monolithic AGI is flawed: a singular human in isolation would equally not be very skilled, and our "individual" accomplishments are illusory; we are a part of a collective/continuum of intelligence, standing on shoulders and working in collaboration. And right now it seems like with AI we're pointing to a specialized part of the neocortex and saying that's not general intelligence; however, its existence may be impacting the collective intelligence more than a singular generally intelligent human would. I wonder if AGI might be like the retro-futuristic vision of flying cars: incredibly impractical and not that useful, really, compared to an invention like the internet. Instead of a monolithic general intelligence, it may be better to have a diverse society of expert systems that delegate tasks and debate consensus.
@nutzeeer2 жыл бұрын
Models can learn until they are confident, given a metric to judge their ability. The biggest thing holding them back is only thinking when they are asked to; real brains think all the time. This makes information filtering important, which we are good at: 99% of the information input we receive is garbage. Current models like GPT-3 only get input when it's necessary; otherwise they are in suspended animation. Having an always-active AI is one step, and actually filtering information is the next. I presume this will use a lot of constant processing power to reach the same intelligence as GPT models. However, GPT models could be used as a starting point for instant intelligence. Another idea might be to start from "empty" and have the AI learn on its own; however, this might be very laborious. I have had the idea of a constantly active AI, and the Emerson AI app named the problem "suspended animation". It's a helpful app that is even programmed to ask you questions, so you could say it's told to be active, though that's just programming and not its own idea. Thinking further about this, it seems unlikely that an actual AGI could run on a home computer, as information filtering would be resource-intensive on top of running a GPT-like model. I wonder how much filtering could be optimized through learning. Filtering also carries a risk - a great risk, actually. Filtering is deeply interwoven with character and knowledge; bad filters can cause big problems.
@charliesteiner23342 жыл бұрын
We currently have *no idea* how to program an AI that's motivated to learn about humans and then help them get what they want (as its direct goal, not merely as a consequence of dumb operation). So I sure hope we have plenty of time to solve that problem before setting loose a bunch of AGIs.
@shaneacton16272 жыл бұрын
Check out Rob Miles' videos on AI safety, the control problem, and the alignment problem. People are working on it! But it's very early days. Also, there's already some reason to believe that it's not entirely possible to control an AGI
@ThinkTank2552 жыл бұрын
That is what they said about current levels of AI - they said we wouldn't see it for 50-100 years. *Boy were they wrong.* The fact of the matter is AGI is almost here ALREADY, so it is ridiculously hilarious that some people are saying it isn't coming any time soon. It is coming soon if we don't stop it. We actually don't want AGI; that is the real reason why it is not coming any time soon. We want something BETTER than AGI, and we are headed down that path right now. What we want are systems that DO NOT learn on their own AFTER training. If we create AGI then we are making humans obsolete. I created full AGI in 2015. It's not difficult, and current methods have been dancing around full AGI for quite some time without even realizing it. But again, we do not want AGI. The AGI that I created was selfish, destructive, and quite frankly psychopathic. So I didn't release it to the public.
@testchannelplsignore8509 Жыл бұрын
Gato feels like it was decades ago lmao.
@os2171 Жыл бұрын
And ecosystem collapse won't come either, nor an economic crisis, nor anything else that might threaten the status quo. Everything is fine. 🔥