GPT-3 vs Human Brain

219,113 views

Lex Fridman

3 years ago

GPT-3 has 175 billion parameters/synapses. Human brain has 100 trillion synapses. How much will it cost to train a language model the size of the human brain?
REFERENCES:
[1] GPT-3 paper: Language Models are Few-Shot Learners
arxiv.org/abs/2005.14165
[2] OpenAI's GPT-3 Language Model: A Technical Overview
lambdalabs.com/blog/demystify...
[3] Measuring the Algorithmic Efficiency of Neural Networks
arxiv.org/abs/2005.04305

Comments: 668
@lexfridman 3 years ago
GPT-3 has 175 billion parameters/synapses. Human brain has 100 trillion synapses. How much will it cost to train a language model the size of the human brain?
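The arithmetic behind the question can be sketched in a few lines. The ~$4.6M GPT-3 training cost is Lambda Labs' estimate (reference [2] above); linear scaling with parameter count and an algorithmic-efficiency doubling every ~16 months (reference [3]) are the video's assumptions, not measured facts:

```python
# Back-of-envelope sketch of the video's estimate. Assumptions:
#  - GPT-3 (175B params) cost roughly $4.6M to train (Lambda Labs estimate, ref [2])
#  - training cost scales linearly with parameter count
#  - algorithmic efficiency doubles every ~16 months (ref [3])

GPT3_PARAMS = 175e9          # parameters in GPT-3
BRAIN_SYNAPSES = 100e12      # synapses in the human brain
GPT3_COST_USD = 4.6e6        # estimated GPT-3 training cost

# Naive linear scaling: a brain-sized model trained in 2020
scale = BRAIN_SYNAPSES / GPT3_PARAMS            # ~571x more parameters
cost_2020 = GPT3_COST_USD * scale               # ~$2.6B

def cost_in(year, base_year=2020, doubling_months=16):
    """Project the cost forward as efficiency doubles every 16 months."""
    months = (year - base_year) * 12
    return cost_2020 / 2 ** (months / doubling_months)

print(f"brain-sized model in 2020: ${cost_2020/1e9:.1f}B")   # ~$2.6B
print(f"same model in 2032:        ${cost_in(2032)/1e6:.1f}M")  # ~$5M
```

Nine efficiency doublings (12 years at 16 months each) take the figure from roughly $2.6B down to roughly $5M, which is where the video's 2032 number comes from.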
@haulin 3 years ago
Not all of the human brain's synapses are used for language processing, though. It's gonna be super-human.
@waynecake5867 3 years ago
@@haulin I was thinking the same. Not all parts of the human brain are used to get there.
@facurod1392 3 years ago
Today (2020) it costs a human life to train a human brain 🧠👀 👁 👅 👄 🩸💪 🦵
@YouuRayy 3 years ago
thanks Lex:)
@louisv4037 3 years ago
It depends on whether the lottery ticket hypothesis is verified or not at brain scale. In that case the cognitive power of a much larger brain could be reached within a much smaller one. I suspect new search mechanisms would have to be invented to discover these optimally sized architectures. The level of brain plasticity observed in subjects who have lost part of their brains leans toward that hypothesis.
@engboy69 3 years ago
That's interesting because, if the trend continues, it will also cost $5M to train a human brain at college in 2032
@rml4289 3 years ago
College trains the human brain to be a good obedient worker slave for the big corps.. 9 to 5 9 to 5 9 to 5 9 to 5
@bozo5632 3 years ago
And it extinguishes all sense of humor.
@kumarmanchoju1129 3 years ago
I am certain this comment was generated using GPT-3
@Lars16 3 years ago
I don't know about that trend, as you would be multiplying by 0 if you did this in any civilized country. This comment was made by the Scandinavia gang
@chunkydurango7841 3 years ago
R M L college is what helped teach me Java, Python, JS, etc... but yeah, totally a scam 🙄
@xvaruunx 3 years ago
GPT-800: I need your clothes, your boots and your motorcycle 😎😂
@vkray 3 years ago
Lol
@xyzabc6741 3 years ago
I think that model's name is T-800 :)
@keithrezendes6913 3 years ago
Don’t forget sunglasses
@Kalumbatsch 3 years ago
@@xyzabc6741 whoosh
@kumarmanchoju1129 3 years ago
More likely it will be GPT-5
@georgeprice7351 3 years ago
These short, highly focused videos are a nice mental appetizer, and it's easy to set aside 5 mins to watch them between consecutive unsuccessful model training runs
@AbhishekDubey-mp3ys 3 years ago
lol, watching a model train is soon going to be a trend (❁´◡`❁)
@abdulsami5843 3 years ago
on point man
@Noah97144 3 years ago
lmao
@rudrakshsugandhi2305 3 years ago
actually my model is training and i am watching this video. lol
@incognitotorpedo42 3 years ago
@@AbhishekDubey-mp3ys I have some model trains. They're HO scale. I don't play with them much any more, though.
@twinters8 3 years ago
It would be awesome to see your breakdowns on GPT-3. Explain to us dummies how it works!
@sjoerdgroot6338 3 years ago
kzbin.info/www/bejne/iYqYgamQp6-bgqc Yannic's video does a good job explaining the paper but might be a bit long
@WasguckstdudieURlan 3 years ago
How about writing a GPT-3 app that explains to you how it works
@Alistair 3 years ago
it's basically the auto-predict-next-word feature on your phone after a few cups of coffee
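The joke is closer to the truth than it sounds: GPT-3's training objective really is next-token prediction. Here's a toy illustration of that objective using bigram counts (the corpus and the lookup-table approach are invented for illustration; GPT-3 uses a 175B-parameter transformer, not a count table):

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count word bigrams, predict the most frequent follower.
# GPT-3 optimizes the same "predict the next token" objective, just with a
# 175B-parameter transformer instead of a lookup table.
corpus = "the brain has synapses and the brain learns and the model has parameters"

followers = defaultdict(Counter)
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    followers[prev][nxt] += 1

def predict(word):
    """Return the most frequent word seen after `word` in the corpus."""
    return followers[word].most_common(1)[0][0]

print(predict("the"))  # → "brain" (seen twice, vs "model" once)
```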
@a_name_a 3 years ago
You forget that those 100 trillion synapses don't only do language: they do vision, reasoning, biological function, fine motor control, and much more. The language part (if we can isolate it from the other parts) probably uses a fraction of those synapses
@postvideo97 3 years ago
It might be hard to quantify how many neurons are associated with language, since language, vision, hearing and touch are very much interconnected in our brains. You can't learn a language if you can't see, hear and touch.
@anvarkurmukov2438 3 years ago
@@postvideo97 you actually can learn a language with any of these senses (e.g. touching is enough)
@farenhite4329 3 years ago
The brain is so interconnected that it's hard to put a figure on how many synapses are used for a single task, although the estimates are good.
@Guztav1337 3 years ago
@@thomasreed2427 That seems like a bit of an exaggeration to me. To replicate some of the behavior of a single brain neuron (e.g. XOR), you would need 4 of our current neurons. Let's take 10 times that and round upwards, 40≈100, to cover it more accurately. The structure of the brain could also add an additional 10, or even 100, times requirement with our type of neurons. Remember that just giving it an additional 10 times is 10 times its current size, i.e. it could do the same job 10 different ways. So personally I think you might need at most 100*100 = 10,000 times larger than 100 trillion. But idk ¯\_(ツ)_/¯
@JulianGaming007 3 years ago
Wow u guys are all yt og's
@hubermanlab 3 years ago
Thank you for this post. Powerful topic. Excellent description of the potential for this platform and hurdles involved.
@dhruvverma7087 1 year ago
Wow ! Surprised to find your comment here
@georgelopez9411 3 years ago
2:58 I would love an in-depth GPT-3 video, explaining how it works, the algorithms behind it, the results it has achieved, and its implications for the future.
@Wulfcry 3 years ago
If you're brave you can always try to read the papers, which are much more informative, but I call them a total time sink because it's a lot. Look for "autoregressive models" (that's an easy start), but before that read about entropy coding (Shannon). Then you'll have a little grasp of what's going on. Just a little.
@Guztav1337 3 years ago
You can always go on youtube and search for it on your own. I recommend videos that are about 40 minutes long on the subject; anything shorter cuts too many of the details.
@pigshitpoet 3 years ago
it is spyware
@Jacob_A_OBrien 3 years ago
I think it is important that computer scientists use the term neuron and synapse very carefully. I am a molecular biologist and to equate neurons in neural networks to biological neurons, or even a synapse, is like calling an abacus a quantum computer. I don't say this to diminish machine learning at all, I use it as a biologist, and I've been showing my whole family AI Dungeon 2 utilizing GPT-3; it really is tremendous. But there is such a large difference between computer neurons (I'll just call them nodes) and biological neurons. Each neuron itself could be represented as a neural network with probably 100s of trillions of nodes or maybe magnitudes more, and each of those nodes would itself consist of probably thousands or millions of nodes in their own neural network. This is to say that the computation involved in determining whether there is an action potential or not is truly massive. I wish I could put this into more precise words but the complexity of even a single neuron is far, far greater than the complexity of all human systems of all times compiled into even a single object. I will try to exemplify this using a single example in my field of expertise, microRNA. The synapse consists of multiple protein complexes that work to transmit a chemical signal from outside the cell to inside the cell. In this case, the outside signal is created by another neuron. Every one of those proteins has dozens (and probably a lot more than that) of regulatory steps along the path of its production, localization, and function. These regulatory steps happen over time and themselves consist of other molecules produced/consumed by the neuron, each of which have their own regulation. Now let's say we have neuron 1 and it is trying to form a synapse with neuron 2. At the position neuron 1 and 2 physically interact, communication has already happened and all the necessary players (small molecules, RNA, and protein) have been recruited to this location. 
The moment of truth arrives: neuron 1 has an action potential. Neuron 2 starts to assemble a brand new synapse at that location, but this does not end in the production of a new synapse. In neuron 2, perhaps hours or days earlier, it decoded a complex network of extracellular signals that culminated in the localization of a specific microRNA at the location of this potential synapse. At the same time that neuron 2 receives the signal from neuron 1, that microRNA is matured and made active over a period of minutes. Instead of this new synapse being formed on neuron 2, this specific microRNA causes the production of the protein necessary for its completion to stop, and the whole process is aborted. At every step of every process in normally functioning neurons, these seemingly miraculous processes are occurring. They are occurring in our billions of neurons over our entire lives, in a body of trillions of cells that are all equally complex, communicating with each other always and for our entire lives. I say this not to demean or lessen the work of you, Lex, or any other computer scientist. But I say it to humble us, for us to be a little more careful when we so casually say, "it's just computation."
@christhomasism 3 years ago
Wow - thanks for the comment this is very interesting!
@namaan123 3 years ago
Reading this, what's truly miraculous to me is that the organization of these 100 billion neurons with 100-1000 trillion synapses into something that can reason and see and hear and feel and smell and remember can be thought to be explained by natural selection occurring over a mere 4-5 billion years. Anyone who's tried to simulate evolution in computers with a tiny fraction of variables of the real world should have some idea of how tiny that time really is to produce our brains by evolution, especially while recognizing that complexity explodes with increasing variables. It's miraculous then that the idea that we are designed and created isn't the normative claim.
@japoo 3 years ago
@@namaan123 I guess you should read more about evolution than what's given in the comments lol. Designer, my ass
@namaan123 3 years ago
Ahammed Jafar Saadique I know I don't know enough about evolution to defend that position with scientific evidence; I defend it by other means. That said, is your confidence meant to suggest that you can defend the contrary position with scientific evidence?
@japoo 3 years ago
@@namaan123 The study of the evolution of the human brain is pretty wide and diverse. I don't know what I'm supposed to prove here. Btw I can link you to some cool reads for your free time to broaden your knowledge of human brain evolution. And I'm sorry if I came across as arrogant in the last comment. humanorigins.si.edu/human-characteristics/brains www.yourgenome.org/stories/evolution-of-the-human-brain
@josephkevinmachado 3 years ago
the cost of training will be nothing compared to all the money they make on selling this as a service :o
@facurod1392 3 years ago
funny that the money to pay for all these AI services will be produced by other machines/AIs... in the end the human is practically out of the equation... nwo... cite: Max Tegmark, Life 3.0
@chris_tzikas 3 years ago
Let's hope it won't be about advertisements.
@hecko-yes 3 years ago
@@444haluk only humans have money though
@rajiv8k 3 years ago
@@444haluk I'd love to see how species fare as the sun grows into a red giant. Before you say humans won't last long with the way we are polluting the earth: humans will survive. The numbers will vary and many may perish, but the species will survive; we are the only ones with the best chance of turning into a spacefaring civilization.
@christophegroulx8187 3 years ago
baby bean That is so wrong it’s not even funny
@AkA888 3 years ago
Good job Lex, really like the format. Thank you for sharing the knowledge.
@obasaar68 3 years ago
Hello, Thanks for your time and efforts! I love the idea of the short videos! I'm very grateful for all of your hard work!
@learnbiomechanics860 3 years ago
Love these quick videos! Keep it up!
@fugamante1539 3 years ago
These short videos are so good. Thanks for sharing them with us.
@dhruvshn 3 years ago
absolutely love this!!!! need more videos and a JRE appearance from you to explain GPT-3 deeply!
@gauravbhokare 3 years ago
These are great Lex!! Keep em coming !
@alexdowlen9347 3 years ago
You're awesome, Lex. Keep up the incredible work.
@PartyRockAdviser 3 years ago
This is what I love about 2020 and the Internet. Two decades ago a channel concentrated on the eclectic scientific subjects that Lex covers would have had little activity. But I was thrilled to see that this video, released only hours ago, already has a ton of comments and likes, just like a typical YouTube "video star" channel! :D On the darker side: the millions of dollars required to train a network like GPT-3 does somewhat torpedo the "democratization of AI" initiative. And yes, in X years the power required to train a GPT-3 system might fit in a smartphone. But when that happens there will surely be new hardware as powerful relative to that coming "genius" smartphone as the computing cluster GPT-3 was trained on is to the typical computing resources the average person can afford today. Perhaps it will be some astonishing combination of quantum computing and vast distributed parallel processing (or, as Marvin put it more humorously in The Hitchhiker's Guide to the Galaxy, a computing platform with a "brain the size of a planet"). Maybe that's just the way the Universe is and always will be?
@vertonical 3 years ago
I'd love to see what Google's/BERT's response is to GPT-3. After all, Google has the largest amount of compute resources in the world. Plus I'm sure their newest cloud TPUs can train GPT-3 much more quickly and efficiently than the many "general purpose" GPUs this exercise by OpenAI required.
@user-zk1rv2je2s 3 years ago
Yep, but maybe we should improve ourselves too. Great technology becomes nothing when it's operated by idiots without willpower. We already lag behind the instruments that could improve our quality of life, and that is already insane.
@markmui 3 years ago
These short vids are great, keep them coming man, nice job!
@stanbrand1 2 years ago
Thank you lex for explaining this. I am extremely grateful for your videos and explanations
@AliKutluozen 3 years ago
Keep making "this kind of things" please! :) Bite-sized ideas and information!
@Keunt 3 years ago
Would honestly love to see some lectures or video essays on these subjects from you
@DynamicUnreal 3 years ago
2:30 Looks like Ray Kurzweil’s prediction for the singularity is tracking pretty accurately.
@puckry9686 3 years ago
Don't be so confident. Remember, that was what scientists said about the TOE back in the 1980s, and now in 2020 we are not even close
@zaidboe 3 years ago
Loving these short videos.
@BlackCat.Designs 3 years ago
That's the price of our last invention... After that... we might at best be associate producers on everything.
@SameLif3 3 years ago
Yep, because we have more reason to decide by ourselves
@aweslayne 3 years ago
The Erudite AI sees another field to takeover: “another one” ☝️
@atrox7685 3 years ago
thank you for all the great content bro
@ChampionRunner 3 years ago
Now this is what the whole world needed. We got a bit of an idea from different articles stating what GPT-3 is, but never really any update or clue.👋 This is the real thing that you have talked about, Lex.👌 Good one....😺
@mnesvat 3 years ago
Liked this format, small, easy to digest 🙌
@jasonfelice75 3 years ago
I love these little vids!
@TheJorgenbo 3 years ago
More of these videos! Especially from a philosophical standpoint
@tehnokracijad.o.o.tehnokra5940 3 years ago
Absolutely brilliant Lex .
@MolotovBg 3 years ago
Fantastic video, would love to see your thoughts on the potential of this technology and how you think it will impact the world
@canaldoapolinario 3 years ago
To be honest, I was expecting a figure in the ballpark of hundreds of trillions of USD, more than the entire world's GDP and such. USD 2.6 billion doesn't sound impossible even in 2020. Maybe I'm poisoned by reading about billions too much, with startups like WeWork being worth dozens of billions - but some company/individual investing USD 2.6 billion in 2020 to have a "maybe-too-close-to-human-like" language model, or at least something that is hundreds or thousands of times better than GPT-3, sounds feasible to me.
@KeeperOfKale222 3 years ago
“How do you snorgle a borgle?” GPT3: “With a snorgle.”
@calivein 3 years ago
Big fan! I love your videos!
@Bati_ 3 years ago
It's not an exaggeration when people say that the most valuable possession you have is your brain...
@LordAlacorn 3 years ago
Not for long... :)
@armhf9617 3 years ago
You are your brain
@victoraguirre92 3 years ago
@@LordAlacorn Care to explain? I heard something about Neuralink but I just can't comprehend the idea of uploading your consciousness.
@LordAlacorn 3 years ago
@@victoraguirre92 Neuralink is outdated. :) Search for "artificial dopamine neuron" - we just invented it. No upload; direct brain expansion is possible. Basically we invented the possibility of a better brain for ourselves.
@victoraguirre92 3 years ago
Alacorn Thanks
@shubhamkanwal8977 3 years ago
Amazing short video. 👍
@felixthefoxMEXICO 3 years ago
Love it! Great video....MOAR
@FirstLast-gk6lg 3 years ago
I like the short video, just started learning to code 3 months ago. Hopefully, I'll get the chance to work on some machine learning before the programs write themselves haha
@VincentKun 3 years ago
I thought 175 billion parameters were a lot... They are actually! Wonderful!
@aiart3615 3 years ago
By now, faster and more memory-efficient training methods have come out: there is no need to train every synapse at each iteration, only a subset of them
@douglasjamesmartin 3 years ago
Very sexy
@Guztav1337 3 years ago
More efficient learning was covered by the video
@danielro1209 3 years ago
Not all human synapses are dedicated to image processing
@revimfadli4666 3 years ago
Imagine a computer model that does
@revimfadli4666 3 years ago
@Ronit ganguly On the way, we're more likely to encounter a partial AGI (which may or may not be misleading) though. Are there discussions about that (rather than full AGIs)?
@revimfadli4666 3 years ago
@Ronit ganguly that's such an optimistic spirit :) I brought up AGI on r/machinelearning and they confused it with the sensationalist fearmongering news that makes AGI seem like a real-life Terminator/Matrix/Screamers, claiming AGI is just a fantasy and such. They think MLPs being math functions means they can't be intelligent or the like, even though that's a bit like saying "a bunch of proteins and electric signals can't have self-awareness". Btw AFAIK most discussions regarding AGI hazard (like on Robert Miles' channel) seem to revolve around a hypothetical 'perfect'/'full' AGI, but what about the sub-AGIs we're likely to encounter first? Would they make different mistakes due to not being as intelligent? Btw which papers have you worked on?
@projectileenthusiast1784 3 years ago
Would love for a video on Lex's work station/setup, what laptop he uses, what OS, his daily tech bag etc etc.
@yviruss1 3 years ago
Thanks Lex!
@mpsoxygen 3 years ago
Since I saw OpenAI play Dota 5v5 it was clear to me that ML is capable of amazing things. Sure there are a lot of edge cases and weird stuff, but to see human behavior (like self-sacrifice for the good of the team, intimidation tactics, baiting, object permanence, etc.) emerge from a machine was just mind-blowing. It would be really nice if you did a video about it or invited someone from the team onto the podcast to talk about it.
@HypnoticSuggestion 3 years ago
I really like this idea, shorts are great.
@bschweiz1321 3 years ago
this is great thanks Lex
@jackhurley8784 3 years ago
Awesome video, I hope you do keep making more like this. One question though: you account for the improving efficiency of neural networks leading to less expensive training, but is it not also true that compute will continue to get cheaper as well?
@jackhurley8784 3 years ago
@stack Whale if it is, that's kinda what I'm asking about. I don't actually know if compute is factored into the increased efficiency he is talking about.
@jackhurley8784 3 years ago
I see it as two factors of the same problem. I am happy to discuss how I may be wrong as I am always eager to learn.
@jackhurley8784 3 years ago
@stack Whale I honestly wasn't trying to "click bait" anyone. If my question doesn't apply, I am happy to discuss that. I'm pretty sure we are actually both just on different pages, and discussing this through the low bandwidth of the comments section clearly isn't going to rectify that. I frankly don't appreciate this comment, especially as someone who is just trying to gain knowledge.
@deeplearningexplainer2139 3 years ago
Interesting reasoning. I wondered if we can continue doubling training efficiency at this rate. But yes... things might get interesting if there is a model containing 100 trillion parameters.
@DanGM123 3 years ago
with a tiny fraction of Jeff Bezos' fortune we would be able to train GPT-4 today.
@VincentKun 3 years ago
Bezos could become god right now
@xsuploader 3 years ago
@@VincentKun no he couldn't. Even with a 100 trillion parameter network you may not reach general intelligence. A scaled-up language model isn't the same thing as an AGI.
@VincentKun 3 years ago
@@xsuploader I was joking about it, of course you can't develop self-consciousness with this GPT-3
@inutiledegliinutili2308 3 years ago
Vincenzo Gargano Do you need self consciousness to have an agi?
@VincentKun 3 years ago
@@inutiledegliinutili2308 It's kinda yes and no. For example in Asimov's books we have Strong AI, which is basically an AGI with self-consciousness. But in reality we don't even know what consciousness is... or when it appears. So I can't answer for sure; I could also be wrong
@kayrosis5523 3 years ago
GPT-3 Sentence completion: Humanity is...[about to become obsolete] [unprepared for what's coming] [blissfully ignorant of the future they are about to experience]
@nimmernomma8830 3 years ago
all of the above
@ravencross7779 3 years ago
Thank you!
@pacoysutabaco82 3 years ago
Loved the video
@schritteplusneu1293 3 years ago
Great work!
@twgardenia 3 years ago
Yes !!!! Always learn something new everyday !! :)
@satoshinakamoto171 3 years ago
what about quantum computing in training models? how might that affect future ML algos and their associated costs? any idea if there is work going on with quantum computing and training models?
@JS-ho6hv 3 years ago
86 billion neurons in the human brain, and about 10,000 connections per neuron, which yields 860 trillion (almost 1 quadrillion) potential connections, what is called the "connectome", and somehow makes you, you.
@MrX-st4kk 3 years ago
Question: What if we run GPT-3 on NEST (neural simulation technology) using folding@home computational power?
@ClaudioDrews 2 years ago
Lex, it's not about how complex the box is, it's about how the box can interface with the world in a closed loop of changing the world and being changed by the changes it makes upon the world.
@Iaminationman 3 years ago
Can you please do a long-form interview with GPT-3? Its responses depend heavily on the questions asked and I think you could extract some fascinating answers from it.
@oohboi2750 3 years ago
great job
@fourthz4460 3 years ago
Lex, could you make a video explaining whether this neural-network-powered technology could eventually lead to an AGI?
@antoniovelazquez9869 3 years ago
And what would be the cost of actually gathering the data on which to train GPT-3?
@joeedgar634 2 years ago
I realize this video is a year old, but just wanted to point out the incredible rate of progression on this stuff. Only a year after this video was made, there are now many models in the 1-2 trillion parameter range, and Cerebras claims it can handle 175 trillion parameters (price unknown to the public as of yet). There are also open source models that can be trained on commodity hardware and achieve benchmarks close to GPT-3. Incredible (and somewhat frightening) stuff.
@lashropa 3 years ago
Singularly exciting!
@ggkk1667 3 years ago
When does GPT-6 come out, Lex? Can't wait to play it
@keenheat3335 3 years ago
There was a paper, "Adaptive Flight Control With Living Neuronal Networks on Microelectrode Arrays" by Thomas B. DeMarse and Karl P. Dockendorf, where they connect a chip to rat neuron cells and use it to train a "literal" neural network to create an adaptive flight controller in a flight simulator. The weights of the network are adjusted by killing cells or stimulating their growth at specific locations via low/high frequency electric pulses. There was also another channel, "The Thought Emporium", where the content creator cultured a petri dish full of commercially bought human neuron cells (grown from induced stem cells, so no human was harmed), hooked them up with electrodes, and attempted some basic neural network tasks like number recognition. So technically a neural network might be replicated within a human brain with a bit more technology, although the machine version of a neural network might not be the same as the natural version.
@thatmumwiththehair7438 3 years ago
So good 👌🏽
@Iwijn2000 3 years ago
Would you have to train a neural net on a bunch of different things (natural language, solving math problems, ...) to achieve general AI?
@JGlez-bv7xm 3 years ago
Good Morning Dave..
@MountThor 3 years ago
Can you go into deep detail about how GPT-3 works? Thanks.
@haritssyah7434 3 years ago
i like this, keep going
@MJ-ge6jz 3 years ago
Our digital online helper is closer than you think! And I am talking about a level of sophistication akin to the movie "Her."
@fzigunov 3 years ago
I'm pretty sure linear scaling does not apply here. I think N log(N) is probably more applicable, perhaps even N^2. AI training is an optimization problem, and optimization with linear scaling is like the holy grail. I would like to see some expert input here!
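For what it's worth, it's easy to see how much the answer moves under the alternatives this comment raises, taking the video's linear extrapolation as the baseline (the $4.6M GPT-3 base cost is the Lambda Labs estimate from reference [2]; the exponents are purely illustrative, not a claim about how training actually scales):

```python
import math

# How sensitive is the ~$2.6B brain-scale estimate to the scaling exponent?
# The video assumes cost grows linearly in parameter count N; this sketch
# recomputes the figure under O(N log N) and O(N^2) instead.
# Illustrative only: real cost also depends on tokens, hardware, parallelism, etc.

GPT3_COST = 4.6e6                  # assumed GPT-3 training cost in USD (ref [2])
N1, N2 = 175e9, 100e12             # GPT-3 parameters vs. brain synapses
r = N2 / N1                        # ~571x more parameters

linear    = GPT3_COST * r                                   # ~$2.6B
nlogn     = GPT3_COST * r * math.log(N2) / math.log(N1)     # ~25% more
quadratic = GPT3_COST * r ** 2                              # ~$1.5 trillion

for name, cost in [("O(N)", linear), ("O(N log N)", nlogn), ("O(N^2)", quadratic)]:
    print(f"{name:>11}: ${cost:,.0f}")
```

Under N log N the estimate barely changes, but under N² it lands in the trillions, which is roughly the ballpark some commenters above were expecting.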
@user-lk1ky1hx5r 3 years ago
waiting for the 20k challenge eagerly
@HalfIdeas 3 years ago
It took OpenAI about a year and a half to go from GPT-2 to 3. Figure they will continue this path with a 5-10X increase as the cost to train the network drops.
@dislike__button 3 years ago
I'd like to see a GPT-3 chatbot
@ONDANOTA 3 years ago
the text game "AI Dungeon" currently uses GPT-3
@bozo5632 3 years ago
AMA
@AC-zv3fx 3 years ago
Chatbot called Replica AI also uses GPT-3
@mtvgrif 1 year ago
here you go
@dislike__button 1 year ago
@@mtvgrif lol
@alxleiva 3 years ago
more GPT-3 please!
@kmh511 3 years ago
GPT-4 would be about 400TB on disk, and GPT-3 was trained on roughly 2TB of data (499B tokens). If GPT-4 were trained on the same data it would overfit in less than 1 epoch - it would just memorize it perfectly. I agree with what the community is saying, that the number of parameters and synapses isn't everything. The brain has different types of synapses, processes information asynchronously, and has other types of supporting neural cells; there is just no comparison between a biological neuron and an ANN. What is interesting is to see how the GPT architecture/methodology can be extended beyond a language model, incorporating elements from cognitive systems such as planning, reasoning, etc. for more generalized intelligence. Albeit even if just in the NLP domain, we should start to see something getting closer to AGI.
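The storage figures in the comment above check out under simple assumptions: 4 bytes (fp32) per parameter for a hypothetical 100-trillion-parameter model, and roughly 4 bytes of raw text per token for GPT-3's ~499B training tokens. Both byte counts are assumptions for the sake of the arithmetic, not published numbers:

```python
# Sanity-check the figures: ~400TB of weights, ~2TB of training text.
# Assumes fp32 (4 bytes) per parameter and ~4 bytes of raw text per token.
PARAMS = 100e12     # hypothetical brain-scale parameter count
TOKENS = 499e9      # GPT-3 training tokens (ref [1])

model_tb = PARAMS * 4 / 1e12    # weight storage in TB
data_tb  = TOKENS * 4 / 1e12    # raw training text in TB

print(f"weights: {model_tb:.0f} TB, training text: {data_tb:.1f} TB")
```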
@NextFuckingLevel 3 years ago
Governments know this, but they don't see a practical, proven result showing that scaling this model up could achieve human-level understanding and reasoning, so they decided to wait a couple of years
@chessmusictheory4644 2 months ago
Depending on the speed at which they attempt to develop it, the cost of creating GPT-Human level could reach billions. However, there are significant challenges they must overcome. For instance, addressing short-term memory loss and the even more critical issue of catastrophic forgetting is essential. These obstacles prevent them from seamlessly combining too many models to create a supermodel. Doing so would necessitate reconstructing the models from scratch. When the announcement of AGI achievement eventually arrives, skepticism is warranted. It’s unlikely to be a pure AGI but rather an evolution of existing models-perhaps an advanced GPT-5 or a cleverly woven amalgamation of specialized agents.
@someonesomeone529 3 years ago
Why did you define parameters as synapses, or vice versa? Or what do you mean by parameters?
@regalhearts3944 3 years ago
Info like this can tell us how long it will take to populate a city with these models. Seems like it's going to take longer to mass-produce than to actually achieve the science
@dentarthurdent42
@dentarthurdent42 3 жыл бұрын
The singularity is rapidly approaching!
@Fussfackel
@Fussfackel 3 жыл бұрын
If Nick Bostrom is right in assuming that language capability equals general intelligence, then by your estimation, Lex, we will have AGI by 2032. "If somebody were to succeed in creating an AI that could understand natural language as well as a human adult, they would in all likelihood also either already have succeeded in creating an AI that could do everything else that human intelligence can do, or they would be but a very short step from such a general capability" (Bostrom 2014)
@amirramezani9135
@amirramezani9135 3 жыл бұрын
It's not just about the number of parameters.
@richardpalme5b
@richardpalme5b 3 жыл бұрын
I agree, but I suppose that until such huge models are feasible to train, researchers will have found appropriate model architectures for them.
@Alorand
@Alorand 3 жыл бұрын
You better get some impressive scalability for that price, because you can get quite a few talented people to be "as smart as humans" for $2.6 billion.
@EmiDiez
@EmiDiez 3 жыл бұрын
Excellent video
@vikrampande6379
@vikrampande6379 3 жыл бұрын
Technical details, please!
@pandemik0
@pandemik0 3 жыл бұрын
Mind blown. We're in an overhang where so much is feasible it's just a matter of trying it out.
@humanperson8418
@humanperson8418 Жыл бұрын
Yay Moore's Law. 🥳 More Moore. 🎉🎉
3 жыл бұрын
2033 - GPT-4 helps humanity create the warp drive. Asimov said in his (fiction) books that AI would help humanity accomplish just that - I really, really hope it comes true.
@zubinkynto
@zubinkynto 3 жыл бұрын
True that, but screw that Asimov ruleset
3 жыл бұрын
@@zubinkynto Do you mean the four rules of robotics? (I'm counting with the zeroth law here)
@FailTrainS
@FailTrainS 3 жыл бұрын
Have you thought about the implications of quantum computers on Machine Learning? A quantum computer would be exponentially faster at the optimization problems required as the matrix gets larger and they're scaling much faster than I thought they would.
@SteveWindsurf
@SteveWindsurf 3 жыл бұрын
The human brain has many synapses, yes, but most are static or move incredibly slowly. Perhaps a trillion-node neural network running at, say, 100 Hz around the outside and faster around the sound and vision processing centers could fit on a single chip and run cool.
@gpt-jcommentbot4759
@gpt-jcommentbot4759 Жыл бұрын
ok what would we do with it
@jeffwads6158
@jeffwads6158 3 жыл бұрын
Yes, it passes the Turing test for language models, but it doesn't know what an apple is. That aspect is what I'm watching out for. Exciting times. GPT-3 would be amazing at grading grammar papers, though.
@anonmouse956
@anonmouse956 3 жыл бұрын
Is there a coherent argument explaining how transistors compare to synapses in total computing power?
@vincentbrandon7236
@vincentbrandon7236 3 жыл бұрын
I think 100 trillion (or even an order of magnitude or so lower) is a reasonable upper bound for language in and of itself. This is just NLP. The brain does much more and there's reason to believe it can work with abstract primitives, archetypes and a limited geometry subset, which this GPT doesn't really cover. While I believe we're closer to an AI we can write to than we think, we'll need similar effort in a few other key domains required for making sense of the world more generally.
@worldcitizen1118
@worldcitizen1118 3 жыл бұрын
Hi Lex (or anyone else who might be able to help), I am working on a serious medical problem which impacts about 15 million people. A high-quality text synthesizer would be very helpful to advance our work. Is anyone aware of a synthesizer built on GPT-3 that I might be able to access? Thanks
@swwei
@swwei 3 жыл бұрын
In 1980, a Digital VAX 780 sold for USD 500,000. Today a Raspberry Pi 4, with at least 1000 times more computing power than the VAX 780, costs only USD 50. So I would say the most important thing is not the cost, but the idea. If the idea is feasible and has infinite potential, then people will do their best to bring down the cost.
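As a rough illustration of this comment's point, the implied price-performance improvement can be worked out directly (the 1000x speedup figure is the commenter's own claim, taken at face value):

```python
# Rough price-performance comparison implied by the comment above.
vax_price, pi_price = 500_000, 50   # USD: VAX-11/780 (1980) vs Raspberry Pi 4
pi_speedup = 1000                   # Pi 4 vs VAX-11/780 (commenter's claim)

price_ratio = vax_price / pi_price              # 10,000x cheaper
perf_per_dollar = price_ratio * pi_speedup      # 10,000,000x more compute per dollar
years = 2020 - 1980
annual = perf_per_dollar ** (1 / years)         # ~1.5x per year, roughly Moore's-law pace

print(f"{perf_per_dollar:.0e}x price-performance over {years} years "
      f"(~{annual:.2f}x per year)")
```

A sustained ~1.5x-per-year improvement is the same kind of exponential the video leans on when projecting the cost of brain-scale training down to ~$5M by 2032.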
@hypersonicmonkeybrains3418
@hypersonicmonkeybrains3418 3 жыл бұрын
Holy shit, that's cheaper than an aircraft carrier.