What is ChatGPT doing... and why does it work?

2,144,318 views

Wolfram

1 year ago

Stephen Wolfram hosts a live and unscripted Ask Me Anything about ChatGPT for all ages. Find the playlist of Q&As here: wolfr.am/youtube-sw-qa
Originally livestreamed at: / stephen_wolfram
9:55 SW starts talking
Follow us on our official social media channels.
Twitter: / wolframresearch
Facebook: / wolframresearch
Instagram: / wolframresearch
LinkedIn: / wolfram-research
Contribute to the official Wolfram Community: community.wolfram.com/
Stay up to date on the latest news from Wolfram Research through our blog: blog.wolfram.com/
Follow Stephen Wolfram's life, interests, and what makes him tick on his blog: writings.stephenwolfram.com/

Comments: 537
@LeakedCone · 10 months ago
I fell asleep with YouTube on and I'm at this
@oliviasavenue. · 10 months ago
same
@RickyVickyQin · 10 months ago
same😂
@mohdharwwfc · 10 months ago
Same
@psbm4 · 10 months ago
JDHSJAHAHW SAME
@Commander_Egg69 · 10 months ago
Same
@martinsriggs2441 · 10 months ago
The teachings on this channel are always top notch, so informative and easy to understand. It's very hard to find good content online these days.
@charleyluckey2232 · 10 months ago
I agree with you on everything. These days finding a financial mentor is a tough challenge, which is why I am happy and grateful to have been introduced to my mentor Larry Kent Burton by a friend. I made a lot of money in just two months working with him, for just a small investment.
@martinsriggs2441 · 10 months ago
Who exactly is this Mr. Larry? What does he do? And how can I take advantage of him?
@barbaragimbel3646 · 10 months ago
Sorry for interrupting your conversation, just wanted to add to everything: Mr. Larry is a miracle trader. He helped me grow my trading account from just $3,200 to over $11,000. I can confidently say anyone who invests with him is guaranteed to make profits.
@charleyluckey2232 · 10 months ago
@martinsriggs2441 He is a financial advisor and investor. He helps people to better understand the financial markets, and he also does trading and investing on your behalf.
@charleyluckey2232 · 10 months ago
Getting in touch with him is very simple. Just follow him on Instagram.
@michaeljmcguffin · 1 year ago
Starts at 9:53
1:16:25 breakthrough in 2012
1:57:35 "It's crazy that things like this work"
@lailaalfaddil7389 · 10 months ago
The most important thing on everyone's mind right now should be to invest in different sources of income that don't depend on the government, especially with the current economic crisis around the world. This is still a good time to invest in various stocks, gold, silver and digital currencies.
@KJ-sv8re · 10 months ago
S
@carson_tang · 1 year ago
Video timestamps:
0:09:53 - start of presentation, intro
0:12:16 - language model definition
0:15:30 - "temperature" parameter
0:17:20 - Wolfram Desktop demo of GPT-2
0:18:50 - generate a sentence with GPT-2
0:25:56 - unigram model
0:31:10 - bigram model
0:33:00 - n-gram model (see the toy sketch after this list)
0:38:50 - why a model is needed
0:39:00 - definition of a "model"
0:39:20 - early modeling example: Leaning Tower of Pisa experiment
0:43:55 - handwritten digit recognition task
0:47:40 - using neural nets to recognize handwritten digits
0:51:31 - key idea: attractors
0:53:35 - neural nets and attractors
0:54:44 - walking through a simple neural net
1:01:50 - what's going on inside a neural net during classification
1:06:12 - training a neural net to correctly compute a function
1:09:10 - measuring "correctness" of a neural net with "loss"
1:10:41 - reducing "loss" with gradient descent
1:17:06 - escaping local minima in higher-dimensional space
1:21:15 - the generalizability of neural nets
1:28:06 - supervised learning
1:30:47 - transfer learning
1:32:35 - unsupervised learning
1:34:40 - training LeNet, a handwritten digit recognizer
1:38:14 - embeddings, representing words with numbers
1:42:12 - softmax layer
1:42:47 - embedding layer
1:46:22 - GPT-2 embeddings of words
1:47:40 - ChatGPT's basic architecture
1:48:00 - transformers
1:52:50 - attention block
1:59:00 - amount of text training data on the web
2:03:35 - relationship between trillions of words and weights in the network
2:09:40 - reinforcement learning from human feedback
2:12:38 - why does ChatGPT work? Regularity and structure in human language
2:15:50 - ChatGPT learns syntactic grammar
2:19:30 - ChatGPT's limitation in balancing parentheses
2:20:51 - ChatGPT learns [inductive] logic based on all the training data it's seen
2:23:57 - what regularities Stephen Wolfram guesses ChatGPT has discovered
2:24:11 - ChatGPT navigating the meaning space of words
2:34:50 - ChatGPT's limitation in mathematical computation
2:36:20 - ChatGPT possibly discovering semantic grammar
2:38:17 - a fundamental limit of neural nets is performing irreducible computations
2:41:09 - Q&A
2:41:16 - Question 1: "Are constructed languages like Esperanto more amenable to a semantic-grammar AI approach?"
2:43:14 - Question 2
2:32:37 - Question 3: token limits
2:45:00 - Question 4: tension between superintelligence and computational irreducibility. How far can LLM intelligence go?
2:52:12 - Question 5
2:53:22 - Question 6: pretraining a large biologically inspired language model
2:55:46 - Question 7: five-senses multimodal model
2:56:25 - Question 8: the creativity of AI image generation
2:59:17 - Question 9: how does ChatGPT avoid controversial topics? Taught through reinforcement learning + possibly a list of controversial words
3:03:26 - Question 10: neural nets vs. other living multicellular intelligence, principle of computational equivalence
3:04:45 - human consciousness
3:06:40 - Question 11: automated fact checking for ChatGPT via an adversarial network. Train ChatGPT with WolframAlpha?
3:07:25 - Question 12: can ChatGPT play a text-based adventure game?
3:07:43 - Question 13: what makes GPT-3 so good at language?
3:08:22 - Question 14: could feature impact scores help us understand GPT better?
3:09:48 - Question 15: ChatGPT's understanding of implications
3:10:34 - Question 16: the human brain's ability to learn
3:13:07 - Question 17: how difficult will it be for individuals to train a personal ChatGPT that behaves like a clone of the user?
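To make the unigram/bigram/n-gram segments above (25:56 to 33:00) concrete, here is a minimal bigram sketch in Python: count which word follows which in a tiny corpus, then sample continuations. The corpus and the wrap-around trick are toy assumptions for illustration, not anything from the stream.

import random
from collections import defaultdict, Counter

corpus = "the cat sat on the mat and the cat ran on the mat".split()

# bigram counts: for each word, how often each next word follows it
follows = defaultdict(Counter)
for w, nxt in zip(corpus, corpus[1:] + corpus[:1]):  # wrap so every word has a continuation
    follows[w][nxt] += 1

def next_word(word):
    # sample a continuation in proportion to its observed frequency
    counts = follows[word]
    return random.choices(list(counts), weights=list(counts.values()))[0]

text = ["the"]
for _ in range(8):
    text.append(next_word(text[-1]))
print(" ".join(text))

Longer n-grams work the same way with tuples of preceding words as keys, which is why, as noted at 33:49 in another comment, longer contexts look more like English.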
@AmuhAje · 1 year ago
Thanks. A. Ton!
@Kami84 · 1 year ago
Thanks 🙏🏾
@lowcountrydogos3142 · 1 year ago
😊 Appreciate it!!!
@shinequashie393 · 11 months ago
Most significant comment of our time 😂
@raymondloh25 · 11 months ago
😊😊😊😊😊😊
@dr.bogenbroom894 · 1 year ago
Watching these videos is a great way to review all these things and understand them again, maybe a little better. Thank you very much.
@at0mly · 1 year ago
starts at 9:50
@harrykekgmail · 1 year ago
thanks, appreciated
@fatemehcheginisalzmann2189 · 1 year ago
Amazing & super helpful!!! I really enjoyed watching it and learned a lot.
@ericdefazio4197 · 1 year ago
This took me a few days to get through... in a good way. So much good stuff here, and such a great instructor, with great ways of explaining and good visual aids. Amazed that Mr. Wolfram is generous enough with his time to share his insights and be this open with everyone, given he has many companies to run and problems to solve. I love engineering 😊
@ai_serf · 1 year ago
As a radical thinker/CS student studying some graduate-level mathematical logic: Wolfram is one of my "12 disciples", i.e. he's a holy figure to me.
@cdorman11 · 7 months ago
"Amazed Mr. Wolfram is as generous with his time..." Then maybe you'd be interested in buying his book.
@anonymous.youtuber · 1 year ago
Thank you so much! I learned more in these 3 hours than in months of watching other videos about this subject. It would be great if more knowledgeable people used YouTube to share their experience. 🙏🏻🙏🏻🙏🏻
@porkbun1555 · 11 months ago
Difference between qualified and unqualified people. Basically it's the difference between a radio DJ and a college professor, yeah.
@duhmiyah · 10 months ago
Let me guess… everyone fell asleep and then woke up to this livestream playing, am I right?
@bonvabriones · 6 days ago
Nope
@ivanjelenic5627 · 8 months ago
Love this. I knew a lot of it already, but it was still great to hear it expressed in a clear and systematic way.
@2nxtlvlstudio · 5 months ago
True, I also thought of it the same way: you take your phone, start typing, and just keep accepting whatever autocorrect thinks is the best next word; that's the principle it works on. Didn't think it could get this good.
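That analogy can be made concrete. A minimal sketch of "autocorrect on repeat", with the temperature knob Wolfram demos around 15:30; the word list and probabilities here are invented for illustration:

import random

# toy next-word probabilities, as a phone keyboard might estimate (made-up numbers)
next_words = {"cat": 0.4, "dog": 0.3, "theorem": 0.2, "banana": 0.1}

def sample_next(probs, temperature=1.0):
    # temperature < 1 sharpens the distribution (more "autocorrect-like"),
    # temperature > 1 flattens it (more surprising choices)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(list(probs), weights=weights)[0]

print([sample_next(next_words, temperature=0.8) for _ in range(5)])
print(max(next_words, key=next_words.get))  # the greedy choice: always "cat"

Always taking the greedy choice produces flat, repetitive text, which is why a bit of randomness (temperature around 0.8 in Wolfram's demo) tends to read better.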
@WarrenLacefield · 1 year ago
This was the most fascinating and informative discussion, particularly your responses to commenters! Please post the link to the paper you recently wrote (?) that inspired this live video discussion. And thank you!
@Verrisin · 1 year ago
Here's a question: how much does the wording of the questions affect its answers? Presumably, if it just tries to continue, then if you make errors it ought to make more errors afterwards too, right? How about if you ask in "uneducated" language vs. scientific language? Rather than just affecting the tone, would it also affect the contents? What if you speak in a way it has associated with certain biases? Who knows what kinds of patterns it has come up with, considering it "discovered" those "semantic grammars" we as humans aren't even aware of...
@chenwilliam5176 · 11 months ago
About ChatGPT, very few people are telling the truth, and Wolfram is the most powerful one ❤ Thank you very much, Stephen Wolfram ❤
@louisjinhui1420 · 1 year ago
Hi! You produce well. I find your channel electrifying; it's getting ridiculously good. I can watch it on repeat! Keep going.
@Anders01 · 1 year ago
Amazing presentation. If I were to experiment with machine learning, I would examine small-world networks instead of layered networks, and try genetic algorithms: randomly adjust the network into a number of variations, pick the best candidate, repeat the adjustment for the new candidate, and continue iterating until a desired outcome is found.
@thorcook · 10 months ago
Yeah, that's been done actually; research the various AI/ML models and research papers. BTW, the 'layered networks' structure is actually kind of useful for 'adjusting the network into a number of variations'.
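For what it's worth, a minimal sketch of the mutate-select-iterate loop described above (a simple evolution strategy; here the "network" is just a 3-weight vector and the fitness function is invented):

import random

def fitness(w):
    # made-up objective: how close the weights are to a hidden target
    target = [0.5, -1.0, 2.0]
    return -sum((a - b) ** 2 for a, b in zip(w, target))

def mutate(w, scale=0.1):
    # randomly adjust every weight a little
    return [x + random.gauss(0, scale) for x in w]

best = [0.0, 0.0, 0.0]
for _ in range(1000):
    # make a handful of variations, keep the best candidate, iterate
    candidates = [mutate(best) for _ in range(8)] + [best]
    best = max(candidates, key=fitness)

print(best, fitness(best))

Gradient descent, which Wolfram covers at 1:10:41, replaces this blind mutation with a direct computation of which way to adjust each weight, which is why it scales to billions of weights.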
@dockdrumming · 1 year ago
At 33:49, it's interesting how the text looks more like English the longer the character length. Great video.
@williammixson2541 · 1 year ago
Remarkable talk, simply outstanding!
@robertgoldbornatyout · 11 months ago
Amazing presentation. Thank you so much! 👍👍👍
@stormos25one · 1 year ago
Absolutely love these sessions!!
@stachowi · 10 months ago
This was amazing... I'd never watched a lecture from Stephen before, and he's an amazing teacher.
@carlhopkinson · 8 months ago
Expertly explained in a way understandable to a large set of people. Bravo.
@JustinHedge · 1 year ago
I'd love to see more in-depth analysis like this on the current LLM topic, utilizing Dr. Wolfram in this format. Exceptional content. As an aside, I've really been missing the physics project livestreams.
@gabrielescheler2522 · 22 days ago
m.kzbin.info/www/bejne/rJrHknt6q72AgtE
@eqcatvids · 1 year ago
Thank you so much Mr. Wolfram, you really shed light on some areas I had not fully grasped before!
@AlexandreRangel · 1 year ago
Very useful and well-presented content, Stephen! Thank you for this and for all your work and research!
@BradCordovaAI · 1 year ago
The weight distributions come out roughly Gaussian because of how they are initialized and constrained during training; layer normalisation keeps the activations well-scaled, which makes the gradient signal flow better.
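For readers who haven't met it, a minimal sketch of layer normalization as usually defined. Note that it standardizes a layer's activations rather than the weights themselves; the input numbers are made up:

import math

def layer_norm(x, gamma=None, beta=None, eps=1e-5):
    # standardize activations to zero mean and unit variance,
    # then optionally apply a learned scale (gamma) and shift (beta)
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    y = [(v - mean) / math.sqrt(var + eps) for v in x]
    if gamma is not None and beta is not None:
        y = [g * v + b for g, v, b in zip(gamma, y, beta)]
    return y

print(layer_norm([2.0, -1.0, 0.5, 3.5]))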
@CA-pj9pl · 1 year ago
Thank you very much for sharing your knowledge!
@joelarsenault5615 · 1 year ago
Great video, Wolfram! As someone who's fascinated by AI, I found your explanation of ChatGPT's inner workings very informative. One thing I wondered while watching was how ChatGPT compares to other language models out there. Have you done any comparisons with other models, and if so, how does ChatGPT stack up? I also think it would be interesting to delve a bit more into the ethical considerations surrounding the use of language models like ChatGPT. For example, what steps can we take to ensure these models aren't being used to spread misinformation or reinforce harmful biases? Overall, though, great job breaking down such a complex topic in an accessible way!
@JB-pe6nw · 4 months ago
A
@aleph2d · 1 year ago
Incredible work, thank you.
@user-qq5kv1ce2h · 9 months ago
Very insightful area to learn from. Thank you.
@YogonKalisto · 1 year ago
Asked chat to quit reminding me it was a language model, because I personally find it easier to converse if I treat them as if they were another being. There was a rather long pause, then chat came back and, for all intents and purposes, was a very polite and helpful, uh... person? Dunno how to regard them; they're awesome tho :)
@rehanAllahwala1 · 11 months ago
So amazing! Thank you for explaining
@misterjahan9557 · 11 months ago
Very easy to understand... amazing method, sir... thanks
@dr.mikeybee · 1 year ago
Nicely done, Stephen. This is a great introduction for a novice; your talk builds great intuition. You made the embeddings seem simple, as a prebaked, unchanging part of the entire NN. Also, the breaking up of the "feature signature" makes parallelism possible through the various attention heads. One missing idea that you might include at some point is how signals can be added: basically the Fourier series.
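If the "adding signals / Fourier series" remark refers to the Transformer's sinusoidal positional encodings (my reading, not something stated in the video), here is a minimal sketch of that scheme, which adds position information to token embeddings element-wise:

import math

def positional_encoding(pos, d_model):
    # interleaved sines and cosines at geometrically spaced frequencies,
    # as in the original Transformer paper
    pe = []
    for i in range(0, d_model, 2):
        angle = pos / (10000 ** (i / d_model))
        pe.append(math.sin(angle))
        pe.append(math.cos(angle))
    return pe[:d_model]

# each position gets a distinct wave pattern the network can learn to use
print(positional_encoding(0, 8))
print(positional_encoding(5, 8))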
@christineodonnell2711 · 1 year ago
Excellent... learned so much.
@xb2856 · 1 year ago
Oh wait, you're the website that helps me rearrange formulas. Thanks, I've used it so much.
@drewsabine1897 · 8 months ago
Should be awarded a Nobel Prize for this tutorial. Well played.
@anilkumarsaxena · 11 months ago
Good analysis. Please do more of this.
@_fox_face · 1 month ago
1:18:58 the art of training
1:32:25 training LLMs
1:52:47 attention
1:56:20 attention blocks
2:08:57 neurons have memory
@shrodingersman · 1 year ago
Could the randomness used to choose the next probable word within a certain temperature parameter be consigned to a quantum random process? If so, an essay could be viewed as a flat plane, or as an entire terrain with peaks and troughs. Within this paradigm, a certain style of writer could be viewed as a dynamic sheet, similar to how different materials, when laid over a ragged topology, comply or fail to comply with what they are laid on top of. With such a quantum process, an overall view of the essay could be judged at an aesthetic level, from most pleasing to least, on several different qualities concurrently rather than mutually exclusively, approximating some sort of conscious viewer.
@arnaldoabrantes6169 · 1 year ago
Great! Superb lesson. Thank you! However, I felt confused at 56:58 when Stephen says "At every step we are just collecting the values of the neurons from the previous layer, multiplying them by weights, adding a constant offset, applying that activation ReLU, to get this value -3.8". I think the numerical values next to the neurons are from before applying the ReLU; otherwise they would all have to be nonnegative. And the last layer does not apply ReLU, in order to get the -1 attractor. Am I correct?
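A minimal sketch of the computation in question, with made-up weights. It shows why the commenter's reading is plausible: the pre-activation value can be negative even though ReLU output cannot, and leaving the output layer linear allows values like -1:

def relu(z):
    return max(0.0, z)

def layer(values, weights, biases, activation=relu):
    # each neuron: weighted sum of the previous layer plus an offset,
    # then the activation function (or none, for a linear output layer)
    out = []
    for w_row, b in zip(weights, biases):
        z = sum(w * v for w, v in zip(w_row, values)) + b  # pre-activation value
        out.append(activation(z) if activation else z)
    return out

hidden = layer([0.2, -1.3], [[1.5, -0.4], [0.7, 2.0]], [0.1, -0.2])
# a linear last layer can output negatives such as the -1 attractor
output = layer(hidden, [[1.0, -2.0]], [0.3], activation=None)
print(hidden, output)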
@briancase9527 · 1 year ago
I would love to get Noam Chomsky's comments on the idea of "semantic grammar." It seems fairly compelling. Thanks. I also think the parenthesis grammar as a hand-hold for understanding these models is a great idea.
@IsaacChickenWong · 1 year ago
Thank you for sharing your insights and all the good questions. It's really lonely not being in an academic environment or at a company working on ML and AI.
@damionm121 · 1 year ago
31:04 ❤Love ❤usa❤
@damionm121 · 1 year ago
😅You
@laquanlewis1590 · 11 months ago
This is a LONG video, truthfully. But very informative, as it should be given the length.
@Hagiosgraphe · 1 year ago
Thank you very much, Professor Stephen.
@ericritchie9363 · 1 year ago
This was a fantastic video to watch
@dr.mikeybee · 1 year ago
Those paths through meaning space are fascinating, Stephen. I would call each one a context signature. In auto-regressive training, we are looking for the next token. Why not look for the next context signature? In fact, why not train a model on graphical context signatures and then decode replies? Other than training on graphical context signatures, I believe this is in essence what's occurring when training a transformer: the addition of signal from the entire context retrieves the next token, so that token by token a context signature is retrieved. But is it possible to retrieve an entire context signature and then decode it? I wonder how much efficiency one could achieve with this method. Moreover, I wonder how well a convolutional NN would handle training on graphical context signatures. If you want to discover physics-like laws of semantic motion, this might be a way in.
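For context on the auto-regressive objective the comment builds on, a minimal sketch of scoring a next-token prediction with cross-entropy; the vocabulary and probabilities are invented:

import math

def next_token_loss(predicted_probs, actual_next):
    # cross-entropy: -log of the probability assigned to the token that actually came next
    return -math.log(predicted_probs[actual_next])

# made-up model output after seeing "the cat", over a toy vocabulary
probs = {"sat": 0.6, "ran": 0.25, "banana": 0.15}
print(next_token_loss(probs, "sat"))     # small loss: the model expected this
print(next_token_loss(probs, "banana"))  # larger loss: the model was surprised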
@baljkabaljka5520 · 8 months ago
😊😊
@dr.bogenbroom894 · 1 year ago
Logic, concepts, math, i.e. "deterministic processes", seem to be missing in these language models (LMs). Either we can identify where or how the model reflects these abilities and work from that, or maybe we could use other types of models, like logic indicators, "demonstrators", etc., in conjunction with LMs. On the one hand, humans are capable of "unconscious intuition" (similar to LMs); on the other, we can reason, we have formal languages, etc. To me, that combination of abilities is what defines human intelligence.
@gabrielescheler2522 · 29 days ago
kzbin.info/www/bejne/rJrHknt6q72AgtE
@Klangraum · 1 year ago
That's very useful information, because you don't really know where to start investigating the topic. It's also impressive that the Wolfram Language can manage a representation of that mechanism. What surprises me, however, is how ChatGPT includes different contexts in its predictions, because there are certainly multiple interpretations of the large number of learned text structures if the context is not clearly defined at the beginning of the conversation.
@catalinfilipoiu3264 · 10 months ago
Ppp
@catalinfilipoiu3264 · 10 months ago
Pppl
@dr.mikeybee · 1 year ago
Beyond any doubt, this is the best lecture for understanding what lies behind NLP and NLU. I find that many professionals who work with models don't understand why these models work so well and what they do. You can't get the depth of understanding of semantic space from reading "Attention Is All You Need" that you get from this video; that understanding is missed. I wonder how this understanding happened. Was it found piecemeal, or was it accidental? Was it understood only after this architecture first worked?
@PeggyMiles · 1 year ago
I can't read the screens that you display. Is there a way you could provide them so that they are easier to read for those of us with accessibility challenges? Thank you. Thank you for all your contributions.
@sillystuff6247 · 1 year ago
Dear Stephen, I am a grateful viewer of your videos. Please consider using the awesome capabilities of Wolfram Alpha (or other Wolfram tools) to: a) convert the audio from your videos into text; b) create a segmented timeline of your videos by topic/question. Video is wonderful, but hard to search. Your _History of Science_ videos are a unique resource that will be valuable far into the future. It's possible that no one has ever illuminated scientific discoveries, from multiple angles, as well as you.
@JustinHedge · 1 year ago
This is a great idea, imo.
@xl000 · 1 year ago
The subtitles are generated automatically by YouTube, and they're pretty much 99%+ accurate... look for CC in the options. It's pretty much a solved problem for standard speech, and has been for years.
@JustinHedge · 1 year ago
@xl000 Agreed; I think he meant the manual transcript-summary feature, which is probably another thing already essentially automated. Probably as simple as enabling it in the upstream process; just YT nice-to-haves.
@ozne_2358 · 1 year ago
If you can program in Python, you can use the Whisper module from OpenAI to transcribe audio to text. The YT channel Part Time Larry has a video showing how to extract videos from YT and transcribe them.
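A minimal sketch of that workflow, assuming the open-source package is installed (pip install openai-whisper) and the audio has already been downloaded to a local file; the filename is hypothetical:

import whisper

# load a small pretrained model and transcribe a local audio file
model = whisper.load_model("base")
result = model.transcribe("wolfram_chatgpt_qa.mp3")  # hypothetical filename
print(result["text"])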
@fram1111 · 10 months ago
I really like how he explained everything. Oh, how I wish I hadn't slept during math class. 🤣🤣
@mitchkahle314 · 1 year ago
ChatGPT is excellent at answering questions about Western music theory, but in some cases the initial answer needs prompting, especially when accounting for enharmonic equivalents.
@Ti-JAC · 9 months ago
Great info, thx! 👍
@389293912 · 3 months ago
Ah, that's why they call the API "completion". I worked with something called "Hidden Markov Models" to decompose documents and recognize parts like title, author, subject, etc. This was done by training on already-labelled documents until the model had a "path" of most likely joined words.
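A minimal sketch of that HMM idea: label each word as TITLE or BODY by Viterbi-decoding the most likely state path. All probabilities here are invented for illustration:

def viterbi(words, states, start_p, trans_p, emit_p):
    # best (probability, path) for each state after the first word
    best = {s: (start_p[s] * emit_p[s].get(words[0], 1e-6), [s]) for s in states}
    for w in words[1:]:
        best = {
            s: max(
                ((p * trans_p[prev][s] * emit_p[s].get(w, 1e-6), path + [s])
                 for prev, (p, path) in best.items()),
                key=lambda t: t[0],
            )
            for s in states
        }
    return max(best.values(), key=lambda t: t[0])[1]

states = ["TITLE", "BODY"]
start_p = {"TITLE": 0.8, "BODY": 0.2}
trans_p = {"TITLE": {"TITLE": 0.6, "BODY": 0.4},
           "BODY": {"TITLE": 0.05, "BODY": 0.95}}
emit_p = {"TITLE": {"chatgpt": 0.3, "explained": 0.3},
          "BODY": {"the": 0.2, "model": 0.2, "works": 0.2}}

print(viterbi("chatgpt explained the model works".split(),
              states, start_p, trans_p, emit_p))
# -> ['TITLE', 'TITLE', 'BODY', 'BODY', 'BODY']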
@jeffwads · 1 year ago
What I find interesting is how they inject objective pattern recognition into the model to aid in figuring out puzzles and riddles. It will provide extensive reasoning on how it arrived at its answer. GPT-4 really excels at this and has a great sense of humor to go with it.
@marcbaxter5996 · 10 months ago
I guess that only works for riddles that were already solved, where the reasoning was already established by someone ChatGPT got its data from. I don't think it could solve any riddle by itself... it can hardly do the easiest algebra.
@skylineuk1485 · 1 year ago
Great video Stephen!
@pectenmaximus231 · 1 year ago
By god this is such a good explanation, thank you
@phpn99 · 1 year ago
In essence, this sort of weighted inference over an existing corpus can only produce a deterministic set of possibilities, even if this set is enormous. We have a general problem with the notion of "intelligence", insofar as we rarely consider the difference between functional knowledge and knowledge production. These approaches to AI can produce new knowledge within the extant corpus (they can help discover previously unknown, optimal relations in the existing corpus, and that is useful), but they cannot produce new paradigms about the world. Intelligence is more than the ability to infer relations; it is the ability to change the entire coordinate system of the corpus by altering the vantage point of the observer. For this to be possible, there has to be a higher-order, synthetic model of the corpus, based on what we call logic, which is the opposite of the brute-force approach of LLMs. What we may need, to produce new paradigms, is a sparse model that embeds key structures in the language of concepts.
@cakiral · 1 year ago
Many thanks, Stephen! I absolutely enjoyed the step-by-step introduction into the layers of the matter. However, it is obvious that we are still on the technical/mechanical side of the whole journey. Still, no one is able to explain the concept and reality of infinity, or "1" or "0", but an honest struggle toward that wisdom may open new paths in learning and lead to brilliant discoveries.
@immaballin247 · 1 year ago
What I find interesting is how similar an action potential and binary Boolean values are: during an action potential the neuron's state can be considered 1, and 0 when it is not. Biologically based memory; it could basically start as bubble memory, but in organic form. If there were a system able to interface with a neuron, and the system were addressable, it wouldn't matter which neuron migrated to which interface point; the addressing would just need to be adjusted to correct the neuron's connection. For example, for a neuron that migrated to the connection point for the eye instead of the thumb, just change the port address.
@SR-hm7cf · 1 year ago
Greatest primer/teaser for genetic algorithms and neural networks that I've seen. Thanks!
@hannahhillier5511 · 10 months ago
I fall asleep once, and this is what I wake up to? Haha
@yastraw · 10 months ago
How?! Me too
@kawingchan · 1 year ago
I speculate it does have a "global plan" of what to say next, instead of one word at a time. It implicitly has a representation of the joint probability distribution of what's to be continued… Prompting kind of brings out that distribution, from which you can extract knowledge, in its current form as pieces of text (but maybe other modalities in the near future). I was more convinced by Sutskever's take.
@thecutestcat897 · 1 year ago
Thank you so much!
@skylineuk1485 · 1 year ago
I noticed while using ChatGPT that it doesn't use underline/italics/bold for emphasis. Could they include that in the future, to relay some emotion back from ChatGPT? I have seen "!" used by it for that.
@BKNOverwatchDigital · 1 year ago
This is fascinating! Any chance there's a Cliff's Notes version or something?
@FanResearch · 10 months ago
Who would have thought that conversation was a slightly random walk through probable clumps of letters and words? Fascinating. I have to say, though, I think it's actually the human reinforcement that gives particular clumpings their perceptible meaningfulness.
@BabaChannel90 · 10 months ago
kzbin.info/www/bejne/oXOzoY2Bj8hresU (good)
@drilldrulus1235 · 1 year ago
I have a rule for writing text: always choose the word that eliminates the most other words first. If I am writing a plan, I will start this way: "My plan is..." If I am going to write an essay, I will start with: "This essay is about ChatGPT..."
@TheDavidlloydjones · 10 months ago
This is a video of Stephen Wolfram preparing to make a video. Through laziness or distraction, he did not make the actual video. The most frequently used word in the video is "um." This cleverly demonstrates the point that if ChatGPT simply used the most frequently found word in every situation, you would get very bad output.
@netquemientay-westerncount8399 · 10 months ago
Great sharing, my dear. Have a NICE day. STAY connected. FULL watching.
@Wesker-mr3go · 11 months ago
(Removed; unfair, as I did not watch the whole presentation.) In any case: great presentation so far, and huge technological respect for everyone involved in the ChatGPT project. Fascinating stuff.
@prowebmaster5873 · 1 year ago
Very compelling. I like your take on how there's a, sort of, throttle in everything. Never thought trying to understand AI would be so much fun...
@link-89 · 1 year ago
The fact that high-dimensional spaces are unlikely to have local optima just reminds me of Pólya's random walk theorem (a simple random walk returns to its starting point with probability 1 in one or two dimensions, but not in three or more).
@LL-wc4wn · 1 month ago
This isn't just a great lesson in AI, it is a lesson in how to be a good teacher. (Start with simple concepts students can grasp, and only then build up.)
@TheMaxmelner · 10 months ago
This is super interesting and I'm learning a lot, thank you for this video. I do feel the number of times I hear "ugh" and "um" is really off-putting. Sorry if that's nitpicky, but I almost couldn't make it through the beginning because of it. Ugh. Um. Ugh.
@user-dt1hx3mb4b · 10 months ago
The onion routing protocol isn't as anonymous as you think it is. Whoever controls the exit nodes controls the traffic. Which makes me in control.
@stormos25one · 1 year ago
Here is Wolfram knowing the exact number of words he has sent in email!! WOW!
@JustinHedge · 1 year ago
One aspect where I disagree with Stephen's perspective: the reinforcement-learning feedback loop is not actually a major piece of ChatGPT's success. You could create a very similar version using contextual memory buffers with the raw davinci-003 model. The RLHF just fine-tuned "good" responses and, probably more importantly, weighted some of the negative/moral issues with certain things you could generate. There's obviously been an additional layer of moderation bolted on top, for obvious reasons.
@medhurstt · 1 year ago
Oh... and finally, Stephen makes the statement "That pattern of language has occurred before". No, I don't think so. The implication is that the probability given by the weightings leading to the next word can only have come from seeing the previous word and adjusting the weightings on that combination alone, but that's not true. All the weightings, including the ones leading to the next word, have been influenced by many words from many sentences back-propagating. I don't think it's necessary for the next word to have been seen before for that pattern to emerge.
@JustinHedge · 1 year ago
You are correct.
@armin3057 · 1 year ago
"I dont think its necessary for the next word to have been seen before for that pattern to emerge." I'm not even sure what you are referring to. Large models learn concepts in the latent space from patterns they observe, so the model predicts the word that makes the whole pattern most likely. It takes the whole context into account, so it is still the case that the probability reflects data it has seen, even if it has not seen the exact sentence before.
@medhurstt · 1 year ago
@armin3057 Stephen's claim was that the next word chosen had been seen before as a word pair. It was towards the end of the talk, but unfortunately I didn't take note of the timestamp. In other words, an AI could never produce unseen word pairs like "moose feathers", because "feathers" never followed "moose" in its training set. I took note of the quote at the time: "That pattern of language has occurred before".
@SK-le1gm · 6 days ago
Thank you for this… Language is fascinating
@onaecO · 1 year ago
Very interesting!!! THX
@steemglobal8011 · 1 year ago
New drinking game: every time Mr. Wolfram says "Umm", take a drink!
@stevehenry6669 · 1 year ago
Well said 👍
@ChazyK · 1 year ago
Can the weights and biases be complex numbers instead of reals? And what effect would that have on performance?
@aaronmicalowe · 1 year ago
The thing I like about ChatGPT is that you can tell it some information, then ask a question, and it can get it wrong; but you can then say "no, you got it wrong", and if you figure out its break in logic, explain why it got it wrong and what wrong assumptions it made, and correct that, it learns. Do that enough times and you can break down any concept, no matter how nuanced and complicated. I've done this. It works. But I only used ChatGPT for one day and never since. Why? Because it's not capable of any truly new and original thought. It can only spit out what we already know. So if the world thinks lemmings jump off cliffs, then so does ChatGPT. Again, you could dig down into it, ask why it thinks lemmings jump off cliffs, and show its assumptions are unproven, but that's no better than talking to a human, and there are over 8 billion other natural ChatGPTs on this planet that already do that. At that point, I lost interest. It's like a boat without a rudder.
@FajWasNotFound · 1 year ago
The overall outcome will be exciting when this goes final and becomes applicable as a worldwide platform for learning and everything else. For now it's too EARLY to tell.
@pandabearguy1 · 1 year ago
I use it a lot to help me write and fix code, and also to explain things for me or piece things together. It's a great partner/tool to use if you have some good input and existing knowledge.
@MD-kf1cn · 1 year ago
I'd love to see more in-depth analysis like this on the current LLM topic, drawing on the doctor's expertise.
@naxus3594 · 8 months ago
I fell asleep and in the morning I ended up here
@xy4489 · 1 year ago
Thank you.
@doowey22 · 1 year ago
What comments would you make about notable observations between different cultures' outputs, given similar topics as inputs, using ChatGPT-4?
@duudleDreamz · 1 year ago
When will we see the next obvious hybrid system: ChatGPT + WolframAlpha/Mathematica?
@Casevil669 · 1 year ago
There's one already on Hugging Face if you're interested
@Klangraum · 1 year ago
I guess ChatGPT can learn the Wolfram Language as it does other languages too.
@JustinHedge · 1 year ago
@Casevil669 I think he means more at-scale/commercial applications. Similar projects have already been done with the API + Wolfram for some time now, hobbyist architectures aside.
@duudleDreamz · 1 year ago
@Casevil669 Thanks for the reference. Have you tried it? Does it attempt to determine which of WolframAlpha (for maths/facts-heavy questions) or ChatGPT (for text-based questions) will be better at answering any given question, or does the integration go deeper than this? Any good?
@marcusmarcula · 1 year ago
It was helping me program things in Mathematica the other day, actually
@emilywong4601 · 11 months ago
2:57 An episode of Star Trek had aliens that were brains with no bodies.
@stehlampe1207 · 10 months ago
So fascinating! I guess one major difference between the way the human brain and GPT handle language is that human brains use emotions to categorize objects and concepts… I wonder if it would be possible to teach GPT emotions, and what the result might be?
@marcbaxter5996 · 10 months ago
It doesn't even know what it is saying; it just predicts the next word. So I'd say the biggest difference would be knowing what you want to say, instead of just guessing the next word on probability…
@PointEndClick · 1 year ago
This video is awesome.
@Joeyrobertparks · 10 months ago
Fascinating. Love the demystification. Like unpacking how the greatest magic trick in the world is accomplished. Inspiring! Idea-generating! Thank you, Wolfram!
@cdorman11 · 7 months ago
Ask ChatGPT to list celebrities with the last name Charles and it can't do it. But then when you point out that "Prince Charles" does not have "Charles" as a last name, it apologizes for its mistake, and then repeats the mistake. It's slow to learn. Type in a system of linear equations and it can't solve it. It can't "understand" the concept of substitution, i.e. confining a variable to one side of the equality sign before making a substitution. In other words, it will try to drop x_2 = 0.5x_2 - 5x_1 + 3 into another equation. It spreads common misconceptions, instead of being a remedy to common misconceptions.
@Silly.Old.Sisyphus · 1 year ago
2:14:02 Stephen says "verbs and nouns go this way..." and shows the old (very old: as old as Aristotle) idea of context-free phrase structure grammar based on subject-predicate. But that's a wrong idea, because the real grammar of English is both simpler and more elaborate. *The Natural Topology of English* includes subject-predicate as one of its basic forms, but there are others, such as event = agent + action + object, which is an instance of concept - relation - concept.
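To make the contrast concrete, a minimal sketch of both top-level forms as toy phrase-structure rules; the rules and vocabulary are invented for illustration, not taken from the video or the commenter:

import random

# toy rules, including the comment's event = agent + action + object form
grammar = {
    "SENTENCE": [["SUBJECT", "PREDICATE"]],
    "EVENT": [["AGENT", "ACTION", "OBJECT"]],
    "SUBJECT": [["the cat"], ["a linguist"]],
    "PREDICATE": [["sleeps"], ["writes grammars"]],
    "AGENT": [["the cat"], ["a student"]],
    "ACTION": [["chases"], ["describes"]],
    "OBJECT": [["a laser dot"], ["the grammar"]],
}

def expand(symbol):
    # terminals have no rule; otherwise pick a rule at random and recurse
    if symbol not in grammar:
        return symbol
    return " ".join(expand(s) for s in random.choice(grammar[symbol]))

print(expand("SENTENCE"))  # subject-predicate form
print(expand("EVENT"))     # agent-action-object form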