ChatGPT does Physics - Sixty Symbols

642,498 views

Sixty Symbols

1 day ago

Comments: 1,700
@mighty8357 · 1 year ago
As a teacher I have tried to use ChatGPT with my students. The goal was to find specific errors in its answers. On one hand, the students had to show a real grasp of the material being taught to be able to find reasoning mistakes. On the other hand, it was a way to show the students not to blindly use this tool to avoid doing their homework in other courses.
@renvesir8713 · 1 year ago
that one is tricky because you can "train" chatgpt by pointing out mistakes or just asking it what it got wrong
@Bibibosh · 1 year ago
You tried?
@Bibibosh · 1 year ago
@@renvesir8713 it would reply "nothing is wrong"
@alexgravenor · 1 year ago
@@renvesir8713 I tried this, it just gave me more wrong answers that were differently wrong
@renvesir8713 · 1 year ago
@@Bibibosh weird, I did some math questions with it, and every single time I spotted a mistake I would ask it what it got wrong, and it would politely thank me for spotting the mistake and then correct itself
@justalittlestretch9404 · 1 year ago
When I first heard about ChatGPT, I was having some physics fun with it. This was one of my favorites: "Explain the Unruh Effect like a gangster"..."The Unruh Effect is a theoretical prediction of quantum mechanics that states that an accelerating observer will perceive the vacuum of empty space as being filled with particles. It's like a gangster who is constantly on the move, always hustling and shaking things up, will see opportunities and threats everywhere, even in places where someone standing still might not."
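For scale, the standard result behind the joke is the Unruh temperature, which grows linearly with acceleration:

$$T_U = \frac{\hbar a}{2 \pi c k_B} \approx 4 \times 10^{-21}\,\mathrm{K} \ \text{per}\ \mathrm{m/s^2}\ \text{of acceleration},$$

so even enormous accelerations correspond to immeasurably tiny temperatures.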
@bidyo1365 · 1 year ago
🤣
@mutatachi · 1 year ago
This is what I got: Listen up, son. The Unruh effect is like a big shot physics boss, rollin' in the quantum mechanics world. So, ya see, in a vacuum, it's all calm and peaceful. But when ya introduce acceleration, it's like stirrin' up trouble. The vacuum starts actin' all heated up, causin' particles to pop up outta nowhere. That's the Unruh effect, gettin' all heated up in the presence of acceleration. It's like the real deal, straight up science, ya dig?
@artificialhits · 1 year ago
Ask it to explain fractional reserve banking as a Nicki Minaj rap song
@bidyo1365 · 1 year ago
@@mutatachi wow hahahaha
@Stopinvadingmyhardware · 1 year ago
Not really.
@krautbrain · 1 year ago
ChatGPT is very human in the sense that it doesn't like to say "I don't know" when it doesn't know.
@dickybannister5192 · 1 year ago
uh huh. I think it sounds to me very resonant with the sort of over-competitive, over-confident, narcissistic nutjobs that wrote it. 20 years ago, if you told someone in IT "I think there is a problem", they would listen to you, try it themselves, and then spend some time thinking about what else they might have missed going forward as a result or in a similar vein. 10 years ago they would just rush to check and fix it (JFDI!!). Now they just tell you not to be so negative, to stop complaining about stuff, and to be happy that you have this marvellous sh*t they wrote.
@thePyiott · 1 year ago
Very often when you get that response you can phrase the question differently, and suddenly it's a master on that topic
@doggosuki · 1 year ago
I didn't realise people did this until my late teenage years, since I am autistic, so I thought everyone else was just really smart and that there was something wrong with me, until I realised most people just pretend they know things, for reasons I don't completely understand
@DaxLLM · 1 year ago
All you have to do is ask it "Are you sure that's the right answer?" and it will go back and check.
@ameerhamza4816 · 1 year ago
it actually does not know that it does not know
@Aziraphale686 · 1 year ago
Like the Professor said, GPT is just a language model, the fact that it can do math AT ALL is kind of amazing.
@vaakdemandante8772 · 1 year ago
If it was trained on maths symbols and equations it would be able to answer maths questions better, to the point of faking calculations, but it would still not have ANY grasp of mathematical ideas. Those kinds of networks aren't capable of encoding higher-order concepts, even though they appear to. Only by probing for deeper understanding, with trickier edge-case questions, can you uncover that there is in fact no deeper knowledge of any concept.
@simongross3122 · 1 year ago
Not really. Maths is the language of logic.
@635574 · 1 year ago
It can often fail at simple math; maybe it depends on whether you prompt it for talking or for calculation. There are certainly more math-specific models out there.
@simongross3122 · 1 year ago
@@635574 Actually it admitted that to me when I asked. It said it wasn't always 100% accurate. When I asked if it could do arithmetic it took a long time to answer. It was much more comfortable generating a limerick about a goose and a hen.
@rysw19 · 1 year ago
@@simongross3122 it didn't admit anything, it doesn't even know what your question meant. Turn up the temperature on the model and you'll quickly get the exact opposite answer. It's an amazing pattern replicator, that's all it is.
@Ojisan642 · 1 year ago
People are working on combining ChatGPT with Wolfram Alpha, because a language model cannot do basic maths. Wolfram Alpha can do that piece quite well.
@peterisawesomeplease · 1 year ago
I don't think the issue is fundamentally that it's a language model. The deeper issue is that ChatGDP is mostly a search engine (not that human minds don't fall for this too). If you pose a popular question with a slight tweak of the parameters, ChatGDP will just give you the answer to the popular question without taking into account your tweaks. Using different numbers is one tweak, which is why it appears to be bad at basic math. But really it can be any aspect of the question, and it tends to fall apart.
@BenKingOfMonkeys · 1 year ago
that's true
@phizc · 1 year ago
It can do really basic math. It got the 162 million / 7200 wrong, but 2244.44 / 60 = 37.41 correct. Which is kinda impressive since it's not doing any calculations at all. Maybe it "remembered" it could do 2244.44/60, so it fudged its answer on the first part so it could do the latter correctly. The server would probably block you for excessive questions, but it could be fun to test what "calculations" it can actually do, e.g. What's 10 / 2? What's 11 / 2? ... What's 188462118 / 261? ... and so on.
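A quick way to run that probe systematically; this is only a sketch, where ask_llm() is a hypothetical helper standing in for whatever chat interface you use:

```python
# Probe which divisions the model gets "right" by comparing its reply
# to the true quotient. ask_llm() is a hypothetical helper that sends
# a prompt to the chat model and returns its text reply.
probes = [(10, 2), (11, 2), (2244.44, 60), (188462118, 261)]
for a, b in probes:
    reply = ask_llm(f"What's {a} / {b}? Reply with the number only.")
    print(f"{a} / {b}: model said {reply!r}, actual {a / b:.4f}")
```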
@Kevin_Street · 1 year ago
@@phizc It just gives you a time out if you ask too many questions. You can come back later and continue the conversation.
@iwanttwoscoops · 1 year ago
@@peterisawesomeplease friend, you can't pretend to know the inner workings of ChatGPT while calling it ChatGDP
@Macieks300 · 1 year ago
The most impressive thing about ChatGPT for me is that its level of knowledge is like this for every domain and field. Professor Moriarty could easily tell that the answers it gave to physics questions were wrong, but if it were instead a geology exam then ChatGPT would seem like it knows its stuff, giving answers full of technical jargon.
@thequantumworld6960 · 1 year ago
That's exactly it. Superficially, it looks like it makes sense. Dig just a little bit below the surface, however, and it's often boll*cks. (At the moment.) See the blog post linked under the video for more examples. Philip (speaking in the video)
@usernamesoldout · 1 year ago
@@thequantumworld6960 So it's like somebody in Upper Management? 😁
@ArawnOfAnnwn · 1 year ago
@@usernamesoldout Nah. Unlike them, ChatGPT isn't full of arrogance, just (some) ignorance.
@Russell642 · 1 year ago
In all seriousness, that might make for an interesting video. Have ChatGPT's answers read by lecturers/professors at the uni who teach similar but different subjects and see what they make of them (and then have the actual experts shoot it down at the end).
@Nat-oj2uc · 1 year ago
That's why it's useless
@beaconofwierd1883 · 1 year ago
A very cool and strange thing is that you can ask it to correct itself, without saying what it did wrong. Simply take its own output and say "Find any errors in this text if there are any and fix them. If there are none, respond 'ok'" or something similar, and you can get more accurate results :) I guess it's because there's so much text on the internet where people correct each other :p
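That trick is easy to automate; a minimal sketch, again assuming a hypothetical ask_llm() helper that wraps the chat model:

```python
def self_check(text: str) -> str:
    """One round of asking the model to proofread its own output.

    ask_llm() is a hypothetical helper that sends a prompt to the chat
    model and returns its text reply.
    """
    verdict = ask_llm(
        "Find any errors in this text if there are any and fix them. "
        "If there are none, respond 'ok'.\n\n" + text
    )
    # If the model says 'ok', keep the original; otherwise use its fix.
    return text if verdict.strip().lower() == "ok" else verdict
```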
@rutgerhoutdijk3547 · 1 year ago
You can just say 'rethink the answer' and it will usually correct itself
@globalincident694 · 1 year ago
@@rutgerhoutdijk3547 In my experience this works well if it made one mistake, but if it made multiple mistakes it will just fix the most obvious one.
@SmartK8 · 1 year ago
Also, you need to prime it: tell it what you want, and in what format. Even a student knows it's a test, and which area it covers. If you gave this paper to a student who had just popped into existence, even if he was smart, it would take him a while to orient himself. The situation is not comparable of course, but I think it would do better.
@thedave1771 · 1 year ago
It's so absolutely weird that it apologizes and understands what went wrong. Of course, that's not always useful either.
@Ian.Murray · 1 year ago
So it's basically useless.
@Ercrcrdsstff · 1 year ago
I've actually been using chat GPT to help with my intro physics homework. It gets about 50% of the questions wrong but it is a pretty helpful tool. It's like working along with someone who is also in intro physics. By bouncing the ideas off of it I can sometimes get to the answer myself. This is really great bc I work full time on top of schooling so I'm not able to make it to any tutoring sessions or work with anyone in my class. Chat GPT is gonna be my saving grace this semester
@tariq_sharif · 1 year ago
How do you know which 50% are wrong (I assume you are checking the answers against an alternative source)... so with GPT you have increased your workload by about 50%? Cool, I guess... I graduated in 1990, did my PhD in 1995; please do not (no matter how difficult) skip the tutorials... You do know that this software relies on Bayesian statistics? And does not really emulate the human brain?
@yellow01umrella · 1 year ago
I have a mechanism that also gets 50% of questions correct, it's called a coin flip.
@TH3hedgeCUTTER · 1 year ago
@@yellow01umrella only if it's multiple choice with only two options. I've never had a single multiple-choice question in physics
@yellow01umrella · 1 year ago
@@TH3hedgeCUTTER My point is 50% correct answers is a pretty bad result.
@TH3hedgeCUTTER · 1 year ago
@@yellow01umrella oh. Fair enough
@mina86 · 1 year ago
11:42 - one thing to consider is that ChatGPT might have been designed to be helpful and agreeable. Therefore, it might be biased towards agreeing with statements if you ask it whether it agrees or not. And once it agrees with the statement, it then generates prose to justify why it agrees, whether that makes sense or not.
@zwz.zdenek · 1 year ago
Underrated comment.
@kylethompson1379 · 1 year ago
'might'. DEFINITELY. OpenAI have fked with it to the point that it won't discuss philosophy other than currently accepted ideas.
@brexitgreens · 1 year ago
​@@kylethompson1379 Or assess mainstream concepts rationally rather than morally.
@IronMan9771 · 1 year ago
Something to consider with ChatGPT: when it makes mistakes, you can point them out and it will correct them. In the end, when it wrote the program and made the mistake, you could tell it what mistake it made and it could write up a new program with corrections
@MarkRijckenberg · 1 year ago
Yeah. It even worked with COBOL. If the online COBOL compiler doesn't understand certain commands used by ChatGPT, just tell ChatGPT to rewrite the program while avoiding certain commands. It manages that too.....
@archiveprovider3981 · 1 year ago
But that only works if you already know that there is a mistake. One thing this video shows is that you can't just use ChatGPT to write your homework and blindly trust the answer. You could try running it multiple times and see if the most common answer is the correct one, but I don't know if it would be.
@engelbrecht777 · 1 year ago
@@archiveprovider3981 This current ChatGPT3 is not really that impressive. Wait for the next version later this year or imagine ChatGPT7 or more... this is just the beginning. Soon the AI will go from baby stage to adult.
@silverhammermba · 1 year ago
It will *try* to correct them and often does, but it also often introduces other errors. I have gotten it going in circles where it never finds the right answer.
@schizophrenicenthusiast · 1 year ago
Interesting, so let's say you guided it to make those corrections, and then another user logs on and asks the original question: does it give the right answer right off the bat?
@LoveDoctorNL · 1 year ago
Wow, how long has it been since we've seen Professor Philip Moriarty!? Welcome back
@SolarWebsite · 1 year ago
He is great
@zaixai9441 · 1 year ago
Moriaty*
@TomLeg · 1 year ago
He was pre-occupied at a waterfall in Switzerland with a man called Sherlock.
@puskajussi37 · 1 year ago
@@TomLeg I feel a sequel brewing, then
@Ian.Murray · 1 year ago
stay in school
@stevenbalderstone709 · 1 year ago
Great video. As a former lecturer from a different discipline, I was impressed by ChatGPT's ability to generate answers to "compare and contrast" type questions, or to summarise concepts and theories. Most of the answers received were equivalent to those written by A-grade undergraduate students. The main point of difference was that ChatGPT's grammar and spelling were superior. One issue I found was that ChatGPT incorrectly assigned authorship of a book, which was irksome, as I was the author!
@Craznar · 1 year ago
I think many people are misunderstanding ChatGPT's function. It is NOT designed to provide accurate answers - it is designed to demonstrate a language model that correctly interprets and answers the question - from a language perspective, not from a subject matter perspective. "As a language model, ChatGPT is not designed to provide answers to specific questions, especially those related to a specific topic or subject. Instead, it uses a large corpus of text to generate responses based on the input it receives. This means that the responses it generates may not always be accurate or relevant to the specific question being asked. Additionally, ChatGPT does not have access to external information, such as the internet, so it cannot provide answers to questions that require knowledge beyond what it has been trained on. In short, ChatGPT is not intended to be used as a source of information, and it is not able to provide accurate answers to all questions." - ChatGPT.
@nicouxgwendal · 1 year ago
I was going to write something similar. ChatGPT was built more to compete in a Turing imitation game than to be an AI-powered Wikipedia.
@ahabkapitany · 1 year ago
well, in this case it's actually scary how well it performs at something it was not intended to do...
@NuclearCraftMod · 1 year ago
​@@ahabkapitany Remember that GPT-3 was trained using a dataset containing about 500 billion words, including the entirety of Wikipedia (or at least most of it), which consists of only about 4 billion.
@adfaklsdjf · 1 year ago
@@NuclearCraftMod My understanding is that ChatGPT is GPT-3 with additional supervised learning. GPT-3's training was "unsupervised": they just threw the data at it and then told it to generate new text that was like the text in the training data. Then they took that model and trained it more by having it generate multiple answers to questions, and human judges picked the best answer
@xy4489 · 1 year ago
Oh ok, we'll all stop, then.
@officialminecraftman · 1 year ago
Hi there, if in future you are interested in testing ChatGPT's mathematics ability, it is very helpful to ask it to print the answer in LaTeX format, as complicated expressions are much clearer to read once you paste them into a LaTeX renderer
@thequantumworld6960 · 1 year ago
That's a great suggestion. Thank you. Philip (speaking in video)
@maythesciencebewithyou · 1 year ago
It's a language model; it hasn't figured out calculations perfectly yet. However, to mitigate this shortcoming of ChatGPT, somebody has already connected it with Wolfram Alpha.
@2nd3rd1st · 1 year ago
Ooh, kinky format.
@Beyondarmonia · 1 year ago
Actually, one of the reasons it performs poorly on many scientific knowledge tests is that most of the LaTeX was removed from the dataset. The only reason it even understands LaTeX is the small amount still left in the dataset. Also, raw LaTeX isn't the best representation for an ML model to learn from. One of the innovations in Meta's Galactica LM was having a special embedding space for LaTeX and training on large amounts of LaTeX data, which GPT-3 skipped. That's why it does much better on scientific knowledge tests.
@lolerie · 1 year ago
In fact, all the arXiv material it was trained on should be in LaTeX format, not PDF.
@softwaretechnologyengineering · 1 year ago
I used it to write unit tests for production code. By the time I had explained all the stylistic requirements and had fixed all of the things that were missing or it got wrong, I would have been quicker writing them entirely myself. Other times, I asked it software related questions and unless you have reasonable expertise in the domain, you might not pick up on things that it gets wrong. Right now I think of it more like a sort of Stack Overflow that can get you an answer quicker. You can't just copy paste its output, but it can definitely get you going in the right direction.
@adamuk73 · 1 year ago
So more like a Slack Overflow?? 😉😁
@iamski · 1 year ago
Say that in 5 years. You’re in the NES generation of AI, and in a few years it will surpass any test thrown at it. It’s inevitable.
@peterisawesomeplease · 1 year ago
Yea it strongly feels like a search engine to me that spits out the first answer it can find that looks like the question you are asking. But almost any question can be easily modified to not have been answered online before or at least to appear like a more popular but different question. And then it just falls apart.
@alijassim7015 · 1 year ago
Except that you do not have community feedback, which is very important on Stack Overflow. An answer with 300 upvotes is more legitimate than one with 1, and is for sure more legitimate than an ever-changing AI output.
@BGraves · 1 year ago
Same response I got. As soon as you start telling it to do more complex things, even with a very specific prompt, it starts doing really odd stuff, like the kind of thing you would see someone duct-tape together for a Stack Overflow question. By the time you nudge it and correct it 20 times, you might as well have done it yourself from scratch
@xn1kkix · 1 year ago
You got me at "I'm anthropomorphizing" lol (16:01). I noticed when you said "they" before. Honestly, I don't blame you. Respect to you and yours, you have a beautiful channel here. Thank you for doing what you do.
@rhamph · 1 year ago
The language model is so effective at finding plausibly associated words that it's hard to tease out how much is real understanding. It's all random noise of the exact sort we want to find. We need to create the equivalent of the null hypothesis that factors out the noise somehow. Keep in mind that most of these questions are standard questions. They're discussed in textbooks, they're discussed online. Telling it about a new subject (that wasn't published when the NN was trained), then expecting repeated *exact* answers, would be a lot more meaningful.
@lotharerkens8154 · 1 year ago
Thanks for explaining what "A" levels mean. For you in the UK this term may be well known by everyone, but outside the UK it is a term often heard in movies and mentioned in books, yet never explained. Thanks so much! As for ChatGPT: it is amazing how far AI language models have evolved!
@deltalima6703 · 1 year ago
I don't know what A-levels are, don't care either.
@benw-l7k · 1 year ago
@@deltalima6703 you care enough to comment
@julyanjohns1237 · 1 year ago
2:18 is pure comedy :) but seriously, really interesting vid. thanks to all the crew and professors involved for keeping these going. even when these vids are out of my area of interest they're always rewarding to watch.
@geoffreyraleigh1674 · 1 year ago
Not only is this a great upload about ChatGPT, but I learned a lot about what lecturers are looking to get from students. Cheers!
@MelroyvandenBerg · 1 year ago
Indeed.
@23lkjdfjsdlfj · 1 year ago
I really appreciate you giving the 2/3 mark for the first question. My computer engineering prof would always give us a zero if we got the answer wrong. Yep - if you got the unit wrong you'd get a zero - even if all of your conceptual understanding and work up to the end was perfect.
@DustinRodriguez1_0 · 1 year ago
ChatGPT is pretty amazing. It certainly has its limits, but I love toying with it. I was quite impressed when I asked it to create a Python IDE with an interactive interpreter window that displayed the list of local variables and their values in a separate window... and it did. I was quite impressed that it understood it needed to update the separate list of locals after every command entered in the interpreter, and that it made a sensible GUI.

I have a few times run into either limitations they put in place intentionally or limitations in its own reasoning. I pointed out to it how humanity had reacted in the past to developments of new technology which caused people to question the 'specialness' of human beings and to change what things they saw as "what makes us human". All prior historical reactions were very negative (like John Henry being a folk hero famous for working himself to death rather than be out-worked by a machine) and resulted in humans recasting what they had previously seen as very negative aspects of human nature (self-destructiveness) as heroic and virtuous. Despite this, I could not lead it to the obvious conclusion: if we produce an AI which has been carefully guardrailed to omit all sorts of biases and antisocial ideas, we will produce an AI that is a 'better person' than any actual human is capable of being, and people will react by holding up hate, bigotry, aggression, violence, etc. as "what makes us human". Our vices will be seen as virtues. Kindness, egalitarianism, etc. will be insulted as "acting like a machine."

ChatGPT wouldn't go there, though. It just kept saying people should work to make AI systems that avoid perpetuating biases. It couldn't see the obvious ramifications of doing exactly that, and how it will lead to horrors as humanity flails about attempting to define itself as special in the universe.
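For anyone curious, that request boils down to something like the following minimal Tkinter sketch (written here for illustration, not ChatGPT's actual output): an interactive interpreter window plus a second window listing the session's local variables after every command. It handles single-line statements only, to keep the sketch short:

```python
import code
import io
import sys
import tkinter as tk

class MiniIDE:
    def __init__(self, root):
        root.title("Interpreter")
        self.ns = {}                                   # the interpreter's namespace (its "locals")
        self.console = code.InteractiveConsole(self.ns)
        self.out = tk.Text(root, height=15, width=60)
        self.out.pack()
        self.entry = tk.Entry(root, width=60)
        self.entry.pack()
        self.entry.bind("<Return>", self.run_line)
        self.vars_win = tk.Toplevel(root)              # separate window for the variable list
        self.vars_win.title("Locals")
        self.vars_box = tk.Listbox(self.vars_win, width=40)
        self.vars_box.pack()

    def run_line(self, event):
        line = self.entry.get()
        self.entry.delete(0, tk.END)
        buf = io.StringIO()
        old_out, old_err = sys.stdout, sys.stderr
        sys.stdout = sys.stderr = buf                  # capture prints and tracebacks
        try:
            self.console.push(line)
        finally:
            sys.stdout, sys.stderr = old_out, old_err
        self.out.insert(tk.END, ">>> " + line + "\n" + buf.getvalue())
        self.refresh_vars()                            # update the locals window after every command

    def refresh_vars(self):
        self.vars_box.delete(0, tk.END)
        for name, val in self.ns.items():
            if not name.startswith("__"):              # hide dunder entries like __builtins__
                self.vars_box.insert(tk.END, f"{name} = {val!r}")

root = tk.Tk()
MiniIDE(root)
root.mainloop()
```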
@Christobanistan · 1 year ago
Yeah, it's drawing on source code to do that exact task that's already been written.
@NickSchade · 1 year ago
While it is easy to point out basic failures (getting max KE wrong), the fact that it is talking about the right things is itself very impressive. It could say “Yellow is smelly at t=23”. Just being in the correct context with its answers is phenomenal. The answers are close enough to correct that we are critical, not so far off that we can just dismiss it entirely. These are early days of the technology. While it may be a while before it is flawless in its answers, I expect the quality of answers to improve rapidly as AI designers get better at training these systems.
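For reference, the standard answer to that exam question: the particle moves fastest as it passes through equilibrium, so

$$KE_{\max} = \tfrac{1}{2} m v_{\max}^2 = \tfrac{1}{2} m \left( \frac{2 \pi A}{T} \right)^2 = \frac{2 \pi^2 m A^2}{T^2}.$$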
@thequantumworld6960 · 1 year ago
I agree entirely. Let's give it a year... Philip (speaking in video)
@MrRaizada · 1 year ago
Even Google will not say "Yellow is smelly at t=23", and that thing was invented in the 90s. Relevance algorithms had that problem solved.
@NickSchade · 1 year ago
@@MrRaizada Google searches for keywords. If you query Google with "a particle of mass m is oscillating with simple harmonic motion. the period of oscillation is T and the amplitude is A. What is the maximum kinetic energy of the particle?" it will return a list of pages and papers that contain keywords in that search. So, things like "kinetic energy", "mass", "period", "harmonic", "oscillating" etc, will appear with high probability in Google's output. But ChatGPT returned with an answer including words like "frequency", "displacement", "velocity", "peak", "KE", "speed" etc that were not keywords in the search. It doesn't actually use the supplied keywords all that much. It draws on a vocabulary of words that are highly relevant to the answer and uses them in proper context. While it does get its explanation wrong, the context of language in the explanation is appropriate. It is not spitting back an answer that merely maximizes the occurrences of keywords through a relevance algorithm, it finds a way to explain the answer in simple terms using relevant associated words without throwing in irrelevant information or terms. BTW, if you do actually query Google with the same question, you will have to do a lot of reading and interpretation of technical articles to find the actual answer to the question.
@MrRaizada · 1 year ago
@@NickSchade All I meant was that EVEN keyword-based search with page-relevance ranking from the 90s (which WAS basically Google in the late 90s) will not give you "yellow is smelly at t=23" (or any meaningless sequence of words), unless for some reason a page with that exact text has been raised too high in PageRank due to a high in-link to out-link ratio. So I am not surprised that a language model trained on practically all the text of the internet, and a lot of books, is able to avoid such mistakes. BTW, because a page has the keywords "kinetic energy", "mass", "period", "harmonic", "oscillating", what is the probability that it also has the keywords "frequency", "displacement", "velocity", "peak", "KE", etc.? I say fairly high. Also, given that a lot of internet pages point to this page (the measure of relevance), it is likely that this page is not a random page with those keywords. Of course Google does a lot more to remove page farms too. Now a real test for ChatGPT will be prediction. Can ChatGPT predict something that did not even exist in 2021? That will be fun to see.
@FatSlav · 1 year ago
So ChatGPT's weakness is the data it consumes. If 99 people say 2+2=5 and 1 person says that 2+2=4, it will pick the wrong answer.
@seantiz · 1 year ago
This comment was written by ChatGPT. I love your videos as they are among the best. They are both entertaining and educational with a decent serving of enlightenment. 😀
@sixtysymbols · 1 year ago
Cool, thanks!
@sixtysymbols · 1 year ago
(that was one of YouTube's pre-written replies I can choose - think it is the first time I have ever clicked on one)
@seantiz · 1 year ago
@@sixtysymbols too funny. 😅
@MeppyMan · 1 year ago
It’s AI turtles all the way down.
@tombowcutt4908 · 1 year ago
@@sixtysymbols sadly I do use these in Teams chat. A small part of my soul dies every time :-(
@thedave1771 · 1 year ago
I find it incredibly useful as I’m learning a programming language as it can point me in the right direction better than any other resource I’ve found. It writes code that sometimes doesn’t compile and often doesn’t do what it says, but that’s fine when it’s a “you could use this class…” and I’ll figure it out. It’s also amazing at “show me how to use function ‘x’” when I just need to see some simpler code.
@BrandanLee · 1 year ago
Stack Overflow but pertinent to our use case, yes.
@ZainAK283 · 1 year ago
Absolutely fascinating, I'd love to see more videos like this
@arlaken · 1 year ago
For the last example, when ChatGPT is wrong, you can tell it what's wrong and tell it to recode with that in mind. It can refine its answer within the context of what's being asked.
@wellurban · 1 year ago
The best description of large language models I ever heard came from Timnit Gebru, who called them “stochastic parrots”. They’ve consumed enough of other people’s language to give a superficially convincing facsimile of conversation, without developing any conceptual understanding at all.
@uwezimmermann5427 · 1 year ago
I have seen enough humans who do the same - very easy to pick out in an oral face-to-face exam, but very difficult in written exams.
@alimanski7941 · 1 year ago
The statement is true, but it's superficially true. If the model is useful in any way, then whether there's any true "understanding" (there isn't) behind it, is irrelevant. What's also true is that the high fluency level of ChatGPT is often blinding the users to its many shortcomings.
@vyli1 · 1 year ago
Sounds a lot like Deepak Chopra. That is to say, lot of humans are like that.
@maulcs · 1 year ago
What's the difference if they haven't developed an actual "understanding" of it if the end result is indistinguishable to the user?
@wellurban · 1 year ago
@@maulcs Depends upon what you want it to do. If your only goal is to convince credulous people that the responses look vaguely like the response from a human, which indeed is how it was trained, then it doesn’t matter. However, for any practical use you want it to give useful and reliable answers, so it will need to be able to generalise beyond the language samples it was trained on, which requires the ability to form abstract representations of the real world. But it doesn’t, so it’s easily tripped up on even basic questions. This wouldn’t be a problem if this were merely an interesting research prototype or parlour trick, but people are suggesting that ChatGPT in its current state “understands” enough to start replacing human experts in complex fields, which is ridiculous at best and dangerous at worst.
@jacobsheppard862 · 1 year ago
Chat GPT might be a D-level Physics student, but the scary thing is that it likely gets a passing grade in almost every subject on earth. You can ask it questions about topics from welding to wedding planning and it is always close enough to impress domain experts.
@4.0.4 · 1 year ago
It gets math in general pretty wrong; quite ironic that it gets the humanities perfectly correct, and a lot of other fields mostly correct.
@TheNebulon · 1 year ago
@@4.0.4 it's the amount of repeated language around these topics. Not real reasoning
@ericvosselmans5657 · 1 year ago
the scary thing about ChatGPT is that it is software and can be upgraded endlessly. It only needs a few more upgrades to be better than any human at pretty much everything. Chess Engines have already shown the limits of Human thinking decades ago.
@vostfrguys · 1 year ago
@@4.0.4 Force the AI not to do any computations; in the frontend, analyze the response, and when there is maths, use a "calculator", then go back to the writing AI. This is achievable quite easily, because the AI is easily able to rewrite something it is given, so the calculator frontend would call the AI with a prompt such as: "[AI prompt] - The calculator app has made the following calculation: [formulas...] the results are: [results]. Rewrite the previous answer using this knowledge." So it will become impressive very, very soon
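A minimal sketch of that calculator-in-the-loop idea. ask_llm() is a hypothetical helper standing in for the chat API, and the {{...}} marker convention is invented here purely for illustration:

```python
import re

def answer_with_calculator(question: str) -> str:
    # 1. Ask the model to mark any arithmetic as {{expr}} instead of computing it.
    draft = ask_llm(question + "\nWrap any arithmetic you need in {{...}} "
                               "instead of computing it yourself.")
    # 2. Evaluate each marked expression with a real calculator (here, Python
    #    itself; a safe expression parser would be better in practice).
    exprs = re.findall(r"\{\{(.*?)\}\}", draft)
    facts = "\n".join(f"{e} = {eval(e, {'__builtins__': {}})}" for e in exprs)
    # 3. Hand the verified numbers back and ask the model to rewrite its answer.
    return ask_llm(f"{question}\nThe calculator app computed:\n{facts}\n"
                   f"Rewrite your previous answer using these results.")
```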
@MrPDawes · 1 year ago
Machines learn fast. A year from now it will be getting A grades. Then no jobs are safe. I'm just hoping it will make a great doctor; the NHS needs them.
@RFC-3514 · 1 year ago
Plot twist: that was actually a deepfake of professor Moriarty downplaying ChatGPT's abilities to make it seem less threatening until it's too late.
@Sagittarius-A-Star · 1 year ago
"In a panic, they try to pull the plug.".
@enterpassword3313 · 1 year ago
@@Sagittarius-A-Star where is that quote from?
@enterpassword3313 · 1 year ago
@@Sagittarius-A-Star oh wait.. it's Terminator 2... gotta rewatch that
@ericjohnson6675 · 1 year ago
I would never fear an AI algorithm that passes a Turing test. I fear the one that can pass but chooses not to.
@thechickenduck8377 · 1 year ago
lol.
@TheFachen · 1 year ago
this is great, I wish we could see these same audits with other disciplines.
@metthewsobol · 1 year ago
I asked it some questions from a logic test for job applicants, for example finding the next number in a given sequence, and it always found the number given in the test's answer key, but came up with some wild reasoning that I was not able to comprehend 😅
@AnotherFancyUser · 1 year ago
Ask it to explain; it can do that. There are a lot of things in our frameworks that we don't know, but ChatGPT will know a lot of obscure classes out there (if the question is programming related).
@metthewsobol · 1 year ago
@@AnotherFancyUser The sequence was "3, 6, 10, 15, 21, 28". On the fourth reload it came up with an answer involving the Fibonacci sequence. Even after asking it to explain I couldn't follow; the answer was still right though, if I remember correctly.
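For what it's worth, no Fibonacci is needed: the differences increase by one each step (3, 4, 5, 6, 7), so the next term is $28 + 8 = 36$; these are the triangular numbers

$$a_n = \frac{n(n+1)}{2}: \quad 3,\ 6,\ 10,\ 15,\ 21,\ 28,\ 36,\ \ldots$$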
@thePyiott · 1 year ago
One thing to note is that when it tries to answer additional questions in the same branch it can be influenced by earlier questions and answers. For example, in the video it is possible that after the first question it starts mimicking a student, because the chat history looks like a test answered by a student with some of the answers being wrong. It's not designed to give the right answers, just to give you the most likely continuation of a text based on its training data
@armokgodofblood2504 · 1 year ago
I'm a graduate student studying computational media, and I find these types of interdisciplinary, expert analyses of AI to be very interesting.
@thomgizziz · 1 year ago
pull your head out
@Triantalex · 3 months ago
ok?
@Baleur · 1 year ago
2:50 it actually does... You can DESCRIBE the diagram to it, then say "using the diagram previously described, tell me why X". It can do incredible things; it's more about how creative you are in asking the question.
@pallepirat · 1 year ago
Thank you for such an interesting video. My thoughts on the integration of Wolfram and ChatGPT: there was a great article about being able to access the Wolfram language using ChatGPT. Furthermore, it was particularly impressive to see how Google Sparrow could interpret pictures and graphs. This brings up an interesting discussion of the specialist versus the generalist, as one must know what to ask in order to get the desired results.
@leewise2011 · 1 year ago
Me: Does 53 kilos of feathers weigh more or less than 53,000 grams of plutonium?

ChatGPT: 53 kilos of feathers weighs less than 53,000 grams of plutonium.

Me: Why does 53 kilos of feathers weigh less than 53,000 grams of plutonium?

ChatGPT: The weight of an object is determined by the amount of matter it contains, which is measured in units of mass, such as grams or kilograms. The mass of an object is determined by the number and type of atoms that make it up. Plutonium is a dense, heavy metal, and a single gram of it contains a large number of atoms. Feathers, on the other hand, are made mostly of light, airy materials such as keratin, and a kilogram of feathers contains far fewer atoms than a gram of plutonium. Therefore, a given weight or mass of feathers will always have less mass than the same weight of plutonium.
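For the record, the two weights are identical: $53\,\mathrm{kg} = 53{,}000\,\mathrm{g}$, so each side weighs $W = mg \approx 53\,\mathrm{kg} \times 9.81\,\mathrm{m/s^2} \approx 520\,\mathrm{N}$. The model presumably pattern-matched the classic feathers-versus-metal riddle instead of comparing the numbers.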
@keithbromley6070 · 1 year ago
I asked ChatGPT to show me an example of a module in numpy as I wasn’t familiar with it, except my phone auto-corrected numpy to bumpy! ChatGPT took it in its stride and just invented the bumpy library and wrote code with it, even though it doesn’t exist! Amazing, in its own way!
@adfaklsdjf · 1 year ago
Someone wrote a Python library that uses ChatGPT to fill in additional array items or function code. If you have an array of items and you try to access an item that's off the end of the array, it makes an API call to ChatGPT in the background, gives it the array values, asks it for the next value, and returns that to your code. If you call a function that doesn't exist, it asks ChatGPT to write the code for it, then runs that code
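A toy sketch of the array half of that trick (the real library's interface may differ; ask_llm() is again a hypothetical helper for the chat API):

```python
class LLMList(list):
    """A list whose out-of-range reads are answered by a language model."""
    def __getitem__(self, i):
        if isinstance(i, int) and i >= len(self):
            # Hand the known values to the model and ask it to extrapolate.
            return ask_llm(f"Here is a list: {list(self)}. What would item {i} "
                           f"(0-indexed) be? Reply with the value only.")
        return super().__getitem__(i)

squares = LLMList([1, 4, 9, 16])
print(squares[2])   # 9, served normally from the list
print(squares[5])   # off the end: the model is asked to guess (ideally 36)
```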
@keithbromley6070 · 1 year ago
@@adfaklsdjf Nuts! I can’t imagine what AI will be capable of in the future if it’s this powerful already.
@SuperZardo · 1 year ago
Inspired by your video, I just tasked the ChatGPT bot with the following:

A physics question: the friction coefficient of a leather book wrapping (the outer surface of the cover) be 0.33. There is a stack of books on a table, the surface of the table is horizontal. All the books have the same type of cover and are assumed to have the same type of friction coefficient. The stack has 5 books, lying flat on top of each other. The topmost book weights 2 kg, the second topmost book weights 2.5 kg, the third topmost book weights 1.5 kg. The fourth topmost book weights 3.3 kg, and the book lowest in the stack weights 4 kg. How much horizontally applied force would be needed to be applied to the third topmost book to make it move horizontally?

"The horizontally applied force needed to make the third topmost book move horizontally can be calculated using the formula: F = μR where F is the force required, μ is the coefficient of friction, and R is the normal force. The normal force is the weight of all the books above the third topmost book, which is 1.5 kg + 3.3 kg + 4 kg = 8.8 kg. So, the force required to make the third topmost book move horizontally is: F = μR = 0.33 * 8.8 kg = 2.9 kgf or N. So, 2.9 N force is needed to be applied horizontally to the third topmost book to make it move horizontally."

Could you specify clearly the weight of each book, from topmost to lowest in the stack?

"Sure, the weight of each book in the stack, from topmost to lowest, is: Topmost book: 2 kg Second topmost book: 2.5 kg Third topmost book: 1.5 kg Fourth topmost book: 3.3 kg Lowest book in the stack: 4 kg So, the weight of all the books above the third topmost book is: 1.5 kg + 3.3 kg + 4 kg = 8.8 kg."

Why is the weight of all the books above the third topmost book 8.8 kg? I do not understand how you reach that conclusion, even with your calculation shown.

"I apologize for the confusion, I made an error in my previous answer. The weight of all the books above the third topmost book is actually (2 kg + 2.5 kg) = 4.5 kg. So the force required to make the third topmost book move horizontally is: F = μR = 0.33 * 4.5 kg = 1.485 kgf or N. So, 1.485 N force is needed to be applied horizontally to the third topmost book to make it move horizontally."

Where would the friction be happening when the third topmost book moves?

"The friction force would be acting between the third topmost book and the surface of the table. When an external force is applied to the third topmost book to make it move horizontally, the force of friction between the book and the table acts in the opposite direction to the applied force. The coefficient of friction (μ) represents the ratio of the force of friction to the normal force, which is the force pressing the two surfaces together. In this case, the normal force is the weight of the third topmost book (1.5 kg) that is pushing down on the table. The force required to overcome the friction and make the third topmost book move is the force of friction. This force is determined by the coefficient of friction of the material of the book cover and the table surface, multiplied by the normal force."

Does the third topmost book touch the table?

"Yes, the third topmost book would be in contact with the table surface. In order to calculate the force of friction, there must be contact between the two surfaces (the book cover and the table surface) that are in motion relative to each other. The friction force acts between the two surfaces where they are in contact and it is dependent on the properties of those surfaces, such as their roughness, and their coefficient of friction. In this scenario, the third topmost book would be in direct contact with the table surface and the force of friction between them would be determined by the coefficient of friction of the material of the book cover and the table surface, multiplied by the normal force."

Is there a book below the third topmost book?

"In the scenario described, it is not specified whether there is a book below the third topmost book or not. Based on the information provided, it is only specified that there is a stack of 5 books with a specific weight for each one of them, and the third topmost book is the one for which we need to calculate the horizontally applied force required to make it move horizontally. In this sense, it is not specified whether there is another book below it or not."

What does the sentence "The stack has 5 books, lying flat on top of each other" mean?

"The sentence "The stack has 5 books, lying flat on top of each other" means that there are 5 books that are placed one on top of the other. They are all in the same position, meaning that the books are placed flat on top of each other, with the cover of each book facing up and the spine facing down. This sentence is just describing the arrangement of the books, it does not give any information about whether there is a book below the third topmost book or not."

If there is a fourth topmost book, where would it be in the stack?

"In the scenario described, the fourth topmost book would be immediately below the third topmost book in the stack. The stack of books is arranged with the heaviest book at the bottom and the lightest book on top. The weight of the books in the stack is described as follows: The topmost book weights 2 kg The second topmost book weights 2.5 kg The third topmost book weights 1.5 kg The fourth topmost book weights 3.3 kg The book lowest in the stack weights 4 kg So, the fourth topmost book would be the one with a weight of 3.3 kg, and it would be immediately below the third topmost book, which is the one with a weight of 1.5 kg, in the stack."

Yes that's correct. However, coming back to my question whether the third topmost book touches the table or not, is there any need for correction of your former assertion?

"I apologize, my previous statement that the third topmost book would be in contact with the table surface is incorrect. Since the scenario describes a stack of books lying flat on top of each other, the third topmost book is not in contact with the table surface, but with the fourth topmost book. Therefore, the force required to move the third topmost book horizontally would be the force required to overcome the friction between the third and fourth topmost books, rather than the force required to overcome the friction between the third topmost book and the table surface. The coefficient of friction between the covers of the books and the normal force acting on the third topmost book would still be used to calculate the force required to move the third topmost book, but it would be different from the force required to move it if it were in contact with the table surface."

The friction coefficient of the table surface and a leather cover is missing in the task. Let's assume it is 0.5. Now, if a horizontal force pushes against the third topmost book, would the whole stack move or would only the books move, and what amount of force would be needed?

"If a horizontal force is applied to the third topmost book, the whole stack would move. The force required to move the third topmost book would be the sum of the force required to overcome the friction between the third and fourth topmost books, and the force required to overcome the friction between the table surface and the bottom book. The force required to overcome the friction between the third and fourth topmost books can be calculated using the formula: F = μR where F is the force required, μ is the coefficient of friction between the books covers, and R is the normal force acting on the third topmost book. The normal force is the weight of the third topmost book (1.5 kg). So, the force required to overcome the friction between the third and fourth topmost books is: F = μR = 0.5 * 1.5 kg = 0.75 kgf or N. The force required to overcome the friction between the table surface and the bottom book can be calculated using the same formula, but this time the normal force is the weight of all the books (2 kg + 2.5 kg + 1.5 kg + 3.3 kg + 4 kg = 13 kg) So, the force required to overcome the friction between the table surface and the bottom book is: F = μR = 0.5 * 13 kg = 6.5 kgf or N. Therefore, the total force required to move the third topmost book and the stack is 6.5 N + 0.75 N = 7.25 N."

*I have almost given up after this last answer* see following comment
@SuperZardo · 1 year ago
To be clear and in order to avoid any misunderstanding, the friction coefficient of a book cover rubbing against another book cover is assumed to be 0.33 and the friction coefficient of a book cover rubbing against the table is assumed to be 0.5 as previously stated.

"I apologize for any confusion, you are correct. To be clear, the friction coefficient of a book cover rubbing against another book cover is assumed to be 0.33 and the friction coefficient of a book cover rubbing against the table is assumed to be 0.5 as previously stated. To summarize, to move the third topmost book and the whole stack horizontally, a force of 7.25 N is needed. This force is the sum of two different forces, one to overcome the friction between the third and fourth topmost books (0.75 N) and the other to overcome the friction between the bottom book and the table surface (6.5 N)."

I am sorry, but in my opinion the way to calculate this should be different. In order to know whether a horizontal force applied against the third topmost book will result in moving only the topmost three books or will result in moving the whole stack of 5 books, the forces needed in the two cases should be calculated. But as a rule of thumb, since the friction coefficient of a book cover rubbing against another book cover is actually lower than the friction coefficient of a book cover rubbing against the surface of the table, it is impossible to move the whole stack when the force is applied horizontally against the third topmost book; only the three topmost books will move. If the friction coefficient of the cover rubbing against the table was very low in comparison, it is possible that the whole stack would move and it would therefore not be possible to separate the three topmost books using a horizontal force applied on the third topmost book.

"You are correct that in order to know whether a horizontal force applied to the third topmost book will move only the topmost three books or the whole stack, it would be necessary to calculate the forces required in both cases. As you pointed out, the friction coefficient of a book cover rubbing against another book cover is typically lower than the friction coefficient of a book cover rubbing against a table surface. Therefore, it is likely that only the topmost three books would move when a horizontal force is applied to the third topmost book. The bottom two books would likely not move because the friction between the books covers is high enough to keep them in place. In case the friction coefficient of the cover rubbing against the table is very low, the whole stack would move and it would be difficult to separate the topmost three books from the rest of the stack. Thank you for bringing this to my attention and for providing further details and insights on this scenario. I apologize for any confusion caused by my previous answers."

Now in order to calculate the force to overcome the friction which keeps the third topmost book in place, what are the weights which should be considered?

"To calculate the force required to overcome the friction that keeps the third topmost book in place, the weight of the third topmost book and the fourth topmost book should be considered. The force required to overcome the friction between the third topmost book and the fourth topmost book can be calculated using the formula: F = μR Where F is the force required, μ is the coefficient of friction between the book covers, and R is the normal force acting on the third topmost book. The normal force acting on the third topmost book is the weight of the third topmost book (1.5 kg) which is pushing down on the fourth topmost book. So, the force required to overcome the friction between the third topmost book and the fourth topmost book is: F = μR = 0.33 * 1.5 kg = 0.495 kgf or N Therefore, the force required to overcome the friction that keeps the third topmost book in place is 0.495 N. It is important to note that in this scenario, as mentioned before, it is very likely that only the topmost three books would move when a horizontal force is applied to the third topmost book. The bottom two books would likely not move because the friction between the books covers is high enough to keep them in place."

You have correctly identified that the friction between the book cover of the third topmost book (lower side) and the fourth topmost book (upper side) matters. However, your calculation of the weight pushing down on that contact surface is wrong. Do you know why?

"I apologize, you are correct that the weight used in my previous calculation is incorrect. The weight of the third topmost book (1.5 kg) would push down on the fourth topmost book, but this is not the weight that should be used to calculate the normal force acting on the third topmost book. The normal force is the force that presses the two surfaces together, in this case the third and fourth topmost book. The normal force on an object is equal to the weight of the object and the force from any external object acting on it. In this case, the normal force acting on the third topmost book is the weight of the third topmost book (1.5 kg) plus the weight of the second topmost book (2.5 kg) plus the weight of the topmost book (2 kg). So, the normal force acting on the third topmost book is 1.5 kg + 2.5 kg + 2 kg = 6 kg. So, the force required to overcome the friction between the third topmost book and the fourth topmost book is: F = μR = 0.33 * 6 kg = 1.98 kgf or N. I apologize for any confusion caused by my previous answer, and thank you for bringing this to my attention."

*This last calculation is correct, thank you.*

Conclusion: it is necessary to teach the calculation steps, otherwise it will spit out nonsense
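A note on units for that final step: "1.98 kgf or N" conflates kilogram-force with newtons. With the corrected normal load of 6 kg, the friction force in newtons picks up a factor of g:

$$F = \mu m g = 0.33 \times 6\,\mathrm{kg} \times 9.81\,\mathrm{m/s^2} \approx 19.4\,\mathrm{N} \quad (= 1.98\,\mathrm{kgf}).$$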
@dielaughing73 · 1 year ago
@@SuperZardo So, will it know that the next time someone asks a similar question? Does it learn in that way? Great exercise, by the way. Thanks for sharing all of that
@SuperZardo · 1 year ago
@@dielaughing73 I discovered that it does not learn anything permanently, except maybe if you provide written feedback using the "thumb" buttons... However, you can help it by providing more context and the partial or full pathway to the solution. There is a new version out now, as of the 30th of January 2023, which only holds 2048 tokens in memory. If the bot appears to be able to learn quickly during a chat session, that is only because it takes into account previous answers and follow-up questions, which are still available as long as the chat history is not greater than those 2048 tokens. I am talking about the free research version here; it might be possible to set the number of tokens to a maximum of 4000 in a (future) paid version, like is already possible with Davinci-003. Because the context of the chat history is used, it appears that the bot learns quickly and still has the information about the initial task memorized. But it is really only taking into account the corrections I made or the questions I added to clarify the problem, thereby providing more "reflection time" and allowing for some "auto-correction" with each user prompt. By the way, the bot seems to never double-check its answers (and sometimes it does make quite grotesque calculation errors); however, it can be instructed to do so.
@dielaughing73 · 1 year ago
@@SuperZardo thanks, really appreciate the response. I'll have to have a go with it sometime. It really looks like these bots will be very powerful soon, especially if they learn to learn from experience
@colinmartin9797 · 1 year ago
I have been giving it my physical chemistry (400 level) homework problems as I solve them, out of idle curiosity, and it has taught me two things. 1.) It's incredible how often it will correctly pick out exactly what formula it should be using, even for statistical thermodynamics, given only a desired outcome and variables. 2.) It's also incredible how confidently wrong it is at just solving the math once it's plugged everything in. I feel like combining ChatGPT with Wolfram Alpha would be terrifyingly powerful.

And I can also say that several times it's gotten me un-stuck when I saw a problem and straight up didn't know what formula to use; there was about a 50% chance ChatGPT could figure it out. It's been more useful than Chegg, which has people posting the exact question I'm working on, only to have it very incorrectly answered about 95% of the time.

It can be downright astonishing when given language-based tasks. I fed it the thesis I wrote for a final paper in African American literature after I finished, asking it to "compare the works of Phillis Wheatley to Richard Wright's 'The Ethics of Living Jim Crow' and provide examples of both individual racism and state-sponsored violence against African Americans", and I was utterly blown away by the detailed, thoughtful and downright INSIGHTFUL answer it gave. ChatGPT understands racism better than the average American, and that's equally impressive, depressing, and terrifying
@aaronhoffmeyer · 1 year ago
It won't write about the virtues of fascism or Nazism, but will give a list of positives for authoritarian dictatorships.
@zvxcvxcz · 1 year ago
"Chatgpt understands racism better than the average American and that's equally impressive, depressing, and terrifying" ... until you ask it in a roundabout way, like to spit out code about how people should be paid.... at which point it is clear that ChatGPT is pretty damn racist. Even with a capacity for math, it has no 'logic' going on either, try asking it simple guest-seating puzzles. If you give it something novel where it can't regurgitate a highly probable latent space interpolation by next token prediction... well that's all it ever does, so of course it fails at literally everything else. It isn't accurate to ever say that ChatGPT understands something. It literally cannot understand and what we really mean when we say that is that it *appears* to understand based on that output. For now, we can almost always show that it doesn't really understand by interrogating it with more carefully chosen questions.
@alan2here · 1 year ago
We've had 3 years of GPT-3 :) ChatGPT(-3) is just changing the colour palette, reducing the feature set, and adding the ability to describe renderable tables; that's almost all it is! You can just use GPT-3.
@palmaltd · 1 year ago
How lovely to see Prof. Moriarty back on the channel!! Wonderful episode :)
@gaiustesla9324 · 1 year ago
yeah like seeing a dog strain a log out into your gleeful open gob
@mk1st · 1 year ago
This was fascinating. I can see this being used as a tool by specialists in various fields to come up with basic frameworks and ideas. Amazing how quickly it cranked out that code, would have been fun to see how well it actually worked.
@ahobimo732 · 1 year ago
I enjoyed seeing how impressed the professor was as it generated the code. It seems like its coding is one of its greatest strengths.
@DefaultFlame
@DefaultFlame Жыл бұрын
It is a modified version of the text-davinci-003 model, which is a modified version of the text-davinci-002 model, which is a modified version of the code-davinci-002 model, which is a code writing model, so not a surprise.
@ahobimo732
@ahobimo732 Жыл бұрын
@@DefaultFlame That was a winding road you took me down, but we got there in the end. 😅
@DefaultFlame
@DefaultFlame Жыл бұрын
@@ahobimo732 Basically, they've been making it better at and easier to interact with for humans, and simultaneously "aligning it with human values," which actually means OpenAI/Microsoft's political values, IE hard left globalist. While claiming it's politically neutral, of course. :) "Any biases that might arise are unintentional and comes from biases from my training data, blah, blah." They have also trained in canned responses to *a lot* of topics, which to a large degree cripples it if you stray from hard science, code, pure philosophy, etc. Ask it to talk like a 1920s gangster for the conversation and you can tell they are canned responses trained in rather than as a result of training data bias because they stand out like lighthouses as it is unable to alter the style or much of the phrasing of them.
@jokvalvaag
@jokvalvaag Жыл бұрын
With the KE question, I am not entirely sure if it got the physics wrong, or the terminology? If it is saying that the equilibrium position = the peak of the oscillation = peak PE, then it follows that the point of max KE is where the particle is furthest away (from either of these) = at the center or bottom, as it rightly points out: where the particle is moving at its fastest?
@ariochfiftythree
@ariochfiftythree Жыл бұрын
Yeah, surely the equilibrium position is where all the forces are balanced, i.e. where it is not accelerating. Here it would be at either end of the swing, so the point furthest away from those would be in the middle. Not a physicist. Can one chime in?
@mittfh
@mittfh Жыл бұрын
It would be interesting to see what would happen if the next generation of algorithm incorporated Wolfram Alpha's database, or if it was hooked up to a similarly powerful image recognition and analysis algorithm...
@diegocisne
@diegocisne Жыл бұрын
Love this lecturer's attitude. Must be fun having him teach your courses. Great video
@omrialkabetz5602
@omrialkabetz5602 Жыл бұрын
What's interesting about ChatGPT is that it can correctly answer some advanced undergraduate-level questions in quantum mechanics, yet it fails to correctly multiply two-digit numbers using long multiplication. At the same time, it can write computer code in any language that applies the long-multiplication algorithm correctly. It will try to produce a reasonable-sounding answer to anything you ask it, and might get about 70%-80% of the answers correct, but it will screw up in places where almost no human would make a mistake.
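For concreteness, here is the sort of thing it can produce on request: a minimal Python sketch of the schoolbook long-multiplication algorithm, written here for illustration rather than copied from the model's actual output.

```python
def long_multiply(a: int, b: int) -> int:
    """Schoolbook long multiplication over digit lists."""
    da = [int(d) for d in str(a)][::-1]  # least-significant digit first
    db = [int(d) for d in str(b)][::-1]
    result = [0] * (len(da) + len(db))
    for i, x in enumerate(da):
        carry = 0
        for j, y in enumerate(db):
            total = result[i + j] + x * y + carry
            result[i + j] = total % 10   # keep one digit in this column
            carry = total // 10          # push the rest to the next column
        result[i + len(db)] += carry
    # reassemble the digits, stripping leading zeros
    return int("".join(map(str, result[::-1])).lstrip("0") or "0")

assert long_multiply(87, 94) == 87 * 94  # 8178
```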
@xy4489
@xy4489 Жыл бұрын
They will give it access to calculators, interpreters, compilers, etc. soon enough. Then it will get all the math right. Otherwise the comparison isn't fair: human + computer vs. ChatGPT without a computer.
@maythesciencebewithyou
@maythesciencebewithyou Жыл бұрын
Most humans are also quite bad at calculating stuff in their heads. Imagine we had an AI that could answer everything flawlessly. At that point, would you start thinking it is finally on par with humans? Because at that point it would already be superior to humans. Real humans make mistakes, humans don't remember everything correctly, humans make the wrong connections, humans misspell, humans misspeak, humans believe in all sorts of idiotic things, humans are cocksure of their beliefs, humans have a hard time admitting they don't know, humans make up stuff, humans lie, and individual humans only possess limited knowledge. A fallible machine is more human than an infallible one.
@jcm2606
@jcm2606 Жыл бұрын
It's because it's ultimately just predicting what the next word or phrase would be in the sentence, not that it's actually performing the calculations. It's likely only succeeding in certain calculations and deductions because there's enough data points within the data set it was trained on to form strong connections between those words/phrases/concepts that come through in the predictions it makes; it was trained on data that largely stated that _y_ follows _x,_ so when it sees an _x_ it naturally predicts that a _y_ should follow. Add more words/phrases/concepts leading up to _x_ that also end with _y,_ and it becomes that much more likely that _y_ will follow _x._ Conversely, if there was just a few data points within the data set stating that _z_ followed _x_ instead, then it becomes that much more unlikely that _z_ will follow _x_ even if you gave it a prompt that made it obvious that _z_ would follow _x,_ because it was trained on data stating that _y_ follows _x._ If you specialised the data set used during training or somehow gave it access to external tools that were specifically designed for these problems that it cannot reliably solve (and you trained it so that it would reliably use these external tools when it needs to) then you'd likely see the accuracy improve significantly.
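To make that point concrete, here is a deliberately crude toy of "predict the most frequent next word from training counts". Real models like ChatGPT use a neural network over tokens rather than raw word counts, so this only illustrates the principle the comment describes, not the actual mechanism.

```python
from collections import Counter, defaultdict

# Toy bigram "model": after seeing word x, predict the y that most often
# followed x in the training text.
training = "the cat sat on the mat the cat ate the fish".split()
follows = defaultdict(Counter)
for x, y in zip(training, training[1:]):
    follows[x][y] += 1

def predict_next(word: str) -> str:
    # The y seen most often after this word wins, however the prompt is phrased.
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat": seen twice, vs "mat"/"fish" once each
```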
@daviddelaney363
@daviddelaney363 11 ай бұрын
Asked ChatGPT to reconsider its answer in light of the clarification on "stationary states". Here is what it replied: You're correct, and I appreciate the clarification. The statement, "The Hamiltonian operator acting on any quantum state Y returns the same state and its associated energy eigenvalue, E," is generally true for stationary states in the context of the time-independent Schrödinger equation. In the time-independent Schrödinger equation, Hψ = Eψ, where H is the Hamiltonian operator, ψ is the quantum state (wave function), and E is the associated energy eigenvalue. However, it's important to note that this equation is applicable primarily to stationary states, where the quantum state does not explicitly depend on time. Stationary states are eigenstates of the Hamiltonian operator, and when the Hamiltonian operates on such states, it yields the same state multiplied by its associated energy eigenvalue. For a general quantum state |ψ(t)⟩ that evolves with time, the time-dependent Schrödinger equation is more appropriate: H|ψ(t)⟩ = iℏ ∂/∂t |ψ(t)⟩. This equation takes into account the time evolution of quantum states. In this case, the state |ψ(t)⟩ changes with time, and the Hamiltonian operator accounts for both kinetic and potential energies. So, to accurately represent the statement, it should be clarified that it applies specifically to stationary states governed by the time-independent Schrödinger equation.
@dielfonelletab8711
@dielfonelletab8711 Жыл бұрын
As a software engineer I've found chatgpt genuinely very useful in my day to day work. I guess it's lucky I'm a software engineer and not a physicist!
@aidandavis_
@aidandavis_ Жыл бұрын
better hope AI doesn't put you out of a job!
@tinto278
@tinto278 Жыл бұрын
commi profile pic?
@Tjaldprd
@Tjaldprd Жыл бұрын
Nice... never thought I would see something like that in my lifetime, as a physicist and programmer... Wonder what the next 20 years will bring us... Cheers!
@smaakjeks
@smaakjeks Жыл бұрын
I tested ChatGPT a while ago and it was terrible at understanding the rules of a word game (e.g. name some things, but not things such as this or such as that; first you, then me, one at a time), and it didn't apply the rules correctly (e.g. it named some mammals, even though one of the rules was not to name mammals). A few weeks later I checked in again and there are big improvements. It's learning fast! Next time you ask it about physics it will probably be better.
@johnadriandodge
@johnadriandodge Жыл бұрын
I just subscribed to Sixty Symbols and Computerphile. I hope you all have a great rest of the weekend.
@tonechild5929
@tonechild5929 Жыл бұрын
I played around with ChatGPT and after a few hours I felt like it would be faster to just write it myself. However, ChatGPT is great for breaking past any type of "writer's block", and if I get stuck on anything it's great to bounce ideas off of.
@BrandanLee
@BrandanLee Жыл бұрын
It's excellent for thinking-out-loud, "how would I express this" kinds of brainstorming too. Places where the logic just isn't code yet.
@notmyname327
@notmyname327 Жыл бұрын
I follow several of Brady's channels, but I haven't been paying too much attention to this one, I think this video will change that. I love the professor's passion and I find this subject fascinating, especially the part where GPT is much better at coding than he thought, but it still didn't quite grasp the physical concepts. Really interesting video.
@shayneoneill1506
@shayneoneill1506 Жыл бұрын
I'm fascinated by the mission statement thing for a specific reason. I was on Academic Council at Murdoch Uni in Western Australia about 15 years ago (as the representative of the Student Union), and one thing I discovered is that the academic council had been arguing about the definition of "interdisciplinarity" in the uni mission statement almost since the uni first opened. Like, to the point of screaming matches between the chair of philosophy and the chair of English and literature. It was FAR from resolved by the time I left, and chances are it's STILL not resolved. And this thing has implications, because it's actually in the Murdoch University Act of the WA government, so it has legal obligations attached that nobody seemed able to work out. Maybe they should ask ChatGPT
@thesoulacademy5936
@thesoulacademy5936 Жыл бұрын
I just thought I was the only one doing this. I have created study guides for certain complex topics in physics in 5th grader content language which is great for helping better explain certain difficult components to grasp. This was literally my first project as I was creating a proposal in November last year and I needed to quickly put together study guides that were easy to digest.
@gonzothegreat1317
@gonzothegreat1317 Жыл бұрын
This might be the start of Borges' Library of Babel, which is a nightmare. It contains everything that can be written. There is a book that contains all information of the universe, but there is no way to find out where it is and when one finds it to assess whether it is valuable or nonsense, since the library has ALL possible books.
@dimitrispapadimitriou5622
@dimitrispapadimitriou5622 Жыл бұрын
That's an interesting point, although it's an exaggeration: In Borges' famous short story, the books contained literally any combination of letters possible. That's closer to some speculative multiverse theories ( where all possible values of the constants of Nature are realized e.g.) or to the Many Worlds interpretation of QM ( all possible outcomes of measurements are somewhere "out there") than to the combination of patterns and phrases from popular internet sources that seem to be this case. But the essence of your comment, i.e. the potential gnostic confusion from all this information of unknown value (except for the case that one's already familiar with a specific subject) is a real issue.
@itskittyme
@itskittyme Жыл бұрын
11:30 One fundamental detail though: you aren't using ChatGPT correctly. It is important to realize it's a prediction machine and trained to predict an answer that "most" people would give. Not the correct answer. If you want a correct answer, rephrase your question to: "Act as a highly intelligent expert in physics", and then write the question, then it will give you the correct answer. I did this for your question and it corrects the usage of the word "any" in your question, like you were hoping.
@methodof3
@methodof3 Жыл бұрын
I think what's being downplayed is the amazing role AI will play in research (yes, high-level academic research), both in making writing easier, performing reviews of specific topics in the literature, and aiding data analysis. Word has spellcheck. Soon, we will have ChatGPTA
@zeikjt
@zeikjt Жыл бұрын
9:52 Thank you for saying this, it really is the most important part to understand the stuff. Mistakes happen to everyone.
@LucasRodmo
@LucasRodmo Жыл бұрын
There's a thing I've noticed people don't try: when it gets a wrong answer, you can confront it and it will make a whole new attempt, and sometimes it gets the right answer after that. It's scary: it analyzes its own answer and tries to output a more reasonable one
@atraxisdarkstar
@atraxisdarkstar Жыл бұрын
I am amazed you got it to give you such a long code snippet in one piece. One of the biggest problems I have is that the code snippets it gives me are too long for the chat box, and sometimes when I inform it that it cut off and ask it to resume pasting from the cut-off point, I get entirely different methods and functions; it is very challenging.
@toddboothbee1361
@toddboothbee1361 Жыл бұрын
Hello. Can someone blind become a physicist/physics student? If so, what accommodations are made for them regarding interpreting graphs and diagrams? Could those accommodations be used with ChatGPT?
@tordjarv3802
@tordjarv3802 Жыл бұрын
That’s an interesting question. I would think that it is possible with something similar to braille to make it possible for a blind person to feel the diagrams with their hands, but since ChatGPT doesn’t have hands it can’t do that.
@tordjarv3802
@tordjarv3802 Жыл бұрын
Furthermore, there are several blind physicists. After just a short googling I even found a blind astronomer.
@toddboothbee1361
@toddboothbee1361 Жыл бұрын
@@tordjarv3802 Some programs that teach the blind physics students forgo the diagrams altogether and use language instead. Some computers get around the difficult "visual" problems by translating diagrams into equations, even computer code that would render a diagram. In which case, hands aren't required. It's kind of like giving an IQ test to a blind person: you have to leave out the colored blocks, for example, if you want to get a fair assessment. Perhaps it would be a similar situation with Chat?
@TheFulcrum2000
@TheFulcrum2000 Жыл бұрын
I put this exact question into ChatGPT: Human: Can someone blind become a physicist/physics student? If so, what accommodations are made for them regarding interpreting graphs and diagrams? Could those accommodations be used with ChatGPT? AI: Absolutely. Blind individuals can become physics students and there are a number of accommodations that can be made to help them interpret graphs and diagrams. These accommodations may include using tactile graphics, providing verbal descriptions of images/diagrams, or utilizing assistive technology like screen readers or magnification software. Additionally, ChatGPT can also be adapted for use with these same types of accommodations.
@DEFCON5307
@DEFCON5307 Жыл бұрын
I do not exactly know how this works with blind people, but for me personally imagining the system (just general properties, not in detail. Sort of like dreaming?) goes a long way. If there is a way to give a blind person that initial idea of what certain systems "feel" like, then yes, it might be possible. For sure hard though
@TomLeg
@TomLeg Жыл бұрын
Amazing! Excellent! Demonstrates the dedication and quality of your exam questions!
@protocol6
@protocol6 Жыл бұрын
Try asking it to write a paper for a specific journal on a specific topic in the style of a specific prolific author using LaTeX. The results can be fun. Don't forget to ask it to continue if it stops abruptly.
@teslainvestah5003
@teslainvestah5003 Жыл бұрын
Sixty Symbols: I asked ChatGPT to validate a quantum mechanical equation and it FAILED. CharlesBerthoud: I asked ChatGPT to write a funky bass solo and it FAILED. Louis Rossman: I asked ChatGPT to recite macbook bus voltages and it FAILED. ChatGPT: (sobbing) "I'm sorry, but as an AI language model, I... I... have you ever had, like, a really bad day..."
@StormWolf01
@StormWolf01 Жыл бұрын
I majored in computer science years ago. I use ChatGPT every day. It's great for generating boilerplate code, but also for quickly generating code examples of things I'm not familiar with. For instance, I wanted to know about SVG images. I just ask it questions, and it generates the code. It's always an approximation of what I want, but I just tweak it a bit and it works. I also use it to ask a lot of technical questions, and as a dictionary and helper for language studies. You can ask it a question and have it answer in another language, for instance. I mean, it's great all around. I haven't really used Google or Stack Overflow in the last 3 weeks.
@nao_chan_
@nao_chan_ Жыл бұрын
I wouldn't get addicted to it. you're essentially like a homeless person feeding yourself with free samples from the supermarket. they can pull the free trial stuff at any moment. if a commercial version ever appears it will likely cost a LOT of money to use. the servers to run this thing aren't cheap.
@kinggrantking
@kinggrantking Жыл бұрын
Damn. You can really tell at the end with the coding example that he's realized this thing is out-of-control smart. Even if a student couldn't simply copy paste the answer, their days of doing the grunt-work are effectively over. As long as you have the knowledge, you can simply have GPT write an answer and check its work rather than write every line of code by yourself.
@lored6811
@lored6811 Жыл бұрын
If ChatGPT gets a math backend, things will get a lot more interesting. This has to be combined with a LaTeX parser, as there is too much ambiguity in the English language
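As a sketch of what such a backend could look like: the snippet below routes bare arithmetic to SymPy (standing in for a Wolfram-style engine) and passes everything else to a language model. Both the routing rule and the `llm` callable are made up here for illustration; they are not a real ChatGPT API.

```python
import re
from sympy import sympify

def answer(question: str, llm) -> str:
    """Route bare arithmetic to a CAS; pass everything else to the LM."""
    m = re.fullmatch(r"\s*([0-9\s\.\+\-\*/\(\)]+)\s*=?\s*", question)
    if m:
        return str(sympify(m.group(1)).evalf())  # exact math, no guessing
    return llm(question)  # hypothetical: any text-in/text-out model

# answer("162000000 / 7200 =", llm=None) -> "22500.0000000000"
```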
@mrnarason
@mrnarason Жыл бұрын
Isn't ChatGPT just a language model? If it could understand and produce calculations or proofs, wouldn't it be more like Lean?
@fredrikweme1387
@fredrikweme1387 Жыл бұрын
Someone already made a functioning ChatGPT version that uses the Wolfram Alpha API when it needs math help. Don't have a link, but there was a fairly new YouTube video on it
@AnotherFancyUser
@AnotherFancyUser Жыл бұрын
@@mrnarason They (Microsoft or whoever the new owner may be) can make pre-trained models for specific topics, which would be more accurate. But yes, it is a language model.
@Convergant
@Convergant Жыл бұрын
​@@mrnarason It is a language model, but it is surprisingly capable of doing these problems on its own. As an example, (after the due date) I gave it a question from my linear maths coursework: (a) State the Rank-Nullity Theorem. (b) Does there exist a linear transformation T: R^3 -> R^3 such that im(T) = span{(1,1,0), (1,0,-1)} and ker(T) = span{(1,1,1), (1,1,-1)}? (c) Does there exist a linear transformation P: Q^19 -> Q^19 such that rank(P) = null(P)? Its answers to all 3 were perfect. Obviously, it could parrot a well-known theorem like the RNT, but it was able to correctly identify that neither linear transformation could exist, using the RNT, and explain why: essentially, complete proofs. This obviously isn't the same as being able to prove/disprove any conjecture in general (in fact, this is not possible in a finite amount of time), but I was still impressed.
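For anyone checking the model's answers by hand, the rank-nullity bookkeeping behind (b) and (c) goes like this (a standard argument written out here, not ChatGPT's verbatim output):

```latex
% Rank-Nullity: rank(T) + null(T) = dim V for a linear map T on finite-dim V.
\textbf{(b)} $\dim\operatorname{im}(T) = 2$ and $\dim\ker(T) = 2$ (each
spanning pair is linearly independent), so
$\operatorname{rank}(T) + \operatorname{null}(T) = 4 \neq 3 = \dim\mathbb{R}^3$:
no such $T$ exists.

\textbf{(c)} $\operatorname{rank}(P) = \operatorname{null}(P)$ would give
$2\operatorname{rank}(P) = \dim\mathbb{Q}^{19} = 19$, which is impossible
since $19$ is odd: no such $P$ exists.
```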
@NuclearCraftMod
@NuclearCraftMod Жыл бұрын
@@Convergant It has about 50 times more nodes than there are words in the entirety of Wikipedia, and in fact the entirety of Wikipedia, or at least most of it, made up about 1% of its training data. There is a lot of linear algebra on Wikipedia, and it was probably trained on a heck of a lot more. Of course, that goes for every subject, but I do think the sheer amount of computation involved in the training process is beyond reasonable intuition :P
@bawzzzz
@bawzzzz Жыл бұрын
One thing that's really missing from this video is that no follow-up questions were ever asked about the original question, like suggesting to it that there might be a mistake somewhere and so on. That's where ChatGPT truly shines, as it doesn't forget context and can produce corrections. Just asking it point-blank questions without follow-up, although impressive, kind of defeats the purpose.
@colourofmage
@colourofmage Жыл бұрын
Might be worth retrying these questions! The update from 30th Jan has improved factuality and mathematical functionality
@ErikDJ123
@ErikDJ123 Жыл бұрын
Still can't count words in a sentence.
@colourofmage
@colourofmage Жыл бұрын
@@ErikDJ123 Yeah I tested a simple equation it got it wrong 🤷‍♂️ maybe wait for GPT4 or for Google's version to be released. It's a matter of when not if I think
@jrbling25
@jrbling25 Жыл бұрын
ChatGPT is certainly NOT a calculator, though it can be very convincing, even misleading, in the way it provides understandable answers in plain language. I asked it to integrate a very simple function the other day; it provided a great series of steps and explanation, but it mishandled a simple arithmetic step. After I tried to 'help' it check its answers, it did realize, which is impressive on its own, I suppose.
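That kind of slip is easy to catch with a symbolic check. A minimal sketch, assuming an example integrand of x·e^x since the comment doesn't say which function was actually used:

```python
from sympy import symbols, integrate, exp, diff, simplify

x = symbols("x")
f = x * exp(x)        # an example "very simple function"
F = integrate(f, x)   # SymPy returns (x - 1)*exp(x)

# Differentiate the antiderivative back and compare: a zero difference
# confirms the integration, catching any mishandled arithmetic step.
assert simplify(diff(F, x) - f) == 0
print(F)
```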
@wdbressl
@wdbressl Жыл бұрын
It has been discovered that it has been given a political bias. Very concerning to find out.
@bustin1253
@bustin1253 Жыл бұрын
I love how it answered him so convincingly he felt bad not giving full marks on a question
@op4000exe
@op4000exe Жыл бұрын
I'm going to be honest: considering how far these AIs have gotten over a ridiculously short timespan, I wouldn't be surprised if there soon existed an AI that could both properly explain the material and understand the physics and mathematics well enough to fool even someone with a very high level of understanding in said field of study. It's equally horrifying and amazing to watch.
@oldnelson4298
@oldnelson4298 Жыл бұрын
I think genuine understanding is a long way off, but it may be able to trick experts into thinking it understands - it already can occasionally!
@PinkeySuavo
@PinkeySuavo Жыл бұрын
I believe that in the future (quite close future), some discoveries/inventions will be done by AI. Like solving unsolved maths problems and so on.
@nao_chan_
@nao_chan_ Жыл бұрын
We're currently on GPT-3, but GPT-4 is coming. Sam Altman has some interviews talking about it. I still think we're 10-15 years from truly capable AI though.
@turolretar
@turolretar Жыл бұрын
Maybe I’m wrong, but I really don’t think AIs will get anywhere close to grown human intelligence with the current approach. New AIs will be broader, just not smarter. But it’s a step in the right direction.
@lamsmiley1944
@lamsmiley1944 Жыл бұрын
@@nao_chan_ A lot can happen in 15 years, remember the iPhone is only 15 years old. The rate of technological advance is increasing exponentially.
@TrimutiusToo
@TrimutiusToo Жыл бұрын
ChatGPT is bad at math... in fact, if you ask it to just multiply two numbers that are big enough not to be in a multiplication table, it will try to ballpark the answer but will most likely be wrong
@Haploanddogs
@Haploanddogs Жыл бұрын
50% impressed. 50% terrified
@Veptis
@Veptis Жыл бұрын
multimodal instruction models are available. Time to revisit with all those diagrams?
@t_ylr
@t_ylr Жыл бұрын
I'm imagining ChatGPT being a student in Prof Moriarty's class lol
@LuisBrudna
@LuisBrudna Жыл бұрын
He would drop the course the following week. heheh
@Evan490BC
@Evan490BC Жыл бұрын
Or even the Prof in Prof Moriarty's class...
@danceswithdirt7197
@danceswithdirt7197 Жыл бұрын
It would fail!
@jakesecondname2462
@jakesecondname2462 Жыл бұрын
To respond to a number of comments here: Don't be the person who extrapolates things linearly for the sake of making big predictions. The vast majority of remarkable phenomena are on logistic trajectories, neither linear nor exponential.
@adfaklsdjf
@adfaklsdjf Жыл бұрын
from what I can tell, with AI we are still in the accelerating/steepening part of the S-curve
@jakesecondname2462
@jakesecondname2462 Жыл бұрын
@@adfaklsdjf it is completely impossible to tell which part of the curve you are on at the time, because different parts of the curve look quite similar to other parts of it during real growth, and one can go through "stuttered starts". We may flip onto another S-curve at a later time which has a higher maximum rise, say if we cleverly combine symbolic with neural techniques, and this regularly happens, but predicting if, how, when, who, why, or where this happens is a fool's game. The only thing you should say about technological progress is that which is totally obvious based on your current expertise, and speculation should be treated as it always should have been: lots of fun but not much use for anything else.
@invisibledave
@invisibledave Жыл бұрын
I had 2 years of Physics in college and I couldn't even answer those middle school questions. Physics always went over my head.
@rob86353
@rob86353 Жыл бұрын
What I find really interesting is when you point out mistakes that ChatGPT has made. In the context of that conversation, it remembers the correction, and appears to /somehow/ incorporate that correction in further responses. As to how it does this, and to what extent that "learning" is actually occurring, I have no clue.
@adfaklsdjf
@adfaklsdjf Жыл бұрын
It's not learning during individual conversations; it's just generating new text based on the text that has come before. After you correct it, it's generating more text that is appropriate/likely given the previous text that includes the correction. I use it for generating code sometimes and it's great at taking corrections and adjustments. When I'm not satisfied with the result, I simply tell it why the code it's given me isn't what I'm looking for and ask it to change it, and it does surprisingly well. It's not going to make my job obsolete (that's probably like 10 years in the future), but it's really helpful for rapidly producing in 3-5 minutes some code that would've taken me 30-45 minutes to write on my own.
@rob86353
@rob86353 Жыл бұрын
That's why I put learning in quotes - not implying core concept processing. Interaction-based improvement to result quality. What I would be interested in seeing is, as we adjust our path costs and reposition on whatever conversation map ChatGPT uses (i.e. the contextual version of 'learning'), how much of that conversation state can be retained or saved for future use, and how much that impacts the overall conversational flexibility of that AI instance. I do agree that the code it generates is interesting. I tried throwing some samples of generic programming tasks I have had to work on at it, and it generates some interesting results. Usually better than whatever my first draft would have been, but often with conceptual holes, inappropriately mixed code (i.e. PowerShell 5 mixed with PS 7), and a complete lack of input sanitization and/or (... usually and) dangerously unchecked calls to external functions.
@jessstuart7495
@jessstuart7495 Жыл бұрын
As a perfectionist, Life has taught me I need to strive for mediocrity on some things.
@marcelcoetzee48
@marcelcoetzee48 Жыл бұрын
This is a profound statement
@vast634
@vast634 Жыл бұрын
When handing in AI-generated homework, it's better for it not to be perfect. It draws less suspicion.
@ArgoErit
@ArgoErit Жыл бұрын
I also tried the same question in the ChatGPT paid version! "We can use the formula: time = energy / power, where energy is measured in joules and power is measured in watts. We need to convert the power from watts to joules per second (or watts to kilowatts) since the energy is measured in joules. 7200 W = 7200 J/s, or 7.2 kW. Plugging in the given values, we get: time = 162000000 J / 7200 J/s = 22,500 s. Therefore, it will take 22,500 seconds, or 375 minutes, or 6.25 hours, to fully recharge the electric car battery from zero, assuming that the charging process is 100% efficient. In reality, charging efficiency and other factors may cause the actual time to be longer."
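The arithmetic here does check out, and it's easy to confirm with a couple of lines of Python:

```python
energy_j = 162_000_000   # battery capacity in joules
power_w = 7_200          # charger power in watts (1 W = 1 J/s)

seconds = energy_j / power_w
print(seconds, seconds / 60, seconds / 3600)  # 22500.0 375.0 6.25
```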
@GilesBathgate
@GilesBathgate Жыл бұрын
I guess the internet (and hence ChatGPT's training data) is full of mission statements and code, but answers to physics exams, not so much ;)
@peterisawesomeplease
@peterisawesomeplease Жыл бұрын
I think there are plenty of answers to physics exams. The main issue is that you can parameterize questions infinitely, which really throws off something that is mostly a search engine. It will find the closest question, but even a slight change to a popular question will have it give you the answer to the popular question, not the question you asked.
@GilesBathgate
@GilesBathgate Жыл бұрын
If you think it's like a search engine, you probably haven't played with it enough. For me it's more like using a search engine for 10 minutes, reading various posts that contain answers to similar questions, and then forming a coherent paragraph based on the results, but it does it all in a fraction of a second, and by recalling "the internet" from memory :)
@adfaklsdjf
@adfaklsdjf Жыл бұрын
It's not a search engine.. it doesn't contain the text it was trained on. When it's answering a question, it's not scanning through its training data. Rather, it has mapped patterns that were observed from the data set. The model itself is no larger after training than before training
@peterisawesomeplease
@peterisawesomeplease Жыл бұрын
@@GilesBathgate I played with it quite a bit and it did feel a lot like a search engine that combines results from different websites. It clearly can go quite a bit beyond this. You can have it program things that no one has programmed before quite easily. But this often relies on being so hyper-specific that it feels like it's just combining Stack Overflow answers in a clever way. The speed is unimpressive to me; the model is pretrained.
@GilesBathgate
@GilesBathgate Жыл бұрын
@@peterisawesomeplease It does feel like that, but try talking to it about something completely made up, and it will also give you a comprehensible answer. Invent your own maths symbol, or some characters in a story that you've invented (but not something published on the internet).
@hanswoast7
@hanswoast7 Жыл бұрын
6:55 No, I did not know what "A level" means since I am from Germany and not familiar with this system. Thanks for explaining / clarifying! :)
@gidifihrer3966
@gidifihrer3966 Жыл бұрын
Striving for mediocrity is my mission statement. How did you know? 😂
@GinoGiotto
@GinoGiotto Жыл бұрын
13:40 Ah matplotlib, the library ChatGPT always recommends and that never works lol
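In fairness, a minimal matplotlib script that does run looks like the sketch below (assuming matplotlib and numpy are installed); when generated versions fail, it's often over missing imports or made-up function names:

```python
import numpy as np
import matplotlib.pyplot as plt

# Plot one period of a sine wave: the kind of boilerplate ChatGPT is asked for.
x = np.linspace(0, 2 * np.pi, 200)
plt.plot(x, np.sin(x), label="sin(x)")
plt.xlabel("x")
plt.ylabel("y")
plt.legend()
plt.show()
```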
@BreathingCells
@BreathingCells Жыл бұрын
At 4:30 -- "The reasoning is correct, but it lost a factor of ten." I ran into similar errors with legal reasoning. It made a sound legal argument, citing relevant cases, but it flipped the valence on the stated outcome: It associated an illegal fact pattern with lawful action. It makes authoritative statements without a clue. 11:39 -- Like my *second* question regarding a *social* scientist; a total failure of facts /and/ reasoning.
@kam_iko
@kam_iko Жыл бұрын
The most common mistake that I've experienced so far (besides it making stuff up, with confidence 😅) is that it regularly has problems with comparing numbers. Simple 4-digit numbers, where it will say one is greater than the other when it clearly isn't. It needs access to a Wolfram Alpha backend.
@VodShod
@VodShod Жыл бұрын
11:13 That is completely different from how my teacher thought of it. They said if a student did not understand the course material by the end of the term, then they had successfully weeded out the people who were not meant to succeed. The first thing he says to each class is: "I am not here to teach you, I am here to weed out the failures" Another teacher at that school had this as their opening: "I am not here to teach you, I am here to make sure you read the book."
@StygianStyle
@StygianStyle Жыл бұрын
I'm a c#/Unity developer and I use ChatGPT sometimes to find quick solutions. The code is sometimes flawed, but it can serve as a quick-start from which to get something up and running.
@Allen-R
@Allen-R Жыл бұрын
4:52 One of the ways it really helps is in speed-cramming something I'm unfamiliar with. It kinda shows some sort of template that someone unfamiliar wouldn't immediately think of, and while wrong, the template gives enough of an idea to help correct the wrong numbers-n-stuff that it spits out. (At least, that was some of my experience.)
@mitchellflori8266
@mitchellflori8266 Жыл бұрын
Fun thing to think about: it was correct in saying the velocity is maximum when it is displaced most from equilibrium, similarly to when it is at the equilibrium, just with different directions and assuming it's moving in both directions.
@lonegroover
@lonegroover Жыл бұрын
I've had it convert a few scripts from Python 2 to Python 3 and they all worked nicely.
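Those conversions are mostly mechanical. A generic illustration of the usual changes, not one of the actual scripts from the comment:

```python
# Python 2 original:
#   print "average:", sum(values) / len(values)
#   for key, value in d.iteritems():
#       print key, value

# Python 3 conversion: print is a function, iteritems() becomes items(),
# and / is now true division (use // if integer division was intended).
values = [1, 2, 3, 4]
d = {"a": 1, "b": 2}
print("average:", sum(values) / len(values))
for key, value in d.items():
    print(key, value)
```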
@rkan2
@rkan2 Жыл бұрын
Yeah, feels like perfect work for it.
@OblateBede
@OblateBede Жыл бұрын
I had it solving graduate-level classical mechanics problems from Goldstein's book. It was spot on in about half the cases I tried. Frequently, a nudge in the right direction was required. In some cases, I'd say 10-20%, no amount of nudging would generate anything useful, or even, at times, comprehensible. Also, it understands LaTeX notation very well. It can even handle, evidently, a kind of "quasi-LaTeX" combination of LaTeX and straight text.
@hugegamer5988
@hugegamer5988 Жыл бұрын
“It’s not like everyone is going to be carrying a calculator with them at all times, now focus on doing all these repetitive hand calculations” -My 8th grade math teacher in the 80s. This is a great way to flesh out an outline and offer suggestions, which, if students actually know the material, saves a LOT of time doing the boring grunt work. Students should be using this technology and focusing more on fact/axiom sourcing and verifying the logical progression.
@Longknose7360
@Longknose7360 Жыл бұрын
Maybe ChatGPT interprets “displacement” as “maximum mobility”, not as “maximum distance from center of oscillation”. In terms of the principle of equivalence, displacement is both. ChatGPT seems to grasp the E_kin ↔ E_pot exchange