🌏 Get NordVPN 2Y plan + 4 months extra ➼ nordvpn.com/tomrocksmaths It’s risk-free with Nord’s 30-day money-back guarantee! ✌
@lio12342342 ай бұрын
These models don't do any background reasoning (essentially thinking before answering). Definitely recommend trying out o1-mini which does do this. Currently o1-mini does better at maths than o1-preview, but o1-preview has better general knowledge reasoning. o1 when it's finally released should be just downright better than o1-mini at everything including maths. Highly recommend trying some of these out on that model :)
@IsZomg2 ай бұрын
This uses ChatGPT 3, which is outdated. The latest free-tier model is ChatGPT 4o and the top model is o1. Both of these are much better at math than ChatGPT 3, which is TWO YEARS OLD now.
@nofilkhan67432 ай бұрын
Chatgpt doing black magic instead of geometry.
@asiamies91532 ай бұрын
It sees the world differently
@delhatton2 ай бұрын
@@asiamies9153 it doesn't see the world at all
@alexandermcclure61852 ай бұрын
@@delhatton that's still different from how humans see the world. 🙄
ChatGPT invoked the Illuminati on the Geometry question 😂
@bekabex86432 ай бұрын
the geometry drawing it produced had me gasping for air 🤣
@shoryaprakash89452 ай бұрын
I once asked ChatGPT to prove that π is irrational. It gave back the proof for √2, discussed the squaring-the-circle problem, and in the final conclusion wrote "hence π is irrational".
@RFC35142 ай бұрын
Wow, it independently (re)discovered the Chewbacca defence!
@Eagle3302PL2 ай бұрын
The problem is that ChatGPT, or any LLM, is not applying formal logic or arithmetic to a problem. Instead it regurgitates a solution it tokenized from its training set and tries to morph that solution and answer into the context of the question being asked. So, just like a cheater, it can often give a correct result confidently because it has memorised that exact question; sometimes it can even substitute values into the result to appear to have calculated it, but in the end it's all smoke and mirrors. It didn't do the math, it didn't think through the problem. That's why LLMs crumble when never-before-seen questions get asked: an LLM has no understanding, only memorisation. LLMs also crumble when irrelevant information is fed in alongside the question, because the irrelevant information affects the search space being looked at, so the accuracy of recall drops. LLMs do not think and do not process information logically; they process input and throw out the most likely output, with some value substitution in the result to appear to be answering your exact question. LLMs cannot do mathematics; at best they can spit out likely solutions to your questions where similar (or those exact) questions and their solutions were in the training set. An LLM knows everything and understands nothing.
@mattschoolfield47762 ай бұрын
I wish everyone understood this.
@Nnm262 ай бұрын
Try o1 brother
@mattschoolfield47762 ай бұрын
@@Eagle3302PL it's even in the name Large Language Model. I don't get how anyone thinks they have any understanding
@IsZomg2 ай бұрын
New o1 model can 'show its work' and reason in multiple steps. If you think LLMs won't beat humans at math soon you are mistaken.
@CoalOres2 ай бұрын
They _might_ process information logically, we actually don't know. Since they generate it word by word (or token by token), after enough training it might have learned some forms of logic because it turns out those are very good at predicting the next token in logical proofs. Logic is useful for many different proofs, just memorizing the answer is only useful for a single one (i.e. it would be trained out pretty quickly); this doesn't guarantee it knows logic, but it makes it plausible. It is a common misconception that these programs work by searching the dataset, 3Blue1Brown has an excellent video series I would recommend that shows just how complex its underlying mechanics actually are.
@Lightning_LanceАй бұрын
I feel like ChatGPT may have taken your first message to be meant as a compliment rather than as a prompt that it should pretend to be you.
@yagodarkmoon2 ай бұрын
Question 3, the geometry one, ends up much better when you give it the graph along with the instructions. I tried it and got a much better result. To do this I used the Snipping Tool to make an image of both the question and the graph, saved it to the desktop as screenshot.jpg, and dragged that into the ChatGPT window. It read them both fine.
@Pedro-op6zjАй бұрын
After using the Snipping Tool you can just Ctrl+C / Ctrl+V straight into ChatGPT.
@tymmiara59672 ай бұрын
It becomes obvious that the language model is essentially a separate module from the image generator. I bet even if the solution had been flawlessly found, the drawing of a diagram would still be completely bonkers.
@Hankyone2 ай бұрын
Cool video and all but are you aware of o1-mini and o1-preview???
@TomRocksMaths2 ай бұрын
yes of course. the plan here was to use the free version as it is what most people will have access to, so I wanted to warn them to be careful when using it.
@IsZomg2 ай бұрын
@@TomRocksMaths 4o is the best 'free' model, not ChatGPT 3
@9madness92 ай бұрын
Want to know if you could test it with the Stephen Wolfram add-in! To see how good the add-in makes ChatGPT at maths.
@devilsolution97812 ай бұрын
@@9madness9 are there plugins???
@IsZomg2 ай бұрын
@@TomRocksMaths ChatGPT 3 is TWO YEARS OLD now lol you didn't do your research.
@toshiv-y1l2 ай бұрын
20:42 power of a point is a basic geometry theorem...
@patrickyao2 ай бұрын
Hey Dr Crawford - thank you for your video and insight. It seems that you are using the basic GPT-4 model to solve these BMO questions. There is a different model ChatGPT provides called o1-preview, which is specifically designed for complex, advanced reasoning and for solving difficult mathematical questions like this. If you use the o1-preview model, it takes much longer (sometimes more than a minute) before giving you a response, and it thinks in a far deeper way than the model you used here. With that model, I've tried feeding it questions 5 and 6 on the BMO1 paper, and it could solve them perfectly. I would therefore encourage you to try again with that specific model. I believe you have to have a ChatGPT subscription to access it, but I think they are going to release a free version. Anyway, thank you so much! P.S. It would have been better if you had simply uploaded a screenshot of the question, so diagrams could be included, and ChatGPT would be able to read the question from the image (probably better than it being retyped with a different syntax).
@TheDwarvenForge05Ай бұрын
ChatGPT has, on multiple occasions, told me that odd numbers were even and vice versa
@abdulllllahhh2 ай бұрын
On an unrelated note, I remember sitting this BMO paper last year and struggling but enjoying it. I recently started uni in Canada and have been training for the Putnam, and now I'm looking back at these questions both cringing and being proud of how much I've grown in just a year: how I've gone from finding these questions tough to now being able to solve them without much struggle. This is what I love about maths, how I can always keep improving with just some practice. P.S. Great video Tom, really enjoyed watching it.
@MorallyGrayRabbit2 ай бұрын
25:43 Obviously it just used the power of a point theorem
@loadstone51492 ай бұрын
Tom is not locked in. Every uni maths student knows if you take a picture of the question it will always give you the right answer
@micharijdes9867Ай бұрын
Facts, but for some reason it has a really hard time with topology
@gtziavelis2 ай бұрын
19:35 LOL, the diagram drawing looks like equal parts 1) M.C.Escher, 2) Indian Head test pattern from the early days of television, 3) steampunk, 4) Vitruvian Man. It's all sorts of incorrect, its confidence is a barrel of laughs, but it's lovely to look at and fun to contemplate how ChatGPT may have come up with that. My favorite part is the top center A with the additional 'side shield' A, and honorable mention to how the matchsticks of the equilateral triangle have three-dimensional depth and shadows.
@dmytryk78872 ай бұрын
In Q1 there seems to be an error in chatgpt's explanation. For example, it says "D" must be in position 7, 8 or 9 but "DOLYMPIAS" is a valid misspelling...every letter is one late, except for D (early) and S (correct).
@SgtSupaman2 ай бұрын
Yeah, its mistaken assumption that a letter must be within one position of its original location (in either direction) actually limits the number of possible permutations to 55. So it definitely didn't properly pair up its explanation with its answer.
@coopergates96802 ай бұрын
You caught it first. I'm surprised GPT could pull out the correct number while misunderstanding the terms along the way.
@GS-td3ycАй бұрын
@@coopergates9680 it literally did 2^9=2^8=256
@Justashortcomment2 ай бұрын
Why didn’t you use OpenAI’s new model o1, which is designed for these types of problems? Would be interesting to see the performance of o1-preview with these.
@gergoturan40332 ай бұрын
I've only watched up to the first question so far, but I came up with a different solution that's interesting enough to mention. Another way to think of the problem is dividing the characters into 2 subsets, one of them is the characters that were typed 1 late and the other is all the others that weren't. If all the characters are different, these 2 sets give enough information to reconstruct any possible spellings. Therefore, we just need to count all the ways to make these subsets. We know that in an n character long word the last character can never be 1 late. So we only have n-1 letters left to work with. [n-1 choose k] will give us a k sized subset. To get all possible subsets, we need to sum up for every case of k. [sum(k = 0..n-1)(n-1 choose k)] This is the n-1st row of Pascal's triangle. We know that the sum of the n-1st row of it is 2^(n-1). The word "OLYMPIADS" has 9 letters, therefore the answer is 2^8 which is 256.
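Anyone who wants to sanity-check the counting in this thread can brute-force it. Here is a small sketch (my own, based on the rule as described in these comments: each letter may appear arbitrarily early but at most one position after its true spot). It confirms the 256, confirms the 55 from the stricter "within one position either way" reading mentioned in a reply above, and confirms that DOLYMPIAS and SOLYMPIAD are valid misspellings.

```python
from itertools import permutations

WORD = "OLYMPIADS"  # all nine letters are distinct, so every misspelling is a permutation

def at_most_one_late(perm, word=WORD):
    # Rule as described in the comments: a letter may appear arbitrarily early,
    # but at most one position after where it sits in the correct spelling.
    return all(perm.index(ch) <= i + 1 for i, ch in enumerate(word))

def within_one_either_way(perm, word=WORD):
    # The stricter (mistaken) reading: every letter stays within one position
    # of its original spot, in either direction.
    return all(abs(perm.index(ch) - i) <= 1 for i, ch in enumerate(word))

perms = ["".join(p) for p in permutations(WORD)]
print(sum(at_most_one_late(p) for p in perms))       # 256 = 2^8
print(sum(within_one_either_way(p) for p in perms))  # 55 (a Fibonacci number)
print(at_most_one_late("DOLYMPIAS"), at_most_one_late("SOLYMPIAD"))  # True True
```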
@JavairiaAqdas2 ай бұрын
We can add the shape through the attachment icon in the left corner of the prompt box: just take a screenshot of the figure and upload it like that.
@rostcraft2 ай бұрын
Power of a point is actually real, and while I'm usually bad at geometry in olympiads, some of my friends have used it several times.
@deinauge78942 ай бұрын
OK, to use this at the point Z you need two lines through Z which each cut a circle in 1 or 2 points. Say this circle is centered at B with radius BA. You can conclude ZX*ZY = ZB*ZW (W being the point where the line ZB meets the circle). Since ZW = ZB - BA we get ZX*ZY = ZB*ZB - ZB*BA. This looks almost like what ChatGPT wrote. I'd give it a pass 😂
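For reference, here is the textbook statement of the power of a point with respect to a circle (the general theorem, not a claim about how it applies to the specific BMO configuration):

```latex
% Standard statement: Z is a point, the circle has centre O and radius r.
% For any line through Z meeting the circle at X and Y (with X = Y for a tangent):
\[
  ZX \cdot ZY \;=\; \lvert\, ZO^2 - r^2 \,\rvert ,
\]
% and for the particular line through the centre, meeting the circle at W and W':
\[
  ZW \cdot ZW' \;=\; (ZO - r)(ZO + r) \;=\; ZO^2 - r^2 \qquad (Z \text{ outside the circle}).
\]
```

So for the circle centred at B with radius BA used in the comment above, the power of Z is ZB² - BA², and it involves both points where the line ZB meets the circle, not just the nearer one.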
@bigbluespike56452 ай бұрын
I asked the o1-preview the geometry question and it approached the problem very analytically: by setting up a coordinate system, finding the points X, Y and Z by solving systems of equations for the lines and the circle, and finally showing BZ is perpendicular to AC using vectors and the dot product BZ⋅AC. I can't fully evaluate whether it's perfect, but I still think its solution was way better.
@bornach2 ай бұрын
@@bigbluespike5645 How does it do on the other problems that ChatGPT made a mess of?
@bigbluespike56452 ай бұрын
@bornach I didn't test yet, but i'll update you when i do
@SayanMitraepicstuff2 ай бұрын
You did not use the latest o1 series of models. I was trying to find where you mention which model you were using - I couldn't find an exact answer, and you have cropped the part that mentions the model and also haven't shown footage of the answer being generated, which would give away the model you were testing. o1 cannot generate images, which was the giveaway. Do the same tests with o1-preview.
@blengi2 ай бұрын
Yeah, this is all moot if it's not o1, which is OpenAI's first reasoning model; all the other LLMs are just level-1 chatbots by OpenAI's own definition.
@dan-florinchereches48922 ай бұрын
The second problem reminds me of Euclid's algorithm, and most notably the Chinese use of that method. If you have two vessels of volumes a and b, the smallest volume you can measure is the greatest common divisor of a and b. Using this logic, and the fact that any a_i and a_{i-1} are linear combinations of a_0 and a_1, it follows that gcd(a_i, a_{i-1}) = gcd(a_0, a_1); hence if two consecutive terms are consecutive integers, their gcd, and so that of a_0 and a_1, is 1.
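As a small aside on the jug/gcd fact mentioned here (an illustrative sketch of my own, not part of the BMO problem): the extended Euclidean algorithm produces coefficients showing that gcd(a, b) itself is an integer combination of a and b, while anything measurable with two vessels is necessarily a multiple of that gcd.

```python
def extended_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) and a*x + b*y == g."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

# Example: 9-litre and 6-litre vessels can only ever measure multiples of 3.
g, x, y = extended_gcd(9, 6)
print(g, x, y, 9 * x + 6 * y)  # 3 1 -1 3  ->  fill the 9, pour off one 6, and 3 litres remain
```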
@tontonbeber45552 ай бұрын
@2:41 There seems to be a problem in your statement of the problem. It says a letter can appear at most one position late, but as early as you like. So the third letter Y can also appear in the first position, am I wrong? Like MATHS can be typed TMASH, where the 3rd letter appears in 1st position...
@cheesyeasy12382 ай бұрын
0:24 Maybe I'm too panicky, but the mere mention of the MAT sends a shiver down my spine... hoping for a non-disaster tomorrow 🙏
@asdfghyterАй бұрын
20:00 I think it might have misunderstood the question. I think it interpreted "two apart" as "has two dots in between" despite the question being very clear about this
@jppereyra2 ай бұрын
Our jobs are safe, ChatGPT can’t do maths at all.
@asdfghyterАй бұрын
7:09 The second rule is incorrectly rewritten: the rewritten second rule ChatGPT wrote is just the rewritten first rule in flipped order and negated. The correct rewritten second rule would be a_i - a_{i-1} = 2 * (a_{i-2} - a_{i-1}). This is impossible if a_i and a_{i-1} are consecutive (2*n can never be ±1), so by induction the first case must hold for all i.
@Justcurios-f7fАй бұрын
I love the way you approach problems. You should try a Sri Lankan A/L paper.
@komraaАй бұрын
That image had me dying for 2 minutes straight😂😂
@MindVault-t3y2 ай бұрын
Thanks for coming to my school (I was one of the year 10s), the presentation was very interesting!
@obiwanpez2 ай бұрын
19:50 - “Wull there’s yer prablem!”
@3750gustavo29 күн бұрын
The new QwQ 32B preview model amazingly does better at these hardcore maths questions than bigger models; it outputs over 3k tokens for each question as it tries to brute-force a solution.
@KoHaN72 ай бұрын
Hi Tom, I really like the video! 😀 If you want to see a good performance in logic and reasoning from GPT, using GPT o1-preview seems to be the best option at the moment. It would be interesting to repeat the same tests with that more advanced model. It thinks before answering, which allows it to check its own answers before saying the first thing that comes to mind.
@TomRocksMaths2 ай бұрын
ooooo this is exactly the kind of thing I was thinking it needs!
@samarpradhan39852 ай бұрын
Me who also can’t do math: “Maybe I am ChatGPT”
@uenu723017 күн бұрын
I forget exactly how I phrased the question to ChatGPT, but it involved splitters with 1 input and 1-3 outputs, where the outputs are equally divided from the input, and mergers with 1 output and 1-3 inputs, where the output equals the sum of the inputs, and asked how to construct a sequence of splitters and mergers to end up with two outputs carrying 80% and 20% of the original source input. It said to split the first input with a 1-2 splitter (50%/50%), then split one of those outputs with another 1-2 splitter (25%/25% of the original input), then merge the remaining 50% with one of the 25%s, which it claimed equals the requested 80% output, leaving the other 25% as the requested 20%. In summary, it thinks that 50% plus 25% equals 80% and that 25% equals 20%. So, yeah, ChatGPT can't math.
@TheMemesofDestruction2 ай бұрын
I have found the WolframGPT is better at Maths than the standard ChatGPT. That said both often require additional prompting to achieve desired results. Then again it could just be human error on the prompter side. Cheers! ^.^
@Twi_5432 ай бұрын
When I did this practice paper I got the same thing as you for question 2, about how the difference either increases or stays the same at each step, so if it is 1 at term 2024 then it must be 1 at term 1 because each term is an integer. But I was confused when looking at the mark scheme, so I wasn't sure it was right. Thanks for explaining the mark scheme, it helped me understand it better 😁👍
@asdfghyterАй бұрын
the reason why the "diagram" it drew was such complete nonsense is that the model for generating images is completely different from the one used to generate text, so all the image generator is given is a text description from the gpt model, none of the text model's internal "understanding" of the question
@Jordan-gt6gd2 ай бұрын
Hi Dr, can you do a lecture series on any maths course you like, similar to the ones you did for calculus and linear algebra?
@hondamirkadirov55882 ай бұрын
Chatgpt got really creative in geometry🤣
@Justashortcomment2 ай бұрын
Hey Tom, Thanks for the video. BUT! ;) OpenAI will release the full o1 “reasoning model” soon. Currently we only have access to the preview. It would be fantastic to see a professional mathematician evaluate its performance, ideally with a problem set that isn’t on the internet or in books or has only been put on the internet recently.
@thisisalie-s1s2 ай бұрын
Hi Dr Tom! I am a fan from Singapore and I would like to inform you about the Singapore A level, which is known to be harder than the IB HL maths paper. I think that you would probably enjoy doing that paper
@ramunasstulga82642 ай бұрын
Nah jee advanced is easier than IB HL, lil bro 💀
@thisisalie-s1s2 ай бұрын
@@ramunasstulga8264 If you are so retarded that youre unable to even do both paper before making a valid criticism you shouldn't even comment. I find it baffling someone like you is even watching this video.
@jesseburstrom59202 ай бұрын
First, you did not use o1-preview, which would be more interesting. Also, zero-shot is not a limit humans work under: I don't have to give my very first thought in a university exam; you typically have an hour or more per question. So do test with o1, and give natural rather than leading critique, which might give better results. For example, just try to convince it to look back into its own arguments. Would that be simple to do and give better results? Hmm, no answer there from me, but yes, they will get better; the question is when. Great channel Tom!
@GodexCigas2 ай бұрын
Try using GPT-o1-preview - It uses advanced reasoning.
@ExzyllАй бұрын
Yeah I was gonna say that he will be shocked
@djwilliams82 ай бұрын
I found it works a lot better when you upload a photo of the question. Just take a screenshot with the snipping tool and paste it in.
@lupusreginabeta33182 ай бұрын
1. The prompt is definitely upgradable 😂 2. You should use the new preview model o1; it is quite a lot better than 4o
@mujtabaalam59072 ай бұрын
Is this GPT 4o or 4o1?
@caludio2 ай бұрын
I think this is a relevant question. O1 is probably a better "thinker"
@TomRocksMaths2 ай бұрын
the plan here was to use the free version as it is what most people will have access to, so I wanted to warn them to be careful when using it.
@mujtabaalam59072 ай бұрын
@@TomRocksMaths That's fair, but you should definitely do a video where you compare the two. Or see if you can beat 4o1 at chemistry, physics, or some other subject that isn't your speciality
@kaisoonloАй бұрын
Try using GPT o1-preview. Unlike GPT-4o, it excels at STEM questions due to its "advanced reasoning"
@justanotherinternetuser47702 ай бұрын
A British man saying "math" instead of "maths" is a thing I never thought I'd see in my life
@eofirdavid2 ай бұрын
ChatGPT's "proof" for the first question was wrong. According to its "step 2" the answer should be 2^2 * 3^7 which is false. Also, the possible position are wrong since the n-th letter can be in any of the 1,...,n+1 position (except for the last letter which is in 1,...,n). I have no idea why it needed to mention Young Tableaux in step 3, since even if they are related somehow, this is a simple problem that doesn't need anything advance in order to solve it. Finally, in step 3, without a proper explanation it suddenly only gives 2 possibilities for each letter, and for some reason the letter 'L' has either 2 or 3 possible positions. Even if you ignore this, and give 2 positions for each letter you get 2^9 and not the 2^8 correct answer.
@FlavioGaming2 ай бұрын
L has 2 possibilities AFTER placing O in position 1 or 2. Y has 2 possibilities AFTER placing the letters O and L in their positions and so on...
@FlavioGaming2 ай бұрын
For the last letter S, it should've said that since we've placed 8 letters in 8 positions, there's only 1 place left for S.
@eofirdavid2 ай бұрын
@@FlavioGaming You are right. Without knowing the positions of the previous letters, the n-th letter can be in positions 1,...,n+1 (which seems to be what step 2 meant to say), and once you assume the previous n-1 letters are placed, you only have 2 possibilities (which should have been step 4), except for the last letter, which has only one possibility. In any case, while ChatGPT somehow managed to give the right final answer, everything in between seems like guesses. This sort of proof is something I would expect from a student who saw the answer before the exam, didn't understand it, and tried to rewrite it from memory, which, granted, is how ChatGPT works. I would not call this "mathematics", and I have yet to see ChatGPT answer any maths problem correctly unless it is very standard and elementary and the type of question you expect to see in basic maths textbooks.
@jursamaj2 ай бұрын
On the unreliable typist: I feel ChatGPT mischaracterized the possible positions of letters (or I'm drastically misunderstanding the rules). In steps 1 and 2, it said 'S' can only be in the last 2 positions, but 'SOLYMPIAD' appears to fit the rules ('S' is way early, and every other letter is 1 late). It may have gotten the right answer, but its argument was flawed. On the polygon: step 1 is false. Convex with equal sides does *not* imply the vertices lie on a circle. A rhombus is convex and all its sides are equal, but the vertices are *not* on a circle. This alone invalidates all the rest of the proof, which relies on the circle. Also, in step 4 part 'n=5', the 3 diagonals do *not* form an equilateral triangle. Nor would it "ensure … a regular polygon" if they did. The important thing to remember is that LLM "AI" isn't *reasoning* at all. It's just stringing a series of tokens together based on how often it has seen those words strung together before, plus a bit of randomness.
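The rhombus counterexample is easy to check numerically. Here is a quick sketch (my own illustration of the point about step 1, using the hypothetical rhombus with vertices (±2, 0) and (0, ±1)): all four sides are equal, yet the four vertices do not lie on a common circle.

```python
import numpy as np

# Non-square rhombus: equal sides, convex, but NOT cyclic.
pts = np.array([(2.0, 0.0), (0.0, 1.0), (-2.0, 0.0), (0.0, -1.0)])

# Side lengths: all equal to sqrt(5).
sides = [np.linalg.norm(pts[i] - pts[(i + 1) % 4]) for i in range(4)]
print(sides)

# Four points are concyclic iff this 4x4 determinant vanishes.
M = np.array([[x * x + y * y, x, y, 1.0] for x, y in pts])
print(np.linalg.det(M))  # nonzero (~24), so the vertices are not on a common circle
```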
@dominiquelaurain64272 ай бұрын
@20:00: as a Euclidean geometry addict... I like the diagram a lot ;-) "Power of a point" is of course not an accurate name on its own; I only know the "power of a point with respect to a circle". "Please draw a sheep": I tried some months ago to get a generated picture, but no way. They must be taught the compass-and-ruler techniques.
@CosmicAerospace2 ай бұрын
You can input images into the prompt by copy-pasting a screenshot or placing an attachment onto the prompt :)
@suhareb92522 ай бұрын
The way chatgpt makes Tom wonder is the same way I make my maths teacher wonder about my answers in exams 😂
@nicoleweigelt39382 ай бұрын
Looked for something like this after I got frustrated it was getting algebra and calculus wrong 😅 Thanks for the vid!
@coopergates96802 ай бұрын
Question 1, step 2, doesn't "SOLYMPIAD" fit the constraints? Same with "OLSYMPIAD"? At least some cases with a letter appearing at least 2 slots early seem omitted. D should not be restricted to 7 or later and S should be allowed before 8, for instance.
@vasiledumitrescu9555Ай бұрын
I use it to study some theoretical stuff; it's good at explaining theorems and definitions and producing good examples. It can even prove things pretty well, because it's not actually doing the proof but just taking it from its database and pasting it to you. Of course it makes mistakes now and then, but they're so dumb they're easy to catch. And by "using it" I mean: as I'm studying from my notes or books, I ask ChatGPT things from time to time in order to understand the mind-bogglingly abstract stuff I have to understand. Overall it has proven to be a fairly useful tool for learning math, at least for me, as I pursue my bachelor's degree in math.
@Neuromancerism2 ай бұрын
Yes, you can copy the diagram. That's no issue at all. You can just copy and paste an image into ChatGPT (or click the image button) as long as you have access to full 4o, though after a few prompts it'll downgrade to 3 unless you pay accordingly.
@lipsinofficial36642 ай бұрын
You can UPLOAD PDFS
@Rubrickety2 ай бұрын
I think Numberphile did a video on the Power of the Point Theorem and the counterintuitive properties of the Perpenuncle.
@massiveastronomer10662 ай бұрын
I have this test coming up on the 20th, these questions are brutal.
@snehithgaliveeti32932 ай бұрын
Tom can you try the TMUA entrance exam paper 1 and 2
@nightskorpion13362 ай бұрын
Yesss I've been asking this too
@dean5322 ай бұрын
It works with the math needed for engineering, but not with what we come up with in theoretical physics: there we rely on concepts fresh out of pure math and a mathematician's mind. How about showing ChatGPT o1 getting literally tossed into the storm with G(n) 😅 20:05 Yeah, Sabine and the rest don't like it either. Mathos is pretty decent compared with o1, but also fails later.
@SmashachuАй бұрын
You didn't use the newest model, o1, which is significantly better at mathematics in every way.
@yehet87252 ай бұрын
Whenever I ask ChatGPT for help with math questions, I almost always notice something has gone wrong. So I guess a tool made to help me get the question right instead helped me learn to spot when things are wrong :3 (this makes sense in my head, okay)
@KaliFissure2 ай бұрын
ChatGPT can't draw a simple cardioid. Even after I gave it the formula.
@ootakamoku2 ай бұрын
Would have been much more interesting with o1 preview model instead of 4o
@MorallyGrayRabbit2 ай бұрын
One time, as a test, I asked it what an abelian group was, and it told me all abelian groups are dihedral groups and spat out a bunch of complete nonsense math. I was so sad, because at first I saw all the math and thought it might actually be real.
@OzoneTheLynx2 ай бұрын
I tried getting Gemini to draw its 'solution' to 3) and it responded with a link to the solutions XD.
@HBtu-f7y2 ай бұрын
Did you consider trying their o1 model
@ValidatingUsername2 ай бұрын
Have you ever had a question that used the arc length of equal sized circles to solve the question?
@HITOKIRI01Ай бұрын
Can you repeat the exercise with o1-preview?
@francoislanctot24232 ай бұрын
You should try o1 Preview, which is supposed to be very good at logic and reasoning.
@PrajwalDSouza2 ай бұрын
O1? O1 preview?
@Hankyone2 ай бұрын
This oversight makes no sense, is he not aware these models exist???
@TomRocksMaths2 ай бұрын
the plan here was to use the free version as it is what most people will have access to, so I wanted to warn them to be careful when using it.
@PrajwalDSouza2 ай бұрын
@@TomRocksMaths Makes sense. :) However, like I mentioned earlier, given the title of the video, it might be apt to include a discussion of o1 or draw a comparison with o1. Damn. I sound like a reviewer now. 😅
@Neptoid2 ай бұрын
The new ChatGPT o1 doesn't have this problem; it can reason about math at the research level
@mattschoolfield47762 ай бұрын
Not if it's an LLM
@IsZomg2 ай бұрын
@@mattschoolfield4776 lol then neither can 80% of humans
@eofirdavid2 ай бұрын
@@IsZomg This is probably the most accurate way to think about ChatGPT... Yes, its answers look like it is trying to remember and rewrite an answer it has seen before but never understood; however, as someone who has marked many maths exams, that is not too far from the average student's answers. So in this sense, ChatGPT does exactly what it is supposed to do: answer like a human...
@IsZomg2 ай бұрын
@@eofirdavid o1 scores 120 on IQ tests which means it's beating more than half of humans now. There's no reason to think the progress will stop either.
@bornach2 ай бұрын
@@IsZomg Then create a reply video demonstrating that o1 can solve all the math problems that ChatGPT failed at in Tom's video. This would be very instructive for the Tom Rocks Maths audience
"Cirbmcircle and Perpenimctle" is the title of a lost work by Rabelais. Unfortunately we will never read it because it is lost.
@Henrix19982 ай бұрын
The newlines might confuse it slightly
@CodecYT-w4n2 ай бұрын
I once asked ChatGPT to use my algorithm to find the number of prime numbers from 1 to 173. It said 4086.99.
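For the record, the true count is 40. A quick sieve (my own sketch, nothing to do with the commenter's unspecified algorithm) confirms it:

```python
def prime_count(n):
    """Count primes <= n with a simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return sum(sieve)

print(prime_count(173))  # 40, not 4086.99
```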
@srikanthtupurani63162 ай бұрын
The way ChatGPT answers questions makes us laugh. But it has the capability to understand hints and solve the problems.
@dominiquelaurain64272 ай бұрын
@35:00: what??? An equilateral convex polygon is NOT necessarily cyclic (inscribable in a circle). An ordinary equilateral kite (i.e. a rhombus that is not a square) cannot be inscribed in a circle. ...Only one word comes to my mind: "bluff" ;-)
@Mathsaurus2 ай бұрын
It feels like ChatGPT is still quite a way from being able to solve these sorts of problems. I made a similar video recently putting it up against this year's (2024) Senior Maths Challenge, and I found its results quite surprising! kzbin.info/www/bejne/maOwlndpbLZnb7c
@JavairiaAqdas2 ай бұрын
Hi @TomRocksMaths, will you upload a celebration video for 200k subscribers?
@TomRocksMaths2 ай бұрын
it's coming before the end of the year :)
@juanalbertovargasmesen25092 ай бұрын
Power of a point is very much a real theorem. It is involved, for example, in Geometrical Inversion through a circle. ChatGPT completely misapplied it though, and the formula it provided has nothing to do with it.
@Anokosciant2 ай бұрын
power of points is a niche set of tricks for olympiads
@Calcprof2 ай бұрын
A while back I saw a "research" paper written by CHatGPT about an issue in game theory It was absolute nonsense, the vocabulary and sentence structure was all OK, but the "logical steps" were all outright nonsense.
@arthurdt60252 ай бұрын
now time for the o1-mini model if you have premium
@leefisher63662 ай бұрын
7:21 - Surely this is meta. How can AI deal with maths involving Ai?
@tambuwalmathsclass2 ай бұрын
No AI is as good as humans when it comes to Mathematics. AIs failed so many prompts I've given them
@jppereyra2 ай бұрын
Comparing Gemini vs ChatGPT, for the time being Gemini is worse than ChatGPT. However, Gemini doesn't limit the number of questions you may ask, but ChatGPT does. That could be a decisive factor in the dominance of Gemini vs ChatGPT, depending on how many of us start teaching Gemini or ChatGPT to do maths properly. Do you want to be made redundant? That is the main question!
@floretion2 ай бұрын
The obvious problem confusing ChatGPT is your use of terms involving letters "a_i" when describing the equations :)
@alternativegoose71222 ай бұрын
Do you ever mark IGCSE papers?
@rudolf-nd9nuАй бұрын
Math so hard even chatgpt ain't mathing
@justadude721Ай бұрын
Hi, please try Singapore's H2 math and H2 further math A level papers
@TomLeg2 ай бұрын
Khan Academy explains the "power of a point" theorem.
@PW_Thorn2 ай бұрын
Next time I have to argue about anything, I'll say it's "by the power of a point theorem!!" Thanks ChatGPT!!!