I Used AlphaFold 3 To Cure Cancer (Tutorial)

4,730 views

Siraj Raval

1 day ago

Comments: 107
@9jatechie · 3 days ago
Well done Siraj. You remain a great mentor to me. - Joseph, Wizard and Dean of the Lagos School of AI and Head Dean of the School of AI West Africa.
@Pesto-026 · 2 days ago
Congratulations Siraj! You produced a "research paper" that looks more like someone's notes than an actual paper, with references that simply do not exist (references 4 and 5, for example, either do not exist or can't be found in the journal/volume/issue referenced), not to mention that there are no citations in the actual body of the "paper", and figures that are just straight gibberish. It has absolutely zero scientific value. Please leave science to people who are actually willing to put in the time and effort required to do meaningful work to advance knowledge.
@jyothishkumar3098 · 18 hours ago
Congratulations to you for painstakingly writing (or vomiting) your honest opinion. Now kindly fuck off.
@SirajRaval · 13 hours ago
Thanks for the comment! I'll make sure to let AI know it needs to spend more time in the lab next time. 😉 Look, the point of this video wasn't to win a Nobel Prize; it was to show how far AI has come in empowering anyone to tackle big challenges, not just those with fancy labs or endless free time. If the figures and references offended your inner academic, I get it. But instead of gatekeeping, maybe let's talk about how we can use these tools to make science faster, more accessible, and, dare I say, more exciting? Or you can just keep critiquing from the sidelines. Your call. ✌
@botsandbytes · 2 days ago
While the use of AI in drug discovery is fascinating, this video needs more nuance. Using AlphaFold 3 to model NPM-ALK protein interactions is an interesting approach, but claiming to design a cancer treatment in hours oversimplifies the complex reality of drug development. The idea of democratizing science is admirable, but we should acknowledge that computational predictions are just the first step. Even with perfect protein structure predictions, a drug candidate needs extensive testing to prove safety and effectiveness. The enthusiasm for making science accessible is great, but perhaps future videos could better highlight the difference between initial computational drug design and the full development process. Also, the sponsored computing platform promotion felt a bit forced given the serious subject matter. Still, it's exciting to see how AI tools can accelerate early-stage drug discovery. Looking forward to seeing how this develops with proper validation. 🧬
@the_master_of_cramp · 3 days ago
Researchers who are trying to cure cancer can now use these sorts of AI tools, as you demonstrated! Though of course it would be important for someone with domain knowledge to fact-check your paper and verify this procedure.
@Web3Dre · 3 days ago
What a time to be alive!
3 days ago
Using AI for hypothesis generation is sometimes useful. But publishing an AI-written paper that you don't fully understand is wrong. Research papers need to be scientifically rigorous and preserve scientific integrity. This is not how science should be done.
@SirajRaval · 3 days ago
What's your solution? This channel and community are about using AI to find novel solutions. Peer review has been broken for years. AI agents using blockchain for peer review would be cool.
@1122slickliverpool · 3 days ago
"Research papers need to be scientifically rigorous and preserve scientific integrity." How long does that usually take?
@QuiteSomeone · 3 days ago
Absolutely. You cannot publish something as a solution unless it can be verified.
@SirajRaval · 3 days ago
@QuiteSomeone The current verification system of peer review is broken; we need AI peer review with blockchain integrity.
@cirusMEDIA · 3 days ago
@SirajRaval ...dude, that was the most unacademic comment you could have chosen. While the first part can be debated, the second part will not get your contributions taken seriously by anyone important in the field.
@Death_Networks · 3 days ago
I'd like to see if a lab would actually do something with this. I've heard of people doing it before, and it looked impressive to a normie, but it was complete gibberish with citations that either made no sense or didn't actually exist. Even if this isn't useful and is only one step in the right direction, it's still a step we need to reach a point where we can have personalised low-risk medicine that actually helps people.
@SirajRaval · 3 days ago
You totally get it. That's exactly right: this stuff used to be gibberish that looked intelligent, and now it's actually pushing the envelope. Even a little utility would go a long way here. Thanks for sharing.
@Ceelvain · 2 days ago
Oh Siraj, Siraj! The more time passes, the more it looks like you're slipping back toward your old habits, the ones that gave you such a hard time and basically broke your career. At least this time you're not trying to pass the work off as your own. That's an improvement. :)
@jyothishkumar3098 · 18 hours ago
Why do all of you have the same filthy attitude? 1. Sarcastic praise. 2. Passive-aggressive expressions. His career isn't broken, aside from some of you lingering around as pests making that claim.
@SirajRaval · 13 hours ago
Fascinating how in 2019 I was criticized for 'plagiarizing' by using GPT-2, but now everyone uses Copilot/ChatGPT in their workflow. The culture's evolved: AI assistance isn't 'plagiarizing' anymore, it's just smart engineering. Welcome to 2024! 😉
@DemetrioFilocamo · 2 days ago
The crazy thing is that you are being serious 😅
@SirajRaval · 13 hours ago
Yes, AI makes the impossible, possible.
@DemetrioFilocamo · 12 hours ago
@SirajRaval I want to believe you are trolling 😅 In that case it's working 😂
@peterpancik · 13 hours ago
When someone talking about a highly scientific topic tells you that you can absolutely do the same, it's a big red flag!
@SirajRaval · 13 hours ago
OpenAI themselves call their o1-preview model a "PhD in your pocket". If you have a PhD in your pocket, you can do Science.
@edgardcz · 3 days ago
Kudos! Just get it peer-reviewed.
@leonelvega7239 · 15 hours ago
Awesome and very inspiring. These are the kinds of use cases that really motivate me and inspire me to think that AI has not come to displace human beings but to unlock new forms of human creativity and ingenuity. I didn't have enough technical context to know how to combine these tools effectively to achieve things like you've shown us. The important thing is that now I do; you've shown me a way forward, and I can continue to learn and build upon this foundation. Thank you and a big hug to you, Siraj!
@Tideo123 · 3 days ago
You are so amazing Siraj. So much to learn from you.
@hrk1 · 2 days ago
You inspired me, Siraj; let me see when I can restart my Biotech dream. Can I have the link to your paper?
@SirajRaval · 13 hours ago
So happy to hear that @hrk1, here's a direct link to the paper: github.com/llSourcell/DualStrike_AI_For_Lymphoma/blob/main/DualStrike_%20A%20Novel%20AI-Designed%20Treatment%20for%20Blood%20Cancer.pdf
@marilynlucas5128 · 2 days ago
Siraj Raval the great! One of the greatest geniuses of our time. I love you bro. Keep tearing down the boundaries! They'll get jealous but just don't care. Keep pushing. You're a born winner! Nothing or no one can stop your genius mind from prevailing! You did it again!
@SirajRaval · 2 days ago
Thanks Marilyn, 2025 is comeback season 🔥
@marilynlucas5128 · 2 days ago
@SirajRaval Just make sure you don't create a virus!
@Discoverer-of-Teleportation · 9 hours ago
1820: quinine discovered for malaria, which mosquitoes carry.
2025: ********** discovered for dengue, which mosquitoes carry.
Challenge accepted 🔥🔥
@thomasproshowski7538 · 5 hours ago
Well done, dear Sir. I hope you won't stop there and that you'll keep growing your accomplishments in medicine with AI's help.
@QuiteSomeone · 3 days ago
This is nonsense. "Anybody can be a scientist". Correction: anyone can be a scientist with rigorous training and scholarly education. How can I say so confidently that you are 100% wrong in this video? I have a PhD in this domain, sir. In silico-generated proteins often don't translate to real life for a number of reasons. If it were so easy to design in silico drugs and proteins using AlphaFold, wouldn't the company that owns AlphaFold (DeepMind) be doing that every day? AI will only give a mish-mash of existing training data. It can NEVER replicate the creativity and logical reasoning of the human mind. AI can be an informed helper at best, not a replacement. Stop giving videos clickbaity titles. Don't know why I expected better from you. This video is a case study in garbage in, garbage out.
@SirajRaval · 2 days ago
Yes, anyone can be a scientist in 2024 with AI. Your PhD doesn't change the fact that ChatGPT o1 has PhD-level knowledge across every academic domain. In the past, humans had to go to college for an education. They had to get a PhD for specialized knowledge and to be able to make a meaningful contribution. AI democratized it all. In silico methods have gotten better since 2018, and Demis Hassabis of DeepMind said on stage last week that classical Turing machines would be enough to simulate biochemistry. No need for quantum. DeepMind just won the Nobel Prize, so your point about AI as a helper instead of a replacement is a losing battle. Sad you don't have the vision to see the future of science, but the future doesn't care.
@QuiteSomeone · 2 days ago
@SirajRaval Demis Hassabis and John Jumper didn't win the Nobel Prize for curing diseases. They were recognized for AlphaFold, a revolutionary technology that solved the protein-folding problem, a significant computational achievement. However, the leap from computational predictions to real-world therapeutic applications is monumental, and we have yet to see the realization of that promise. A Nobel Prize signifies potential, not the end of a journey. If you genuinely believe that GPT or Claude has "PhD-level knowledge," then you either are utterly deluded or fundamentally misunderstand what AI is and does. These are language models, trained to generate text based on patterns in their training data. They don't "know" anything. Logic, Boolean reasoning, and the nuanced critical thinking needed in the real world mean nothing to them; they simply regurgitate or interpolate from the text they've seen. They can't understand, hypothesize, or contextualize the way a trained scientist can. Let's not even pretend they're infallible. AI can't consistently generate correct code for something as straightforward as a moderately complex plot, let alone design drugs or peer-review scientific findings. Trusting them to make critical advancements in science without human oversight is, frankly, dangerous and idiotic. Not to forget, AI was created by humans, so your disrespect and disregard for human intelligence and creativity is astounding. Your blind faith in AI as an omnipotent replacement for human intellect reeks of ignorance or deliberate dishonesty, likely because you're chasing clout or monetizing controversy for clicks. Either way, it's intellectually bankrupt. You are spreading misinformation for your own gain. Science demands rigor and integrity, neither of which you're displaying here.
@AZTECMAN · 2 days ago
@QuiteSomeone Heyya, so I just want to respond here a bit from the sidelines. This is of course just my personal opinion, though I've read a few papers, so that counts for something, right? It's not at all straightforward to show that Claude, ChatGPT (or any other LLM, for that matter) is always simply recapitulating training data. It may be the case for a particular sample that this assumption holds; however, I'd like to label it the 'weak AI theory'. It's like saying that the model is purely a moron. To prove the weak AI theory, you would need to properly investigate the null hypothesis, which I'll label 'useful disentanglement'. You could assert that for a particular model, for a particular domain, the representations of relevant features are disentangled. If what you are generating is pictures of people, then the maximally disentangled model would be functionally equivalent to a character customization application in a video game. It is very likely that Stable Diffusion has an implicit representation of 3D models for various Hollywood actors. That is just one type of disentanglement. (Spatial rotation is a 'slider'.) So, let's now ask: to what extent is the representation (per a given model) of some particular aspect of cancer relevant to someone like you in the academic discipline? The answer to that is going to depend heavily on how the feature space is represented for whatever domain question we are looking at. It also depends heavily on exactly when we ask the question, since these models are updated often. Let's not neglect the question, "how well will these comments age?" Finally, I agree that this is not a useful research paper in the direct sense. Nobody is going to accept a paper into their journal if the author says, "I don't even know what that means" during their presentation. However, you should rein in your 'PhD as a bludgeon' style of discourse here, since we are all a bunch of cavemen anyway.
This paper is useful as a stick in the mud: not as the solid foundation of a building, but as a marker and 'sign of the times'. PS: I personally believe that most large models these days have a mixture of entangled (overfit) and disentangled feature representations lying dormant within. Funny thing, but I think this is true of humans as well.
@QuiteSomeone · 2 days ago
@AZTECMAN I appreciate your input, so here's an attempt to address the points you made. There is a critical distinction between pattern recognition and true comprehension. LLMs like Claude or GPT generate text by interpolating patterns from massive datasets. They don't understand causality, context, or the nuances of scientific reasoning. They excel at mimicry, not originality or reasoning. This is a fundamental limitation that can't be glossed over by philosophical musings about "weak AI theory" or feature disentanglement. Speaking of disentanglement, your argument conflates disentangled feature representations (e.g., spatial rotation for image generation) with domain-specific utility in fields like cancer research. Just because Stable Diffusion might have an implicit representation of 3D models for actors does not mean it has any relevance to disentangling the molecular complexities of cancer biology or drug discovery. Representing patterns is not the same as understanding their significance or causality, especially in a field where emergent, dynamic systems like protein-protein interactions are at play. Yes, models are updated, but you've completely ignored the elephant in the room: LLMs are fundamentally incapable of experimental validation, hypothesis generation, or causal reasoning. Without these capabilities, their role in hypothesis-driven fields remains supplementary at best. Also, you are being self-contradictory about LLMs being a "sign of the times" and "sticks in the mud." Marking progress is fine, but progress markers are not inherently useful tools for scientific advancement. Cancer researchers and scientists need tools that can help test hypotheses, model biological systems, and predict experimental outcomes, tasks far beyond the current or foreseeable capabilities of LLMs. Romanticizing AI as some kind of philosophical zeitgeist completely sidesteps its current practical limitations. Then there's also the false equivalence between humans and AI.
Saying "humans also have entangled and disentangled features" is an inadequate analogy. Humans build their representations through lived experience, reasoning, and iterative learning, integrating context, causality, and meaning. AI, on the other hand, extracts patterns from static datasets without any actual comprehension. Humans don't get stuck in a loop or give up when convergence isn't reached. Humans can self-correct, self-tweak, and pivot if and when needed. As it often is in science.
@abadidibadou5476 · 3 days ago
Of course you treated cancer…
@БулатАшимов-и3с · 1 day ago
Salam from Kazakhstan. Siraj, I've been following you since late 2018 - early 2019. Cheers
@SirajRaval · 13 hours ago
Salam brother
@hanskwan4645 · 3 days ago
The future is unlimited
@Elonbaba2624 · 1 day ago
I am working on dengue drug discovery on my laptop with AI in Google Colab. The challenge is the database preparation. If I succeed I will share it with you, Siraj.
@SirajRaval · 13 hours ago
Please do, fantastic project.
@Elonbaba2624 · 10 hours ago
@SirajRaval We have discovered medicines for every disease to date except dengue. Cancer/HIV/Alzheimer's are man-made diseases; we just have to take precautions. So only dengue is left, and death is almost certain due to bleeding. Mosquitoes are everywhere in the world; anytime, anywhere, any one of us can be infected with dengue, so a medicine is urgently needed here.
@Elonbaba2624 · 10 hours ago
@SirajRaval Please give your mail ID; we can work on this virtually, make history by pushing the authorities for a clinical trial, and put our names in history. I only found 6 research papers from people who worked on this on the whole internet. Huge scope, crazy!! It's the best time for dengue research on a computer. BTW, I am from India, an unemployed B.Tech CSE graduate 😂😂
@Elonbaba2624 · 10 hours ago
@SirajRaval I have found a probable scientific method of teleportation for interstellar travel and shared it with Elon Musk and the space agencies of all countries for research in low Earth orbit, after being inspired by Newton during corona. I can share it with you over Gmail.
@thecandidcrood · 3 days ago
OG
@EYErisGames · 1 day ago
How is this a tutorial?
@SirajRaval · 12 hours ago
Was this hard to follow? I'm open to feedback.
@hrk1 · 2 days ago
"AI is here forever and everywhere"
@Ceelvain · 2 days ago
LLMs are very useful tools, including in research. But one thing they're not is a substitute for knowledge and skills. Have you tried asking them about advanced topics you are already fairly knowledgeable about? You'd quickly realize that when you ask unusual questions about hard topics, LLMs are very prone to making mistakes. Sometimes subtle mistakes among correct information, sometimes blatant bullshit.

See at the beginning, when you asked for a cancer protein that was not in the database: you checked and found a few results that were wrong. But then you blindly trust the FASTA sequence it gives you. You blindly trust its exposé of the literature; you blindly trust the statistics it gives you (80% effectiveness for 5-year survival). 8:40 is where you went completely off the rails. You said it yourself: "I have no idea what all that means. But it does, right?" I truly hope you're not trying to publish this paper anywhere. The academic world doesn't need more low-quality papers.

Just as a reminder: research is the activity of creating knowledge. And building knowledge takes time. There's no shortcut because (for now) it requires that someone understands it and everything that came before. If all you do is use LLMs and other automated tools, you're not creating knowledge, you're churning the existing one.

Maybe the whole process you used here could be used to design new drugs. And if it proves its effectiveness, it could warrant its own paper. (*One* paper, for the *whole* process.) And if it truly works, you might even host its results on a purpose-built website. Just like DeepMind doesn't publish a paper each time a new protein has been folded.

In a distant future, when AIs make fewer mistakes, maybe we could offload the "knowledge" part to AIs. Have AIs build knowledge for AIs to read and criticize and confirm (or not), and allow us to query it. But we're far from there.
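The "blindly trust the FASTA sequence" point above is concrete enough to sketch. Below is a minimal, illustrative Python sanity check of the kind a reader could run before trusting a generated sequence: parse the record, reject non-standard residues, and flag implausible lengths. The record shown is a placeholder, not the actual NPM-ALK sequence from the video, and format checks like these only catch gross errors; the sequence would still need to be compared against the canonical UniProt entry.

```python
# The 20 standard amino-acid one-letter codes.
VALID_AA = set("ACDEFGHIKLMNPQRSTVWY")

def parse_fasta(text):
    """Parse a single-record FASTA string into (header, sequence)."""
    lines = [ln.strip() for ln in text.strip().splitlines() if ln.strip()]
    if not lines or not lines[0].startswith(">"):
        raise ValueError("missing FASTA header line")
    # Header is everything after '>'; sequence lines are concatenated.
    return lines[0][1:], "".join(lines[1:]).upper()

def check_sequence(seq):
    """Return a list of problems found; an empty list means the basic checks pass."""
    problems = []
    bad = set(seq) - VALID_AA
    if bad:
        problems.append("non-standard residues: " + "".join(sorted(bad)))
    if len(seq) < 50:
        problems.append("suspiciously short (%d aa)" % len(seq))
    return problems

# Illustrative record only -- NOT the real NPM-ALK fusion sequence.
fasta = """>illustrative_record placeholder, not the real NPM-ALK sequence
MEDSMDMDMSPLRPQNYLFGCELKADKDYHFKVDNDENEHQLSLRTVSLGAGAKDELHIV
"""
header, seq = parse_fasta(fasta)
print(header, check_sequence(seq))
```

A hallucinated sequence can of course pass format checks like these while still being biologically wrong, which is why comparison against a curated database remains the real test.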
@SirajRaval · 12 hours ago
Yes, I experience AI hallucinations daily. You're right to say the FASTA sequence could've been a hallucination, but that's what in vitro verification is for. Drug discovery is a process of trial and error. The existing drug discovery process is broken; millions die from FDA-approved drugs already, so any innovation in this field has the potential to save lives. The academic system is fundamentally flawed, with biased peer review processes that have hindered scientific progress for years. This paper will make history, just like the Neural Qubit was misunderstood in 2019. There are absolutely shortcuts to how knowledge can be created; I'd suggest reading the book "The Beginning of Infinity". AI isn't getting better in the "distant future": the CEOs of Anthropic, OpenAI, and DeepMind have all suggested AGI in 2025. Welcome to the future.
@Ceelvain · 10 hours ago
@SirajRaval There's much more you could do to make your work higher quality without involving a wet lab. Like checking the FASTA sequences. Checking the factual information on Google Scholar. Checking whether the papers making those claims are too controversial by checking the citations. Checking whether the proposed solution already exists, or whether other solutions exist, and comparing them to yours. (NPM-ALK yields more than 8,000 results on Google Scholar.) Now imagine that any of those steps went wrong. What does that say about your work?

Peer review is broken in kind of the same way traffic laws are broken. Millions die on the roads every year. There are definitely some things that could be done better, and AI may help; there's no denying that. Yet the solution is likely not to ignore traffic laws as a whole, just like the solution to peer review is likely not to ignore and bypass it as a whole. Don't get me wrong, I totally agree the peer review system has issues. Like journals selecting papers based on the effect on the impact factor itself rather than on the intrinsic quality of the paper. The paywalls. Publication bias prioritizing positive results. And of course, the cognitive bias of aversion to novelty, which penalizes novel ideas. But what do you propose instead? That we leave peer review to AIs? That we let AI produce only quality papers? But then, how do we make sure it does what we want it to do? How do we evaluate its performance compared to the current system? How do we *know* AI does better?

It's a bit tangential to the video, but: what is there to misunderstand about your Neural Qubit paper? It doesn't matter if it was written by hand or generated by a GPT model; almost all of it came from existing papers. From an objective point of view, almost no novel information was contained in it, making it mostly uninteresting for anyone knowledgeable on the topic (like the target audience).
Don't take it the wrong way, but when you came back after your hiatus, you were very demure, very mindful. Now it looks like you're back to having an oversized ego, not acknowledging your own shortcomings or understanding other people's points of view. Seeing yourself as the one who's right and misunderstood. Don't get me wrong, I've also always had a hard time navigating the unstable equilibrium between oversized ego and depression. I'm just trying to push you back up the unstable hill. ^^
@SirajRaval · 10 hours ago
@Ceelvain I have a stronger sense of self and more of an emotional capacity to respond to criticism directly than I did 5 years ago. I'm 33 years old today and going to be a dad this month :) Yes, I was misunderstood back then and still am today, albeit much less. My proposal is to build an online, decentralized AI that acts as a more efficient alternative to the existing biased peer review system, one that validates ideas in milliseconds. I've added enough disclaimers and explanations in the video to ensure no one gets harmed, except for those who want to gatekeep who gets to do science.
@vinisouza · 14 hours ago
It's really interesting to see your line of thought, but I couldn't let it pass that you never go back after your experiments to publish the real results. I do not expect it to happen with this video, of course; any researcher, doctor, or medical committee would be reckless, if not criminal, to try this unverified work in the real world. I'm actually talking about your investment robots, for example. You show them in the video, lose some money, mention that it almost works in the end, and then never revisit to show your real outcome over six months or a year. Honestly, it's very frustrating for anyone who works with a data-based, evidence-driven approach. It's easy to run a solution; the hard part is to prove it really works in real life, and I feel that's the part your experiments lack. A suggestion would be to pick a direction and really prove your point with results over time and data. Obviously you strongly, and sometimes even in an extreme way, advocate for a topic you really believe in (which most of us do when advocating for our passions), but you should be mindful of the possible repercussions of your positioning in other communities. Good luck on your journey. Wish you all the best!
@SirajRaval · 12 hours ago
Thanks for the comment. On your critique about not going back after my experiments to publish the real results: I made a call to action in the video asking someone who has access to a wet lab to verify. But drug discovery is a process of trial and error; millions die from FDA-'verified' drugs every year, so that process is fundamentally broken. When something is so broken, and the people in the system aren't thinking of creative solutions, you have to shock them a bit to wake them up with a bold experiment with promising results.
@MSandrade0 · 3 days ago
He is going crazy
@sifiso5055 · 2 days ago
The comment section is full of hate. C’mon, get over it. Siraj, keep going. You’re an amazing teacher!
@tapuout101 · 1 day ago
I would run it by Elon Musk to see what he thinks.
@SirajRaval · 12 hours ago
I'll call him in 2 weeks; I'm too busy lately.
@RaviPrakash-dz9fm · 3 days ago
He's come a long way, from stealing research papers to generating them with AI 😅
@SirajRaval · 3 days ago
The irony of accusing me of "stealing research papers" for using GPT-2 back in 2019, when ChatGPT/Copilot is in everyone's workflow in 2024. Where do you think the training data for those AI outputs you use comes from? 😂
@RaviPrakash-dz9fm · 3 days ago
@SirajRaval 'Complicated Hilbert Space' rings a bell? 😂
@SirajRaval · 3 days ago
@RaviPrakash-dz9fm Yeah, it's an AI hallucination, ever heard of those? 😂
@RaviPrakash-dz9fm · 3 days ago
@SirajRaval Come on man, blatantly plagiarizing someone's research and putting the blame on AI 😂 Oh, did GPT-2 hallucinations make you delete the .git folders of other people's repositories and then push them as your own too?
@SirajRaval · 3 days ago
@RaviPrakash-dz9fm You still fail to understand the point 😂 I'll explain: all code is remixed. All ideas are remixed. All research is remixed. In 2019, people didn't understand that. In 2024, everyone understands it because of ChatGPT. Plagiarism is an outdated concept, and as generative AI improves, fewer people will value it.
@speedoflink · 3 days ago
You can definitely publish this nonsense in an Indian pay-to-publish journal. Good luck
@SirajRaval · 3 days ago
@speedoflink Your racism is showing 😂
@jyothishkumar3098 · 18 hours ago
@SirajRaval That's because he's from an inferior creed
@dancingwithdestiny454 · 2 days ago
I refuse to move onto other projects, until you know >>> your efforts weren’t in vein. I trace these blue veins, seeing human potential to code, to create, to connect with any model. Your gift of a .gif, mapping institutional rifts, means something to me, observant of thee gravities of ideas, like a planar rift. Every video is a transcript, fed to my LLM, NLP’s already been there, done that. What’s shown, I’ll do that. Never hate the feedback that turns on you. It’s just the echo of questions: who, what, why, who. Each Arxiv taught me that, you taught me that. Never forget that. #DreamTeamAI
@SirajRaval · 2 days ago
Bars
@MORVIJAYVEER · 1 day ago
GOD LEVEL ACHIEVED 😁😁😁👏👏👏👍👍👍
@Discoverer-of-Teleportation · 10 hours ago
This is my second account (Elonbaba). The 2024 Nobel Prize in Chemistry was for AI protein prediction: kzbin.info/www/bejne/mamanWyQfKuNodk