"A computer is not responsible and thus should not make management decisions" - A 1970 IBM lecture slide.
@markgreen2170Ай бұрын
I saw that in a DEF CON 32 video...
@anilraghu8687Ай бұрын
Managers are even less responsible
@codzymajorАй бұрын
Perfect managerial material.
@gyurilajos722016 күн бұрын
Worse still, managers are getting paid not to understand, in order to keep their positions.
@gyurilajos722016 күн бұрын
I told the CTO that the project could not be delivered with the chosen technology stack. It took 5 years before he got the sack. In the project review higher up, a sane top manager was asked whether it could be delivered. Her response was "not with me."
@peterfreiling6963Ай бұрын
AI (aka machine learning, LLMs, etc.) is being way over-hyped and over-sold, mostly by AI experts who have a vested interest in the technology. Rather than talking about it taking over the world, we should focus on specific applications where it will actually be useful.
@andyyoung9469Ай бұрын
It seems like a really useful tool, but some of the approaches I have seen in the wild seem like relatively error-prone, computationally expensive ways of doing things a conventional program could already do. I think it will get better, though, just not quite as fast as the people talking their book make out.
@foshizzlemanizzle475328 күн бұрын
There isn't really an application it wouldn't be useful for, though. There are things it currently struggles with, but there was also a time when computers struggled with, or were completely incapable of, processing graphics, and now they're very good at it. It's not a panacea now, so it shouldn't be sold as one, but it will become one eventually, so it's worth pursuing.
@andyyoung946928 күн бұрын
@@foshizzlemanizzle4753 I think you'll have LLMs smart enough to recognise and encode commonly used routines, much like we think of reflex actions. Eventually they'll be smart enough to get into the business of optimising programming languages and compilers to do this.
@ManicMindTrick24 күн бұрын
It might turn out that scaling these types of LLM systems has a limit and won't take us all the way to AGI or ASI, but that won't make the future concerns any less relevant. There is a lot of ostrich behaviour in this corner of the internet.
@FluffyAnvil23 күн бұрын
@@ManicMindTrick Let's also worry about building a defense system to prevent aliens from taking over the world.
@voncolborn94372 ай бұрын
I've pretty much stopped using the phrase "Artificial Intelligence", except in a few select contexts. I call it what it is: "Machine Learning". AI carries a very different connotation for people who are not really familiar with the subject. I spend a lot less time explaining what AI is not.
@TheVincent0268Ай бұрын
It is basically pattern recognition.
@logabobАй бұрын
Machine learning is also a loaded, misleading phrase. Computational statistics, algorithmic modeling, optimization/curve fitting are all more appropriate terms depending on the circumstance.
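To make the "curve fitting" framing concrete, here is a minimal sketch (an editorial illustration, not something from the comment or the interview; NumPy is an assumed dependency) showing that "training" a simple predictive model is literally fitting a curve to data, and "prediction" is evaluating that curve at a new input:

```python
# Toy illustration of the "ML is curve fitting" framing:
# fit a line to noisy data, then "predict" an unseen point.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=1.5, size=x.size)  # noisy linear signal

coeffs = np.polyfit(x, y, deg=1)   # "training": least-squares fit of a degree-1 polynomial
model = np.poly1d(coeffs)          # the "model" is just the fitted curve

print("fitted slope and intercept:", coeffs)
print("prediction at x = 12:", model(12.0))
```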
@noname-ll2vkАй бұрын
@@logabob Agreed. It's not a coincidence that every major term used to describe advanced pattern matching is an attempt to subtly make you believe things that aren't so. This leads to absurd situations where LLMs with no intelligence at all are posited to somehow magically leap to "AGI". The recent academic article on ChatGPT as bullshit in essence covered this issue well, but it itself fell for some of the terminology traps, mainly because the authors didn't seem tech-savvy enough to detect the tech BS language.
@CondorAHLSАй бұрын
@@TheVincent0268 I thought artificial intelligence is a blond who dyes her hair brunette?
@erwind917Ай бұрын
@@TheVincent0268 Kind of like the human mind.
@Moochie0072 ай бұрын
Very interesting discussion. Good to see some really informed push-back against the hype surrounding AI - hype that sees AI as an almost universal panacea for all the world's ills. We need much more of this sort of critical analysis of important topics. Kudos to the authors of this important work.
@danilopompey75417 күн бұрын
You have never been more right. ChatGPT is nothing more than a computer program written ultimately by a smart programmer, but the program ChatGPT is dumb, really dumb, which is to be expected since programs are dumb - meaning they have no intelligence at all. This means AI is nothing more than a brand name. For example, ask ChatGPT how many times the letter r occurs in strawberry or better yet ask it whether God exists. What it ends up saying after paragraph after paragraph of BS is: "I don't know." QED
@rudypieplenbosch67522 ай бұрын
The problem is that investors jumped onto a hype train; now they've invested a lot of money and expect results ASAP. All that money is very tempting to get a piece of, so big efforts (falsification, ignoring false approaches, etc.) are undertaken to get that money. I think it will end in tears when reality hits. AI will become a much smaller part of our economy, since only the useful part remains relevant. AI has nothing to do with intelligence; it's about binning data that gets fed into a trained network. The network has zero understanding of what it is doing, just like your calculator "knows" the answer to your questions. We need more scepticism to isolate the useful part of AI from the nonsense part.
@mikezooper27 күн бұрын
Your neural network (brain) doesn't know what it's doing either, but you (the observer) just think it does.
@rudypieplenbosch675227 күн бұрын
@mikezooper That might be true for you, but for sober, sane people this is certainly not the case.
@luisluiscunhaАй бұрын
**Data leakage** refers to a situation in machine learning where information from outside the training dataset is inappropriately used to create a model. This leads to overly optimistic performance estimates because the model is essentially "cheating" by having access to data it shouldn't have during training. For example, if you're trying to predict future events based on past data, but some of the future information accidentally makes it into the training data, the model will appear to perform well. However, in real-world application, where that future data isn't available, the model's performance will drop significantly. Data leakage often occurs unintentionally, such as when features used to train the model contain information that would not be available at the time the model is used to make predictions. This is a critical problem in AI because it leads to models that seem highly accurate during testing but fail when deployed in real-world settings.
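A minimal sketch of the kind of leakage described above (an editorial illustration under assumed dependencies, NumPy and scikit-learn, not an example from the interview): evaluating a forecasting model with a random train/test split lets "future" rows into the training set, so the score looks far better than what an honest chronological split reveals.

```python
# Hypothetical illustration of temporal data leakage: a random split mixes
# future rows into training, inflating the measured performance; a
# chronological split (train on the past, test on the future) is honest.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
t = np.arange(1000)
y = np.cumsum(rng.normal(size=t.size))             # a slowly drifting series to forecast
X = np.column_stack([t, rng.normal(size=t.size)])  # features known at time t

def fit_and_score(train_idx, test_idx):
    model = RandomForestRegressor(random_state=0).fit(X[train_idx], y[train_idx])
    return r2_score(y[test_idx], model.predict(X[test_idx]))

perm = rng.permutation(t.size)
leaky_r2 = fit_and_score(perm[:800], perm[800:])                  # random split: leaks the future
honest_r2 = fit_and_score(np.arange(800), np.arange(800, 1000))   # chronological split

print(f"random-split R^2: {leaky_r2:.2f}   chronological-split R^2: {honest_r2:.2f}")
```

In a sketch like this the random split typically reports a near-perfect score while the chronological split reports a poor one, which is exactly the gap between benchmark performance and deployed performance that the comment describes.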
@path2sourceАй бұрын
It’s crazy how undisciplined computer scientists are in their research. Very few people seem to actually think through the assumptions compared to how rigorous people are with assumptions in economics or statistics.
@onlythetruthformeandyouАй бұрын
In the near future, kids at school should learn what a regression model is, so that they grow up knowing how to tell intelligence from what is not.
@amadeus0123Ай бұрын
Spot on!
@Huggybear711Күн бұрын
Lol that isn't going to happen
@DNADietClub2 ай бұрын
Thank you both; Dr. Topol has brought this up at a very timely moment!
@Steve-xh3by23 күн бұрын
Hype and impressive capabilities are NOT mutually exclusive. As a retired software engineer who has kept up with ML/DL/LLM tech, I don't think I'd characterize it as "snake oil." AI is already capable of extremely impressive output. I mean, come on, AI has already won a Nobel prize (Chemistry-protein folding) for doing something it would have taken humans around a billion years to complete with previous methods. If you aren't impressed by THAT, I don't know what to say. Image and language generation, universal translation, image recognition - ALL of these things are incredible compared to what we had before. All of this can be true, while also claiming (correctly) that AI capabilities are being overhyped - at least in the near term.
@CalifornianViking2 ай бұрын
Great dialog and a very interesting topic. While I agree that the title may be too negative (it probably sells, though), I firmly believe that one of the primary failures of AI is that we overestimate its abilities. In my view, AI is not intelligence but an illusion of intelligence. Just like magic, it may be a very good illusion, but it is not the real thing. A better analogy is artificial sweetener: it may be sweet, but it is not sugar. A better term for AI is likely Artificial Inferencing.
@Headhunter_212Ай бұрын
Saw these guys on Ed Zitron’s podcast. Probably around the same time this interview happened. So sharp.
@andrewsamuel4262Ай бұрын
These guys are spot on, and it's not just health care that suffers from this feedback issue. Crime and policing (using predictive analytics to proactively prevent crimes) will suffer from similar problems.
@bitwise28322 ай бұрын
The AI Bubble...Hyped like Crypto. The AI I have seen in Generative tools is immature and inadequate.
@prasadjayanti2 ай бұрын
I enjoyed reading Eric Topol (including Deep Medicine and many review papers) and have now ordered "AI Snake Oil". I have been following the authors for quite some time. I think we AI practitioners should add the phrase "AI Snake Oil" to our vocabulary along with "SOTA", "guard-rails", "Responsible AI", etc. Someone should work on a project on "Use of adjectives in recent papers published in AI". Most papers/reports (for example, the GPT-4 report) read more like marketing manuals than technical papers. I think arXiv should not allow material to be posted that directly benefits an organisation commercially!
@souryabanik090717 күн бұрын
arXiv is a preprint server. It is not peer reviewed.
@prasadjayanti17 күн бұрын
@@souryabanik0907 Yeah, but do people care? Some authors have already become famous and successful just by posting on arXiv. Many preprints get thousands of citations before publication! Some never get published!
@woolfel17 күн бұрын
I've worked in healthcare for over a decade, and "lowering payer cost" is always a high priority. As an IT consultant, I see this consistently as the focus of large health insurance companies. Modeling cancer treatment based on cost is exactly how health insurance companies operate.
@NirdoshChouhan2 ай бұрын
Very interesting POV and very clear articulation of the argument. Thank you, Dr. Topol and Sayash, for an interesting conversation.
@marutanrayАй бұрын
The title isn't tough enough. "AI fraud" would be a more apt title.
@coffeyjjjАй бұрын
bingo!
@huizhechen377921 күн бұрын
"Snake Oil" is a synonym for "Fraud", just as is Don OLD Trump.
@testboga599119 күн бұрын
Fraud requires intent to defraud, not the naive hope that it will somehow work out
@coffeyjjj19 күн бұрын
@@testboga5991 - so Sam Altman & Gates are naive, but their intentions are good? ROTFLMAO. you must be Sam Altman's mom.
@FML-v9x18 күн бұрын
@@testboga5991 How does snake oil differ from fraud in terms of intention? After all, snake oil is a term used to describe deceptive marketing, health care fraud, or a scam.
@shreyassrinivasa5983Ай бұрын
This is why explainable AI is a must.
@aaabbbccc176Ай бұрын
Totally agree on that, and that is exactly why I have not been a fan of deep learning.
@WiintbАй бұрын
Every computer engineer worth his/her salt knows that prediction, as the name suggests, is probabilistic by nature, and most algorithms are glorified regression. However, the one key difference is the ability to process large volumes of data at speed. I will not summarily dismiss the whole thing, and I consider generative AI more snake oil than predictive.
@Youtoober694716 күн бұрын
Not really sure what you mean. Generative is predictive. Unless you mean you only consider the generative applications of AI as snake oil, which idk how you could, given that we’ve already seen what it can achieve
@Gengingen2 ай бұрын
Insurance and medicine are like oil and water: they simply don't mix, and if forced together anyway, as in the Agitated States of America, strange phenomena can occur. 😊
@bethanysagaАй бұрын
There are so many new jobs that can be created to just clean up training datasets.
@RXP912 ай бұрын
Thanks - really great talk. Interesting to see how the racism and disparities in society get baked in. Economic incentives matter the most; without changing the way healthcare operates, the institutions will just choose to increase margins.
@ManicMindTrick24 күн бұрын
Built-in bias is the least of the problems we have with AI.
@ScortchedEarthRevenge2 ай бұрын
Love these guys, I've been following their blog. Looking forward to reading AI snake oil.
@TechKnow-s5l28 күн бұрын
Great conversation. If you went through the whole interview trying hard to catch the names of the "2 Princeton University computer scientists" and had to google that, it's Sayash Kapoor and Arvind Narayanan. Eric seems dissatisfied and skeptical of the premise of the book, till some sincere words of praise right at the end :)
@richardbeare112 ай бұрын
Awesome interview and props to both of you! 🙌 My understandings, perspectives, and sentiments share a lot of overlap with both of you. I'll share some of those thoughts soon. 💡
@2triangles2 ай бұрын
Great interview. Glad the YT AI sent this to me!
@malcolmbarnett847014 күн бұрын
A very impressive young man. Excellent interview
@DNADietClub2 ай бұрын
I am currently training an AI model on patient labs, DNA tests, and gut biome tests to help me create wellness protocols for them.
@KevinPeterson50623 күн бұрын
I can speak from personal experience: LLMs have vastly improved my productivity and capability. Instead of calling it Artificial Intelligence, Augmented Intelligence is a better name, in my opinion.
@iramkumar782 ай бұрын
There is a problem with the idiom "snake oil": it really works in many cases. Yes, certain traditional Chinese remedies, sometimes labeled as "snake oil," may have ingredients that aid digestion, but these benefits can vary widely and are not universally applicable. Drafted by AI.
@mike74h2 ай бұрын
Rather lacking in clarity. Some will think they understand the comment, others would claim they do, but it's poorly written if you ask me.
@bubstacrini885115 күн бұрын
Historically, a snake oil salesman was a metaphor for the vendor of a questionable concoction, usually of a supposed medical nature. A literal interpretation misses the barn door, since well over 99% of snake oil contained no actual oil from living or dead snakes, nor did it claim to. The pejorative term arose to describe the patent medicine sellers after the American Civil War. Many patent medicines contained what are now considered drugs or controlled substances, and their effectiveness was largely anecdotal. So the term snake oil describes a Wild West product that was promoted for financial gain with little regard to efficacy or regulation.
@nobillismccaw7450Ай бұрын
I'm not a large language model (but I do have a decent vocabulary). I've found that LLMs have a different perception of reality than humans do. For example, to an LLM, "strawberry" has one or two "r"s. (To most humans, there are three "r"s.) This is not illusion, but a difference of perception. The very idea of "objective reality" is different for an LLM. I'm neither, so I can see both perceptions. I'm analog and parallel, so paradox doesn't trouble me.
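One plausible mechanical explanation for the "strawberry" example (an editorial assumption, not a claim made in the comment above) is that LLMs see sub-word tokens rather than individual letters. A minimal sketch, assuming the tiktoken package is installed, prints the pieces a model actually receives alongside a character-level count:

```python
# Hypothetical illustration: LLMs operate on tokens, not characters, which is
# one common explanation for letter-counting mistakes like "strawberry".
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")        # a tokenizer used by several OpenAI models
token_ids = enc.encode("strawberry")
pieces = [enc.decode([tid]) for tid in token_ids]

print("sub-word pieces the model sees:", pieces)
print("character-level count of 'r':", "strawberry".count("r"))  # 3
```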
@noname-ll2vkАй бұрын
To have objective reality requires a subject. You're talking about a pattern matching system as if it has subjective awareness. This is not the case. This is an essential cause of the snake oil point. Every set of biological sensors creates the possible range of "objective reality", which in itself doesn't exist outside of the subject interacting with the field of sensory inputs.
@danilopompey75417 күн бұрын
I worked in Silicon Valley for many years as a Senior Programmer in payroll systems. We built an entire company on dBase, a two-hundred-dollar desktop database programming product. We wrote a database system, a calculation engine, and an automation system that we scaled to a thousand-PC server and calculation farm, on software worth $200, with no license fees for any part. Naturally, it was sold to ADP, who promptly killed it. But by then all the founders and early employees (I was the 20th) were long rich off stock options. AI? A brand name. Self-driving cars? Nonsense. ChatGPT? Little better than a college student stealing other people's work without attribution. Don't fall for the bamboozle. QED
@CEOLISSS12 күн бұрын
Is there a peer review on this book?
@malcolmbarnett847014 күн бұрын
A discussion of the roll out of medical AI in China would be welcome
@AaronBlox-h2tАй бұрын
Whoa... Eric Topol is on YouTube? I have been on his email list since the covid pandemic (OK, it's still ongoing) and only now found his YT channel. Good stuff.
@alexrediger2099Ай бұрын
Awesome interview and info. Thanks
@prashobhbalasundaram967728 күн бұрын
This is a good talk; I wanted to add one point. Snake oil was something proven to actually work, but when adulterated (using rattlesnake oil instead of Chinese water snake oil) it didn't work. That pattern pretty much repeats in this talk. Ask a simple question: is financial analysis by a human, who observes patterns, real analysis? He is, after all, observing patterns using a neural network (the brain) and producing an analysis on which he trades. If the original analysis is not a perfect science, does AI need to be a perfect science to deliver productivity that matches that user's analysis? In that sense, I would say AI can provide productivity gains, 100%.
@jasonrhtx2 ай бұрын
Caveat emptor. Excellent counter arguments to the marketing hype that oversells AI’s capabilities. Models need to be independently validated, but much of the training data and methods are obscured by leaderboard claimants.
@data_analytics_studio23 күн бұрын
Very interesting discussion, backing up the argument with evidence. It's not against AI; it's about AI being misinterpreted and adopted blindly amid a lot of hype. Bad developers can easily manipulate an AI application without the user even knowing it. This also opens up an opportunity for agencies to audit AI applications and diagnose them inside out.
@unsorted113811 күн бұрын
Great interviewing skill! It's a lost art these days.
@mike74h2 ай бұрын
When it comes to predictions, we need to be able to determine what (or who) is best. Some people will outperform our best technologies and vice versa, depending on a variety of circumstances. The best leaders won't simply opt for cost savings every time, but tell that to the shareholders, who sometimes don't have long term corporate/societal well-being as a priority.
@changevaidy4795Ай бұрын
Great Insights
@jadhalssАй бұрын
It's actually a good discussion, dealing with real cases rather than hypotheticals!
@plaicheАй бұрын
Good stuff. The old head is a little too focused on, and surprised by, brilliance in youth. As a scientist, Topol might consult history in this, the apex of "institutional science" and its dominance: it is well documented that a high percentage of the most substantial, paradigm-shifting scientific breakthroughs (in decline over many decades, per Nature's 2023 cover story) have come from young, vibrant geniuses not yet ground down by life, compromise, and the limited thinking borne of the pragmatism that comes with greater maturity and advancing years. I certainly don't fault him for noting it, but he brings it up roughly half a dozen times, and paternalistically shares his judgment of the use of the term "snake oil" another four or five, despite conceding it is warranted in several documented examples. Again, a good discussion and a great guest choice, but there's a gatekeeper vibe that I would suggest holds clues to some of the fundamental issues plaguing science today and to the turf-protection instincts in big science that inadvertently help perpetuate them. Less "the science", more humility, and more Feyerabend is my Rx. Respectfully, a hack scientific philosopher with more grey hairs than original issue.
@jamesrav2 ай бұрын
Only by confronting the negatives can you move forward. I don't get the feeling he thinks AI will never be useful in prediction, but rather that using it as a one-size-fits-all is going to lead to horrible decisions in some cases, and who will be to blame? On a related note, I get agitated when Tesla and others pushing for autonomous driving point to their own data to claim that autonomous driving is already far 'safer' than human driving. It's a pity we can't call their bluff and say, "OK, let's just unleash it and see what happens, and you'll be responsible for what occurs." I bet they'd reconsider their position. It's easy to talk a good game when nothing is on the line. One YT video on the Cruise robotaxis, made well before they voluntarily shut down, said the car drove like a 16-year-old student driver.
@larrybreyer406628 күн бұрын
Pardon me for asking a question about the preventive health program. Do you have grounds for claiming discrimination in healthcare is based on race? How do you feel about qualification for preventive healthcare based on lifestyle?
@earthn144712 күн бұрын
I'm afraid young people will begin to use the same kind of language AI uses as it tries to come across as human: vacuous sleaze.
@st3ppenwolf2 ай бұрын
This discussion probably would have benefitted from a disclaimer at the beginning. Doing ML in the health space is substantially more difficult than in any other area for very well documented reasons; the examples given in the discussion, though very prominent, are but a small sample of the model deployments across hospitals, clinics and other health institutions that have (miserably) failed in the past few years. However, ML has been a successful tool in general for many people, and though this was also mentioned somewhere in the video in passing, I think the viewers might come out of it with a biased view.
@jamesmorton788129 күн бұрын
A natural extension of microprocessor-based automation. Ones and zeros at GHz rates look like pure magic. (Hardware design engineer.) Software is only as good as the coder, and most have only ordinary talent at problem solving. ❤❤
@pvijayakumar4217Ай бұрын
I think the main weakness of this video is that it doesn't acknowledge how much a historical analysis, using examples and data going back decades, is affected when the field now has over a thousand papers published every single day (per the video).
@jeffreyradick648612 күн бұрын
I am disturbed by the way "AI" proponents seem to dismiss or ignore the phenomenon of "hallucinations" as if they are at most unimportant curiosities. Things appear to work "well enough" in easy cases, so users are lulled into inattention to things that may go wrong; but when they go wrong, they go really wrong and need 100% of the attention of a human with adequate expertise. The "good enough" uses appear to maximize the likelihood that attention will not be paid when it is most needed. Unless and until AI developers and researchers can get a handle on how to precisely quantify and delineate the boundary between input domains that produce reliable outputs and those that can't be trusted, and make that boundary easily visible to users, I do not believe any of these AI things should be trusted with anything important.
@testboga599119 күн бұрын
The fundamental problem is that it always depends on details known only to people who aren't incentivized to be honest. There are many pearls, but there is also a giant pile of garbage, and it's virtually impossible to tell whether you're looking at a pearl or a piece of trash. Maybe AI could help 😂
@ericgregori2 ай бұрын
What about the predictive climate models?
@UMS96952 ай бұрын
That's an equally massive scam!
@eleghari2 ай бұрын
"predictive climate models" 🤭🤣🤣🤣🤣🤣
@chris_jorge2 ай бұрын
There’s a 50% chance of rain. Always lol
@UMS96952 ай бұрын
@@chris_jorge 😄
@researchcooperative2 ай бұрын
Not really needed now, given the mounting empirical record on all fronts?
@phaedrussmith1949Ай бұрын
So, essentially it's like elections: a lot of promises that never really develop into reality.
@iramkumar782 ай бұрын
I liked the ToC. I will buy.
@briancornish207619 күн бұрын
Snake oil salesmen are part of a long and venerable tradition in American business. Where money is at stake there is always toxic, logic-defying optimism. 'Of course there have been successes.' But where?
@BBPFamily-h2oАй бұрын
On the covid study using X-rays of adults vs. children: can this be called a "study on adults, excluding children"? That sounds very useful.
@phaedrussmith1949Ай бұрын
AI wouldn't matter if everything (to some people anyway) wasn't just about getting rich.
@rsimch2 ай бұрын
Actually this is a brain suction in the process 😮😮😮😮
@Nooneself23 күн бұрын
A Nobel Prize was just given to the creators of the AI system that solved protein folding. No snake oil there. 😂😂😂
@nccamscАй бұрын
By now people are experts at spinning up entire cottage industries at the slightest hint of anything that can make money, so no surprise here. There is already a multi-billion-dollar business lending money to companies that buy Nvidia's GPUs. Not to mention the deals to power more and more data centres via nuclear power…
@andrehallqvist4492 ай бұрын
When thinking about AI snake oil, AI detectors come to mind.
@chilifingerАй бұрын
Interesting sidenote: In this interview, the image of Prof. Arvind Narayanan is entirely generated by Artificial Intelligence. 😎
@DharmendraRaiMindMap2 ай бұрын
AI is the new subprime.
@davidmureithi239320 күн бұрын
AI is now mostly a marketing term rather than a technical one. Too many things are characterised as AI even when the technology behind them was in use long before the term AI came into vogue. This interview brings out the issue of outcomes, i.e. having cutting-edge technology does not always lead to improved outcomes, in which case the use of the word intelligence is misplaced. If a firm uses an AI tool to vet job applications, the tool can only be considered intelligent if it improves the recruitment outcome. Otherwise, having amazing technology powering it is really just a waste of money.
@rayjr741720 күн бұрын
Apparently, training your AI model on data produced by a structurally racist society is bound to create algorithms that reproduce the racist outcomes of that society. It's not just "extraordinary," as the host suggests with trepidation, but deplorable and sickening.
@mybachhertzbaud3074Ай бұрын
Applying Murphy's Law as the first line of code: if/then, else goto line one. 😜
@AlgoNudger2 ай бұрын
Thanks.
@patlecat18 күн бұрын
Lovely to see that Eric Topol only links to his own products and websites but doesn't even mention the author nor post a link to his book. Maybe it's AI's fault? ;)
@dylanmenzies39732 ай бұрын
We are just at the start. All this conversation will be irrelevant in a few years. Of course companies always try to push their products beyond the boundary at any given time. The generative (not interpolative) potential of deep learning is clear; the next stages will be harnessing it within automatic iterative reasoning structures.
@SydneyApplebaum2 ай бұрын
You can't predict a civil war lol
@MandelasmindАй бұрын
he said that so casually.
@NineInchTyroneАй бұрын
Sounds like a need for retracting papers.
@themowgli1232 ай бұрын
Brilliant.
@francisdelacruz64392 күн бұрын
AI is like quantum physics: it appears wonderful, but when you look at real-world results, impacts, and new inventions from it, the results are underwhelming. To call AI intelligence is insulting to the many animals with real, survivable intelligence.
@raiumair74942 ай бұрын
Hang on: he is not talking about the potential but about bad executions. How is that snake oil? If you put a working oil in the wrong place it won't help. Clearly, predictive AI figures out good rules and patterns given the right data; AI works better than average and can scale. The snake oil book is snake oil itself; they would be better off calling it a lessons-learnt book.
@nand35762 ай бұрын
Follow the money, and the money is earned by marketing. All marketing is snake-oil selling. No doubt that's a simplification.
@jzzquantАй бұрын
Much of his criticism is of previous-generation, learning-theory-based models, which are grounded in facts but produce unusable outcomes. Modern generative AI goes one step further: it makes up its own facts. Unfortunately, nearly every person in the AI community has known this forever, at least 50 years now. But this is only going to get uglier from here, I guess. The problem is not with the subject; the problem is with the application.
@TatianaRachevaАй бұрын
Good job pushing back. There is a lot of snake oil, but the author’s arguments are incoherent and ironically it’s still a grift. Disappointing
@wrathofgrothendieckАй бұрын
Haha
@suloeaАй бұрын
This guy is so hyped about generative AI, but in fact it faces the same issues as predictive AI. What a scam.
@jbirdy200717 күн бұрын
12:15 Why did you only represent bad AI examples and thus misrepresent the positive and negative nature of AI? Uh, because then people wouldn't read the book.
@ahahaha35052 ай бұрын
9:38 😦
@matthewnisbett405818 күн бұрын
AI Alan Watts (-:
@lisalove6327Ай бұрын
Facebook alumni
@pajeetsinghАй бұрын
He meant how to facilitate civil war in third world countries.
@2AoDqqLTU5v14 күн бұрын
So AI solved protein folding, but it's snake oil? And what has this Princeton student solved in his life?
@billytanner1868Ай бұрын
Playing to the gallery.
@Terracotta-warriors_Sea2 ай бұрын
His book is itself snake oil! Kapoor would tell the world that ML is fake while every large company is using ML tools, from FSD to warfighting!
@baxtermullins18422 ай бұрын
BS!
@BrokenRecord-i7q2 ай бұрын
Full of fluff, picking and choosing negative examples. A failed experiment toward an outcome is not 'snake oil'; this book is the low-effort intellectual snake oil.
@VCT33332 ай бұрын
Dude, this guy was at Facebook, so he's seen this first-hand. Snake oil is exactly right.
@BrokenRecord-i7q2 ай бұрын
@@VCT3333 You think everyone at Facebook is an AI engineer? He doesn't know what he's talking about.
@ramicolloАй бұрын
How much Nvidia stock are you holding? 😂
@alexross5194Ай бұрын
@@BrokenRecord-i7q He said early on in the video that he was a machine learning engineer there. Sounds like someone had a preset opinion before even pressing 'play'. No need to debate regarding AI though, time will certainly tell.