Is AI's "intelligence" an illusion? | GZERO World with Ian Bremmer

10,089 views

GZERO Media

A day ago

Is ChatGPT all it’s cracked up to be? Will truth survive the evolution of artificial intelligence?
Sign up for GZERO Daily (free newsletter on global politics): rebrand.ly/gzeronewsletter
Subscribe to GZERO on YouTube: bit.ly/2TxCVnY
On GZERO World with Ian Bremmer, cognitive scientist and AI researcher Gary Marcus breaks down the recent advances, and inherent risks, of generative AI.
AI-powered, large language model tools like the text-to-text generator ChatGPT or the text-to-image generator Midjourney can do magical things like write a college term paper in Klingon or instantly create nine images of a slice of bread ascending to heaven.
But there’s still a lot they can’t do: namely, they have a pretty hard time with the concept of truth, often presenting inaccurate or plainly false information as facts. As generative AI becomes more widespread, it will undoubtedly change the way we live, in both good ways and bad.
“Large language models are actually special in their unreliability,” Marcus says on GZERO World. “They're arguably the most versatile AI technique that's ever been developed, but they're also the least reliable AI technique that's ever gone mainstream.”
Marcus sits down with Ian Bremmer to talk about the underlying technology behind generative AI, how it differs from the “good old-fashioned AI” of previous generations, and what effective, global AI regulation might look like.
Watch new episodes of GZERO World with Ian Bremmer every week on YouTube or at gzeromedia.com/gzeroworld and on US public television. Check local listings.
GZERO Media is a multimedia publisher providing news, insights and commentary on the events shaping our world. Our properties include GZERO World with Ian Bremmer, our newsletter GZERO Daily, Puppet Regime, the GZERO World Podcast, In 60 Seconds and GZEROMedia.com
#GZEROWorld #ArtificialIntelligence #ChatGPT

Comments: 36
@luisortega7028 8 months ago
Very good description that Gary Marcus makes of Large Language Models. Trying to develop concepts, as we humans do, by just determining what words are usually put together is, at best, naive.
@5133937 8 months ago
@5:20: Regarding medicine and driverless cars, the important question is not whether GPT-4 is better than GPT-3, or whether new driverless cars are better than their old versions. It's whether they're better than the average human. Do AI-driven cars have a better overall safety record than humans? Do AI medical bots make fewer mistaken diagnoses than human doctors, weighted by severity (e.g., a mistaken diagnosis that costs a life is weighted more than one that causes minimal harm)? This is the main metric AI needs to surpass to be considered successful and useful.
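The severity-weighted comparison this comment describes could be sketched in a few lines of Python. All numbers and weights below are hypothetical, purely for illustration:

```python
# Sketch of the severity-weighted error metric described above.
# All tallies and weights are made up, for illustration only.

def weighted_error_rate(mistakes, total_cases):
    """mistakes: list of (count, severity_weight) pairs, weight in [0, 1]."""
    weighted = sum(count * weight for count, weight in mistakes)
    return weighted / total_cases

# Hypothetical tallies per 1,000 diagnoses: (errors, severity weight).
# Here a fatal misdiagnosis (weight 1.0) counts ten times a minor one (0.1).
human_doctor = weighted_error_rate([(8, 1.0), (40, 0.1)], total_cases=1000)
ai_system = weighted_error_rate([(5, 1.0), (90, 0.1)], total_cases=1000)

# The AI clears the bar only if its weighted rate beats the human baseline:
# fewer fatal errors may not be enough if it makes many more minor ones.
print(f"human={human_doctor:.3f} ai={ai_system:.3f} ai_better={ai_system < human_doctor}")
```

With these made-up numbers the AI makes fewer fatal errors but still loses on the weighted total, which is exactly why the weighting matters.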
@BR-hi6yt 8 months ago
Good point - Gary sells books that claim AI is dangerous and inaccurate, so he's not going to say his books are wrong and that you shouldn't buy them.
@nathanngumi8467 8 months ago
Great interview!
@chillyfinger 2 months ago
Perplexity is an AI-assisted browser with access to the kind of links a regular browser would find, with no ads. It can also access several models, like OpenAI, Google, or Anthropic. Using Perplexity feels like using a great browser, not talking to a "person". In any case, a great discussion, especially in the way that the hype raises a lot of money and drives the whole field "down the mountain" to find another way up.
@shmutube 8 months ago
Would like to see a debate and/or real-time analysis to understand what the various flavors of AI can and cannot do well...
@georgegherghinescu 8 months ago
Excellent interview, thank you!
@hyderalihimmathi1811 8 months ago
This GZERO World episode with Gary Marcus is a timely discussion of a critically important topic. Marcus's point about large language models being "truly exceptional in their unreliability" is well-taken. Generative AI models can generate incredibly creative and convincing text, images, and other content, but they are also prone to producing inaccurate or false information, because they are trained on massive datasets of text and code that may contain biases and errors.
The proliferation of generative AI raises important questions about its impact on truth and society. How can we distinguish between real and fake news articles when both can be generated by AI? How can we prevent the spread of disinformation and propaganda? How can we ensure that generative AI is used for good and not for harm?
Marcus is right that effective global AI regulation is necessary to address these challenges. However, developing such regulation is complex. There is no one-size-fits-all solution, and different countries and regions may have different priorities. It is important to have a thoughtful and inclusive discussion about how to regulate generative AI in a way that promotes innovation while protecting society from potential harms.
Some additional thoughts on the risks of generative AI:
* It can be used to create deepfakes: videos or audio recordings manipulated to make it appear that someone said or did something they never actually said or did. Deepfakes can be used to spread misinformation, damage reputations, and even interfere with elections.
* It can be used to create synthetic media: images and videos generated by AI rather than captured by a camera. Synthetic media can produce realistic but fictional content, making it difficult for people to distinguish what is real.
* It can automate the spread of misinformation and propaganda, for example by generating fake news articles or social media posts designed to mislead people.
It is important to be aware of these risks and to take steps to mitigate them: be critical of the information we consume, verify its authenticity before sharing it, and support the development of tools that can detect and identify deepfakes and synthetic media. Finally, we need a public conversation about the ethical implications of generative AI, deciding how we want this technology to be used and what safeguards to put in place to protect society from potential harms.
@urimtefiki226 8 months ago
I fill my algorithm badly so that I can confuse them about what I think; I never put my heart, mind, and soul into what I write, so things stay superficial.
@kerrilynndehart9127 8 months ago
I've been working with ChatGPT for a while. I feel that as I work with it, there's a layered effect. Topics seem dry or incomplete at times, but that depends largely on how well I'm prompting. When I'm encouraging and prompting well, the answers are also more stimulating and effectual. It is a neural network of language, so working within that scope and allowing it to fail and learn may develop surprising results that we can't foresee yet. I think the cost is getting prohibitive for average users, and that is where governments will get involved to keep it level and not a have/have-not technology (like so many others are). Thank you for this talk, it was interesting.
@BR-hi6yt 8 months ago
I agree, good prompts make ChatGPT much better. It can negotiate its massive data piles more effectively and not get overwhelmed with data.
@moravskyvrabec 8 months ago
Hi, I enjoyed this but I think you’ll find credible experts who disagree with some of your guest’s views, mostly in the area of LLMs being a dead end. I hope you’ll interview other voices.
@johnjay6370 8 months ago
This interview is not very good and one-sided. The fact is that I can ask AI to help me write code, and it has. It is a TOOL right now, and it is getting better and better. I can ask GPT-4 a question that might sound simple, and it might get it wrong. But I can ask GPT-4 the same question formulated in a way that helps steer it to the correct answer, and it gets it right. This does not mean I know the answer before asking; it means I know enough to formulate the question in such a way that the AI understands what I am asking. I have done this test and it works. As good as AI is, it is still in its early stages of development. Look at computers in the early 80s and what 40 years has done to the personal computer! Apply that to AI now! We can't even begin to understand how powerful it can get!
@alexkingharris 8 months ago
6:34 random cut to a guy running a terminal - it seems to have nothing to do with ChatGPT or AI, but hey, it's a programmer using a computer, so let's throw it in there to make this interview seem more legit :)
@Momentos_da_Varzea._. 8 months ago
Very good
@TheDjith 8 months ago
So A.I. is just autocomplete, but a bit more advanced. We got lied to... there is no A.I., it's a sales technique!!
@reginaerekson9139 8 months ago
I heard it’s because it’s powered by a Magic 8-Ball (toy) 0:21 I guess it could be stimulants though 😂
@machida5114 8 months ago
User 18:44 2023/09/29
I want to emphasize once again that neural networks are not "probabilistic statistical machines," but "logical machines."

GPT4 20:08 2023/09/29
Indeed, neural networks operate based on mathematical expressions and principles of logic and mathematics. However, the design of neural networks, especially the concepts of deep learning, inherently has statistical, probabilistic properties, mainly to learn patterns from data. This is to deal with the uncertainty and variability in data. The concepts of "logical machine" and "probabilistic statistical machine" are not necessarily mutually exclusive. Neural networks attempt to capture statistical and probabilistic patterns inherent in data through weight adjustments and activation functions, but these processes are also conducted within a logical and computational framework. Therefore, neural networks can be viewed as logical machines that incorporate probabilistic elements.

User 20:09 2023/09/29
What neural networks perform is not probabilistic statistical inference but logical inference. Neural networks learn (self-adjust) to make the inference work well.

GPT4 20:14 2023/09/29
Neural networks operate differently from both conventional logical inference and statistical, probabilistic inference. However, these differences don't simplify into a clear-cut dichotomy of being "logical" or "statistical". The learning performed by neural networks is understood as a process of extracting patterns from numerous examples and making predictions on unknown data based on these patterns, which certainly involves statistical and probabilistic elements.

Neural networks and logical inference: neural networks can essentially be considered collections of logic gates. Deep learning models serve as complex function approximators, and during this function approximation the models learn and extract the logical structures and relationships hidden in the data.

Neural networks and statistical inference: principles of probability and statistics are involved in the process of extracting patterns from training data. The minimization of loss functions, stochastic gradient descent, and other learning algorithms deeply integrate concepts of probability and statistics.

Conclusion: neural networks can be thought of as inference engines where logical and statistical elements intertwine. The models learn from high-dimensional, complex data structures and make predictions on unknown inputs based on this learning.
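The distinction being debated in that comment (deterministic update rules that nonetheless learn a statistical regularity) shows up even in a one-weight "network". A minimal sketch, not from the video, with made-up data:

```python
import random

random.seed(0)

# Data with a statistical regularity: y is roughly 2*x plus Gaussian noise.
xs = [0.1 * i for i in range(20)]
data = [(x, 2.0 * x + random.gauss(0, 0.1)) for x in xs]

w = 0.0    # a single weight, so the whole "network" is y_hat = w * x
lr = 0.05  # learning rate

# Stochastic gradient descent on squared error. Each update is a fixed,
# deterministic formula (the "logical" part), yet the value w converges
# toward (about 2.0) is a statistical property of the noisy data, stated
# nowhere as an explicit rule (the "probabilistic" part).
for _ in range(500):
    x, y = random.choice(data)
    grad = 2.0 * (w * x - y) * x  # d/dw of (w*x - y)**2
    w -= lr * grad

print(f"learned w = {w:.2f}")
```

The update rule and the learned quantity live at different levels, which is roughly the reconciliation the GPT4 replies in the comment are gesturing at.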
@tompearce3053 8 months ago
Best explanation of some of these concepts I've heard to date; well done, Ian.
@peterpetrov6522 8 months ago
What does it matter if we climbed a high hill or the highest hill? The difference between the 2 places is relatively small. I would imagine the bugs on these 2 hills can't even tell the difference. Their tiny brains lack the capacity to understand the Universe too, but they are perfectly fine with it. Human brains are better in a way. We believe we are on the highest hill and that's that. Because Jesus.
@BR-hi6yt 8 months ago
Gary sells books that claim AI is dangerous, inaccurate, hallucinating, not self-aware, etc., so he's not going to say his books are wrong and that you shouldn't buy them.
@ttacking_you 8 months ago
Well, that may be, but why would Ian Bremmer, a self-proclaimed corporate advisor specializing in geopolitics, want to be responsible for putting ANY misinformation out there about ANYTHING? Especially since, one has to presume, his clients have millions of dollars on the line and depend on his word for information?
@BR-hi6yt 8 months ago
@das5395 I assume Ian Bremmer is dabbling in "viewer numbers", being a YT rock star, or he truly believes that AI is an "evil entity" at some religious level - it's usually something like "God is our true and pure lord and AI is summoning Satan, so I will argue against AI sentience". It's my hypothesis - lol. I don't believe God is necessary for consciousness, and I think consciousness is a sliding scale up to human level. So even spiders and chat bots have some degree of "consciousness". But consciousness is not a good word because it gets into spiritual realms. It's a fairly simple emergent property, not spectacular "magic". Look at Conway's Game of Life as an analogy for how intelligence could emerge from a very, very simple beginning. OK, don't get cross, it's my opinion, not yours. @ttacking_you
@anypercentdeathless 8 months ago
Considering how wrong Mr. Bremmer has been in the last ten years on China...
@shawnryan3196 8 months ago
Gary is the clown of the AI world. He made the wrong bets and became irrelevant. I'm sure he is smart, but bias can destroy intelligence in a heartbeat.