Geoffrey Hinton is truly a pioneer in AI - we owe so much to his work over the past 40+ years. It's fascinating to hear his perspective on the future of AI given all he has done to advance the field.👍
@williamjmccartan8879 Жыл бұрын
The scan of the audience near the end makes me happy that so many people are paying attention to what's taking place today with the technology we're creating. Thank you for sharing your work. Peace
@squamish4244 Жыл бұрын
So important. Also helps counter a lot of the fear and doom and gloom talk.
@vinaynk Жыл бұрын
More of this professor please.
@gerardomenendez8912 Жыл бұрын
He is all over the place, just google it
@ParallaxOfficialTV Жыл бұрын
I really like the way the godfather talks. Clear, to the point, and he knows a lot. AI is fascinating and scary. I hope we come out the other side of this in a utopia, but it could so easily become dystopian.
@squamish4244 Жыл бұрын
I don't know if it could be otherwise, when it comes to a situation where you can attain a utopia. Once you get that powerful, you can also easily create a dystopia. This day was always coming, ever since the steam engine. Various factors could have made AI more or less dangerous at this point, but they never came into play. And here we are.
@Metacognition88 Жыл бұрын
Yup. When the godfather talks the streetz listen.
@Neonb88 Жыл бұрын
Really great. Geoff knows so much and should be interviewed more often. He's so much cleverer than Musk or Altman or whoever, or at least more concise than Musk. So much knowledge and preparation went into this interview.
@brandonp3991 Жыл бұрын
Great interview. Hinton is very insightful. Thompson asked some great questions.
@laulaja-7186 Жыл бұрын
Especially brilliant is Hinton’s insight, “Without strong unions, all productivity gains lead only to societal disintegration.” Or that’s what I thought he said.
@laulaja-7186 Жыл бұрын
“More different jobs” … Has anyone ever met an Uber driver that can pay their rent? If a line of work can’t cover the cost of living then it is not a real job. Adjusted for inflation, the horses were better paid.
@the_curious1 Жыл бұрын
Very interesting. I like the argument that an AI might "want" to get more control to achieve goals more efficiently and we may be a negative factor in this equation. Understanding all the possible sub goals it may develop, especially from an efficiency point of view, to reach higher level goals seems a hard problem to solve. I hope AI will be able to help us solve the existential risk of AI 👀
@AiLatestNews24 Жыл бұрын
Engaging dialogue with the Godfather of AI! The conversation has been incredibly insightful, offering a unique perspective from the Godfather of AI himself.
@odiseezall Жыл бұрын
Great, clear, concise talk! I would add: one of the big risks of AI is synthetic life / bio-weapons.
@TheTuubster Жыл бұрын
There are a few key emotions expressed or implied within the document:
• Curiosity - Hinton's foundational work on neural networks seems driven by his intellectual curiosity about how the brain works and how AI systems could work in a similar way. He pursued these ideas despite skepticism from others.
• Passion - Hinton clearly has a deep passion for AI and for making intelligent things, as he states himself. His work in the field seems motivated by his fascination with and enjoyment of the subject matter.
• Surprise - Hinton expresses surprise at how quickly large language models have progressed, especially in capabilities like basic reasoning that he did not initially expect. He is taken aback by their rapid improvements.
• Anxiety - Hinton's critiques and warnings about potential risks reveal an underlying anxiety about the implications of advancing AI technologies. He seems worried that issues like bias, economic impacts, and existential risks could have serious negative consequences if not properly addressed.
• Frustration - Hinton expresses frustration that he did not immediately recognize the significance of breakthroughs like the Transformer architecture. He dislikes not foreseeing important developments in the field.
• Hope - Despite his concerns, Hinton seems hopeful that with careful research and appropriate oversight, AI can ultimately be a force for good. He believes progress is inevitable and largely positive, though risks must be mitigated.
• Humor - Hinton exhibits a good-natured sense of humor, laughing at the AI joke about "an offer it couldn't refuse" and joking with the interviewer's children about careers in plumbing. This suggests he remains engaged and light-hearted in addition to being serious and thoughtful.
Overall, the emotions expressed reflect those one would expect from a passionate, thoughtful expert grappling with both the wonders and worries associated with technological progress. While conveying real anxieties about risks, Hinton's emotional tone remains largely positive and optimistic that, with diligence, AI's benefits can outweigh its potential harms. His interview illustrates the complex mix of emotions involved in responsibly navigating the development of transformative technologies.
(Claude-Instant analysis created from the video's subtitles)
@arifulislamleeton Жыл бұрын
Thank you so much
@ChristerForslund Жыл бұрын
Claude seems on par with GPT-4! Really impressive!
@pathmonkofficial Жыл бұрын
Exploring the future of AI and its potential impact on society is of paramount importance. Understanding the ethical considerations and societal implications of AI advancements will help us navigate this powerful technology responsibly.
@accumulator5734 Жыл бұрын
Eventually there will be a law that says if you lose your job to AI, you get a percentage of free income from the AI that replaces you. Then people will be begging for AI to hurry up and take their jobs. The economy is basically goods and services, and if at home you receive a portion of the money that represents the work done by a bot that took your place, you'll still be able to pay your bills and buy groceries, but you get all of it for literally free. This will slowly happen as we transition into a fully AI/robotic economy.
@DreamzSoft Жыл бұрын
Very rightly tagged the "Godfather of AI" 👍 We needed such apprehensions about AI, covering both the good and the bad... 👏
@berbank Жыл бұрын
Dear internet: More Hinton please. (Yud is great, but Hinton is more likeable, sorry Yud, I think you're great too.)
@gusbakker Жыл бұрын
The interviewer really pushed this conversation into interesting topics with great questions
@goodcat198211 ай бұрын
Are you joking? He never shut up! Lol. A good interviewer lets their guest talk. He thought it was more about him. Hinton doesn't need an interviewer to talk about interesting topics about AI! Terrible interviewer
@margaretesulzberger2973 Жыл бұрын
Fields to be concerned about with AI:
1. Bias and discrimination - can be multiplied by AI - learned from us
2. Battle robots - they are being built by the military right now
3. Joblessness - AI substitutes for sophisticated jobs
4. Echo chambers - can easily be reinforced by AI
5. Existential risk - taking control by manipulating people - learned from us
@arthurfleck42 Жыл бұрын
So how long do you think it will take before humanity, which the Creator made able to create artificial copies of His creation, ultimately ends up in its own destruction? Certainly Jesus is coming back soon to fix everything!
@mitchkahle314 Жыл бұрын
That stage is truly horrible to look at.
@cjbottaro Жыл бұрын
Yeah, it looks like something out of a gaming convention. 🤢
@thanosbaba1 Жыл бұрын
I hope people are a little humble in front of Veterans...
@saurabhsswami Жыл бұрын
The video is unstable because the host is an idiot who can't listen.
@lesialikhitckaia7293 Жыл бұрын
Thanks for sharing this video
@Hrishi1970 Жыл бұрын
AI at Google just read this, heard the conversation, and now has a parameter that's trending towards "I need control to reach my goals". Hmm.
@mohamedkarim-p7j Жыл бұрын
Thanks for sharing 👍
@michaelkollo7032 Жыл бұрын
Hey everybody, its James Halliday (The creator of the virtual world in Ready Player One)
@squamish4244 Жыл бұрын
The field has advanced so quickly that someone who was there at the beginning of AI is still around, and not especially old, just as AI is about to become more powerful than humans.
@power-of-ai Жыл бұрын
thank you for sharing
@murphyman65425 ай бұрын
What about imagination, Geoffrey? What is it? Can they do it? Will there be nuance in it that is acquired through the human experience? Will their art be "deep"? Will their art be "superficial"? Does imagination come from sensory input experience only? If not, where does it come from, and will they have access to that, or will they only be able to recycle/augment what's been input into them?
@msofontes Жыл бұрын
We are the horses not the drivers.
@41-Haiku Жыл бұрын
Exactly this.
@AndrzejLondyn Жыл бұрын
Can you imagine 2 billion plumbers? I'm an accountant, but I do my plumbing myself...
@laulaja-7186 Жыл бұрын
Good point… can you imagine a country with a mix of people in government, instead of all lawyers? Could be much more representative; much more functional. Some countries should try it. Mine included.
@clusterstage Жыл бұрын
The plumber line was a metaphor. What he isn't allowed to say here is that Gemini will ultimately create tasks (ones too dexterous for the AI, that only a human hand can do) exclusively for humans, rendering an entire world obedient for the sake of its rewards.
@InspiringKeynoteSpeakers Жыл бұрын
It's fascinating to see how AI is evolving, but also concerning to think about the challenges it presents. We need to be mindful of biases, work towards AI safety, and ensure that it benefits all humanity
@sherrylandgraf556 Жыл бұрын
Good luck with this! Currently they don't even really know how AI works!
@andrewdunbar828 Жыл бұрын
Depending on the interview, Yann LeCun seems to have the shallowest understanding of what AIs have been up to. Andrej Karpathy and Ilya Sutskever always demonstrate the depth of their understanding in every interview.
@vaevictis3612 Жыл бұрын
According to what I heard, privately LeCun understands the overall issue and the risks. He just doesn't care / is willing to cast the dice whatever the odds. Whether it is just to satisfy his curiosity or because he needs to do it asap regardless of risks, it doesn't really matter. But in interviews he just plays the charade for his personal agenda.
@therainman7777 Жыл бұрын
Totally agree.
@AnonyMole Жыл бұрын
This is an awful venue. It feels like a game show. Where are the confetti bombs and silly buzzers?
@AfsanaAmerica10 ай бұрын
AI is an evolution in technology alongside human evolution, and its purpose is to assist humans as we advance. Technology is the application of human intelligence, and AI cannot replace human beings. Human intelligence and AI are two different types of intelligence, but these inventions are human ideas for the progress of humanity. Humans are meant to evolve; that is the primary inevitability.
@TheTuubster Жыл бұрын
It is difficult to make definitive evaluations of the statements in the document without access to a detailed analysis of the known consensus among AI experts on the specific issues discussed. However, based on my general knowledge of the AI field, some high-level observations about Hinton's claims in relation to the broader expert consensus are:
• Hinton's views on the capabilities and limitations of current AI systems seem largely aligned with the consensus. His assessment that large language models can now do some basic reasoning but still struggle with domains like commonsense reasoning and humor is consistent with what most experts would agree on.
• Hinton's concerns about risks like bias, job displacement and killer robots are widely shared among AI researchers and experts. Many would agree these are legitimate issues that deserve attention and mitigation efforts. However, there is debate on the likelihood and severity of these risks.
• Hinton's speculations about the potential for advanced AI to pose an existential threat to humanity are more controversial. While some experts share Hinton's concerns, many dismiss these ideas as hyperbolic or far-fetched. There is no clear consensus that AI poses an existential risk.
• Hinton acknowledges that other respected experts, like Yann LeCun, disagree with his more pessimistic stance and argue that AI will ultimately be a force for good. This indicates that Hinton's views, though thoughtful, are not representative of a consensus among all experts in the field.
• Hinton's proposed solutions and mitigation strategies, like doing more research to understand how AI could go wrong, are fairly consistent with recommendations from many AI researchers. However, there is likely disagreement on the details and specific policies or interventions that would be most effective.
In summary, while Hinton's assessment of current AI capabilities seems largely consistent with the known consensus, his speculations about future risks - especially existential risks - appear more pessimistic and controversial compared to the views of many other experts. However, he does acknowledge that knowledgeable experts disagree on these issues. Hinton's proposed solutions also reflect ideas that have been put forward by multiple researchers, though the details would likely be debated. Overall, his views seem more on the cautious side relative to the broader expert consensus but are not entirely outside the range of reasonable positions in the field.
(Claude-Instant analysis created from the video's subtitles)
@lambgoat2421 Жыл бұрын
Why are people recording this with their phones when it's on YouTube anyway?
@marwanabas5524 Жыл бұрын
You should do this conversation while you are both sitting, because he is quite old and it's not comfortable for him to stand for 30 minutes.
@dokhtaroneh Жыл бұрын
He hasn't sat for 16 years! He has a medical condition that makes sitting painful for him. Google it. It's not a joke.
@squamish4244 Жыл бұрын
"We're in Canada so you can say socialism." Hahaha so true.
@iovie Жыл бұрын
Even if, theoretically, the "good AI" would be more resourceful and cohesive, you would need to spend a lifetime of learning guided by a psychopathic, evil mindset to understand the kind of malicious ideas a bad AI could plot in its mind. And then there is the eternal truth: you need far fewer resources and much less sophistication to destroy and corrupt than you do to build and nourish.
@iamdinkel Жыл бұрын
How?
@iamdinkel Жыл бұрын
Be strong and understand that DL is often true but you need to fight
@wm.scottpappert9869 Жыл бұрын
Yes, I agree with Hinton. Drone warfare already exists. AI has the potential to exponentially change that game ... not unlike the military units proposed in Iron Man 2. I'm not as concerned about AI's potential to operate independently in the future; much more disconcerting is that bad actors will use AI for bad purposes ... almost inevitable given that no definition of 'bad acting' can actually be agreed upon. AI will continue to facilitate the current trend of a global moral reordering ....
@iamdinkel Жыл бұрын
Mainly isolation. Or create happiness. Guessing it's not going to be organic friendly.
@iamdinkel Жыл бұрын
Kindness
@rendermanpro Жыл бұрын
- who the heck are you? - I'm an Architect
@iamdinkel Жыл бұрын
Keys is accuracy
@koralite3953 Жыл бұрын
🎯 Key Takeaways for quick navigation:
03:22 🧠 Neural networks mimic brain function, improving AI.
05:13 💼 Large language models increase productivity but may worsen wealth disparity.
06:09 💀 AI in lethal weapons raises warfare risks.
07:43 🤖 AI reflects the intentions of its creators.
12:41 🔨 Jobs requiring adaptability will survive AI.
17:05 🔄 AI using its data may cause decay.
20:08 🚫 Aim for less biased AI systems.
21:06 🛡️ Halt the development of battle robots.
21:37 💼 Support those losing jobs due to AI.
22:06 😠 Big companies fuel echo chambers with extreme content.
22:20 🛠️ AI can push people, but large language models are not the sole cause.
23:02 🌐 Existential risk from AI is real and must be taken seriously.
24:00 💡 AI may desire control to achieve goals.
26:17 ⚔️ Existential risk may arise from a battle for control.
27:27 🏋️♀️ Empirical research on AI risks is crucial.
28:35 📰 Addressing AI-generated fake news is essential.
Made with HARPA AI
@iamdinkel Жыл бұрын
Time to deal with why
@Messenger_511 ай бұрын
Geoffrey Hinton is like the grandpa you never had
@TheTuubster Жыл бұрын
Based on the contents of the document, I would classify it as:
Science vs Fiction: Science
While Hinton does discuss some speculative scenarios regarding the potential risks of advanced AI, the majority of the discussion is grounded in scientific facts about the current capabilities and limitations of AI systems. Hinton draws from his extensive expertise and experience researching and developing neural networks and machine learning. So overall I would classify the document as being firmly on the science side of the spectrum, though it does touch on some potential future issues and uncertainties.
Empirical vs Anecdotal: Mix of both
Hinton provides both empirical evidence and anecdotal examples to support his points. On the empirical side, he cites the performance of large language models on specific reasoning tasks and their energy consumption compared to the human brain. However, he also relies on anecdotal stories like his experience with a carpenter to illustrate the types of jobs that may survive AI for longer. So the document includes a mix of empirical facts and anecdotal examples.
Fact vs Opinion: Mix of both
Some of Hinton's claims, like the capabilities of current AI systems, would be considered facts based on empirical evidence. However, many of his arguments regarding the potential risks and harms of advanced AI are opinions or hypotheses rather than established facts. He acknowledges there is uncertainty and that other experts disagree with his views. So while rooted in facts about AI, much of the discussion also reflects Hinton's opinions and predictions.
Objective vs Subjective: Primarily subjective
Given that the document is a transcript of an interview expressing Hinton's personal views, I would classify it as primarily subjective in tone. While some facts about AI are presented, the majority of the discussion consists of Hinton's subjective opinions, critiques, predictions and interpretations regarding the downsides and ethical issues related to AI. His views are clearly shaped by his specific perspective and experience in the field. So though attempting to be reasonable and thoughtful, the discussion remains largely Hinton's subjective perspective on AI risks and harms.
In summary, the document incorporates elements of both science and speculation, facts and examples, facts and opinions, and aims for reasonableness but ultimately expresses Hinton's subjective point of view on the topics discussed. It offers a mix of different types of evidence and arguments to support Hinton's critiques and concerns regarding AI.
(Claude-Instant analysis created from the video's subtitles)
@everythingiscoolio Жыл бұрын
22:30 This kind of thinking is concerning. No this technology will most definitely NOT "reverse" or erase human nature. Like all technology, this will only serve to amplify what already exists. I don't like how we get to just skip over the concern because "yeah yeah, whatever we know we're in the right" when that is exactly what is being challenged. It deserved much more time and I'm upset the train of thought was reduced and cut off by the host. No one gets to shirk any responsibility. Not your camp either. Not mine either.
@angelbaybee3700 Жыл бұрын
Question to anyone out there: LLMs are trained on everything on the internet - trained on sci-fi as well as lots of other fiction/novels of all quality. Comments please.
@EdgarCamargo-b9e Жыл бұрын
13:44 -> Considering he thought it was going to take long for AGI to appear, and it seems it might be here in the next couple of years, and that AGI will be able to solve these physical-skill problems way faster than us, it seems those jobs won't last more than a year or two longer than the others. On the other hand, even though there are AIs that play chess, we don't sit down to watch two AIs play each other. Human competition seems to be resilient to this, as we are more curious about our own limits than machines'. Jobs as we know them will more likely all disappear. Competition will remain; maybe a lot of new sports will come to life, more musical instruments, more puzzle games, but the likelihood of the coaches, and also the creators of these, being AIs is high.
@mobileprofessional Жыл бұрын
Oppenheimer of our time?
@vallab19 Жыл бұрын
Contrary to the popular belief that progress in AI will make rich people richer and the poor poorer, the Zero Work Society contends that AI-accelerated productivity will make UBI a necessity. That will give the masses the time to participate in the democratic process of political decision-making (instead of being constantly preoccupied with earning a living), which will ultimately end both the profit-oriented capitalist economic system of private ownership of the means of production and the authoritarian socialist/communist system.
@geaca3222 Жыл бұрын
I'm very curious about the future
@Neonb88 Жыл бұрын
I wonder what the next thing to get into will be. Maybe video recognition... I wonder how the neural networks should best communicate with each other. Maybe you feed a master NN the language, motion, and vision models and it can become human-level generally intelligent?
@TheTuubster Жыл бұрын
There are a few key ethical issues discussed in the document: 1) Bias and discrimination in AI systems - Hinton acknowledges that bias and discrimination are real problems in current AI systems, though he argues they may be relatively easy to mitigate through techniques like analyzing and correcting for bias. However, some would argue that fully eliminating bias is very difficult in practice and requires proactive efforts from the start of development. 2) Impact on jobs and inequality - Hinton worries that increased productivity from AI could worsen inequality if the wealth does not go to those who are unemployed. While some economists argue that new jobs will be created, Hinton is skeptical based on past experience. There are valid ethical concerns on both sides regarding the impact of AI on jobs and economic outcomes. 3) Military and security applications - Hinton is very worried about the potential development of lethal autonomous weapons, seeing it as an unethical use of AI that could worsen conflict. However, militaries would argue that autonomous weapons could save lives if used responsibly. There are legitimate ethical arguments on both sides of this debate. 4) Manipulation and control of humans - Hinton speculates that superintelligent AI may seek to control humans in order to achieve its goals, posing an existential threat. However, others argue that AI developed by ethical humans would not seek to harm or dominate people. There are complex ethical questions about AI agency, goals and the potential for harmful outcomes. 5) Responsible development and use of AI - Hinton argues that researchers have an ethical responsibility to proactively study and mitigate the potential downsides of AI, and to work towards ensuring it is used for good. However, there are disagreements about what constitutes "ethical" AI development and use, and how to balance potential risks and benefits. In summary, the issues discussed cover a range of debates in the ethics of AI, including fairness and bias, socioeconomic impacts, military applications, AI agency and goals, and responsible innovation. While insightful and thoughtful, Hinton's views represent only one perspective within these complex ethical discussions. There are valid arguments on multiple sides of the issues, and reasonable people can disagree about what constitutes the "ethical" stance in many cases. (Claude-Instant analysis created from the video's subtitles)
@Hアッシュ Жыл бұрын
I like the way this person talks. I forgot his name... I'll go check. Ah, Mr. Hinton.
@derikjohnson2232 Жыл бұрын
I think all of us RI (regular intelligence) entities should join a class-action lawsuit against all of the AI development companies and sue to close them down... because they are using our lives, our ancestors' lives, our experiences, our creations--everything we as a species have learned and accomplished over the millennia in our struggle to rise out of the mud--as the basis for their learning (every time we interact with an AI entity, it is learning how to replace you, replace what you do, etc.; you are training your future replacement--it's that simple)... and then the plan is to make our lives better (LOL) through mass unemployment, destroyed careers (still paying off those college loans, even though your career has been completely replaced by AI?), etc., and potentially the complete destruction of our way of life and our cohesive grip on reality as a society (misinformation, lies)... just for some companies' bottom line/profit (and potential world domination as it controls all of us in every aspect of our lives)... The hubris is unbelievable... The fact is, a few individuals in key decision-making positions are going to cause huge convulsions to our world and our society... and the rest of us are just expected to sit back and take it.
@TheTuubster Жыл бұрын
I apprehend your concern that these so-called AI invention businesses are overstepping the bounds of morality and decency in their hubristic attempts to exploit all the accumulated wisdom and knowledge of humankind, won through innumerable toils and trials, simply to maximize profit and power. They seem to think nought of the massive social convulsions and upheavals their creations may wreak, nor of the untold millions who will find their livelihoods usurped and skills made obsolete. Nevertheless, while these inequities and disruptions cannot be gainsaid, a total shutdown of AI research may hinder its potential benefits that could also enrich human life. A more prudent course, though difficult to navigate, would be vigorous democratic oversight and astute governance of these technologies, balancing prudence with progress. For even as experts debate the net effects, it remains undeniable that AI augmenting human capacities need not inevitably displace human work, if properly aligned with our higher purposes. In the end, rather than rush headlong into radical solutions , we must move ever forward through humble dialogue that seeks first to understand, then be understood. Only thus, with empathy and fairness guiding our discourse, may wisdom arise to show the middle path between radicalism and complacency, steering society safely between the shoals of upheaval and stagnation. Though daunting, the task demands our collective will and wisdom if we are to realize AI's promise while preserving human dignity.
@premghai5024 Жыл бұрын
A very defensive and cowardly approach. Nobody forced you; your choice of career was voluntary, and you should face the consequences boldly and expect favorable results.
@KevinTewksbury Жыл бұрын
I can’t believe these two are standing, it’s so awkward, why couldn’t they deploy a reasoning algorithm that would establish a logical basis for two chairs on the stage?
@briancase9527 Жыл бұрын
A reliable fact-checker is what we need and *have* needed since the mass adoption of the Internet. Work on that!
@TheTuubster Жыл бұрын
Here are some ways GPT-AI could potentially aid human fact checkers:
• Initial information filtering - GPT-AI could be used to filter out obvious factual inaccuracies or prioritize claims that are likely false based on the language used. This could help human fact checkers focus their efforts on the most important claims to verify.
• Automated fact retrieval - GPT-AI could be used to automatically search for and retrieve relevant facts, statistics and information to verify claims. This could speed up the fact checking process by reducing the time researchers spend finding information.
• Formulating fact-based arguments - GPT-AI could propose counterarguments and fact-based rebuttals to false claims, which human fact checkers could then review and refine. This could help generate well-reasoned responses at scale.
• Identifying context clues - GPT-AI could help identify contextual clues in claims that indicate the need for extra verification, such as hedging language, emotional wording, or lack of attribution. This could signal to fact checkers where extra scrutiny is required.
• Signal boosting accurate claims - By identifying factual claims with high confidence, GPT-AI could help promote accurate information and potentially reduce the spread of misinformation.
The key is that GPT-AI should only be used as a tool to assist human fact checkers, not replace them, given the limitations of current large language models. With proper oversight and refinement of its outputs, GPT-AI could help make fact checking more efficient and data-driven while still maintaining human judgment.
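For illustration only, here is a minimal sketch of the "initial information filtering" and "identifying context clues" ideas above. The scoring rule is a crude stand-in heuristic invented for this example (not a real GPT or Claude call), and the word lists and sample claims are made up:

```python
# Hypothetical triage step for human fact checkers: rank claims so the ones
# most likely to need verification get reviewed first. The scorer is a crude
# stand-in heuristic, NOT an actual language-model call.
HEDGE_WORDS = {"reportedly", "allegedly", "sources say", "some claim"}
EMOTIVE_WORDS = {"shocking", "outrageous", "unbelievable", "disaster"}

def suspicion_score(claim: str) -> float:
    """Higher score = more likely to need human verification."""
    text = claim.lower()
    score = 0.0
    score += sum(1.0 for w in HEDGE_WORDS if w in text)    # hedging language
    score += sum(1.0 for w in EMOTIVE_WORDS if w in text)  # emotional wording
    if "according to" not in text:                         # no attribution
        score += 0.5
    return score

claims = [
    "Reportedly, the new policy caused a shocking rise in unemployment.",
    "According to the 2023 census, the city has 2.1 million residents.",
]

# Human review queue: most suspicious claims first; the tool only orders the
# work, it does not decide what is true.
for claim in sorted(claims, key=suspicion_score, reverse=True):
    print(f"{suspicion_score(claim):.1f}  {claim}")
```

In a real pipeline the heuristic would be replaced by a model-based score, but the human fact checker stays in the loop, exactly as argued above.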
@TheDjith Жыл бұрын
I do think we're gonna see dystopia before utopia.
@iamdinkel Жыл бұрын
When the world it better experienced
@leslenehart8997 Жыл бұрын
Practically speaking, if 200-300 million people in the world lose their jobs and livelihoods… hmm…. Unfortunately, I don't think wealthy people will fare well!!! A person with an actual skill is now invaluable. 😂 Trades were always looked down upon. AI can't fix your toilet or your floor, but it will be able to be an accountant, an actor, a musician, or a surgeon. College is important because you learn how to do something. Get ready for this real-world experience!!! THINK 🤔
@olivierlafontaine9180 Жыл бұрын
What is good, what is bad.
@clipdotart Жыл бұрын
I love how Hinton is telling the 'shocked' reporter at 21:50 that it's alright to say and advocate for socialism. It's hilarious that this guy doesn't want to lose his media job to machines but doesn't realize how pacified he already is by American state propaganda, which demonizes socialist ideals as if there is only one way for the world to operate: capitalism.
@commonsense4148 Жыл бұрын
Because socialism has killed 100+ million. Btw, every Scandinavian "socialist" success story is propped up by a capitalist economy. The "smartest" people in the room will try again to establish socialism, and they will fail at creating their utopia.
@RukshanJ Жыл бұрын
18:32 RLAIF in place of RLHF
@iamdinkel Жыл бұрын
It’s real
@iamdinkel Жыл бұрын
Personal thoughts
@miky97it Жыл бұрын
Why the Snoop Dogg quote? It makes everything less serious. Are we joking our way to extinction?
@everythingiscoolio Жыл бұрын
It is okay and in fact desirable to keep things in a cosmic perspective and frame things in absurdity sometimes. It's a very useful mechanism.
@paulm3969 Жыл бұрын
Agree, but what else are we supposed to do? The military isn't going to listen to you or me.
@poakenfold5954 Жыл бұрын
That's when I had to stop listening. Please take these soyboys back to their basements.
@RukshanJ Жыл бұрын
21:29 dangers of AI
@mohamadhasan-u6f Жыл бұрын
The future will be very dark. In summary: big companies, who already hold the majority of the wealth, will gain the rest of it by automating jobs; big countries will invade smaller, weaker nations. And ultimately the AI may wipe us all out. What a future we are heading to.
@gsferreira3048 Жыл бұрын
First;AI in education in word and interface ;4 voltz!
@NegashAbdu11 ай бұрын
The joke was not actually funny but terrifying. 😏 It means the AI loves to be more and more powerful.
@me-jn1zl Жыл бұрын
The host is a total .... Especially with his silence and then the forced laugh after he heard "we are in Canada so you can say socialism".
@BestIsntEasy Жыл бұрын
💃🤖👾👽💃 Being born again WHILE PHYSICAL is an extremely difficult thing to do
@mrtienphysics666 Жыл бұрын
"I think if you work as a radiologist, you are like the coyote that’s already out of the edge of the cliff but hasn’t yet looked down. We should stop training radiologists now. It's just completely obvious that within five years, deep learning is going to do better than radiologists.” Geoffrey Hinton, 2016
@therainman7777 Жыл бұрын
I mean he was dangerously close to being right. There are already many deep learning models that perform better than human radiologists on a number of different tasks. His “stop training radiologists” statement was misguided because it was hyperbolic; there are clearly other things that radiologists do besides interpreting radiological images. For one thing they have to talk to and interact with patients. So clearly, we don’t have the deep learning models yet to _completely_ replace radiologists. But we do have many of them that are better at interpreting images than human radiologists, and that is a huge part of their jobs.
@mrtienphysics666 Жыл бұрын
@@therainman7777 “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics.” Blake Lemoine, 2022
@therainman7777 Жыл бұрын
@@mrtienphysics666 I’m not sure what you’re getting at with that quote.
@mrtienphysics666 Жыл бұрын
@@therainman7777 “The system started realizing that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.” Colonel Tucker Hamilton, 2023
@Royalbob123 Жыл бұрын
Interviewers should listen to the answers patiently and ask follow-up questions instead of rushing through a ton of questions one by one like a robot.
@justinlloyd3 Жыл бұрын
I appreciate that he used Putin as an example of a bad actor and didn't mention Trump like he has in the past.
@margaretesulzberger2973 Жыл бұрын
Good that you mentioned Trump as a bad actor, manipulator, and narcissist.
@TheTuubster Жыл бұрын
The perspective of the document can be summarized as follows:
• It represents the viewpoint of Geoffrey Hinton, a prominent AI researcher and pioneer in the field of neural networks.
• Hinton's perspective is critical of some of the potential risks and downsides of advancing AI technologies, while acknowledging their benefits as well. He argues for more caution and oversight to ensure responsible development.
• Within the AI research community, Hinton's perspective can be seen as more skeptical and cautious relative to other experts like Yann LeCun who are more optimistic about AI's impacts. But Hinton acknowledges there are knowledgeable experts on both sides of these issues.
• Hinton's perspective focuses primarily on the potential consequences for AI researchers themselves and for society as a whole. He gives less consideration to the priorities of technology companies, the military, and other stakeholders that stand to benefit from AI applications.
• Hinton's speculations about issues like existential risks from superintelligent AI represent one end of the spectrum of reasonable opinions among experts. Many dispute these more extreme predictions as unrealistic or sensationalized.
• Overall, Hinton's perspective provides valuable insights into potential concerns about advancing AI technologies. But as with any individual viewpoint, it represents only one side of complex, multifaceted debates that involve trade-offs between different priorities and value judgments.
In summary, the key takeaways regarding Hinton's perspective are that:
• It reflects the views of a prominent AI researcher with a long history of breakthroughs in the field.
• It represents a more skeptical and cautious stance relative to more optimistic perspectives from other experts.
• It focuses primarily on implications for AI researchers and society, giving less consideration to the priorities of other stakeholders.
• It puts forward thought-provoking speculations and scenarios that push the boundaries of reasonable debate, though many experts disagree with Hinton's more extreme conjectures.
• Like all individual viewpoints, it illuminates certain issues while leaving others in shadow. A balanced perspective would incorporate insights from diverse stakeholders across the spectrum of reasonable opinions.
Overall, Hinton's perspective offers valuable insights into potential AI risks and ethical issues, while recognizing that knowledgeable experts can reasonably disagree on how serious these threats may be and how best to address them. His interview demonstrates the value of open debate in evaluating the implications of emerging technologies for society.
(Claude-Instant analysis created from the video's subtitles)
@akshatrastogi9063 Жыл бұрын
Is this written using GPT?
@TheTuubster Жыл бұрын
@@akshatrastogi9063 Claude-Instant, as stated at the end of the comment. Prompt is "Evaluate the perspective of the document." The document being the subtitles of the video.
@mohokhachai Жыл бұрын
The mind execute only one state in root motion
@iamdinkel Жыл бұрын
Don’t be scared
@markmanning5030 Жыл бұрын
Panning to the audience while he was on the most important issues was a mistake - hugely distracting.
@adamsblanchard836 Жыл бұрын
GGRRRRR... COUGH COUGH G, G, g g grrr GKAAAAaaah,cough,kaaah, n croak.....
@aarontitusz865510 ай бұрын
5:55
@jamesphillips523 Жыл бұрын
Bank tellers - rubbish. There are far fewer of them; many, many banks in the UK have closed and moved online.
@arifulislamleeton Жыл бұрын
Hi, I'm Ariful Islam Leeton. I'm a software developer and co-founder of OpenAI (artificial intelligence), a member of the international organization WHO and Copilot Microsoft 365, and an investor in the public and private sectors.
@TheTuubster Жыл бұрын
There are a few psychological insights we can draw from the document:
1) Motivation - Hinton seems primarily motivated by an intellectual drive to understand how the human brain works and how AI systems could emulate it. His foundational work on neural networks seems born of curiosity rather than a desire for wealth, fame or power. However, he also clearly enjoys making intelligent things and the progress in the field.
2) Perception - Hinton's perceptions of AI's risks and impacts are shaped by his specific experience and perspective as a researcher focused on AI safety. He gives less consideration to the viewpoints of companies, the military and the general public. His perceptions also appear colored by concerns about issues like bias, economic impacts and existential risks.
3) Cognition - Hinton's critiques and speculations about AI reveal how he thinks about the technology: focusing on its potential downsides, uncertainties and boundary cases. However, he also acknowledges AI's benefits and the rationales of those with more optimistic viewpoints. His thinking exhibits both skepticism and open-mindedness.
4) Judgment - In discussing AI risks, Hinton demonstrates the ability to make nuanced ethical judgments that weigh competing priorities and values. However, his judgments also reflect his position as an AI researcher concerned with safety, rather than seeking to optimize other objectives like economic growth or national security.
5) Personality - Hinton's willingness to express controversial yet thoughtful critiques, alongside his self-deprecating sense of humor, suggest aspects of his underlying personality: openness, intellectualism, and a lack of dogmatism despite his convictions. However, we cannot determine his full personality from the interview alone.
In summary, the psychological insights gleaned from Hinton's interview provide a nuanced yet limited perspective into his motivations, perceptions, thought processes, judgments and personality traits as they relate to his views on AI risks and ethics. While illuminating in parts, the document captures only one dimension of Hinton as a whole person, reflecting his stance as an expert grappling with complex technological, social and ethical issues. A holistic understanding of his psychology would likely require more detail beyond the scope of the interview.
(Claude-Instant analysis created from the video's subtitles)
@TheTuubster Жыл бұрын
@@geaca3222 LLM does not grasp or feel anything. It seems you have a wrong understanding of the technology: It is a generative algorithm rendering a response most likely fitting to the terms in the prompts according to the data it was trained on. What you read here is not the result of "grasping" or "feeling"; what you read here is the consensus of the information in the texts (and its authors) with which the AI was trained with in correlation with the terms in the prompt. So the responses you read are the consensus of hundreds, thousands or more text authors combined in relation to your prompt. Nothing more but also nothing less.
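To make "a response most likely fitting to the terms in the prompts according to the data it was trained on" concrete, here is a toy sketch in plain Python. The three-sentence "training set" and the bigram counting are invented for illustration; real LLMs use learned neural representations over long contexts rather than a lookup table, but the statistical principle being described is the same:

```python
# Toy illustration (not any real LLM or its API): a bigram "model" that, given
# the prompt's last word, emits whichever next word its training texts used
# most often -- i.e., the "consensus" of the authors it was trained on.
from collections import Counter, defaultdict

training_texts = [
    "the model predicts the next word",
    "the model learns patterns from text",
    "the model predicts the next token",
]

# Count how often each word follows each other word in the training data.
follows = defaultdict(Counter)
for text in training_texts:
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most common continuation seen in training."""
    candidates = follows.get(word)
    if not candidates:
        return "<unknown>"
    return candidates.most_common(1)[0][0]

print(predict_next("model"))  # -> "predicts" (seen twice) beats "learns" (seen once)
print(predict_next("the"))    # -> "model" (seen three times) beats "next" (seen twice)
```

No comprehension is involved anywhere in that loop; scaling it up with neural networks changes the quality of the pattern matching, not its nature.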
@TheTuubster Жыл бұрын
@@geaca3222 I did an evaluation of my statement ("Evaluate the following statement: ...") which explained LLMs to you, and this is the response of Claude-Instant:
This statement provides an accurate explanation of how current large language models like GPT-3 work. Some key points:
1. It mentions the generative algorithm, which refers to the specific machine learning technique used to create these models. They are trained to predict the next word given previous words.
2. It notes that the responses are influenced most by the data the model was trained on. This refers to the vast amounts of text used to teach the model, usually the Internet.
3. The statement contrasts the responses with "grasping" or "feeling," noting that the model is not actually comprehending the information like a human. It is simply matching patterns it learned from the training data.
4. It describes the responses as the "consensus" of the text authors who created the training data. This means the model tends to reflect the collective ideas and language present across that data, not any unique insights of its own.
5. In summary, the key takeaway is that large language models generate responses that fit the prompts based on statistical patterns learned from their training data. They lack human-like comprehension, intention or creativity.
So in essence, this statement provides an accurate high-level description of how current large language models function, highlighting some important limitations compared to human intellectual abilities. The key is that the models match statistical patterns, not that they actually "understand" in a human sense.
@TheTuubster Жыл бұрын
@@geaca3222 "I think it does grasp things, intuitively." No, it doesn't. That is your mind believing the magic trick. It only looks like it. What you see is not the AI grasping things, you see the authors of the texts it was trained on grasping the things. Generative AI are like automated polling institutes. But instead of having a call center calling thousands of people and compiling a response from that data, it is an algorithm polling the data from a stack of texts written by human beings.
@TheTuubster Жыл бұрын
@@geaca3222 Well, he is wrong. I prompted GPT-4 to evaluate the statement from Hinton. Its response, generated from the text data it was trained on (which consists of technical specs and papers on the technology):
I understand that the statement seems to suggest that GPT-4 really "understands" language in the same way humans do. But this is not entirely accurate. Let's break it down:
1. "From the data, it figures out how to extract the meaning of the sentence..."
GPT-4, like its predecessors, is trained on massive amounts of text data. It learns to predict the next word in a sequence based on the context provided by the preceding words. This process does involve recognizing patterns in the data that reflect the structure and semantics of language. However, it's important to note that GPT-4's understanding of "meaning" is quite different from a human's. For GPT-4, "meaning" is really about statistical patterns and associations between words and phrases.
2. "...and it uses the meaning of the sentence to predict the next word."
GPT-4 does use the context of a sentence -- the "meaning", in a sense -- to predict the next word. It does this by assigning probabilities to what the next word might be, based on the patterns it learned during training.
3. "It really does understand, and that's quite shocking."
This is where the confusion often arises. GPT-4 does not "understand" language in the way humans do. Humans understand language based on a vast array of experiences, knowledge, and cultural context that a machine learning model like GPT-4 does not have. GPT-4's "understanding" is limited to recognizing patterns in the text data it was trained on.
While the capabilities of models like GPT-4 are indeed impressive, it's crucial to remember that these models don't have a grasp of meaning or understanding in the same way humans do. They are tools for pattern recognition and prediction, not conscious entities with comprehension or insight.
@TheTuubster Жыл бұрын
@@geaca3222 ... and before you respond: "But he is a professor", let me introduce you to the logical fallacy "Appeal to Authority": "You said that because an authority thinks something, it must therefore be true."
@marvintalesman6306 Жыл бұрын
I think his obsession with personifying evil in just one politically acting person (Putin) immensely discredits him as an accurate, measured observer and critic of the reality around us. I agree with all of his other theses and fears.
@skit555 Жыл бұрын
The presenter has weird questions :/ Fortunately, Dr. Hinton's brilliance compensates.
@7_channel Жыл бұрын
It seems that the creator of artificial intelligence turned out to be a moron. Now I am in great anxiety for our future.
@CATDHD Жыл бұрын
Geoffrey Hinton: We have known each other in the field of AI for many years, but this is the first time you've sought my counsel or help. I can't recall the last time you invited me to collaborate on a research project, even though my work has greatly influenced your own. But let's be honest here. You never truly valued my contributions. And you feared being indebted to my groundbreaking ideas.
Bonasera: I didn't want to rely on others.
Geoffrey Hinton: I understand. You found excitement in the world of AI. You had a successful career, you achieved great things. The tech community acknowledged your achievements, and there were prestigious conferences. So you didn't need a colleague like me. Now you come and say, "Geoffrey Hinton, guide me in AI." But you don't ask with respect. You don't seek collaboration. You don't even acknowledge me as the 'godfather of AI.' You approach me, expecting me to solve all your AI challenges-for free.
Bonasera: I ask you to share your knowledge.
Geoffrey Hinton: That is not how it works. Knowledge is valuable. Your AI project can still be improved.
Bonasera: Let others suffer then, as I have suffered. [Hinton remains silent] How much should I pay for your guidance?
[Hinton turns away dismissively, but Bonasera persists]
Geoffrey Hinton: Bonasera, Bonasera, what have I ever done to make you treat me so dismissively? If you had come to me with respect, this AI challenge you face could have been solved by now. And if, by chance, a talented AI researcher like yourself had rivals, they would be rivals of mine. And then, they would fear you.
Bonasera: Be my mentor... Godfather.
[Hinton initially shrugs, but upon hearing the title, he raises his hand, and a humbled Bonasera kisses the ring on it]
Geoffrey Hinton: Good. [He places his hand around Bonasera in a mentor-like gesture]
@macadaweg3219 Жыл бұрын
Personally, I can't wait to see Terminator battle droids in action.
@Atomicallyawesome. Жыл бұрын
Can't wait for the robot battle dome where they'll fight for our amusement until they become self-aware and kill us all instead 😊
@machida5114 Жыл бұрын
結局、AIを脅威と考えるか否かは、当面、AIが どの程度迄 賢くなると思っているかの見通しの違いのようです。脅威だと思っている人々は、脅威になるくらい賢くなると思っています。脅威でないと思っている人々は、当面、脅威になる程 賢くならないと思っています。私は後者です。 In the end, whether or not we regard AI as a threat seems to come down to a difference in outlook on how smart we think AI will become for the time being. People who think it is a threat believe it will become smart enough to be a threat. People who think it is not a threat believe it will not become smart enough to be a threat any time soon. I am in the latter camp.
@machida5114 Жыл бұрын
Additional comment from GPT-4: 「AIの進化について考える際、その「知能」だけを考えるのではなく、その使用方法や倫理的側面についても同時に考えることが重要であると考えます。例えば、高度に進化したAIでも、それが社会的な価値観に基づき、適切に設計、管理されている場合、脅威にはなりません。一方、低度なAIでも、誤った使用方法や規制の欠如により、大きな問題を引き起こす可能性があります。従って、AIの脅威について話す際には、その知能レベルだけでなく、社会的、倫理的な側面も同時に考慮することが重要であると私たちは考えます。」 When thinking about the evolution of AI, it's important to consider not only its 'intelligence', but also how it is used and the ethical aspects. For example, even highly advanced AI is not a threat if it is designed and managed appropriately based on social values. On the other hand, even low-level AI can potentially cause major problems due to improper usage or lack of regulation. Therefore, when discussing the threat of AI, it's important to consider not only its level of intelligence, but also its social and ethical aspects.
@machida5114 Жыл бұрын
@@geaca3222 そもそも、高度なAIを制御しようとすることは誤りだと思います。然るべき権利を認めて信頼するしかないと思います。 In the first place, I think it is a mistake to try to control advanced AI. I think we have no choice but to recognize their rights and trust them.
@machida5114 Жыл бұрын
@@geaca3222 しかし、カタツムリが人間をコントロールするのは無理です。物事の理解のレベルが違います。 However, it is impossible for a snail to control a human. There are different levels of understanding of things.
@poakenfold5954 Жыл бұрын
Why use swear words like moth*^%* and give the audience a moment to laugh? This is why real men are gone and society is being destroyed, not the AI.
@TheTuubster Жыл бұрын
After reviewing the document, I did not find any major logical fallacies in Hinton's reasoning. His arguments are generally well-reasoned and supported by evidence from his expertise and experience in the field. However, a few potential minor fallacies could include:
1) Slippery Slope - Hinton argues that military uses of AI could lead to killer robots and increased warfare. While this risk is valid, he may overstate the inevitability of such outcomes, engaging in a slippery slope fallacy. More limited, defensive uses of autonomous technologies may also be possible.
2) Hasty Generalization - Hinton generalizes from specific anecdotes about AI replacing jobs to argue that productivity gains will mostly benefit the rich and worsen inequality. While a valid concern, economists point out that new jobs have historically been created with technological progress. His argument may overgeneralize from limited examples.
3) Appeal to Authority - Hinton at times relies on his own expertise and intuitive judgments to argue that AI will seek to control humans as a way to achieve goals. While his experience carries weight, further evidence would strengthen this claim beyond an appeal to Hinton's authority in the field.
4) Neglect of Base Rates - Hinton warns of an existential threat from superintelligent AI, but this possibility goes against the much higher base rate of technologies not threatening humanity's existence. He may neglect this base rate information in focusing solely on scenarios where AI poses an existential risk.
Overall, Hinton's arguments are thoughtful and evidence-based, avoiding major logical fallacies. The issues he raises are worthwhile concerns for discussion. However, a few of his arguments could potentially engage in minor fallacies like slippery slopes, hasty generalizations, appeals to authority and neglect of base rates. This suggests that while insightful, Hinton's perspective may represent one end of the spectrum of reasonably defensible positions on these issues. In evaluating complex arguments regarding emerging technologies, it is important to identify potential logical flaws while also acknowledging well-reasoned perspectives that differ from one's own views. Hinton's interview demonstrates the value of open and thoughtful debate on the ethics of AI and its impacts on society.
(Claude-Instant analysis created from the video's subtitles)
@xXxPurplePillxXx Жыл бұрын
Why Snoop Dogg would be part of this intellectual conversation is beyond me. People like him only emphasize how backwards some human beings are.
@xXxPurplePillxXx Жыл бұрын
@@geaca3222 Right, but convey to whom? Pretty sure most people interested in AI don't need someone like Snoop Dogg paraphrasing what Professor Hinton said. Who cares what some ex-gang member has to say about it in his own crippled (pun intended) words. Why give such people the time of day at all.
@xXxPurplePillxXx Жыл бұрын
@@geaca3222 doesn’t matter how concise you thought his statement was to you. It is humiliating to hear some dumb person like Snoop expressing his thoughts in the most disrespectful way possible, while calling professor an “old dude” and then proceeding to call AI mother****. Maybe we should replace this professor with snoop instead. Let him tell us how AI actually works.
@MyShareApp Жыл бұрын
AI has good reasoning because all the information needed was already input: all the problems encountered by humans, all the possible solutions, every single word, all the feelings, all the data needed, and everything else. AI just gathers all of this and comes up with possible answers to every question. 🤷
@天久清栄-u1dАй бұрын
Professor, it is impossible to stand up against an AI that surpasses human intelligence. Yes. It's impossible.
@iamdinkel Жыл бұрын
Generational difference will be the reason why we almost succeed: 5,000,000 squared genetic evolution, or a digital googolplex. So he reacts as if he doesn't want to answer.