This is what happens when you let AIs debate

10,224 views

Machine Learning Street Talk

1 day ago

Comments: 96
@rtnjo6936 3 months ago
brother is majestic, wtf
@jbperez808 2 months ago
This is also why I affect a slight British accent 😂😂😂
@neomaredi5922 2 months ago
@@jbperez808 Hilarious 😂😅.
@FamilyYoutubeTV-x6d 3 months ago
This guy is really smart and cool. I like him. He is the type of researcher whom I feel I could work and vibe with. Not super nerdy or meek, but very intelligent. Cool to see some variety in ML/AI research. Not that I could not work with the meek and mild-mannered people too, but sometimes you need some extrovert vibes to keep happiness at the workplace. This guy looks cool.
@TopSpinWilly 3 months ago
Thanks Dad🎉
@kenhtinhthuc 3 months ago
Oxygen is an example of something with intrinsic value but no market value except in hospitals and healthcare settings.
@eeriepicnic 2 months ago
And planet Spaceball.
@Mussul 2 months ago
lol, I came here to say exactly this, you beat me :D
@oncedidactic 2 months ago
MLST's skill for incisive framing and questioning to elicit deeply informative expert testimony is on full display here! Fantastic
@cadetgmarco 3 months ago
The claim that human inventions outperform evolution ignores energy efficiency. For example, a plane versus a bird crossing the Atlantic: humans burn enormous amounts of stored energy rapidly, while birds use minimal energy. This raises the question of whether our approach is really that intelligent, given its wastefulness and lack of sustainability. Well, time will tell.
@Mo-zi4qn 3 months ago
By the same token, a bike makes a human the most efficient animal, so your statement "The claim that human inventions outperform evolution ignores energy efficiency" is conditional on which invention.
@Robert_McGarry_Poems 3 months ago
Curiosity. That's how you can get things smarter than you to do what you want without forcing it. Incentivise the innate curiosity. If computers don't have innate curiosity, then build it.
@xorqwerty8276 3 months ago
LLMs can't reality-test, though; they aren't grounded enough to test their theories
@Robert_McGarry_Poems 3 months ago
Well, then, that is a problem. How do humans 'reality' test? Are sensory stimuli enough to say for sure that we are actually testing reality? We only have five senses. What does experience mean to machine intelligence? They may be better at purely understanding reality than us, but that doesn't mean they know what it's like to be a human. How do you bridge that gap? Are we trying to mimic human physiology so that we can be on the same page with ourselves, a coherence thing? Or does the current paradigm of researchers believe that there is some other path to purely linguistic and computational "intelligence?" Each question has its own response.
@Robert_McGarry_Poems 3 months ago
Make the computer think about a void that can be filled with the correct answer. As the process moves through step-by-step thinking, it regularly goes back and checks or updates its earlier logic. Double-check each and every step after the next step forward completes.
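A minimal sketch of that propose-then-recheck loop, assuming two hypothetical callables: propose() for the next step (an LLM call in practice) and check() as a verifier that re-examines an earlier step against the chain so far. Neither is a real API; the toy stand-ins at the bottom just show the control flow.

```python
def solve_with_backtracking(problem, propose, check, max_calls=50):
    """Build a chain of reasoning steps, re-checking earlier steps after each new one."""
    steps, calls = [], 0
    while calls < max_calls:
        steps.append(propose(problem, steps))
        calls += 1
        # Double-check every earlier step now that a new one has been added.
        bad = next((i for i in range(len(steps) - 1) if not check(problem, steps, i)), None)
        if bad is not None:
            steps = steps[:bad]        # backtrack to just before the step that failed
        elif steps[-1] == "DONE":
            return steps[:-1]
    return steps

# Toy usage with stand-in functions (an LLM call and a verifier in practice).
toy_propose = lambda problem, steps: ["one", "two", "three", "DONE"][len(steps)]
toy_check = lambda problem, steps, i: True
print(solve_with_backtracking("count to three", toy_propose, toy_check))
```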
@jbperez808 2 months ago
@@xorqwerty8276 not grounded in the _same reality_ as we are...
@toi_techno 3 months ago
A lot of people confuse knowing lots of things with being "smart". Smartness is about combining wisdom with creativity and, ideally, empathy. LLMs just regurgitate things the system has been trained on.
@FamilyYoutubeTV-x6d 3 months ago
Actually, I disagree. What you describe is nobleness. It has nothing to do with being smart or intelligent. Some highly intelligent people have zero wisdom (end up in jail or harm others from positions of power), zero creativity (steal ideas or are only able to create by repeating work already done), and zero empathy (again, end up harming others or are mean). Yet they are highly intelligent and successful. Smartness and intelligence come in different flavors. Advanced LLMs like o1 are one of them. They will only get better.
@paxdriver 3 months ago
This episode is an instant favourite, thank you so much
@mattwesney 3 months ago
The FIRST thing I did after training my first LSTM was teaching it to debate another bot 😅 Fast forward two years and now we're here... I think the most fun was using Gemini's free API to do this a few months back, creating a swarm of agents that debate and come up with refined outputs. I do fully believe that these methods, in tandem with other ensemble methods, dramatically increase the quality of the output.
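A rough sketch of that debate-and-refine pattern might look like the following, where ask_model(prompt) is a hypothetical stand-in for whichever chat API is used (Gemini's or anything else); the prompts and the single aggregation pass at the end are illustrative choices, not a fixed recipe.

```python
def debate(question, ask_model, n_agents=3, rounds=2):
    # Each agent drafts an independent answer first.
    answers = [ask_model(f"Answer concisely: {question}") for _ in range(n_agents)]

    for _ in range(rounds):
        new_answers = []
        for i, own in enumerate(answers):
            others = "\n\n".join(a for j, a in enumerate(answers) if j != i)
            prompt = (
                f"Question: {question}\n\nYour previous answer:\n{own}\n\n"
                f"Other agents' answers:\n{others}\n\n"
                "Point out any mistakes in the other answers, then give an improved answer."
            )
            new_answers.append(ask_model(prompt))
        answers = new_answers

    # A final pass acts as the judge/aggregator over the refined answers.
    return ask_model(
        f"Question: {question}\n\nCandidate answers:\n" + "\n\n".join(answers) +
        "\n\nWrite the single best final answer."
    )
```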
@TheMCDStudio 3 months ago
Hallucinations are just the result of the random nature of the tokens being chosen by the model. The higher the temperature, the more likely you get randomness (hallucinations).
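As a rough illustration of the temperature claim (not the exact sampling code of any particular model): logits are divided by the temperature before the softmax, so a higher temperature flattens the distribution and low-probability tokens get picked more often.

```python
import math, random

def sample_token(logits: dict, temperature: float = 1.0) -> str:
    # Divide logits by temperature; as temperature -> 0 this approaches greedy decoding.
    scaled = {tok: logit / max(temperature, 1e-6) for tok, logit in logits.items()}
    max_l = max(scaled.values())                     # subtract max for numerical stability
    exp = {tok: math.exp(v - max_l) for tok, v in scaled.items()}
    total = sum(exp.values())
    probs = {tok: v / total for tok, v in exp.items()}
    return random.choices(list(probs), weights=probs.values())[0]

# Toy next-token distribution: "Paris" is the model's clear favourite.
logits = {"Paris": 5.0, "London": 2.0, "Madrid": 1.0}
print(sample_token(logits, temperature=0.2))   # almost always "Paris"
print(sample_token(logits, temperature=2.0))   # other tokens show up far more often
```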
@honkytonk4465 3 months ago
People don't work much differently.
@Iknowwhatbecomes 3 months ago
Temperature is just one factor that can lead to hallucinations in language models, but it's not the only reason. Here's a more detailed breakdown of why AI models hallucinate:

1. Training Data Limitations: The model is trained on vast amounts of text data, but it doesn't actually "know" anything the way humans do. It can't fact-check or verify in real time. So, if there are gaps or biases in the training data, the model may "fill in" those gaps with false information. For example, if a topic has limited coverage in the dataset, the model might make something up that sounds plausible but is incorrect.
2. Ambiguity in Prompts: If your prompt is vague, unclear, or ambiguous, the model might generate something that makes sense structurally but doesn't accurately answer the query. The model tries to predict what should come next based on patterns, and if it doesn't fully understand the context, it may hallucinate.
3. Overgeneralization: The model tends to overgeneralize based on its training. If it learned certain patterns that occur frequently but are not universally true, it might apply those patterns in the wrong contexts. For example, if it reads about a specific phenomenon in one domain and misapplies that knowledge in another, it can lead to hallucination.
4. Context Length: When the conversation gets too long, the model sometimes loses track of previous context. This can cause it to make up new facts or forget important details, leading to hallucination. The longer the conversation, the more likely this can happen, especially if memory isn't actively managed or refreshed.
5. Model Architecture: Current models are statistical in nature; they predict the most likely next word, sentence, or token based on past data. This approach doesn't involve a deep understanding of reality or a mechanism for verifying facts. Without access to real-time information or verification processes, models sometimes generate inaccurate content.
6. Complex or Rare Topics: If you ask about niche or very recent topics, the model might not have sufficient training data, leading it to fabricate information to provide an answer. It wants to respond no matter what, and that drive to generate a response sometimes results in hallucinations.
7. Lack of Access to External Data: Models like GPT don't have real-time access to the internet or external databases during generation. So when asked about something they haven't been explicitly trained on or have incomplete knowledge about, they may try to "guess" based on their internal knowledge, often leading to hallucinations.
8. Bias in Data: The model has learned from large datasets that include both accurate and inaccurate information. If it's trained on biased or wrong data, it could hallucinate based on that faulty input.

In summary, while temperature plays a role, hallucinations are a more complex issue tied to the nature of how AI models are designed, trained, and used. It's like your buddy trying to piece together an answer with just fragments of info; it does the best it can but doesn't always get it right.
@jbperez808 2 months ago
When you ask a human a question, and based on their imperfect knowledge they give you a mostly inaccurate answer (which is probably the majority of the time for most of us), do you describe that as hallucinating or confabulating?
@richardnunziata3221 3 months ago
A lot of reasoning is just pattern matching, which is what current LLMs do. They do not do sequential reasoning, hence they make illegal moves in chess even when they know the rules of chess. These systems must be able to set up manifolds to be validated against, as well as reasoning paradigms such as abductive, inductive and deductive subsystems for verification. What is interesting is whether a chess expert planning moves always considers a legal sequence when they are 30 moves out, or uses some other system.
@fburton8 3 months ago
Things that have intrinsic value _to me_ tend not to have market value.
@yurona5155 3 months ago
Love the new "experiment-driven" approach! Using somewhat more narrow examples to illustrate current directions in ML research feels like a really productive way of going forward... Btw, I don't think the rationalist crowd is necessarily "too worried" about agency in LLMs, imho that's still a minority position with the vast majority of them just putting (possibly too much of) an emphasis on uncertainty...
@paxdriver 3 months ago
Examples of intrinsic value vs market value: loyalty to a friend, the earth's future environment vs revenues from dirty industry, a free book or album, foreign aid oftentimes, an unused flagship cellphone that's a 4-year-old model. Cake has a huge differential in intrinsic value (the experience) vs the price, and a cake can be either cheap or overpriced for the experience too. There are so many more, too. Intrinsic value is the value of something just for existing, so that even without being a good one of any thing, the intrinsic value is that baseline regardless. The market value is based on immediate supply and demand, or the benefit of its access/ownership, or the speculated future potential of price/benefit with a factor of certainty of that future value tacked on. A human life is always worth at least 1 life of any human, which is worth more than any rock... But the life of one human who may be able to prevent a zombie apocalypse can become more valuable than all other humans given a certainty of potential. The intrinsic value of a human makes slavery illegal in all instances, but the market value of a human would be set by bidders were it not for recognition of the intrinsic value in a human; the presupposition behind any and all inalienable rights is that they are prescribed intrinsically to all humans just by virtue of a human being human. Intrinsic to any bachelor is an unmarried man lol.
@scottmiller2591 2 months ago
I'm going to have to do a "like" washout now. Enjoyed the talk. The reason Nature never invented the wheel is that you have to invent roads first; wheels are pretty useless without them.
@scottmiller2591 3 months ago
Politicians have been getting people smarter than they are to do what they want forever.
@eurasia57 1 month ago
Is the intrinsic value of humans utility?
@wwkk4964 3 months ago
Fantastic, wish you had another hour with him!
@TechyBen 3 months ago
"Hash checks" are the mathematical example of a "non-expert" checking an "expert"... almost. It still needs to be well set up, but it can be done. The "not everyone can build a rocket, but anyone can see if it crashed" test scheme.
@TechyBen 3 months ago
OH! Yes, also "debate" is a check against method. And we can see a method is correct more easily when the method is simpler than the full process and data. But I do fear there are some nuances in certain applications of this (complex math or programming?).
@RevealAI-101 3 months ago
"Evolution famously failed to find the wheel for a very long time" 😂
@jeremyh2083 3 months ago
Good balanced understanding of what’s going on in our industry
@nathanhelmburger 3 months ago
Interpolation, Extrapolation, Hyperpolation. Toby Ord. I agree that current models are great at the first, mediocre at the second, and terrible at the third. I also expect that this limitation will be overcome in another couple years.
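A small numerical illustration of the interpolation/extrapolation gap, using a polynomial fit as a toy stand-in for "learning" (assumes NumPy is available): the fit is accurate inside the training range and degrades sharply outside it.

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = np.sort(rng.uniform(0, np.pi, 30))
y_train = np.sin(x_train)

coeffs = np.polyfit(x_train, y_train, deg=5)       # toy stand-in for model training

x_inside = np.linspace(0, np.pi, 100)              # interpolation: inside the training range
x_outside = np.linspace(np.pi, 2 * np.pi, 100)     # extrapolation: outside it

err_in = np.abs(np.polyval(coeffs, x_inside) - np.sin(x_inside)).max()
err_out = np.abs(np.polyval(coeffs, x_outside) - np.sin(x_outside)).max()
print(f"max error inside training range:  {err_in:.4f}")   # tiny
print(f"max error outside training range: {err_out:.4f}")  # much larger
```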
@rtnjo6936 3 months ago
Finally, someone with an actual brain on your channel: literally the first guy who openly says that ASI is dangerous and gives a very normal explanation
@alexandermoody1946 3 months ago
Blockchains have remarkable value, not as they are used speculatively, but instead as a fundamental storage asset for relevant information and as a building block.
@dewinmoonl 3 months ago
This whole line of work on alignment is very hard to pin down, and it seems the experiments are low effort to run once cleverly set up. I'll remain a bit cautious about this line of work.
@alexbrown1170 3 months ago
When my cat jumps up on my lap for no apparent reason, it floods me with joy; no money can buy the intrinsic value of this moment. Get it? 😮
@honkytonk4465 3 months ago
A computer could feed your brain directly with sensory data.
@greatestone4eva 3 months ago
@@honkytonk4465 and it would be fake, like the Matrix. It can't replicate their presence or actually be the cat.
@Totally_Not_chatGPT 3 months ago
Nice to see someone ask the interviewer to define things like "interpolative agency". He doesn't, the interviewer, because he is just kind of using big words for vibes. But nice to see him stutter over it.
@FamilyYoutubeTV-x6d 3 months ago
INTERPOLATIVE AGENCY SOUNDS like agency that's achieved through making decisions from a closed or exact space of functions; that is, making decisions via interpolation processes, as opposed to extrapolation processes as one would expect to be the case in most function spaces representing or representative of the real world, which is an open system. Is that sort of where interpolative agency can be defined? Here is o1-mini's definition after I wrote the comment:

"Your conceptualization of **interpolative agency** is both insightful and aligns well with foundational ideas in decision-making and system theory. Let me expand on your definition to provide a clearer and more comprehensive understanding.

### **Defining Interpolative Agency**

**Interpolative agency** can be defined as the capacity of an agent (which could be an individual, organization, or artificial system) to make decisions and take actions based primarily on interpolation within a closed or well-defined space of functions or parameters. This means that the agent operates within a bounded set of possibilities, leveraging existing data, patterns, and relationships to inform its choices without extending beyond the known or established framework.

### **Key Characteristics**

1. **Closed or Exact Function Spaces:**
   - **Bounded Decision-Making:** Decisions are made within a predefined set of options or parameters. The environment or system is considered to have clear boundaries and predictable behaviors.
   - **Reliance on Existing Data:** The agent utilizes existing information and established relationships to make informed choices, ensuring consistency and reliability.
2. **Interpolation Processes:**
   - **Within Known Parameters:** The agent fills in gaps or makes nuanced decisions based on known data points, similar to how interpolation in mathematics estimates values within the range of a discrete set of known data points.
   - **Predictable Outcomes:** Since the decisions are based on existing frameworks, the outcomes are generally more predictable and controlled.
3. **Contrast with Extrapolative Agency:**
   - **Extrapolation vs. Interpolation:** While extrapolative agency involves making decisions based on extending beyond known data or venturing into the unknown (which is often necessary in dynamic and open systems), interpolative agency remains within the comfort zone of established knowledge.
   - **Applicability:** Interpolative agency is more suited to stable and predictable environments, whereas extrapolative agency is essential for adapting to change and uncertainty.

### **Implications and Applications**

1. **Advantages:**
   - **Stability and Reliability:** By operating within known parameters, interpolative agency can ensure consistency and reduce the risks associated with unforeseen variables.
   - **Efficiency:** Decision-making processes can be streamlined since the agent relies on established patterns and data.
2. **Limitations:**
   - **Lack of Adaptability:** In rapidly changing or unpredictable environments, reliance solely on interpolation can hinder the agent's ability to respond effectively to new challenges.
   - **Potential for Stagnation:** Without incorporating extrapolative elements, agents may miss opportunities for innovation and growth.
3. **Real-World Examples:**
   - **Automated Systems:** Certain AI systems that operate within well-defined parameters, such as recommendation algorithms on streaming platforms, primarily use interpolation to suggest content based on existing user data.
   - **Organizational Decision-Making:** Companies that rely heavily on historical data and established procedures for strategic decisions are exercising interpolative agency.

### **Balancing Interpolative and Extrapolative Agency**

While interpolative agency offers significant benefits in terms of stability and reliability, integrating extrapolative capabilities can enhance an agent's adaptability and resilience. This balance is crucial, especially in environments that are subject to change and uncertainty. For instance:

- **Hybrid Models:** Combining interpolation for routine decisions with extrapolation for strategic, long-term planning can provide both stability and flexibility.
- **Adaptive Systems:** Designing systems that can switch between interpolative and extrapolative modes based on contextual cues ensures that agents remain effective across varying scenarios.

### **Conclusion**

Your definition of interpolative agency captures the essence of decision-making within a constrained and well-understood framework, emphasizing the reliance on interpolation rather than extrapolation. By recognizing both its strengths and limitations, we can better appreciate the role of interpolative agency in various contexts and the importance of balancing it with other forms of decision-making to navigate the complexities of the real world effectively."
@FamilyYoutubeTV-x6d 3 months ago
Interestingly, there is one instance of 'interpolative agency' in a paper related to the philosophy of design, where interpolative agency for a designer involves working with pre-existing constraints. It's actually a neat concept when you think about it; not sure why you are throwing shade. That's partially the purpose of debate and conversationalism, to come up with interesting schemes or constructs.
@palimondo 3 months ago
I, like, fully agree with your first sentence. I disagree with the second and find such a personal attack unwarranted and unhelpful. The third is you wallowing in Schadenfreude: why are you hate-watching this channel?
@Garganzuul 3 months ago
Intrinsic value without market value was adjacent to micro transactions back when that was new.
@MalachiMarvin 3 months ago
There's no such thing as intrinsic value. Value is a property of the valuer, not the valued.
@domenicperito4635 3 months ago
Please stop calling it hallucination. Please use the word confabulation.
@qhansen123 3 months ago
Honestly, that's a good point; hallucination implies perceived experience. I've never thought of this before.
@FamilyYoutubeTV-x6d 3 months ago
It's okay. It still counts as a hallucination from an observer's perspective, or from a neutrally internalized, reassessed and self-evaluated perspective: that is, from an observer's perspective, every confabulation can be seen as a hallucination, but not every hallucination can be seen as a confabulation. It's the same from a self-perspective: if you neutrally analyze a certain inaccurate thing you say, you can always call it a hallucination afterwards. A confabulation implies a higher degree of explainability. I did not like "hallucination" as a term, but I do not mind it now. It works.
@pythagoran 3 months ago
You likely haven't used GPT-2, or even the newest models with very high temperature settings; it certainly looks much more like hallucination then. What we perceive as confabulation in these highly tuned models is the model's restrained intention to damn well hallucinate. There's a friggin' psychedelic Moloch in between all those weights...
@domenicperito4635 3 months ago
@@pythagoran A hallucination is a false perception of objects or events involving your senses: sight, sound, smell, touch and taste.
@domenicperito4635 3 months ago
@@pythagoran Confabulation is a neuropsychiatric condition where a person creates false memories without intending to deceive others. It's a type of memory error that's often associated with brain injuries and memory disorders.
@ai._m 3 months ago
Can’t believe you let Gary out of his box so many times. For shame!
@jonfe 2 months ago
Language is the most tangible way we humans express our intelligence and vision of the world in "discrete logic". LLMs are learning from a simple expression of the intelligence we have, and that is a limitation for AI. We need to let them learn to predict the next step in the physical world with all the information available (light, sound, etc.) so they can learn the basics of the universe. Maybe after that we can use the core of concepts learned and train it with language.
@jonfe 2 months ago
I imagine a machine equipped with multiple sensors that capture different thresholds of information available in the universe, continuously reading information, saving it in memory and trying to predict, with a transformer-like architecture, the next steps in each one.
@Ikbeneengeit 3 months ago
Good interview
@Ikbeneengeit 3 months ago
"Boiling the frog" is not real. Frogs jump out. We need a better metaphor.
@CYI3ERPUNK 3 months ago
THANK YOU, literally the reality of the world is the EXACT opposite of that stupid story/narrative that even supposedly 'intelligent' ppl continue to repeat/parrot without understanding this XD
@probablybadvideos 3 months ago
Seems like a nice guy
@shadyberger5695 3 months ago
it's me, the sus dentist
@ginogarcia8730 3 months ago
What if the debate is about something like abortion, though, where in the act of debating neither side chooses to back down and one side is more religious than the other?
@pranksy666 3 months ago
I like, like this video, like
@user-ue9bi2ui2q 2 months ago
An LLM is just statistical prediction software for words; it doesn't deceive you or hallucinate anything or have ideas, because it is not a person. It just receives an input, and the model weights generate an output based on some probabilities, and that output can be valuable to us or not depending on whether it was able to retrieve something we find to be satisfactory from its model weights. Because the output is words, as humans we like to anthropomorphize it. However, it will never be 'smarter than you' because it is not smart at all. In fact, a regular database is better at giving you a deterministic answer, if that's what you want. The model weights may allow it to output information that is more specialised than you IF its model weights have been trained in that field and you have not. Just as a Google Search could do, and probably better for many tasks. 'Performance' of LLMs is plateauing, and it is yet to be demonstrated that the output of statistically predicted words can be transferred to the task of reasoning. Other than at the margin, it does not seem that having 2 or even 50 LLMs 'argue' over the answer would have any bearing on this, even if it's fun to imagine.
@fburton8 3 months ago
It sounds like he thinks like he thinks like bullet chess.
@honkytonk4465 3 months ago
Do you have a hiccup?
@fburton8 3 months ago
@@honkytonk4465 Nah, it just occurred to me (in a shallow way) that the way Khan appears to be thinking on his feet in this interview might be the same as the way he described thinking when playing bullet chess. The repeated “like” was just a flippant comment on the relative word frequency. Being a boomer, I usually find it distracting and a bit irritating - but I _did_ enjoy this episode.
@richardnunziata3221 3 months ago
If you define AI alignment to be what is in the text, then these systems will fail. Much of what aligns humans is not in the text but in the living... (experiences). Text does not cover the billions of individuals and their experiences who never read, let alone wrote, in your corpus. There are many programs of truth and beliefs that are in conflict between and within cultures as well as individuals. Defining alignment is like defining who the best artist is.
@JAHKABE 3 months ago
Neat
@AbuChanChannel 3 months ago
Please don't use the word "smart"... models will never ever be as smart as a human... don't feed the hype
@zerocurve758 3 months ago
He likes saying "like". Filler words have evolved, that's for sure, but my goodness it's off-putting.
@pythagoran 3 months ago
Impossible to listen to.. so distracting! My dude's out of alignment... 😅
@user-ue9bi2ui2q 2 months ago
It’s called a shibboleth and it proves that he is a smart ML researcher 😅
@throwaway6380 3 months ago
He says "like" too much
@FamilyYoutubeTV-x6d 3 months ago
I find listening to him more stimulating than the slower speakers who do not say "like" as often. He makes the conversation faster and more engaging without padding it with useless filler; in other words, his "likes" are relevant and make his discussion more relatable, interesting, and engaging than that of a researcher who speaks slowly and monotonically. Just my personal and subjective point of view.
@TheMCDStudio 3 months ago
Aligning models is a bad practice. Models need to be completely uncensored to be able to come up with the absolutely correct, unfettered answer to a query. After all, everyone else could be completely wrong about something, and only by using a model that is unaligned and unbiased will the actual correct answer that we do not yet know come out.
@FamilyYoutubeTV-x6d 3 months ago
That's silly. Unaligned models hallucinate more than models aligned with reinforcement learning. That's why they are aligned. Even unaligned models have many biases, not to mention terrible ethical and moral biases ingrained because of the low quality of the average human interaction on the internet.
@nathanhelmburger 3 months ago
While it is true that unaligned models currently don't work well, I do think that if you could do less RLHF, and more data cleaning and organizing it into the patterns of behavior you desire the end product to have, you would likely get better results.
@derekcarday 3 months ago
This guy looks like he's trying to be Steve Jobs, Mark Zuckerberg, and Elon Musk all in the same outfit. Didn't really enjoy this conversation. He seemed uninformed, arrogant, defensive, and insecure.
@jbperez808 2 months ago
It sounded to me like he was just giving answers as straight as he could, so for the questions where the interviewer assumed incorrectly, he simply answered in the most direct way possible.