Jeff Dean (Google): Exciting Trends in Machine Learning

164,386 views

Rice Ken Kennedy Institute

2 months ago

Abstract: In this talk I’ll highlight several exciting trends in the field of AI and machine learning. Through a combination of improved algorithms and major efficiency improvements in ML-specialized hardware, we are now able to build much more capable, general purpose machine learning systems than ever before. As one example of this, I’ll give an overview of the Gemini family of multimodal models and their capabilities. These new models and approaches have dramatic implications for applying ML to many problems in the world, and I’ll highlight some of these applications in science, engineering, and health. This talk will present work done by many people at Google.
Bio: Jeff Dean joined Google in 1999, where he now serves as Google’s Chief Scientist, focusing on AI advances for Google DeepMind and Google Research. His areas of focus include machine learning and AI, and applications of AI to problems that help billions of people in societally beneficial ways. His work has been integral to many generations of Google’s search engine, its initial ad serving system, distributed computing infrastructure such as BigTable and MapReduce, the TensorFlow open-source machine learning system, as well as many libraries and developer tools.
Jeff received a Ph.D. in Computer Science from the University of Washington and a B.S. in Computer Science & Economics from the University of Minnesota. He is a member of the National Academy of Engineering and of the American Academy of Arts and Sciences, a Fellow of the Association for Computing Machinery (ACM) and of the American Association for the Advancement of Science (AAAS), and a winner of the 2012 ACM Prize in Computing and the 2021 IEEE John von Neumann Medal.
Sponsored by: The Ken Kennedy Institute

Comments: 86
@ericmcnally5128 2 months ago
I really appreciate that lectures like these are made public for interested parties to consume. As an armchair enthusiast without access to quality institutional education on the topic, I have only been able to learn about speech and vision technology from publicly available talks like this. So thank you so much!
2 months ago
There are free courses if you have time/are interested. But the math will definitely cross dimensions.
@ericmcnally5128 2 months ago
This is actually where I am sort of stuck: not knowing which maths to focus on brushing up on.
@armin3057 2 months ago
You gotta start somewhere. You could, for instance, download the Mathematics for Machine Learning book (MML), which is free to download. Don't worry if you don't get everything right; you can find the solutions online too. @ericmcnally5128
@YagamiAckerman 2 months ago
- [00:04] Machine learning expectations
- [01:42] Scale improves results
- [04:25] Reversing ML capabilities
- [07:24] Accuracy advancements in vision
- [08:18] Speech recognition strides
- [10:20] Hardware for ML efficiency
- [13:57] TPU evolution for ML
- [18:31] Distributed word representations
- [20:56] Transformer model revolutionizes
- [44:04] TPUs accelerate ML
- [48:08] General models specialized
- [49:17] Generative models for images
- [54:05] ML enhancing smartphones
- [57:06] ML revolutionizes science
- [59:06] ML aids medical diagnosis
- [01:02:08] Principles for ethical ML
- [01:06:03] More data improves
- [01:07:27] Plenty of data
- [01:08:07] Multimodal models benefit
- [01:09:27] Challenges for startups
- [01:10:51] Diversity in models
@megairrational 1 month ago
Amazing! Thanks
@sayo9394 2 months ago
Much appreciated for uploading this educational video to YouTube and making it freely accessible to all.
@Dima-rj7bv 2 months ago
Surprisingly amazing questions in the end. Right to the point.
@Noobsitogamer10 2 months ago
Love the passion Jeff has for machine learning.
@collins4359 2 months ago
Love Jeff. Great speaker.
@BennduR 2 months ago
36:56 The length of the slope is not the hypotenuse; it's simply the length L in the diagram, i.e., 80.
@ArooPC 2 months ago
Wish he had been asked why they compared CoT 32-shot with 10-shot, and whether they're developing a Transformer successor or have tried out Mamba.
@dr.mikeybee 2 months ago
One thing that doesn't ever seem to get mentioned when discussing the context signatures, a.k.a. dense representations, in sequence-to-sequence modeling for translation is that different languages' semantic spaces have much of the same shape. This is a function of the fact that different languages all model our shared experiences. And despite misalignments in the semantic spaces, there's still enough similarity (i.e., the shapes are close enough) to make translation possible.
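The "shapes are close enough" intuition in the comment above can be sketched with entirely made-up toy vectors: if one language's embedding space is (approximately) a rotation of another's, an orthogonal Procrustes fit on a few seed word pairs recovers the map, and a held-out word lands on its translation. The words, vectors, and the 90-degree rotation below are all hypothetical, chosen only to illustrate the idea.

```python
import numpy as np

# Toy "English" embeddings (entirely made up for illustration).
en = {"dog": np.array([1.0, 0.0]),
      "cat": np.array([0.9, 0.3]),
      "car": np.array([0.0, 1.0])}

# Toy "other-language" embeddings: the same shape, rotated 90 degrees.
theta = np.pi / 2
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
xx = {w: R_true @ v for w, v in en.items()}

# Orthogonal Procrustes: find the rotation W minimizing ||X W - Y||
# from two seed translation pairs ("dog" and "car").
X = np.stack([xx[w] for w in ["dog", "car"]])
Y = np.stack([en[w] for w in ["dog", "car"]])
U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt

# The learned map sends the held-out word "cat" onto its English vector.
mapped_cat = xx["cat"] @ W
print(np.allclose(mapped_cat, en["cat"], atol=1e-6))
```

Real cross-lingual alignment works the same way, just with hundreds of dimensions and imperfectly matching shapes, which is why translation is possible but not perfect.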
@quasimot8 2 months ago
Huh, that's fascinating
@stevo-dx5rr 1 month ago
It’d be interesting to look at the differences.
@BrianMosleyUK 2 months ago
This was posted 3 days ago; shame he wasn't able to discuss Gemini 1.5.
@liubianxing 2 months ago
Thanks, much appreciated for sharing this; Gemini and multimodal models could bring along a new trend in 2024.
@deniz.7200 2 months ago
Gemini is the worst
@denmark8th 2 months ago
super awesome talk!!
@varshneydevansh 2 months ago
Finally, the term machine learning rather than AI, AGI, and LLM.
@nicohornswag 2 months ago
A breath of fresh air
@Prometheus70747 2 months ago
All categories of machine learning. Ain't rocket science. There are different categories of any field of science or engineering. What do you expect?
@Prometheus70747 2 months ago
All categories of machine learning. There are other types of machine learning categories, so it's sort of necessary to categorise subsets of machine learning, which is an umbrella term for different categories.
@vsenderov 2 months ago
I think AI is a superset including ML and GOFAI and oddballs like the Semantic Web.
@idrisssarmouk8044 2 months ago
We don't even have a definition of intelligence yet.
@istvandarvas3372 2 months ago
This was amazing! Thanks!
@alexeikorol6312 2 months ago
🎯 Key Takeaways for quick navigation:

00:04 *💡 Introduction to Machine Learning Trends*
- Introduction to trends in machine learning, including opportunities and challenges.
- This talk presents the work of many people at Google, including some co-authored by Dean.

01:26 *👁️ The Evolution of Computer Perception*
- The evolution of computer perception from rudimentary speech recognition and image processing to comprehending language, multi-lingual data, and enhanced perception.
- Opportunities opened up by advanced machine senses in a variety of fields.

02:35 *💾 The Change in Computer Processing Needs Due to Machine Learning*
- Transition from traditional code to machine learning analogies, triggering a need for different kinds of hardware.
- Discussion on how better hardware can lead to lower economic and energy costs while improving model quality.

06:14 *📈 Progress in Image Recognition and Speech Recognition*
- Details the history and advancements in image and speech recognition, with a focus on neural networks.
- Importance of continued research and scaling for improving the accuracy and usability of systems.

10:20 *🖥️ Google's Development of Tensor Processing Units*
- Google's development of Tensor Processing Units (TPUs) to serve machine learning models effectively.
- TPUs' ability to carry out reduced-precision computations and assemble different linear algebra operations.

14:11 *📚 Language Processing with Machine Learning*
- The radical changes and opportunities in language processing using machine learning.
- The use of distributed representations to represent words as high-dimensional vectors, which can push similar words closer in space and separate different words.

18:31 *⚙️ Sequence-to-Sequence Learning Model*
- Introduction to the sequence-to-sequence learning model, which can translate input sequences into different languages.
- The model's ability to absorb input sentences and decode the corresponding translated sentence iteratively.

20:27 *🔃 Multi-Turn Conversations with Neural Language Models*
- Discusses the use of sequence-to-sequence models for creating meaningful multi-turn conversations, where the system takes the context of previous interactions into account.
- The model effectively uses input from the previous turns of conversation to generate appropriate responses.

21:06 *🔂 Transformer Model for Parallel Processing*
- Explanation of the Transformer model, shifting from a sequential processing approach to a parallel one.
- The Transformer model allows processing of each word in the input independently and uses "attention" to focus on relevant parts during translation.
- This shift led to a significant improvement in accuracy with more computational efficiency.

23:17 *🚀 Scaling Up Models and Obtaining Better Results*
- Talks about the trend of increasing the scale of models and using Transformer models for training on conversational-style data.
- Describes how model evaluation ensures that generated responses are both sensible and specific.
- Overview of the progression of neural language models and their improvements over time.

25:31 *🧩 Multimodal Models for Various Inputs*
- Moving towards multimodal models that can process different types of inputs such as images, text, and audio.
- Establishes the goal to train the world's best multimodal models and use them across Google products.

28:31 *💡 Scaling Up the Training Infrastructure and Data Handling*
- Explanation of how Google's training infrastructure operates at a large scale to map computations onto available hardware.
- Discusses the importance of having a fast recovery system to mitigate the impact of system failures.
- Describes the critical role of high-quality training data and proposes automated learning curricula as a future research area.

33:07 *🧠 Better Model Elicitation and Multimodal Reasoning*
- Proposes techniques to elicit better responses from models by asking them to "show their work".
- Details examples of multimodal reasoning through the Gemini model, where complex problems are solved using various data types (text, images, etc.).
- The potential educational implications of these advancements are highlighted, such as individualized tutoring.

38:07 *📊 Evaluation of Models and Performance Comparisons*
- Emphasizes the importance of model evaluation in identifying strengths/weaknesses, ensuring a well-trained model, and comparing its capabilities with other models/systems.
- Presents performance statistics of the Gemini Ultra model, indicating its state-of-the-art performance in multiple areas of evaluation.

40:35 *🏆 State-of-the-Art Performance Across Benchmarks*
- The Gemini model demonstrates state-of-the-art performance across multiple benchmarks.
- Achieved top results in text, image, video, and audio understanding, even on data it had not previously encountered.

42:25 *💬 Coherent Conversations and Advanced Capabilities*
- The model can generate surprisingly coherent conversations and offer domain-specific knowledge.
- Showcases various input requests and the model's impressive capacity to carry out meaningful, accurate responses.

44:45 *🤖 Chatbot Integration and Performance Measures*
- Integration of the Gemini model into the Bard chatbot.
- Evaluation of the model's performance using the Elo scoring method.

49:17 *🖼️ Generative Models for Image Production*
- Overview of how generative models are used to create images based on detailed prompts.
- Emphasizes the influence of model scale on the quality of generation.

54:05 *📱 Machine Learning Applications in Everyday Tech*
- Discussion of various machine learning features that have improved functionalities in smartphones, particularly camera features.
- Stresses the broad implications of machine learning technology for areas like language translation or literacy support.

57:06 *🧪 Machine Learning Potential in Material Science and Healthcare*
- Discusses how machine learning can aid in exploring scientific hypothesis spaces and searching for new materials.
- Highlights the potential of machine learning in healthcare, particularly in medical imaging and diagnostics.

01:01:27 *🩺 Machine Learning in Dermatological Diagnostics*
- Uses machine learning to assist in diagnosing dermatological conditions.
- Users can take a photo of their skin concern, and the system can provide potential diagnoses based on similar images in dermatological databases.

01:02:08 *📚 Guiding Principles for Applying Machine Learning*
- Google published a set of principles in 2018 to guide internal teams on considerations when applying machine learning to problems.
- Highlights important aspects such as avoiding creating or reinforcing unfair bias and being accountable and sensitive to privacy.
- Active areas of research include fairness, bias, privacy, and safety.

01:04:29 *🌟 Future Prospects and Responsibilities in Machine Learning*
- The capacity for computers to understand various modalities and react accordingly is constantly expanding, paving the way for more intuitive and seamless user experiences.
- The responsibility to harness machine learning for social benefit is proportional to the opportunities it presents.

01:05:49 *💡 Q&A Session*
- Discussion about the impact of more data on the performance of machine learning models.
- Views about the evolution of LLMs and the growth of multimodal models.
- Reflections on the accessibility of machine learning research for small startups and individuals.

Made with HARPA AI
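The "show their work" technique in the takeaways above (Chain of Thought prompting) can be sketched as plain prompt construction. The word problems are illustrative examples only and no real model is called; the point is just how the two prompt styles differ.

```python
# Two few-shot prompt styles for the same word problem. In the direct style,
# the worked example gives only the final answer; in the chain-of-thought
# style, the worked example spells out the intermediate arithmetic.
direct_prompt = (
    "Q: A cafeteria had 23 apples. They used 20 and bought 6 more. "
    "How many apples do they have?\n"
    "A: 9\n"
    "Q: Shawn has 5 toys. He got 2 toys each from his mom and dad. "
    "How many toys does he have now?\n"
    "A:"
)

cot_prompt = (
    "Q: A cafeteria had 23 apples. They used 20 and bought 6 more. "
    "How many apples do they have?\n"
    "A: They started with 23 apples, used 20, leaving 23 - 20 = 3. "
    "Buying 6 more gives 3 + 6 = 9. The answer is 9.\n"
    "Q: Shawn has 5 toys. He got 2 toys each from his mom and dad. "
    "How many toys does he have now?\n"
    "A:"
)

# The chain-of-thought demonstration carries the extra reasoning, nudging
# the model to emit its own intermediate steps before the final answer.
assert len(cot_prompt) > len(direct_prompt)
```

Because the model imitates the demonstration, it tends to produce its own step-by-step reasoning for the new question, which is what improves accuracy on multi-step tasks.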
@brookshamilton1 1 month ago
Very helpful! Thank you
@doa_form 1 month ago
This is more of a historical look than a focus toward the future.
@planetsharks4746 21 days ago
Underrated talk from the GOAT
@jasonsong4035 2 months ago
🎯 Key Takeaways for quick navigation:

00:04 *🤖 Introduction to Machine Learning Trends*
- Overview of exciting trends in machine learning,
- Jeff Dean highlights the broad impacts and opportunities in AI,
- Importance of awareness in technological development.

00:43 *🌐 Evolution of Machine Learning Capabilities*
- Machine learning has transformed our expectations of computers,
- Significant improvements in speech recognition, image understanding, and language processing,
- Transition from basic functionalities to advanced perception and interaction.

02:07 *⚙️ Scaling and Hardware Innovations*
- Scaling up computing resources leads to better machine learning performance,
- The shift towards specialized hardware for more efficient computations,
- Larger datasets and models contribute to advancements in AI capabilities.

03:57 *🔄 Reversibility in Machine Learning Models*
- Recent progress in reversing traditional input-output relationships in AI models,
- Examples include generating images from descriptions and converting text to speech,
- These advancements open up new possibilities for creative and practical applications.

05:06 *📈 Benchmark Improvements Over the Decade*
- Significant improvements in image recognition and speech recognition benchmarks,
- The evolution of machine learning models has led to surpassing human accuracy in certain tasks,
- Continuous advancements underscore the rapid development in the field of AI.

08:33 *🖥️ Specialized Machine Learning Hardware*
- The development of hardware optimized for machine learning, like Google's TPU,
- Improvements in computational efficiency and energy consumption,
- The role of reduced precision and linear algebra in machine learning computations.

13:57 *🗣️ Advances in Language Understanding*
- Significant progress in language models and translation,
- From basic n-gram models to advanced neural network-based approaches,
- The importance of distributed representations and sequence-to-sequence learning in improving language understanding.

20:27 *💬 Advancements in Conversational AI*
- Introduction to effective multi-turn conversations using neural language models,
- The progression from sequence-to-sequence models to more advanced Transformer models enabling parallel processing for efficiency and accuracy.

23:57 *🗨️ Evolution of Neural Language and Chat Models*
- Overview of the development in neural language models and chatbots, including GPT and BERT variations,
- Emphasis on the transformative impact of the Transformer architecture on model efficiency and capability.

26:00 *🌐 Introduction to Gemini Multimodal Models*
- The goal of creating multimodal models capable of understanding and generating content across various data types, including text, images, and audio,
- The introduction of Gemini models by Google for enhanced AI capabilities in handling multiple modalities simultaneously.

28:16 *⚙️ Scalable Training Infrastructure and Data Quality*
- Discussion on the scalable training infrastructure designed to efficiently map computations onto available hardware,
- Emphasis on data quality and its critical role in model performance, including strategies for enhancing training data relevance and richness.

33:07 *🧠 Techniques for Eliciting Better Responses from Models*
- Introduction of techniques like Chain of Thought prompting to improve model accuracy and interpretability,
- Examples demonstrating how guiding models to "show their work" can significantly enhance performance on complex tasks.

35:50 *🤖 Multimodal Reasoning in Gemini Models*
- Presentation of Gemini's capabilities in multimodal reasoning with an example of solving a physics problem,
- Discussion on the potential of multimodal AI models like Gemini for personalized educational tools and tutoring.

38:21 *📊 Evaluation and Performance Benchmarking of Gemini Models*
- Overview of Gemini's evaluation process and its performance across various academic benchmarks,
- Comparison of Gemini Ultra with other state-of-the-art models, highlighting its superior performance in a majority of evaluated tasks.

40:35 *🏅 State-of-the-Art Benchmarks in Image, Video, and Audio Understanding*
- Gemini's exceptional performance on various benchmarks, including image, video, and audio understanding,
- Achievements in multimodal capabilities with state-of-the-art results across multiple domains,
- Importance of unbiased benchmark testing to validate model capabilities.

42:25 *💡 Conversational AI and Practical Applications*
- The evolution of conversational AI models leading to coherent and helpful interactions,
- Examples of Gemini's capabilities in providing detailed, context-aware responses in a conversational setting,
- Introduction of programming concepts and detailed explanations as part of AI-generated responses.

48:08 *🏥 Domain-Specific Model Refinements for Medical Applications*
- Refining general models for domain-specific applications, particularly in the medical field,
- Achievements of the Med-PaLM model in exceeding medical board exam benchmarks,
- Potential of domain-enriched training to achieve expert-level performance in specialized areas.

49:17 *🎨 Advances in Generative Models for Images and Video*
- Development of generative models capable of creating detailed and contextually accurate images from textual descriptions,
- Impact of model scaling on the fidelity and accuracy of generated images,
- Integration of generative models into practical applications for creative and educational purposes.

54:05 *📱 Machine Learning in Everyday Devices*
- The invisible role of machine learning in enhancing smartphone features and user experiences,
- Examples of computational photography, live captioning, and language translation powered by AI on mobile devices,
- The potential of AI to assist users in a variety of practical and accessibility-oriented tasks.

57:06 *🔬 Machine Learning in Material Science and Healthcare*
- The influence of machine learning on scientific research, particularly in material science and healthcare,
- Automated discovery of new materials with desirable properties using AI-driven simulations and structural pipelines,
- The application of machine learning in medical diagnostics, with a focus on diabetic retinopathy and dermatology screening.

01:01:27 *📸 AI in Dermatology*
- Deployment of AI systems for dermatological assessments through smartphone photography,
- The system's capability to match user-uploaded images with dermatological databases for condition identification,
- Emphasis on the potential for AI to distinguish between serious and benign skin conditions.

01:02:08 *🤖 Ethical Principles in Machine Learning*
- The importance of ethical considerations and principles in the application of machine learning technologies,
- Google's publication of AI principles to guide responsible development and usage,
- Focus on avoiding bias, ensuring accountability, and enhancing social benefits through AI applications.

01:04:29 *🚀 Future of Computing with Learned Systems*
- The shift from encoded software systems to learned models that interact more naturally with humans and the world,
- The expanding capabilities of computers to understand and generate various modalities like speech, text, and images,
- Discussion on the opportunities and responsibilities in advancing AI to ensure social benefits.

01:07:13 *💡 Data Quality and Model Performance*
- The relationship between data quality, model capacity, and performance,
- The importance of high-quality data and appropriate model scaling for improved AI effectiveness,
- Mention of potential adverse effects of low-quality data on model capabilities.

01:08:07 *🧠 The Future of Large Language Models (LLMs)*
- Discussion on the future of LLMs and the availability of high-quality training data,
- Exploration of untapped data sources like video for further training and development of LLMs,
- The ongoing potential for significant advancements in AI through diverse data utilization.

01:09:13 *🌐 Multimodal Models and Specialized Applications*
- The impact of multimodal models on performance across different domains,
- Considerations on whether multimodal models outperform domain-specific models in their respective areas,
- The potential of base models enriched with domain-specific data for targeted applications.

01:10:09 *🚀 Opportunities in AI Research for Individuals and Startups*
- Encouragement for individuals and startups with limited resources to engage in innovative AI research,
- Highlighting the potential for significant contributions to AI through clever ideas and efficient use of available computational resources,
- The importance of diversity in research topics within the AI field, beyond large-scale model training.
@George-nd7ot 2 months ago
This man made Google. LEGEND
@CharlesVanNoland 2 months ago
Well that was Larry Page and Sergey Brin but Jeff Dean did join early on in 1999 about a year after Google was founded - which was definitely before I'd even started using Google. I was still using excite and altavista until at least 2000 or 2001. Google was just way better at finding whitepapers and scientific content than any other search engine I'd ever used before.
@edgedg 2 months ago
@@CharlesVanNoland He means "technologically", not the organization. Dean was a true programming monster for making things run fast, and almost every Google project he touched was a game changing success.
@mnchester 2 months ago
Amazing talk! 😊
@wolpumba4099 1 month ago
*Abstract*

This comprehensive video presentation delves into the current state and future prospects of machine learning (ML), underlining significant advancements and the technological evolution that has shaped the field. The talk begins with an overview of machine learning trends, emphasizing the dramatic improvements in speech recognition, image understanding, and natural language processing over the last decade. It attributes these advancements to increased computing resources, specialized hardware, and larger datasets. A notable highlight is the development of Google's Tensor Processing Units (TPUs), designed to optimize ML computations efficiently, showcasing the importance of scalable and efficient hardware in pushing the boundaries of ML capabilities.

The discussion progresses to the hardware evolution, with the latest TPUs achieving 1.1 exaFLOPS of computational power, and introduces the V5 series, enhancing performance for both inference and training. Attention is given to the strides in language models and translation, detailing the shift from traditional algorithms to neural networks and the transformative impact of models like Transformer, which allows parallel data processing for improved accuracy and efficiency.

Central to the presentation is the unveiling of Gemini, Google's ambitious multimodal model, aimed at mastering the integration of text, image, video, and audio data. Gemini's varying sizes cater to different applications, from powerful cloud-based solutions to on-device implementations. The model's training, data filtering, and quality assurance processes are discussed, alongside innovative techniques like "Chain of Thought" prompting for eliciting more accurate and interpretable responses from the model. Performance evaluations reveal Gemini's superior capabilities across a wide range of benchmarks, outperforming state-of-the-art models in text, image, video, and audio understanding, as well as in conversational AI.

The talk further explores the application of machine learning in enhancing smartphone features, material science, healthcare, and raises ethical considerations vital for responsible ML deployment. The session concludes with a Q&A segment addressing the audience's inquiries on model performance improvement with high-quality data, the future of large language models, the comparison between multimodal and domain-specific models, accessibility of AI research for individuals and startups, and concerns regarding the diversity of machine learning models. This presentation underscores the remarkable journey of machine learning, highlighting Google's leading role in advancing the field, and points towards a future where ML's potential to benefit society is fully realized, provided it is used responsibly.

*Summary*

*Introduction and Observations on Machine Learning*
- *0:04* Introduction to trends in machine learning, its significance, opportunities, and considerations.
- *0:22* Acknowledgment of Google's collective work in machine learning.
- *0:48* Initial observations on machine learning improvements in speech recognition, image understanding, and natural language processing.
- *1:59* Mention of the role of computing scale, specialized hardware, and large datasets in enhancing machine learning results.

*Progress and Developments in Machine Learning*
- *3:11* Examples of progress in image classification, speech recognition, and translation.
- *4:17* Discussion on reversing machine learning processes for image generation from descriptions.
- *5:13* Progress in image recognition accuracy, highlighted by the ImageNet benchmark.
- *7:42* Significant improvements in speech recognition accuracy.
- *8:37* The importance of scalable and efficient hardware for machine learning.
- *9:17* Benefits of reduced precision and focus on linear algebra in neural networks.

*Hardware Innovations and Computing Power*
- *10:27* Introduction to Google's Tensor Processing Units (TPUs) for efficient machine learning computation.
- *12:02* Scaling with TPU pods for enhanced machine learning capabilities.
- *12:58* Describes computing power in data centers with 1.1 exaFLOPS of computation.
- *13:15* Introduction of the V5 series TPUs with enhanced memory and bandwidth.

*Advances in Models and Translation*
- *14:00* Advances in language models beyond traditional areas.
- *18:31* Introduction to sequence learning and neural networks for translation.
- *21:13* Explanation of the Transformer model allowing for parallel data processing.
- *23:52* Evolution of neural language models and conversational AI, including developments in GPT and Transformer models.

*Gemini: A Multimodal Model by Google*
- *25:54* Introduction to Gemini models aiming to lead in multimodal machine learning.
- *28:16* Training infrastructure and focus on maximizing "goodput."
- *31:34* Importance of data quality and filtering for Gemini's training.
- *33:19* "Chain of Thought" prompting technique for improved model performance.
- *35:53* Multimodal reasoning capabilities of Gemini, with applications in education.

*Performance and Applications of Gemini*
- *39:14* Performance of Gemini Ultra in benchmarks.
- *42:27* Conversational capabilities and development of domain-specific models.
- *49:17* Generative models for creative image and video generation.
- *53:01* Machine learning advancements in visual recognition and its applications in various fields.

*Ethical Considerations, Conclusion, and Q&A*
- *1:02:02* Emphasis on ethical considerations and responsible use of machine learning.
- *1:04:18* Conclusion highlighting the shift to learned systems and their societal potential.
- *1:05:38* Speaker's decline of further questions due to overwhelming response.
- *1:06:13* Audience questions on model performance, future of LLMs, multimodal models, accessibility of AI research, and diversity in machine learning models.

Disclaimer: I used gpt4-0125 to summarize the video transcript. This method may make mistakes in recognizing words and it can't distinguish between speakers.
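The reduced-precision point in the summary above (TPUs doing low-precision linear algebra) can be illustrated with a toy NumPy experiment; this is only a sketch of the principle, not how a TPU actually computes (TPUs use formats like bfloat16, not IEEE float16).

```python
import numpy as np

# Toy illustration of the reduced-precision idea: neural-net-style linear
# algebra tolerates half-precision inputs with only a tiny relative error,
# which is why ML accelerators can trade bits for speed and energy.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 64)).astype(np.float32)
B = rng.standard_normal((64, 64)).astype(np.float32)

exact = A @ B  # float32 reference
low = (A.astype(np.float16) @ B.astype(np.float16)).astype(np.float32)

# Worst-case entry error relative to the largest entry (typically ~1e-3):
# far below the noise that training is already robust to.
rel_err = np.abs(exact - low).max() / np.abs(exact).max()
print(rel_err)
```

Halving the bit width roughly doubles the arithmetic you can do per unit of silicon and memory bandwidth, which is the economic argument the talk makes for ML-specialized hardware.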
@alph4966 2 months ago
Jeff Dean's talent is real, but there is a CEO at Google who is crushing that talent.
@FoxnewsFan 2 months ago
Also their horrible AI product and marketing team
@Fordtruck4sale 2 months ago
Thanks Jeff (If you're reading this) it clarified how embeddings work for me, on top of a lot of other things. Great presentation!
@sombh1971 2 months ago
37:16 Wow! There goes education! 44:03 There goes coding!
@AravindanUmashankar 2 months ago
Very insightful
@user-gc9sp7bx5z 2 months ago
I would like to ask for a better example. Jeff is a great person, and he has more to say, with prowess to back it. I just don't see him putting as much time into these as he should be.
@joaoguerreiro9403 2 months ago
Computer Science is awesome!!
@hansu7474 2 months ago
It's a shame that they didn't give Jeff more time to answer questions, because the fruit of this talk was more in picking his brain on what's going on with all these ML models than in his presentation.
@mikaelfiil3733 1 month ago
Well, at 47 min, Gemini gets the ordering wrong when it states the countries with the most companies per 1 million residents. Either the table is wrong or the US should go last with 44.16. He didn't even notice it when reading the text below the table. So how can you trust it?
@user-df2tk1il7f 2 months ago
What a cool person.
@skavihekkora5039 2 months ago
Timestamps please!
@brayanfernandez9752 2 months ago
Magic!
@fearlesspony 2 months ago
At [18:04] they got the vector "king - queen" pointing in the wrong direction... same for "man - woman".
@scign 2 months ago
Love how he completely skips over GPUs and the link to gaming and video! Goes straight from CPU to TPU.
@iker64 2 months ago
Wonder what's next for AI?
@ps3301 2 months ago
Jeff should start to build a video transformer to compete with Sora.
@bartoszkunat1134 2 months ago
46:00
@skypickle29 2 months ago
Who is the host?
@RiceKenKennedyInstitute 2 months ago
This lecture was hosted by the Ken Kennedy Institute, an interdisciplinary group at Rice University that works collaboratively on groundbreaking research in artificial intelligence, data, and computing. Visit kenkennedy.rice.edu/ to learn more about our events and activities!
@bartoszkunat1134 2 months ago
20:09
@EdgarOchoa-vs7ow 2 months ago
Super fascinating but my brain hurts now, hehe.
@nbansal 2 months ago
Promised: self-driving cars. Result: here is a cute video of a dog 😅😅
@demonio999-if6pc 2 months ago
This guy dreams in Python code for sure.
@prashantmorgaonkar3095 2 months ago
TLDR; please 😢
@chromosome24 2 months ago
It's interesting to see how poorly these transformer models perform at math.
@pieter5466 2 months ago
1:03:04 "the world as we'd like it to be" - who's "we"?
@Prometheus70747 2 months ago
Other people
@Epistemophilos 1 month ago
"Avoid creating or reinforcing unfair bias." Sure.
@RobertMaximVerdes 2 months ago
17:10 king minus queen roughly equals man minus woman
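The relation in the comment above can be sketched with made-up 2-D toy vectors, where one axis loosely stands for "royalty" and the other for "gender" (a deliberate simplification of real word embeddings, which have hundreds of dimensions and only approximate these relations):

```python
import numpy as np

# Made-up 2-D "embeddings": axis 0 ~ royalty, axis 1 ~ gender.
king  = np.array([1.0,  1.0])
queen = np.array([1.0, -1.0])
man   = np.array([0.0,  1.0])
woman = np.array([0.0, -1.0])

# king - queen and man - woman isolate the same gender direction...
print(king - queen)  # [0. 2.]
print(man - woman)   # [0. 2.]

# ...so the classic analogy king - man + woman lands exactly on queen here.
print(np.allclose(king - man + woman, queen))  # True
```

In trained embeddings like word2vec the equality only holds approximately, and the analogy is answered by nearest-neighbor search rather than exact arithmetic.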
@jack_galt 2 months ago
1:11:58 Mhm mhm mhm
@mikhailbaalberith 2 months ago
My boy lost a good opportunity to talk less about Gemini and more about PIML
@theyruinedyoutubeagain 2 months ago
I love Jeff, but OpenAI is kicking their ass with their crazy iteration speed, while Google is hamstrung by their incompetent leadership and can't focus or even deliver something reasonable. I keep saying that Sundar should've been dishonourably ejected years ago.
@hanchisun6164 2 months ago
Seeing Jeff himself lying about the numbers of Gemini, I am starting to think it is not necessarily Pichai's fault for corrupting the company's culture.
@pillescasdies 1 month ago
Sincere Q: what part was a lie? The elo results?
@darylltempesta 2 months ago
Why hide the wisdom of the audience?
@deniz.7200 2 months ago
Google Gemini is the worst AI I have ever used; it's a waste of time and money, sadly.
@lukeb111 1 month ago
About as useful as Bard.
@garythepencil 1 month ago
ugh so boring
@JoseJimenez-il5vs 2 months ago
Still waiting for Jeff to apologise to Timnit Gebru. Absolute clown.