Comments
@pranaynandanwar901 3 days ago
Hi
@josecantu8195 13 days ago
Very insightful, thank you! :)
@planetsharks4746 6 months ago
Underrated talk from the GOAT
@lesiaambrose7011 6 months ago
Ameen 😢 Medical errors in Nashville almost cost me my life. Say it, say it, Dr Ken!
@mikaelfiil3733 6 months ago
Well, at 47 min Gemini gets the ordering wrong when it states the countries with the most companies per 1 million residents. Either the table is wrong or the US should go last with 44.16. He didn't even notice it when reading the text below the table. So how can you trust it?
@doa_form 6 months ago
this is more of a historical look than a focus toward the future
@lukeb111 6 months ago
About as useful as bard
@ellielikesmath 6 months ago
ugh so boring
@Epistemophilos 7 months ago
"Avoid creating or reinforcing unfair bias." Sure.
@gabrielsandstedt 7 months ago
Here are some key takeaways from Jeff Dean's talk (as summarized by Anthropic's Opus LLM):
- [00:04] Machine learning has changed expectations of what computers can do compared to 10-15 years ago, enabling them to see, perceive and sense the world much better. This opens up opportunities in many fields.
- [01:42] Increasing scale of compute resources, specialized hardware, larger datasets, and larger ML models tends to deliver better results. New capabilities emerge as accuracy reaches usable thresholds.
- [04:25] Many ML capabilities like image recognition, speech recognition, and translation have been reversed in recent years, going from labels/text to generating images, audio, and video. This is an exciting development.
- [07:24] Computer vision accuracy has improved dramatically, e.g. on the ImageNet benchmark, from 50.9% in 2011 to 91% now, revolutionizing the field. Human accuracy is lower.
- [08:18] Speech recognition word error rates have dropped from 13.25% to 2.5% in just 5 years, a huge leap in usability as the models can now be relied upon.
- [10:20] Specialized ML hardware is much more efficient, with major improvements each generation. Reduced precision is okay and linear algebra primitives are key. This enables larger models at lower cost.
- [13:57] Google's TPU chips and pods have rapidly evolved to accelerate ML, from the first inference-focused TPU v1 in 2015 to the 4 exaflop TPU v4 pod with 4096 chips.
- [18:31] Distributed word representations that capture similarity and meaningful directions in vector space have been powerful, e.g. king - queen ≈ man - woman.
- [20:56] The Transformer architecture processes input in parallel and uses attention, needing 10-100X less compute than recurrent models. This has revolutionized ML models.
- [44:04] Continued evolution of TPUs up to the TPU v5 with 9000 chips and 4 exaflops shows the importance of ML-specialized hardware.
- [48:08] Fine-tuning large general models on domain-specific data creates powerful specialized models, e.g. Med-PaLM exceeding human performance on medical exams.
- [49:17] Generative image and video models conditioned on text descriptions are now integrated into products like Bard, powered by large models and techniques like diffusion.
- [54:05] ML is enhancing smartphone capabilities like photography, voice assistants, and real-time translation, helping people in powerful but often invisible ways.
- [57:06] ML is revolutionizing many scientific fields through learned simulators and large-scale automated discovery, e.g. finding promising new materials 100,000X faster.
- [59:06] ML shows huge potential to aid medical diagnosis, on par with expert specialists while expanding access. Models for diabetic retinopathy and dermatology are being deployed.
- [01:02:08] As ML is used more broadly, it's important to establish principles for responsible development, like fairness, privacy, interpretability, and social benefit.
- [01:06:03] More data, if high quality, improves model performance when the model has sufficient capacity. Low-quality data can hurt.
- [01:07:27] There is still plenty of data to train more capable models, especially in modalities like video which are still underutilized compared to text.
- [01:08:27] Multimodal models that learn from vision, audio, text etc. outperform unimodal models and are the future, with potential to handle 50-100 useful modalities.
- [01:09:27] While large models are expensive to train, there are still many impactful research directions for smaller-scale work, like data quality evaluation, curriculum learning, and optimization.
- [01:10:51] While large language models and Transformers are dominant, it's important to still explore diverse architectural ideas, not just local improvements. Multimodal architectures beyond language are key.
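The "reduced precision is okay" takeaway above ([10:20]) can be illustrated with a toy int8 quantization of a matrix-vector product. This is a sketch only: the per-tensor scales and random values here are made up for demonstration, not the scheme TPUs actually implement.

```python
# Quantize both operands of a matrix-vector product to int8, multiply
# in integer arithmetic, then rescale back to float.
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)  # toy "weights"
x = rng.standard_normal(4).astype(np.float32)       # toy "activations"

# One scale per tensor: map the largest magnitude to 127.
scale_w = np.abs(w).max() / 127.0
scale_x = np.abs(x).max() / 127.0
w_q = np.round(w / scale_w).astype(np.int8)
x_q = np.round(x / scale_x).astype(np.int8)

# Accumulate in int32 (wider than int8), then undo both scales.
y_int8 = (w_q.astype(np.int32) @ x_q.astype(np.int32)) * scale_w * scale_x
y_fp32 = w @ x

print(np.max(np.abs(y_int8 - y_fp32)))  # small quantization error
```

The integer product tracks the float32 reference closely while needing far cheaper multiply units, which is the hardware argument the summary refers to.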
@wolpumba4099 7 months ago
*Abstract*

This comprehensive video presentation delves into the current state and future prospects of machine learning (ML), underlining significant advancements and the technological evolution that has shaped the field. The talk begins with an overview of machine learning trends, emphasizing the dramatic improvements in speech recognition, image understanding, and natural language processing over the last decade. It attributes these advancements to increased computing resources, specialized hardware, and larger datasets. A notable highlight is the development of Google's Tensor Processing Units (TPUs), designed to optimize ML computations efficiently, showcasing the importance of scalable and efficient hardware in pushing the boundaries of ML capabilities.

The discussion progresses to the hardware evolution, with the latest TPUs achieving 1.1 exaFLOPS of computational power, and introduces the V5 series, enhancing performance for both inference and training. Attention is given to the strides in language models and translation, detailing the shift from traditional algorithms to neural networks and the transformative impact of models like the Transformer, which allows parallel data processing for improved accuracy and efficiency.

Central to the presentation is the unveiling of Gemini, Google's ambitious multimodal model, aimed at mastering the integration of text, image, video, and audio data. Gemini's varying sizes cater to different applications, from powerful cloud-based solutions to on-device implementations. The model's training, data filtering, and quality assurance processes are discussed, alongside innovative techniques like "Chain of Thought" prompting for eliciting more accurate and interpretable responses from the model. Performance evaluations reveal Gemini's superior capabilities across a wide range of benchmarks, outperforming state-of-the-art models in text, image, video, and audio understanding, as well as in conversational AI.

The talk further explores the application of machine learning in enhancing smartphone features, material science, and healthcare, and raises ethical considerations vital for responsible ML deployment. The session concludes with a Q&A segment addressing the audience's inquiries on model performance improvement with high-quality data, the future of large language models, the comparison between multimodal and domain-specific models, accessibility of AI research for individuals and startups, and concerns regarding the diversity of machine learning models. The presentation underscores the remarkable journey of machine learning, highlighting Google's leading role in advancing the field, and points towards a future where ML's potential to benefit society is fully realized, provided it is used responsibly.

*Summary*

*Introduction and Observations on Machine Learning*
- *0:04* Introduction to trends in machine learning, its significance, opportunities, and considerations.
- *0:22* Acknowledgment of Google's collective work in machine learning.
- *0:48* Initial observations on machine learning improvements in speech recognition, image understanding, and natural language processing.
- *1:59* Mention of the role of computing scale, specialized hardware, and large datasets in enhancing machine learning results.

*Progress and Developments in Machine Learning*
- *3:11* Examples of progress in image classification, speech recognition, and translation.
- *4:17* Discussion on reversing machine learning processes for image generation from descriptions.
- *5:13* Progress in image recognition accuracy, highlighted by the ImageNet benchmark.
- *7:42* Significant improvements in speech recognition accuracy.
- *8:37* The importance of scalable and efficient hardware for machine learning.
- *9:17* Benefits of reduced precision and a focus on linear algebra in neural networks.

*Hardware Innovations and Computing Power*
- *10:27* Introduction to Google's Tensor Processing Units (TPUs) for efficient machine learning computation.
- *12:02* Scaling with TPU pods for enhanced machine learning capabilities.
- *12:58* Describes computing power in data centers with 1.1 exaFLOPS of computation.
- *13:15* Introduction of the V5 series TPUs with enhanced memory and bandwidth.

*Advances in Models and Translation*
- *14:00* Advances in language models beyond traditional areas.
- *18:31* Introduction to sequence learning and neural networks for translation.
- *21:13* Explanation of the Transformer model allowing for parallel data processing.
- *23:52* Evolution of neural language models and conversational AI, including developments in GPT and Transformer models.

*Gemini: A Multimodal Model by Google*
- *25:54* Introduction to Gemini models aiming to lead in multimodal machine learning.
- *28:16* Training infrastructure and focus on maximizing "goodput."
- *31:34* Importance of data quality and filtering for Gemini's training.
- *33:19* "Chain of Thought" prompting technique for improved model performance.
- *35:53* Multimodal reasoning capabilities of Gemini, with applications in education.

*Performance and Applications of Gemini*
- *39:14* Performance of Gemini Ultra in benchmarks.
- *42:27* Conversational capabilities and development of domain-specific models.
- *49:17* Generative models for creative image and video generation.
- *53:01* Machine learning advancements in visual recognition and its applications in various fields.

*Ethical Considerations, Conclusion, and Q&A*
- *1:02:02* Emphasis on ethical considerations and responsible use of machine learning.
- *1:04:18* Conclusion highlighting the shift to learned systems and their societal potential.
- *1:05:38* Speaker's deferral of further questions due to the overwhelming response.
- *1:06:13* Audience questions on model performance, the future of LLMs, multimodal models, accessibility of AI research, and diversity in machine learning models.

Disclaimer: I used gpt4-0125 to summarize the video transcript. This method may make mistakes in recognizing words, and it can't distinguish between speakers.
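The Transformer's parallel processing that the summary above highlights comes down to scaled dot-product attention, which scores every position against every other position at once. Here is a minimal sketch; the shapes and inputs are illustrative only, and real models add multi-head projections, masking, and learned weights.

```python
# Minimal scaled dot-product attention over a whole sequence at once.
import numpy as np

def attention(q, k, v):
    # q, k, v: (seq_len, d) arrays. All positions are scored in one
    # matrix product, with no recurrence over time steps.
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                   # (seq, seq)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ v                              # weighted mix of values

q = k = v = np.eye(3)  # 3 toy tokens with d = 3
out = attention(q, k, v)
print(out.shape)  # (3, 3)
```

Because every row of `scores` is computed independently, the whole sequence can be processed in parallel, which is the contrast with recurrent models that the talk draws.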
@ivanchennyc 7 months ago
it seems there are some sound issues
@randomsitisee7113 7 months ago
So why is Gemini showing me black Nazi soldiers, when they were all white Germans?
@Papu19271 7 months ago
This guy dreams in Python code for sure.
@iker64 7 months ago
Wonder what's next for AI?
@oscar12.34 7 months ago
Way better than most online services I've used for improving quality.
@Noobsitogamer10 7 months ago
Love the passion Jeff has for machine learning.
@Elpapucho17 7 months ago
I second that, their upscaling is really top notch.
@AravindanUmashankar 7 months ago
Very insightful
@deniz.7200 7 months ago
Google Gemini is the worst AI I have ever used; it's a waste of time and money, sadly
@alexeikorol6312 7 months ago
🎯 Key Takeaways for quick navigation:

00:04 *💡 Introduction to Machine Learning Trends*
- Introduction to trends in machine learning, including opportunities and challenges.
- This talk presents the work of many people at Google, including some co-authored by Dean.

01:26 *👁️ The Evolution of Computer Perception*
- The evolution of computer perception from rudimentary speech recognition and image processing to comprehending language, multilingual data, and enhanced perception.
- Opportunities opened up by advanced machine senses in a variety of fields.

02:35 *💾 The Change in Computer Processing Needs Due to Machine Learning*
- Transition from traditional code to machine learning, triggering a need for different kinds of hardware.
- Discussion on how better hardware can lead to lower economic and energy costs while improving model quality.

06:14 *📈 Progress in Image Recognition and Speech Recognition*
- Details the history and advancements in image and speech recognition, with a focus on neural networks.
- Importance of continued research and scaling for improving the accuracy and usability of systems.

10:20 *🖥️ Google's Development of Tensor Processing Units*
- Google's development of Tensor Processing Units (TPUs) to serve machine learning models effectively.
- TPUs' ability to carry out reduced-precision computations and assemble different linear algebra operations.

14:11 *📚 Language Processing with Machine Learning*
- The radical changes and opportunities in language processing using machine learning.
- The use of distributed representations to represent words as high-dimensional vectors, which push similar words closer together in space and separate different words.

18:31 *⚙️ Sequence-to-Sequence Learning Model*
- Introduction to the sequence-to-sequence learning model, which can translate input sequences into different languages.
- The model's ability to absorb input sentences and decode the corresponding translated sentence iteratively.

20:27 *🔃 Multi-Turn Conversations with Neural Language Models*
- Discusses the use of sequence-to-sequence models for creating meaningful multi-turn conversations, where the system takes the context of previous interactions into account.
- The model effectively uses input from previous turns of conversation to generate appropriate responses.

21:06 *🔂 Transformer Model for Parallel Processing*
- Explanation of the Transformer model, shifting from a sequential processing approach to a parallel one.
- The Transformer model processes each word in the input independently and uses "attention" to focus on relevant parts during translation.
- This shift led to a significant improvement in accuracy with greater computational efficiency.

23:17 *🚀 Scaling Up Models and Obtaining Better Results*
- Covers the trend of increasing the scale of models and using Transformer models for training on conversational-style data.
- Describes how model evaluation ensures that generated responses are both sensible and specific.
- Overview of the progression of neural language models and their improvements over time.

25:31 *🧩 Multimodal Models for Various Inputs*
- Moving towards multimodal models that can process different types of inputs such as images, text, and audio.
- Establishes the goal to train the world's best multimodal models and use them across Google products.

28:31 *💡 Scaling Up the Training Infrastructure and Data Handling*
- Explanation of how Google's training infrastructure operates at large scale to map computations onto available hardware.
- Discusses the importance of a fast recovery system to mitigate the impact of hardware failures.
- Describes the critical role of high-quality training data and proposes automated learning curricula as a future research area.

33:07 *🧠 Better Model Elicitation and Multimodal Reasoning*
- Proposes techniques to elicit better responses from models by asking them to "show their work".
- Details examples of multimodal reasoning with the Gemini model, where complex problems are solved using various data types (text, images, etc.).
- Highlights potential educational implications of these advancements, such as individualized tutoring.

38:07 *📊 Evaluation of Models and Performance Comparisons*
- Emphasizes the importance of model evaluation in identifying strengths and weaknesses, ensuring a well-trained model, and comparing its capabilities with other models and systems.
- Presents performance statistics for the Gemini Ultra model, indicating its state-of-the-art performance in multiple areas of evaluation.

40:35 *🏆 State-of-the-Art Performance Across Benchmarks*
- The Gemini model demonstrates state-of-the-art performance across multiple benchmarks.
- Achieved top results in text, image, video, and audio understanding, even on data it had not previously encountered.

42:25 *💬 Coherent Conversations and Advanced Capabilities*
- The model can generate surprisingly coherent conversations and offer domain-specific knowledge.
- Showcases various input requests and the model's impressive capacity to produce meaningful, accurate responses.

44:45 *🤖 Chatbot Integration and Performance Measures*
- Integration of the Gemini model into the Bard chatbot.
- Evaluation of the model's performance using the Elo scoring method.

49:17 *🖼️ Generative Models for Image Production*
- Overview of how generative models create images based on detailed prompts.
- Emphasizes the influence of model scale on the quality of generation.

54:05 *📱 Machine Learning Applications in Everyday Tech*
- Discussion of various machine learning features that have improved functionality in smartphones, particularly camera features.
- Stresses the broad implications of machine learning technology for areas like language translation and literacy support.

57:06 *🧪 Machine Learning Potential in Material Science and Healthcare*
- Discusses how machine learning can aid in exploring scientific hypothesis spaces and searching for new materials.
- Highlights the potential of machine learning in healthcare, particularly medical imaging and diagnostics.

01:01:27 *🩺 Machine Learning in Dermatological Diagnostics*
- Uses machine learning to assist in diagnosing dermatological conditions.
- Users can take a photo of their skin concern, and the system can suggest potential diagnoses based on similar images in dermatological databases.

01:02:08 *📚 Guiding Principles for Applying Machine Learning*
- Google published a set of principles in 2018 to guide internal teams on considerations when applying machine learning to problems.
- Highlights important aspects such as avoiding creating or reinforcing unfair bias and being accountable and sensitive to privacy.
- Active areas of research include fairness, bias, privacy, and safety.

01:04:29 *🌟 Future Prospects and Responsibilities in Machine Learning*
- The capacity of computers to understand various modalities and react accordingly is constantly expanding, paving the way for more intuitive and seamless user experiences.
- The responsibility to harness machine learning for social benefit is proportional to the opportunities it presents.

01:05:49 *💡 Q&A Session*
- Discussion of the impact of more data on the performance of machine learning models.
- Views on the evolution of LLMs and the growth of multimodal models.
- Reflections on the accessibility of machine learning research for small startups and individuals.

Made with HARPA AI
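The "Chain of Thought" / "show their work" technique described in the summary above can be sketched as a difference in prompt construction. The few-shot example, question, and answer-extraction convention below are made up for illustration; no model is actually called.

```python
# Standard prompting: the worked example shows only the final answer.
standard_prompt = (
    "Q: A pack has 12 pencils. How many pencils are in 3 packs?\n"
    "A: 36\n"
    "Q: A box holds 8 apples. How many apples are in 5 boxes?\n"
    "A:"
)

# Chain-of-Thought prompting: the worked example spells out the
# intermediate reasoning, nudging the model to do the same.
cot_prompt = (
    "Q: A pack has 12 pencils. How many pencils are in 3 packs?\n"
    "A: Each pack has 12 pencils and there are 3 packs, "
    "so 12 * 3 = 36. The answer is 36.\n"
    "Q: A box holds 8 apples. How many apples are in 5 boxes?\n"
    "A:"
)

def extract_answer(completion: str) -> str:
    # Parse the final answer from a completion that ends with the
    # fixed phrase used in the few-shot example.
    marker = "The answer is "
    return completion.split(marker)[-1].rstrip(". ")

print(extract_answer("8 * 5 = 40. The answer is 40."))  # 40
```

A model completing `cot_prompt` tends to emit its intermediate steps before the final answer, which both improves accuracy on multi-step problems and makes the response easier to inspect.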
@brookshamilton1 7 months ago
Very helpful! Thank you
@prashantmorgaonkar3095 7 months ago
TLDR; please 😢
@hanchisun6164 7 months ago
Seeing Jeff himself lying about the numbers of Gemini, I am starting to think it is not necessarily Pichai's fault for corrupting the company's culture.
@pillescasdies 6 months ago
Sincere Q: what part was a lie? The Elo results?
@fearlesspony 7 months ago
At [18:04] they got the vector "king - queen" pointing in the wrong direction... same for "man - woman"
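The point about vector directions can be checked concretely: in a well-formed embedding, "king - queen" and "man - woman" point the same way, and the analogy king - man + woman lands on queen. The 3-d vectors below are toy values invented for illustration; real embeddings have hundreds of dimensions.

```python
# Toy word-vector analogy check (values are made up).
import numpy as np

emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.2, 0.1]),
    "man":   np.array([0.1, 0.8, 0.0]),
    "woman": np.array([0.1, 0.2, 0.0]),
}

def nearest(vec, exclude):
    # Return the vocabulary word with highest cosine similarity to vec.
    best, best_sim = None, -1.0
    for word, v in emb.items():
        if word in exclude:
            continue
        sim = vec @ v / (np.linalg.norm(vec) * np.linalg.norm(v))
        if sim > best_sim:
            best, best_sim = word, sim
    return best

# The "gender direction" is the same for both pairs...
print(np.allclose(emb["king"] - emb["queen"], emb["man"] - emb["woman"]))  # True
# ...so the analogy resolves as expected.
print(nearest(emb["king"] - emb["man"] + emb["woman"], {"king", "man", "woman"}))  # queen
```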
@bartoszkunat1134 7 months ago
46:00
@ai_outline 7 months ago
Computer Science is awesome!!
@bartoszkunat1134 7 months ago
20:09
@sombh1971 7 months ago
37:16 Wow! There goes education! 44:03 There goes coding!
@jasonsong4035 7 months ago
🎯 Key Takeaways for quick navigation:

00:04 *🤖 Introduction to Machine Learning Trends*
- Overview of exciting trends in machine learning.
- Jeff Dean highlights the broad impacts and opportunities in AI.
- Importance of awareness in technological development.

00:43 *🌐 Evolution of Machine Learning Capabilities*
- Machine learning has transformed our expectations of computers.
- Significant improvements in speech recognition, image understanding, and language processing.
- Transition from basic functionality to advanced perception and interaction.

02:07 *⚙️ Scaling and Hardware Innovations*
- Scaling up computing resources leads to better machine learning performance.
- The shift towards specialized hardware for more efficient computation.
- Larger datasets and models contribute to advancements in AI capabilities.

03:57 *🔄 Reversibility in Machine Learning Models*
- Recent progress in reversing traditional input-output relationships in AI models.
- Examples include generating images from descriptions and converting text to speech.
- These advancements open up new possibilities for creative and practical applications.

05:06 *📈 Benchmark Improvements Over the Decade*
- Significant improvements in image recognition and speech recognition benchmarks.
- The evolution of machine learning models has led to surpassing human accuracy on certain tasks.
- Continuous advancements underscore the rapid development of the field of AI.

08:33 *🖥️ Specialized Machine Learning Hardware*
- The development of hardware optimized for machine learning, like Google's TPU.
- Improvements in computational efficiency and energy consumption.
- The role of reduced precision and linear algebra in machine learning computations.

13:57 *🗣️ Advances in Language Understanding*
- Significant progress in language models and translation.
- From basic n-gram models to advanced neural-network-based approaches.
- The importance of distributed representations and sequence-to-sequence learning in improving language understanding.

20:27 *💬 Advancements in Conversational AI*
- Introduction to effective multi-turn conversations using neural language models.
- The progression from sequence-to-sequence models to more advanced Transformer models, enabling parallel processing for efficiency and accuracy.

23:57 *🗨️ Evolution of Neural Language and Chat Models*
- Overview of developments in neural language models and chatbots, including GPT and BERT variations.
- Emphasis on the transformative impact of the Transformer architecture on model efficiency and capability.

26:00 *🌐 Introduction to Gemini Multimodal Models*
- The goal of creating multimodal models capable of understanding and generating content across various data types, including text, images, and audio.
- The introduction of Gemini models by Google for enhanced AI capabilities in handling multiple modalities simultaneously.

28:16 *⚙️ Scalable Training Infrastructure and Data Quality*
- Discussion of the scalable training infrastructure designed to efficiently map computations onto available hardware.
- Emphasis on data quality and its critical role in model performance, including strategies for enhancing the relevance and richness of training data.

33:07 *🧠 Techniques for Eliciting Better Responses from Models*
- Introduction of techniques like Chain of Thought prompting to improve model accuracy and interpretability.
- Examples demonstrating how guiding models to "show their work" can significantly enhance performance on complex tasks.

35:50 *🤖 Multimodal Reasoning in Gemini Models*
- Presentation of Gemini's capabilities in multimodal reasoning, with an example of solving a physics problem.
- Discussion of the potential of multimodal AI models like Gemini for personalized educational tools and tutoring.

38:21 *📊 Evaluation and Performance Benchmarking of Gemini Models*
- Overview of Gemini's evaluation process and its performance across various academic benchmarks.
- Comparison of Gemini Ultra with other state-of-the-art models, highlighting its superior performance on a majority of evaluated tasks.

40:35 *🏅 State-of-the-Art Benchmarks in Image, Video, and Audio Understanding*
- Gemini's exceptional performance on various benchmarks, including image, video, and audio understanding.
- Achievements in multimodal capabilities with state-of-the-art results across multiple domains.
- Importance of unbiased benchmark testing to validate model capabilities.

42:25 *💡 Conversational AI and Practical Applications*
- The evolution of conversational AI models, leading to coherent and helpful interactions.
- Examples of Gemini's capabilities in providing detailed, context-aware responses in a conversational setting.
- Introduction of programming concepts and detailed explanations as part of AI-generated responses.

48:08 *🏥 Domain-Specific Model Refinements for Medical Applications*
- Refining general models for domain-specific applications, particularly in the medical field.
- Achievements of the Med-PaLM model in exceeding medical board exam benchmarks.
- Potential of domain-enriched training to achieve expert-level performance in specialized areas.

49:17 *🎨 Advances in Generative Models for Images and Video*
- Development of generative models capable of creating detailed and contextually accurate images from textual descriptions.
- Impact of model scaling on the fidelity and accuracy of generated images.
- Integration of generative models into practical applications for creative and educational purposes.

54:05 *📱 Machine Learning in Everyday Devices*
- The invisible role of machine learning in enhancing smartphone features and user experiences.
- Examples of computational photography, live captioning, and language translation powered by AI on mobile devices.
- The potential of AI to assist users in a variety of practical and accessibility-oriented tasks.

57:06 *🔬 Machine Learning in Material Science and Healthcare*
- The influence of machine learning on scientific research, particularly in material science and healthcare.
- Automated discovery of new materials with desirable properties using AI-driven simulations and structural pipelines.
- The application of machine learning in medical diagnostics, with a focus on diabetic retinopathy and dermatology screening.

01:01:27 *📸 AI in Dermatology*
- Deployment of AI systems for dermatological assessments through smartphone photography.
- The system's capability to match user-uploaded images with dermatological databases for condition identification.
- Emphasis on the potential for AI to distinguish between serious and benign skin conditions.

01:02:08 *🤖 Ethical Principles in Machine Learning*
- The importance of ethical considerations and principles in the application of machine learning technologies.
- Google's publication of AI principles to guide responsible development and usage.
- Focus on avoiding bias, ensuring accountability, and enhancing social benefit through AI applications.

01:04:29 *🚀 Future of Computing with Learned Systems*
- The shift from hand-coded software systems to learned models that interact more naturally with humans and the world.
- The expanding capabilities of computers to understand and generate various modalities like speech, text, and images.
- Discussion of the opportunities and responsibilities in advancing AI to ensure social benefit.

01:07:13 *💡 Data Quality and Model Performance*
- The relationship between data quality, model capacity, and performance.
- The importance of high-quality data and appropriate model scaling for improved AI effectiveness.
- Mention of the potential adverse effects of low-quality data on model capabilities.

01:08:07 *🧠 The Future of Large Language Models (LLMs)*
- Discussion of the future of LLMs and the availability of high-quality training data.
- Exploration of untapped data sources like video for further training and development of LLMs.
- The ongoing potential for significant advancements in AI through diverse data utilization.

01:09:13 *🌐 Multimodal Models and Specialized Applications*
- The impact of multimodal models on performance across different domains.
- Considerations on whether multimodal models outperform domain-specific models in their respective areas.
- The potential of base models enriched with domain-specific data for targeted applications.

01:10:09 *🚀 Opportunities in AI Research for Individuals and Startups*
- Encouragement for individuals and startups with limited resources to engage in innovative AI research.
- Highlighting the potential for significant contributions to AI through clever ideas and efficient use of available computational resources.
- The importance of diversity in research topics within the AI field, beyond large-scale model training.
@mnchester 7 months ago
Amazing talk! 😊
@mikhailbaalberith 7 months ago
My boy lost a good opportunity to talk less about Gemini and more about PIML
@BennduR 7 months ago
36:56 — the length of the slope is not the hypotenuse; it's simply the length L in the diagram, i.e. 80
@herashak 7 months ago
Gemini is another neutered commiefornian google product
@chromosome24 7 months ago
It's interesting to see how poorly these transformer models perform at math.
@정창욱-z6v 7 months ago
What a cool person.
@darylltempesta 7 months ago
Why hide the wisdom of the audience?
@YagamiAckerman 7 months ago
- [00:04] Machine learning expectations
- [01:42] Scale improves results
- [04:25] Reversing ML capabilities
- [07:24] Accuracy advancements in vision
- [08:18] Speech recognition strides
- [10:20] Hardware for ML efficiency
- [13:57] TPU evolution for ML
- [18:31] Distributed word representations
- [20:56] Transformer model revolutionizes
- [44:04] TPUs accelerate ML
- [48:08] General models specialized
- [49:17] Generative models for images
- [54:05] ML enhancing smartphones
- [57:06] ML revolutionizes science
- [59:06] ML aids medical diagnosis
- [01:02:08] Principles for ethical ML
- [01:06:03] More data improves
- [01:07:27] Plenty of data
- [01:08:07] Multimodal models benefit
- [01:09:27] Challenges for startups
- [01:10:51] Diversity in models
@megairrational 6 months ago
Amazing! Thanks
@joeylantis22 5 months ago
No need to thank people who post these anymore; they're all generated with AI, not by hand like in the early days of YouTube.
@AlexSuslin 1 month ago
@joeylantis22 Why not thank people if they used AI to speed up their work?
@scign 7 months ago
Love how he completely skips over GPUs and the link to gaming and video! Goes straight from CPU to TPU.
@alph4966 7 months ago
Jeff Dean's talent is real, but there is a CEO at Google who is crushing that talent.
@FoxnewsFan 7 months ago
Also their horrible AI product and marketing team
@jermunitz3020 7 months ago
Impressive tech but also quite dystopian. The wealthy and powerful will run AI and use it to rule over everyone else. Too many people are talking about how shiny it is rather than the political ramifications.
@ColinRowat-z1z 7 months ago
Oh no! This is a great presentation, and Dean is a legend, but:
1. The slide at 47:20 claims that six _thousand_ companies were registered in England in 2022; the next slide claims that it's six _million_, citing the ONS.
2. ONS data at www.ons.gov.uk/businessindustryandtrade/business/activitysizeandlocation/bulletins/ukbusinessactivitysizeandlocation/2023 show that 2.06 million _companies_ were registered in 2022 (Table 1), or 2.77 million _total_ (inc. government bodies). This is NOT six million.
3. On the other hand, Companies House, which maintains the companies register in England and Wales, reported 5.12 million companies in March 2023, a 4.5% increase relative to March 2022. Neither the 2022 nor the 2023 figure is six million. See www.gov.uk/government/statistics/companies-register-activities-statistical-release-2022-to-2023/companies-register-activities-2022-to-2023
@nbansal 7 months ago
Promised: self-driving cars. Result: here is a cute video of a dog 😅😅
@skavihekkora5039 7 months ago
Timestamps please!
@liubianxing 7 months ago
Thanks and appreciation for sharing this; Gemini and multimodal models could bring a new trend in 2024
@deniz.7200 7 months ago
Gemini is the worst
@JoseJimenez-il5vs 7 months ago
Still waiting for Jeff to apologise to Timnit Gebru. Absolute clown.
@raxcoins 7 months ago
I'd like to ask for a better example. Jeff is a great person, and he has more to say, with the prowess to back it. I just don't see him putting as much time into these as he should.
@jack_galt 7 months ago
1:11:58 Mhm mhm mhm