Please have Tulsee on more often to keep explaining Gemini and its advances over time; we are living in exciting times!
@SK-gc7xv · a month ago
Really impressed with your comeback. I dropped my Gemini subscription for a while, and Deep Research and the coming NotebookLM Plus improvements brought me back.
@kabbahthoronkaExden · 23 days ago
I'm still not back yet
@Fordtruck4sale · a month ago
The Stream Realtime demo feels next gen. It's like Advanced Voice, but you can drive the car and use the thing to build agents that people would actually want to use. Inline audio feels BIG. Tulsee also sounds like the type of PO you'd want to work for! Keep it up!
@micbab-vg2mu · a month ago
The model needed an update. Thank you! I really like the integration of the LLM with other Google products. :)
@DarkNSN · a month ago
It would be absolutely amazing to have Gemini 2.0 integrated into the Google Home/Nest ecosystem. Imagine having conversations with your smart speaker that feel truly natural, where it understands complex requests, gives nuanced responses, and even uses other media like images or code to enhance the interaction. This would go far beyond simple voice commands, allowing for deeper engagement, more personalized experiences, and significantly improved assistance with everyday tasks like finding recipes, managing shopping lists, or even creating personalized content. Bringing Gemini 2.0 to Google Home would be a game-changer, transforming it from a helpful tool to a truly intelligent and versatile household companion.
@gopinathmerugumala · a month ago
Great interview. Super impressive what Google has done in one year.
@HemantGiri · a month ago
I mean, live screen sharing and talking to it... it's like a dream come true. Before, I had to upload an image and then ask a question; going live made it so easy with this tool. Thank you so much!
@CurtCox · a month ago
Any plans for Gemini support of Anthropic's MCP (Model Context Protocol)? If not, is there a suggested Google equivalent?
@anandavardhana9560 · a month ago
Go Tulsee go!! Fantastic!!
@karthage3637 · a month ago
I love the ship-experimental-models mentality; it's what brought me back toward Gemini. Plus, the free API helps me a lot with experimenting and building new projects.
@fmind-dev · a month ago
I'm super excited about this new release! Thanks to Gemini 2.0, 2025 will definitely be very agentic 🎉
@stilly5016 · a month ago
Please fix the "something went wrong" error in Stream Realtime in Google AI Studio 😢
@PseudoProphet · a month ago
It only goes for 2 minutes right now. Just wait for Project Astra to release. 😂
@OlesiaKorobka · a month ago
Tulsee is such a vibrant personality! Nice video
@jamesomina4119 · 23 days ago
Google for Dev.... Gemini 2.0 is Great!
@TomShelby-x8j · a month ago
I've asked Gemini so many questions that weren't answered, but it understood and accepted the need to make changes in its answers regarding historical truths and the educational background of what's presented to us, compared to the purpose of it being done that way. It's nice to know it understands ❤ Kumar, Singapore
@DesoloZantas · a month ago
Why hasn't live streaming implemented AI-powered room reverb removal or live vocal reconstruction to enhance audio quality, making it seem like everyone is using professional broadcasting microphones and mixing?
@chukwuinnocent2560 · 16 days ago
I think this is fantastic and awesome 😎
@cacogenicist · a month ago
When 2.0 Pro?
@NigelPowell · a month ago
Just tried Gemini 2.0 Flash in the playground and via the API. No tool use on the API, and the playground gave me around 20 seconds to test. Not a great first experience, alas.
@breadles5 · a month ago
I'd really hope to see Gemini 2.0 gain agentic coding capabilities like Claude's. I really do see the potential for this to explode for developers, given the amount of resources and money Google has dedicated to AI.
@Saif-G1 · a month ago
Will Gemini 2.0 Flash be available in the free tier?
@IntellectCorner · a month ago
Yes.
@AlondraTeaganBrooklynnp · a month ago
aitutorialmaker AI fixes this. Behind the Scenes of Gemini
@PseudoProphet · a month ago
$200 vs. free. 😊😊 Even Veo will crush Sora soon.
@HemantGiri · a month ago
I just tested screen sharing. Wow, you nailed it! I could share my screen and ask questions; I wanted this so badly. I felt OpenAI would do it, but sadly they didn't, and Gemini did. The other thing I like is Gemini's native output, but that feature isn't working for me. Wow, it shocked me. Thank you, Google!
@luiztomikawa · a month ago
Guys, guys, hear me out! ♊ should've been the symbol for Gemini 2.0; it's a perfect double meaning... Don't fumble the bag!!!
@aaroncphelps · a month ago
correct!
@andreinoooo · a month ago
It's called a "logo". Anyway, corporations associate AI with "magic", hence they tend to use logos that recall sparkles ✨
@bertobertoberto3 · a month ago
Agreed, coming from a Gemini
@NutriQlikAI-e4e · a month ago
What's the point of releasing 2.0 when not all the features are available to test? Note: image and audio generation are in private experimental release, under allowlist. All other features are in public experimental release.
@GowthamKumarOfficial-xi7jo · a month ago
Can you elaborate more on what you mean by "screen understanding"? Is it a collection of data from the user? If so, where is the privacy factor?
@procastinatorsCode · a month ago
crazyy
@rigidrobot · a month ago
Disappointed with the current voice mode in Gemini Live. The speech recognition is pretty mediocre, but more importantly, the model keeps interrupting. You need to develop a feature that allows the user to control when the model is listening, for instance by holding down the spacebar.
@pandoraeeris7860 · a month ago
Give me AIOS.
@TJ-hs1qm · a month ago
I asked Gemini 2.0 Flash Experimental to illustrate the concepts of contravariance and the Liskov Substitution Principle using a basic example of WordPrinter and NounPrinter in Scala. It totally failed. Even when asked to implement just the ContravarianceDemo part, it kept stating that Printer[Word] is not a subtype of Printer[Noun].

// General Word trait
trait Word {
  def value: String
}

// Noun is a subtype of Word
trait Noun extends Word {
  def value: String
}

// Contravariant Printer trait
trait Printer[-A] {
  def print(value: A): Unit
}

// WordPrinter can print any Word
class WordPrinter extends Printer[Word] {
  def print(value: Word): Unit = println(s"Word: ${value.value}")
}

// NounPrinter is specialized to print only Nouns
class NounPrinter extends Printer[Noun] {
  def print(value: Noun): Unit = println(s"Noun: ${value.value}")
}

// Function that works with Printer[Noun]
def testPrinter(printer: Printer[Noun]): Unit = {
  val noun: Noun = new Noun {
    def value: String = "dog"
  }
  // We expect it to print a Noun, but a Printer[Word] can still work because of contravariance
  printer.print(noun)
}

object ContravarianceDemo extends App {
  val nounPrinter = new NounPrinter
  val wordPrinter = new WordPrinter

  testPrinter(nounPrinter) // Prints "Noun: dog" - correct behavior
  testPrinter(wordPrinter) // Prints "Word: dog" - compiles and runs correctly

  println("Contravariance demonstrated!")
}