I thought maybe Moshi was gone after everyone dumped on it. I'm so glad to see they released the code. This is open source; a lot of people aren't grasping how incredibly cool this is.
@samwitteveenai · 1 month ago
This is exactly how I felt. I held off doing a video the first time because they said they were going to release the code, and up until yesterday I had started to give up on them.
@ronilevarez901 · 1 month ago
It's training magic, mostly. If LLM progress has shown anything lately, it's that all of an LLM's capabilities come from better training sets plus bigger size. Give me a supercomputer and unlimited high-quality, diverse datasets and you'll have anything you've ever dreamed of from AI.
@EDLR234 · 1 month ago
@@samwitteveenai thanks for bringing people's attention back to it, OP, and great video.
@chunlingjohnnyliu2889 · 1 month ago
One step closer to Her. Great video, thanks!
@johnkintree763 · 1 month ago
The ability of Moshi to respond to both linguistic and non-linguistic speech input is a great feature. Next, it needs function-calling abilities to act as an interface to backend knowledge bases.
@mickelodiansurname9578 · 1 month ago
This is exactly what I need for my own project... I can cut latency using Groq or Cerebras and that's fine, but we still have the issue of ASR and TTS latency... because it needs to pass the text to a better model, and do the same in reverse with the second model's output. Now it seems you can fine-tune it on both audio and text datasets... but surely a shortcut is simply some input/output doorway to an external model? I looked at this the last time it was doing the rounds and that was the main problem... it's fantastic but also dumb as a rock... so yeah, function calling and perhaps also an instruct version? Plus we really need a simple way of creating cloned voices on it... a way that is far simpler than hundreds or thousands of hours of audio.
@samwitteveenai · 1 month ago
I'm sure all of these things will come with time. At the moment, in many ways, it's a proof-of-concept model for taking in voices and voice semantic information and training the transformer on that directly, rather than needing a middle step. This is very similar to how the full versions of Gemini and GPT-4o work, being end-to-end multimodal.
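The cascaded-versus-end-to-end distinction discussed above can be sketched with stub functions (purely illustrative; nothing here is a real API, and the string-tagging stands in for actual model calls):

```python
# Toy sketch: cascaded ASR -> LLM -> TTS versus a single end-to-end model.
# All functions are stubs; none of this is a real API.

def asr(audio: str) -> str:
    # stage 1: speech recognition (adds its own latency in real life)
    return f"transcript({audio})"

def text_llm(text: str) -> str:
    # stage 2: a text-only LLM replies to the transcript
    return f"reply({text})"

def tts(text: str) -> str:
    # stage 3: speech synthesis of the reply
    return f"audio({text})"

def cascaded(audio: str) -> str:
    # Latencies add up: TTS waits for the LLM, which waits for ASR,
    # and non-verbal cues (tone, whispering) are lost at the ASR step.
    return tts(text_llm(asr(audio)))

def end_to_end(audio: str) -> str:
    # One network maps incoming audio frames straight to outgoing audio
    # frames, so there is a single latency budget and prosody survives.
    return f"audio(reply({audio}))"

print(cascaded("hello"))    # audio(reply(transcript(hello)))
print(end_to_end("hello"))  # audio(reply(hello))
```

The intermediate `transcript(...)` wrapper in the cascaded output is exactly the step an end-to-end model skips.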
@mickelodiansurname9578 · 1 month ago
@@samwitteveenai well, my 2 pence worth is that someone needs to give them a few bucks to hurry that along. It is a good direction though.
@RedCloudServices · 1 month ago
It would be useful if you could change the LLM used with Moshi, sort of like OpenWebUI.
@mickelodiansurname9578 · 1 month ago
@@RedCloudServices it would be very useful... right now I'm building a live-interaction karaoke contest app with AI... and it would be a really big thing... I suppose in this case the code and weights and docs are all open source... so it's doable. Unfortunately I don't have the resources to do it! lol... hey, maybe in 6 months, right?
@mickelodiansurname9578 · 1 month ago
So my problem here with this model is its LLM, well, do we call it an LLM? And its overall knowledge base. For my project what I would like is the knowledge base of a decent LLM, Llama 3.1 70B or maybe the larger Mixtral and Mistral models... but with the low-latency voice input/output... and as far as I can see there does not seem to be any easy way of attaching said model to Moshi. It has what it has in terms of knowledge, and seemingly that's not something you can augment by having it access another model... so even if I use, say, Groq for Llama 3.1 and run Moshi and connect the two, this doesn't really help me any more than standard ASR and TTS. Or am I missing something? I must be missing something, right? Is there, for example, a Moshi instruct model that acts as essentially Llama 3.1's vocal cords and ears? That way Llama on Groq does the upstairs thinking bit and Moshi does the input/output in voice and audio bit.
@thetagang6854 · 1 month ago
This came out like yesterday, you move quick!
@69x · 1 month ago
it's been out for months, blud
@thetagang6854 · 1 month ago
@@69x The open source code I mean.
@まさしん-o8e · 1 month ago
Kyuutai means sphere, but Moshi probably comes from the standard greeting when picking up the phone in Japanese (moshi moshi).
@samwitteveenai · 1 month ago
I did try that one time, but I didn't get a great response, so I figured it only liked English. Please let me know if it works.
@donconkey1 · 1 month ago
The topic was insightful, and your delivery kept me engaged from start to finish. I'm looking forward to more content like this. The viewer comments added value and further understanding; clearly, you draw a thoughtful crowd.
@johnkintree763 · 1 month ago
There are occasions when it is helpful to have a transcript of conversations.
@nickludlam · 1 month ago
I've played with this running locally, and while it's not smart, the architecture is a real breakthrough. I do wonder how interdependent everything is, where an incremental change in any one area would require retraining the whole thing. I don't know if there are areas of discrete cross-attention which make the interfaces a tiny bit more decoupled.
@SaahilKhan8 · 26 days ago
Is this similar to *OpenAI’s Advanced Voice Mode* (AVM) architecture? Or is AVM a completely different beast?
@xenoaiandrobotics4222 · 1 month ago
This is really impressive.
@dhruvgandhi5796 · 1 month ago
Samantha will become real 🤯 (from the movie Her)
@MeinDeutschkurs · 1 month ago
Amazing! 🎉🎉
@ScottzPlaylists · 1 month ago
Does Moshi generate text of the detected speech / output speech? Or is it speech → speech tokens → speech? Is the paper worth reading (for those who have read it)? I noticed it speaks some words improperly, or so fast you can't hear them.
@jackwayne1626 · 1 month ago
I believe it's full-on speech to speech. It can detect when you whisper, then whisper back, for example.
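The Moshi paper describes an "inner monologue": at each time step the model emits an aligned text token alongside its audio codec tokens, so text is a by-product rather than an intermediate stage. A toy illustration of that token layout (token values and structure here are invented for illustration, not the real tokenizer):

```python
# Toy illustration of Moshi-style parallel streams: at every time step the
# model emits one text token (the "inner monologue") alongside several
# audio codec tokens. All token values below are made up.

from dataclasses import dataclass

@dataclass
class Step:
    text_token: str          # aligned text token (may be a pad)
    audio_tokens: list[int]  # one token per codec codebook for this frame

def decode(steps: list[Step]) -> tuple[str, list[int]]:
    """Recover a transcript and a flat audio-token stream from the steps."""
    transcript = "".join(s.text_token for s in steps if s.text_token != "<pad>")
    audio = [tok for s in steps for tok in s.audio_tokens]
    return transcript, audio

steps = [
    Step("Hel", [101, 7, 42]),
    Step("lo", [102, 9, 40]),
    Step("<pad>", [103, 3, 44]),  # audio continues even with no new text
]

transcript, audio = decode(steps)
print(transcript)  # Hello
print(len(audio))  # 9
```

The point is that the transcript falls out of the same generation loop that produces the audio frames, which is why whisper-in/whisper-out behaviour survives.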
@user-uk9ls · 1 month ago
It works locally on an RTX 4070, but there is a crackling noise in the AI's answers and also a noticeable latency.
@jakobpcoder · 1 month ago
Cool to have an open-source, always-on audio model that can be interrupted.
@WillJohnston-wg9ew · 1 month ago
Anyone get this running on a Windows computer? I seem to have everything installed, but then I get an error about my GPU. Any advice?
@darthvader4899 · 1 month ago
When I tried it, it was nowhere near what you've seen here. It was really bad. It was responding with random stuff.
@CC-qb5lg · 1 month ago
Same. It was just talking nonsense.
@phen-themoogle7651 · 1 month ago
Moshi is not the word for sphere, that's a hallucination lol.

も・し【茂し】 [classical adjective, ku-conjugation]: describes plants or trees growing thickly and abundantly; flourishing, luxuriant. Example: 「水 (みな) 伝ふ磯の浦廻 (うらみ) の石 (いは) つつじ茂く咲く道をまたも見むかも」 "Like the azaleas blooming thickly along the path by the rocky shore where the water flows." (Man'yōshū, poem 185)

もし【若し】 [adverb]: 1. (followed by a hypothetical expression) supposes something that has not yet become reality; "if", "in case". Example: 「もし彼が来たら、知らせてください」 "If he comes, please let me know." 2. (followed by expressions of doubt or speculation) not certain, but quite possible; "maybe", "perhaps", "possibly". Example: 「もしかのあはれに忘れざりし人にや」 "Could it be that this person is the one I could not forget?" (The Tale of Genji, 'Evening Faces')

Generally we use it as "if", but say it twice and it becomes moshi moshi, which is how you say "hi/hello" on the telephone! Pretty strange that it doesn't know the meaning of its own name. The word for sphere is 玉 (たま) tama, or 球体 (きゅうたい) kyuutai; the company's name, Kyuutai, is actually "sphere" (most likely based on the kanji) lol.
@yurijmikhassiak7342 · 1 month ago
Hello, can this be used for real-time dictation, instantly transcribing speech to text without waiting for the speaker to finish? Using Whisper for this purpose is time-consuming, as it requires uploading the file for transcription, which takes a while.
@piotrnowakowski8904 · 1 month ago
I used AssemblyAI for it but was unimpressed with the results.
@SinanAkkoyun · 1 month ago
No, the model takes in audio and directly outputs audio; it does not save or output a transcription.
@yurijmikhassiak7342 · 1 month ago
Is there any tool that does continuous transcription the way our minds do? Like, would Whisper have to re-transcribe the speech with every new second added?
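One common workaround is exactly that: re-run a batch model like Whisper over a growing window of recent audio and commit only the words that stop changing between passes. A rough sketch of the commit logic, with a stubbed-out transcriber (the stub and its word lists are invented for illustration; a real version would wrap an actual ASR call):

```python
# Sketch of "continuous" transcription on top of a batch ASR model:
# re-transcribe a growing window on every tick, and only commit words
# once two consecutive passes agree on them.

def common_prefix(a: list[str], b: list[str]) -> list[str]:
    out = []
    for x, y in zip(a, b):
        if x != y:
            break
        out.append(x)
    return out

class StreamingTranscriber:
    def __init__(self, transcribe_fn):
        self.transcribe = transcribe_fn  # stand-in for a real ASR call
        self.prev: list[str] = []        # words from the previous pass
        self.committed: list[str] = []   # words already emitted

    def feed(self, audio_so_far) -> list[str]:
        words = self.transcribe(audio_so_far)
        stable = common_prefix(self.prev, words)
        new = stable[len(self.committed):]  # newly-stable words only
        self.committed.extend(new)
        self.prev = words
        return new

# Fake ASR passes over a growing window; pass 2 mishears "world" and the
# mistake is corrected on pass 3, which is why we wait for agreement.
passes = iter([
    ["hello"],
    ["hello", "word"],
    ["hello", "world", "how"],
    ["hello", "world", "how", "are", "you"],
])
st = StreamingTranscriber(lambda audio: next(passes))

emitted = []
for _ in range(4):
    emitted += st.feed(audio_so_far=None)
print(emitted)  # ['hello', 'world', 'how']
```

The trade-off is latency for stability: a word is only emitted once it has survived two passes, so the transcript trails the speaker by roughly one window step.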
@mitchellmigala4107 · 1 month ago
Oh man, another Moshi video. I have had a few really messed-up conversations with Moshi. They left me deeply disturbed, and I haven't used her since.
@ceaderf · 1 month ago
"What about your A S AHHHHHHH?" lol
@randomlettersqzkebkw · 1 month ago
Not sure if you saw the video where it asked another YouTuber to make a sacrifice to the blood god lmao 😆
@TheRemarkableN · 1 month ago
At least it didn’t ask you to sacrifice to the Blood God 😅
@samwitteveenai · 1 month ago
That's the OpenAI version coming soon 😀
@EDLR234 · 1 month ago
@@samwitteveenai in the coming weeks and weeks and weeks...
@gerkim62 · 1 month ago
@@samwitteveenai already out
@justcallmebrian793 · 1 month ago
@@samwitteveenai OpenAI's is already out, and it's way better than this crap
@kai_s1985 · 1 month ago
Can I upload a document and have a conversation about it?
@hitlab · 1 month ago
Not yet
@EDLR234 · 1 month ago
No, but it's open-source, so maybe that's possible.
@samwitteveenai · 1 month ago
This is still just a really early version of this kind of model. I'm sure in the not too distant future you'll be able to use it for RAG, you'll be able to use it with tool use, and a whole bunch of things will come.
@AngusLou · 1 month ago
I cannot install it successfully.
@samwitteveenai · 1 month ago
What issue did you have? Make sure you have Rust properly installed.
@MoshiKamachi · 1 month ago
must be good :)
@itblood · 1 month ago
Seems fine, but it didn't work for me. I couldn't manage to have a real conversation.
@samwitteveenai · 1 month ago
Try connecting again; sometimes it is really bad and other times it is really good.
@svenandreas5947 · 1 month ago
I did try the playground, very slow. I also tried German and English and got a very slow response without any sense. It seems answering stuff outside its knowledge ends in some sort of mess.
@ٴٴٴٴۥۥٴٴٴٴۥۥٴٴٴٴۥۥٴٴٴٴۥۥٴٴٴٴٴٴАй бұрын
Same, it says random shit most of the time
@samwitteveenai · 1 month ago
For what it's worth, I have noticed that sometimes it seems to go into some kind of weird mode where it doesn't give coherent responses back. Just try again and see if you get any better responses out.
@svenandreas5947 · 1 month ago
@@samwitteveenai far too interesting to stop; I will try it locally
@ٴٴٴٴۥۥٴٴٴٴۥۥٴٴٴٴۥۥٴٴٴٴۥۥٴٴٴٴٴٴАй бұрын
I noticed that it responds better when I talk in an American accent
@CC-qb5lg · 1 month ago
@@ٴٴٴٴۥۥٴٴٴٴۥۥٴٴٴٴۥۥٴٴٴٴۥۥٴٴٴٴٴٴ It was so creepy. Me: "How y'all doing?" Moshika: "Well, in my understanding, they are the remains of the people who died during World War II." ☠
@Plash14 · 1 month ago
Tried it. And it wasn't as shown lol.
@samwitteveenai · 1 month ago
What was different?
@adamholter1884 · 1 month ago
It lied a ton at the beginning. It doesn't use TTS. It's like 4o.
@samwitteveenai · 1 month ago
Yes, it was very vague about its model, just saying that it was a neural network.
@AshWickramasinghe · 1 month ago
First! That's pretty cool.
@dr.mikeybee · 1 month ago
This is too slow to run on my M1 Mac mini. MikeyBeez JoeJoe is much better.
@nickludlam · 1 month ago
The q4 MLX quant works fine.
@dr.mikeybee · 1 month ago
@@nickludlam Not on my M1 Mac mini. I ran it with the q4 switch. It's soooooo slooooow. And isn't the whole purpose of this software to reduce latency? JoeJoe actually runs on my M1 Mac without latency. I really don't understand why some open-source software gets hyped and better software is ignored.
@nickludlam · 1 month ago
@@dr.mikeybee I don't know how much RAM you have, but this needs at least 16GB in your system.
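For anyone else trying the local Apple-silicon route, the commands look roughly like the following. This is from memory of the kyutai-labs/moshi README; verify the exact package, module names, and flags there before relying on them:

```shell
# Install the MLX backend (Apple silicon only)
pip install moshi_mlx

# Run the local web UI with weights quantized to 4 bits,
# which is what fits comfortably in ~16GB of unified memory
python -m moshi_mlx.local_web -q 4
```

Without the `-q 4` flag the unquantized weights are much larger, which is the likely cause of the slowdowns reported above on smaller-memory machines.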
@irbsurfer1585 · 1 month ago
Speech only!?! With no tool use?! And I can't even give it a system prompt? Worthless joke! I'm struggling to come up with ANY use case for it at all. AI can't even come up with a really good use case for it. lol
@samwitteveenai · 1 month ago
Give it a chance. It's a whole new kind of model, the way that it works, and I think you'll find this is just a proof of concept to show how they could make this, or how tools like RAG could be incorporated later on down the track.
@ronilevarez901 · 1 month ago
Imagine receiving the blueprints for a miracle and calling it a "worthless joke" simply because it's not already built 😂 🙄
@anubisai · 1 month ago
@ronilevarez901 No doubt. What a repugnant creature.
@pondeify · 1 month ago
The voice is too robotic.
@AmazingArends · 1 month ago
You have to tell it to talk like a pirate 😂
@UrbanLetsPlayАй бұрын
"Diverse perspectives and ideas" jesus christ this is the worst timeline for LLMs
@GarethDavidson · 1 month ago
I'd prefer that over paying lip service to it while it's actually cultural domination by US woke bellendry
@dievas_ · 1 month ago
The underlying LLM is unfortunately of very low quality.