The Secret to Instant Meeting Summaries: Whisper Diarization Revealed

  5,404 views

AI FOR DEVS

1 day ago

Comments: 15
@nexuslux · 9 months ago
Very cool that the channel is so responsive to comments. Going to check this out in more detail in the coming days.
@shaytrequesser4482 · 9 months ago
Is there any way to run the transcription with diarization locally?
@ano2028 · 8 months ago
If you have a strong GPU machine with plenty of memory, you can clone the model to your local machine, follow their README to set it up, and run it via Python's subprocess module instead of through Replicate. Replicate is basically a cloud API that "lends" you a GPU compute engine to those who don't have the budget to buy one.
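The local-run approach described above can be sketched like this. Note this is a minimal illustration, not the actual repo's interface: the script name `diarize.py` and its flags are hypothetical placeholders for whatever entry point the cloned model's README documents.

```python
import subprocess

def build_diarization_cmd(audio_path: str, num_speakers: int = 2) -> list[str]:
    """Build the command line for a local diarization run.

    Assumes a hypothetical diarize.py entry point in the cloned repo;
    replace the script name and flags with the ones from its README.
    """
    return [
        "python", "diarize.py",
        "--audio", audio_path,
        "--num-speakers", str(num_speakers),
    ]

cmd = build_diarization_cmd("meeting.wav")
# Run the model on your own GPU instead of calling the Replicate API:
# subprocess.run(cmd, check=True)
```

The point of wrapping it in `subprocess.run` is that your summarization pipeline stays the same; only the transcription step swaps from a cloud API call to a local process.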
@ai-for-devs · 6 months ago
Yes, you can find the models used on huggingface.co, including instructions on how to run them locally.
@Michaelhajster · 9 months ago
Great video, thanks for the tutorial! I just subscribed to your channel. What great times we live in!
@ai-for-devs · 9 months ago
🙌
@AngusLou · 6 months ago
Thank you for the impressive video. It would be even better if there were an on-premise solution.
@truckfinanceaustralia1335 · 4 months ago
This vid is awesome! thanks :) I just subbed
@boooosh2007 · 6 months ago
This is great. Did you automate the final version of the meeting notes as well, or did you clean it up yourself? If it was automated, please show that too.
@ai-for-devs · 6 months ago
This is done in part 2 of the course (see the course preview here: kzbin.info/www/bejne/lXSQk6J_mM5jeZo). If you are interested, please put yourself on the waiting list at ai-for-dev.com.
@st.3m906 · 8 months ago
What would you do if the transcript exceeds the token limit of the LLM?
@ai-for-devs · 7 months ago
If the transcript exceeds the token limit for the LLM, I would break it into smaller, manageable chunks and process each one sequentially.
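The chunking strategy mentioned in the reply can be sketched as below. This is an illustrative helper, not code from the video; it uses a rough characters-per-token heuristic (a real pipeline would use a proper tokenizer such as tiktoken) and overlaps chunks slightly so sentences cut at a boundary still appear in full in one chunk.

```python
def chunk_text(text: str, max_tokens: int = 3000, overlap: int = 200) -> list[str]:
    """Split text into overlapping chunks that fit under a token budget.

    Assumes ~4 characters per token for English text (a heuristic,
    not an exact tokenizer count).
    """
    max_chars = max_tokens * 4
    overlap_chars = overlap * 4
    chunks = []
    start = 0
    while start < len(text):
        end = start + max_chars
        chunks.append(text[start:end])
        if end >= len(text):
            break
        # Step back a little so context spanning the cut is not lost.
        start = end - overlap_chars
    return chunks

transcript = "word " * 10_000  # stand-in for a long meeting transcript
chunks = chunk_text(transcript)
# Each chunk is summarized in its own LLM call; the partial summaries
# are then merged in a final call (a map-reduce style summarization).
```

Each chunk then gets its own summarization prompt, and a final pass condenses the partial summaries into one set of meeting notes.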
@kryptonic010 · 9 months ago
The presentation was great; however, instead of sending data off to AWS, I'd process most if not all queries in-house on my own data servers. Privacy is paramount.
@HyperUpscale · 9 months ago
I was thinking the same
@ai-for-devs · 9 months ago
Absolutely, prioritizing privacy by processing data in-house is a smart move. Leveraging open-source solutions and hosting LLMs on your own servers offers both control and security.