What is MLflow?
2:30 · A year ago
Deep Learning 2.0
4:11 · A year ago

Comments
@Pingu_astrocat21 2 days ago
Thank you for uploading this :)
@mannapmt3041 5 days ago
Hello, I need quick help here: the env that I set up in WSL doesn't show up in VS Code. Can you help with this?
@tennisdanoz A month ago
Can I get access to the slides?
@rogendothepoet3108 A month ago
Loved it
@pablovera2102 A month ago
Please share the Jupyter notebook or the code...
@atlant1707 A month ago
Please turn on the subtitles.
@AmrMoursi-sm3cl 2 months ago
Thanks ❤❤❤❤
@kunalnikam9112 2 months ago
Can this be done using LoRA?
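For reference, if the underlying model is a Hugging Face transformer, LoRA adapters are typically attached with the PEFT library; the sketch below uses placeholder model and module names and is not the video's actual code.

from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Placeholder base model; the names in target_modules depend on the architecture.
model = AutoModelForCausalLM.from_pretrained("gpt2")
lora_cfg = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["c_attn"],   # GPT-2's attention projection; differs per model
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only the small adapter matrices are trainable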
@rodralez 2 months ago
Link to the notebook: colab.research.google.com/drive/1JBtIiMA-LLCmqxGwzK6aokPKR6wRWdV0?usp=sharing
@randradefonseca 3 months ago
This is the best and most extensive explanation of RAG that I have seen on YouTube. Thank you!
@jakobbourne6381 3 months ago
Enhance your marketing efficiency and financial success using Phlanx's Caption Generator, an AI solution that simplifies the content creation process, allowing businesses to focus on core activities while reaping the benefits of increased online visibility.
@Peter-cd9rp 3 months ago
Thank you. Just out of curiosity, can you share the code/notebook as well?
@pmobley6526 3 months ago
A few of the links in the "summary" above result in "page not found". Also, is there a link to the slides? Thanks for the great presentation, by the way. I always enjoy learning about the cutting edge of time series forecasting.
@chrisogonas 3 months ago
This is exciting! I cannot wait to try it out on sensor data. Thanks folks.
@sadam8739 3 months ago
You say TimeGPT is open source, but it is not downloadable; that is not open source, haha. What a waste of time. Why can't you be clear?
@user-nf6rl9pj5f 3 months ago
Multivariate time series are required for stock prediction. Any plans to extend TimeGPT to multivariate?
@pauljones9150 3 months ago
Why are multivariate time series required for stock prediction?
@homakar1 3 months ago
@pauljones9150 Because correlations across stocks and markets are super important.
@rahulgaikwad5058 4 months ago
Amazing video, keep up the good work 🎉
@luisjoseve 4 months ago
Awesome! Thanks a lot.
@GX-uq1hm 4 months ago
When is the next MLOps program scheduled?
@GX-uq1hm 4 months ago
When does the next MLE program begin?
@thehiep4242 5 months ago
Can you help me? I have a dataset with images of size 1024x536; can I change the input size for fine-tuning?
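Whether the input size can be changed depends on the training script, but a common approach is simply to resize and crop the images during preprocessing; a minimal torchvision sketch, assuming a square target resolution of 512 (not confirmed in the video):

from torchvision import transforms

# Rescale 1024x536 source images so the short side is 512, then crop to 512x512.
preprocess = transforms.Compose([
    transforms.Resize(512),
    transforms.CenterCrop(512),
    transforms.ToTensor(),
])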
@chrisogonas 6 months ago
Excellent session, Salwa and Luis. Thanks FourthBrain 👏👏👏
@junhualiu5259 7 months ago
What resources do you use to fine-tune the model?
@Delmark1904 7 months ago
Now I receive "Out of memory on GPU" every time, even though I use Colab Pro. Does anyone have any insights into this issue?
@user-dh6cl2er2z 7 months ago
I've upgraded Colab, and now I have 15 GB of GPU RAM, but I'm still receiving this error:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 50.00 MiB (GPU 0; 14.75 GiB total capacity; 13.29 GiB already allocated; 6.81 MiB free; 13.44 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
However, I've set --max_train_steps=10. Any idea what the minimum required RAM is, and how can I optimize that?
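Common mitigations for this error, sketched below under the assumption of a Hugging Face Trainer-style fine-tuning script (the notebook's actual flags may differ): set the allocator option the error message mentions before CUDA is initialized, shrink the batch size, and enable gradient checkpointing and mixed precision.

import os
# Must be set before the first CUDA allocation (i.e., before torch touches the GPU).
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=1,   # smallest batch; trade speed for memory
    gradient_accumulation_steps=8,   # keep the effective batch size the same
    gradient_checkpointing=True,     # recompute activations instead of storing them
    fp16=True,                       # mixed precision roughly halves activation memory
    max_steps=10,
)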
@fintech1378 7 months ago
How about fine-tuning for video?
@chiefmiester3801 7 months ago
A tutorial on using Dreambooth as well would be super cool
@BT-te9vx 7 months ago
Thanks for sharing - it was helpful
@infolabai 9 months ago
🎯 Key takeaways for quick navigation:
00:00 🎉 Introduction to fine-tuning LLMs with the Ludwig framework.
00:27 📚 Learn to fine-tune large language models (LLMs) using the Ludwig framework.
00:55 🧠 Ludwig uses a declarative programming approach to generate fine-tuned LLMs with minimal code.
02:16 🛠️ Speaker Piero Molino introduces Ludwig, an open-source deep learning framework with declarative capabilities.
03:36 🚀 Ludwig simplifies training and fine-tuning LLMs, reducing coding effort and time.
07:08 🔧 A Ludwig configuration specifies inputs, outputs, and task types to create LLMs with the desired architectures.
09:43 🚀 Fine-tuning LLMs with Ludwig requires minimal code - about 10 lines - to achieve effective results.
13:44 📊 Ludwig supports large LLMs like GPT-3 and handles distributed training with Ray for models exceeding GPU memory.
20:28 📊 Evaluation and visualization tools in Ludwig help analyze LLM performance.
23:06 🔄 Switching LLM models or transfer learning with Ludwig is straightforward by adjusting configuration parameters.
25:30 🚀 Developing machine learning models using traditional approaches can be time-consuming and resource-intensive, often requiring months of development and deployment.
26:24 🛠️ Aggregative interfaces and declarative approaches, like those used in tools such as dbt and Terraform, can simplify complex tasks in data engineering and infrastructure management.
27:18 ⚙️ Combining declarative and automation approaches in machine learning reduces time to value and opens opportunities for engineers with varying levels of ML expertise.
28:41 🧠 Ludwig's architecture revolves around ECD (Encoder-Combiner-Decoder), allowing flexibility in encoding different data types for various machine learning tasks.
29:48 🧩 Ludwig's configuration-driven approach lets you build different machine learning models by specifying encoders, preprocessors, architectures, and more.
30:44 📊 The flexibility of the configuration file allows creating multi-modal, multi-task models, enabling tasks like image captioning, regression, audio processing, and more.
33:57 📝 Ludwig's configuration system can be easily extended with custom encoders, expanding the platform's capabilities for various applications.
37:56 ⚙️ Scaling Ludwig is facilitated through the Ray backend, enabling data parallelism and model parallelism for larger datasets and models.
43:45 🌟 Predibase enhances Ludwig with additional components, making it an enterprise platform for easy model building, iteration, deployment, and collaboration.
46:04 🌐 Ludwig supports various LLMs, including those available on Hugging Face, and upcoming versions will include built-in support for models like Alpaca and LLaMA.
48:47 🏁 When fine-tuning LLMs on resource-limited platforms like Google Colab, consider using smaller versions of models and optimizing for performance per available resources.
50:54 🧠 Fine-tuning LLMs can achieve comparable performance to full fine-tuning, or even better in some cases, depending on data similarity.
51:34 🔄 Freezing specific parts of a pre-trained model can be effective when the new data is similar to the original model's training data.
53:51 🏆 Hyperband with Bayesian optimization is recommended for hyperparameter optimization due to its efficient resource usage.
55:29 🌟 Keeping up with AI advancements is challenging, and following influential lab discussions or seminar series can help.
57:21 🚀 Beginners aiming to add ML capabilities to applications can start with high-level tools like Ludwig.
58:30 🔍 For researchers, exploring historical AI papers can provide valuable insights into the field's progression.
01:00:08 🧩 Building with high-level abstractions, like Lego blocks, is crucial in the evolving landscape of AI.
01:00:47 🤝 Joining AI communities like Stanford MLSys can help you stay updated and connected within the field.
Made with HARPA AI
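As a rough illustration of the "about 10 lines" claim, here is a minimal Ludwig-style fine-tuning sketch; the base model, column names, and dataset path are placeholders, and the exact config schema should be checked against the Ludwig docs.

from ludwig.api import LudwigModel

config = {
    "model_type": "llm",
    "base_model": "meta-llama/Llama-2-7b-hf",               # placeholder base model
    "input_features": [{"name": "instruction", "type": "text"}],
    "output_features": [{"name": "response", "type": "text"}],
    "adapter": {"type": "lora"},                             # parameter-efficient fine-tuning
    "trainer": {"type": "finetune", "epochs": 1},
}

model = LudwigModel(config)
model.train(dataset="my_instructions.csv")                   # hypothetical dataset file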
@somaiamahmoud1675 9 months ago
Can you provide the fine-tuning code? Thank you.
@user-xp5qg9qk8c 9 months ago
Thanks for the video, it has been really useful! Do you know how to keep track of the training logs?
@mehdimohsenimahani4150 10 months ago
amazing
@ANONYMUS92300 10 months ago
You forgot one cut there at 22:11, haha. Nice work though, thank you.
@saisha_playz5355 11 months ago
Hi, I tried navigating to the GitHub URL visible at the 1:49 mark of this video and I don't find any such repo. Can you please share the correct repo URL here?
@MrRadziu86 11 months ago
How can I join your meetings?
@jsj14 11 months ago
In the second approach, did you not use the OpenAI API?
@lfunderburk367 11 months ago
Yes, the second approach is not reliant on the OpenAI API.
@user-ng5ph1yq6y 11 months ago
bro you're amazing!
@fengyouzheng2434 11 months ago
Nice work.
@MrGtube007 11 months ago
Good video, Chris
@shoubhikdasguptadg9911 11 months ago
What is the loss function here? What is the model learning during training?
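The thread doesn't restate the objective, but for a typical causal-LM fine-tune the loss is token-level cross-entropy on next-token prediction, which Hugging Face models return when labels are supplied; a small sketch with a placeholder model:

from transformers import AutoModelForCausalLM, AutoTokenizer

name = "gpt2"  # placeholder; the video's model may differ
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

batch = tok("The model learns to predict the next token.", return_tensors="pt")
# Passing labels=input_ids makes the model compute the average next-token
# cross-entropy over the sequence.
out = model(**batch, labels=batch["input_ids"])
print(out.loss)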
@billya5249 11 months ago
Hi, if I wanted to keep training it on more data, do I just provide the path to the model I just trained and then train it again on the new data?
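Assuming a Hugging Face-style workflow (the thread doesn't confirm the exact stack), continuing training usually means loading the previously saved output directory instead of the original base checkpoint:

from transformers import AutoModelForCausalLM, AutoTokenizer

# "my-finetuned-model" is a placeholder for the output directory of the previous run.
model = AutoModelForCausalLM.from_pretrained("my-finetuned-model")
tokenizer = AutoTokenizer.from_pretrained("my-finetuned-model")

# Build the Trainer (or training loop) on the new dataset and call train() as before;
# the weights now start from the previous fine-tune rather than the base model.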
@imaduddin146 A year ago
Nice explanation. Maybe make it longer, up to 5 minutes, to show more examples of how MLflow can help.
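For instance, a minimal sketch of MLflow's tracking API (not code from the video):

import mlflow

# Log one training run: a parameter, a per-epoch metric, and an artifact.
with mlflow.start_run(run_name="demo"):
    mlflow.log_param("learning_rate", 1e-3)
    for epoch, loss in enumerate([0.9, 0.6, 0.4]):
        mlflow.log_metric("loss", loss, step=epoch)
    mlflow.log_artifact("model.pkl")  # hypothetical file produced by training

# Inspect runs afterwards with the local UI: run `mlflow ui` in the same directory.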
@galmoore3193 A year ago
It doesn't work for me. The generate_prompt() function doesn't seem to actually create the prompt. What did I do wrong?
@chrisalexiuk A year ago
Hey, Gal! Could you provide any more specific information about your issue?
@HuseyinABANOZ A year ago
Thanks for the content!
@HuseyinABANOZ A year ago
Thanks.
@richard3d7 A year ago
This was great... thanks!
@telmoc9041 A year ago
There is some misleading information in this tutorial: while the Anaconda environment was created within WSL2 / Ubuntu, the Python execution environment selected in VS Code was actually a pre-existing Anaconda environment installed on Windows. I lost an unreasonable amount of time trying to track down why my WSL conda environment wasn't showing up as shown. This does not matter for this tutorial -- the Python interpreter in VS Code is only used to run a Jupyter notebook cell -- but the discrepancy in environments between VS Code and WSL can lead to hard-to-track bugs if this environment setup is kept as is.
@mannapmt3041 5 days ago
Is there any way of doing it?
@sadam8739 A year ago
Where is the notebook?
@curisity A year ago
Very easily understood. Just curious: what is the difference between the Coursera DeepLearning.AI course and the FourthBrain AI courses? You seem to be the instructor for both. And even within FourthBrain, there is ML Engineering vs. MLOps?
@LoveThyself-wh8jg A year ago
Milan Mcgraw is a pretty racist human being. Join Fourth Brain at your own risk.