Great presentation, is there a chance I can get the slides?
@prestonfrasch9270 5 months ago
Thank you! This was a helpful overview! It helped me to better understand .autolog() especially.
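In case it helps anyone else: assuming .autolog() here refers to mlflow.autolog(), a minimal sketch of how it is typically used (the sklearn model and dataset below are placeholders, not from the talk):

import mlflow
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

mlflow.autolog()  # patch supported libraries once, up front, so runs are tracked automatically

X, y = load_iris(return_X_y=True)
with mlflow.start_run():
    LogisticRegression(max_iter=200).fit(X, y)  # params, metrics and the fitted model get logged for this run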
@user-wr4yl7tx3w 6 months ago
Is it like Weights & Biases?
@user-wr4yl7tx3w 6 months ago
Harvard is not known for producing significant AI research. I get Stanford and Berkeley but Harvard?
@chrisogonas 6 months ago
That was a remarkable presentation. I am not surprised that more Ph.D.s are headed into the industry than ever. Some of us feel vindicated by that trend🙂👍🏾. That said, this moment is revolutionary and it is palpable; it is all but impossible to ignore the AI storm or merely sleep through it. It is both exciting and a little bit scary. I am personally excited about the possibilities and the repertoire of tools available to humanity to make gigantic innovation leaps in just about every domain. Thanks FourthBrainAI, thanks panel for the insights and perspectives.
@tommyr8747 6 months ago
I understand the enthusiasm around AI's potential, but we must also confront some uncomfortable truths. The trend of Ph.D.s flocking to the industry, while beneficial in some respects, risks draining talent and focus from academic research, which is vital for deep, foundational discoveries. Moreover, the excitement surrounding AI must not eclipse the serious ethical challenges it poses. Issues like job displacement, privacy invasions, and widening social inequalities need urgent and thorough attention. Celebrating AI's capabilities without a robust plan to address these concerns is not only irresponsible but potentially dangerous. As we push the boundaries of what AI can do, let’s not forget to safeguard the societal values we cherish.
@chrisogonas 6 months ago
@@tommyr8747 Totally agree! This is a two-edged sword, and our greater success with it depends on how holistically we embrace the technology, considering the entire spectrum of its potential.
@BORCHLEO 7 months ago
Such an amazing and informative talk, thank you!!!🙏
@ArindamChattopadhya 7 months ago
😍
@jonahkollenberg1060 7 months ago
Wow, you are awesome!
@madhavparikh6747 7 months ago
Thank you!
@Pingu_astrocat21 7 months ago
Thank you for uploading this :)
@mannapmt3041 7 months ago
Hello, I need quick help here: the env that I set up in WSL doesn't show in VS Code. Can you help with this ... ?
@tennisdanoz 9 months ago
Can I get access to the slides?
@rogendothepoet3108 9 months ago
Loved it
@pablovera2102 9 months ago
Please share the Jupyter notebook or the code...
@atlant1707 9 months ago
please turn on the subtitles
@AmrMoursi-sm3cl 10 months ago
Thanks ❤❤❤❤
@kunalnikam9112 10 months ago
Can this be done using LoRA?
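In case it helps, a generic sketch of attaching LoRA adapters with the Hugging Face PEFT library; the base model, target modules and ranks below are illustrative assumptions, not the setup from this video:

from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder base model

lora_cfg = LoraConfig(
    r=8,                         # adapter rank
    lora_alpha=16,               # scaling factor
    target_modules=["c_attn"],   # GPT-2's attention projection; module names differ per architecture
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # only the small adapter matrices are trainable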
@rodralez 10 months ago
Link to the notebook: colab.research.google.com/drive/1JBtIiMA-LLCmqxGwzK6aokPKR6wRWdV0?usp=sharing
@randradefonseca 10 months ago
This is the best and most extensive explanation of RAG that I have seen on YouTube. Thank you!
@jakobbourne6381 10 months ago
Enhance your marketing efficiency and financial success using Phlanx's Caption Generator, an AI solution that simplifies the content creation process, allowing businesses to focus on core activities while reaping the benefits of increased online visibility.
@Peter-cd9rp 11 months ago
thank you. Just out of curiosity, can you share code/notebook as well?
@pmobley6526 11 months ago
A few of the links in the "summary" above result in "page not found". Also is there a link to the slides? Thanks for the great presentation btw. I always enjoy learning the cutting edge of time series forecasting.
@chrisogonas 11 months ago
This is exciting! I cannot wait to try it out on sensor data. Thanks folks.
@sadam8739 11 months ago
You say TimeGPT is open source, yet it is not downloadable - that is not open source, hahaha. What a waste of time - why can't you be clear?
@HomaKarimabadi 11 months ago
Multivariate time series is required for stock prediction. Any plans to extend TimeGPT to multivariate?
@pauljones9150 11 months ago
Why is multivariate time series required for stock prediction?
@homakar1 11 months ago
@@pauljones9150 Because correlations across stocks and markets are super important.
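To illustrate that point, a tiny sketch with synthetic data (the tickers and numbers are made up, not real market data): a univariate model sees each series in isolation, while the cross-series correlation matrix below is exactly the information a multivariate model can exploit.

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
market = rng.normal(0, 0.01, 250)                    # a shared "market" factor
returns = pd.DataFrame({
    "AAPL": market + rng.normal(0, 0.005, 250),      # each stock = market factor + idiosyncratic noise
    "MSFT": market + rng.normal(0, 0.005, 250),
    "SPY":  market + rng.normal(0, 0.002, 250),
})
print(returns.corr())  # strong off-diagonal correlations that a univariate forecaster never sees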
@rahulgaikwad5058 11 months ago
Amazing video, keep up the good work 🎉
@luisjoseve a year ago
awesome! thanks a lot.
@GX-uq1hm a year ago
When is the next MLOps program scheduled?
@GX-uq1hm a year ago
When does the next MLE program begin?
@thehiep4242 a year ago
Can you help me? I have a dataset with images of size 1024x536; can I change the input size for fine-tuning?
@rayhanegholampoor 19 days ago
from torchvision import transforms

# image_size is assumed to be defined earlier in the notebook, e.g. image_size = 512
transform = transforms.Compose([
    transforms.Resize((image_size, image_size)),  # scale to the target resolution
    transforms.CenterCrop(image_size),            # then crop to a square
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]),  # per-channel mean/std; ensure the dataset has 3 channels (RGB)
])
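And a minimal sketch of plugging that transform into a dataset (the folder path is a placeholder; ImageFolder expects one sub-folder per class), so your non-square 1024x536 images get resized and cropped on the fly:

from torchvision import datasets
from torch.utils.data import DataLoader

dataset = datasets.ImageFolder("data/my_images", transform=transform)
loader = DataLoader(dataset, batch_size=4, shuffle=True)
images, labels = next(iter(loader))  # images: 4 x 3 x image_size x image_size tensor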
@chrisogonas a year ago
Excellent session, Salwa and Luis. Thanks FourthBrain 👏👏👏
@0xjeph a year ago
What resources do you use to finetune the model?
@Delmark1904 a year ago
Now I receive "Out of memory on GPU" every time, even though I use Colab Pro. Does anyone have any insights on this issue?
@RamyHassan-t2r a year ago
I've upgraded Colab, and now I have 15 GB of GPU RAM, and I'm receiving this error: torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 50.00 MiB (GPU 0; 14.75 GiB total capacity; 13.29 GiB already allocated; 6.81 MiB free; 13.44 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF However, I've already set --max_train_steps=10. Any idea what the minimum required RAM is, and how can I optimize it?
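Not sure of the exact minimum for this notebook, but 14-15 GB is often tight at full resolution. The usual levers are a smaller resolution, batch size 1 with gradient accumulation, mixed precision (fp16), gradient checkpointing and an 8-bit optimizer; check the training script's --help for the matching flags. The allocator hint from the error message itself can be set like this (a sketch only; it reduces fragmentation, not total memory use, and must run before the first CUDA allocation):

import os
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"  # allocator hint suggested by the error message

import torch
props = torch.cuda.get_device_properties(0)
print(f"{props.name}: {props.total_memory / 1e9:.1f} GB total")  # sanity-check what the runtime actually gives you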
@fintech1378 a year ago
How about fine-tuning for video?
@chiefmiester3801 a year ago
A tutorial on using Dreambooth as well would be super cool
@BT-te9vx a year ago
Thanks for sharing - it was helpful
@infolabai a year ago
🎯 Key Takeaways for quick navigation:
00:00 🎉 Introduction to fine-tuning LLMs with Ludwig framework.
00:27 📚 Learn to fine-tune large language models (LLMs) using Ludwig framework.
00:55 🧠 Ludwig uses a declarative programming approach to generate fine-tuned LLMs with minimal code.
02:16 🛠️ Speaker Piero Molino introduces Ludwig, an open-source deep learning framework with declarative capabilities.
03:36 🚀 Ludwig simplifies training and fine-tuning LLMs, reducing coding effort and time.
07:08 🔧 Ludwig configuration specifies inputs, outputs, and task types to create LLMs with desired architectures.
09:43 🚀 Fine-tuning LLMs with Ludwig requires minimal code - about 10 lines - to achieve effective results.
13:44 📊 Ludwig supports large LLMs like GPT-3, and handles distributed training with Ray for models exceeding GPU memory.
20:28 📊 Evaluation and visualization tools in Ludwig help analyze LLM performance.
23:06 🔄 Switching LLM models or transfer learning with Ludwig is straightforward by adjusting configuration parameters.
25:30 🚀 Developing machine learning models using traditional approaches can be time-consuming and resource-intensive, often requiring months of development and deployment.
26:24 🛠️ Declarative interfaces and approaches, like those used in tools such as DBT and Terraform, can simplify complex tasks in data engineering and infrastructure management.
27:18 ⚙️ Combining declarative and automation approaches in machine learning reduces time to value and opens opportunities for engineers with varying levels of ML expertise.
28:41 🧠 Ludwig's architecture revolves around ECD (Encoder-Combiner-Decoder), allowing flexibility in encoding different data types for various machine learning tasks.
29:48 🧩 Ludwig's configuration-driven approach lets you build different machine learning models by specifying encoders, preprocessors, architectures, and more.
30:44 📊 The flexibility in the configuration file allows creating multi-modal, multi-task models, enabling tasks like image captioning, regression, audio processing, and more.
33:57 📝 Ludwig's configuration system can be easily extended with custom encoders, expanding the platform's capabilities for various applications.
37:56 ⚙️ Scaling Ludwig is facilitated through the Ray backend, enabling data parallelism and model parallelism for larger data sets and models.
43:45 🌟 Predibase enhances Ludwig with additional components, making it an enterprise platform for easy model building, iteration, deployment, and collaboration.
46:04 🌐 Ludwig supports various LLMs, including those available on Hugging Face, and upcoming versions will include built-in support for models like Alpaca and LLaMA.
48:47 🏁 When fine-tuning LLMs on resource-limited platforms like Google Colab, consider using smaller versions of models and optimizing for performance per available resources.
50:54 🧠 Fine-tuning LLMs can achieve comparable performance to full fine-tuning or even better in some cases, depending on data similarity.
51:34 🔄 Freezing specific parts of a pre-trained model can be effective when the new data is similar to the original model's training data.
53:51 🏆 Hyperband with Bayesian optimization is recommended for hyperparameter optimization due to its efficient resource usage.
55:29 🌟 Keeping up with AI advancements is challenging, and following influential lab discussions or seminar series can help.
57:21 🚀 Beginners aiming to add ML capabilities to applications can start with high-level tools like Ludwig.
58:30 🔍 For researchers, exploring historical AI papers can provide valuable insights into the field's progression.
01:00:08 🧩 Building with high-level abstractions like Lego blocks is crucial in the evolving landscape of AI.
01:00:47 🤝 Joining AI communities like Stanford MLSys can help stay updated and connected within the field.
Made with HARPA AI
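For anyone who wants to try the declarative idea from that summary, a minimal generic Ludwig sketch (a plain text classifier on made-up data, not the exact LLM fine-tuning config from the talk; recent Ludwig versions add model_type: llm and a base_model entry for that, so check the docs for the current schema):

import pandas as pd
from ludwig.api import LudwigModel

config = {
    "input_features": [{"name": "review", "type": "text"}],
    "output_features": [{"name": "sentiment", "type": "category"}],
    "trainer": {"epochs": 3},
}

df = pd.DataFrame({
    "review": ["great product", "terrible support", "works as expected"],
    "sentiment": ["positive", "negative", "positive"],
})

model = LudwigModel(config)
train_stats, _, output_dir = model.train(dataset=df)  # preprocessing, training and checkpointing are driven by the config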
@somaiamahmoud1675 a year ago
Can you provide the fine-tuning code? Thank you.
@MateoRiosQuerubin a year ago
Thanks for the video, it has been really useful! Do you know how to keep track of the training logs?
@mehdimohsenimahani4150 a year ago
amazing
@LordSagavalta a year ago
You forgot one cut there at 22:11, haha. Nice work though, thank you!
@saisha_playz5355 a year ago
Hi, I tried navigating to the GitHub URL visible at the 1:49 mark of this YouTube video and I don't find any such repo. Can you please share the correct repo URL here?
@MrRadziu86 a year ago
How to join your meetings?
@jsj14 a year ago
In the second approach, did you not use the OpenAI API?
@lfunderburk367 a year ago
Yes. The second approach is not reliant on OpenAI API
@孟-j4x a year ago
bro you're amazing!
@fengyouzheng2434 a year ago
Nice work.
@MrGtube007 a year ago
Good video Chris
@shoubhikdasguptadg9911 a year ago
What is the loss function here? What is the model learning during training?