Machine Learning Q and AI author interview with Sebastian Raschka

2,124 views

Sophia Yang

1 day ago

Comments: 10
@abdulhamidmerii5538 · 7 months ago
I am halfway through reading the book myself. It is absolutely amazing. Simple and high-level enough to keep you immersed while commuting, but still rich enough to learn quite a few things along the way!
@CryptoMaN_Rahul · 7 months ago
00:03 Sebastian Raschka shares his background and work experience.
02:18 Sebastian Raschka discusses his motivation for writing a Q&A book on machine learning.
06:27 The author's approach to writing the book.
08:35 Using earlier layers in neural networks for embeddings depends on the dataset and task.
12:42 Adding layers without removing any can improve model performance.
14:55 Pruning models to create more efficient, smaller models.
19:21 A parameter-efficient model adaptation technique is explained.
21:22 QLoRA allows using only one GPU instead of eight for fine-tuning (see the LoRA sketch after this list).
25:21 Discussion of LLaMA adapters and their similarities.
27:37 Machine learning foundations remain consistent, with potential additions in newer editions.
31:52 Utilizing multiple GPUs for faster training.
33:56 Fully sharded data parallelism (FSDP) in PyTorch is a combination of data and tensor parallelism.
37:45 Using Fabric to simplify multi-GPU practice in machine learning.
39:46 Discussion of fine-tuning models for specific use cases.
43:53 Models may look good on the surface but lack truthfulness compared to RLHF-tuned models.
45:56 Europe's challenge to train an ML model with limitations.
49:45 Discussion of terminology in deep learning architectures.
51:52 Comparison between the encoder and decoder in autoencoders.
55:49 Quantization methods and their implications.
57:46 Encouragement to reach out with questions and check out the book.
Crafted by Merlin AI.
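The QLoRA point at 21:22 is worth unpacking. As a rough illustration of the LoRA idea behind it, here is a minimal sketch in plain PyTorch; the LoRALinear class, rank, and layer sizes are hypothetical choices for illustration, not Sebastian Raschka's implementation, and a real setup would typically use a library such as Hugging Face peft (QLoRA additionally stores the frozen base weights in 4-bit precision):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA wrapper: freeze W, learn a low-rank update B @ A.

    A hypothetical sketch of the idea discussed in the interview,
    not a reference implementation.
    """

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # frozen pretrained weight
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # Low-rank factors: only these are trained.
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = x W^T + scale * x A^T B^T
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable:,} of {total:,} parameters")
```

With rank 8 on a 768x768 layer, only about 12,000 of roughly 600,000 parameters are trained; freezing the base weights is what lets fine-tuning fit on a single GPU instead of eight.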
@devlearnllm · 1 year ago
The book was a joy to read. Thank you, Sophia, for letting us hear the author's thoughts.
@SophiaYangDS · 1 year ago
Thank you! Yes, this book is amazing!
@agdsam · 1 year ago
Thanks, Sophia and Raschka! This interview was packed with practical insights.
@SophiaYangDS · 1 year ago
Thank you 🙏
@Zale370 · 1 year ago
00:31 💡 Sebastian Raschka's book "Machine Learning Q and AI" was chosen by the book club, and he is honored and excited to discuss it.
04:27 💡 The book covers 30 different machine learning topics that didn't fit into Sebastian's previous books, providing a next-step resource for those already familiar with machine learning.
09:08 💡 Earlier layers in a neural network may be used for embeddings depending on the dataset and task, but often replacing the last layer and fine-tuning the other layers is sufficient.
19:36 💡 The lottery ticket hypothesis, which involves finding smaller sub-networks with performance similar to the original network, can be applied in NLP for model distillation or efficiency.
30:24 💡 Different parallelism strategies, such as tensor, model, pipeline, or sequence parallelism, exist for training large language models, but some are more popular or practical than others.
51:20 💡 The terms encoder and decoder in language models can be misleading, as the architectures are often the same with different training objectives. Alternative names like "autoencoder" or "autoregressive" may be more informative.
54:00 💡 Terminology helps distinguish models by use case: encoders require additional processing for specific tasks, while autoregressive models allow tasks like few-shot learning.
55:29 💡 Pruning and quantization techniques can be applied to large language models for memory savings and inference speedups, but may add computational cost. The bitsandbytes and "Llamas" libraries provide quantization methods (see the toy quantization sketch after this list).
56:51 💡 Recent libraries have improved the efficiency of quantization methods for large language models, offering both memory savings and faster inference while maintaining similar output quality.
57:36 💡 Sebastian Raschka appreciates the technical questions, is open to further discussion, and hints at a possible second volume in the future.
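As a toy illustration of the quantization trade-off mentioned at 55:29 and 56:51, below is a minimal per-tensor absmax int8 quantize/dequantize sketch in plain PyTorch. This is for intuition only; the function names and the 4096x4096 weight are made up for the example, and real libraries such as bitsandbytes use finer-grained per-block schemes:

```python
import torch

def absmax_quantize(w: torch.Tensor):
    """Toy per-tensor absmax quantization to int8 (illustrative only)."""
    scale = w.abs().max() / 127.0          # map the largest magnitude to 127
    q = torch.clamp((w / scale).round(), -127, 127).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return q.float() * scale               # approximate reconstruction

w = torch.randn(4096, 4096)               # a hypothetical fp32 weight matrix
q, scale = absmax_quantize(w)
w_hat = dequantize(q, scale)

print(f"fp32 size: {w.numel() * 4 / 2**20:.1f} MiB")
print(f"int8 size: {q.numel() * 1 / 2**20:.1f} MiB")   # ~4x memory saving
print(f"max abs error: {(w - w_hat).abs().max().item():.4f}")
```

Memory drops roughly 4x (fp32 to int8) at the cost of a small reconstruction error and some extra dequantization work at inference time, which is the trade-off the summary describes.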
@danieltost · 1 year ago
That's an awesome book and a super technical chat. Loved every minute. Keep up the great work, Sophia!! ❤
@SophiaYangDS · 1 year ago
Thanks Daniel! I loved the book and the discussion. Sebastian is the best! ❤️
@adeveloper6653 · 1 year ago
kzbin.info/www/bejne/nmiTlp-tZdJ2psk Does anyone know about the "New Europe Challenge" he is talking about? I can't seem to find it anywhere.