This video gives an introduction to using existing pre-trained AI models in your own solutions with your own data. I introduce Hugging Face and their AI Model Hub and show how you can use them to test Q&A (question and answering) functionality with your own data. We even pull some random content from the internet and ask questions about content the model has never seen before. I then give an introduction to Transfer Learning and BERT, a state-of-the-art Natural Language Processing (NLP) model that powers much of today's AI functionality. We look at how BERT works, how it is pre-trained on Wikipedia and BookCorpus, and why it is advantageous to use a pre-trained model rather than training your own from scratch. We then look at how BERT is fine-tuned on a Stanford Q&A dataset called SQuAD 2.0 so it learns how to answer questions (so that you can use it with your own data).
Finally, we code up some Q&A routines in Python in Jupyter Notebooks hosted in Google Colab, so you can see how easy it is to embed pre-trained AI models in your own solutions with your own data.
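As a rough sketch of what the pipeline-based approach looks like (the model name below is just one example of a SQuAD 2.0 fine-tuned checkpoint from the Hub, not necessarily the exact one used in the video):

from transformers import pipeline

# Load a question-answering pipeline; any Hub model tagged for
# question-answering and fine-tuned on SQuAD 2.0 can be swapped in here.
qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

# Your own data: any passage of text, even one the model has never seen.
context = (
    "BERT was pre-trained on English Wikipedia and BookCorpus, and can be "
    "fine-tuned on SQuAD 2.0 so it learns to answer questions about a passage."
)

result = qa(question="What was BERT pre-trained on?", context=context)
print(result["answer"], result["score"])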
00:00 - Intro
01:20 - Hugging Face Model Hub
03:49 - Using a BERT model on HuggingFace
08:49 - Introduction to Transfer Learning
10:12 - Understanding BERT
15:55 - Datasets used to pre-train BERT (Wikipedia and BookCorpus)
18:41 - Fine Tuning BERT to understand Q&A with SQuAD 2.0
22:24 - Coding our model with HuggingFace Pipelines using Google Colab
29:34 - Coding our model with TensorFlow using Google Colab
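If you prefer working with the model directly in TensorFlow (as in the last chapter), the rough shape of the code is below; the checkpoint name is a commonly used SQuAD-fine-tuned BERT and is an assumption, not necessarily the exact one shown in the video.

import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForQuestionAnswering

# A BERT checkpoint fine-tuned on SQuAD (example choice, assumed here)
model_name = "bert-large-uncased-whole-word-masking-finetuned-squad"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = TFAutoModelForQuestionAnswering.from_pretrained(model_name)

question = "What datasets was BERT pre-trained on?"
context = "BERT was pre-trained on English Wikipedia and BookCorpus."

# Tokenize question and context together and run the model;
# it returns logits for where the answer span starts and ends.
inputs = tokenizer(question, context, return_tensors="tf")
outputs = model(**inputs)
start = int(tf.argmax(outputs.start_logits, axis=-1)[0])
end = int(tf.argmax(outputs.end_logits, axis=-1)[0])

# Decode the tokens between the predicted start and end positions
answer = tokenizer.decode(inputs["input_ids"][0, start : end + 1])
print(answer)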