Love this video, and the analysis behind making training more incremental and modular. Thinking further about scalability: machine learning usually runs into a lot of centralization, or vertical and monolithic scaling, which is annoying... I hope that, as in microservice architectures, models can learn incrementally by reusing previous knowledge. For example, a model that first learns the language itself, then, building on the pre-trained neurons of the output layer, learns medical information, laws, etc. That way you reuse training and it scales. I wonder why I don't see any videos talking about this.
@ifranrahman • 2 years ago
Hi. Is there any paper that shows the incremental learning approach you have demonstrated?
@growwitharosh5052 • 3 years ago
Hi, is it possible to add incremental learning with creme to a CNN for facial emotion recognition?
@RasaHQ • 3 years ago
(Vincent here) The creme project recently merged with another project to become river, more info here: github.com/online-ml/river. While I know some of the creators of that package, I should stress that it's completely unrelated to Rasa. Two final comments: what use-case exactly do you have for facial emotion recognition? There are ethical concerns in that realm, so I might run through this checklist before proceeding: deon.drivendata.org/. Finally, when you train on batches of data, you're also technically learning on a stream; Keras and PyTorch natively support online learning in that sense.
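[Editor's note] The streaming idea mentioned above can be sketched without any library. Below is a stdlib-only toy online logistic regression that updates its weights one example at a time; river's real API differs (its models expose `learn_one` on dict-valued features), so treat this purely as an illustration of the mechanic:

```python
import math
import random

class OnlineLogisticRegression:
    """Toy online learner: one SGD update per incoming example."""

    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict_proba_one(self, x):
        z = self.b + sum(wi * xi for wi, xi in zip(self.w, x))
        return 1.0 / (1.0 + math.exp(-z))

    def learn_one(self, x, y):
        # Gradient step on a single example -- no full-dataset pass needed,
        # which is what makes this "online" / streaming learning.
        err = self.predict_proba_one(x) - y
        self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]
        self.b -= self.lr * err

random.seed(0)
model = OnlineLogisticRegression(n_features=2)
for _ in range(2000):
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]
    y = 1 if x[0] + x[1] > 0 else 0  # simple linearly separable stream
    model.learn_one(x, y)

print(model.predict_proba_one([0.8, 0.8]) > 0.5)    # True
print(model.predict_proba_one([-0.8, -0.8]) < 0.5)  # True
```

The same loop shape applies whether the stream comes from a sensor, a log file, or mini-batches during normal training.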
@interesting2906 • 3 years ago
Is there any chance I can speed up training my model from scratch on my laptop? I'm not talking about adding new data to an existing dataset, but about speeding up the very first training of the model. Can I do anything about it without buying a new PC with a large amount of RAM, GPUs, and so on? Thank you in advance.
@RasaHQ • 3 years ago
Maybe but it depends on why your training is slow. In general, I've found that the most common cause for a Rasa model training really slowly is a large number of intents, so reducing that might help. -R
@atinsood • 4 years ago
Vincent, in addition to the reduced number of epochs, wouldn't it make sense to oversample the A and B datasets and undersample the original dataset when re-training?
@RasaHQ • 4 years ago
(Vincent here) You could experiment with it, but you need to remain careful. You want to prevent "catastrophic forgetting".
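[Editor's note] One common hedge against catastrophic forgetting is rehearsal: instead of fine-tuning on the new data alone, mix it with a random replay sample of the original data so the model keeps seeing the old distribution. A minimal sketch (the dataset contents and the `replay_fraction` value are illustrative, not anything Rasa ships):

```python
import random

def build_finetune_set(old_data, new_data, replay_fraction=0.3, seed=0):
    """Combine the new examples with a random sample of the old ones.

    Fine-tuning on this mix keeps the original distribution in view,
    which helps prevent the model from forgetting what it already knew.
    """
    rng = random.Random(seed)
    n_replay = int(len(old_data) * replay_fraction)
    replay = rng.sample(old_data, n_replay)
    mixed = replay + list(new_data)
    rng.shuffle(mixed)
    return mixed

# Hypothetical intent data: 100 old examples, 20 new ones.
old = [("utterance_%d" % i, "greet") for i in range(100)]
new = [("new_utterance_%d" % i, "goodbye") for i in range(20)]

train = build_finetune_set(old, new)
print(len(train))  # 30 replayed old examples + 20 new = 50
```

How large `replay_fraction` should be is exactly the kind of thing to experiment with, as the reply above suggests.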
@atinsood • 4 years ago
@@RasaHQ any chance the notebook/code and dataset you ran the numbers on are available on GitHub? I could try the experiment in the same notebook and report back the numbers.
@RasaHQ • 4 years ago
@@atinsood the command-line arguments are described here: blog.rasa.com/rasa-new-incremental-training/ and the dataset we use can be found here: github.com/RasaHQ/rasa-demo
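[Editor's note] For reference, the linked blog post covers the fine-tuning CLI; in Rasa 2.x it looked roughly like the commands below (check the post for the exact flags available in your version):

```shell
# Train a base model first
rasa train

# Later, fine-tune the existing model on the updated data instead of
# training from scratch; --epoch-fraction runs only a fraction of the
# epochs configured in config.yml
rasa train --finetune --epoch-fraction 0.2
```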
@avinashpaul1665 • 4 years ago
Do you also keep extra empty classes in the output? For example, if A is trained with 3 classes and B is trained with 2 classes, the final model should output 5 classes.
@RasaHQ • 4 years ago
(Vincent here) Nope, as the video mentions, you *must* keep the size of the weights the same. So the current approach will not work if you add new labels.
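[Editor's note] The constraint can be pictured as a shape check: warm-starting only works when the saved output layer has exactly the same shape as the new one. A toy illustration (not Rasa's actual loading code):

```python
def can_warm_start(saved_weights, n_features, n_classes):
    """A saved output layer can initialize a new model only if its shape
    matches exactly: one weight vector per class, one entry per feature."""
    return (len(saved_weights) == n_classes
            and all(len(row) == n_features for row in saved_weights))

# Output layer trained on 3 classes over 5 features:
saved = [[0.0] * 5 for _ in range(3)]

print(can_warm_start(saved, n_features=5, n_classes=3))  # True: same shape
print(can_warm_start(saved, n_features=5, n_classes=5))  # False: new labels added
```

This is why adding new intents/labels breaks fine-tuning: the extra classes change the weight shapes, so the old weights can no longer be loaded as-is.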
@bofenghuang7763 • 3 years ago
Does the model overfit if we repeat this process every time we get new training examples? Should we set a limit on the maximum number of re-trainings? Thanks in advance.
@RasaHQ • 3 years ago
(Vincent here) This is hard to say upfront. It will depend a lot on your dataset. On the datasets that I've tried, it seems like 5-10 epochs of fine-tuning is enough.
@TechWithUmraz • 3 years ago
Hey, can I have sample code for a Keras incremental-learning model for binary image classification (i.e. loading a previous model and adding new weights and features to it)? Thanks :)
@RasaHQ • 3 years ago
(Vincent here) The incremental training feature shown in the video is designed to work for Rasa pipelines and isn't made to support general Keras models.
@nishantkumar3997 • 3 years ago
Hi Vincent, can you also link the research paper for this method of incremental learning? #RASA
@RasaHQ • 3 years ago
(Vincent here) We never wrote a paper about this feature. To my knowledge it's also not that novel: spaCy does something very similar, and the partial_fit mechanic in scikit-learn can also be used for this.
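[Editor's note] scikit-learn's `partial_fit` mechanic looks roughly like the sketch below. Note that all class labels must be declared on the very first call, even if some only appear in later batches, which mirrors the fixed-output-size constraint discussed earlier in this thread:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.RandomState(0)
model = SGDClassifier(random_state=0)

# Every label the model will ever see must be listed up front:
classes = np.array([0, 1])

# Feed the data in batches; each partial_fit call updates the existing
# weights instead of retraining from scratch.
for batch in range(10):
    X = rng.randn(50, 2)
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    model.partial_fit(X, y, classes=classes)

print(model.predict([[2.0, 2.0]])[0])    # 1
print(model.predict([[-2.0, -2.0]])[0])  # 0
```

Any scikit-learn estimator that exposes `partial_fit` (e.g. `SGDClassifier`, `MultinomialNB`, `MiniBatchKMeans`) supports this incremental style.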
@nishantkumar3997 • 3 years ago
@@RasaHQ Hi Vincent, thank you for the information. I'll also check out spaCy. Again, love your videos Vincent, they're really good. Thank you.