CBOW uses the context to predict the target word and learns context embeddings, while skip-gram uses the target word to predict its context and learns target-word embeddings. I believe I understood this correctly from the explanation. Thanks.
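For anyone who wants to try this distinction out, it maps directly onto the sg flag in gensim's Word2Vec. A minimal sketch on a toy corpus, assuming gensim 4.x:

```python
from gensim.models import Word2Vec

# Toy corpus; each sentence is a list of tokens.
sentences = [
    ["the", "quick", "brown", "fox"],
    ["the", "lazy", "dog", "sleeps"],
]

# sg=0 -> CBOW: the context window predicts the target word.
cbow = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=0)

# sg=1 -> skip-gram: the target word predicts each context word.
skipgram = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1)

print(cbow.wv["fox"].shape)  # (50,)
```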
@736939 2 years ago
6:00 Are these different embeddings or the same? And if they are the same, how can they be implemented in PyTorch so that they share the same representation?
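For anyone with the same question: word2vec-style models actually keep two separate matrices, one for the input/context side and one for the output/target side, and by default they are different. If you do want a single shared matrix in PyTorch, one common approach is weight tying, where the output layer reuses the embedding weights. A minimal sketch, with illustrative vocabulary and dimension sizes:

```python
import torch
import torch.nn as nn

class CBOW(nn.Module):
    """Minimal CBOW sketch where the input and output embeddings are
    one shared matrix (weight tying). Sizes are illustrative."""

    def __init__(self, vocab_size=10_000, embed_dim=100):
        super().__init__()
        self.embeddings = nn.Embedding(vocab_size, embed_dim)
        # Projection from the hidden vector back onto the vocabulary.
        self.output = nn.Linear(embed_dim, vocab_size, bias=False)
        # Tie the weights: the output layer reuses the embedding matrix,
        # so context and target words share one representation.
        self.output.weight = self.embeddings.weight

    def forward(self, context_ids):
        # context_ids: (batch, context_size) -> average the context vectors.
        hidden = self.embeddings(context_ids).mean(dim=1)
        return self.output(hidden)  # logits over the vocabulary

model = CBOW()
logits = model(torch.randint(0, 10_000, (4, 6)))  # batch of 4, window of 6
```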
@asm.i.ta. 4 years ago
Loved this. Thanks!
@aeronszalai4190 3 years ago
Thanks for the video! However, I was already familiar with the details of these two algorithms. What I'm actually interested in is how I can make RASA use spaCy's dense word embeddings (token.vector). I have my own spaCy model with a complete language model for Hungarian in it (made with skip-gram, of course), and I want to use it in RASA. Is it capable of that out of the box, or will I have to write custom code? Thanks!
@RasaHQ 3 years ago
(Vincent here) I'd love to help out here, but maybe it's best to use our public forum for that. Feel free to ask away at forum.rasa.com and ping me there by adding @koaning. Also, I host a side project that has pre-trained embeddings for Hungarian, and we also support gensim if you've trained your own: rasahq.github.io/rasa-nlu-examples/docs/featurizer/gensim/
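For future readers: the usual out-of-the-box route is the spaCy components in the Rasa pipeline, which pick up token.vector from whatever spaCy model is loaded. A rough sketch of a config.yml under that assumption (the model name hu_custom_model is hypothetical):

```yaml
language: hu
pipeline:
  # SpacyNLP loads the spaCy model; the model name below is hypothetical.
  - name: SpacyNLP
    model: "hu_custom_model"
  - name: SpacyTokenizer
  # SpacyFeaturizer exposes the model's dense vectors (token.vector) as features.
  - name: SpacyFeaturizer
  - name: DIETClassifier
    epochs: 100
```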
@RasaHQ 3 years ago
On the forum I can give longer-form answers with images. The YouTube comment system is a bit limited for that use case.
@aeronszalai4190 3 years ago
@RasaHQ Thank you for your quick response! I have created a topic for this question on the Rasa forum.
@RasaHQ 3 years ago
@aeronszalai4190 Could you make sure that I'm tagged? You can do so by adding @koaning to a message.