AI Weekly Update - January 24th, 2022

2,549 views

Connor Shorten

Comments: 15
@TimScarfe 2 years ago
Great to see these back!
@connor-shorten 2 years ago
Thanks Tim, appreciate it!
@domenickmifsud 2 years ago
Thanks!
@connor-shorten 2 years ago
You're welcome! Thanks for watching!
@billykotsos4642 2 years ago
Weaviate's technology looks REALLY interesting. Do they have something akin to a white paper covering the technology?
@connor-shorten 2 years ago
Thanks Billy, really happy to hear that! Maybe Bob will drop into the comments soon with what they recommend -- I'd suggest checking out the Weaviate documentation, scrolling down to Tutorials, and looking at "How to query data?". I think that will provide a lot of inspiration for the GraphQL API for neurosymbolic search. More than happy to answer any other questions you have about this; also please check out the Weaviate Slack, it's a great community for these kinds of discussions!
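For a concrete starting point before digging into the docs, here is a minimal sketch of that kind of query using the Weaviate Python client -- the "Article" class, its properties, the search concept, and the localhost URL are all placeholder assumptions, not something from the video:

import weaviate

# Connect to a locally running Weaviate instance (URL is an assumption).
client = weaviate.Client("http://localhost:8080")

# GraphQL-style Get query on a hypothetical "Article" class, combining
# structured fields with a semantic nearText filter (assumes a text
# vectorizer module is configured for the class).
result = (
    client.query
    .get("Article", ["title", "summary"])
    .with_near_text({"concepts": ["neural search"]})
    .with_limit(5)
    .do()
)
print(result)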
@billykotsos4642 2 years ago
@@connor-shorten Oh thanks! Nice!
@SantoshGupta-jn1wn 2 years ago
Re PromptBERT: the paper gives explanations for why it outperforms the original BERT, but they also compare it to SOTA models that fine-tune the whole model using either the CLS token or average pooling, and PromptBERT outperforms those too -- it doesn't seem to give a solid explanation why. Any idea?
@connor-shorten 2 years ago
They probably don't want to give away too many of the secrets haha, but generally this idea of prompting large language models seems like the future of many applications -- the explanation being that the contextual representations are so powerful that we have been mis-evaluating these systems this entire time by testing them without proper context.
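To make the prompting idea concrete, here is a rough sketch of prompt-based sentence embeddings in the spirit of PromptBERT, using HuggingFace Transformers -- the exact template string here is my assumption, the paper experiments with several:

import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def prompt_embedding(sentence):
    # Wrap the sentence in a template and read out the representation at [MASK].
    text = f'This sentence : "{sentence}" means [MASK] .'
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, hidden_dim)
    mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero()[0, 1]
    return hidden[0, mask_pos]  # the [MASK] vector acts as the sentence embedding

print(prompt_embedding("Prompting gives the encoder extra context.").shape)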
@SantoshGupta-jn1wn 2 years ago
@@connor-shorten Kinda mind-blowing. It definitely makes sense with limited data, but I would imagine that with enough data, fine-tuning with just the CLS token would be all the beneficial nudging a model can get for the task; the attention heads and linear layers should get enough information to pinpoint the task. But the prompting also outperformed on the unsupervised models (I'm guessing the unsupervised training had large amounts of data). So maybe having some input tokens dedicated to storing task information frees up the attention heads / linear layers to stay focused on their original purpose. Maybe attention heads are not well suited for tasks not directly involved with token-to-token attention (i.e. anything outside of token classification).
@connor-shorten 2 years ago
@@SantoshGupta-jn1wn I agree with all the conclusions in this post. W.r.t. fine-tuning, you might be interested in the techniques that do continuous prompt tuning -- there's no reason the prompt needs to be an actual text sequence if you are going to optimize it with gradients.
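In case it helps, here is a minimal sketch of that continuous ("soft") prompt tuning idea in PyTorch: the backbone is a frozen pretrained BERT, while the prompt length, the two-class head, and the readout from the first position are illustrative assumptions rather than any specific paper's recipe:

import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
for p in model.parameters():
    p.requires_grad = False  # the pretrained weights stay frozen

n_prompt = 20
hidden = model.config.hidden_size
soft_prompt = nn.Parameter(torch.randn(n_prompt, hidden) * 0.02)  # the "prompt" is just trainable vectors
head = nn.Linear(hidden, 2)  # hypothetical 2-class task head
optimizer = torch.optim.AdamW([soft_prompt, *head.parameters()], lr=1e-3)

def forward(sentences):
    inputs = tokenizer(sentences, return_tensors="pt", padding=True)
    tok_emb = model.embeddings.word_embeddings(inputs["input_ids"])    # (B, T, H) word embeddings only
    prompt = soft_prompt.unsqueeze(0).expand(tok_emb.size(0), -1, -1)  # (B, n_prompt, H)
    x = torch.cat([prompt, tok_emb], dim=1)                            # prepend the learned prompt
    prompt_mask = torch.ones(x.size(0), n_prompt, dtype=inputs["attention_mask"].dtype)
    mask = torch.cat([prompt_mask, inputs["attention_mask"]], dim=1)
    out = model(inputs_embeds=x, attention_mask=mask).last_hidden_state
    return head(out[:, 0])  # classify from the first (prompt) position

logits = forward(["prompting without text", "gradients optimize the prompt"])
logits.sum().backward()  # gradients reach soft_prompt and the head, not the frozen model
print(logits.shape)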
@InquilineKea 2 years ago
Wait, Ray Kurzweil is on a paper???
@connor-shorten 2 years ago
Haha, maybe -- I don't really study the authors' names. Which one did you see?