Hope this will pave the road to Symbolic AGI / Symbolic ASI, where AI is fully explainable and provable.
@cbxxxbc 3 months ago
Thank you for the presentation that clearly spells out NLP tasks in RL!
@jason-ps6mf 6 months ago
First author talks about his paper!
@harishkumarsharishkumars848 6 months ago
Where can we get a course in Symbolic AI? Please point me to universities' courses.
@joe_hoeller_chicago 6 months ago
To a certain extent, it doesn’t really exist sans a few elite universities. Just Google the math for it.
@YvesNewman 9 months ago
Love this talk Jeff! Awesome summary
@ddsmax 1 year ago
4 years ahead of his time.
@generativeresearch 1 year ago
The issue of scalability plagues Tree-LSTM structures.
@garimpovirtual4660 1 year ago
That tool is exceptional! Please, I would like to test it! When will it be available to the public?
@AR_7333 1 year ago
So elegantly presented. Kudos to Colin and team!
@chitreshkaushik6980 1 year ago
Awesome content! Where can I get the PPTs used in this lecture?
@bologcom 1 year ago
9:41 through 16:14 is very informative for learning CCG supertagging.
@enghelp1998 1 year ago
10:37: is that garden-path sentence intentional?
@NavidAnjumAadit 1 year ago
Nice talk, but you should slow down a bit; it's too fast.
@Takamanoharra 1 year ago
How can we try it?
@billykotsos4642 2 years ago
NO mention of "overfitting" anywhere in the past few years of work on these huge models and datasets. The fundamentals of ML are out the window.
@charlie-fd5mp 1 year ago
because they don't overfit.
@nathanhelmburger 2 years ago
You cover so many different good ideas here! What a ride!
@PokemeisterSarabicum 2 years ago
It doesn't load :( Can you fix it, or did you take it offline?
@nicolaslopezcarranza5724 2 years ago
Still relevant as ever in 2022
@InfinityDz 2 years ago
What is the gradient of the Frobenius-norm-based error function with respect to W and b? It's not in the paper, and I couldn't find it in the references given.
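(Not from the paper: a minimal worked sketch, under the assumption that the error is simply the squared Frobenius/Euclidean norm of a linear reconstruction residual with no nonlinearity; the paper's exact loss may differ.)

```latex
% Assumed (illustrative) loss: squared norm of the residual r = Wx + b - t.
\[
E(W,b) = \tfrac{1}{2}\,\lVert Wx + b - t \rVert^2,
\qquad r := Wx + b - t
\]
% Gradients with respect to W and b:
\[
\frac{\partial E}{\partial W} = r\,x^{\top},
\qquad
\frac{\partial E}{\partial b} = r
\]
% If an elementwise nonlinearity f is applied inside the norm, replace r by
% (f(Wx+b) - t) \odot f'(Wx+b) in both gradients (chain rule).
```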
@stanslausmwongela5262 2 years ago
Clear and coherent explanations
@marieshino2472 2 years ago
Clear explanation!
@MuhammadAzhar-eq3fi 2 years ago
Best video on rule extraction. Thanks.
@q0x 2 years ago
Do we always have to choose the number of samples to draw at the beginning, or can we draw more later?
@sarahooshmand1830 3 years ago
Awesome! Thanks for sharing.
@leonardTsn 3 years ago
Where can I get the slides from?
@brandomiranda6703 3 years ago
btw, I really appreciated the "functional" view of CCA. That was especially helpful to me.
@brandomiranda6703 3 years ago
I find the comment about the data views a little confusing. I have never seen that point of view before, and perhaps that's why it's confusing to me. The way I understand it, after further reading, is that general functional CCA finds two functions that correlate a collection of random variables x1, x2 (where the coordinates are random variables). The random variables might be correlated or not. But I guess in Galen Andrew's work he has applied it to data that comes from the same source, which I think is why he refers to them as views. Essentially, that assumption makes it clear that the r.v.s should be correlated, since the underlying data is the same: the data is generated from one source, but it produces different views, i.e. different r.v.s. So the assumption (from the example) is that there *must* be some correlation, due to the problem setting. Which is fine; I'm just trying to understand the general framework, application, etc. Good video in general! Thanks for sharing.
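(Illustrative only, not from the talk: a tiny synthetic sketch of the "two views of one source" reading of CCA using scikit-learn; all names, dimensions, and noise levels here are assumptions.)

```python
# Two "views" generated from one shared latent source: CCA should find
# projections of each view that are highly correlated with each other.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n = 1000
z = rng.normal(size=(n, 2))                      # shared latent source

# Each view is a different random linear map of z plus independent noise.
view1 = z @ rng.normal(size=(2, 5)) + 0.1 * rng.normal(size=(n, 5))
view2 = z @ rng.normal(size=(2, 4)) + 0.1 * rng.normal(size=(n, 4))

cca = CCA(n_components=2)
u, v = cca.fit_transform(view1, view2)           # canonical variates

# Because the views share a source, the paired variates correlate strongly.
for k in range(2):
    print(f"component {k}: corr = {np.corrcoef(u[:, k], v[:, k])[0, 1]:.3f}")
```

If the two inputs were instead generated independently, the same sketch would report correlations near zero, which is one way to see why the shared-source ("views") assumption matters.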
@upsc6206 3 years ago
To this wonderful professor of the University of Pennsylvania.
@upsc6206 3 years ago
RIP...
@ifgcguitarclub1395 3 years ago
Where is the code for the implementation? Thank you.
@seankessel8447 3 years ago
Could you post a link to the slides? They're a bit blurry in the video
@mattizzle81 3 years ago
Even our innate machinery came about through an evolutionary process. Is that not a form of "learning"? I doubt what he is talking about could be explicitly designed, so in the context of AI, Yann would be the one who is correct. You'd have to learn the innate structure, not design it by hand.
@ramaswamikv 3 years ago
Thank you so very much for this. Deeply indebted.
@thankyouthankyou1172 3 years ago
I like your presentation very much. Thanks
@zihaozheng4159 3 years ago
Wow, I've read some of Jack's papers; surprised to see a presentation by him!
@rsilveira79 3 years ago
Nice video!
@wonop 3 years ago
Do you have the code in a Github repo anywhere?
@TheAIEpiphany 3 years ago
Awesome work! Any newer research you folks have done on this topic?
@moog500 3 years ago
What a cool talk
@bingochipspass08 3 years ago
The idea of not relying on logical forms is really cool! This is an impressive project!
@GrantCastillou 3 years ago
It's becoming clearer that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a human-adult-level conscious machine? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The leading group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with primary consciousness will probably have to come first.
@reemgody5492 3 years ago
Very interesting
@BlockDesignz 4 years ago
Colin breaks the mold of a typical researcher by also being extremely eloquent. Save some talent for the rest of us Colin!
@ShikharSrivastava 3 years ago
Amen!
@mvlad7402 4 years ago
Excellent approach to understanding text.
@pascalzoleko694 4 years ago
Liked even before I started watching.
@anweshabasu2584 4 years ago
Amazing talk!
@thatchipmunksings 4 years ago
Love her approach of meeting any feedback with a more or less truthful reply! But she's getting grilled, no doubt 😂😂. Good luck, ma'am! AI2, we're all BIG fans of you! Thank you for the community service you've done!