Don't fluid intelligence tests aim to measure / get at educability?
@JD-xu6py · 9 months ago
I don't know if he'd even be interested in being on the Mindscape podcast, but I request that you try getting as a guest Robert Harper to talk about Computational Trinitarianism
@JH-pl8ih · 9 months ago
Robert Harper would be great. While we're recommending computer scientist guests: Shafi Goldwasser (on the wider implications of cryptography) and Yoshua Bengio (on developing higher forms of reasoning within the connectionist paradigm).
@seionne85 · 9 months ago
First time hearing about computational trinitarianism, but it sounds like a digital proof of Christianity 😂
@JD-xu6py · 9 months ago
@@seionne85 lol, it's a funny name for sure. It's his way of pointing out that something special is going on at the intersection of logic's proof theory, computer science's type theory, and mathematics' category theory.
@seionne85 · 9 months ago
@@JD-xu6py that sounds very interesting thank you for the new rabbit hole lol!
@JD-xu6py · 9 months ago
@@seionne85 He has lectures on YouTube; search "Robert Harper Type Theory"
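For anyone starting down the rabbit hole: the slogan is that proofs (logic), programs (type theory), and morphisms (category theory) are three views of the same thing. A minimal sketch of the proofs-as-programs leg in Python type hints, where a function of type A → B stands in for a proof that A implies B; the names and example here are illustrative, not from Harper's lectures:

```python
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

# Under propositions-as-types, a function A -> B is a "proof" that A implies B.
# Composing two such proofs shows implication chains:
# from (A -> B) and (B -> C) we obtain (A -> C).
def compose(f: Callable[[A], B], g: Callable[[B], C]) -> Callable[[A], C]:
    return lambda a: g(f(a))

# The same operation, read categorically, is just composition of morphisms.
chain = compose(lambda n: n + 1, lambda n: n * 2)  # an int -> int "proof"
```

One construction, three readings — which is roughly the point of the "trinitarianism" name.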
@gtziavelis · 9 months ago
In the context of the idea that consciousness is not a computational process, we will never have AGI (Artificial General Intelligence). It would have had to co-evolve with us throughout past history up till now, which implies that building a time machine is easier.
@PNNYRFACE · 9 months ago
Big dong and prosper
@trevorcrowley5748 · 9 months ago
My interpretation from the talk is that we do not know how consciousness works or agree on how to measure intelligence, but we do know that the human genome has not changed appreciably in 300k years. This implies that recent exponential human developments may be due to the gradual accumulation of knowledge through learning until certain tipping points are met. While current AI learning by example is useful, it is not until it can logically chain different methods together and then communicate / bootstrap them to future agents that we will be on the exponential path of General Intelligence. (I'd be curious to know if emotion and embodiment are also factors in Educability.) I agree that this is difficult, and that evolution took 6M years to guide us from chimps to humans. We are already in a time machine moving one second per second into the future -- let's check back in about 20 years
@kaushikmitra-v8f · 9 months ago
1:53 in which universe do theoretical physicists make the big bucks? 🤔
@cashkaval · 9 months ago
Is it just me, or does Leslie Valiant sound a lot like Christopher Hitchens?
@Benson_Bear · 4 months ago
Okay, but he sounds more like Timothy Williamson to me
@jessenyokabi4290 · 9 months ago
Looking forward.
@OBGynKenobi · 9 months ago
These AIs are not thinking, they are calculating. An AI cannot deduce the subtleties of poetry and the hierarchical meanings hidden within. It doesn't understand sarcasm. It doesn't have feelings based on past events, etc.
@Kolinnor · 9 months ago
If you make that argument, you must specify how the human brain works
@OBGynKenobi · 9 months ago
@@Kolinnor you don't have to know how brains work. You only have to test the AI. Ask it how it feels about being your friend.
@Kolinnor · 9 months ago
@@OBGynKenobi You're talking about thinking, feelings, and understanding meaning, which I think are 3 distinct concepts. I was especially answering the "understanding" part. I agree, I don't think it has feelings
@OBGynKenobi · 9 months ago
@@Kolinnor I'm also saying it doesn't think because thinking, I suggest, is emergent from input of all parts of the brain, including the subconscious, which is not understood.
@Kolinnor · 9 months ago
@@OBGynKenobi Mhm, fair enough. I'm not sure about thinking either, actually. However, I'd say they understand things
@lukegratrix · 9 months ago
I've been entertained by AI but not really impressed. They are wrong surprisingly often. Keep working, computer geeks and mathematicians! You're on the right path!
@lukegratrix · 9 months ago
Like railroad workers in Blazing Saddles. Quicksand!
@yeezythabest · 9 months ago
They're not "wrong"; they delivered the statistically most probable next token based on your prompt. What you mean is that they're not always aligned with your intent. Those are two different things, and knowing that can help you help them get you what you want