*Artificial Intelligence: a Problem of Plumbing? - Gerald J. Sussman at ELS 2023*

- *0:00 Introduction:*
  - Gerald J. Sussman discusses the development of artificial intelligence (AI) and its relationship to Lisp.
  - AI advancements are linked to symbolic computation and machine learning.
- *0:24 Background:*
  - Sussman recounts his early connection to the Lisp world and AI.
  - He was inspired by Bill Gosper and Marvin Minsky during his time at MIT.
- *2:10 Evolution of AI:*
  - AI combines symbolic computation and machine learning.
  - Machine learning deals with large data sets, whereas symbolic AI involves logical and mathematical reasoning.
- *4:37 Case Study - GPT Models:*
  - Example of GPT creating plausible yet fictitious academic content.
  - Highlights both the advanced capabilities and the limitations of AI models in understanding and generating text.
  - [From comments] GPT-4 has improved by recognizing unknown queries and avoiding fabrication.
- *8:01 Intelligence and Learning:*
  - Human intelligence involves rapid language acquisition and rule generalization (e.g., the Wug Test).
  - Current AI models do not mimic this efficient learning process.
- *11:05 Intelligence Defined:*
  - Intelligence encompasses abilities such as focusing on salient features, predicting outcomes, planning, executing, and reflecting on actions.
- *13:40 Challenges with Large Language Models:*
  - They are monolithic, opaque, and require significant resources to train.
  - They are difficult to investigate scientifically or to adjust because of their complexity and cost.
- *17:45 Brain Analogy:*
  - Sussman compares AI to the human brain's specialized regions and connectivity.
  - He references research by Nancy Kanwisher and Elizabeth Spelke on brain modularity and mental representations.
- *23:24 Integration of AI Systems:*
  - Difficulty in combining AI programs to solve complex, interdisciplinary problems.
  - Example: combining chess-playing algorithms with theorem-proving techniques to enhance search strategies.
- *25:54 Plumbing in AI:*
  - Importance of systems that maintain behavior traces, then summarize and abstract those traces for better integration and learning.
- *28:48 Jacob Beal's Thesis:*
  - Example of systems learning to communicate via shared context and scrambled connections.
  - Potential for AI components to develop communication protocols organically.
- *31:05 Philosophy and Intrinsic Capabilities:*
  - Discussion of sense data, intrinsic capabilities, and the challenge of combining subsystems to achieve general intelligence.
- *35:42 Brain and Genome:*
  - The brain's complexity versus the genome's limited code.
  - The detailed wiring of the brain cannot be fully specified genetically; it must be learned and adaptively developed.
- *42:23 Consciousness:*
  - Consciousness involves awareness of internal states, planning, and decision-making based on internal models.
  - Current AI systems lack this self-reflective ability.
- *44:39 Programming Languages and New Plumbing:*
  - Need for built-in mechanisms in programming languages for recording and summarizing behavior traces.
  - Comparison to historical advances such as garbage collection and recursion.
- *47:56 Q&A Session:*
  - Discussion of the role of macros in Lisp and whether they meet the need for new plumbing.
  - Importance of interconnected blackboard systems for modular communication.
  - Ethical concerns related to AI and the Singularity, emphasizing the value of human life over labor.
- *1:00:05 Propagator Systems:*
  - Use of propagator models for connecting large systems.
  - Potential for implementing learning machines at the boundaries of AI systems.
- *1:00:44 Closing:*
  - Sussman expresses his appreciation for the discussion and the ongoing exploration of AI and cognitive processes.

(I used GPT-4o to summarize the video transcript.)
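The propagator systems mentioned at 1:00:05 refer to the model Sussman developed with Alexey Radul: cells accumulate information, and stateless propagators attached to those cells re-run whenever new information arrives. Below is a minimal sketch in Python; the `Cell`/`propagator` names and the single-value-per-cell simplification are my own illustration, not Sussman's actual API (which supports partial information and contradiction merging).

```python
class Cell:
    """Holds a value (None means 'no information yet') and wakes up
    attached propagators whenever it gains new information."""
    def __init__(self):
        self.value = None
        self.neighbors = []      # propagator thunks to re-run on change

    def add_content(self, value):
        if value is None or value == self.value:
            return               # nothing new; stay quiet
        if self.value is not None:
            raise ValueError(f"contradiction: {self.value} vs {value}")
        self.value = value
        for run in self.neighbors:
            run()                # notify attached propagators

def propagator(inputs, output, fn):
    """Wire fn between cells; it fires once every input has a value."""
    def run():
        vals = [c.value for c in inputs]
        if all(v is not None for v in vals):
            output.add_content(fn(*vals))
    for c in inputs:
        c.neighbors.append(run)
    run()                        # fire immediately if inputs are ready

# Usage: a tiny network computing c = a + b.
a, b, c = Cell(), Cell(), Cell()
propagator([a, b], c, lambda x, y: x + y)
a.add_content(3)
b.add_content(4)
print(c.value)  # → 7
```

Because propagators fire on any update, inputs can arrive in any order from independently built subsystems, which is what makes the model attractive as "plumbing" between the boundaries of large AI components.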
@arthurcolle2778 (1 year ago)
Interesting talk
@uvarsg (1 year ago)
4:38 The "shrobe-davis theorem" thing seems irreproducible in the latest GPT models (GPT-4). They seem to know what they don't know now, instead of "playing along".
@DanielVaughan (1 year ago)
Yes, I just tried it with "Tell me about the shrobe-davis theorem in computational complexity.", which resulted in the response:

> The Schoening-Umesh-Vazirani (SUV) Theorem, often also called the Shrobe-Davis Theorem, is not a standard result in the field of computational complexity. As of my last training data up to September 2021, there's no theorem by that name in recognized computational complexity literature. However, there are many theorems and results related to complexity theory, such as the Cook-Levin Theorem (which proves that SAT is NP-complete) and results related to the P vs NP problem, among others. If you're referring to a result or theorem that has emerged after 2021, I may not be aware of it. Can you provide more context or details about the theorem you're referring to, or perhaps clarify if there's another topic in computational complexity you'd like to explore?

It also had no problem filling in the blank for the pluralization of the made-up animal.
@mznxbcv12345 (9 months ago)
Someone told it it's wrong.
@nonsensedotai (2 months ago)
Because of the RAG system: it seems that GPT can search online now.