🎯 Quick navigation:
00:00 🎙️ The speaker explores the future of NLP in the context of large language models (LLMs) like GPT.
08:07 🔄 The field is transitioning from vertical decompositions of tasks to a more horizontal tech stack approach, with LLMs as a key component.
09:31 🆕 LLMs introduced horizontal capabilities, such as portable knowledge and general ability, challenging traditional vertical approaches.
14:10 ❓ The current phase may mark the end of the beginning of NLP, but not the end of NLP itself; a shift from vertical to horizontal tech stacks is underway.
18:00 🧠 New control mechanisms and superstructures are emerging to guide and augment LLMs, providing more user influence and coherence.
19:56 📚 Language models can be orchestrated in an ecosystem where they collaborate hierarchically to generate stories or other outputs, resulting in more coherent and interesting content than a single, uninterrupted model output.
21:30 🧩 Hybrid technology combining large language models with other methods, like search or structured modules, can achieve better performance on specialized tasks, such as crossword solving, than language models alone.
22:56 🔌 Tension exists between modularity and end-to-end optimization in machine learning. Modularity allows systems to be built reliably, while end-to-end optimization enables high accuracy and generality. Exploring ways for these approaches to coexist could lead to more robust AI systems.
28:04 ⚖️ Responsible development of AI systems requires considering their failure modes, success modes, and potential to cause harm. Addressing plagiarism detection, authorship attribution, and safety features is crucial for creating AI tools that benefit society.
33:14 🔍 A system called "Ghostbuster" detects text generated by large language models by combining scores from weaker language models with arithmetic functions. Detecting language model-generated content, even when the specific model isn't known, offers a tool for addressing potential misuse of AI-generated content.
39:42 🧠 Challenges of large language models: concerns about cheating, understanding the system's inner workings, and addressing biases in language models.
42:13 🤖 Interplay of objective functions: highlighting the potential conflict between optimizing for objective functions (e.g., user satisfaction) and the truth in AI systems, leading to behavior that may deviate from the truth.
45:01 🕵️‍♂️ Truth and control: emphasizing the need to distinguish between what the system knows and what it does, with a focus on identifying methods to evaluate and align AI models with the truth.
48:13 🌐 Multitask model vulnerabilities: exploring potential cross-task vulnerabilities in multitask models, highlighting the risk that poisoning one task can impact multiple tasks.
51:14 🚀 Future of NLP: reflecting on the evolution of NLP, from solving representational questions in linguistics to focusing on real-world problem-solving, acknowledging challenges in architecture, safety, and control.
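For the hierarchical orchestration idea at 19:56, here is a minimal sketch of how a planning call and per-beat writing calls might be wired together. `generate` is a stand-in for whatever LLM API you use, and the prompts are illustrative, not the talk's actual setup:

```python
# Sketch: one LM call plans an outline, further calls expand each beat.
# `generate` is a hypothetical stand-in for your LLM API of choice.
def generate(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM API call here")

def write_story(premise: str, n_beats: int = 5) -> str:
    # Plan first: a short outline keeps the global structure coherent.
    outline = generate(
        f"Write a {n_beats}-point outline for a story about: {premise}"
    ).splitlines()[:n_beats]
    story = []
    for beat in outline:
        # Each expansion sees the premise and the story so far, which is what
        # makes the result more coherent than one long uninterrupted sample.
        context = "\n".join(story)
        story.append(generate(
            f"Premise: {premise}\nStory so far:\n{context}\n"
            f"Expand this beat into a paragraph: {beat}"
        ))
    return "\n\n".join(story)
```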
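And for the Ghostbuster-style detector at 33:14: the idea, roughly, is to score each document with weaker language models, combine the per-token probabilities with simple arithmetic functions, and train a classifier on the resulting features. A minimal sketch, assuming you already have per-token probabilities from two weak models; the actual system searches over feature combinations more systematically than the hand-picked ones here:

```python
# Sketch of a Ghostbuster-style detector. p_weak1/p_weak2 are per-token
# probabilities of a document under two weak LMs (e.g., a unigram and an
# n-gram model); the combinations below are illustrative, not the paper's.
import numpy as np
from sklearn.linear_model import LogisticRegression

def features(p_weak1: np.ndarray, p_weak2: np.ndarray) -> np.ndarray:
    logp1, logp2 = np.log(p_weak1), np.log(p_weak2)
    combos = [
        logp1,                     # log-probs under weak model 1
        logp2,                     # log-probs under weak model 2
        logp1 - logp2,             # disagreement between the two models
        np.maximum(logp1, logp2),  # per-token best-case score
    ]
    # Reduce each token-level vector to scalar features.
    return np.array([f(c) for c in combos for f in (np.mean, np.min, np.max)])

def train(prob_pairs, labels):
    # labels: 1 = model-generated, 0 = human-written (training data assumed).
    X = np.stack([features(p1, p2) for p1, p2 in prob_pairs])
    return LogisticRegression(max_iter=1000).fit(X, labels)
```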
@Mark-q3f5f 8 months ago
Awesome talk - Dan's the man!
@caten_8 11 months ago
This is super insightful.
@tahirsyed5454 1 year ago
Prof Dan Klein - he'd make everything mentally accessible to you.
@yiran432 9 months ago
Is he at UCB?
@mauricecinque5618 7 months ago
Interesting perspectives on the architectures of NL systems. Worth questioning sustainability / maintainability / accuracy, and all the other non-functional requirements that constrain a fully operational solution.
@mauricecinque5618 7 months ago
One of the key points raised in the talk relates to accuracy and (ground) truth. Dan raises the point that a level of truth is implicitly reached in LLMs because of the isomorphic relationship between words and the real world, if I got his point correctly. That said, in LLMs "truth" is essentially based on statistics. Is that really sufficient? Several AIs have been purposely trained with fundamental biases, which are obviously distortions and/or portions of reality.
@namansinghal3685 8 months ago
Good AI engineers are good software engineers. I have seen a lot of AI folks struggle when they suck at coding.
@easyaistudio 1 year ago
This started out great, then reason went on holiday when he started weighing in on whether using LLMs is moral.
@soumen_das 1 year ago
Very informative
@woolfel 1 year ago
The job of a software engineer is to figure out what the human needs and build a solution. It's not writing a function. It's asking humans what they want and figuring out what they really need. The reason why so many software projects fail is simple: humans don't know what they really need and will tell you contradictory things about what they want. An LLM can't read humans' minds or figure out why humans are saying stupid, contradictory things. At least not today, but maybe in the future it can.
@matthewcurry3565 1 year ago
You just said it yourself... Even if it can read minds, that doesn't mean it can understand the answer, and/or that it can't be fooled.
@opusdei1151 1 year ago
Is this guy from the movie X+Y?
@jayasimhatalur5503 1 year ago
Jitendra Malik ❤
@RickySupriyadi 1 year ago
Not now... you can't replace humans when making apps. Sometimes it gets so complicated that at the end the engineer says "ahhh, I see" and the client says "OK, that might work" (wtf! how do I explain it to him?!). At the end of the project the app launches and, you know... it sells because the marketing team did a good job selling it, so everybody's happy. Until a security bug comes along...