Representation Learning and Information Retrieval -SIGIR 2024, Keynote Speaker, Yiming Yang

Views: 728

Association for Computing Machinery (ACM)

A day ago

Abstract: How to best represent words, documents, queries, entities, relations, and other variables in information retrieval (IR) and related applications has been a fundamental research question for decades. Early IR systems relied on independence assumptions about words and documents for simplicity and scalability, which were clearly sub-optimal from a semantic point of view. The rapid development of deep neural networks in the past decade has revolutionized representation learning technologies for contextualized word embedding and graph-enhanced document embedding, leading to the new era of dense IR. This talk highlights such impactful shifts in representation learning for IR and related areas, along with the new challenges they bring and the remedies, including our recent work in large-scale dense IR, in graph-based reasoning for knowledge-enhanced predictions, in self-refinement of large language models (LLMs) with retrieval-augmented generation (RAG) and iterative feedback, in principle-driven self-alignment of LLMs with minimal human supervision, etc. More generally, the power of such deep learning goes beyond IR enhancements, e.g., significantly improving the state-of-the-art solvers for NP-complete problems in classical computer science.
Bio: Yiming Yang is a professor with a joint appointment at the Language Technologies Institute (LTI) and the Machine Learning Department (MLD) in the School of Computer Science, Carnegie Mellon University (CMU). She joined CMU as a faculty member in 1996, and her research has focused on machine learning paradigms, algorithms, and applications in a broad range, including her influential early work in large-scale text classification and information retrieval, and more recently cutting-edge technologies for large language models (e.g., XL-Net), neural-network architecture search (e.g., DARTS), reasoning with graph neural networks, reinforcement learning and diffusion models for solving NP-complete problems (e.g., DIMES and DIFFUSCO), AI-enhanced self-alignment of LLMs, knowledge-enhanced information retrieval, LLMs with RAG (Retrieval-Augmented Generation), large foundation models for scientific domains, etc. She became a member of the SIGIR Academy in 2023, in recognition of her contributions at the intersection of Machine Learning and Information Retrieval.

Comments: 1
@yimingyang4254
A month ago
Due to some AV issues, I could not hear the questions clearly on the stage, so let me clarify some of the answers retrospectively.
Question 1. Why did the easy-to-hard voting strategies perform worse than the SFT baseline with greedy decoding when the candidate-pool size is less than 10? Answer: The voting strategies used a non-deterministic decoding process with the temperature set to 0.7 (as shown in the slide), while the baseline used a temperature of 0 (deterministic). This means that the baseline always picked its top candidate per problem instance, whereas the voting strategies may miss it when the pool size is rather small.
Question 2. How does the proposed method differ from a GAN? Answer: The discriminator in a GAN is trained to label each instance as natural-language output (yes) or unnatural (no), while the evaluator in easy-to-hard generalization is trained to tell whether a math solution is correct (yes) or wrong (no) for a given math problem. Even if an answer is perfect English, it can still be wrong mathematically.
Question 3. Can we use this idea to improve LLM pre-training? Answer: Maybe not. When we have enough (unlabeled) data for pre-training an LLM, we may not benefit much from easy-to-hard generalization. On the other hand, if we do not have enough data for pre-training an LLM, we may also not be able to train the evaluator well. One may ask: what if we could train the evaluator on the rare patterns that the current LLM cannot handle well? Perhaps yes, but this is a big "if"; the challenge then shifts to how to obtain annotated data on those rare patterns. I hope the above answers help.
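To make the point in Question 1 concrete, here is a minimal Python sketch (not code from the talk) contrasting greedy decoding at temperature 0 with majority voting over candidates sampled at temperature 0.7. The function sample_answer and its toy answer distribution are hypothetical stand-ins for an LLM decoder; only the contrast between the two decoding strategies reflects the answer above.

import random
from collections import Counter

def sample_answer(problem, temperature, rng):
    # Stand-in for one LLM decoding pass. Temperature 0 returns the model's
    # single most likely answer; temperature > 0 samples from a distribution
    # over candidate answers (faked here with a small weighted choice).
    candidates = ["42", "41", "40"]   # hypothetical answers
    weights = [0.6, 0.25, 0.15]       # hypothetical model probabilities
    if temperature == 0:
        return candidates[0]          # greedy: always the top candidate
    return rng.choices(candidates, weights=weights, k=1)[0]

def greedy_baseline(problem):
    # The SFT baseline: one deterministic decode at temperature 0.
    return sample_answer(problem, temperature=0.0, rng=random.Random(0))

def majority_vote(problem, pool_size, temperature=0.7, seed=0):
    # The voting strategy: sample pool_size candidates at temperature 0.7 and
    # return the most frequent answer. With a small pool, sampling noise can
    # push the model's true top answer out of the majority.
    rng = random.Random(seed)
    votes = Counter(sample_answer(problem, temperature, rng)
                    for _ in range(pool_size))
    return votes.most_common(1)[0][0]

if __name__ == "__main__":
    problem = "toy math problem"
    print("greedy baseline:", greedy_baseline(problem))
    for k in (3, 10, 40):
        print(f"majority vote (pool={k}):", majority_vote(problem, k))

Running the sketch shows why a larger candidate pool matters: as pool_size grows, the vote concentrates on the highest-probability answer, while tiny pools can land on a lower-probability one that greedy decoding would never return.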