Modern artificial intelligence performs impressively in data-rich settings but still struggles to learn and adapt from only a few examples—a capability central to human intelligence. My research seeks to understand and enable data-efficient generalization, unifying principles across few-shot learning, meta-learning, in-context learning in large language models (LLMs), and adaptive agent behavior. First, I revisit few-shot learning from a foundational perspective, showing why conventional supervised learning breaks down under sparse data and how prior knowledge enables reliable adaptation. I then discuss how these principles extend to real-world scenarios such as scientific discovery and cold-start recommendation, where data are scarce, costly, or dynamically evolving. Finally, I explore how LLMs perform in-context learning and how their adaptive behaviors connect to meta-learning mechanisms. Building on these insights, I develop data-efficient, preference-adaptive agents that quickly align with user needs through minimal interaction. This talk presents a cohesive view of data-efficient intelligence and outlines future directions toward more reliable, human-like learning systems.