
#1 Cross-Modal Knowledge Transfer in Time Series AI via Large Vision Models

Author: Jingchao Ni

Time series analysis has progressed from traditional autoregressive models to deep learning, Transformers, and foundation models (FMs), including large language models (LLMs) and large vision models (LVMs). These advances expand model design possibilities and enable time series problem-solving across multiple modalities. This talk will provide an overview of recent developments in large FMs for time series, highlighting frameworks for transferring knowledge from other modalities to time series, and identifying the advantages of LVMs over LLMs in cross-modal knowledge transfer. I will then delve into our recent research on LVMs for time series, discussing (1) mainstream techniques for imaging time series; (2) key strengths and limitations of LVMs in time series modeling; and (3) multimodal frameworks that integrate LVMs for time series encoding. This talk will conclude with an application of LVMs to brain time series analysis in neuroscience. The aim of the talk is to review state-of-the-art (SOTA) AI techniques for time series, highlight unique challenges, and share our recent findings in this promising area.
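One of the imaging techniques alluded to above is the Gramian Angular Field (GAF), a common way to turn a 1-D series into a 2-D image that vision models can consume. The sketch below is illustrative only (the function name and example series are not from the talk), assuming the standard GASF formulation: rescale the series to [-1, 1], take the angular encoding phi = arccos(x), and form the image cos(phi_i + phi_j):

```python
import numpy as np

def gramian_angular_field(x: np.ndarray) -> np.ndarray:
    """Encode a 1-D time series as a Gramian Angular Summation Field image."""
    # Rescale to [-1, 1] so arccos is defined everywhere.
    x_min, x_max = x.min(), x.max()
    x_scaled = 2 * (x - x_min) / (x_max - x_min) - 1
    phi = np.arccos(np.clip(x_scaled, -1.0, 1.0))  # angular representation
    # GASF[i, j] = cos(phi_i + phi_j): an N x N "image" of pairwise angles.
    return np.cos(phi[:, None] + phi[None, :])

# Hypothetical example: a 64-step sine wave becomes a 64 x 64 image.
series = np.sin(np.linspace(0, 4 * np.pi, 64))
image = gramian_angular_field(series)
print(image.shape)  # (64, 64)
```

The resulting symmetric matrix preserves temporal dependencies along its diagonal, which is one reason image encodings of this kind pair naturally with pretrained vision backbones.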

Subject: AAAI.2026 - New Faculty Highlights