Wenqi Liang
Customized video generation from user-specific prompts has made significant progress in recent years, yet most existing approaches assume a fixed personalization setup and cannot evolve with the user. In realistic scenarios, users continually introduce new identities, visual styles, and motion concepts over time. Without mechanisms to accommodate this evolving stream of information, current methods suffer from catastrophic forgetting of previously learned concepts and struggle to faithfully integrate multiple personalized elements within a single video. The result is often degraded visual fidelity, loss of identity consistency, and omissions or conflicts when several user-specific attributes must coexist.

His PhD research will explore a continual video customization framework grounded in evolving low-rank adaptation (LoRA) techniques. The central idea is to design models that incrementally absorb new user preferences while preserving earlier knowledge, all under constrained computational and memory budgets. By enabling adaptive, scalable, and robust personalized video generation, this work aims to support long-term, user-driven customization in which the system progressively refines its understanding of each user. Ultimately, the project seeks to advance generative AI towards persistent, lifelong personalization in dynamic, real-world settings.
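To make the "incrementally absorb while preserving" idea concrete, the sketch below shows one common way evolving low-rank adaptation can be realized in PyTorch. It is an illustrative assumption, not the author's actual method: each new user concept receives its own low-rank adapter branch on top of a frozen base layer, and previously learned adapters are frozen so earlier knowledge is untouched. The names `EvolvingLoRALinear` and `add_concept` are hypothetical.

```python
# A minimal sketch of continual low-rank adaptation (assumed, not the
# author's method): every new concept gets its own LoRA branch on a frozen
# base layer; earlier adapters are frozen to avoid catastrophic forgetting.
import torch
import torch.nn as nn


class EvolvingLoRALinear(nn.Module):
    """A frozen linear layer plus a growing list of low-rank adapters."""

    def __init__(self, in_features: int, out_features: int, rank: int = 4):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        # The pretrained base weights stay fixed throughout.
        for p in self.base.parameters():
            p.requires_grad_(False)
        self.rank = rank
        self.adapters = nn.ModuleList()  # one down/up pair per learned concept

    def add_concept(self) -> None:
        """Freeze all existing adapters and append a fresh trainable one."""
        for adapter in self.adapters:
            for p in adapter.parameters():
                p.requires_grad_(False)
        down = nn.Linear(self.base.in_features, self.rank, bias=False)
        up = nn.Linear(self.rank, self.base.out_features, bias=False)
        nn.init.zeros_(up.weight)  # new adapter initially contributes zero
        self.adapters.append(nn.Sequential(down, up))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.base(x)
        for adapter in self.adapters:  # sum the low-rank updates of all concepts
            out = out + adapter(x)
        return out


if __name__ == "__main__":
    layer = EvolvingLoRALinear(64, 64, rank=4)
    layer.add_concept()  # concept 1: its adapter is the only trainable part
    layer.add_concept()  # concept 2: the first adapter is now frozen
    out = layer(torch.randn(2, 64))
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    print(out.shape, trainable)  # torch.Size([2, 64]) 512
```

Under this kind of design, each new concept adds only two rank-r matrices rather than a full copy of the model, which is one way the constrained memory budget mentioned above could be respected as the set of personalized concepts grows.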