- Date/Time: Monday, January 26, 2026, 10:00 AM
- Venue: Online (Zoom)
https://sogang-ac-kr.zoom.us/j/368251892?pwd=rvosqEPJYZ8eI5QbNqKZCEaI5ztKFw.1
- Speaker: Prof. Il-Min Kim, Department Head, ECE, Queen’s University, Canada
- Title: Toward Federated and Multi-Modal Learning: From Optimized Edge Intelligence to Bias-Aware Foundation Models
- Abstract
This seminar presents recent advances in Federated Learning (FL) and multimodal foundation models, motivated by the limitations of traditional centralized machine learning in privacy-sensitive and distributed environments. FL enables collaborative model training without sharing raw data, reducing privacy risks and communication costs, but it remains challenged by data heterogeneity, communication inefficiency, and limited client computation. We first discuss a series of methods addressing these core FL issues, including Federated Learning with Managed Redundancy for communication-efficient training, Modular Federated Contrastive Learning for reducing client-side computation under heterogeneous data, and Twin Normalization, a sample-based normalization technique that improves contrastive learning performance in highly non-IID settings.
As the machine learning landscape shifts toward Large Language Models and Vision–Language Models (VLMs), the seminar then explores how these foundation models can be effectively and fairly integrated into FL. We examine key limitations of VLMs, such as spurious correlations, biased representations, and underutilized spatial features, and introduce three solutions: PRISM, a data-free and task-agnostic framework for mitigating implicit spurious bias in VLM embeddings; INFER, a feature refinement approach that enhances CLIP representations by fusing global and attention-weighted local features; and NormFit, a lightweight federated fine-tuning strategy that updates only Pre-LayerNorm parameters to achieve strong performance under severe data heterogeneity with minimal communication and computation overhead.
Together, these contributions demonstrate how principled algorithmic design can bridge federated optimization, representation learning, and foundation models, paving the way for scalable, robust, and fair multimodal learning in real-world edge and distributed systems.
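As an illustrative sketch only (the abstract does not show the actual NormFit implementation), the core idea of a lightweight fine-tuning strategy such as NormFit — training only normalization-layer parameters and freezing everything else — can be expressed as a name-based filter over a model's parameter list. The parameter names below are hypothetical transformer-style names, not taken from the speaker's code.

```python
# Minimal sketch of selective fine-tuning, assuming transformer-style
# parameter names (e.g., "blocks.0.norm1.weight" for a LayerNorm weight).
# Only the selected parameters would be updated and communicated to the
# server in each federated round; all others stay frozen on the client.

def select_norm_params(param_names):
    """Keep only LayerNorm-style parameters; everything else stays frozen."""
    return [n for n in param_names if ".norm" in n or n.startswith("norm")]

# Hypothetical parameter list for one transformer block plus a final norm.
params = [
    "blocks.0.attn.qkv.weight",
    "blocks.0.norm1.weight",
    "blocks.0.norm1.bias",
    "blocks.0.mlp.fc1.weight",
    "norm.weight",
]

trainable = select_norm_params(params)
print(trainable)
# Only the norm weights/biases are trainable, so the per-round
# communication payload is a small fraction of the full model.
```

Because normalization layers hold very few parameters relative to attention and MLP weights, this kind of filter is one simple way to realize the "minimal communication and computation overhead" property the abstract attributes to NormFit.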
- Biography
Il-Min Kim has been a Professor in the Department of Electrical and Computer Engineering (ECE) at Queen’s University, Kingston, Canada, since July 2003. He currently serves as Head of the ECE Department. He was Chair of the Undergraduate Program (Electrical Engineering) from July 2022 to June 2025 and Chair of the Graduate Program (i.e., Graduate Coordinator) from July 2012 to June 2015.
He is currently Director of the Ubiquitous Artificial Intelligence Laboratory (UAI Lab), a core faculty member of the Ingenuity Labs Research Institute, and a faculty member of Queen's Centre for Security & Privacy. He is cross-appointed as an Adjunct Professor in the School of Electrical Engineering and Computer Science (EECS) at the University of Ottawa, and holds a Status-Only Professorship in the Department of Mechanical & Industrial Engineering at the University of Toronto.
His research focuses on artificial intelligence (AI), including agentic AI, physical AI, ubiquitous AI, edge AI, on-device AI, safe AI, universal equity AI, AI governance, AI alignment with human values, foundation models, AI for healthcare applications, machine unlearning, data privacy in machine learning, federated learning, distributed learning, continual learning, diffusion models, out-of-distribution (OOD) detection, self-supervised learning, contrastive representation learning, AI for IoT/IoE/IIoT/Mobile Crowd Sensing (MCS), AI-driven 6G wireless systems, AI-driven vehicle-to-everything (V2X) communications, and Geoscience AI (Geo-AI).
- Host: Prof. 소재우's research lab