Time: April 24, 2026, 09:30-10:30
Venue: Room A1214, Science Building, Zhongbei Campus
Speaker: Xiao Han, Professor, University of Science and Technology of China
Host: Huijuan Ma, Associate Professor, East China Normal University
Abstract:
Cross-domain knowledge distillation often suffers from domain shift. Although domain adaptation methods have shown strong empirical success in addressing this issue, their theoretical foundations remain underdeveloped. In this paper, we study knowledge distillation in a teacher–student framework for regularized linear regression and derive the high-dimensional asymptotic excess risk of the student estimator, accounting for both covariate shift and model shift. This asymptotic analysis enables a precise characterization of the performance gain in cross-domain knowledge distillation. Our results demonstrate that, even under substantial shifts between the source and target domains, it remains feasible to identify an imitation parameter for which the student model outperforms the student-only baseline. Moreover, we show that the student's generalization performance exhibits the double descent phenomenon.
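To make the setup concrete, here is a minimal NumPy sketch of one plausible formulation of this kind of distillation, in which the student's ridge regression is fit to a convex combination of the true target labels and the teacher's predictions, weighted by an imitation parameter rho. The blending rule, the ridge penalties, the Gaussian designs, and the shift magnitudes are all illustrative assumptions for this sketch, not the exact formulation analyzed in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions: p features; the teacher sees n_src source samples,
# the student only n_tgt target samples (a high-dimensional regime, n_tgt < p).
p, n_src, n_tgt, n_test = 50, 500, 40, 2000

# Model shift: target coefficients are a perturbation of the source coefficients.
beta_src = rng.normal(size=p) / np.sqrt(p)
beta_tgt = beta_src + 0.3 * rng.normal(size=p) / np.sqrt(p)

# Covariate shift: source and target features have different covariance scales.
X_src = 1.5 * rng.normal(size=(n_src, p))
X_tgt = rng.normal(size=(n_tgt, p))
X_test = rng.normal(size=(n_test, p))

y_src = X_src @ beta_src + 0.5 * rng.normal(size=n_src)
y_tgt = X_tgt @ beta_tgt + 0.5 * rng.normal(size=n_tgt)
y_test = X_test @ beta_tgt  # noiseless targets, so test MSE tracks excess risk

def ridge(X, y, lam):
    """Ridge estimator (X'X + lam * I)^{-1} X'y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# Teacher: ridge regression on the large source sample.
beta_teacher = ridge(X_src, y_src, lam=1.0)

# Student: ridge regression on target labels blended with teacher predictions;
# rho is the imitation parameter, and rho = 0 is the student-only baseline.
def student(rho, lam=1.0):
    y_blend = (1.0 - rho) * y_tgt + rho * (X_tgt @ beta_teacher)
    return ridge(X_tgt, y_blend, lam)

for rho in (0.0, 0.25, 0.5, 0.75):
    risk = np.mean((X_test @ student(rho) - y_test) ** 2)
    print(f"rho = {rho:.2f}  test risk = {risk:.4f}")
```

Sweeping rho in such a simulation shows how the imitation parameter trades off the teacher's transferred signal against the domain shift, and sweeping the aspect ratio p / n_tgt is one way to visualize the double descent curve the abstract refers to.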
Speaker Bio:
Xiao Han is a specially appointed professor at the School of Management, University of Science and Technology of China. His research interests are large-dimensional random matrix theory and high-dimensional statistical inference. He was selected for the youth track of the National Innovative Talent Program. His work has been published in journals and conference proceedings including AOS, JASA, JRSSB, JMLR, Bernoulli, and ICML, and he is the principal investigator of one NSFC Young Scientists Fund project and one NSFC General Program project.