Paper Note --- Transfer Learning via Dimensionality Reduction
Paper Background
Proceedings of the 23rd AAAI Conference on Artificial Intelligence (AAAI 2008)
- Sinno Jialin Pan
- James T. Kwok
- Qiang Yang
Hong Kong University of Science and Technology
This paper introduces a dimensionality-reduction approach that learns a latent feature space in which the distance between the source-domain and target-domain data distributions is minimized; once the two domains are aligned in this latent space, standard, widely used algorithms can be applied to train a model.

The distance between the two distributions is measured by the Maximum Mean Discrepancy (MMD), which can be expressed entirely in terms of the kernel matrix over the combined samples:

$$\mathrm{Dist}(X_{src}, X_{tar}) = \operatorname{tr}(KL),$$

where $K$ is the $(n_1+n_2)\times(n_1+n_2)$ kernel matrix over the $n_1$ source and $n_2$ target samples, and $L_{ij} = 1/n_1^2$ if $x_i, x_j \in X_{src}$, $L_{ij} = 1/n_2^2$ if $x_i, x_j \in X_{tar}$, and $L_{ij} = -1/(n_1 n_2)$ otherwise.

According to this formula, we learn the kernel matrix $K$ directly rather than a universal kernel function $k$. However, it must be ensured that the learned kernel matrix still corresponds to a universal kernel.
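As a small sketch of this trace form (function and variable names are my own, and the RBF kernel here is just a placeholder for whatever kernel matrix is used), the MMD between two samples can be computed directly from the joint kernel matrix:

```python
import numpy as np

def mmd_from_kernel(K, n_src, n_tar):
    """Empirical (squared) MMD computed as tr(K L) from the joint
    (n_src + n_tar) x (n_src + n_tar) kernel matrix K."""
    n = n_src + n_tar
    L = np.zeros((n, n))
    L[:n_src, :n_src] = 1.0 / n_src**2            # source-source block
    L[n_src:, n_src:] = 1.0 / n_tar**2            # target-target block
    L[:n_src, n_src:] = -1.0 / (n_src * n_tar)    # cross blocks
    L[n_src:, :n_src] = -1.0 / (n_src * n_tar)
    return np.trace(K @ L)

# Illustration: two Gaussian samples with a mean shift; the RBF
# bandwidth below is an arbitrary choice for the demo.
rng = np.random.default_rng(0)
X_src = rng.normal(0.0, 1.0, size=(50, 2))
X_tar = rng.normal(1.0, 1.0, size=(40, 2))
X = np.vstack([X_src, X_tar])
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / 2.0)
print(mmd_from_kernel(K, 50, 40))  # positive when the distributions differ
```

A constant kernel matrix (all entries equal) gives exactly zero, since the rows of L sum to zero; this is a quick sanity check on the construction.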

About kernels (notes from Zhihu):

It is proved that the learned kernel matrix K is universal.

MVU (Maximum Variance Unfolding): the kernel-matrix learning step is formulated as a semidefinite program in the spirit of MVU.


Inspiration
To compare two different distributions, we project both into a latent space induced by a kernel matrix; examining their difference in this shared space is more meaningful than comparing them in their original feature spaces.
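Once a kernel matrix K has been learned, one way to realize this projection (a sketch, assuming K is PSD; the function name is mine, not from the paper) is to recover latent coordinates from K's top eigenpairs, as in the kernel-PCA step:

```python
import numpy as np

def embed_from_kernel(K, dim):
    """Recover `dim`-dimensional latent coordinates from a PSD kernel
    matrix K via its largest eigenpairs (the kernel-PCA step)."""
    vals, vecs = np.linalg.eigh(K)            # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:dim]      # keep the `dim` largest
    vals, vecs = vals[order], vecs[:, order]
    # Scale eigenvectors so that Z @ Z.T approximates K.
    return vecs * np.sqrt(np.clip(vals, 0.0, None))

# Sanity check on a kernel with known structure: K = X X^T is PSD with
# rank <= 2, so a 2-D embedding should reproduce it exactly.
X = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
K = X @ X.T
Z = embed_from_kernel(K, 2)
print(np.allclose(Z @ Z.T, K))  # True: the embedding preserves the kernel
```

After this step, the source rows of Z carry labels and the target rows do not, so any standard supervised learner can be trained on the former and applied to the latter.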
