
Paper Note --- Transfer Learning via Dimensionality Reduction


Paper Background

Proceedings of the 23rd AAAI Conference on Artificial Intelligence (AAAI 2008)

  1. Sinno Jialin Pan
  2. James T. Kwok
  3. Qiang Yang

Hong Kong University of Science and Technology


This paper introduces a dimensionality-reduction approach that learns a latent feature space in which the distance between the source-domain and target-domain data distributions is minimized. Once the two domains are aligned in this latent space, standard learning algorithms can be trained on it directly.

The derivation goes through four steps:

  1. the empirical estimate of the distance between P and Q defined by MMD;
  2. the MMD rewritten in terms of the kernel function k;
  3. the MMD of the data in the latent space F;
  4. the same quantity rewritten in terms of the kernel matrix K defined by k.
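Written out (a reconstruction from the standard MMD definitions; the notation X = {x_1, ..., x_{n1}} for the source sample, Y = {y_1, ..., y_{n2}} for the target sample, and the feature map φ induced by k is mine):

```latex
% Empirical MMD between the source sample X and the target sample Y
\mathrm{Dist}(X, Y) = \Big\| \frac{1}{n_1}\sum_{i=1}^{n_1}\phi(x_i)
                           - \frac{1}{n_2}\sum_{i=1}^{n_2}\phi(y_i) \Big\|_{\mathcal{H}}

% Squared MMD expanded with the kernel k(a, b) = \langle\phi(a), \phi(b)\rangle
\mathrm{Dist}^2(X, Y) = \frac{1}{n_1^2}\sum_{i,j} k(x_i, x_j)
                      + \frac{1}{n_2^2}\sum_{i,j} k(y_i, y_j)
                      - \frac{2}{n_1 n_2}\sum_{i,j} k(x_i, y_j)

% Compact form on the combined kernel matrix K, where
% L_{ij} = 1/n_1^2 if i and j are both source points,
%          1/n_2^2 if both are target points, and -1/(n_1 n_2) otherwise
\mathrm{Dist}^2(X, Y) = \operatorname{tr}(K L)
```

The same tr(KL) expression applies in the latent space F, with K built from the embedded points.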

According to this formula, what we learn is the kernel matrix K rather than the universal kernel k itself. However, we must still ensure that the learned kernel matrix corresponds to a universal kernel.
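For intuition, here is a minimal NumPy sketch that evaluates the tr(KL) objective with a fixed RBF kernel (the function names and the choice of kernel are mine; in the paper K is treated as a variable and learned rather than computed from a fixed k):

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """RBF kernel matrix between the rows of A and B."""
    sq_dist = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq_dist)

def mmd_squared(X_src, X_tgt, gamma=1.0):
    """Squared MMD computed as tr(KL) on the combined kernel matrix."""
    n1, n2 = len(X_src), len(X_tgt)
    Z = np.vstack([X_src, X_tgt])     # combined source + target sample
    K = rbf_kernel(Z, Z, gamma)       # (n1+n2) x (n1+n2) kernel matrix
    # Coefficient matrix L: 1/n1^2 on the source-source block,
    # 1/n2^2 on the target-target block, -1/(n1*n2) on the cross blocks.
    L = np.full((n1 + n2, n1 + n2), -1.0 / (n1 * n2))
    L[:n1, :n1] = 1.0 / n1**2
    L[n1:, n1:] = 1.0 / n2**2
    return np.trace(K @ L)
```

For two samples drawn from the same distribution the value is close to zero, and it grows as the two distributions drift apart.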


[Figure: background on kernels, quoted from a Zhihu answer]

The paper proves that the learned kernel matrix K corresponds to a universal kernel.


MVU (Maximum Variance Unfolding)

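As far as I can reconstruct it, the paper follows the MVU recipe and learns K directly as a semidefinite program: minimize the MMD term tr(KL) while preserving local pairwise distances and encouraging large variance, roughly

```latex
\begin{aligned}
\min_{K}\quad    & \operatorname{tr}(KL) \;-\; \lambda\,\operatorname{tr}(K) \\
\text{s.t.}\quad & K_{ii} + K_{jj} - 2K_{ij} = d_{ij}^2
                   \quad \text{for all neighbour pairs } (i, j), \\
                 & \textstyle\sum_{i,j} K_{ij} = 0 \quad \text{(centering)}, \\
                 & K \succeq 0,
\end{aligned}
```

where tr(K) is the MVU variance term and λ is a trade-off parameter. The low-dimensional embedding is then recovered from the leading eigenvectors of the learned K, and a standard classifier is trained on it.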

Inspiration

To compare two different distributions, we can map them into a latent space via a kernel matrix; measuring their discrepancy in that space is more meaningful than comparing them directly in the original feature space.
