
[Paper Reading] Model Extraction Attacks on Graph Neural Networks: Taxonomy and Realisation (2021)


This work studies model extraction (model stealing) attacks against graph neural network (GNN) models. While prior work has focused on models trained on Euclidean data such as images and text, attacks on GNN models, whose inputs combine graph structure and node features, had not been studied systematically. This paper presents the first comprehensive study and realisation of model extraction attacks against GNNs. Through systematic threat modelling, it categorises the adversary into seven threat types according to the background knowledge the attacker can access, and implements a concrete attack for each type. Experiments on three real-world datasets show that the extracted duplicate models reproduce 84%-89% of the original model's output predictions, offering a new perspective and methodological support for assessing the security of GNN models.
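As a rough illustration of the taxonomy, the seven threat types can be thought of as the non-trivial combinations of what the attacker knows. The first two dimensions below (node attributes and neighbour connections) come from the abstract; the third (an auxiliary shadow sub-graph) is an assumption about how seven, rather than three, types arise, and the `enumerate_threat_types` naming is purely illustrative, not the paper's notation.

```python
from itertools import product

# Hedged sketch: treat each threat type as a combination of attacker knowledge.
# "node_attributes" and "neighbor_connections" come from the abstract; the
# "shadow_subgraph" dimension is an assumption used to reach seven types
# (2^3 combinations minus the "attacker knows nothing" case).
DIMENSIONS = ("node_attributes", "neighbor_connections", "shadow_subgraph")

def enumerate_threat_types():
    """Yield every non-empty combination of attacker background knowledge."""
    for flags in product([False, True], repeat=len(DIMENSIONS)):
        if not any(flags):  # an attacker with no knowledge at all is excluded
            continue
        yield {dim: flag for dim, flag in zip(DIMENSIONS, flags)}

if __name__ == "__main__":
    for i, threat in enumerate(enumerate_threat_types()):
        known = [dim for dim, has in threat.items() if has]
        print(f"Threat type {i}: attacker knows {', '.join(known)}")
    # Prints seven threat types in total.
```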


Abstract

Machine learning models are challenged by a significant threat from model extraction attacks, where a service provider's well-trained private model can be compromised by an attacker masquerading as a client. Regrettably, prior research has concentrated on models within Euclidean space, such as images and texts, yet the extraction of Graph Neural Network (GNN) models remains an unexplored challenge. In this study, we present a comprehensive investigation and effective development of model extraction attacks against GNN models. We systematically formalize the threat landscape within the context of GNN model extraction and categorize adversarial threats into seven distinct types based on the varying background knowledge, including node attributes and/or neighbor connections, which the attacker can exploit to mount attacks. Furthermore, we detail the methods employed for each threat type to execute the attacks. Our experimental results demonstrate that our approach can effectively extract duplicated models, achieving an impressive 84% to 89% accuracy in preserving the same output predictions as the original model across three real-world datasets.

Tags: Graph-based Neural Networks, Model Stealing Attack

Method

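The generic extraction pipeline described in the abstract (query the victim GNN for predictions on a graph the attacker has assembled from its background knowledge, then train a duplicate model on those predictions) can be sketched roughly as follows. This is a minimal PyTorch Geometric illustration under assumed names (`victim_model`, `attacker_data`, the surrogate architecture, and the training schedule are all hypothetical), not the authors' implementation.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class SurrogateGCN(torch.nn.Module):
    """Two-layer GCN used as the attacker's duplicate of the victim model."""
    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, num_classes)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)

def extract_model(victim_model, attacker_data, hidden_dim=64, epochs=200):
    """Train a surrogate on the victim's predicted labels (hypothetical setup).

    `attacker_data` is a torch_geometric.data.Data object holding whatever
    graph the attacker has assembled from its background knowledge
    (node attributes and/or neighbour connections, possibly reconstructed).
    """
    with torch.no_grad():
        # Query the victim as a black box; only its predicted labels are kept.
        victim_logits = victim_model(attacker_data.x, attacker_data.edge_index)
        pseudo_labels = victim_logits.argmax(dim=1)

    surrogate = SurrogateGCN(attacker_data.num_features,
                             hidden_dim, victim_logits.size(1))
    optimizer = torch.optim.Adam(surrogate.parameters(),
                                 lr=0.01, weight_decay=5e-4)

    surrogate.train()
    for _ in range(epochs):
        optimizer.zero_grad()
        out = surrogate(attacker_data.x, attacker_data.edge_index)
        loss = F.cross_entropy(out, pseudo_labels)
        loss.backward()
        optimizer.step()

    # Fidelity: how often the surrogate reproduces the victim's predictions.
    surrogate.eval()
    with torch.no_grad():
        agree = (surrogate(attacker_data.x, attacker_data.edge_index)
                 .argmax(dim=1) == pseudo_labels).float().mean()
    return surrogate, agree.item()
```

Fidelity here means the agreement rate between the surrogate's and the victim's predictions, which is the sense in which the paper reports 84%-89% on its three datasets.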

Paper Link

Model Extraction Attacks on Graph Neural Networks: Taxonomy and Realisation
