A survey of visual analytics techniques for machine learning
Jun Yuan, Changjian Chen, Weikai Yang, Mengchen Liu, Jiazhi Xia, and Shixia Liu
Abstract
Visual analytics for machine learning has become a rapidly growing subfield of visualization. To identify promising research directions and provide practical guidance on applying the relevant techniques, we conducted a comprehensive review of 259 papers published over the past decade, together with key studies published before 2010. We developed a taxonomy that organizes this work into three primary categories: techniques before model building, techniques during model building, and techniques after model building. Each category is further characterized by its distinctive analysis tasks, illustrated with representative recent studies. We also discuss emerging challenges and highlight promising directions for future research in visual analytics.
Keywords
visual analytics; machine learning; data quality; feature selection; model interpretation
1 Introduction
The recent success of artificial intelligence relies on advances in machine learning [1]. Over the past decade alone, many visual analytics techniques have been developed to make machine learning more explainable, transparent, trustworthy, and reliable. By tightly integrating interactive visualization with state-of-the-art machine learning techniques, these efforts aim to analyze the major components of a machine learning system's lifecycle and ultimately improve its overall performance. For example, research on interpreting deep convolutional neural networks has substantially improved our understanding of how these models work internally, and has attracted sustained attention from both academia and industry [1-4].
The increasing popularity of visual analytics techniques in machine learning has led to an urgent need for a systematic overview of this domain, which can aid in comprehending how visualization techniques are conceptualized and implemented within machine learning pipelines. The field has seen various early attempts to summarize advancements from diverse perspectives. For instance, Liu et al. [5] focused on summarizing visualization methods for text analysis. Lu et al. [6] provided a survey on visual analytics techniques for predictive modeling. Recently, Liu et al. [1] presented a detailed analysis of machine learning models from a visual analytics viewpoint. Sacha et al. [7] examined multiple example systems and established an ontology for visual analytics-supported machine learning. However, existing surveys either concentrate on specific areas within machine learning (e.g., text mining [5], predictive modeling [6], model interpretation [1]) or aim to construct an ontology based solely on example techniques [7].
In this paper, we propose a systematic approach to explore visual analytics techniques in machine learning, with a focus on the entire machine learning pipeline. By examining works from the visualization community, we identify significant contributions from the AI field as well. Notably, several researchers have developed methods to enhance the interpretability of machine learning models. For instance, Selvaraju et al. [8] introduced techniques for identifying image regions that are most influential in classification outcomes. Readers interested in detailed surveys can refer to Zhang and Zhu [9] and Hohman et al. [3]. Over the past decade, we have compiled a comprehensive collection of 259 high-quality papers from leading venues. Based on the stages of machine learning—pre-construction, during construction, and post-construction—we categorize these studies into three main themes: data quality improvement before model development, model understanding and optimization during construction, and post-construction data analysis. Each theme is illustrated with representative examples that highlight key methodologies. This review identifies six critical research directions and unresolved challenges in the field of visual analytics for machine learning. We hope this work will stimulate further discussion among researchers and practitioners seeking to advance visual analytics tools for machine learning applications.
2 Survey landscape
2.1 Paper selection
In this paper, we center our investigation on visual analytics techniques that contribute to building explainable, trustworthy, and reliable machine learning applications. To examine these techniques in detail, we systematically reviewed papers from top-tier venues from 2010 to 2020: InfoVis, VAST, Vis (later SciVis), EuroVis, PacificVis, IEEE TVCG, CGF, and CG&A. The review was carried out by three Ph.D. candidates, each with over two years of experience in visual analytics research. Our methodology followed that of a text visualization survey [5]. Specifically, we first examined paper titles from these venues to identify candidate works. Next, we reviewed the abstracts of the candidates to further assess whether they focused on visual analytics techniques for machine learning. If the title and abstract were inconclusive, we read the full text to make the final determination. In addition to this systematic review, we also included representative earlier works and works from other venues, such as Profiler [10].
Following this process, we selected 259 papers. Table 1 presents detailed statistics for each category. The number of papers has risen significantly over the past ten years, reflecting the substantial research attention this field has attracted alongside the rapid growth of machine learning.
Table 1. Categories of visual analytics techniques for machine learning and representative works in each category (number of papers given in brackets)

| Phase | Technique category | Papers |
|---|---|---|
| Before model building | Improving data quality (31) | [10], [14]–[43] |
| Before model building | Improving feature quality (6) | [44]–[49] |
| During model building | Model understanding (30) | [50]–[79] |
| During model building | Model diagnosis (19) | [80]–[98] |
| During model building | Model steering (29) | [13], [99]–[126] |
| After model building | Understanding static data analysis results (43) | [127]–[169] |
| After model building | Understanding dynamic data analysis results (101) | [170]–[270] |
2.2 Taxonomy
In this section, we conduct a comprehensive analysis of the collected visual analytics works to identify key research trends. These works are classified according to a typical machine learning pipeline [11], which is designed to address real-world problems. As illustrated in Fig. 1, such a pipeline generally consists of three main phases: (1) data preprocessing before model building; (2) machine learning model building; and (3) deployment after model building. Accordingly, visual analytics techniques for machine learning fall into three corresponding categories: techniques before model building, techniques during model building, and techniques after model building.

Fig. 1 An overview of visual analytics research for machine learning.
2.2.1 Techniques before model building
The central objective of visual analytics techniques prior to model building is to assist model developers in preparing the data for effective modeling. The quality of the data is primarily influenced by its inherent characteristics and the features selected. Consequently, there are two primary research streams: visual analytics for enhancing data quality and feature engineering.
Data quality can be enhanced through multiple strategies, including completing missing data attributes and rectifying inaccurate data labels. Historically, these tasks were carried out either manually or by automated means such as learning-from-crowds algorithms [12], which infer true labels from noisy crowd-sourced annotations. To reduce experts' effort or improve the outcomes of automated approaches, several studies have employed visual analytics techniques to iteratively enhance data quality. Table 1 lists significant advancements in this domain over the past decade.
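Learning-from-crowds methods vary in sophistication, but the simplest baseline, majority voting over each item's noisy annotations, is easy to sketch. The `majority_vote` helper and data below are illustrative only, not the algorithm of Ref. [12]:

```python
from collections import Counter

def majority_vote(annotations):
    """Infer a label per item by majority vote over noisy worker labels.

    annotations: dict mapping item id -> list of worker-provided labels.
    Returns a dict mapping item id -> inferred label (ties broken arbitrarily).
    """
    return {item: Counter(labels).most_common(1)[0][0]
            for item, labels in annotations.items()}

# Hypothetical crowd annotations for three images.
votes = {
    "img1": ["cat", "cat", "dog"],
    "img2": ["dog", "dog", "dog"],
    "img3": ["cat", "bird", "cat"],
}
inferred = majority_vote(votes)
```

More elaborate approaches additionally estimate per-worker reliability and reweight votes accordingly, which is where visual analytics can help experts inspect and correct the inferred labels.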
The role of feature engineering is to select the most effective features for model training. For instance, in computer vision applications, HOG (Histogram of Oriented Gradients) features can be used instead of raw pixel data. Visual analytics makes feature selection an interactive process in which users engage through intuitive and iterative methods. With the advances of deep learning in recent years, feature extraction and feature design are now predominantly carried out by neural networks themselves. In line with this trend, research attention on this area has noticeably declined over the past few years (see Table 1).
2.2.2 Techniques during model building
Model construction is a pivotal phase in developing successful machine learning applications. Advancing visual analytics techniques to support model construction has become an increasingly prominent research direction in visualization (see Table 1). We classify existing methods by their analysis goals: model understanding, model diagnosis, and model steering. Model understanding methods provide visual insight into how models operate, for example by illustrating how parameter adjustments influence a model's behavior or why a specific input yields a particular output. Model diagnosis methods identify issues in model training through interactive exploration of the training process. Model steering methods enhance model performance through interactive means. For instance, UTOPIAN [13] enables users to interactively merge or split topics while automatically adjusting other topics accordingly.
2.2.3 Techniques after model building
Once a machine learning model has been developed and put into operation, it becomes essential to help users from various professional backgrounds comprehend its outputs in an accessible manner. This comprehension is key to building trust in the system's outputs. To this end, diverse visual analytics techniques have been devised for exploring and interpreting complex outputs across numerous domains. These techniques differ significantly from the model understanding techniques employed during model building: instead of explaining how models operate internally, they prioritize presenting outputs in intuitive ways that facilitate exploration. Given that many such approaches are either data-driven or application-driven, this survey groups them by their analysis focus.
3 Techniques before model building
In constructing a model, two essential tasks must be completed: data preprocessing and feature engineering. These tasks are crucial [271,272], since poor-quality data and features tend to degrade machine learning model performance. Data quality challenges often manifest as missing values in instances or their labels, as well as outliers or noise contamination. Feature quality challenges typically involve irrelevant or redundant features that can hinder model performance. Addressing these challenges manually is time-consuming, while fully automated methods are not always effective. To mitigate this issue, visual analytics techniques have been developed to reduce experts' burden while enhancing the effectiveness of automated approaches for generating high-quality data and features [303].
3.1 Improving data quality
A dataset comprises instances along with their corresponding labels [273]. From this viewpoint, existing efforts to improve data quality focus either on instance-level issues or on label-level issues.
3.1.1 Instance-level improvement

Fig. 2 OODAnalyzer, a method for identifying out-of-distribution samples and explaining them in context. Reproduced with permission from Ref. [21], © IEEE 2020.
Several challenges emerge when analyzing time-series data, because its temporal nature induces quality issues that must be analyzed in a temporal context. To address these challenges, Arbesser et al. [15] introduced Visplause, a visual analytics system for assessing the quality of time-series data. The system presents anomaly detection results, such as anomaly frequencies and their distribution over time, in a tabular format. To tackle scalability concerns, data are organized hierarchically based on metadata, enabling concurrent analysis of anomaly groups (e.g., abnormal time series sharing common characteristics). Additionally, KYE [23] enhances anomaly detection by identifying anomalies missed by automatic methods. Time-series data are displayed using heatmaps, where abnormal patterns (e.g., regions with unusually high values) highlight potential anomalies. Click stream data represent another significant category within time-series analysis. Segmentifier [22] was developed to improve the exploration and refinement of click stream data through an iterative segmentation process. Users can examine segments across three coordinated views at varying levels of detail and refine them using filtering, partitioning, and transformation techniques. Each refinement generates new segments for further analysis.
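As a rough illustration of the kind of automatic detection that systems like KYE complement, a sliding-window deviation test flags points that depart sharply from their recent context. This is a toy heuristic of our own, not any cited system's detector; `flag_anomalies` and its thresholds are assumptions:

```python
import statistics

def flag_anomalies(series, window=5, k=2.0):
    """Flag indices whose value deviates more than k standard deviations
    from the mean of the preceding `window` points."""
    flags = []
    for i in range(window, len(series)):
        ctx = series[i - window:i]
        mu = statistics.fmean(ctx)
        sd = statistics.pstdev(ctx)
        if sd > 0 and abs(series[i] - mu) > k * sd:
            flags.append(i)
    return flags

# A mostly flat signal with one spike at index 6.
series = [1.0, 1.1, 0.9, 1.0, 1.05, 1.0, 5.0, 1.0, 0.95, 1.02]
anomalies = flag_anomalies(series)
```

A heatmap view, as in KYE, would then let analysts spot abnormal regions that such point-wise rules miss.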
Addressing uncertainties in enhancing data quality, Bernard et al. [17] created a visual analytics platform designed to depict changes in data alongside uncertainties stemming from various preprocessing techniques. This platform empowers experts with an understanding of these methods' impacts and guidance in selecting appropriate approaches, allowing them to minimize task-irrelevant aspects while retaining crucial data components.
As data pose the risk of exposing sensitive information, several recent studies have concentrated on maintaining data privacy during the process of enhancing data quality. For tabular datasets, Wang et al. [41] introduced a Privacy Exposure Risk Tree to depict exposure risks inherent to such datasets and a Utility Preservation Degree Matrix to illustrate variations in utility under privacy-preserving operations. To safeguard privacy in network datasets, Wang et al. [40] developed an analytical system named GraphProtector. Node priorities are initially determined based on their significance, with important nodes assigned low priority settings to minimize potential modifications. Utilizing node priorities and utility metrics, users can evaluate and compare various privacy-preserving operations and select one that aligns with their expertise and knowledge.
3.1.2 Label-level improvement
Depending on whether the data already carry (possibly noisy) labels, existing studies fall into two main categories: those that improve the quality of noisy labels, and those that support interactive labeling.
Crowdsourcing provides a cost-effective way to collect labels. However, annotations provided by crowd workers are usually noisy [271,276]. Many methods have been proposed to remove noise in labels. Willett et al. [42] developed a crowd-assisted clustering method to remove redundant explanations provided by crowd workers. Explanations are clustered into groups, and the most representative ones are preserved. Park et al. [35] proposed C2A, which visualizes crowdsourced annotations and worker behavior to help doctors identify malignant tumors in clinical videos. Using C2A, doctors can discard most tumor-free video segments and focus on the ones most likely to contain tumors. To analyze the accuracy of crowdsourcing workers, Park et al. [34] developed CMed, which visualizes clinical image annotations made by crowd workers, along with their behavior. By clustering workers according to their annotation accuracy and analyzing their logged events, experts are able to find good workers and observe the effects of workers' behavior patterns. LabelInspect [31] improves crowdsourced labels by validating uncertain instance labels and unreliable workers. Three coordinated visualizations, a confusion visualization (see Fig. 3(a)), an instance visualization (see Fig. 3(b)), and a worker visualization (see Fig. 3(c)), facilitate the identification and validation of uncertain instance labels and unreliable workers. Based on expert validation, further instances and workers are recommended for validation in an iterative and progressive verification procedure.

Fig. 3 LabelInspect, an interactive tool for validating uncertain instance labels and identifying unreliable workers. Reproduced with permission from Ref. [31], © IEEE 2019.
Although the aforementioned methods can effectively improve crowdsourced labels, crowd information is not available in many real-world datasets. For example, the ImageNet dataset [277] only contains the labels cleaned by automatic noise removal methods. To tackle these datasets, Xiang et al. [43] developed DataDebugger to interactively improve data quality by utilizing user-selected trusted items. Hierarchical visualization combined with an incremental projection method and an outlier biased sampling method facilitates the exploration and identification of trusted items. Based on these identified trusted items, a data correction algorithm propagates labels from trusted items to the whole dataset. Paiva et al. [33] assumed that instances misclassified by a trained classifier were likely to be mislabeled instances. Based on this assumption, they employed a Neighbor Joining Tree enhanced by multidimensional projections to help users explore misclassified instances and correct mislabeled ones. After correction, the classifier is refined using the corrected labels, and a new round of correction starts. Bäuerle et al. [16] developed three classifier-guided measures to detect data errors. Data errors are then presented in a matrix and a scatter plot, allowing experts to reason about and resolve errors.
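The propagation step in such trusted-item workflows can be approximated very crudely: assign each instance the label of its nearest trusted item. This is a 1-nearest-neighbor stand-in for DataDebugger's actual correction algorithm; the function name and data are hypothetical:

```python
def propagate_labels(points, trusted):
    """Assign each point the label of its nearest trusted item.

    points: list of (x, y) coordinates; trusted: dict index -> label.
    A crude stand-in for a label-propagation correction algorithm.
    """
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    labels = {}
    for i, p in enumerate(points):
        nearest = min(trusted, key=lambda t: dist2(p, points[t]))
        labels[i] = trusted[nearest]
    return labels

# Two small clusters; one trusted item per cluster.
pts = [(0, 0), (0, 1), (5, 5), (6, 5)]
trusted = {0: "A", 2: "B"}
labels = propagate_labels(pts, trusted)
```

Real systems propagate labels through learned similarity structures rather than raw coordinates, but the principle, trusted items anchoring corrections for the rest of the data, is the same.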
Most of the aforementioned methods start from a noisily labeled dataset; however, not all datasets come with such a label set. To address this challenge, many visual analytics methods have been proposed for interactive labeling. Reducing the labeling effort is one of the main goals of interactive labeling. To this end, Moehrmann et al. [32] used self-organizing map (SOM) visualizations to cluster similar images together, allowing users to label multiple similar images of the same class at once. This strategy was also adopted by Khayat et al. [28] to identify groups of social spambots with similar anomalous behavior, by Kurzhals et al. [29] for analyzing mobile eye-tracking data, and by Halter et al. [24] for annotating and analyzing dominant color strategies in films. Besides clustering similar items, other strategies such as filtering have been applied to discover data items of interest for labeling. In MediaTable [36], video segments and their attributes are presented in a tabular view; users can quickly locate similar video segments by filtering out irrelevant segments and sorting by attributes, and thus label multiple video segments of the same class at once. Stein et al. [39] provided a rule-based filtering engine to discover patterns of interest in soccer match videos.
Recently, to enhance the effectiveness of interactive labeling, various visual analytics methods have incorporated visualization techniques with machine learning approaches such as active learning. The concept of "interactive labeling" was first proposed by Höferlin et al. [26], which integrates human knowledge into active learning. Users are not only capable of querying instances and labeling them through active learning but also possess the ability to comprehend and guide machine learning models interactively. This concept is also utilized in text document retrieval [25], sequential data retrieval [30], trajectory classification [27], identifying relevant tweets [37], and argumentation mining [38]. For instance, Sperrle et al. [38] developed a language model for recommending text fragments in argumentation mining tasks, employing a layered visual abstraction to support five key analysis tasks required for text fragment annotation. Beyond developing interactive labeling systems, several empirical studies were conducted to evaluate their effectiveness. Bernard et al. [18] demonstrated the superiority of user-centered visual interactive labeling over model-centered active learning through experimental comparisons. Additionally, a quantitative analysis [19] examined user strategies for selecting samples during the labeling process, revealing that data-based approaches (e.g., clusters and dense areas) proved effective in early stages while model-based strategies (e.g., class separation) became more advantageous in later stages.
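The active-learning query step underlying many of these interactive labeling systems can be sketched with least-confidence sampling, which sends the instances the model is least sure about to the user. This is a generic baseline, not the selection criterion of any specific cited system:

```python
def least_confident(probabilities, n=2):
    """Pick the n unlabeled instances whose top predicted class
    probability is lowest (least-confidence uncertainty sampling)."""
    scored = sorted(range(len(probabilities)),
                    key=lambda i: max(probabilities[i]))
    return scored[:n]

# Predicted class distributions for four unlabeled instances.
probs = [
    [0.95, 0.05],   # confident
    [0.55, 0.45],   # uncertain
    [0.50, 0.50],   # most uncertain
    [0.80, 0.20],
]
query = least_confident(probs)
```

The empirical findings cited above suggest when such model-based selection pays off: data-based strategies help early, while uncertainty-driven queries like this become more effective once the model is reasonably trained.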
3.2 Improving feature quality
A common approach to enhancing feature quality involves selecting the useful features that contribute most to accurate predictions, referred to as feature selection [278]. A typical strategy for feature selection is to identify a subset of informative features that minimize redundancy among themselves while maximizing their relevance to target attributes, such as instance classes [46]. Along this line, several methods have been developed to analyze and manage feature redundancy and relevance interactively. For instance, Seo and Shneiderman [48] introduced a framework that ranks features based on their relevance, employing tables and matrices to visually represent the ranking results. Building on this foundation, Ingram et al. [44] developed DimStiller, a visual analytics system enabling users to explore feature relationships interactively while eliminating irrelevant or redundant features. May et al. [46] proposed SmartStripes, which allows users to select different feature subsets for different data subsets, using matrix-based layouts to display the relevance and redundancy of these features. Mühlbacher and Piringer [47] advanced this concept with partition-based visualizations for exploring feature relevance at varying levels of detail. Tam et al. [49] employed parallel coordinates to identify discriminative features across different clusters. Finally, Krause et al. [45] compared feature rankings across multiple feature selection algorithms, cross-validation folds, and classification models; users can interactively select the best combination of features and models for enhanced performance.
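The relevance-versus-redundancy trade-off that several of these systems visualize can be sketched with a greedy, correlation-based selection loop. This is an mRMR-style proxy of our own, not the scoring used by any particular cited system:

```python
import numpy as np

def greedy_mrmr(X, y, k):
    """Greedily pick k features maximizing |corr(feature, y)| (relevance)
    minus the mean |corr| with already-selected features (redundancy).
    A correlation-based proxy for the relevance/redundancy trade-off."""
    relevance = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        scores = {j: relevance[j] - np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                                             for s in selected])
                  for j in range(X.shape[1]) if j not in selected}
        selected.append(max(scores, key=scores.get))
    return selected

# Toy data: feature 1 duplicates feature 0; feature 2 is orthogonal to it.
X = np.array([[1, 1, 1], [2, 2, -2], [3, 3, 0], [4, 4, 2], [5, 5, -1]], float)
y = X[:, 0] + X[:, 2]   # both feature 0 and feature 2 carry signal
chosen = greedy_mrmr(X, y, 2)
```

The duplicate feature is rejected despite its high relevance, which is exactly the kind of trade-off that matrix-based views like SmartStripes let users inspect and override.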
In addition to selecting existing features, constructing new ones also contributes significantly to model development. For instance, FeatureInsight [279] supports building new features for text classification tasks. By visually analyzing classifier errors and identifying their underlying causes, users can devise new features that distinguish misclassified instances. FeatureInsight uses visual summaries to support collective analysis of multiple error cases rather than examining individual instances, which helps users construct features that generalize better.
4 Techniques during model building
Machine learning models are typically perceived as black boxes due to their limited interpretability, thereby restricting their practical applications in high-stakes domains like autonomous vehicles and financial investments. Current visual analytics techniques in model construction aim to investigate how to elucidate the underlying mechanisms of machine learning models, subsequently assisting model developers in constructing well-founded models. Initially, model developers require a comprehensive understanding of models to eliminate the time-consuming trial-and-error process. When the training process encounters failure or yields unsatisfactory performance, model developers must diagnose issues arising during training. Finally, there is a growing need for assistance in model steering since a significant amount of time is invested in enhancing model performance throughout the development process. To address these requirements, researchers have developed numerous visual analytics methods to enhance model understanding, diagnosis, and steering [1,2].
4.1 Model understanding
Research efforts pertaining to model comprehension can be categorized into two main types: those investigating how parameters influence outcomes, and those analyzing how models operate.
4.1.1 Understanding the effects of parameters
One way to understand a model is to observe how its outputs change as its parameters vary. For example, Ferreira et al. developed BirdVis, a tool for exploring the relationships between parameter configurations and model outputs in the context of bird occurrence prediction; it also reveals correlations among the parameters of the predictive model. Zhang et al. proposed a visual method to show how variables influence the statistical indicators of a logistic regression model.
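A minimal version of such parameter-effect analysis is a sweep that records one model statistic per parameter setting, here the coefficient norm of a closed-form ridge regression. This toy setup is our own, not the workflow of the tools above:

```python
import numpy as np

def ridge_coef_norm(X, y, lam):
    """Closed-form ridge regression; return the L2 norm of the fitted
    coefficients, one simple statistic to plot against the parameter."""
    d = X.shape[1]
    w = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
    return float(np.linalg.norm(w))

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)

# Sweep the regularization strength and watch the model statistic change.
sweep = {lam: ridge_coef_norm(X, y, lam) for lam in (0.0, 1.0, 10.0, 100.0)}
```

Plotting such a sweep (here, coefficient norms shrinking as regularization grows) is the simplest form of the parameter-to-output views these systems generalize to many parameters at once.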
4.1.2 Understanding model behaviours
Another key aspect is the mechanism by which the model generates the desired outputs. Methods fall into three main types: network-centric, instance-centric, and hybrid. Network-centric methods investigate how different parts of the model, such as neurons or layers in convolutional neural networks (CNNs), interact with each other to produce the final outputs. Earlier studies utilized directed graph layouts to visualize neural network structures [280], but visual clutter becomes increasingly problematic as network complexity grows. To address this challenge, Liu et al. [62] developed CNNVis, a tool that visualizes deep CNNs (see Fig. 4). It clusters neurons with similar roles, along with their connections, effectively reducing the visual clutter caused by their large numbers. Consequently, experts can better comprehend the roles of individual neurons and their learned features, as well as how low-level features are aggregated into higher-level ones through the network structure. Later, Wongsuphasawat et al. [77] introduced a graph visualization method for exploring machine learning architectures in TensorFlow [281]. They applied various graph transformations to derive an interpretable interactive layout from a low-level dataflow graph, revealing the model's high-level structure in an accessible manner.

Fig. 4 CNNVis, a network-centric visual analysis method for understanding deep convolutional neural networks with millions of neurons and connections. Reproduced with permission from Ref. [62], © IEEE 2017.
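The neuron-clustering step that CNNVis relies on can be sketched by running k-means over neuron activation vectors collected on a probe set. This is a toy stand-in; CNNVis's actual clustering procedure and data differ:

```python
import numpy as np

def kmeans(acts, k, iters=20, seed=0):
    """Cluster neurons by their activation vectors over a probe set,
    grouping neurons with similar roles (toy k-means sketch)."""
    rng = np.random.default_rng(seed)
    centers = acts[rng.choice(len(acts), k, replace=False)]
    for _ in range(iters):
        # Assign each neuron to its nearest cluster center.
        assign = np.argmin(((acts[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            if np.any(assign == c):
                centers[c] = acts[assign == c].mean(axis=0)
    return assign

# Rows: neurons; columns: activations on probe inputs.
# Neurons 0-1 behave alike, as do neurons 2-3.
acts = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
groups = kmeans(acts, 2)
```

Drawing one glyph per cluster instead of one per neuron is what keeps the visualization legible at the scale of millions of neurons.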
Instance-centric methods offer per-instance analysis and exploration, and examine the relationships among instances. Rauber et al. [69] presented the representations learned at each neural network layer as 2D scatterplots. Users can identify clusters and confusion zones in these projections, which enables them to understand the structure of the representation space established by the network. They can also analyze how this space evolves during training to gain insights into the network's learning behavior. Several visual analytics techniques for RNNs employ similar instance-centric approaches. LSTMVis [73], developed by Strobelt et al., utilized parallel coordinates to depict hidden states, facilitating the examination of changes in hidden states across texts. RNNVis [65], created by Ming et al., grouped hidden state units (each representing a dimension in an RNN's hidden state vector) into memory chips and words into word clouds. Their relationships were modeled as a bipartite graph, which supports sentence-level explanations of RNNs.
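The 2D projections underlying such scatterplot views can be sketched with PCA over one layer's activation vectors. PCA is a generic choice here; the cited works may use nonlinear projections such as t-SNE in practice:

```python
import numpy as np

def project_2d(reps):
    """Project high-dimensional layer activations to 2D with PCA,
    as one might do to scatterplot per-layer representations."""
    centered = reps - reps.mean(axis=0)
    # Principal directions = top right-singular vectors of the data.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T

# Hypothetical activations: four instances in a 3D representation space,
# varying mostly along one direction.
reps = np.array([[1.0, 1.0, 0.0],
                 [2.0, 2.0, 0.0],
                 [3.0, 3.0, 0.1],
                 [4.0, 4.0, 0.0]])
coords = project_2d(reps)
```

Coloring the projected points by class then makes cluster separation, and confusion zones where classes overlap, directly visible.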
Hybrid methods integrate the two aforementioned approaches and exploit their complementary strengths. In particular, instance-level analysis can be placed in the context of the network architecture, and such context helps users understand how the network works internally. For example, Hohman et al. [56] proposed Summit to identify the neurons that are important for model predictions and the connections between them. It summarizes inter-class activations with an embedding view and reveals important connections between neurons with an attribution graph view. Kahng et al. [59] proposed ActiVis, which presents the model structure with a computational graph and reveals the activation relationships among instances, subsets, and classes with a projected view, enabling effective visual analysis of the behavior of complex deep neural networks.
In recent years, researchers have made efforts to employ surrogate explainable models to interpret model behaviors. These methods offer significant advantages by eliminating the need for users to probe into the inner workings of the model itself. Consequently, such approaches are particularly valuable for individuals with limited or no machine learning expertise. By treating the classifier as a black-box framework, Ming et al. [66] successfully extracted rule-based insights from both the input and output data of the classifier. These insights were then systematically visualized through RuleMatrix, an interactive platform that empowers practitioners to explore the extracted rules in depth, thereby enhancing the interpretability of these models. Wang et al. [75] introduced DeepVID, a tool designed to generate visual interpretations for image classifiers. Given an image of interest, a deep generative model was first utilized to produce nearby samples. These generated samples were subsequently employed to train a simpler and more interpretable model, such as a linear regression classifier, which aids in elucidating how the original model arrives at its decisions.
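The surrogate idea can be sketched in a simplified form: perturb the input, query the black box, and fit a linear model whose weights indicate locally influential features. This LIME-style sketch is our own simplification; DeepVID's generative sampling is more involved:

```python
import numpy as np

def local_surrogate(black_box, x, n=500, scale=0.1, seed=0):
    """Fit a linear surrogate to a black-box classifier around x:
    sample nearby points, query the black box, and solve least squares.
    The surrogate's weights indicate locally influential features."""
    rng = np.random.default_rng(seed)
    samples = x + scale * rng.normal(size=(n, len(x)))
    targets = np.array([black_box(s) for s in samples])
    design = np.column_stack([samples, np.ones(n)])   # add intercept
    weights, *_ = np.linalg.lstsq(design, targets, rcond=None)
    return weights[:-1]

# A toy "black box" that really depends only on feature 0.
black_box = lambda v: 1.0 if v[0] > 0.5 else 0.0
w = local_surrogate(black_box, np.array([0.5, 0.5]))
```

The surrogate correctly attributes the decision to feature 0, and because it is a simple linear model, its weights can be shown directly to users with no machine learning expertise.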
4.2 Model diagnosis
Visual analysis methods for model diagnosis either analyze the training results or examine the training process.
4.2.1 Analyzing training results
Various tools have been developed to evaluate classifiers based on their performance [81,82,86,93].

Fig. 5 AEVis, a visual analytics tool for analyzing adversarial samples. It presents the diverging and merging patterns of the extracted datapaths with a river-based visualization, and highlights critical features through layer-level analysis. Reproduced with permission from Ref. [84], © IEEE 2020.
4.2.2 Analyzing training dynamics
Recent research efforts have focused on studying the training dynamics of machine learning models. These techniques aim to assist experts in diagnosing issues that arise during the training process. For instance, DGMTracker [89] provides tools to help identify failure reasons in deep generative models by employing a blue-noise polyline sampling algorithm, which maintains both outliers and major trends in training dynamics. Additionally, it uses a credit assignment mechanism to uncover neuron interactions for better diagnosis of failure propagation. Attention has also been directed towards diagnosing reinforcement learning processes. Wang et al. [96] introduced DQNViz for analyzing deep Q-networks in Breakout games, offering an overview through line charts and stacked area charts that display changes in overall statistics during training. At a detailed level, DQNViz employs segment clustering and pattern mining algorithms to help experts detect common and suspicious patterns in event sequences from agents interacting with Q-networks. Furthermore, He et al. [87] developed DynamicsExplorer to diagnose LSTM networks trained for controlling ball-in-maze tasks. This tool visualizes ball trajectories via a trajectory variability plot and clusters them using parallel coordinates plots to quickly identify where training failures occur.
4.3 Model steering
There are two primary approaches for model steering: fine-tuning the model using human expertise, and selecting an optimal model from an ensemble of models.
4.3.1 Model refinement with human knowledge
Several visual analytics techniques have been developed to keep users in the loop of the model refinement process through flexible interactions.
With these visual analytics techniques, users can directly refine the target model.
Besides direct model updates, users can also correct flaws in the results or provide extra knowledge, allowing the model to be updated implicitly to produce improved results based on human feedback. Several works have focused on incorporating user knowledge into topic models to improve their results [13,105,106,109,124,125]. For instance, Yang et al. [125] presented ReVision, which allows users to steer hierarchical clustering results by leveraging an evolutionary Bayesian rose tree clustering algorithm with constraints. As shown in Fig. 6, the constraints and the clustering results are displayed with an uncertainty-aware tree-based visualization to guide the steering of the clustering results. Users can refine the constraint hierarchy by dragging; documents are then re-clustered based on the modified constraints. Other human-in-the-loop models have also stimulated the development of visual analytics systems to support such model refinement. For instance, Liu et al. [112] proposed MutualRanker, which uses an uncertainty-based mutual reinforcement graph model to retrieve important blogs, users, and hashtags from microblog data. It shows the ranking results, uncertainty, and its propagation with a composite visualization; users can examine the most uncertain items in the graph and adjust their ranking scores. The model is incrementally updated by propagating the adjustments throughout the graph.

Figure 6 ReVision, a visualization system that incorporates a constrained hierarchical clustering algorithm and an uncertainty-aware tree-based visualization, assists users in iteratively refining hierarchical topic modeling results. Reproduced with permission from Ref. [125], © IEEE 2020.
4.3.2 Model selection from an ensemble
Another model steering strategy aims to select the best model from an ensemble of models to meet specific requirements. Such model ensembles are common in cluster analysis [102,118,121] and regression analysis [99,103,113,119]. Clustrophile 2, a visual analytics system for cluster analysis, helps users choose appropriate input features and clustering parameters by recommending results. BEAMES is a tool designed for multimodel steering and selection in regression tasks. It creates a set of regression models by varying the algorithms and their hyperparameters, and further improves model performance through interactive weighting of data instances and interactive feature selection and weighting. Users can evaluate these models and select the best one according to performance metrics such as residual scores and mean squared error.
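The build-an-ensemble-then-select workflow supported by systems such as BEAMES can be approximated in a few lines. The sketch below is a minimal stand-in using scikit-learn: the synthetic data, the choice of Ridge/Lasso, and the hyperparameter grid are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge, Lasso
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))
y = X @ np.array([1.5, -2.0, 0.0, 0.5, 0.0]) + rng.normal(scale=0.1, size=300)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

# Build an ensemble by varying the algorithm and regularization strength,
# then rank candidates by held-out mean squared error.
candidates = {f"{cls.__name__}(alpha={a})": cls(alpha=a)
              for cls in (Ridge, Lasso) for a in (0.01, 0.1, 1.0)}
scores = {name: mean_squared_error(y_te, m.fit(X_tr, y_tr).predict(X_te))
          for name, m in candidates.items()}
best = min(scores, key=scores.get)
print(best, scores[best])
```

A visual analytics system layers interaction on top of exactly this loop: instance weighting changes the fit, and the score table becomes a sortable view for model selection.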
5 Techniques after model building
For existing efforts after model building, the goal of visual analytics is to help users understand and derive insights from model outputs, such as high-dimensional data [5,283]. Because most of these approaches are data-centric, the classification framework distinguishes techniques by whether they analyze static or dynamic data: the temporal dimension of the data plays a pivotal role in shaping visualization strategies. A system for static data typically processes all output variables collectively while preserving their structure. A system for dynamic data, in contrast, must not only interpret results at individual time steps but also capture longitudinal trends through visualizations that highlight progression over time; the latter requires modeling capabilities to identify patterns and trends in evolving data.
5.1 Understanding static data analysis results
We review approaches for understanding static data analysis results by data type. Most studies focus on textual data, while fewer explore other data types.
5.1.1 Textual data analysis
Visual text analytics has long been one of the most extensively studied areas, tightly integrating interactive visualization with text mining techniques (such as document clustering, topic analysis, and word embedding) to enable users to gain deeper insights into large textual corpora [5].
Some early works relied on simple visualizations to directly present the results of classic text mining techniques. For example, Görg et al. developed a multi-view visualization system consisting of a list view, a cluster view, a word cloud, a grid view, and a document view, which intuitively presents the results of document summarization, document clustering, sentiment analysis, entity recognition, and recommendation. By combining interactive visualization with text mining techniques, the system offers users a smooth and informative exploration environment.
Most later research has focused on combining well-designed interactive visualization with state-of-the-art text mining techniques, such as topic models and deep learning models, to provide deeper insights into textual data. To provide an overview of the relevant topics discussed in multiple sources, Liu et al. [159] first utilized a correlated topic model to extract topic graphs from multiple text sources. A graph matching algorithm is then developed to match the topic graphs from different sources, and a hierarchical clustering method is employed to generate hierarchies of topic graphs. Both the matched topic graph and hierarchies are fed into a hybrid visualization which consists of a radial icicle plot and a density-based node-link diagram (see Fig. 7(a)), to support exploration and analysis of common and distinctive topics discussed in multiple sources. Dou et al. [136] introduced DemographicVis to analyze different demographic groups on social media based on the content generated by users. An advanced topic model, latent Dirichlet allocation (LDA) [284], is employed to extract topic features from the corpus. Relationships between the demographic information and extracted features are explored through a parallel sets visualization [285], and different demographic groups are projected onto the two-dimensional space based on the similarity of their topics of interest (see Fig. 7(b)). Recently, some deep learning models have also been adopted because of their better performance. For example, Berger et al. [128] proposed cite2vec to visualize the latent themes in a document collection via document usage (e.g., citations). It extended a famous word2vec model, the skip-gram model [286], to generate the embedding for both words and documents by considering the citation information and the textual content together. 
The words are first projected into a two-dimensional space using t-SNE, and the documents are then projected onto the same space, taking both document-word and document-document relationships into account simultaneously.
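The extract-then-project pipeline common to these systems can be illustrated on a toy corpus. In the sketch below, scikit-learn's LDA and t-SNE stand in for the specific models of the cited works; the corpus and parameters are invented for illustration.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.manifold import TSNE

docs = [
    "neural network training loss gradient",
    "deep network layers gradient descent",
    "stock market price trading volume",
    "market investors price shares trading",
    "soccer match goal team players",
    "team players season match coach",
]
# Topic extraction: bag-of-words counts followed by LDA.
counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=3, random_state=0)
doc_topics = lda.fit_transform(counts)   # one topic mixture per document

# Project the topic mixtures to 2D for display, as many systems do with t-SNE.
coords = TSNE(n_components=2, perplexity=2, random_state=0).fit_transform(doc_topics)
print(coords.shape)
```

In a full system, the 2D coordinates would feed a scatterplot in which nearby documents discuss similar topics.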

Figure 7 Examples of static text visualization. (a) TopicPanorama extracts topic graphs from various sources and shows the connections among them with a graph layout. Reproduced with permission from Ref. [159], © IEEE 2014. (b) DemographicVis assesses the similarity between different user groups based on their posted content, using a t-SNE projection to reveal these connections. Reproduced with permission from Ref. [136], © IEEE 2015.
5.1.2 Other data analysis
Besides textual data, other data types have also attracted much research interest. Hong et al. [146] built an LDA model for flow field analysis by treating pathlines as documents and features as words. After modeling, the original pathlines and the extracted topics are projected onto a two-dimensional space with the help of multidimensional scaling, and several preview panels are generated to display the pathlines relevant to important topics. The recently developed visual analytics tool SMARTexplore [129] helps analysts find and interpret interesting patterns across dimensions. To this end, it tightly integrates a table-based visualization with pattern matching and subspace analysis.
5.2 Understanding dynamic data analysis results
Besides understanding static data analysis results, much effort has been devoted to understanding how the topics hidden in the data evolve over time. For example, by providing an overview of major public opinions on social media and how they change over time, a system can help politicians make timely decisions. Most existing work aims at understanding analysis results on corpora with timestamps. Depending on whether a system supports streaming data, existing work on visual dynamic data analysis can be further classified into offline analysis and online analysis. In offline analysis, all the data are available during the analysis, while online analysis handles data that continuously arrive during the analysis.
5.2.1 Offline analysis
Offline research can be categorized based on the analysis tasks: topic-based, event-based, and trajectory-based analyses.
Analyzing the evolution of topics within a large text corpus over time represents an important subject that has garnered considerable attention. Many existing studies employ a river metaphor to depict changes within the text corpus across time. ThemeRiver [204], for instance, stands as one of the pioneering efforts, employing the river metaphor to illustrate shifts in topic volumes. To better comprehend the content evolution within a document corpus, TIARA [220,248] utilizes an LDA model [287] to extract topics and uncover their temporal dynamics. However, merely observing volume changes and content dynamics is insufficient for complex analytical tasks where users aim to explore relationships between topics and their temporal evolutions. Consequently, subsequent research has concentrated on understanding topic relationships (such as topic splitting and merging) alongside their evolving patterns over time. For example, Cui et al. [190] initially extracted topic splitting and merging patterns from a document collection by employing an incremental hierarchical Dirichlet process model [288], subsequently developing a river metaphor augmented with well-structured glyphs to visually represent these topic relationships and their dynamic temporal changes. Xu et al. [259] leveraged a topic competition model to extract dynamic competition patterns between topics and investigate opinion leaders' impacts on social media. Sun et al. [238] extended this competition model into a "coopetition" (cooperation and competition) framework to elucidate intricate interactions among evolving topics. Wang et al. [246] proposed IdeaFlow, an interactive visual analytics system designed to uncover lead-lag relationships among various social groups over time. Nevertheless, these approaches employ a flat structure for modeling topics, which constrains their applicability in big data scenarios involving large-scale text corpora. 
Fortunately, there are emerging efforts that integrate hierarchical topic models with interactive visualization tools to facilitate comprehension of large text corpora's primary content. For instance, Cui et al. [191] extracted sequences of topic trees through an evolutionary Bayesian rose tree algorithm [289], followed by computation of tree cuts for each tree—these cuts are then utilized to approximate topic trees and display them within a river metaphor framework, thereby revealing dynamic relationships such as topic births, deaths, splittings, and mergers.
Event-based analysis aims to reveal semantically important sequential patterns in ordered event sequences [149,202,222,226]. Various visual analytics methods have been designed to facilitate the visual exploration of large-scale event sequences and the discovery of patterns within them. For example, Liu et al. developed a method for the visual analysis of clickstream data [2]. They converted clickstream data into an n-dimensional tensor and extracted latent linear segments ("threads") via tensor decomposition, dividing these segments into several stages. The segments are represented as segmented linear threads, and a line-based metaphor is used to show the changes between stages. EventThread was later extended to address the problem of fixed stage lengths [1]: the authors proposed an unsupervised stage analysis algorithm to effectively identify latent stages in event sequences.

Figure 8 TextFlow uses a river metaphor to represent the lifecycle events of topics, such as emergence, extinction, merging, and splitting. Reproduced with permission from Ref. [190], © IEEE 2011.
Other works focus on understanding movement data (e.g., GPS records) analysis results. Andrienko et al. [174] extracted movement events from trajectories and then performed spatio-temporal clustering for aggregation. These clusters are visualized using spatio-temporal envelopes to help analysts find potential traffic jams in the city. Chu et al. [189] adopted an LDA model for mining latent movement patterns in taxi trajectories. The movement of each taxi, represented by the traversed street names, was regarded as a document. Parallel coordinates were used to visualize the distribution of streets over topics, where each axis represents a topic, and each polyline represents a street. The evolution of the topics was visualized as topic routes that connect similar topics between adjacent time windows. More recently, Zhou et al. [269] treated origin-destination flows as words and trajectories as paragraphs, respectively. Therefore, a word2vec model was used to generate the vectorized representation for each origin-destination flow. t-SNE was then employed to project the embedding of the flows into two-dimensional space, where analysts can check the distributions of the origin-destination flows and select some for display on the map. Besides directly analyzing the original trajectory data, other papers try to augment the trajectories with auxiliary information to reduce the burden on visual exploration. Kruger et al. [212] clustered destinations with DBScan and then used Foursquare to provide detailed information about the destinations (e.g., shops, university, residence). Based on the enriched data, frequent patterns were extracted and displayed in the visualization (see Fig. 9); icons on the time axis help understand these patterns. Chen et al. [186] mined trajectories from geo-tagged social media and displayed keywords extracted from the text content, helping users explore the semantics of trajectories.
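The destination-clustering step used in such pipelines can be sketched with DBSCAN; the synthetic trip end points and the eps/min_samples settings below are illustrative assumptions, not the data of the cited works.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(2)
# Hypothetical trip end points (lon, lat) around two popular destinations.
hub_a = rng.normal(loc=[0.0, 0.0], scale=0.01, size=(40, 2))
hub_b = rng.normal(loc=[1.0, 1.0], scale=0.01, size=(40, 2))
noise = rng.uniform(-0.5, 1.5, size=(5, 2))
points = np.vstack([hub_a, hub_b, noise])

# DBSCAN groups dense destination areas and marks sparse points as noise (-1).
labels = DBSCAN(eps=0.05, min_samples=5).fit_predict(points)
n_clusters = len(set(labels) - {-1})
print(n_clusters)
```

The resulting clusters are the units that would then be enriched with semantic information (e.g., place categories) before visualization.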

Figure 9 illustrates the semantic enhancement of trajectory data by Kruger et al. In the spatial view (top), frequent routes and destinations are identified, while temporal patterns are extracted and displayed in the temporal view (bottom). This work is reproduced with permission from Ref. [212], © IEEE 2015.
5.2.2 Online analysis
To efficiently process streaming data, especially text streams, practical applications often require specific methods and techniques to achieve accurate data parsing and real-time feedback.
6 Research opportunities
Despite the substantial achievements of visual analytics research for machine learning, both in academic settings and in real-world applications, significant long-term research challenges remain. In this section, we highlight key challenges and discuss emerging research opportunities.
6.1 Opportunities before model building
6.1.1 Improving data quality for weakly supervised learning
Weakly supervised learning builds models from data that suffer from quality issues.
Additionally, although imprecise data is prevalent in real-world applications [292], it has received limited attention in the visual analytics community.
6.1.2 Explainable feature engineering
Most existing studies on improving feature quality focus on tabular or textual data used by conventional machine learning models. Because such features are inherently interpretable, the feature engineering process is relatively straightforward. However, features extracted by deep neural networks generally outperform handcrafted ones [295,296]. Since deep neural networks operate as black boxes, these features are hard to interpret, which poses several challenges for feature engineering.
First, deep features are extracted in a data-driven way, and thus may fail to adequately represent the original images or videos when the dataset is biased towards certain characteristics, such as color or texture. For instance, consider a dataset containing only dark dogs and light cats: the extracted features might emphasize color differences while overlooking other distinguishing characteristics such as face shape or ear structure. Without a clear understanding of such biases in feature extraction, it is difficult to address them appropriately. An intriguing avenue for future research is therefore to employ interactive visualization to uncover the reasons behind feature bias. The main challenge lies in quantifying the information retained or lost during feature extraction and presenting these findings in an accessible manner.
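This kind of shortcut learning can be reproduced with a small synthetic experiment. In the sketch below, "brightness" and "ear shape" are invented stand-in features: a classifier trained on the biased data performs almost perfectly on it, but drops towards chance once brightness no longer correlates with the label.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 400
# Biased training set: "dogs" (label 1) are dark, "cats" (label 0) are light,
# so brightness alone separates the classes; ear shape is only weakly informative.
brightness = np.r_[rng.normal(-2, 0.5, n // 2), rng.normal(2, 0.5, n // 2)]
ear = rng.normal(loc=np.r_[np.ones(n // 2), np.zeros(n // 2)], scale=1.0)
X = np.c_[brightness, ear]
y = np.r_[np.ones(n // 2), np.zeros(n // 2)]
clf = LogisticRegression().fit(X, y)

# Unbiased test set: brightness no longer correlates with the label,
# so a color-reliant model loses most of its accuracy.
bright_test = rng.normal(0, 0.5, n)
ear_test = rng.normal(loc=np.r_[np.ones(n // 2), np.zeros(n // 2)], scale=1.0)
X_test = np.c_[bright_test, ear_test]
y_test = np.r_[np.ones(n // 2), np.zeros(n // 2)]
print(clf.score(X, y), clf.score(X_test, y_test))
```

Surfacing exactly this gap between in-distribution and shifted accuracy, per feature, is one concrete thing a bias-oriented visual analytics tool could show.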
Moreover, redundancies are present in the extracted deep features [297]. Removing redundant features brings multiple advantages, including reduced storage needs and enhanced generalizability [278]. Without a clear grasp of the precise meaning of each feature, however, it is hard to determine whether a particular feature is redundant. An intriguing future research direction is therefore a visual analytics framework designed to convey feature redundancies in an understandable manner, empowering experts not only to understand redundant features but also to interactively remove them.
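As a first step towards such a framework, candidate redundancies can be surfaced automatically. The sketch below uses absolute linear correlation as an admittedly simplistic redundancy measure on synthetic features; real deep features would require richer measures.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1000
f0 = rng.normal(size=n)
f1 = f0 * 2.0 + rng.normal(scale=0.01, size=n)   # nearly duplicates f0
f2 = rng.normal(size=n)                          # independent feature
features = np.c_[f0, f1, f2]

# Flag feature pairs whose absolute correlation exceeds a threshold;
# one member of each flagged pair is a candidate for removal.
corr = np.corrcoef(features, rowvar=False)
redundant = [(i, j) for i in range(corr.shape[0])
             for j in range(i + 1, corr.shape[1])
             if abs(corr[i, j]) > 0.95]
print(redundant)
```

A visualization layer would then let experts inspect each flagged pair before deciding which feature to drop.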
6.2 Opportunities during model building
6.2.1 Online training diagnosis
Existing visual analytics tools for model diagnosis operate predominantly offline: diagnostic information is collected after training is complete. These solutions have proven effective in identifying the root causes of training failures. However, as modern machine learning models grow increasingly complex, training can take days or weeks, and offline diagnosis significantly limits the ability of visual analytics to support real-time model development. Consequently, there is an urgent need for online diagnostic tools that improve training efficiency by enabling anomaly detection and prompt adjustment, substantially reducing the time spent on trial-and-error refinement. The primary challenge of online diagnosis is timely anomaly identification during training. While developing algorithms that detect anomalies both accurately and in real time remains difficult, interactive visualization offers a promising way to pinpoint potential errors. Unlike offline methods, which process static diagnostic data, online analysis must handle streaming information incrementally. Therefore, a progressive approach is needed to generate insightful visualizations from partial data streams during ongoing training. Such techniques would help practitioners monitor live training runs, provide early insight into potential problems, and allow issues to be addressed promptly.
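A minimal form of such online anomaly flagging is a rolling z-score monitor over the streaming loss. The window size, threshold, and simulated loss curve below are illustrative; a real system would track many statistics progressively.

```python
from collections import deque

def detect_anomalies(loss_stream, window=20, threshold=4.0):
    """Flag loss values that deviate strongly from the recent rolling mean,
    a minimal stand-in for online training diagnosis."""
    recent = deque(maxlen=window)
    flagged = []
    for step, loss in enumerate(loss_stream):
        if len(recent) == recent.maxlen:
            mean = sum(recent) / len(recent)
            var = sum((x - mean) ** 2 for x in recent) / len(recent)
            std = var ** 0.5 or 1e-9   # guard against a zero-variance window
            if abs(loss - mean) / std > threshold:
                flagged.append(step)
        recent.append(loss)
    return flagged

# A smoothly decaying loss with one injected spike (e.g., an exploding gradient).
stream = [1.0 / (1 + 0.05 * t) for t in range(100)]
stream[60] = 50.0
print(detect_anomalies(stream))
```

Because the monitor only keeps a bounded window, it processes the stream incrementally, which is the property an online diagnosis system needs.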
6.2.2 Interactive model refinement
Recent studies have delved into the potential of uncertainty to enhance interactive model refinement [106,112,124,125]. Various approaches exist to assign uncertainty scores to model outputs (for instance, based on confidence scores generated by classifiers), and visual cues can assist users in focusing on model outputs with high uncertainty. After user refinement, model uncertainty is recalculated, enabling iterative refinement until users are satisfied. Furthermore, additional resources can offer users more informed guidance to streamline the process of refining models effectively. However, there remains significant potential for researchers to explore interactive model refinement. One promising direction involves leveraging feedback from previous iterations to inform subsequent guidance. For example, in a clustering application, users might establish must-link or cannot-link constraints between certain data pairs, which can then be used to instruct a model to adjust its cluster assignments in intermediate results. Additionally, incorporating prior knowledge can help identify areas where refinements are needed. For instance, model outputs that conflict with established public or domain knowledge (especially in unsupervised models like nonlinear matrix factorization and latent Dirichlet allocation for topic modeling) should be addressed during the refinement process. Consequently, such knowledge-driven strategies aim to uncover unreasonable outcomes from models, empowering users to refine them by imposing constraints.
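The must-link/cannot-link mechanism can be made concrete with a small helper that surfaces the constraints a clustering violates; these violations are natural candidates to show the user for refinement. The labels and constraint pairs below are invented for illustration.

```python
def violated_constraints(labels, must_link, cannot_link):
    """Return the constraint pairs a clustering violates; such violations
    are the items to surface for user-driven refinement."""
    bad = [(i, j) for i, j in must_link if labels[i] != labels[j]]
    bad += [(i, j) for i, j in cannot_link if labels[i] == labels[j]]
    return bad

labels = [0, 0, 1, 1, 2]
must_link = [(0, 1), (1, 2)]      # pairs the user says belong together
cannot_link = [(3, 4), (2, 3)]    # pairs the user says must be separated
print(violated_constraints(labels, must_link, cannot_link))
```

A constrained clustering algorithm would then re-cluster while honoring the user's constraints, closing the refinement loop.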
6.3 Opportunities after model building
6.3.1 Understanding multi-modal data
Existing studies in content analysis have achieved significant success in analyzing single-modal data, such as text, images, and videos. Nevertheless, real-world applications frequently involve multi-modal data that combines diverse content forms, including text, audio, and imagery. For instance, a physician examines a patient by integrating multiple data sources, such as medical records (text), lab reports (tables), and CT scans (images). Analyzing such multi-modal data is challenging because simply combining insights from single-modal models fails to capture the intricate relationships between modalities. Consequently, employing multi-modal machine learning methods and leveraging their ability to uncover patterns across varied data types becomes increasingly promising. To enhance understanding of the outputs of these multi-modal learning models, more capable visual analytics systems are needed to interpret the results comprehensively. Various machine learning models have been developed to learn joint representations of multi-modal data [298,299]. Moving forward, exploring effective ways to visualize these joint representations across all modalities would significantly improve comprehension of the underlying data and their interconnections. Additionally, classic multi-modal tasks can facilitate more natural interactions in visual analytics. For example, in the vision-language setting, tasks such as visual grounding (identifying image regions corresponding to textual descriptions) can provide intuitive interfaces for image retrieval using natural language queries.
6.3.2 Analyzing concept drift
In real-world applications, it is often assumed that the mapping from input data to output values (e.g., prediction labels) is static. However, as data continues to arrive, the mapping between the input data and output values may change in unexpected ways [300]. In such a situation, a model trained on historical data may no longer work properly on new data. This usually causes noticeable performance degradation when the application data does not match the training data. Such a non-stationary learning problem over time is known as concept drift. As more and more machine learning applications directly consume streaming data, it is important to detect and analyze concept drift and minimize the resulting performance degradation [301,302]. In the field of machine learning, three main research topics have been studied: drift detection, drift understanding, and drift adaptation. Machine learning researchers have proposed many automatic algorithms to detect and adapt to concept drift. Although these algorithms can improve the adaptability of learning models in an uncertain environment, they only provide a numerical value to measure the degree of drift at a given time. This makes it hard to understand why and where drift occurs. If the adaptation algorithms fail to improve the model performance, the black-box behavior of the adaptation models makes it difficult to diagnose the root cause of performance degradation. As a result, model developers need tools that intuitively illustrate how data distributions change over time, which samples cause drift, and how the training samples and models can be adjusted to overcome such drift. This requirement naturally leads to a visual analytics paradigm in which experts interact and collaborate with concept drift detection and adaptation algorithms by putting the human in the loop.
The major challenges here are how to (i) visually represent the evolving patterns of streaming data over time and effectively compare data distributions at different points in time, and (ii) tightly integrate such streaming data visualization with drift detection and adaptation algorithms to form an interactive and progressive analysis environment with the human in the loop.
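As a starting point, drift detection itself can be prototyped by comparing sliding windows of the stream against a reference window with a two-sample Kolmogorov-Smirnov test from SciPy. The simulated stream, window size, and significance threshold below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(4)
# Simulated feature stream whose distribution shifts at t = 500.
stream = np.r_[rng.normal(0, 1, 500), rng.normal(2, 1, 500)]

def detect_drift(stream, window=100, alpha=0.001):
    """Compare each sliding window against a reference window with a
    two-sample KS test; report the first window where they differ."""
    reference = stream[:window]
    for start in range(window, len(stream) - window, window):
        stat, p = ks_2samp(reference, stream[start:start + window])
        if p < alpha:
            return start
    return None

print(detect_drift(stream))
```

A visual analytics system would wrap such a detector with distribution views at the flagged time point, so users can see which samples caused the drift rather than only that it occurred.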
7 Conclusions
This paper has systematically analyzed the recent advancements in visual analytics techniques for machine learning. These techniques are categorized into three groups based on their analysis stages: pre-modeling, during-modeling, and post-modeling. Each category is elaborated through typical analysis tasks, and each task is illustrated with a representative set of works. By conducting a comprehensive review of existing visual analytics research in machine learning, we propose six emerging research directions, including enhancing data quality for weakly supervised learning and explainable feature engineering before model development, implementing online training diagnosis and intelligent model refinement during model building, and exploring multi-modal data understanding and concept drift analysis after model deployment. We trust this survey will offer an insightful overview of visual analytics research in machine learning, aiding comprehension of current knowledge and guiding future research endeavors.
Acknowledgements
This research was supported in part by the National Key Research and Development Program of China (Nos. 2018YFB100430 and 2019YFB1457), the National Natural Science Foundation of China (Nos. 6176A3B4C5, D5E7F8G9), TC19HFGI/JKL, the State Key Laboratory for …, Tsinghua University, and in part by a collaboration with the Tsinghua-Kuaishou Institute of Future Media Data.
References
[1]
Liu, S.; Wang, X.; Liu, M.; Zhu, J. Towards better analysis of machine learning models: A visual analytics perspective.
Visual Informatics Vol. 1, No. 1, 48-56, 2017.
[2]
Choo, J.; Liu, S. X. Visual analytics for explainable deep learning.
IEEE Computer Graphics and Applications Vol. 38, No. 4, 84-92, 2018.
[3]
Hohman, F.; Kahng, M.; Pienta, R.; Chau, D. H. Visual analytics in deep learning: An interrogative survey for the next frontiers.
IEEE Transactions on Visualization and Computer Graphics Vol. 25, No. 8, 2674-2693, 2019.
[4]
Zeiler, M. D.; Fergus, R. Visualizing and understanding convolutional networks.
In: Computer Vision - ECCV 2014. Lecture Notes in Computer Science, Vol. 8689. Fleet, D.; Pajdla, T.; Schiele, B.; Tuytelaars, T. Eds. Springer Cham, 818-833, 2014.
[5]
Liu, S.; Wang, X.; Collins, C.; Dou, W.; Ouyang, F.; El-Assady, M.; Jiang, L.; Keim, D. A. Bridging text visualization and mining: A task-driven survey.
IEEE Transactions on Visualization and Computer Graphics Vol. 25, No. 7, 2482-2504, 2019.
[6]
Lu, Y.; Garcia, R.; Hansen, B.; Gleicher, M.; Maciejewski, R. The state-of-the-art in predictive visual analytics.
Computer Graphics Forum Vol. 36, No. 3, 539-562, 2017.
[7]
Sacha, D.; Kraus, M.; Keim, D. A.; Chen, M. VIS4ML: An ontology for visual analytics assisted machine learning.
IEEE Transactions on Visualization and Computer Graphics Vol. 25, No. 1, 385-395, 2019.
[8]
Selvaraju, R. R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual explanations from deep networks via gradient-based localization.
International Journal of Computer Vision Vol. 128, 336-359, 2020.
[9]
Zhang, Q. S.; Zhu, S. C. Visual interpretability for deep learning: A survey.
Frontiers of Information Technology & Electronic Engineering Vol. 19, No. 1, 27-39, 2018.
[10]
Kandel, S.; Parikh, R.; Paepcke, A.; Hellerstein, J. M.; Heer, J. Profiler: Integrated statistical analysis and visualization for data quality assessment. In: Proceedings of the International Working Conference on Advanced Visual Interfaces, 547-554, 2012.
[11]
Marsland, S.
Machine Learning: An Algorithmic Perspective. Chapman and Hall/CRC, 2015.
[12]
Hung, N. Q. V.; Thang, D. C.; Weidlich, M.; Aberer, K. Minimizing efforts in validating crowd answers. In: Proceedings of the ACM SIGMOD International Conference on Management of Data, 999-1014, 2015.
[13]
Choo, J.; Lee, C.; Reddy, C. UTOPIAN: User-guided topic modeling based on interactive non-negative matrix factorization.
IEEE Transactions on Visualization and Computer Graphics Vol. 19, No. 12, 1992-2001, 2013.
[14]
Analysis of missing data points in longitudinal cohort studies: a visualization approach.
Computer Graphics Forum Vol. 39, No. 1, 63-75, 2020.
[15]
Arbesser, C.; Spechtenhauser, F.; Mühlbacher, T.; Piringer, H. Visplause: Visual data quality assessment of many time series using plausibility checks.
IEEE Transactions on Visualization and Computer Graphics Vol. 23, No. 1, 641-650, 2017.
[16]
Bäuerle, A.; Neumann, H.; Ropinski, T. Classifier-guided visual correction of noisy labels for image classification tasks.
Computer Graphics Forum Vol. 39, No. 3, 195-205, 2020.
[17]
Bernard, J.; Hutter, M.; Reinemuth, H.; Pfeifer, H.; Bors, C.; Kohlhammer, J. Visual-interactive preprocessing of multivariate time series data.
Computer Graphics Forum Vol. 38, No. 3, 401-412, 2019.
[18]
Bernard, J.; Hutter, M.; Zeppelzauer, M.; Fellner, D.; Sedlmair, M. Comparing visual-interactive labeling with active learning: An experimental study.
IEEE Transactions on Visualization and Computer Graphics Vol. 24, No. 1, 298-308, 2018.
[19]
Bernard, J.; Zeppelzauer, M.; Lehmann, M.; Müller, M.; Sedlmair, M. Towards user-centered active learning algorithms.
Computer Graphics Forum Vol. 37, No. 3, 121-132, 2018.
[20]
Bors, C.; Gschwandtner, T.; Miksch, S. Capturing and visualizing provenance from data wrangling.
IEEE Computer Graphics and Applications Vol. 39, No. 6, 61-75, 2019.
[21]
Chen, C. J.; Yuan, J.; Lu, Y. F.; Liu, Y.; Su, H.; Yuan, S. T.; Liu, S. X. OoDAnalyzer: Interactive analysis of out-of-distribution samples.
IEEE Transactions on Visualization and Computer Graphics, 2020.
[22]
Dextras-Romagnino, K.; Munzner, T. Segmentifier: Interactive refinement of clickstream data.
Computer Graphics Forum Vol. 38, No. 3, 623-634, 2019.
[23]
Gschwandtner, T.; Erhart, O. Know your enemy: Identifying quality problems of time series data. In: Proceedings of the IEEE Pacific Visualization Symposium, 2018.
[24]
Halter, G.; Ballester-Ripoll, R.; Flueckiger, B.; Pajarola, R. VIAN: A visual annotation tool for film analysis.
Computer Graphics Forum Vol. 38, No. 3, 119-129, 2019.
[25]
Heimerl, F.; Koch, S.; Bosch, H.; Ertl, T. Visual classifier training for text document retrieval.
IEEE Transactions on Visualization and Computer Graphics Vol. 18, No. 12, 2839-2848, 2012.
[26]
Höferlin, B.; Netzel, R.; Höferlin, M.; Weiskopf, D.; Heidemann, G. Interactive learning of ad-hoc classifiers for video visual analytics. In: Proceedings of the IEEE Conference on Visual Analytics Science and Technology, 2012.
[27]
Soares Junior, A.; Renso, C.; Matwin, S. ANALYTiC: An active learning system for trajectory classification.
IEEE Computer Graphics and Applications Vol. 37, No. 5, 28-39, 2017.
[28]
Khayat, M.; Karimzadeh, M.; Zhao, J.; Ebert, D. S. VASSL: A visual analytics toolkit for social spambot labeling.
IEEE Transactions on Visualization and Computer Graphics Vol. 26, No. 1, 874-883, 2020.
[29]
Kurzhals, K.; Hlawatsch, M.; Seeger, C.; Weiskopf, D. Visual analytics for mobile eye tracking.
IEEE Transactions on Visualization and Computer Graphics Vol. 23, No. 1, 301-310, 2017.
[30]
Lekschas, F. et al. PEAX: Interactive visual pattern search in sequential data using unsupervised deep representation learning.
bioRxiv 597518, 2020.
[31]
Liu, S. X. et al. An interactive method to improve crowdsourced annotations.
IEEE Transactions on Visualization and Computer Graphics Vol. 25, No. 1, 235-245, 2019.
[32]
Moehrmann, J. et al. Improving the usability of hierarchical representations for interactively labeling large image data sets.
In: Human-Computer Interaction. Design and Development Approaches. Lecture Notes in Computer Science, Vol. 6761. Jacko, J. A. Ed. Springer Berlin, 618-627, 2011.
[33]
Paiva, J. G. S.; Schwartz, W. R.; Pedrini, H.; Minghim, R. An approach to supporting incremental visual data classification.
IEEE Transactions on Visualization and Computer Graphics Vol. 21, No. 1, 4-17, 2015.
[34]
Park, J. H.; Nadeem, S.; Boorboor, S.; Marino, J.; Kaufman, A. E. CMed: Crowd analytics for medical imaging data.
IEEE Transactions on Visualization and Computer Graphics , 2019.
[35]
Park, J. H.; Nadeem, S.; Mirhosseini, S.; Kaufman, A. C2A: Crowd consensus analytics for virtual colonoscopy. In: Proceedings of the IEEE Conference on Visual Analytics Science and Technology, 21-30, 2016.
[36]
De Rooij, O.; van Wijk, J. J.; Worring, M. MediaTable: Interactive categorization of multimedia collections.
IEEE Computer Graphics and Applications Vol. 30, No. 5, 42-51, 2010.
[37]
Snyder, L. S.; Lin, Y. S.; Karimzadeh, M.; Goldwasser, D.; Ebert, D. S. Interactive learning for identifying relevant tweets to support real-time situational awareness.
IEEE Transactions on Visualization and Computer Graphics Vol. 26, No. 1, 558-568, 2020.
[38]
Sperrle, F.; Sevastjanova, R.; Kehlbeck, R.; El-Assady, M. VIANA: Visual interactive annotation of argumentation. In: Proceedings of the IEEE Conference on Visual Analytics Science and Technology, 11-22, 2019.
[39]
Stein, M. et al. Director's cut: Analysis and annotation of soccer matches.
IEEE Computer Graphics and Applications Vol. 36, No. 5, 50-60, 2016.
[40]
GraphProtector: A visual interface for employing and assessing multiple privacy preserving graph algorithms.
IEEE Transactions on Visualization and Computer Graphics Vol. 25, No. 1, 193-203, 2019.
[41]
Wang, X. M. et al. A utility-aware visual approach for anonymizing multi-attribute tabular data.
IEEE Transactions on Visualization and Computer Graphics Vol. 24, No. 1, 351-360, 2018.
[42]
Willett, W. et al. Identifying redundancy and exposing provenance in crowdsourced data analysis.
IEEE Transactions on Visualization and Computer Graphics Vol. 19, No. 12, 2198-2206, 2013.
[43]
Xiang, S.; Ye, X.; Xia, J. Z.; Wu, J.; Chen, W.; Liu, S. X. Interactive correction of mislabeled training data. In: Proceedings of the IEEE Conference on Visual Analytics Science and Technology, 57-68, 2019.
[44]
Ingram, S.; Munzner, T.; Irvine, V.; Tory, M.; Bergner, S.; Möller, T. DimStiller: Workflows for dimensional analysis and reduction. In: Proceedings of the IEEE Conference on Visual Analytics Science and Technology, 3-10, 2010.
[45]
Krause, J.; Perer, A.; Bertini, E. INFUSE: Interactive feature selection for predictive modeling of high dimensional data.
IEEE Transactions on Visualization and Computer Graphics Vol. 20, No. 12, 1614-1623, 2014.
[46]
May, T.; Bannach, A.; Davey, J.; Ruppert, T.; Kohlhammer, J. Guiding feature subset selection with an interactive visualization. In: Proceedings of the IEEE Conference on Visual Analytics Science and Technology, 111-120, 2011.
[47]
Mühlbacher, T.; Piringer, H. A partition-based framework for building and validating regression models.
IEEE Transactions on Visualization and Computer Graphics Vol. 19, No. 12, 1962-1971, 2013.
[48]
Seo, J.; Shneiderman, B. A rank-by-feature framework for interactive exploration of multidimensional data.
Information Visualization Vol. 4, No. 2, 96-113, 2005.
[49]
Tam, G. K. L. et al. Visualization of time-series data in parameter space for understanding facial dynamics.
Computer Graphics Forum Vol. 30, No. 3, 901-910, 2011.
[50]
Broeksema, B.; Baudel, T.; Telea, A.; Crisafulli, P. Decision exploration lab: A visual analytics solution for decision management.
IEEE Transactions on Visualization and Computer Graphics Vol. 19, No. 12, 1972-1981, 2013.
[51]
Cashman, D.; Patterson, G.; Mosca, A.; Watts, N.; Robinson, S.; Chang, R. RNNbow: Visualizing learning via backpropagation gradients in RNNs.
IEEE Computer Graphics and Applications Vol. 38, No. 6, 39-50, 2018.
[52]
Collaris, D.; van Wijk, J. J. ExplainExplore: Visual exploration of machine learning explanations. In: Proceedings of the IEEE Pacific Visualization Symposium, 2020.
[53]
Eichner, C.; Schumann, H.; Tominski, C. Making parameter dependencies of time-series segmentation visually understandable.
Computer Graphics Forum Vol. 39, No. 1, 607-622, 2020.
[54]
Ferreira, N. et al. BirdVis: Visualizing and understanding bird populations.
IEEE Transactions on Visualization and Computer Graphics Vol. 17, No. 12, 2374-2383, 2011.
[55]
Fröhler, B.; Möller, T.; Heinzl, C. GEMSe: Visualization-guided exploration of multi-channel segmentation algorithms.
Computer Graphics Forum Vol. 35, No. 3, 191-200, 2016.
[56]
Hohman, F.; Park, H.; Robinson, C.; Chau, D. H. P. Summit: Scaling deep learning interpretability by visualizing activation and attribution summarizations.
IEEE Transactions on Visualization and Computer Graphics Vol. 26, No. 1, 1096-1106, 2020.
[57]
Jaunet, T.; Vuillemot, R.; Wolf, C. DRLViz: Understanding decisions and memory in deep reinforcement learning.
Computer Graphics Forum Vol. 39, No. 3, 49-61, 2020.
[58]
Jean, C. S.; Ware, C.; Gamble, R. Dynamic change arcs to explore model forecasts.
Computer Graphics Forum Vol. 35, No. 3, 311-320, 2016.
[59]
Kahng, M.; Andrews, P. Y.; Kalro, A.; Chau, D. H. ActiVis: Visual exploration of industry-scale deep neural network models.
IEEE Transactions on Visualization and Computer Graphics Vol. 24, No. 1, 88-97, 2018.
[60]
Kahng, M.; Thorat, N.; Chau, D. H.; Viégas, F. B.; Wattenberg, M. GAN Lab: Understanding complex deep generative models using interactive visual experimentation.
IEEE Transactions on Visualization and Computer Graphics Vol. 25, No. 1, 310-320, 2019.
[61]
Kwon, B. C.; Anand, V.; Severson, K. A.; Ghosh, S.; Sun, Z.; Frohnert, B. I.; Lundgren, M.; Ng, K. DPVis: Visual analytics with hidden Markov models for disease progression pathways.
IEEE Transactions on Visualization and Computer Graphics, 2020.
[62]
Liu, M. C.; Shi, J. X.; Li, Z.; Li, C. X.; Zhu, J.; Liu, S. X. Towards better analysis of deep convolutional neural networks.
IEEE Transactions on Visualization and Computer Graphics Vol. 23, No. 1, 91-100, 2017.
[63]
Liu, M.; Liu, S.; Su, H.; Cao, K.; Zhu, J. Analyzing the noise robustness of deep neural networks.
IEEE Transactions on Visualization and Computer Graphics Vol. 25, No. 1, 651-660, 2019.
[64]
Migut, M.; Worring, M. Interactive decision making using dissimilarity to visually represented prototypes. In: Proceedings of the IEEE Conference on Visual Analytics Science and Technology, 141-149, 2011.
[65]
Ming, Y.; Cao, S.; Zhang, R.; Li, Z.; Chen, Y.; Song, Y.; Qu, H. Understanding hidden memories of recurrent neural networks. In: Proceedings of the IEEE Conference on Visual Analytics Science and Technology, 13-24, 2017.
[66]
Ming, Y.; Qu, H. M.; Bertini, E. RuleMatrix: Visualizing and understanding classifiers with rules.
IEEE Transactions on Visualization and Computer Graphics Vol. 25, No. 1, 342-352, 2019.
[67]
DeepCompare: Visual and interactive comparison of deep learning model performance.
IEEE Computer Graphics and Applications Vol. 39, No. 5, 47-59, 2019.
[68]
Visualizing deep neural networks for text analytics. In: Proceedings of the IEEE Pacific Visualization Symposium, 2018.
[69]
Rauber, P. E.; Fadel, S. G.; Falcão, A. X.; Telea, A. C. Visualizing the hidden activity of artificial neural networks.
IEEE Transactions on Visualization and Computer Graphics Vol. 23, No. 1, 101-110, 2017.
[70]
Lohweg et al. Aiding activity recognition with visual analytics. In: Proceedings of the IEEE Visualization and Data Analysis Conference.
[71]
Scheepens, R.; Michels, S.; van de Wetering, H.; van Wijk, J. J. Rationale visualization for safety and security.
Computer Graphics Forum Vol. 34, No. 3, 191-200, 2015.
[72]
Shen et al. Visual interpretation of recurrent neural networks on multi-dimensional time-series forecasts. In: Proceedings of the IEEE Pacific Visualization Symposium, 2020.
[73]
Strobelt, H.; Gehrmann, S.; Pfister, H.; Rush, A. M. LSTMVis: A tool for visual analysis of hidden state dynamics in recurrent neural networks.
IEEE Transactions on Visualization and Computer Graphics Vol. 24, No. 1, 667-676, 2018.
[74]
Wang, J. P.; Gou, L.; Yang, H.; Shen, H. W. GANViz: A visual analytics approach to understand the adversarial game.
IEEE Transactions on Visualization and Computer Graphics Vol. 24, No. 6, 1905-1917, 2018.
[75]
Wang, J.; Gou, L.; Shen, H. W.; Yang, H. DeepVID: Deep visual interpretation and diagnosis for image classifiers via knowledge distillation.
IEEE Transactions on Visualization and Computer Graphics Vol. 25, No. 6, 2168-2180, 2019.
[76]
Wang, J. et al. SCANViz: Interpreting the symbol-concept association captured by deep learning models. In: Proceedings of the IEEE Pacific Visualization Symposium, 2020.
[77]
Wongsuphasawat, K.; Smilkov, D.; Wexler, J.; Wilson, J.; Mané, D.; Fritz, D.; Krishnan, D.; Viégas, F. B.; Wattenberg, M. Visualizing dataflow graphs of deep learning models in TensorFlow.
IEEE Transactions on Visualization and Computer Graphics Vol. 24, No. 1, 1-12, 2018.
[78]
A visual analytics approach to high-dimensional logistic regression modeling and its application to an environmental health study.
[79]
iForest: Interpreting random forests via visual analytics.
IEEE Transactions on Visualization and Computer Graphics Vol. 25, No. 1, 407-416, 2019.
[80]
Ahn, Y.; Lin, Y. R. FairSight: Visual analytics for fairness in decision making.
IEEE Transactions on Visualization and Computer Graphics Vol. 26, No. 1, 1086-1095, 2020.
[81]
Alsallakh, B.; Hanbury, A.; Hauser, H.; Miksch, S.; Rauber, A. Visual methods for analyzing probabilistic classification data.
IEEE Transactions on Visualization and Computer Graphics Vol. 20, No. 12, 1703-1712, 2014.
[82]
Bilal, A.; Jourabloo, A.; Ye, M.; Liu, X. M.; Ren, L. Do convolutional neural networks learn class hierarchy?
IEEE Transactions on Visualization and Computer Graphics Vol. 24, No. 1, 152-162, 2018.
[83]
Cabrera, A. A.; Epperson, W.; Hohman, F.; Kahng, M.; Morgenstern, J.; Chau, D. H. FairVis: Visual analytics for discovering intersectional bias in machine learning. In: Proceedings of the IEEE Conference on Visual Analytics Science and Technology, 46-56, 2019.
[84]
Cao, K. L.; Liu, M. C.; Su, H.; Wu, J.; Zhu, J.; Liu, S. X. Analyzing the noise robustness of deep neural networks.
IEEE Transactions on Visualization and Computer Graphics , 2020.
[85]
Diehl, A. et al. Albero: A visual analytics approach for probabilistic weather forecasting.
Computer Graphics Forum Vol. 36, No. 7, 135-144, 2017.
[86]
Gleicher, M.; Barve, A.; Yu, X. Y.; Heimerl, F. Boxer: An interactive system for the evaluation of classification algorithms.
Computer Graphics Forum Vol. 39, No. 3, 181-193, 2020.
[87]
He, W.; Lee, T. Y.; van Baar, J.; Wittenburg, K.; Shen, H. W. DynamicsExplorer: Visual analytics for robot control tasks involving dynamics and LSTM-based control policies. In: Proceedings of the IEEE Pacific Visualization Symposium, 2020.
[88]
Krause, J.; Dasgupta, A.; Swartz, J.; Aphinyanaphongs, Y.; Bertini, E. A workflow for visual diagnostics of binary classifiers using instance-level explanations. In: Proceedings of the IEEE Conference on Visual Analytics Science and Technology, 162-172, 2017.
[89]
Liu, M.; Shi, J.; Cao, K.; Zhu, J.; Liu, S. Analyzing the training processes of deep generative models.
IEEE Transactions on Visualization and Computer Graphics Vol. 24, No. 1, 77-87, 2018.
[90]
Liu, S.; Xiao, J.; Liu, J.; Wang, X.; Wu, J.; Zhu, J. Visual diagnosis of tree boosting methods.
IEEE Transactions on Visualization and Computer Graphics Vol. 24, No. 1, 163-173, 2018.
[91]
Ma, Y. X.; Xie, T. K.; Li, J. D.; Maciejewski, R. Explaining vulnerabilities to adversarial machine learning through visual analytics.
IEEE Transactions on Visualization and Computer Graphics Vol. 26, No. 1, 1075-1085, 2020.
[92]
Pezzotti, N.; Höllt, T.; van Gemert, J.; Lelieveldt, B. P. F.; Eisemann, E.; Vilanova, A. DeepEyes: Progressive visual analytics for designing deep neural networks.
IEEE Transactions on Visualization and Computer Graphics Vol. 24, No. 1, 98-108, 2018.
[93]
Ren, D.; Amershi, S.; Lee, B.; Suh, J.; Williams, J. D. Squares: Supporting interactive performance analysis for multiclass classifiers.
IEEE Transactions on Visualization and Computer Graphics Vol. 23, No. 1, 61-70, 2017.
[94]
Spinner, T.; Schlegel, U.; Schäfer, H.; El-Assady, M. explAIner: A visual analytics framework for interactive and explainable machine learning.
IEEE Transactions on Visualization and Computer Graphics Vol. 26, No. 1, 1064-1074, 2020.
[95]
Strobelt, H.; Gehrmann, S.; Behrisch, M.; Perer, A.; Pfister, H.; Rush, A. M. Seq2seq-Vis: A visual debugging tool for sequence-to-sequence models.
IEEE Transactions on Visualization and Computer Graphics Vol. 25, No. 1, 353-363, 2019.
[96]
Wang, J.; Gou, L.; Shen, H. W.; Yang, H. DQNViz: A visual analytics approach to understand deep Q-networks.
IEEE Transactions on Visualization and Computer Graphics Vol. 25, No. 1, 288-298, 2019.
[97]
Wexler, J.; Pushkarna, M.; Bolukbasi, T.; Wattenberg, M.; Viégas, F.; Wilson, J. The what-if tool: Interactive probing of machine learning models.
IEEE Transactions on Visualization and Computer Graphics Vol. 26, No. 1, 56-65, 2020.
[98]
Zhang, J. W.; Wang, Y.; Molino, P.; Li, L. Z.; Ebert, D. S. Manifold: A model-agnostic framework for interpretation and diagnosis of machine learning models.
IEEE Transactions on Visualization and Computer Graphics Vol. 25, No. 1, 364-373, 2019.
[99]
Bögl, M.; Aigner, W.; Filzmoser, P.; Lammarsch, T.; Miksch, S.; Rind, A. Visual analytics for model selection in time series analysis.
IEEE Transactions on Visualization and Computer Graphics Vol. 19, No. 12, 2237-2246, 2013.
[100]
Cashman, D. et al. Ablate, variate, and contemplate: Visual analytics for discovering neural architectures.
IEEE Transactions on Visualization and Computer Graphics Vol. 26, No. 1, 863-873, 2020.
[101]
Cavallo, M.; Demiralp, Ç. Track Xplorer: A system for visual analysis of sensor-based motor activity predictions.
Computer Graphics Forum Vol. 37, No. 3, 339-349, 2018.
[102]
Cavallo, M.; Demiralp, C. Clustrophile 2: Guided visual clustering analysis.
IEEE Transactions on Visualization and Computer Graphics Vol. 25, No. 1, 267-276, 2019.
[103]
Das, S.; Cashman, D.; Chang, R.; Endert, A. BEAMES: Interactive multimodel steering, selection, and inspection for regression tasks.
IEEE Computer Graphics and Applications Vol. 39, No. 5, 20-32, 2019.
[104]
Dingen, D. et al. RegressionExplorer: Interactive exploration of logistic regression models with subgroup analysis.
IEEE Transactions on Visualization and Computer Graphics Vol. 25, No. 1, 246-255, 2019.
[105]
Dou, W. W.; Yu, L.; Wang, X. Y.; Ma, Z. Q.; Ribarsky, W. HierarchicalTopics: Visually exploring large text collections using topic hierarchies.
IEEE Transactions on Visualization and Computer Graphics Vol. 19, No. 12, 2002-2011, 2013.
[106]
El-Assady, M.; Kehlbeck, R.; Collins, C.; Keim, D.; Deussen, O. Semantic concept spaces: Guided topic model refinement using word-embedding projections.
IEEE Transactions on Visualization and Computer Graphics Vol. 26, No. 1, 1001-1011, 2020.
[107]
El-Assady, M. et al. Progressive learning of topic modeling parameters: A visual analytics framework.
IEEE Transactions on Visualization and Computer Graphics Vol. 24, No. 1, 382-391, 2018.
[108]
El-Assady, M. et al. Visual analytics for topic model optimization based on user-steerable speculative execution.
IEEE Transactions on Visualization and Computer Graphics Vol. 25, No. 1, 374-384, 2019.
[109]
Kim, H.; Drake, B.; Endert, A.; Park, H. ArchiText: Interactive hierarchical topic modeling.
IEEE Transactions on Visualization and Computer Graphics , 2020.
[110]
Kwon, B. C. et al. RetainVis: Visual analytics with interpretable and interactive recurrent neural networks on electronic medical records.
IEEE Transactions on Visualization and Computer Graphics Vol. 25, No. 1, 299-309, 2019.
[111]
Lee, H.; Kihm, J.; Choo, J.; Stasko, J.; Park, H. iVisClustering: An interactive visual document clustering via topic modeling.
Computer Graphics Forum Vol. 31, No. 3, 1155-1164, 2012.
[112]
An uncertainty-aware approach for exploratory microblog retrieval.
IEEE Transactions on Visualization and Computer Graphics Vol. 22, No. 1, 250-259, 2016.
[113]
Löwe et al. Visual analytics for development and evaluation of order selection criteria for autoregressive processes.
IEEE Transactions on Visualization and Computer Graphics Vol. 22, No. 1, 151-159, 2016.
[114]
MacInnes, J.; Santosa, S.; Wright, W. Visual classification: Expert knowledge guides machine learning.
IEEE Computer Graphics and Applications Vol. 30, No. 1, 8-14, 2010.
[115]
Migut, M.; Worring, M. Visual exploration of classification models for risk assessment. In: Proceedings of the IEEE Conference on Visual Analytics Science and Technology, 11-18, 2010.
[116]
ProtoSteer: Steering deep sequence model with prototypes.
IEEE Transactions on Visualization and Computer Graphics Vol. 26, No. 1, 238-248, 2020.
[117]
Mühlbacher, T.; Linhardt, L.; Möller, T.; Piringer, H. TreePOD: Sensitivity-aware selection of Pareto-optimal decision trees.
IEEE Transactions on Visualization and Computer Graphics Vol. 24, No. 1, 174-183, 2018.
[118]
Packer, E.; Bak, P.; Nikkilä, M.; Polishchuk, V.; Ship, H. J. Visual analytics for spatial clustering: Using a heuristic approach for guided exploration.
IEEE Transactions on Visualization and Computer Graphics Vol. 19, No. 12, 2179-2188, 2013.
[119]
Piringer, H.; Berger, W.; Krasser, J. HyperMoVal: Interactive visual validation of regression models for real-time simulation.
Computer Graphics Forum Vol. 29, No. 3, 983-992, 2010.
[120]
Sacha, D. et al. SOMFlow: Guided exploratory cluster analysis with self-organizing maps and analytic provenance.
IEEE Transactions on Visualization and Computer Graphics Vol. 24, No. 1, 120-130, 2018.
[121]
Schultz, T.; Kindlmann, G. L. Open-box spectral clustering: Applications to medical image analysis.
IEEE Transactions on Visualization and Computer Graphics Vol. 19, No. 12, 2100-2108, 2013.
[122]
Van den Elzen, S.; van Wijk, J. J. BaobabView: Interactive construction and analysis of decision trees. In: Proceedings of the IEEE Conference on Visual Analytics Science and Technology, 151-160, 2011.
[123]
Vrotsou, K.; Nordman, A. Exploratory visual sequence mining based on pattern-growth.
IEEE Transactions on Visualization and Computer Graphics Vol. 25, No. 8, 2597-2610, 2019.
[124]
Wang, X. T. et al. TopicPanorama: A full picture of relevant topics.
IEEE Transactions on Visualization and Computer Graphics Vol. 22, No. 12, 2508-2521, 2016.
[125]
Yang, W. K.; Wang, X. T.; Lu, J.; Dou, W. W.; Liu, S. X. Interactive steering of hierarchical clustering.
IEEE Transactions on Visualization and Computer Graphics , 2020.
[126]
Zhao et al. LoVis: Local pattern visualization for model refinement.
Computer Graphics Forum Vol. 33, No. 3, 331-340, 2014.
[127]
Alexander, E.; Kohlmann, J.; Valenza, R.; Witmore, M.; Gleicher, M. Serendip: Topic model-driven visual exploration of text corpora. In: Proceedings of the IEEE Conference on Visual Analytics Science and Technology, 173-182, 2014.
[128]
Berger, M.; McDonough, K.; Seversky, L. M. cite2vec: Citation-driven document exploration via word embeddings.
IEEE Transactions on Visualization and Computer Graphics Vol. 23, No. 1, 691-700, 2017.
[129]
Blumenschein, M. et al. SMARTexplore: Simplifying high-dimensional data analysis through a table-based visual analytics approach. In: Proceedings of the IEEE Conference on Visual Analytics Science and Technology, 2018.
[130]
Bradel, L.; North, C.; House, L. Multi-model semantic interaction for text analytics. In: Proceedings of the IEEE Conference on Visual Analytics Science and Technology, 163-172, 2014.
[131]
Broeksema, B.; Telea, A. C.; Baudel, T. Visual analysis of multi-dimensional categorical data sets.
Computer Graphics Forum Vol. 32, No. 8, 158-169, 2013.
[132]
Cao, N.; Sun, J. M.; Lin, Y. R.; Gotz, D.; Liu, S. X.; Qu, H. M. FacetAtlas: Multifaceted visualization for rich text corpora.
IEEE Transactions on Visualization and Computer Graphics Vol. 16, No. 6, 1172-1181, 2010.
[133]
Chandrasegaran, S. et al. Integrating visual analytics support for grounded theory practice in qualitative text analysis.
Computer Graphics Forum Vol. 36, No. 3, 201-212, 2017.
[134]
An integrated LDA model to support interactive exploration and behavior classification.
IEEE Transactions on Visualization and Computer Graphics Vol. 26, No. 9, 2775-2792, 2020.
[135]
Correll, M.; Witmore, M.; Gleicher, M. Exploring collections of tagged text for literary scholarship.
Computer Graphics Forum Vol. 30, No. 3, 731-740, 2011.
[136]
Dou, W. et al. DemographicVis: Analyzing demographic information based on user generated content. In: Proceedings of the IEEE Conference on Visual Analytics Science and Technology, 57-64, 2015.
[137]
El-Assady, M.; Gold, V.; Acevedo, C.; Collins, C.; Keim, D. ConToVi: Multi-party conversation exploration using topic-space views.
Computer Graphics Forum Vol. 35, No. 3, 431-440, 2016.
[138]
El-Assady, M.; Sevastjanova, R.; Keim, D.; Collins, C. ThreadReconstructor: Modeling reply-chains to untangle conversational text through visual analytics.
Computer Graphics Forum Vol. 37, No. 3, 351-365, 2018.
[139]
Filipov, V. et al. CV3: Visual exploration, assessment, and comparison of CVs.
Computer Graphics Forum Vol. 38, No. 3, 107-118, 2019.
[140]
Fried, D.; Kobourov, S. G. Maps of computer science. In: Proceedings of the IEEE Pacific Visualization Symposium, 2014.
[141]
Fulda, J.; Brehmer, M.; Munzner, T. TimeLineCurator: Interactive authoring of visual timelines from unstructured text.
IEEE Transactions on Visualization and Computer Graphics Vol. 22, No. 1, 300-309, 2016.
[142]
Glueck, M.; Naeini, M. P.; Doshi-Velez, F.; Chevalier, F.; Khan, A.; Wigdor, D.; Brudno, M. PhenoLines: Phenotype comparison visualizations for disease subtyping via topic models.
IEEE Transactions on Visualization and Computer Graphics Vol. 24, No. 1, 371-381, 2018.
[143]
Görg, C.; Liu, Z. C.; Kihm, J.; Choo, J.; Park, H.; Stasko, J. Combining computational analyses and interactive visualization for document exploration and sensemaking in Jigsaw.
IEEE Transactions on Visualization and Computer Graphics Vol. 19, No. 10, 1646-1663, 2013.
[144]
Topic-based exploration with embedded visualizations for generating research ideas.
IEEE Transactions on Visualization and Computer Graphics Vol. 26, No. 3, 1592-1607, 2020.
[145]
Heimerl, F.; John, M.; Han, Q.; Koch, S.; Ertl, T. DocuCompass: Effective exploration of document landscapes. In: Proceedings of the IEEE Conference on Visual Analytics Science and Technology, 11-20, 2016.
[146]
Hong, F.; Lai, C.; Guo, H.; Shen, E.; Yuan, X.; Li, S. FLDA: Latent Dirichlet allocation based unsteady flow analysis.
IEEE Transactions on Visualization and Computer Graphics Vol. 20, No. 12, 2545-2554, 2014.
[147]
Hoque, E.; Carenini, G. ConVis: A visual text analytic system for exploring blog conversations.
Computer Graphics Forum Vol. 33, No. 3, 221-230, 2014.
[148]
Hu, M. D.; Wongsuphasawat, K.; Stasko, J. Visualizing social media content with SentenTree.
IEEE Transactions on Visualization and Computer Graphics Vol. 23, No. 1, 621-630, 2017.
[149]
Jänicke, H.; Borgo, R.; Mason, J. S. D.; Chen, M. SoundRiver: Semantically-rich sound illustration.
Computer Graphics Forum Vol. 29, No. 2, 357-366, 2010.
[150]
Jänicke, S.; Wrisley, D. J. Interactive visual alignment of medieval text versions. In: Proceedings of the IEEE Conference on Visual Analytics Science and Technology, 2017.
[151]
Jankowska, M.; Kešelj, V.; Milios, E. Relative N-gram signatures: Document visualization at the level of character N-grams. In: Proceedings of the IEEE Conference on Visual Analytics Science and Technology, 103-112, 2012.
[152]
Ji, X. N.; Shen, H. W.; Ritter, A.; Machiraju, R.; Yen, P. Y. Visual exploration of neural document embedding in information retrieval: Semantics and feature selection.
IEEE Transactions on Visualization and Computer Graphics Vol. 25, No. 6, 2181-2192, 2019.
[153]
Kakar, T.; Qin, X.; Rundensteiner, E. A.; Harrison, L.; Sahoo, S. K.; De, S. DIVA: Exploration and validation of hypothesized drug-drug interactions.
Computer Graphics Forum Vol. 38, No. 3, 95-106, 2019.
[154]
Kim, H.; Choi, D.; Drake, B.; Endert, A.; Park, H. TopicSifter: Interactive search space reduction through targeted topic modeling. In: Proceedings of the IEEE Conference on Visual Analytics Science and Technology, 35-45, 2019.
[155]
Kim, M.; Kang, K.; Park, D.; Choo, J.; Cao, N. TopicLens: Efficient multi-level visual topic exploration of large-scale document collections.
IEEE Transactions on Visualization and Computer Graphics Vol. 23, No. 1, 151-160, 2017.
[156]
Kochtchi, A.; von Landesberger, T.; Biemann, C. Networks of names: Visual exploration and semi-automatic tagging of social networks from newspaper articles.
Computer Graphics Forum Vol. 33, No. 3, 211-220, 2014.
[157]
Li, M. Z.; Choudhury, F.; Bao, Z. F.; Samet, H.; Sellis, T. ConcaveCubes: Supporting cluster-based geospatial visualization in large data scale.
Computer Graphics Forum Vol. 37, No. 3, 217-228, 2018.
[158]
Liu, S.; Wang, B.; Thiagarajan, J. J.; Bremer, P. T.; Pascucci, V. Visual exploration of high-dimensional data through subspace analysis and dynamic projections.
Computer Graphics Forum Vol. 34, No. 3, 271-280, 2015.
[159]
Liu, S. et al. TopicPanorama: A full picture of relevant topics. In: Proceedings of the IEEE Conference on Visual Analytics Science and Technology, 183-192, 2014.
[160]
Liu, X.; Xu, A.; Gou, L.; Liu, H.; Akkiraju, R.; Shen, H. W. SocialBrands: Visual analysis of public perceptions of brands on social media. In: Proceedings of the IEEE Conference on Visual Analytics Science and Technology, 2016.
[161]
Oelke, D.; Strobelt, H.; Rohrdantz, C.; Gurevych, I.; Deussen, O. Comparative exploration of document collections: A visual analytics approach.
Computer Graphics Forum Vol. 33, No. 3, 201-210, 2014.
[162]
Park, D.; Kim, S.; Lee, J.; Choo, J.; Diakopoulos, N.; Elmqvist, N. ConceptVector: Text visual analytics via interactive lexicon building using word embedding.
IEEE Transactions on Visualization and Computer Graphics Vol. 24, No. 1, 361-370, 2018.
[163]
Paulovich, F. V. et al. Semantic wordification of document collections.
Computer Graphics Forum Vol. 31, No. 3pt3, 1145-1153, 2012.
[164]
Shen, Q. et al. StreetVizor: Visual exploration of human-scale urban forms based on street views.
IEEE Transactions on Visualization and Computer Graphics Vol. 24, No. 1, 1004-1013, 2018.
[165]
Von Landesberger, T. et al. Comparative local quality assessment of 3D medical image segmentations with focus on statistical shape model-based algorithms.
IEEE Transactions on Visualization and Computer Graphics Vol. 22, No. 12, 2537-2549, 2016.
[166]
Wall, E.; Das, S.; Chawla, R.; Kalidindi, B.; Brown, E. T.; Endert, A. Podium: Ranking data using mixed-initiative visual analytics.
IEEE Transactions on Visualization and Computer Graphics Vol. 24, No. 1, 288-297, 2018.
[167]
Xie, X.; Cai, X. W.; Zhou, J. P.; Cao, N.; Wu, Y. C. A semantic-based method for visualizing large image collections.
IEEE Transactions on Visualization and Computer Graphics Vol. 25, No. 7, 2362-2377, 2019.
[168]
Zhang, L.; Huang, H. Hierarchical narrative collage for digital photo album.
Computer Graphics Forum Vol. 31, No. 7, 2173-2181, 2012.
[169]
Zhao, J.; Chevalier, F.; Collins, C.; Balakrishnan, R. Facilitating discourse analysis with interactive visualization.
IEEE Transactions on Visualization and Computer Graphics Vol. 18, No. 12, 2639-2648, 2012.
[170]
Alsakran, J. et al. Real-time visualization of streaming text with a force-based dynamic system.
IEEE Computer Graphics and Applications Vol. 32, No. 1, 34-45, 2012.
[171]
Alsakran, J. et al. STREAMIT: Dynamic visualization and interactive exploration of text streams. In: Proceedings of the IEEE Pacific Visualization Symposium, 2011.
[172]
Andrienko, G. et al. Constructing spaces and times for tactical analysis in football.
IEEE Transactions on Visualization and Computer Graphics, 2019.
[173]
Andrienko, G.; Andrienko, N.; Bremm, S.; Schreck, T.; von Landesberger, T.; Bak, P.; Keim, D. Space-in-time and time-in-space self-organizing maps for exploring spatiotemporal patterns.
Computer Graphics Forum Vol. 29, No. 3, 913-922, 2010.
[174]
Andrienko, G.; Andrienko, N.; Hurter, C.; Rinzivillo, S.; Wrobel, S. Scalable analysis of movement data for extracting and exploring significant places.
IEEE Transactions on Visualization and Computer Graphics Vol. 19, No. 7, 1078-1094, 2013.
[175]
Blascheck, T. et al. Interactive exploration and coding of data-rich user interactions. In: Proceedings of the IEEE Conference on Visual Analytics Science and Technology, 2016.
[176]
Bögl, M. et al. Cycle plot revisited: Multivariate outlier detection using a distance-based abstraction.
Computer Graphics Forum Vol. 36, No. 3, 227-238, 2017.
[177]
Bosch, H.; Thom, D.; Heimerl, F.; Puttmann, E.; Koch, S.; Krüger, R.; Wörner, M.; Ertl, T. ScatterBlogs2: Real-time monitoring of microblog messages through user-guided filtering.
IEEE Transactions on Visualization and Computer Graphics Vol. 19, No. 12, 2022-2031, 2013.
[178]
Buchmüller, J. et al. Visual analytics for exploring local impact of air traffic.
Computer Graphics Forum Vol. 34, No. 3, 181-190, 2015.
[179]
Cao, N.; Lin, C. G.; Zhu, Q. H.; Lin, Y. R.; Teng, X.; Wen, X. D. Voila: Visual anomaly detection and monitoring with streaming spatiotemporal data.
IEEE Transactions on Visualization and Computer Graphics Vol. 24, No. 1, 23-33, 2018.
[180]
Cao, N.; Lin, Y. R.; Sun, X. H.; Lazer, D.; Liu, S. X.; Qu, H. M. Whisper: Tracing the spatiotemporal process of information diffusion in real time.
IEEE Transactions on Visualization and Computer Graphics Vol. 18, No. 12, 2649-2658, 2012.
[181]
Cao, N.; Shi, C. L.; Lin, S.; Lu, J.; Lin, Y. R.; Lin, C. Y. TargetVue: Visual analysis of anomalous user behaviors in online communication systems.
IEEE Transactions on Visualization and Computer Graphics Vol. 22, No. 1, 280-289, 2016.
[182]
Chae, J. et al. Spatiotemporal social media analytics for abnormal event detection and examination using seasonal-trend decomposition. In: Proceedings of the IEEE Conference on Visual Analytics Science and Technology, 2012.
[183]
ViSeq: Visual analytics of learning sequences in massive open online courses.
IEEE Transactions on Visualization and Computer Graphics Vol. 26, No. 3, 1622-1636, 2020.
[184]
Chen, S.; Chen, S.; Lin, L.; Yuan, X.; Liang, J.; Zhang, X. E-map: A visual analytics approach for exploring significant event evolutions in social media. In: Proceedings of the IEEE Conference on Visual Analytics Science and Technology, 36-47, 2017.
[185]
Chen, S.; Chen, S.; Wang, Z.; Liang, J.; Yuan, X.; Cao, N.; Wu, Y. D-Map: Visual analysis of ego-centric information diffusion patterns in social media. In: Proceedings of the IEEE Conference on Visual Analytics Science and Technology, 41-50, 2016.
[186]
Interactive visual discovering of movement patterns from sparsely sampled geo-tagged social media data.
IEEE Transactions on Visualization and Computer Graphics Vol. 22, No. 1, 270-279, 2016.
[187]
Chen, Y.; Chen, Q.; Zhao, M.; Boyer, S.; Veeramachaneni, K.; Qu, H. DropoutSeer: Visualizing learning patterns in massive open online courses for dropout reasoning and prediction. In: Proceedings of the IEEE Conference on Visual Analytics Science and Technology, 111-120, 2016.
[188]
Chen, Y.; Xu, P.; Ren, L. Sequence synopsis: Optimize visual summary of temporal event data.
IEEE Transactions on Visualization and Computer Graphics Vol. 24, No. 1, 45-55, 2018.
[189]
Chu, D. et al. Visualizing hidden themes of taxi movement with semantic transformation. In: Proceedings of the IEEE Pacific Visualization Symposium, 2014.
[190]
Cui, W.; Liu, S.; Tan, L.; Shi, C.; Song, Y.; Gao, Z.; Qu, H.; Tong, X. TextFlow: Towards better understanding of evolving topics in text.
IEEE Transactions on Visualization and Computer Graphics Vol. 17, No. 12, 2412-2421, 2011.
[191]
Cui, W.; Liu, S.; Wu, Z.; Wei, H. How hierarchical topics evolve in large text corpora.
IEEE Transactions on Visualization and Computer Graphics Vol. 20, No. 12, 2281-2290, 2014.
[192]
Di Lorenzo, G. et al. AllAboard: Visual exploration of cellphone mobility data to optimise public transport.
IEEE Transactions on Visualization and Computer Graphics Vol. 22, No. 2, 1036-1050, 2016.
[193]
Dou, W.; Wang, X.; Chang, R.; Ribarsky, W. ParallelTopics: A probabilistic approach to exploring document collections. In: Proceedings of the IEEE Conference on Visual Analytics Science and Technology, 231-240, 2011.
[194]
Dou, W. et al. LeadLine: Interactive visual analysis of text data through event identification and exploration. In: Proceedings of the IEEE Conference on Visual Analytics Science and Technology, 2012.
[195]
Du, F.; Plaisant, C.; Spring, N.; Shneiderman, B. EventAction: Visual analytics for temporal event sequence recommendation. In: Proceedings of the IEEE Conference on Visual Analytics Science and Technology, 61-70, 2016.
[196]
El-Assady, M. et al. NEREx: Named-entity relationship exploration in multi-party conversations.
Computer Graphics Forum Vol. 36, No. 3, 213-225, 2017.
[197]
Fan, M. et al. VisTA: Integrating machine intelligence with visualization to support the investigation of think-aloud sessions.
IEEE Transactions on Visualization and Computer Graphics Vol. 26, No. 1, 343-352, 2020.
[198]
Ferreira, N.; Poco, J.; Vo, H. T.; Freire, J.; Silva, C. T. Visual exploration of big spatio-temporal urban data: A study of New York City taxi trips.
IEEE Transactions on Visualization and Computer Graphics Vol. 19, No. 12, 2149-2158, 2013.
[199]
Gobbo, B.; Balsamo, D.; Mauri, M.; Bajardi, P.; Panisson, A.; Ciuccarelli, P. Topic Tomographies (TopTom): A visual approach to distill information from media streams.
Computer Graphics Forum Vol. 38, No. 3, 609-621, 2019.
[200]
Gotz, D.; Stavropoulos, H. DecisionFlow: Visual analytics for high-dimensional temporal event sequence data.
IEEE Transactions on Visualization and Computer Graphics Vol. 20, No. 12, 1783-1792, 2014.
[201]
Guo, S. N.; Jin, Z. C.; Gotz, D.; Du, F.; Zha, H. Y.; Cao, N. Visual progression analysis of event sequence data.
IEEE Transactions on Visualization and Computer Graphics Vol. 25, No. 1, 417-426, 2019.
[202]
Guo, S. N.; Xu, K.; Zhao, R. W.; Gotz, D.; Zha, H. Y.; Cao, N. EventThread: Visual summarization and stage analysis of event sequence data.
IEEE Transactions on Visualization and Computer Graphics Vol. 24, No. 1, 56-65, 2018.
[203]
Gutenko, I.; Dmitriev, K.; Kaufman, A. E.; Barish, M. A. AnaFe: Visual analytics of image-derived temporal features focusing on the spleen.
IEEE Transactions on Visualization and Computer Graphics Vol. 23, No. 1, 171-180, 2017.
[204]
Havre, S.; Hetzler, E.; Whitney, P.; Nowell, L. ThemeRiver: Visualizing thematic changes in large document collections.
IEEE Transactions on Visualization and Computer Graphics Vol. 8, No. 1, 9-20, 2002.
[205]
Heimerl, F.; Han, Q.; Koch, S.; Ertl, T. CiteRivers: Visual analytics of citation patterns.
IEEE Transactions on Visualization and Computer Graphics Vol. 22, No. 1, 190-199, 2016.
[206]
Itoh, M. et al. Visualizing image patterns for inter-media analysis. In: Proceedings of the IEEE Pacific Visualization Symposium, 129-136, 2014.
[207]
Itoh, M. et al. Visualizing the temporal evolution of bloggers' activities and interests. In: Proceedings of the IEEE Pacific Visualization Symposium, 57-64, 2012.
[208]
Kamaleswaran, R. et al. PhysioEx: Visual analysis of physiological event streams.
Computer Graphics Forum Vol. 35, No. 3, 331-340, 2016.
[209]
Karduni, A.; Cho, I.; Wessel, G.; Ribarsky, W.; Sauda, E.; Dou, W. Urban Space Explorer: A visual analytics system for urban planning.
IEEE Computer Graphics and Applications Vol. 37, No. 5, 50-60, 2017.
[210]
Krüger, R.; Han, Q.; Ivanov, N.; Mahtal, S.; Thom, D.; Pfister, H.; Ertl, T. Bird's-eye: Large-scale visual analytics of city dynamics using social location data.
Computer Graphics Forum Vol. 38, No. 3, 595-607, 2019.
[211]
Krüger, R.; Thom, D.; Ertl, T. Visual analysis of movement behavior using web data for context enrichment. In: Proceedings of the IEEE Pacific Visualization Symposium, 193-200, 2014.
[212]
Krueger, R.; Thom, D.; Ertl, T. Semantic enrichment of movement behavior with foursquare: A visual analytics approach.
IEEE Transactions on Visualization and Computer Graphics Vol. 21, No. 8, 903-915, 2015.
[213]
Lee, C. et al. A visual analytics system for exploring, monitoring, and forecasting road traffic congestion.
IEEE Transactions on Visualization and Computer Graphics Vol. 26, No. 11, 3133-3146, 2020.
[214]
Leite, R. A.; Gschwandtner, T.; Miksch, S.; Kriglstein, S.; Pohl, M.; Gstrein, E.; Kuntner, J. EVA: Visual analytics to identify fraudulent events.
IEEE Transactions on Visualization and Computer Graphics Vol. 24, No. 1, 330-339, 2018.
[215]
Li, J.; Chen, S. M.; Chen, W.; Andrienko, G.; Andrienko, N. Semantics-space-time cube: A conceptual framework for systematic analysis of texts in space and time.
IEEE Transactions on Visualization and Computer Graphics Vol. 26, No. 4, 1789-1806, 2019.
[216]
Li, Q.; Wu, Z. M.; Yi, L. L.; Kristanto, S. N.; Qu, H. M.; Ma, X. J. WeSeer: Visual analysis for better information cascade prediction of WeChat articles.
IEEE Transactions on Visualization and Computer Graphics Vol. 26, No. 2, 1399-1412, 2020.
[217]
Li, Z. Y.; Zhang, C. H.; Jia, S. C.; Zhang, J. W. Galex: Exploring the evolution and intersection of disciplines.
IEEE Transactions on Visualization and Computer Graphics Vol. 26, No. 1, 1182-1192, 2019.
[218]
Liu et al. Visual analytics of taxi trajectory data via topical sub-trajectories.
Visual Informatics Vol. 3, No. 3, 140-149, 2019.
[219]
Liu, S. X.; Yin, J. L.; Wang, X. T.; Cui, W. W.; Cao, K. L.; Pei, J. Online visual analytics of text streams.
IEEE Transactions on Visualization and Computer Graphics Vol. 22, No. 11, 2451-2466, 2016.
[220]
Liu, S. et al. TIARA: Interactive, topic-based visual text summarization and analysis.
ACM Transactions on Intelligent Systems and Technology Vol. 3, No. 2, Article No. 25, 2012.
[221]
Liu, Z. et al. CoreFlow: Extracting and visualizing branching patterns from event sequences.
Computer Graphics Forum Vol. 36, No. 3, 527-538, 2017.
[222]
Liu, Z. et al. Patterns and sequences: Interactive exploration of clickstreams to understand common visitor paths.
IEEE Transactions on Visualization and Computer Graphics Vol. 23, No. 1, 321-330, 2017.
[223]
Exploring evolving media discourse through event cueing.
IEEE Transactions on Visualization and Computer Graphics Vol. 22, No. 1, 220-229, 2016.
[224]
Lu, Y. F.; Wang, F.; Maciejewski, R. Business intelligence from social media: A study from the VAST box office challenge.
IEEE Computer Graphics and Applications Vol. 34, No. 5, 58-69, 2014.
[225]
Lu, Y. et al. A visual analytics framework for identifying topic drivers in media events.
IEEE Transactions on Visualization and Computer Graphics Vol. 24, No. 9, 2501-2515, 2018.
[226]
Luo, D.; Yang, J.; Krstajic, M.; Ribarsky, W.; Keim, D. EventRiver: Visually exploring text collections with temporal references.
IEEE Transactions on Visualization and Computer Graphics Vol. 18, No. 1, 93-105, 2012.
[227]
Maciejewski, R.; Hafen, R.; Rudolph, S.; Larew, S. G.; Mitchell, M. A.; Cleveland, W. S.; Ebert, D. S. Forecasting hotspots: A predictive analytics approach.
IEEE Transactions on Visualization and Computer Graphics Vol. 17, No. 4, 440-453, 2011.
[228]
Malik, A. et al. Proactive spatiotemporal resource allocation and predictive visual analytics for community policing and law enforcement.
IEEE Transactions on Visualization and Computer Graphics Vol. 20, No. 12, 1863-1872, 2014.
[229]
Miranda, F. et al. Urban Pulse: Capturing the rhythm of cities.
IEEE Transactions on Visualization and Computer Graphics Vol. 23, No. 1, 791-800, 2017.
[230]
Purwantiningsih, O.; Sallaberry, A.; Andary, S.; Seilles, A.; Azé, J. Visual analysis of body movement in serious games for healthcare. In: Proceedings of the IEEE Pacific Visualization Symposium, 229-233, 2016.
[231]
Riehmann, P.; Kiesel, D.; Kohlhaas, M.; Froehlich, B. Visualizing a thinker’s life.
IEEE Transactions on Visualization and Computer Graphics Vol. 25, No. 4, 1803-1816, 2019.
[232]
Sacha, D. et al. Dynamic visual abstraction of soccer movement.
Computer Graphics Forum Vol. 36, No. 3, 305-315, 2017.
[233]
Sarikaya, A. et al. Visualizing co-occurrence of events in populations of viral genome sequences.
Computer Graphics Forum Vol. 35, No. 3, 151-160, 2016.
[234]
Shi, C.; Wu, Y.; Liu, S.; Zhou, H.; Qu, H. LoyalTracker: Visualizing loyalty dynamics in search engines.
IEEE Transactions on Visualization and Computer Graphics Vol. 20, No. 12, 1733-1742, 2014.
[235]
Steiger, M. et al. Visual analysis of time-series similarities for anomaly detection in sensor networks.
Computer Graphics Forum Vol. 33, No. 3, 401-410, 2014.
[236]
Stopar, L. et al. StreamStory: Exploring multivariate time series on multiple scales.
IEEE Transactions on Visualization and Computer Graphics Vol. 25, No. 4, 1788-1802, 2019.
[237]
Sultanum, N.; Singh, D.; Brudno, M.; Chevalier, F. Doccurate: A curation-based approach for clinical text visualization.
IEEE Transactions on Visualization and Computer Graphics Vol. 25, No. 1, 142-151,2019.
[238]
Sun, G. D.; Wu, Y. C.; Liu, S. X.; Peng, T. Q.; Zhu, J. J. H.; Liang, R. H. EvoRiver: Visual analysis of topic coopetition on social media.
IEEE Transactions on Visualization and Computer Graphics Vol. 20, No. 12, 1753-1762, 2014.
[239]
Sung, C. Y.; Huang, X. Y.; Shen, Y.; Cherng, F. Y.; Lin, W. C.; Wang, H. C. Exploring online learners' interactive dynamics by visually analyzing their time-anchored comments.
Computer Graphics Forum Vol. 36, No. 7, 145-155, 2017.
[240]
Thom, D. et al. Spatiotemporal anomaly detection through visual analysis of geolocated Twitter messages. In: Proceedings of the IEEE Pacific Visualization Symposium, 2012.
[241]
Thom, D.; Krüger, R.; Ertl, T. Can Twitter save lives? A broad-scale study on visual social media analytics for public safety.
IEEE Transactions on Visualization and Computer Graphics Vol. 22, No. 7, 1816-1829, 2016.
[242]
Tkachev, G.; Frey, S.; Ertl, T. Local prediction models for spatiotemporal volume visualization.
IEEE Transactions on Visualization and Computer Graphics, 2019.
[243]
Vehlow, C. et al. Visualizing the evolution of communities in dynamic graphs.
Computer Graphics Forum Vol. 34, No. 1, 277-288, 2015.
[244]
Von Landesberger, T. et al. MobilityGraphs: Visual analysis of mass mobility dynamics via spatio-temporal graphs and clustering.
IEEE Transactions on Visualization and Computer Graphics Vol. 22, No. 1, 11-20, 2016.
[245]
Wang, X.; Dou, W.; Ma, Z.; Villalobos, J.; Chen, Y.; Kraft, T.; Ribarsky, W. I-SI: Scalable architecture for analyzing latent topical-level information from social media data.
Computer Graphics Forum Vol. 31, No. 3, 1275-1284, 2012.
[246]
Wang, X.; Liu, S.; Chen, Y.; Peng, T.-Q.; Su, J.; Yang, J.; Guo, B. How ideas flow across multiple social groups. In: Proceedings of the IEEE Conference on Visual Analytics Science and Technology, 51-60, 2016.
[247]
Wang, Y.; Haleem, H.; Shi, C.; Wu, Y.; Zhao, X.; Fu, S.; Qu, H. Towards easy comparison of local businesses using online reviews.
Computer Graphics Forum Vol. 37, No. 3, 63-74, 2018.
[248]
Wei, F. R.; Liu, S. X.; Song, Y. Q.; Pan, S. M.; Zhou, M. X.; Qian, W. H.; Shi, L.; Tan, L.; Zhang, Q. TIARA: A visual exploratory text analytic system. In: Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2010.
[249]
Wei, J.; Shen, Z.; Sundaresan, N.; Ma, K.-L. Visual cluster exploration of web clickstream data. In: Proceedings of the IEEE Conference on Visual Analytics Science and Technology, 3-12, 2012.
[250]
Wu, A. Y.; Qu, H. M. Multimodal analysis of video collections: Visual exploration of presentation techniques in TED talks.
IEEE Transactions on Visualization and Computer Graphics Vol. 26, No. 7, 2429-2442, 2020.
[251]
Wu, W. et al. MobiSeg: Interactive region segmentation using heterogeneous mobility data. In: Proceedings of the IEEE Pacific Visualization Symposium, 2017.
[252]
Wu, Y. C.; Chen, Z. T.; Sun, G. D.; Xie, X.; Cao, N.; Liu, S. X.; Cui, W. StreamExplorer: 一个用于在社交流中进行视觉化探索的多阶段系统
IEEE Transactions on Visualization and Computer Graphics Vol. 24, No. 10, 2758-2772, 2018.
[253]
Wu, Y. C.; Liu, S. X.; Yan, K.; Liu, M. C.; Wu, F. Z. OpinionFlow: Visual analysis of opinion diffusion on social media.
IEEE Transactions on Visualization and Computer Graphics Vol. 20, No. 12, 1763-1772, 2014.
[254]
Wu, Y. H.; Pitipornvivat, N.; Zhao, J.; Yang, S. X.; Huang, G. W.; Qu, H. M. egoSlider: Visual analysis of egocentric network evolution.
IEEE Transactions on Visualization and Computer Graphics Vol. 22, No. 1, 260-269, 2016.
[255]
Xie, C.; Chen, W.; Huang, X. X.; Hu, Y. Q.; Barlowe, S.; Yang, J. VAET: A visual analytics approach for E-transactions time-series.
IEEE Transactions on Visualization and Computer Graphics Vol. 20, No. 12, 1743-1752, 2014.
[256]
Xu et al. Examining controversy via sentiment divergences of aspects in reviews. In: Proceedings of the IEEE Pacific Visualization Symposium, 240-249, 2017.
[257]
Xu et al. Exploring the evolution of dynamic networks via diachronic node embeddings.
IEEE Transactions on Visualization and Computer Graphics Vol. 26, No. 7, 2387-2402, 2020.
[258]
Xu, P.; Mei, H.; Ren, L.; Chen, W. ViDX: Visual diagnostics of assembly line performance in smart factories.
IEEE Transactions on Visualization and Computer Graphics Vol. 23, No. 1, 291-300, 2017.
[259]
Xu, P.; Wu, Y.; Wei, E.; Peng, T.-Q.; Liu, S.; Zhu, J. J. H.; Qu, H. Visual analysis of topic competition on social media.
IEEE Transactions on Visualization and Computer Graphics Vol. 19, No. 12, 2012-2021, 2013.
[260]
Yu, L.; Wu, W.; Li, X.; Li, G.; Ng, W. S.; Ng, S.-K.; Huang, Z.; Arunan, A.; Watt, H. M. iVizTRANS: Interactive visual learning for home and work place detection from massive public transportation data. In: Proceedings of the IEEE Conference on Visual Analytics Science and Technology, 49-56, 2015.
[261]
Garcia Zanabria, G.; Alvarenga Silveira, J.; Poco, J.; Paiva, A.; Batista Nery, M.; Silva, C. T.; de Abreu, S. F. A.; Nonato, L. G. CrimAnalyzer: Understanding crime patterns in São Paulo.
IEEE Transactions on Visualization and Computer Graphics, 2019.
[262]
Zeng, H.; Shu, X.; Wang, Y.; Wang, Y.; Zhang, L.; Pong, T.-C.; Qu, H. EmotionCues: Emotion-oriented visual summarization of classroom videos.
IEEE Transactions on Visualization and Computer Graphics, 2020.
[263]
Zeng, H.; Wang, X.; Wu, A.; Wang, Y.; Li, Q.; Endert, A.; Qu, H. EmoCo: Visual analysis of emotion coherence in presentation videos.
IEEE Transactions on Visualization and Computer Graphics Vol. 26, No. 1, 927-937, 2020.
[264]
Zeng, W.; Fu, C.-W.; Arisona, S. M.; Erath, A.; Qu, H. Visualizing waypoints-constrained origin-destination patterns for massive transportation data.
Computer Graphics Forum Vol. 35, No. 8, 95-107, 2016.
[265]
Zhang, J. W.; Ahlbrand, B.; Malik, A.; Chae, J.; Min, Z. Y.; Ko, S.; Ebert, D. S. A visual analytics framework for microblog data analysis at multiple scales of aggregation.
Computer Graphics Forum Vol. 35, No. 3, 441-450, 2016.
[266]
Zhang, J. W. et al. Visual analysis of public utility service problems in a metropolis.
IEEE Transactions on Visualization and Computer Graphics Vol. 20, No. 12, 1843-1852, 2014.
[267]
Zhao, J.; Cao, N.; Wen, Z.; Song, Y.; Lin, Y.-R.; Collins, C. #FluxFlow: Visual analysis of anomalous information spreading on social media.
IEEE Transactions on Visualization and Computer Graphics Vol. 20, No. 12, 1773-1782, 2014.
[268]
Zhao et al. Visual analytics for electromagnetic situation awareness in radio monitoring and management.
IEEE Transactions on Visualization and Computer Graphics Vol. 26, No. 1, 590-600, 2020.
[269]
Zhou, Z. G.; Meng, L. H.; Tang, C.; Guo, Z. Y.; Hu, M. X.; Chen, W. Visual abstraction of large-scale geospatial origin-destination movement data.
IEEE Transactions on Visualization and Computer Graphics Vol. 25, No. 1, 43-53, 2019.
[270]
Visual analysis of the spatial distribution of air quality data collected at multiple locations over time.
IEEE Computer Graphics and Applications Vol. 37, No. 5, 98-105, 2017.
[271]
Tian, T.; Zhu, J. Max-margin majority voting for learning from crowds. In: Proceedings of the Advances in Neural Information Processing Systems, 1621-1629, 2015.
[272]
Ng, A. Machine learning and AI via brain simulations. Deep learning slides, Stanford University, Mar 2013. Available online [link to slides].
[273]
Nilsson, N. J. Introduction to machine learning: An early draft of a proposed textbook. 2005. Available at https://ai.stanford.edu/~nilsson/MLBOOK.pdf.
[274]
Lakshminarayanan, B.; Pritzel, A.; Blundell, C. Simple and scalable predictive uncertainty estimation using deep ensembles. In: Proceedings of the Advances in Neural Information Processing Systems, 2017.
[275]
Lee, K.; Lee, H.; Lee, K.; Shin, J. Training confidence-calibrated classifiers for detecting out-of-distribution samples.
arXiv preprint arXiv:1711.09325, 2018.
[276]
Liu et al. Improving learning-from-crowds through expert validation. In: Proceedings of the 26th International Joint Conference on Artificial Intelligence, 2017.
[277]
Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M. et al. ImageNet large scale visual recognition challenge.
International Journal of Computer Vision Vol. 115, No. 3, 211-252, 2015.
[278]
Chandrashekar, G.; Sahin, F. A survey on feature selection methods.
Computers & Electrical Engineering Vol. 40, No. 1, 16-28, 2014.
[279]
Brooks, M.; Amershi, S.; Lee, B.; Drucker, S. M.; Kapoor, A.; Simard, P. FeatureInsight: Visual support for error-driven feature ideation in text classification. In: Proceedings of the IEEE Conference on Visual Analytics Science and Technology, 105-112, 2015.
[280]
Tzeng, F.-Y.; Ma, K.-L. Opening the black box: Data driven visualization of neural networks. In: Proceedings of IEEE Visualization, 2005.
[281]
Abadi, M.; Agarwal, A.; Barham, P.; Brevdo, E.; Chen, Z.; Citro, C.; Corrado, G. S.; Davis, A.; Dean, J. et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems.
arXiv preprint arXiv:1603.04467, 2016.
[282]
Ming, Y.; Xu, P.; Qu, H.; Ren, L. Interpretable and steerable sequence learning via prototypes. In: Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2019.
[283]
Liu, S. X.; Cui, W. W.; Wu, Y. C.; Liu, M. C. A survey on information visualization: Recent advances and challenges.
The Visual Computer Vol. 30, No. 12, 1373-1393, 2014.
[284]
Ma et al. Tag-latent Dirichlet allocation: Understanding hashtags and their relationships. 2013.
[285]
Kosara, R.; Bendix, F.; Hauser, H. Parallel Sets: Interactive exploration and visual analysis of categorical data.
IEEE Transactions on Visualization and Computer Graphics Vol. 12, No. 4, 558-568, 2006.
[286]
Mikolov, T.; Sutskever, I.; Chen, K.; Corrado, G. S.; Dean, J. Distributed representations of words and phrases along with their compositionality. In: Proceedings of the Advances in Neural Information Processing Systems, 3111-3119, 2013.
[287]
Blei, D. M.; Ng, A. Y.; Jordan, M. I. Latent Dirichlet allocation.
Journal of Machine Learning Research Vol. 3, 993-1022, 2003.
[288]
Teh, Y. W.; Jordan, M. I.; Beal, M. J.; Blei, D. M. Hierarchical Dirichlet processes.
Journal of the American Statistical Association Vol. 101, No. 476, 1566-1581, 2006.
[289]
Wang, X. T.; Liu, S. X.; Song, Y. Q.; Guo, B. N. Mining evolutionary multi-branch trees from text streams. In: Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 722-730, 2013.
[290]
Li, Y. F.; Guo, L. Z.; Zhou, Z. H. Towards safe weakly supervised learning.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019.
[291]
Li, Y. F.; Wang, S. B.; Zhou, Z. H. Graph quality judgement: A large margin expedition. In: Proceedings of the International Joint Conference on Artificial Intelligence, 2016.
[292]
Zhou, Z. H. A brief introduction to weakly supervised learning.
National Science Review Vol. 5, No. 1, 44-53, 2018.
[293]
Foulds, J.; Frank, E. A review of multi-instance learning assumptions.
The Knowledge Engineering Review Vol. 25, No. 1, 1-25, 2010.
[294]
Zhou, Z. H. Multi-instance learning from supervised view.
Journal of Computer Science and Technology Vol. 21, No. 5, 800-809, 2006.
[295]
Donahue, J.; Jia, Y.; Vinyals, O.; Hoffman, J.; Zhang, N.; Tzeng, E.; Darrell, T. DeCAF: A deep convolutional activation feature for generic visual recognition. In: Proceedings of the International Conference on Machine Learning, 647-655, 2014.
[296]
Wang, Q. W.; Yuan, J.; Chen, S. X.; Su, H.; Qu, H. M.; Liu, S. X. Visual genealogy of deep neural networks.
IEEE Transactions on Visualization and Computer Graphics Vol. 26, No. 11, 3340-3352, 2020.
[297]
Ayinde, B. O.; Zurada, J. M. Building efficient ConvNets using redundant feature pruning.
arXiv preprint arXiv:1802.07653, 2018.
[298]
Baltrušaitis, T.; Ahuja, C.; Morency, L. P. Multimodal machine learning: A survey and taxonomy.
IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. 41, No. 2, 423-443, 2019.
[299]
Lu, J.; Batra, D.; Parikh, D.; Lee, S. ViLBERT: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In: Proceedings of the Advances in Neural Information Processing Systems, 13-23, 2019.
[300]
Lu, J.; Liu, A. J.; Dong, F.; Gu, F.; Gama, J.; Zhang, G. Q. Learning under concept drift: A review.
IEEE Transactions on Knowledge and Data Engineering Vol. 31, No. 12, 2346-2363, 2018.
[301]
Yang, W. et al. Diagnosing concept drift with visual analytics.
arXiv preprint arXiv:2007.14372, 2020.
[302]
Wang, X. et al. ConceptExplorer: Visual analysis of concept drifts in multi-source time-series data.
arXiv preprint arXiv:2007.15272, 2020.
[303]
Liu et al. Steering data quality with visual analytics: The complexity challenge.
Visual Informatics Vol. 2, No. 4, 191-197, 2018.
