
Nature Machine Intelligence: Volume 1 Issue 10, October 2019


Deep learning optoacoustic tomography with sparse data

The ever-evolving field of optoacoustic (photoacoustic) imaging and tomography is continually motivated by the quest for enhanced imaging performance, specifically in resolution, speed, sensitivity, depth, and contrast. In practice, data acquisition strategies commonly employ suboptimal sampling methods for tomographic data, inevitably leading to performance trade-offs and degraded image quality. To address these challenges, we present a novel framework designed to restore image quality from sparse optoacoustic data using deep convolutional neural networks. Through comprehensive testing with whole-body mouse imaging in vivo, we demonstrate the framework's effectiveness. A full-view tomographic scanner capable of delivering high-quality cross-sectional images from living mice was developed to generate accurate reference images for optimal network training. When applied to images reconstructed from significantly undersampled datasets or limited-view scans, the trained network successfully enhances the visibility of arbitrarily oriented structures while restoring expected image quality. Notably, it also reduces reconstruction artifacts observed in reference images derived from densely sampled data. Comparable improvements were not achieved when training was conducted using synthetic or phantom datasets alone, emphasizing the critical role of high-quality in vivo training data. This innovative approach offers significant benefits for optoacoustic imaging applications by mitigating common image artifacts, improving anatomical contrast and quantification capabilities, accelerating acquisition and reconstruction processes, and facilitating the development of practical imaging systems. The proposed method operates exclusively on image-domain data and can seamlessly integrate with other modality-based reconstruction techniques.
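To make the sparse-data setting concrete, the sketch below simulates angular undersampling of a hypothetical full-view acquisition; the channel counts and the 4x undersampling factor are illustrative assumptions, not the paper's actual scanner parameters. The network described above is then trained, purely in the image domain, to map reconstructions from the sparse subset back to full-view quality.

```python
import numpy as np

# Hypothetical full-view scan: 512 transducer positions over 360 degrees.
n_dense = 512
angles_dense = np.linspace(0.0, 2.0 * np.pi, n_dense, endpoint=False)

# A sparse acquisition keeps every 4th position: 4x fewer channels.
undersampling = 4
angles_sparse = angles_dense[::undersampling]

# The deep network learns: reconstruction(angles_sparse) -> reconstruction(angles_dense),
# operating only on the already-reconstructed images.
print(angles_sparse.size)   # 128 positions retained
```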

Unsupervised data-to-content transformation with histogram-matching cycle-consistent generative adversarial networks

Image segmentation represents a fundamental task across various research domains. To address increasingly complex image data, AI-driven methodologies have emerged as an effective solution to overcome limitations inherent in traditional feature extraction techniques. Given that most AI research outputs are publicly accessible and algorithms requiring specific implementations are now feasible in numerous widely-used programming languages, these AI-based approaches are becoming more prevalent. However, such methods often necessitate manual annotation by researchers to establish training targets for algorithm convergence. This annotation process can be both labor-intensive and restrictive in practical applications. Drawing inspiration from cycle-consistent GANs' capability to perform style transfer tasks, this work introduces an innovative unsupervised learning framework that leverages synthetic image generation for image segmentation purposes. By comparing our proposed unsupervised method against a state-of-the-art supervised cell-counting network on the VGG Cells dataset, we demonstrate that our approach achieves comparable performance while also exhibiting enhanced precision in identifying individual cells within segmented images. The efficacy of this methodology is further substantiated through its application to diverse imaging scenarios: bright-field microscopy images of cellular cultures, live/dead assays conducted on C. elegans worms, and X-ray computed tomography slices of metallic nanowire meshes.
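The cycle-consistency idea behind this style-transfer approach can be sketched in a few lines of NumPy. The generators `G` and `F` below are hypothetical stand-ins for the paper's networks: `G` maps raw images toward a synthetic segmentation-style domain, `F` maps back, and training penalizes round-trip error without any paired labels.

```python
import numpy as np

def l1(a, b):
    """Mean absolute error between two arrays."""
    return float(np.mean(np.abs(a - b)))

def cycle_consistency_loss(x, y, G, F, lam=10.0):
    """Cycle loss of a CycleGAN-style generator pair.

    G: domain X (e.g. microscopy images) -> domain Y (synthetic
    segmentation-style images); F: Y -> X. The loss pushes
    F(G(x)) ~ x and G(F(y)) ~ y, so no paired annotations are needed.
    """
    return lam * (l1(F(G(x)), x) + l1(G(F(y)), y))
```

With a toy invertible pair, e.g. `G(a) = a + 1` and `F(a) = a - 1`, both round trips are exact and the loss is zero; any non-invertible `G` leaves a positive residual.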

Original paper

Fast neural network approach for direct covariant force prediction in complex multi-element extended systems

Neural network force fields (NNFF) relate atomic structures to interatomic forces, enabling long-timescale, high-quality molecular dynamics simulations while avoiding expensive quantum-mechanical calculations.

However, most NNFF methods applied to complex multi-element atomic systems predict atomic force vectors only indirectly, relying on rotation-invariant structural features and spatial derivatives of the network output, which makes them computationally expensive.

This work proposes a staggered NNFF architecture that exploits both rotation-invariant and rotation-covariant features to predict atomic force vectors directly; comparative benchmarks show the authors' Python engine achieving a 2.2x speed-up over a modern C++ engine.

This efficient architecture makes it possible to apply NNFF to complex extended systems of three or more elements, including long polymer chains, amorphous oxides, and surface chemical reactions.

In addition, this work describes an architecture that produces covariant vector outputs directly from local atomic environments.
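The covariance property described above can be illustrated with a minimal sketch: if the force on an atom is a sum of unit vectors toward its neighbours, each weighted by a rotation-invariant scalar, then rotating the whole structure rotates the predicted force by the same rotation. The Gaussian-style weight here is a hypothetical stand-in for the learned invariant network output, not the paper's model.

```python
import numpy as np

def covariant_force(positions, i, weight_fn):
    """Force on atom i as an invariant-weighted sum of neighbour unit vectors.

    weight_fn depends only on the interatomic distance (a rotation
    invariant), so the output vector transforms covariantly when the
    whole structure is rotated.
    """
    force = np.zeros(3)
    for j, rj in enumerate(positions):
        if j == i:
            continue
        d = rj - positions[i]
        r = np.linalg.norm(d)
        force += weight_fn(r) * d / r
    return force
```

Numerically, `covariant_force(positions @ R.T, i, w)` equals `R @ covariant_force(positions, i, w)` for any rotation matrix `R`, which is exactly the covariance the architecture is built to guarantee.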

Paper link

Clinically applicable deep learning framework for organ-at-risk delineation in CT images

Radiation therapy stands as one of the most commonly utilized treatment methodologies for cancer management. A pivotal phase in radiation therapy planning involves the precise identification and delineation of all organs at risk (OARs) to minimize potential adverse effects on nearby healthy tissues. However, the manual delineation of OARs based on computed tomography images is both time-intensive and prone to human error. To address these challenges, we introduce a deep learning-based system designed to automatically segment OARs in head and neck regions, utilizing a dataset comprising 215 computed tomography scans where 28 OARs were meticulously delineated by experienced radiation oncologists. When tested on an independent dataset of 100 computed tomography scans, our system achieved an average Dice similarity coefficient of 78.34%, significantly outperforming human experts by 10.05% and surpassing the previous state-of-the-art method by an additional 5.18%. Notably, our system requires only a few seconds to complete the delineation of a single scan, compared to over half an hour typically required by human experts. These results underscore the potential for deep learning to enhance the quality and efficiency of radiation therapy treatment planning.
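The Dice similarity coefficient used to score the delineations above has a standard definition that can be computed directly; this is a generic implementation for binary masks, not the paper's evaluation code, and the empty-mask convention is an assumption.

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice similarity coefficient between two binary masks (1.0 = perfect overlap)."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treated as a perfect match by convention
    return 2.0 * np.logical_and(pred, gt).sum() / denom
```

For example, a prediction covering two pixels of which one matches a one-pixel ground truth scores 2 * 1 / (2 + 1), roughly 0.667; a score of 78.34% therefore reflects substantial, though imperfect, volume overlap with the oncologists' contours.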

Code link
