# 4. 3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation
Abstract. This paper introduces a network for volumetric segmentation that learns from sparsely annotated volumetric images. We outline two attractive use cases of this method: (1) In a semi-automated setup, the user annotates some slices in the volume to be segmented. The network learns from these sparse annotations and provides a dense 3D segmentation. (2) In a fully-automated setup, we assume that a representative, sparsely annotated training set exists. Trained on this data set, the network densely segments new volumetric images. The proposed network extends the previous u-net architecture from Ronneberger et al. by replacing all 2D operations with their 3D counterparts. The implementation performs on-the-fly elastic deformations for efficient data augmentation during training. It is trained end-to-end from scratch, i.e., no pre-trained network is required. We test the performance of the proposed method on a complex, highly variable 3D structure, the Xenopus kidney, and achieve good results for both use cases.
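The on-the-fly elastic deformation mentioned in the abstract can be sketched as follows. This is a minimal illustration in the common style of such augmentations, not the authors' implementation: the function name and parameters (`alpha`, `sigma`) are our own. The idea is to sample a random per-voxel displacement, smooth it with a Gaussian to make the warp elastic rather than noisy, and resample the volume along the displaced grid.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def elastic_deform_3d(volume, alpha=15.0, sigma=3.0, rng=None):
    """Apply a smooth random elastic deformation to a 3D volume.

    alpha scales the displacement magnitude; sigma controls the
    smoothness of the deformation field. (Illustrative sketch, not
    the paper's code.)
    """
    rng = rng or np.random.default_rng()
    shape = volume.shape
    # One smooth random displacement field per spatial axis.
    displacements = [
        gaussian_filter(rng.uniform(-1, 1, shape), sigma) * alpha
        for _ in range(3)
    ]
    # Identity sampling grid, shifted by the displacement fields.
    grid = np.meshgrid(*[np.arange(s) for s in shape], indexing="ij")
    coords = [g + d for g, d in zip(grid, displacements)]
    # Linearly interpolate the volume at the displaced coordinates.
    return map_coordinates(volume, coords, order=1, mode="reflect")

vol = np.random.default_rng(0).random((16, 32, 32))
warped = elastic_deform_3d(vol, alpha=5.0, sigma=2.0,
                           rng=np.random.default_rng(1))
```

Because the deformation is sampled fresh each time, the network effectively sees a new variant of every training volume in every epoch, which is what makes training from only a few annotated slices feasible.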