
U-Net segmentation code (dataset not included)

Category: Artificial Intelligence · Size: 2.66 MB

Resource description:

U-Net segmentation code, without the dataset. The archive also bundles the original U-Net paper (unet原文.pdf), excerpted below.
U-Net: Convolutional Networks for Biomedical Image Segmentation
Olaf Ronneberger, Philipp Fischer, and Thomas Brox
Computer Science Department and BIOSS Centre for Biological Signalling Studies, University of Freiburg, Germany
ronneber@informatik.uni-freiburg.de
WWW home page: http://lmb.informatik.uni-freiburg.de/
arXiv:1505.04597v1 [cs.CV] 18 May 2015
Abstract. There is broad consensus that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast: segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net.
1 Introduction
In the last two years, deep convolutional networks have outperformed the state of the art in many visual recognition tasks, e.g. [7,3]. While convolutional networks have already existed for a long time [8], their success was limited due to the size of the available training sets and the size of the considered networks. The breakthrough by Krizhevsky et al. [7] was due to supervised training of a large network with 8 layers and millions of parameters on the ImageNet dataset with 1 million training images. Since then, even larger and deeper networks have been trained [12].

The typical use of convolutional networks is on classification tasks, where the output to an image is a single class label. However, in many visual tasks, especially in biomedical image processing, the desired output should include localization, i.e., a class label is supposed to be assigned to each pixel. Moreover, thousands of training images are usually beyond reach in biomedical tasks. Hence, Ciresan et al. [1] trained a network in a sliding-window setup to predict the class label of each pixel by providing a local region (patch) around that pixel as input. First, this network can localize. Secondly, the training data in terms of patches is much larger than the number of training images. The resulting network won the EM segmentation challenge at ISBI 2012 by a large margin.

Obviously, the strategy in Ciresan et al. [1] has two drawbacks. First, it is quite slow because the network must be run separately for each patch, and there is a lot of redundancy due to overlapping patches. Secondly, there is a trade-off between localization accuracy and the use of context. Larger patches require more max-pooling layers that reduce the localization accuracy, while small patches allow the network to see only little context. More recent approaches [11,4] proposed a classifier output that takes into account the features from multiple layers. Good localization and the use of context are possible at the same time.

[Figure 1: U-Net architecture diagram]
Fig. 1. U-net architecture (example for 32x32 pixels in the lowest resolution). Each blue box corresponds to a multi-channel feature map. The number of channels is denoted on top of the box. The x-y-size is provided at the lower left edge of the box. White boxes represent copied feature maps. The arrows denote the different operations (conv 3x3 + ReLU, copy and crop, max pool 2x2, up-conv 2x2, conv 1x1).
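The downloaded repository implements this architecture in PyTorch (model/unet_model.py and model/unet_parts.py). As a rough orientation only — this is not the repository's actual code, and all names here are illustrative — a minimal U-Net following Fig. 1 could be sketched as below, using padded 3x3 convolutions for simplicity where the paper uses unpadded ("valid") ones:

import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    # Two 3x3 conv + ReLU blocks, as in each stage of Fig. 1.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class UNet(nn.Module):
    def __init__(self, n_channels=1, n_classes=2):
        super().__init__()
        self.down1 = double_conv(n_channels, 64)
        self.down2 = double_conv(64, 128)
        self.down3 = double_conv(128, 256)
        self.down4 = double_conv(256, 512)
        self.bottom = double_conv(512, 1024)
        self.pool = nn.MaxPool2d(2)
        # 2x2 up-convolutions that halve the channel count, as in the paper.
        self.up4 = nn.ConvTranspose2d(1024, 512, 2, stride=2)
        self.conv4 = double_conv(1024, 512)   # 512 skip + 512 upsampled
        self.up3 = nn.ConvTranspose2d(512, 256, 2, stride=2)
        self.conv3 = double_conv(512, 256)
        self.up2 = nn.ConvTranspose2d(256, 128, 2, stride=2)
        self.conv2 = double_conv(256, 128)
        self.up1 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.conv1 = double_conv(128, 64)
        self.out = nn.Conv2d(64, n_classes, 1)  # final 1x1 convolution

    def forward(self, x):
        c1 = self.down1(x)
        c2 = self.down2(self.pool(c1))
        c3 = self.down3(self.pool(c2))
        c4 = self.down4(self.pool(c3))
        b = self.bottom(self.pool(c4))
        # "Copy and crop": with padded convs the sizes already match,
        # so concatenating the skip connection suffices.
        x = self.conv4(torch.cat([c4, self.up4(b)], dim=1))
        x = self.conv3(torch.cat([c3, self.up3(x)], dim=1))
        x = self.conv2(torch.cat([c2, self.up2(x)], dim=1))
        x = self.conv1(torch.cat([c1, self.up1(x)], dim=1))
        return self.out(x)

With padding, input sizes only need to be divisible by 16 (four 2x2 poolings); the paper's valid-convolution variant instead shrinks each feature map, which is what the cropping in Fig. 1 compensates for.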
In this paper, we build upon a more elegant architecture, the so-called "fully convolutional network" [9]. We modify and extend this architecture such that it works with very few training images and yields more precise segmentations; see Figure 1. The main idea in [9] is to supplement a usual contracting network by successive layers, where pooling operators are replaced by upsampling operators. Hence, these layers increase the resolution of the output. In order to localize, high resolution features from the contracting path are combined with the upsampled output. A successive convolution layer can then learn to assemble a more precise output based on this information.

[Figure 2: overlap-tile strategy illustration]
Fig. 2. Overlap-tile strategy for seamless segmentation of arbitrarily large images (here: segmentation of neuronal structures in EM stacks). Prediction of the segmentation in the yellow area requires image data within the blue area as input. Missing input data is extrapolated by mirroring.
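To make the "copy and crop" step concrete: with the paper's unpadded convolutions the contracting-path feature map is larger than the upsampled one, so it must be center-cropped before concatenation. A minimal sketch of one expanding-path stage (illustrative names, not the repository's code):

import torch
import torch.nn as nn

class UpBlock(nn.Module):
    # One expanding-path stage of Fig. 1: 2x2 up-conv, center-crop the
    # contracting-path features, concatenate, then two 3x3 "valid" convs.
    # With valid convs every 3x3 conv shrinks the map by 2 pixels per side
    # (e.g. 572 -> 570 -> 568 in the first stage), hence the crop.
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3), nn.ReLU(inplace=True),
        )

    def forward(self, x, skip):
        x = self.up(x)
        # Center-crop the (larger) skip tensor to the upsampled size:
        # this is the "copy and crop" arrow in Fig. 1.
        dh = (skip.size(2) - x.size(2)) // 2
        dw = (skip.size(3) - x.size(3)) // 2
        skip = skip[:, :, dh:dh + x.size(2), dw:dw + x.size(3)]
        return self.conv(torch.cat([skip, x], dim=1))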
One important modification in our architecture is that in the upsampling part we also have a large number of feature channels, which allow the network to propagate context information to higher resolution layers. As a consequence, the expansive path is more or less symmetric to the contracting path and yields a u-shaped architecture. The network does not have any fully connected layers and only uses the valid part of each convolution, i.e., the segmentation map only contains the pixels for which the full context is available in the input image. This strategy allows the seamless segmentation of arbitrarily large images by an overlap-tile strategy (see Figure 2). To predict the pixels in the border region of the image, the missing context is extrapolated by mirroring the input image. This tiling strategy is important to apply the network to large images, since otherwise the resolution would be limited by the GPU memory.
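A sketch of this overlap-tile inference with mirror extrapolation (NumPy; the tile and margin defaults follow the 572/388 example of Fig. 1, everything else is an assumption, and the image is assumed larger than one tile):

import numpy as np

def predict_large_image(image, predict_tile, out_tile=388, margin=92):
    # predict_tile: hypothetical callable mapping a (out_tile + 2*margin)^2
    # input patch to an out_tile^2 prediction; in Fig. 1, 572 -> 388,
    # so margin = (572 - 388) / 2 = 92.
    h, w = image.shape
    # Round the canvas up to a whole number of output tiles, then add the
    # mirrored context margin the network needs around each tile.
    H = -(-h // out_tile) * out_tile   # ceil to a multiple of out_tile
    W = -(-w // out_tile) * out_tile
    padded = np.pad(image, ((margin, margin + H - h),
                            (margin, margin + W - w)), mode="reflect")
    result = np.zeros((H, W), dtype=np.float32)
    for y in range(0, H, out_tile):
        for x in range(0, W, out_tile):
            tile = padded[y:y + out_tile + 2 * margin,
                          x:x + out_tile + 2 * margin]
            result[y:y + out_tile, x:x + out_tile] = predict_tile(tile)
    return result[:h, :w]  # drop the rounding padding again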
As very little training data is available for our tasks, we use extensive data augmentation by applying elastic deformations to the available training images. This allows the network to learn invariance to such deformations without the need to see these transformations in the annotated image corpus. This is particularly important in biomedical segmentation, since deformation used to be the most common variation in tissue and realistic deformations can be simulated efficiently. The value of data augmentation for learning invariance has been shown in Dosovitskiy et al. [2] in the scope of unsupervised feature learning.
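The excerpt does not show the augmentation code; a common way to realize such elastic deformations (with illustrative parameter values, not the paper's) is to smooth a random displacement field and resample both the image and its label mask with it:

import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def elastic_deform(image, label, alpha=34.0, sigma=4.0, rng=None):
    # alpha scales the displacement amplitude; sigma smooths the random
    # displacement field into a coherent deformation. Both values are
    # illustrative, not taken from the paper.
    rng = np.random.default_rng() if rng is None else rng
    shape = image.shape
    dx = gaussian_filter(rng.uniform(-1, 1, shape), sigma) * alpha
    dy = gaussian_filter(rng.uniform(-1, 1, shape), sigma) * alpha
    yy, xx = np.meshgrid(np.arange(shape[0]), np.arange(shape[1]),
                         indexing="ij")
    coords = [yy + dy, xx + dx]
    warped_img = map_coordinates(image, coords, order=3, mode="reflect")
    # Nearest-neighbour interpolation (order=0) keeps label values discrete.
    warped_lbl = map_coordinates(label, coords, order=0, mode="reflect")
    return warped_img, warped_lbl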
Another challenge in many cell segmentation tasks is the separation of touching objects of the same class; see Figure 3. To this end, we propose the use of a weighted loss, where the separating background labels between touching cells obtain a large weight in the loss function.
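Later in the paper this weight map is made precise as w(x) = w_c(x) + w_0 · exp(-(d_1(x) + d_2(x))^2 / (2σ^2)), where d_1 and d_2 are the distances to the nearest and second-nearest cell border. Applying such a precomputed per-pixel weight map is straightforward; a minimal PyTorch sketch (illustrative, not the repository's loss):

import torch
import torch.nn.functional as F

def weighted_seg_loss(logits, target, weight_map):
    # logits: (N, C, H, W) raw scores; target: (N, H, W) class indices;
    # weight_map: (N, H, W) per-pixel weights, precomputed from the label
    # masks so the thin background ridges between touching cells weigh most.
    per_pixel = F.cross_entropy(logits, target, reduction="none")  # (N, H, W)
    return (per_pixel * weight_map).mean()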
The resulting network is applicable to various biomedical segmentation problems. In this paper, we show results on the segmentation of neuronal structures in EM stacks (an ongoing competition started at ISBI 2012), where we outperformed the network of Ciresan et al. [1].

Resource file list:

unet-master - 副本.zip (approximately 92 files):
  1. unet-master - 副本/.idea/
  2. unet-master - 副本/.idea/.gitignore 176B
  3. unet-master - 副本/.idea/inspectionProfiles/
  4. unet-master - 副本/.idea/inspectionProfiles/profiles_settings.xml 174B
  5. unet-master - 副本/.idea/inspectionProfiles/Project_Default.xml 510B
  6. unet-master - 副本/.idea/misc.xml 272B
  7. unet-master - 副本/.idea/modules.xml 274B
  8. unet-master - 副本/.idea/unet_42-new.iml 472B
  9. unet-master - 副本/.idea/workspace.xml 7.83KB
  10. unet-master - 副本/__pycache__/
  11. unet-master - 副本/bmp转jpg.py 1KB
  12. unet-master - 副本/bmp转png.py 1001B
  13. unet-master - 副本/data/
  14. unet-master - 副本/data/results/
  15. unet-master - 副本/data/Test_Images/
  16. unet-master - 副本/data/Test_Labels/
  17. unet-master - 副本/data/Training_Images/
  18. unet-master - 副本/data/Training_Labels/
  19. unet-master - 副本/images/
  20. unet-master - 副本/images/111/
  21. unet-master - 副本/images/111/ISIC_0000000.jpg 48.79KB
  22. unet-master - 副本/images/111/ISIC_0000000_res.png 4.76KB
  23. unet-master - 副本/images/ISIC_0000000.jpg 48.79KB
  24. unet-master - 副本/images/ISIC_0000000_res.png 4.72KB
  25. unet-master - 副本/images/right.jpeg 26.57KB
  26. unet-master - 副本/images/tmp/
  27. unet-master - 副本/images/tmp/tmp_upload.jpeg 21.27KB
  28. unet-master - 副本/images/UI/
  29. unet-master - 副本/images/UI/logo.jpeg 33.37KB
  30. unet-master - 副本/images/UI/lufei.png 215.7KB
  31. unet-master - 副本/images/UI/right.jpeg 25.45KB
  32. unet-master - 副本/images/UI/up.jpeg 27.88KB
  33. unet-master - 副本/images/up.jpeg 21.27KB
  34. unet-master - 副本/labelme2seg.py 1.02KB
  35. unet-master - 副本/model/
  36. unet-master - 副本/model/__init__.py
  37. unet-master - 副本/model/__pycache__/
  38. unet-master - 副本/model/__pycache__/__init__.cpython-37.pyc 141B
  39. unet-master - 副本/model/__pycache__/__init__.cpython-38.pyc 161B
  40. unet-master - 副本/model/__pycache__/unet_model.cpython-37.pyc 1.34KB
  41. unet-master - 副本/model/__pycache__/unet_model.cpython-38.pyc 1.37KB
  42. unet-master - 副本/model/__pycache__/unet_parts.cpython-37.pyc 2.79KB
  43. unet-master - 副本/model/__pycache__/unet_parts.cpython-38.pyc 2.75KB
  44. unet-master - 副本/model/unet_model.py 1.29KB
  45. unet-master - 副本/model/unet_parts.py 3.39KB
  46. unet-master - 副本/predict.py 1.72KB
  47. unet-master - 副本/requirements.txt 143B
  48. unet-master - 副本/results/
  49. unet-master - 副本/results/confusion_matrix.csv 68B
  50. unet-master - 副本/results/mIoU.png 15.56KB
  51. unet-master - 副本/results/mPA.png 14.85KB
  52. unet-master - 副本/results/Precision.png 14.7KB
  53. unet-master - 副本/results/Recall.png 14.06KB
  54. unet-master - 副本/test.py 4.17KB
  55. unet-master - 副本/testdata/
  56. unet-master - 副本/testdata/jsons/
  57. unet-master - 副本/testdata/jsons/Case-1-U-1-1.json 65.34KB
  58. unet-master - 副本/testdata/jsons/Case-2-U-2-2.json 77.86KB
  59. unet-master - 副本/testdata/jsons/Case-2-U-2-3.json 89.63KB
  60. unet-master - 副本/testdata/jsons/Case-3-U-5-0.json 83.71KB
  61. unet-master - 副本/testdata/jsons/Case-3-U-5-2.json 83.12KB
  62. unet-master - 副本/testdata/jsons/Case-3-U-5-3.json 80.91KB
  63. unet-master - 副本/testdata/jsons/Case-4-U-8-0.json 84.39KB
  64. unet-master - 副本/testdata/jsons/Case-4-U-8-1.json 85.46KB
  65. unet-master - 副本/testdata/jsons/Case-5-U-10-0.json 78.91KB
  66. unet-master - 副本/testdata/jsons/Case-5-U-10-1.json 82.62KB
  67. unet-master - 副本/testdata/labels/
  68. unet-master - 副本/testdata/labels/Case-1-U-1-1.png 3.18KB
  69. unet-master - 副本/testdata/labels/Case-2-U-2-2.png 2.82KB
  70. unet-master - 副本/testdata/labels/Case-2-U-2-3.png 3.25KB
  71. unet-master - 副本/testdata/labels/Case-3-U-5-0.png 3.55KB
  72. unet-master - 副本/testdata/labels/Case-3-U-5-2.png 3.59KB
  73. unet-master - 副本/testdata/labels/Case-3-U-5-3.png 3.24KB
  74. unet-master - 副本/testdata/labels/Case-4-U-8-0.png 3.02KB
  75. unet-master - 副本/testdata/labels/Case-4-U-8-1.png 2.44KB
  76. unet-master - 副本/testdata/labels/Case-5-U-10-0.png 3.47KB
  77. unet-master - 副本/testdata/labels/Case-5-U-10-1.png 3.46KB
  78. unet-master - 副本/train.py 2.46KB
  79. unet-master - 副本/ui.py 8.04KB
  80. unet-master - 副本/unet原文.pdf 1.57MB
  81. unet-master - 副本/utils/
  82. unet-master - 副本/utils/__pycache__/
  83. unet-master - 副本/utils/__pycache__/dataset.cpython-37.pyc 1.81KB
  84. unet-master - 副本/utils/__pycache__/dataset.cpython-38.pyc 1.84KB
  85. unet-master - 副本/utils/__pycache__/utils_metrics.cpython-37.pyc 6.01KB
  86. unet-master - 副本/utils/__pycache__/utils_metrics.cpython-38.pyc 6.35KB
  87. unet-master - 副本/utils/data_remove_seg.py 730B
  88. unet-master - 副本/utils/dataset.py 2.81KB
  89. unet-master - 副本/utils/gen_split.py 1.57KB
  90. unet-master - 副本/utils/label2png.py 1.41KB
  91. unet-master - 副本/utils/utils_metrics.py 9.26KB
  92. unet-master - 副本/切换镜像.txt 348B