Improved YOLOv5 + binocular stereo distance measurement

Artificial Intelligence | 71.18 MB | 78 | Points required: 1

Resource description:

Features of the new version (note: it currently only supports binocular cameras with a 2560*720 resolution; other resolutions require code changes):

1. The original "回"-shaped (concentric-square) pixel search is replaced with a "米"-shaped (eight-direction) search. The number of stored pixel samples (default 20) is configurable, and the median of the valid pixels is used, which in my view is more representative than the mean. A minimal sketch of this idea is shown after the README excerpt's introduction below.
2. Stereo matching is performed once every 10 frames (about 1/3 second), which speeds up the code.
3. Real-time detection is supported; the achievable frame rate depends on the performance of your machine.
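The following is a minimal sketch of the sampling and median strategy described above, not the repository's actual code; the function name, the `disparity` array, and the commented `stereo_matcher` are assumptions used only for illustration. It samples disparity values along eight directions from a detection center ("米"-shaped pattern), keeps up to 20 valid samples, and converts the median disparity to depth; the trailing comment shows the idea of refreshing the stereo match only every 10 frames.

```python
import numpy as np

def sample_depth_star(disparity, cx, cy, fx, baseline, max_samples=20, step=2):
    """Sample disparity along 8 directions ("米" pattern) around (cx, cy),
    keep up to `max_samples` valid values, and return the median depth.
    Assumes `disparity` is a rectified disparity map in pixels (hypothetical input)."""
    h, w = disparity.shape
    directions = [(1, 0), (-1, 0), (0, 1), (0, -1),
                  (1, 1), (1, -1), (-1, 1), (-1, -1)]
    samples = []
    r = 0
    while len(samples) < max_samples:
        r += step
        in_bounds = False
        for dx, dy in directions:
            x, y = cx + dx * r, cy + dy * r
            if 0 <= x < w and 0 <= y < h:
                in_bounds = True
                d = disparity[y, x]
                if d > 0:                      # keep only valid disparity values
                    samples.append(d)
        if not in_bounds:                      # ran off the image in every direction
            break
    if not samples:
        return None
    d_med = float(np.median(samples[:max_samples]))  # median is more robust than the mean
    return fx * baseline / d_med                     # depth = f * B / d

# Refresh the stereo match only every 10 frames (~1/3 s at 30 fps), e.g. with OpenCV SGBM:
# if frame_idx % 10 == 0:
#     disparity = stereo_matcher.compute(left_rect, right_rect).astype(np.float32) / 16.0
```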
This repository represents Ultralytics open-source research into future object detection methods, and incorporates lessons learned and best practices evolved over thousands of hours of training and evolution on anonymized client datasets. **All code and models are under active development, and are subject to modification or deletion without notice.** Use at your own risk.

** GPU Speed measures end-to-end time per image averaged over 5000 COCO val2017 images using a V100 GPU with batch size 32, and includes image preprocessing, PyTorch FP16 inference, postprocessing and NMS. EfficientDet data from [google/automl](https://github.com/google/automl) at batch size 8.

- **January 5, 2021**: [v4.0 release](https://github.com/ultralytics/yolov5/releases/tag/v4.0): nn.SiLU() activations, [Weights & Biases](https://wandb.ai/) logging, [PyTorch Hub](https://pytorch.org/hub/ultralytics_yolov5/) integration.
- **August 13, 2020**: [v3.0 release](https://github.com/ultralytics/yolov5/releases/tag/v3.0): nn.Hardswish() activations, data autodownload, native AMP.
- **July 23, 2020**: [v2.0 release](https://github.com/ultralytics/yolov5/releases/tag/v2.0): improved model definition, training and mAP.
- **June 22, 2020**: [PANet](https://arxiv.org/abs/1803.01534) updates: new heads, reduced parameters, improved speed and mAP [364fcfd](https://github.com/ultralytics/yolov5/commit/364fcfd7dba53f46edd4f04c037a039c0a287972).
- **June 19, 2020**: [FP16](https://pytorch.org/docs/stable/nn.html#torch.nn.Module.half) as new default for smaller checkpoints and faster inference [d4c6674](https://github.com/ultralytics/yolov5/commit/d4c6674c98e19df4c40e33a777610a18d1961145).

## Pretrained Checkpoints

| Model | size | AP<sup>val</sup> | AP<sup>test</sup> | AP<sub>50</sub> | Speed<sub>V100</sub> | FPS<sub>V100</sub> | params | GFLOPS |
|-------|------|------------------|-------------------|-----------------|----------------------|--------------------|--------|--------|
| [YOLOv5s](https://github.com/ultralytics/yolov5/releases) | 640 | 36.8 | 36.8 | 55.6 | **2.2ms** | **455** | 7.3M | 17.0 |
| [YOLOv5m](https://github.com/ultralytics/yolov5/releases) | 640 | 44.5 | 44.5 | 63.1 | 2.9ms | 345 | 21.4M | 51.3 |
| [YOLOv5l](https://github.com/ultralytics/yolov5/releases) | 640 | 48.1 | 48.1 | 66.4 | 3.8ms | 264 | 47.0M | 115.4 |
| [YOLOv5x](https://github.com/ultralytics/yolov5/releases) | 640 | **50.1** | **50.1** | **68.7** | 6.0ms | 167 | 87.7M | 218.8 |
| [YOLOv5x](https://github.com/ultralytics/yolov5/releases) + TTA | 832 | **51.9** | **51.9** | **69.6** | 24.9ms | 40 | 87.7M | 1005.3 |

** AP<sup>test</sup> denotes COCO [test-dev2017](http://cocodataset.org/#upload) server results; all other AP results denote val2017 accuracy.
** All AP numbers are for single-model single-scale without ensemble or TTA. **Reproduce mAP** by `python test.py --data coco.yaml --img 640 --conf 0.001 --iou 0.65`
** Speed<sub>GPU</sub> averaged over 5000 COCO val2017 images using a GCP [n1-standard-16](https://cloud.google.com/compute/docs/machine-types#n1_standard_machine_types) V100 instance, and includes image preprocessing, FP16 inference, postprocessing and NMS. NMS is 1-2ms/img. **Reproduce speed** by `python test.py --data coco.yaml --img 640 --conf 0.25 --iou 0.45`
** All checkpoints are trained to 300 epochs with default settings and hyperparameters (no autoaugmentation).
** Test Time Augmentation ([TTA](https://github.com/ultralytics/yolov5/issues/303)) runs at 3 image sizes.
**Reproduce TTA** by `python test.py --data coco.yaml --img 832 --iou 0.65 --augment`

## Requirements

Python 3.8 or later with all [requirements.txt](https://github.com/ultralytics/yolov5/blob/master/requirements.txt) dependencies installed, including `torch>=1.7`. To install run:

```bash
$ pip install -r requirements.txt
```

## Tutorials

* [Train Custom Data](https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data) 🚀 RECOMMENDED
* [Weights & Biases Logging](https://github.com/ultralytics/yolov5/issues/1289) 🌟 NEW
* [Multi-GPU Training](https://github.com/ultralytics/yolov5/issues/475)
* [PyTorch Hub](https://github.com/ultralytics/yolov5/issues/36) ⭐ NEW
* [ONNX and TorchScript Export](https://github.com/ultralytics/yolov5/issues/251)
* [Test-Time Augmentation (TTA)](https://github.com/ultralytics/yolov5/issues/303)
* [Model Ensembling](https://github.com/ultralytics/yolov5/issues/318)
* [Model Pruning/Sparsity](https://github.com/ultralytics/yolov5/issues/304)
* [Hyperparameter Evolution](https://github.com/ultralytics/yolov5/issues/607)
* [Transfer Learning with Frozen Layers](https://github.com/ultralytics/yolov5/issues/1314) ⭐ NEW
* [TensorRT Deployment](https://github.com/wang-xinyu/tensorrtx)

## Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including [CUDA](https://developer.nvidia.com/cuda)/[CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/) and [PyTorch](https://pytorch.org/) preinstalled):

- **Google Colab and Kaggle** notebooks with free GPU
- **Google Cloud** Deep Learning VM. See [GCP Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/GCP-Quickstart)
- **Amazon** Deep Learning AMI. See [AWS Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/AWS-Quickstart)
- **Docker Image**. See [Docker Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/Docker-Quickstart)

## Inference

detect.py runs inference on a variety of sources, downloading models automatically from the [latest YOLOv5 release](https://github.com/ultralytics/yolov5/releases) and saving results to `runs/detect`.

```bash
$ python detect.py --source 0  # webcam
                   file.jpg  # image
                   file.mp4  # video
                   path/  # directory
                   path/*.jpg  # glob
                   rtsp://170.93.143.139/rtplive/470011e600ef003a004ee33696235daa  # rtsp stream
                   rtmp://192.168.1.105/live/test  # rtmp stream
                   http://112.50.243.8/PLTV/88888888/224/3221225900/1.m3u8  # http stream
```

To run inference on example images in `data/images`:

```bash
$ python detect.py --source data/images --weights yolov5s.pt --conf 0.25

Namespace(agnostic_nms=False, augment=False, classes=None, conf_thres=0.25, device='', exist_ok=False, img_size=640, iou_thres=0.45, name='exp', project='runs/detect', save_conf=False, save_txt=False, source='data/images/', update=False, view_img=False, weights=['yolov5s.pt'])
YOLOv5 v4.0-96-g83dc1b4 torch 1.7.0+cu101 CUDA:0 (Tesla V100-SXM2-16GB, 16160.5MB
```
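The release notes above mention PyTorch Hub integration. The following is a minimal usage sketch based on the standard YOLOv5 Hub interface; the sample image URL is just an example, and it loads the stock yolov5s weights rather than the stereo-modified code in this package.

```python
import torch

# Load a pretrained YOLOv5s model from PyTorch Hub (weights download on first use)
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

# Run inference on an image (file path, URL, PIL image, OpenCV array, or numpy array)
img = 'https://ultralytics.com/images/zidane.jpg'
results = model(img)

# Print a summary and inspect detections as [xmin, ymin, xmax, ymax, conf, class]
results.print()
print(results.xyxy[0])
```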

Resource file list:

yolov5_stereo_Pro.zip contains approximately 142 files
  1. yolov5_stereo_Pro/
  2. yolov5_stereo_Pro/LICENSE 34.3KB
  3. yolov5_stereo_Pro/detect_and_stereo_video_030.py 28.78KB
  4. yolov5_stereo_Pro/detect_and_stereo_video_033.py 30.95KB
  5. yolov5_stereo_Pro/cuda_test.py 1.02KB
  6. yolov5_stereo_Pro/README.md 10.55KB
  7. yolov5_stereo_Pro/train.py 31.51KB
  8. yolov5_stereo_Pro/test.py 16.14KB
  9. yolov5_stereo_Pro/tutorial.ipynb 384.14KB
  10. yolov5_stereo_Pro/Dockerfile 1.68KB
  11. yolov5_stereo_Pro/detect.py 8.03KB
  12. yolov5_stereo_Pro/hubconf.py 5.15KB
  13. yolov5_stereo_Pro/code.txt 103B
  14. yolov5_stereo_Pro/requirements.txt 610B
  15. yolov5_stereo_Pro/models/
  16. yolov5_stereo_Pro/models/yolov5s.yaml 1.33KB
  17. yolov5_stereo_Pro/models/experimental.py 5.03KB
  18. yolov5_stereo_Pro/models/__init__.py
  19. yolov5_stereo_Pro/models/yolov5m.yaml 1.33KB
  20. yolov5_stereo_Pro/models/yolo.py 11.78KB
  21. yolov5_stereo_Pro/models/export.py 4.32KB
  22. yolov5_stereo_Pro/models/common.py 12.69KB
  23. yolov5_stereo_Pro/models/yolov5l.yaml 1.33KB
  24. yolov5_stereo_Pro/models/yolov5x.yaml 1.33KB
  25. yolov5_stereo_Pro/models/__pycache__/
  26. yolov5_stereo_Pro/models/__pycache__/yolo.cpython-36.pyc 9.9KB
  27. yolov5_stereo_Pro/models/__pycache__/experimental.cpython-37.pyc 5.65KB
  28. yolov5_stereo_Pro/models/__pycache__/common.cpython-36.pyc 14.57KB
  29. yolov5_stereo_Pro/models/__pycache__/__init__.cpython-38.pyc 142B
  30. yolov5_stereo_Pro/models/__pycache__/experimental.cpython-36.pyc 5.69KB
  31. yolov5_stereo_Pro/models/__pycache__/__init__.cpython-37.pyc 138B
  32. yolov5_stereo_Pro/models/__pycache__/experimental.cpython-38.pyc 5.56KB
  33. yolov5_stereo_Pro/models/__pycache__/yolo.cpython-38.pyc 9.78KB
  34. yolov5_stereo_Pro/models/__pycache__/yolo.cpython-37.pyc 9.78KB
  35. yolov5_stereo_Pro/models/__pycache__/common.cpython-37.pyc 14.49KB
  36. yolov5_stereo_Pro/models/__pycache__/common.cpython-38.pyc 14.14KB
  37. yolov5_stereo_Pro/models/__pycache__/__init__.cpython-36.pyc 162B
  38. yolov5_stereo_Pro/models/hub/
  39. yolov5_stereo_Pro/models/hub/yolov3-tiny.yaml 1.17KB
  40. yolov5_stereo_Pro/models/hub/yolov5s6.yaml 1.93KB
  41. yolov5_stereo_Pro/models/hub/yolov5-panet.yaml 1.42KB
  42. yolov5_stereo_Pro/models/hub/yolov5-fpn.yaml 1.22KB
  43. yolov5_stereo_Pro/models/hub/yolov5x6.yaml 1.93KB
  44. yolov5_stereo_Pro/models/hub/yolov5-p7.yaml 2.18KB
  45. yolov5_stereo_Pro/models/hub/anchors.yaml 3.28KB
  46. yolov5_stereo_Pro/models/hub/yolov3-spp.yaml 1.5KB
  47. yolov5_stereo_Pro/models/hub/yolov5m6.yaml 1.93KB
  48. yolov5_stereo_Pro/models/hub/yolov5l6.yaml 1.93KB
  49. yolov5_stereo_Pro/models/hub/yolov3.yaml 1.49KB
  50. yolov5_stereo_Pro/models/hub/yolov5-p6.yaml 1.77KB
  51. yolov5_stereo_Pro/models/hub/yolov5-p2.yaml 1.7KB
  52. yolov5_stereo_Pro/stereo/
  53. yolov5_stereo_Pro/stereo/dianyuntu_yolo.py 8.64KB
  54. yolov5_stereo_Pro/stereo/stereoconfig_040_2.py 1.55KB
  55. yolov5_stereo_Pro/stereo/stereo.py 12.4KB
  56. yolov5_stereo_Pro/stereo/dianyuntu.py 8.59KB
  57. yolov5_stereo_Pro/stereo/yolo/
  58. yolov5_stereo_Pro/stereo/__pycache__/
  59. yolov5_stereo_Pro/stereo/__pycache__/stereo.cpython-36.pyc 4.49KB
  60. yolov5_stereo_Pro/stereo/__pycache__/stereoconfig_Bud.cpython-36.pyc 1.12KB
  61. yolov5_stereo_Pro/stereo/__pycache__/stereoconfig_040_2.cpython-36.pyc 1.15KB
  62. yolov5_stereo_Pro/stereo/__pycache__/dianyuntu_yolo.cpython-36.pyc 4.44KB
  63. yolov5_stereo_Pro/data/
  64. yolov5_stereo_Pro/data/hyp.finetune.yaml 846B
  65. yolov5_stereo_Pro/data/coco128.yaml 1.51KB
  66. yolov5_stereo_Pro/data/argoverse_hd.yaml 849B
  67. yolov5_stereo_Pro/data/coco.yaml 1.7KB
  68. yolov5_stereo_Pro/data/voc.yaml 738B
  69. yolov5_stereo_Pro/data/hyp.scratch.yaml 1.53KB
  70. yolov5_stereo_Pro/data/video/
  71. yolov5_stereo_Pro/data/video/gym_001.mov 31.95MB
  72. yolov5_stereo_Pro/data/scripts/
  73. yolov5_stereo_Pro/data/scripts/get_voc.sh 4.33KB
  74. yolov5_stereo_Pro/data/scripts/get_argoverse_hd.sh 1.97KB
  75. yolov5_stereo_Pro/data/scripts/get_coco.sh 963B
  76. yolov5_stereo_Pro/data/images/
  77. yolov5_stereo_Pro/data/images/zidane.jpg 164.99KB
  78. yolov5_stereo_Pro/data/images/bus.jpg 476.01KB
  79. yolov5_stereo_Pro/__pycache__/
  80. yolov5_stereo_Pro/__pycache__/test.cpython-36.pyc 10.63KB
  81. yolov5_stereo_Pro/utils/
  82. yolov5_stereo_Pro/utils/general.py 23.35KB
  83. yolov5_stereo_Pro/utils/autoanchor.py 6.78KB
  84. yolov5_stereo_Pro/utils/activations.py 2.2KB
  85. yolov5_stereo_Pro/utils/__init__.py
  86. yolov5_stereo_Pro/utils/torch_utils.py 11.68KB
  87. yolov5_stereo_Pro/utils/loss.py 9.18KB
  88. yolov5_stereo_Pro/utils/google_utils.py 4.76KB
  89. yolov5_stereo_Pro/utils/metrics.py 8.76KB
  90. yolov5_stereo_Pro/utils/datasets.py 43.14KB
  91. yolov5_stereo_Pro/utils/plots.py 17.7KB
  92. yolov5_stereo_Pro/utils/aws/
  93. yolov5_stereo_Pro/utils/aws/mime.sh 780B
  94. yolov5_stereo_Pro/utils/aws/__init__.py
  95. yolov5_stereo_Pro/utils/aws/resume.py 1.09KB
  96. yolov5_stereo_Pro/utils/aws/userdata.sh 1.21KB
  97. yolov5_stereo_Pro/utils/google_app_engine/
  98. yolov5_stereo_Pro/utils/google_app_engine/Dockerfile 821B
  99. yolov5_stereo_Pro/utils/google_app_engine/app.yaml 173B
  100. yolov5_stereo_Pro/utils/google_app_engine/additional_requirements.txt 105B
  101. yolov5_stereo_Pro/utils/__pycache__/
  102. yolov5_stereo_Pro/utils/__pycache__/autoanchor.cpython-36.pyc 5.94KB
  103. yolov5_stereo_Pro/utils/__pycache__/__init__.cpython-36.pyc 161B
  104. yolov5_stereo_Pro/utils/__pycache__/general.cpython-36.pyc 18.76KB
  105. yolov5_stereo_Pro/utils/__pycache__/torch_utils.cpython-36.pyc 10.74KB
  106. yolov5_stereo_Pro/utils/__pycache__/datasets.cpython-37.pyc 32.61KB
  107. yolov5_stereo_Pro/utils/__pycache__/metrics.cpython-37.pyc 7.48KB
  108. yolov5_stereo_Pro/utils/__pycache__/datasets.cpython-38.pyc 32.47KB
  109. yolov5_stereo_Pro/utils/__pycache__/metrics.cpython-38.pyc 7.42KB
  110. yolov5_stereo_Pro/utils/__pycache__/activations.cpython-36.pyc 3.36KB
  111. yolov5_stereo_Pro/utils/__pycache__/activations.cpython-37.pyc 3.37KB
  112. yolov5_stereo_Pro/utils/__pycache__/plots.cpython-37.pyc 15.53KB
  113. yolov5_stereo_Pro/utils/__pycache__/plots.cpython-38.pyc 15.34KB
  114. yolov5_stereo_Pro/utils/__pycache__/activations.cpython-38.pyc 3.33KB
  115. yolov5_stereo_Pro/utils/__pycache__/google_utils.cpython-37.pyc 3.15KB
  116. yolov5_stereo_Pro/utils/__pycache__/metrics.cpython-36.pyc 7.51KB
  117. yolov5_stereo_Pro/utils/__pycache__/google_utils.cpython-38.pyc 3.19KB
  118. yolov5_stereo_Pro/utils/__pycache__/torch_utils.cpython-38.pyc 10.73KB
  119. yolov5_stereo_Pro/utils/__pycache__/__init__.cpython-38.pyc 141B
  120. yolov5_stereo_Pro/utils/__pycache__/general.cpython-38.pyc 18.73KB
  121. yolov5_stereo_Pro/utils/__pycache__/general.cpython-37.pyc 18.68KB
  122. yolov5_stereo_Pro/utils/__pycache__/google_utils.cpython-36.pyc 3.19KB
  123. yolov5_stereo_Pro/utils/__pycache__/datasets.cpython-36.pyc 32.76KB
  124. yolov5_stereo_Pro/utils/__pycache__/__init__.cpython-37.pyc 137B
  125. yolov5_stereo_Pro/utils/__pycache__/torch_utils.cpython-37.pyc 10.69KB
  126. yolov5_stereo_Pro/utils/__pycache__/plots.cpython-36.pyc 15.63KB
  127. yolov5_stereo_Pro/utils/__pycache__/loss.cpython-36.pyc 6.37KB
  128. yolov5_stereo_Pro/utils/__pycache__/autoanchor.cpython-38.pyc 5.84KB
  129. yolov5_stereo_Pro/utils/__pycache__/autoanchor.cpython-37.pyc 5.88KB
  130. yolov5_stereo_Pro/utils/wandb_logging/
  131. yolov5_stereo_Pro/utils/wandb_logging/wandb_utils.py 6.73KB
  132. yolov5_stereo_Pro/utils/wandb_logging/__init__.py
  133. yolov5_stereo_Pro/utils/wandb_logging/log_dataset.py 1.71KB
  134. yolov5_stereo_Pro/weights/
  135. yolov5_stereo_Pro/weights/download_weights.sh 277B
  136. yolov5_stereo_Pro/weights/yolov5s/
  137. yolov5_stereo_Pro/weights/yolov5s/yolov5s.pt 14.11MB
  138. yolov5_stereo_Pro/weights/person/
  139. yolov5_stereo_Pro/weights/person/last_person_1000.pt 13.73MB
  140. yolov5_stereo_Pro/weights/person/last_person_300.pt 13.72MB
  141. yolov5_stereo_Pro/runs/
  142. yolov5_stereo_Pro/runs/detect/