[中文](https://docs.ultralytics.com/zh) | [한국어](https://docs.ultralytics.com/ko) | [日本語](https://docs.ultralytics.com/ja) | [Русский](https://docs.ultralytics.com/ru) | [Deutsch](https://docs.ultralytics.com/de) | [Français](https://docs.ultralytics.com/fr) | [Español](https://docs.ultralytics.com/es) | [Português](https://docs.ultralytics.com/pt) | [Türkçe](https://docs.ultralytics.com/tr) | [Tiếng Việt](https://docs.ultralytics.com/vi) | [العربية](https://docs.ultralytics.com/ar)
YOLOv5 🚀 is the world's most loved vision AI, representing
Ultralytics open-source research into future vision AI methods, incorporating lessons learned and best practices evolved over thousands of hours of research and development.
We hope that the resources here will help you get the most out of YOLOv5. Please browse the [YOLOv5 Docs](https://docs.ultralytics.com/yolov5/) for details, raise an issue on [GitHub](https://github.com/ultralytics/yolov5/issues) for support, and join our Discord community for questions and discussions!
To request an Enterprise License please complete the form at [Ultralytics Licensing](https://www.ultralytics.com/license).
## YOLOv8 🚀 NEW
We are thrilled to announce the launch of Ultralytics YOLOv8 🚀, our NEW cutting-edge, state-of-the-art (SOTA) model released at **[https://github.com/ultralytics/ultralytics](https://github.com/ultralytics/ultralytics)**. YOLOv8 is designed to be fast, accurate, and easy to use, making it an excellent choice for a wide range of object detection, image segmentation, and image classification tasks.
See the [YOLOv8 Docs](https://docs.ultralytics.com/) for details and get started with:
[![PyPI version](https://badge.fury.io/py/ultralytics.svg)](https://badge.fury.io/py/ultralytics) [![Downloads](https://static.pepy.tech/badge/ultralytics)](https://pepy.tech/project/ultralytics)
```bash
pip install ultralytics
```
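After installing the package, inference takes only a few lines. The following is a minimal sketch assuming the `ultralytics` package installed above; the `yolov8n.pt` checkpoint and the example image URL are illustrative choices, not requirements:

```python
from ultralytics import YOLO

# Load a pretrained YOLOv8 nano model (the checkpoint is downloaded on first use)
model = YOLO("yolov8n.pt")

# Run inference on an image; results is a list with one Results object per image
results = model("https://ultralytics.com/images/zidane.jpg")

# Inspect the detected bounding boxes for the first image
print(results[0].boxes)
```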
## Documentation
See the [YOLOv5 Docs](https://docs.ultralytics.com/yolov5/) for full documentation on training, testing and deployment. See below for quickstart examples.
### Install
Clone repo and install [requirements.txt](https://github.com/ultralytics/yolov5/blob/master/requirements.txt) in a [**Python>=3.8.0**](https://www.python.org/) environment, including [**PyTorch>=1.8**](https://pytorch.org/get-started/locally/).
```bash
git clone https://github.com/ultralytics/yolov5 # clone
cd yolov5
pip install -r requirements.txt # install
```
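To confirm the environment is ready, a quick sanity check of the PyTorch install can help. This is a minimal sketch; the version and CUDA status you see will depend on your machine:

```python
import torch

# Report the installed PyTorch version and whether a CUDA-capable GPU is visible
print(f"PyTorch {torch.__version__}, CUDA available: {torch.cuda.is_available()}")
```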
### Inference
YOLOv5 [PyTorch Hub](https://docs.ultralytics.com/yolov5/tutorials/pytorch_hub_model_loading/) inference. [Models](https://github.com/ultralytics/yolov5/tree/master/models) download automatically from the latest YOLOv5 [release](https://github.com/ultralytics/yolov5/releases).
```python
import torch
# Model
model = torch.hub.load("ultralytics/yolov5", "yolov5s") # or yolov5n - yolov5x6, custom
# Images
img = "https://ultralytics.com/images/zidane.jpg" # or file, Path, PIL, OpenCV, numpy, list
# Inference
results = model(img)
# Results
results.print() # or .show(), .save(), .crop(), .pandas(), etc.
```
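Continuing from the snippet above, the returned `results` object also exposes the raw detections programmatically. A brief sketch using the `Detections` helpers hinted at in the comments (`.pandas()` and the `xyxy` attribute):

```python
# Detections as a pandas DataFrame: one row per box with
# xmin, ymin, xmax, ymax, confidence, class, and name columns
df = results.pandas().xyxy[0]
print(df)

# Or work with the raw tensor directly: columns are x1, y1, x2, y2, conf, class
boxes = results.xyxy[0]
```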
### Inference with `detect.py`
`detect.py` runs inference on a variety of sources, downloading [models](https://github.com/ultralytics/yolov5/tree/master/models) automatically from the latest YOLOv5 [release](https://github.com/ultralytics/yolov5/releases) and saving results to `runs/detect`.
```bash
python detect.py --weights yolov5s.pt --source 0        # webcam
                                               img.jpg  # image
                                               vid.mp4  # video
                                               screen   # screenshot