Add documentation for using uv with PyTorch (#9210)

## Summary

Now that we have all the pieces in place, this PR adds some dedicated
documentation to enable a variety of PyTorch setups.

This PR is downstream of #6523 and builds on the content in there; #6523
will merge first, and this PR will follow.
This commit is contained in:
Charlie Marsh 2024-11-18 20:09:52 -05:00 committed by GitHub
parent e4fc875afa
commit dea2a040f0
4 changed files with 197 additions and 62 deletions


@@ -15,6 +15,7 @@ Learn how to integrate uv with other software:
- [Using in GitHub Actions](./integration/github.md)
- [Using in GitLab CI/CD](./integration/gitlab.md)
- [Using with alternative package indexes](./integration/alternative-indexes.md)
- [Installing PyTorch](./integration/pytorch.md)
- [Building a FastAPI application](./integration/fastapi.md)

Or, explore the [concept documentation](../concepts/index.md) for a comprehensive breakdown of each


@@ -8,4 +8,5 @@ Learn how to integrate uv with other software:
- [Using in GitHub Actions](./github.md)
- [Using in GitLab CI/CD](./gitlab.md)
- [Using with alternative package indexes](./alternative-indexes.md)
- [Installing PyTorch](./pytorch.md)
- [Building a FastAPI application](./fastapi.md)


@@ -1,47 +1,60 @@
# Using uv with PyTorch

The [PyTorch](https://pytorch.org/) ecosystem is a popular choice for deep learning research and
development. You can use uv to manage PyTorch projects and PyTorch dependencies across different
Python versions and environments, even controlling for the choice of accelerator (e.g., CPU-only vs.
CUDA).

## Installing PyTorch

From a packaging perspective, PyTorch has a few uncommon characteristics:

- Many PyTorch wheels are hosted on a dedicated index, rather than the Python Package Index (PyPI).
  As such, installing PyTorch typically requires configuring a project to use the PyTorch index.
- PyTorch includes distinct builds for each accelerator (e.g., CPU-only, CUDA). Since there's no
  standardized mechanism for specifying these accelerators when publishing or installing, PyTorch
  encodes them in the local version specifier. As such, PyTorch versions will often look like
  `2.5.1+cpu`, `2.5.1+cu121`, etc.
- Builds for different accelerators are published to different indexes. For example, the `+cpu`
  builds are published on https://download.pytorch.org/whl/cpu, while the `+cu121` builds are
  published on https://download.pytorch.org/whl/cu121.

As such, the necessary packaging configuration will vary depending on both the platforms you need to
support and the accelerators you want to enable.

To start, consider the following (default) configuration, which would be generated by running
`uv init --python 3.12` followed by `uv add torch torchvision`.

In this case, PyTorch would be installed from PyPI, which hosts CPU-only wheels for Windows and
macOS, and GPU-accelerated wheels on Linux (targeting CUDA 12.4):
```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.12"
dependencies = [
    "torch>=2.5.1",
    "torchvision>=0.20.1",
]
```
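As an aside, the accelerator-specific builds discussed in this guide surface as PEP 440 local versions (e.g., `2.5.1+cpu`). Pinning one directly is possible but couples the project to a single accelerator; the following hypothetical fragment is shown only to illustrate the specifier format:

```toml
# Hypothetical illustration only: a direct pin on an accelerator build via its
# local version. Prefer the index-based configuration described in this guide,
# which keeps the dependency declaration accelerator-agnostic.
[project]
dependencies = [
    "torch==2.5.1+cpu",
]
```

Note that uv would only find such a version on the corresponding PyTorch index, so a matching `[[tool.uv.index]]` entry would still be required.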
!!! tip "Supported Python versions"

    At time of writing, PyTorch does not yet publish wheels for Python 3.13; as such, projects with
    `requires-python = ">=3.13"` may fail to resolve. See the
    [compatibility matrix](https://github.com/pytorch/pytorch/blob/main/RELEASE.md#release-compatibility-matrix).
This is a valid configuration for projects that want to use CPU builds on Windows and macOS, and
CUDA-enabled builds on Linux. However, if you need to support different platforms or accelerators,
you'll need to configure the project accordingly.

## Using a PyTorch index

In some cases, you may want to use a specific PyTorch variant across all platforms. For example, you
may want to use the CPU-only builds on Linux too.

In such cases, the first step is to add the relevant PyTorch index to your `pyproject.toml`:
=== "CPU-only" === "CPU-only"
@@ -88,20 +101,16 @@ NVIDIA or AMD), to instruct `uv` where to find the PyTorch wheels.
    explicit = true
    ```
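The tab bodies are abbreviated in this hunk; for the CPU-only channel, the index definition being referenced takes this shape (it also appears in the complete example later in the file):

```toml
[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true
```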
We recommend the use of `explicit = true` to ensure that the index is _only_ used for `torch`,
`torchvision`, and other PyTorch-related packages, as opposed to generic dependencies like `jinja2`,
which should continue to be sourced from the default index (PyPI).

Next, update the `pyproject.toml` to point `torch` and `torchvision` to the desired index:
=== "CPU-only" === "CPU-only"
    PyTorch doesn't publish CPU-only builds for macOS, since macOS builds are always considered CPU-only.
    As such, we gate on `platform_system` to instruct uv to ignore the PyTorch index when resolving for macOS.
    ```toml
    [tool.uv.sources]
@@ -115,7 +124,8 @@ other words, if your project depends on both PyTorch and torchvision, you need t
=== "CUDA 11.8" === "CUDA 11.8"
    PyTorch doesn't publish CUDA builds for macOS. As such, we gate on `platform_system` to instruct uv to ignore
    the PyTorch index when resolving for macOS.
    ```toml
    [tool.uv.sources]
@@ -129,7 +139,8 @@ other words, if your project depends on both PyTorch and torchvision, you need t
=== "CUDA 12.1" === "CUDA 12.1"
    PyTorch doesn't publish CUDA builds for macOS. As such, we gate on `platform_system` to instruct uv to ignore
    the PyTorch index when resolving for macOS.
    ```toml
    [tool.uv.sources]
@@ -143,7 +154,8 @@ other words, if your project depends on both PyTorch and torchvision, you need t
=== "CUDA 12.4" === "CUDA 12.4"
    PyTorch doesn't publish CUDA builds for macOS. As such, we gate on `platform_system` to instruct uv to ignore
    the PyTorch index when resolving for macOS.
    ```toml
    [tool.uv.sources]
@@ -157,7 +169,8 @@ other words, if your project depends on both PyTorch and torchvision, you need t
=== "ROCm6" === "ROCm6"
    PyTorch doesn't publish ROCm6 builds for macOS or Windows. As such, we gate on `platform_system` to instruct uv to
    ignore the PyTorch index when resolving for those platforms.
    ```toml
    [tool.uv.sources]
@@ -169,30 +182,149 @@ other words, if your project depends on both PyTorch and torchvision, you need t
    ]
    ```

As a complete example, the following project would use PyTorch's CPU-only builds on all platforms:

```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.12.0"
dependencies = [
    "torch>=2.5.1",
    "torchvision>=0.20.1",
]

[tool.uv.sources]
torch = [
    { index = "pytorch-cpu", marker = "platform_system != 'Darwin'" },
]
torchvision = [
    { index = "pytorch-cpu", marker = "platform_system != 'Darwin'" },
]

[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true
```
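The same pattern extends to other channels. As a sketch (an extrapolation, not part of this commit), a ROCm variant would swap the index name and URL, gating on Linux since ROCm builds are Linux-only:

```toml
# Hypothetical ROCm variant of the pattern above; the rocm6.2 index URL is taken
# from PyTorch's published index list.
[tool.uv.sources]
torch = [
    { index = "pytorch-rocm", marker = "platform_system == 'Linux'" },
]
torchvision = [
    { index = "pytorch-rocm", marker = "platform_system == 'Linux'" },
]

[[tool.uv.index]]
name = "pytorch-rocm"
url = "https://download.pytorch.org/whl/rocm6.2"
explicit = true
```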
## Configuring accelerators with environment markers

In some cases, you may want to use CPU-only builds in one environment (e.g., macOS and Windows), and
CUDA-enabled builds in another (e.g., Linux).

With `tool.uv.sources`, you can use environment markers to specify the desired index for each
platform. For example, the following configuration would use PyTorch's CPU-only builds on Windows
(and macOS, by way of falling back to PyPI), and CUDA-enabled builds on Linux:
```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.12.0"
dependencies = [
    "torch>=2.5.1",
    "torchvision>=0.20.1",
]

[tool.uv.sources]
torch = [
    { index = "pytorch-cpu", marker = "platform_system == 'Windows'" },
    { index = "pytorch-cu124", marker = "platform_system == 'Linux'" },
]
torchvision = [
    { index = "pytorch-cpu", marker = "platform_system == 'Windows'" },
    { index = "pytorch-cu124", marker = "platform_system == 'Linux'" },
]

[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true

[[tool.uv.index]]
name = "pytorch-cu124"
url = "https://download.pytorch.org/whl/cu124"
explicit = true
```
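Markers can be arbitrarily specific. For instance (a hypothetical refinement, assuming you only want CUDA builds on x86_64 Linux), you could combine platform and architecture markers in a source entry:

```toml
[tool.uv.sources]
torch = [
    { index = "pytorch-cpu", marker = "platform_system == 'Windows'" },
    # Hypothetical: restrict the CUDA index to x86_64 Linux, falling back to
    # PyPI elsewhere.
    { index = "pytorch-cu124", marker = "platform_system == 'Linux' and platform_machine == 'x86_64'" },
]
```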
## Configuring accelerators with optional dependencies

You may want to use CPU-only builds in some cases, but CUDA-enabled builds in others,
with the choice toggled by a user-provided extra (e.g., `uv sync --extra cpu` vs.
`uv sync --extra cu124`).

With `tool.uv.sources`, you can use extra markers to specify the desired index for each enabled
extra. For example, the following configuration would use PyTorch's CPU-only builds for
`uv sync --extra cpu` and CUDA-enabled builds for `uv sync --extra cu124`:
```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.12.0"
dependencies = []

[project.optional-dependencies]
cpu = [
    "torch>=2.5.1",
    "torchvision>=0.20.1",
]
cu124 = [
    "torch>=2.5.1",
    "torchvision>=0.20.1",
]

[tool.uv]
conflicts = [
    [
        { extra = "cpu" },
        { extra = "cu124" },
    ],
]

[tool.uv.sources]
torch = [
    { index = "pytorch-cpu", extra = "cpu", marker = "platform_system != 'Darwin'" },
    { index = "pytorch-cu124", extra = "cu124" },
]
torchvision = [
    { index = "pytorch-cpu", extra = "cpu", marker = "platform_system != 'Darwin'" },
    { index = "pytorch-cu124", extra = "cu124" },
]

[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true

[[tool.uv.index]]
name = "pytorch-cu124"
url = "https://download.pytorch.org/whl/cu124"
explicit = true
```
!!! note

    Since GPU-accelerated builds aren't available on macOS, the above configuration will continue to use
    the CPU-only builds on macOS via the `"platform_system != 'Darwin'"` marker, regardless of the extra
    provided.
## The `uv pip` interface
While the above examples are focused on uv's project interface (`uv lock`, `uv sync`, `uv run`,
etc.), PyTorch can also be installed via the `uv pip` interface.
PyTorch itself offers a [dedicated interface](https://pytorch.org/get-started/locally/) to determine
the appropriate pip command to run for a given target configuration. For example, you can install
stable, CPU-only PyTorch on Linux with:
```shell
$ pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
```
To use the same workflow with uv, replace `pip3` with `uv pip`:
```shell
$ uv pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
```


@@ -115,6 +115,7 @@ nav:
- GitHub Actions: guides/integration/github.md
- GitLab CI/CD: guides/integration/gitlab.md
- Pre-commit: guides/integration/pre-commit.md
- PyTorch: guides/integration/pytorch.md
- FastAPI: guides/integration/fastapi.md
- PyTorch: guides/integration/pytorch.md
- Alternative indexes: guides/integration/alternative-indexes.md