---
title: Using uv with PyTorch
description:
  A guide to using uv with PyTorch, including installing PyTorch, configuring per-platform and
  per-accelerator builds, and more.
---

# Using uv with PyTorch

The [PyTorch](https://pytorch.org/) ecosystem is a popular choice for deep learning research and
development. You can use uv to manage PyTorch projects and PyTorch dependencies across different
Python versions and environments, even controlling for the choice of accelerator (e.g., CPU-only vs.
CUDA).

!!! note

    Some of the features outlined in this guide require uv version 0.5.3 or later. We recommend
    upgrading prior to configuring PyTorch.

## Installing PyTorch

From a packaging perspective, PyTorch has a few uncommon characteristics:

- Many PyTorch wheels are hosted on a dedicated index, rather than the Python Package Index (PyPI).
  As such, installing PyTorch often requires configuring a project to use the PyTorch index.
- PyTorch produces distinct builds for each accelerator (e.g., CPU-only, CUDA). Since there's no
  standardized mechanism for specifying these accelerators when publishing or installing, PyTorch
  encodes them in the local version specifier. As such, PyTorch versions will often look like
  `2.5.1+cpu`, `2.5.1+cu121`, etc.
- Builds for different accelerators are published to different indexes. For example, the `+cpu`
  builds are published on https://download.pytorch.org/whl/cpu, while the `+cu121` builds are
  published on https://download.pytorch.org/whl/cu121.

As such, the necessary packaging configuration will vary depending on both the platforms you need to
support and the accelerators you want to enable.
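
For example, a specific accelerator variant can be requested directly by its local version, provided
the matching index is supplied. A minimal sketch, assuming a Linux machine and that a 2.6.0 CPU
wheel is published for your Python version:

```shell
# Request the CPU-only build of torch 2.6.0 by its local version specifier.
$ uv pip install "torch==2.6.0+cpu" --index-url https://download.pytorch.org/whl/cpu
$ python -c "import torch; print(torch.__version__)"  # 2.6.0+cpu (with the venv active)
```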

To start, consider the following (default) configuration, which would be generated by running
`uv init --python 3.12` followed by `uv add torch torchvision`.

In this case, PyTorch would be installed from PyPI, which hosts CPU-only wheels for Windows and
macOS, and GPU-accelerated wheels on Linux (targeting CUDA 12.4):

```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.12"
dependencies = [
  "torch>=2.6.0",
  "torchvision>=0.21.0",
]
```
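
To reproduce this starting point from scratch, run the two commands mentioned above:

```shell
$ uv init --python 3.12
$ uv add torch torchvision
```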

!!! tip "Supported Python versions"

    At time of writing, PyTorch does not yet publish wheels for Python 3.14; as such, projects with
    `requires-python = ">=3.14"` may fail to resolve. See the
    [compatibility matrix](https://github.com/pytorch/pytorch/blob/main/RELEASE.md#release-compatibility-matrix).

This is a valid configuration for projects that want to use CPU builds on Windows and macOS, and
CUDA-enabled builds on Linux. However, if you need to support different platforms or accelerators,
you'll need to configure the project accordingly.

## Using a PyTorch index

In some cases, you may want to use a specific PyTorch variant across all platforms. For example, you
may want to use the CPU-only builds on Linux too.

In such cases, the first step is to add the relevant PyTorch index to your `pyproject.toml`:

=== "CPU-only"

    ```toml
    [[tool.uv.index]]
    name = "pytorch-cpu"
    url = "https://download.pytorch.org/whl/cpu"
    explicit = true
    ```

=== "CUDA 11.8"

    ```toml
    [[tool.uv.index]]
    name = "pytorch-cu118"
    url = "https://download.pytorch.org/whl/cu118"
    explicit = true
    ```

=== "CUDA 12.1"

    ```toml
    [[tool.uv.index]]
    name = "pytorch-cu121"
    url = "https://download.pytorch.org/whl/cu121"
    explicit = true
    ```

=== "CUDA 12.4"

    ```toml
    [[tool.uv.index]]
    name = "pytorch-cu124"
    url = "https://download.pytorch.org/whl/cu124"
    explicit = true
    ```

=== "ROCm6"

    ```toml
    [[tool.uv.index]]
    name = "pytorch-rocm"
    url = "https://download.pytorch.org/whl/rocm6.2"
    explicit = true
    ```

=== "Intel GPUs"

    ```toml
    [[tool.uv.index]]
    name = "pytorch-xpu"
    url = "https://download.pytorch.org/whl/xpu"
    explicit = true
    ```

We recommend the use of `explicit = true` to ensure that the index is _only_ used for `torch`,
`torchvision`, and other PyTorch-related packages, as opposed to generic dependencies like `jinja2`,
which should continue to be sourced from the default index (PyPI).
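
For instance, with one of the indexes above configured as `explicit`, adding an unrelated dependency
still resolves against PyPI (a sketch; `jinja2` stands in for any non-PyTorch package):

```shell
$ uv add jinja2  # resolved from PyPI, since the PyTorch index is marked `explicit`
```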

Next, update the `pyproject.toml` to point `torch` and `torchvision` to the desired index:

=== "CPU-only"

    ```toml
    [tool.uv.sources]
    torch = [
      { index = "pytorch-cpu" },
    ]
    torchvision = [
      { index = "pytorch-cpu" },
    ]
    ```

=== "CUDA 11.8"

    PyTorch doesn't publish CUDA builds for macOS. As such, we gate on `sys_platform` to instruct
    uv to use the PyTorch index on Linux and Windows, but fall back to PyPI on macOS:

    ```toml
    [tool.uv.sources]
    torch = [
      { index = "pytorch-cu118", marker = "sys_platform == 'linux' or sys_platform == 'win32'" },
    ]
    torchvision = [
      { index = "pytorch-cu118", marker = "sys_platform == 'linux' or sys_platform == 'win32'" },
    ]
    ```

=== "CUDA 12.1"

    PyTorch doesn't publish CUDA builds for macOS. As such, we gate on `sys_platform` to instruct
    uv to limit the PyTorch index to Linux and Windows, falling back to PyPI on macOS:

    ```toml
    [tool.uv.sources]
    torch = [
      { index = "pytorch-cu121", marker = "sys_platform == 'linux' or sys_platform == 'win32'" },
    ]
    torchvision = [
      { index = "pytorch-cu121", marker = "sys_platform == 'linux' or sys_platform == 'win32'" },
    ]
    ```

=== "CUDA 12.4"

    PyTorch doesn't publish CUDA builds for macOS. As such, we gate on `sys_platform` to instruct
    uv to limit the PyTorch index to Linux and Windows, falling back to PyPI on macOS:

    ```toml
    [tool.uv.sources]
    torch = [
      { index = "pytorch-cu124", marker = "sys_platform == 'linux' or sys_platform == 'win32'" },
    ]
    torchvision = [
      { index = "pytorch-cu124", marker = "sys_platform == 'linux' or sys_platform == 'win32'" },
    ]
    ```

=== "ROCm6"

    PyTorch doesn't publish ROCm6 builds for macOS or Windows. As such, we gate on `sys_platform`
    to instruct uv to limit the PyTorch index to Linux, falling back to PyPI on macOS and Windows:

    ```toml
    [tool.uv.sources]
    torch = [
      { index = "pytorch-rocm", marker = "sys_platform == 'linux'" },
    ]
    torchvision = [
      { index = "pytorch-rocm", marker = "sys_platform == 'linux'" },
    ]
    ```

=== "Intel GPUs"

    PyTorch doesn't publish Intel GPU builds for macOS. As such, we gate on `sys_platform` to
    instruct uv to limit the PyTorch index to Linux and Windows, falling back to PyPI on macOS:

    ```toml
    [tool.uv.sources]
    torch = [
      { index = "pytorch-xpu", marker = "sys_platform == 'linux' or sys_platform == 'win32'" },
    ]
    torchvision = [
      { index = "pytorch-xpu", marker = "sys_platform == 'linux' or sys_platform == 'win32'" },
    ]
    # Intel GPU support relies on `pytorch-triton-xpu` on Linux, which should also be installed
    # from the PyTorch index (and included in `project.dependencies`).
    pytorch-triton-xpu = [
      { index = "pytorch-xpu", marker = "sys_platform == 'linux'" },
    ]
    ```

As a complete example, the following project would use PyTorch's CPU-only builds on all platforms:

```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.12.0"
dependencies = [
  "torch>=2.6.0",
  "torchvision>=0.21.0",
]

[tool.uv.sources]
torch = [
  { index = "pytorch-cpu" },
]
torchvision = [
  { index = "pytorch-cpu" },
]

[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true
```
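
With that configuration in place, locking and syncing should pull `torch` from the CPU index on
every platform. A sketch of verifying the result (on Linux, the resolved version carries a `+cpu`
suffix; macOS CPU wheels omit it):

```shell
$ uv lock
$ uv sync
$ uv run python -c "import torch; print(torch.__version__)"  # e.g., 2.6.0+cpu on Linux
```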

## Configuring accelerators with environment markers

In some cases, you may want to use CPU-only builds in one environment (e.g., macOS and Windows), and
CUDA-enabled builds in another (e.g., Linux).

With `tool.uv.sources`, you can use environment markers to specify the desired index for each
platform. For example, the following configuration would use PyTorch's CUDA-enabled builds on Linux,
and CPU-only builds on all other platforms (e.g., macOS and Windows):

```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.12.0"
dependencies = [
  "torch>=2.6.0",
  "torchvision>=0.21.0",
]

[tool.uv.sources]
torch = [
  { index = "pytorch-cpu", marker = "sys_platform != 'linux'" },
  { index = "pytorch-cu124", marker = "sys_platform == 'linux'" },
]
torchvision = [
  { index = "pytorch-cpu", marker = "sys_platform != 'linux'" },
  { index = "pytorch-cu124", marker = "sys_platform == 'linux'" },
]

[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true

[[tool.uv.index]]
name = "pytorch-cu124"
url = "https://download.pytorch.org/whl/cu124"
explicit = true
```
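
Because uv produces a single, platform-independent lockfile, both variants land in one resolution;
each sync then installs the build that matches the current machine (a sketch):

```shell
$ uv lock  # resolves both the CPU-only and CUDA variants into one lockfile
$ uv sync  # installs `+cu124` builds on Linux, CPU-only builds elsewhere
```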

Similarly, the following configuration would use PyTorch's Intel GPU builds on Windows and Linux,
and CPU-only builds on macOS (by way of falling back to PyPI):

```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.12.0"
dependencies = [
  "torch>=2.6.0",
  "torchvision>=0.21.0",
  "pytorch-triton-xpu>=3.2.0 ; sys_platform == 'linux'",
]

[tool.uv.sources]
torch = [
  { index = "pytorch-xpu", marker = "sys_platform == 'win32' or sys_platform == 'linux'" },
]
torchvision = [
  { index = "pytorch-xpu", marker = "sys_platform == 'win32' or sys_platform == 'linux'" },
]
pytorch-triton-xpu = [
  { index = "pytorch-xpu", marker = "sys_platform == 'linux'" },
]

[[tool.uv.index]]
name = "pytorch-xpu"
url = "https://download.pytorch.org/whl/xpu"
explicit = true
```

## Configuring accelerators with optional dependencies

You may want to use CPU-only builds in some cases, but CUDA-enabled builds in others, with the
choice toggled by a user-provided extra (e.g., `uv sync --extra cpu` vs. `uv sync --extra cu124`).

With `tool.uv.sources`, you can use extra markers to specify the desired index for each enabled
extra. For example, the following configuration would use PyTorch's CPU-only builds for
`uv sync --extra cpu` and CUDA-enabled builds for `uv sync --extra cu124`:

```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.12.0"
dependencies = []

[project.optional-dependencies]
cpu = [
  "torch>=2.6.0",
  "torchvision>=0.21.0",
]
cu124 = [
  "torch>=2.6.0",
  "torchvision>=0.21.0",
]

[tool.uv]
conflicts = [
  [
    { extra = "cpu" },
    { extra = "cu124" },
  ],
]

[tool.uv.sources]
torch = [
  { index = "pytorch-cpu", extra = "cpu" },
  { index = "pytorch-cu124", extra = "cu124" },
]
torchvision = [
  { index = "pytorch-cpu", extra = "cpu" },
  { index = "pytorch-cu124", extra = "cu124" },
]

[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true

[[tool.uv.index]]
name = "pytorch-cu124"
url = "https://download.pytorch.org/whl/cu124"
explicit = true
```

!!! note

    Since GPU-accelerated builds aren't available on macOS, the above configuration will fail to
    install on macOS when the `cu124` extra is enabled.
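
To toggle between the two, pass the matching extra at sync time (a sketch; the extras are mutually
exclusive, per the `conflicts` declaration above):

```shell
$ uv sync --extra cpu    # CPU-only builds from the pytorch-cpu index
$ uv sync --extra cu124  # CUDA 12.4 builds from the pytorch-cu124 index
```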

## The `uv pip` interface

While the above examples are focused on uv's project interface (`uv lock`, `uv sync`, `uv run`,
etc.), PyTorch can also be installed via the `uv pip` interface.

PyTorch itself offers a [dedicated interface](https://pytorch.org/get-started/locally/) to determine
the appropriate pip command to run for a given target configuration. For example, you can install
stable, CPU-only PyTorch on Linux with:

```shell
$ pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
```

To use the same workflow with uv, replace `pip3` with `uv pip`:

```shell
$ uv pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
```

## Automatic backend selection

In [preview](../../reference/settings.md#preview), uv can automatically select the appropriate
PyTorch index at runtime by inspecting the system configuration via `--torch-backend=auto` (or
`UV_TORCH_BACKEND=auto`):

```shell
$ UV_TORCH_BACKEND=auto uv pip install torch
```

When enabled, uv will query for the installed CUDA driver version and use the most-compatible
PyTorch index for all relevant packages (e.g., `torch`, `torchvision`, etc.). If no such CUDA driver
is found, uv will fall back to the CPU-only index. uv will continue to respect existing index
configuration for any packages outside the PyTorch ecosystem.

To select a specific backend (e.g., `cu126`), set `--torch-backend=cu126` (or
`UV_TORCH_BACKEND=cu126`).
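
For example, to pin the CUDA 12.6 backend explicitly:

```shell
$ UV_TORCH_BACKEND=cu126 uv pip install torch torchvision
```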

At present, `--torch-backend` is only available in the `uv pip` interface, and only supports
detection of CUDA drivers (as opposed to other accelerators like ROCm or Intel GPUs).

As `--torch-backend` is a preview feature, it should be considered experimental and is not governed
by uv's standard [versioning policy](../../reference/policies/versioning.md). `--torch-backend` may
change or be removed entirely in future versions of uv.