Mirror of https://github.com/astral-sh/uv.git, synced 2025-07-09 22:35:01 +00:00.

## Summary

Allows `--torch-backend=auto` to detect AMD GPUs. The approach is fairly well-documented inline, but
I opted for `rocm_agent_enumerator` over (e.g.) `rocminfo` since it seems to be the recommended
approach for scripting:
https://rocm.docs.amd.com/projects/rocminfo/en/latest/how-to/use-rocm-agent-enumerator.html.

Closes https://github.com/astral-sh/uv/issues/14086.

## Test Plan

```
root@rocm-jupyter-gpu-mi300x1-192gb-devcloud-atl1:~# ./uv-linux-libc-11fb582c5c046bae09766ceddd276dcc5bb41218/uv pip install torch --torch-backend=auto
Resolved 11 packages in 251ms
Prepared 2 packages in 6ms
Installed 11 packages in 257ms
 + filelock==3.18.0
 + fsspec==2025.5.1
 + jinja2==3.1.6
 + markupsafe==3.0.2
 + mpmath==1.3.0
 + networkx==3.5
 + pytorch-triton-rocm==3.3.1
 + setuptools==80.9.0
 + sympy==1.14.0
 + torch==2.7.1+rocm6.3
 + typing-extensions==4.14.0
```

---------

Co-authored-by: Zanie Blue <contact@zanie.dev>
463 lines
13 KiB
Markdown
---
title: Using uv with PyTorch
description:
  A guide to using uv with PyTorch, including installing PyTorch, configuring per-platform and
  per-accelerator builds, and more.
---

# Using uv with PyTorch

The [PyTorch](https://pytorch.org/) ecosystem is a popular choice for deep learning research and
development. You can use uv to manage PyTorch projects and PyTorch dependencies across different
Python versions and environments, even controlling for the choice of accelerator (e.g., CPU-only vs.
CUDA).

!!! note

    Some of the features outlined in this guide require uv version 0.5.3 or later. We recommend
    upgrading prior to configuring PyTorch.

## Installing PyTorch

From a packaging perspective, PyTorch has a few uncommon characteristics:

- Many PyTorch wheels are hosted on a dedicated index, rather than the Python Package Index (PyPI).
  As such, installing PyTorch often requires configuring a project to use the PyTorch index.
- PyTorch produces distinct builds for each accelerator (e.g., CPU-only, CUDA). Since there's no
  standardized mechanism for specifying these accelerators when publishing or installing, PyTorch
  encodes them in the local version specifier. As such, PyTorch versions will often look like
  `2.5.1+cpu`, `2.5.1+cu121`, etc.
- Builds for different accelerators are published to different indexes. For example, the `+cpu`
  builds are published on https://download.pytorch.org/whl/cpu, while the `+cu121` builds are
  published on https://download.pytorch.org/whl/cu121.

As such, the necessary packaging configuration will vary depending on both the platforms you need to
support and the accelerators you want to enable.

To start, consider the following (default) configuration, which would be generated by running
`uv init --python 3.12` followed by `uv add torch torchvision`.

In this case, PyTorch would be installed from PyPI, which hosts CPU-only wheels for Windows and
macOS, and GPU-accelerated wheels on Linux (targeting CUDA 12.6):

```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.12"
dependencies = [
  "torch>=2.7.0",
  "torchvision>=0.22.0",
]
```

!!! tip "Supported Python versions"

    At time of writing, PyTorch does not yet publish wheels for Python 3.14; as such, projects with
    `requires-python = ">=3.14"` may fail to resolve. See the
    [compatibility matrix](https://github.com/pytorch/pytorch/blob/main/RELEASE.md#release-compatibility-matrix).

This is a valid configuration for projects that want to use CPU builds on Windows and macOS, and
CUDA-enabled builds on Linux. However, if you need to support different platforms or accelerators,
you'll need to configure the project accordingly.

## Using a PyTorch index

In some cases, you may want to use a specific PyTorch variant across all platforms. For example, you
may want to use the CPU-only builds on Linux too.

In such cases, the first step is to add the relevant PyTorch index to your `pyproject.toml`:

=== "CPU-only"

    ```toml
    [[tool.uv.index]]
    name = "pytorch-cpu"
    url = "https://download.pytorch.org/whl/cpu"
    explicit = true
    ```

=== "CUDA 11.8"

    ```toml
    [[tool.uv.index]]
    name = "pytorch-cu118"
    url = "https://download.pytorch.org/whl/cu118"
    explicit = true
    ```

=== "CUDA 12.6"

    ```toml
    [[tool.uv.index]]
    name = "pytorch-cu126"
    url = "https://download.pytorch.org/whl/cu126"
    explicit = true
    ```

=== "CUDA 12.8"

    ```toml
    [[tool.uv.index]]
    name = "pytorch-cu128"
    url = "https://download.pytorch.org/whl/cu128"
    explicit = true
    ```

=== "ROCm6"

    ```toml
    [[tool.uv.index]]
    name = "pytorch-rocm"
    url = "https://download.pytorch.org/whl/rocm6.3"
    explicit = true
    ```

=== "Intel GPUs"

    ```toml
    [[tool.uv.index]]
    name = "pytorch-xpu"
    url = "https://download.pytorch.org/whl/xpu"
    explicit = true
    ```

We recommend the use of `explicit = true` to ensure that the index is _only_ used for `torch`,
`torchvision`, and other PyTorch-related packages, as opposed to generic dependencies like `jinja2`,
which should continue to be sourced from the default index (PyPI).
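Conceptually, an explicit index is only consulted for packages that are pinned to it via
`tool.uv.sources`; everything else falls through to the non-explicit default. A hypothetical
sketch of that lookup (the names and data layout here are ours, not uv's internals):

```python
# Hypothetical sketch: an explicit index is only consulted for packages pinned
# to it; everything else resolves from the non-explicit (default) indexes.
INDEXES = {"pytorch-cpu": {"explicit": True}, "pypi": {"explicit": False}}
SOURCES = {"torch": "pytorch-cpu", "torchvision": "pytorch-cpu"}

def candidate_indexes(package: str) -> list[str]:
    if package in SOURCES:
        return [SOURCES[package]]
    return [name for name, cfg in INDEXES.items() if not cfg["explicit"]]

print(candidate_indexes("torch"))   # ['pytorch-cpu']
print(candidate_indexes("jinja2"))  # ['pypi']
```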

Next, update the `pyproject.toml` to point `torch` and `torchvision` to the desired index:

=== "CPU-only"

    ```toml
    [tool.uv.sources]
    torch = [
      { index = "pytorch-cpu" },
    ]
    torchvision = [
      { index = "pytorch-cpu" },
    ]
    ```

=== "CUDA 11.8"

    PyTorch doesn't publish CUDA builds for macOS. As such, we gate on `sys_platform` to instruct uv
    to use the PyTorch index on Linux and Windows, but fall back to PyPI on macOS:

    ```toml
    [tool.uv.sources]
    torch = [
      { index = "pytorch-cu118", marker = "sys_platform == 'linux' or sys_platform == 'win32'" },
    ]
    torchvision = [
      { index = "pytorch-cu118", marker = "sys_platform == 'linux' or sys_platform == 'win32'" },
    ]
    ```

=== "CUDA 12.6"

    PyTorch doesn't publish CUDA builds for macOS. As such, we gate on `sys_platform` to instruct uv
    to limit the PyTorch index to Linux and Windows, falling back to PyPI on macOS:

    ```toml
    [tool.uv.sources]
    torch = [
      { index = "pytorch-cu126", marker = "sys_platform == 'linux' or sys_platform == 'win32'" },
    ]
    torchvision = [
      { index = "pytorch-cu126", marker = "sys_platform == 'linux' or sys_platform == 'win32'" },
    ]
    ```

=== "CUDA 12.8"

    PyTorch doesn't publish CUDA builds for macOS. As such, we gate on `sys_platform` to instruct uv
    to limit the PyTorch index to Linux and Windows, falling back to PyPI on macOS:

    ```toml
    [tool.uv.sources]
    torch = [
      { index = "pytorch-cu128", marker = "sys_platform == 'linux' or sys_platform == 'win32'" },
    ]
    torchvision = [
      { index = "pytorch-cu128", marker = "sys_platform == 'linux' or sys_platform == 'win32'" },
    ]
    ```

=== "ROCm6"

    PyTorch doesn't publish ROCm6 builds for macOS or Windows. As such, we gate on `sys_platform` to
    instruct uv to limit the PyTorch index to Linux, falling back to PyPI on macOS and Windows:

    ```toml
    [tool.uv.sources]
    torch = [
      { index = "pytorch-rocm", marker = "sys_platform == 'linux'" },
    ]
    torchvision = [
      { index = "pytorch-rocm", marker = "sys_platform == 'linux'" },
    ]
    # ROCm6 support relies on `pytorch-triton-rocm`, which should also be installed from the PyTorch index
    # (and included in `project.dependencies`).
    pytorch-triton-rocm = [
      { index = "pytorch-rocm", marker = "sys_platform == 'linux'" },
    ]
    ```

=== "Intel GPUs"

    PyTorch doesn't publish Intel GPU builds for macOS. As such, we gate on `sys_platform` to
    instruct uv to limit the PyTorch index to Linux and Windows, falling back to PyPI on macOS:

    ```toml
    [tool.uv.sources]
    torch = [
      { index = "pytorch-xpu", marker = "sys_platform == 'linux' or sys_platform == 'win32'" },
    ]
    torchvision = [
      { index = "pytorch-xpu", marker = "sys_platform == 'linux' or sys_platform == 'win32'" },
    ]
    # Intel GPU support relies on `pytorch-triton-xpu`, which should also be installed from the PyTorch index
    # (and included in `project.dependencies`).
    pytorch-triton-xpu = [
      { index = "pytorch-xpu", marker = "sys_platform == 'linux' or sys_platform == 'win32'" },
    ]
    ```

As a complete example, the following project would use PyTorch's CPU-only builds on all platforms:

```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.12.0"
dependencies = [
  "torch>=2.7.0",
  "torchvision>=0.22.0",
]

[tool.uv.sources]
torch = [
  { index = "pytorch-cpu" },
]
torchvision = [
  { index = "pytorch-cpu" },
]

[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true
```

## Configuring accelerators with environment markers

In some cases, you may want to use CPU-only builds in one environment (e.g., macOS and Windows), and
CUDA-enabled builds in another (e.g., Linux).

With `tool.uv.sources`, you can use environment markers to specify the desired index for each
platform. For example, the following configuration would use PyTorch's CUDA-enabled builds on Linux,
and CPU-only builds on all other platforms (e.g., macOS and Windows):

```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.12.0"
dependencies = [
  "torch>=2.7.0",
  "torchvision>=0.22.0",
]

[tool.uv.sources]
torch = [
  { index = "pytorch-cpu", marker = "sys_platform != 'linux'" },
  { index = "pytorch-cu128", marker = "sys_platform == 'linux'" },
]
torchvision = [
  { index = "pytorch-cpu", marker = "sys_platform != 'linux'" },
  { index = "pytorch-cu128", marker = "sys_platform == 'linux'" },
]

[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true

[[tool.uv.index]]
name = "pytorch-cu128"
url = "https://download.pytorch.org/whl/cu128"
explicit = true
```

Similarly, the following configuration would use PyTorch's AMD GPU builds on Linux, and CPU-only
builds on Windows and macOS (by way of falling back to PyPI):

```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.12.0"
dependencies = [
  "torch>=2.7.0",
  "torchvision>=0.22.0",
  "pytorch-triton-rocm>=3.3.0 ; sys_platform == 'linux'",
]

[tool.uv.sources]
torch = [
  { index = "pytorch-rocm", marker = "sys_platform == 'linux'" },
]
torchvision = [
  { index = "pytorch-rocm", marker = "sys_platform == 'linux'" },
]
pytorch-triton-rocm = [
  { index = "pytorch-rocm", marker = "sys_platform == 'linux'" },
]

[[tool.uv.index]]
name = "pytorch-rocm"
url = "https://download.pytorch.org/whl/rocm6.3"
explicit = true
```

Or, for Intel GPU builds:

```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.12.0"
dependencies = [
  "torch>=2.7.0",
  "torchvision>=0.22.0",
  "pytorch-triton-xpu>=3.3.0 ; sys_platform == 'win32' or sys_platform == 'linux'",
]

[tool.uv.sources]
torch = [
  { index = "pytorch-xpu", marker = "sys_platform == 'win32' or sys_platform == 'linux'" },
]
torchvision = [
  { index = "pytorch-xpu", marker = "sys_platform == 'win32' or sys_platform == 'linux'" },
]
pytorch-triton-xpu = [
  { index = "pytorch-xpu", marker = "sys_platform == 'win32' or sys_platform == 'linux'" },
]

[[tool.uv.index]]
name = "pytorch-xpu"
url = "https://download.pytorch.org/whl/xpu"
explicit = true
```

## Configuring accelerators with optional dependencies

You may want to use CPU-only builds in some cases, but CUDA-enabled builds in others, with the
choice toggled by a user-provided extra (e.g., `uv sync --extra cpu` vs. `uv sync --extra cu128`).

With `tool.uv.sources`, you can use extra markers to specify the desired index for each enabled
extra. For example, the following configuration would use PyTorch's CPU-only builds for
`uv sync --extra cpu` and CUDA-enabled builds for `uv sync --extra cu128`:

```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.12.0"
dependencies = []

[project.optional-dependencies]
cpu = [
  "torch>=2.7.0",
  "torchvision>=0.22.0",
]
cu128 = [
  "torch>=2.7.0",
  "torchvision>=0.22.0",
]

[tool.uv]
conflicts = [
  [
    { extra = "cpu" },
    { extra = "cu128" },
  ],
]

[tool.uv.sources]
torch = [
  { index = "pytorch-cpu", extra = "cpu" },
  { index = "pytorch-cu128", extra = "cu128" },
]
torchvision = [
  { index = "pytorch-cpu", extra = "cpu" },
  { index = "pytorch-cu128", extra = "cu128" },
]

[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true

[[tool.uv.index]]
name = "pytorch-cu128"
url = "https://download.pytorch.org/whl/cu128"
explicit = true
```

!!! note

    Since GPU-accelerated builds aren't available on macOS, the above configuration will fail to
    install on macOS when the `cu128` extra is enabled.
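The `conflicts` declaration marks the two extras as mutually exclusive: enabling more than one
extra from a conflict group is rejected. A hypothetical sketch of that check (ours, not uv's
resolver):

```python
# Hypothetical sketch: each conflict group lists extras that cannot be enabled
# together; enabling two or more from one group is an error.
CONFLICTS = [[{"extra": "cpu"}, {"extra": "cu128"}]]

def check_extras(enabled: set[str]) -> None:
    for group in CONFLICTS:
        chosen = [item["extra"] for item in group if item["extra"] in enabled]
        if len(chosen) > 1:
            raise ValueError(f"extras {chosen} are declared as conflicting")

check_extras({"cpu"})  # fine: only one extra from the group is enabled
# check_extras({"cpu", "cu128"})  # would raise ValueError
```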

## The `uv pip` interface

While the above examples are focused on uv's project interface (`uv lock`, `uv sync`, `uv run`,
etc.), PyTorch can also be installed via the `uv pip` interface.

PyTorch itself offers a [dedicated interface](https://pytorch.org/get-started/locally/) to determine
the appropriate pip command to run for a given target configuration. For example, you can install
stable, CPU-only PyTorch on Linux with:

```shell
$ pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
```

To use the same workflow with uv, replace `pip3` with `uv pip`:

```shell
$ uv pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
```

## Automatic backend selection

uv supports automatic selection of the appropriate PyTorch index via the `--torch-backend=auto`
command-line argument (or the `UV_TORCH_BACKEND=auto` environment variable), as in:

```shell
$ # With a command-line argument.
$ uv pip install torch --torch-backend=auto

$ # With an environment variable.
$ UV_TORCH_BACKEND=auto uv pip install torch
```

When enabled, uv will query for the installed CUDA driver and AMD GPU versions, then use the
most-compatible PyTorch index for all relevant packages (e.g., `torch`, `torchvision`, etc.). If no
such GPU is found, uv will fall back to the CPU-only index. uv will continue to respect existing
index configuration for any packages outside the PyTorch ecosystem.

You can also select a specific backend (e.g., CUDA 12.6) with `--torch-backend=cu126` (or
`UV_TORCH_BACKEND=cu126`):

```shell
$ # With a command-line argument.
$ uv pip install torch torchvision --torch-backend=cu126

$ # With an environment variable.
$ UV_TORCH_BACKEND=cu126 uv pip install torch torchvision
```

At present, `--torch-backend` is only available in the `uv pip` interface.