mirror of https://github.com/astral-sh/uv.git
synced 2025-10-31 03:55:33 +00:00
commit b1dc2b71a3

---
title: Using uv with PyTorch
description:
  A guide to using uv with PyTorch, including installing PyTorch, configuring per-platform and
  per-accelerator builds, and more.
---

# Using uv with PyTorch

The [PyTorch](https://pytorch.org/) ecosystem is a popular choice for deep learning research and
development. You can use uv to manage PyTorch projects and PyTorch dependencies across different
Python versions and environments, even controlling for the choice of accelerator (e.g., CPU-only vs.
CUDA).

!!! note

    Some of the features outlined in this guide require uv version 0.5.3 or later. We recommend upgrading prior to configuring PyTorch.

## Installing PyTorch

From a packaging perspective, PyTorch has a few uncommon characteristics:

- Many PyTorch wheels are hosted on a dedicated index, rather than the Python Package Index (PyPI).
  As such, installing PyTorch often requires configuring a project to use the PyTorch index.
- PyTorch produces distinct builds for each accelerator (e.g., CPU-only, CUDA). Since there's no
  standardized mechanism for specifying these accelerators when publishing or installing, PyTorch
  encodes them in the local version specifier. As such, PyTorch versions will often look like
  `2.5.1+cpu`, `2.5.1+cu121`, etc.
- Builds for different accelerators are published to different indexes. For example, the `+cpu`
  builds are published on https://download.pytorch.org/whl/cpu, while the `+cu121` builds are
  published on https://download.pytorch.org/whl/cu121.

As such, the necessary packaging configuration will vary depending on both the platforms you need to
support and the accelerators you want to enable.

To start, consider the following (default) configuration, which would be generated by running
`uv init --python 3.12` followed by `uv add torch torchvision`.

In this case, PyTorch would be installed from PyPI, which hosts CPU-only wheels for Windows and
macOS, and GPU-accelerated wheels on Linux (targeting CUDA 12.6):

```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.12"
dependencies = [
  "torch>=2.7.0",
  "torchvision>=0.22.0",
]
```

!!! tip "Supported Python versions"

    At the time of writing, PyTorch does not yet publish wheels for Python 3.14; as such, projects
    with `requires-python = ">=3.14"` may fail to resolve. See the
    [compatibility matrix](https://github.com/pytorch/pytorch/blob/main/RELEASE.md#release-compatibility-matrix).

This is a valid configuration for projects that want to use CPU builds on Windows and macOS, and
CUDA-enabled builds on Linux. However, if you need to support different platforms or accelerators,
you'll need to configure the project accordingly.

## Using a PyTorch index

In some cases, you may want to use a specific PyTorch variant across all platforms. For example, you
may want to use the CPU-only builds on Linux too.

In such cases, the first step is to add the relevant PyTorch index to your `pyproject.toml`:

=== "CPU-only"

    ```toml
    [[tool.uv.index]]
    name = "pytorch-cpu"
    url = "https://download.pytorch.org/whl/cpu"
    explicit = true
    ```

=== "CUDA 11.8"

    ```toml
    [[tool.uv.index]]
    name = "pytorch-cu118"
    url = "https://download.pytorch.org/whl/cu118"
    explicit = true
    ```

=== "CUDA 12.6"

    ```toml
    [[tool.uv.index]]
    name = "pytorch-cu126"
    url = "https://download.pytorch.org/whl/cu126"
    explicit = true
    ```

=== "CUDA 12.8"

    ```toml
    [[tool.uv.index]]
    name = "pytorch-cu128"
    url = "https://download.pytorch.org/whl/cu128"
    explicit = true
    ```

=== "ROCm6"

    ```toml
    [[tool.uv.index]]
    name = "pytorch-rocm"
    url = "https://download.pytorch.org/whl/rocm6.3"
    explicit = true
    ```

=== "Intel GPUs"

    ```toml
    [[tool.uv.index]]
    name = "pytorch-xpu"
    url = "https://download.pytorch.org/whl/xpu"
    explicit = true
    ```

We recommend using `explicit = true` to ensure that the index is _only_ used for `torch`,
`torchvision`, and other PyTorch-related packages, as opposed to generic dependencies like `jinja2`,
which should continue to be sourced from the default index (PyPI).

Next, update the `pyproject.toml` to point `torch` and `torchvision` to the desired index:

=== "CPU-only"

    ```toml
    [tool.uv.sources]
    torch = [
      { index = "pytorch-cpu" },
    ]
    torchvision = [
      { index = "pytorch-cpu" },
    ]
    ```

=== "CUDA 11.8"

    PyTorch doesn't publish CUDA builds for macOS. As such, we gate on `sys_platform` to instruct uv
    to use the PyTorch index on Linux and Windows, but fall back to PyPI on macOS:

    ```toml
    [tool.uv.sources]
    torch = [
      { index = "pytorch-cu118", marker = "sys_platform == 'linux' or sys_platform == 'win32'" },
    ]
    torchvision = [
      { index = "pytorch-cu118", marker = "sys_platform == 'linux' or sys_platform == 'win32'" },
    ]
    ```

=== "CUDA 12.6"

    PyTorch doesn't publish CUDA builds for macOS. As such, we gate on `sys_platform` to instruct uv
    to limit the PyTorch index to Linux and Windows, falling back to PyPI on macOS:

    ```toml
    [tool.uv.sources]
    torch = [
      { index = "pytorch-cu126", marker = "sys_platform == 'linux' or sys_platform == 'win32'" },
    ]
    torchvision = [
      { index = "pytorch-cu126", marker = "sys_platform == 'linux' or sys_platform == 'win32'" },
    ]
    ```

=== "CUDA 12.8"

    PyTorch doesn't publish CUDA builds for macOS. As such, we gate on `sys_platform` to instruct uv
    to limit the PyTorch index to Linux and Windows, falling back to PyPI on macOS:

    ```toml
    [tool.uv.sources]
    torch = [
      { index = "pytorch-cu128", marker = "sys_platform == 'linux' or sys_platform == 'win32'" },
    ]
    torchvision = [
      { index = "pytorch-cu128", marker = "sys_platform == 'linux' or sys_platform == 'win32'" },
    ]
    ```

=== "ROCm6"

    PyTorch doesn't publish ROCm6 builds for macOS or Windows. As such, we gate on `sys_platform` to
    instruct uv to limit the PyTorch index to Linux, falling back to PyPI on macOS and Windows:

    ```toml
    [tool.uv.sources]
    torch = [
      { index = "pytorch-rocm", marker = "sys_platform == 'linux'" },
    ]
    torchvision = [
      { index = "pytorch-rocm", marker = "sys_platform == 'linux'" },
    ]
    # ROCm6 support relies on `pytorch-triton-rocm`, which should also be installed from the PyTorch index
    # (and included in `project.dependencies`).
    pytorch-triton-rocm = [
      { index = "pytorch-rocm", marker = "sys_platform == 'linux'" },
    ]
    ```

=== "Intel GPUs"

    PyTorch doesn't publish Intel GPU builds for macOS. As such, we gate on `sys_platform` to
    instruct uv to limit the PyTorch index to Linux and Windows, falling back to PyPI on macOS:

    ```toml
    [tool.uv.sources]
    torch = [
      { index = "pytorch-xpu", marker = "sys_platform == 'linux' or sys_platform == 'win32'" },
    ]
    torchvision = [
      { index = "pytorch-xpu", marker = "sys_platform == 'linux' or sys_platform == 'win32'" },
    ]
    # Intel GPU support relies on `pytorch-triton-xpu`, which should also be installed from the PyTorch index
    # (and included in `project.dependencies`).
    pytorch-triton-xpu = [
      { index = "pytorch-xpu", marker = "sys_platform == 'linux' or sys_platform == 'win32'" },
    ]
    ```

As a complete example, the following project would use PyTorch's CPU-only builds on all platforms:

```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.12.0"
dependencies = [
  "torch>=2.7.0",
  "torchvision>=0.22.0",
]

[tool.uv.sources]
torch = [
    { index = "pytorch-cpu" },
]
torchvision = [
    { index = "pytorch-cpu" },
]

[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true
```

## Configuring accelerators with environment markers

In some cases, you may want to use CPU-only builds in one environment (e.g., macOS and Windows), and
CUDA-enabled builds in another (e.g., Linux).

With `tool.uv.sources`, you can use environment markers to specify the desired index for each
platform. For example, the following configuration would use PyTorch's CUDA-enabled builds on Linux,
and CPU-only builds on all other platforms (e.g., macOS and Windows):

```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.12.0"
dependencies = [
  "torch>=2.7.0",
  "torchvision>=0.22.0",
]

[tool.uv.sources]
torch = [
  { index = "pytorch-cpu", marker = "sys_platform != 'linux'" },
  { index = "pytorch-cu128", marker = "sys_platform == 'linux'" },
]
torchvision = [
  { index = "pytorch-cpu", marker = "sys_platform != 'linux'" },
  { index = "pytorch-cu128", marker = "sys_platform == 'linux'" },
]

[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true

[[tool.uv.index]]
name = "pytorch-cu128"
url = "https://download.pytorch.org/whl/cu128"
explicit = true
```

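The two entries in `tool.uv.sources` above partition environments by `sys_platform`. A rough Python sketch of the same selection logic (a hypothetical helper, not uv's implementation):

```python
def select_index(sys_platform: str) -> str:
    """Mirror the markers above: the CUDA index on Linux, the
    CPU-only index everywhere else."""
    # `sys.platform` reports 'linux', 'darwin' (macOS), or 'win32' (Windows).
    if sys_platform == "linux":
        return "pytorch-cu128"
    return "pytorch-cpu"

print(select_index("linux"))   # pytorch-cu128
print(select_index("darwin"))  # pytorch-cpu
```
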
Similarly, the following configuration would use PyTorch's AMD GPU builds on Linux, and CPU-only
builds on Windows and macOS (by way of falling back to PyPI):

```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.12.0"
dependencies = [
  "torch>=2.7.0",
  "torchvision>=0.22.0",
  "pytorch-triton-rocm>=3.3.0 ; sys_platform == 'linux'",
]

[tool.uv.sources]
torch = [
  { index = "pytorch-rocm", marker = "sys_platform == 'linux'" },
]
torchvision = [
  { index = "pytorch-rocm", marker = "sys_platform == 'linux'" },
]
pytorch-triton-rocm = [
  { index = "pytorch-rocm", marker = "sys_platform == 'linux'" },
]

[[tool.uv.index]]
name = "pytorch-rocm"
url = "https://download.pytorch.org/whl/rocm6.3"
explicit = true
```

Or, for Intel GPU builds:

```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.12.0"
dependencies = [
  "torch>=2.7.0",
  "torchvision>=0.22.0",
  "pytorch-triton-xpu>=3.3.0 ; sys_platform == 'win32' or sys_platform == 'linux'",
]

[tool.uv.sources]
torch = [
  { index = "pytorch-xpu", marker = "sys_platform == 'win32' or sys_platform == 'linux'" },
]
torchvision = [
  { index = "pytorch-xpu", marker = "sys_platform == 'win32' or sys_platform == 'linux'" },
]
pytorch-triton-xpu = [
  { index = "pytorch-xpu", marker = "sys_platform == 'win32' or sys_platform == 'linux'" },
]

[[tool.uv.index]]
name = "pytorch-xpu"
url = "https://download.pytorch.org/whl/xpu"
explicit = true
```

## Configuring accelerators with optional dependencies

You may want to use CPU-only builds in some scenarios, but CUDA-enabled builds in others, with the
choice toggled by a user-provided extra (e.g., `uv sync --extra cpu` vs. `uv sync --extra cu128`).

With `tool.uv.sources`, you can use extra markers to specify the desired index for each enabled
extra. For example, the following configuration would use PyTorch's CPU-only builds for
`uv sync --extra cpu` and CUDA-enabled builds for `uv sync --extra cu128`:

```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.12.0"
dependencies = []

[project.optional-dependencies]
cpu = [
  "torch>=2.7.0",
  "torchvision>=0.22.0",
]
cu128 = [
  "torch>=2.7.0",
  "torchvision>=0.22.0",
]

[tool.uv]
conflicts = [
  [
    { extra = "cpu" },
    { extra = "cu128" },
  ],
]

[tool.uv.sources]
torch = [
  { index = "pytorch-cpu", extra = "cpu" },
  { index = "pytorch-cu128", extra = "cu128" },
]
torchvision = [
  { index = "pytorch-cpu", extra = "cpu" },
  { index = "pytorch-cu128", extra = "cu128" },
]

[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true

[[tool.uv.index]]
name = "pytorch-cu128"
url = "https://download.pytorch.org/whl/cu128"
explicit = true
```

!!! note

    Since GPU-accelerated builds aren't available on macOS, the above configuration will fail to
    install on macOS when the `cu128` extra is enabled.

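The `conflicts` table declares the two extras mutually exclusive. A small Python sketch (hypothetical, not uv's resolver) of that mutual-exclusion check:

```python
# Mirrors the `[tool.uv] conflicts` declaration above.
conflicts = [[{"extra": "cpu"}, {"extra": "cu128"}]]

def check_extras(enabled: set[str]) -> list[str]:
    """Return the extras from any declared conflict set that are enabled
    together; an empty list means the selection is valid."""
    for conflict_set in conflicts:
        active = [item["extra"] for item in conflict_set if item["extra"] in enabled]
        if len(active) > 1:
            return active
    return []

print(check_extras({"cpu"}))           # []
print(check_extras({"cpu", "cu128"}))  # ['cpu', 'cu128']
```
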
## The `uv pip` interface

While the above examples are focused on uv's project interface (`uv lock`, `uv sync`, `uv run`,
etc.), PyTorch can also be installed via the `uv pip` interface.

PyTorch itself offers a [dedicated interface](https://pytorch.org/get-started/locally/) to determine
the appropriate pip command to run for a given target configuration. For example, you can install
stable, CPU-only PyTorch on Linux with:

```shell
$ pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
```

To use the same workflow with uv, replace `pip3` with `uv pip`:

```shell
$ uv pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
```

## Automatic backend selection

uv supports automatic selection of the appropriate PyTorch index via the `--torch-backend=auto`
command-line argument (or the `UV_TORCH_BACKEND=auto` environment variable), as in:

```shell
$ # With a command-line argument.
$ uv pip install torch --torch-backend=auto

$ # With an environment variable.
$ UV_TORCH_BACKEND=auto uv pip install torch
```

When enabled, uv will query for the installed CUDA driver, AMD GPU versions, and Intel GPU presence,
then use the most-compatible PyTorch index for all relevant packages (e.g., `torch`, `torchvision`,
etc.). If no such GPU is found, uv will fall back to the CPU-only index. uv will continue to respect
existing index configuration for any packages outside the PyTorch ecosystem.

You can also select a specific backend (e.g., CUDA 12.6) with `--torch-backend=cu126` (or
`UV_TORCH_BACKEND=cu126`):

```shell
$ # With a command-line argument.
$ uv pip install torch torchvision --torch-backend=cu126

$ # With an environment variable.
$ UV_TORCH_BACKEND=cu126 uv pip install torch torchvision
```

On Windows, Intel GPU (XPU) is not automatically selected with `--torch-backend=auto`, but you can
manually specify it using `--torch-backend=xpu`:

```shell
$ # Manual selection for Intel GPU.
$ uv pip install torch torchvision --torch-backend=xpu
```

At present, `--torch-backend` is only available in the `uv pip` interface.