
Background reading: https://github.com/astral-sh/uv/issues/8157

Companion PR: https://github.com/astral-sh/pubgrub/pull/36

Required for test coverage: https://github.com/astral-sh/packse/pull/230

When two packages A and B conflict, we have the option to choose a lower version of A or a lower version of B. Currently, we determine this by the order in which we saw a package (assuming equal specificity of the requirement): if we saw A before B, we pin A until all versions of B are exhausted. This can lead to undesirable outcomes, from cases where it's just slow (sentry) to cases without lower bounds where we backtrack to a very old version of B. This old version may fail to build (terminating the resolution), or it may be so old that it no longer depends on A (or the shared conflicting package) but is also too old for the user's application (fastapi). #8157 collects such cases, and the `wrong-backtracking` packse scenario contains a minimized example.

We try to solve this by tracking which packages are "A"s (culprits) and "B"s (affected) and manually interfering with package selection and backtracking. Whenever a version we just chose is rejected, we give the current package a counter for being affected, and the package it conflicted with a counter for being a culprit. If a package accumulates more counts than a threshold, we reprioritize: undecided after the culprits, after the affected, after packages that only have a single version (URLs, `==<version>`). We then ask pubgrub to backtrack to just before the culprit. Due to the changed priorities, we now select package B, the affected, instead of package A, the culprit.

To do this efficiently, we ask pubgrub for the incompatibility that caused backtracking, or just the last version to be discarded (due to its dependencies). For backtracking, we use the last incompatibility from unit propagation as a heuristic.
When a version is discarded because one of its dependencies conflicts with the partial solution, the incompatibility tells us which package in the partial solution it conflicted with.

We only backtrack once per package, the first time it passes the threshold. This prevents backtracking loops in which we make the same decisions over and over again. But we also changed the priority, so we shouldn't take the same path even after the one time we backtrack (that would defeat the purpose of this change).

There are some parameters that can be tweaked:

- Currently, the threshold is set to 5, which feels not too eager with some of the conflicts that we want to tolerate, but also switches strategies quickly.
- The relative order of the new priorities can be changed, as long as, for each (A, B) pair, the priority of B is afterwards lower than that of A.
- Currently, culprits capture conflicts for the whole package, but we could limit that to a specific version.
- We could discard conflict counters after backtracking instead of keeping them eternally, as we do now.

Note that we're always talking about pairs (A, B), but in practice we track individual packages, not pairs. A case that we wouldn't capture is when B is only introduced to the dependency graph after A, but I think that would require a cyclic dependency for A and B to conflict? There may also be cases where looking at the last incompatibility is insufficient.

Another example that we can't repair with prioritization is urllib3/boto3/botocore: we actually have to check all the newer versions of boto3 and botocore to identify the version that works with the older urllib3, no shortcuts allowed.

```
urllib3<1.25.4
boto3
```

All examples I tested were cases with two packages where we only had to switch the order, so I've abstracted them into a single packse case.

This PR changes the resolution for certain paths, so there is a risk of regressions.

Fixes #8157

---

All tested examples improved.
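The counting-and-reprioritizing heuristic described above can be sketched as follows. This is a minimal illustrative model, not uv's actual implementation: the function name `record_conflict` and the bookkeeping structures are hypothetical; only the threshold of 5 and the "backtrack once per package" rule come from the text.

```python
from collections import Counter

# Threshold named in the text: after this many conflicts, switch strategies.
CONFLICT_THRESHOLD = 5

affected_counts = Counter()  # how often a package was the rejected ("affected") one
culprit_counts = Counter()   # how often a package caused a rejection ("culprit")
already_backtracked = set()  # packages we already backtracked on once

def record_conflict(affected, culprit):
    """Count one rejected version; return the culprit to backtrack on, if any.

    Hypothetical sketch: in uv, the affected/culprit pair would come from the
    incompatibility that pubgrub reports for the rejected version.
    """
    affected_counts[affected] += 1
    culprit_counts[culprit] += 1
    if (
        culprit_counts[culprit] >= CONFLICT_THRESHOLD
        and culprit not in already_backtracked
    ):
        # Backtrack only once per package to avoid backtracking loops; the
        # changed priorities keep us from retaking the same path afterwards.
        already_backtracked.add(culprit)
        return culprit
    return None
```

Under this sketch, the first four conflicts between the same pair only accumulate counts; the fifth triggers a one-time backtrack on the culprit, and further conflicts never re-trigger it.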
Input fastapi:

```text
starlette<=0.36.0
fastapi<=0.115.2
```

```
# BEFORE
$ uv pip --no-progress compile -p 3.11 --exclude-newer 2024-10-01 --no-annotate debug/fastapi.txt
annotated-types==0.7.0
anyio==4.6.0
fastapi==0.1.17
idna==3.10
pydantic==2.9.2
pydantic-core==2.23.4
sniffio==1.3.1
starlette==0.36.0
typing-extensions==4.12.2

# AFTER
$ cargo run --profile fast-build --no-default-features pip compile -p 3.11 --no-progress --exclude-newer 2024-10-01 --no-annotate debug/fastapi.txt
annotated-types==0.7.0
anyio==4.6.0
fastapi==0.109.1
idna==3.10
pydantic==2.9.2
pydantic-core==2.23.4
sniffio==1.3.1
starlette==0.35.1
typing-extensions==4.12.2
```

Input xarray:

```text
xarray[accel]
```

```
# BEFORE
$ uv pip --no-progress compile -p 3.11 --exclude-newer 2024-10-01 --no-annotate debug/xarray-accel.txt
bottleneck==1.4.0
flox==0.9.13
llvmlite==0.36.0
numba==0.53.1
numbagg==0.8.2
numpy==2.1.1
numpy-groupies==0.11.2
opt-einsum==3.4.0
packaging==24.1
pandas==2.2.3
python-dateutil==2.9.0.post0
pytz==2024.2
scipy==1.14.1
setuptools==75.1.0
six==1.16.0
toolz==0.12.1
tzdata==2024.2
xarray==2024.9.0

# AFTER
$ cargo run --profile fast-build --no-default-features pip compile -p 3.11 --no-progress --exclude-newer 2024-10-01 --no-annotate debug/xarray-accel.txt
bottleneck==1.4.0
flox==0.9.13
llvmlite==0.43.0
numba==0.60.0
numbagg==0.8.2
numpy==2.0.2
numpy-groupies==0.11.2
opt-einsum==3.4.0
packaging==24.1
pandas==2.2.3
python-dateutil==2.9.0.post0
pytz==2024.2
scipy==1.14.1
six==1.16.0
toolz==0.12.1
tzdata==2024.2
xarray==2024.9.0
```

Input sentry: The resolution is identical, but arrived at much faster: main tries 69 versions (sentry-kafka-schemas: 63), the PR tries 12 versions (sentry-kafka-schemas: 6; 5 times conflicting, then once the right version).
```text
python-rapidjson<=1.20,>=1.4
sentry-kafka-schemas<=0.1.113,>=0.1.50
```

```
# BEFORE
$ uv pip --no-progress compile -p 3.11 --exclude-newer 2024-10-01 --no-annotate debug/sentry.txt
fastjsonschema==2.20.0
msgpack==1.1.0
python-rapidjson==1.8
pyyaml==6.0.2
sentry-kafka-schemas==0.1.111
typing-extensions==4.12.2

# AFTER
$ cargo run --profile fast-build --no-default-features pip compile -p 3.11 --no-progress --exclude-newer 2024-10-01 --no-annotate debug/sentry.txt
fastjsonschema==2.20.0
msgpack==1.1.0
python-rapidjson==1.8
pyyaml==6.0.2
sentry-kafka-schemas==0.1.111
typing-extensions==4.12.2
```

Input apache-beam:

```text
# Run on Python 3.10
dill<0.3.9,>=0.2.2
apache-beam<=2.49.0
```

```
# BEFORE
$ uv pip --no-progress compile -p 3.10 --exclude-newer 2024-10-01 --no-annotate debug/apache-beam.txt
  × Failed to download and build `apache-beam==2.0.0`
  ╰─▶ Build backend failed to determine requirements with `build_wheel()` (exit status: 1)

# AFTER
$ cargo run --profile fast-build --no-default-features pip compile -p 3.10 --no-progress --exclude-newer 2024-10-01 --no-annotate debug/apache-beam.txt
apache-beam==2.49.0
certifi==2024.8.30
charset-normalizer==3.3.2
cloudpickle==2.2.1
crcmod==1.7
dill==0.3.1.1
dnspython==2.6.1
docopt==0.6.2
fastavro==1.9.7
fasteners==0.19
grpcio==1.66.2
hdfs==2.7.3
httplib2==0.22.0
idna==3.10
numpy==1.24.4
objsize==0.6.1
orjson==3.10.7
proto-plus==1.24.0
protobuf==4.23.4
pyarrow==11.0.0
pydot==1.4.2
pymongo==4.10.0
pyparsing==3.1.4
python-dateutil==2.9.0.post0
pytz==2024.2
regex==2024.9.11
requests==2.32.3
six==1.16.0
typing-extensions==4.12.2
urllib3==2.2.3
zstandard==0.23.0
```
# Resolver internals
!!! tip

    This document focuses on the internal workings of uv's resolver. For using uv, see the
    [resolution concept](../concepts/resolution.md) documentation.
## Resolver
As defined in a textbook, resolution, i.e., finding a set of versions to install from a given set of requirements, is equivalent to the SAT problem and thereby NP-complete: in the worst case, you have to try all possible combinations of all versions of all packages, and there are no general, fast algorithms. In practice, this is misleading for a number of reasons:
- The slowest part of resolution in uv is loading package and version metadata, even if it's cached.
- There are many possible solutions, but some are preferable to others. For example, we generally prefer using the latest version of packages.
- Package dependencies are complex but structured: version ranges are contiguous rather than arbitrary boolean inclusions/exclusions of versions, adjacent releases often have the same or similar requirements, etc.
- For most resolutions, the resolver doesn't need to backtrack, picking versions iteratively is sufficient. If there are version preferences from a previous resolution, barely any work needs to be done.
- When resolution fails, more information is needed than a message that there is no solution (as is seen in SAT solvers). Instead, the resolver should produce an understandable error trace that states which packages are involved, in a way that allows a user to remove the conflict.
uv uses pubgrub-rs, the Rust implementation of PubGrub, an incremental version solver. PubGrub in uv works in the following steps:
- Start with a partial solution that declares which package versions have been selected and which are undecided. Initially, only a virtual root package is decided.
- The highest-priority package is selected from the undecided packages. Packages with URLs (including file, git, etc.) have the highest priority, then those with more exact specifiers (such as `==`), then those with less strict specifiers. Inside each category, packages are ordered by when they were first seen (i.e., the order in a file), making the resolution deterministic.
- A version is picked for the selected package. The version must work with all specifiers from the requirements in the partial solution and must not be previously marked as incompatible. The resolver prefers versions from a lockfile (`uv.lock` or `-o requirements.txt`) and those installed in the current environment. Versions are checked from highest to lowest (unless using an alternative resolution strategy).
- All requirements of the selected package version are added to the undecided packages. uv prefetches their metadata in the background to improve performance.
- The process is repeated with the next package unless a conflict is detected, in which case the resolver backtracks. For example, say the partial solution contains, among other packages, `a 2` then `b 2`, with the requirements `a 2 -> c 1` and `b 2 -> c 2`. No compatible version of `c` can be found. PubGrub can determine that this was caused by `a 2` and `b 2`, and adds the incompatibility `{a 2, b 2}`, meaning that when either is picked, the other cannot be selected. The partial solution is restored to `a 2` with the tracked incompatibility, and the resolver attempts to pick a new version for `b`.
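The prioritization in the selection step above can be illustrated with a sort key. This is a simplified, hypothetical sketch, not uv's implementation: the dictionary fields (`has_url`, `specifier`, `seen_index`) are invented stand-ins for uv's internal state.

```python
def priority_key(package):
    """Order undecided packages: URLs first, then exact pins, then the rest.

    Within each category, first-seen order (e.g., position in a requirements
    file) is the deterministic tie-breaker.
    """
    if package["has_url"]:
        category = 0  # URL requirements: only one candidate exists
    elif package["specifier"].startswith("=="):
        category = 1  # exact pins: effectively a single version
    else:
        category = 2  # everything else
    return (category, package["seen_index"])

undecided = [
    {"name": "requests", "has_url": False, "specifier": ">=2", "seen_index": 0},
    {"name": "torch", "has_url": True, "specifier": "", "seen_index": 2},
    {"name": "numpy", "has_url": False, "specifier": "==1.26.4", "seen_index": 1},
]
ordered = sorted(undecided, key=priority_key)
# torch (URL) is decided first, then numpy (exact pin), then requests.
```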
Eventually, the resolver either picks compatible versions for all packages (a successful resolution) or there is an incompatibility including the virtual "root" package which defines the versions requested by the user. An incompatibility with the root package indicates that whatever versions of the root dependencies and their transitive dependencies are picked, there will always be a conflict. From the incompatibilities tracked in PubGrub, an error message is constructed to enumerate the involved packages.
!!! tip

    For more details on the PubGrub algorithm, see
    [Internals of the PubGrub algorithm](https://pubgrub-rs-guide.pages.dev/internals/intro).
In addition to PubGrub's base algorithm, we also use a heuristic that backtracks and switches the order of two packages if they have been conflicting too much.
## Forking
Python resolvers historically didn't support backtracking, and even with backtracking, resolution was usually limited to a single environment, with one specific architecture, operating system, Python version, and Python implementation. Some packages use contradictory requirements for different environments, for example:
```text
numpy>=2,<3 ; python_version >= "3.11"
numpy>=1.16,<2 ; python_version < "3.11"
```
Since Python only allows one version of each package, a naive resolver would error here. Inspired by Poetry, uv uses a forking resolver: whenever there are multiple requirements for a package with different markers, the resolution is split.
In the above example, the partial solution would be split into two resolutions, one for `python_version >= "3.11"` and one for `python_version < "3.11"`.
If markers overlap or are missing a part of the marker space, the resolver splits additional times — there can be many forks per package. For example, given:
```text
flask > 1 ; sys_platform == 'darwin'
flask > 2 ; sys_platform == 'win32'
flask
```
A fork would be created for `sys_platform == 'darwin'`, one for `sys_platform == 'win32'`, and one for `sys_platform != 'darwin' and sys_platform != 'win32'`.
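The fork partitioning for the flask example can be modeled with a toy sketch. This is an assumed simplification: real PEP 508 marker algebra is symbolic, but here markers are plain sets of `sys_platform` values over a small closed world, just to show how the uncovered remainder becomes its own fork.

```python
# Closed world of platforms for this toy model (real markers are unbounded).
WORLD = {"darwin", "win32", "linux"}

# (requirement, marker) pairs from the flask example; an unconditioned
# requirement applies to the whole world.
requirements = [
    ("flask>1", {"darwin"}),   # flask > 1 ; sys_platform == 'darwin'
    ("flask>2", {"win32"}),    # flask > 2 ; sys_platform == 'win32'
    ("flask", WORLD),          # flask (no marker)
]

# One fork per explicit marker, plus one fork for the part of the marker
# space that no explicit marker covers (the complement).
explicit = [marker for _, marker in requirements if marker != WORLD]
covered = set().union(*explicit)
forks = explicit + [WORLD - covered]
# forks: {'darwin'}, {'win32'}, and {'linux'} (standing in for
# sys_platform != 'darwin' and sys_platform != 'win32')
```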
Forks can be nested, i.e., each fork depends on any previous forks that occurred. Forks with identical packages are merged to keep the number of forks low.
!!! tip

    Forking can be observed in the logs of `uv lock -v` by looking for
    `Splitting resolution on ...`, `Solving split ... (requires-python: ...)`, and
    `Split ... resolution took ...`.
One difficulty in a forking resolver is that where splits occur depends on the order in which packages are seen, which in turn depends on the preferences, e.g., from `uv.lock`. So it is possible for the resolver to solve the requirements with specific forks, write this to the lockfile, and, when invoked again, find a different solution because the preferences result in different fork points. To avoid this, the `resolution-markers` of each fork and of each package that diverges between forks are written to the lockfile. When performing a new resolution, the forks from the lockfile are used to ensure the resolution is stable. When requirements change, new forks may be added to the saved forks.
## Requires-python
To ensure that a resolution with `requires-python = ">=3.9"` can actually be installed for the included Python versions, uv requires that all dependencies have the same minimum Python version. Package versions that declare a higher minimum Python version, e.g., `requires-python = ">=3.10"`, are rejected, because a resolution with that version can't be installed on Python 3.9. For simplicity and forward compatibility, only lower bounds in `requires-python` are respected. For example, if a package declares `requires-python = ">=3.8,<4"`, the `<4` marker is not propagated to the entire resolution.
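The lower-bound rule can be sketched as a comparison of minimum versions. This is an illustrative simplification, not uv's implementation: the helpers `min_python` and `compatible` are hypothetical, and real specifier parsing is more involved.

```python
def min_python(requires_python):
    """Extract the lower bound from a requires-python string like '>=3.8,<4'."""
    for clause in requires_python.split(","):
        clause = clause.strip()
        if clause.startswith(">="):
            return tuple(int(part) for part in clause[2:].split("."))
    return (0,)  # no lower bound declared

def compatible(project, package):
    """Accept a package only if its minimum Python doesn't exceed the project's.

    Upper bounds such as '<4' are ignored, mirroring the rule that only
    lower bounds in requires-python are respected.
    """
    return min_python(package) <= min_python(project)

compatible(">=3.9", ">=3.8,<4")  # accepted: 3.8 <= 3.9, and '<4' is ignored
compatible(">=3.9", ">=3.10")    # rejected: would be uninstallable on Python 3.9
```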
## Wheel tags
While uv's resolution is universal with respect to environment markers, this doesn't extend to wheel tags. Wheel tags can encode the Python version, Python implementation, operating system, and architecture. For example, `torch-2.4.0-cp312-cp312-manylinux2014_aarch64.whl` is only compatible with CPython 3.12 on arm64 Linux with `glibc>=2.17` (per the `manylinux2014` policy), while `tqdm-4.66.4-py3-none-any.whl` works with all Python 3 versions and interpreters on any operating system and architecture. Most projects have a universally compatible source distribution that can be used when attempting to install a package that has no compatible wheel, but some packages, such as `torch`, don't publish a source distribution. In that case, an installation on, e.g., Python 3.13, or on an uncommon operating system or architecture, will fail with a complaint that there is no matching wheel.
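The tag check can be illustrated with a toy sketch. This is an assumed simplification: real tag matching (compressed tag sets, manylinux policies, tag priority) is defined by the packaging specs; here an environment is just a set of accepted `(python, abi, platform)` triples.

```python
def wheel_tags(filename):
    """Split a wheel filename into its (python, abi, platform) tag triple."""
    stem = filename[: -len(".whl")]
    # The last three dash-separated fields are the tags; earlier fields are
    # the distribution name, version, and optional build number.
    *_, python_tag, abi_tag, platform_tag = stem.split("-")
    return python_tag, abi_tag, platform_tag

# Abridged, hypothetical tag set a CPython 3.12 arm64 Linux installer accepts.
env = {
    ("cp312", "cp312", "manylinux2014_aarch64"),
    ("py3", "none", "any"),
}

wheel_tags("tqdm-4.66.4-py3-none-any.whl") in env
# True: a universal wheel matches any environment.
wheel_tags("torch-2.4.0-cp312-cp312-manylinux2014_aarch64.whl") in env
# True for this environment, but the same wheel fails elsewhere:
wheel_tags("torch-2.4.0-cp312-cp312-win_amd64.whl") in env
# False: the platform tag doesn't match, and without a source distribution
# the installation fails with "no matching wheel".
```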