In #10875, I relaxed the error checking during resolution to permit
dependencies like `foo[x1]`, where `x1` was defined to be conflicting.
In exchange, the error was, roughly speaking, moved to installation
time. This was achieved by looking at the full set of enabled extras
and checking whether any conflicts occurred. If so, an error was
reported. This ends up being more expressive and permits more valid
configurations.
However, in so doing, there was a bug in how the accumulated extras
were being passed to conflict marker evaluation. Namely, we weren't
accounting for the fact that if `foo[x1]` was enabled, then that fact
should be carried through to all conflict marker evaluations. This is
because some of those will use things like `extra != 'x1'` to indicate
that it should only be included if an extra *isn't* enabled.
In #10985, this manifested with PyTorch where `torch==2.4.1` and
`torch==2.4.1+cpu` were being installed simultaneously. Namely, the
choice to install `torch==2.4.1` was not taking into account that
the `cpu` extra has been enabled. If it did, then it's conflict
marker would evaluate to `false`. Since it didn't, and since
`torch==2.4.1+cpu` was also being included, we ended up installing both
versions.
The approach I took in this PR was to add an extra breadth-first
traversal over the dependency tree, performed before the existing one,
that accumulates all of the activated extras. Only in the second
traversal do we actually build up the resolution graph.
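Roughly, the shape of that fix (a sketch with hypothetical types and
names, not uv's actual API):

```rust
use std::collections::{HashSet, VecDeque};

// Hypothetical node in the dependency tree: the extras it activates,
// its dependencies, and a conflict marker evaluated against the full
// set of activated extras (e.g. `extra != 'x1'`).
struct Node {
    activated_extras: Vec<String>,
    dependencies: Vec<usize>,
    conflict_marker: fn(&HashSet<String>) -> bool,
}

fn resolve(nodes: &[Node], roots: &[usize]) -> Vec<usize> {
    // Pass 1: breadth-first traversal that only accumulates extras, so
    // that `foo[x1]` anywhere in the tree is visible everywhere.
    let mut extras = HashSet::new();
    let mut queue: VecDeque<usize> = roots.iter().copied().collect();
    let mut seen = HashSet::new();
    while let Some(id) = queue.pop_front() {
        if !seen.insert(id) {
            continue;
        }
        extras.extend(nodes[id].activated_extras.iter().cloned());
        queue.extend(&nodes[id].dependencies);
    }

    // Pass 2: the same traversal, but now each conflict marker sees the
    // *complete* set of activated extras while we build the resolution.
    let mut selected = Vec::new();
    let mut queue: VecDeque<usize> = roots.iter().copied().collect();
    let mut seen = HashSet::new();
    while let Some(id) = queue.pop_front() {
        if !seen.insert(id) || !(nodes[id].conflict_marker)(&extras) {
            continue;
        }
        selected.push(id);
        queue.extend(&nodes[id].dependencies);
    }
    selected
}
```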
Unfortunately, I have no automatic regression test to include here. The
regression test we _ought_ to include involves `torch`. And while we are
generally fine using those in tests that only generate a lock file, the
regression test here actually requires running installation. And
downloading and installing `torch` in tests is bad juju. So adding a
regression test for this is blocked on better infrastructure for PyTorch
tests. With that said, I did manually verify that the test case in #10985
no longer installs multiple versions of `torch`.
Fixes #10985
## Summary
I'm open to not merging this -- I was kind of just interested in what
the API looked like. But the idea is: we can avoid hashing values twice
and unnecessarily cloning within the priority map by using the raw entry
API.
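For context, here's a minimal sketch of the pattern, assuming
hashbrown's raw-entry API (available through hashbrown 0.14; std's
equivalent is nightly-only), with a toy map standing in for the
priority map:

```rust
use hashbrown::hash_map::RawEntryMut;
use hashbrown::HashMap;
use std::hash::BuildHasher;

fn bump_priority(priorities: &mut HashMap<String, u32>, name: &str) {
    // Hash once up front; the raw entry API reuses this hash for both
    // the lookup and (if needed) the insert.
    let hash = priorities.hasher().hash_one(name);
    match priorities.raw_entry_mut().from_key_hashed_nocheck(hash, name) {
        RawEntryMut::Occupied(mut entry) => {
            // Existing entry: no second hash, and no clone of `name`.
            *entry.get_mut() += 1;
        }
        RawEntryMut::Vacant(entry) => {
            // Only allocate an owned key when we actually insert.
            entry.insert_hashed_nocheck(hash, name.to_string(), 0);
        }
    }
}
```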
This collects ALL activated extras while traversing the lock file to
produce a `Resolution` for installation. If any two extras are activated
that are conflicting, then an error is produced.
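A sketch of the check this implies (hypothetical types; in uv, the
conflict items also carry the package they belong to):

```rust
use std::collections::HashSet;

// A declared conflict set: extras that may not be activated together.
type ConflictSet = HashSet<String>;

fn check_conflicts(
    activated: &HashSet<String>,
    conflicts: &[ConflictSet],
) -> Result<(), String> {
    for set in conflicts {
        let hits: Vec<&String> = set.intersection(activated).collect();
        if hits.len() > 1 {
            // Two conflicting extras were activated at once: refuse to
            // produce a `Resolution` for installation.
            return Err(format!("conflicting extras activated: {hits:?}"));
        }
    }
    Ok(())
}
```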
We add a couple of tests to demonstrate the behavior. One case is
desirable (where we conditionally depend on `package[extra]`) and the
other case is undesirable (where we create an uninstallable lock file).
Fixes #9942, Fixes #10590
This will make `package[extra]` work even when `extra` is declared as a
conflicting extra.
Note that this isn't relevant for dependency groups since AFAIK those
can actually only be enabled on the CLI. There is no `package:group`
dependency syntax.
This removes the error that was causing folks problems.
This does result in some snapshot updates that are arguably wrong, or at
least sub-optimal. However, that's intended: the approach we're taking
permits the creation of uninstallable lock files as a side effect. In
the future, we will modify this test to check that, while `uv lock`
succeeds, `uv sync` will always fail.
## Summary
We should only be ignoring changes in `version` for dynamic projects;
for static projects, it should still be enforced. We should also be
invalidating the lockfile if a project goes from static to dynamic or
vice versa.
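Stated as code (a minimal sketch of the rule; the function and flags
are hypothetical):

```rust
/// Does the locked `version` metadata still match the current project?
fn version_still_valid(
    locked_version: Option<&str>,
    locked_is_dynamic: bool,
    current_version: Option<&str>,
    current_is_dynamic: bool,
) -> bool {
    if locked_is_dynamic != current_is_dynamic {
        // A static <-> dynamic transition always invalidates the lockfile.
        return false;
    }
    if current_is_dynamic {
        // Dynamic projects: changes in `version` are ignored.
        return true;
    }
    // Static projects: the version is still enforced.
    locked_version == current_version
}
```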
Closes #10852.
## Summary
If members define disjoint Python requirements, we should error. Right
now, it seems the requirement is treated as unbounded, which leads to
weird behavior.
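The intended check, sketched with a deliberately simplified model of a
Python requirement as a half-open version interval:

```rust
#[derive(Clone, Copy)]
struct PyRange {
    lower: (u8, u8),         // inclusive, e.g. (3, 9) for ">=3.9"
    upper: Option<(u8, u8)>, // exclusive, e.g. Some((3, 12)) for "<3.12"
}

// The workspace requirement is the intersection of all members'
// requirements; if that intersection is empty, we should error rather
// than fall back to an unbounded requirement.
fn intersect(a: PyRange, b: PyRange) -> Option<PyRange> {
    let lower = a.lower.max(b.lower);
    let upper = match (a.upper, b.upper) {
        (Some(x), Some(y)) => Some(x.min(y)),
        (x, y) => x.or(y),
    };
    match upper {
        Some(u) if u <= lower => None, // disjoint: no version satisfies both
        _ => Some(PyRange { lower, upper }),
    }
}
```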
Closes https://github.com/astral-sh/uv/issues/10835.
## Summary
This PR reverts https://github.com/astral-sh/uv/pull/10441 and applies a
different fix for https://github.com/astral-sh/uv/issues/10425.
In #10441, I changed prioritization to visit proxies eagerly. I think
this is actually wrong, since it means we prioritize proxy packages
above _everything_ else. And while a proxy only depends on itself, it
does mean we're selecting a _version_ for the proxy package earlier than
anything else. So, if you look at #10828, we end up choosing a version
for `async-timeout` before we choose a version for `langchain`, despite
the latter being a first-party dependency. (`async-timeout` has a marker
on it, so it has a proxy package, so we solve for it first.)
To fix #10425, we instead need to make sure we visit proxies in the
order we see them. I think the virtual tiebreaker for proxies is
reversed? We want to visit the package we see first, first.
So, in short: this reverts #10441, then corrects the ordering for
visiting proxies.
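Schematically (a hedged sketch; uv's actual priority type is different):

```rust
/// A hypothetical resolver priority: smaller values are visited first,
/// and `seen_at` is the order in which the package was first encountered.
#[derive(PartialEq, Eq, PartialOrd, Ord)]
enum Priority {
    /// Direct (first-party) dependencies, e.g. `langchain` in #10828.
    Direct { seen_at: u64 },
    /// Everything else, proxies included: a proxy (like the marker
    /// wrapper around `async-timeout`) is visited when we reach it,
    /// not eagerly before first-party packages.
    Transitive { seen_at: u64 },
}

fn main() {
    // Among two proxies, the one seen first wins the tiebreak; the bug,
    // in effect, had this comparison reversed.
    let (a, b) = (
        Priority::Transitive { seen_at: 1 },
        Priority::Transitive { seen_at: 2 },
    );
    assert!(a < b);
}
```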
Closes https://github.com/astral-sh/uv/issues/10828.
## Summary
The linked issue actually isn't a bug on main anymore, but it does
require us to take the "slow" path, since setuptools seems to reorder
the extras. This PR adds another normalization step which lets us take
the fast path: https://github.com/astral-sh/uv/issues/10855.
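The normalization is order-insensitivity for extras; a sketch of the
idea (hypothetical helper, operating on plain strings rather than uv's
parsed requirement types):

```rust
fn normalize_extras(extras: &mut Vec<String>) {
    extras.sort_unstable();
    extras.dedup();
}

fn main() {
    let mut ours = vec!["x2".to_string(), "x1".to_string()];
    let mut theirs = vec!["x1".to_string(), "x2".to_string()];
    normalize_extras(&mut ours);
    normalize_extras(&mut theirs);
    // After normalization, the statically-extracted metadata and the
    // setuptools-emitted metadata compare equal: fast path taken.
    assert_eq!(ours, theirs);
}
```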
When support for conflicting extras/groups was initially added, I
stopped short of including the conflict markers in uv's "fork markers"
in the lock file. That is, the fork markers are markers that indicate
the different splits uv took during resolution, which we record, I
believe, to avoid spurious updates to the lock file as a result of
using them as preferences.
One interesting result of omitting the conflict markers from the fork
markers is that sometimes this would result in duplicate markers. In
response, I wrote a function that stripped off the conflict markers and
deduplicated the remainder. My thinking at the time was that it wasn't
clear whether we needed to keep conflict markers around.
It looks like #10783 demonstrates a case where we do, seemingly, need
them. Namely, it's a case where after stripping conflict markers, you
don't end up with duplicate markers, but you do end up with overlapping
markers. Overlapping fork markers are bad juju for the same reason that
overlapping resolver forks are bad juju: you can end up with multiple
versions of the same package in the same environment.
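Concretely, a hypothetical illustration: with conflict markers retained,
the two forks below are disjoint; with them stripped, the forks are no
longer duplicates, but they overlap (e.g. on Linux):
```
# Retained (disjoint):
sys_platform == 'linux' and extra == 'cpu'
sys_platform != 'darwin' and extra != 'cpu'

# Stripped (overlapping on Linux):
sys_platform == 'linux'
sys_platform != 'darwin'
```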
I don't know how to fix overlapping markers without just including the
conflict markers. So that's what this PR does. Because of this, there
will be some churn in lock files, but this only applies to projects that
define conflicting extras.
This PR includes a regression test from #10783. I also manually tried
the original reproduction in #10772 (where adding `numpy<2` caused `uv
sync` to fail), and things worked.
Fixes #10772, Fixes #10783
## Summary
This is a smaller alternative to #10794. If the `Requires-Dist` that we
extract statically doesn't match the lockfile metadata, we now go back
to the distribution database to double-check. Checking the
`Requires-Dist` is itself very cheap, so in the worst case, we're just
paying the same cost as prior to this optimization.
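In pseudocode, the flow looks roughly like this (all names hypothetical):

```rust
#[derive(Clone, PartialEq)]
struct Metadata {
    requires_dist: Vec<String>,
}

// Stand-ins for uv's real machinery (hypothetical):
fn extract_requires_dist_statically(_dist: &str) -> Option<Vec<String>> {
    Some(vec!["anyio>=4".to_string()]) // e.g. read from `pyproject.toml`
}
fn build_full_metadata(_dist: &str) -> Metadata {
    Metadata { requires_dist: vec!["anyio>=4".to_string()] }
}

fn metadata_for(dist: &str, locked: &Metadata) -> Metadata {
    // Cheap static check first: if the extracted `Requires-Dist`
    // matches the lockfile, trust the locked metadata.
    if extract_requires_dist_statically(dist).as_deref()
        == Some(locked.requires_dist.as_slice())
    {
        return locked.clone();
    }
    // Otherwise, go back to the distribution database and double-check;
    // worst case, we pay the same cost as before this optimization.
    build_full_metadata(dist)
}
```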
Closes https://github.com/astral-sh/uv/issues/10776.
## Summary
These are very similar to (and computed in the same way as) the hints we
show during a failed resolution, but for install time.
Closes #10635.
## Test Plan
As an example, when installing PyTorch on macOS with Python 3.13 (wheels
exist for Linux):
```
error: Distribution `torch==2.5.1 @ registry+https://pypi.org/simple` can't be installed because it doesn't have a source distribution or wheel for the current platform
hint: You're on macOS (`macosx_14_0_arm64`), but `torch` (v2.5.1) only has wheels for the following platform: `manylinux1_x86_64`
```
## Summary
The fix I shipped in https://github.com/astral-sh/uv/pull/10690
regressed an important case. If we solve a PyPI branch before a PyTorch
branch, we'll end up respecting the preference, and choosing `2.2.2`
instead of `2.2.2+cpu`.
This PR goes back to ignoring preferences that don't map to the current
index. However, to solve https://github.com/astral-sh/uv/issues/10383,
we need to special-case `requirements.txt`, which can't provide explicit
indexes. So, if a preference comes from `requirements.txt`, we still
respect it.
Closes https://github.com/astral-sh/uv/issues/10772.
## Summary
This has a few effects:
1. We only call `preferences` once, which should be more efficient.
2. We collect `preferences` into a vector when there are multiple. Less
efficient, but pretty rare?
3. We now correctly prefer preferences from the same index.
## Summary
A bug in `requires_python` (which infers the Python requirement from a
marker) was leading us to break an invariant around the relationship
between the marker environment and the Python requirement. This, in
turn, was leading us to drop parts of the environment space when
solving.
Specifically, in the linked example, we generated a fork for
`python_full_version < '3.10' or platform_python_implementation !=
'CPython'`, which was later split into `python_full_version == '3.8.*'`
and `python_full_version == '3.9.*'`, losing the
`platform_python_implementation != 'CPython'` portion.
Closes https://github.com/astral-sh/uv/issues/10669.
## Summary
We can retain the small-size advantage of our new tags by moving the
"unknown tag" case into `WheelTagLarge`. This ensures that we can still
represent unknown tags, but avoid paying the cost for them.
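Schematically (`WheelTagLarge` is the real name; the layout shown here
is a hedged sketch):

```rust
/// Keep the common case small; rare and unknown tags go through the
/// boxed "large" representation, so they stay representable without
/// inflating the size of every tag.
enum WheelTag {
    Small(WheelTagSmall),
    Large(Box<WheelTagLarge>),
}

/// Compact, known tag combinations (details elided).
struct WheelTagSmall;

/// Arbitrary tags, including ones we don't recognize.
struct WheelTagLarge {
    python: String,
    abi: String,
    platform: String,
}
```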
## Summary
I'm inferring that these are like... the older tag format? See, e.g.:
```
soxbindings-0.0.1-pp27-pypy_73-macosx_10_9_x86_64.whl
soxbindings-0.0.1-pp27-pypy_73-manylinux2010_x86_64.whl
soxbindings-0.0.1-pp36-pypy36_pp73-macosx_10_9_x86_64.whl
soxbindings-0.0.1-pp36-pypy36_pp73-manylinux2010_x86_64.whl
```
## Summary
This PR modifies the lockfile to omit versions for source trees that use
`dynamic` versioning, thereby enabling projects to use dynamic
versioning with `uv.lock`.
Prior to this change, dynamic versioning was largely incompatible with
locking, especially for popular tools like `setuptools_scm` -- in that
case, every commit bumps the version, so every commit invalidates the
committed lockfile.
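Illustratively (a hand-written sketch, not exact `uv.lock` output), such
an entry simply omits the `version` field:
```
[[package]]
name = "example"
source = { editable = "." }
# no `version`: under setuptools_scm it would change on every commit
```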
Closes https://github.com/astral-sh/uv/issues/7533.
## Summary
I previously made this required, but we now need to be able to create
these from a lockfile that _omits_ versions for dynamic source trees.
They should still be present in most cases, but it's best-effort.
## Summary
After we resolve, we filter out any wheels that aren't applicable for
the target platforms. So, e.g., we remove macOS wheels if we find that
the user only asked to solve for Windows.
This PR extends the same logic to architectures, so that we filter out
ARM-only wheels when the user is only solving for x86, etc.
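A sketch of the extended filter (hypothetical types): a wheel survives
only if at least one of its platform tags is usable by at least one
requested target, now matching on architecture as well as operating
system.

```rust
#[derive(PartialEq)]
enum Os { Linux, Macos, Windows }

#[derive(PartialEq)]
enum Arch { X64, Arm64 }

struct PlatformTag { os: Os, arch: Arch }

// Drop wheels that no requested target can use, e.g. ARM-only wheels
// when the user is solving only for x86-64.
fn keep_wheel(wheel_tags: &[PlatformTag], targets: &[PlatformTag]) -> bool {
    wheel_tags
        .iter()
        .any(|tag| targets.iter().any(|t| t.os == tag.os && t.arch == tag.arch))
}
```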
Closes #10571.
## Summary
This PR extends the thinking in #10525 to platform tags, and then uses
the structured tag enums everywhere, rather than passing around strings.
I think this is a big improvement! It means we're no longer doing ad hoc
tag parsing all over the place.
## Summary
The idea here is to show both (1) an example of a compatible tag and (2)
the tags that were available, whenever we fail to resolve due to an
absence of matching wheels.
Closes https://github.com/astral-sh/uv/issues/2777.