## Summary
Right now, if you have a `requirements.txt` with a pre-release, but the
`requirements.in` does not have a pre-release marker for that dependency,
we drop the pre-release. (In the selector, we end up returning
`AllowPrerelease::IfNecessary`, the default.)
I played with a few ways of solving this... The first was to remove that
guard altogether. But if we do that,
`universal_transitive_disjoint_prerelease_requirement` fails (we use
`1.17.0rc1` in both forks, when it should only apply to one of the two).
The second was to do that, but also avoid pushing pre-releases as
preferences when we solve a fork. But then
`universal_disjoint_prereleases` fails, because we return a different
pre-release in each fork.
Finally, I settled on allowing existing pre-releases in forks if they
have no markers on them, i.e., they are "global" preferences. I believe
a preference is marker-free if and only if it came from an existing lockfile.
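To illustrate the rule, here's a minimal sketch with hypothetical types (not the actual selector internals; only `AllowPrerelease::IfNecessary` comes from the description above):
```rust
#[derive(Debug, PartialEq)]
enum AllowPrerelease {
    Yes,
    IfNecessary,
}

struct Preference {
    version: String,
    /// `None` means the preference has no marker attached, i.e., it's "global".
    marker: Option<String>,
}

/// Respect an existing pre-release inside a fork only if the preference is
/// marker-free (i.e., it came in as a global preference from a lockfile).
fn allow_prerelease(preference: &Preference, is_prerelease: bool, solving_fork: bool) -> AllowPrerelease {
    if is_prerelease && (!solving_fork || preference.marker.is_none()) {
        AllowPrerelease::Yes
    } else {
        AllowPrerelease::IfNecessary
    }
}

fn main() {
    let preference = Preference { version: "1.17.0rc1".to_string(), marker: None };
    assert_eq!(allow_prerelease(&preference, true, true), AllowPrerelease::Yes);
}
```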
Closes https://github.com/astral-sh/uv/issues/5729.
## Summary
If _both_ nodes in the derivation tree are proxies, we need to remove
the _entire_ node. So, the function now returns an `Option<Tree>`
instead of taking `&mut Tree`.
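As a minimal sketch of why the signature changes (hypothetical tree type, not the real derivation tree): when both children of a derived node are proxies, there's nothing left to keep, so the caller needs `None` back rather than an in-place mutation.
```rust
enum Tree {
    Leaf { proxy: bool },
    Derived(Box<Tree>, Box<Tree>),
}

/// Returns `None` when the entire (sub)tree consists of proxy nodes.
fn strip_proxies(tree: Tree) -> Option<Tree> {
    match tree {
        Tree::Leaf { proxy: true } => None,
        leaf @ Tree::Leaf { .. } => Some(leaf),
        Tree::Derived(left, right) => match (strip_proxies(*left), strip_proxies(*right)) {
            // Both children are proxies: remove the entire node.
            (None, None) => None,
            // One child survives: collapse the derived node into it.
            (Some(child), None) | (None, Some(child)) => Some(child),
            (Some(left), Some(right)) => Some(Tree::Derived(Box::new(left), Box::new(right))),
        },
    }
}

fn main() {
    let tree = Tree::Derived(
        Box::new(Tree::Leaf { proxy: true }),
        Box::new(Tree::Leaf { proxy: true }),
    );
    assert!(strip_proxies(tree).is_none());
}
```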
Closes https://github.com/astral-sh/uv/issues/5618.
## Summary
Partially resolves #5561. I haven't added overrides support yet, but I can
add it tomorrow if the current approach for constraints is ok.
## Test Plan
`cargo test`
Manually checked trace logs after changing the constraints.
## Summary
As-is, if you have a workspace with mixed `requires-python`
requirements, resolution will _never_ succeed, since we'll use the union
as the `requires-python` bound (i.e., take the lowest value), and fail
when we see the package that only supports some more narrow range.
This PR modifies the behavior to take the intersection (i.e., the
highest value), so if you have one package that supports Python 3.12 and
later, and another that supports Python 3.8 and later, we lock for
Python 3.12. If you try to sync or run with Python 3.8, we raise an
error, since the lockfile will be incompatible with that request.
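As a sketch of the new behavior, with each member's requirement reduced to a `(major, minor)` lower bound rather than the real `Requires-Python` machinery:
```rust
/// The lockfile bound is the intersection of the members' requirements,
/// i.e., the highest lower bound.
fn lock_requires_python(members: &[(u8, u8)]) -> Option<(u8, u8)> {
    members.iter().copied().max()
}

fn main() {
    // One member supports Python 3.8+, another 3.12+: lock for 3.12+.
    assert_eq!(lock_requires_python(&[(3, 8), (3, 12)]), Some((3, 12)));
}
```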
Konsti has a write-up in https://github.com/astral-sh/uv/issues/5594
that outlines what could be a longer-term strategy.
Closes https://github.com/astral-sh/uv/issues/5578.
By resolving each fork from the lockfile individually and by using
preferences for the current fork, we solve the instability in #5180.
I've tested this locally and will add the packse test scenarios upstack.
Part of
https://github.com/astral-sh/uv/issues/5180#issuecomment-2247696198
## Summary
Given a fork like:
```
pylint < 3 ; sys_platform == 'darwin'
pylint > 2 ; sys_platform != 'darwin'
```
Solving the top branch will typically yield a solution that also
satisfies the bottom branch, due to maximum version selection (while the
inverse isn't true).
To quote an example from the docs:
```rust
// If there's no difference, prioritize forks with upper bounds. We'd prefer to solve
// `numpy <= 2` before solving `numpy >= 1`, since the resolution produced by the former
// might work for the latter, but the inverse is unlikely to be true due to maximum
// version selection. (Selecting `numpy==2.0.0` would satisfy both forks, but selecting
// the latest `numpy` would not.)
```
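Purely as an illustration (not the actual prioritization code), a comparator in that spirit, where `has_upper_bound` stands in for the real analysis of a fork's requirements:
```rust
use std::cmp::Ordering;

struct Fork {
    marker: &'static str,
    has_upper_bound: bool,
}

/// Prefer forks whose requirements carry an upper bound, so they solve first.
fn compare_forks(a: &Fork, b: &Fork) -> Ordering {
    // `true` sorts before `false`: upper-bounded forks come first.
    b.has_upper_bound.cmp(&a.has_upper_bound)
}

fn main() {
    let mut forks = vec![
        Fork { marker: "sys_platform != 'darwin'", has_upper_bound: false },
        Fork { marker: "sys_platform == 'darwin'", has_upper_bound: true },
    ];
    forks.sort_by(compare_forks);
    assert_eq!(forks[0].marker, "sys_platform == 'darwin'");
}
```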
Closes https://github.com/astral-sh/uv/issues/4926 for now.
## Summary
First part of: https://github.com/astral-sh/uv/issues/4926. We should
solve forks that _don't_ expand the world of supported versions (e.g.,
`python_version >= '3.11'` enables us to select new packages, since we
narrow the supported version range).
This PR represents a different approach to marker propagation in an
attempt to unblock #4640. In particular, instead of propagating markers
when forks are created, we wait until resolution is complete to
propagate all markers to all dependencies in each fork. This ends up
being both more robust (we should never miss anything) and simpler to
implement because it doesn't require mutating a `PubGrubPackage` (which
was pretty annoying). I think the main downside here is that this can
sometimes add markers where they aren't needed.
This actually winds up making quite a few snapshot changes. I went
through each of them. Some of them look like legitimate bug fixes. Some
of them look like superfluous additions. And some of them look like they
would be removed if we had perfect marker normalization. But I don't
think any of the changes are _wrong_.
Consider the following packse scenario:
```toml
[root]
requires = [
"a>=1.0.0 ; python_version < '3.10'",
"a>=1.1.0 ; python_version >= '3.10'",
"a>=1.2.0 ; python_version >= '3.11'",
]
[packages.a.versions."1.0.0"]
[packages.a.versions."1.1.0"]
[packages.a.versions."1.2.0"]
```
On current `main`, this produces a dependency on `a` that looks like
this:
```toml
dependencies = [
{ name = "fork-overlapping-markers-basic-a", marker = "python_version < '3.10' or python_version >= '3.11'" },
]
```
But the marker expression is clearly wrong here, since it implies that
`a` isn't installed at all for Python 3.10. With this PR, the above
dependency becomes:
```toml
dependencies = [
{ name = "fork-overlapping-markers-basic-a" },
]
```
That is, it's unconditional, which I believe is correct here, since there
aren't any other constraints on which version to select.
The specific bug here is that when we found overlapping dependency
specifications for the same package *within* a pre-existing fork, we
intersected all of their marker expressions instead of unioning them.
That in turn resulted in incorrect marker expressions.
While this doesn't fix any known bug on the issue tracker (like #4640),
it does appear to fix a couple of our snapshot tests, and it fixes a basic
test case I came up with while working on #4732.
For the packse scenario test: https://github.com/astral-sh/packse/pull/206
When a fork occurs, we divide not just the dependencies that
provoked a fork into distinct groups, but we also add the
corresponding sibling dependencies to each fork. Previously,
while we tracked markers on the fork itself, the markers on
individual dependencies only corresponded to the markers
written in the dependency specification.
This meant that the sibling dependencies that got added to
each fork would not themselves have markers attached to them.
This in turn meant they would not have markers associated with
them in the lock file.
In many cases, this is actually okay, because the resolver will
pick a version that is "universal" across all forks. But sometimes
that simply isn't possible, since the marker expressions in the fork
can and do influence resolution. In that case, the same package can
show up in the lock file unconditionally with different versions,
which is a big no-no.
So in this commit, after we determine the forks, we intersect the
markers on each fork with each of its dependencies.
This does seem to balloon the marker expressions in some cases.
I plucked one low hanging fruit to avoid doing `x and x` in
trivial cases. (And this eliminated a portion of the snapshot
diffs.) But some pretty gnarly diffs remain.
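Roughly, the step looks like this, with markers as plain strings instead of real marker trees (hypothetical types throughout):
```rust
#[derive(Debug)]
struct Dependency {
    name: String,
    marker: Option<String>,
}

/// Intersect the fork's markers with each of its dependencies, skipping the
/// trivial `x and x` case.
fn apply_fork_markers(fork_marker: &str, deps: &mut [Dependency]) {
    for dep in deps {
        dep.marker = match dep.marker.take() {
            None => Some(fork_marker.to_string()),
            Some(existing) if existing == fork_marker => Some(existing),
            Some(existing) => Some(format!("({existing}) and ({fork_marker})")),
        };
    }
}

fn main() {
    let mut deps = vec![
        Dependency { name: "sibling".to_string(), marker: None },
        Dependency { name: "pylint".to_string(), marker: Some("python_version >= '3.8'".to_string()) },
    ];
    apply_fork_markers("sys_platform == 'darwin'", &mut deps);
    assert_eq!(deps[0].marker.as_deref(), Some("sys_platform == 'darwin'"));
}
```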
This commit also fixes another bug: previously, when we created a fork
to capture the "remaining" universe of an incomplete set of markers, we
left out dependencies that should be included in that fork. We rectify
that here.
Fixes #5086
Partially addresses #4732
Two changes split out from the instability work:
* Break `ResolutionGraph::from_state` into methods before adding new
logic to it.
* Convert the `NodeKey` type in `ResolutionGraph` to a `PackageRef` struct,
another small refactoring to make subsequent changes easier.
No functional changes.
@BurntSushi I hope this doesn't interfere with your work too much; the
`PackageRef` should at least make debugging panics here easier.
In preparation for the preferences changes with forking, change the
method structure in `CandidateSelector`. Split out into its own PR to
avoid merge conflicts with main. No functional changes.
Consider these requirements from pylint 3.2.5:
```
Requires-Dist: dill >=0.3.6 ; python_version >= "3.11"
Requires-Dist: dill >=0.3.7 ; python_version >= "3.12"
```
We will split on the Python version, but then we may pick a version of
`dill` that's `>=0.3.7` in both branches and also end up with an otherwise
identical resolution in both forks. In this case, we merge both forks
and store only their combined markers.
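Sketched with simplified types (markers as strings, a resolution as a name-to-version map), the merge amounts to:
```rust
use std::collections::BTreeMap;

type Resolution = BTreeMap<String, String>;

/// Collapse forks whose resolutions are identical, combining their markers.
fn merge_identical_forks(forks: Vec<(String, Resolution)>) -> Vec<(String, Resolution)> {
    let mut merged: Vec<(String, Resolution)> = Vec::new();
    for (marker, resolution) in forks {
        if let Some(index) = merged.iter().position(|(_, existing)| *existing == resolution) {
            // Same packages and versions in both forks: keep one copy and
            // union the fork markers.
            let combined = format!("({}) or ({})", merged[index].0, marker);
            merged[index].0 = combined;
        } else {
            merged.push((marker, resolution));
        }
    }
    merged
}

fn main() {
    let resolution: Resolution = [("dill".to_string(), "0.3.8".to_string())].into_iter().collect();
    let merged = merge_identical_forks(vec![
        ("python_version >= '3.12'".to_string(), resolution.clone()),
        ("python_version < '3.12'".to_string(), resolution),
    ]);
    assert_eq!(merged.len(), 1);
}
```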
## Summary
Normalize the order of marker expressions on construction. This removes
the distinction between expressions like `os_name == 'Linux'` vs.
`'Linux' == os_name` throughout the codebase. One caveat here is that
the `in` operator does not have a direct inverse, so we introduce
`MarkerOperator::Contains` to handle that case.
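A simplified sketch of the flip (hypothetical enum, not the real `MarkerOperator`), showing why `in` needs a dedicated `Contains` counterpart:
```rust
#[derive(Debug, PartialEq)]
enum Op {
    Equal,
    LessThan,
    GreaterThan,
    In,
    Contains,
}

/// The operator to use when swapping the two sides of an expression.
fn invert(op: Op) -> Op {
    match op {
        Op::Equal => Op::Equal,
        Op::LessThan => Op::GreaterThan,
        Op::GreaterThan => Op::LessThan,
        // `in` has no inverse spelling, hence the `Contains` variant.
        Op::In => Op::Contains,
        Op::Contains => Op::In,
    }
}

/// Normalize `<lhs> <op> <rhs>` so that the marker key is always on the left.
fn normalize(lhs: &str, op: Op, rhs: &str, lhs_is_key: bool) -> (String, Op, String) {
    if lhs_is_key {
        (lhs.to_string(), op, rhs.to_string())
    } else {
        (rhs.to_string(), invert(op), lhs.to_string())
    }
}

fn main() {
    // `'Linux' == os_name` becomes `os_name == 'Linux'`.
    assert_eq!(
        normalize("'Linux'", Op::Equal, "os_name", false),
        ("os_name".to_string(), Op::Equal, "'Linux'".to_string())
    );
}
```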
I wanted to land this smaller change before some more intrusive changes
as it simplifies the existing code quite a bit.
## Summary
Prior to this change, the resolver would panic if we ran with
`--offline` and `--no-deps` and we had cached metadata for a _package_
(i.e., the versions) but no cached metadata for the _distribution_
(i.e., the specific wheel), since we weren't validating that the
returned metadata in the `--no-deps` case was actually successful. (We
need metadata, even for `--no-deps`, so that we can validate extras.)
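Illustratively (hypothetical types, not the actual response enum), the fix amounts to propagating an error instead of assuming success:
```rust
struct Metadata {
    extras: Vec<String>,
}

enum MetadataResponse {
    Found(Metadata),
    /// The distribution's metadata isn't in the cache and we're offline.
    Offline,
}

/// Even with `--no-deps`, the response must be checked so extras can be
/// validated against real metadata.
fn extras_for(response: MetadataResponse) -> Result<Vec<String>, String> {
    match response {
        MetadataResponse::Found(metadata) => Ok(metadata.extras),
        // Previously this case panicked; now it surfaces an error.
        MetadataResponse::Offline => {
            Err("distribution metadata is not available in the cache".to_string())
        }
    }
}

fn main() {
    assert!(extras_for(MetadataResponse::Offline).is_err());
}
```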
## Test Plan
The added test panics on the previous branch.
## Summary
Prefers, in order:
- The major-minor version of an interpreter discovered via `--python`.
- The `requires-python` from the workspace.
- The major-minor version of the default interpreter.
If the `--python` request is a version or a version range, we use that
without fetching an interpreter.
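As a rough sketch, with each source reduced to an optional `(major, minor)` pair (hypothetical inputs, not the actual interpreter discovery types):
```rust
/// Pick the `requires-python` bound for the lock, in order of preference.
fn requires_python_for_lock(
    python_request: Option<(u8, u8)>,
    workspace_requires_python: Option<(u8, u8)>,
    default_interpreter: (u8, u8),
) -> (u8, u8) {
    python_request
        .or(workspace_requires_python)
        .unwrap_or(default_interpreter)
}

fn main() {
    // No `--python` request and no `requires-python`: fall back to the
    // default interpreter's major-minor version.
    assert_eq!(requires_python_for_lock(None, None, (3, 11)), (3, 11));
}
```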
Closes https://github.com/astral-sh/uv/issues/5299.
The `RequirementsTxtComparator` was written assuming there is one
distribution per package name. This changed with universal resolution,
which allows multiple versions or URLs for the same package name. The
sorting we emitted for these new entries was incidental.
With this change, we properly sort these entries by name, version, and
then URL in universal mode.
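Sketched with plain strings in place of the real distribution types (the entries and URL here are made up, and string comparison stands in for proper version ordering):
```rust
use std::cmp::Ordering;

struct Entry {
    name: String,
    version: String,
    url: Option<String>,
}

/// Compare by name, then version, then URL, so multiple entries for the same
/// package in universal mode sort stably.
fn compare(a: &Entry, b: &Entry) -> Ordering {
    a.name
        .cmp(&b.name)
        .then_with(|| a.version.cmp(&b.version))
        .then_with(|| a.url.cmp(&b.url))
}

fn main() {
    let mut entries = vec![
        Entry { name: "torch".to_string(), version: "2.3.0".to_string(), url: Some("https://example.org/torch-2.3.0.whl".to_string()) },
        Entry { name: "torch".to_string(), version: "2.2.2".to_string(), url: None },
    ];
    entries.sort_by(compare);
    assert_eq!(entries[0].version, "2.2.2");
}
```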
This is an output format change for `--universal` users.
## Summary
Excellent find from @konstin. If we have a package that's included in
two forks at the same version, but with different URLs, we need to avoid
collapsing them in the lockfile.
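Conceptually (simplified types, made-up URLs), the lockfile key needs to include the source, not just the name and version:
```rust
use std::collections::BTreeMap;

#[derive(PartialEq, Eq, PartialOrd, Ord)]
struct PackageKey {
    name: String,
    version: String,
    url: Option<String>,
}

fn main() {
    let mut lock: BTreeMap<PackageKey, &str> = BTreeMap::new();
    lock.insert(
        PackageKey {
            name: "example".to_string(),
            version: "1.0.0".to_string(),
            url: Some("https://example.org/a/example-1.0.0.whl".to_string()),
        },
        "fork: sys_platform == 'darwin'",
    );
    lock.insert(
        PackageKey {
            name: "example".to_string(),
            version: "1.0.0".to_string(),
            url: Some("https://example.org/b/example-1.0.0.whl".to_string()),
        },
        "fork: sys_platform != 'darwin'",
    );
    // Same name and version, different URLs: both entries are preserved.
    assert_eq!(lock.len(), 2);
}
```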
Closes https://github.com/astral-sh/uv/issues/5294.
Looking at how to merge identical forks, I found that the `Resolution`
can be simplified by treating it as a store of nodes and edges (plus pins,
which are kept separate since they are per name-version, not per
(virtual-)package-version). This should also make #5294 more apparent,
though I didn't touch it here.
I additionally added some doc comments to the `Resolution` types.
Our everything-method `solve` tends to grow large, so before adding
more logic, I'm moving some code and some logging statements around to
keep it manageable.
I made minor changes to the logging; otherwise there are no logic changes,
only refactoring.
## Summary
`dearpygui==1.9.1 has no wheels are available with a matching Python
ABI` is way better than `the requested Python version (>=3.12.3) does not
satisfy Python>=3.12.3`.
Closes https://github.com/astral-sh/uv/issues/5284.
This fixes resolving packages that publish an invalid stub to PyPI, such
as tensorrt-llm.
## Summary
In https://github.com/astral-sh/uv/pull/3138, we implemented
`unsafe-best-match`. However, it doesn't seem to quite work as expected:
when multiple indexes contain the same version, it's not clear which
index the current code uses. This PR fixes that by using the first index
in which the package appears.
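The intended tie-break, sketched with simplified types (not the actual flat-index machinery):
```rust
use std::cmp::Ordering;

struct Candidate {
    version: String,
    /// Position of the index this candidate came from (0 = first index given).
    index: usize,
}

/// Under `unsafe-best-match`, prefer the highest version; on a tie, prefer
/// the candidate from the earliest index.
fn select(candidates: &[Candidate]) -> Option<&Candidate> {
    candidates.iter().max_by(|a, b| {
        a.version
            .cmp(&b.version)
            // Lower index position wins the tie, so it compares as "greater".
            .then_with(|| b.index.cmp(&a.index))
    })
}

fn main() {
    let candidates = [
        Candidate { version: "0.11.0".to_string(), index: 1 },
        Candidate { version: "0.11.0".to_string(), index: 0 },
    ];
    assert_eq!(select(&candidates).unwrap().index, 0);
}
```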
## Test Plan
```console
$ echo 'tensorrt-llm==0.11.0' | ./target/debug/uv pip compile - --extra-index-url https://pypi.nvidia.com --python-version=3.10 --index-strategy=unsafe-best-match --annotation-style=line
```
## Summary
Given `Requires-Python: ">=3.12.3"`, we were rejecting wheels like
`dearpygui-1.11.1-cp312-cp312-win_amd64.whl`, since `3.12.0` is not
included in `>=3.12.3`. We instead need to test against the major-minor
version of `Requires-Python`.
The easiest way to do this, I think, is to use the `RequiresPython`
struct, which has a single bound that we can truncate to the major-minor version.
This also means that we now allow
`dearpygui-1.11.1-cp312-cp312-win_amd64.whl` for specifiers like
`Requires-Python: "==3.10.*"`. This is incorrect on the surface, but it
does match our semantics for `Requires-Python` elsewhere: we treat it as
a lower bound.
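As a sketch of the check, with versions as plain numeric tuples rather than PEP 440 versions:
```rust
/// Truncate the `Requires-Python` lower bound to its major-minor release
/// before comparing against a wheel's `cpXY` tag.
fn wheel_tag_compatible(requires_python_lower: (u64, u64, u64), wheel_tag: (u64, u64)) -> bool {
    // `>=3.12.3` is treated as `>=3.12` for the purpose of tag matching.
    let truncated = (requires_python_lower.0, requires_python_lower.1);
    wheel_tag >= truncated
}

fn main() {
    // `Requires-Python: >=3.12.3` should accept a `cp312` wheel...
    assert!(wheel_tag_compatible((3, 12, 3), (3, 12)));
    // ...but still reject a `cp311` wheel.
    assert!(!wheel_tag_compatible((3, 12, 3), (3, 11)));
}
```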
Closes https://github.com/astral-sh/uv/issues/5287.
## Summary
This PR modifies the lockfile to include the impactful resolution
settings, like the resolution and pre-release mode. If any of those
values change, we want to ignore the existing lockfile. Otherwise,
`--resolution lowest-direct` will typically have no effect, which is
really unintuitive.
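Conceptually (settings reduced to plain strings here; the mode names are illustrative), the check boils down to comparing the stored options against the current ones:
```rust
#[derive(PartialEq)]
struct ResolverOptions {
    resolution_mode: String,
    prerelease_mode: String,
}

/// Reuse the existing lockfile only if the impactful settings are unchanged.
fn reuse_lockfile(locked: &ResolverOptions, current: &ResolverOptions) -> bool {
    locked == current
}

fn main() {
    let locked = ResolverOptions {
        resolution_mode: "highest".to_string(),
        prerelease_mode: "if-necessary-or-explicit".to_string(),
    };
    let current = ResolverOptions {
        resolution_mode: "lowest-direct".to_string(),
        prerelease_mode: "if-necessary-or-explicit".to_string(),
    };
    // `--resolution lowest-direct` changes the stored mode, so re-resolve.
    assert!(!reuse_lockfile(&locked, &current));
}
```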
Closes https://github.com/astral-sh/uv/issues/5226.