## Summary
If we fail to deserialize metadata in the cache, we should just ignore the
entry, rather than failing.
Ideally, this never happens; if it does, it means we missed a cache
version bump. But even then, it should be non-fatal.
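As a minimal sketch of the intended behavior (the function and the
serialization format here are assumptions, not uv's actual code), a failed
read simply degrades to a cache miss:
```rust
// Hypothetical sketch: a failed deserialization degrades to a cache miss,
// so the caller recomputes the value instead of erroring out.
fn read_cached<T: serde::de::DeserializeOwned>(bytes: &[u8]) -> Option<T> {
    match rmp_serde::from_slice(bytes) {
        Ok(value) => Some(value),
        Err(err) => {
            // Likely a missed cache version bump; warn and fall through.
            eprintln!("warning: failed to deserialize cache entry: {err}");
            None
        }
    }
}
```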
Closes https://github.com/astral-sh/uv/issues/11043.
Closes https://github.com/astral-sh/uv/issues/11101.
## Test Plan
Prior to this PR, the following would fail:
- `uvx uv@0.5.25 venv --python 3.12 --cache-dir foo`
- `uvx uv@0.5.25 pip install ./scripts/packages/hatchling_dynamic --no-deps --python 3.12 --cache-dir foo`
- `uvx uv@0.5.18 venv --python 3.12 --cache-dir foo`
- `uvx uv@0.5.18 pip install ./scripts/packages/hatchling_dynamic --no-deps --python 3.12 --cache-dir foo`
We can't go back and fix 0.5.18, but this will prevent such regressions
in the future.
Closes https://github.com/astral-sh/uv/issues/11048
This brings the `PythonEnvironment::from_root` behavior in line with the
rest of uv's Python discovery behavior (and in line with pip). It's not
clear why we were canonicalizing the path here in the first place.
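For illustration only, a hedged sketch of the difference (a hypothetical
helper, not uv's actual discovery code):
```rust
use std::path::{Path, PathBuf};

// Hypothetical sketch: make the root absolute without canonicalizing it.
// `canonicalize` resolves symlinks, which surprises users whose virtual
// environments are reached through symlinked paths.
fn environment_root(root: &Path) -> std::io::Result<PathBuf> {
    // Before: root.canonicalize()
    // After: absolutize relative to the current working directory.
    Ok(std::env::current_dir()?.join(root))
}
```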
## Summary
This PR migrates all of our PyTorch tests to use our own mirror, which
includes upload timestamps that we can use to enforce
`--exclude-newer`, making the tests far more stable over time. (Today,
if you check out old versions of `uv`, many of the PyTorch tests will
fail, since the index contents drift over time.)
Some snapshots changed in this PR (see, e.g.,
`universal_nested_overlapping_local_requirement`). The underlying reason
is that I used the current timestamp when setting upload times in the
PyTorch mirror, but those tests read from both the PyTorch
`--find-links` index _and_ PyPI. I guess we don't omit `--find-links`
entries based on `--exclude-newer`? That might be a bug. But I had to
_increase_ the `--exclude-newer` timestamp to cover the PyTorch mirror's
`--find-links` entries, which meant pulling in some newer packages from
PyPI too. This is fine: it's a one-time churn, and they'll be stable going
forward.
In #10875, I relaxed the error checking during resolution to permit
dependencies like `foo[x1]`, where `x1` was defined to be conflicting.
In exchange, the error was, roughly speaking, moved to installation
time. This was achieved by looking at the full set of enabled extras
and checking whether any conflicts occurred. If so, an error was
reported. This ends up being more expressive and permits more valid
configurations.
However, in so doing, there was a bug in how the accumulated extras
were being passed to conflict marker evaluation. Namely, we weren't
accounting for the fact that if `foo[x1]` was enabled, then that fact
should be carried through to all conflict marker evaluations. This is
because some of those will use things like `extra != 'x1'` to indicate
that it should only be included if an extra *isn't* enabled.
In #10985, this manifested with PyTorch, where `torch==2.4.1` and
`torch==2.4.1+cpu` were being installed simultaneously. Namely, the
choice to install `torch==2.4.1` did not take into account that
the `cpu` extra had been enabled. If it had, then its conflict
marker would have evaluated to `false`. Since it didn't, and since
`torch==2.4.1+cpu` was also being included, we ended up installing both
versions.
The approach I took in this PR was to add an initial breadth-first
traversal over the dependency tree that accumulates all of the activated
extras. Only in the second traversal do we actually build up the
resolution graph.
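As a rough sketch of the two-pass idea, under assumed types rather than
uv's actual data structures:
```rust
use std::collections::{HashMap, HashSet, VecDeque};

// Hypothetical types: each edge in the dependency graph carries the
// extras it activates.
type Graph<'a> = HashMap<&'a str, Vec<(&'a str, Vec<&'a str>)>>;

// Pass 1: a breadth-first traversal that only accumulates activated extras.
fn activated_extras<'a>(graph: &Graph<'a>, root: &'a str) -> HashSet<&'a str> {
    let mut extras = HashSet::new();
    let mut seen = HashSet::from([root]);
    let mut queue = VecDeque::from([root]);
    while let Some(package) = queue.pop_front() {
        for (dep, dep_extras) in graph.get(package).into_iter().flatten() {
            extras.extend(dep_extras.iter().copied());
            if seen.insert(*dep) {
                queue.push_back(*dep);
            }
        }
    }
    extras
}

// Pass 2 (elided) walks the graph again and evaluates each dependency's
// conflict markers against the *complete* extra set from pass 1, so a
// marker like `extra != 'x1'` sees that `x1` was activated elsewhere.
```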
Unfortunately, I have no automatic regression test to include here. The
regression test we _ought_ to include involves `torch`. And while we are
generally fine using those in tests that only generate a lock file, the
regression test here actually requires running installation. And
downloading and installing `torch` in tests is bad juju. So adding a
regression test for this is blocked on better infrastructure for PyTorch
tests. With that said, I did manually verify that the test case in #10985
no longer installs multiple versions of `torch`.
Fixes #10985
## Summary
Fixes a recurring typo.
## Details
There's a typo appearing in a particular sentence...
> Ignore package dependencies, instead only add those packages
explicitly listed on the command line to the resulting **the**
requirements file.
... used in:
* `crates/uv-cli/src/lib.rs`
* `crates/uv-settings/src/settings.rs`
* `docs/reference/settings.md`
* `uv.schema.json`
Docs, comments and a CLI command description seem affected.
This PR fixes it.
---------
Co-authored-by: bujnok01 <bujnok01@heiway.net>
I'm sorry, but I was writing some new content here, and the inconsistent
wrapping was very hard to maintain; I didn't want to muddy the diff there
with reflowing.
I don't think we need to be strict about the reflow (I'm not sure we
even can be) but some of these were very far off from our typical wrap
length.
## Summary
This is a really subtle issue. I'm actually having trouble writing a
test for it, though the problem makes sense. In short, we're sharing the
`SharedState` between the `BuildContext` and the universal resolver. The
`SharedState` includes `VersionMap`, which tracks incompatibilities...
The incompatibilities use the platform tags, which are only present when
resolving from the `BuildContext` (i.e., when resolving build
dependencies). The universal resolver then fails because it sees a bunch
of "incompatible" wheels that are incompatible with the current platform
(i.e., the current Python interpreter).
In short, we _cannot_ share a `SharedState` across two operations that
perform a universal and then a platform-specific resolution. So this PR
adds separate types and fixes up any overlapping usages.
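A minimal sketch of the pattern (hypothetical types, not uv's actual
ones):
```rust
// Hypothetical sketch: making the two resolutions' state distinct types
// turns accidental sharing into a compile error.
struct VersionMaps; // stand-in for the cached per-package version maps

/// State for universal resolution: wheels must *not* be filtered by the
/// current platform's tags.
struct UniversalState(VersionMaps);

/// State for platform-specific resolution (e.g., build dependencies),
/// where wheels *are* filtered by the current interpreter's tags.
struct PlatformState(VersionMaps);
```
With distinct types, reusing platform-specific state in a universal
resolution is rejected by the compiler rather than surfacing at runtime as
spurious incompatibilities.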
A better setup, for the future, would be to somehow share the underlying
simple metadata, and only track separate `VersionMap`s -- since there
_is_ a bunch of data we can share. But that's a larger change.
Closes https://github.com/astral-sh/uv/issues/10977.
## Summary
The issue here boils down to: when we write metadata that came from
building the wheel itself, we aren't setting the `dynamic` field.
We now _always_ set the `dynamic` field when reading, even when we read
cached data.
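A hedged sketch of the shape of the fix, with stand-in types (uv's real
structs differ):
```rust
// Hypothetical stand-in for uv's cached metadata; the real struct differs.
struct CachedMetadata {
    dynamic: Option<bool>,
    // ...other fields elided...
}

// Populate `dynamic` unconditionally on the read path, so cache entries
// written before the fix (which omitted the field) are still correct.
fn finalize(mut cached: CachedMetadata, version_is_dynamic: bool) -> CachedMetadata {
    cached.dynamic = Some(version_is_dynamic);
    cached
}
```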
Closes https://github.com/astral-sh/uv/issues/11047.
## Summary
On Windows, we have a lot of issues with atomic replacement and the like.
There are a bunch of different failure modes, but they generally
involve: trying to persist a file to a path at which the file already
exists, trying to replace or remove a file while someone else is reading
it, etc.
This PR adds locks to all of the relevant database paths. We already use
these advisory locks when building source distributions; now we use them
when unzipping wheels, storing metadata, etc.
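As a minimal sketch of the advisory-lock pattern, here using the `fs2`
crate for illustration (uv has its own lock type):
```rust
use std::fs::OpenOptions;
use std::path::Path;

use fs2::FileExt; // assumes the `fs2` crate; uv's actual lock type differs

// Take an exclusive advisory lock on a sidecar lock file before mutating
// the corresponding cache entry, so concurrent uv processes serialize
// their writes.
fn with_cache_lock<T>(lock_path: &Path, f: impl FnOnce() -> T) -> std::io::Result<T> {
    let lock = OpenOptions::new().create(true).write(true).open(lock_path)?;
    lock.lock_exclusive()?; // blocks until any other holder releases it
    let result = f();
    lock.unlock()?; // also released implicitly when the handle is dropped
    Ok(result)
}
```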
Closes #11002.
## Test Plan
I ran the following script:
```shell
# Define the cache directory path
$cacheDir = "C:\Users\crmar\workspace\uv\cache"
# Clear the cache directory if it exists
if (Test-Path $cacheDir) {
    Remove-Item -Recurse -Force $cacheDir
}
# Create the cache directory again
New-Item -ItemType Directory -Force -Path $cacheDir
# Define the command to run with --cache-dir flag
$command = {
    param ($venvPath)
    # Create a virtual environment at the specified path
    uv venv $venvPath
    # Run the pip install command against the shared --cache-dir
    C:\Users\crmar\workspace\uv\target\profiling\uv.exe pip install flask==1.0.4 --no-binary flask --cache-dir C:\Users\crmar\workspace\uv\cache -v --python $venvPath
}
# Define the paths for the different virtual environments
$venv1 = "C:\Users\crmar\workspace\uv\venv1"
$venv2 = "C:\Users\crmar\workspace\uv\venv2"
$venv3 = "C:\Users\crmar\workspace\uv\venv3"
$venv4 = "C:\Users\crmar\workspace\uv\venv4"
$venv5 = "C:\Users\crmar\workspace\uv\venv5"
# Start the command in parallel five times using Start-Job, each with a different venv
$job1 = Start-Job -ScriptBlock $command -ArgumentList $venv1
$job2 = Start-Job -ScriptBlock $command -ArgumentList $venv2
$job3 = Start-Job -ScriptBlock $command -ArgumentList $venv3
$job4 = Start-Job -ScriptBlock $command -ArgumentList $venv4
$job5 = Start-Job -ScriptBlock $command -ArgumentList $venv5
# Wait for all jobs to complete
$jobs = @($job1, $job2, $job3, $job4, $job5)
$jobs | ForEach-Object { Wait-Job $_ }
# Retrieve the results (optional)
$jobs | ForEach-Object { Receive-Job -Job $_ }
# Clean up the jobs
$jobs | ForEach-Object { Remove-Job -Job $_ }
```
And ensured it succeeded in five straight invocations (whereas on
`main`, it consistently fails with a variety of different traces).
There should be two functional changes here:
- If we receive SIGINT twice, forward it to the child process
- If the `uv run` child process changes its PGID, then forward SIGINT
Previously, we never forwarded SIGINT to a child process. Instead, we
relied on the shell to do so.
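A hedged sketch of the forwarding logic using the `nix` crate (uv's actual
handler differs):
```rust
use nix::sys::signal::{killpg, Signal};
use nix::unistd::{getpgid, getpgrp, Pid};

// Hypothetical sketch: forward SIGINT to the child's process group when
// we've seen a second Ctrl-C, or when the child has moved itself into a
// new group and the shell therefore won't deliver the signal to it.
fn maybe_forward_sigint(child_pid: i32, sigint_count: u32) -> nix::Result<()> {
    let child_pgid = getpgid(Some(Pid::from_raw(child_pid)))?;
    if sigint_count >= 2 || child_pgid != getpgrp() {
        killpg(child_pgid, Signal::SIGINT)?;
    }
    Ok(())
}
```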
On Windows, we still do nothing but eat the Ctrl-C events we receive.
I cannot see an easy way to send them to the child.
The motivation for these changes should be explained in the comments.
Closes https://github.com/astral-sh/uv/issues/10952 (in which Ray
changes its PGID)
Replaces the (much simpler) #10989 with a more comprehensive approach.
See https://github.com/astral-sh/uv/pull/6738#issuecomment-2315451358
for some previous context.
## Summary
When a `pyproject.toml` `[tool.uv.sources.(package)]` section specifies
`workspace` and one or more of (`index`, `git`, `url`, `path`, `rev`,
`tag`, `branch`, `editable`), running `uv` to build or sync the package
gives the error:
```
cannot specify both `index` and `(parameter name)`
```
The error should actually say:
```
cannot specify both `workspace` and `(parameter name)`
```
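A hypothetical sketch of the shape of the fix (not uv's actual code),
keying the message on the source kind the user actually wrote:
```rust
// Hypothetical: name the source kind the user actually specified
// (`workspace` here) instead of hardcoding `index`.
fn conflict_error(source_kind: &str, parameter: &str) -> String {
    format!("cannot specify both `{source_kind}` and `{parameter}`")
}

// conflict_error("workspace", "rev")
// => cannot specify both `workspace` and `rev`
```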
## Test Plan
I ran `cargo test`, and all tests still passed.
## Summary
I think the "available versions" list may not be filtered on
`--exclude-newer`, since excluded versions are marked as incompatibilities
rather than dropped? In which case, this error message can change as new
versions are published.
First of all, I want to test automatic managed installs (see #10913) and
need to set that up. Second of all, some tests were _implicitly_
downloading interpreters instead of using the one from their context —
which is unexpected and naughty and very slow.
We'll probably end up shipping this, but we were moving ahead despite the
possibility that pip may not even ship it, so let's play it safe and wait
for a bit.