## Summary
I don't think it's worth maintaining this separate test harness for ~18
tests, when they can all be tested in the `uv` package itself. Let's
drop the maintenance burden.
## Summary
Externally, development dependencies are currently structured as a flat
list of PEP 508-compatible requirements:
```toml
[tool.uv]
dev-dependencies = ["werkzeug"]
```
When locking, we lock all development dependencies; when syncing, users
can provide `--dev`.
Internally, though, we model them as dependency groups, similar to
Poetry, PDM, and [PEP 735](https://peps.python.org/pep-0735). This
enables us to change out the user-facing frontend without changing the
internal implementation, once we've decided how these should be exposed
to users.
A few important decisions encoded in the implementation (which we can
change later):
1. Groups are enabled globally, for all dependencies. This differs from
extras, which are enabled on a per-requirement basis. Note, however,
that we'll only discover groups for uv-enabled packages anyway.
2. Installing a group requires installing the base package. We rely on
this in PubGrub to ensure that we resolve to the same version (even
though we only expect groups to come from workspace dependencies, which
are unique). Either way, that invariant is encoded in the resolver right
now, just as it is for extras.
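As a rough illustration of that internal model (a minimal sketch with hypothetical names, not uv's actual types), the flat `dev-dependencies` list folds into a single named group:
```rust
use std::collections::BTreeMap;

/// Hypothetical internal model: dependencies keyed by group name.
#[derive(Debug)]
struct DependencyGroups {
    groups: BTreeMap<String, Vec<String>>,
}

impl DependencyGroups {
    /// Fold the user-facing flat `dev-dependencies` list into a `"dev"` group.
    fn from_dev_dependencies(dev_dependencies: Vec<String>) -> Self {
        let mut groups = BTreeMap::new();
        groups.insert("dev".to_string(), dev_dependencies);
        Self { groups }
    }
}

fn main() {
    let groups = DependencyGroups::from_dev_dependencies(vec!["werkzeug".to_string()]);
    println!("{groups:?}");
}
```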
## Summary
As with other `.egg-info` and `.egg-link` distributions, it's easy to
support _existing_ `.egg-link` files. Like pip, we refuse to uninstall
these, since there's no way to know which files are part of the
distribution.
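For intuition, an `.egg-link` file is just a text file whose first line points at the project directory. A minimal sketch of resolving one (illustrative, not uv's actual implementation):
```rust
use std::fs;
use std::io;
use std::path::{Path, PathBuf};

/// Resolve an `.egg-link` file to the project directory it points at:
/// by convention, the first line of the file is that path.
fn read_egg_link(path: &Path) -> io::Result<PathBuf> {
    let contents = fs::read_to_string(path)?;
    let target = contents.lines().next().ok_or_else(|| {
        io::Error::new(io::ErrorKind::InvalidData, "empty .egg-link file")
    })?;
    Ok(PathBuf::from(target.trim()))
}
```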
Closes https://github.com/astral-sh/uv/issues/4059.
## Test Plan
Verify that `vtk`, which is installed via an `.egg-link` file, is
included here:
```
> conda create -c conda-forge -n uv-test python h5py vtk pyside6 cftime psutil
> cargo run pip freeze --python /opt/homebrew/Caskroom/miniforge/base/envs/uv-test/bin/python
aiohttp @ file:///Users/runner/miniforge3/conda-bld/aiohttp_1713964997382/work
aiosignal @ file:///home/conda/feedstock_root/build_artifacts/aiosignal_1667935791922/work
attrs @ file:///home/conda/feedstock_root/build_artifacts/attrs_1704011227531/work
cached-property @ file:///home/conda/feedstock_root/build_artifacts/cached_property_1615209429212/work
cftime @ file:///Users/runner/miniforge3/conda-bld/cftime_1715919201099/work
frozenlist @ file:///Users/runner/miniforge3/conda-bld/frozenlist_1702645558715/work
h5py @ file:///Users/runner/miniforge3/conda-bld/h5py_1715968397721/work
idna @ file:///home/conda/feedstock_root/build_artifacts/idna_1713279365350/work
loguru @ file:///Users/runner/miniforge3/conda-bld/loguru_1695547410953/work
msgpack @ file:///Users/runner/miniforge3/conda-bld/msgpack-python_1715670632250/work
multidict @ file:///Users/runner/miniforge3/conda-bld/multidict_1707040780513/work
numpy @ file:///Users/runner/miniforge3/conda-bld/numpy_1707225421156/work/dist/numpy-1.26.4-cp312-cp312-macosx_11_0_arm64.whl
pip==24.0
psutil @ file:///Users/runner/miniforge3/conda-bld/psutil_1705722460205/work
pyside6==6.7.1
setuptools==70.0.0
shiboken6==6.7.1
vtk==9.2.6
wheel==0.43.0
wslink @ file:///home/conda/feedstock_root/build_artifacts/wslink_1716591560747/work
yarl @ file:///Users/runner/miniforge3/conda-bld/yarl_1705508643525/work
```
Closes https://github.com/astral-sh/uv/issues/4072
This was an accidental change in
https://github.com/astral-sh/uv/pull/4029: I had updated the pull
request to support virtual environments, as requested in review, and
forgot to revert that change.
Separately, we shouldn't fail if `VIRTUAL_ENV` points to an empty
directory and `SystemPython::Allowed` is used; I'll address that in a
follow-up.
## Summary
If `Requires-Python` is omitted in `uv lock` or `uv run`, we now warn
and default to `>=` the current minor version.
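A minimal sketch of the fallback (illustrative; the real code reads the version from the discovered interpreter):
```rust
/// Derive the default `Requires-Python` bound from the interpreter's
/// major/minor version when `pyproject.toml` omits it.
fn default_requires_python(major: u64, minor: u64) -> String {
    format!(">={major}.{minor}")
}

fn main() {
    // E.g., running under Python 3.12 defaults to `>=3.12`.
    assert_eq!(default_requires_python(3, 12), ">=3.12");
}
```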
Closes https://github.com/astral-sh/uv/issues/4050.
## Summary
This PR adds the `Requires-Python` range to the user's lockfile. This
will enable us to validate it when installing.
For now, we repeat the `Requires-Python` back to the user;
alternatively, though, we could detect the supported Python range
automatically.
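As a sketch of the validation this enables (simplified to a `>=` lower bound on a major/minor pair; real specifier handling is richer):
```rust
/// Check an interpreter's (major, minor) version against the locked
/// `Requires-Python` lower bound. Tuples compare lexicographically.
fn satisfies_lower_bound(interpreter: (u64, u64), lower_bound: (u64, u64)) -> bool {
    interpreter >= lower_bound
}

fn main() {
    // A lockfile recording `requires-python = ">=3.12"` would accept a
    // Python 3.12 environment and reject a Python 3.11 one.
    assert!(satisfies_lower_bound((3, 12), (3, 12)));
    assert!(!satisfies_lower_bound((3, 11), (3, 12)));
}
```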
See: https://github.com/astral-sh/uv/issues/4052
We had previously changed the signature of
`DependencyProvider::get_dependencies` to return an iterator instead of
a hashmap, to avoid the cost of converting our dependencies `Vec` into
pubgrub's hashmap. These changes are difficult to make in pubgrub
since they complicate the public API. But we don't actually use
`DependencyProvider::get_dependencies`, so we rolled those
customizations back in https://github.com/pubgrub-rs/pubgrub/pull/226
and instead opted to change only the internal
`add_incompatibility_from_dependencies` method that we exposed in our
fork. This aligns us closer with upstream, removes the design questions
about `DependencyProvider` from our concerns, and reduces our diff (not
counting the GitHub Action) to +36 -12.
Retry the flaky `compile_invalid_pyc_invalidation_mode` test if it
fails. I don't understand why this is happening in the first place (we
have code that should catch those cases, and those cases shouldn't be
happening at all), and this is a terrible hack, but it fixes the test
flakes.
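The hack amounts to something like this (a sketch, not the actual test code):
```rust
/// Run a flaky check, retrying once on failure.
fn with_one_retry<T, E>(mut attempt: impl FnMut() -> Result<T, E>) -> Result<T, E> {
    attempt().or_else(|_| attempt())
}
```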
Fixes #2672
## Summary
This PR separates "gathering the requirements" from the rest of the
metadata (e.g., version), which isn't required when installing a
package's _dependencies_ (as opposed to installing the package itself).
It thus ensures that we don't need to build a package when a static
`pyproject.toml` is provided in `pip compile`.
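A minimal sketch of the static path (illustrative, using the `toml` crate): if `[project.dependencies]` is present, the requirements can be gathered without building the package.
```rust
/// Extract `[project.dependencies]` from a `pyproject.toml`, if the
/// metadata is declared statically.
fn static_dependencies(contents: &str) -> Option<Vec<String>> {
    let value: toml::Value = toml::from_str(contents).ok()?;
    value
        .get("project")?
        .get("dependencies")?
        .as_array()?
        .iter()
        .map(|dep| dep.as_str().map(str::to_string))
        .collect()
}
```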
Closes https://github.com/astral-sh/uv/issues/4040.
We know that `[project]` must exist for each workspace member, so we can
store it directly and avoid going through an `.and_then()` chain when we
need to access it. This requires cloning the struct, due to the lack of
self-referential structs in Rust. An alternative would be taking the
`Project` out of the `PyProjectToml` instead, but that could be
confusing when passing the `PyProjectToml` around.
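A minimal sketch of the resulting shape (hypothetical field names):
```rust
#[derive(Clone)]
struct Project {
    name: String,
}

struct PyProjectToml {
    project: Option<Project>,
}

struct WorkspaceMember {
    /// The full `pyproject.toml`, kept for passing around.
    pyproject_toml: PyProjectToml,
    /// A clone of `[project]`, stored directly so access doesn't need
    /// an `.and_then()` chain.
    project: Project,
}

impl WorkspaceMember {
    fn new(pyproject_toml: PyProjectToml) -> Option<Self> {
        // Every workspace member must have a `[project]` table.
        let project = pyproject_toml.project.clone()?;
        Some(Self { pyproject_toml, project })
    }
}
```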
## Summary
Thankfully this is pretty rare since `pip sync` is usually run on `pip
compile` output, and `pip compile` never outputs markers.
Closes https://github.com/astral-sh/uv/issues/4044
## Summary
Closes #3955
Adds explicit support for `NO_COLOR` and `FORCE_COLOR` via
`GlobalSettings`.
The precedence, per the respective specs, is now `NO_COLOR` >
`FORCE_COLOR` > `color`.
This PR is a backup plan pending rust-cli/anstyle#192.
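A minimal sketch of that precedence (illustrative; uv's actual settings plumbing differs, and `FORCE_COLOR` semantics vary slightly across tools):
```rust
use std::env;

/// Resolve whether color should be used: `NO_COLOR` wins over
/// `FORCE_COLOR`, which wins over the `--color` flag.
fn color_enabled(color_flag: Option<bool>) -> bool {
    if env::var_os("NO_COLOR").is_some_and(|v| !v.is_empty()) {
        return false;
    }
    if env::var_os("FORCE_COLOR").is_some_and(|v| !v.is_empty()) {
        return true;
    }
    // Fall back to the flag; the real code would auto-detect a tty here.
    color_flag.unwrap_or(true)
}
```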
## Test Plan
Tested all cases locally for now; I didn't see existing tests for
`GlobalSettings` parsing.
## Summary
Given `install -e dagster`, we need to assume that the user meant
`install -e ./dagster`, even though `install dagster` should _not_ be
treated as `install ./dagster`. I suspect pip will change this in the
future (since `pip install dagster` does _not_ mean `pip install
./dagster`), but for now it's what users expect.
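A minimal sketch of the heuristic (illustrative, not uv's actual requirement parsing):
```rust
use std::path::{Component, Path, PathBuf};

/// Treat a bare editable target like `dagster` as the relative path
/// `./dagster`, since editable requirements must point at directories.
fn normalize_editable(target: &str) -> PathBuf {
    let path = Path::new(target);
    if path.components().count() == 1
        && matches!(path.components().next(), Some(Component::Normal(_)))
    {
        Path::new(".").join(path)
    } else {
        path.to_path_buf()
    }
}

fn main() {
    assert_eq!(normalize_editable("dagster"), Path::new("./dagster"));
    assert_eq!(normalize_editable("./dagster"), Path::new("./dagster"));
}
```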
Closes https://github.com/astral-sh/uv/issues/3994.
This commit adds a template and does some light surgery on `generate.py`
to make use of that template. In particular, the universal tests require
using the "workspace"-aware version of `uv`, so we can't use the
existing `uv pip {compile,install}` tests.
This is just the result of running `./scripts/sync_scenarios.sh` from
the root of the `uv` repository.
When I initially ran this, it produced some tests with snapshots that
weren't being updated. It turned out this was because the tests weren't
running, as they were gated behind the `python-patch` feature. In this
commit, we add `python-patch` to our `cargo insta` command, which should
update all relevant snapshots.
There are still some superfluous updates as a result of a spell checker
being run on generated files.
This is a quick fix for some flaky tests where the output in the
lockfile isn't stable, because marker expressions can be combined in a
non-deterministic order.
I believe there is ongoing work to simplify marker expressions which
will help here, but I think some kind of normalization is still
ultimately needed to guarantee consistent output.
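A minimal sketch of the kind of normalization meant here (illustrative; real marker expressions are trees, not strings):
```rust
/// Normalize a disjunction of marker clauses by sorting and deduplicating,
/// so serialization doesn't depend on the order in which they were combined.
fn normalize_or_markers(mut clauses: Vec<String>) -> String {
    clauses.sort();
    clauses.dedup();
    clauses.join(" or ")
}

fn main() {
    let a = normalize_or_markers(vec![
        "os_name == 'nt'".to_string(),
        "sys_platform == 'linux'".to_string(),
    ]);
    let b = normalize_or_markers(vec![
        "sys_platform == 'linux'".to_string(),
        "os_name == 'nt'".to_string(),
    ]);
    // Same clauses in a different order serialize identically.
    assert_eq!(a, b);
}
```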
I first noticed the flaky test in:
https://github.com/astral-sh/uv/pull/4015
## Summary
Avoid using work-stealing Tokio workers for bytecode compilation,
favoring instead dedicated threads. Tokio's work-stealing does not
really benefit us because we're spawning Python workers and scheduling
tasks ourselves — we don't want Tokio to re-balance our workers. Because
we're doing scheduling ourselves and compilation is a primarily
compute-bound task, we can also create dedicated runtimes for each
worker and avoid some synchronization overhead.
This is part of a general desire to avoid relying on Tokio's
work-stealing scheduler and be smarter about our workload. In this case,
we already had the custom scheduler in place; Tokio was just getting in
the way (though the overhead is very minor).
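A minimal sketch of the pattern (illustrative, not uv's actual scheduler): one dedicated OS thread per worker, each owning a single-threaded Tokio runtime.
```rust
use std::thread;

/// Spawn `num_workers` dedicated threads, each with its own
/// current-thread Tokio runtime, so tasks are never re-balanced across
/// workers by a work-stealing scheduler.
fn spawn_workers(num_workers: usize) -> Vec<thread::JoinHandle<()>> {
    (0..num_workers)
        .map(|worker_id| {
            thread::spawn(move || {
                let runtime = tokio::runtime::Builder::new_current_thread()
                    .enable_all()
                    .build()
                    .expect("failed to build worker runtime");
                runtime.block_on(async move {
                    // Drive this worker's compilation loop here, e.g. talk
                    // to a spawned Python worker over a channel.
                    let _ = worker_id;
                });
            })
        })
        .collect()
}
```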
## Test Plan
This improves performance by ~5% on my machine.
```
$ hyperfine --warmup 1 --prepare "target/profiling/uv-dev clear-compile .venv" "target/profiling/uv-dev compile .venv" "target/profiling/uv-dev-dedicated compile .venv"
Benchmark 1: target/profiling/uv-dev compile .venv
Time (mean ± σ): 1.279 s ± 0.011 s [User: 13.803 s, System: 2.998 s]
Range (min … max): 1.261 s … 1.296 s 10 runs
Benchmark 2: target/profiling/uv-dev-dedicated compile .venv
Time (mean ± σ): 1.220 s ± 0.021 s [User: 13.997 s, System: 3.330 s]
Range (min … max): 1.198 s … 1.272 s 10 runs
Summary
target/profiling/uv-dev-dedicated compile .venv ran
1.05 ± 0.02 times faster than target/profiling/uv-dev compile .venv
$ hyperfine --warmup 1 --prepare "target/profiling/uv-dev clear-compile .venv" "target/profiling/uv-dev compile .venv" "target/profiling/uv-dev-dedicated compile .venv"
Benchmark 1: target/profiling/uv-dev compile .venv
Time (mean ± σ): 3.631 s ± 0.078 s [User: 47.205 s, System: 4.996 s]
Range (min … max): 3.564 s … 3.832 s 10 runs
Benchmark 2: target/profiling/uv-dev-dedicated compile .venv
Time (mean ± σ): 3.521 s ± 0.024 s [User: 48.201 s, System: 5.392 s]
Range (min … max): 3.484 s … 3.566 s 10 runs
Summary
target/profiling/uv-dev-dedicated compile .venv ran
1.03 ± 0.02 times faster than target/profiling/uv-dev compile .venv
```
We retry several kinds of network request failures, but it's often
unclear whether a request was retried or not
(https://github.com/astral-sh/uv/issues/3514#issuecomment-2105485773).
This PR adds a small intermediary layer that logs all transient request
failures, adding the `DEBUG Transient request failure` lines:
```
DEBUG Searching for Python interpreter in virtual environments
DEBUG Found CPython 3.12.3 at `/home/konsti/projects/uv/.venv/bin/python3` (active virtual environment)
DEBUG Using Python 3.12.3 environment at .venv/bin/python3
DEBUG Acquired lock for `.venv`
DEBUG At least one requirement is not satisfied: tqdm
DEBUG Using registry request timeout of 30s
DEBUG Solving with target Python version 3.12.3
DEBUG Adding direct dependency: tqdm*
DEBUG No cache entry for: https://pypi.org/simple/tqdm/
DEBUG Transient request failure for https://pypi.org/simple/tqdm/, retrying: Request error: error sending request for url (https://pypi.org/simple/tqdm/)
Caused by: error sending request for url (https://pypi.org/simple/tqdm/)
Caused by: client error (Connect)
Caused by: dns error: failed to lookup address information: Name or service not known
Caused by: failed to lookup address information: Name or service not known
DEBUG Transient request failure for https://pypi.org/simple/tqdm/, retrying: Request error: error sending request for url (https://pypi.org/simple/tqdm/)
Caused by: error sending request for url (https://pypi.org/simple/tqdm/)
Caused by: client error (Connect)
Caused by: dns error: failed to lookup address information: Name or service not known
Caused by: failed to lookup address information: Name or service not known
DEBUG Transient request failure for https://pypi.org/simple/tqdm/, retrying: Request error: error sending request for url (https://pypi.org/simple/tqdm/)
Caused by: error sending request for url (https://pypi.org/simple/tqdm/)
Caused by: client error (Connect)
Caused by: dns error: failed to lookup address information: Name or service not known
Caused by: failed to lookup address information: Name or service not known
DEBUG Transient request failure for https://pypi.org/simple/tqdm/, retrying: Request error: error sending request for url (https://pypi.org/simple/tqdm/)
Caused by: error sending request for url (https://pypi.org/simple/tqdm/)
Caused by: client error (Connect)
Caused by: dns error: failed to lookup address information: Name or service not known
Caused by: failed to lookup address information: Name or service not known
error: Could not connect, are you offline?
Caused by: error sending request for url (https://pypi.org/simple/tqdm/)
Caused by: client error (Connect)
Caused by: dns error: failed to lookup address information: Name or service not known
Caused by: failed to lookup address information: Name or service not known
```
I opted for multi-line logging to show the complete error trace, since
`Transient request failure for https://pypi.org/simple/tqdm/,
retrying: Request error: error sending request for url
(https://pypi.org/simple/tqdm/)` alone doesn't tell you the actual
problem (a DNS error).
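A minimal sketch of that multi-line formatting (illustrative): walking `std::error::Error::source()` surfaces the root cause.
```rust
use std::error::Error;
use std::fmt::Write;

/// Render an error and its full chain of causes, one `Caused by:` line
/// per level, so the root cause (e.g. a DNS failure) is visible.
fn format_error_chain(err: &dyn Error) -> String {
    let mut output = err.to_string();
    let mut source = err.source();
    while let Some(cause) = source {
        let _ = write!(output, "\n  Caused by: {cause}");
        source = cause.source();
    }
    output
}
```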
Note that running with `-v` will not show messages about retry backoff
timing, but running with `RUST_LOG=debug` now shows a complete picture:
```
DEBUG starting new connection: https://pypi.org/
DEBUG resolving host="pypi.org"
DEBUG Transient request failure for https://pypi.org/simple/tqdm/, retrying: Request error: error sending request for url (https://pypi.org/simple/tqdm/)
Caused by: error sending request for url (https://pypi.org/simple/tqdm/)
Caused by: client error (Connect)
Caused by: dns error: failed to lookup address information: Name or service not known
Caused by: failed to lookup address information: Name or service not known
WARN Retry attempt #2. Sleeping 528.728192ms before the next attempt
```
Fixes #3572
## Summary
See #4013
The `uv pip ...` commands load workspace settings from `pyproject.toml`
and `uv.toml`. A warning is supposed to be emitted when parsing fails,
but it is never actually output:
https://github.com/astral-sh/uv/blob/main/crates/uv-workspace/src/workspace.rs#L38-L61
The reason is that the flag to display warnings is only enabled after
the workspace settings have been loaded. This PR turns on the warning
flag before loading the workspace settings.
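A minimal sketch of the ordering bug (illustrative; uv's actual warning machinery differs):
```rust
use std::sync::atomic::{AtomicBool, Ordering};

static WARNINGS_ENABLED: AtomicBool = AtomicBool::new(false);

/// Warnings emitted while the flag is still off are silently dropped.
fn warn_user(message: &str) {
    if WARNINGS_ENABLED.load(Ordering::Relaxed) {
        eprintln!("warning: {message}");
    }
}

fn main() {
    // Fixed order: enable warnings *before* loading workspace settings.
    WARNINGS_ENABLED.store(true, Ordering::Relaxed);
    warn_user("Failed to parse `pyproject.toml`: ...");
}
```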
## Test Plan
`pyproject.toml` for the test:
```toml
[project]
name = "sample"
version = "0.0.0"
dependencies = ["ruff"]
[tool.uv.pip]
# `index-url` expects a string.
index-url = true
```
Command output (before the fix):
```bash
uv pip compile pyproject.toml
Resolved 1 package in 383ms
# This file was autogenerated by uv via the following command:
# uv pip compile pyproject.toml
ruff==0.4.7
# via sample (pyproject.toml)
```
Command output (after the fix):
```bash
uv pip compile pyproject.toml
warning: Failed to parse `pyproject.toml`: TOML parse error at line 7, column 13
|
7 | index-url = true
| ^^^^
invalid type: boolean `true`, expected a string
Resolved 1 package in 107ms
# This file was autogenerated by uv via the following command:
# uv pip compile pyproject.toml
ruff==0.4.7
# via sample (pyproject.toml)
```
The workspace test directories can be used both in tests and directly
for developing/debugging. In the latter case, a virtualenv and a
lockfile may be left behind, and we shouldn't copy them when running
tests. By using the `ignore` crate instead of manual recursion, we
exclude those files.
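A minimal sketch of the exclusion using the `ignore` crate (the file names are the ones mentioned above):
```rust
use std::ffi::OsStr;
use std::path::{Path, PathBuf};

use ignore::WalkBuilder;

/// Walk a workspace test directory, skipping the virtualenv and lockfile
/// that may be left over from manual debugging runs.
fn copyable_entries(root: &Path) -> Vec<PathBuf> {
    WalkBuilder::new(root)
        .filter_entry(|entry| {
            let name = entry.file_name();
            name != OsStr::new(".venv") && name != OsStr::new("uv.lock")
        })
        .build()
        .filter_map(Result::ok)
        .map(|entry| entry.into_path())
        .collect()
}
```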
## Summary
Instead of checking whether the target and installed versions are the
same, we model the data such that the target version is only present if
it was specified by the user. This also means that we correctly say
"requested version" even if the two happen to be the same.
## Summary
I believe this is no longer necessary. Part of the problem here is that
we can't _know_ the full set of available Python versions, especially
once we start resolving against a `Requires-Python` rather than a fixed
set of two versions.
## Summary
Previously, when we locked something like `flask[dotenv]`, we created
two separate distributions in the lockfile: one for `flask`, which
included the base dependencies, and one for `flask[dotenv]`, which
included the base dependencies _and_ the `dotenv` dependencies. This was
easy to implement, but it meant that we were duplicating all of the
distribution files for every extra, and duplicating all of the base
dependencies for every extra.
This PR normalizes the data such that we now have one entry per
distribution (i.e., `ExtraName` was removed from `DistributionId`), with
an optional dependencies table with an entry per extra, like:
```toml
[[distribution]]
name = "project"
version = "0.1.0"
source = "editable+file://[TEMP_DIR]/"
sdist = { url = "file://[TEMP_DIR]/" }
[[distribution.dependencies]]
name = "anyio"
version = "3.7.0"
source = "registry+https://pypi.org/simple"
[distribution.optional-dependencies]
[[distribution.optional-dependencies.test]]
name = "iniconfig"
version = "2.0.0"
source = "registry+https://pypi.org/simple"
```
This requires a bit more work upfront, because we now need to merge
multiple packages from the `petgraph` representation when creating the
lockfile.
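A minimal sketch of the merge (illustrative types): fold per-extra nodes from the resolution graph into one entry per distribution.
```rust
use std::collections::BTreeMap;

/// One lockfile entry per distribution: base dependencies stored once,
/// extra dependencies keyed by extra name.
#[derive(Default)]
struct LockedDist {
    dependencies: Vec<String>,
    optional_dependencies: BTreeMap<String, Vec<String>>,
}

/// Merge graph nodes of the form (package, extra, dependencies) into a
/// single entry per package, instead of one entry per (package, extra).
fn merge_nodes(
    nodes: Vec<(String, Option<String>, Vec<String>)>,
) -> BTreeMap<String, LockedDist> {
    let mut dists: BTreeMap<String, LockedDist> = BTreeMap::new();
    for (name, extra, dependencies) in nodes {
        let dist = dists.entry(name).or_default();
        match extra {
            // `flask[dotenv]` contributes only the extra's dependencies...
            Some(extra) => dist
                .optional_dependencies
                .entry(extra)
                .or_default()
                .extend(dependencies),
            // ...while `flask` contributes the base dependencies.
            None => dist.dependencies.extend(dependencies),
        }
    }
    dists
}
```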
Closes https://github.com/astral-sh/uv/issues/3916.
## Summary
Once we use a _range_ rather than a precise version, it won't actually
make sense to return a version here. It's no longer required, so I'm
removing it.