## Summary
The basic idea here is that we can (should) reuse a build environment
across resolution (`prepare_metadata_for_build_wheel`) and installation.
This also happens to solve the build-PyTorch-from-source problem, since
we use a consistent build environment between the invocations.
Since `SourceDistributionBuilder` is stateless, we instead store the
builds on `BuildContext`, and we key them by various properties: the
underlying interpreter, the configuration settings, etc. This just
ensures that if we build the same package twice within a process, we
don't accidentally reuse an incompatible build (virtual) environment.
(Note that we still drop build environments at the end of the command, and
don't attempt to reuse them across processes.)
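For illustration, here's a rough sketch of the keyed-reuse idea (not the actual implementation; the type and field names are hypothetical): builds share a (virtual) environment only if every property in the key matches, and the map lives for the duration of the process.
```rust
use std::collections::HashMap;
use std::path::PathBuf;
use std::sync::Mutex;

// Hypothetical key: two builds may share a build environment only if all of
// these properties match.
#[derive(Clone, PartialEq, Eq, Hash)]
struct BuildKey {
    interpreter: String,                    // the underlying interpreter
    config_settings: Vec<(String, String)>, // sorted configuration settings
    source: String,                         // identity of the source being built
}

// Hypothetical per-process store, held on the build context and dropped at the
// end of the command (never reused across processes).
#[derive(Default)]
struct BuildArena {
    environments: Mutex<HashMap<BuildKey, PathBuf>>,
}

impl BuildArena {
    // Reuse the environment for this key if we've already built with it, or
    // create a fresh one otherwise.
    fn get_or_create(&self, key: BuildKey, create: impl FnOnce() -> PathBuf) -> PathBuf {
        let mut environments = self.environments.lock().unwrap();
        environments.entry(key).or_insert_with(create).clone()
    }
}
```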
Closes #14269.
Prior to this PR, there were numerous places where uv would leak
credentials in logs. We had a way to mask credentials by calling methods
or a recently-added `redact_url` function, but this was not secure by
default. There were a number of other types (like `GitUrl`) that would
leak credentials on display.
This PR adds a `DisplaySafeUrl` newtype to prevent leaking credentials
when logging by default. It takes a maximalist approach, replacing the
use of `Url` almost everywhere. This includes when first parsing config
files, when storing URLs in types like `GitUrl`, and also when storing
URLs in types that in practice will never contain credentials (like
`DirectorySourceUrl`). The idea is to make it easy for developers to do
the right thing and for the compiler to support this (and to minimize
ever having to manually convert back and forth). Displaying credentials
now requires an active step. Note that despite this maximalist approach,
the use of the newtype should be zero cost.
One conspicuous place this PR does not use `DisplaySafeUrl` is in the
`uv-auth` crate. That would require new clones since there are calls to
`request.url()` that return a `&Url`. One option would have been to make
`DisplaySafeUrl` wrap a `Cow`, but this would lead to lifetime
annotations all over the codebase. I've created a separate PR based on
this one (#13576) that updates `uv-auth` to use `DisplaySafeUrl` with
one new clone. We can discuss the tradeoffs there.
Most of this PR just replaces `Url` with `DisplaySafeUrl`. The core is
`uv_redacted/lib.rs`, where the newtype is implemented. To make it
easier to review the rest, here are some points of note:
* `DisplaySafeUrl` has a `Display` implementation that masks
credentials. Currently, it will still display the username when there is
both a username and password. If we think that's the wrong choice, it can
now be changed in one place (see the sketch after this list).
* `DisplaySafeUrl` has a `remove_credentials()` method and also a
`.to_string_with_credentials()` method. This allows us to use it in a
variety of scenarios.
* `IndexUrl::redacted()` was renamed to
`IndexUrl::removed_credentials()` to make it clearer that we are not
masking.
* We convert from a `DisplaySafeUrl` to a `Url` when calling `reqwest`
methods like `.get()` and `.head()`.
* We convert from a `DisplaySafeUrl` to a `Url` when creating a
`uv_auth::Index`. That is because, as mentioned above, I will be
updating the `uv_auth` crate to use this newtype in a separate PR.
* A number of tests (e.g., in `pip_install.rs`) that formerly used
filters to mask tokens in the test output no longer need those filters
since tokens in URLs are now masked automatically.
* The one place we are still knowingly writing credentials to
`pyproject.toml` is when a URL with credentials is passed to `uv add`
with `--raw`. Since displaying credentials is no longer automatic, I
have added a `to_string_with_credentials()` method to the `Pep508Url`
trait. This is used when `--raw` is passed. Adding it to that trait is a
bit weird, but it's the simplest way to achieve the goal. I'm open to
suggestions on how to improve this, but note that because of the way
we're using generic bounds, it's not as simple as just creating a
separate trait for that method.
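To make the core idea concrete, here is a minimal sketch of what a credential-masking newtype can look like (illustrative only, not the `uv_redacted` implementation; it assumes the `url` crate and keeps the username visible, as described above):
```rust
use std::fmt;
use url::Url; // external `url` crate

/// Illustrative newtype whose `Display` masks the password by default.
struct DisplaySafeUrl(Url);

impl fmt::Display for DisplaySafeUrl {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        if self.0.password().is_some() {
            // Clone only when there is actually something to mask.
            let mut masked = self.0.clone();
            let _ = masked.set_password(Some("****"));
            write!(f, "{masked}")
        } else {
            write!(f, "{}", self.0)
        }
    }
}

impl DisplaySafeUrl {
    /// Displaying credentials requires an active step.
    fn to_string_with_credentials(&self) -> String {
        self.0.to_string()
    }
}
```
With logging going through `Display`, credentials are masked by default; the explicit method makes any opt-out visible at the call site.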
Rustfmt introduces a lot of formatting changes in the 2024 edition. To
not break everything all at once, we split out the set of formatting
changes compatible with both the 2021 and 2024 editions by first
formatting with the 2024 style, and then again with the currently used
2021 style.
Notable changes are the formatting of derive macro attributes, the
formatting of lines with overly long strings, and consistently adding
trailing semicolons after statements.
We added this to help with resolving some specific packages, and for
parity with Poetry. But in some cases, this metadata is just wrong, and
at the very least it's unreliable.
Closes https://github.com/astral-sh/uv/issues/8989.
Closes #10945.
Build failures are one of the most common user-facing failures that
aren't "obvious" errors (such as typos) or resolver errors. Currently,
they surface technical details rather than focusing on the fact that the
error occurred in a subprocess and is either a problem with the package
itself or, more likely, with the build environment, e.g. the user needs
to install a dev package or their Python version is incompatible.
The new error message clearly delineates the part that's important (this
is a build backend problem) from the internals (we called this hook) and
is consistent about which part of the dist-building stage failed. We
still have to calibrate the exact wording of the error message. Most of
the implementation is working around the orphan rule, `thiserror` rules,
and trait rules, so it came out as more of a refactoring than intended.
## Summary
On `main`, if you ask for a source but name a subdirectory that doesn't
exist, you just get:
```
{source} does not appear to be a Python project, as neither `pyproject.toml` nor `setup.py` are present in the directory
```
But, in reality, the directory doesn't exist at all.
## Summary
The basic issue here is that `uv add` will compute and store a hash for
each package. But if you later run `uv pip install` _after_ `uv cache
prune --ci`, we need to re-download the source distribution. After
re-downloading, we compare the hashes before and after. But `uv pip
install` doesn't compute any hashes by default. So the hashes "differ"
and we error.
Instead, we need to compute a superset of the already-existing and
newly-requested hashes when performing this re-download. (In practice,
this will always be SHA-256.)
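A sketch of the "superset" idea (illustrative names only): take the union of the algorithms we already have digests for and the ones the current command asks for, and compute all of them during the single re-download.
```rust
use std::collections::BTreeSet;

// Illustrative stand-in for uv's hash algorithm type.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
enum HashAlgorithm {
    Sha256,
    Sha384,
    Sha512,
}

/// Union of existing and newly-requested algorithms, so one download can
/// satisfy both the stored hashes and the current command.
fn algorithms_to_compute(
    existing: &[HashAlgorithm],
    requested: &[HashAlgorithm],
) -> Vec<HashAlgorithm> {
    existing
        .iter()
        .chain(requested)
        .copied()
        .collect::<BTreeSet<_>>()
        .into_iter()
        .collect()
}

fn main() {
    // In practice, the superset will essentially always be just SHA-256.
    let superset = algorithms_to_compute(&[HashAlgorithm::Sha256], &[]);
    assert_eq!(superset, vec![HashAlgorithm::Sha256]);
}
```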
Closes https://github.com/astral-sh/uv/issues/8929.
## Test Plan
```shell
export UV_CACHE_DIR="$PWD/cache"
rm -rf "$UV_CACHE_DIR" .venv .venv-2 pyproject.toml uv.lock
echo $(uv --version)
uv init --name uv-cache-issue
cargo run add --python 3.13 "pycairo"
uv cache prune --ci
rm -rf .venv .venv-2
uv venv --python python3.11 .venv-2
. .venv-2/bin/activate
cargo run pip install "pycairo"
```
## Summary
At present, when we have a Python requirement and we see a wheel, we
verify that the Python requirement is compatible with the wheel. For
source distributions, though, we verify that both the Python requirement
_and_ the currently-installed version are compatible, because we assume
that we'll need to build the source distribution in order to get
metadata. However, we can often extract source distribution metadata
_without_ building (e.g., if there's a `pyproject.toml` with no dynamic
keys).
This PR thus modifies the source distribution handling to defer that
incompatibility ("We couldn't get metadata for this project, because it
has no static metadata and requires a higher Python version to run /
build") until we actually try to build the package. As a result, you can
now resolve source distribution-only packages using Python versions
below their `requires-python`, as long as they include static metadata.
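The gist of the new compatibility check, as a very rough sketch (illustrative types, not the resolver's real ones): candidates whose metadata can be read statically only need to satisfy the target Python requirement, while candidates that must be built also need a compatible installed interpreter, and that failure now surfaces only when a build is actually attempted.
```rust
// Illustrative sketch; the real resolver types are richer than this.
struct RequiresPython {
    minimum_minor: u32, // e.g. 12 for ">=3.12"
}

impl RequiresPython {
    fn allows(&self, minor: u32) -> bool {
        minor >= self.minimum_minor
    }
}

/// Is this candidate usable during resolution?
fn compatible_for_resolution(
    requires_python: &RequiresPython,
    target_minor: u32,         // the project's Python requirement
    installed_minor: u32,      // the interpreter that would run a build
    has_static_metadata: bool, // a wheel, or an sdist with static metadata
) -> bool {
    if has_static_metadata {
        // No build is needed to read metadata, so only the target matters.
        requires_python.allows(target_minor)
    } else {
        // Metadata requires a build, so the installed interpreter must also be
        // compatible; the error is reported when the build is attempted.
        requires_python.allows(target_minor) && requires_python.allows(installed_minor)
    }
}
```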
Closes https://github.com/astral-sh/uv/issues/8767.
## Summary
`uv cache prune --ci` will remove the source distribution directory. If
we then need to build a _different_ wheel (e.g., you're building a
package that has Python minor version-specific wheels), we fail, because
we expect the source to be there.
Now, if the source is missing, we re-download it. It would be slightly
easier to just _ignore_ that revision, but that would mean we'd also
lose the already-built wheels -- so if you ran against many Python
versions, we'd continuously lose the cached data.
Closes https://github.com/astral-sh/uv/issues/7543.
## Test Plan
We can add tests, but they _need_ to build non-pure Python wheels, which
tends to be expensive...
For reference:
```console
$ cargo run venv --python 3.12
$ cargo run pip install mercurial==6.8.1 --verbose
$ cargo run cache prune --ci
$ cargo run venv --python 3.11
$ cargo run pip install mercurial==6.8.1 --verbose
```
I also did this with a local `.tar.gz` that I downloaded from PyPI.
This is preparatory work for the upload functionality, which needs to
read the METADATA file and attach its parsed contents to the POST
request: We move finding the `.dist-info` from `install-wheel-rs` and
`uv-client` to a new `uv-metadata` crate, so it can be shared with the
publish crate.
I don't properly know if it's the right place since the upload code isn't
ready, but I'm PR-ing it now because it already had merge conflicts.
## Summary
This has bothered me for a while and should be fairly impactful for
users. It requires a weird implementation, since the
distribution-building crate depends on the cache, so the prune
operation can't live in the cache crate, since it needs to access
internals of the distribution-building crate.
Closes https://github.com/astral-sh/uv/issues/7096.
## Summary
This PR adds a `DistExtension` field to some of our distribution types,
which requires that we validate that the file type is known and
supported when parsing (rather than when attempting to unzip). It
removes a bunch of extension parsing from the code too, in favor of
doing it once upfront.
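For illustration, a minimal sketch of the shape of such a type (the real `DistExtension` covers more formats and richer errors): the extension is validated once when the requirement is parsed, so later stages never see an unknown archive type.
```rust
use std::path::Path;

/// Illustrative extension type, validated up front at parse time.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum DistExtension {
    Wheel, // `.whl`
    TarGz, // `.tar.gz` / `.tgz`
    Zip,   // `.zip`
}

#[derive(Debug)]
struct UnsupportedExtension(String);

fn parse_extension(path: &Path) -> Result<DistExtension, UnsupportedExtension> {
    let name = path
        .file_name()
        .and_then(|name| name.to_str())
        .unwrap_or_default();
    if name.ends_with(".whl") {
        Ok(DistExtension::Wheel)
    } else if name.ends_with(".tar.gz") || name.ends_with(".tgz") {
        Ok(DistExtension::TarGz)
    } else if name.ends_with(".zip") {
        Ok(DistExtension::Zip)
    } else {
        // Rejected while parsing, rather than when attempting to unzip.
        Err(UnsupportedExtension(name.to_string()))
    }
}
```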
Closes https://github.com/astral-sh/uv/issues/5858.
## Summary
We allow the use of (e.g.) `.whl.metadata` files when `--no-binary` is
enabled, so it makes sense that we'd also allow wheels to be
downloaded for metadata extraction. So now, we validate `--no-binary` at
install time, rather than metadata-fetch time.
Closes https://github.com/astral-sh/uv/issues/5699.
In #3514 and #2755, users had intermittent network errors, but it was
not always clear whether we had already retried these requests or not.
Building upon https://github.com/TrueLayer/reqwest-middleware/pull/159,
this PR adds the number of retries to the error message, so we can see
at first glance where we're missing retries and where we might need to
change retry settings.
Example error trace:
```
Could not connect, are you offline?
Caused by: Request failed after 3 retries
Caused by: error sending request for url (https://pypi.org/simple/uv/)
Caused by: client error (Connect)
Caused by: dns error: failed to lookup address information: Name or service not known
Caused by: failed to lookup address information: Name or service not known
```
This code is ugly since I'm missing a better pattern for attaching
context to reqwest middleware errors in
https://github.com/TrueLayer/reqwest-middleware/pull/159.
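The shape of the idea, as a sketch (not the middleware code itself): wrap the underlying error in a type that records the retry count, so the count shows up as its own line in the `Caused by:` chain.
```rust
use std::error::Error;
use std::fmt;

/// Illustrative wrapper that records how often the request was retried.
#[derive(Debug)]
struct RetriedError {
    retries: u32,
    source: Box<dyn Error + 'static>,
}

impl fmt::Display for RetriedError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "Request failed after {} retries", self.retries)
    }
}

impl Error for RetriedError {
    fn source(&self) -> Option<&(dyn Error + 'static)> {
        // Keep the underlying reqwest/hyper error in the `Caused by:` chain.
        Some(self.source.as_ref())
    }
}
```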
## Summary
This is what I consider to be the "real" fix for #8072. We now treat
directory and path URLs as separate `ParsedUrl` types and
`RequirementSource` types. This removes a lot of `.is_dir()` forking
within the `ParsedUrl::Path` arms and makes some states impossible
(e.g., you can't have a `.whl` path that is editable). It _also_ fixes
the `direct_url.json` for direct URLs that refer to files. Previously,
we wrote these out as if they were installed as directories, which is
just wrong.
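A sketch of the split (illustrative, with most variants elided): where a single path variant previously needed `.is_dir()` checks at every use site, separate variants make an editable archive unrepresentable.
```rust
use std::path::PathBuf;

// Before: one variant, with `.is_dir()` checks scattered across its arms.
#[allow(dead_code)]
enum ParsedUrlBefore {
    Path { path: PathBuf, editable: bool },
    // ...other variants elided
}

// After (sketch): distinct variants, so a `.whl` path can never be editable.
#[allow(dead_code)]
enum ParsedUrlAfter {
    /// A local archive, e.g. a `.whl` or `.tar.gz` file.
    Archive { path: PathBuf },
    /// A local directory containing a project.
    Directory { path: PathBuf, editable: bool },
    // ...other variants elided
}
```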
## Summary
Adds handling for a few cases to improve interoperability with Poetry:
- If the `project` schema is invalid, we now raise a hard error, rather
than treating the metadata as dynamic and then falling back to the build
backend. This could cause problems, I'm not sure. It's stricter than
before.
- If the project contains `tool.poetry` but omits
`project.dependencies`, we now treat it as dynamic. We could go even
further and treat _any_ Poetry project as dynamic, but then we'd be
ignoring user-declared dependencies, which is also confusing.
Closes https://github.com/astral-sh/uv/issues/4142.
## Summary
Externally, development dependencies are currently structured as a flat
list of PEP 508-compatible requirements:
```toml
[tool.uv]
dev-dependencies = ["werkzeug"]
```
When locking, we lock all development dependencies; when syncing, users
can provide `--dev`.
Internally, though, we model them as dependency groups, similar to
Poetry, PDM, and [PEP 735](https://peps.python.org/pep-0735). This
enables us to change out the user-facing frontend without changing the
internal implementation, once we've decided how these should be exposed
to users.
A few important decisions encoded in the implementation (which we can
change later):
1. Groups are enabled globally, for all dependencies. This differs from
extras, which are enabled on a per-requirement basis. Note, however,
that we'll only discover groups for uv-enabled packages anyway.
2. Installing a group requires installing the base package. We rely on
this in PubGrub to ensure that we resolve to the same version (even
though we only expect groups to come from workspace dependencies anyway,
which are unique). But anyway, that's encoded in the resolver right now,
just as it is for extras.
## Summary
This PR removes the static resolver map:
```rust
static RESOLVED_GIT_REFS: Lazy<Mutex<FxHashMap<RepositoryReference, GitSha>>> =
Lazy::new(Mutex::default);
```
It's replaced by a `GitResolver` struct that we now pass around on the
`BuildContext`. There should be no behavior changes here; it's purely an
internal refactor with an eye towards making it cleaner for us to
"pre-populate" the list of resolved SHAs.
Move `Metadata`, `MetadataLoweringError` and `ArchiveMetadata` into
their own file `metadata.rs` in `uv-distribution`, moving them out of
`lib.rs`. No functional changes.
With this change, we remove the special casing of workspace dependencies
and resolve `tool.uv` for all git and directory distributions. This
gives us support for non-editable workspace dependencies and path
dependencies in other workspaces. It removes a lot of special casing
around workspaces. These changes are the groundwork for supporting
`tool.uv` with dynamic metadata.
The basis for this change is moving `Requirement` from
`distribution-types` to `pypi-types` and the lowering logic from
`uv-requirements` to `uv-distribution`. These changes should be split
out into separate PRs.
I've included an example workspace `albatross-root-workspace2` where
`bird-feeder` depends on `a` from another workspace `ab`. There's a
bunch of failing tests and regressed error messages that still need
fixing. It does fix the audited package count for the workspace tests.
When parsing requirements from any source, directly parse the URL parts
(and reject unsupported URLs) instead of parsing them at a later stage.
This removes a bunch of error branches and concludes the work of parsing
URL parts once and passing them around everywhere.
Many usages of the assembled `VerbatimUrl` remain, but these can be
removed incrementally.
Please review commit-by-commit.
## Summary
Closes https://github.com/astral-sh/uv/issues/3715.
## Test Plan
```
❯ echo "/../test" | cargo run pip compile -
error: Couldn't parse requirement in `-` at position 0
Caused by: path could not be normalized: /../test
/../test
^^^^^^^^
❯ echo "-e /../test" | cargo run pip compile -
error: Invalid URL in `-`: `/../test`
Caused by: path could not be normalized: /../test
Caused by: cannot normalize a relative path beyond the base directory
```
## Summary
I found some of these too bare (e.g., when they _just_ show a package
name with no other information). For me, this makes it easier to
differentiate error message copy from data. But open to other opinions.
Take a look at the fixture changes and LMK!
Add a dedicated error type for direct url parsing. This change is broken
out from the new uv requirement type, which uses direct url parsing
internally.
## Summary
If there are no hashes for a given package, we now return
`Validate(&[])` so that the policy is impossible to satisfy. Previously,
we returned `None`, which is always satisfied.
We don't really ever expect to hit this, because we detect this case in
the resolver and raise a different error. But if we have a bug
somewhere, it's better to fail with an error than silently let the
package through.
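A sketch of the distinction (illustrative types): `None` is trivially satisfied, while `Validate(&[])` can never match any computed digest, so a bug elsewhere fails loudly instead of letting the package through.
```rust
// Illustrative stand-in for a computed digest.
#[derive(PartialEq, Eq)]
struct HashDigest(String);

enum HashPolicy<'a> {
    /// No policy: any artifact is acceptable.
    None,
    /// The artifact must match at least one of these digests.
    Validate(&'a [HashDigest]),
}

impl HashPolicy<'_> {
    fn is_satisfied_by(&self, computed: &HashDigest) -> bool {
        match self {
            HashPolicy::None => true,
            // `Validate(&[])`: `any` over an empty slice is `false`, so the
            // policy is impossible to satisfy.
            HashPolicy::Validate(expected) => expected.iter().any(|hash| hash == computed),
        }
    }
}
```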
## Summary
This PR adds support for hash-checking mode in `pip install` and `pip
sync`. It's a large change, both in terms of the size of the diff and
the modifications in behavior, but it's also one that's hard to merge in
pieces (at least, with any test coverage) since it needs to work
end-to-end to be useful and testable.
Here are some of the most important highlights:
- We store hashes in the cache. Where we previously stored pointers to
unzipped wheels in the `archives` directory, we now store pointers with
a set of known hashes. So every pointer to an unzipped wheel also
includes its known hashes.
- By default, we don't compute any hashes. If the user runs with
`--require-hashes`, and the cache doesn't contain those hashes, we
invalidate the cache, redownload the wheel, and compute the hashes as we
go. For users that don't run with `--require-hashes`, there will be no
change in performance. For users that _do_, the only change will be if
they don't run with `--generate-hashes` -- then they may see some
repeated work between resolution and installation, if they use `pip
compile` then `pip sync`.
- Many of the distribution types now include a `hashes` field, like
`CachedDist` and `LocalWheel`.
- Our behavior is similar to pip, in that we enforce hashes when pulling
any remote distributions, and when pulling from our own cache. Like pip,
though, we _don't_ enforce hashes if a distribution is _already_
installed.
- Hash validity is enforced in a few different places:
1. During resolution, we enforce hash validity based on the hashes
reported by the registry. If we need to access a source distribution,
though, we then enforce hash validity at that point too, prior to
running any untrusted code. (This is enforced in the distribution
database.)
2. In the install plan, we _only_ add cached distributions that have
matching hashes. If a cached distribution is missing any hashes, or the
hashes don't match, we don't return them from the install plan.
3. In the downloader, we _only_ return distributions with matching
hashes.
4. The final combination of "things we install" are: (1) the wheels from
the cache, and (2) the downloaded wheels. So this ensures that we never
install any mismatching distributions.
- Like pip, if `--require-hashes` is provided, we require that _all_
distributions are pinned with either `==` or a direct URL. We also
require that _all_ distributions have hashes.
There are a few notable TODOs:
- We don't support hash-checking mode for unnamed requirements. These
should be _somewhat_ rare, though? Since `pip compile` never outputs
unnamed requirements. I can fix this, it's just some additional work.
- We don't automatically enable `--require-hashes` when a hash exists in
the requirements file. We require `--require-hashes`.
Closes #474.
## Test Plan
I'd like to add some tests for registries that report incorrect hashes,
but otherwise: `cargo test`
## Summary
If we build a source distribution from the registry, and the version
doesn't match that of the filename, we should error, just as we do for
mismatched package names. However, we should also backtrack here, which
we didn't previously.
Closes https://github.com/astral-sh/uv/issues/2953.
## Test Plan
Verified that `cargo run pip install docutils --verbose --no-cache
--reinstall` installs `docutils==0.21` instead of the invalid
`docutils==0.21.post1`.
In the logs, I see:
```
WARN Unable to extract metadata for docutils: Package metadata version `0.21` does not match given version `0.21.post1`
```
## Summary
Now that we're resolving metadata more aggressively for local sources,
it's worth doing this. We now pull metadata from the `pyproject.toml`
directly if it's statically-defined.
Closes https://github.com/astral-sh/uv/issues/2629.
## Summary
Some zip files can't be streamed; in particular, `rs-async-zip` doesn't
support data descriptors right now (though it may in the future). This
PR adds a fallback path for such zips that downloads the entire zip file
to disk, then unzips it from disk (which gives us `Seek`).
Closes https://github.com/astral-sh/uv/issues/2216.
## Test Plan
`cargo run pip install --extra-index-url https://buf.build/gen/python
hashb_foxglove_protocolbuffers_python==25.3.0.1.20240226043130+465630478360
--force-reinstall -n`
## Summary
PyPI now supports Metadata 2.2, which means distributions with Metadata
2.2-compliant metadata will start to appear. The upside is that if a
source distribution includes a `PKG-INFO` file with (1) a metadata
version of 2.2 or greater, and (2) no dynamic fields (at least, of the
fields we rely on), we can read the metadata from the `PKG-INFO` file
directly rather than running _any_ of the PEP 517 build hooks.
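The gate, sketched with illustrative types (not the actual parsing code): trust `PKG-INFO` only if its metadata version is at least 2.2 and none of the fields we rely on are declared dynamic.
```rust
// Illustrative stand-in for parsed `PKG-INFO` contents.
struct PkgInfo {
    metadata_version: (u32, u32),
    dynamic: Vec<String>, // field names listed under `Dynamic:`
}

/// Can we use this `PKG-INFO` without running any PEP 517 build hooks?
fn usable_without_build(pkg_info: &PkgInfo, relied_on_fields: &[&str]) -> bool {
    // Metadata 2.2 introduced `Dynamic`, so older versions give us no way to
    // know whether fields like `Requires-Dist` are complete.
    pkg_info.metadata_version >= (2, 2)
        && relied_on_fields.iter().all(|field| {
            !pkg_info
                .dynamic
                .iter()
                .any(|dynamic| dynamic.eq_ignore_ascii_case(field))
        })
}

fn main() {
    let pkg_info = PkgInfo {
        metadata_version: (2, 2),
        dynamic: vec![],
    };
    assert!(usable_without_build(&pkg_info, &["Requires-Dist", "Requires-Python"]));
}
```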
Closes https://github.com/astral-sh/uv/issues/2009.
## Summary
If a user provides a source distribution via a direct path, it can
either be an archive (like a `.tar.gz` or `.zip` file) or a directory.
If the former, we need to extract (e.g., unzip) the contents at some
point. Previously, this extraction was in `uv-build`; this PR lifts it
up to the distribution database.
The first benefit here is that various methods that take the
distribution are now simpler, as they can assume a directory.
The second benefit is that we no longer extract _multiple times_ when
working with a source distribution. (Previously, if we tried to get the
metadata, then fell back and built the wheel, we'd extract the archive
_twice_.)
Error for `uv pip compile scripts/requirements/jupyter.in` without
internet:
**Before**
```
error: error sending request for url (https://pypi.org/simple/jupyter/): error trying to connect: dns error: failed to lookup address information: No such host is known. (os error 11001)
Caused by: error trying to connect: dns error: failed to lookup address information: No such host is known. (os error 11001)
Caused by: dns error: failed to lookup address information: No such host is known. (os error 11001)
Caused by: failed to lookup address information: No such host is known. (os error 11001)
```
**After**
```
error: Could not connect, are you offline?
Caused by: error sending request for url (https://pypi.org/simple/django/): error trying to connect: dns error: failed to lookup address information: Temporary failure in name resolution
Caused by: error trying to connect: dns error: failed to lookup address information: Temporary failure in name resolution
Caused by: dns error: failed to lookup address information: Temporary failure in name resolution
Caused by: failed to lookup address information: Temporary failure in name resolution
```
On Linux, it would be "Temporary failure in name resolution" instead of
"No such host is known. (os error 11001)".
The implementation checks for "dns error" stringly as hyper errors are
opaque. The danger is that this breaks with a hyper update. We still get
the complete error trace since reqwest eagerly inlines errors
(https://github.com/seanmonstar/reqwest/issues/2147).
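The check itself is roughly this (a sketch, not the exact code): walk the `source()` chain and look for the DNS failure text, accepting that the string match is fragile.
```rust
use std::error::Error;

/// Sketch: detect an "are you offline?" situation by inspecting the error chain.
fn is_offline(err: &(dyn Error + 'static)) -> bool {
    let mut current: Option<&(dyn Error + 'static)> = Some(err);
    while let Some(err) = current {
        // Stringly-typed on purpose: hyper's error kinds aren't exposed, so we
        // match on the message and accept that a hyper update could break this.
        if err.to_string().contains("dns error") {
            return true;
        }
        current = err.source();
    }
    false
}
```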
No test since I wouldn't know how to simulate this in cargo test.
Fixes #1971
This PR improves the error message for the problem described in
https://github.com/astral-sh/uv/issues/1376. The original output
duplicates the actual error message and includes lots of noise
(`DirEntry { inner: DirEntry(...) }`).
```
$ uv pip install hexdump==3.3
error: Failed to download and build: hexdump==3.3
Caused by: Failed to extract source distribution: The top level of the archive must only contain a list directory, but it contains: [DirEntry { inner: DirEntry("/home/robin/.cache/uv/.tmpgSvTCk/__main__.py") }, DirEntry { inner: DirEntry("/home/robin/.cache/uv/.tmpgSvTCk/hexdump.py") }, DirEntry { inner: DirEntry("/home/robin/.cache/uv/.tmpgSvTCk/data") }, DirEntry { inner: DirEntry("/home/robin/.cache/uv/.tmpgSvTCk/PKG-INFO") }, DirEntry { inner: DirEntry("/home/robin/.cache/uv/.tmpgSvTCk/setup.py") }, DirEntry { inner: DirEntry("/home/robin/.cache/uv/.tmpgSvTCk/README.txt") }]
Caused by: The top level of the archive must only contain a list directory, but it contains: [DirEntry { inner: DirEntry("/home/robin/.cache/uv/.tmpgSvTCk/__main__.py") }, DirEntry { inner: DirEntry("/home/robin/.cache/uv/.tmpgSvTCk/hexdump.py") }, DirEntry { inner: DirEntry("/home/robin/.cache/uv/.tmpgSvTCk/data") }, DirEntry { inner: DirEntry("/home/robin/.cache/uv/.tmpgSvTCk/PKG-INFO") }, DirEntry { inner: DirEntry("/home/robin/.cache/uv/.tmpgSvTCk/setup.py") }, DirEntry { inner: DirEntry("/home/robin/.cache/uv/.tmpgSvTCk/README.txt") }]
```
This PR removes the duplication and `DirEntry` internals so that the
error message is easier to grasp:
```
$ uv pip install hexdump==3.3
error: Failed to download and build: hexdump==3.3
Caused by: Failed to extract source distribution
Caused by: The top level of the archive must only contain a list directory, but it contains: ["__main__.py", "hexdump.py", "data", "PKG-INFO", "setup.py", "README.txt"]
```
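The cleanup boils down to listing file names instead of debug-printing `DirEntry` values, roughly like this (a sketch, not the exact code):
```rust
use std::io;
use std::path::Path;

/// List only the file names at the top level of the extracted archive, rather
/// than formatting whole `DirEntry` values into the error message.
fn top_level_names(dir: &Path) -> io::Result<Vec<String>> {
    let mut names = Vec::new();
    for entry in std::fs::read_dir(dir)? {
        let entry = entry?;
        names.push(entry.file_name().to_string_lossy().into_owned());
    }
    Ok(names)
}
```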
First, replace all usages in files in-place. I used my editor for this.
If someone wants to add a one-liner that'd be fun.
Then, update directory and file names:
```
# Run twice for nested directories
find . -type d -print0 | xargs -0 rename s/puffin/uv/g
find . -type d -print0 | xargs -0 rename s/puffin/uv/g
# Update files
find . -type f -print0 | xargs -0 rename s/puffin/uv/g
```
Then add all the files again
```
# Add all the files again
git add crates
git add python/uv
# This one needs a force-add
git add -f crates/uv-trampoline
```