Commit graph

36 commits

Author SHA1 Message Date
Charlie Marsh
8126a5ed32
Make MarkerTree Copy (#9542)
## Summary

It's just a `usize`. It seems simpler and perhaps even more performant
(?) to make it `Copy`.
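As a rough illustration only (not uv's actual definition): when the marker tree is just an interned index, deriving `Copy` costs nothing and removes `.clone()` noise at call sites.

```rust
/// Illustrative sketch only: a marker tree whose sole payload is an index
/// into a global interner. A single `usize` can derive `Copy` for free.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub struct MarkerTree(usize);
```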
2024-11-30 14:07:07 -05:00
Charlie Marsh
69811837e5
Support recursive extras with marker in pip compile -r pyproject.toml (#9535)
## Summary

Closes https://github.com/astral-sh/uv/issues/9530.
2024-11-30 03:40:22 +00:00
Li-Lun Lin
e485dfd7f1
feat: add support for --no-extra flag and setting (#9387)

## Summary

Resolves #9333  

This pull request introduces support for the `--no-extra` command-line
flag and the corresponding `no-extra` uv setting.

### Behavior
- When `--all-extras` is supplied, the specified extras in `--no-extra`
will be excluded from the installation.
- If `--all-extras` is not supplied, `--no-extra` has no effect and is
safely ignored.
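A minimal sketch of this filtering rule (the helper and its signature are illustrative; the real logic lives in `ExtrasSpecification`):

```rust
/// Illustrative helper only; explicit `--extra` selections are omitted.
fn selected_extras(
    all_extras: bool,
    package_extras: &[String],
    no_extra: &[String],
) -> Vec<String> {
    if all_extras {
        // With `--all-extras`, install everything except the `--no-extra` names.
        package_extras
            .iter()
            .filter(|extra| !no_extra.contains(*extra))
            .cloned()
            .collect()
    } else {
        // Without `--all-extras`, `--no-extra` is safely ignored.
        Vec::new()
    }
}
```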

## Test Plan

Since `ExtrasSpecification::from_args` and
`ExtrasSpecification::extra_names` are the most important parts in the
implementation, I added the following tests in the
`uv-configuration/src/extras.rs` module:

- **`test_no_extra_full`**: Verifies behavior when `no_extra` includes
the entire list of extras.
- **`test_no_extra_partial`**: Tests partial exclusion, ensuring only
specified extras are excluded.
- **`test_no_extra_empty`**: Confirms that no extras are excluded if
`no_extra` is empty.
- **`test_no_extra_excessive`**: Ensures the implementation ignores
`no_extra` values that don't match any available extras.
- **`test_no_extra_without_all_extras`**: Validates that `no_extra` has
no effect when `--all-extras` is not supplied.
- **`test_no_extra_without_package_extras`**: Confirms correct behavior
when no extras are available in the package.
- **`test_no_extra_duplicates`**: Verifies that duplicate entries in
`pkg_extras` or `no_extra` do not cause errors.

---------

Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
2024-11-24 02:25:09 +00:00
Charlie Marsh
dc3f628de1
Respect dynamic extras in uv lock and uv sync (#8091)
## Summary

We can't rely on reading these from the `pyproject.toml`; instead, we
resolve the project metadata (which will typically just require reading
the `pyproject.toml`, but will go through our standard metadata paths).

Closes https://github.com/astral-sh/uv/issues/8071.
2024-10-10 16:00:31 +02:00
Charlie Marsh
14507a1793
Add uv- prefix to all internal crates (#7853)
## Summary

Brings more consistency to the repo and ensures that all crates
automatically show up in `--verbose` logging.
2024-10-01 20:15:32 -04:00
Charlie Marsh
f7835243c5
Only use relative paths in lockfile (#6490)
For users who were previously using absolute paths in the `pyproject.toml`,
this is a behavior change: we now convert all absolute paths in `path`
entries to relative paths. Since I assume that no one relies on absolute
paths in their lockfiles - they are intended to be portable - I'm tagging
this as a bugfix.
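A rough sketch of the conversion, assuming the `path` entry lives under the workspace root (paths outside the root, which `strip_prefix` alone cannot express, are left absolute in this sketch):

```rust
use std::path::{Path, PathBuf};

/// Hypothetical helper: turn an absolute `path` entry into a path relative to
/// the workspace root so the lockfile stays portable.
fn lock_path(workspace_root: &Path, path: &Path) -> PathBuf {
    match path.strip_prefix(workspace_root) {
        Ok(relative) => relative.to_path_buf(),
        // Not under the root: keep the original path in this simplified sketch.
        Err(_) => path.to_path_buf(),
    }
}
```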

Closes https://github.com/astral-sh/uv/pull/6438
Fixes https://github.com/astral-sh/uv/issues/6371
2024-08-23 22:19:10 -04:00
Ibraheem Ahmed
f5110f7b5e
Remove uses of Option<MarkerTree> (#5978)
## Summary

Follow up to https://github.com/astral-sh/uv/pull/5898. This should fix
some of the failures in https://github.com/astral-sh/uv/pull/5887 where
`uv lock --locked` is failing due to `Some(true)` and `None` markers not
comparing equal.
2024-08-10 13:23:29 -04:00
Ibraheem Ahmed
ffd18cc75d
Implement marker trees using algebraic decision diagrams (#5898)
## Summary

This PR rewrites the `MarkerTree` type to use algebraic decision
diagrams (ADD). This has many benefits:
- The diagram is canonical for a given marker function. It is impossible
to create two functionally equivalent marker trees that don't refer to
the same underlying ADD. This also means that any trivially true or
unsatisfiable markers are represented by the same constants.
- The diagram can handle complex operations (conjunction/disjunction) in
polynomial time, as well as constant-time negation.
- The diagram can be converted to a simplified DNF form for user-facing
output.
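The canonicity comes from interning (hash-consing) combined with the ADD reduction rules: every node is deduplicated, so equivalent trees end up sharing a single node id. A toy sketch of just the interning step, not uv's actual `marker/algebra.rs`:

```rust
use std::collections::HashMap;

/// Toy hash-consing interner: structurally identical nodes always receive the
/// same id, so equality checks on trees become integer comparisons.
#[derive(Clone, PartialEq, Eq, Hash)]
enum Node {
    True,
    False,
    /// A decision on a variable, with low/high child node ids.
    If { var: u32, low: usize, high: usize },
}

#[derive(Default)]
struct Interner {
    nodes: Vec<Node>,
    ids: HashMap<Node, usize>,
}

impl Interner {
    fn intern(&mut self, node: Node) -> usize {
        if let Some(&id) = self.ids.get(&node) {
            return id;
        }
        let id = self.nodes.len();
        self.nodes.push(node.clone());
        self.ids.insert(node, id);
        id
    }
}
```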

The new representation gives us a lot more confidence in our marker
operations and simplification, which is proving to be very important
(see https://github.com/astral-sh/uv/pull/5733 and
https://github.com/astral-sh/uv/pull/5163).

Unfortunately, it is not easy to split this PR into multiple commits
because it is a large rewrite of the `marker` module. I'd suggest
reading through the `marker/algebra.rs`, `marker/simplify.rs`, and
`marker/tree.rs` files for the new implementation, as well as the
updated snapshots to verify how the new simplification rules work in
practice. However, a few other things were changed:
- [We now use release-only comparisons for `python_full_version`, where
we previously only did for
`python_version`](https://github.com/astral-sh/uv/blob/ibraheem/canonical-markers/crates/pep508-rs/src/marker/algebra.rs#L522).
I'm unsure how marker operations should work in the presence of
pre-release versions if we decide that this is incorrect.
- [Meaningless marker expressions are now
ignored](https://github.com/astral-sh/uv/blob/ibraheem/canonical-markers/crates/pep508-rs/src/marker/parse.rs#L502).
This means that a marker such as `'x' == 'x'` will always evaluate to
`true` (as if the expression did not exist), whereas we previously
treated this as always `false`. Its negation, however, remains `false`.
- [Unsatisfiable markers are written as `python_version <
'0'`](https://github.com/astral-sh/uv/blob/ibraheem/canonical-markers/crates/pep508-rs/src/marker/tree.rs#L1329).
- The `PubGrubSpecifier` type has been moved to the new `uv-pubgrub`
crate, shared by `pep508-rs` and `uv-resolver`. `pep508-rs` also depends
on the `pubgrub` crate for the `Range` type; we probably want to move
`pubgrub::Range` into a separate crate to break this, but I don't think
that should block this PR (cc @konstin).

There is still some remaining work here that I decided to leave for now
for the sake of unblocking some of the related work on the resolver.
- We still use `Option<MarkerTree>` throughout uv, which is unnecessary
now that `MarkerTree::TRUE` is canonical.
- The `MarkerTree` type is now interned globally and can potentially
implement `Copy`. However, it's unclear if we want to add more
information to marker trees that would make it `!Copy`. For example, we
may wish to attach extra and requires-python environment information to
avoid simplifying after construction.
- We don't currently combine `python_full_version` and `python_version`
markers.
- I also have not spent too much time investigating performance and
there is probably some low-hanging fruit. Many of the test cases I did
run actually saw large performance improvements due to the markers being
simplified internally, reducing the stress on the old `normalize`
routine, especially for the extremely large markers seen in
`transformers` and other projects.

Resolves https://github.com/astral-sh/uv/issues/5660,
https://github.com/astral-sh/uv/issues/5179.
2024-08-09 13:40:02 -04:00
Charlie Marsh
82d94838cb
Implement a --verify-hashes hash-checking mode (#4007)
## Summary

This is an alternative to `--require-hashes` which will validate a hash
if it's present, but ignore requirements that omit hashes or are absent
from the lockfile entirely.

So, e.g., transitive dependencies that are missing will _not_ error; nor
will dependencies that are included but lack a hash.
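A sketch of the difference between the two modes (names and types here are illustrative, not uv's actual implementation):

```rust
/// Illustrative only: how `--verify-hashes` relaxes `--require-hashes`.
#[derive(Clone, Copy)]
enum HashMode {
    RequireHashes,
    VerifyHashes,
}

fn check(mode: HashMode, pinned: Option<&[String]>, actual: &str) -> Result<(), String> {
    match (mode, pinned) {
        // Both modes validate whenever hashes are present.
        (_, Some(hashes)) => {
            if hashes.iter().any(|h| h.as_str() == actual) {
                Ok(())
            } else {
                Err("hash mismatch".to_string())
            }
        }
        // `--require-hashes` rejects requirements without hashes...
        (HashMode::RequireHashes, None) => Err("missing hash".to_string()),
        // ...while `--verify-hashes` simply skips them.
        (HashMode::VerifyHashes, None) => Ok(()),
    }
}
```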

Closes https://github.com/astral-sh/uv/issues/3305.
2024-07-17 21:25:31 +00:00
konsti
abb6ac5127
Support workspace to workspace path dependencies (#4833)
Add support for path dependencies from a package in one workspace to a
package in another workspace, which itself has workspace dependencies.

Say we have a main workspace with packages `a` and `b`, and a second
workspace with `c` and `d`. We have `a -> b`, `b -> c`, `c -> d`. This
would previously lead to a mangled path for `d`, which is now fixed.

Like distribution paths, we split workspace paths into an absolute
install path and a relative (or absolute, if the user provided an
absolute path) lock path.

Part of https://github.com/astral-sh/uv/issues/3943
2024-07-16 20:38:46 +00:00
Charlie Marsh
191f9556b7
Avoid building packages with dynamic versions (#4058)
## Summary

This PR separates "gathering the requirements" from the rest of the
metadata (e.g., version), which isn't required when installing a
package's _dependencies_ (as opposed to installing the package itself).
It thus ensures that we don't need to build a package when a static
`pyproject.toml` is provided in `pip compile`.

Closes https://github.com/astral-sh/uv/issues/4040.
2024-06-05 18:11:58 +00:00
konsti
081f20c53e
Add support for tool.uv into distribution building (#3904)
With the change, we remove the special casing of workspace dependencies
and resolve `tool.uv` for all git and directory distributions. This
gives us support for non-editable workspace dependencies and path
dependencies in other workspaces. It removes a lot of special casing
around workspaces. These changes are the groundwork for supporting
`tool.uv` with dynamic metadata.

The basis for this change is moving `Requirement` from
`distribution-types` to `pypi-types` and the lowering logic from
`uv-requirements` to `uv-distribution`. These changes should be split out
into separate PRs.

I've included an example workspace `albatross-root-workspace2` where
`bird-feeder` depends on `a` from another workspace `ab`. There's a
bunch of failing tests and regressed error messages that still need
fixing. It does fix the audited package count for the workspace tests.
2024-05-31 02:42:03 +00:00
Charlie Marsh
3461c8b585
Move extra specification into uv-configuration (#3897)
## Summary

I need to use this in the resolver (and it's at-home with other, similar
configuration options).
2024-05-29 04:49:57 +00:00
Charlie Marsh
1fc6a59707
Remove special-casing for editable requirements (#3869)
## Summary

There are a few behavior changes in here:

- We now enforce `--require-hashes` for editables, like pip. So if you
use `--require-hashes` with an editable requirement, we'll reject it. I
could change this if it seems off.
- We now treat source tree requirements, editable or not (e.g., both `-e
./black` and `./black`) as if `--refresh` is always enabled. This
doesn't mean that we _always_ rebuild them; but if you pass
`--reinstall`, then yes, we always rebuild them. I think this is an
improvement and is close to how editables work today.

Closes #3844.

Closes #2695.
2024-05-28 15:49:34 +00:00
konsti
4db468e27f
Use VerbatimParsedUrl in pep508_rs (#3758)
When parsing requirements from any source, directly parse the url parts
(and reject unsupported urls) instead of parsing url parts at a later
stage. This removes a bunch of error branches and concludes the work of
parsing url parts once and passing them around everywhere.

Many usages of the assembled `VerbatimUrl` remain, but these can be
removed incrementally.

Please review commit-by-commit.
2024-05-23 19:52:47 +00:00
Ibraheem Ahmed
39af09f09b
Parallelize resolver (#3627)
## Summary

This PR introduces parallelism to the resolver. Specifically, we can
perform PubGrub resolution on a separate thread, while keeping all I/O
on the tokio thread. We already have the infrastructure set up for this
with the channel and `OnceMap`, which makes this change relatively
simple. The big change needed to make this possible is removing the
lifetimes on some of the types that need to be shared between the
resolver and pubgrub thread.

A related PR, https://github.com/astral-sh/uv/pull/1163, found that
adding `yield_now` calls improved throughput. With optimal scheduling we
might be able to get away with everything on the same thread here.
However, in the ideal pipeline with perfect prefetching, the resolution
and prefetching can run completely in parallel without depending on one
another. While this would be very difficult to achieve, even with our
current prefetching pattern we see a consistent performance improvement
from parallelism.

This does also require reverting a few of the changes from
https://github.com/astral-sh/uv/pull/3413, but not all of them. The
sharing is isolated to the resolver task.
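A minimal sketch of the split, assuming a tokio runtime (types and names are illustrative, not uv's actual resolver): CPU-bound resolution runs on its own thread and asks the async side for metadata over a channel.

```rust
use tokio::sync::{mpsc, oneshot};

/// Illustrative request from the resolution thread to the I/O side.
struct MetadataRequest {
    package: String,
    reply: oneshot::Sender<String>,
}

#[tokio::main]
async fn main() {
    let (tx, mut rx) = mpsc::unbounded_channel::<MetadataRequest>();

    // CPU-bound PubGrub-style resolution runs on a dedicated thread.
    let resolver = std::thread::spawn(move || {
        let (reply, response) = oneshot::channel();
        tx.send(MetadataRequest { package: "example".into(), reply }).unwrap();
        // Block this thread (not the runtime) until the I/O side answers.
        let metadata = response.blocking_recv().unwrap();
        format!("resolved using {metadata}")
    });

    // All network I/O stays on the tokio side, answering resolver requests.
    while let Some(request) = rx.recv().await {
        let metadata = format!("metadata for {}", request.package);
        let _ = request.reply.send(metadata);
    }

    // Join the resolver thread without blocking the runtime.
    let result = tokio::task::spawn_blocking(move || resolver.join().unwrap())
        .await
        .unwrap();
    println!("{result}");
}
```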

## Test Plan

On smaller tasks performance is mixed with ~2% improvements/regressions
on both sides. However, on medium-large resolution tasks we see the
benefits of parallelism, with improvements anywhere from 10-50%.

```
./scripts/requirements/jupyter.in
Benchmark 1: ./target/profiling/baseline (resolve-warm)
  Time (mean ± σ):      29.2 ms ±   1.8 ms    [User: 20.3 ms, System: 29.8 ms]
  Range (min … max):    26.4 ms …  36.0 ms    91 runs
 
Benchmark 2: ./target/profiling/parallel (resolve-warm)
  Time (mean ± σ):      25.5 ms ±   1.0 ms    [User: 19.5 ms, System: 25.5 ms]
  Range (min … max):    23.6 ms …  27.8 ms    99 runs
 
Summary
  ./target/profiling/parallel (resolve-warm) ran
    1.15 ± 0.08 times faster than ./target/profiling/baseline (resolve-warm)
```
```
./scripts/requirements/boto3.in   
Benchmark 1: ./target/profiling/baseline (resolve-warm)
  Time (mean ± σ):     487.1 ms ±   6.2 ms    [User: 464.6 ms, System: 61.6 ms]
  Range (min … max):   480.0 ms … 497.3 ms    10 runs
 
Benchmark 2: ./target/profiling/parallel (resolve-warm)
  Time (mean ± σ):     430.8 ms ±   9.3 ms    [User: 529.0 ms, System: 77.2 ms]
  Range (min … max):   417.1 ms … 442.5 ms    10 runs
 
Summary
  ./target/profiling/parallel (resolve-warm) ran
    1.13 ± 0.03 times faster than ./target/profiling/baseline (resolve-warm)
```
```
./scripts/requirements/airflow.in 
Benchmark 1: ./target/profiling/baseline (resolve-warm)
  Time (mean ± σ):     478.1 ms ±  18.8 ms    [User: 482.6 ms, System: 205.0 ms]
  Range (min … max):   454.7 ms … 508.9 ms    10 runs
 
Benchmark 2: ./target/profiling/parallel (resolve-warm)
  Time (mean ± σ):     308.7 ms ±  11.7 ms    [User: 428.5 ms, System: 209.5 ms]
  Range (min … max):   287.8 ms … 323.1 ms    10 runs
 
Summary
  ./target/profiling/parallel (resolve-warm) ran
    1.55 ± 0.08 times faster than ./target/profiling/baseline (resolve-warm)
```
2024-05-17 11:47:30 -04:00
Charlie Marsh
42c3bfa351
Make Directory its own distribution kind (#3519)
## Summary

I think this is overall a good change because it explicitly encodes (in
the type system) something that was previously implicit. I'm not a huge
fan of the names here; open to input.

It covers some of https://github.com/astral-sh/uv/issues/3506 but I
don't think it _closes_ it.
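For illustration only (not the actual names), the shape of the change is adding a dedicated variant rather than overloading a generic path-based source:

```rust
use std::path::PathBuf;

/// Illustrative sketch: encode "local source tree" as its own kind instead of
/// treating it like any other path-based source distribution.
enum SourceDist {
    Registry { name: String, version: String },
    Git { url: String },
    /// A local archive (e.g. an sdist tarball) on disk.
    Path { path: PathBuf },
    /// A local source tree (project directory).
    Directory { path: PathBuf },
}
```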
2024-05-13 10:03:14 -04:00
Ibraheem Ahmed
783df8f657
Consolidate concurrency limits (#3493)
## Summary

This PR consolidates the concurrency limits used throughout `uv` and
exposes two limits, `UV_CONCURRENT_DOWNLOADS` and
`UV_CONCURRENT_BUILDS`, as environment variables.

Currently, `uv` has a number of concurrent streams that it buffers using
relatively arbitrary limits for backpressure. However, many of these
limits are conflated. We run a relatively small number of tasks overall
and should start most things as soon as possible. What we really want to
limit are three separate operations:
- File I/O. This is managed by tokio's blocking pool and we should not
really have to worry about it.
- Network I/O.
- Python build processes.

Because the current limits span a broad range of tasks, it's possible
that a limit meant for network I/O is occupied by tasks performing
builds, reading from the file system, or even waiting on a `OnceMap`. We
also don't limit build processes that end up being required to perform a
download. While this may not pose a performance problem because our
limits are relatively high, it does mean that the limits do not do what
we want, making it tricky to expose them to users
(https://github.com/astral-sh/uv/issues/1205,
https://github.com/astral-sh/uv/issues/3311).

After this change, the limits on network I/O and build processes are
centralized and managed by semaphores. All other tasks are unbuffered
(note that these tasks are still bounded, so backpressure should not be
a problem).
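A minimal sketch of the semaphore-based limits, assuming a tokio runtime (the default values below are made up for illustration):

```rust
use std::sync::Arc;
use tokio::sync::Semaphore;

/// Illustrative only: one semaphore per resource class, sized from the
/// environment variables introduced above.
fn limits() -> (Arc<Semaphore>, Arc<Semaphore>) {
    let downloads = std::env::var("UV_CONCURRENT_DOWNLOADS")
        .ok()
        .and_then(|v| v.parse().ok())
        .unwrap_or(50);
    let builds = std::env::var("UV_CONCURRENT_BUILDS")
        .ok()
        .and_then(|v| v.parse().ok())
        .unwrap_or(4);
    (
        Arc::new(Semaphore::new(downloads)),
        Arc::new(Semaphore::new(builds)),
    )
}

async fn download(downloads: Arc<Semaphore>, url: &str) {
    // Network I/O waits for a permit; other tasks remain unbuffered.
    let _permit = downloads.acquire().await.expect("semaphore closed");
    // ... perform the download while holding the permit ...
    let _ = url;
}
```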
2024-05-10 12:43:08 -04:00
Charlie Marsh
b6bb2ac548
Remove Optional from with_origin API (#3482) 2024-05-09 13:40:51 +00:00
Charlie Marsh
3e4365301e
Track origin for setup.py files and friends (#3481)
## Summary

Ensures that we track the origins for requirements regardless of whether
they come from `pyproject.toml` or `setup.py` or `setup.cfg`.

Closes #3480.
2024-05-09 09:30:40 -04:00
Ibraheem Ahmed
94cf604574
Remove unnecessary uses of DashMap and Arc (#3413)
## Summary

All of the resolver code is run on the main thread, so a lot of the
`Send` bounds and uses of `DashMap` and `Arc` are unnecessary. We could
also switch to using single-threaded versions of `Mutex` and `Notify` in
some places, but there isn't really a crate providing those that I would
be comfortable using.

The `Arc` in `OnceMap` can't easily be removed because of the uv-auth
code which uses the
[reqwest-middleware](https://docs.rs/reqwest-middleware/latest/reqwest_middleware/trait.Middleware.html)
crate, which seems to add unnecessary `Send` bounds because of
`async-trait`. We could duplicate the code and create a `OnceMapLocal`
variant, but I don't feel that's worth it.
2024-05-06 22:30:43 -04:00
konsti
4f87edbe66
Add basic tool.uv.sources support (#3263)
## Introduction

PEP 621 is limited. Specifically, it lacks
* Relative path support
* Editable support
* Workspace support
* Index pinning or any sort of index specification

The semantics of urls are a custom extension: PEP 440 does not specify
how to use git references or subdirectories; instead, pip has a custom
stringly-typed format. We need to somehow support these while still
staying compatible with PEP 621.

## `tool.uv.sources`

Drawing inspiration from cargo, poetry and rye, we add `tool.uv.sources`
or (for now stub only) `tool.uv.workspace`:

```toml
[project]
name = "albatross"
version = "0.1.0"
dependencies = [
  "tqdm >=4.66.2,<5",
  "torch ==2.2.2",
  "transformers[torch] >=4.39.3,<5",
  "importlib_metadata >=7.1.0,<8; python_version < '3.10'",
  "mollymawk ==0.1.0"
]

[tool.uv.sources]
tqdm = { git = "https://github.com/tqdm/tqdm", rev = "cc372d09dcd5a5eabdc6ed4cf365bdb0be004d44" }
importlib_metadata = { url = "https://github.com/python/importlib_metadata/archive/refs/tags/v7.1.0.zip" }
torch = { index = "torch-cu118" }
mollymawk = { workspace = true }

[tool.uv.workspace]
include = [
  "packages/mollymawk"
]

[tool.uv.indexes]
torch-cu118 = "https://download.pytorch.org/whl/cu118"
```

See `docs/specifying_dependencies.md` for a detailed explanation of the
format. The basic gist is that `project.dependencies` is what ends up on
pypi, while `tool.uv.sources` are your non-published additions. We do
support the full range of PEP 508; we just hide it in the docs and
prefer the exploded table for easier readability and less confusion with
actual url parts.

This format should eventually be able to subsume requirements.txt's
current use cases. While we will continue to support the legacy `uv pip`
interface, this is a piece of uv's own top-level interface. Together
with `uv run` and a lockfile format, you should only need to write
`pyproject.toml` and do `uv run`, which generates/uses/updates your
lockfile behind the scenes, no more pip-style requirements involved. It
also lays the groundwork for implementing index pinning.

## Changes

This PR implements:
* Reading and lowering `project.dependencies`,
`project.optional-dependencies` and `tool.uv.sources` into a new
requirements format, including:
  * Git dependencies
  * Url dependencies
  * Path dependencies, including relative and editable
* `pip install` integration
* Error reporting for invalid `tool.uv.sources`
* Json schema integration (works in pycharm, see below)
* Draft user-level docs (see `docs/specifying_dependencies.md`)

It does not implement:
* No `pip compile` testing, deprioritizing towards our own lockfile
* Index pinning (stub definitions only)
* Development dependencies
* Workspace support (stub definitions only)
* Overrides in pyproject.toml
* Patching/replacing dependencies

One technically breaking change is that we now require a user-provided
pyproject.toml to be valid with respect to PEP 621. Included files still
fall back to PEP 517. That means `pip install -r requirements.txt`
requires it to be valid, while `pip install -r requirements.txt` with
`-e .` as content falls back to PEP 517 as before.

## Implementation

The `pep508` requirement is replaced by a new `UvRequirement` (name up
for bikeshedding, not particularly attached to the uv prefix). The still
existing `pep508_rs::Requirement` type is a url format copied from pip's
requirements.txt and doesn't appropriately capture all features we
want/need to support. The bulk of the diff is changing the requirement
type throughout the codebase.

We still use `VerbatimUrl` in many places, where we would expect a
parsed/decomposed url type, specifically:
* Reading core metadata (except for top-level pyproject.toml files); we
fail a step later instead if the url isn't supported.
* Allowed `Urls`.
* `PackageId` with a custom `CanonicalUrl` comparison, instead of
canonicalizing urls eagerly.
* `PubGrubPackage`: We eventually convert the `VerbatimUrl` back to a
`Dist` (`Dist::from_url`), instead of remembering the url.
* Source dist types: We use verbatim url even though we know and require
that these are supported urls we can and have parsed.

I tried to improve the situation by replacing `VerbatimUrl`, but these
changes would require massive, invasive changes (see e.g.
https://github.com/astral-sh/uv/pull/3253). A main problem is the ref
`VersionOrUrl` and applying overrides, which assume the same
requirement/url type everywhere. In its current form, this PR increases
this tech debt.

I've tried to split off PRs and commits, but the main refactoring is
still a single monolith commit to make it compile and the tests pass.

## Demo

Adding
d1ae3b85d5/pyproject.json
as json schema (v7) to pycharm for `pyproject.toml`, you can try the IDE
support already:


![pycharm](599082c7-6be5-41c1-a3cd-516092382f8d)


[dove.webm](c293c272-c80b-459d-8c95-8c46a8d198a1)
2024-05-03 21:10:50 +00:00
Charlie Marsh
4a98839c1d
Move argument normalization into settings construction (#3103)
## Summary

No behavior changes, but the idea here is that we move the argument
normalization code (e.g., create an `Upgrade` struct from `--upgrade`
and `--upgrade-package`) into the `settings.rs` file, where we build the
common settings structs.

This reduces a lot of the logic and duplication across commands in
`main.rs`.
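For illustration, the kind of normalization being moved (the shape of the real `Upgrade` struct may differ):

```rust
/// Illustrative sketch: fold `--upgrade` and `--upgrade-package` into one
/// value while the settings structs are built, instead of in each command.
enum Upgrade {
    /// Neither flag was passed.
    None,
    /// `--upgrade`: allow upgrades for all packages.
    All,
    /// `--upgrade-package`: allow upgrades only for the named packages.
    Packages(Vec<String>),
}

impl Upgrade {
    fn from_args(upgrade: bool, upgrade_package: Vec<String>) -> Self {
        if upgrade {
            Self::All
        } else if !upgrade_package.is_empty() {
            Self::Packages(upgrade_package)
        } else {
            Self::None
        }
    }
}
```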
2024-04-19 23:45:08 +00:00
Charlie Marsh
96c3c2e774
Support unnamed requirements in --require-hashes (#2993)
## Summary

This PR enables `--require-hashes` with unnamed requirements. The key
change is that `PackageId` becomes `VersionId` (since it refers to a
package at a specific version), and the new `PackageId` consists of
_either_ a package name _or_ a URL. The hashes are keyed by `PackageId`,
so we can generate the `RequiredHashes` before we have names for all
packages, and enforce them throughout.
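For illustration only (the real types use proper name, version, and URL types rather than strings), the split looks roughly like this:

```rust
/// A package at a specific version: the old notion of identity.
struct VersionId {
    name: String,
    version: String,
}

/// The new identity that hashes are keyed by: known before a version is
/// resolved, and available even for unnamed URL requirements.
enum PackageId {
    Name(String),
    Url(String),
}
```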

Closes #2979.
2024-04-11 11:26:50 -04:00
Charlie Marsh
006379c50c
Add support for URL requirements in --generate-hashes (#2952)
## Summary

This PR enables hash generation for URL requirements when the user
provides `--generate-hashes` to `pip compile`. While we include the
hashes from the registry already, today, we omit hashes for URLs.

To power hash generation, we introduce a `HashPolicy` abstraction:

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum HashPolicy<'a> {
    /// No hash policy is specified.
    None,
    /// Hashes should be generated (specifically, a SHA-256 hash), but not validated.
    Generate,
    /// Hashes should be validated against a pre-defined list of hashes. If necessary, hashes should
    /// be generated so as to ensure that the archive is valid.
    Validate(&'a [HashDigest]),
}
```

All of the methods on the distribution database now accept this policy,
instead of accepting `&'a [HashDigest]`.
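For example, a caller might branch on the policy along these lines (illustrative only; the actual database methods differ):

```rust
// Illustrative only: deciding whether any hashing work is needed, given the
// `HashPolicy` shown above.
fn hashing_required(policy: &HashPolicy<'_>) -> bool {
    match policy {
        HashPolicy::None => false,
        HashPolicy::Generate | HashPolicy::Validate(_) => true,
    }
}
```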

Closes #2378.
2024-04-10 20:02:45 +00:00
Charlie Marsh
8513d603b4
Return computed hashes from metadata requests (#2951)
## Summary

This PR modifies the distribution database to return both the
`Metadata23` and the computed hashes when clients request metadata.

No behavior changes, but this will be necessary to power
`--generate-hashes`.
2024-04-10 19:31:41 +00:00
Charlie Marsh
1f3b5bb093
Add hash-checking support to install and sync (#2945)
## Summary

This PR adds support for hash-checking mode in `pip install` and `pip
sync`. It's a large change, both in terms of the size of the diff and
the modifications in behavior, but it's also one that's hard to merge in
pieces (at least, with any test coverage) since it needs to work
end-to-end to be useful and testable.

Here are some of the most important highlights:

- We store hashes in the cache. Where we previously stored pointers to
unzipped wheels in the `archives` directory, we now store pointers with
a set of known hashes. So every pointer to an unzipped wheel also
includes its known hashes.
- By default, we don't compute any hashes. If the user runs with
`--require-hashes`, and the cache doesn't contain those hashes, we
invalidate the cache, redownload the wheel, and compute the hashes as we
go. For users that don't run with `--require-hashes`, there will be no
change in performance. For users that _do_, the only change will be if
they don't run with `--generate-hashes` -- then they may see some
repeated work between resolution and installation, if they use `pip
compile` then `pip sync`.
- Many of the distribution types now include a `hashes` field, like
`CachedDist` and `LocalWheel`.
- Our behavior is similar to pip, in that we enforce hashes when pulling
any remote distributions, and when pulling from our own cache. Like pip,
though, we _don't_ enforce hashes if a distribution is _already_
installed.
- Hash validity is enforced in a few different places:
1. During resolution, we enforce hash validity based on the hashes
reported by the registry. If we need to access a source distribution,
though, we then enforce hash validity at that point too, prior to
running any untrusted code. (This is enforced in the distribution
database.)
2. In the install plan, we _only_ add cached distributions that have
matching hashes. If a cached distribution is missing any hashes, or the
hashes don't match, we don't return them from the install plan.
3. In the downloader, we _only_ return distributions with matching
hashes.
4. The final combination of "things we install" are: (1) the wheels from
the cache, and (2) the downloaded wheels. So this ensures that we never
install any mismatching distributions.
- Like pip, if `--require-hashes` is provided, we require that _all_
distributions are pinned with either `==` or a direct URL. We also
require that _all_ distributions have hashes.

There are a few notable TODOs:

- We don't support hash-checking mode for unnamed requirements. These
should be _somewhat_ rare, though, since `pip compile` never outputs
unnamed requirements. I can fix this; it's just some additional work.
- We don't automatically enable `--require-hashes` when a hash exists in
the requirements file. We require `--require-hashes`.

Closes #474.

## Test Plan

I'd like to add some tests for registries that report incorrect hashes,
but otherwise: `cargo test`
2024-04-10 19:09:03 +00:00
Charlie Marsh
00934044aa
Backtrack on distributions with invalid metadata (#2834)
## Summary

Closes https://github.com/astral-sh/uv/issues/2821.
2024-04-05 18:00:48 -04:00
Ikko Eltociear Ashimine
37225cb920
Fix a typo in source_tree.rs (#2836) 2024-04-05 17:53:09 +01:00
Charlie Marsh
189d0d41d0
Remove redirects from the resolver (#2792)
## Summary

Rather than storing the `redirects` on the resolver, this PR just
re-uses the "convert this URL to precise" logic when we convert to a
`Resolution` after-the-fact. I think this is a lot simpler: it removes
state from the resolver, and simplifies a lot of the hooks around
distribution fetching (e.g., `get_or_build_wheel_metadata` no longer
returns `(Metadata23, Option<Url>)`).
2024-04-03 02:43:57 +00:00
Charlie Marsh
b73cfbb0de
Use user_display for source tree error (#2768) 2024-04-02 00:42:11 +00:00
Charlie Marsh
ccd457a37e
Use distribution database and index for all pre-resolution phases (#2766)
## Summary

Ensures that if we resolve any distributions before the resolver, we
cache the metadata in-memory.

_Also_ ensures that we lock (important!) when resolving Git
distributions.
2024-04-02 00:34:13 +00:00
Charlie Marsh
e68cdb1049
Rename to SourceDistributionBuilder (#2750)
## Summary

This is more consistent with `DistributionDatabase`. The order of the
arguments is also now consistent between the two structs.
2024-04-01 02:37:43 +00:00
Zanie Blue
0b08ba1e67
Rename uv-traits and split into separate modules (#2674)
This is driving me a little crazy and is becoming a larger problem in
#2596 where I need to move more types (like `Upgrade` and `Reinstall`)
into this crate. Anything that's shared across our core resolver,
install, and build crates needs to be defined in this crate to avoid
cyclic dependencies. We've outgrown it being a single file with some
shared traits.

There are no behavioral changes here.
2024-03-26 15:39:43 -05:00
Charlie Marsh
39769d82a0
Fall back to PEP 517 hooks for non-compliant PEP 621 metadata (#2662)
If you pass a `pyproject.toml` that uses Hatch's context formatting API,
we currently fail because the dependencies aren't valid under PEP 508.
This PR makes the static metadata parsing a little more relaxed, so that
we appropriately fall back to PEP 517 there.
2024-03-26 02:28:39 +00:00
Charlie Marsh
71428f7d74
Use PEP 517 to extract non-static pyproject.toml metadata (#2633)
## Summary

When a user passes a `pyproject.toml` to `pip compile` (e.g., `uv pip
compile pyproject.toml`), we extract the requirements from the
`pyproject.toml` directly. However... that isn't always possible (as
seen in the linked issues). When it's _not_, we instead need to run the
PEP 517 build hooks to identify the metadata.

Closes https://github.com/astral-sh/uv/issues/1624.

Closes https://github.com/astral-sh/uv/issues/1644.

## Test Plan

`cargo test`
2024-03-25 20:27:41 +00:00