Commit graph

47 commits

Author SHA1 Message Date
Charlie Marsh
ac87fd4006
Disable Clippy's too-many-arguments rule (#4663)
## Summary

We allow this constantly, I think it's just too pedantic for us.
2024-06-30 19:30:38 +00:00
Zanie Blue
25cde888ae
Rename SitePackages::from_environment for clarity (#4497) 2024-06-24 23:32:45 +00:00
Charlie Marsh
f07308823e
Add --emit-build-options flag to uv pip compile interface (#4463)
## Summary

Closes https://github.com/astral-sh/uv/issues/4420.
2024-06-24 12:25:01 +00:00
Charlie Marsh
cba270f750
Respect index strategy in source distribution builds (#4468)
## Summary

The `--index-strategy` is linked to the index locations, which we
propagate to source distribution builds; so it makes sense to pass the
`--index-strategy` too.

While I was here, I made `exclude_newer` a required argument so that we
don't forget to set it via the `with_options` builder.

Closes https://github.com/astral-sh/uv/issues/4465.
2024-06-24 12:03:38 +00:00
Zanie Blue
f219f88553
Rename Downloader for clarity (#4395)
Follow-up to https://github.com/astral-sh/uv/pull/4394 with internal
refactor
2024-06-18 16:00:10 -05:00
Zanie Blue
5a007b6b9f
Add BuildOptions for centralized combination of NoBuild and NoBinary (#4284)
As requested in review of https://github.com/astral-sh/uv/pull/4067
2024-06-12 21:33:33 +00:00
Zanie Blue
1ab4041baa
Allow specific --only-binary and --no-binary packages to override :all: (#4067)
Updates `--no-binary <package>` to take precedence over `--only-binary
:all:` and `--only-binary <package>` to take precedence over
`--no-binary :all:`.

I'm not entirely sure about this behavior, e.g. maybe I provided
`--only-binary :all:` later on the command line and really want it to
override those earlier arguments of `--no-binary <package>` for safety.
Right now we just fail to solve though since we can't satisfy the
overlapping requests.
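To illustrate, here's a minimal sketch of the precedence rule described above; the `BuildPolicy` type and `allows_wheel` function are hypothetical stand-ins, not uv's actual implementation:

```rust
/// Hypothetical sketch: per-package --no-binary/--only-binary settings take
/// precedence over the blanket `:all:` forms.
#[derive(Default)]
struct BuildPolicy {
    no_binary_all: bool,
    only_binary_all: bool,
    no_binary: Vec<String>,
    only_binary: Vec<String>,
}

impl BuildPolicy {
    /// Returns true if a pre-built wheel may be used for `package`.
    fn allows_wheel(&self, package: &str) -> bool {
        // A specific --no-binary <package> wins over --only-binary :all:.
        if self.no_binary.iter().any(|p| p == package) {
            return false;
        }
        // A specific --only-binary <package> wins over --no-binary :all:.
        if self.only_binary.iter().any(|p| p == package) {
            return true;
        }
        // Otherwise, the blanket flags apply.
        self.only_binary_all || !self.no_binary_all
    }
}

fn main() {
    let policy = BuildPolicy {
        only_binary_all: true,
        no_binary: vec!["cryptography".to_string()],
        ..Default::default()
    };
    assert!(!policy.allows_wheel("cryptography")); // wheel rejected despite `:all:`
    assert!(policy.allows_wheel("requests"));
}
```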

Closes https://github.com/astral-sh/uv/issues/4063
2024-06-12 15:47:45 -05:00
Charlie Marsh
5269a0dba8
Ignore tags in universal resolution (#4174)
## Summary

If a package lacks a source distribution, and we can't find a compatible
wheel for the current platform, we need to just _assume_ that the
package will have a valid wheel on all platforms on which it's
requested; if not, we raise an error at install time.

It's possible that we can be smarter about this over time. For example,
if the package was requested _only_ for macOS, we could verify that
there's at least one macOS-compatible wheel. See the linked issue for
more details.

Closes https://github.com/astral-sh/uv/issues/4139.
2024-06-10 08:38:21 -04:00
Zanie Blue
325982c418
Rename uv-interpreter crate to uv-toolchain (#4120)
In preparation for managed toolchains #2607, just renames the crate to
something broader.

See #4121 and https://github.com/astral-sh/uv/pull/4138 for the final
intent.
2024-06-07 13:59:14 -05:00
Charlie Marsh
77e93157fb
Make target Python version an optional field (#4000)
## Summary

Instead of checking if the target and installed version are the same, we
model the data such that the target version is only present if it was
specified by the user. This also means that we correctly say "requested
version" even if the two happen to be the same.
2024-06-03 22:37:15 +00:00
Tim de Jager
1b1600c40e
feat: add back the use of extra env vars to the build dispatch (#3981)
It seems a recent pull request removed this, though I couldn't find out which one.
I'm adding it back, as we rely on this API and I do not see another way
of accessing it (or am I mistaken?).

Thanks!
2024-06-03 09:13:44 -04:00
Charlie Marsh
b7d77c04cc
Add Git resolver in lieu of static hash map (#3954)
## Summary

This PR removes the static resolver map:

```rust
static RESOLVED_GIT_REFS: Lazy<Mutex<FxHashMap<RepositoryReference, GitSha>>> =
    Lazy::new(Mutex::default);
```
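The replacement's rough shape, as a hedged sketch with stand-in types (not the actual `GitResolver` definition):

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

// Stand-ins for uv's RepositoryReference and GitSha types.
type RepositoryReference = String;
type GitSha = String;

/// Hypothetical sketch: a resolver owned by the build context instead of a
/// global `static`, so resolved SHAs can be pre-populated and shared.
#[derive(Clone, Default)]
struct GitResolver(Arc<Mutex<HashMap<RepositoryReference, GitSha>>>);

impl GitResolver {
    fn insert(&self, reference: RepositoryReference, sha: GitSha) {
        self.0.lock().unwrap().insert(reference, sha);
    }

    fn get(&self, reference: &str) -> Option<GitSha> {
        self.0.lock().unwrap().get(reference).cloned()
    }
}

fn main() {
    let resolver = GitResolver::default();
    // Pre-populate a resolved SHA, then share the resolver via cheap clones.
    resolver.insert("https://github.com/tqdm/tqdm".into(), "cc372d0".into());
    let for_builds = resolver.clone();
    assert_eq!(
        for_builds.get("https://github.com/tqdm/tqdm").as_deref(),
        Some("cc372d0")
    );
}
```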

In its place, we now pass a `GitResolver` struct around on the
`BuildContext`. There should be no behavior changes here; it's purely an
internal refactor with an eye towards making it cleaner for us to
"pre-populate" the list of resolved SHAs.
2024-05-31 22:44:42 -04:00
konsti
081f20c53e
Add support for tool.uv into distribution building (#3904)
With the change, we remove the special casing of workspace dependencies
and resolve `tool.uv` for all git and directory distributions. This
gives us support for non-editable workspace dependencies and path
dependencies in other workspaces. It removes a lot of special casing
around workspaces. These changes are the groundwork for supporting
`tool.uv` with dynamic metadata.

The basis for this change is moving `Requirement` from
`distribution-types` to `pypi-types` and the lowering logic from
`uv-requirements` to `uv-distribution`. These changes should be split out
into separate PRs.

I've included an example workspace `albatross-root-workspace2` where
`bird-feeder` depends on `a` from another workspace `ab`. There's a
bunch of failing tests and regressed error messages that still need
fixing. It does fix the audited package count for the workspace tests.
2024-05-31 02:42:03 +00:00
Charlie Marsh
cedd18e4c6
Remove some unused pub functions (#3872)
## Summary

I wrote a bad Python script to find these.
2024-05-28 15:58:13 +00:00
Charlie Marsh
1fc6a59707
Remove special-casing for editable requirements (#3869)
## Summary

There are a few behavior changes in here:

- We now enforce `--require-hashes` for editables, like pip. So if you
use `--require-hashes` with an editable requirement, we'll reject it. I
could change this if it seems off.
- We now treat source tree requirements, editable or not (e.g., both `-e
./black` and `./black`) as if `--refresh` is always enabled. This
doesn't mean that we _always_ rebuild them; but if you pass
`--reinstall`, then yes, we always rebuild them. I think this is an
improvement and is close to how editables work today.

Closes #3844.

Closes #2695.
2024-05-28 15:49:34 +00:00
Charlie Marsh
0efc5d0cab
Reuse reporting operation in venv (#3755) 2024-05-22 15:43:20 -04:00
Charlie Marsh
fe28b2c278
Remove unused methods from Resolution (#3754) 2024-05-22 18:48:44 +00:00
Ibraheem Ahmed
39af09f09b
Parallelize resolver (#3627)
## Summary

This PR introduces parallelism to the resolver. Specifically, we can
perform PubGrub resolution on a separate thread, while keeping all I/O
on the tokio thread. We already have the infrastructure set up for this
with the channel and `OnceMap`, which makes this change relatively
simple. The big change needed to make this possible is removing the
lifetimes on some of the types that need to be shared between the
resolver and pubgrub thread.

A related PR, https://github.com/astral-sh/uv/pull/1163, found that
adding `yield_now` calls improved throughput. With optimal scheduling we
might be able to get away with everything on the same thread here.
However, in the ideal pipeline with perfect prefetching, the resolution
and prefetching can run completely in parallel without depending on one
another. While this would be very difficult to achieve, even with our
current prefetching pattern we see a consistent performance improvement
from parallelism.

This does also require reverting a few of the changes from
https://github.com/astral-sh/uv/pull/3413, but not all of them. The
sharing is isolated to the resolver task.
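The thread/channel split can be illustrated with a small, hedged sketch (not uv's actual code; it assumes the `tokio` crate with its `full` feature set, and all names here are invented): the CPU-bound solve loop runs on its own OS thread and consumes metadata that the async side fetches and sends over a channel.

```rust
use tokio::sync::mpsc;

/// Invented stand-in for the metadata the I/O side hands to the solver.
#[derive(Debug)]
struct VersionsResponse {
    package: String,
    versions: Vec<String>,
}

#[tokio::main]
async fn main() {
    let (tx, mut rx) = mpsc::channel::<VersionsResponse>(16);

    // CPU-bound "resolution" runs on a dedicated thread so it never blocks
    // the tokio runtime; it consumes responses as the async side delivers them.
    let solver = std::thread::spawn(move || {
        let mut picked = Vec::new();
        while let Some(response) = rx.blocking_recv() {
            // A real solver would run PubGrub here; we just take the newest version.
            if let Some(version) = response.versions.last() {
                picked.push(format!("{}=={}", response.package, version));
            }
        }
        picked
    });

    // Network I/O stays on the async side (simulated here with canned data).
    for package in ["anyio", "idna", "sniffio"] {
        let response = VersionsResponse {
            package: package.to_string(),
            versions: vec!["1.0".into(), "2.0".into()],
        };
        tx.send(response).await.unwrap();
    }
    drop(tx); // close the channel so the solver thread finishes

    let resolution = solver.join().unwrap();
    println!("{resolution:?}");
}
```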

## Test Plan

On smaller tasks performance is mixed with ~2% improvements/regressions
on both sides. However, on medium-large resolution tasks we see the
benefits of parallelism, with improvements anywhere from 10-50%.

```
./scripts/requirements/jupyter.in
Benchmark 1: ./target/profiling/baseline (resolve-warm)
  Time (mean ± σ):      29.2 ms ±   1.8 ms    [User: 20.3 ms, System: 29.8 ms]
  Range (min … max):    26.4 ms …  36.0 ms    91 runs
 
Benchmark 2: ./target/profiling/parallel (resolve-warm)
  Time (mean ± σ):      25.5 ms ±   1.0 ms    [User: 19.5 ms, System: 25.5 ms]
  Range (min … max):    23.6 ms …  27.8 ms    99 runs
 
Summary
  ./target/profiling/parallel (resolve-warm) ran
    1.15 ± 0.08 times faster than ./target/profiling/baseline (resolve-warm)
```
```
./scripts/requirements/boto3.in   
Benchmark 1: ./target/profiling/baseline (resolve-warm)
  Time (mean ± σ):     487.1 ms ±   6.2 ms    [User: 464.6 ms, System: 61.6 ms]
  Range (min … max):   480.0 ms … 497.3 ms    10 runs
 
Benchmark 2: ./target/profiling/parallel (resolve-warm)
  Time (mean ± σ):     430.8 ms ±   9.3 ms    [User: 529.0 ms, System: 77.2 ms]
  Range (min … max):   417.1 ms … 442.5 ms    10 runs
 
Summary
  ./target/profiling/parallel (resolve-warm) ran
    1.13 ± 0.03 times faster than ./target/profiling/baseline (resolve-warm)
```
```
./scripts/requirements/airflow.in 
Benchmark 1: ./target/profiling/baseline (resolve-warm)
  Time (mean ± σ):     478.1 ms ±  18.8 ms    [User: 482.6 ms, System: 205.0 ms]
  Range (min … max):   454.7 ms … 508.9 ms    10 runs
 
Benchmark 2: ./target/profiling/parallel (resolve-warm)
  Time (mean ± σ):     308.7 ms ±  11.7 ms    [User: 428.5 ms, System: 209.5 ms]
  Range (min … max):   287.8 ms … 323.1 ms    10 runs
 
Summary
  ./target/profiling/parallel (resolve-warm) ran
    1.55 ± 0.08 times faster than ./target/profiling/baseline (resolve-warm)
```
2024-05-17 11:47:30 -04:00
Charlie Marsh
66eea7a5fb
Remove unused installed field from Plan (#3566) 2024-05-14 01:53:48 +00:00
Charlie Marsh
e0da977fc9
Get cargo shear passing (#3533)
## Summary

Remove a few unused deps, and ignore `flate2` (which we include for
feature control).
2024-05-13 01:43:45 +00:00
Ibraheem Ahmed
783df8f657
Consolidate concurrency limits (#3493)
## Summary

This PR consolidates the concurrency limits used throughout `uv` and
exposes two limits, `UV_CONCURRENT_DOWNLOADS` and
`UV_CONCURRENT_BUILDS`, as environment variables.

Currently, `uv` has a number of concurrent streams that it buffers using
relatively arbitrary limits for backpressure. However, many of these
limits are conflated. We run a relatively small number of tasks overall
and should start most things as soon as possible. What we really want to
limit are three separate operations:
- File I/O. This is managed by tokio's blocking pool and we should not
really have to worry about it.
- Network I/O.
- Python build processes.

Because the current limits span a broad range of tasks, it's possible
that a limit meant for network I/O is occupied by tasks performing
builds, reading from the file system, or even waiting on a `OnceMap`. We
also don't limit build processes that end up being required to perform a
download. While this may not pose a performance problem because our
limits are relatively high, it does mean that the limits do not do what
we want, making it tricky to expose them to users
(https://github.com/astral-sh/uv/issues/1205,
https://github.com/astral-sh/uv/issues/3311).

After this change, the limits on network I/O and build processes are
centralized and managed by semaphores. All other tasks are unbuffered
(note that these tasks are still bounded, so backpressure should not be
a problem).
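A minimal sketch of the centralized-semaphore idea (assuming `tokio`; the struct and numeric defaults are illustrative, only the two environment variable names come from this PR):

```rust
use std::sync::Arc;
use tokio::sync::Semaphore;

/// Hypothetical sketch: one semaphore per resource, sized from the
/// environment variables introduced in this PR.
struct Concurrency {
    downloads: Arc<Semaphore>,
    builds: Arc<Semaphore>,
}

fn limit(var: &str, default: usize) -> usize {
    std::env::var(var).ok().and_then(|v| v.parse().ok()).unwrap_or(default)
}

#[tokio::main]
async fn main() {
    let limits = Concurrency {
        // The defaults here are illustrative, not uv's actual values.
        downloads: Arc::new(Semaphore::new(limit("UV_CONCURRENT_DOWNLOADS", 50))),
        builds: Arc::new(Semaphore::new(limit("UV_CONCURRENT_BUILDS", 4))),
    };

    // Tasks are spawned immediately (unbuffered); the semaphore alone bounds
    // how many "downloads" run at once.
    let mut handles = Vec::new();
    for task in 0..8 {
        let downloads = Arc::clone(&limits.downloads);
        handles.push(tokio::spawn(async move {
            let _permit = downloads.acquire_owned().await.unwrap();
            // ... perform the download while holding the permit ...
            task
        }));
    }
    for handle in handles {
        handle.await.unwrap();
    }
    // A build task would acquire from `limits.builds` in the same way.
    let _ = limits.builds.available_permits();
}
```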
2024-05-10 12:43:08 -04:00
Andrew Gallant
8b0fad3560 uv-resolver: make MarkerEnvironment optional
This commit touches a lot of code, but the conceptual change here is
pretty simple: make it so we can run the resolver without providing a
`MarkerEnvironment`. This also indicates that the resolver should run in
universal mode. That is, the effect of a missing marker environment is
that all marker expressions that reference the marker environment are
evaluated to `true`. That is, they are ignored. (The only markers we
evaluate in that context are extras, which are the only markers that
aren't dependent on the environment.)
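A hedged, much-simplified sketch of that evaluation rule (the real marker types are far richer; these are stand-ins):

```rust
/// Simplified marker expressions: either environment-dependent
/// (e.g. `sys_platform == "linux"`) or an `extra == "..."` check.
enum Marker {
    Environment { key: String, value: String },
    Extra(String),
}

struct MarkerEnvironment {
    sys_platform: String,
}

/// Without a marker environment (universal mode), environment-dependent
/// markers evaluate to `true`; extras are still evaluated because they
/// don't depend on the environment.
fn evaluate(marker: &Marker, env: Option<&MarkerEnvironment>, extras: &[String]) -> bool {
    match marker {
        Marker::Extra(name) => extras.iter().any(|extra| extra == name),
        Marker::Environment { key, value } => match env {
            None => true, // universal resolution: ignore the environment
            Some(env) => key == "sys_platform" && &env.sys_platform == value,
        },
    }
}

fn main() {
    let linux_only = Marker::Environment {
        key: "sys_platform".to_string(),
        value: "linux".to_string(),
    };
    // With no environment, the marker is treated as satisfied.
    assert!(evaluate(&linux_only, None, &[]));
    // With a concrete environment, it is actually checked.
    let macos = MarkerEnvironment { sys_platform: "darwin".to_string() };
    assert!(!evaluate(&linux_only, Some(&macos), &[]));
}
```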

One interesting change here is that a `Resolver` no longer needs an
`Interpreter`. Previously, it had only been using it to construct a
`PythonRequirement`, by filling in the installed version from the
`Interpreter` state. But we now construct a `PythonRequirement`
explicitly since its `target` Python version should no longer be tied to
the `MarkerEnvironment`. (Currently, the marker environment is mutated
such that its `python_full_version` is derived from multiple sources,
including the CLI, which I found a touch confusing.)

The change in behavior can now be observed through the
`--unstable-uv-lock-file` flag. First, without it:

```
$ cat requirements.in
anyio>=4.3.0 ; sys_platform == "linux"
anyio<4 ; sys_platform == "darwin"
$ cargo run -qp uv -- pip compile -p3.10 requirements.in
anyio==4.3.0
exceptiongroup==1.2.1
    # via anyio
idna==3.7
    # via anyio
sniffio==1.3.1
    # via anyio
typing-extensions==4.11.0
    # via anyio
```

And now with it:

```
$ cargo run -qp uv -- pip compile -p3.10 requirements.in --unstable-uv-lock-file
  × No solution found when resolving dependencies:
  ╰─▶ Because you require anyio>=4.3.0 and anyio<4, we can conclude that the requirements are unsatisfiable.
```

This is expected at this point because the marker expressions are being
explicitly ignored, *and* there is no forking done yet to account for
the conflict.
2024-05-09 09:24:37 -04:00
Ibraheem Ahmed
94cf604574
Remove unnecessary uses of DashMap and Arc (#3413)
## Summary

All of the resolver code is run on the main thread, so a lot of the
`Send` bounds and uses of `DashMap` and `Arc` are unnecessary. We could
also switch to using single-threaded versions of `Mutex` and `Notify` in
some places, but there isn't really a crate that provides those I would
be comfortable with using.

The `Arc` in `OnceMap` can't easily be removed because of the uv-auth
code which uses the
[reqwest-middleware](https://docs.rs/reqwest-middleware/latest/reqwest_middleware/trait.Middleware.html)
crate, which seems to add unnecessary `Send` bounds because of
`async-trait`. We could duplicate the code and create a `OnceMapLocal`
variant, but I don't feel that's worth it.
2024-05-06 22:30:43 -04:00
konsti
4f87edbe66
Add basic tool.uv.sources support (#3263)
## Introduction

PEP 621 is limited. Specifically, it lacks
* Relative path support
* Editable support
* Workspace support
* Index pinning or any sort of index specification

The semantics of URLs are a custom extension: PEP 440 does not specify
how to use git references or subdirectories; instead, pip has a custom,
stringly-typed format. We need to somehow support these while still staying
compatible with PEP 621.

## `tool.uv.sources`

Drawing inspiration from cargo, poetry and rye, we add `tool.uv.sources`
or (for now stub only) `tool.uv.workspace`:

```toml
[project]
name = "albatross"
version = "0.1.0"
dependencies = [
  "tqdm >=4.66.2,<5",
  "torch ==2.2.2",
  "transformers[torch] >=4.39.3,<5",
  "importlib_metadata >=7.1.0,<8; python_version < '3.10'",
  "mollymawk ==0.1.0"
]

[tool.uv.sources]
tqdm = { git = "https://github.com/tqdm/tqdm", rev = "cc372d09dcd5a5eabdc6ed4cf365bdb0be004d44" }
importlib_metadata = { url = "https://github.com/python/importlib_metadata/archive/refs/tags/v7.1.0.zip" }
torch = { index = "torch-cu118" }
mollymawk = { workspace = true }

[tool.uv.workspace]
include = [
  "packages/mollymawk"
]

[tool.uv.indexes]
torch-cu118 = "https://download.pytorch.org/whl/cu118"
```

See `docs/specifying_dependencies.md` for a detailed explanation of the
format. The basic gist is that `project.dependencies` is what ends up on
PyPI, while `tool.uv.sources` are your non-published additions. We do
support the full range of PEP 508; we just hide it in the docs and
prefer the exploded table for easier readability and less confusion with
actual URL parts.

This format should eventually be able to subsume requirements.txt's
current use cases. While we will continue to support the legacy `uv pip`
interface, this is a piece of uv's own top-level interface. Together
with `uv run` and a lockfile format, you should only need to write
`pyproject.toml` and do `uv run`, which generates/uses/updates your
lockfile behind the scenes, with no more pip-style requirements involved. It
also lays the groundwork for implementing index pinning.

## Changes

This PR implements:
* Reading and lowering `project.dependencies`,
`project.optional-dependencies` and `tool.uv.sources` into a new
requirements format, including:
  * Git dependencies
  * Url dependencies
  * Path dependencies, including relative and editable
* `pip install` integration
* Error reporting for invalid `tool.uv.sources`
* Json schema integration (works in pycharm, see below)
* Draft user-level docs (see `docs/specifying_dependencies.md`)

It does not implement:
* No `pip compile` testing, deprioritizing towards our own lockfile
* Index pinning (stub definitions only)
* Development dependencies
* Workspace support (stub definitions only)
* Overrides in pyproject.toml
* Patching/replacing dependencies

One technically breaking change is that we now require a user-provided
pyproject.toml to be valid with respect to PEP 621. Included files still fall
back to PEP 517. That means `pip install -r requirements.txt` requires
it to be valid while `pip install -r requirements.txt` with `-e .` as
content falls back to PEP 517 as before.

## Implementation

The `pep508` requirement is replaced by a new `UvRequirement` (name up
for bikeshedding, not particularly attached to the uv prefix). The still
existing `pep508_rs::Requirement` type is a url format copied from pip's
requirements.txt and doesn't appropriately capture all features we
want/need to support. The bulk of the diff is changing the requirement
type throughout the codebase.
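For illustration only, here is a stripped-down sketch of what a richer requirement type with an explicit source could look like (names and fields are hypothetical, not the actual definitions from this PR):

```rust
use std::path::PathBuf;

/// Hypothetical, simplified source enum; not uv's actual definition.
enum RequirementSource {
    /// A plain registry requirement, e.g. `tqdm >=4.66.2,<5`.
    Registry { specifier: String },
    /// A git dependency pinned to a reference.
    Git { repository: String, reference: Option<String> },
    /// A direct URL to an archive.
    Url { url: String },
    /// A local path, possibly installed as editable.
    Path { path: PathBuf, editable: bool },
}

/// Hypothetical requirement type carrying a structured source instead of a
/// bare PEP 508 string.
struct UvRequirement {
    name: String,
    extras: Vec<String>,
    source: RequirementSource,
}

fn main() {
    // Roughly what lowering the `tqdm` entry from the TOML example above
    // might produce.
    let tqdm = UvRequirement {
        name: "tqdm".to_string(),
        extras: Vec::new(),
        source: RequirementSource::Git {
            repository: "https://github.com/tqdm/tqdm".to_string(),
            reference: Some("cc372d09dcd5a5eabdc6ed4cf365bdb0be004d44".to_string()),
        },
    };
    if let RequirementSource::Git { repository, .. } = &tqdm.source {
        println!("{} comes from {repository}", tqdm.name);
    }
}
```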

We still use `VerbatimUrl` in many places, where we would expect a
parsed/decomposed url type, specifically:
* Reading core metadata except top level pyproject.toml files, we fail a
step later instead if the url isn't supported.
* Allowed `Urls`.
* `PackageId` with a custom `CanonicalUrl` comparison, instead of
canonicalizing urls eagerly.
* `PubGrubPackage`: We eventually convert the `VerbatimUrl` back to a
`Dist` (`Dist::from_url`), instead of remembering the url.
* Source dist types: We use verbatim url even though we know and require
that these are supported urls we can and have parsed.

I tried to improve the situation by replacing `VerbatimUrl`, but
that would require massive, invasive changes (see e.g.
https://github.com/astral-sh/uv/pull/3253). A main problem is the ref
`VersionOrUrl` and applying overrides, which assume the same
requirement/url type everywhere. In its current form, this PR increases
this tech debt.

I've tried to split off PRs and commits, but the main refactoring is
still a single monolith commit to make it compile and the tests pass.

## Demo

Adding
d1ae3b85d5/pyproject.json
as json schema (v7) to pycharm for `pyproject.toml`, you can try the IDE
support already:


![pycharm](599082c7-6be5-41c1-a3cd-516092382f8d)


[dove.webm](c293c272-c80b-459d-8c95-8c46a8d198a1)
2024-05-03 21:10:50 +00:00
konsti
eded6c9fae
Use link mode for builds, in uv pip compile and for uv venv seed packages (#3016)
Use the user-specified link mode for temporary build venvs, too. It
seems consistent to respect the user's link mode for all installations
we perform.
2024-04-15 08:49:41 +00:00
Charlie Marsh
96c3c2e774
Support unnamed requirements in --require-hashes (#2993)
## Summary

This PR enables `--require-hashes` with unnamed requirements. The key
change is that `PackageId` becomes `VersionId` (since it refers to a
package at a specific version), and the new `PackageId` consists of
_either_ a package name _or_ a URL. The hashes are keyed by `PackageId`,
so we can generate the `RequiredHashes` before we have names for all
packages, and enforce them throughout.
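A minimal sketch of that keying scheme (stand-in types; the digests are placeholders):

```rust
use std::collections::HashMap;

/// Hypothetical sketch of the idea: hashes can be keyed by name when we have
/// one, or by URL for unnamed requirements.
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
enum PackageId {
    Name(String),
    Url(String),
}

fn main() {
    let mut required_hashes: HashMap<PackageId, Vec<String>> = HashMap::new();
    required_hashes.insert(
        PackageId::Name("idna".to_string()),
        vec!["sha256:<digest>".to_string()],
    );
    // An unnamed requirement is keyed by its URL until we learn its name.
    required_hashes.insert(
        PackageId::Url("https://example.com/anyio-4.3.0-py3-none-any.whl".to_string()),
        vec!["sha256:<digest>".to_string()],
    );
    assert!(required_hashes.contains_key(&PackageId::Name("idna".to_string())));
}
```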

Closes #2979.
2024-04-11 11:26:50 -04:00
Charlie Marsh
006379c50c
Add support for URL requirements in --generate-hashes (#2952)
## Summary

This PR enables hash generation for URL requirements when the user
provides `--generate-hashes` to `pip compile`. While we include the
hashes from the registry already, today, we omit hashes for URLs.

To power hash generation, we introduce a `HashPolicy` abstraction:

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum HashPolicy<'a> {
    /// No hash policy is specified.
    None,
    /// Hashes should be generated (specifically, a SHA-256 hash), but not validated.
    Generate,
    /// Hashes should be validated against a pre-defined list of hashes. If necessary, hashes should
    /// be generated so as to ensure that the archive is valid.
    Validate(&'a [HashDigest]),
}
```

All of the methods on the distribution database now accept this policy,
instead of accepting `&'a [HashDigest]`.

Closes #2378.
2024-04-10 20:02:45 +00:00
Charlie Marsh
1f3b5bb093
Add hash-checking support to install and sync (#2945)
## Summary

This PR adds support for hash-checking mode in `pip install` and `pip
sync`. It's a large change, both in terms of the size of the diff and
the modifications in behavior, but it's also one that's hard to merge in
pieces (at least, with any test coverage) since it needs to work
end-to-end to be useful and testable.

Here are some of the most important highlights:

- We store hashes in the cache. Where we previously stored pointers to
unzipped wheels in the `archives` directory, we now store pointers with
a set of known hashes. So every pointer to an unzipped wheel also
includes its known hashes.
- By default, we don't compute any hashes. If the user runs with
`--require-hashes`, and the cache doesn't contain those hashes, we
invalidate the cache, redownload the wheel, and compute the hashes as we
go. For users that don't run with `--require-hashes`, there will be no
change in performance. For users that _do_, the only change will be if
they don't run with `--generate-hashes` -- then they may see some
repeated work between resolution and installation, if they use `pip
compile` then `pip sync`.
- Many of the distribution types now include a `hashes` field, like
`CachedDist` and `LocalWheel`.
- Our behavior is similar to pip, in that we enforce hashes when pulling
any remote distributions, and when pulling from our own cache. Like pip,
though, we _don't_ enforce hashes if a distribution is _already_
installed.
- Hash validity is enforced in a few different places:
1. During resolution, we enforce hash validity based on the hashes
reported by the registry. If we need to access a source distribution,
though, we then enforce hash validity at that point too, prior to
running any untrusted code. (This is enforced in the distribution
database.)
2. In the install plan, we _only_ add cached distributions that have
matching hashes. If a cached distribution is missing any hashes, or the
hashes don't match, we don't return them from the install plan.
3. In the downloader, we _only_ return distributions with matching
hashes.
4. The final combination of "things we install" are: (1) the wheels from
the cache, and (2) the downloaded wheels. So this ensures that we never
install any mismatching distributions.
- Like pip, if `--require-hashes` is provided, we require that _all_
distributions are pinned with either `==` or a direct URL. We also
require that _all_ distributions have hashes.

There are a few notable TODOs:

- We don't support hash-checking mode for unnamed requirements. These
should be _somewhat_ rare, though, since `pip compile` never outputs
unnamed requirements. I can fix this; it's just some additional work.
- We don't automatically enable `--require-hashes` when a hash exists in
the requirements file. We require `--require-hashes` to be passed explicitly.

Closes #474.

## Test Plan

I'd like to add some tests for registries that report incorrect hashes,
but otherwise: `cargo test`
2024-04-10 19:09:03 +00:00
Charlie Marsh
48ba7df98a
Move FlatIndex into the uv-resolver crate (#2972)
## Summary

This lets us remove circular dependencies (in the future, e.g., #2945)
that arise from `FlatIndex` needing a bunch of resolver-specific
abstractions (like incompatibilities, required hashes, etc.) that aren't
necessary to _fetch_ the flat index entries.
2024-04-10 14:38:42 -04:00
konsti
273de456ea
Remove rust 1.75 workaround (#2959) 2024-04-10 13:26:18 +00:00
Zanie Blue
1512e07a2e
Split configuration options out of uv-types (#2924)
Needed to prevent circular dependencies in my toolchain work (#2931). I
think this is probably a reasonable change as we move towards persistent
configuration too?

Unfortunately `BuildIsolation` needs to be in `uv-types` to avoid
circular dependencies still. We might be able to resolve that in the
future.
2024-04-09 11:35:53 -05:00
Zanie Blue
e1878c8359
Consider installed packages during resolution (#2596)
Previously, we did not consider installed distributions as candidates
while performing resolution. Here, we update the resolver to use
installed distributions that satisfy requirements instead of pulling new
distributions from the registry.

The implementation details are as follows:

- We now provide `SitePackages` to the `CandidateSelector`
- If an installed distribution satisfies the requirement, we prefer it
over remote distributions
- We do not want to allow installed distributions in some cases, i.e.,
upgrade and reinstall
- We address this by introducing an `Exclusions` type which tracks
installed packages to ignore during selection
- There's a new `ResolvedDist` wrapper with `Installed(InstalledDist)`
and `Installable(Dist)` variants (a minimal sketch follows this list)
- This lets us pass already installed distributions throughout the
resolver
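A minimal sketch of the `ResolvedDist` shape named above, using stand-in types rather than uv's real `InstalledDist`/`Dist`:

```rust
// Stand-ins for uv's distribution types, just to show the wrapper's shape.
struct InstalledDist { name: String, version: String }
struct Dist { name: String, version: String }

/// A resolution candidate is either something already installed or something
/// that would need to be fetched and installed.
enum ResolvedDist {
    Installed(InstalledDist),
    Installable(Dist),
}

impl ResolvedDist {
    fn name(&self) -> &str {
        match self {
            ResolvedDist::Installed(dist) => &dist.name,
            ResolvedDist::Installable(dist) => &dist.name,
        }
    }
}

fn main() {
    let local = ResolvedDist::Installed(InstalledDist {
        name: "idna".to_string(),
        version: "3.7".to_string(),
    });
    let remote = ResolvedDist::Installable(Dist {
        name: "anyio".to_string(),
        version: "4.3.0".to_string(),
    });
    println!("preferring already-installed {}", local.name());
    println!("would fetch {}", remote.name());
}
```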

The user-facing behavior is thoroughly covered in the tests, but
briefly:

- Installing a package that depends on an already-installed package
prefers the local version over the index
- Installing a package with a name that matches an already-installed URL
package does not reinstall from the index
- Reinstalling (--reinstall) a package by name _will_ pull from the
index even if an already-installed URL package is present
- To reinstall the URL package, you must specify the URL in the request

Closes https://github.com/astral-sh/uv/issues/1661

Addresses:

- https://github.com/astral-sh/uv/issues/1476
- https://github.com/astral-sh/uv/issues/1856
- https://github.com/astral-sh/uv/issues/2093
- https://github.com/astral-sh/uv/issues/2282
- https://github.com/astral-sh/uv/issues/2383
- https://github.com/astral-sh/uv/issues/2560

## Test plan

- [x] Reproduction at `charlesnicholson/uv-pep420-bug` passes
- [x] Unit test for editable package
([#1476](https://github.com/astral-sh/uv/issues/1476))
- [x] Unit test for previously installed package with empty registry
- [x] Unit test for local non-editable package
- [x] Unit test for new version available locally but not in registry
([#2093](https://github.com/astral-sh/uv/issues/2093))
- ~[ ] Unit test for wheel not available in registry but already
installed locally
([#2282](https://github.com/astral-sh/uv/issues/2282))~ (seems
complicated and not worthwhile)
- [x] Unit test for install from URL dependency then with matching
version ([#2383](https://github.com/astral-sh/uv/issues/2383))
- [x] Unit test for install of new package that depends on installed
package does not change version
([#2560](https://github.com/astral-sh/uv/issues/2560))
- [x] Unit test that `pip compile` does _not_ consider installed
packages
2024-03-28 13:49:17 -05:00
Zanie Blue
0b08ba1e67
Rename uv-traits and split into separate modules (#2674)
This is driving me a little crazy and is becoming a larger problem in
#2596 where I need to move more types (like `Upgrade` and `Reinstall`)
into this crate. Anything that's shared across our core resolver,
install, and build crates needs to be defined in this crate to avoid
cyclic dependencies. We've outgrown it being a single file with some
shared traits.

There are no behavioral changes here.
2024-03-26 15:39:43 -05:00
Charlie Marsh
5d7d7dce24
Enable PEP 517 builds for unnamed requirements (#2600)
## Summary

This PR enables the source distribution database to be used with unnamed
requirements (i.e., URLs without a package name). The (significant)
upside here is that we can now use PEP 517 hooks to resolve unnamed
requirement metadata _and_ reuse any computation in the cache.

The changes to `crates/uv-distribution/src/source/mod.rs` are quite
extensive, but mostly mechanical. The core idea is that we introduce a
new `BuildableSource` abstraction, which can either be a distribution,
or an unnamed URL:

```rust
/// A reference to a source that can be built into a built distribution.
///
/// This can either be a distribution (e.g., a package on a registry) or a direct URL.
///
/// Distributions can _also_ point to URLs in lieu of a registry; however, the primary distinction
/// here is that a distribution will always include a package name, while a URL will not.
#[derive(Debug, Clone, Copy)]
pub enum BuildableSource<'a> {
    Dist(&'a SourceDist),
    Url(SourceUrl<'a>),
}
```

All the methods on the source distribution database now accept
`BuildableSource`. `BuildableSource` has a `name()` method, but it
returns `Option<&PackageName>`, and everything is required to work with
and without a package name.
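A hedged sketch of that `name()` idea with simplified stand-in types (the real `SourceDist` and `SourceUrl` are richer than this):

```rust
struct PackageName(String);

struct SourceDist { name: PackageName, url: String }
struct SourceUrl { url: String }

enum BuildableSource<'a> {
    Dist(&'a SourceDist),
    Url(&'a SourceUrl),
}

impl<'a> BuildableSource<'a> {
    /// Unnamed URLs have no package name until a build reveals one, so
    /// everything downstream must work with `Option<&PackageName>`.
    fn name(&self) -> Option<&PackageName> {
        match self {
            BuildableSource::Dist(dist) => Some(&dist.name),
            BuildableSource::Url(_) => None,
        }
    }
}

fn main() {
    let dist = SourceDist {
        name: PackageName("flask".to_string()),
        url: "https://example.com/flask-3.0.0.tar.gz".to_string(),
    };
    let url = SourceUrl { url: "https://example.com/archive.zip".to_string() };
    assert!(BuildableSource::Dist(&dist).name().is_some());
    assert!(BuildableSource::Url(&url).name().is_none());
}
```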

The main drawback of this approach (which isn't a terrible one) is that
we can no longer include the package name in the cache. (We do continue
to use the package name for registry-based distributions, since those
always have a name.) The package name was included in the cache route
for two reasons: (1) it's nice for debugging; and (2) we use it to power
`uv cache clean flask`, to identify the entries that are relevant for
Flask.

To solve this, I changed the `uv cache clean` code to look one level
deeper. So, when we want to determine whether to remove the cache entry
for a given URL, we now look into the directory to see if there are any
wheels that match the package name. This isn't as nice, but it does work
(and we have test coverage for it -- all passing).

I also considered removing the package name from the cache routes for
non-registry _wheels_, for consistency... But, it would require a cache
bump, and it didn't feel important enough to merit that.
2024-03-21 22:46:39 -04:00
konsti
70e0967dbd
Avoid repeating paths of workspace packages (#2573)
Scott Schafer gave me the idea: we can avoid repeating the path for
workspace dependencies everywhere if we declare them in the virtual
package once and treat them as workspace dependencies from there on.
2024-03-20 16:16:02 -04:00
Micha Reiser
acbee166c0
Remove unused dependencies (#2543)
## Summary

I tried out `cargo shear` to see if there are any unused dependencies
that `cargo udeps` isn't reporting. It turned out, there are a few. This
PR removes those dependencies.

## Test Plan

`cargo build`
2024-03-19 13:10:10 -04:00
Charlie Marsh
cca9de13e2
Treat non-existent site-packages as empty (#2413)
## Summary

It turns out this doesn't need to exist until something has been
installed into it. See, e.g., https://github.com/astral-sh/uv/pull/2402.
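For illustration, a small hedged sketch of the "treat missing as empty" pattern (not uv's actual code):

```rust
use std::fs;
use std::io;
use std::path::Path;

/// If the site-packages directory doesn't exist yet, report an empty set of
/// installed entries instead of failing.
fn installed_entries(site_packages: &Path) -> io::Result<Vec<String>> {
    match fs::read_dir(site_packages) {
        Ok(entries) => entries
            .map(|entry| entry.map(|e| e.file_name().to_string_lossy().into_owned()))
            .collect(),
        Err(err) if err.kind() == io::ErrorKind::NotFound => Ok(Vec::new()),
        Err(err) => Err(err),
    }
}

fn main() -> io::Result<()> {
    let entries = installed_entries(Path::new("/definitely/missing/site-packages"))?;
    println!("{} installed entries", entries.len());
    Ok(())
}
```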

Closes https://github.com/astral-sh/uv/issues/2404.
2024-03-13 15:10:34 +00:00
konsti
7964bfbb2b
Move architecture and operating system probing to Python (#2381)
The architecture of uv does not necessarily match that of the python
interpreter (#2326). In cross compiling/testing scenarios the operating
system can also mismatch. To solve this, we move arch and os detection
to python, vendoring the relevant pypa/packaging code, preventing
mismatches between what the python interpreter was compiled for and what
uv was compiled for.

To make the scripts more manageable, they are now a directory in a
tempdir and we run them with `python -m`. I've simplified the
pypa/packaging code since we're still building the tags in rust. A
`Platform` is now instantiated by querying the python interpreter for
its platform. The pypa/packaging files are copied verbatim for easier
updates, except for an `lru_cache()` Python 3.7 backport.

Error handling is done by a `"result": "success|error"` field that allows
passing error details to Rust:

```console
$ uv venv --no-cache
  × Can't use Python at `/home/konsti/projects/uv/.venv/bin/python3`
  ╰─▶ Unknown operation system `linux`
```
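A hedged sketch of parsing that shape on the Rust side (assumes the `serde` and `serde_json` crates; every field name other than `result` is illustrative):

```rust
use serde::Deserialize;

/// Illustrative shape for the probe script's JSON output, dispatched on the
/// "result" field described above.
#[derive(Debug, Deserialize)]
#[serde(tag = "result", rename_all = "lowercase")]
enum ProbeOutput {
    Success { os: String, arch: String },
    Error { message: String },
}

fn main() -> Result<(), serde_json::Error> {
    let raw = r#"{"result": "success", "os": "linux", "arch": "x86_64"}"#;
    match serde_json::from_str::<ProbeOutput>(raw)? {
        ProbeOutput::Success { os, arch } => println!("interpreter reports {os}/{arch}"),
        ProbeOutput::Error { message } => eprintln!("probe failed: {message}"),
    }
    Ok(())
}
```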

I've used the [maturin sysconfig
collection](855f6d2cb1/sysconfig)
as a reference. I'm unsure how to test these changes across the wide
variety of platforms.

Fixes #2326
2024-03-13 11:51:14 +00:00
Zanie Blue
00ec99399a
Fix bug where --no-binary :all: prevented build of editable packages (#2393)
Closes https://github.com/astral-sh/uv/issues/2343
2024-03-12 23:21:40 +00:00
Charlie Marsh
5ae5980c88
Add support for --no-build-isolation (#2258)
## Summary

This PR adds support for pip's `--no-build-isolation`. When enabled,
build requirements won't be installed during PEP 517-style builds, but
the source environment _will_ be used when executing the build steps
themselves.

Closes https://github.com/astral-sh/uv/issues/1715.
2024-03-07 14:04:02 +00:00
Charlie Marsh
59c7a10c4b
Rename gourgeist to uv-virtualenv (#2118)
As agreed on Discord!
2024-03-01 14:02:40 -05:00
Tim de Jager
0fbfa11013
Add option to pass environment variables for SDist building (#2039)

## Summary

With this PR I've added the option to pass environment variables to the
wheel-building process, through the `BuildDispatch`. While integrating uv with
our project pixi (https://github.com/prefix-dev/pixi/pull/863), we ran
into this missing requirement. I've made a rough version here; it could
maybe use some refinement.

### Why do we need this?

Because pixi allows the user to use a conda-activated prefix for wheel
building, this comes with a number of environment variables, like `PATH`
but also `CONDA_PREFIX`, amongst others. This allows the user to use
system dependencies from conda-forge during an sdist build.
Because we use `uv` as a library, we need to pass in the options
programmatically. Additionally, in general there is nothing holding a
Python sdist back from actually depending on an environment variable;
see, e.g., the test package: https://pypi.org/project/env-test-package/

### What about `ConfigSettings`

I think `ConfigSettings` does not suffice because e.g. CMake could
function differently when the `CONDA_PREFIX` is set. Also, we do not
know if the user-supplied backend actually supports these settings.

### Path handling

Because the user can now also supply a PATH in the environment map, the
logic I had was the following: I format the path so that it has the
following precedence:

1. venv scripts dir.
2. user-supplied path.
3. system path.

### Improvements

There is some path modification and copying happening every time we use
the `run_python_script` function. I think we could improve this, but I
would like some pointers on where best to put a split and cached
version; we might also want to use some types to split these things up.


### Finally

I did not add any of these options to the uv executables; I first would
like to know if this is a direction we would want to go in. I'm happy to
do this or make any changes that you feel would benefit this project.

Also tagging @wolfv to keep track of this as well.  

## Test Plan


---------

Co-authored-by: konsti <konstin@mailbox.org>
2024-02-29 10:55:10 +00:00
Charlie Marsh
1ee28f78cd
Rename Virtualenv and PythonPlatform structs (#2034)
## Summary

`PythonPlatform` only exists to format paths to directories within
virtual environments based on a root and an OS, so it's now
`VirtualenvLayout`.

`Virtualenv` is now used for non-virtual environment Pythons, so it's
now `PythonEnvironment`.
2024-02-28 15:04:55 +00:00
Charlie Marsh
3417330f61
Remove unused base_python from BuildDispatch (#2004) 2024-02-27 05:35:41 +00:00
Charlie Marsh
aa73a4f0ea
Add support for config_settings in PEP 517 hooks (#1833)
## Summary

Adds `--config-setting` / `-C` (with a `--config-settings` alias for
convenience) to the CLI.

Closes https://github.com/astral-sh/uv/issues/1460.
2024-02-23 00:53:45 +00:00
Charlie Marsh
7eaed07f6c
Move conflicting dependencies into PubGrub (#1796)
## Summary

This revives a PR from long ago
(https://github.com/astral-sh/uv/pull/383 and
https://github.com/zanieb/pubgrub/pull/24) that modifies how we deal
with dependencies that are declared multiple times within a single
package.

To quote from the originating PR:

> Uses an experimental pubgrub branch (#370) that allows us to handle
multiple version ranges for a single dependency to the solver which
results in better error messages because the derivation tree contains
all of the relevant versions. Previously, the version ranges were merged
(by us) in the resolver before handing them to pubgrub since only one
range could be provided per package. Since we don't merge the versions
anymore, we no longer give the solver an empty range for conflicting
requirements; instead the solver comes to that conclusion from the
provided versions. You can see the improved error message for direct
dependencies in [this
snapshot](https://github.com/astral-sh/puffin/pull/383/files#diff-a0437f2c20cde5e2f15199a3bf81a102b92580063268417847ec9c793a115bd0).

The main issue with that PR was around its handling of URL dependencies,
so this PR _also_ refactors how we handle those. Previously, we stored
URL dependencies on `PubGrubPackage`, but they were omitted from the
hash and equality implementations of `PubGrubPackage`. This led to some
really careful codepaths wherein we had to ensure that we always visited
URLs before non-URL packages, so that the URL-inclusive versions were
included in any hashmaps, etc. I considered preserving this approach,
but it would require us to rely on lots of internal details of PubGrub
(since we'd now be relying on PubGrub to merge those packages in the
"right" order).

So, instead, we now _always_ set the URL on a given package, whenever
that package was _given_ a URL upfront. I think this is easier to reason
about: if the user provided a URL for `flask`, then we should just
always add the URL for `flask`. If we see some other URL for `flask`, we
error, like before. If we see some unknown URL for `flask`, we error,
like before.

Closes https://github.com/astral-sh/uv/issues/1522.

Closes https://github.com/astral-sh/uv/issues/1821.

Closes https://github.com/astral-sh/uv/issues/1615.
2024-02-21 21:27:58 -05:00
Zanie Blue
2586f655bb
Rename to uv (#1302)
First, replace all usages in files in-place. I used my editor for this.
If someone wants to add a one-liner that'd be fun.

Then, update directory and file names:

```
# Run twice for nested directories
find . -type d -print0 | xargs -0 rename s/puffin/uv/g
find . -type d -print0 | xargs -0 rename s/puffin/uv/g

# Update files
find . -type f -print0 | xargs -0 rename s/puffin/uv/g
```

Then add all the files again

```
# Add all the files again
git add crates
git add python/uv

# This one needs a force-add
git add -f crates/uv-trampoline
```
2024-02-15 11:19:46 -06:00