Moves yanked version filtering from `VersionMap::from_metadata` to the
resolver and tracks it as a PubGrub unavailable incompatibility so
yanked versions are reflected in error messages.
e.g., before:
```
╰─▶ Because only albatross<=0.1.0 is available and you require albatross>0.1.0,
we can conclude that the requirements are unsatisfiable.
```
after:
```
╰─▶ Because only the following versions of albatross are available:
albatross<=0.1.0
albatross==1.0.0
and albatross==1.0.0 is unusable because it was yanked, we can conclude that albatross>0.1.0 cannot be used.
And because you require albatross>0.1.0, we can conclude that the requirements are unsatisfiable.
```
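A minimal sketch of the mechanism (the types below are illustrative, not Puffin's actual resolver or PubGrub API): yanked files stay in the candidate set, marked with a reason that the resolver can surface as an "unavailable" incompatibility, instead of being filtered out while building the version map.

```rust
/// Illustrative only: stand-ins for the resolver's real types.
#[derive(Debug, Clone)]
enum Unavailable {
    /// The version exists on the index but was yanked.
    Yanked(Option<String>),
}

#[derive(Debug, Clone)]
enum Candidate {
    /// A version the resolver may select.
    Usable { version: String },
    /// A version that exists but must be reported as unusable.
    Unusable { version: String, reason: Unavailable },
}

/// Before: yanked files were dropped while building the version map, so error
/// messages could not mention them. After: they are kept as `Unusable`
/// candidates and turned into "unavailable" incompatibilities.
fn candidates(files: &[(String, bool)]) -> Vec<Candidate> {
    files
        .iter()
        .map(|(version, yanked)| {
            if *yanked {
                Candidate::Unusable {
                    version: version.clone(),
                    reason: Unavailable::Yanked(None),
                }
            } else {
                Candidate::Usable { version: version.clone() }
            }
        })
        .collect()
}

fn main() {
    let files = vec![("0.1.0".to_string(), false), ("1.0.0".to_string(), true)];
    for candidate in candidates(&files) {
        println!("{candidate:?}");
    }
}
```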
## Summary
This PR adds an `--offline` flag to Puffin that disables network
requests (implemented as a Reqwest middleware on our registry client).
When `--offline` is provided, we also allow the HTTP cache to return
stale data.
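A rough sketch of such a middleware, assuming `reqwest-middleware` 0.2-style signatures (the exact `handle` signature and the `Extensions` type vary between crate versions, and `OfflineMiddleware` is an illustrative name):

```rust
use reqwest::{Request, Response};
use reqwest_middleware::{ClientBuilder, ClientWithMiddleware, Error, Middleware, Next, Result};
use task_local_extensions::Extensions;

/// Rejects every request before it reaches the network.
struct OfflineMiddleware;

#[async_trait::async_trait]
impl Middleware for OfflineMiddleware {
    async fn handle(
        &self,
        req: Request,
        _extensions: &mut Extensions,
        _next: Next<'_>,
    ) -> Result<Response> {
        // Never call `_next.run(...)`: fail instead of touching the network.
        Err(Error::Middleware(anyhow::anyhow!(
            "network requests are disabled in offline mode: {}",
            req.url()
        )))
    }
}

/// Build a client that refuses all network traffic.
fn offline_client() -> ClientWithMiddleware {
    ClientBuilder::new(reqwest::Client::new())
        .with(OfflineMiddleware)
        .build()
}
```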
Closes #942.
Updates our `--no-binary` option and adds a `--only-binary` option for
compatibility with `pip` which uses `:all:`, `:none:` and `<name>` for
specifying packages.
This required adding support for `--only-binary <name>` in our resolver; previously, it was only a boolean toggle.
Retains `--no-build`, which is equivalent to `--only-binary :all:`. This is common enough for safety that I would prefer it remain available without pip's awkward `:all:` syntax.
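A sketch of what the per-package setting can look like in the resolver (illustrative names, not the actual types):

```rust
/// Illustrative stand-in for the resolver's per-package build setting.
#[derive(Debug, Clone)]
enum NoBuild {
    /// Building source distributions is allowed for all packages.
    None,
    /// `--only-binary :all:` / `--no-build`: never build from source.
    All,
    /// `--only-binary <name>`: never build these packages from source.
    Packages(Vec<String>),
}

impl NoBuild {
    /// Whether the resolver may build a source distribution for `package`.
    fn allows_build(&self, package: &str) -> bool {
        match self {
            NoBuild::None => true,
            NoBuild::All => false,
            NoBuild::Packages(packages) => !packages.iter().any(|p| p == package),
        }
    }
}

fn main() {
    let setting = NoBuild::Packages(vec!["ujson".to_string()]);
    assert!(setting.allows_build("flask"));
    assert!(!setting.allows_build("ujson"));
}
```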
---------
Co-authored-by: konsti <konstin@mailbox.org>
## Summary
For PEP 517 builds, the current working directory needs to be set to the
directory of the source distribution. It turns out that on Windows, if
you use a UNC path for the working directory, then relative paths are
interpreted relative to the root of the current drive
([source](https://www.fileside.app/blog/2023-03-17_windows-file-paths/#paths-relative-to-the-root-of-the-current-drive)).
So, when builds attempted to resolve relative paths, they always
errored...
This PR ensures that we remove the UNC prefix when setting the current
working directory.
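A sketch of the shape of the fix, assuming the `dunce` crate is used to strip the verbatim prefix (whether Puffin uses `dunce` or a hand-rolled helper is an assumption; the function name is illustrative):

```rust
use std::path::{Path, PathBuf};

/// Return a working directory without the Windows `\\?\` verbatim (UNC)
/// prefix where it is safe to drop it, so that a PEP 517 build invoked in
/// this directory resolves relative paths against the directory itself
/// rather than the root of the current drive.
fn normalized_cwd(source_dist_dir: &Path) -> PathBuf {
    dunce::simplified(source_dist_dir).to_path_buf()
}

fn main() {
    let dir = Path::new(r"\\?\C:\Users\me\build\ujson");
    println!("{}", normalized_cwd(dir).display());
}
```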
Closes #1238.
## Test Plan
I tested this on my Windows machine by installing `ujson` with
`--no-binary ujson`. (I don't want to add that specific test, since it's
really slow to build.)
Contrary to our prior assumption, we can't reliably select a specific
patch version. With the deadsnakes PPA, for example, `python3.12` is
installed into `PATH`, but `python3.12.1` isn't. Based on the assumption
(or rather, observation) that users have a single python patch version
per python minor version installed, generally the latest, we only check
whether the installed patch version matches the selected patch version,
if any, instead of searching for one.
In the process, I deduplicated the python discovery logic.
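A simplified sketch of the matching rule (illustrative types, not the actual discovery code): an installed interpreter is accepted if its major/minor versions match, and its patch version only has to match when one was explicitly requested.

```rust
/// A requested Python version: major, minor, and an optional patch.
struct Request {
    major: u8,
    minor: u8,
    patch: Option<u8>,
}

/// Whether an installed interpreter (e.g. the `python3.12` found on PATH)
/// satisfies the request. We do not search for a specific patch release;
/// we only reject the installed one if it contradicts an explicit request.
fn satisfies(installed: (u8, u8, u8), request: &Request) -> bool {
    installed.0 == request.major
        && installed.1 == request.minor
        && request.patch.map_or(true, |patch| installed.2 == patch)
}

fn main() {
    let installed = (3, 12, 1);
    assert!(satisfies(installed, &Request { major: 3, minor: 12, patch: None }));
    assert!(satisfies(installed, &Request { major: 3, minor: 12, patch: Some(1) }));
    assert!(!satisfies(installed, &Request { major: 3, minor: 12, patch: Some(2) }));
}
```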
Run `cargo test` on windows in CI, pulling the switch on tier 1 windows
support.
These changes make the bootstrap script virtually required for running
the tests. This gives us consistency between local runs and CI, but it also locks
our tests to python-build-standalone and an artificial `PATH`.
I've deleted the shell bootstrap script in favor of only the python one,
which also runs on windows. I've left the (sym)link creation of the
bootstrap in place, even though it is not used by the tests anymore.
I've reactivated the three tests that would previously stack overflow by
doubling their stack sizes. The stack overflows only happen in debug
mode, so this is neither a user-facing problem nor an actual problem
with our code, and this workaround seems better than optimizing our code
for a case that the (release) compiler can already optimize much better.
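For reference, doubling a test's stack size amounts to running the test body on its own thread; a sketch of the pattern (the size constant and helper name are illustrative):

```rust
/// Run `f` on a thread with a larger stack and propagate its result or
/// panic, so a debug-mode test that would otherwise overflow the default
/// stack can pass unchanged.
fn with_larger_stack<T: Send + 'static>(f: impl FnOnce() -> T + Send + 'static) -> T {
    std::thread::Builder::new()
        .stack_size(8 * 1024 * 1024) // illustrative size
        .spawn(f)
        .expect("failed to spawn test thread")
        .join()
        .expect("test thread panicked")
}

fn main() {
    let depth = with_larger_stack(|| deeply_recursive(10_000));
    println!("{depth}");
}

fn deeply_recursive(n: u64) -> u64 {
    if n == 0 { 0 } else { 1 + deeply_recursive(n - 1) }
}
```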
The handling of patch versions will be fixed in a follow-up PR.
Closes #1160. Closes #1161.
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
In the process of making VersionMap construction lazy, I realized this
refactoring would be useful to me. It also simplifies a fair bit of case
analysis and does fewer BTreeMap lookups during construction. With that
said, this doesn't seem to matter for perf:
```
$ hyperfine -w10 --runs 50 \
"puffin-main pip compile --cache-dir ~/astral/tmp/cache-main ~/astral/tmp/reqs/home-assistant-reduced.in -o /dev/null" \
"puffin-test pip compile --cache-dir ~/astral/tmp/cache-test ~/astral/tmp/reqs/home-assistant-reduced.in -o /dev/null"
Benchmark 1: puffin-main pip compile --cache-dir ~/astral/tmp/cache-main ~/astral/tmp/reqs/home-assistant-reduced.in -o /dev/null
Time (mean ± σ): 146.8 ms ± 4.1 ms [User: 350.1 ms, System: 314.2 ms]
Range (min … max): 140.7 ms … 158.0 ms 50 runs
Benchmark 2: puffin-test pip compile --cache-dir ~/astral/tmp/cache-test ~/astral/tmp/reqs/home-assistant-reduced.in -o /dev/null
Time (mean ± σ): 146.8 ms ± 4.5 ms [User: 359.8 ms, System: 308.3 ms]
Range (min … max): 138.2 ms … 160.1 ms 50 runs
Summary
puffin-main pip compile --cache-dir ~/astral/tmp/cache-main ~/astral/tmp/reqs/home-assistant-reduced.in -o /dev/null ran
1.00 ± 0.04 times faster than puffin-test pip compile --cache-dir ~/astral/tmp/cache-test ~/astral/tmp/reqs/home-assistant-reduced.in -o /dev/null
```
But the simplification is still nice, and will decrease the delta
between what we have now and a lazy version map.
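The "fewer BTreeMap lookups" mentioned above is the usual entry-API pattern; a trivial, self-contained sketch (not the actual `VersionMap` code):

```rust
use std::collections::BTreeMap;

fn main() {
    let files = [("1.0.0", "a.whl"), ("1.0.0", "a.tar.gz"), ("1.1.0", "b.whl")];
    let mut map: BTreeMap<&str, Vec<&str>> = BTreeMap::new();

    for (version, file) in files {
        // One lookup via the entry API instead of a `contains_key` check
        // followed by a second lookup for the insert.
        map.entry(version).or_default().push(file);
    }

    println!("{map:?}");
}
```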
This PR reduces the stack sizes on windows a little further, using the
stack traces from stack overflows combined with looking at the type
sizes. Ultimately, it ignores the three remaining tests failing in debug
mode on windows due to stack overflows, to unblock `cargo test` for windows on
CI.
444 tests run: 444 passed (39 slow), 1 skipped
We need to use the anstream print macros instead of the std print
macros; otherwise, we risk incorrect color behavior
(https://github.com/astral-sh/puffin/pull/1258#discussion_r1480428236).
Luckily, the `print_stderr` and `print_stdout` lints catch usages of the
std prints.
This PR switches over to anstream consistently and removes the now
redundant clippy lints. The lints should catch missing anstream usage in
the future.
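One way to wire this up, as a sketch (`owo-colors` is used here purely for demonstration, and the lint configuration is shown inline rather than at the workspace level):

```rust
// Deny direct use of the std print macros; anstream's replacements adapt
// color output to whether stdout/stderr are terminals.
#![deny(clippy::print_stdout, clippy::print_stderr)]

use anstream::{eprintln, println};
use owo_colors::OwoColorize;

fn main() {
    // These shadow the std macros, so the lints above stay quiet, while ANSI
    // codes are stripped when the output is piped to a file.
    println!("{}", "resolved 12 packages".green());
    eprintln!("{}", "warning: using stale cache".yellow());
}
```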
Remove windows-only dependencies from the snapshot output using regex.
We now do the filtering entirely on our own, without relying on insta
settings.
435 tests run: 430 passed (30 slow), 5 failed, 1 skipped
There are no binary installers for the latest patch versions of CPython
for windows, and building them is hard. As an alternative, we download
python-build-standalone CPython builds and put them into `<project
root>/bin`. On unix, we can symlink `pythonx.y.z` into this directory
and point `PUFFIN_PYTHON_PATH` to it. On windows, all pythons are called
`python.exe` and they don't like being linked. Instead, we add the path
to each directory containing a `python.exe` to `PUFFIN_PYTHON_PATH`,
similar to the regular `PATH`. The python discovery on windows was
extended to respect `PUFFIN_PYTHON_PATH` where needed.
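A sketch of the discovery side under these assumptions (illustrative helper, not the actual discovery code):

```rust
use std::env;
use std::path::PathBuf;

/// Collect candidate interpreters from `PUFFIN_PYTHON_PATH`, falling back to
/// `PATH`. On windows, every entry is a directory that may contain a
/// `python.exe`; on unix, the bootstrap symlinks `pythonX.Y.Z` names instead.
fn candidate_interpreters() -> Vec<PathBuf> {
    let raw = env::var_os("PUFFIN_PYTHON_PATH").or_else(|| env::var_os("PATH"));
    let Some(raw) = raw else { return Vec::new() };

    env::split_paths(&raw)
        .map(|dir| dir.join("python.exe"))
        .filter(|exe| exe.is_file())
        .collect()
}

fn main() {
    for exe in candidate_interpreters() {
        println!("{}", exe.display());
    }
}
```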
These changes mean that we don't need to (sym)link pythons anymore and
could drop that part of the script.
435 tests run: 389 passed (21 slow), 46 failed, 1 skipped
## Summary
Open to other opinions here. We could just continue (and warn), prompt
the user with a confirmation, etc.
(The weird thing about those two options is that we might need to validate
the command-line arguments _before_ we do that -- so you could get
errors for bad arguments, and then get a warning that your subcommand is
wrong. I can probably avoid that with more work if it feels like a
better outcome, though.)
Closes https://github.com/astral-sh/puffin/issues/1256.
## Summary
These add and remove dependencies from a `pyproject.toml` -- but they're
currently hidden, and don't match the rest of the workflow. We can
re-add them when the time is right.
Since unavailable packages with `--no-index` can be confusing when the
user does not also provide `--find-links`, we add a hint for this case.
This required some plumbing to get the necessary information to the
`NoSolution` error.
---------
Co-authored-by: konstin <konstin@mailbox.org>
(Please review this PR commit by commit.)
This PR closes an initial loop on zero-copy deserialization. That
is, it provides a way to get an `Archived<SimpleMetadata>` (spelled
`OwnedArchive<SimpleMetadata>` in the code) from a `CachedClient`. The
main benefit of zero-copy deserialization is that we can read bytes
from a file, cast those bytes to a structured representation without
cost, and then start using that type as any other Rust type. The
"catch" is that the structured representation is not the actual type
you started with, but the "archived" version of it.
In order to make all this work, we ended up needing to shave a rather
large yak: we had to re-implement HTTP cache semantics. Previously,
we were using the `http-cache-semantics` crate. While it does support
Serde, it doesn't support `rkyv`. Moreover, even simple support for
`rkyv` wouldn't be enough. What we actually want is for the HTTP cache
semantics to be implemented on the *archived* type so that we can
decide whether our cached response is stale or not without needing to
do a full deserialization into the unarchived type. This is why, in
this PR, you'll see `impl ArchivedCachePolicy { ... }` instead of
`impl CachePolicy { ... }`. (The `derive(rkyv::Archive)` macro
automatically introduces the `ArchivedCachePolicy` type into the
current namespace.)
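For readers unfamiliar with rkyv, here is a small, standalone sketch of the archived-type mechanism, unrelated to Puffin's actual cache layout and assuming an rkyv 0.7-style API with the `validation` feature enabled:

```rust
use rkyv::{Archive, Deserialize, Serialize};

#[derive(Archive, Serialize, Deserialize, Debug, PartialEq)]
// Generates `ArchivedPolicy` and lets us validate raw bytes before use.
#[archive(check_bytes)]
struct Policy {
    max_age_secs: u64,
    immutable: bool,
}

fn main() {
    let policy = Policy { max_age_secs: 600, immutable: false };

    // Serialize to a byte buffer (this is what would land on disk).
    let bytes = rkyv::to_bytes::<_, 256>(&policy).expect("serialize");

    // Zero-copy access: view the bytes as `ArchivedPolicy` without
    // deserializing. Methods implemented on the archived type (like
    // `ArchivedCachePolicy` in this PR) can run directly against this view.
    let archived = rkyv::check_archived_root::<Policy>(&bytes).expect("validate");
    assert_eq!(archived.max_age_secs, 600);

    // Full deserialization back into the original type is still possible,
    // but it is exactly the step zero-copy access lets us skip.
    let roundtrip: Policy =
        archived.deserialize(&mut rkyv::Infallible).expect("deserialize");
    assert_eq!(roundtrip, policy);
}
```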
Unfortunately, this PR does not fully realize the dream that is
zero-copy deserialization. Namely, while a `CachedClient` can now
provide an `OwnedArchive<SimpleMetadata>`, the rest of our code
doesn't really make use of it. Indeed, as soon as we go to build a
`VersionMap`, we eagerly convert our archived metadata into an owned
`SimpleMetadata` via deserialization (that *isn't* zero-copy). After
this change, a lot of the work now shifts to `rkyv` deserialization
and `VersionMap` construction. More precisely, the main thing we drop
here is `CachePolicy` deserialization (which is now truly zero-copy)
and the parsing of the MessagePack format for `SimpleMetadata`. But we
are still paying for deserialization. We're just paying for it in a
different place.
This PR does seem to bring a speed-up, but it is somewhat underwhelming.
My measurements have been pretty noisy, but I get a 1.1x speedup fairly
often:
```
$ hyperfine -w5 "puffin-main pip compile --cache-dir ~/astral/tmp/cache-main ~/astral/tmp/reqs/home-assistant-reduced.in -o /dev/null" "puffin-test pip compile --cache-dir ~/astral/tmp/cache-test ~/astral/tmp/reqs/home-assistant-reduced.in -o /dev/null"
Benchmark 1: puffin-main pip compile --cache-dir ~/astral/tmp/cache-main ~/astral/tmp/reqs/home-assistant-reduced.in -o /dev/null
Time (mean ± σ): 164.4 ms ± 18.8 ms [User: 427.1 ms, System: 348.6 ms]
Range (min … max): 131.1 ms … 190.5 ms 18 runs
Benchmark 2: puffin-test pip compile --cache-dir ~/astral/tmp/cache-test ~/astral/tmp/reqs/home-assistant-reduced.in -o /dev/null
Time (mean ± σ): 148.3 ms ± 10.2 ms [User: 357.1 ms, System: 319.4 ms]
Range (min … max): 136.8 ms … 184.4 ms 19 runs
Summary
puffin-test pip compile --cache-dir ~/astral/tmp/cache-test ~/astral/tmp/reqs/home-assistant-reduced.in -o /dev/null ran
1.11 ± 0.15 times faster than puffin-main pip compile --cache-dir ~/astral/tmp/cache-main ~/astral/tmp/reqs/home-assistant-reduced.in -o /dev/null
```
One downside is that this does increase cache size (`rkyv`'s
serialization format is not as compact as MessagePack). On disk size
increases by about 1.8x for our `simple-v0` cache.
```
$ sort-filesize cache-main
4.0K cache-main/CACHEDIR.TAG
4.0K cache-main/.gitignore
8.0K cache-main/interpreter-v0
8.7M cache-main/wheels-v0
18M cache-main/archive-v0
59M cache-main/simple-v0
109M cache-main/built-wheels-v0
193M cache-main
193M total
$ sort-filesize cache-test
4.0K cache-test/CACHEDIR.TAG
4.0K cache-test/.gitignore
8.0K cache-test/interpreter-v0
8.7M cache-test/wheels-v0
18M cache-test/archive-v0
107M cache-test/simple-v0
109M cache-test/built-wheels-v0
242M cache-test
242M total
```
Also, while I initially intended to do a simplistic implementation of
HTTP cache semantics, I found that everything was somewhat
inter-connected. I could have written code that _specifically_ only worked
with the present behavior of PyPI, but then it would need to be special
cased and everything else would need to continue to use
`http-cache-semantics`. By implementing only what Puffin
actually needs (which is still less than what `http-cache-semantics` does),
we can avoid special casing and use zero-copy deserialization for our
cache policy in _all_ cases.
Previously, whenever we encountered a missing package we would throw an
error without information about why the package was requested. This
meant that if a transitive dependency required a missing package, the
user would have no idea why it was even selected. Here, we track
`NotFound` and `NoIndex` errors as `NoVersions` incompatibilities with
an attached reason. Improves our test coverage for `--no-index` without
`--find-links`.
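Roughly, the shape of the change (illustrative types, not the actual resolver code): the reason is carried alongside the "no versions" incompatibility instead of being raised as an immediate error.

```rust
/// Illustrative reasons a package can have no usable versions.
#[derive(Debug, Clone)]
enum UnavailableReason {
    /// The package does not exist on the configured index.
    NotFound,
    /// `--no-index` was passed and no `--find-links` entry provided it.
    NoIndex,
}

/// Before: a missing package aborted resolution with an error that never
/// said who asked for it. After: it becomes a `NoVersions` incompatibility,
/// so the derivation explains which dependent required it (and the solver
/// can backtrack to a different version of that dependent).
#[derive(Debug, Clone)]
enum Incompatibility {
    NoVersions { package: String, reason: UnavailableReason },
}

fn main() {
    let incompatibility = Incompatibility::NoVersions {
        package: "albatross".to_string(),
        reason: UnavailableReason::NoIndex,
    };
    println!("{incompatibility:?}");
}
```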
The
[snapshots](https://github.com/astral-sh/puffin/pull/1241/files#diff-3eea1658f165476252f1f061d0aa9f915aabdceafac21611cdf45019447f60ec)
show a nice improvement.
I think this will also enable backtracking to another version if some
version of a transitive dependency has a missing dependency. I'll write a
scenario for that next.
Requires https://github.com/zanieb/pubgrub/pull/22
Closes #884.
e.g.
```
❯ cargo run -q -- pip compile --python-version 3.12 requirements.in
× No solution found when resolving dependencies:
╰─▶ Because the requested Python version (3.12) does not satisfy Python>=3.6,<3.10 and recommenders==1.0.0 depends on Python>=3.6,<3.9, we can conclude that recommenders==1.0.0 cannot be used.
And because only the following versions of recommenders are available:
recommenders<=0.7
recommenders==1.0.0
recommenders==1.1.0
recommenders==1.1.1
we can conclude that recommenders>0.7,<1.1.0 cannot be used. (1)
Because the requested Python version (3.12) does not satisfy Python>=3.6,<3.10 and recommenders>=1.1.0 depends on Python>=3.6,<3.10, we can conclude that recommenders>=1.1.0 cannot be used.
And because we know from (1) that recommenders>0.7,<1.1.0 cannot be used, we can conclude that recommenders>0.7 cannot be used.
And because you require recommenders>0.7, we can conclude that the requirements are unsatisfiable.
```
## Summary
Previously, we were blocking operations that could have run in parallel. We
would send requests through our main requests channel but not yield, so
the receiver could only start processing requests much later than
necessary. We solve this by switching to the async
`tokio::sync::mpsc::channel`, whose `send` is an async function that
yields.
Due to the increased parallelism, cache deserialization and the
conversion from Simple API response to version map became bottlenecks, so
I moved them to `spawn_blocking`. Together, these changes result in a 30-60%
speedup for larger warm-cache resolutions. Small cases such as black
already resolve in 5.7 ms on my machine, so there's no speedup to be
gained; refresh and no-cache runs were too noisy to get a signal from.
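A condensed sketch of both pieces, assuming tokio and hypothetical request types: the bounded async channel makes the sender yield on `send`, and CPU-heavy steps move off the async worker threads via `spawn_blocking`.

```rust
use tokio::sync::mpsc;

#[derive(Debug)]
struct Request(String);

#[tokio::main]
async fn main() {
    // Bounded async channel: `send` awaits, yielding to the scheduler so the
    // receiver can start processing instead of waiting for the sender's task
    // to finish its whole batch first.
    let (tx, mut rx) = mpsc::channel::<Request>(32);

    let producer = tokio::spawn(async move {
        for name in ["jupyter", "flask", "black"] {
            tx.send(Request(name.to_string())).await.expect("receiver alive");
        }
    });

    while let Some(Request(name)) = rx.recv().await {
        // CPU-bound work (e.g. cache deserialization, building a version map)
        // runs on the blocking pool so it does not stall the async executor.
        let parsed = tokio::task::spawn_blocking(move || format!("parsed {name}"))
            .await
            .expect("blocking task panicked");
        println!("{parsed}");
    }

    producer.await.expect("producer panicked");
}
```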
Note for the future: revisit the bounded channel if we also want to produce
requests from `process_request` (this would be good for prefetching), to
avoid deadlocks.
## Details
We can look at the behavior change through the spans:
```
RUST_LOG=puffin=info TRACING_DURATIONS_FILE=target/traces/jupyter-warm-branch.ndjson cargo run --features tracing-durations-export --bin puffin-dev --profile profiling -- resolve jupyter 2> /dev/null
```
Below, you can see how on main we have discrete phases: all (cached)
simple API requests in parallel, then all (cached) metadata requests in
parallel, repeat until done. The solver is mostly waiting until it has
its version map from the simple API query to be able to choose a
version. The main thread is blocked by `process_request`.
In the PR branch, the simple API requests succeed much earlier,
allowing the solver to advance and also to schedule more prefetching.
Because of that, `parse_cache` and `from_metadata` became bottlenecks, so I
moved them off the main thread (green color; their spans can now
overlap because they can run on multiple threads in parallel). The main
thread isn't blocked on `process_request` anymore; instead, it has
frequent idle times. The spans are all much shorter, which indicates
that on main they could have finished much earlier, but a task didn't
yield so they weren't scheduled to finish (though I haven't dug deep
enough to understand the exact scheduling between the process request
stream and the solver here).
**main**

**PR**

## Benchmarks
```
$ hyperfine --warmup 3 "target/profiling/main-dev resolve jupyter" "target/profiling/branch-dev resolve jupyter"
Benchmark 1: target/profiling/main-dev resolve jupyter
Time (mean ± σ): 29.1 ms ± 0.7 ms [User: 22.9 ms, System: 11.1 ms]
Range (min … max): 27.7 ms … 32.2 ms 103 runs
Benchmark 2: target/profiling/branch-dev resolve jupyter
Time (mean ± σ): 18.8 ms ± 1.1 ms [User: 37.0 ms, System: 22.7 ms]
Range (min … max): 16.5 ms … 21.9 ms 154 runs
Summary
target/profiling/branch-dev resolve jupyter ran
1.55 ± 0.10 times faster than target/profiling/main-dev resolve jupyter
$ hyperfine --warmup 3 "target/profiling/main-dev resolve meine_stadt_transparent" "target/profiling/branch-dev resolve meine_stadt_transparent"
Benchmark 1: target/profiling/main-dev resolve meine_stadt_transparent
Time (mean ± σ): 37.8 ms ± 0.9 ms [User: 30.7 ms, System: 14.1 ms]
Range (min … max): 36.6 ms … 41.5 ms 79 runs
Benchmark 2: target/profiling/branch-dev resolve meine_stadt_transparent
Time (mean ± σ): 24.7 ms ± 1.5 ms [User: 47.0 ms, System: 39.3 ms]
Range (min … max): 21.5 ms … 28.7 ms 113 runs
Summary
target/profiling/branch-dev resolve meine_stadt_transparent ran
1.53 ± 0.10 times faster than target/profiling/main-dev resolve meine_stadt_transparent
$ hyperfine --warmup 3 "target/profiling/main pip compile scripts/requirements/home-assistant.in" "target/profiling/branch pip compile scripts/requirements/home-assistant.in"
Benchmark 1: target/profiling/main pip compile scripts/requirements/home-assistant.in
Time (mean ± σ): 229.0 ms ± 2.8 ms [User: 197.3 ms, System: 63.7 ms]
Range (min … max): 225.8 ms … 234.0 ms 13 runs
Benchmark 2: target/profiling/branch pip compile scripts/requirements/home-assistant.in
Time (mean ± σ): 91.4 ms ± 5.3 ms [User: 289.2 ms, System: 176.9 ms]
Range (min … max): 81.0 ms … 104.7 ms 32 runs
Summary
target/profiling/branch pip compile scripts/requirements/home-assistant.in ran
2.50 ± 0.15 times faster than target/profiling/main pip compile scripts/requirements/home-assistant.in
```
In the scenario tests, we want to make sure we're actually conforming to
the scenario's expectations, so we now have an extra assertion on
whether resolution failed or succeeded, as well as whether it includes the
given packages.
Closes #1112. Closes #1030.
We need more flexible filters than those `insta` offers, and `insta_cmd`
makes it impossible to plug in programmatic filters. At the same time, we
use barely any of `insta_cmd`'s features. We can replace the subset we
need in about 50 loc.
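The core of the replacement is "run the command, apply a list of (regex, replacement) filters, snapshot the result"; a sketch of the filtering half (the patterns and values are made up, and the macro plumbing around it is omitted):

```rust
use regex::Regex;

/// Apply programmatic filters to command output before snapshotting, e.g.
/// masking timings or dropping windows-only dependency lines.
fn apply_filters(output: &str, filters: &[(&str, &str)]) -> String {
    filters.iter().fold(output.to_string(), |acc, (pattern, replacement)| {
        Regex::new(pattern).unwrap().replace_all(&acc, *replacement).into_owned()
    })
}

fn main() {
    let raw = "Resolved 2 packages in 32ms\n + colorama==0.4.6\n + flask==3.0.0\n";
    let filters = [
        (r"in \d+(\.\d+)?(ms|s)", "in [TIME]"),
        (r"(?m)^ \+ colorama==.*\n", ""), // hypothetical windows-only dependency
    ];
    println!("{}", apply_filters(raw, &filters));
}
```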
Mostly a mechanical refactor to use the `puffin_snapshot!` and
`TestContext` infrastructure in the add, remove, venv and pip uninstall
tests, in preparation for adding programmatic windows testing filters.
There is now only one remaining usage of `assert_cmd_snapshot!`, in the
`puffin_snapshot!` macro.
Mostly a mechanical refactor to use the `puffin_snapshot!` and
`TestContext` infrastructure in the pip install and pip sync tests, in
preparation for adding programmatic windows testing filters.
Split out from the large test refactoring PR. Use `normalized_display`
in tests and two more thiserror derives to match snapshots and output,
and other small windows fixes.
## Summary
See: https://github.com/astral-sh/puffin/issues/1224
## Test Plan
Ran `python -m scripts.bench --puffin
scripts/requirements/compiled/jupyter.txt --min-runs 100 --benchmark
install-warm --verbose` several times, which failed eventually on `main`
but not on this branch.
Mostly a mechanical refactor to use the `puffin_snapshot!` and
`TestContext` infrastructure in the pip compile and pip install
scenarios, in preparation for adding programmatic windows testing
filters.
## Summary
Oops -- this was using a different cache key than the route above (this
is the wheel _metadata_ route vs. the wheel build route), so we were
saving and building source distributions twice in `pip install`.
I originally used Python 3.10, since 3.10 and 3.11 are by far the most
common (at least for [Ruff](https://pypistats.org/packages/ruff)). But
3.12 should give Python tools the most favorable benchmarks.
It turns out that the pattern I coded up for `SimpleMetadataRaw` is
generally useful when working with rkyv. This commit makes it generic by
supporting any type that implements rkyv's traits, and makes a few
simplifying assumptions by picking a concrete serializer, validator and
deserializer. In effect, this lets us own any archived value.
We also rejigger the API a little bit and double down on
`OwnedArchive<A>` just being an owned wrapper for `Archived<A>`. Namely,
we implement `Deref` and turn its inherent methods into associated
functions that require fully qualified syntax, as is standard for things
that implement `Deref`, to avoid ambiguity with the deref target's methods.
(This PR also makes a couple small simplifications to our custom rkyv
serializer since we no longer need to use it directly. We do still need
to name the type in trait bounds, so it has to be public.)
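The `Deref`-plus-fully-qualified-functions pattern is the same one `Box` and `Arc` use; a generic, standalone sketch (not the actual `OwnedArchive` definition):

```rust
use std::ops::Deref;

/// An owned wrapper that dereferences to its inner value, standing in for
/// `OwnedArchive<A>` wrapping `Archived<A>`.
struct Owned<T>(T);

impl<T> Deref for Owned<T> {
    type Target = T;
    fn deref(&self) -> &T {
        &self.0
    }
}

impl<T> Owned<T> {
    /// Associated function rather than a method: callers must write
    /// `Owned::into_inner(value)`, so it can never shadow or be confused
    /// with a method on the deref target `T`.
    fn into_inner(this: Self) -> T {
        this.0
    }
}

fn main() {
    let value = Owned(String::from("metadata"));
    // Methods of the target are available through `Deref`...
    assert_eq!(value.len(), 8);
    // ...while the wrapper's own operations use fully qualified syntax.
    let inner: String = Owned::into_inner(value);
    println!("{inner}");
}
```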
In preparation for the new windows handling, I want to introduce a
`TestContext` and `puffin_snapshot!` abstraction. This PR applies those
changes for pip-compile. My plan is to use those for all venv-based
integration tests and build the custom windows filters on top of
`puffin_snapshot!`.