## Summary
Installs the seed packages you get with `virtualenv` (`pip`, `setuptools`, and `wheel`), but as opt-in rather
than opt-out.
Closes https://github.com/astral-sh/puffin/issues/852.
## Test Plan
```
❯ ./scripts/benchmarks/venv.sh
+ hyperfine --runs 20 --warmup 3 --prepare 'rm -rf .venv' './target/release/puffin venv' --prepare 'rm -rf .venv' 'virtualenv --without-pip .venv' --prepare 'rm -rf .venv' 'python -m venv --without-pip .venv'
Benchmark 1: ./target/release/puffin venv
Time (mean ± σ): 4.6 ms ± 0.2 ms [User: 2.4 ms, System: 3.6 ms]
Range (min … max): 4.3 ms … 4.9 ms 20 runs
Warning: Command took less than 5 ms to complete. Note that the results might be inaccurate because hyperfine can not calibrate the shell startup time much more precise than this limit. You can try to use the `-N`/`--shell=none` option to disable the shell completely.
Benchmark 2: virtualenv --without-pip .venv
Time (mean ± σ): 73.3 ms ± 0.3 ms [User: 57.4 ms, System: 14.2 ms]
Range (min … max): 72.8 ms … 74.0 ms 20 runs
Benchmark 3: python -m venv --without-pip .venv
Time (mean ± σ): 22.5 ms ± 0.3 ms [User: 17.0 ms, System: 4.9 ms]
Range (min … max): 22.0 ms … 23.2 ms 20 runs
Summary
'./target/release/puffin venv' ran
4.92 ± 0.20 times faster than 'python -m venv --without-pip .venv'
16.00 ± 0.63 times faster than 'virtualenv --without-pip .venv'
+ hyperfine --runs 20 --warmup 3 --prepare 'rm -rf .venv' './target/release/puffin venv --seed' --prepare 'rm -rf .venv' 'virtualenv .venv' --prepare 'rm -rf .venv' 'python -m venv .venv'
Benchmark 1: ./target/release/puffin venv --seed
Time (mean ± σ): 20.2 ms ± 0.4 ms [User: 8.6 ms, System: 15.7 ms]
Range (min … max): 19.7 ms … 21.2 ms 20 runs
Benchmark 2: virtualenv .venv
Time (mean ± σ): 135.1 ms ± 2.4 ms [User: 66.7 ms, System: 65.7 ms]
Range (min … max): 133.2 ms … 142.8 ms 20 runs
Benchmark 3: python -m venv .venv
Time (mean ± σ): 1.656 s ± 0.014 s [User: 1.447 s, System: 0.186 s]
Range (min … max): 1.641 s … 1.697 s 20 runs
Summary
'./target/release/puffin venv --seed' ran
6.67 ± 0.17 times faster than 'virtualenv .venv'
81.79 ± 1.70 times faster than 'python -m venv .venv'
```
## Summary
Refactors the benchmark script such that we use a single `hyperfine`
invocation per benchmark, and thus get the comparative summary, which is
_way_ nicer:
```
Benchmark 1: ./target/release/puffin (install-cold)
Time (mean ± σ): 410.3 ms ± 19.9 ms [User: 173.7 ms, System: 1314.5 ms]
Range (min … max): 389.7 ms … 452.1 ms 10 runs
Benchmark 2: ./target/release/baseline (install-cold)
Time (mean ± σ): 418.2 ms ± 14.4 ms [User: 210.7 ms, System: 1246.0 ms]
Range (min … max): 397.3 ms … 445.7 ms 10 runs
Summary
'./target/release/puffin (install-cold)' ran
1.02 ± 0.06 times faster than './target/release/baseline (install-cold)'
```
Taking some of Zanie's suggestions to make the custom-path API simpler
in the benchmark script. Each tool is now a dedicated argument, like:
```
python -m scripts.bench --pip-sync --poetry requirements.in
```
To provide custom binaries:
```
python -m scripts.bench \
--puffin-path ./target/release/puffin \
--puffin-path ./target/release/baseline \
requirements.in
```
## Summary
This PR enables the use of the `bench.py` script to benchmark Puffin
itself. This is something I often do via a process like:
- Checkout the `main` branch (or any other baseline branch)
- Run: `cargo build --release`
- Run: `mv ./target/release/puffin ./target/release/baseline`
- Checkout a development branch
- Run: `cargo build --release`
- (New) Run: `python bench.py --tool puffin --path ./target/release/puffin --tool puffin --path ./target/release/baseline requirements.in`
## Summary
Enables benchmarking against Poetry for resolution and installation:
```
Benchmark 1: pip-tools (resolve-cold)
Time (mean ± σ): 962.7 ms ± 241.9 ms [User: 322.8 ms, System: 80.5 ms]
Range (min … max): 714.9 ms … 1459.4 ms 10 runs
Benchmark 1: puffin (resolve-cold)
Time (mean ± σ): 193.2 ms ± 8.2 ms [User: 31.3 ms, System: 22.8 ms]
Range (min … max): 179.8 ms … 206.4 ms 14 runs
Benchmark 1: poetry (resolve-cold)
Time (mean ± σ): 900.7 ms ± 21.2 ms [User: 371.6 ms, System: 92.1 ms]
Range (min … max): 855.7 ms … 933.4 ms 10 runs
Benchmark 1: pip-tools (resolve-warm)
Time (mean ± σ): 386.0 ms ± 19.1 ms [User: 255.8 ms, System: 46.2 ms]
Range (min … max): 368.7 ms … 434.5 ms 10 runs
Benchmark 1: puffin (resolve-warm)
Time (mean ± σ): 8.1 ms ± 0.4 ms [User: 4.4 ms, System: 5.1 ms]
Range (min … max): 7.5 ms … 11.1 ms 183 runs
Benchmark 1: poetry (resolve-warm)
Time (mean ± σ): 336.3 ms ± 0.6 ms [User: 283.6 ms, System: 44.7 ms]
Range (min … max): 335.0 ms … 337.3 ms 10 runs
```
## Summary
Enables us to benchmark Puffin against `pip-tools` on a variety of
tasks. In subsequent PRs, I'll add support for Poetry and perhaps other
tools too.
Example usage:
```
❯ python scripts/bench.py -f requirements.in
2024-01-05 15:05:39 INFO Benchmarks: resolve-cold, resolve-warm
2024-01-05 15:05:39 INFO Tools: pip-tools, puffin
2024-01-05 15:05:39 INFO Reading requirements from: /Users/crmarsh/workspace/puffin/requirements.in
2024-01-05 15:05:39 INFO ```
2024-01-05 15:05:39 INFO black
2024-01-05 15:05:39 INFO ```
Benchmark 1: pip-tools (resolve-cold)
Time (mean ± σ): 758.4 ms ± 15.1 ms [User: 317.8 ms, System: 68.4 ms]
Range (min … max): 738.1 ms … 786.7 ms 10 runs
Benchmark 1: puffin (resolve-cold)
Time (mean ± σ): 213.5 ms ± 25.6 ms [User: 34.6 ms, System: 27.9 ms]
Range (min … max): 184.6 ms … 270.6 ms 12 runs
Benchmark 1: pip-tools (resolve-warm)
Time (mean ± σ): 384.3 ms ± 6.7 ms [User: 259.2 ms, System: 47.0 ms]
Range (min … max): 376.0 ms … 399.6 ms 10 runs
Benchmark 1: puffin (resolve-warm)
Time (mean ± σ): 8.0 ms ± 0.4 ms [User: 4.4 ms, System: 5.0 ms]
Range (min … max): 7.4 ms … 10.8 ms 209 runs
```
Uses new metadata added in https://github.com/zanieb/packse/pull/61 to
assert that resolution succeeded or failed _and_ that the installed
package versions match the expected result.
Previously, we just pulled the latest commit from `main` on every
update. This caused problems when you did not intend to update the
scenarios, as in #787.
This bumps to the latest `packse` commit without new scenarios.
Adds support for a `PUFFIN_NO_WRAP` environment variable which disables
line wrapping in `miette` output.
We set this variable in the scenario tests to improve the readability of
snapshots.
I contributed the ability to disable line wrapping upstream at
https://github.com/zkat/miette/pull/328.
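For reference, a minimal sketch of the hook setup, assuming miette's `MietteHandlerOpts::wrap_lines` toggle from that PR (the function name here is illustrative):
```rust
use miette::{set_hook, MietteHandlerOpts};

fn setup_error_reporting() {
    // Disable line wrapping when PUFFIN_NO_WRAP is set (any value counts).
    let wrap = std::env::var_os("PUFFIN_NO_WRAP").is_none();
    set_hook(Box::new(move |_diagnostic| {
        Box::new(MietteHandlerOpts::new().wrap_lines(wrap).build())
    }))
    .expect("failed to install miette hook");
}
```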
This requirements file contains a pathological case where we have to
step through all the versions. I'm putting it in git to make it easier
to collaborate on it.
Following #757, improves the script for generating scenario test cases
with:
- A requirements file
- Support for downloading packse scenarios from GitHub dynamically
- Running rustfmt on the generated test file
- Updating snapshots / running tests
As mentioned in #746, instead of just installing the scenario root, we
will unpack the root dependencies into the install command, to allow
better coverage of direct user requests with scenarios.
I added display of the package tree provided by each scenario.
Use a mustache template for iterative replacements.
Adds tests using packse test scenarios! Uses `test.pypi.org` as a
backing index.
Tests are generated by a simple Python script. Requires
https://github.com/zanieb/packse/pull/49.
This opens us up to a slight attack surface: since we cannot force the use of
`test.pypi.org` only, someone could register these package names on
the real `pypi.org` index with malicious content. As a mitigation, I could publish these
packages there too.
This PR combines three small changes to finish up the install-many
testing.
* Download `pypi_10k_most_dependents.txt` in the script. I'd like to have the
setup process of the large-scale checks automated.
* Some install-many dev script improvements.
* Fix `mkl_fft-1.3.6-58-cp310-cp310-manylinux2014_x86_64.whl`: this wheel has
multiple `Wheel-Version` entries, which we have to ignore like pip does (see
the sketch below).
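A minimal sketch of the lenient parse, assuming we mirror pip by taking the first `Wheel-Version` entry (the function is illustrative, not the actual implementation):
```rust
/// Extract `Wheel-Version` from a wheel's `WHEEL` metadata file, taking
/// the first entry and ignoring any duplicates, like pip does.
fn wheel_version(wheel_file: &str) -> Option<&str> {
    wheel_file
        .lines()
        .find_map(|line| line.strip_prefix("Wheel-Version:"))
        .map(str::trim)
}
```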
Apart from the mkl-fft fix, the only other errors I've seen showing up are
https://github.com/astral-sh/puffin/issues/520#issuecomment-1869625642.
From manual inspection, this dataset generated through the [libraries.io
API](https://libraries.io/api#project-search) seems more mainstream than
the current 8k one, which is also preserved. I've added the dataset to
the repo because the API requires an API key.
Separate branch for rebasing #677 onto main, because I don't trust the
rebase enough to force-push.
Closes #677.
---
If you install `black` from PyPI, then `-e ../black`, we need to
uninstall the existing `black`. This sounds simple, but that in turn
requires that we _know_ `-e ../black` maps to the package `black`, so
that we can mark it for uninstallation in the install plan. This, in
turn, means that we need to build editable dependencies prior to the
install plan.
This is just a bunch of reorganization to fix that specific bug
(installing multiple versions of `black` if you run through the above
workflow): we now run through the list of editables upfront, mark those
that are already installed, build those that aren't, and then ensure
that `InstallPlan` correctly removes those that need to be removed, etc.
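Schematically, the fix looks like this (a hypothetical, self-contained sketch; the types are illustrative, not the actual puffin API):
```rust
use std::collections::HashMap;

/// An editable whose package name is known because we built it (or matched
/// it against an installed distribution) *before* computing the install plan.
struct BuiltEditable {
    path: String,
    package_name: String,
}

/// Mark any already-installed distribution that an editable will replace,
/// e.g. the PyPI `black` when `-e ../black` is requested.
fn removals(
    editables: &[BuiltEditable],
    installed: &HashMap<String, String>, // package name -> installed version
) -> Vec<String> {
    editables
        .iter()
        .filter(|editable| installed.contains_key(&editable.package_name))
        .map(|editable| editable.package_name.clone())
        .collect()
}
```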
Closes #676.
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
Remove a dev annoyance: this shortens the development paths, e.g. from
`scripts/benchmarks/requirements/all-kinds.in` to
`scripts/requirements/all-kinds.in`. The requirements are also shared
between different tasks, not only used for benchmarking.
## Summary and motivation
For a given source dist, we store the metadata of each wheel built
from it in `built-wheel-metadata-v0/pypi/<source dist
filename>/metadata.json`. During resolution, we check the cache status
of the source dist. If it is fresh, we check `metadata.json` for a
matching wheel: if there is one, we use that metadata; if there isn't, we
build one. If the source dist is stale, we build a wheel and overwrite
`metadata.json` with that single wheel. This PR thereby ties the local
built-wheel metadata cache to the freshness of the remote source dist.
This functionality is available through `SourceDistCachedBuilder`.
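In pseudocode, the policy looks roughly like this (a hypothetical sketch; the names are illustrative, not the actual API):
```rust
/// The three outcomes of a source dist cache lookup described above.
enum CacheOutcome {
    /// Fresh source dist and a compatible wheel in `metadata.json`.
    Reuse(String),
    /// Fresh source dist, but no compatible wheel yet: build one and
    /// append its metadata to `metadata.json`.
    BuildAndAppend,
    /// Stale source dist: build a wheel and overwrite `metadata.json`
    /// with that single entry.
    BuildAndOverwrite,
}

fn decide(source_dist_is_fresh: bool, cached_wheel: Option<String>) -> CacheOutcome {
    match (source_dist_is_fresh, cached_wheel) {
        (true, Some(wheel)) => CacheOutcome::Reuse(wheel),
        (true, None) => CacheOutcome::BuildAndAppend,
        (false, _) => CacheOutcome::BuildAndOverwrite,
    }
}
```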
`puffin_installer::Builder`, `puffin_installer::Downloader`, and
`Fetcher` are removed; instead, there is now `FetchAndBuild`, which calls
into the (also new) `SourceDistCachedBuilder`. `FetchAndBuild` is the new
main high-level abstraction: it spawns parallel fetching and building; for
wheel metadata, it calls into the registry client; for wheel files, it
fetches them; and for source dists, it calls `SourceDistCachedBuilder`. It
handles locks around builds, and newly also inter-process file
locking for Git operations.
Fetching and building source distributions now happens in parallel in
`pip-sync`, i.e. we don't have to wait for the largest wheel to be
downloaded to start building source distributions.
In a follow-up PR, I'll also clear built wheels when they've become
stale.
Another effect is that in a fully cached resolution, we need neither zip
reading nor email parsing.
Closes #473.
## Source dist cache structure
Entries by supported sources:
* `<build wheel metadata cache>/pypi/foo-1.0.0.zip/metadata.json`
* `<build wheel metadata cache>/<sha256(index-url)>/foo-1.0.0.zip/metadata.json`
* `<build wheel metadata cache>/url/<sha256(url)>/foo-1.0.0.zip/metadata.json`

But the URL filename does not need to be a valid source dist filename
(<https://github.com/search?q=path%3A**%2Frequirements.txt+master.zip&type=code>),
so it could also be the following, and we have to accept any string as the
filename:
* `<build wheel metadata cache>/url/<sha256(url)>/master.zip/metadata.json`
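A sketch of the path computation for the URL bucket (hypothetical; it assumes a hex-encoded SHA-256 truncated to 16 characters, as the example tree below suggests):
```rust
use sha2::{Digest, Sha256};
use std::path::{Path, PathBuf};

/// Map a source dist URL to its cache entry, keeping the last path segment
/// verbatim as the "filename", whatever string that happens to be.
fn url_cache_entry(cache_root: &Path, url: &str) -> PathBuf {
    let digest = format!("{:x}", Sha256::digest(url.as_bytes()));
    let filename = url.rsplit('/').next().unwrap_or(url);
    cache_root
        .join("url")
        .join(&digest[..16])
        .join(filename)
        .join("metadata.json")
}
```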
Example:
```text
# git source dist
pydantic-extra-types @ git+https://github.com/pydantic/pydantic-extra-types.git
# pypi source dist
django_allauth==0.51.0
# url source dist
werkzeug @ ff1904eb5e/werkzeug-3.0.1.tar.gz
```
will be stored as
```text
built-wheel-metadata-v0
├── git
│ └── 5c56bc1c58c34c11
│ └── 843b753e9e8cb74e83cac55598719b39a4d5ef1f
│ └── metadata.json
├── pypi
│ └── django-allauth-0.51.0.tar.gz
│ └── metadata.json
└── url
└── 6781bd6440ae72c2
└── werkzeug-3.0.1.tar.gz
└── metadata.json
```
The inside of a `metadata.json`:
```json
{
"data": {
"django_allauth-0.51.0-py3-none-any.whl": {
"metadata-version": "2.1",
"name": "django-allauth",
"version": "0.51.0",
...
}
}
}
```
A consistent cache structure for remote wheel metadata:
* `<wheel metadata cache>/pypi/foo-1.0.0-py3-none-any.json`
* `<wheel metadata cache>/<digest(index-url)>/foo-1.0.0-py3-none-any.json`
* `<wheel metadata cache>/url/<digest(url)>/foo-1.0.0-py3-none-any.json`
The source dist caching will use a similar structure (#468).
This script can compare different requirements between pip(-compile) and
puffin across python versions, with debug and release builds.
Examples:
```shell
scripts/compare_with_pip/compare_with_pip.py
scripts/compare_with_pip/compare_with_pip.py -p 3.10
scripts/compare_with_pip/compare_with_pip.py --release -p 3.9 --target 'transformers[deepspeed-testing,dev-tensorflow]'
```
It found a bunch of now-fixed bugs, e.g. the lack of yanked package handling
and source dist handling, as well as #423, which currently accounts for most of
the output.
Example output:
https://gist.github.com/konstin/9ccf8dc7c2dcca737bf705429ced4892

#443 should be merged first.
This copies the allocator configuration used in the Ruff project. In
particular, this gives us an instant 10% win when resolving the top 1K
PyPI packages:
```
$ hyperfine \
    "./target/profiling/puffin-dev-main resolve-many --cache-dir cache-docker-no-build --no-build pypi_top_8k_flat.txt --limit 1000 2> /dev/null" \
    "./target/profiling/puffin-dev resolve-many --cache-dir cache-docker-no-build --no-build pypi_top_8k_flat.txt --limit 1000 2> /dev/null"
Benchmark 1: ./target/profiling/puffin-dev-main resolve-many --cache-dir cache-docker-no-build --no-build pypi_top_8k_flat.txt --limit 1000 2> /dev/null
  Time (mean ± σ):     974.2 ms ±  26.4 ms    [User: 17503.3 ms, System: 2205.3 ms]
  Range (min … max):   943.5 ms … 1015.9 ms    10 runs
Benchmark 2: ./target/profiling/puffin-dev resolve-many --cache-dir cache-docker-no-build --no-build pypi_top_8k_flat.txt --limit 1000 2> /dev/null
  Time (mean ± σ):     883.1 ms ±  23.3 ms    [User: 14626.1 ms, System: 2542.2 ms]
  Range (min … max):   849.5 ms … 916.9 ms    10 runs
Summary
  './target/profiling/puffin-dev resolve-many --cache-dir cache-docker-no-build --no-build pypi_top_8k_flat.txt --limit 1000 2> /dev/null' ran
    1.10 ± 0.04 times faster than './target/profiling/puffin-dev-main resolve-many --cache-dir cache-docker-no-build --no-build pypi_top_8k_flat.txt --limit 1000 2> /dev/null'
```
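The configuration itself is the usual `#[global_allocator]` pattern; a sketch mirroring Ruff's setup with the `tikv-jemallocator` and `mimalloc` crates (the exact `cfg` gates may differ):
```rust
// jemalloc on supported Unix targets, mimalloc on Windows; everything else
// falls back to the system allocator.
#[cfg(all(
    not(target_os = "windows"),
    not(target_os = "openbsd"),
    any(target_arch = "x86_64", target_arch = "aarch64")
))]
#[global_allocator]
static GLOBAL: tikv_jemallocator::Jemalloc = tikv_jemallocator::Jemalloc;

#[cfg(target_os = "windows")]
#[global_allocator]
static GLOBAL: mimalloc::MiMalloc = mimalloc::MiMalloc;
```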
I was moved to do this because I noticed `malloc`/`free` taking up a
fairly sizeable percentage of time during light profiling.
As is becoming a pattern, it will be easier to review this
commit-by-commit.
Ref #396 (wouldn't call this issue fixed)
-----
I did also try adding a `smallvec` optimization to the
`Version::release` field, but it didn't bear any fruit. I still think
there is more to explore since the results I observed don't quite line
up with what I expect. (So probably either my mental model is off or my
measurement process is flawed.) You can see that attempt with a little
more explanation here:
f9528b4ecd
In the course of adding the `smallvec` optimization, I also shrunk the
`Version` fields from a `usize` to a `u32`. They should at least be a
fixed size integer since version numbers aren't used to index memory,
and I shrunk it to `u32` since it seems reasonable to assume that all
version numbers will be smaller than `2^32`.
I intend this to become the main form of caching for puffin: you can
make HTTP requests, you transform the data to what you really need, you
have control over the cache key, and the cache is always JSON (or
anything else much faster we want to replace it with, as long as it's
serde!)
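The shape of the pattern, as a hypothetical sketch (names and error handling are illustrative, not the actual API):
```rust
use serde::{de::DeserializeOwned, Serialize};
use std::future::Future;
use std::path::Path;

/// Fetch-and-transform with a JSON cache: the caller controls the cache key
/// (here, a file path) and the transformation; the cached value is plain serde.
async fn cached<T, F, Fut>(entry: &Path, fetch_and_transform: F) -> anyhow::Result<T>
where
    T: Serialize + DeserializeOwned,
    F: FnOnce() -> Fut,
    Fut: Future<Output = anyhow::Result<T>>,
{
    // Cache hit: deserialize and return without touching the network.
    if let Ok(bytes) = tokio::fs::read(entry).await {
        if let Ok(value) = serde_json::from_slice(&bytes) {
            return Ok(value);
        }
    }
    // Cache miss or unreadable entry: fetch, transform, persist as JSON.
    let value = fetch_and_transform().await?;
    tokio::fs::write(entry, serde_json::to_vec(&value)?).await?;
    Ok(value)
}
```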
It looks like Cargo: notice the bold green lines at the top, which appear
during resolution to indicate Git fetches and source distribution builds:
[screenshots: resolver progress bars showing Git fetches and source distribution builds]
Closes https://github.com/astral-sh/puffin/issues/287, although we can do
a lot more here.
There are packages such as DTLSSocket 0.1.16 that say
```toml
[build-system]
requires = ["Cython<3", "setuptools", "wheel"]
```
In this case, we need to install the `[build-system].requires` dependencies
PEP 517 style, but then invoke `setup.py` in the legacy way. This is part of
making home-assistant work.
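The dispatch can be sketched like this (hypothetical; the types are illustrative, not the actual implementation):
```rust
/// The relevant slice of `[build-system]` from `pyproject.toml`.
struct BuildSystem {
    requires: Vec<String>,
    build_backend: Option<String>,
}

enum BuildKind {
    /// Call the declared PEP 517 backend.
    Pep517 { backend: String },
    /// No `build-backend` declared: still install `requires`, but then
    /// invoke `setup.py` the legacy way.
    Legacy,
}

fn build_kind(build_system: &BuildSystem) -> BuildKind {
    match &build_system.build_backend {
        Some(backend) => BuildKind::Pep517 { backend: backend.clone() },
        None => BuildKind::Legacy,
    }
}
```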
To check the top 1k (current state):
```bash
scripts/resolve/get_pypi_top_8k.sh
cargo run --bin puffin-dev -- resolve-many scripts/resolve/pypi_top_8k_flat.txt --limit 1000
```
Results:
```
Errors: pywin32, geoip2, maxminddb, pypika, dirac
Success: 995, Error: 5
```
pywin32 has no solution for the build environment, three have no
`[build-system]` entry in `pyproject.toml`, and `dirac` is missing cmake.