Add a single job for fast lint tools: Rustfmt for Rust; Ruff for Python
formatting and linting; and Prettier, which avoids inconsistent formatter
changes between PyCharm and VS Code.
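As a rough local equivalent, the job amounts to checks along these lines (a sketch; the exact CI invocations are assumptions):
```
# Rust formatting, check-only
cargo fmt --all -- --check
# Python linting and formatting via Ruff
ruff check .
ruff format --check .
# Prettier for the files it covers (markdown, YAML, etc.)
npx prettier --check .
```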
First, replace all usages in files in-place. I used my editor for this.
If someone wants to add a one-liner that'd be fun.
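One possible one-liner (untested; assumes GNU sed and that every occurrence should be rewritten):
```
git grep -l puffin | xargs sed -i 's/puffin/uv/g'
```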
Then, update directory and file names:
```
# Run twice for nested directories
find . -type d -print0 | xargs -0 rename s/puffin/uv/g
find . -type d -print0 | xargs -0 rename s/puffin/uv/g
# Update files
find . -type f -print0 | xargs -0 rename s/puffin/uv/g
```
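To check that both passes caught everything (a quick sketch):
```
# Both commands should print nothing if the rename is complete
git grep -il puffin
find . -path ./.git -prune -o -iname '*puffin*' -print
```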
Then, add all the files again:
```
# Add all the files again
git add crates
git add python/uv
# This one needs a force-add
git add -f crates/uv-trampoline
```
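To confirm the force-add took effect (a sketch; assumes the directory is matched by `.gitignore`, hence the `-f`):
```
# Lists tracked files under the trampoline crate; empty output means the add failed
git ls-files crates/uv-trampoline
```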
I originally used Python 3.10, since 3.10 and 3.11 are by far the most
common (at least for [Ruff](https://pypistats.org/packages/ruff)). But
3.12 should give Python tools the most favorable benchmarks.
## Summary
Overall, this is similar to the Poetry integration, with some simplifications
(e.g., we don't need to translate to Poetry's dependency syntax) and some
adjustments to how we manage the cache and virtual environment.
## Summary
This adds a benchmark in which we reuse the lockfile, but add a new
dependency to the input requirements.
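Conceptually, the incremental scenario is the following sequence (a sketch, not the script's actual implementation; the `puffin pip compile` spelling is an assumption):
```
# Resolve once to produce (and warm) the lockfile
puffin pip compile requirements.in -o requirements.txt
# Add a new dependency to the input requirements
echo "flask" >> requirements.in
# Re-resolve, reusing the existing lockfile
puffin pip compile requirements.in -o requirements.txt
```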
Running `python -m scripts.bench --poetry --puffin --pip-compile
scripts/requirements/trio.in --benchmark resolve-warm --benchmark
resolve-incremental`:
```text
Benchmark 1: pip-compile (resolve-warm)
  Time (mean ± σ):      1.169 s ±  0.023 s    [User: 0.675 s, System: 0.112 s]
  Range (min … max):    1.129 s …  1.198 s    10 runs

Benchmark 2: poetry (resolve-warm)
  Time (mean ± σ):     610.7 ms ±  10.4 ms    [User: 528.1 ms, System: 60.3 ms]
  Range (min … max):   599.9 ms … 632.6 ms    10 runs

Benchmark 3: puffin (resolve-warm)
  Time (mean ± σ):      19.3 ms ±   0.6 ms    [User: 13.5 ms, System: 13.1 ms]
  Range (min … max):    17.9 ms …  22.1 ms    122 runs

Summary
  'puffin (resolve-warm)' ran
   31.63 ± 1.19 times faster than 'poetry (resolve-warm)'
   60.53 ± 2.37 times faster than 'pip-compile (resolve-warm)'

Benchmark 1: pip-compile (resolve-incremental)
  Time (mean ± σ):      1.554 s ±  0.059 s    [User: 0.974 s, System: 0.130 s]
  Range (min … max):    1.473 s …  1.652 s    10 runs

Benchmark 2: poetry (resolve-incremental)
  Time (mean ± σ):     474.2 ms ±   2.4 ms    [User: 411.7 ms, System: 54.0 ms]
  Range (min … max):   470.6 ms … 477.7 ms    10 runs

Benchmark 3: puffin (resolve-incremental)
  Time (mean ± σ):      28.0 ms ±   1.1 ms    [User: 21.7 ms, System: 14.6 ms]
  Range (min … max):    26.7 ms …  34.4 ms    89 runs

Summary
  'puffin (resolve-incremental)' ran
   16.94 ± 0.67 times faster than 'poetry (resolve-incremental)'
   55.52 ± 3.02 times faster than 'pip-compile (resolve-incremental)'
```
## Summary
This makes the separation clearer between the legacy `pip` API and the
API we'll add in the future for the package manager itself. It also
enables seamless `puffin pip` aliasing for those that want it.
Closes #918.
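The aliasing could look like this for those that want it (a sketch; the shell syntax is all that's added, the subcommand itself is from the PR):
```
# In ~/.bashrc or similar: route `pip ...` through puffin's pip-compatible CLI
alias pip="puffin pip"
pip compile requirements.in   # expands to: puffin pip compile requirements.in
```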
I personally found the default output somewhat noisy, especially for
large requirements files. Since `--verbose` already exists, I propose
making the extra output opt-in.
## Summary
Refactors the benchmark script such that we use a single `hyperfine`
invocation per benchmark, and thus get the comparative summary, which is
_way_ nicer:
```
Benchmark 1: ./target/release/puffin (install-cold)
  Time (mean ± σ):     410.3 ms ±  19.9 ms    [User: 173.7 ms, System: 1314.5 ms]
  Range (min … max):   389.7 ms … 452.1 ms    10 runs

Benchmark 2: ./target/release/baseline (install-cold)
  Time (mean ± σ):     418.2 ms ±  14.4 ms    [User: 210.7 ms, System: 1246.0 ms]
  Range (min … max):   397.3 ms … 445.7 ms    10 runs

Summary
  './target/release/puffin (install-cold)' ran
    1.02 ± 0.06 times faster than './target/release/baseline (install-cold)'
```
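The mechanism behind the comparative summary is that `hyperfine` accepts multiple commands in a single invocation and compares them against each other. Roughly (a sketch; the arguments the script actually passes are assumptions):
```
hyperfine \
    --warmup 3 \
    "./target/release/puffin pip-sync requirements.txt" \
    "./target/release/baseline pip-sync requirements.txt"
```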
This takes some of Zanie's suggestions to simplify the custom-path API
in the benchmark script. Each tool is now a dedicated argument, like:
```
python -m scripts.bench --pip-sync --poetry requirements.in
```
To provide custom binaries:
```
python -m scripts.bench \
    --puffin-path ./target/release/puffin \
    --puffin-path ./target/release/baseline \
    requirements.in
```
## Summary
This PR enables the use of the `bench.py` script to benchmark Puffin
itself. This is something I often do via a process like the following
(collected into a single script below):
- Checkout the `main` branch (or any other baseline branch)
- Run: `cargo build --release`
- Run: `mv ./target/release/puffin ./target/release/baseline`
- Checkout a development branch
- Run: `cargo build --release`
- (New) Run: `python bench.py --tool puffin --path
./target/release/puffin --tool puffin --path ./target/release/baseline
requirements.in`
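Collected into a single script (a sketch; the development branch name is a placeholder):
```
git checkout main
cargo build --release
mv ./target/release/puffin ./target/release/baseline
git checkout my-feature-branch   # placeholder for the development branch
cargo build --release
python bench.py \
    --tool puffin --path ./target/release/puffin \
    --tool puffin --path ./target/release/baseline \
    requirements.in
```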
## Summary
Enables benchmarking against Poetry for resolution and installation:
```
Benchmark 1: pip-tools (resolve-cold)
  Time (mean ± σ):     962.7 ms ± 241.9 ms    [User: 322.8 ms, System: 80.5 ms]
  Range (min … max):   714.9 ms … 1459.4 ms    10 runs

Benchmark 1: puffin (resolve-cold)
  Time (mean ± σ):     193.2 ms ±   8.2 ms    [User: 31.3 ms, System: 22.8 ms]
  Range (min … max):   179.8 ms … 206.4 ms    14 runs

Benchmark 1: poetry (resolve-cold)
  Time (mean ± σ):     900.7 ms ±  21.2 ms    [User: 371.6 ms, System: 92.1 ms]
  Range (min … max):   855.7 ms … 933.4 ms    10 runs

Benchmark 1: pip-tools (resolve-warm)
  Time (mean ± σ):     386.0 ms ±  19.1 ms    [User: 255.8 ms, System: 46.2 ms]
  Range (min … max):   368.7 ms … 434.5 ms    10 runs

Benchmark 1: puffin (resolve-warm)
  Time (mean ± σ):       8.1 ms ±   0.4 ms    [User: 4.4 ms, System: 5.1 ms]
  Range (min … max):     7.5 ms …  11.1 ms    183 runs

Benchmark 1: poetry (resolve-warm)
  Time (mean ± σ):     336.3 ms ±   0.6 ms    [User: 283.6 ms, System: 44.7 ms]
  Range (min … max):   335.0 ms … 337.3 ms    10 runs
```