## Summary
This seems to be one of the most consistent benchmark cases we have in
terms of standard deviation:
```
➜ hyperfine "target/profiling/main pip compile scripts/requirements/airflow.in" --runs 200
Benchmark 1: target/profiling/main pip compile scripts/requirements/airflow.in
Time (mean ± σ): 292.6 ms ± 6.6 ms [User: 414.1 ms, System: 194.2 ms]
Range (min … max): 282.7 ms … 320.1 ms 200 runs
```
For smaller benchmarks, scispacy and dtlssocket seem to be a bit more
consistent than our current jupyter benchmark, but the jupyter benchmark
hasn't caused any problems so far, so I'll leave it in place for now.
This update pulls in https://github.com/pubgrub-rs/pubgrub/pull/177,
which optimizes common range operations in pubgrub. Please refer to that
PR for a more extensive description and discussion of the changes.
The changes optimize the last remaining pathological case,
`bio_embeddings[all]` on Python 3.12, which has to try 100k versions,
bringing it from 12s down to 3s in the cached case. They should also
enable smarter prefetching in batches
(https://github.com/astral-sh/uv/issues/170), even though a naive
attempt didn't show better network usage.
**before** 12s

**after** 3s

```
$ taskset -c 0 hyperfine --warmup 1 "../uv/target/profiling/main-uv pip compile ../uv/scripts/requirements/bio_embeddings.in" "../uv/target/profiling/branch-uv pip compile ../uv/scripts/requirements/bio_embeddings.in"
Benchmark 1: ../uv/target/profiling/main-uv pip compile ../uv/scripts/requirements/bio_embeddings.in
Time (mean ± σ): 12.321 s ± 0.064 s [User: 12.014 s, System: 0.300 s]
Range (min … max): 12.224 s … 12.406 s 10 runs
Benchmark 2: ../uv/target/profiling/branch-uv pip compile ../uv/scripts/requirements/bio_embeddings.in
Time (mean ± σ): 3.109 s ± 0.004 s [User: 2.782 s, System: 0.321 s]
Range (min … max): 3.103 s … 3.116 s 10 runs
Summary
../uv/target/profiling/branch-uv pip compile ../uv/scripts/requirements/bio_embeddings.in ran
3.96 ± 0.02 times faster than ../uv/target/profiling/main-uv pip compile ../uv/scripts/requirements/bio_embeddings.in
```
It also adds `bio_embeddings[all]` as a requirements test case.
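For reference, the new requirements input presumably contains just the single extra-enabled requirement; the exact file contents below are an assumption based on the description above:
```
# scripts/requirements/bio_embeddings.in (contents assumed)
bio_embeddings[all]
```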
## Summary
This fixes https://github.com/astral-sh/uv/issues/1704 by removing the
version from the produced header.
## Test Plan
Checked with clippy, and updated the affected tests as well.
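For anyone reproducing the check locally, something along these lines should work; the exact flags are an assumption rather than taken from CI:
```
# Lint the workspace and run the test suite (flags are illustrative)
cargo clippy --workspace --all-targets
cargo test --workspace
```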
First, replace all usages inside files in place. I used my editor for
this; if someone wants to contribute a one-liner, that'd be fun.
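A sketch of one possible one-liner, assuming GNU sed and that every tracked file containing the string is safe to rewrite (untested, adjust as needed):
```
# Rewrite all tracked files containing "puffin" in place
git grep -l puffin | xargs sed -i 's/puffin/uv/g'
```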
Then, update directory and file names:
```
# Run twice for nested directories
find . -type d -print0 | xargs -0 rename s/puffin/uv/g
find . -type d -print0 | xargs -0 rename s/puffin/uv/g
# Update files
find . -type f -print0 | xargs -0 rename s/puffin/uv/g
```
Then, add all the files again:
```
# Add all the files again
git add crates
git add python/uv
# This one needs a force-add
git add -f crates/uv-trampoline
```
Instrument the main function as an anchor span for checking overhead,
and update tracing-durations-export to 0.2.0 to differentiate between
blocking and non-blocking tasks.
Add a `jupyter.in` requirement, since `pip install jupyter` is a common
operation. I tried `jupyterlab` too, but there was no measurable
difference in performance (1.00 ± 0.07).
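The comparison was presumably done with a hyperfine invocation along these lines; the `jupyterlab.in` file and the paths are illustrative assumptions:
```
hyperfine --warmup 1 \
  "target/profiling/main pip compile scripts/requirements/jupyter.in" \
  "target/profiling/main pip compile scripts/requirements/jupyterlab.in"
```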
## Summary
This makes the separation clearer between the legacy `pip` API and the
API we'll add in the future for the package manager itself. It also
enables seamless `puffin pip` aliasing for those who want it.
Closes #918.
This requirements file contains a pathological case where we have to
step through all the versions. I'm putting it in git to make it easier
to collaborate on it.
Remove a dev annoyance: this makes the development paths shorter, e.g.
`scripts/benchmarks/requirements/all-kinds.in` becomes
`scripts/requirements/all-kinds.in`. The requirements are also shared
between different tasks rather than being used only for benchmarking.
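One way to perform such a move while keeping history intact would be a `git mv`; this is a sketch, not necessarily how the change was made:
```
git mv scripts/benchmarks/requirements scripts/requirements
```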