In #6827, we stopped building the uv-dev binary by default. As an unintended side effect, we also stopped running the tests that ensure the schema is up-to-date.
To fix this, we split uv-dev into an unconditional library, with only
the binary being a conditional build. This way, `cargo test` and `cargo
nextest` pick those tests up again.
An alternative would be running tests with the `dev` feature, with the
side effect of always building the uv-dev binary, too.
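For reference, the kind of test this restores is a freshness check: regenerate the schema in-process and compare it against the checked-in file. A minimal sketch, assuming the schema is derived with `schemars` (the `Options` struct and file path here are illustrative, not uv's exact code):

```rust
#[cfg(test)]
mod tests {
    use schemars::{schema_for, JsonSchema};

    // Stand-in for the real settings struct, which lives elsewhere.
    #[derive(JsonSchema)]
    struct Options {
        cache_dir: Option<String>,
    }

    #[test]
    fn schema_is_up_to_date() {
        // Regenerate the schema and compare it to the committed copy; the
        // test fails whenever the settings change without a regeneration.
        let schema = schema_for!(Options);
        let generated = serde_json::to_string_pretty(&schema).unwrap();
        let checked_in = std::fs::read_to_string("uv.schema.json").unwrap();
        assert_eq!(generated, checked_in, "schema is stale; regenerate and commit");
    }
}
```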
## Summary
Resolves #8417
I've just begun learning procedural macros, so this PR is more of a
proof of concept. It's still a work in progress, and I welcome any
assistance or feedback.
## Summary
This PR declares and documents, under a common struct, all environment variables that are used in one way or another in `uv`, whether internally, externally, or transitively.
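Roughly, the shape is a single struct of associated string constants that call sites reference instead of scattering string literals. A simplified sketch (the variables shown are just examples):

```rust
/// Declares and documents every environment variable in one place.
pub struct EnvVars;

impl EnvVars {
    /// Equivalent to the `--cache-dir` command-line argument.
    pub const UV_CACHE_DIR: &'static str = "UV_CACHE_DIR";

    /// Equivalent to the `--no-cache` command-line argument.
    pub const UV_NO_CACHE: &'static str = "UV_NO_CACHE";
}

// Call sites then read through the constant, e.g.:
// let cache_dir = std::env::var(EnvVars::UV_CACHE_DIR).ok();
```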
I think that over time, as uv has grown, many environment variables have been introduced. It's hard to know which ones exist, which ones are missing, what they're used for, or where they're used across the code. The docs only document a handful of them; for the others, you'd have to dive into the code and inspect across crates to know where they're used or where they're relevant.
This PR is a starting attempt to unify them, make it easier to discover which ones we have, and maybe unlock future possibilities in automating documentation generation for them.
I think we can split them into multiple structs later to organize them better, but given the high influx of PRs and the new environment variables they introduce or reuse, it would be hard to sort them all into properly namespaced structs right now without merge conflicts or falling out of date.
I don't think this has any impact on performance, as they should all still be inlined, although it may affect local build times: changes to the environment variables would likely trigger a rebuild of more crates. Lastly, some of them are declared but not used in the code, for example those in `build.rs`. I left them declared because I still think it's useful to at least have a reference.
Did I miss any? Are their initial docs cohesive?
Note: `uv-static` is a terrible name for a new crate; thoughts? Other names considered: `uv-vars` and `uv-consts`.
## Test Plan
Existing tests
## Summary
This PR exposes uv's PEP 517 implementation via a `uv build` frontend,
such that you can use `uv build` to build source and binary distributions (i.e., sdists and wheels) from a given directory.
There are some TODOs that I'll tackle in separate PRs:
- [x] Support building a wheel from a source distribution (rather than
from source) (#6898)
- [x] Stream the build output (#6912)
Closes https://github.com/astral-sh/uv/issues/1510
Closes https://github.com/astral-sh/uv/issues/1663.
## Summary
Move completely off tokio's multi-threaded runtime. We've slowly been
making changes to be smarter about scheduling in various places instead
of depending on tokio's general purpose work-stealing, notably
https://github.com/astral-sh/uv/pull/3627 and
https://github.com/astral-sh/uv/pull/4004. We now no longer benefit from
the multi-threaded runtime, as we run all I/O on the main thread.
There's one remaining instance of `block_in_place` that can be swapped
for `rayon::spawn`.
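The resulting shape, roughly (a simplified sketch, not uv's exact code): a current-thread runtime for all I/O, with CPU-bound work handed to rayon and awaited over a oneshot channel, since `block_in_place` is only usable on the multi-threaded runtime.

```rust
use tokio::sync::oneshot;

fn expensive_cpu_work() -> u64 {
    (0..10_000_000u64).sum() // stand-in for CPU-bound work
}

fn main() {
    // All I/O now runs on the main thread.
    let runtime = tokio::runtime::Builder::new_current_thread()
        .enable_all()
        .build()
        .unwrap();

    runtime.block_on(async {
        // Instead of `tokio::task::block_in_place` (which panics on a
        // current-thread runtime), ship the work to rayon's thread pool
        // and await the result without blocking the I/O thread.
        let (tx, rx) = oneshot::channel();
        rayon::spawn(move || {
            let _ = tx.send(expensive_cpu_work());
        });
        let result = rx.await.expect("worker dropped the sender");
        println!("{result}");
    });
}
```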
This change is a small performance improvement due to removing some
unnecessary overhead of the multi-threaded runtime (e.g. spawning
threads), but nothing major. It also removes some noise from profiles.
## Test Plan
```
Benchmark 1: ./target/profiling/uv (resolve-warm)
Time (mean ± σ): 14.9 ms ± 0.3 ms [User: 3.0 ms, System: 17.3 ms]
Range (min … max): 14.1 ms … 15.8 ms 169 runs
Benchmark 2: ./target/profiling/baseline (resolve-warm)
Time (mean ± σ): 16.1 ms ± 0.3 ms [User: 3.9 ms, System: 18.7 ms]
Range (min … max): 15.1 ms … 17.3 ms 162 runs
Summary
./target/profiling/uv (resolve-warm) ran
1.08 ± 0.03 times faster than ./target/profiling/baseline (resolve-warm)
```
See https://github.com/astral-sh/uv/issues/2617
Note this also includes:
- #2918
- #2931 (pending)
A first step towards Python toolchain management in Rust.
First, we add a new crate to manage Python download metadata:
- Adds a new `uv-toolchain` crate
- Adds Rust structs for Python version download metadata
- Duplicates the script which downloads Python version metadata
- Adds a script to generate Rust code from the JSON metadata
- Adds a utility to download and extract the Python version
I explored some alternatives, like a build script using `serde` and `uneval` to automatically construct the code from our structs, but deemed it too heavy. Unlike Rye, I don't generate the Rust directly from the web requests; instead, there's an intermediate JSON layer to speed up iteration on the Rust types.
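To illustrate the shape of the generated code (the field names and values here are assumptions, not the real structs), the JSON metadata becomes a static table of entries like:

```rust
// One generated entry per available Python download.
pub struct PythonDownload {
    pub key: &'static str,
    pub major: u8,
    pub minor: u8,
    pub patch: u8,
    pub url: &'static str,
    pub sha256: Option<&'static str>,
}

pub const DOWNLOADS: &[PythonDownload] = &[PythonDownload {
    key: "cpython-3.12.0-linux-x86_64",
    major: 3,
    minor: 12,
    patch: 0,
    url: "https://example.com/cpython-3.12.0.tar.zst", // placeholder URL
    sha256: None,
}];
```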
Next, we add a `uv-dev` command, `fetch-python`, to download Python versions per the bootstrapping script.
- Downloads a requested version or reads from `.python-versions`
- Extracts to `UV_BOOTSTRAP_DIR`
- Links executables for path extension
This command is not really intended to be user-facing, but it's a good PoC for the `uv-toolchain` API. Hash checking (via the sha256) isn't implemented yet; we can do that in a follow-up.
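When we do add it, the check itself is small, e.g. with the `sha2` and `hex` crates (a sketch, assuming the expected digest comes from the generated metadata):

```rust
use sha2::{Digest, Sha256};

/// Compare a downloaded archive against the expected hex digest.
fn verify_sha256(bytes: &[u8], expected_hex: &str) -> bool {
    let digest = Sha256::digest(bytes);
    hex::encode(digest) == expected_hex.to_ascii_lowercase()
}
```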
Finally, we remove the `scripts/bootstrap` directory, update CI to use
the new command, and update the CONTRIBUTING docs.
<img width="1023" alt="Screenshot 2024-04-08 at 17 12 15"
src="57bd3cf1-7477-4bb8-a8e9-802a00d772cb">
## Summary
This PR removes the custom `DistFinder` that we use in `pip sync`. This
originally existed because `VersionMap` wasn't lazy, and so we saved a
lot of time in `DistFinder` by reading distribution data lazily. But
now, AFAICT, there's really no benefit. Maintaining `DistFinder` means
we effectively have to maintain two resolvers, and end up fixing bugs in
`DistFinder` that don't exist in the `Resolver` (like #2688).
Closes #2694.
Closes #2443.
## Test Plan
I ran this benchmark a bunch. It's basically a wash. Sometimes one is
faster than the other.
```
❯ python -m scripts.bench \
--uv-path ./target/release/main \
--uv-path ./target/release/uv \
scripts/requirements/compiled/trio.txt --min-runs 50 --benchmark install-warm --warmup 25
Benchmark 1: ./target/release/main (install-warm)
Time (mean ± σ): 54.0 ms ± 10.6 ms [User: 8.7 ms, System: 98.1 ms]
Range (min … max): 45.5 ms … 94.3 ms 50 runs
Warning: Statistical outliers were detected. Consider re-running this benchmark on a quiet PC without any interferences from other programs. It might help to use the '--warmup' or '--prepare' options.
Benchmark 2: ./target/release/uv (install-warm)
Time (mean ± σ): 50.7 ms ± 9.2 ms [User: 8.7 ms, System: 98.6 ms]
Range (min … max): 44.0 ms … 98.6 ms 50 runs
Warning: The first benchmarking run for this command was significantly slower than the rest (77.6 ms). This could be caused by (filesystem) caches that were not filled until after the first run. You should consider using the '--warmup' option to fill those caches before the actual benchmark. Alternatively, use the '--prepare' option to clear the caches before each timing run.
Summary
'./target/release/uv (install-warm)' ran
1.06 ± 0.29 times faster than './target/release/main (install-warm)'
```
Add a `--compile` option to `pip install` and `pip sync`.
I chose to implement this as a separate pass over the entire venv. If we
wanted to compile during installation, we'd have to make sure that
writing is exclusive, to avoid concurrent processes writing broken
`.pyc` files. Additionally, this ensures that the entire site-packages directory is bytecode-compiled, even packages that weren't installed by this `uv` invocation. The disadvantage is that we do not update RECORD and
rely on this comment from [PEP 491](https://peps.python.org/pep-0491/):
> Uninstallers should be smart enough to remove .pyc even if it is not
mentioned in RECORD.
If this is a problem we can change it to run during installation and
write RECORD entries.
Internally, this is implemented as an async work-stealing subprocess
worker pool. The producer is a directory traversal over site-packages,
sending each `.py` file to a bounded async FIFO queue/channel. Each
worker has a long-running Python process. It pops the queue to get a single path (or exits if the channel is closed), sends it to stdin, waits until it's informed through a line on stdout that the compilation is done, and repeats. This is fast: e.g., when installing `jupyter plotly` on Python 3.12, it processes 15876 files in 319ms with 32 threads (vs. 3.8s with a single core). Each Python process internally calls `compileall.compile_file`, the same as pip.
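A condensed sketch of that worker loop (using the `async-channel` crate for the multi-consumer queue; the embedded worker script and names are illustrative, and error handling is elided):

```rust
use std::process::Stdio;
use std::time::Duration;

use tokio::io::{AsyncBufReadExt, AsyncWriteExt, BufReader};
use tokio::process::Command;

/// Python side of the protocol: one path per stdin line, one ack per stdout line.
const WORKER_SCRIPT: &str = r#"
import compileall, sys
for line in sys.stdin:
    compileall.compile_file(line.strip(), quiet=2)
    print("done", flush=True)
"#;

async fn compile_worker(rx: async_channel::Receiver<String>) -> std::io::Result<()> {
    // One long-running interpreter per worker.
    let mut child = Command::new("python3")
        .args(["-c", WORKER_SCRIPT])
        .stdin(Stdio::piped())
        .stdout(Stdio::piped())
        .spawn()?;
    let mut stdin = child.stdin.take().unwrap();
    let mut acks = BufReader::new(child.stdout.take().unwrap()).lines();

    // `recv` errors once the producer closes the channel: that's shutdown.
    while let Ok(path) = rx.recv().await {
        stdin.write_all(path.as_bytes()).await?;
        stdin.write_all(b"\n").await?;
        // Wait for the ack, bailing out if the worker appears stuck.
        let _ack = tokio::time::timeout(Duration::from_secs(10), acks.next_line())
            .await
            .expect("compile worker timed out")?;
    }
    drop(stdin); // EOF on stdin lets the child exit cleanly
    child.wait().await?;
    Ok(())
}
```

The producer closes the channel once the site-packages traversal finishes, which is what ends each worker's `recv` loop and lets the child processes exit.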
Like pip, we ignore and silence all compilation errors
(https://github.com/astral-sh/uv/issues/1559). There is a 10s timeout to handle the case where a worker gets stuck. For the reviewers: please check whether I missed any spots where we could deadlock; this is the hardest part of this PR.
I've added `uv-dev compile <dir>` and `uv-dev clear-compile <dir>`
commands, mainly for my own benchmarking. I don't want to expose them in
`uv`; they're almost certainly not the correct workflow, and we don't want to support them.
Fixes #1788. Closes #1559. Closes #1928.
First, replace all usages in files in-place. I used my editor for this.
If someone wants to add a one-liner, that'd be fun.
Then, update directory and file names:
```
# Run twice for nested directories
find . -type d -print0 | xargs -0 rename s/puffin/uv/g
find . -type d -print0 | xargs -0 rename s/puffin/uv/g
# Update files
find . -type f -print0 | xargs -0 rename s/puffin/uv/g
```
Then, add all the files again:
```
# Add all the files again
git add crates
git add python/uv
# This one needs a force-add
git add -f crates/uv-trampoline
```