## Summary
This PR makes a variety of invalid states unrepresentable by changing
`Preference` to require a `PackageName` and `Version`, rather than
accepting a generic `Requirement`. There should be no meaningful
behavior changes.
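Roughly, the shape change looks like this; a minimal sketch with
stand-in types (the real `PackageName`, `Version`, and `Requirement` are
uv's existing types, and the actual field layout may differ):
```rust
// Stand-ins so the sketch compiles on its own.
struct PackageName(String);
struct Version(String);
struct Requirement; // could carry URLs, version ranges, markers, ...

// Before (illustrative): any requirement was accepted, so unpinned or
// URL-based preferences were representable even though they are never
// meaningful as preferences.
struct OldPreference {
    requirement: Requirement,
}

// After: only a pinned name/version pair can be constructed.
struct Preference {
    name: PackageName,
    version: Version,
}
```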
When parsing requirements from any source, we now parse the url parts
directly (and reject unsupported urls) instead of parsing them at a
later stage. This removes a bunch of error branches and concludes the
work of parsing url parts once and passing them around everywhere.
Many usages of the assembled `VerbatimUrl` remain, but these can be
removed incrementally.
Please review commit-by-commit.
Updates our Python interpreter discovery to conform to the rules
described in #2386; please see that issue for a full description of the
behavior. Briefly, we will now search for interpreters that satisfy a
requested version without stopping at the first Python executable.
Additionally, if retrieving information about an interpreter fails, we
will continue searching for a working interpreter. We also add the
plumbing necessary to request Python implementations other than CPython,
though we do not add support for other implementations at this time.
A major internal goal of this work is to prepare for user-facing managed
toolchains, i.e., fetching a requested version during `uv run`. These APIs
are not introduced, but there is some managed toolchain handling as
required for our test suite.
Some noteworthy implementation changes:
- The `uv_interpreter::find_python` module has been removed in favor of
a `uv_interpreter::discovery` module.
- There are new types to help structure interpreter requests and track
sources
- Executable discovery is implemented as a big lazy iterator and is a
central authority for source precedence (see the sketch after this list)
- `uv_interpreter::Error` variants were split into scoped types in each
module
- There's much more unit test coverage, but not for Windows yet
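As a rough illustration of the iterator-based design (hypothetical names
and sources; not the actual `uv_interpreter::discovery` code):
```rust
use std::path::PathBuf;

// Hypothetical sketch: candidates are yielded lazily in precedence order
// (active virtualenv first, then `PATH`), so the caller can keep probing
// past executables that fail to provide interpreter info.
fn python_executables() -> impl Iterator<Item = PathBuf> {
    let active_env = std::env::var_os("VIRTUAL_ENV")
        .map(|root| PathBuf::from(root).join("bin").join("python"))
        .into_iter();
    let search_path = std::env::var_os("PATH").into_iter().flat_map(|path| {
        std::env::split_paths(&path)
            .map(|dir| dir.join("python3"))
            .collect::<Vec<_>>()
    });
    // Chaining encodes source precedence in one place.
    active_env.chain(search_path).filter(|path| path.exists())
}

fn main() {
    for candidate in python_executables().take(5) {
        println!("would probe {}", candidate.display());
    }
}
```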
Remaining work:
- [x] Write new test cases
- [x] Determine correct behavior around executables in the current
directory
- _Future_: Combine `PythonVersion` and `VersionRequest`
- _Future_: Consider splitting `ManagedToolchain` into local and remote
variants
- _Future_: Add Windows unit test coverage
- _Future_: Explore behavior around implementation precedence (i.e.
CPython over PyPy)
Refactors split into:
- #3329
- #3330
- #3331
- #3332

Closes #2386
## Summary
This PR introduces parallelism to the resolver. Specifically, we can
perform PubGrub resolution on a separate thread, while keeping all I/O
on the tokio thread. We already have the infrastructure set up for this
with the channel and `OnceMap`, which makes this change relatively
simple. The big change needed to make this possible is removing the
lifetimes on some of the types that need to be shared between the
resolver and pubgrub thread.
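A rough sketch of the thread split, assuming a tokio runtime and an
unbounded channel (all names here are illustrative, not uv's actual
types):
```rust
use tokio::sync::mpsc;

// Hypothetical request sent from the solver thread to the I/O side.
#[derive(Debug)]
enum Request {
    Metadata(String), // package name whose metadata we need
}

#[tokio::main]
async fn main() {
    let (request_tx, mut request_rx) = mpsc::unbounded_channel::<Request>();

    // CPU-bound PubGrub resolution runs on its own thread...
    let solver = std::thread::spawn(move || {
        request_tx.send(Request::Metadata("flask".into())).ok();
        // ... run the solver loop, blocking on responses as needed ...
    });

    // ...while all network/file I/O stays on the tokio runtime.
    while let Some(request) = request_rx.recv().await {
        match request {
            Request::Metadata(name) => println!("fetching metadata for {name}"),
        }
    }

    solver.join().unwrap();
}
```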
A related PR, https://github.com/astral-sh/uv/pull/1163, found that
adding `yield_now` calls improved throughput. With optimal scheduling we
might be able to get away with everything on the same thread here.
However, in the ideal pipeline with perfect prefetching, the resolution
and prefetching can run completely in parallel without depending on one
another. While this would be very difficult to achieve, even with our
current prefetching pattern we see a consistent performance improvement
from parallelism.
This does also require reverting a few of the changes from
https://github.com/astral-sh/uv/pull/3413, but not all of them. The
sharing is isolated to the resolver task.
## Test Plan
On smaller tasks performance is mixed with ~2% improvements/regressions
on both sides. However, on medium-large resolution tasks we see the
benefits of parallelism, with improvements anywhere from 10-50%.
```
./scripts/requirements/jupyter.in
Benchmark 1: ./target/profiling/baseline (resolve-warm)
  Time (mean ± σ):      29.2 ms ±   1.8 ms    [User: 20.3 ms, System: 29.8 ms]
  Range (min … max):    26.4 ms …  36.0 ms    91 runs

Benchmark 2: ./target/profiling/parallel (resolve-warm)
  Time (mean ± σ):      25.5 ms ±   1.0 ms    [User: 19.5 ms, System: 25.5 ms]
  Range (min … max):    23.6 ms …  27.8 ms    99 runs

Summary
  ./target/profiling/parallel (resolve-warm) ran
    1.15 ± 0.08 times faster than ./target/profiling/baseline (resolve-warm)
```
```
./scripts/requirements/boto3.in
Benchmark 1: ./target/profiling/baseline (resolve-warm)
  Time (mean ± σ):      487.1 ms ±   6.2 ms    [User: 464.6 ms, System: 61.6 ms]
  Range (min … max):    480.0 ms … 497.3 ms    10 runs

Benchmark 2: ./target/profiling/parallel (resolve-warm)
  Time (mean ± σ):      430.8 ms ±   9.3 ms    [User: 529.0 ms, System: 77.2 ms]
  Range (min … max):    417.1 ms … 442.5 ms    10 runs

Summary
  ./target/profiling/parallel (resolve-warm) ran
    1.13 ± 0.03 times faster than ./target/profiling/baseline (resolve-warm)
```
```
./scripts/requirements/airflow.in
Benchmark 1: ./target/profiling/baseline (resolve-warm)
  Time (mean ± σ):      478.1 ms ±  18.8 ms    [User: 482.6 ms, System: 205.0 ms]
  Range (min … max):    454.7 ms … 508.9 ms    10 runs

Benchmark 2: ./target/profiling/parallel (resolve-warm)
  Time (mean ± σ):      308.7 ms ±  11.7 ms    [User: 428.5 ms, System: 209.5 ms]
  Range (min … max):    287.8 ms … 323.1 ms    10 runs

Summary
  ./target/profiling/parallel (resolve-warm) ran
    1.55 ± 0.08 times faster than ./target/profiling/baseline (resolve-warm)
```
## Summary
This PR consolidates the concurrency limits used throughout `uv` and
exposes two limits, `UV_CONCURRENT_DOWNLOADS` and
`UV_CONCURRENT_BUILDS`, as environment variables.
Currently, `uv` has a number of concurrent streams that it buffers using
relatively arbitrary limits for backpressure. However, many of these
limits are conflated. We run a relatively small number of tasks overall
and should start most things as soon as possible. What we really want to
limit are three separate operations:
- File I/O. This is managed by tokio's blocking pool and we should not
really have to worry about it.
- Network I/O.
- Python build processes.
Because the current limits span a broad range of tasks, it's possible
that a limit meant for network I/O is occupied by tasks performing
builds, reading from the file system, or even waiting on a `OnceMap`. We
also don't limit build processes that end up being required to perform a
download. While this may not pose a performance problem because our
limits are relatively high, it does mean that the limits do not do what
we want, making it tricky to expose them to users
(https://github.com/astral-sh/uv/issues/1205,
https://github.com/astral-sh/uv/issues/3311).
After this change, the limits on network I/O and build processes are
centralized and managed by semaphores. All other tasks are unbuffered
(note that these tasks are still bounded, so backpressure should not be
a problem).
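A minimal sketch of the pattern, assuming tokio's `Semaphore` (the
helper names and the fallback value here are illustrative, not uv's
actual defaults):
```rust
use std::sync::Arc;
use tokio::sync::Semaphore;

// Illustrative: read the limit from the environment variable this PR
// exposes, with an assumed fallback when unset.
fn download_limit() -> usize {
    std::env::var("UV_CONCURRENT_DOWNLOADS")
        .ok()
        .and_then(|value| value.parse().ok())
        .unwrap_or(50)
}

async fn fetch_all(urls: Vec<String>) {
    let semaphore = Arc::new(Semaphore::new(download_limit()));
    let mut tasks = Vec::new();
    for url in urls {
        let semaphore = Arc::clone(&semaphore);
        tasks.push(tokio::spawn(async move {
            // Each download holds a permit for its duration; excess tasks
            // wait here instead of opening more connections.
            let _permit = semaphore.acquire().await.expect("semaphore closed");
            println!("downloading {url}");
            // ... perform the network request ...
        }));
    }
    for task in tasks {
        task.await.expect("task panicked");
    }
}

#[tokio::main]
async fn main() {
    fetch_all(vec!["https://example.invalid/a".into()]).await;
}
```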
We now use the `MarkerEnvironment` getters and setters everywhere.
There were some places where we wanted to build a `MarkerEnvironment`
out of whole cloth, usually in tests. To facilitate those use cases, we
add a `MarkerEnvironmentBuilder` that provides a convenient constructor.
It's basically like a `MarkerEnvironment::new`, but with named
parameters. That's useful here because there are so many fields (and
many of them have the same type).
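A minimal sketch of the resulting call site, assuming the types live in
`pep508_rs` (the field set follows the standard PEP 508 markers; the
real API may differ slightly):
```rust
use pep508_rs::{MarkerEnvironment, MarkerEnvironmentBuilder};

fn test_environment() -> MarkerEnvironment {
    // The struct literal gives us named parameters, so a hand-built test
    // environment stays readable despite the many same-typed fields.
    MarkerEnvironment::try_from(MarkerEnvironmentBuilder {
        implementation_name: "cpython",
        implementation_version: "3.11.5",
        os_name: "posix",
        platform_machine: "x86_64",
        platform_python_implementation: "CPython",
        platform_release: "",
        platform_system: "Linux",
        platform_version: "",
        python_full_version: "3.11.5",
        python_version: "3.11",
        sys_platform: "linux",
    })
    .expect("a valid test environment")
}
```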
This commit touches a lot of code, but the conceptual change here is
pretty simple: make it so we can run the resolver without providing a
`MarkerEnvironment`. This also indicates that the resolver should run in
universal mode. The effect of a missing marker environment is that all
marker expressions that reference the marker environment evaluate to
`true`; in other words, they are ignored. (The only markers we evaluate
in that context are extras, which are the only markers that don't
depend on the environment.)
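To make the rule concrete, here's a tiny self-contained sketch with
hypothetical stand-in types (not uv's): environment-dependent markers
default to `true` when no environment is given, while extras evaluate
normally.
```rust
/// Hypothetical stand-in for the marker environment.
struct Env {
    sys_platform: String,
}

/// Hypothetical two-case marker, just to show the rule.
enum Marker {
    /// e.g. `sys_platform == "darwin"` (depends on the environment).
    SysPlatform(&'static str),
    /// e.g. `extra == "dev"` (independent of the environment).
    Extra(&'static str),
}

fn evaluate(marker: &Marker, env: Option<&Env>, extras: &[&str]) -> bool {
    match (marker, env) {
        // Extras never consult the environment.
        (Marker::Extra(name), _) => extras.contains(name),
        // Normal mode: consult the environment.
        (Marker::SysPlatform(value), Some(env)) => env.sys_platform == *value,
        // Universal mode: no environment, so the expression is ignored (true).
        (Marker::SysPlatform(_), None) => true,
    }
}

fn main() {
    let marker = Marker::SysPlatform("darwin");
    let linux = Env { sys_platform: "linux".into() };
    assert!(!evaluate(&marker, Some(&linux), &[]));
    assert!(evaluate(&marker, None, &[])); // universal mode
}
```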
One interesting change here is that a `Resolver` no longer needs an
`Interpreter`. Previously, it had only been using it to construct a
`PythonRequirement`, by filling in the installed version from the
`Interpreter` state. But we now construct a `PythonRequirement`
explicitly since its `target` Python version should no longer be tied to
the `MarkerEnvironment`. (Currently, the marker environment is mutated
such that its `python_full_version` is derived from multiple sources,
including the CLI, which I found a touch confusing.)
The change in behavior can now be observed through the
`--unstable-uv-lock-file` flag. First, without it:
```
$ cat requirements.in
anyio>=4.3.0 ; sys_platform == "linux"
anyio<4 ; sys_platform == "darwin"
$ cargo run -qp uv -- pip compile -p3.10 requirements.in
anyio==4.3.0
exceptiongroup==1.2.1
    # via anyio
idna==3.7
    # via anyio
sniffio==1.3.1
    # via anyio
typing-extensions==4.11.0
    # via anyio
```
And now with it:
```
$ cargo run -qp uv -- pip compile -p3.10 requirements.in --unstable-uv-lock-file
× No solution found when resolving dependencies:
╰─▶ Because you require anyio>=4.3.0 and anyio<4, we can conclude that the requirements are unsatisfiable.
```
This is expected at this point because the marker expressions are being
explicitly ignored, *and* there is no forking done yet to account for
the conflict.
## Introduction
PEP 621 is limited. Specifically, it lacks
* Relative path support
* Editable support
* Workspace support
* Index pinning or any sort of index specification
The semantics of urls are a custom extension: PEP 440 does not specify
how to use git references or subdirectories; instead, pip has a custom
stringly-typed format. We need to somehow support these while staying
compatible with PEP 621.
## `tool.uv.sources`
Drawing inspiration from cargo, poetry, and rye, we add `tool.uv.sources`
and (for now, as stubs only) `tool.uv.workspace`:
```toml
[project]
name = "albatross"
version = "0.1.0"
dependencies = [
  "tqdm >=4.66.2,<5",
  "torch ==2.2.2",
  "transformers[torch] >=4.39.3,<5",
  "importlib_metadata >=7.1.0,<8; python_version < '3.10'",
  "mollymawk ==0.1.0"
]

[tool.uv.sources]
tqdm = { git = "https://github.com/tqdm/tqdm", rev = "cc372d09dcd5a5eabdc6ed4cf365bdb0be004d44" }
importlib_metadata = { url = "https://github.com/python/importlib_metadata/archive/refs/tags/v7.1.0.zip" }
torch = { index = "torch-cu118" }
mollymawk = { workspace = true }

[tool.uv.workspace]
include = [
  "packages/mollymawk"
]

[tool.uv.indexes]
torch-cu118 = "https://download.pytorch.org/whl/cu118"
```
See `docs/specifying_dependencies.md` for a detailed explanation of the
format. The basic gist is that `project.dependencies` is what ends up on
pypi, while `tool.uv.sources` are your non-published additions. We do
support the full range of PEP 508; we just hide it in the docs and
prefer the exploded table for easier readability and less confusion with
actual url parts.
This format should eventually be able to subsume requirements.txt's
current use cases. While we will continue to support the legacy `uv pip`
interface, this is a piece of uv's own top-level interface. Together
with `uv run` and a lockfile format, you should only need to write
`pyproject.toml` and do `uv run`, which generates/uses/updates your
lockfile behind the scenes, no more pip-style requirements involved. It
also lays the groundwork for implementing index pinning.
## Changes
This PR implements:
* Reading and lowering `project.dependencies`,
`project.optional-dependencies` and `tool.uv.sources` into a new
requirements format, including:
* Git dependencies
* Url dependencies
* Path dependencies, including relative and editable
* `pip install` integration
* Error reporting for invalid `tool.uv.sources`
* Json schema integration (works in pycharm, see below)
* Draft user-level docs (see `docs/specifying_dependencies.md`)
It does not implement:
* `pip compile` testing (deprioritized in favor of our own lockfile)
* Index pinning (stub definitions only)
* Development dependencies
* Workspace support (stub definitions only)
* Overrides in pyproject.toml
* Patching/replacing dependencies
One technically breaking change is that we now require a user-provided
pyproject.toml to be valid with respect to PEP 621, while included files
still fall back to PEP 517. That means a directly provided
`pyproject.toml` must be valid, whereas `pip install -r requirements.txt`
with `-e .` as its content falls back to PEP 517 as before.
## Implementation
The `pep508` requirement is replaced by a new `UvRequirement` (name up
for bikeshedding; not particularly attached to the uv prefix). The
still-existing `pep508_rs::Requirement` type uses a url format copied
from pip's requirements.txt that doesn't appropriately capture all the
features we want/need to support. The bulk of the diff is changing the
requirement type throughout the codebase.
We still use `VerbatimUrl` in many places where we would expect a
parsed/decomposed url type, specifically:
* Reading core metadata, except for top-level pyproject.toml files; we
instead fail a step later if the url isn't supported.
* Allowed `Urls`.
* `PackageId`, with a custom `CanonicalUrl` comparison instead of
canonicalizing urls eagerly.
* `PubGrubPackage`: we eventually convert the `VerbatimUrl` back to a
`Dist` (`Dist::from_url`) instead of remembering the url.
* Source dist types: we use the verbatim url even though we know, and
require, that these are supported urls we have already parsed.
I tried to improve the situation by replacing `VerbatimUrl`, but those
changes would have required massive, invasive changes (see e.g.
https://github.com/astral-sh/uv/pull/3253). A main problem is the
`VersionOrUrl` ref and applying overrides, which assume the same
requirement/url type everywhere. In its current form, this PR increases
this tech debt.
I've tried to split off PRs and commits, but the main refactoring is
still a single monolithic commit to make it compile and keep the tests
passing.
## Demo
Adding
d1ae3b85d5/pyproject.json
as json schema (v7) to pycharm for `pyproject.toml`, you can try the IDE
support already:

[dove.webm](c293c272-c80b-459d-8c95-8c46a8d198a1)
## Summary
This makes it easier to add (e.g.) JSON Schema derivations to the type.
If we have support for other dates in the future, we can generalize it
to a `UserDate` or similar.
Freethreaded Python reintroduces abiflags, since it is incompatible with
regular native modules and abi3.
Tests: none yet! We're lacking CPython 3.13 no-GIL builds we can use in
CI.
My test setup:
```
PYTHON_CONFIGURE_OPTS="--enable-shared --disable-gil" pyenv install 3.13.0a5
cargo run -q -- venv -q -p python3.13 .venv3.13 --no-cache-dir && cargo run -q -- pip install -v psutil --no-cache-dir && .venv3.13/bin/python -c "import psutil"
```
Fixes #2429
## Summary
This PR enables hash generation for URL requirements when the user
provides `--generate-hashes` to `pip compile`. While we already include
the hashes from the registry, today we omit hashes for URLs.
To power hash generation, we introduce a `HashPolicy` abstraction:
```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum HashPolicy<'a> {
    /// No hash policy is specified.
    None,
    /// Hashes should be generated (specifically, a SHA-256 hash), but not validated.
    Generate,
    /// Hashes should be validated against a pre-defined list of hashes. If necessary, hashes should
    /// be generated so as to ensure that the archive is valid.
    Validate(&'a [HashDigest]),
}
```
All of the methods on the distribution database now accept this policy,
instead of accepting `&'a [HashDigest]`.
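As a rough sketch of what that signature change enables (the enum is
restated with a stand-in digest type so the example is self-contained;
method names in the comments are illustrative):
```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub struct HashDigest; // stand-in for the real digest type

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum HashPolicy<'a> {
    None,
    Generate,
    Validate(&'a [HashDigest]),
}

impl HashPolicy<'_> {
    /// Whether the fetch path needs to run hashers over the archive at all.
    fn requires_hashing(self) -> bool {
        matches!(self, Self::Generate | Self::Validate(_))
    }
}

// Before: fn get_wheel(dist: &Dist, hashes: &[HashDigest]) could not
// distinguish "no hashes wanted" from "generate hashes for the output".
fn main() {
    let pinned = [HashDigest];
    for policy in [HashPolicy::None, HashPolicy::Generate, HashPolicy::Validate(&pinned)] {
        println!("{policy:?} -> hash while fetching: {}", policy.requires_hashing());
    }
}
```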
Closes #2378.
## Summary
This PR adds support for hash-checking mode in `pip install` and `pip
sync`. It's a large change, both in terms of the size of the diff and
the modifications in behavior, but it's also one that's hard to merge in
pieces (at least, with any test coverage) since it needs to work
end-to-end to be useful and testable.
Here are some of the most important highlights:
- We store hashes in the cache. Where we previously stored pointers to
unzipped wheels in the `archives` directory, we now store pointers with
a set of known hashes, so every pointer to an unzipped wheel also
includes its known hashes (see the sketch after this list).
- By default, we don't compute any hashes. If the user runs with
`--require-hashes`, and the cache doesn't contain those hashes, we
invalidate the cache, redownload the wheel, and compute the hashes as we
go. For users that don't run with `--require-hashes`, there will be no
change in performance. For users that _do_, the only change will be if
they don't run with `--generate-hashes` -- then they may see some
repeated work between resolution and installation, if they use `pip
compile` then `pip sync`.
- Many of the distribution types now include a `hashes` field, like
`CachedDist` and `LocalWheel`.
- Our behavior is similar to pip, in that we enforce hashes when pulling
any remote distributions, and when pulling from our own cache. Like pip,
though, we _don't_ enforce hashes if a distribution is _already_
installed.
- Hash validity is enforced in a few different places:
1. During resolution, we enforce hash validity based on the hashes
reported by the registry. If we need to access a source distribution,
though, we then enforce hash validity at that point too, prior to
running any untrusted code. (This is enforced in the distribution
database.)
2. In the install plan, we _only_ add cached distributions that have
matching hashes. If a cached distribution is missing any hashes, or the
hashes don't match, we don't return them from the install plan.
3. In the downloader, we _only_ return distributions with matching
hashes.
4. The final combination of "things we install" are: (1) the wheels from
the cache, and (2) the downloaded wheels. So this ensures that we never
install any mismatching distributions.
- Like pip, if `--require-hashes` is provided, we require that _all_
distributions are pinned with either `==` or a direct URL. We also
require that _all_ distributions have hashes.
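As promised above, a sketch of the cache-pointer change from the first
bullet (field and type names are assumptions, not uv's actual cache
schema):
```rust
use std::path::PathBuf;

// Illustrative: a pointer into the `archives` directory now records the
// digests computed when the wheel was unpacked.
struct ArchivePointer {
    /// Location of the unzipped wheel in the cache.
    archive: PathBuf,
    /// Known digests for the archive; empty when hashes were never computed.
    hashes: Vec<String>,
}

impl ArchivePointer {
    /// `--require-hashes` can trust a cached wheel only if a stored digest
    /// matches an expected one; otherwise the wheel is re-downloaded and
    /// hashed on the way in.
    fn satisfies(&self, expected: &[String]) -> bool {
        self.hashes.iter().any(|hash| expected.contains(hash))
    }
}

fn main() {
    let pointer = ArchivePointer {
        archive: PathBuf::from("archives/abc123"),
        hashes: vec!["sha256:deadbeef".into()],
    };
    assert!(pointer.satisfies(&["sha256:deadbeef".into()]));
    assert!(!pointer.satisfies(&["sha256:0000".into()]));
}
```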
There are a few notable TODOs:
- We don't support hash-checking mode for unnamed requirements. These
should be _somewhat_ rare, though? Since `pip compile` never outputs
unnamed requirements. I can fix this, it's just some additional work.
- We don't automatically enable `--require-hashes` when a hash exists in
the requirements file; users must pass `--require-hashes` explicitly.
Closes #474.
## Test Plan
I'd like to add some tests for registries that report incorrect hashes,
but otherwise: `cargo test`
## Summary
This lets us remove circular dependencies (in the future, e.g., #2945)
that arise from `FlatIndex` needing a bunch of resolver-specific
abstractions (like incompatibilities, required hashes, etc.) that aren't
necessary to _fetch_ the flat index entries.
Needed to prevent circular dependencies in my toolchain work (#2931). I
think this is probably a reasonable change as we move towards persistent
configuration too?
Unfortunately `BuildIsolation` needs to be in `uv-types` to avoid
circular dependencies still. We might be able to resolve that in the
future.
Previously, we did not consider installed distributions as candidates
while performing resolution. Here, we update the resolver to use
installed distributions that satisfy requirements instead of pulling new
distributions from the registry.
The implementation details are as follows:
- We now provide `SitePackages` to the `CandidateSelector`
- If an installed distribution satisfies the requirement, we prefer it
over remote distributions
- We do not want to allow installed distributions in some cases, i.e.,
upgrade and reinstall
- We address this by introducing an `Exclusions` type which tracks
installed packages to ignore during selection
- There's a new `ResolvedDist` wrapper with `Installed(InstalledDist)`
and `Installable(Dist)` variants (see the sketch after this list)
- This lets us pass already installed distributions throughout the
resolver
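A sketch of the wrapper's shape as described in the bullets (stand-in
payload types so it compiles on its own):
```rust
struct InstalledDist; // stand-in for uv's installed-distribution type
struct Dist; // stand-in for uv's remote/installable distribution type

/// Shape of the new wrapper: the resolver can now carry either kind of
/// distribution through candidate selection and resolution output.
enum ResolvedDist {
    /// Already present in the environment; nothing to fetch.
    Installed(InstalledDist),
    /// Resolved from a registry or URL; must be fetched and installed.
    Installable(Dist),
}
```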
The user-facing behavior is thoroughly covered in the tests, but
briefly:
- Installing a package that depends on an already-installed package
prefers the local version over the index
- Installing a package with a name that matches an already-installed URL
package does not reinstall from the index
- Reinstalling (--reinstall) a package by name _will_ pull from the
index even if an already-installed URL package is present
- To reinstall the URL package, you must specify the URL in the request
Closes https://github.com/astral-sh/uv/issues/1661
Addresses:
- https://github.com/astral-sh/uv/issues/1476
- https://github.com/astral-sh/uv/issues/1856
- https://github.com/astral-sh/uv/issues/2093
- https://github.com/astral-sh/uv/issues/2282
- https://github.com/astral-sh/uv/issues/2383
- https://github.com/astral-sh/uv/issues/2560
## Test Plan
- [x] Reproduction at `charlesnicholson/uv-pep420-bug` passes
- [x] Unit test for editable package
([#1476](https://github.com/astral-sh/uv/issues/1476))
- [x] Unit test for previously installed package with empty registry
- [x] Unit test for local non-editable package
- [x] Unit test for new version available locally but not in registry
([#2093](https://github.com/astral-sh/uv/issues/2093))
- ~[ ] Unit test for wheel not available in registry but already
installed locally
([#2282](https://github.com/astral-sh/uv/issues/2282))~ (seems
complicated and not worthwhile)
- [x] Unit test for install from URL dependency then with matching
version ([#2383](https://github.com/astral-sh/uv/issues/2383))
- [x] Unit test for install of new package that depends on installed
package does not change version
([#2560](https://github.com/astral-sh/uv/issues/2560))
- [x] Unit test that `pip compile` does _not_ consider installed
packages
## Summary
This PR enables the resolver to "accept" URLs, prereleases, and local
version specifiers for direct dependencies of path dependencies. As a
result, `uv pip install .` and `uv pip install -e .` now behave
identically, in that neither has a restriction on URL dependencies and
the like.
Closes https://github.com/astral-sh/uv/issues/2643.
Closes https://github.com/astral-sh/uv/issues/1853.
This is driving me a little crazy and is becoming a larger problem in
#2596 where I need to move more types (like `Upgrade` and `Reinstall`)
into this crate. Anything that's shared across our core resolver,
install, and build crates needs to be defined in this crate to avoid
cyclic dependencies. We've outgrown it being a single file with some
shared traits.
There are no behavioral changes here.
## Summary
When a user runs with `--output-file` and `--generate-hashes`, we should
_only_ update the hashes if the pinned version itself changes.
Closes https://github.com/astral-sh/uv/issues/1530.
The architecture of uv does not necessarily match that of the python
interpreter (#2326). In cross-compiling/testing scenarios, the operating
system can also mismatch. To solve this, we move arch and os detection
to python, vendoring the relevant pypa/packaging code, which prevents
mismatches between what the python interpreter was compiled for and what
uv was compiled for.
To make the scripts more manageable, they are now a directory in a
tempdir and we run them with `python -m`. I've simplified the
pypa/packaging code since we're still building the tags in rust. A
`Platform` is now instantiated by querying the python interpreter for
its platform. The pypa/packaging files are copied verbatim for easier
updates, except for a `lru_cache()` python 3.7 backport.
Error handling is done by a `"result": "success|error"` field that
allows passing error details to rust:
```console
$ uv venv --no-cache
× Can't use Python at `/home/konsti/projects/uv/.venv/bin/python3`
╰─▶ Unknown operation system `linux`
```
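A minimal sketch of consuming that field on the Rust side, assuming
`serde`/`serde_json`; the type and field names besides `result` are
hypothetical:
```rust
use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct PlatformInfo {
    os: String,
    arch: String,
}

// The `result` field tags the payload, so a failing probe deserializes
// into a structured error we can surface to the user.
#[derive(Debug, Deserialize)]
#[serde(tag = "result", rename_all = "lowercase")]
enum InterpreterQuery {
    Success { platform: PlatformInfo },
    Error { message: String },
}

fn main() -> Result<(), serde_json::Error> {
    let ok: InterpreterQuery = serde_json::from_str(
        r#"{"result": "success", "platform": {"os": "linux", "arch": "x86_64"}}"#,
    )?;
    let err: InterpreterQuery =
        serde_json::from_str(r#"{"result": "error", "message": "unsupported platform"}"#)?;
    println!("{ok:?}\n{err:?}");
    Ok(())
}
```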
I've used the [maturin sysconfig
collection](855f6d2cb1/sysconfig)
as reference. I'm unsure how to test these changes across the wide
variety of platforms.
Fixes #2326
## Summary
This PR adds support for pip's `--no-build-isolation`. When enabled,
build requirements won't be installed during PEP 517-style builds, but
the source environment _will_ be used when executing the build steps
themselves.
Closes https://github.com/astral-sh/uv/issues/1715.
## Summary
`PythonPlatform` only exists to format paths to directories within
virtual environments based on a root and an OS, so it's now
`VirtualenvLayout`.
`Virtualenv` is now used for non-virtual environment Pythons, so it's
now `PythonEnvironment`.
## Summary
This PR adds a `--python` flag that allows users to provide a specific
Python interpreter into which `uv` should install packages. This would
replace the `VIRTUAL_ENV=` workaround that folks have been using to
install into arbitrary, system environments, while _also_ actually being
correct for installing into non-virtual environments, where the bin and
site-packages paths can differ.
The approach taken here is to use `sysconfig.get_paths()` to get the
correct paths from the interpreter, and then use those for determining
the `bin` and `site-packages` directories, rather than constructing them
based on hard-coded expectations for each platform.
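A sketch of that query; `sysconfig.get_paths()` is the real stdlib API
(its `scripts` and `purelib` entries cover `bin` and `site-packages`),
while the Rust helper around it is illustrative:
```rust
use std::process::Command;

// Ask the target interpreter for its actual layout instead of guessing
// per-platform: the returned JSON maps sysconfig path names to directories.
fn query_install_paths(python: &str) -> std::io::Result<String> {
    let output = Command::new(python)
        .args([
            "-c",
            "import json, sysconfig; print(json.dumps(sysconfig.get_paths()))",
        ])
        .output()?;
    Ok(String::from_utf8_lossy(&output.stdout).into_owned())
}

fn main() -> std::io::Result<()> {
    // Prints a JSON object containing e.g. "scripts" and "purelib".
    println!("{}", query_install_paths("python3")?);
    Ok(())
}
```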
Closes https://github.com/astral-sh/uv/issues/1396.
Closes https://github.com/astral-sh/uv/issues/1779.
Closes https://github.com/astral-sh/uv/issues/1988.
## Test Plan
- Verified that, on my Windows machine, I was able to install `requests`
into a global environment with: `cargo run pip install requests --python
'C:\\Users\\crmarsh\\AppData\\Local\\Programs\\Python\\Python3.12\\python.exe'`,
then `python` and `import requests`.
- Verified that, on macOS, I was able to install `requests` into a
global environment installed via Homebrew with: `cargo run pip install
requests --python $(which python3.8)`.
## Summary
This revives a PR from long ago
(https://github.com/astral-sh/uv/pull/383 and
https://github.com/zanieb/pubgrub/pull/24) that modifies how we deal
with dependencies that are declared multiple times within a single
package.
To quote from the originating PR:
> Uses an experimental pubgrub branch (#370) that allows us to hand
multiple version ranges for a single dependency to the solver, which
results in better error messages because the derivation tree contains
all of the relevant versions. Previously, the version ranges were merged
(by us) in the resolver before handing them to pubgrub since only one
range could be provided per package. Since we don't merge the versions
anymore, we no longer give the solver an empty range for conflicting
requirements; instead the solver comes to that conclusion from the
provided versions. You can see the improved error message for direct
dependencies in [this
snapshot](https://github.com/astral-sh/puffin/pull/383/files#diff-a0437f2c20cde5e2f15199a3bf81a102b92580063268417847ec9c793a115bd0).
The main issue with that PR was around its handling of URL dependencies,
so this PR _also_ refactors how we handle those. Previously, we stored
URL dependencies on `PubGrubPackage`, but they were omitted from the
hash and equality implementations of `PubGrubPackage`. This led to some
really careful codepaths wherein we had to ensure that we always visited
URLs before non-URL packages, so that the URL-inclusive versions were
included in any hashmaps, etc. I considered preserving this approach,
but it would require us to rely on lots of internal details of PubGrub
(since we'd now be relying on PubGrub to merge those packages in the
"right" order).
So, instead, we now _always_ set the URL on a given package, whenever
that package was _given_ a URL upfront. I think this is easier to reason
about: if the user provided a URL for `flask`, then we should just
always add the URL for `flask`. If we see some other URL for `flask`, we
error, like before; likewise if we see some unknown URL for `flask`.
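Structurally, the change amounts to something like this sketch
(stand-in types; uv's actual `PubGrubPackage` has more variants and
fields):
```rust
#[derive(Clone, Hash, PartialEq, Eq)]
struct PackageName(String); // stand-in
#[derive(Clone, Hash, PartialEq, Eq)]
struct VerbatimUrl(String); // stand-in

// The URL now participates in `Hash`/`Eq` instead of being excluded:
// `flask` with a URL and `flask` without one are different keys, so
// there's no need to carefully visit URL packages first to seed maps.
#[derive(Clone, Hash, PartialEq, Eq)]
enum PubGrubPackage {
    Root,
    Package {
        name: PackageName,
        /// `Some(_)` whenever a URL was given for this package upfront.
        url: Option<VerbatimUrl>,
    },
}
```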
Closes https://github.com/astral-sh/uv/issues/1522.
Closes https://github.com/astral-sh/uv/issues/1821.
Closes https://github.com/astral-sh/uv/issues/1615.
First, replace all usages in files in-place. I used my editor for this.
If someone wants to add a one-liner, that'd be fun.
Then, update directory and file names:
```
# Run twice for nested directories
find . -type d -print0 | xargs -0 rename s/puffin/uv/g
find . -type d -print0 | xargs -0 rename s/puffin/uv/g
# Update files
find . -type f -print0 | xargs -0 rename s/puffin/uv/g
```
Then add all the files again
```
# Add all the files again
git add crates
git add python/uv
# This one needs a force-add
git add -f crates/uv-trampoline
```