A user reported a Homebrew Python that would raise an exception in the interpreter probing script because `platform.mac_ver()` returned `('', ('', '', ''), '')` on their installation, due to https://github.com/Homebrew/homebrew-core/issues/206778.
This is easy enough to catch and show a proper error message instead of the Python backtrace.
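For reference, the broken state can be checked directly; on the reported installation, the probe below prints the empty tuple quoted above:
```
$ python3 -c "import platform; print(platform.mac_ver())"
('', ('', '', ''), '')
```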
## Summary
This adds `UV_NO_BINARY` and `UV_NO_BINARY_PACKAGE` environment variables to the uv CLI, allowing the user to specify, via environment variables, which packages to build from source. It's not a complete fix for #4291 as it does not handle the `pip` subcommand.
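As a rough sketch of the intended usage (the exact value formats here are assumptions, not documented behavior):
```
$ UV_NO_BINARY=1 uv sync              # build all packages from source
$ UV_NO_BINARY_PACKAGE=numpy uv sync  # build only numpy from source
```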
## Test Plan
This was tested by running `uv sync` with various `UV_NO_BINARY` and
`UV_NO_BINARY_PACKAGE` environment variables set and checking that the
correct set of packages were compiled rather than taken from pre-built
wheels.
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
## Summary
Now that `version` is an optional field, we shouldn't error if an unambiguous package is lacking a version. We can still enforce the same guarantees via `source`, since we always set version and source together when the package is unambiguous. I also retained the existing error for non-local packages that lack a version.
Closes https://github.com/astral-sh/uv/issues/11384.
The underlying cause here, I believe, was that we weren't accounting for the case where an edge could be visited *without* any extras enabled. Because of that, we got into situations where we thought there was only one path to an edge when there were actually more paths. This in turn led to us erroneously doing simplification where it wasn't justified, which in turn led to duplicate versions of the same package being installed in the same environment.
The fix for this ends up being really simple: in the case where we
don't add any conflict items for a package during graph traversal,
we materialize an empty set of conflicts to mark the case of no
extras being enabled when visiting the child edges. This is enough
to propagate the knowledge of multiple paths to the same edge and
causes us to avoid doing improper simplifications.
This does fix the problem in the snapshot, but I think it also leads to other cases where simplifications are no longer possible (hence the changes to the airflow snapshot). That seems expected, since we are doing strictly less simplification than we were before. It's unclear whether all of those cases were actual bugs, though.
The snapshot is too big to meaningfully read, but the problem is
in the dependencies of `torchmetrics`:
```
[[package]]
name = "torchmetrics"
version = "1.6.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
    { name = "lightning-utilities" },
    { name = "numpy" },
    { name = "packaging" },
    { name = "torch", version = "2.2.1", source = { registry = "https://pypi.org/simple" } },
    { name = "torch", version = "2.5.1", source = { registry = "https://pypi.org/simple" }, marker = "extra == 'extra-4-test-chgnet' or extra != 'extra-4-test-m3gnet'" },
]
```
The conflict markers here are overlapping, which means
both can be included in the same environment.
Previously, we patched pkg-config .pc files to have the absolute path to
the directory where we unpack a python-build-standalone release. As
discussed in #11028, we can use ${pcfiledir} in a .pc file to indicate
paths relative to the location of the file itself.
This change was implemented in astral-sh/python-build-standalone#507, so
for newer python-build-standalone releases, we don't need to do any
patching. Optimize this case by only modifying the .pc file if an actual
change is needed (which might be helpful down the line with hard links
or something). For older releases, change uv's patch to match what
python-build-standalone now does.
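For illustration, the relative form in a newer release looks roughly like this (file name, install path, and exact relative depth are illustrative, not verified):
```
$ grep '^prefix=' <install-dir>/lib/pkgconfig/python3.pc
prefix=${pcfiledir}/../..
```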
## Motivation
No-op `uv lock` in apache airflow
(891c67f210ab7c877d1f00ea6ea3d3cdbb0e96ef) is slow, which makes `uv run`
slow, too.
Reference project:
```
$ hyperfine "uv run python -c \"print('hi')\""
Benchmark 1: uv run python -c "print('hi')"
Time (mean ± σ): 16.3 ms ± 1.5 ms [User: 9.8 ms, System: 6.4 ms]
Range (min … max): 13.0 ms … 20.0 ms 186 runs
```
Apache airflow before:
```
$ hyperfine "uv run python -c \"print('hi')\""
Benchmark 1: uv run python -c "print('hi')"
Time (mean ± σ): 161.0 ms ± 5.2 ms [User: 135.3 ms, System: 24.1 ms]
Range (min … max): 155.0 ms … 176.3 ms 18 runs
```
## Optimization
`FlatRequiresDist::from_requirements` was taking 50% of the main thread's runtime.
(Profiles from before and after both commits are omitted here.)
Apache airflow after the first commit:
```
$ hyperfine "uv-profiling run python -c \"print('hi')\""
Benchmark 1: uv-profiling run python -c "print('hi')"
Time (mean ± σ): 122.3 ms ± 5.4 ms [User: 96.1 ms, System: 24.7 ms]
Range (min … max): 114.0 ms … 133.2 ms 23 runs
```
Apache airflow after the second commit:
```
$ hyperfine "uv-profiling run python -c \"print('hi')\""
Benchmark 1: uv-profiling run python -c "print('hi')"
Time (mean ± σ): 108.5 ms ± 3.4 ms [User: 83.2 ms, System: 24.2 ms]
Range (min … max): 103.6 ms … 119.9 ms 28 runs
```
## Summary
These locks are used for coordination across processes. If you run uv under, e.g., the root user, and then under a different user, I don't think we should prevent you from acquiring the lock.
Closes https://github.com/astral-sh/uv/issues/11324.
Includes https://pypy.org/posts/2025/02/pypy-v7318-release.html
These are labeled as betas in the post, but not anywhere obvious to me; I'm not sure we need to surface this to users.
Co-authored-by: zanieb <2586601+zanieb@users.noreply.github.com>
## Summary
No behavior changes, aside from one fix: this just separates the formatting from the collection of the results, and fixes a bug whereby we didn't say "No changes detected" in some cases.
Given an input in the shape:
```
foo[bar]==1.0.0; sys_platform == 'linux'
foo==1.0.0; sys_platform != 'linux'
```
We would write either
```
foo==1.0.0; sys_platform == 'linux'
```
or
```
foo==1.0.0
```
depending on the iteration order, as the first one comes from the marker proxy package and the second one from the package without a marker.
The fix correctly merges graph entries when there are two nodes with
different extras and different markers.
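With the fix, the merged entry should presumably carry the union of the two markers, which here covers all platforms and therefore simplifies away (my reading of the intended output, not quoted from the PR):
```
foo==1.0.0
```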
I tried to write a packse test, but it didn't reproduce the bug: due to a different iteration order, it hit the correct case directly instead of the failing one we'd need.
Only `strip_extras` is affected, since `combine_extras` uses
`version_marker`.
When stderr is not a tty, we currently don't show any messages for build
or large downloads, since indicatif is hidden. We can improve this by
showing a message for:
* Starting and finishing a large download (>1MB)
* Starting and finishing a build
Downloads are limited to 1MB or unknown size to keep the logs concise
and not scroll the entire terminal away for a download that finishes
almost immediately.
These messages are not captured in the tests since their order is
non-deterministic (downloads and builds race to finish).
There are no "tick" messages for large downloads yet; we could, e.g., show an update on running downloads every n seconds.
Part of #11121
## Test Plan
```
$ uv venv && FORCE_COLOR=1 cargo run -q pip install numpy --no-binary :all: --no-cache 2>&1 | tee a.txt
Using CPython 3.13.0
Creating virtual environment at: .venv
Activate with: source .venv/bin/activate
Resolved 1 package in 221ms
Building numpy==2.2.2
Built numpy==2.2.2
Prepared 1 package in 2m 34s
Installed 1 package in 6ms
+ numpy==2.2.2
```

```
$ uv venv && FORCE_COLOR=1 cargo run -q pip install torch --no-cache 2>&1 | tee b.txt
Using CPython 3.13.0
Creating virtual environment at: .venv
Activate with: source .venv/bin/activate
Resolved 24 packages in 648ms
Downloading setuptools (1.2MiB)
Downloading nvidia-cuda-cupti-cu12 (13.2MiB)
Downloading torch (731.1MiB)
Downloading nvidia-nvjitlink-cu12 (20.1MiB)
Downloading nvidia-cufft-cu12 (201.7MiB)
Downloading nvidia-cuda-nvrtc-cu12 (23.5MiB)
Downloading nvidia-curand-cu12 (53.7MiB)
Downloading nvidia-nccl-cu12 (179.9MiB)
Downloading nvidia-cudnn-cu12 (634.0MiB)
Downloading nvidia-cublas-cu12 (346.6MiB)
Downloading sympy (5.9MiB)
Downloading nvidia-cusparse-cu12 (197.8MiB)
Downloading nvidia-cusparselt-cu12 (143.1MiB)
Downloading networkx (1.6MiB)
Downloading nvidia-cusolver-cu12 (122.0MiB)
Downloading triton (241.4MiB)
Downloaded setuptools
Downloaded networkx
Downloaded sympy
Downloaded nvidia-cuda-cupti-cu12
Downloaded nvidia-nvjitlink-cu12
Downloaded nvidia-cuda-nvrtc-cu12
Downloaded nvidia-curand-cu12
[...]
```

## Summary
This is attempting to solve the same problem surfaced in #11208 and
#11209. However, those PRs only worked for our own managed Pythons. In
Gentoo, for example, they disable the managed Pythons, which led to
failures in the test suite, because the "base Python" returned after
creating a virtual environment would differ from the "base Python" that
you get after _querying_ an existing virtual environment.
The fix here is to apply our same base Python normalization and discovery logic to non-standalone / non-managed Pythons. We continue to
use `sys._base_executable` for such Pythons when creating the
virtualenv, but when _caching_, we perform this second discovery step.
Closes https://github.com/astral-sh/uv/issues/11237.
This is a rewrite of the groups subsystem to have clearer semantics, and some adjustments to the CLI flag constraints. In doing so, the following bugs are fixed (a sketch of the combinations follows the list):
* `--no-default-groups --no-group foo` is no longer needlessly rejected
* `--all-groups --no-default-groups` now correctly evaluates to `--all-groups`, where previously it was erroneously interpreted as just `--no-default-groups`
* `--all-groups --only-dev` is now illegal, where previously it was accepted and mishandled, as if it were a mythical `--only-all-groups` flag
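Concretely, using `uv sync` as the example command (the comments restate the behaviors above; the exact invocations are illustrative):
```
$ uv sync --no-default-groups --no-group foo   # accepted (previously rejected)
$ uv sync --all-groups --no-default-groups     # evaluates to --all-groups
$ uv sync --all-groups --only-dev              # error: now rejected
```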
Fixes #10890. Closes #10891.
## Summary
If you `uv run` from the same directory via multiple processes at the
same time, some of them will fail as they'll see an "incomplete" virtual
environment.
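A rough way to reproduce the race (illustrative; failures depend on timing):
```
$ rm -rf .venv
$ for i in $(seq 8); do uv run python -c 'print("ok")' & done; wait
```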
Closes https://github.com/astral-sh/uv/issues/11219.
I think `UV_PROJECT_ENVIRONMENT` is too complicated for use-cases where
the user wants to sync to the active environment. I don't see a
compelling reason not to make opt-in easier. I see a lot of questions in the issue tracker about how to deal with this warning, but it seems painful to collect them here for posterity.
A notable behavior here: we'll treat this as equivalent to `UV_PROJECT_ENVIRONMENT`, so if you point us to a valid virtual environment that needs to be recreated for some reason (e.g., a new Python version request), we'll happily delete it and start over.
## Summary
This PR removes the ephemeral `.pth` overlay when using a cached
environment. This solution isn't _completely_ safe, since we could
remove the `.pth` file just as another process is starting the
environment... But that risk already exists today, since we could
_overwrite_ the `.pth` file just as another process is starting the
environment, so I think what I've added here is a strict improvement.
Ideally, we wouldn't write this file at all; instead, we'd somehow (e.g.) pass a file to the interpreter to run at startup, or find some other solution that doesn't require poisoning the cache like this.
Closes https://github.com/astral-sh/uv/issues/11117.
## Test Plan
Ran through the great reproduction steps from the linked issue.
(Before and after results omitted.)