<!--
Thank you for contributing to uv! To help us out with reviewing, please
consider the following:
- Does this pull request include a summary of the change? (See below.)
- Does this pull request include a descriptive title?
- Does this pull request include references to any relevant issues?
-->
## Summary
Fix `WindowsRunnable::from_script_path` to correctly append extensions
instead of replacing them when resolving executable paths. This resolves
https://github.com/astral-sh/uv/issues/15165#issue-3304086689.
- Add an `add_extension_to_path` helper that appends extensions properly (sketched below)
- Update extension resolution to use the new helper
- Add tests
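A minimal sketch of the appended-extension behavior (the helper name comes from this PR; the body is illustrative, not the exact implementation):

```rust
use std::ffi::OsString;
use std::path::{Path, PathBuf};

/// Append `ext` to `path` instead of replacing the existing extension:
/// `Path::with_extension` would turn `foo.bat` into `foo.exe`, whereas
/// this produces `foo.bat.exe`.
fn add_extension_to_path(path: &Path, ext: &str) -> PathBuf {
    let mut s: OsString = path.as_os_str().to_owned();
    s.push(".");
    s.push(ext);
    PathBuf::from(s)
}
```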
## Test Plan
Added unit tests for the new and existing functionality that the change
touches, and tested manually on Windows.
<!-- How was it tested? -->
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
## Summary
This just looks like an oversight. We weren't including hashes from
local Simple API indexes if a package had both a wheel and a source
distribution.
Closes https://github.com/astral-sh/uv/issues/14883
Replaces https://github.com/astral-sh/uv/pull/14092
Adds `tool.uv.extra-build-dependencies = {package = [dependency, ...]}`
which extends `build-system.requires` during package builds.
These are lowered via workspace sources, are applied to transitive
dependencies, and are included in the wheel cache shard hash.
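For example, in `pyproject.toml` (the package names here are illustrative; `flash-attn` is a common case of a package that needs `torch` available at build time):

```toml
[tool.uv.extra-build-dependencies]
flash-attn = ["torch"]
```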
There are some features we need to follow up on, but they are out of
scope here:
- Preferring locked versions for build dependencies
- Settings for requiring locked versions for build dependencies
There are some quality-of-life follow-ups we should also do:
- Warn on `extra-build-dependencies` that do not apply to any packages
- Add test cases and improve error messaging when resolution of the
`extra-build-dependencies` fails
-------
There ~~are~~ were a few open decisions to be made here
1. Should we resolve these dependencies alongside the
`build-system.requires` dependencies? Or should we resolve separately?
(I think the latter is more powerful? because you can override things?
but it opens the door to breaking your build)
2. Should we install these dependencies into the same environment? Or
should we layer it on top as we do elsewhere? (I think it's fine to
install into the same environment)
3. Should we respect sources defined in the parent project? (I think
yes, but then we need to lower the dependencies earlier — I don't think
that's a big deal, but it's not implemented)
4. Should we respect sources defined in the child project? (I think no,
this gets really complicated and seems weird to allow)
5. Should we apply this to transitive dependencies? (I think so)
---------
Co-authored-by: Aria Desires <aria.desires@gmail.com>
Co-authored-by: konstin <konstin@mailbox.org>
The script stumbled over a newline introduced in
https://github.com/pypi/warehouse/pull/18266 (which is valid).
Also fixed: don't read versions for the same package from other indexes.
We were using `project_name` here instead of `target`; using the latter
and reading from only a single index also simplifies the code.
This PR provides a script that uses environment variables to determine
which registries to test. This script is being used to run automated
registry tests in CI for AWS, Azure, GCP, Artifactory, GitLab,
Cloudsmith, and Gemfury.
You must configure the following required env vars for each registry:
```
UV_TEST_<registry_name>_URL      URL for the registry
UV_TEST_<registry_name>_TOKEN    authentication token
UV_TEST_<registry_name>_PKG      private package to install
```
The username defaults to `__token__` but can optionally be set with:
```
UV_TEST_<registry_name>_USERNAME
```
For each configured registry, the test will attempt to install the
specified package. Some registries can fall back to PyPI internally, so
it's important to choose a package that only exists in the registry you
are testing.
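For example (the registry name and values here are hypothetical):

```sh
export UV_TEST_GITLAB_URL="https://gitlab.example.com/api/v4/projects/123/packages/pypi/simple"
export UV_TEST_GITLAB_TOKEN="..."
export UV_TEST_GITLAB_PKG="my-private-package"
```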
Currently, a successful test means that the line `+ <package_name>`
appears in the output. This is because, in its current form, we don't
know ahead of time which package will be installed and hence what the
exact expected output would be. The advantage is that anyone can run
this locally, though they would need access to the registries they want
to test.
You can also use the `--use-op` command line argument to derive these
test env vars from a 1Password vault (default is "RegistryTests" but can
be configured with `--op-vault`). It will look at all items in the vault
with names following the pattern `UV_TEST_<registry_name>` and will
derive the env vars as follows:
```
UV_TEST_<registry_name>_USERNAME  from the username field
UV_TEST_<registry_name>_TOKEN     from the password field
UV_TEST_<registry_name>_URL       from a field with the label url
UV_TEST_<registry_name>_PKG       from a field with the label pkg
```
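A hypothetical invocation (the script path here is illustrative, not the actual file name):

```sh
uv run scripts/registry_test.py --use-op --op-vault RegistryTests
```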
For the case where there was no matching wheel on sync, we previously
added a note about which wheels are available versus which platform you
are on. We extend this error message to link directly to
`tool.uv.required-environments`, which otherwise has a discovery
problem.
On Linux (setting `tool.uv.required-environments` doesn't help in this
case, but it makes for a clear example):
```
[project]
name = "debug"
version = "0.1.0"
requires-python = "==3.10.*"
dependencies = ["tensorflow-macos>=2.13.1"]
```
```
Resolved 41 packages in 24ms
error: Distribution `tensorflow-macos==2.16.2 @ registry+https://pypi.org/simple` can't be installed because it doesn't have a source distribution or wheel for the current platform
hint: You're on Linux (`manylinux_2_39_x86_64`), but there are no wheels for the current platform, consider configuring `tool.uv.required-environments`.
hint: `tensorflow-macos` (v2.16.2) only has wheels for the following platform: `macosx_12_0_arm64`.
```
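For reference, the setting the hint points to takes a list of environment markers; a hypothetical configuration:

```toml
[tool.uv]
required-environments = [
    "sys_platform == 'darwin' and platform_machine == 'arm64'",
]
```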

---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
## Summary
This adds GraalPy download metadata so that `uv python install graalpy`
works. See https://github.com/astral-sh/uv/issues/13114
## Test Plan
The existing integration test was changed to test this functionality.
Some registries (like Azure Artifacts) can require you to authenticate
separately for every package URL if you do not authenticate against the
/simple endpoint. These changes make the auth middleware aware of index
URL endpoints and attempt to fetch keyring credentials for such an index
URL when making a request to any URL it is a prefix of.
The current uv behavior is to cache credentials either at the request
URL or realm level. With these changes, we also cache credentials at the
index level. Note that when uv does not detect an index URL for a
request URL, it will continue to apply the old behavior.
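An illustrative sketch of the index-level lookup (not the actual middleware code):

```rust
/// Given the configured index URLs, find the one that is a prefix of the
/// request URL, so credentials can be cached and fetched per index rather
/// than per request URL or realm.
fn index_for_request<'a>(index_urls: &'a [String], request_url: &str) -> Option<&'a str> {
    index_urls
        .iter()
        .map(String::as_str)
        .find(|index| request_url.starts_with(index))
}
```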
Addresses part of #4056. Closes #4583. Closes #11236. Closes #11391. Closes #11507.
I wanted to consolidate these anyway, and apparently it's a huge pain to
make a Windows task fail early via GitHub's PowerShell setup, so I
implemented this in Python instead.
Previously, we required a username to perform a fetch from the keyring
because the `keyring` CLI only supported fetching a password for a given
service and username. Unfortunately, this differs from the keyring
Python API, which supports fetching a username _and_ password for a
given service. We can't (easily) use the Python API because we don't
expect `keyring` to be installed in a specific environment during
network requests. This means that we did not have parity with `pip`.
Way back in https://github.com/jaraco/keyring/pull/678 we got a `--mode
creds` flag added to `keyring`'s CLI which supports parity with the
Python API. Since `keyring` is expensive to invoke and we cannot be
certain that users are on the latest version of keyring, we've not added
support for invoking keyring with this flag. However, now that we have a
mode that says authentication is _required_ for an index (#11896), we
might as well _try_ to invoke keyring with `--mode creds` when there is
no username. This will address use-cases where the username is
non-constant and move us closer to `pip` parity.
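With a sufficiently new `keyring`, the invocation looks something like this (the service URL is illustrative):

```sh
# Returns both the username and password for the service, rather than
# requiring the username up front.
keyring --mode creds get https://example.com/simple/
```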
I think it's important for the breaking changes to be at the _top_ of
the file instead of the bottom.
Now that it's not being rendered by GitHub's Releases markdown, we can
remove the prettier ignores.
In the publish client, we have to set the client retries to 0 as the
retry middleware is incompatible with upload bodies. This however also
sets `client.retry_policy()` to a zero-retry policy, so we need to
construct our own policy.
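A rough sketch of constructing a standalone policy with `reqwest-retry` (the retry count is illustrative):

```rust
use reqwest_retry::policies::ExponentialBackoff;

fn main() {
    // Build our own policy rather than asking the zero-retry client for one
    // via `client.retry_policy()`.
    let retry_policy = ExponentialBackoff::builder().build_with_max_retries(3);
    let _ = retry_policy;
}
```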
Fixes #12027
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
uv itself is a large package with many dependencies and lots of
features. To build a package using the uv build backend, you shouldn't
have to download and install the entirety of uv. For platforms where we
don't provide wheels, it should be possible and fast to compile the uv
build backend. To that end, we're introducing a Python package that
contains a trimmed-down version of uv containing only the build backend,
with a minimal dependency tree in Rust.
The `uv_build` package is published from CI just like uv itself. It is
part of the workspace, but has far fewer dependencies for its own
binary. We're using `cargo deny` to enforce that the network stack is
not part of the dependencies. A new build profile ensures we're getting
the minimum possible binary size for a Rust binary.
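A minimal-size profile might look like this in `Cargo.toml` (the values are illustrative, not uv's exact profile):

```toml
[profile.minimal-size]
inherits = "release"
opt-level = "z"    # optimize for size rather than speed
lto = true         # cross-crate link-time optimization
codegen-units = 1  # trade compile time for smaller code
panic = "abort"    # drop unwinding machinery
strip = true       # strip symbols from the binary
```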
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
<!--
Thank you for contributing to uv! To help us out with reviewing, please
consider the following:
- Does this pull request include a summary of the change? (See below.)
- Does this pull request include a descriptive title?
- Does this pull request include references to any relevant issues?
-->
## Summary
<!-- What's the purpose of the change? What does it do, and why? -->
To stay within quota, the dataset now has just under 30 days of data, so
the filename has been updated. Both filenames will be available for a
while. See https://github.com/hugovk/top-pypi-packages/pull/46.
## Test Plan
<!-- How was it tested? -->
```sh
curl https://hugovk.github.io/top-pypi-packages/top-pypi-packages-30-days.min.json | jq -r ❯
curl https://hugovk.github.io/top-pypi-packages/top-pypi-packages.min.json | jq -r ".rows | .
diff pypi_8k_downloads_original.txt pypi_8k_downloads_new.txt
```
Fixes #11793
On Windows, trying to read a file inside what is not a directory but
another file results in a "not found" error, while on Unix we get a "not
a directory" error. To fix the behavior on Windows, we check explicitly
whether something included in a workspace glob is a non-directory.
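A sketch of the explicit check (illustrative, not the actual code):

```rust
use std::io;
use std::path::Path;

/// On Windows, reading `some_file/member` fails with `NotFound` instead of
/// Unix's "not a directory", so check the glob match explicitly.
fn ensure_directory(path: &Path) -> io::Result<()> {
    if path.exists() && !path.is_dir() {
        return Err(io::Error::new(
            io::ErrorKind::Other,
            format!("`{}` is included in the workspace but is not a directory", path.display()),
        ));
    }
    Ok(())
}
```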
First of all, I want to test automatic managed installs (see #10913) and
need to set that up. Second of all, some tests were _implicitly_
downloading interpreters instead of using the one from their context —
which is unexpected and naughty and very slow.
## Summary
In preview mode on Windows, register and unregister the managed
python-build-standalone installations in the Windows registry, following
PEP 514.
We write the values defined in the PEP plus the download URL and hash.
We add an entry when installing a version, remove an entry when
uninstalling, and remove all values when uninstalling with `--all`. We
update entries only by overwriting existing values; there is no
"syncing" involved.
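Per PEP 514, the keys live under the company name; roughly (an illustrative layout, not the exact value names uv writes):

```
HKEY_CURRENT_USER\Software\Python\Astral\CPython3.13.1
    DisplayName, SupportUrl, Version, SysVersion, SysArchitecture, plus download URL and hash
HKEY_CURRENT_USER\Software\Python\Astral\CPython3.13.1\InstallPath
    (Default) = ...\cpython-3.13.1-windows-x86_64-none
```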
Since they are not official builds, python-build-standalone (pbs)
entries get a prefix: `py -V:Astral/CPython3.13.1` works, `py -3.13`
doesn't.
```
$ py --list-paths
-V:3.12 * C:\Users\Konsti\AppData\Local\Programs\Python\Python312\python.exe
-V:3.11.9 C:\Users\Konsti\.pyenv\pyenv-win\versions\3.11.9\python.exe
-V:3.11 C:\Users\micro\AppData\Local\Programs\Python\Python311\python.exe
-V:3.8 C:\Users\micro\AppData\Local\Programs\Python\Python38\python.exe
-V:Astral/CPython3.13.1 C:\Users\Konsti\AppData\Roaming\uv\data\python\cpython-3.13.1-windows-x86_64-none\python.exe
```
Registry errors are reported but not fatal, except for operations on the
company key, since it's not bound to any specific Python interpreter.
On uninstallation, we prune registry entries that have no matching
Python installation (i.e., broken entries).
The code uses the official `windows-registry` crate rather than the
`winreg` crate.
Best reviewed commit-by-commit.
## Test Plan
We're reusing an existing system check to test different (un)installation scenarios.
## Summary
After we resolve, we filter out any wheels that aren't applicable for
the target platforms. So, e.g., we remove macOS wheels if we find that
the user only asked to solve for Windows.
This PR extends the same logic to architectures, so that we filter out
ARM-only wheels when the user is only solving for x86, etc.
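For example, with resolution restricted to x86-64 Windows (a hypothetical configuration), ARM-only wheels can now be dropped:

```toml
[tool.uv]
environments = ["sys_platform == 'win32' and platform_machine == 'x86_64'"]
```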
Closes #10571.
## Summary
Adds a regular-expression-based version filter to the Python mirror
script.
## Test Plan
Manually using `uv run ./scripts/create-python-mirror.py --name cpython
--arch x86_64 --os linux --version "3.13.\d+$"`
## Summary
The architecture details in `crates/uv-python/download-metadata.json`
are now a dictionary with family and variant data, whereas they used to
be a string. This patches the architecture filter in
`scripts/create-python-mirror.py` to support both scenarios.
## Test Plan
Tested manually using `uv run ./scripts/create-python-mirror.py --name
cpython --arch x86_64 --os linux --from-all-history`
## Summary
A revival of an old idea (#9344) that I have slightly more confidence in
now. I abandoned this idea because (1) it couldn't capture that, e.g.,
`platform_system == 'Windows' and sys_platform == 'foo'` (or some other
unknown value) are disjoint, and (2) I thought that Android returned
`"android"` for one of `sys_platform` or `platform_system`, which
would've made this logic incorrect.
However, it looks like Android... doesn't do that? And the values here
are almost always in a small, known set. So in the end, the tradeoffs
here actually seem pretty good.
Vis-a-vis our current solution, this can (e.g.) _simplify out_
expressions like `sys_platform == 'win32' or platform_system ==
'Windows'`.