Revert #11601 for now
We run Python interpreter discovery with `-I` (#2500), which means these
environment variables are ignored when determining `sys.path`. Unless
we decide to remove the `-I` flag from the `sys.path` query, we
shouldn't release these changes to interpreter discovery caching.
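For illustration (this is not uv's actual query code): `-I` implies `-E` and
`-s`, so `PYTHONPATH` never makes it onto `sys.path`.

```python
import os
import subprocess
import sys

# With PYTHONPATH set, the probe prints True without -I and False with -I,
# since isolated mode ignores PYTHON* environment variables.
env = {**os.environ, "PYTHONPATH": "/tmp/extra"}
probe = "import sys; print('/tmp/extra' in sys.path)"
for flags in ([], ["-I"]):
    result = subprocess.run(
        [sys.executable, *flags, "-c", probe],
        env=env, capture_output=True, text=True,
    )
    print(flags, result.stdout.strip())
```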
I noticed that the latest two `sync-python-releases` jobs failed due to
`httpx.RemoteProtocolError: peer closed connection without sending
complete message body (incomplete chunked read)`.
For the current python-build-standalone release, each page request
(defaulting to 30 items per page) takes about 20 seconds and loads
around 32MB of data. This heavy data load might be causing the
requests to fail frequently.
In this PR, I reduced the number of items per page to 10 and added
`Accept-Encoding: gzip, deflate` to the request headers. Now, it takes
about 6 seconds to load, and the compressed response size has been
reduced to 534KB. I hope this addresses the request failures.
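A sketch of the adjusted request (the endpoint is my assumption about what
the job queries; only the `per_page` and `Accept-Encoding` changes come from
this PR):

```python
import httpx

with httpx.Client(headers={"Accept-Encoding": "gzip, deflate"}) as client:
    response = client.get(
        "https://api.github.com/repos/astral-sh/python-build-standalone/releases",
        params={"per_page": 10},  # reduced from the default of 30
    )
    response.raise_for_status()
    releases = response.json()
```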
We want to use `sys.path` for package discovery (#2500, #9849). For
that, we need to know the correct value of `sys.path`. `sys.path` is a
runtime-changeable value that is influenced by many different
sources: environment variables, CLI arguments, `.pth` files with
scripting, `sys.path.append()` at runtime, a distributor patching
Python, etc. We cannot capture them all accurately, especially since
it's possible to change `sys.path` mid-execution. Instead, we do a best
effort attempt at matching the user's expectation.
The assumption is that package installation generally happens in venv
site-packages, system/user site-packages (including PyPy, which ships
packages alongside the standard library), and `PYTHONPATH`. Specifically,
we reuse `PYTHONPATH` as a dedicated way for users to tell uv to include
specific directories in package discovery.
A common way to influence `sys.path` that is not using venvs is setting
`PYTHONPATH`. To support this we're capturing `PYTHONPATH` as part of
the cache invalidation, i.e. we refresh the interpreter metadata if it
changed. For completeness, we're also capturing other environment
variables documented as influencing `sys.path` or other fields in the
interpreter info.
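As a rough sketch of the cache-invalidation idea (the exact set of variables
uv captures may differ):

```python
import hashlib
import os

# Environment variables documented to influence sys.path; illustrative,
# not necessarily the exact list uv captures.
SYS_PATH_ENV_VARS = [
    "PYTHONPATH",
    "PYTHONHOME",
    "PYTHONSAFEPATH",
    "PYTHONPLATLIBDIR",
    "PYTHONUSERBASE",
    "PYTHONNOUSERSITE",
]

def interpreter_env_key() -> str:
    """Hash the relevant variables; a change invalidates cached metadata."""
    parts = [f"{var}={os.environ.get(var, '')}" for var in SYS_PATH_ENV_VARS]
    return hashlib.sha256("\n".join(parts).encode()).hexdigest()
```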
This PR does not include reading registry values for `sys.path`
additions on Windows as documented in
https://docs.python.org/3.11/using/windows.html#finding-modules. It
notably also does not include parsing of Python CLI arguments; we only
consider their environment variable equivalents for package installation
and listing. We could try parsing CLI flags in `uv run python`, but we'd
still miss them when Python is launched indirectly through a script, and
it's more consistent to only consider uv's own arguments and environment
variables, similar to uv's behavior in other places.
This change keeps dependency group keys sorted when adding new ones.
If earlier dependency group keys were not sorted, we just append the new
group key to avoid churn in `pyproject.toml`. See discussion on #11447.
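A minimal sketch of the insertion rule (illustrative; the real change edits
the TOML document in place):

```python
import bisect

def insert_group_key(keys: list[str], new_key: str) -> None:
    """Insert in sorted position only if the existing keys are sorted."""
    if keys == sorted(keys):
        bisect.insort(keys, new_key)
    else:
        keys.append(new_key)  # avoid churn in an already-unsorted table
```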
I've added a new snapshot test to capture this case.
Closes #11447.
## Summary
When tests are run downstream, the `COLUMNS` environment variable is
used to force fixed output width and avoid test failures due to
different terminal widths. However, this occasionally causes test
regressions when other tests rely on different output width. Use the
same `COLUMNS` value in CI to ensure consistent output and catch any
regressions.
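For example, running the suite locally with the same fixed width (the value
shown is illustrative; use whatever the harness pins):

```
COLUMNS=100 cargo test
```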
## Test Plan
It wasn't; it's supposed to be tested by the CI :-).
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
The bad merge was a result of merging #11293 and #11513. I think even if
I had only merged the former, it still would have resulted in a bad
merge, since the snapshots hadn't been updated for `provide-extras`
additions.
This fixes the build failures seen here:
3739802963
The particular example I honed in on here was the `e3nn -> sympy 1.13.1`
and `e3nn -> sympy 1.13.3` dependency edges. In particular, while the
former correctly has a conflict marker, the latter's conflict marker was
getting simplified to `true`. This makes the edges trivially
overlapping, and results in both of them getting installed
simultaneously. (A similar problem happens for the `e3nn -> torch`
dependency edges.)
Why does this happen? Well, conflict marker simplification works by
detecting which extras are known to be enabled (and disabled) for each
node in the graph. This ends up being expressed as a set of sets, where
each inner set contains items corresponding to "extra is included" or
"extra is excluded."
The logic then is if _all_ of these sets are satisfied by the conflict
marker on the dependency edge, then this conflict marker can be
simplified by assuming all of the inclusions/exclusions to be true.
In this particular case, we run into an issue where the set of
assumptions discovered for `e3nn` is:
```
{test[sevennet]}, {}, {~test[m3gnet], ~test[alignn], test[all]}
```
And the corresponding conflict marker for `e3nn -> sympy 1.13.1` is:
```
extra == 'extra-4-test-all'
    or extra == 'extra-4-test-chgnet'
    or (extra != 'extra-4-test-alignn' and extra != 'extra-4-test-m3gnet')
```
And the conflict marker for `e3nn -> sympy 1.13.3` is:
```
extra == 'extra-4-test-alignn' or extra == 'extra-4-test-m3gnet'
```
Evaluating each of the sets above for `sympy 1.13.1`'s conflict
marker results in them all being true. Simplifying in turn results in
the marker being true. For `sympy 1.13.3`, not all of the sets are
satisfied, so this marker is not simplified.
I think the fundamental problem here is that our inferences aren't quite
rich enough to make these logical leaps. In particular, the conflict
marker for `e3nn -> sympy 1.13.3` is not satisfied by _any_ of our sets.
One might therefore conclude that this dependency edge is impossible.
But! The `test[sevennet]` set doesn't actually rule out `test[m3gnet]`
from being included, for example, because there is no conflict. So it is
actually possible for this marker to evaluate to true.
And I think this reveals the problem: for the `e3nn -> sympy 1.13.1`
conflict marker, the inferences don't capture the fact that
`test[sevennet]` _might_ have `test[m3gnet]` enabled, and that would in
turn result in the conflict marker evaluating to `false`. This directly
implies that our simplification here is inappropriate.
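A toy reconstruction of the unsound step (extra names shortened; none of
this is uv's actual code):

```python
# Conflict marker of `e3nn -> sympy 1.13.1`, as a predicate over the set
# of enabled extras.
def marker(extras: set[str]) -> bool:
    return (
        "all" in extras
        or "chgnet" in extras
        or ("alignn" not in extras and "m3gnet" not in extras)
    )

# The inference sets for e3nn, treating unmentioned extras as excluded:
for assumptions in [{"sevennet"}, set(), {"all"}]:
    assert marker(assumptions)  # all satisfied => simplified to `true`

# But {"sevennet"} doesn't actually exclude "m3gnet"; with both enabled,
# the marker is false, so simplifying it to `true` was unsound.
assert not marker({"sevennet", "m3gnet"})
```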
It would be nice to revisit how we build our inferences here so that
they are richer and enable us to make correct logical leaps. For now, we
fix this particular bug with a bit of a cop-out: we skip conflict marker
simplification when there are ambiguous dependency edges.
Fixes #11479
The place to look in this snapshot is the `name = "e3nn"` dependency.
Its dependencies on `sympy` and `torch` consist of multiple versions
with overlapping conflict markers, which were getting incorrectly
simplified to `true`.
Initially, we limited Git schemes to HTTPS and SSH as the only
supported schemes. We lost this validation in #3429. This incidentally
allowed file schemes, which apparently work with Git out of the box.
A caveat is that in `tool.uv.sources`, we always parse the `git` field
as a URL. This caused a problem in #11425: `repo = { git =
'c:\path\to\repo', rev = "xxxxx" }` was parsed as a URL in which `c:` is
the scheme, producing a bad error message down the line.
This PR:
* Puts Git URL validation back in place. It bans everything but HTTPS,
SSH, and file URLs. This could be a breaking change if users were relying
on a Git transport protocol we were not aware of, even though it was
never intentionally supported.
* Allows `file:` URLs for Git: these seem to be supported by Git, and we
were supporting them, albeit unintentionally, so it's reasonable to
continue to do so (see the example after this list).
* Does not allow relative paths in the `git` field in `tool.uv.sources`.
Absolute `file:` URLs are supported; whether we want relative `file:`
URLs for Git too should be discussed separately.
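For example, a Windows repository path would be written as an absolute
`file:` URL (illustrative):

```toml
[tool.uv.sources]
repo = { git = "file:///C:/path/to/repo", rev = "xxxxx" }
```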
Closes #3429: we reject the input with a proper error message while
hinting the user towards `file:`. If there's still a desire for relative
path support, we can keep the issue open.
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
Resolves #6913.
Add `tool.uv.build-constraint-dependencies` to pyproject.toml.
The changes are analogous to the `constraint-dependencies` feature
implemented in #5248.
Add documentation for `build-constraint-dependencies`.
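For example (the constraint itself is illustrative):

```toml
[tool.uv]
build-constraint-dependencies = ["setuptools>=75"]
```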
## Test Plan
Add tests for `uv lock`, `uv add`, `uv pip install` and `uv pip
compile`.
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
This typo wasn't caught because the `($arg:expr, false)` macro branch
was never exercised.
For example, prior to this change, if you add
```
show_settings!(globals, false);
```
below, you'll get a compiler error.
When running `uv pip install .` in a directory with a pyproject.toml
that does not configure a build, we will invoke setuptools and get a
wheel we can't parse (https://github.com/astral-sh/uv/issues/11344).
This PR adds warnings around these setups.
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
We want to build `uv-build` without depending on the network crates. In
preparation for that, we split uv-git into uv-git and uv-git-types,
where only uv-git depends on reqwest, so that uv-build can use
uv-git-types.
## Summary
This PR fixes a subtle issue arising from our propagation of
preferences. When we resolve a fork, we take the solution from that fork
and mark all the chosen versions as "preferred" as we move on to the
next fork.
In this specific case, the resolver ended up solving a macOS-specific
fork first, which led us to pick `2.6.0` rather than `2.6.0+cpu`. This
in itself is correct; but when we moved on to the next fork, we
preferred `2.6.0` over `2.6.0+cpu`, despite the fact that `2.6.0` _only_
includes macOS wheels, and that fork was focused on Linux.
Now, in preferences, we prefer local variants (if they exist). If the
local variant ends up not working, we'll presumably backtrack to the
base version anyway.
Closes https://github.com/astral-sh/uv/issues/11406.
## Summary
If the user provides a PEP 508 requirement (e.g., `uvx
change_wheel_version`), then we should use that verbatim for the
executable, rather than normalizing the package name.
Closes https://github.com/astral-sh/uv/issues/11521.
## Summary
This PR revives https://github.com/astral-sh/uv/pull/10017, which might
be viable now that we _don't_ enforce any platforms by default.
The basic idea here is that users can mark certain platforms as required
(empty, by default). When resolving, we ensure that the specified
platforms have wheel coverage, backtracking if not.
For example, to require that we include a version of PyTorch that
supports Intel macOS:
```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.11"
dependencies = ["torch>1.13"]
[tool.uv]
required-platforms = [
    "sys_platform == 'darwin' and platform_machine == 'x86_64'",
]
```
Other than that, the forking is identical to past iterations of this PR.
This would give users a way to resolve the tail of issues in #9711, but
with manual opt-in to supporting specific platforms.
## Summary
This is an alternative to the approach we took in #11063 whereby we
always included `provides-extra` and `requires-dist`, since we needed
some way to differentiate between "no extras" and "lockfile was
generated by a uv version that didn't include extras".
Instead, this PR adds a minor version (called a "revision") to the
lockfile that we can use to indicate support for this feature. While
lockfile version bumps are backwards-incompatible, older uv versions
_can_ read lockfiles with a later revision -- they just won't understand
all the data.
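For example, a lockfile under this scheme might begin with (illustrative
values):

```toml
version = 1
revision = 1
```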
In a future major version bump, we could simplify things and change the
schema to use a (major, minor) format instead of these two separate
fields. But this is the only way to do it that's backwards-compatible
with existing uv versions.
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
Initially it seemed like `app.py` might be slightly more desirable but
people seem to overwhelmingly favour `main.py` as a good "generic" name.
Fixes #7782
Closes #11285
Closes https://github.com/astral-sh/uv/pull/11437
This changes `-p` from an alias of `--python-version` to `--python`
while retaining backwards compatibility for `--python-version`-like
fallback behavior when the requested version, e.g., `-p 3.12`, cannot be
found.
This was initially implemented with a hidden `--python-legacy` flag
which allows us to special case the short `-p` flag — unlike the
implementation in #11437. However, after further discussion, we decided
the behavior difference between `-p` and `--python` would be confusing
so now `-p` is an alias for `--python` and `--python` is special-cased
when a version is used.
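For example (illustrative invocations):

```
uv run -p 3.12 script.py   # a version request; falls back to
                           # --python-version-like behavior if not found
uv run -p pypy script.py   # equivalent to --python pypy
```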
Additionally, we now respect the `UV_PYTHON` environment variable, but
it is ignored when `--python-version` is set. If you want different
`--python-version` and `--python` values, you must do so explicitly. I
considered banning this, but it is valid for e.g. `--python pypy
--python-version 3.12`
Unlike https://github.com/astral-sh/uv/pull/10222, this does not respect
`UV_PYTHON` in `uv python uninstall` (continuing to require an explicit
target there) which I think is simpler and matches our `.python-version`
file behavior.
---------
Co-authored-by: Choudhry Abdullah <cabdulla@trinity.edu>
Co-authored-by: Choudhry Abdullah <choudhry347@choudhrys-air-2.trinity.local>
Co-authored-by: Aria Desires <aria.desires@gmail.com>
Closes #10597.
Recreated https://github.com/astral-sh/uv/pull/10925, which got closed
when its base branch was merged.
Snapshot tests.
---------
Co-authored-by: Aria Desires <aria.desires@gmail.com>
`uv publish` has not changed for some time, it has [notable production
usage](https://github.com/search?q=%22uv+publish%22&type=code), and there
are no outstanding blockers; it is time to stabilize it with the 0.6
release.
Publishing is only usable through `uv publish`. You need to build source
distributions and wheels ahead of time, usually with `uv build`.
By default, `uv publish` will upload all source distributions and wheels
in the `dist/` folder, ignoring all non-matching filenames. By default,
`uv build` and most other build frontends write their artifacts to
`dist/`. Together, this gives us a publish workflow, including a smoke
test that all relevant files have actually been included in the wheel:
```
uv build
uv venv
uv pip install --find-links dist ...
uv run smoke_test.py
uv publish
```
There are 3 options supported in configuration files:
- `tool.uv.publish-url`
- `tool.uv.trusted-publishing`
- `tool.uv.check-url`
Options supported on the CLI and through environment variables for index
configuration:
```
--index <INDEX>
The name of an index in the configuration to use for publishing [env: UV_PUBLISH_INDEX=]
--publish-url <PUBLISH_URL>
The URL of the upload endpoint (not the index URL) [env: UV_PUBLISH_URL=]
--check-url <CHECK_URL>
Check an index URL for existing files to skip duplicate uploads [env: UV_PUBLISH_CHECK_URL=]
```
There are two ways to configure `uv publish`: passing options
individually or using the index API.
For the individual options, there are `--publish-url` and `--check-url`,
and their configuration counterparts, `tool.uv.publish-url` and
`tool.uv.check-url`. `--publish-url` is named this way to be clearly
different from the simple index URL, since uploading to the index URL
leads to unclear errors or, worse, a 200 OK with no effect. While we
intend to keep supporting this configuration, the index API is better
integrated.
In the index API, the user specifies `[[tool.uv.index]]`, with an index
name, the simple index URL and the publish URL. The `publish-url` and
`url` are equivalent to `--publish-url` and `--check-url`. The `url`
being mandatory makes for a better upload behavior (next paragraph).
```toml
[[tool.uv.index]]
name = "pypi"
url = "https://pypi.org/simple"
publish-url = "https://upload.pypi.org/legacy/"
```
A version of a package contains multiple files: for pure-Python packages,
usually a source distribution and a wheel; for native packages, usually
many larger wheels and a source distribution. Uploads in the (not
officially specified) Upload API 1.0 are file-based: once you upload a
file, the version is created, even though most files are still missing.
When uploading a series of files fails in the middle (e.g., the CI server
breaks), the release is only half uploaded. For such cases, you want to
retry the upload. The response of an index when re-uploading a file is
implementation-defined. Notably, PyPI accepts uploads of the same file
again with status 200, but rejects uploads of a file with the same name
but different contents with status 400. Other indexes reject all
attempts at re-uploads with varying status codes and messages. Twine
handles this with `--skip-existing`, which ignores errors caused by
uploading files with the same name as an existing file. However, this
also does not error when uploading a file with different contents but
the same name, which would indicate a problem with the publish pipeline.
To properly solve this, we need the ability to stage releases: Files of
a version are uploaded to a staging area, and only when all files are
uploaded, we atomically publish the release. When an upload breaks or CI
fails, we can discard or overwrite the staging area and try again. This
will only be properly solved by PEP 694 "Upload 2.0 API for Python
Package Indexes", with unclear progress. For local publishing, it would
also be convenient to be able to check which files exist and what their
hashes are from only the publish URL, so files in the `dist/` folder
from a previous release can be ignored.
In the Upload API 1.0, we need to upload transformed METADATA fields
along with the file as form-data. We currently upload only recognized
metadata fields, where we know how to translate the field name to the
form-data name. This means that when a user adds unknown, wrong, or
future-PEP metadata, we miss it. To my best knowledge, no index currently
verifies that the form-data and the METADATA file in the wheel match.
Upload API 2.0 will be an entirely new protocol. It is unclear how we
will decide whether to use Upload API 1.0 or Upload API 2.0 once the
latter is released. Upload API 2.0 will remove the need for a check URL.
This means no changes for `--index`, but `--check-url` will be
incompatible with Upload API 2.0.
Options supported on the CLI and through environment variables for
authentication:
```
-u, --username <USERNAME>
The username for the upload [env: UV_PUBLISH_USERNAME=]
-p, --password <PASSWORD>
The password for the upload [env: UV_PUBLISH_PASSWORD=]
-t, --token <TOKEN>
The token for the upload [env: UV_PUBLISH_TOKEN=]
--trusted-publishing <TRUSTED_PUBLISHING>
Configure using trusted publishing through GitHub Actions [possible values: automatic, always,
never]
--keyring-provider <KEYRING_PROVIDER>
Attempt to use `keyring` for authentication for remote requirements files [env:
UV_KEYRING_PROVIDER=] [possible values: disabled, subprocess]
```
We need credentials for the publish URL, and we may need credentials for
the check URL.
We support credentials from environment variables, the CLI, the URL, the
keyring, trusted publishing, or a prompt.
The username can come from, in order:
- Mutually exclusive:
  - `--username` or `UV_PUBLISH_USERNAME`. The CLI option overrides the
    environment variable
  - The username field in the publish URL
- If `--token` or `UV_PUBLISH_TOKEN` are used, it is `__token__`. The
  CLI option overrides the environment variable
- If trusted publishing is available, it is `__token__`
- (We currently do not read the username from the keyring)
- If stderr is a tty, prompt the user
The password can come from, in order:
- Mutually exclusive:
  - `--password` or `UV_PUBLISH_PASSWORD`. The CLI option overrides the
    environment variable
  - The password field in the publish URL
- If `--token` or `UV_PUBLISH_TOKEN` are used, it is the token value.
  The CLI option overrides the environment variable
- If the keyring is enabled, the keyring entry for the URL and username
- If trusted publishing is available, the trusted publishing token
- If stderr is a tty, prompt the user
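An illustrative condensation of the username rules above (not uv's actual
code; password resolution follows the analogous order, with the keyring
consulted between the token and trusted publishing):

```python
import os
import sys

def resolve_username(cli_username: str | None, url_username: str | None,
                     cli_token: str | None,
                     trusted_token: str | None) -> str | None:
    # The first two sources are mutually exclusive in practice; the
    # straight-line checks below are a simplification for illustration.
    if cli_username or os.environ.get("UV_PUBLISH_USERNAME"):
        return cli_username or os.environ["UV_PUBLISH_USERNAME"]
    if url_username:
        return url_username
    if cli_token or os.environ.get("UV_PUBLISH_TOKEN"):
        return "__token__"
    if trusted_token:
        return "__token__"
    # (The username is never read from the keyring.)
    if sys.stderr.isatty():
        return input("username: ")
    return None
```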
If no credentials are found, we do a final check in the auth middleware
cache and otherwise error without sending the request.
Trusted publishing is only supported in GitHub Actions. By default, we
try to retrieve a token from it in GitHub Actions (`GITHUB_ACTIONS` is
`true`) but continue even if this fails. Trusted publishing can be
forced with `--trusted-publishing always`, to error on misconfiguration,
or deactivated with `--trusted-publishing never`. The option can also be
configured through `tool.uv.trusted-publishing`.
When `--check-url` or `--index` are used, we may need credentials for
the index URL, too. These are handled separately, by the same rules as
using the index anywhere else. The `--keyring-provider` option is,
however, shared between them: turning the keyring on for either turns it
on for both.
As a future option, we could read `UV_INDEX_USERNAME` and
`UV_INDEX_PASSWORD` as fallbacks for the publish credentials
(https://github.com/astral-sh/uv/issues/9845). This, however, would clash
with prompting: when index credentials and upload credentials are not
the same (they usually should be different, since regular uv operations
should have fewer privileges than publishing), we would use the wrong
credentials from `UV_INDEX_*` and fail instead of prompting.
A major UX problem is that there is no standard for the username when
using a token (or rather, there is no standard for just sending a token
without a username). PyPI uses `__token__`, Cloudsmith used to use your
username or `token`, but now also supports `__token__`
(https://github.com/astral-sh/uv/issues/8221), while Google Cloud
Artifacts always uses `oauth2accesstoken`
(https://github.com/astral-sh/uv/issues/9778). This means the index
documentation may say you're getting a token for authentication, but you
must not use `--token`; you must instead set username and password. This
is something that we can hopefully fix with Upload API 2.0.
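For example, for Google Cloud Artifacts (illustrative; `$GCLOUD_TOKEN`
stands for however you obtain the token):

```
# The token is sent as a password with the provider-specific username;
# --token (which implies the `__token__` username) must not be used.
uv publish --username oauth2accesstoken --password "$GCLOUD_TOKEN"
```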
An unsolved problem with the keyring is that it's best practice to
use publish tokens scoped to projects and store them in a secure
location such as the keyring, but the keyring saves a single password
per publish URL and username combination. That means that it can't
natively store separate passwords for publishing multiple packages. The
current hack around this is using the package name as a query parameter,
e.g. `https://test.pypi.org/legacy/?astral-test-keyring`, as PyPI
ignores this query parameter. This is, however, only applicable when
publishing locally and not from CI.
Another problem is that the keyring implementation currently relies on
the `keyring` PyPI package, which needs to be installed in PATH together
with its plugins and is comparatively slow. This would be improved by
native keyring support (https://github.com/astral-sh/uv/issues/10867),
with the same caveats, such as keyring plugins, that it shares with the
simple index API.
We currently don't upload attestations (PEP 740). Attestations are an
additional field in the form-data, so we should be able to add them
transparently without any changes to the API, unless we want to add a
switch to deactivate them even when trusted publishing is used. See also
https://trailofbits.github.io/are-we-pep740-yet/.
Setuptools writes an invalid combination of Metadata-Version and
metadata fields in some cases, which PyPI correctly rejects
(https://github.com/astral-sh/uv/issues/9513).
We set a 15min overall timeout since reqwest is missing a write timeout
option (https://github.com/seanmonstar/reqwest/issues/2403).
https://github.com/astral-sh/uv/issues/8641 and
https://github.com/astral-sh/uv/issues/8774: We should build artifact
checking in some capacity. Ideally, this should be done by the build
backend, or at the latest as part of `uv build`; doing it as part of
publish is too late.
Closes #7839
---
Let me know if I missed anything.
We added this to help with resolving some specific packages, and for
parity with Poetry. But in some cases, this metadata is just wrong, and
at the very least it's unreliable.
Closes https://github.com/astral-sh/uv/issues/8989.
Closes #10945.
Instead of using junctions, we can just write files that contain (as the
file contents) the target path. This requires a little more finesse in
that, as readers, we need to know where to expect these. But it also
means we get to avoid junctions, which have led to a variety of
confusing behaviors. Further, `replace_symlink` should now be atomic
on Windows.
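A minimal sketch of the scheme (illustrative; not uv's actual
implementation):

```python
import os
import tempfile

def replace_symlink(link: str, target: str) -> None:
    """Write the target path as the file's contents, then rename into
    place; os.replace is atomic, unlike recreating a junction."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(link) or ".")
    with os.fdopen(fd, "w") as f:
        f.write(target)
    os.replace(tmp, link)

def read_link(link: str) -> str:
    """Readers must know to expect a path-containing file here."""
    with open(link) as f:
        return f.read()
```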
Closes #11263.
We've never bumped the version of this bucket, and we may never do so...
But it's still incorrect for us to omit it from these serialized structs
in the cache. Specifically, these structs include a pointer into the
archive bucket (namely, the ID). But we don't include the bucket
version! So, in theory, we could end up pointing to archives that don't
match the current bucket version expected in the code.