This enhances the hints generator in the resolver with a heuristic to
detect failures caused by a version mismatch on a local package and warn
about them. Such a mismatch may be the symptom of a name
conflict/shadowing with a transitive dependency.
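As a rough sketch of the heuristic (all types and names below are hypothetical, not the resolver's actual internals): when a requirement can't be satisfied, check whether its name matches a local package whose version falls outside the requested range, and surface a shadowing hint.
```rust
/// Hypothetical stand-ins for the resolver's real types (illustrative only).
struct LocalPackage {
    name: String,
    /// Simplified version triple instead of a real PEP 440 version.
    version: (u64, u64, u64),
}

struct UnsatisfiedRequirement {
    name: String,
    /// The requested range that could not be satisfied, as inclusive bounds.
    min: (u64, u64, u64),
    max: (u64, u64, u64),
}

/// If the unsatisfiable requirement names a local package whose version falls
/// outside the requested range, the local package may be shadowing a
/// same-named (transitive) dependency from the registry; emit a hint.
fn shadowing_hint(failure: &UnsatisfiedRequirement, locals: &[LocalPackage]) -> Option<String> {
    let local = locals.iter().find(|pkg| pkg.name == failure.name)?;
    if local.version < failure.min || local.version > failure.max {
        Some(format!(
            "hint: `{}` was requested in {:?}..={:?}, but the local package has version {:?}; \
             a local package may be shadowing a transitive dependency of the same name",
            failure.name, failure.min, failure.max, local.version
        ))
    } else {
        None
    }
}

fn main() {
    let locals = [LocalPackage { name: "foo".into(), version: (0, 1, 0) }];
    let failure = UnsatisfiedRequirement { name: "foo".into(), min: (2, 0, 0), max: (3, 0, 0) };
    println!("{}", shadowing_hint(&failure, &locals).unwrap());
}
```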
Closes: https://github.com/astral-sh/uv/issues/7329
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
Recently, rkyv 0.8 was released. Its API is a fair bit simpler now for
higher level uses (like for us in `uv`) and results in us being able to
delete a fair bit of code. This also removes our last use of `syn 1.0`,
letting us drop that dependency entirely.
Performance (via testing on the `transformers` example) seems to remain
about the same, which is what was expected:
```
$ hyperfine -w5 -r100 'uv lock' 'uv-ag-rkyv-update lock'
Benchmark 1: uv lock
  Time (mean ± σ):     55.6 ms ±  6.4 ms    [User: 30.4 ms, System: 35.1 ms]
  Range (min … max):   43.0 ms … 73.1 ms    100 runs

Benchmark 2: uv-ag-rkyv-update lock
  Time (mean ± σ):     56.5 ms ±  7.2 ms    [User: 30.5 ms, System: 36.3 ms]
  Range (min … max):   39.1 ms … 71.5 ms    100 runs

Summary
  uv lock ran
    1.02 ± 0.18 times faster than uv-ag-rkyv-update lock
```
Closes #7415
This changes the structure of the hints generator in the resolver when
encountering solution errors, so that it re-uses a single output buffer
owned by the caller.
It avoids repeated allocations of a temporary buffer within each
recursive function call.
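Roughly the shape of the change, as a hedged sketch with illustrative names (not the actual uv code): instead of each recursive call returning a freshly allocated `String`, the caller owns a single buffer and passes it down as `&mut String`.
```rust
use std::fmt::Write;

/// Before (sketch): every level of recursion allocates and returns its own buffer.
fn hints_allocating(depth: usize) -> String {
    let mut out = String::new();
    if depth > 0 {
        // Each call builds a temporary `String` that is immediately copied and dropped.
        out.push_str(&hints_allocating(depth - 1));
    }
    let _ = writeln!(out, "hint at depth {depth}");
    out
}

/// After (sketch): the caller owns one buffer and each call appends to it in place.
fn hints_reusing(depth: usize, out: &mut String) {
    if depth > 0 {
        hints_reusing(depth - 1, out);
    }
    let _ = writeln!(out, "hint at depth {depth}");
}

fn main() {
    let mut buffer = String::new();
    hints_reusing(3, &mut buffer);
    assert_eq!(buffer, hints_allocating(3));
    print!("{buffer}");
}
```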
## Summary
This PR enables users to provide pre-defined static metadata for
dependencies. It's intended for situations in which the user depends on
a package that does _not_ declare static metadata (e.g., a
`setup.py`-only sdist), and that is expensive to build or cannot even be
built on some architectures. For example, you might have a Linux-only
dependency that can't be built on ARM -- but we need to build that
package in order to generate the lockfile. By providing static metadata,
the user can instruct uv to avoid building that package at all.
For example, to override all `anyio` versions:
```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.12"
dependencies = ["anyio"]

[[tool.uv.dependency-metadata]]
name = "anyio"
requires-dist = ["iniconfig"]
```
Or, to override a specific version:
```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.12"
dependencies = ["anyio"]

[[tool.uv.dependency-metadata]]
name = "anyio"
version = "3.7.0"
requires-dist = ["iniconfig"]
```
The current implementation uses `Metadata23` directly, so we adhere to
the exact schema expected internally and defined by the standards. Any
entries are treated similarly to overrides, in that we won't even look
for `anyio@3.7.0` metadata in the above example. (In a way, this also
enables #4422, since you could remove a dependency for a specific
package, though it's probably too unwieldy to use in practice, since
you'd need to redefine the _rest_ of the metadata, and do that for every
package that requires the package you want to omit.)
This is under-documented, since I want to get feedback on the core ideas
and names involved.
Closes https://github.com/astral-sh/uv/issues/7393.
## Summary
All the registry wheels were getting cached under
`index/b2a7eb67d4c26b82` rather than `pypi`, because we used
`IndexUrl::Url` rather than `IndexUrl::from`.
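For illustration, a simplified, hypothetical mirror of the two constructors (not the real `IndexUrl` definition): a `from`-style conversion can normalize the well-known PyPI URL to a dedicated variant, while constructing the generic URL variant directly skips that normalization, so the cache bucket ends up keyed by a URL hash instead of `pypi`.
```rust
/// Simplified, hypothetical mirror of an index-URL type; not uv's actual definition.
enum IndexUrl {
    Pypi,
    Url(String),
}

impl From<&str> for IndexUrl {
    fn from(url: &str) -> Self {
        // Normalize the well-known default index to its dedicated variant.
        if url.trim_end_matches('/') == "https://pypi.org/simple" {
            IndexUrl::Pypi
        } else {
            IndexUrl::Url(url.to_string())
        }
    }
}

impl IndexUrl {
    /// The cache bucket used for wheels from this index.
    fn cache_key(&self) -> String {
        match self {
            // A stable, human-readable bucket for the default index.
            IndexUrl::Pypi => "pypi".to_string(),
            // Other indexes get a bucket derived from the URL (a real hash in practice).
            IndexUrl::Url(url) => format!("index/{:016x}", toy_hash(url)),
        }
    }
}

/// Placeholder for a real URL hash.
fn toy_hash(s: &str) -> u64 {
    s.bytes().fold(0u64, |acc, b| acc.wrapping_mul(31).wrapping_add(b as u64))
}

fn main() {
    // Going through the conversion yields the `pypi` bucket...
    assert_eq!(IndexUrl::from("https://pypi.org/simple").cache_key(), "pypi");
    // ...while constructing the generic variant directly does not.
    assert_ne!(IndexUrl::Url("https://pypi.org/simple".into()).cache_key(), "pypi");
}
```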
## Summary
This is arguably breaking, arguably a bug... Today, if project A depends
on project B, and you install A with dev dependencies enabled, you also
get B's dev dependencies. I think this is incorrect. Just like you
shouldn't be importing B's dependencies from A, you shouldn't be using
B's dev dependencies when developing on A.
Closes #7310.
## Summary
We have to call `to_dist` to get metadata while validating the lockfile,
but some of the distributions won't match the current platform -- and
that's fine!
## Summary
We need to apply the `--no-install` filters earlier, such that we don't
error if we only have a source distribution for a given package when
`--no-build` is provided but that package is _omitted_.
Closes #7247.
This is preparatory work for the upload functionality, which needs to
read the METADATA file and attach its parsed contents to the POST
request: We move finding the `.dist-info` from `install-wheel-rs` and
`uv-client` to a new `uv-metadata` crate, so it can be shared with the
publish crate.
I don't know for certain that this is the right place, since the upload
code isn't ready yet, but I'm PR-ing it now because it already had merge
conflicts.
## Summary
We now track the discovered `IndexCapabilities` for each `IndexUrl`. If
we learn that an index doesn't support range requests, we avoid doing
any batch prefetching.
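A minimal sketch of the bookkeeping, with hypothetical names rather than uv's internals: record per-index whether range requests are supported, learned lazily from responses, and consult that before batch prefetching.
```rust
use std::collections::HashMap;

/// Hypothetical capability record for a single index (illustrative only).
#[derive(Default)]
struct IndexCapabilities {
    /// `None` until a response has told us either way.
    supports_range_requests: Option<bool>,
}

#[derive(Default)]
struct CapabilityRegistry {
    by_index: HashMap<String, IndexCapabilities>,
}

impl CapabilityRegistry {
    /// Record what a response taught us (e.g., a missing `Accept-Ranges` header).
    fn record_range_support(&mut self, index_url: &str, supported: bool) {
        self.by_index
            .entry(index_url.to_string())
            .or_default()
            .supports_range_requests = Some(supported);
    }

    /// Only prefetch in batches when we haven't learned that range requests fail.
    fn should_batch_prefetch(&self, index_url: &str) -> bool {
        self.by_index
            .get(index_url)
            .and_then(|caps| caps.supports_range_requests)
            .unwrap_or(true) // optimistic until proven otherwise
    }
}

fn main() {
    let mut registry = CapabilityRegistry::default();
    assert!(registry.should_batch_prefetch("https://example.org/simple"));
    registry.record_range_support("https://example.org/simple", false);
    assert!(!registry.should_batch_prefetch("https://example.org/simple"));
}
```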
Closes https://github.com/astral-sh/uv/issues/7221.
## Summary
If we have a singleton `Range`, we don't need to iterate over the map of
available ranges; instead, we can just get the singleton directly.
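Sketched with a toy range type (not the resolver's real one): when the range pins exactly one version, do a direct lookup instead of scanning the available versions.
```rust
use std::collections::BTreeMap;

/// Toy version range: an inclusive interval (illustrative only).
struct Range {
    low: u64,
    high: u64,
}

impl Range {
    /// If the range admits exactly one version, return it.
    fn as_singleton(&self) -> Option<u64> {
        (self.low == self.high).then_some(self.low)
    }

    fn contains(&self, v: u64) -> bool {
        self.low <= v && v <= self.high
    }
}

/// Find any available version matching the range.
fn find_match(range: &Range, available: &BTreeMap<u64, &str>) -> Option<u64> {
    if let Some(version) = range.as_singleton() {
        // Singleton: a direct lookup, no iteration required.
        return available.contains_key(&version).then_some(version);
    }
    // General case: scan the available versions.
    available.keys().copied().find(|v| range.contains(*v))
}

fn main() {
    let available: BTreeMap<u64, &str> = [(1, "1.0"), (2, "2.0"), (3, "3.0")].into();
    assert_eq!(find_match(&Range { low: 2, high: 2 }, &available), Some(2));
    assert_eq!(find_match(&Range { low: 4, high: 9 }, &available), None);
}
```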
Closes #6131.
This finally gets rid of our hack for working around "hidden"
state. We no longer do a roundtrip marker serialization and
deserialization just to avoid the hidden state.
## Summary
I think a better tradeoff here is to skip fetching metadata, even though
we can't validate the extras.
It will help with situations like
https://github.com/astral-sh/uv/issues/5073#issuecomment-2334235588 in
which, otherwise, we have to download the wheels twice.
(This is part of #5711)
## Summary
@BurntSushi and I spotted that the `derivative` crate is only used for
one enum in the entire codebase — however, it's a proc macro, and we pay
for the cost of (re)compiling it in many different contexts.
This replaces it with a private `Inner` core which uses the regular std
derive macros — inlining and optimizations should make this equivalent
to the other implementation, and not too hard to maintain hopefully
(versus a manual impl of `PartialEq` and `Hash` which have to be kept in
sync.)
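The pattern, sketched with hypothetical names: the fields that matter for equality and hashing live in a private `Inner` that uses the std derives, the ignored field stays on the outer type, and the outer impls just forward to `Inner`, so `eq` and `hash` can't drift apart.
```rust
use std::hash::{Hash, Hasher};

/// The fields that should participate in `PartialEq`/`Hash`, grouped so the
/// regular std derives can be used (hypothetical example, not the real type).
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
struct Inner {
    name: String,
    version: u32,
}

/// The public type carries one extra field that must be ignored for equality
/// and hashing (e.g., a cached, derived value).
#[derive(Debug, Clone)]
pub struct Package {
    inner: Inner,
    /// Ignored by `PartialEq`/`Hash`: purely a cache.
    display_cache: Option<String>,
}

impl PartialEq for Package {
    fn eq(&self, other: &Self) -> bool {
        // Delegate to the derived impl on `Inner`.
        self.inner == other.inner
    }
}

impl Eq for Package {}

impl Hash for Package {
    fn hash<H: Hasher>(&self, state: &mut H) {
        // Delegate to the derived impl on `Inner`, keeping `eq` and `hash` in sync.
        self.inner.hash(state);
    }
}
```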
## Test Plan
Trust CI?
## Summary
Like `uv sync`, you can omit the current project (`--no-emit-project`),
a specific package (`--no-emit-package`), or the entire workspace
(`--no-emit-workspace`).
Closes https://github.com/astral-sh/uv/issues/6960.
Closes #6995.
Follow-up to #6959 and #6961: Use the reachability computation instead
of `propagate_markers` everywhere.
With `marker_reachability`, we have a function that computes, for each
node, the markers under which it is included in the installation (for
`requirements.txt`, where no markers are provided at install time) or
can be included (for `uv.lock`, depending on the markers provided at
install time). Put differently: if the marker computed by
`marker_reachability` is not fulfilled on the current platform, the
package is never required on that platform.
We compute the markers for each package in the graph; this includes both
the virtual extra packages and the base packages. Since we know that
each virtual extra package depends on its base package (`foo[bar]`
implies `foo`), we only retain the base package's marker in the
`requirements.txt` graph.
In #6959/#6961 we were only using it for pruning packages in `uv.lock`,
now we're also using it for the markers in `requirements.txt`.
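A toy sketch of the reachability computation (not uv's implementation; markers are modeled as bitmasks over a small fixed set of environments, so `and` is intersection and `or` is union): propagate markers from the roots along edges to a fixpoint, so each node ends up with the union, over all paths, of the conjunction of the edge markers on each path.
```rust
/// Toy model: a "marker" is the set of environments it matches, encoded as a
/// bitmask over a small fixed universe (say bit 0 = linux, bit 1 = macos, ...).
type Marker = u8;
const ALWAYS: Marker = 0b1111;
const NEVER: Marker = 0b0000;

struct Edge {
    from: usize,
    to: usize,
    /// Marker attached to this dependency edge.
    marker: Marker,
}

/// For each node, compute the environments under which it can be installed:
/// the union, over all paths from a root, of the intersection of edge markers.
fn marker_reachability(num_nodes: usize, roots: &[usize], edges: &[Edge]) -> Vec<Marker> {
    let mut reach = vec![NEVER; num_nodes];
    for &root in roots {
        reach[root] = ALWAYS;
    }
    // Iterate to a fixpoint; markers only grow and are bounded, so this terminates.
    loop {
        let mut changed = false;
        for edge in edges {
            let via = reach[edge.from] & edge.marker; // reachable through this edge
            let updated = reach[edge.to] | via; // union over all incoming paths
            if updated != reach[edge.to] {
                reach[edge.to] = updated;
                changed = true;
            }
        }
        if !changed {
            return reach;
        }
    }
}

fn main() {
    const LINUX: Marker = 0b0001;
    const MACOS: Marker = 0b0010;
    // root (0) -> a (1) only on linux; root -> b (2) always; b -> a only on macos.
    let edges = [
        Edge { from: 0, to: 1, marker: LINUX },
        Edge { from: 0, to: 2, marker: ALWAYS },
        Edge { from: 2, to: 1, marker: MACOS },
    ];
    let reach = marker_reachability(3, &[0], &edges);
    assert_eq!(reach[1], LINUX | MACOS); // `a` is needed on linux or macos, never elsewhere
}
```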
I think this closes #4645, CC @bluss.
## Summary
We need to prioritize hashes for the distribution over hashes for the
related packages.
I think this needs to be redone entirely though. I can see other issues
with the current approach.
Closes https://github.com/astral-sh/uv/issues/7059.
When a package is included under a platform-specific marker, we know
that wheels that mismatch this marker can never be installed, so we drop
them from the lockfile.
In transformers, we have:
* `tensorflow-text`: `tensorflow-macos; python_full_version >= '3.13'
and platform_machine == 'arm64' and platform_system == 'Darwin'`
* `tensorflow-macos`: `tensorflow-cpu-aws; (python_full_version < '3.10'
and platform_machine == 'aarch64' and platform_system == 'Linux') or
(python_full_version >= '3.13' and platform_machine == 'aarch64' and
platform_system == 'Linux') or (python_full_version >= '3.13' and
platform_machine == 'arm64' and platform_system == 'Linux')`
* `tensorflow-macos`: `tensorflow-intel; python_full_version >= '3.13'
and platform_system == 'Windows'`
This means that `tensorflow-cpu-aws` and `tensorflow-intel` can never be
installed, and we can drop them from the lockfile.
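Reusing the toy bitmask model from the reachability sketch above (again illustrative, not the actual lockfile code): once a package's reachability marker is known, any wheel whose platform tags are disjoint from it can be dropped.
```rust
/// Toy model continued: markers as environment bitmasks.
type Marker = u8;

struct Wheel {
    filename: &'static str,
    /// Environments this wheel can ever be installed on (from its platform tags).
    platforms: Marker,
}

/// Keep only the wheels that can be installed under the package's reachability marker.
fn prune_wheels(reachability: Marker, wheels: Vec<Wheel>) -> Vec<Wheel> {
    wheels
        .into_iter()
        .filter(|wheel| wheel.platforms & reachability != 0)
        .collect()
}

fn main() {
    const LINUX: Marker = 0b0001;
    const MACOS: Marker = 0b0010;
    // Package only reachable on macOS (e.g., guarded by `platform_system == 'Darwin'`).
    let kept = prune_wheels(
        MACOS,
        vec![
            Wheel { filename: "pkg-1.0-cp313-manylinux_x86_64.whl", platforms: LINUX },
            Wheel { filename: "pkg-1.0-cp313-macosx_arm64.whl", platforms: MACOS },
        ],
    );
    assert_eq!(kept.len(), 1);
    assert_eq!(kept[0].filename, "pkg-1.0-cp313-macosx_arm64.whl");
}
```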
This commit refactors how we deal with `requires-python` so that instead
of simplifying the markers of dependencies inside the resolver, we do it
at the edges of our system. When writing markers to output, we simplify
them when there's an obvious `requires-python` context. And when reading
markers as input, we complexify them with the relevant `requires-python`
constraint.
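Reusing the same toy bitmask model for markers (illustrative only, not the real marker algebra): simplification on output drops a marker that is already implied by the `requires-python` context, and complexification on input re-attaches that context, so the round trip preserves the marker's meaning within `requires-python`.
```rust
/// Toy model again: a marker is the set of environments it matches, as a bitmask.
type Marker = u8;
/// "No marker at all" matches every environment.
const ALWAYS: Marker = 0b1111;

/// Writing output: under a `requires-python` context `rp`, a marker that already
/// covers all of `rp` carries no extra information and can be dropped (simplified).
fn simplify(marker: Marker, rp: Marker) -> Marker {
    if marker & rp == rp { ALWAYS } else { marker }
}

/// Reading input: a marker written under the context `rp` only makes promises
/// within `rp`, so re-attach (complexify with) that constraint.
fn complexify(marker: Marker, rp: Marker) -> Marker {
    marker & rp
}

fn main() {
    let rp: Marker = 0b0011; // stand-in for the requires-python environments
    for marker in 0..=ALWAYS {
        // The round trip preserves the marker's meaning inside the requires-python context.
        assert_eq!(complexify(simplify(marker, rp), rp), marker & rp);
    }
}
```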
When I first wrote this routine, it was intended to only emit a trace
for the final "unioned" resolution. But we actually moved that semantic
operation to the construction of the resolution *graph*. So there is no
unioned `Resolution` any more.
But this is still useful to see. So I changed this to just emit a trace
of *every* resolution right before constructing the graph.
It might be nice to also emit a trace of the unioned graph too. Or
perhaps we should do that instead if this proves too noisy. (Although
this is only emitted at TRACE level.)
## Summary
Right now, we have slightly different `requires-python` semantics for
`-p 3.11` vs. `-p 3.11 --universal`, and slightly different (wrong)
semantics for how we compare against the _installed_ Python version
(which doesn't ignore upper bounds, but should).
This PR rips it all out and replaces it with consistent semantics across
`uv lock`, `uv pip compile -p 3.11`, and `uv pip compile -p 3.11
--universal`. We now always ignore upper bounds.
Closes https://github.com/astral-sh/uv/issues/6859.
Closes https://github.com/astral-sh/uv/issues/5045.
## Summary
We now respect the user-provided upper bound for `requires-python`.
So, if the user has `requires-python = "==3.11.*"`, we won't explore
forks that have `python_version >= '3.12'`, for example.
However, we continue to _only_ compare the lower bounds when assessing
whether a dependency is compatible with a given Python range.
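A hedged sketch of the fork pruning with toy version ranges (not the resolver's real types): intersect each fork's Python range with `requires-python` and skip forks whose intersection is empty.
```rust
/// Toy Python version range with optional inclusive bounds (illustrative only).
#[derive(Clone, Copy)]
struct PyRange {
    lower: Option<(u32, u32)>, // e.g. Some((3, 11)) for ">= 3.11"
    upper: Option<(u32, u32)>, // e.g. Some((3, 11)) for "<= 3.11"
}

impl PyRange {
    fn intersect(self, other: PyRange) -> PyRange {
        PyRange {
            // `None` (unbounded below) sorts before any `Some`, so `max` picks the tighter bound.
            lower: self.lower.max(other.lower),
            // For the upper bound, `None` means unbounded above, so take the tighter `Some`.
            upper: match (self.upper, other.upper) {
                (Some(a), Some(b)) => Some(a.min(b)),
                (a, b) => a.or(b),
            },
        }
    }

    fn is_empty(self) -> bool {
        matches!((self.lower, self.upper), (Some(lo), Some(hi)) if lo > hi)
    }
}

/// A fork is only worth exploring if its Python range can overlap `requires-python`.
fn keep_fork(requires_python: PyRange, fork: PyRange) -> bool {
    !requires_python.intersect(fork).is_empty()
}

fn main() {
    // requires-python = "==3.11.*"
    let requires_python = PyRange { lower: Some((3, 11)), upper: Some((3, 11)) };
    // A fork guarded by `python_version >= '3.12'` can never apply...
    assert!(!keep_fork(requires_python, PyRange { lower: Some((3, 12)), upper: None }));
    // ...while one guarded by `python_version >= '3.10'` still can.
    assert!(keep_fork(requires_python, PyRange { lower: Some((3, 10)), upper: None }));
}
```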
Closes https://github.com/astral-sh/uv/issues/6150.
## Summary
The interface here is intentionally a bit more limited than `uv pip
compile`, because we don't want `requirements.txt` to be a system of
record -- it's just an export format. So, we don't write annotation
comments (i.e., which dependency is requested from which), we don't
allow writing extras, etc. It's just a flat list of requirements, with
their markers and hashes.
Closes #6007.
Closes #6668.
Closes #6670.