## Summary
If the package _isn't_ marked as `workspace = true`, locking will fail
given:
```rust
let workspace_package_declared =
    // We require that when you use a package that's part of the workspace, ...
    !workspace.packages().contains_key(&requirement.name)
        // ... it must be declared as a workspace dependency (`workspace = true`), ...
        || matches!(
            source,
            Some(Source::Workspace {
                // By using toml, we technically support `workspace = false`.
                workspace: true,
                ..
            })
        )
        // ... except for recursive self-inclusion (extras that activate other extras), e.g.
        // `framework[machine_learning]` depends on `framework[cuda]`.
        || &requirement.name == project_name;
if !workspace_package_declared {
    return Err(LoweringError::UndeclaredWorkspacePackage);
}
```
Closes https://github.com/astral-sh/uv/issues/4552.
## Summary
This PR modifies `uv run` to fall back to discovering an interpreter
(e.g., a local `.venv`) if the command is run outside of a workspace.
`uv run --isolated` continues to skip workspace _and_ interpreter
discovery entirely, only installing whatever's provided with
`--with`.
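Roughly, the discovery order looks like this (a minimal sketch with stub types and helpers, not uv's actual code):
```rust
struct Environment;

// Stand-ins for uv's real discovery routines.
fn discover_workspace() -> Option<Environment> {
    None // stub: find an enclosing workspace and its environment
}

fn discover_interpreter() -> Option<Environment> {
    None // stub: e.g., find a local `.venv`
}

/// Sketch: prefer the workspace environment; otherwise fall back to a
/// discovered interpreter, unless `--isolated` skips discovery entirely.
fn find_environment(isolated: bool) -> Option<Environment> {
    if isolated {
        // `uv run --isolated`: skip workspace *and* interpreter discovery;
        // only `--with` requirements are installed.
        return None;
    }
    discover_workspace().or_else(discover_interpreter)
}
```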
The next step here is adding some ergonomic controls for enabling this
behavior even if your project is technically in a workspace (i.e., you
have a `pyproject.toml` but aren't using the Project APIs and don't want
locking etc.). I could imagine a setting in `pyproject.toml` that's also
exposed on the command-line. Something like: `managed = false` or
`project = false`.
See: https://github.com/astral-sh/uv/issues/3836.
This is the minimal "working" implementation. In summary, we:
- Resolve the requested requirements
- Create an environment at `$UV_STATE_DIR/tools/$name`
- Inspect the `dist-info` for the main requirement to determine its
entry point scripts
- Link the entry points from a user-executable directory
(`$XDG_BIN_HOME`) to the environment bin
- Create an entry at `$UV_STATE_DIR/tools/tools.toml` tracking the
user's request
The idea with `tools.toml` is that it allows us to perform upgrades and
syncs, retaining the original user request (similar to declarations in a
`pyproject.toml`). I imagine using a similar schema in
`pyproject.toml` in the future if/when we add project-level tools. I'm
also considering exposing `tools.toml` in the standard uv configuration
directory instead of the state directory, but it seems nice to tuck it
away for now while we iterate on it. Installing a tool won't sync other
tool environments; we'll probably add an explicit `uv tool sync` command
for that.
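For illustration, a hypothetical shape for `tools.toml`, inferred from the description above (the real schema may well differ):
```rust
use serde::{Deserialize, Serialize};

// Hypothetical schema, not the actual format.
#[derive(Serialize, Deserialize)]
struct ToolsToml {
    #[serde(default)]
    tool: Vec<ToolEntry>,
}

#[derive(Serialize, Deserialize)]
struct ToolEntry {
    /// The installed tool, e.g., "black".
    name: String,
    /// The user's original requirements, kept verbatim so upgrades and
    /// syncs can re-resolve from the original request.
    requirements: Vec<String>,
}
```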
I've split out todos into follow-up pull requests:
- #4509 (failing on Windows)
- #4501
- #4504

Closes #4485
`ResolverState::choose_version` had become huge, with an odd match due
to the URL handling from #4435. This refactoring breaks it into
`choose_version`, `choose_version_registry`, and `choose_version_url`. No
functional changes.
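The resulting shape is roughly the following (a sketch with simplified stub types; the method names come from this description, the signatures do not):
```rust
struct Version;
struct Url;
struct Package;

impl Package {
    fn url(&self) -> Option<&Url> {
        None // stub
    }
}

struct ResolverState;

impl ResolverState {
    /// Dispatch on whether the package is pinned to a URL.
    fn choose_version(&mut self, package: &Package) -> Option<Version> {
        match package.url() {
            Some(url) => self.choose_version_url(package, url),
            None => self.choose_version_registry(package),
        }
    }

    fn choose_version_url(&mut self, _package: &Package, _url: &Url) -> Option<Version> {
        None // stub
    }

    fn choose_version_registry(&mut self, _package: &Package) -> Option<Version> {
        None // stub
    }
}
```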
In the case of a direct URL sdist, it includes a hash, and this hash is
not (and probably should not be) part of the `source`. The URL is part
of the source because it permits uniquely identifying this particular
package as distinct from any other package with the same name. But we
should still include the hash.
So in this commit, we rejigger what we did previously to make it so the
`SourceDist` value isn't even constructed at all when it isn't needed.
This also in turn lets us make the hash field required (which we will do
in a subsequent commit).
This does mean the URL is stored twice for direct URL dependencies in
the lock file. This seems non-ideal. We could make the URL for the sdist
optional, but this seems like a bridge too far? Another choice is to add
a new key to `distribution` that is just `direct-url-hash`, but that
also seems mucky.
Maybe the duplication here is okay given the relative rarity of direct
URL dependencies.
This updates all of the test snapshots where `sdist` was
strictly redundant and could be removed.
Note that there is one test failure whose snapshot I didn't
update: one where there is a direct URL dependency. In this
case, the sdist entry isn't strictly redundant, as it includes
a hash that isn't present in the source. We'll deal with that
in a subsequent commit.
This fixes an issue in the lock file where, in cases where we had a
non-registry sdist, the information in the sdist was strictly redundant
with the information in the source. This was borne out in the code
already where the `sdist` field was only ever used to build a source
distribution type when the source was a registry. In all other cases,
the source distribution data can be materialized from the `source`
field.
This makes it clear that an actual `sdist` is only required when a
distribution is from a registry. In all other cases, a source
distribution is manufactured directly from the `source`.
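A simplified encoding of that invariant (stand-in types, not uv's actual definitions):
```rust
enum Source {
    Registry(String),
    Direct(String),
    Git(String),
    Path(String),
}

/// Only a registry source needs a separate `sdist` entry; all other
/// sources carry enough information to materialize the distribution.
fn source_dist_url(source: &Source, sdist: Option<&str>) -> Option<String> {
    match source {
        Source::Registry(_) => sdist.map(str::to_owned),
        Source::Direct(url) | Source::Git(url) | Source::Path(url) => Some(url.clone()),
    }
}
```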
Previously, we had `Lock` and `LockWire` impl blocks intermixed. This bugs
me a bit, so I've just shuffled things around so that we have `Lock`, `impl
Lock`, `LockWire`, and then `impl LockWire`.
No changes are otherwise made to the code here.
This update follows from the removal of `source` and `version` from
`distribution.dependency` entries in the lock file when the package name
unambiguously refers to a single distribution.
When there is only one distribution for a particular package name, any
dependencies (the edges in the resolution graph) that reference that
package name are completely unambiguous. Therefore, we can actually omit
their version and source information and instead derive it from the
distribution entry.
We add some tests to check the success and error cases. That is, when
`source` or `version` are omitted and there are more than one
corresponding distribution for the package name (i.e., it's ambiguous),
then lock deserialization should fail.
This commit prepares to make the `source` and `version` fields optional
in a `distribution.dependency` based on whether they have an unambiguous
value, e.g., when there is exactly one distribution with a matching
package name.
This refactor effectively defines "wire" types for most of the lock data
types (repeating the `WheelWire` and `LockWire` pattern) with one key
difference: we don't use serde's `TryFrom` integration. In this
refactor, we could have, and it would have worked. But in a subsequent
commit, we're going to be adding state to the `unwire()` calls that is
impossible to thread through a `TryFrom` implementation. This state will
tell us how to populate the `source` and `version` values on a
`Dependency` when they're missing.
The duplication of types here is unfortunate, but the compiler should
catch any deviations. And the wire types are unexported, so they have a
limited blast radius on complexity.
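A minimal sketch of the pattern (stand-in types; the stateful `unwire()` is exactly the part a serde `TryFrom` integration couldn't express):
```rust
use std::collections::HashMap;

struct DependencyWire {
    name: String,
    source: Option<String>,
    version: Option<String>,
}

struct Dependency {
    name: String,
    source: String,
    version: String,
}

impl DependencyWire {
    /// `by_name` maps a package name to the (source, version) of each
    /// distribution with that name — the extra state threaded through.
    fn unwire(
        self,
        by_name: &HashMap<String, Vec<(String, String)>>,
    ) -> Result<Dependency, String> {
        let (source, version) = match (self.source, self.version) {
            // Explicit values always win.
            (Some(source), Some(version)) => (source, version),
            // Omitted: only valid if exactly one distribution matches.
            _ => match by_name.get(&self.name).map(Vec::as_slice) {
                Some([(source, version)]) => (source.clone(), version.clone()),
                // Ambiguous or missing: deserialization must fail.
                _ => return Err(format!("ambiguous dependency `{}`", self.name)),
            },
        };
        Ok(Dependency {
            name: self.name,
            source,
            version,
        })
    }
}
```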
Downstack PR: #4481
## Introduction
We support forking the dependency resolution to support conflicting
registry requirements for different platforms, say one package range is
required for an older Python version while a newer one is required for
newer Python versions, or dependencies that differ per platform. We
need to extend this support to direct URL requirements.
```toml
dependencies = [
"iniconfig @ 62565a6e1c/iniconfig-2.0.0-py3-none-any.whl ; python_version >= '3.12'",
"iniconfig @ b3c12c6d70/iniconfig-1.1.1-py2.py3-none-any.whl ; python_version < '3.12'"
]
```
This did not work because `Urls` was built on the assumption that there
is a single allowed URL per package. We collect all allowed URLs ahead of
resolution by following direct URL dependencies (including path
dependencies) transitively, i.e. a registry distribution can't require a
URL.
## The same package can have Registry and URL requirements
Consider the following two cases:
requirements.in:
```text
werkzeug==2.0.0
werkzeug @ 960bb4017c/Werkzeug-2.0.0-py3-none-any.whl
```
pyproject.toml:
```toml
dependencies = [
"iniconfig == 1.1.1 ; python_version < '3.12'",
"iniconfig @ git+https://github.com/pytest-dev/iniconfig@93f5930e668c0d1ddf4597e38dd0dea4e2665e7a ; python_version >= '3.12'",
]
```
In the first case, we want the URL to override the registry dependency;
in the second case, we want to fork and have one branch use the registry
and the other the URL. We have to know about this in
`PubGrubRequirement::from_registry_requirement`, but we only fork after
the current method.
Consider the following case too:
a:
```
c==1.0.0
b @ https://b.zip
```
b:
```
c @ https://c_new.zip ; python_version >= '3.12'
c @ https://c_old.zip ; python_version < '3.12'
```
When we convert the requirements of `a`, we can't know the URL of `c`
yet. The solution is to remove the `Url` from `PubGrubPackage`: the
`Url` is redundant with `PackageName`, as there can be only one URL per
package name per fork. We now do the following: we track the URLs from
requirements in `PubGrubDependency`. After forking, we call
`add_package_version_dependencies`, where we apply override URLs, check
if the URL is allowed, and check if the URL is unique in this fork. When
we request a distribution, we ask the fork URLs for the real URL. Since
we prioritize URL dependencies over registry dependencies and skip
packages with `Urls` entries in pre-visiting, we know whether a package
has a URL by the time we fetch it.
## URL conflicts
pyproject.toml (invalid):
```toml
dependencies = [
"iniconfig @ e96292c7f7/iniconfig-1.1.0.tar.gz",
"iniconfig @ b3c12c6d70/iniconfig-1.1.1-py2.py3-none-any.whl ; python_version < '3.12'",
"iniconfig @ 62565a6e1c/iniconfig-2.0.0-py3-none-any.whl ; python_version >= '3.12'",
]
```
On the fork state, we keep a `ForkUrls` map that checks for conflicts
after forking, rejecting the example above because we added two packages
of the same name with different URLs.
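A simplified sketch of that conflict check (stand-in types; uv's real `ForkUrls` differs in detail):
```rust
use std::collections::hash_map::Entry;
use std::collections::HashMap;

// Stand-ins for uv's real types.
type PackageName = String;
type Url = String;

#[derive(Default)]
struct ForkUrls(HashMap<PackageName, Url>);

impl ForkUrls {
    /// Record the URL for a package in this fork, erroring if the fork has
    /// already seen a *different* URL for the same package name.
    fn insert(&mut self, name: PackageName, url: Url) -> Result<(), String> {
        match self.0.entry(name) {
            Entry::Vacant(entry) => {
                entry.insert(url);
                Ok(())
            }
            Entry::Occupied(entry) if *entry.get() == url => Ok(()),
            Entry::Occupied(entry) => Err(format!(
                "conflicting URLs for `{}`: `{}` vs. `{}`",
                entry.key(),
                entry.get(),
                url
            )),
        }
    }
}
```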
We need to flatten out the requirements before transformation into
pubgrub requirements to get the full list of other requirements which
may contain a URL, which was changed in a previous PR: #4430.
## Complex Example
a:
```toml
dependencies = [
# Force a split
"anyio==4.3.0 ; python_version >= '3.12'",
"anyio==4.2.0 ; python_version < '3.12'",
# Include URLs transitively
"b"
]
```
b:
```toml
dependencies = [
# Only one is used in each split.
"b1 ; python_version < '3.12'",
"b2 ; python_version >= '3.12'",
"b3 ; python_version >= '3.12'",
]
```
b1:
```toml
dependencies = [
"iniconfig @ b3c12c6d70/iniconfig-1.1.1-py2.py3-none-any.whl",
]
```
b2:
```toml
dependencies = [
"iniconfig @ 62565a6e1c/iniconfig-2.0.0-py3-none-any.whl",
]
```
b3:
```toml
dependencies = [
"iniconfig @ e96292c7f7/iniconfig-1.1.0.tar.gz",
]
```
In this example, all packages are URL requirements (directory
requirements) and the root package is `a`. We first split on `a`, with `b`
being in each split. In the first fork, we reach `b1`; the fork URLs are
empty, we insert the iniconfig 1.1.1 URL, and then we skip over `b2` and
`b3` since their markers are disjoint with the fork markers. In the second
fork, we skip over `b1`, visit `b2`, insert the iniconfig 2.0.0 URL into
the again-empty fork URLs, then visit `b3` and try to insert the
iniconfig 1.1.0 URL. At this point we find a conflict for the iniconfig
URL and error.
## Closing
The git tests are slow, but they make the best example for different URL
types I could find.
Part of #3927. This PR does not handle `Locals` or pre-releases yet.
In the last PR (#4430), we flatten the requirements. In the next PR
(#4435), we want to pass `Url` around next to `PubGrubPackage` and
`Range<Version>` to keep track of which `Requirement`s added a url
across forking. This PR is a refactoring split out from #4435 that rolls
the dependency conversion into a single iterator and introduces a new
`PubGrubDependency` struct as abstraction over `(PubGrubPackage,
Range<Version>)` (or `(PubGrubPackage, Range<Version>,
VerbatimParsedUrl)` in the next PR), and it removes the now unnecessary
`PubGrubDependencies` abstraction.
This PR refactors the command creation in the test suite to remove the
duplication.
**1)** We add the same set of test stubbing args to almost any uv
invocation in the tests:
```rust
command
    .arg("--cache-dir")
    .arg(self.cache_dir.path())
    .env("VIRTUAL_ENV", self.venv.as_os_str())
    .env("UV_NO_WRAP", "1")
    .env("HOME", self.home_dir.as_os_str())
    .env("UV_TOOLCHAIN_DIR", "")
    .env("UV_TEST_PYTHON_PATH", &self.python_path())
    .current_dir(self.temp_dir.path());
if cfg!(all(windows, debug_assertions)) {
    // TODO(konstin): Reduce stack usage in debug mode enough that the tests pass with the
    // default windows stack of 1MB
    command.env("UV_STACK_SIZE", (8 * 1024 * 1024).to_string());
}
```
Centralizing these into a `TestContext::add_shared_args` method removes
them from everywhere.
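The centralized helper then looks something like this (illustrative; fields and arguments trimmed from the real thing):
```rust
use std::process::Command;

// Illustrative stand-in; the real `TestContext` has more fields.
struct TestContext {
    cache_dir: std::path::PathBuf,
}

impl TestContext {
    /// Apply the shared stubbing args and environment in one place, so each
    /// test constructs commands with a single call.
    fn add_shared_args<'a>(&self, command: &'a mut Command) -> &'a mut Command {
        command
            .arg("--cache-dir")
            .arg(&self.cache_dir)
            .env("UV_NO_WRAP", "1")
    }
}
```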
**2)** Prefix all `TestContext` methods of the pip interface with
`pip_`. This is now necessary due to `uv sync` vs. `uv pip sync`.
**3)** Move command creation in the various test files into dedicated
functions or methods to avoid repeating the arguments. Except for error
message tests, there should be at most one `Command::new(get_bin())`
call per test file. `EXCLUDE_NEWER` is exclusively used in
`TestContext`.
---
I'm considering adding a `TestCommand` on top of these changes (in
another PR) that holds a reference to the `TestContext`, has
`add_shared_args` as a method, and uses `Fn(Self) -> Self` instead of
`Fn(&mut Self) -> Self` for methods to improve chaining.
Downstack PR: #4515 Upstack PR: #4481
Consider these two cases:
A:
```
werkzeug==2.0.0
werkzeug @ 960bb4017c/Werkzeug-2.0.0-py3-none-any.whl
```
B:
```toml
dependencies = [
"iniconfig == 1.1.1 ; python_version < '3.12'",
"iniconfig @ git+https://github.com/pytest-dev/iniconfig@93f5930e668c0d1ddf4597e38dd0dea4e2665e7a ; python_version >= '3.12'",
]
```
In the first case, `werkzeug==2.0.0` should be overridden by the URL. In
the second case, `iniconfig == 1.1.1` is in a different fork and must
remain a registry distribution.
That means the conversion from `Requirement` to `PubGrubPackage` is
dependent on the other requirements of the package. We can either look
into the other packages immediately, or we can move the forking before
the conversion to `PubGrubDependencies` instead of after. Either approach
requires a flat list of `Requirement`s to use. This refactoring gives us
that list.
I'll add support for both of the above cases in the forking urls branch
before merging this PR. I also have to move constraints over to this.
While working on https://github.com/astral-sh/uv/pull/4492 I noticed
that `--reinstall-package` was not actually respected by
`update_environment`, it exited early due to satisfied requirements.
Before
```
❯ cargo run -q -- tool install black -v --reinstall-package tomli
...
DEBUG All requirements satisfied: black | click>=8.0.0 | mypy-extensions>=0.4.3 | packaging>=22.0 | pathspec>=0.9.0 | platformdirs>=2 | tomli>=1.1.0 ; python_version < '3.11' | typing-extensions>=4.0.1 ; python_version < '3.11'
```
After
```
❯ cargo run -q -- tool install black -v --reinstall-package tomli
...
Uninstalled 1 package in 0.99ms
Installed 1 package in 4ms
- tomli==2.0.1
+ tomli==2.0.1
```
## Summary
This is an intermediary change in enabling universal resolution for
`requirements.txt` files. To start, we need to be able to preserve
markers in the `requirements.txt` output _and_ propagate those markers,
such that if you have a dependency that's only included with a given
marker, the transitive dependencies respect that marker too.
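For example (hypothetical package names), if `a` is only included under a marker, its transitive dependency `b` must carry the same marker in the output:
```text
a==1.0.0 ; python_version < '3.11'
# `b` is only reachable through `a`, so it inherits the marker:
b==2.0.0 ; python_version < '3.11'
```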
Closes #1429.
## Summary
If the user puts their configuration in a `pyproject.toml` that _isn't_
a valid workspace root (e.g., it's a Poetry file), we won't discover it,
because we only look in `uv.toml` files in that case. I think this is
somewhat debatable... We could choose to _require_ `uv.toml` there, but
as a user I'd probably expect it to work?
Closes https://github.com/astral-sh/uv/issues/4521.
`junction::create` apparently will happily succeed but not create a link
to files? Since our symlink function does not indicate that it cannot
handle files, this was quite surprising.
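A guard along these lines (hypothetical; not the actual patch) would reject files up front instead of letting `junction::create` silently "succeed":
```rust
use std::io;
use std::path::Path;

/// Junctions only apply to directories on Windows, so reject files
/// explicitly rather than "succeeding" without creating a link.
#[cfg(windows)]
fn create_junction_checked(original: &Path, link: &Path) -> io::Result<()> {
    if original.is_dir() {
        junction::create(original, link)
    } else {
        Err(io::Error::new(
            io::ErrorKind::InvalidInput,
            format!(
                "Cannot create a junction for {}: is not a directory",
                original.display()
            ),
        ))
    }
}
```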
Tested over in #4509 which previously failed on an assertion that
`black.exe` existed.
```
error: Failed to install entrypoint
Caused by: Cannot create a junction for [TEMP_DIR]/tools/black/Scripts/black.exe: is not a directory
```
We should file an issue upstream too, I think?
## Summary
We currently accept `--index-url /path/to/index` on the command line,
but confusingly, not in `requirements.txt`. This PR just brings the two
in sync.
## Test Plan
New snapshot tests.
## Summary
pip allows these with the following logic:
```python
if os.path.exists(location):  # Is a local path.
    url = path_to_url(location)
    path = location
elif location.startswith("file:"):  # A file: URL.
    url = location
    path = url_to_path(location)
elif is_url(location):
    url = location
```
Closes https://github.com/astral-sh/uv/issues/4510.
## Test Plan
`cargo run pip install --index-url ../packse/index/simple-html/
example-a-961b4c22 --reinstall --no-cache --no-deps`
I was getting this test failure locally on my Archlinux system:
```
-old snapshot
+new results
0 0 │ success: true
1 1 │ exit_code: 0
2 2 │ ----- stdout -----
3 │-[PYTHON-3.12]
3 │+/usr/bin/python3
4 4 │
5 5 │ ----- stderr -----
```
Where I have `/usr/bin/python3` and `/usr/bin/python3.12`.
Thanks @zanieb for the help with figuring out the fix here!
So we can skip them when there are no code changes and still enforce
our required checks (xref #4438).
Otherwise, the names are dynamic and they are forever expected (see
#4426 for an example).
## Summary
In #4085, support was implemented for the `--prefix` option. When using
this option, however, a lock is acquired either on the virtualenv or
globally, preventing multiple installs to different `--prefix`es from the
same interpreter.
In this change, we acquire the lock on just the prefix in question.
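A minimal sketch of the idea, assuming an advisory file lock (e.g., via the `fs2` crate); this is not uv's actual locking code, and the lock file name is hypothetical:
```rust
use std::fs::File;
use std::io;
use std::path::Path;

use fs2::FileExt;

/// Key the lock on the `--prefix` directory itself, so concurrent installs
/// into *different* prefixes from the same interpreter don't contend.
fn acquire_prefix_lock(prefix: &Path) -> io::Result<File> {
    std::fs::create_dir_all(prefix)?;
    let lock = File::create(prefix.join(".lock"))?; // hypothetical lock file name
    lock.lock_exclusive()?; // released when the handle is dropped
    Ok(lock)
}
```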
## Test Plan
Ran a `uv pip install` with `--prefix` and `RUST_LOG=trace` and observed
that the lock was acquired in the prefix.