mirror of
https://github.com/astral-sh/uv.git
synced 2025-08-04 10:58:28 +00:00
Reduce the overhead of `uv run` in large workspaces. Instead of re-discovering the entire workspace each time we resolve the metadata of a member, we can cache the discovered set of workspace members. Care needs to be taken not to cache the discovery for `uv init`, `uv add`, and `uv remove`, which change the definitions of workspace members.

Below is apache airflow e3fe06382df4b19f2c0de40ce7c0bdc726754c74 running `uv run python` with a minimal payload. With this change, we avoid a ~350 ms overhead on each `uv run` invocation.

```
$ hyperfine --warmup 2 \
    "uv run --no-dev python -c \"print('hi')\"" \
    "uv-profiling run --no-dev python -c \"print('hi')\""
Benchmark 1: uv run --no-dev python -c "print('hi')"
  Time (mean ± σ):     492.6 ms ±   7.0 ms    [User: 393.2 ms, System: 97.1 ms]
  Range (min … max):   482.3 ms … 501.5 ms    10 runs

Benchmark 2: uv-profiling run --no-dev python -c "print('hi')"
  Time (mean ± σ):     129.7 ms ±   2.5 ms    [User: 105.4 ms, System: 23.2 ms]
  Range (min … max):   126.0 ms … 136.1 ms    22 runs

Summary
  uv-profiling run --no-dev python -c "print('hi')" ran
    3.80 ± 0.09 times faster than uv run --no-dev python -c "print('hi')"
```

The profile after these changes is below (profile image omitted). We still spend a large chunk in TOML parsing (both `uv.lock` and `pyproject.toml`), but it is no longer excessive.
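The caching described above can be sketched as a map from workspace root to the discovered member set, with explicit invalidation for the commands that mutate membership. This is a minimal illustration, not uv's actual implementation: the `Workspace`, `WorkspaceCache`, and `discover_workspace` names here are hypothetical stand-ins, and the call counter exists only to make the saved work visible.

```rust
use std::collections::HashMap;
use std::path::{Path, PathBuf};

// Hypothetical stand-in for the result of workspace discovery.
// In uv this would come from walking the tree and parsing pyproject.toml files.
#[derive(Clone, Debug, PartialEq)]
struct Workspace {
    members: Vec<String>,
}

// The expensive step: in the real tool this hits the filesystem.
fn discover_workspace(root: &Path, calls: &mut usize) -> Workspace {
    *calls += 1; // count how often we actually pay the discovery cost
    let _ = root;
    Workspace {
        members: vec!["member-a".into(), "member-b".into()],
    }
}

// Cache keyed by workspace root, so resolving metadata for many members
// reuses a single discovery pass instead of re-walking the workspace.
struct WorkspaceCache {
    cache: HashMap<PathBuf, Workspace>,
}

impl WorkspaceCache {
    fn new() -> Self {
        Self { cache: HashMap::new() }
    }

    fn get_or_discover(&mut self, root: &Path, calls: &mut usize) -> Workspace {
        if let Some(ws) = self.cache.get(root) {
            return ws.clone();
        }
        let ws = discover_workspace(root, calls);
        self.cache.insert(root.to_path_buf(), ws.clone());
        ws
    }

    // Commands that change workspace membership (`uv init`, `uv add`,
    // `uv remove`) must bypass or invalidate the cached discovery.
    fn invalidate(&mut self, root: &Path) {
        self.cache.remove(root);
    }
}

fn main() {
    let mut calls = 0;
    let mut cache = WorkspaceCache::new();
    let root = PathBuf::from("/work/airflow");

    // Resolving three members triggers discovery only once.
    for _ in 0..3 {
        cache.get_or_discover(&root, &mut calls);
    }
    println!("discovery calls: {calls}");

    // After a membership-changing command, discovery runs again.
    cache.invalidate(&root);
    cache.get_or_discover(&root, &mut calls);
    println!("after invalidate: {calls}");
}
```

The key design point mirrored from the commit message is the invalidation path: the cache is safe only because commands that edit workspace definitions do not reuse stale discovery results.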