Mirror of https://github.com/astral-sh/uv.git (synced 2025-10-27 18:36:44 +00:00)
Add hash-checking support to install and sync (#2945)
## Summary

This PR adds support for hash-checking mode in `pip install` and `pip sync`. It's a large change, both in terms of the size of the diff and the modifications in behavior, but it's also one that's hard to merge in pieces (at least, with any test coverage), since it needs to work end-to-end to be useful and testable.

Here are some of the most important highlights:

- We store hashes in the cache. Where we previously stored pointers to unzipped wheels in the `archives` directory, we now store pointers with a set of known hashes, so every pointer to an unzipped wheel also includes its known hashes.
- By default, we don't compute any hashes. If the user runs with `--require-hashes` and the cache doesn't contain those hashes, we invalidate the cache, redownload the wheel, and compute the hashes as we go. For users that don't run with `--require-hashes`, there will be no change in performance. For users that _do_, the only change will be if they don't run with `--generate-hashes` -- then they may see some repeated work between resolution and installation if they use `pip compile` followed by `pip sync`.
- Many of the distribution types now include a `hashes` field, like `CachedDist` and `LocalWheel`.
- Our behavior is similar to pip's, in that we enforce hashes when pulling any remote distributions and when pulling from our own cache. Like pip, though, we _don't_ enforce hashes if a distribution is _already_ installed.
- Hash validity is enforced in a few different places:
  1. During resolution, we enforce hash validity based on the hashes reported by the registry. If we need to access a source distribution, though, we then enforce hash validity at that point too, prior to running any untrusted code. (This is enforced in the distribution database.)
  2. In the install plan, we _only_ add cached distributions that have matching hashes. If a cached distribution is missing any hashes, or the hashes don't match, we don't return it from the install plan.
  3. In the downloader, we _only_ return distributions with matching hashes.
  4. The final set of "things we install" is the combination of (1) the wheels from the cache and (2) the downloaded wheels, so this ensures that we never install any mismatching distributions.
- Like pip, if `--require-hashes` is provided, we require that _all_ distributions are pinned with either `==` or a direct URL. We also require that _all_ distributions have hashes.

There are a few notable TODOs:

- We don't support hash-checking mode for unnamed requirements. These should be _somewhat_ rare, though, since `pip compile` never outputs unnamed requirements. I can fix this; it's just some additional work.
- We don't automatically enable `--require-hashes` when a hash exists in the requirements file; the flag must be passed explicitly.

Closes #474.

## Test Plan

I'd like to add some tests for registries that report incorrect hashes, but otherwise: `cargo test`.
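As a rough illustration of the enforcement idea described above -- digest the wheel bytes as they're pulled, then refuse to install unless the digest matches a pinned hash -- here is a minimal Rust sketch. It is not uv's internal API: the `sha2` dependency, the `ExpectedHashes` type, and `verify_wheel` are assumptions made for the example.

```rust
use sha2::{Digest, Sha256}; // assumed dependency: the `sha2` crate

/// Hashes pinned for a distribution, e.g. the `--hash=sha256:...` values
/// emitted by `pip compile --generate-hashes`. (Hypothetical type.)
struct ExpectedHashes {
    sha256: Vec<String>,
}

/// Compute the wheel's SHA-256 and accept it only if it matches a pinned hash.
/// (Hypothetical helper; in uv the checks live in the distribution database,
/// the install plan, and the downloader, as described in the summary.)
fn verify_wheel(wheel_bytes: &[u8], expected: &ExpectedHashes) -> Result<(), String> {
    let mut hasher = Sha256::new();
    hasher.update(wheel_bytes);
    let digest: String = hasher
        .finalize()
        .iter()
        .map(|byte| format!("{byte:02x}"))
        .collect();

    if expected
        .sha256
        .iter()
        .any(|pinned| pinned.eq_ignore_ascii_case(&digest))
    {
        Ok(())
    } else {
        Err(format!("hash mismatch: computed sha256:{digest}"))
    }
}

fn main() {
    // A deliberately wrong pinned hash, to show the rejection path.
    let expected = ExpectedHashes {
        sha256: vec!["0".repeat(64)],
    };
    match verify_wheel(b"not a real wheel", &expected) {
        Ok(()) => println!("hashes match; safe to install"),
        Err(err) => eprintln!("refusing to install: {err}"),
    }
}
```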
This commit is contained in:
parent 715a309dd5
commit 1f3b5bb093
56 changed files with 3186 additions and 333 deletions
Excerpt from the diff (one of the 56 changed files):

```diff
@@ -1,5 +1,4 @@
 use std::borrow::Cow;
-
 use std::path::{Path, PathBuf};
 
 use anyhow::{Context, Result};
@@ -25,6 +24,8 @@ pub struct SourceTreeResolver<'a, Context: BuildContext + Send + Sync> {
     source_trees: Vec<PathBuf>,
     /// The extras to include when resolving requirements.
     extras: &'a ExtrasSpecification<'a>,
+    /// Whether to require hashes for all dependencies.
+    require_hashes: bool,
     /// The in-memory index for resolving dependencies.
     index: &'a InMemoryIndex,
     /// The database for fetching and building distributions.
@@ -36,6 +37,7 @@ impl<'a, Context: BuildContext + Send + Sync> SourceTreeResolver<'a, Context> {
     pub fn new(
         source_trees: Vec<PathBuf>,
         extras: &'a ExtrasSpecification<'a>,
+        require_hashes: bool,
         context: &'a Context,
         client: &'a RegistryClient,
         index: &'a InMemoryIndex,
@@ -43,6 +45,7 @@ impl<'a, Context: BuildContext + Send + Sync> SourceTreeResolver<'a, Context> {
         Self {
             source_trees,
             extras,
+            require_hashes,
             index,
             database: DistributionDatabase::new(client, context),
         }
@@ -84,6 +87,16 @@ impl<'a, Context: BuildContext + Send + Sync> SourceTreeResolver<'a, Context> {
             path: Cow::Owned(path),
         });
+
+        // TODO(charlie): Should we enforce this earlier? If the metadata can be extracted
+        // statically, it won't go through this resolver. But we'll fail anyway, since the
+        // dependencies (when extracted from a `pyproject.toml` or `setup.py`) won't include hashes.
+        if self.require_hashes {
+            return Err(anyhow::anyhow!(
+                "Hash-checking is not supported for local directories: {}",
+                source_tree.user_display()
+            ));
+        }
 
         // Fetch the metadata for the distribution.
         let metadata = {
             let id = PackageId::from_url(source.url());
@@ -104,7 +117,7 @@ impl<'a, Context: BuildContext + Send + Sync> SourceTreeResolver<'a, Context> {
         } else {
             // Run the PEP 517 build process to extract metadata from the source distribution.
             let source = BuildableSource::Url(source);
-            let metadata = self.database.build_wheel_metadata(&source).await?;
+            let metadata = self.database.build_wheel_metadata(&source, &[]).await?;
 
             // Insert the metadata into the index.
             self.index
```
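The hunks above only thread `require_hashes` through the source-tree resolver. The summary also notes that cache pointers now record known hashes and that the install plan only reuses cached distributions whose hashes match. Below is a rough sketch of that filtering step, using hypothetical names (`CachedWheel`, `plan_from_cache`) rather than uv's actual types.

```rust
/// A cached wheel pointer that records the hashes it was validated against.
/// (Hypothetical type, standing in for the cache pointers described above.)
#[derive(Debug)]
struct CachedWheel {
    name: String,
    /// Hashes recorded alongside the unzipped wheel, e.g. "sha256:...".
    hashes: Vec<String>,
}

/// Keep only cached wheels whose recorded hashes satisfy the pinned hashes.
/// A wheel with no recorded hashes, or with mismatched hashes, is not reused.
fn plan_from_cache<'a>(cached: &'a [CachedWheel], pinned: &[String]) -> Vec<&'a CachedWheel> {
    cached
        .iter()
        .filter(|wheel| {
            !wheel.hashes.is_empty() && wheel.hashes.iter().any(|hash| pinned.contains(hash))
        })
        .collect()
}

fn main() {
    let cached = vec![
        CachedWheel {
            name: "idna".to_string(),
            hashes: vec!["sha256:aaaa".to_string()], // placeholder digest
        },
        CachedWheel {
            name: "anyio".to_string(),
            hashes: vec![], // no recorded hashes: must be re-downloaded and re-hashed
        },
    ];
    let pinned = vec!["sha256:aaaa".to_string()];

    let reusable = plan_from_cache(&cached, &pinned);
    assert_eq!(reusable.len(), 1);
    println!("reusable from cache: {}", reusable[0].name);
}
```

Anything filtered out here falls through to a fresh download, where the hash is enforced again, mirroring enforcement points (2) and (3) in the summary.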