## Summary
While fixing https://github.com/astral-sh/ruff/pull/5233, I noticed that
in FastAPI, 343 out of 823 files weren't hitting the cache. It turns out
these are standalone files in the documentation that lack a "package
root". Later, when looking up the cache entries, we fallback to the
package directory.
This PR ensures that we initialize the cache for both kinds of files:
those that are in a package, and those that aren't.
The total size of the FastAPI cache for me is now 388K. I also suspect
that this approach is much faster than the initial implementation, since
before, we were probably initializing one cache per _directory_.
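For illustration, a minimal sketch of the intended behavior, where each file maps either to its package root or, failing that, to its own directory (the function name and map shapes are illustrative, not Ruff's actual internals):

```rust
use std::collections::{HashMap, HashSet};
use std::path::{Path, PathBuf};

/// Collect one cache root per file: the package root when the file belongs
/// to a package, or the file's own directory when it doesn't.
fn cache_roots<'a>(
    files: &'a [PathBuf],
    package_roots: &HashMap<&'a Path, Option<&'a Path>>,
) -> HashSet<&'a Path> {
    files
        .iter()
        .filter_map(|file| {
            let dir = file.parent()?;
            // Fall back to the file's directory when there is no package root.
            Some(package_roots.get(dir).copied().flatten().unwrap_or(dir))
        })
        .collect()
}
```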
## Test Plan
Ran `cargo run -p ruff_cli -- check ../fastapi --verbose`; verified
that, on second execution, there were no "Checking" entries in the logs.
## Summary
This is a proper fix for the issue patched-over in
https://github.com/astral-sh/ruff/pull/5229, thanks to an extremely
helpful repro from @tlambert03 in that thread. It looks like we were
using the keys of `package_roots` rather than the values to initialize
the cache -- but it's a map from package to package root.
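A minimal sketch of the distinction, assuming `package_roots` maps each package directory (key) to its optional package root (value):

```rust
use std::collections::{HashMap, HashSet};
use std::path::Path;

// The cache has to be initialized per package *root*, i.e. per value of
// the map, not per key.
fn roots_to_initialize<'a>(
    package_roots: &HashMap<&'a Path, Option<&'a Path>>,
) -> HashSet<&'a Path> {
    // The buggy version iterated `.keys()` here.
    package_roots.values().copied().flatten().collect()
}
```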
## Test Plan
Reverted #5229, then ran through the plan that @tlambert03 included in
https://github.com/astral-sh/ruff/pull/5229#issuecomment-1599723226.
Verified the panic before but not after this change.
## Summary
I haven't been able to determine why / when this is happening, but in
some cases, users are reporting that this `unwrap()` is causing a panic.
It's fine to just return `None` here and fall back to "No cache"; that's
certainly better than panicking (while we figure out the edge case).
Closes #5225.
Closes #5228.
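A minimal sketch of the pattern, with a hypothetical lookup function standing in for the real call site:

```rust
use std::path::{Path, PathBuf};

// Hypothetical lookup: `?` instead of `unwrap()`, so an unexpected path
// simply disables caching for that file rather than panicking.
fn cache_key_dir(path: &Path) -> Option<PathBuf> {
    // Before: `path.parent().unwrap()` could panic on an edge case.
    Some(path.parent()?.to_path_buf())
}
```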
## Summary
A new CLI option (`-o`/`--output-file`) to write output to a file
instead of stdout.
The major change is removing the manual lock acquired on stdout. The
argument is that the output is buffered, so the lock is only acquired when
writing out a block (8 KB). As per the benchmarks below, there is a slight
performance penalty.
Reference:
https://rustmagazine.org/issue-3/javascript-compiler/#printing-is-slow
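As a rough sketch of the writer selection (everything but the `--output-file` flag itself is illustrative):

```rust
use std::fs::File;
use std::io::{self, BufWriter, Write};
use std::path::PathBuf;

/// Pick the sink diagnostics are written to: a file when `--output-file`
/// is given, otherwise buffered stdout. Buffering means the implicit
/// stdout lock is taken roughly once per 8 KB block instead of per line.
fn output_writer(output_file: Option<PathBuf>) -> io::Result<Box<dyn Write>> {
    match output_file {
        Some(path) => Ok(Box::new(BufWriter::new(File::create(path)?))),
        None => Ok(Box::new(BufWriter::new(io::stdout()))),
    }
}
```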
## Benchmarks
_Output is truncated to only contain useful information:_
Command: `check --isolated --no-cache --select=ALL --show-source ./test-repos/cpython`
Latest HEAD (361d45f2b2) with and without
the manual lock on stdout:
```console
Benchmark 1: With lock
Time (mean ± σ): 5.687 s ± 0.075 s [User: 17.110 s, System: 0.486 s]
Range (min … max): 5.615 s … 5.860 s 10 runs
Benchmark 2: Without lock
Time (mean ± σ): 5.719 s ± 0.064 s [User: 17.095 s, System: 0.491 s]
Range (min … max): 5.640 s … 5.865 s 10 runs
Summary
(1) ran 1.01 ± 0.02 times faster than (2)
```
This PR:
```console
Benchmark 1: This PR
Time (mean ± σ): 5.855 s ± 0.058 s [User: 17.197 s, System: 0.491 s]
Range (min … max): 5.786 s … 5.987 s 10 runs
Benchmark 2: Latest HEAD with lock
Time (mean ± σ): 5.645 s ± 0.033 s [User: 16.922 s, System: 0.495 s]
Range (min … max): 5.600 s … 5.712 s 10 runs
Summary
(2) ran 1.04 ± 0.01 times faster than (1)
```
## Test Plan
Run all of the commands that give output, with and without the
`--output-file=ruff.out` option:
* `--show-settings`
* `--show-files`
* `--show-fixes`
* `--diff`
* `--select=ALL`
* `--select=ALL --show-source`
* `--watch` (only stdout allowed)
resolves: #4754
## Summary
Open cache files in parallel (again), bringing performance back to roughly
that of the old implementation.
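Roughly, the idea is to fan the opens out over a thread pool, e.g. with rayon; the `Cache` type and function names below are illustrative, not the actual implementation:

```rust
use std::collections::HashMap;
use std::path::{Path, PathBuf};

use rayon::prelude::*;

/// Hypothetical per-package cache; only the parallel open matters here.
struct Cache;

impl Cache {
    fn open(_package_root: &Path) -> Cache {
        // ... read and deserialize the cache file for this package root ...
        Cache
    }
}

/// Open one cache per package root in parallel instead of a sequential loop.
fn open_caches(package_roots: &[PathBuf]) -> HashMap<&Path, Cache> {
    package_roots
        .par_iter()
        .map(|root| (root.as_path(), Cache::open(root)))
        .collect()
}
```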
## Test Plan
Existing tests should keep working.
## Summary
Maintain consistency when serializing a Jupyter notebook to JSON. The
following changes were made:
1. Use a string array to store the source value, as that's the default
(5781720423/nbformat/v4/nbjson.py (L56-L57)).
2. Remove unused structs and enums.
3. Reorder the keys in alphabetical order, as that's the default
(5781720423/nbformat/v4/nbjson.py (L51)).
### Side effect
Removing the `preserve_order` feature means that the order of keys in
JSON output (`--format json`) will be in alphabetical order. This is
because the value is represented using `serde_json::Value` which
internally is a `BTreeMap`, which sorts entries by their string keys. For
posterity: if this turns out to be not ideal, we could define a struct
representing the JSON object, and the order of the struct fields would
determine the order of keys in the JSON string.
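To illustrate the difference (the struct and its fields are made up): without `preserve_order`, a `serde_json::Value`'s keys serialize alphabetically, whereas a dedicated struct serializes its fields in declaration order:

```rust
use serde::Serialize;

// Without the `preserve_order` feature, `serde_json::Value` stores object
// keys in a `BTreeMap`, so `--format json` output ends up alphabetically
// ordered. A struct, by contrast, serializes fields in declaration order:
#[derive(Serialize)]
struct Message {
    // Declaration order is preserved in the output, regardless of alphabet.
    filename: String,
    code: String,
    message: String,
}
```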
## Test Plan
Add a test case to assert the raw JSON string.
## Summary
This changes the caching design from one cache file per source file, to
one cache file per package. This greatly reduces the number of cache
files that are opened and written, while keeping roughly the same
(combined) size, since bincode is very compact.
Below are some very much unscientific performance tests, using the
following projects/sources:
* small.py: a single 31-byte Python file with 2 errors.
* test.py: a single 43 KB Python file with 8 errors.
* fastapi: the FastAPI repo, 1134 files checked, 0 errors.
Source | Before # files | After # files | Before size | After size
-------|-------|-------|-------|-------
small.py | 1 | 1 | 20 K | 20 K
test.py | 1 | 1 | 60 K | 60 K
fastapi | 1134 | 518 | 4.5 M | 2.3 M
One question that might come up: why does fastapi still have 518 cache
files and not 1? That is because this uses the existing package
resolution, which sees examples, docs, etc. as separate from the "main"
source code (in the fastapi directory in the repo). In the future it
might be worth considering a switch to a one-cache-file-per-repo strategy.
This new design is not perfect and does have a number of known issues.
First, like the old design it doesn't remove the cache for a source file
that has been (re)moved until `ruff clean` is called.
Second, this currently uses a large mutex around the mutation of the
package cache (e.g. inserting results). This could be (or become) a
bottleneck; testing and improving this (if needed) is future work.
Third, the package caches are currently opened and stored in a sequential
loop; this could be done in parallel. This is also future work.
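For illustration, a sketch of what a per-package cache could look like under these constraints; all type and field names here are assumptions, not the actual implementation:

```rust
use std::collections::HashMap;
use std::path::PathBuf;
use std::sync::Mutex;

use serde::{Deserialize, Serialize};

/// One cache file per package: a map from file path (relative to the
/// package root) to that file's cached results.
#[derive(Serialize, Deserialize, Default)]
struct PackageCache {
    files: HashMap<PathBuf, FileCache>,
}

#[derive(Serialize, Deserialize)]
struct FileCache {
    /// Used to detect whether the source file changed since the last run.
    last_modified_ms: u64,
    /// The diagnostics recorded for the file on the last run.
    messages: Vec<String>,
}

/// Shared across worker threads; every insertion takes the lock, which is
/// the "large mutex" mentioned above.
struct SharedPackageCache(Mutex<PackageCache>);

impl SharedPackageCache {
    fn insert(&self, path: PathBuf, result: FileCache) {
        self.0.lock().unwrap().files.insert(path, result);
    }

    /// Persist the whole package cache as one compact bincode blob.
    fn to_bytes(&self) -> Vec<u8> {
        bincode::serialize(&*self.0.lock().unwrap()).expect("cache serialization")
    }
}
```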
## Test Plan
Run `ruff check` (with caching enabled) twice on any Python source code
and it should produce the same results.
## Summary
We now _always_ generate fixes, so `FixMode::None` and
`FixMode::Generate` are redundant. We can also remove the TODO around
`--fix-dry-run`, since that's our default behavior.
Closes #5081.
## Summary
Ignore the pyproject.toml file when adding noqa directives using `--add-noqa`.
## Test Plan
`cargo run --bin ruff -- check --add-noqa .`
fixes: #5012
* Create dummy format CLI
* Hide format from clap, too
Missed that this is a separate option from `#[doc(hidden)]`
* Remove cargo feature and replace with warning
* No-alloc files parameter matching
* beta warning: warn -> warn_user_once
* Rephrase warning
* Add basic jupyter notebook support behind a feature flag
* Address review comments
* Rename in separate commit to make both git and clippy happy
* cfg(feature = "jupyter_notebook") another test
* Address more review comments
* Address more review comments
* and clippy and windows
* More review comments
## Summary
This PR moves `Diagnostic`, `DiagnosticKind`, and `Fix` into their own crate, which will enable us to further split up Ruff, since sub-linter crates (which need to implement functions that return `Diagnostic`) can now depend on `ruff_diagnostics` rather than Ruff.
This PR introduces a new `CacheKey` trait for types that can be used as a cache key.
I'm not entirely sure if this is worth the "overhead", but I was surprised to find `HashableHashSet` and got scared when I looked at the time complexity of the `hash` function. These implementations must be extremely slow in hashed collections.
I then searched for usages and quickly realized that only the cache uses these `Hash` implementations, where performance is less sensitive.
This PR introduces a new `CacheKey` trait to communicate the difference between a hash and computing a key for the cache. The new trait can be implemented for types that don't implement `Hash` for performance reasons, and we can define additional constraints on the implementation: for example, we'll want to enforce portability when we add remote caching support. Using a separate trait further allows us to not implement it for types without stable identities (e.g. pointers), or to use implementations other than the standard hash function.
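A minimal sketch of the idea (the exact trait shape in Ruff may differ):

```rust
use std::hash::{Hash, Hasher};

/// Minimal sketch of a cache-key trait: like `Hash`, but explicitly about
/// computing a stable key for the cache rather than a fast in-memory hash.
trait CacheKey {
    fn cache_key<H: Hasher>(&self, state: &mut H);
}

// Types can opt in even if they deliberately don't implement `Hash`, and
// implementations are free to favor stability/portability over speed.
impl CacheKey for String {
    fn cache_key<H: Hasher>(&self, state: &mut H) {
        self.hash(state);
    }
}
```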
In ruff-lsp (https://github.com/charliermarsh/ruff-lsp/pull/76) we want to add a "Disable \<rule\> for this line" quickfix. However, finding the correct line into which the `noqa` comment should be inserted is non-trivial (multi-line strings for example).
Ruff already has this info, so expose it in the JSON output for use by ruff-lsp.
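A sketch of the per-diagnostic JSON payload with the extra row included; the field names are illustrative rather than the exact output schema:

```rust
use serde::Serialize;

/// Sketch of a per-diagnostic JSON payload extended with the row that a
/// suppression comment should target; the field names are illustrative.
#[derive(Serialize)]
struct JsonMessage {
    code: String,
    message: String,
    filename: String,
    /// The line a `# noqa` comment for this diagnostic belongs on, e.g. the
    /// closing line of a multi-line string rather than its first line.
    noqa_row: usize,
}
```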
Rule::noqa_code previously returned a single &'static str,
which was possible because we had one enum listing all
rule code prefixes. This commit series will however split up
the RuleCodePrefix enum into several enums ... so we'll end up
with two &'static str ... this commit wraps the return type
of Rule::noqa_code into a newtype so that we can easily change
it to return two &'static str in the 6th commit of this series.
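For illustration, a hypothetical shape for such a newtype:

```rust
use std::fmt;

/// Hypothetical shape of the newtype: today it wraps a single prefix
/// string; later it can hold two &'static str without changing callers.
struct NoqaCode(&'static str);

impl fmt::Display for NoqaCode {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.write_str(self.0)
    }
}
```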
After this commit series, several codes can be mapped to a single rule,
so this commit renames Rule::code to Rule::noqa_code,
which is the code that --add-noqa will add to ignore a rule.
The synopsis is as follows.
List all top-level config keys:
$ ruff config
allowed-confusables
builtins
cache-dir
... etc.
List all config keys in a specific section:
$ ruff config mccabe
max-complexity
Describe a specific config option:
$ ruff config mccabe.max-complexity
The maximum McCabe complexity to allow before triggering `C901` errors.
Default value: 10
Type: int
Example usage:
```toml
# Flag errors (`C901`) whenever the complexity level exceeds 5.
max-complexity = 5
```
The new `ruff rule` output format introduced in
551b810aeb doesn't print Markdown but
rather rich text with escape sequences for colors and links;
it's actually the "text" format that prints Markdown, so naming the new
format "markdown" is very confusing. This commit therefore renames it to
"pretty".
This isn't a breaking change since there hasn't been a release yet.