
ruff-fuzz

Fuzzers and associated utilities for automatic testing of Ruff.

Usage

To use the fuzzers provided in this directory, start by invoking:

./fuzz/init-fuzzer.sh

This will install cargo-fuzz and optionally download a dataset that improves the efficacy of the testing. This step is necessary for initialising the corpus directory, as all fuzzers share a common corpus. The dataset may take several hours to download and clean, so if you're just looking to try out the fuzzers, skip the dataset download. Be warned, however, that some features simply cannot be tested without it, as it is very unlikely that the fuzzer will generate valid Python code from "thin air".

Once you have initialised the fuzzers, you can then execute any fuzzer with:

cargo fuzz run -s none name_of_fuzzer -- -timeout=1

Users on Apple M1 devices must use a nightly compiler and omit the -s none portion of this command, as this architecture does not support fuzzing without a sanitizer. You can view the names of the available fuzzers with cargo fuzz list. For specific details about how each fuzzer works, please read this document in its entirety.

IMPORTANT: You should run ./reinit-fuzzer.sh after adding more file-based testcases. This will allow the testing of new features that you've added unit tests for.

Debugging a crash

Once you've found a crash, you'll need to debug it. The easiest first step in this process is to minimise the input, i.e. to find a smaller input that still triggers the crash. cargo-fuzz supports this out of the box with:

cargo fuzz tmin -s none name_of_fuzzer artifacts/name_of_fuzzer/crash-...

From here, you will need to analyse the input and potentially the behaviour of the program. Unfortunately, the debugging process beyond this point is less well-defined, so you will need to apply some expertise. Happy hunting!

A brief introduction to fuzzers

Fuzzing, or fuzz testing, is the process of providing generated data to a program under test. The most common variety of fuzzer is the mutational fuzzer: given a set of existing inputs (a "corpus"), it attempts to slightly change (or "mutate") these inputs into new inputs that cover parts of the code that have not yet been observed. Using this strategy, we can quite efficiently generate test cases which cover significant portions of the program, both with expected and unexpected data. This turns out to be remarkably effective at finding bugs.

The fuzzers here use cargo-fuzz, a utility which allows Rust to integrate with libFuzzer, the fuzzer library built into LLVM. Each source file present in fuzz_targets is a harness, which is, in effect, a unit test that can handle different inputs. When an input is provided to a harness, the harness processes this data, and libFuzzer observes the code coverage and any special values used in comparisons over the course of the run. Special values are preserved for future mutations, and inputs which cover new regions of code are added to the corpus.
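
For orientation, below is a minimal sketch of what such a harness looks like; exercise_ruff_api is a hypothetical stand-in for whichever Ruff entry point a given harness exercises.

```rust
#![no_main]
use libfuzzer_sys::fuzz_target;

// Hypothetical stand-in for whichever Ruff API the harness exercises
// (parser, linter, formatter, ...).
fn exercise_ruff_api(code: &str) {
    let _ = code.len();
}

// libFuzzer calls this closure repeatedly with mutated inputs, observing
// coverage to decide which inputs are worth keeping in the corpus.
fuzz_target!(|data: &[u8]| {
    // Most Ruff harnesses only care about inputs that are valid UTF-8 source text.
    if let Ok(code) = std::str::from_utf8(data) {
        exercise_ruff_api(code);
    }
});
```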

Each fuzzer harness in detail

Each fuzzer harness in fuzz_targets targets a different aspect of Ruff and tests it in a different way. While there is implementation-specific documentation in the source code itself, each harness is briefly described below.

red_knot_check_invalid_syntax

This fuzz harness checks that the type checker (Red Knot) does not panic when checking a source file with invalid syntax. Any corpus entry that is already valid Python code is rejected. Currently, this is limited to the syntax errors produced by Ruff's Python parser, which means that it does not cover all possible syntax errors (https://github.com/astral-sh/ruff/issues/11934). A possible workaround for now would be to bypass the parser and run the type checker on all inputs, regardless of syntax errors.
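
As a rough sketch of that gating logic (not the actual harness), with has_syntax_errors and check_types as hypothetical stand-ins for the parser's diagnostics and the Red Knot checker:

```rust
#![no_main]
use libfuzzer_sys::{fuzz_target, Corpus};

// Hypothetical stand-ins for the parser's diagnostics and Red Knot's checker.
fn has_syntax_errors(code: &str) -> bool {
    // Placeholder heuristic; the real harness asks Ruff's parser for its errors.
    code.contains("def (")
}
fn check_types(code: &str) {
    let _ = code;
}

fuzz_target!(|data: &[u8]| -> Corpus {
    let Ok(code) = std::str::from_utf8(data) else {
        return Corpus::Reject;
    };
    // Valid Python is out of scope for this harness, so it is rejected from
    // the corpus; only inputs with syntax errors exercise the interesting path.
    if !has_syntax_errors(code) {
        return Corpus::Reject;
    }
    // The only assertion is "does not panic" while checking broken code.
    check_types(code);
    Corpus::Keep
});
```

Returning Corpus::Reject tells libFuzzer not to add the input to the shared corpus.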

ruff_parse_simple

This fuzz harness does not perform any "smart" testing of Ruff; it merely checks that parsing and unparsing a particular input (what would normally be a source code file) does not crash. It also attempts to verify that the locations of identified tokens and errors do not fall in the middle of a UTF-8 code point, which could cause downstream panics. While this is unlikely to find any issues on its own, it executes very quickly and covers a large and diverse region of code, which may speed up the generation of inputs and therefore build a more valuable corpus quickly. It is particularly useful if you skip the dataset generation.
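
The UTF-8 property being checked can be illustrated with the standard library alone; the offset values below stand in for the token and error offsets that the harness receives from the parser.

```rust
/// Assert that a byte offset reported by the parser is a valid position in
/// `code`, i.e. it does not fall in the middle of a multi-byte UTF-8 code point.
fn assert_valid_offset(code: &str, offset: usize) {
    assert!(
        code.is_char_boundary(offset),
        "offset {offset} splits a UTF-8 code point or is out of bounds"
    );
}

fn main() {
    let code = "x = 'é'"; // 'é' occupies two bytes in UTF-8
    assert_valid_offset(code, 5); // start of 'é': a valid boundary
    // assert_valid_offset(code, 6); // would panic: inside 'é'
}
```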

ruff_parse_idempotency

This fuzz harness checks that Ruff's parser is idempotent, in order to detect cases where it incorrectly parses or unparses an input. It can be built in two modes: default (where it is only checked that the parser does not enter an unstable state) or full idempotency (where the parser is checked to ensure that it will always produce the same output after the first unparsing). Full idempotency mode can be used by enabling the full-idempotency feature when running the fuzzer, but this may be too strict a restriction for initial testing.
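
In rough terms, the full-idempotency check is a round-trip comparison along the following lines; parse_to_ast and unparse are hypothetical stand-ins for Ruff's parser and code generator.

```rust
// Hypothetical stand-ins for Ruff's parser and code generator.
fn parse_to_ast(code: &str) -> Option<String> {
    Some(code.trim().to_string())
}
fn unparse(ast: &str) -> String {
    ast.to_string()
}

/// Full idempotency: once an input has been unparsed, parsing and unparsing
/// it again must reproduce exactly the same text.
fn check_parse_idempotency(code: &str) {
    let Some(first_ast) = parse_to_ast(code) else {
        return; // unparsable inputs are out of scope for this check
    };
    let first = unparse(&first_ast);
    let second_ast = parse_to_ast(&first).expect("unparsed output must reparse");
    let second = unparse(&second_ast);
    assert_eq!(first, second, "parse/unparse round-trip is not stable");
}

fn main() {
    check_parse_idempotency("  x = 1\n");
}
```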

ruff_fix_validity

This fuzz harness checks that fixes applied by Ruff do not introduce new errors, using the existing ruff_linter::test::test_snippet testing utility. It is currently configured to use only the default settings, but may be extended in future versions to test non-default linter settings.
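
The property being checked can be sketched as follows; lint_rules and apply_all_fixes are hypothetical stand-ins for the linter and for fix application, not the actual test_snippet machinery.

```rust
use std::collections::HashSet;

// Hypothetical stand-ins for running the linter and applying all available fixes.
fn lint_rules(code: &str) -> HashSet<&'static str> {
    if code.contains("import os") {
        HashSet::from(["F401"]) // unused import
    } else {
        HashSet::new()
    }
}
fn apply_all_fixes(code: &str) -> String {
    code.replace("import os\n", "")
}

/// Applying fixes must not introduce diagnostics for rules that the original
/// input did not already trigger.
fn check_fix_validity(code: &str) {
    let before = lint_rules(code);
    let after = lint_rules(&apply_all_fixes(code));
    assert!(
        after.is_subset(&before),
        "fixes introduced new diagnostics: {:?}",
        after.difference(&before).collect::<Vec<_>>()
    );
}

fn main() {
    check_fix_validity("import os\nx = 1\n");
}
```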

ruff_formatter_idempotency

This fuzz harness checks that the formatter is idempotent, which detects possible unstable states of Ruff's formatter.
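
The idempotency property amounts to the following check; format_source is a hypothetical stand-in for the formatter entry point.

```rust
// Hypothetical stand-in for Ruff's formatter entry point.
fn format_source(code: &str) -> String {
    code.trim_end().to_string() + "\n"
}

/// Idempotency: formatting already-formatted code must not change it again.
fn check_formatter_idempotency(code: &str) {
    let once = format_source(code);
    let twice = format_source(&once);
    assert_eq!(once, twice, "formatter reached an unstable state");
}

fn main() {
    check_formatter_idempotency("x = 1   ");
}
```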

ruff_formatter_validity

This fuzz harness checks that Ruff's formatter does not introduce new linter errors or warnings. It lints the input once, counts the number of diagnostics of each error type, formats the input, lints it again, and ensures that the count for each error type has not increased across the format. This has the beneficial side effect of discovering cases where a formatting inconsistency causes the linter to miss a lint error it should have reported.
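
The per-rule counting can be sketched as follows; lint and format_source are hypothetical stand-ins for the linter and formatter.

```rust
use std::collections::HashMap;

// Hypothetical stand-ins: `lint` returns the rule code of each diagnostic,
// and `format_source` is the formatter entry point.
fn lint(code: &str) -> Vec<&'static str> {
    if code.contains("import os") { vec!["F401"] } else { vec![] }
}
fn format_source(code: &str) -> String {
    code.to_string()
}

/// Count how many diagnostics of each rule the linter reports.
fn count_by_rule(code: &str) -> HashMap<&'static str, usize> {
    let mut counts = HashMap::new();
    for rule in lint(code) {
        *counts.entry(rule).or_insert(0) += 1;
    }
    counts
}

/// Formatting must not increase the number of diagnostics for any rule.
fn check_formatter_validity(code: &str) {
    let before = count_by_rule(code);
    let after = count_by_rule(&format_source(code));
    for (rule, &count) in &after {
        assert!(
            count <= before.get(rule).copied().unwrap_or(0),
            "formatting introduced new {rule} diagnostics"
        );
    }
}

fn main() {
    check_formatter_validity("import os\n");
}
```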