Implement import sorting (#633)

This commit is contained in:
Charlie Marsh 2022-11-10 19:05:56 -05:00 committed by GitHub
parent 887b9aa840
commit 3cc74c0564
No known key found for this signature in database
GPG key ID: 4AEE18F83AFDEB23
47 changed files with 1521 additions and 27 deletions

View file

@ -27,9 +27,10 @@ An extremely fast Python linter, written in Rust.
Ruff aims to be orders of magnitude faster than alternative tools while integrating more
functionality behind a single, common interface. Ruff can be used to replace Flake8 (plus a variety
of plugins), [`isort`](https://pypi.org/project/isort/), [`pydocstyle`](https://pypi.org/project/pydocstyle/),
[`yesqa`](https://github.com/asottile/yesqa), and even a subset of [`pyupgrade`](https://pypi.org/project/pyupgrade/)
and [`autoflake`](https://pypi.org/project/autoflake/) all while executing tens or hundreds of times
faster than any individual tool.
(Coming from Flake8? Try [`flake8-to-ruff`](https://pypi.org/project/flake8-to-ruff/) to
automatically convert your existing configuration.)
@ -285,16 +286,16 @@ Ruff supports several workflows to aid in `noqa` management.
First, Ruff provides a special error code, `M001`, to enforce that your `noqa` directives are
"valid", in that the errors they _say_ they ignore are actually being triggered on that line (and
thus suppressed). You can run `ruff /path/to/file.py --extend-select M001` to flag unused `noqa`
directives.
Second, Ruff can _automatically remove_ unused `noqa` directives via its autofix functionality.
You can run `ruff /path/to/file.py --extend-select M001 --fix` to automatically remove unused
`noqa` directives.
Third, Ruff can _automatically add_ `noqa` directives to all failing lines. This is useful when
migrating a new codebase to Ruff. You can run `ruff /path/to/file.py --add-noqa` to automatically
add `noqa` directives to all failing lines, with the appropriate error codes.
## Supported Rules
@ -365,6 +366,14 @@ For more, see [pycodestyle](https://pypi.org/project/pycodestyle/2.9.1/) on PyPI
| W292 | NoNewLineAtEndOfFile | No newline at end of file | |
| W605 | InvalidEscapeSequence | Invalid escape sequence: '\c' | |
### isort
For more, see [isort](https://pypi.org/project/isort/5.10.1/) on PyPI.
| Code | Name | Message | Fix |
| ---- | ---- | ------- | --- |
| I001 | UnsortedImports | Import block is un-sorted or un-formatted | 🛠 |
### pydocstyle
For more, see [pydocstyle](https://pypi.org/project/pydocstyle/6.1.1/) on PyPI.
@ -681,7 +690,7 @@ Today, Ruff can be used to replace Flake8 when used with any of the following pl
- [`flake8-comprehensions`](https://pypi.org/project/flake8-comprehensions/)
- [`flake8-bugbear`](https://pypi.org/project/flake8-bugbear/) (19/32)
Ruff can also replace [`isort`](https://pypi.org/project/isort/), [`yesqa`](https://github.com/asottile/yesqa),
and a subset of the rules implemented in [`pyupgrade`](https://pypi.org/project/pyupgrade/) (14/34).
If you're looking to use Ruff, but rely on an unsupported Flake8 plugin, feel free to file an Issue.
@ -702,6 +711,31 @@ on Rust at all.
Ruff does not yet support third-party plugins, though a plugin system is within-scope for the
project. See [#283](https://github.com/charliermarsh/ruff/issues/283) for more.
### How does Ruff's import sorting compare to [`isort`](https://pypi.org/project/isort/)?
Ruff's import sorting is intended to be equivalent to `isort` when used with `profile = "black"` and
`combine_as_imports = true`. Like `isort`, Ruff's import sorting is compatible with Black.
Ruff is less configurable than `isort`, but supports the `known-first-party`, `known-third-party`,
`extra-standard-library`, and `src` settings, like so:
```toml
[tool.ruff]
select = [
# Pyflakes
"F",
# Pycodestyle
"E",
"W",
# isort
"I"
]
src = ["src", "tests"]
[tool.ruff.isort]
known-first-party = ["my_module1", "my_module2"]
```
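Under the hood, the `[tool.ruff.isort]` table deserializes into the new `isort::settings::Options` struct and is converted into runtime settings by `Settings::from_options`. A minimal sketch of that conversion (crate-internal module paths assumed for illustration; this snippet is not part of the diff):
```rust
use std::collections::BTreeSet;

// Crate-internal path assumed; `isort` is a private module in `lib.rs`.
use crate::isort::settings::{Options, Settings};

fn example() {
    // Mirrors the `[tool.ruff.isort]` table above.
    let options = Options {
        known_first_party: Some(vec!["my_module1".to_string(), "my_module2".to_string()]),
        known_third_party: None,
        extra_standard_library: None,
    };
    // `from_options` fills any unset field with an empty set.
    let settings = Settings::from_options(options);
    assert_eq!(
        settings.known_first_party,
        BTreeSet::from(["my_module1".to_string(), "my_module2".to_string()])
    );
    assert!(settings.known_third_party.is_empty());
}
```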
### Does Ruff support NumPy- or Google-style docstrings?
Yes! To enable a specific docstring convention, start by enabling all `pydocstyle` error codes, and

View file

@ -208,6 +208,7 @@ mod tests {
let actual = convert(&HashMap::from([]), None)?;
let expected = Pyproject::new(Options {
line_length: None,
src: None,
fix: None,
exclude: None,
extend_exclude: None,
@ -224,6 +225,7 @@ mod tests {
target_version: None,
flake8_annotations: None,
flake8_quotes: None,
isort: None,
pep8_naming: None,
});
assert_eq!(actual, expected);
@ -239,6 +241,7 @@ mod tests {
)?;
let expected = Pyproject::new(Options {
line_length: Some(100),
src: None,
fix: None,
exclude: None,
extend_exclude: None,
@ -255,6 +258,7 @@ mod tests {
target_version: None,
flake8_annotations: None,
flake8_quotes: None,
isort: None,
pep8_naming: None,
});
assert_eq!(actual, expected);
@ -270,6 +274,7 @@ mod tests {
)?;
let expected = Pyproject::new(Options {
line_length: Some(100),
src: None,
fix: None,
exclude: None,
extend_exclude: None,
@ -286,6 +291,7 @@ mod tests {
target_version: None,
flake8_annotations: None,
flake8_quotes: None,
isort: None,
pep8_naming: None,
});
assert_eq!(actual, expected);
@ -301,6 +307,7 @@ mod tests {
)?;
let expected = Pyproject::new(Options {
line_length: None,
src: None,
fix: None,
exclude: None,
extend_exclude: None,
@ -317,6 +324,7 @@ mod tests {
target_version: None,
flake8_annotations: None,
flake8_quotes: None,
isort: None,
pep8_naming: None,
});
assert_eq!(actual, expected);
@ -332,6 +340,7 @@ mod tests {
)?;
let expected = Pyproject::new(Options {
line_length: None,
src: None,
fix: None,
exclude: None,
extend_exclude: None,
@ -353,6 +362,7 @@ mod tests {
docstring_quotes: None,
avoid_escape: None,
}),
isort: None,
pep8_naming: None,
});
assert_eq!(actual, expected);
@ -371,6 +381,7 @@ mod tests {
)?;
let expected = Pyproject::new(Options {
line_length: None,
src: None,
fix: None,
exclude: None,
extend_exclude: None,
@ -422,6 +433,7 @@ mod tests {
target_version: None,
flake8_annotations: None,
flake8_quotes: None,
isort: None,
pep8_naming: None,
});
assert_eq!(actual, expected);
@ -437,6 +449,7 @@ mod tests {
)?;
let expected = Pyproject::new(Options {
line_length: None,
src: None,
fix: None,
exclude: None,
extend_exclude: None,
@ -459,6 +472,7 @@ mod tests {
docstring_quotes: None,
avoid_escape: None,
}),
isort: None,
pep8_naming: None,
});
assert_eq!(actual, expected);

View file

@ -32,3 +32,7 @@ build-backend = "maturin"
bindings = "bin" bindings = "bin"
sdist-include = ["Cargo.lock"] sdist-include = ["Cargo.lock"]
strip = true strip = true
[tool.isort]
profile = "black"
known_third_party = ["fastapi", "pydantic", "starlette"]

View file

@ -0,0 +1,5 @@
from collections import Awaitable
from collections import AsyncIterable
from collections import Collection
from collections import ChainMap
from collections import MutableSequence, MutableMapping

View file

@ -0,0 +1,4 @@
import os
import os
import os as os1
import os as os2

View file

@ -0,0 +1 @@
from collections import Collection

View file

@ -0,0 +1,2 @@
from collections import Collection
import os

View file

@ -0,0 +1,6 @@
x = 1; import sys
import os
if True:
    x = 1; import sys
    import os

View file

@ -0,0 +1,3 @@
# OK
import os
import sys

View file

@ -0,0 +1,6 @@
if True:
    import sys
    import os
else:
    import sys
    import os

View file

@ -0,0 +1,2 @@
import sys
import os

View file

@ -0,0 +1,5 @@
import sys
import leading_prefix
import numpy as np
import os
from leading_prefix import Class

View file

@ -0,0 +1,3 @@
import sys
import os
from __future__ import annotations

View file

@ -0,0 +1,4 @@
import pandas as pd
import sys
import numpy as np
import os

View file

@ -0,0 +1,6 @@
import sys
import os; x = 1
if True:
    import sys
    import os; x = 1

View file

@ -23,6 +23,7 @@ use crate::ast::{helpers, operations, visitor};
use crate::autofix::fixer;
use crate::checks::{Check, CheckCode, CheckKind};
use crate::docstrings::definition::{Definition, DefinitionKind, Documentable};
use crate::isort::track::ImportTracker;
use crate::python::builtins::{BUILTINS, MAGIC_GLOBALS};
use crate::python::future::ALL_FEATURE_NAMES;
use crate::python::typing;
@ -77,6 +78,7 @@ pub struct Checker<'a> {
deferred_functions: Vec<(&'a Stmt, Vec<usize>, Vec<usize>, VisibleScope)>,
deferred_lambdas: Vec<(&'a Expr, Vec<usize>, Vec<usize>)>,
deferred_assignments: Vec<usize>,
import_tracker: ImportTracker<'a>,
// Internal, derivative state.
visible_scope: VisibleScope,
in_f_string: Option<Range>,
@ -115,6 +117,8 @@ impl<'a> Checker<'a> {
deferred_functions: Default::default(),
deferred_lambdas: Default::default(),
deferred_assignments: Default::default(),
import_tracker: ImportTracker::new(),
// Internal, derivative state.
visible_scope: VisibleScope {
modifier: Modifier::Module,
visibility: module_visibility(path),
@ -181,6 +185,9 @@ where
'b: 'a,
{
fn visit_stmt(&mut self, stmt: &'b Stmt) {
// Call-through to any composed visitors.
self.import_tracker.visit_stmt(stmt);
self.push_parent(stmt);
// Track whether we've seen docstrings, non-imports, etc.
@ -1657,6 +1664,9 @@ where
}
fn visit_excepthandler(&mut self, excepthandler: &'b Excepthandler) {
// Call-through to any composed visitors.
self.import_tracker.visit_excepthandler(excepthandler);
match &excepthandler.node {
ExcepthandlerKind::ExceptHandler { type_, name, .. } => {
if self.settings.enabled.contains(&CheckCode::E722) && type_.is_none() {
@ -2591,5 +2601,8 @@ pub fn check_ast(
// Check docstrings.
checker.check_definitions();
// Check import blocks.
// checker.check_import_blocks();
checker.checks
}

41
src/check_imports.rs Normal file
View file

@ -0,0 +1,41 @@
//! Lint rules based on import analysis.
use rustpython_parser::ast::Suite;
use crate::ast::visitor::Visitor;
use crate::autofix::fixer;
use crate::checks::Check;
use crate::isort;
use crate::isort::track::ImportTracker;
use crate::settings::Settings;
use crate::source_code_locator::SourceCodeLocator;
fn check_import_blocks(
tracker: ImportTracker,
locator: &SourceCodeLocator,
settings: &Settings,
autofix: &fixer::Mode,
) -> Vec<Check> {
let mut checks = vec![];
for block in tracker.into_iter() {
if !block.is_empty() {
if let Some(check) = isort::plugins::check_imports(block, locator, settings, autofix) {
checks.push(check);
}
}
}
checks
}
pub fn check_imports(
python_ast: &Suite,
locator: &SourceCodeLocator,
settings: &Settings,
autofix: &fixer::Mode,
) -> Vec<Check> {
let mut tracker = ImportTracker::new();
for stmt in python_ast {
tracker.visit_stmt(stmt);
}
check_import_blocks(tracker, locator, settings, autofix)
}
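For orientation, here is a rough sketch of how this entry point gets driven, mirroring the wiring in `linter::check_path` and the isort tests. Module paths, visibility, and the `parse_program_tokens` import path are assumptions made for illustration; this is not part of the diff.
```rust
use rustpython_parser::lexer::LexResult;
// `parse_program_tokens` is called the same way in `linter::check_path`;
// its exact import path is assumed here.
use rustpython_parser::parser::parse_program_tokens;

use crate::autofix::fixer;
use crate::check_imports::check_imports;
use crate::checks::{Check, CheckCode};
use crate::linter::tokenize;
use crate::settings::Settings;
use crate::source_code_locator::SourceCodeLocator;

fn run_i001(contents: &str) -> Vec<Check> {
    // Tokenize and parse the module, as the linter does.
    let tokens: Vec<LexResult> = tokenize(contents);
    let python_ast = parse_program_tokens(tokens, "<demo>").expect("valid Python source");
    let locator = SourceCodeLocator::new(contents);
    // Enable only the isort rule, and generate (but do not apply) fixes.
    let settings = Settings::for_rule(CheckCode::I001);
    check_imports(&python_ast, &locator, &settings, &fixer::Mode::Generate)
}
```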

View file

@ -204,6 +204,8 @@ pub enum CheckCode {
N816,
N817,
N818,
// isort
I001,
// Ruff
RUF001,
RUF002,
@ -216,6 +218,7 @@ pub enum CheckCode {
pub enum CheckCategory {
Pyflakes,
Pycodestyle,
Isort,
Pydocstyle,
Pyupgrade,
PEP8Naming,
@ -234,6 +237,7 @@ impl CheckCategory {
match self {
CheckCategory::Pycodestyle => "pycodestyle",
CheckCategory::Pyflakes => "Pyflakes",
CheckCategory::Isort => "isort",
CheckCategory::Flake8Builtins => "flake8-builtins",
CheckCategory::Flake8Bugbear => "flake8-bugbear",
CheckCategory::Flake8Comprehensions => "flake8-comprehensions",
@ -252,6 +256,7 @@ impl CheckCategory {
match self {
CheckCategory::Pycodestyle => Some("https://pypi.org/project/pycodestyle/2.9.1/"),
CheckCategory::Pyflakes => Some("https://pypi.org/project/pyflakes/2.5.0/"),
CheckCategory::Isort => Some("https://pypi.org/project/isort/5.10.1/"),
CheckCategory::Flake8Builtins => {
Some("https://pypi.org/project/flake8-builtins/2.0.1/")
}
@ -281,6 +286,7 @@ pub enum LintSource {
FileSystem,
Lines,
Tokens,
Imports,
}
#[derive(Debug, PartialEq, Eq, Serialize, Deserialize)]
@ -470,6 +476,8 @@ pub enum CheckKind {
MixedCaseVariableInGlobalScope(String),
CamelcaseImportedAsAcronym(String, String),
ErrorSuffixOnExceptionName(String),
// isort
UnsortedImports,
// Ruff
AmbiguousUnicodeCharacterString(char, char),
AmbiguousUnicodeCharacterDocstring(char, char),
@ -495,6 +503,7 @@ impl CheckCode {
| CheckCode::RUF002
| CheckCode::RUF003 => &LintSource::Tokens,
CheckCode::E902 => &LintSource::FileSystem,
CheckCode::I001 => &LintSource::Imports,
_ => &LintSource::AST,
}
}
@ -717,6 +726,8 @@ impl CheckCode {
CheckKind::CamelcaseImportedAsAcronym("...".to_string(), "...".to_string())
}
CheckCode::N818 => CheckKind::ErrorSuffixOnExceptionName("...".to_string()),
// isort
CheckCode::I001 => CheckKind::UnsortedImports,
// Ruff
CheckCode::RUF001 => CheckKind::AmbiguousUnicodeCharacterString('𝐁', 'B'),
CheckCode::RUF002 => CheckKind::AmbiguousUnicodeCharacterDocstring('𝐁', 'B'),
@ -895,6 +906,7 @@ impl CheckCode {
CheckCode::N816 => CheckCategory::PEP8Naming,
CheckCode::N817 => CheckCategory::PEP8Naming,
CheckCode::N818 => CheckCategory::PEP8Naming,
CheckCode::I001 => CheckCategory::Isort,
CheckCode::RUF001 => CheckCategory::Ruff,
CheckCode::RUF002 => CheckCategory::Ruff,
CheckCode::RUF003 => CheckCategory::Ruff,
@ -1085,6 +1097,8 @@ impl CheckKind {
CheckKind::MixedCaseVariableInGlobalScope(..) => &CheckCode::N816,
CheckKind::CamelcaseImportedAsAcronym(..) => &CheckCode::N817,
CheckKind::ErrorSuffixOnExceptionName(..) => &CheckCode::N818,
// isort
CheckKind::UnsortedImports => &CheckCode::I001,
// Ruff
CheckKind::AmbiguousUnicodeCharacterString(..) => &CheckCode::RUF001,
CheckKind::AmbiguousUnicodeCharacterDocstring(..) => &CheckCode::RUF002,
@ -1644,6 +1658,8 @@ impl CheckKind {
CheckKind::PEP3120UnnecessaryCodingComment => {
"utf-8 encoding declaration is unnecessary".to_string()
}
// isort
CheckKind::UnsortedImports => "Import block is un-sorted or un-formatted".to_string(),
// Ruff
CheckKind::AmbiguousUnicodeCharacterString(confusable, representant) => {
format!(
@ -1749,12 +1765,13 @@ impl CheckKind {
| CheckKind::UnnecessaryGeneratorSet
| CheckKind::UnnecessaryLRUCacheParams
| CheckKind::UnnecessaryListCall
| CheckKind::UnnecessaryListComprehensionDict
| CheckKind::UnnecessaryListComprehensionSet
| CheckKind::UnnecessaryLiteralDict(_)
| CheckKind::UnnecessaryLiteralSet(_)
| CheckKind::UnnecessaryLiteralWithinListCall(_)
| CheckKind::UnnecessaryLiteralWithinTupleCall(_)
| CheckKind::UnsortedImports
| CheckKind::UnusedImport(_, false)
| CheckKind::UnusedLoopControlVariable(_)
| CheckKind::UnusedNOQA(_)

View file

@ -203,6 +203,10 @@ pub enum CheckCodePrefix {
F9,
F90,
F901,
I,
I0,
I00,
I001,
M,
M0,
M00,
@ -852,6 +856,10 @@ impl CheckCodePrefix {
CheckCodePrefix::F9 => vec![CheckCode::F901],
CheckCodePrefix::F90 => vec![CheckCode::F901],
CheckCodePrefix::F901 => vec![CheckCode::F901],
CheckCodePrefix::I => vec![CheckCode::I001],
CheckCodePrefix::I0 => vec![CheckCode::I001],
CheckCodePrefix::I00 => vec![CheckCode::I001],
CheckCodePrefix::I001 => vec![CheckCode::I001],
CheckCodePrefix::M => vec![CheckCode::M001],
CheckCodePrefix::M0 => vec![CheckCode::M001],
CheckCodePrefix::M00 => vec![CheckCode::M001],
@ -1216,6 +1224,10 @@ impl CheckCodePrefix {
CheckCodePrefix::F9 => PrefixSpecificity::Hundreds,
CheckCodePrefix::F90 => PrefixSpecificity::Tens,
CheckCodePrefix::F901 => PrefixSpecificity::Explicit,
CheckCodePrefix::I => PrefixSpecificity::Category,
CheckCodePrefix::I0 => PrefixSpecificity::Hundreds,
CheckCodePrefix::I00 => PrefixSpecificity::Tens,
CheckCodePrefix::I001 => PrefixSpecificity::Explicit,
CheckCodePrefix::M => PrefixSpecificity::Category,
CheckCodePrefix::M0 => PrefixSpecificity::Hundreds,
CheckCodePrefix::M00 => PrefixSpecificity::Tens,

View file

@ -1,4 +1,4 @@
use rustpython_ast::{Located, Location};
use crate::ast::types::Range;
use crate::check_ast::Checker;
@ -24,9 +24,9 @@ pub fn leading_space(line: &str) -> String {
.collect()
}
/// Extract the leading indentation from a line.
pub fn indentation<'a, T>(checker: &'a Checker, located: &Located<T>) -> String {
let range = Range::from_located(located);
checker
.locator
.slice_source_code_range(&Range {

68
src/isort/categorize.rs Normal file
View file

@ -0,0 +1,68 @@
use std::collections::{BTreeMap, BTreeSet};
use std::fs;
use std::path::PathBuf;
use once_cell::sync::Lazy;
use crate::python::sys::KNOWN_STANDARD_LIBRARY;
#[derive(Debug, PartialOrd, Ord, PartialEq, Eq, Clone)]
pub enum ImportType {
Future,
StandardLibrary,
ThirdParty,
FirstParty,
}
pub fn categorize(
module_base: &str,
src: &[PathBuf],
known_first_party: &BTreeSet<String>,
known_third_party: &BTreeSet<String>,
extra_standard_library: &BTreeSet<String>,
) -> ImportType {
if known_first_party.contains(module_base) {
ImportType::FirstParty
} else if known_third_party.contains(module_base) {
ImportType::ThirdParty
} else if extra_standard_library.contains(module_base) {
ImportType::StandardLibrary
} else if let Some(import_type) = STATIC_CLASSIFICATIONS.get(module_base) {
import_type.clone()
} else if KNOWN_STANDARD_LIBRARY.contains(module_base) {
ImportType::StandardLibrary
} else {
if find_local(src, module_base) {
ImportType::FirstParty
} else {
ImportType::ThirdParty
}
}
}
static STATIC_CLASSIFICATIONS: Lazy<BTreeMap<&'static str, ImportType>> = Lazy::new(|| {
BTreeMap::from([
("__future__", ImportType::Future),
("__main__", ImportType::FirstParty),
// Force `disutils` to be considered third-party.
("disutils", ImportType::ThirdParty),
// Relative imports (e.g., `from . import module`).
("", ImportType::FirstParty),
])
});
fn find_local(paths: &[PathBuf], base: &str) -> bool {
for path in paths {
if let Ok(metadata) = fs::metadata(path.join(base)) {
if metadata.is_dir() {
return true;
}
}
if let Ok(metadata) = fs::metadata(path.join(format!("{base}.py"))) {
if metadata.is_file() {
return true;
}
}
}
false
}
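An illustrative sketch (not part of the diff) of how `categorize` resolves a module name, written as if appended to this module so `categorize` and `ImportType` are in scope:
```rust
use std::collections::BTreeSet;
use std::path::PathBuf;

fn demo() {
    let src = vec![PathBuf::from("src")];
    let first_party = BTreeSet::from(["my_module".to_string()]);
    let third_party = BTreeSet::from(["fastapi".to_string()]);
    let extra_stdlib: BTreeSet<String> = BTreeSet::new();

    // Explicit user settings win over every other table.
    assert_eq!(
        categorize("my_module", &src, &first_party, &third_party, &extra_stdlib),
        ImportType::FirstParty
    );
    // Otherwise, fall back to the static and standard-library tables...
    assert_eq!(
        categorize("os", &src, &first_party, &third_party, &extra_stdlib),
        ImportType::StandardLibrary
    );
    // ...and anything unknown that isn't found under `src` is treated as third-party.
    assert_eq!(
        categorize("requests", &src, &first_party, &third_party, &extra_stdlib),
        ImportType::ThirdParty
    );
}
```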

256
src/isort/mod.rs Normal file
View file

@ -0,0 +1,256 @@
use std::collections::{BTreeMap, BTreeSet};
use std::path::PathBuf;
use ropey::RopeBuilder;
use rustpython_ast::{Stmt, StmtKind};
use crate::isort::categorize::{categorize, ImportType};
use crate::isort::types::{AliasData, ImportBlock, ImportFromData, Importable};
mod categorize;
pub mod plugins;
pub mod settings;
pub mod track;
mod types;
// Hard-code four-space indentation for the imports themselves, to match Black.
const INDENT: &str = " ";
fn normalize_imports<'a>(imports: &'a [&'a Stmt]) -> ImportBlock<'a> {
let mut block: ImportBlock = Default::default();
for import in imports {
match &import.node {
StmtKind::Import { names } => {
for name in names {
block.import.insert(AliasData {
name: &name.node.name,
asname: &name.node.asname,
});
}
}
StmtKind::ImportFrom {
module,
names,
level,
} => {
let targets = block
.import_from
.entry(ImportFromData { module, level })
.or_default();
for name in names {
targets.insert(AliasData {
name: &name.node.name,
asname: &name.node.asname,
});
}
}
_ => unreachable!("Expected StmtKind::Import | StmtKind::ImportFrom"),
}
}
block
}
fn categorize_imports<'a>(
block: ImportBlock<'a>,
src: &[PathBuf],
known_first_party: &BTreeSet<String>,
known_third_party: &BTreeSet<String>,
extra_standard_library: &BTreeSet<String>,
) -> BTreeMap<ImportType, ImportBlock<'a>> {
let mut block_by_type: BTreeMap<ImportType, ImportBlock> = Default::default();
// Categorize `StmtKind::Import`.
for alias in block.import {
let import_type = categorize(
&alias.module_base(),
src,
known_first_party,
known_third_party,
extra_standard_library,
);
block_by_type
.entry(import_type)
.or_default()
.import
.insert(alias);
}
// Categorize `StmtKind::ImportFrom`.
for (import_from, aliases) in block.import_from {
let classification = categorize(
&import_from.module_base(),
src,
known_first_party,
known_third_party,
extra_standard_library,
);
block_by_type
.entry(classification)
.or_default()
.import_from
.insert(import_from, aliases);
}
block_by_type
}
pub fn sort_imports(
block: Vec<&Stmt>,
line_length: &usize,
src: &[PathBuf],
known_first_party: &BTreeSet<String>,
known_third_party: &BTreeSet<String>,
extra_standard_library: &BTreeSet<String>,
) -> String {
// Normalize imports (i.e., deduplicate, aggregate `from` imports).
let block = normalize_imports(&block);
// Categorize by type (e.g., first-party vs. third-party).
let block_by_type = categorize_imports(
block,
src,
known_first_party,
known_third_party,
extra_standard_library,
);
// Generate replacement source code.
let mut output = RopeBuilder::new();
let mut first_block = true;
for import_type in [
ImportType::Future,
ImportType::StandardLibrary,
ImportType::ThirdParty,
ImportType::FirstParty,
] {
if let Some(import_block) = block_by_type.get(&import_type) {
// Add a blank line between every section.
if !first_block {
output.append("\n");
} else {
first_block = false;
}
// Format `StmtKind::Import` statements.
for AliasData { name, asname } in import_block.import.iter() {
if let Some(asname) = asname {
output.append(&format!("import {} as {}\n", name, asname));
} else {
output.append(&format!("import {}\n", name));
}
}
// Format `StmtKind::ImportFrom` statements.
for (import_from, aliases) in import_block.import_from.iter() {
let prelude: String = format!("from {} import ", import_from.module_name());
let members: Vec<String> = aliases
.iter()
.map(|AliasData { name, asname }| {
if let Some(asname) = asname {
format!("{} as {}", name, asname)
} else {
name.to_string()
}
})
.collect();
// Can we fit the import on a single line?
let expected_len: usize =
// `from base import `
prelude.len()
// `member( as alias)?`
+ members.iter().map(|part| part.len()).sum::<usize>()
// `, `
+ 2 * (members.len() - 1);
if expected_len <= *line_length {
// `from base import `
output.append(&prelude);
// `member( as alias)?(, )?`
for (index, part) in members.into_iter().enumerate() {
if index > 0 {
output.append(", ");
}
output.append(&part);
}
// `\n`
output.append("\n");
} else {
// `from base import (\n`
output.append(&prelude);
output.append("(");
output.append("\n");
// ` member( as alias)?,\n`
for part in members {
output.append(INDENT);
output.append(&part);
output.append(",");
output.append("\n");
}
// `)\n`
output.append(")");
output.append("\n");
}
}
}
}
output.finish().to_string()
}
#[cfg(test)]
mod tests {
use std::path::Path;
use anyhow::Result;
use rustpython_parser::lexer::LexResult;
use test_case::test_case;
use crate::autofix::fixer;
use crate::checks::{Check, CheckCode};
use crate::linter::tokenize;
use crate::{fs, linter, noqa, Settings, SourceCodeLocator};
fn check_path(path: &Path, settings: &Settings, autofix: &fixer::Mode) -> Result<Vec<Check>> {
let contents = fs::read_file(path)?;
let tokens: Vec<LexResult> = tokenize(&contents);
let locator = SourceCodeLocator::new(&contents);
let noqa_line_for = noqa::extract_noqa_line_for(&tokens);
linter::check_path(
path,
&contents,
tokens,
&locator,
&noqa_line_for,
settings,
autofix,
)
}
#[test_case(Path::new("reorder_within_section.py"))]
#[test_case(Path::new("no_reorder_within_section.py"))]
#[test_case(Path::new("separate_future_imports.py"))]
#[test_case(Path::new("separate_third_party_imports.py"))]
#[test_case(Path::new("separate_first_party_imports.py"))]
#[test_case(Path::new("deduplicate_imports.py"))]
#[test_case(Path::new("combine_import_froms.py"))]
#[test_case(Path::new("preserve_indentation.py"))]
#[test_case(Path::new("fit_line_length.py"))]
#[test_case(Path::new("import_from_after_import.py"))]
#[test_case(Path::new("leading_prefix.py"))]
#[test_case(Path::new("trailing_suffix.py"))]
fn isort(path: &Path) -> Result<()> {
let snapshot = format!("{}", path.to_string_lossy());
let mut checks = check_path(
Path::new("./resources/test/fixtures/isort")
.join(path)
.as_path(),
&Settings {
src: vec![Path::new("resources/test/fixtures/isort").to_path_buf()],
..Settings::for_rule(CheckCode::I001)
},
&fixer::Mode::Generate,
)?;
checks.sort_by_key(|check| check.location);
insta::assert_yaml_snapshot!(snapshot, checks);
Ok(())
}
}

116
src/isort/plugins.rs Normal file
View file

@ -0,0 +1,116 @@
use rustpython_ast::{Location, Stmt};
use textwrap::{dedent, indent};
use crate::ast::types::Range;
use crate::autofix::{fixer, Fix};
use crate::checks::CheckKind;
use crate::docstrings::helpers::leading_space;
use crate::isort::sort_imports;
use crate::{Check, Settings, SourceCodeLocator};
fn extract_range(body: &[&Stmt]) -> Range {
let location = body.first().unwrap().location;
let end_location = body.last().unwrap().end_location.unwrap();
Range {
location,
end_location,
}
}
fn extract_indentation(body: &[&Stmt], locator: &SourceCodeLocator) -> String {
let location = body.first().unwrap().location;
let range = Range {
location: Location::new(location.row(), 0),
end_location: location,
};
let existing = locator.slice_source_code_range(&range);
leading_space(&existing)
}
fn match_leading_content(body: &[&Stmt], locator: &SourceCodeLocator) -> bool {
let location = body.first().unwrap().location;
let range = Range {
location: Location::new(location.row(), 0),
end_location: location,
};
let prefix = locator.slice_source_code_range(&range);
prefix.chars().any(|char| !char.is_whitespace())
}
fn match_trailing_content(body: &[&Stmt], locator: &SourceCodeLocator) -> bool {
let end_location = body.last().unwrap().end_location.unwrap();
let range = Range {
location: end_location,
end_location: Location::new(end_location.row() + 1, 0),
};
let suffix = locator.slice_source_code_range(&range);
suffix.chars().any(|char| !char.is_whitespace())
}
/// I001
pub fn check_imports(
body: Vec<&Stmt>,
locator: &SourceCodeLocator,
settings: &Settings,
autofix: &fixer::Mode,
) -> Option<Check> {
let range = extract_range(&body);
let indentation = extract_indentation(&body, locator);
// Special-cases: there's leading or trailing content in the import block.
let has_leading_content = match_leading_content(&body, locator);
let has_trailing_content = match_trailing_content(&body, locator);
// Generate the sorted import block.
let expected = sort_imports(
body,
&settings.line_length,
&settings.src,
&settings.isort.known_first_party,
&settings.isort.known_third_party,
&settings.isort.extra_standard_library,
);
if has_leading_content || has_trailing_content {
let mut check = Check::new(CheckKind::UnsortedImports, range);
if autofix.patch() {
let mut content = String::new();
if has_leading_content {
content.push('\n');
}
content.push_str(&indent(&expected, &indentation));
check.amend(Fix::replacement(
content,
// Preserve leading prefix (but put the imports on a new line).
if has_leading_content {
range.location
} else {
Location::new(range.location.row(), 0)
},
// TODO(charlie): Preserve trailing suffixes. Right now, we strip them.
Location::new(range.end_location.row() + 1, 0),
));
}
Some(check)
} else {
// Expand the span to the entire range, including leading and trailing space.
let range = Range {
location: Location::new(range.location.row(), 0),
end_location: Location::new(range.end_location.row() + 1, 0),
};
let actual = dedent(&locator.slice_source_code_range(&range));
if actual != expected {
let mut check = Check::new(CheckKind::UnsortedImports, range);
if autofix.patch() {
check.amend(Fix::replacement(
indent(&expected, &indentation),
range.location,
range.end_location,
));
}
Some(check)
} else {
None
}
}
}

32
src/isort/settings.rs Normal file
View file

@ -0,0 +1,32 @@
//! Settings for the `isort` plugin.
use std::collections::BTreeSet;
use serde::{Deserialize, Serialize};
#[derive(Debug, PartialEq, Eq, Serialize, Deserialize, Default)]
#[serde(deny_unknown_fields, rename_all = "kebab-case")]
pub struct Options {
pub known_first_party: Option<Vec<String>>,
pub known_third_party: Option<Vec<String>>,
pub extra_standard_library: Option<Vec<String>>,
}
#[derive(Debug, Hash, Default)]
pub struct Settings {
pub known_first_party: BTreeSet<String>,
pub known_third_party: BTreeSet<String>,
pub extra_standard_library: BTreeSet<String>,
}
impl Settings {
pub fn from_options(options: Options) -> Self {
Self {
known_first_party: BTreeSet::from_iter(options.known_first_party.unwrap_or_default()),
known_third_party: BTreeSet::from_iter(options.known_third_party.unwrap_or_default()),
extra_standard_library: BTreeSet::from_iter(
options.extra_standard_library.unwrap_or_default(),
),
}
}
}

View file

@ -0,0 +1,22 @@
---
source: src/isort/mod.rs
expression: checks
---
- kind: UnsortedImports
location:
row: 1
column: 0
end_location:
row: 6
column: 0
fix:
patch:
content: "from collections import (\n AsyncIterable,\n Awaitable,\n ChainMap,\n Collection,\n MutableMapping,\n MutableSequence,\n)\n"
location:
row: 1
column: 0
end_location:
row: 6
column: 0
applied: false

View file

@ -0,0 +1,22 @@
---
source: src/isort/mod.rs
expression: checks
---
- kind: UnsortedImports
location:
row: 1
column: 0
end_location:
row: 5
column: 0
fix:
patch:
content: "import os\nimport os as os1\nimport os as os2\n"
location:
row: 1
column: 0
end_location:
row: 5
column: 0
applied: false

View file

@ -0,0 +1,6 @@
---
source: src/isort/mod.rs
expression: checks
---
[]

View file

@ -0,0 +1,22 @@
---
source: src/isort/mod.rs
expression: checks
---
- kind: UnsortedImports
location:
row: 1
column: 0
end_location:
row: 3
column: 0
fix:
patch:
content: "import os\nfrom collections import Collection\n"
location:
row: 1
column: 0
end_location:
row: 3
column: 0
applied: false

View file

@ -0,0 +1,39 @@
---
source: src/isort/mod.rs
expression: checks
---
- kind: UnsortedImports
location:
row: 1
column: 7
end_location:
row: 2
column: 9
fix:
patch:
content: "\nimport os\nimport sys\n"
location:
row: 1
column: 7
end_location:
row: 3
column: 0
applied: false
- kind: UnsortedImports
location:
row: 5
column: 11
end_location:
row: 6
column: 13
fix:
patch:
content: "\n import os\n import sys\n"
location:
row: 5
column: 11
end_location:
row: 7
column: 0
applied: false

View file

@ -0,0 +1,6 @@
---
source: src/isort/mod.rs
expression: checks
---
[]

View file

@ -0,0 +1,39 @@
---
source: src/isort/mod.rs
expression: checks
---
- kind: UnsortedImports
location:
row: 2
column: 0
end_location:
row: 4
column: 0
fix:
patch:
content: " import os\n import sys\n"
location:
row: 2
column: 0
end_location:
row: 4
column: 0
applied: false
- kind: UnsortedImports
location:
row: 5
column: 0
end_location:
row: 7
column: 0
fix:
patch:
content: " import os\n import sys\n"
location:
row: 5
column: 0
end_location:
row: 7
column: 0
applied: false

View file

@ -0,0 +1,22 @@
---
source: src/isort/mod.rs
expression: checks
---
- kind: UnsortedImports
location:
row: 1
column: 0
end_location:
row: 3
column: 0
fix:
patch:
content: "import os\nimport sys\n"
location:
row: 1
column: 0
end_location:
row: 3
column: 0
applied: false

View file

@ -0,0 +1,22 @@
---
source: src/isort/mod.rs
expression: checks
---
- kind: UnsortedImports
location:
row: 1
column: 0
end_location:
row: 6
column: 0
fix:
patch:
content: "import os\nimport sys\n\nimport numpy as np\n\nimport leading_prefix\nfrom leading_prefix import Class\n"
location:
row: 1
column: 0
end_location:
row: 6
column: 0
applied: false

View file

@ -0,0 +1,22 @@
---
source: src/isort/mod.rs
expression: checks
---
- kind: UnsortedImports
location:
row: 1
column: 0
end_location:
row: 4
column: 0
fix:
patch:
content: "from __future__ import annotations\n\nimport os\nimport sys\n"
location:
row: 1
column: 0
end_location:
row: 4
column: 0
applied: false

View file

@ -0,0 +1,22 @@
---
source: src/isort/mod.rs
expression: checks
---
- kind: UnsortedImports
location:
row: 1
column: 0
end_location:
row: 5
column: 0
fix:
patch:
content: "import os\nimport sys\n\nimport numpy as np\nimport pandas as pd\n"
location:
row: 1
column: 0
end_location:
row: 5
column: 0
applied: false

View file

@ -0,0 +1,39 @@
---
source: src/isort/mod.rs
expression: checks
---
- kind: UnsortedImports
location:
row: 1
column: 0
end_location:
row: 2
column: 9
fix:
patch:
content: "import os\nimport sys\n"
location:
row: 1
column: 0
end_location:
row: 3
column: 0
applied: false
- kind: UnsortedImports
location:
row: 5
column: 4
end_location:
row: 6
column: 13
fix:
patch:
content: " import os\n import sys\n"
location:
row: 5
column: 0
end_location:
row: 7
column: 0
applied: false

206
src/isort/track.rs Normal file
View file

@ -0,0 +1,206 @@
use rustpython_ast::{
Alias, Arg, Arguments, Boolop, Cmpop, Comprehension, Constant, Excepthandler,
ExcepthandlerKind, Expr, ExprContext, Keyword, MatchCase, Operator, Pattern, Stmt, StmtKind,
Unaryop, Withitem,
};
use crate::ast::visitor::Visitor;
#[derive(Debug)]
pub struct ImportTracker<'a> {
pub blocks: Vec<Vec<&'a Stmt>>,
}
impl<'a> ImportTracker<'a> {
pub fn new() -> Self {
Self {
blocks: vec![vec![]],
}
}
fn add_import(&mut self, stmt: &'a Stmt) {
let index = self.blocks.len() - 1;
self.blocks[index].push(stmt);
}
fn finalize(&mut self) {
let index = self.blocks.len() - 1;
if !self.blocks[index].is_empty() {
self.blocks.push(vec![]);
}
}
pub fn into_iter(self) -> impl IntoIterator<Item = Vec<&'a Stmt>> {
self.blocks.into_iter()
}
}
impl<'a, 'b> Visitor<'b> for ImportTracker<'a>
where
'b: 'a,
{
fn visit_stmt(&mut self, stmt: &'b Stmt) {
// Track imports.
if matches!(
stmt.node,
StmtKind::Import { .. } | StmtKind::ImportFrom { .. }
) {
self.add_import(stmt);
} else {
self.finalize();
}
// Track scope.
match &stmt.node {
StmtKind::FunctionDef { body, .. } => {
for stmt in body {
self.visit_stmt(stmt);
}
self.finalize();
}
StmtKind::AsyncFunctionDef { body, .. } => {
for stmt in body {
self.visit_stmt(stmt);
}
self.finalize();
}
StmtKind::ClassDef { body, .. } => {
for stmt in body {
self.visit_stmt(stmt);
}
self.finalize();
}
StmtKind::For { body, orelse, .. } => {
for stmt in body {
self.visit_stmt(stmt);
}
self.finalize();
for stmt in orelse {
self.visit_stmt(stmt);
}
self.finalize();
}
StmtKind::AsyncFor { body, orelse, .. } => {
for stmt in body {
self.visit_stmt(stmt);
}
self.finalize();
for stmt in orelse {
self.visit_stmt(stmt);
}
self.finalize();
}
StmtKind::While { body, orelse, .. } => {
for stmt in body {
self.visit_stmt(stmt);
}
self.finalize();
for stmt in orelse {
self.visit_stmt(stmt);
}
self.finalize();
}
StmtKind::If { body, orelse, .. } => {
for stmt in body {
self.visit_stmt(stmt);
}
self.finalize();
for stmt in orelse {
self.visit_stmt(stmt);
}
self.finalize();
}
StmtKind::With { body, .. } => {
for stmt in body {
self.visit_stmt(stmt);
}
self.finalize();
}
StmtKind::AsyncWith { body, .. } => {
for stmt in body {
self.visit_stmt(stmt);
}
self.finalize();
}
StmtKind::Match { cases, .. } => {
for match_case in cases {
self.visit_match_case(match_case);
}
}
StmtKind::Try {
body,
handlers,
orelse,
finalbody,
} => {
for excepthandler in handlers {
self.visit_excepthandler(excepthandler)
}
for stmt in body {
self.visit_stmt(stmt);
}
self.finalize();
for stmt in orelse {
self.visit_stmt(stmt);
}
self.finalize();
for stmt in finalbody {
self.visit_stmt(stmt);
}
self.finalize();
}
_ => {}
}
}
fn visit_annotation(&mut self, _: &'b Expr) {}
fn visit_expr(&mut self, _: &'b Expr) {}
fn visit_constant(&mut self, _: &'b Constant) {}
fn visit_expr_context(&mut self, _: &'b ExprContext) {}
fn visit_boolop(&mut self, _: &'b Boolop) {}
fn visit_operator(&mut self, _: &'b Operator) {}
fn visit_unaryop(&mut self, _: &'b Unaryop) {}
fn visit_cmpop(&mut self, _: &'b Cmpop) {}
fn visit_comprehension(&mut self, _: &'b Comprehension) {}
fn visit_excepthandler(&mut self, excepthandler: &'b Excepthandler) {
let ExcepthandlerKind::ExceptHandler { body, .. } = &excepthandler.node;
for stmt in body {
self.visit_stmt(stmt);
}
self.finalize();
}
fn visit_arguments(&mut self, _: &'b Arguments) {}
fn visit_arg(&mut self, _: &'b Arg) {}
fn visit_keyword(&mut self, _: &'b Keyword) {}
fn visit_alias(&mut self, _: &'b Alias) {}
fn visit_withitem(&mut self, _: &'b Withitem) {}
fn visit_match_case(&mut self, match_case: &'b MatchCase) {
for stmt in &match_case.body {
self.visit_stmt(stmt);
}
self.finalize();
}
fn visit_pattern(&mut self, _: &'b Pattern) {}
}
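A small sketch (not part of the diff) of the block semantics: any non-import statement, or a nested scope boundary, finalizes the current block, so a module with imports separated by other code yields multiple blocks. Crate-internal paths and the `Visitor` trait are assumed to be in scope, as in `check_imports.rs`:
```rust
use rustpython_ast::Stmt;

// The `Visitor` trait must be in scope to call `visit_stmt`, as in check_imports.rs.
use crate::ast::visitor::Visitor;
use crate::isort::track::ImportTracker;

fn count_blocks(python_ast: &[Stmt]) -> usize {
    // For a module like:
    //     import os
    //     x = 1
    //     import sys
    // this yields two non-empty blocks: [import os] and [import sys].
    let mut tracker = ImportTracker::new();
    for stmt in python_ast {
        tracker.visit_stmt(stmt);
    }
    let mut count = 0;
    for block in tracker.into_iter() {
        if !block.is_empty() {
            count += 1;
        }
    }
    count
}
```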

55
src/isort/types.rs Normal file
View file

@ -0,0 +1,55 @@
use std::collections::{BTreeMap, BTreeSet};
#[derive(Debug, Hash, Ord, PartialOrd, Eq, PartialEq)]
pub struct ImportFromData<'a> {
pub module: &'a Option<String>,
pub level: &'a Option<usize>,
}
#[derive(Debug, Hash, Ord, PartialOrd, Eq, PartialEq)]
pub struct AliasData<'a> {
pub name: &'a str,
pub asname: &'a Option<String>,
}
pub trait Importable {
fn module_name(&self) -> String;
fn module_base(&self) -> String;
}
impl Importable for AliasData<'_> {
fn module_name(&self) -> String {
self.name.to_string()
}
fn module_base(&self) -> String {
self.module_name().split('.').next().unwrap().to_string()
}
}
impl Importable for ImportFromData<'_> {
fn module_name(&self) -> String {
let mut module_name = String::new();
if let Some(level) = self.level {
if level > &0 {
module_name.push_str(&".".repeat(*level));
}
}
if let Some(module) = self.module {
module_name.push_str(module);
}
module_name
}
fn module_base(&self) -> String {
self.module_name().split('.').next().unwrap().to_string()
}
}
#[derive(Debug, Default)]
pub struct ImportBlock<'a> {
// Map from (module, level) to `AliasData`.
pub import_from: BTreeMap<ImportFromData<'a>, BTreeSet<AliasData<'a>>>,
// Set of (name, asname).
pub import: BTreeSet<AliasData<'a>>,
}
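A small sketch (not part of the diff) of how `Importable` resolves names for a relative import such as `from ..pkg import x`, written as if appended to this module:
```rust
fn demo() {
    let module = Some("pkg".to_string());
    let level: Option<usize> = Some(2);
    let data = ImportFromData {
        module: &module,
        level: &level,
    };
    // Two leading dots for `level = 2`, then the module name.
    assert_eq!(data.module_name(), "..pkg");
    // `module_base` is everything before the first dot; for relative imports
    // that is the empty string, which the categorizer maps to first-party.
    assert_eq!(data.module_base(), "");
}
```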

View file

@ -17,6 +17,7 @@ mod ast;
pub mod autofix;
pub mod cache;
pub mod check_ast;
mod check_imports;
mod check_lines;
mod check_tokens;
pub mod checks;
@ -32,6 +33,7 @@ mod flake8_comprehensions;
mod flake8_print;
pub mod flake8_quotes;
pub mod fs;
mod isort;
mod lex;
pub mod linter;
pub mod logging;

View file

@ -16,6 +16,7 @@ use crate::ast::types::Range;
use crate::autofix::fixer;
use crate::autofix::fixer::fix_file;
use crate::check_ast::check_ast;
use crate::check_imports::check_imports;
use crate::check_lines::check_lines;
use crate::check_tokens::check_tokens;
use crate::checks::{Check, CheckCode, CheckKind, LintSource};
@ -63,23 +64,32 @@ pub(crate) fn check_path(
let mut checks: Vec<Check> = vec![];
// Run the token-based checks.
let use_tokens = settings
.enabled
.iter()
.any(|check_code| matches!(check_code.lint_source(), LintSource::Tokens));
if use_tokens {
check_tokens(&mut checks, locator, &tokens, settings, autofix);
}
// Run the AST-based checks.
let use_ast = settings
.enabled
.iter()
.any(|check_code| matches!(check_code.lint_source(), LintSource::AST));
let use_imports = settings
.enabled
.iter()
.any(|check_code| matches!(check_code.lint_source(), LintSource::Imports));
if use_ast || use_imports {
match parse_program_tokens(tokens, "<filename>") {
Ok(python_ast) => {
if use_ast {
checks.extend(check_ast(&python_ast, locator, settings, autofix, path));
}
if use_imports {
checks.extend(check_imports(&python_ast, locator, settings, autofix));
}
}
Err(parse_error) => {
if settings.enabled.contains(&CheckCode::E999) {

View file

@ -1,4 +1,5 @@
pub mod builtins;
pub mod future;
pub mod keyword;
pub mod sys;
pub mod typing;

228
src/python/sys.rs Normal file
View file

@ -0,0 +1,228 @@
use std::collections::BTreeSet;
use once_cell::sync::Lazy;
// See: https://pycqa.github.io/isort/docs/configuration/options.html#known-standard-library
pub static KNOWN_STANDARD_LIBRARY: Lazy<BTreeSet<&'static str>> = Lazy::new(|| {
BTreeSet::from([
"_ast",
"_dummy_thread",
"_thread",
"abc",
"aifc",
"argparse",
"array",
"ast",
"asynchat",
"asyncio",
"asyncore",
"atexit",
"audioop",
"base64",
"bdb",
"binascii",
"binhex",
"bisect",
"builtins",
"bz2",
"cProfile",
"calendar",
"cgi",
"cgitb",
"chunk",
"cmath",
"cmd",
"code",
"codecs",
"codeop",
"collections",
"colorsys",
"compileall",
"concurrent",
"configparser",
"contextlib",
"contextvars",
"copy",
"copyreg",
"crypt",
"csv",
"ctypes",
"curses",
"dataclasses",
"datetime",
"dbm",
"decimal",
"difflib",
"dis",
"distutils",
"doctest",
"dummy_threading",
"email",
"encodings",
"ensurepip",
"enum",
"errno",
"faulthandler",
"fcntl",
"filecmp",
"fileinput",
"fnmatch",
"formatter",
"fpectl",
"fractions",
"ftplib",
"functools",
"gc",
"getopt",
"getpass",
"gettext",
"glob",
"graphlib",
"grp",
"gzip",
"hashlib",
"heapq",
"hmac",
"html",
"http",
"imaplib",
"imghdr",
"imp",
"importlib",
"inspect",
"io",
"ipaddress",
"itertools",
"json",
"keyword",
"lib2to3",
"linecache",
"locale",
"logging",
"lzma",
"macpath",
"mailbox",
"mailcap",
"marshal",
"math",
"mimetypes",
"mmap",
"modulefinder",
"msilib",
"msvcrt",
"multiprocessing",
"netrc",
"nis",
"nntplib",
"ntpath",
"numbers",
"operator",
"optparse",
"os",
"ossaudiodev",
"parser",
"pathlib",
"pdb",
"pickle",
"pickletools",
"pipes",
"pkgutil",
"platform",
"plistlib",
"poplib",
"posix",
"posixpath",
"pprint",
"profile",
"pstats",
"pty",
"pwd",
"py_compile",
"pyclbr",
"pydoc",
"queue",
"quopri",
"random",
"re",
"readline",
"reprlib",
"resource",
"rlcompleter",
"runpy",
"sched",
"secrets",
"select",
"selectors",
"shelve",
"shlex",
"shutil",
"signal",
"site",
"smtpd",
"smtplib",
"sndhdr",
"socket",
"socketserver",
"spwd",
"sqlite3",
"sre",
"sre_compile",
"sre_constants",
"sre_parse",
"ssl",
"stat",
"statistics",
"string",
"stringprep",
"struct",
"subprocess",
"sunau",
"symbol",
"symtable",
"sys",
"sysconfig",
"syslog",
"tabnanny",
"tarfile",
"telnetlib",
"tempfile",
"termios",
"test",
"textwrap",
"threading",
"time",
"timeit",
"tkinter",
"token",
"tokenize",
"trace",
"traceback",
"tracemalloc",
"tty",
"turtle",
"turtledemo",
"types",
"typing",
"unicodedata",
"unittest",
"urllib",
"uu",
"uuid",
"venv",
"warnings",
"wave",
"weakref",
"webbrowser",
"winreg",
"winsound",
"wsgiref",
"xdrlib",
"xml",
"xmlrpc",
"zipapp",
"zipfile",
"zipimport",
"zlib",
"zoneinfo",
])
});

View file

@ -2,16 +2,17 @@
//! command-line options. Structure mirrors the user-facing representation of
//! the various parameters.
use std::path::{Path, PathBuf};
use anyhow::{anyhow, Result};
use once_cell::sync::Lazy;
use path_absolutize::path_dedot;
use regex::Regex;
use crate::checks_gen::CheckCodePrefix;
use crate::settings::pyproject::load_options;
use crate::settings::types::{FilePattern, PerFileIgnore, PythonVersion};
use crate::{flake8_annotations, flake8_quotes, fs, isort, pep8_naming};
#[derive(Debug)]
pub struct Configuration {
@ -25,10 +26,12 @@ pub struct Configuration {
pub line_length: usize,
pub per_file_ignores: Vec<PerFileIgnore>,
pub select: Vec<CheckCodePrefix>,
pub src: Vec<PathBuf>,
pub target_version: PythonVersion,
// Plugins
pub flake8_annotations: flake8_annotations::settings::Settings,
pub flake8_quotes: flake8_quotes::settings::Settings,
pub isort: isort::settings::Settings,
pub pep8_naming: pep8_naming::settings::Settings,
}
@ -71,6 +74,25 @@ impl Configuration {
.map_err(|e| anyhow!("Invalid dummy-variable-rgx value: {e}"))?,
None => DEFAULT_DUMMY_VARIABLE_RGX.clone(),
},
src: options
.src
.map(|src| {
src.iter()
.map(|path| {
let path = Path::new(path);
match project_root {
Some(project_root) => fs::normalize_path_to(path, project_root),
None => fs::normalize_path(path),
}
})
.collect()
})
.unwrap_or_else(|| {
vec![match project_root {
Some(project_root) => project_root.clone(),
None => path_dedot::CWD.clone(),
}]
}),
target_version: options.target_version.unwrap_or(PythonVersion::Py310),
exclude: options
.exclude
@ -115,6 +137,10 @@ impl Configuration {
.flake8_quotes
.map(flake8_quotes::settings::Settings::from_options)
.unwrap_or_default(),
isort: options
.isort
.map(isort::settings::Settings::from_options)
.unwrap_or_default(),
pep8_naming: options
.pep8_naming
.map(pep8_naming::settings::Settings::from_options)

View file

@ -4,14 +4,16 @@
use std::collections::BTreeSet; use std::collections::BTreeSet;
use std::hash::{Hash, Hasher}; use std::hash::{Hash, Hasher};
use std::path::PathBuf;
use path_absolutize::path_dedot;
use regex::Regex; use regex::Regex;
use crate::checks::CheckCode; use crate::checks::CheckCode;
use crate::checks_gen::{CheckCodePrefix, PrefixSpecificity}; use crate::checks_gen::{CheckCodePrefix, PrefixSpecificity};
use crate::settings::configuration::Configuration; use crate::settings::configuration::Configuration;
use crate::settings::types::{FilePattern, PerFileIgnore, PythonVersion}; use crate::settings::types::{FilePattern, PerFileIgnore, PythonVersion};
use crate::{flake8_annotations, flake8_quotes, pep8_naming}; use crate::{flake8_annotations, flake8_quotes, isort, pep8_naming};
pub mod configuration; pub mod configuration;
pub mod options; pub mod options;
@ -27,10 +29,12 @@ pub struct Settings {
pub extend_exclude: Vec<FilePattern>, pub extend_exclude: Vec<FilePattern>,
pub line_length: usize, pub line_length: usize,
pub per_file_ignores: Vec<PerFileIgnore>, pub per_file_ignores: Vec<PerFileIgnore>,
pub src: Vec<PathBuf>,
pub target_version: PythonVersion, pub target_version: PythonVersion,
// Plugins // Plugins
pub flake8_annotations: flake8_annotations::settings::Settings, pub flake8_annotations: flake8_annotations::settings::Settings,
pub flake8_quotes: flake8_quotes::settings::Settings, pub flake8_quotes: flake8_quotes::settings::Settings,
pub isort: isort::settings::Settings,
pub pep8_naming: pep8_naming::settings::Settings, pub pep8_naming: pep8_naming::settings::Settings,
} }
@ -48,9 +52,11 @@ impl Settings {
extend_exclude: config.extend_exclude, extend_exclude: config.extend_exclude,
flake8_annotations: config.flake8_annotations, flake8_annotations: config.flake8_annotations,
flake8_quotes: config.flake8_quotes, flake8_quotes: config.flake8_quotes,
isort: config.isort,
line_length: config.line_length, line_length: config.line_length,
pep8_naming: config.pep8_naming, pep8_naming: config.pep8_naming,
per_file_ignores: config.per_file_ignores, per_file_ignores: config.per_file_ignores,
src: config.src,
target_version: config.target_version, target_version: config.target_version,
} }
} }
@ -63,9 +69,11 @@ impl Settings {
extend_exclude: Default::default(), extend_exclude: Default::default(),
line_length: 88, line_length: 88,
per_file_ignores: Default::default(), per_file_ignores: Default::default(),
src: vec![path_dedot::CWD.clone()],
target_version: PythonVersion::Py310, target_version: PythonVersion::Py310,
flake8_annotations: Default::default(), flake8_annotations: Default::default(),
flake8_quotes: Default::default(), flake8_quotes: Default::default(),
isort: Default::default(),
pep8_naming: Default::default(),
}
}
@@ -78,9 +86,11 @@ impl Settings {
extend_exclude: Default::default(),
line_length: 88,
per_file_ignores: Default::default(),
src: vec![path_dedot::CWD.clone()],
target_version: PythonVersion::Py310,
flake8_annotations: Default::default(),
flake8_quotes: Default::default(),
isort: Default::default(),
pep8_naming: Default::default(),
}
}
@@ -101,6 +111,7 @@ impl Hash for Settings {
// Add plugin properties in alphabetical order.
self.flake8_annotations.hash(state);
self.flake8_quotes.hash(state);
self.isort.hash(state);
self.pep8_naming.hash(state);
}
}
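
`Settings` implements `Hash` by hand, so every new plugin's settings, here `isort`, must be folded into that implementation. A manual hash like this is typically used to derive a cache key, so omitting a plugin would mean its configuration changes never invalidate stale results. A minimal, self-contained sketch of the idea, with a hypothetical `known_first_party` field standing in for the real isort settings:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical, simplified settings; `known_first_party` is illustrative only.
#[derive(Hash, Default)]
struct IsortSettings {
    known_first_party: Vec<String>,
}

#[derive(Default)]
struct Settings {
    line_length: usize,
    isort: IsortSettings,
}

impl Hash for Settings {
    fn hash<H: Hasher>(&self, state: &mut H) {
        self.line_length.hash(state);
        // Plugin settings participate in the hash so that changing them
        // produces a different key.
        self.isort.hash(state);
    }
}

fn key(settings: &Settings) -> u64 {
    let mut hasher = DefaultHasher::new();
    settings.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    let default = Settings::default();
    let mut tweaked = Settings::default();
    tweaked.isort.known_first_party.push("my_package".to_string());
    // Different isort settings yield different keys (with overwhelming probability).
    assert_ne!(key(&default), key(&tweaked));
}
```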

View file

@@ -6,7 +6,7 @@
use serde::{Deserialize, Serialize};
use crate::checks_gen::CheckCodePrefix;
use crate::settings::types::PythonVersion;
use crate::{flake8_annotations, flake8_quotes, isort, pep8_naming};
#[derive(Debug, PartialEq, Eq, Serialize, Deserialize, Default)]
#[serde(deny_unknown_fields, rename_all = "kebab-case")]
@@ -21,9 +21,11 @@ pub struct Options {
pub line_length: Option<usize>,
pub per_file_ignores: Option<BTreeMap<String, Vec<CheckCodePrefix>>>,
pub select: Option<Vec<CheckCodePrefix>>,
pub src: Option<Vec<String>>,
pub target_version: Option<PythonVersion>,
// Plugins
pub flake8_annotations: Option<flake8_annotations::settings::Options>,
pub flake8_quotes: Option<flake8_quotes::settings::Options>,
pub isort: Option<isort::settings::Options>,
pub pep8_naming: Option<pep8_naming::settings::Options>,
}
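
For reference, a rough sketch of how an options struct like this deserializes from a `pyproject.toml`-style table via serde. The types are hypothetical and trimmed down, and the nested `known-first-party` key is purely illustrative, not necessarily the real isort option set; the sketch assumes the `serde` (with the `derive` feature) and `toml` crates.

```rust
use serde::Deserialize;

#[derive(Debug, Default, Deserialize)]
#[serde(deny_unknown_fields, rename_all = "kebab-case")]
struct IsortOptions {
    // Illustrative field only.
    known_first_party: Option<Vec<String>>,
}

#[derive(Debug, Deserialize)]
#[serde(deny_unknown_fields, rename_all = "kebab-case")]
struct Options {
    line_length: Option<usize>,
    src: Option<Vec<String>>,
    // A missing `[isort]` table deserializes to `None`.
    isort: Option<IsortOptions>,
}

fn main() {
    let contents = r#"
line-length = 88
src = ["src", "tests"]

[isort]
known-first-party = ["my_package"]
"#;
    let options: Options = toml::from_str(contents).expect("invalid TOML");
    println!("{options:?}");

    // Omitted keys fall back to `None`, which is why the test fixtures below
    // gain `src: None` and `isort: None`.
    let minimal: Options = toml::from_str("line-length = 79").expect("invalid TOML");
    assert!(minimal.src.is_none() && minimal.isort.is_none());
}
```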

View file

@@ -143,9 +143,11 @@ mod tests {
extend_ignore: None,
per_file_ignores: None,
dummy_variable_rgx: None,
src: None,
target_version: None,
flake8_annotations: None,
flake8_quotes: None,
isort: None,
pep8_naming: None,
})
})
@@ -172,9 +174,11 @@ line-length = 79
extend_ignore: None,
per_file_ignores: None,
dummy_variable_rgx: None,
src: None,
target_version: None,
flake8_annotations: None,
flake8_quotes: None,
isort: None,
pep8_naming: None,
})
})
@@ -201,9 +205,11 @@ exclude = ["foo.py"]
extend_ignore: None,
per_file_ignores: None,
dummy_variable_rgx: None,
src: None,
target_version: None,
flake8_annotations: None,
flake8_quotes: None,
isort: None,
pep8_naming: None,
})
})
@@ -230,9 +236,11 @@ select = ["E501"]
extend_ignore: None,
per_file_ignores: None,
dummy_variable_rgx: None,
src: None,
target_version: None,
flake8_annotations: None,
flake8_quotes: None,
isort: None,
pep8_naming: None,
})
})
@@ -260,9 +268,11 @@ ignore = ["E501"]
extend_ignore: None,
per_file_ignores: None,
dummy_variable_rgx: None,
src: None,
target_version: None,
flake8_annotations: None,
flake8_quotes: None,
isort: None,
pep8_naming: None,
})
})
@@ -336,6 +346,7 @@ other-attribute = 1
vec![CheckCodePrefix::F401]
),])),
dummy_variable_rgx: None,
src: None,
target_version: None,
flake8_annotations: None,
flake8_quotes: Some(flake8_quotes::settings::Options {
@@ -344,6 +355,7 @@ other-attribute = 1
docstring_quotes: Some(Quote::Double),
avoid_escape: Some(true),
}),
isort: None,
pep8_naming: Some(pep8_naming::settings::Options {
ignore_names: Some(vec![
"setUp".to_string(),

View file

@@ -7,7 +7,7 @@
use regex::Regex;
use crate::checks::CheckCode;
use crate::checks_gen::CheckCodePrefix;
use crate::settings::types::{FilePattern, PythonVersion};
use crate::{flake8_annotations, flake8_quotes, isort, pep8_naming, Configuration};
/// Struct to render user-facing exclusion patterns.
#[derive(Debug)]
@@ -45,10 +45,12 @@ pub struct UserConfiguration {
pub line_length: usize,
pub per_file_ignores: Vec<(Exclusion, Vec<CheckCode>)>,
pub select: Vec<CheckCodePrefix>,
pub src: Vec<PathBuf>,
pub target_version: PythonVersion,
// Plugins
pub flake8_annotations: flake8_annotations::settings::Settings,
pub flake8_quotes: flake8_quotes::settings::Settings,
pub isort: isort::settings::Settings,
pub pep8_naming: pep8_naming::settings::Settings,
// Non-settings exposed to the user
pub project_root: Option<PathBuf>,
@@ -89,9 +91,11 @@
})
})
.collect(),
select: configuration.select,
src: configuration.src,
target_version: configuration.target_version,
flake8_annotations: configuration.flake8_annotations,
flake8_quotes: configuration.flake8_quotes,
isort: configuration.isort,
pep8_naming: configuration.pep8_naming,
project_root,
pyproject,