Commit graph

1875 commits

Author SHA1 Message Date
Charlie Marsh
615337a54d
Remove newline-insertion logic from JoinNodesBuilder (#6205)
## Summary

This PR moves the "insert empty lines" behavior out of
`JoinNodesBuilder` and into the `Suite` formatter. I find it a little
confusing that the logic is split between those two formatters right
now, and since this is _only_ used in that one place, IMO it is a bit
simpler to just inline it and use a single approach to tracking state
(right now, both are stateful).

The only other place this was used was for decorators. As a side effect,
we now remove blank lines in both of these cases, which is a known and
intentional deviation from Black (which preserves the empty line before
the comment in the first case):

```python
@foo

# Hello
@bar
def baz():
    pass

@foo

@bar
def baz():
    pass
```
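
Per the description above, both cases would now format with the blank line after `@foo` removed (illustrative):

```python
@foo
# Hello
@bar
def baz():
    pass

@foo
@bar
def baz():
    pass
```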
2023-07-31 16:58:15 -04:00
Charlie Marsh
6ee5cb37c0
Reset model state when exiting deferred visitors (#6208)
## Summary

Very subtle bug related to the AST traversal. Given:

```python
from __future__ import annotations

from logging import getLogger

__all__ = ("getLogger",)


def foo() -> None:
    pass
```

We end up visiting the `-> None` annotation, then reusing the state
snapshot when we go to visit the `__all__` exports, so when we visit
`"getLogger"`, we think we're inside of a deferred type annotation.

This PR changes all the deferred visitors to snapshot and restore the
state, which is a lot safer -- that way, the visitors avoid modifying
the current visitor state. (Previously, they implicitly left the visitor
state set to the state of the _last_ thing they visited.)
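
A minimal, runnable Python sketch of the snapshot/restore pattern (illustrative only: the actual implementation is in Rust, and these names are hypothetical):

```python
class Model:
    """Stand-in for the semantic model's visitor state."""

    def __init__(self) -> None:
        self.in_annotation = False

    def snapshot(self) -> dict:
        return {"in_annotation": self.in_annotation}

    def restore(self, state: dict) -> None:
        self.in_annotation = state["in_annotation"]


def visit_deferred(model: Model, deferred: list[tuple[str, dict]]) -> None:
    saved = model.snapshot()  # snapshot the state on entry
    for node, state in deferred:
        model.restore(state)  # state captured when the node was deferred
        print(f"visiting {node!r} with in_annotation={model.in_annotation}")
    model.restore(saved)  # restore on exit, rather than leaking the last state


model = Model()
visit_deferred(model, [("-> None", {"in_annotation": True})])
assert model.in_annotation is False  # later visits no longer see stale state
```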

Closes https://github.com/astral-sh/ruff/issues/6207.
2023-07-31 19:46:52 +00:00
konsti
0fddb31235
Use tracing for format_dev (#6177)
## Summary

[tracing](https://github.com/tokio-rs/tracing) is a library for logging,
tracing, and related features with a large ecosystem. Using
[tracing-subscriber](https://docs.rs/tracing-subscriber) and
[tracing-indicatif](https://github.com/emersonford/tracing-indicatif),
we get a nice logging output that you can configure with `RUST_LOG`
(e.g. `RUST_LOG=debug`) and a live look into the formatter progress.

Default: (screenshot)

`RUST_LOG=debug`: (screenshot)

It's easy to see in this output which files take a disproportionate
amount of time.

(screen recording: Peek 2023-07-30 14-35.webm)

This opens up further integration with the tracing ecosystem:
[tracing-timing](https://docs.rs/tracing-timing/latest/tracing_timing/)
and [tokio-console](https://github.com/tokio-rs/console) can e.g. show
histograms, and the JSON output allows us to build better pipelines than
grepping a log file.

One caveat is that we use `parent: None` for the logging statements,
because tracing-subscriber does not allow deactivating the span without
reimplementing all the other log message formatting too, and we don't
need span information, especially since it would currently show the
progress bar span.

## Test Plan

n/a
2023-07-31 19:14:01 +00:00
konsti
a7aa3caaae
Rename formatter_progress to formatter_ecosystem_checks (#6194)
Rename `scripts/formatter_progress.sh` to
`formatter/formatter_ecosystem_checks.sh`, since that name better fits
the actual task.
2023-07-31 18:33:12 +00:00
konsti
e52b636da0
Log configuration in ruff_dev (#6193)
**Summary** This includes two changes:
 * Allow setting `-v` in `ruff_dev`, using the `ruff_cli` implementation
 * Log, via `debug!`, which Ruff configuration strategy was used

This is a byproduct of debugging #6187.

**Test Plan** n/a
2023-07-31 17:52:38 +00:00
konsti
9063f4524d
Fix formatting of trailing unescaped quotes in raw triple quoted strings (#6202)
**Summary** This prevents us from turning `r'''\""'''` into
`r"""\"""""`, which is invalid syntax.

This PR fixes CI, which is currently broken on main (in a way that still
passes on linter PRs and allows merging formatter PRs, but it's bad to
have a red job). Once merged, I'll make the formatter ecosystem checks a
required check.

**Test Plan** Added a regression test.
2023-07-31 19:25:16 +02:00
Charlie Marsh
dbd60b2cf5
Bump version to 0.0.281 (#6195) 2023-07-31 13:21:43 -04:00
Charlie Marsh
7eb2ba47cc
Add empty line after import block (#6200)
## Summary

Ensures that, given:

```python
import os
x = 1
```

We format like:

```python
import os

x = 1
```
2023-07-31 12:01:45 -04:00
Dhruv Manilawala
cb34e6d322
Avoid parenthesizing comprehension element (#6198)
## Summary

This PR adds a new precedence level for the comprehension element, which
fixes the generator so that it no longer adds parentheses around the
comprehension element every time.

The new precedence level is `COMPREHENSION_ELEMENT` and it should occur after
the `NAMED_EXPR` precedence level because named expressions are always parenthesized.

This matches the behavior of Python `ast.unparse` and tested with the
following snippet:

```python
import ast

code = ""
ast.unparse(ast.parse(code))
```
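
For instance, `ast.unparse` omits parentheses around a conditional expression used as the element, but keeps them for a named expression (illustrative cases, not from the PR):

```python
import ast

# Conditional expression as the comprehension element: no parentheses added.
print(ast.unparse(ast.parse("[x if x else 1 for x in y]")))
# [x if x else 1 for x in y]

# Named expression as the element: always parenthesized.
print(ast.unparse(ast.parse("[(x := 1) for _ in range(3)]")))
# [(x := 1) for _ in range(3)]
```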

## Test Plan

Add a bunch of test cases for all the valid nodes at that position.

fixes: #5777
2023-07-31 20:56:42 +05:30
Harutaka Kawamura
0274de1fff
Preserve backslash in raw string literal (#6152) 2023-07-31 12:48:17 +00:00
konsti
a540933bc9
Print log when formatter ecosystem checks fail (#6187)
**Summary** Print the errors when the formatter ecosystem checks fail.
I'm not happy that we collect the log in the first place, but this is
the less invasive change, and we need it to unblock reviewing
#6152.

**Test Plan**
CI run 1547787940.
2023-07-31 14:45:38 +02:00
Micha Reiser
311a1f9ec4
Remove len from JoinCommaSeparatedBuilder (#6185) 2023-07-31 12:19:47 +00:00
Luc Khai Hai
b95fc6d162
Format bytes string (#6166)

## Summary

Format bytes string
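
For example (an illustrative case, assuming bytes literals are normalized the same way as string literals):

```python
x = b'abc'  # reformatted to: x = b"abc"
```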

Closes #6064

## Test Plan

Added a fixture based on the existing string fixture.
2023-07-31 10:46:40 +02:00
Charlie Marsh
de898c52eb
Avoid falsely marking non-submodules as submodule aliases (#6182)
## Summary

We have some code to ensure that if an aliased import is used, any
submodules should be marked as used too. This comment says it best:

```rust
// If the name of a submodule import is the same as an alias of another import, and the
// alias is used, then the submodule import should be marked as used too.
//
// For example, mark `pyarrow.csv` as used in:
//
// ```python
// import pyarrow as pa
// import pyarrow.csv
// print(pa.csv.read_csv("test.csv"))
// ```
```

However, it looks like when we go to look up `pyarrow` (of `import
pyarrow as pa`), we aren't checking to ensure the resolved binding is
_actually_ an import. This was causing us to attribute `print(rm.ANY)`
to `def requests_mock` here:

```python
import requests_mock as rm

def requests_mock(requests_mock: rm.Mocker):
    print(rm.ANY)
```

Closes https://github.com/astral-sh/ruff/issues/6180.
2023-07-30 22:16:25 +00:00
Charlie Marsh
76741cac77
Add global and nonlocal formatting (#6170)
## Summary

Adds `global` and `nonlocal` formatting, without the "deviation from
black" outlined in the linked issue, which I'll do separately.

See: https://github.com/astral-sh/ruff/issues/4798.

## Test Plan

Added a fixture in the Ruff-specific directory since the Black fixtures
don't seem to cover this.
2023-07-29 14:39:42 +00:00
Charlie Marsh
5d9814d84d
Remove parentheses around some walrus operators (#6173)
## Summary

Closes https://github.com/astral-sh/ruff/issues/5781

## Test Plan

Added cases to
`crates/ruff_python_formatter/resources/test/fixtures/ruff/expression/named_expr.py`
one-by-one and adjusted the condition as needed.
2023-07-29 10:06:26 -04:00
Charlie Marsh
4231ed2fc3
Skip partial duplicates when applying multi-edit fixes (#6144)
## Summary

Right now, if we have two fixes that have an overlapping edit, but not
an _identical_ set of edits, they'll conflict, causing us to do another
linter traversal. Here, I've enabled the fixer to support partially
overlapping edits, which (as an example) lets us greatly reduce the
number of iterations required in the test suite.

The most common case here is that in which a bunch of edits need to
import some symbol, and then use that symbol, but in different ways. In
that case, all edits will have a common fix (to import the symbol), but
deviate in some way. With this change, we can do all of those edits in
one pass.

Note that the simplest way to enable this was to store sorted edits on
`Fix`. We don't allow modifying the edits on `Fix` once it's
constructed, so this is an easy change, and allows us to avoid a bunch
of clones and traversals later on.
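
A rough sketch of the deduplication idea (illustrative Python, not the actual Rust implementation, which defers an entire conflicting fix to the next pass rather than skipping individual edits):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Edit:
    start: int
    end: int
    content: str


def apply_edits(source: str, fixes: list[list[Edit]]) -> str:
    # Deduplicate identical edits (e.g., several fixes inserting the same
    # import), then apply the rest in sorted order, skipping any edit that
    # overlaps one already applied.
    unique = sorted({e for fix in fixes for e in fix}, key=lambda e: (e.start, e.end))
    output, last_end = [], 0
    for edit in unique:
        if edit.start < last_end:
            continue  # partial overlap: leave for a later pass
        output.append(source[last_end : edit.start])
        output.append(edit.content)
        last_end = edit.end
    output.append(source[last_end:])
    return "".join(output)
```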

Closes #5800.
2023-07-29 12:11:57 +00:00
Charlie Marsh
badbfb2d3e
Skip BOM when determining Locator's line starts (#6159)
## Summary

If a file has a BOM, the import sorter _always_ reports the imports as
unsorted. The acute issue is that we detect that the line has leading
content (before the imports), which we always consider a violation.
Rather than fixing that one site, this PR instead makes `.line_start`
BOM-aware.
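
The underlying issue, illustrated in Python:

```python
source = "\ufeffimport os\n"  # file starts with a UTF-8 BOM
# If offset 0 is treated as the line start, the import appears to have
# leading content; the line's logical start should fall after the BOM.
assert source[0] == "\ufeff"
assert source[1:].startswith("import")
```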

Fixes https://github.com/astral-sh/ruff/issues/6155.
2023-07-29 11:47:13 +00:00
Dhruv Manilawala
44bdf20221
[pep8-naming]: New config option extend-ignore-names (#6169)
## Summary

This PR adds a new config option for the `pep8-naming` plugin,
`extend-ignore-names`, which extends the default values of the
`ignore-names` option.

resolves: #6050
2023-07-29 17:11:04 +05:30
Dhruv Manilawala
3c99fbf808
Implement --diff for Jupyter Notebooks (#6149)
## Summary

Implement `--diff` for Jupyter Notebooks

## Test Plan

1. Use `crates/ruff/resources/test/fixtures/jupyter/isort.ipynb` as a
   test case, and add a markdown cell in between the code cells to check
   that the diff outputs the correct cell index.
2. Run the command:
`cargo run --bin ruff --package ruff_cli -- check --no-cache --isolated
--select=ALL crates/ruff/resources/test/fixtures/jupyter/isort.ipynb
--fix --diff`

<details><summary>Example output:</summary>
<p>

```diff
--- /Users/dhruv/playground/ruff/notebooks/test.ipynb:cell 0
+++ /Users/dhruv/playground/ruff/notebooks/test.ipynb:cell 0
@@ -1,3 +0,0 @@
-from pathlib import Path
-import random
-import math
--- /Users/dhruv/playground/ruff/notebooks/test.ipynb:cell 4
+++ /Users/dhruv/playground/ruff/notebooks/test.ipynb:cell 4
@@ -1,5 +1,3 @@
-from typing import Any
-import collections
 # Newline should be added here
 def foo():
     pass

--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 8
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 8
@@ -1,8 +1,7 @@
 import pprint
 import tempfile
 
-from IPython import display
 import matplotlib.pyplot as plt
-
 import tensorflow as tf
-import tensorflow_datasets as tfds
+import tensorflow_datasets as tfds
+from IPython import display
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 10
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 10
@@ -1,5 +1,4 @@
 import tensorflow_models as tfm
 
 # These are not in the tfm public API for v2.9. They will be available in v2.10
-from official.vision.serving import export_saved_model_lib
-import official.core.train_lib
+from official.vision.serving import export_saved_model_lib
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 13
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 13
@@ -1,5 +1,5 @@
-exp_config = tfm.core.exp_factory.get_exp_config('resnet_imagenet')
-tfds_name = 'cifar10'
+exp_config = tfm.core.exp_factory.get_exp_config("resnet_imagenet")
+tfds_name = "cifar10"
 ds,ds_info = tfds.load(
 tfds_name,
 with_info=True)
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 15
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 15
@@ -6,12 +6,12 @@
 # Configure training and testing data
 batch_size = 128
 
-exp_config.task.train_data.input_path = ''
+exp_config.task.train_data.input_path = ""
 exp_config.task.train_data.tfds_name = tfds_name
-exp_config.task.train_data.tfds_split = 'train'
+exp_config.task.train_data.tfds_split = "train"
 exp_config.task.train_data.global_batch_size = batch_size
 
-exp_config.task.validation_data.input_path = ''
+exp_config.task.validation_data.input_path = ""
 exp_config.task.validation_data.tfds_name = tfds_name
-exp_config.task.validation_data.tfds_split = 'test'
+exp_config.task.validation_data.tfds_split = "test"
 exp_config.task.validation_data.global_batch_size = batch_size
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 17
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 17
@@ -1,16 +1,16 @@
 logical_device_names = [logical_device.name for logical_device in tf.config.list_logical_devices()]
 
-if 'GPU' in ''.join(logical_device_names):
-  print('This may be broken in Colab.')
-  device = 'GPU'
-elif 'TPU' in ''.join(logical_device_names):
-  print('This may be broken in Colab.')
-  device = 'TPU'
+if "GPU" in "".join(logical_device_names):
+  print("This may be broken in Colab.")
+  device = "GPU"
+elif "TPU" in "".join(logical_device_names):
+  print("This may be broken in Colab.")
+  device = "TPU"
 else:
-  print('Running on CPU is slow, so only train for a few steps.')
-  device = 'CPU'
+  print("Running on CPU is slow, so only train for a few steps.")
+  device = "CPU"
 
-if device=='CPU':
+if device=="CPU":
   train_steps = 20
   exp_config.trainer.steps_per_loop = 5
 else:
@@ -20,9 +20,9 @@
 exp_config.trainer.summary_interval = 100
 exp_config.trainer.checkpoint_interval = train_steps
 exp_config.trainer.validation_interval = 1000
-exp_config.trainer.validation_steps =  ds_info.splits['test'].num_examples // batch_size
+exp_config.trainer.validation_steps =  ds_info.splits["test"].num_examples // batch_size
 exp_config.trainer.train_steps = train_steps
-exp_config.trainer.optimizer_config.learning_rate.type = 'cosine'
+exp_config.trainer.optimizer_config.learning_rate.type = "cosine"
 exp_config.trainer.optimizer_config.learning_rate.cosine.decay_steps = train_steps
 exp_config.trainer.optimizer_config.learning_rate.cosine.initial_learning_rate = 0.1
 exp_config.trainer.optimizer_config.warmup.linear.warmup_steps = 100
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 21
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 21
@@ -1,14 +1,14 @@
 logical_device_names = [logical_device.name for logical_device in tf.config.list_logical_devices()]
 
 if exp_config.runtime.mixed_precision_dtype == tf.float16:
-    tf.keras.mixed_precision.set_global_policy('mixed_float16')
+    tf.keras.mixed_precision.set_global_policy("mixed_float16")
 
-if 'GPU' in ''.join(logical_device_names):
+if "GPU" in "".join(logical_device_names):
   distribution_strategy = tf.distribute.MirroredStrategy()
-elif 'TPU' in ''.join(logical_device_names):
+elif "TPU" in "".join(logical_device_names):
   tf.tpu.experimental.initialize_tpu_system()
-  tpu = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='/device:TPU_SYSTEM:0')
+  tpu = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="/device:TPU_SYSTEM:0")
   distribution_strategy = tf.distribute.experimental.TPUStrategy(tpu)
 else:
-  print('Warning: this will be really slow.')
+  print("Warning: this will be really slow.")
   distribution_strategy = tf.distribute.OneDeviceStrategy(logical_device_names[0])
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 23
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 23
@@ -1,5 +1,3 @@
 with distribution_strategy.scope():
   model_dir = tempfile.mkdtemp()
   task = tfm.core.task_factory.get_task(exp_config.task, logging_dir=model_dir)
-
-#  tf.keras.utils.plot_model(task.build_model(), show_shapes=True)
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 24
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 24
@@ -1,4 +1,4 @@
 for images, labels in task.build_inputs(exp_config.task.train_data).take(1):
   print()
-  print(f'images.shape: {str(images.shape):16}  images.dtype: {images.dtype!r}')
-  print(f'labels.shape: {str(labels.shape):16}  labels.dtype: {labels.dtype!r}')
+  print(f"images.shape: {images.shape!s:16}  images.dtype: {images.dtype!r}")
+  print(f"labels.shape: {labels.shape!s:16}  labels.dtype: {labels.dtype!r}")
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 27
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 27
@@ -1 +1 @@
-plt.hist(images.numpy().flatten());
+plt.hist(images.numpy().flatten())
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 29
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 29
@@ -1,2 +1,2 @@
-label_info = ds_info.features['label']
+label_info = ds_info.features["label"]
 label_info.int2str(1)
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 31
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 31
@@ -10,9 +10,6 @@
     if predictions is None:
       plt.title(label_info.int2str(labels[i]))
     else:
-      if labels[i] == predictions[i]:
-        color = 'g'
-      else:
-        color = 'r'
+      color = "g" if labels[i] == predictions[i] else "r"
       plt.title(label_info.int2str(predictions[i]), color=color)
     plt.axis("off")
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 35
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 35
@@ -1,3 +1,3 @@
-plt.figure(figsize=(10, 10));
+plt.figure(figsize=(10, 10))
 for images, labels in task.build_inputs(exp_config.task.validation_data).take(1):
   show_batch(images, labels)
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 37
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 37
@@ -1,7 +1,7 @@
 model, eval_logs = tfm.core.train_lib.run_experiment(
     distribution_strategy=distribution_strategy,
     task=task,
-    mode='train_and_eval',
+    mode="train_and_eval",
     params=exp_config,
     model_dir=model_dir,
     run_post_eval=True)
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 38
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 38
@@ -1 +0,0 @@
-#  tf.keras.utils.plot_model(model, show_shapes=True)
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 40
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 40
@@ -1,4 +1,4 @@
 for key, value in eval_logs.items():
     if isinstance(value, tf.Tensor):
       value = value.numpy()
-    print(f'{key:20}: {value:.3f}')
+    print(f"{key:20}: {value:.3f}")
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 42
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 42
@@ -4,5 +4,5 @@
 
 show_batch(images, labels, tf.cast(predictions, tf.int32))
 
-if device=='CPU':
-  plt.suptitle('The model was only trained for a few steps, it is not expected to do well.')
+if device=="CPU":
+  plt.suptitle("The model was only trained for a few steps, it is not expected to do well.")
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 45
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 45
@@ -1,8 +1,8 @@
 # Saving and exporting the trained model
 export_saved_model_lib.export_inference_graph(
-    input_type='image_tensor',
+    input_type="image_tensor",
     batch_size=1,
     input_image_size=[32, 32],
     params=exp_config,
     checkpoint_path=tf.train.latest_checkpoint(model_dir),
-    export_dir='./export/')
+    export_dir="./export/")
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 47
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 47
@@ -1,3 +1,3 @@
 # Importing SavedModel
-imported = tf.saved_model.load('./export/')
-model_fn = imported.signatures['serving_default']
+imported = tf.saved_model.load("./export/")
+model_fn = imported.signatures["serving_default"]
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 49
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 49
@@ -1,10 +1,10 @@
 plt.figure(figsize=(10, 10))
-for data in tfds.load('cifar10', split='test').batch(12).take(1):
+for data in tfds.load("cifar10", split="test").batch(12).take(1):
   predictions = []
-  for image in data['image']:
-    index = tf.argmax(model_fn(image[tf.newaxis, ...])['logits'], axis=1)[0]
+  for image in data["image"]:
+    index = tf.argmax(model_fn(image[tf.newaxis, ...])["logits"], axis=1)[0]
     predictions.append(index)
-  show_batch(data['image'], data['label'], predictions)
+  show_batch(data["image"], data["label"], predictions)
 
-  if device=='CPU':
-    plt.suptitle('The model was only trained for a few steps, it is not expected to do better than random.')
+  if device=="CPU":
+    plt.suptitle("The model was only trained for a few steps, it is not expected to do better than random.")

Would fix 61 errors.
```

</p>
</details> 

resolves: #4727
2023-07-29 04:22:56 +00:00
Charlie Marsh
4802c7c7d8
Avoid key-in-dict violations for self accesses (#6165)
Closes https://github.com/astral-sh/ruff/issues/6163.
2023-07-29 03:35:26 +00:00
Charlie Marsh
646ff6497c
Ignore end-of-line file exemption comments (#6160)
## Summary

This PR protects against code like:

```python
from typing import Optional

import bar  # ruff: noqa
import baz

class Foo:
    x: Optional[str] = None
```

In which the user wrote `# ruff: noqa` to ignore a specific error, not
realizing that it was a file-level exemption that thus turned off all
lint rules.

Specifically, if a `# ruff: noqa` directive is not at the start of a
line, we now ignore it and warn, since this is almost certainly a
mistake.
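
For a targeted suppression, a line-level directive is the right tool, e.g.:

```python
import bar  # noqa: F401  # suppresses only F401, and only on this line
```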
2023-07-29 00:40:32 +00:00
Victor Hugo Gomes
e0d5c7564f
[flake8-pyi] Implement PYI049 (#6136)
## Summary

Checks for the presence of unused private `typing.TypedDict`
definitions.
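
A case this rule flags (illustrative):

```python
import typing


class _PrivateDict(typing.TypedDict):  # private and never referenced: PYI049
    foo: str
```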

ref #848 

## Test Plan

Snapshots and manual runs of flake8
2023-07-29 00:34:36 +00:00
Victor Hugo Gomes
7838d8c8af
Implement PYI047 (#6134)
## Summary

Checks for the presence of unused private `typing.TypeAlias`
definitions.
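
A case this rule flags (illustrative):

```python
import typing

_PrivateAlias: typing.TypeAlias = list[int]  # never referenced: PYI047
```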

ref #848 

## Test Plan

Snapshots and manual runs of flake8
2023-07-29 00:21:29 +00:00
Zanie Blue
047c211837
Add semantic analysis of type aliases and parameters (#6109)
Requires https://github.com/astral-sh/RustPython-Parser/pull/42
Related https://github.com/PyCQA/pyflakes/pull/778
[PEP-695](https://peps.python.org/pep-0695)
Part of #5062 


## Summary

Adds a scope for type parameters, a type parameter binding kind, and
checker visitation of type parameters in type alias statements, function
definitions, and class definitions.

A few changes were necessary to ensure correctness following the
insertion of a new scope between function and class scopes and their
parent.
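
For reference, the PEP 695 constructs in question (each introduces a type-parameter scope):

```python
type Alias[T] = list[T]  # type alias statement


def first[T](items: list[T]) -> T:  # function definition
    return items[0]


class Box[T]:  # class definition
    value: T
```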

## Test Plan

Undefined name snapshots.

Unused type parameter rule will be added as follow-up.
2023-07-28 17:06:37 -05:00
Charlie Marsh
134d447d4c
Avoid refactoring x[:1]-like slices in RUF015 (#6150)
## Summary

Right now, `RUF015` will try to rewrite `x[:1]` as `[next(x)]`. This
isn't equivalent if, for example, `x` is empty: slicing like
`x[:1]` is forgiving, while `next` raises `StopIteration`. For me, this is
a little too much of a deviation to be comfortable with, and most of the
value in this rule is the `x[0]` to `next(x)` conversion anyway.
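
Concretely (using `iter`, since `next` requires an iterator):

```python
x: list[int] = []

print(x[:1])   # [] -- slicing past the end is forgiving
next(iter(x))  # raises StopIteration
```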

Closes https://github.com/astral-sh/ruff/issues/6148.
2023-07-28 09:38:13 -04:00
Charlie Marsh
cd4147423c
Skip PERF203 violations for multi-statement loops (#6145)
Closes https://github.com/astral-sh/ruff/issues/5858.
2023-07-28 04:55:55 +00:00
Charlie Marsh
d15436458f
Only run unused private type rules over finalized bindings (#6142)
## Summary

In #6134 and #6136, we see some false positives for "shadowed" class
definitions. For example, here, the first definition is flagged as
unused, since from the perspective of the semantic model (which doesn't
understand branching), it appears to be immediately shadowed in the
`else`, and thus never used:

```python
if sys.version_info >= (3, 11):
    class _RootLoggerConfiguration(TypedDict, total=False):
        level: _Level
        filters: Sequence[str | _FilterType]
        handlers: Sequence[str]

else:
    class _RootLoggerConfiguration(TypedDict, total=False):
        level: _Level
        filters: Sequence[str]
        handlers: Sequence[str]
```

Instead of looking at _all_ bindings, we should instead look at the
"live" bindings, which is similar to how other rules (like unused-variable
detection) are structured. We thus move the rule from
`bindings.rs` (which iterates over _all_ bindings, regardless of whether
they're shadowed) to `deferred_scopes.rs`, which iterates over all
"live" bindings once a scope has been fully analyzed.

## Test Plan

`cargo test`
2023-07-28 02:16:09 +00:00
Charlie Marsh
0bc3edf6c9
Add documentation and test cases for redefinition (#6135) 2023-07-28 00:01:42 +00:00
Aarni Koskela
3d54d31cd9
Implement E241 and E242 (tab/multiple ws after commas) (#6094)
## Summary

This PR implements pycodestyle's E241 (multiple spaces after comma) and
E242 (tab after comma) lints.
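
Illustrative cases:

```python
a = (1,  2)  # E241: multiple spaces after comma
b = (1,	2)  # E242: tab after comma
```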

These are marked as nursery rules like many other pycodestyle rules.

Refs #2402

## Test Plan

E24.py copied from pycodestyle.
2023-07-27 18:58:41 +00:00
Tom Kuson
1418ee62f8
Add more documentation to the flake8-bandit rules (#6128)
## Summary

Completes the documentation for the ruleset, apart from four rules that
have contradictions and so need more thought on how to document them.
Related to #2646.

## Test Plan

`python scripts/test_docs_formatted.py`
2023-07-27 18:57:45 +00:00
Harutaka Kawamura
bf987f80f4
Add PT017 and PT019 docs (#6115) 2023-07-27 18:56:34 +00:00
rembridge
bb08eea5cc
missing-whitespace-around-operators comment (#6106)
**Summary**

Updated doc comments for `missing_whitespace_around_operator.rs`. Online
docs also benefit from this update.

**Test Plan**

Checked docs via
[mkdocs](389fe13c93/CONTRIBUTING.md (L267-L296))
2023-07-27 14:52:43 -04:00
Tom Kuson
d16216a2c2
Add documentation to the flynt rules (#6130)
## Summary

Completes the documentation for the one and only (current) rule in the
`flynt` ruleset. Related to #2646.

## Test Plan

`python scripts/test_docs_formatted.py`
2023-07-27 14:32:59 -04:00
Jelle van der Waa
0853004f41
[pylint] Implement eq-without-hash rule (PLW1641) (#5955)
Implement
https://pylint.pycqa.org/en/latest/user_guide/messages/warning/eq-without-hash.html
Issue https://github.com/astral-sh/ruff/issues/970

It's not enabled by default in pylint, so I guess it shouldn't be
enabled by default in Ruff either?
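
A minimal example of what this rule flags (illustrative):

```python
class Point:
    def __init__(self, x: int) -> None:
        self.x = x

    # Defining __eq__ without __hash__ makes instances unhashable
    # (Python sets __hash__ to None), which eq-without-hash reports.
    def __eq__(self, other: object) -> bool:
        return isinstance(other, Point) and other.x == self.x
```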
2023-07-27 18:28:44 +00:00
Harutaka Kawamura
fb5bbe30c7
Update SIM115 to cover pathlib.Path.open (#6118) 2023-07-27 14:20:52 -04:00
Charlie Marsh
dd706c7a35
Fix E211 documentation (#6133) 2023-07-27 17:19:33 +00:00
Charlie Marsh
e15b9c5572
Cache name resolutions in the semantic model (#6047)
## Summary

This PR stores the mapping from `ExprName` node to resolved `BindingId`,
which lets us skip scope lookups in `resolve_call_path`. It's enabled by
#6045, since that PR ensures that when we analyze a node (and thus call
`resolve_call_path`), we'll have already visited its `ExprName`
elements.

In more detail: imagine that we're traversing over `foo.bar()`. When we
read `foo`, it will be an `ExprName`, which we'll then resolve to a
binding via `handle_node_load`. With this change, we then store that
binding in a map. Later, if we call `collect_call_path` on `foo.bar`,
we'll identify `foo` (the "head" of the attribute) and grab the resolved
binding in that map. _Almost_ all names are now resolved in advance,
though it's not a strict requirement, and some rules break that pattern
(e.g., if we're analyzing arguments, and they need to inspect their
annotations, which are visited in a deferred manner).

This improves performance by 4-6% on the all-rules benchmark. It looks
like it hurts performance (1-2% drop) in the default-rules benchmark,
presumably because those rules don't call `resolve_call_path` nearly as
much, and so we're paying for these extra writes.

Here's the benchmark data:

```
linter/default-rules/numpy/globals.py
                        time:   [67.270 µs 67.380 µs 67.489 µs]
                        thrpt:  [43.720 MiB/s 43.792 MiB/s 43.863 MiB/s]
                 change:
                        time:   [+0.4747% +0.7752% +1.0626%] (p = 0.00 < 0.05)
                        thrpt:  [-1.0514% -0.7693% -0.4724%]
                        Change within noise threshold.
Found 1 outliers among 100 measurements (1.00%)
  1 (1.00%) high severe
linter/default-rules/pydantic/types.py
                        time:   [1.4067 ms 1.4105 ms 1.4146 ms]
                        thrpt:  [18.028 MiB/s 18.081 MiB/s 18.129 MiB/s]
                 change:
                        time:   [+1.3152% +1.6953% +2.0414%] (p = 0.00 < 0.05)
                        thrpt:  [-2.0006% -1.6671% -1.2981%]
                        Performance has regressed.
linter/default-rules/numpy/ctypeslib.py
                        time:   [637.67 µs 638.96 µs 640.28 µs]
                        thrpt:  [26.006 MiB/s 26.060 MiB/s 26.113 MiB/s]
                 change:
                        time:   [+1.5859% +1.8109% +2.0353%] (p = 0.00 < 0.05)
                        thrpt:  [-1.9947% -1.7787% -1.5611%]
                        Performance has regressed.
linter/default-rules/large/dataset.py
                        time:   [3.2289 ms 3.2336 ms 3.2383 ms]
                        thrpt:  [12.563 MiB/s 12.581 MiB/s 12.599 MiB/s]
                 change:
                        time:   [+0.8029% +0.9898% +1.1740%] (p = 0.00 < 0.05)
                        thrpt:  [-1.1604% -0.9801% -0.7965%]
                        Change within noise threshold.

linter/all-rules/numpy/globals.py
                        time:   [134.05 µs 134.15 µs 134.26 µs]
                        thrpt:  [21.977 MiB/s 21.995 MiB/s 22.012 MiB/s]
                 change:
                        time:   [-4.4571% -4.1175% -3.8268%] (p = 0.00 < 0.05)
                        thrpt:  [+3.9791% +4.2943% +4.6651%]
                        Performance has improved.
Found 8 outliers among 100 measurements (8.00%)
  2 (2.00%) low mild
  3 (3.00%) high mild
  3 (3.00%) high severe
linter/all-rules/pydantic/types.py
                        time:   [2.5627 ms 2.5669 ms 2.5720 ms]
                        thrpt:  [9.9158 MiB/s 9.9354 MiB/s 9.9516 MiB/s]
                 change:
                        time:   [-5.8304% -5.6374% -5.4452%] (p = 0.00 < 0.05)
                        thrpt:  [+5.7587% +5.9742% +6.1914%]
                        Performance has improved.
Found 7 outliers among 100 measurements (7.00%)
  6 (6.00%) high mild
  1 (1.00%) high severe
linter/all-rules/numpy/ctypeslib.py
                        time:   [1.3949 ms 1.3956 ms 1.3964 ms]
                        thrpt:  [11.925 MiB/s 11.931 MiB/s 11.937 MiB/s]
                 change:
                        time:   [-6.2496% -6.0856% -5.9293%] (p = 0.00 < 0.05)
                        thrpt:  [+6.3030% +6.4799% +6.6662%]
                        Performance has improved.
Found 7 outliers among 100 measurements (7.00%)
  3 (3.00%) high mild
  4 (4.00%) high severe
linter/all-rules/large/dataset.py
                        time:   [5.5951 ms 5.6019 ms 5.6093 ms]
                        thrpt:  [7.2527 MiB/s 7.2623 MiB/s 7.2711 MiB/s]
                 change:
                        time:   [-5.1781% -4.9783% -4.8070%] (p = 0.00 < 0.05)
                        thrpt:  [+5.0497% +5.2391% +5.4608%]
                        Performance has improved.
```

Still playing with this (the concepts need better names, documentation,
etc.), but opening up for feedback.
2023-07-27 13:01:56 -04:00
qdegraaf
0638a26347
Add AnyExpressionYield to consolidate ExprYield and ExprYieldFrom (#6127)
Co-authored-by: Micha Reiser <micha@reiser.io>
2023-07-27 16:01:16 +00:00
Charlie Marsh
13af91299d Avoid walking past root when resolving imports (#6126)
## Summary

Noticed in #5954: we walk _past_ the root rather than stopping _at_ the
root when attempting to traverse along the parent path. It's effectively
an off-by-one bug.
2023-07-27 10:22:13 -04:00
konsti
d317af442f Fix windows test warnings (#6124)
See CI run 1539299869.
These didn't fail CI because we run clippy on Linux only.
2023-07-27 10:22:13 -04:00
Micha Reiser
6bf6646c5d Respect indent when measuring with MeasureMode::AllLines (#6120) 2023-07-27 10:22:13 -04:00
konsti
9574ff3dc7 Unbreak main (#6123)
This fixes main breaking due to two merges.
2023-07-27 10:22:13 -04:00
konsti
06d9ff9577 Don't format trailing comma for lambda arguments (#5946)
**Summary** Lambda arguments don't have parentheses, so they shouldn't
get a magic trailing comma either. This fixes some cases of unstable
formatting.
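
For example (illustrative):

```python
# Lambda parameters have no parentheses, so the trailing comma below should
# not act as a magic trailing comma and force a split:
f = lambda a, b,: a + b
```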

**Test Plan** Added a regression test.

89 (from previously 145) instances of unstable formatting remaining.

```
$ cargo run --bin ruff_dev --release -- format-dev --stability-check --error-file formatter-ecosystem-errors.txt --multi-project target/checkouts > formatter-ecosystem-progress.txt
$ rg "Unstable formatting" target/formatter-ecosystem-errors.txt | wc -l
89
```

Closes #5892
2023-07-27 10:22:13 -04:00
Micha Reiser
40f54375cb
Pull in RustPython parser (#6099) 2023-07-27 09:29:11 +00:00
Victor Hugo Gomes
86539c1fc5
[flake8-pyi] Implement PYI046 (#6098)
## Summary
Checks for the presence of unused private `typing.Protocol` definitions.
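
A case this rule flags (illustrative):

```python
import typing


class _PrivateProto(typing.Protocol):  # private and never referenced: PYI046
    def method(self) -> None: ...
```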

ref #848 

## Test Plan

Snapshots and manual runs of flake8.
2023-07-27 02:34:56 +00:00
rembridge
d04367a042
call-datetime-without-tzinfo comment (#6105)
## Summary

Updated doc comment for `call_datetime_without_tzinfo.rs`. Online docs
also benefit from this update.

## Test Plan

Checked docs via
[mkdocs](389fe13c93/CONTRIBUTING.md (L267-L296))
2023-07-26 23:21:03 +00:00
Simon Brugman
ffdd653c54
[flake8-use-pathlib] Implement glob (PTH207) (#5939)
While working on the previous lints for `flake8-use-pathlib`, I
discovered that usage of `glob.glob` is
[widespread](https://grep.app/search?current=7&q=glob.glob%28&filter%5Blang%5D%5B0%5D=Python).
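
For example:

```python
import glob

files = glob.glob("*.py")  # PTH207: consider `pathlib.Path.glob` instead
```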
2023-07-26 23:15:05 +00:00
rembridge
132f07c27b
whitespace-before-parameters comment (#6103) 2023-07-26 23:01:47 +00:00
Victor Hugo Gomes
c0dbcb3434
[flake8-pyi] Implement PYI018 (#6018)
## Summary

Check for unused private `TypeVar` definitions. See the [original
implementation](2a86db8271/pyi.py (L1958)).

```
$ flake8 --select Y018 crates/ruff/resources/test/fixtures/flake8_pyi/PYI018.pyi

crates/ruff/resources/test/fixtures/flake8_pyi/PYI018.pyi:4:1: Y018 TypeVar "_T" is not used
crates/ruff/resources/test/fixtures/flake8_pyi/PYI018.pyi:5:1: Y018 TypeVar "_P" is not used
```

```
$ ./target/debug/ruff --select PYI018 crates/ruff/resources/test/fixtures/flake8_pyi/PYI018.pyi --no-cache

crates/ruff/resources/test/fixtures/flake8_pyi/PYI018.pyi:4:1: PYI018 TypeVar `_T` is never used
crates/ruff/resources/test/fixtures/flake8_pyi/PYI018.pyi:5:1: PYI018 TypeVar `_P` is never used
Found 2 errors.
```
In the file `unused_private_type_declaration.rs`, I'm planning to add
other rules that are similar to `PYI018`, like `PYI046`, `PYI047`, and
`PYI049`.

ref #848

## Test Plan

Snapshots and manual runs of flake8.
2023-07-26 22:56:15 +00:00