Commit graph

4500 commits

Author SHA1 Message Date
Charlie Marsh
5d9814d84d
Remove parentheses around some walrus operators (#6173)
## Summary

Closes https://github.com/astral-sh/ruff/issues/5781

## Test Plan

Added cases to
`crates/ruff_python_formatter/resources/test/fixtures/ruff/expression/named_expr.py`
one-by-one and adjusted the condition as needed.
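
To illustrate, a minimal before/after sketch of one case where the parentheses around a walrus are redundant (`compute`, `use`, and `value` are placeholder names; the exact contexts the formatter targets are driven by the fixture above):

```python
def compute():
    return 42

def use(value):
    print(value)

if (value := compute()):  # before: parentheses around the walrus are redundant here
    use(value)

if value := compute():  # after: the formatter can drop them
    use(value)
```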
2023-07-29 10:06:26 -04:00
Micha Reiser
1d7ad30188
CI: Update formatter dependencies (#6168) 2023-07-29 15:24:24 +02:00
Charlie Marsh
4231ed2fc3
Skip partial duplicates when applying multi-edit fixes (#6144)
## Summary

Right now, if we have two fixes that have an overlapping edit, but not
an _identical_ set of edits, they'll conflict, causing us to do another
linter traversal. Here, I've enabled the fixer to support partially
overlapping edits, which (as an example) lets us greatly reduce the
number of iterations required in the test suite.

The most common case here is that in which a bunch of edits need to
import some symbol, and then use that symbol, but in different ways. In
that case, all edits will have a common fix (to import the symbol), but
deviate in some way. With this change, we can do all of those edits in
one pass.

Note that the simplest way to enable this was to store sorted edits on
`Fix`. We don't allow modifying the edits on `Fix` once it's
constructed, so this is an easy change, and allows us to avoid a bunch
of clones and traversals later on.

Closes #5800.
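
The real fixer is written in Rust; the following is only a rough Python sketch of the strategy described above, with an illustrative `(start, end, replacement)` tuple encoding for edits:

```python
def apply_fixes(fixes):
    """Apply fixes whose edits are sorted (start, end, replacement) tuples."""
    applied = set()  # edits already applied in this pass
    occupied = []  # (start, end) ranges already rewritten
    accepted = []
    for fix in fixes:
        # Skip edits that exactly duplicate ones already applied (e.g. a shared
        # "import the symbol" edit common to several fixes).
        pending = [edit for edit in fix if edit not in applied]
        # Only defer the fix to a later pass if it genuinely overlaps text that
        # was already rewritten.
        conflicts = any(
            start < end_2 and start_2 < end
            for (start, end, _) in pending
            for (start_2, end_2) in occupied
        )
        if conflicts:
            continue
        for edit in pending:
            applied.add(edit)
            occupied.append((edit[0], edit[1]))
            accepted.append(edit)
    return sorted(accepted)
```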
2023-07-29 12:11:57 +00:00
Charlie Marsh
badbfb2d3e
Skip BOM when determining Locator's line starts (#6159)
## Summary

If a file has a BOM, the import sorter _always_ reports the imports as
unsorted. The acute issue is that we detect that the line has leading
content (before the imports), which we always consider a violation.
Rather than fixing that one site, this PR instead makes `.line_start`
BOM-aware.

Fixes https://github.com/astral-sh/ruff/issues/6155.
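
Conceptually, the change amounts to skipping a leading BOM when computing line-start offsets. A minimal Python sketch of that idea (the actual `Locator` is implemented in Rust):

```python
BOM = "\ufeff"

def line_starts(source: str) -> list[int]:
    # Skip a leading BOM so the first line isn't treated as having
    # "leading content" before its imports.
    first = len(BOM) if source.startswith(BOM) else 0
    starts = [first]
    starts.extend(i + 1 for i, ch in enumerate(source) if ch == "\n")
    return starts
```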
2023-07-29 11:47:13 +00:00
Dhruv Manilawala
44bdf20221
[pep8-naming]: New config option extend-ignore-names (#6169)
## Summary

This PR adds a new config option for the `pep8-naming` plugin called
`extend-ignore-names`, which is used to extend the default values of the
`ignore-names` option.

resolves: #6050
2023-07-29 17:11:04 +05:30
Dhruv Manilawala
3c99fbf808
Implement --diff for Jupyter Notebooks (#6149)
## Summary

Implement `--diff` for Jupyter Notebooks

## Test Plan

1. Use `crates/ruff/resources/test/fixtures/jupyter/isort.ipynb` as a test case
   and add a markdown cell in between the code cells to check that the diff
   outputs the correct cell index.
2. Run the command:
`cargo run --bin ruff --package ruff_cli -- check --no-cache --isolated
--select=ALL crates/ruff/resources/test/fixtures/jupyter/isort.ipynb
--fix --diff`

<details><summary>Example output:</summary>
<p>

```diff
--- /Users/dhruv/playground/ruff/notebooks/test.ipynb:cell 0
+++ /Users/dhruv/playground/ruff/notebooks/test.ipynb:cell 0
@@ -1,3 +0,0 @@
-from pathlib import Path
-import random
-import math
--- /Users/dhruv/playground/ruff/notebooks/test.ipynb:cell 4
+++ /Users/dhruv/playground/ruff/notebooks/test.ipynb:cell 4
@@ -1,5 +1,3 @@
-from typing import Any
-import collections
 # Newline should be added here
 def foo():
     pass

--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 8
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 8
@@ -1,8 +1,7 @@
 import pprint
 import tempfile
 
-from IPython import display
 import matplotlib.pyplot as plt
-
 import tensorflow as tf
-import tensorflow_datasets as tfds
+import tensorflow_datasets as tfds
+from IPython import display
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 10
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 10
@@ -1,5 +1,4 @@
 import tensorflow_models as tfm
 
 # These are not in the tfm public API for v2.9. They will be available in v2.10
-from official.vision.serving import export_saved_model_lib
-import official.core.train_lib
+from official.vision.serving import export_saved_model_lib
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 13
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 13
@@ -1,5 +1,5 @@
-exp_config = tfm.core.exp_factory.get_exp_config('resnet_imagenet')
-tfds_name = 'cifar10'
+exp_config = tfm.core.exp_factory.get_exp_config("resnet_imagenet")
+tfds_name = "cifar10"
 ds,ds_info = tfds.load(
 tfds_name,
 with_info=True)
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 15
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 15
@@ -6,12 +6,12 @@
 # Configure training and testing data
 batch_size = 128
 
-exp_config.task.train_data.input_path = ''
+exp_config.task.train_data.input_path = ""
 exp_config.task.train_data.tfds_name = tfds_name
-exp_config.task.train_data.tfds_split = 'train'
+exp_config.task.train_data.tfds_split = "train"
 exp_config.task.train_data.global_batch_size = batch_size
 
-exp_config.task.validation_data.input_path = ''
+exp_config.task.validation_data.input_path = ""
 exp_config.task.validation_data.tfds_name = tfds_name
-exp_config.task.validation_data.tfds_split = 'test'
+exp_config.task.validation_data.tfds_split = "test"
 exp_config.task.validation_data.global_batch_size = batch_size
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 17
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 17
@@ -1,16 +1,16 @@
 logical_device_names = [logical_device.name for logical_device in tf.config.list_logical_devices()]
 
-if 'GPU' in ''.join(logical_device_names):
-  print('This may be broken in Colab.')
-  device = 'GPU'
-elif 'TPU' in ''.join(logical_device_names):
-  print('This may be broken in Colab.')
-  device = 'TPU'
+if "GPU" in "".join(logical_device_names):
+  print("This may be broken in Colab.")
+  device = "GPU"
+elif "TPU" in "".join(logical_device_names):
+  print("This may be broken in Colab.")
+  device = "TPU"
 else:
-  print('Running on CPU is slow, so only train for a few steps.')
-  device = 'CPU'
+  print("Running on CPU is slow, so only train for a few steps.")
+  device = "CPU"
 
-if device=='CPU':
+if device=="CPU":
   train_steps = 20
   exp_config.trainer.steps_per_loop = 5
 else:
@@ -20,9 +20,9 @@
 exp_config.trainer.summary_interval = 100
 exp_config.trainer.checkpoint_interval = train_steps
 exp_config.trainer.validation_interval = 1000
-exp_config.trainer.validation_steps =  ds_info.splits['test'].num_examples // batch_size
+exp_config.trainer.validation_steps =  ds_info.splits["test"].num_examples // batch_size
 exp_config.trainer.train_steps = train_steps
-exp_config.trainer.optimizer_config.learning_rate.type = 'cosine'
+exp_config.trainer.optimizer_config.learning_rate.type = "cosine"
 exp_config.trainer.optimizer_config.learning_rate.cosine.decay_steps = train_steps
 exp_config.trainer.optimizer_config.learning_rate.cosine.initial_learning_rate = 0.1
 exp_config.trainer.optimizer_config.warmup.linear.warmup_steps = 100
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 21
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 21
@@ -1,14 +1,14 @@
 logical_device_names = [logical_device.name for logical_device in tf.config.list_logical_devices()]
 
 if exp_config.runtime.mixed_precision_dtype == tf.float16:
-    tf.keras.mixed_precision.set_global_policy('mixed_float16')
+    tf.keras.mixed_precision.set_global_policy("mixed_float16")
 
-if 'GPU' in ''.join(logical_device_names):
+if "GPU" in "".join(logical_device_names):
   distribution_strategy = tf.distribute.MirroredStrategy()
-elif 'TPU' in ''.join(logical_device_names):
+elif "TPU" in "".join(logical_device_names):
   tf.tpu.experimental.initialize_tpu_system()
-  tpu = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='/device:TPU_SYSTEM:0')
+  tpu = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="/device:TPU_SYSTEM:0")
   distribution_strategy = tf.distribute.experimental.TPUStrategy(tpu)
 else:
-  print('Warning: this will be really slow.')
+  print("Warning: this will be really slow.")
   distribution_strategy = tf.distribute.OneDeviceStrategy(logical_device_names[0])
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 23
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 23
@@ -1,5 +1,3 @@
 with distribution_strategy.scope():
   model_dir = tempfile.mkdtemp()
   task = tfm.core.task_factory.get_task(exp_config.task, logging_dir=model_dir)
-
-#  tf.keras.utils.plot_model(task.build_model(), show_shapes=True)
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 24
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 24
@@ -1,4 +1,4 @@
 for images, labels in task.build_inputs(exp_config.task.train_data).take(1):
   print()
-  print(f'images.shape: {str(images.shape):16}  images.dtype: {images.dtype!r}')
-  print(f'labels.shape: {str(labels.shape):16}  labels.dtype: {labels.dtype!r}')
+  print(f"images.shape: {images.shape!s:16}  images.dtype: {images.dtype!r}")
+  print(f"labels.shape: {labels.shape!s:16}  labels.dtype: {labels.dtype!r}")
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 27
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 27
@@ -1 +1 @@
-plt.hist(images.numpy().flatten());
+plt.hist(images.numpy().flatten())
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 29
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 29
@@ -1,2 +1,2 @@
-label_info = ds_info.features['label']
+label_info = ds_info.features["label"]
 label_info.int2str(1)
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 31
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 31
@@ -10,9 +10,6 @@
     if predictions is None:
       plt.title(label_info.int2str(labels[i]))
     else:
-      if labels[i] == predictions[i]:
-        color = 'g'
-      else:
-        color = 'r'
+      color = "g" if labels[i] == predictions[i] else "r"
       plt.title(label_info.int2str(predictions[i]), color=color)
     plt.axis("off")
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 35
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 35
@@ -1,3 +1,3 @@
-plt.figure(figsize=(10, 10));
+plt.figure(figsize=(10, 10))
 for images, labels in task.build_inputs(exp_config.task.validation_data).take(1):
   show_batch(images, labels)
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 37
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 37
@@ -1,7 +1,7 @@
 model, eval_logs = tfm.core.train_lib.run_experiment(
     distribution_strategy=distribution_strategy,
     task=task,
-    mode='train_and_eval',
+    mode="train_and_eval",
     params=exp_config,
     model_dir=model_dir,
     run_post_eval=True)
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 38
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 38
@@ -1 +0,0 @@
-#  tf.keras.utils.plot_model(model, show_shapes=True)
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 40
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 40
@@ -1,4 +1,4 @@
 for key, value in eval_logs.items():
     if isinstance(value, tf.Tensor):
       value = value.numpy()
-    print(f'{key:20}: {value:.3f}')
+    print(f"{key:20}: {value:.3f}")
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 42
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 42
@@ -4,5 +4,5 @@
 
 show_batch(images, labels, tf.cast(predictions, tf.int32))
 
-if device=='CPU':
-  plt.suptitle('The model was only trained for a few steps, it is not expected to do well.')
+if device=="CPU":
+  plt.suptitle("The model was only trained for a few steps, it is not expected to do well.")
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 45
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 45
@@ -1,8 +1,8 @@
 # Saving and exporting the trained model
 export_saved_model_lib.export_inference_graph(
-    input_type='image_tensor',
+    input_type="image_tensor",
     batch_size=1,
     input_image_size=[32, 32],
     params=exp_config,
     checkpoint_path=tf.train.latest_checkpoint(model_dir),
-    export_dir='./export/')
+    export_dir="./export/")
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 47
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 47
@@ -1,3 +1,3 @@
 # Importing SavedModel
-imported = tf.saved_model.load('./export/')
-model_fn = imported.signatures['serving_default']
+imported = tf.saved_model.load("./export/")
+model_fn = imported.signatures["serving_default"]
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 49
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 49
@@ -1,10 +1,10 @@
 plt.figure(figsize=(10, 10))
-for data in tfds.load('cifar10', split='test').batch(12).take(1):
+for data in tfds.load("cifar10", split="test").batch(12).take(1):
   predictions = []
-  for image in data['image']:
-    index = tf.argmax(model_fn(image[tf.newaxis, ...])['logits'], axis=1)[0]
+  for image in data["image"]:
+    index = tf.argmax(model_fn(image[tf.newaxis, ...])["logits"], axis=1)[0]
     predictions.append(index)
-  show_batch(data['image'], data['label'], predictions)
+  show_batch(data["image"], data["label"], predictions)
 
-  if device=='CPU':
-    plt.suptitle('The model was only trained for a few steps, it is not expected to do better than random.')
+  if device=="CPU":
+    plt.suptitle("The model was only trained for a few steps, it is not expected to do better than random.")

Would fix 61 errors.
```

</p>
</details> 

resolves: #4727
2023-07-29 04:22:56 +00:00
Charlie Marsh
4802c7c7d8
Avoid key-in-dict violations for self accesses (#6165)
Closes https://github.com/astral-sh/ruff/issues/6163.
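A hedged illustration of the kind of pattern this exempts (the class is hypothetical):

```python
class Config:
    def __init__(self):
        self._data = {"debug": True}

    def keys(self):
        return self._data.keys()

    def has(self, key):
        # Previously reported by the key-in-dict rule; rewriting it to
        # `key in self` would even raise TypeError here, since Config defines
        # neither __contains__ nor __iter__.
        return key in self.keys()
```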
2023-07-29 03:35:26 +00:00
Charlie Marsh
646ff6497c
Ignore end-of-line file exemption comments (#6160)
## Summary

This PR protects against code like:

```python
from typing import Optional

import bar  # ruff: noqa
import baz

class Foo:
    x: Optional[str] = None
```

In which the user wrote `# ruff: noqa` to ignore a specific error, not
realizing that it was a file-level exemption that thus turned off all
lint rules.

Specifically, if a `# ruff: noqa` directive is not at the start of a
line, we now ignore it and warn, since this is almost certainly a
mistake.
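
For contrast, the suppressions such a user most likely wanted:

```python
# Line-scoped suppression of a single rule, which is what was probably intended:
import bar  # noqa: F401

# For a genuine file-level exemption, `# ruff: noqa` goes on its own line,
# conventionally at the top of the file, where it is still honored.
```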
2023-07-29 00:40:32 +00:00
Victor Hugo Gomes
e0d5c7564f
[flake8-pyi] Implement PYI049 (#6136)
## Summary

Checks for the presence of unused private `typing.TypedDict`
definitions.

ref #848 

## Test Plan

Snapshots and manual runs of flake8
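
For illustration, hypothetical stub contents showing what the rule reports:

```python
from typing import TypedDict

class _UnusedOptions(TypedDict):  # flagged: private and never referenced
    verbose: bool

class PublicOptions(TypedDict):  # public definitions are not the target here
    verbose: bool
```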
2023-07-29 00:34:36 +00:00
Victor Hugo Gomes
7838d8c8af
Implement PYI047 (#6134)
## Summary

Checks for the presence of unused private `typing.TypeAlias`
definitions.

ref #848 

## Test Plan

Snapshots and manual runs of flake8
2023-07-29 00:21:29 +00:00
Zanie Blue
047c211837
Add semantic analysis of type aliases and parameters (#6109)
Requires https://github.com/astral-sh/RustPython-Parser/pull/42
Related https://github.com/PyCQA/pyflakes/pull/778
[PEP-695](https://peps.python.org/pep-0695)
Part of #5062 

## Summary

Adds a scope for type parameters, a type parameter binding kind, and
checker visitation of type parameters in type alias statements, function
definitions, and class definitions.

A few changes were necessary to ensure correctness following the
insertion of a new scope between function and class scopes and their
parent.

## Test Plan

Undefined name snapshots.

Unused type parameter rule will be added as follow-up.
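
For reference, the PEP 695 constructs involved (Python 3.12+ syntax); each introduces type parameters that now receive their own scope and bindings:

```python
type Pair[T] = tuple[T, T]  # type alias statement with a type parameter

def first[T](items: list[T]) -> T:  # type parameters on a function definition
    return items[0]

class Box[T]:  # type parameters on a class definition
    def __init__(self, value: T) -> None:
        self.value = value
```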
2023-07-28 17:06:37 -05:00
Charlie Marsh
134d447d4c
Avoid refactoring x[:1]-like slices in RUF015 (#6150)
## Summary

Right now, `RUF015` will try to rewrite `x[:1]` as `[next(x)]`. This
isn't equivalent when `x` is, for example, empty: slicing like
`x[:1]` is forgiving, but `next` raises `StopIteration`. For me, this is
a little too much of a deviation to be comfortable with, and most of the
value in this rule is the `x[0]` to `next(x)` conversion anyway.

Closes https://github.com/astral-sh/ruff/issues/6148.
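
The behavioral difference in question, shown on an empty list:

```python
x = []
print(x[:1])  # [] -- slicing past the end is forgiving
next(iter(x))  # raises StopIteration
```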
2023-07-28 09:38:13 -04:00
Charlie Marsh
cd4147423c
Skip PERF203 violations for multi-statement loops (#6145)
Closes https://github.com/astral-sh/ruff/issues/5858.
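As a hedged reading of the change (helper values below are placeholders), a try/except inside a loop is still reported, but not when the loop body contains additional statements:

```python
def process(item):
    return int(item)

for item in ["1", "x"]:
    try:  # still reported: the try/except is the entire loop body
        process(item)
    except ValueError:
        continue

for item in ["1", "x"]:
    print(item)  # an additional statement: the violation is now skipped
    try:
        process(item)
    except ValueError:
        continue
```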
2023-07-28 04:55:55 +00:00
Charlie Marsh
d15436458f
Only run unused private type rules over finalized bindings (#6142)
## Summary

In #6134 and #6136, we see some false positives for "shadowed" class
definitions. For example, here, the first definition is flagged as
unused, since from the perspective of the semantic model (which doesn't
understand branching), it appears to be immediately shadowed in the
`else`, and thus never used:

```python
if sys.version_info >= (3, 11):
    class _RootLoggerConfiguration(TypedDict, total=False):
        level: _Level
        filters: Sequence[str | _FilterType]
        handlers: Sequence[str]

else:
    class _RootLoggerConfiguration(TypedDict, total=False):
        level: _Level
        filters: Sequence[str]
        handlers: Sequence[str]
```

Instead of looking at _all_ bindings, we should instead look at the
"live" bindings, which is similar to how other rules (like unused
variables detection) is structured. We thus move the rule from
`bindings.rs` (which iterates over _all_ bindings, regardless of whether
they're shadowed) to `deferred_scopes.rs`, which iterates over all
"live" bindings once a scope has been fully analyzed.

## Test Plan

`cargo test`
2023-07-28 02:16:09 +00:00
Charlie Marsh
0bc3edf6c9
Add documentation and test cases for redefinition (#6135) 2023-07-28 00:01:42 +00:00
Aarni Koskela
3d54d31cd9
Implement E241 and E242 (tab/multiple ws after commas) (#6094)
## Summary

This PR implements pycodestyle's E241 (multiple whitespace after comma) and
E242 (tab after comma) lints.

These are marked as nursery rules like many other pycodestyle rules.

Refs #2402

## Test Plan

E24.py copied from pycodestyle.
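
For illustration, examples matching pycodestyle's definitions of the two checks:

```python
a = (1, 2)    # OK: a single space after the comma
b = (1,  2)   # E241: multiple whitespace after the comma
c = (1,	2)    # E242: a (literal) tab after the comma
```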
2023-07-27 18:58:41 +00:00
Tom Kuson
1418ee62f8
Add more documentation to the flake8-bandit rules (#6128)
## Summary

Completes the documentation for the ruleset, apart from four rules which
have contradictions, so need to be thought about more regarding how to
document that. Related to #2646.

## Test Plan

`python scripts/test_docs_formatted.py`
2023-07-27 18:57:45 +00:00
Harutaka Kawamura
bf987f80f4
Add PT017 and PT019 docs (#6115) 2023-07-27 18:56:34 +00:00
rembridge
bb08eea5cc
missing-whitespace-around-operators comment (#6106)
**Summary**

Updated doc comments for `missing_whitespace_around_operator.rs`. Online
docs also benefit from this update.

**Test Plan**

Checked docs via
[mkdocs](389fe13c93/CONTRIBUTING.md (L267-L296))
2023-07-27 14:52:43 -04:00
Tom Kuson
d16216a2c2
Add documentation to the flynt rules (#6130)
## Summary

Completes the documentation for the one and only (current) rule in the
`flynt` ruleset. Related to #2646.

## Test Plan

`python scripts/test_docs_formatted.py`
2023-07-27 14:32:59 -04:00
Jelle van der Waa
0853004f41
[pylint] Implement eq-without-hash rule (PLW1641) (#5955)
Implement
https://pylint.pycqa.org/en/latest/user_guide/messages/warning/eq-without-hash.html
Issue https://github.com/astral-sh/ruff/issues/970

It's not enabled by default in pylint, so I guess it shouldn't be enabled by
default in Ruff either?
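
The pattern the rule reports, for illustration: defining `__eq__` without `__hash__` implicitly sets `__hash__` to `None`, making instances unhashable:

```python
class Point:
    def __init__(self, x: int, y: int) -> None:
        self.x, self.y = x, y

    def __eq__(self, other: object) -> bool:  # PLW1641: no matching __hash__
        return isinstance(other, Point) and (self.x, self.y) == (other.x, other.y)
```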
2023-07-27 18:28:44 +00:00
Harutaka Kawamura
fb5bbe30c7
Update SIM115 to cover pathlib.Path.open (#6118) 2023-07-27 14:20:52 -04:00
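A hedged illustration of the extended coverage (`data.txt` is a placeholder path):

```python
from pathlib import Path

Path("data.txt").write_text("hello")  # create a placeholder file so this runs

f = Path("data.txt").open()  # flagged: not closed via a context manager
f.close()

with Path("data.txt").open() as f:  # preferred
    contents = f.read()
```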
Charlie Marsh
dd706c7a35
Fix E211 documentation (#6133) 2023-07-27 17:19:33 +00:00
Charlie Marsh
e15b9c5572
Cache name resolutions in the semantic model (#6047)
## Summary

This PR stores the mapping from `ExprName` node to resolved `BindingId`,
which lets us skip scope lookups in `resolve_call_path`. It's enabled by
#6045, since that PR ensures that when we analyze a node (and thus call
`resolve_call_path`), we'll have already visited its `ExprName`
elements.

In more detail: imagine that we're traversing over `foo.bar()`. When we
read `foo`, it will be an `ExprName`, which we'll then resolve to a
binding via `handle_node_load`. With this change, we then store that
binding in a map. Later, if we call `collect_call_path` on `foo.bar`,
we'll identify `foo` (the "head" of the attribute) and grab the resolved
binding in that map. _Almost_ all names are now resolved in advance,
though it's not a strict requirement, and some rules break that pattern
(e.g., if we're analyzing arguments, and they need to inspect their
annotations, which are visited in a deferred manner).

This improves performance by 4-6% on the all-rules benchmark. It looks
like it hurts performance (1-2% drop) in the default-rules benchmark,
presumably because those rules don't call `resolve_call_path` nearly as
much, and so we're paying for these extra writes.

Here's the benchmark data:

```
linter/default-rules/numpy/globals.py
                        time:   [67.270 µs 67.380 µs 67.489 µs]
                        thrpt:  [43.720 MiB/s 43.792 MiB/s 43.863 MiB/s]
                 change:
                        time:   [+0.4747% +0.7752% +1.0626%] (p = 0.00 < 0.05)
                        thrpt:  [-1.0514% -0.7693% -0.4724%]
                        Change within noise threshold.
Found 1 outliers among 100 measurements (1.00%)
  1 (1.00%) high severe
linter/default-rules/pydantic/types.py
                        time:   [1.4067 ms 1.4105 ms 1.4146 ms]
                        thrpt:  [18.028 MiB/s 18.081 MiB/s 18.129 MiB/s]
                 change:
                        time:   [+1.3152% +1.6953% +2.0414%] (p = 0.00 < 0.05)
                        thrpt:  [-2.0006% -1.6671% -1.2981%]
                        Performance has regressed.
linter/default-rules/numpy/ctypeslib.py
                        time:   [637.67 µs 638.96 µs 640.28 µs]
                        thrpt:  [26.006 MiB/s 26.060 MiB/s 26.113 MiB/s]
                 change:
                        time:   [+1.5859% +1.8109% +2.0353%] (p = 0.00 < 0.05)
                        thrpt:  [-1.9947% -1.7787% -1.5611%]
                        Performance has regressed.
linter/default-rules/large/dataset.py
                        time:   [3.2289 ms 3.2336 ms 3.2383 ms]
                        thrpt:  [12.563 MiB/s 12.581 MiB/s 12.599 MiB/s]
                 change:
                        time:   [+0.8029% +0.9898% +1.1740%] (p = 0.00 < 0.05)
                        thrpt:  [-1.1604% -0.9801% -0.7965%]
                        Change within noise threshold.

linter/all-rules/numpy/globals.py
                        time:   [134.05 µs 134.15 µs 134.26 µs]
                        thrpt:  [21.977 MiB/s 21.995 MiB/s 22.012 MiB/s]
                 change:
                        time:   [-4.4571% -4.1175% -3.8268%] (p = 0.00 < 0.05)
                        thrpt:  [+3.9791% +4.2943% +4.6651%]
                        Performance has improved.
Found 8 outliers among 100 measurements (8.00%)
  2 (2.00%) low mild
  3 (3.00%) high mild
  3 (3.00%) high severe
linter/all-rules/pydantic/types.py
                        time:   [2.5627 ms 2.5669 ms 2.5720 ms]
                        thrpt:  [9.9158 MiB/s 9.9354 MiB/s 9.9516 MiB/s]
                 change:
                        time:   [-5.8304% -5.6374% -5.4452%] (p = 0.00 < 0.05)
                        thrpt:  [+5.7587% +5.9742% +6.1914%]
                        Performance has improved.
Found 7 outliers among 100 measurements (7.00%)
  6 (6.00%) high mild
  1 (1.00%) high severe
linter/all-rules/numpy/ctypeslib.py
                        time:   [1.3949 ms 1.3956 ms 1.3964 ms]
                        thrpt:  [11.925 MiB/s 11.931 MiB/s 11.937 MiB/s]
                 change:
                        time:   [-6.2496% -6.0856% -5.9293%] (p = 0.00 < 0.05)
                        thrpt:  [+6.3030% +6.4799% +6.6662%]
                        Performance has improved.
Found 7 outliers among 100 measurements (7.00%)
  3 (3.00%) high mild
  4 (4.00%) high severe
linter/all-rules/large/dataset.py
                        time:   [5.5951 ms 5.6019 ms 5.6093 ms]
                        thrpt:  [7.2527 MiB/s 7.2623 MiB/s 7.2711 MiB/s]
                 change:
                        time:   [-5.1781% -4.9783% -4.8070%] (p = 0.00 < 0.05)
                        thrpt:  [+5.0497% +5.2391% +5.4608%]
                        Performance has improved.
```

Still playing with this (the concepts need better names, documentation,
etc.), but opening up for feedback.
2023-07-27 13:01:56 -04:00
qdegraaf
0638a26347
Add AnyExpressionYield to consolidate ExprYield and ExprYieldFrom (#6127)
Co-authored-by: Micha Reiser <micha@reiser.io>
2023-07-27 16:01:16 +00:00
konsti
2a65e6fc38 Explain check_docs_formatted.py error message (#6125)
## Summary

This is an error-message-only change, meant to point an implementor of a new
rule whose bad example is unformatted or invalid to the right code.

## Test Plan

n/a
2023-07-27 10:22:13 -04:00
Charlie Marsh
13af91299d Avoid walking past root when resolving imports (#6126)
## Summary

Noticed in #5954: we walk _past_ the root rather than stopping _at_ the
root when attempting to traverse along the parent path. It's effectively
an off-by-one bug.
2023-07-27 10:22:13 -04:00
konsti
d317af442f Fix windows test warnings (#6124)
See
1539299869.
These didn't fail CI because we run clippy on linux only.
2023-07-27 10:22:13 -04:00
Micha Reiser
6bf6646c5d Respect indent when measuring with MeasureMode::AllLines (#6120) 2023-07-27 10:22:13 -04:00
konsti
9574ff3dc7 Unbreak main (#6123)
This fixes `main`, which broke due to two merges.
2023-07-27 10:22:13 -04:00
konsti
06d9ff9577 Don't format trailing comma for lambda arguments (#5946)
**Summary** Lambda arguments don't have parentheses, so they shouldn't
get a magic trailing comma either. This fixes some unstable formatting.

**Test Plan** Added a regression test.

89 (from previously 145) instances of unstable formatting remaining.

```
$ cargo run --bin ruff_dev --release -- format-dev --stability-check --error-file formatter-ecosystem-errors.txt --multi-project target/checkouts > formatter-ecosystem-progress.txt
$ rg "Unstable formatting" target/formatter-ecosystem-errors.txt | wc -l
89
```

Closes #5892
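
The case in question, for illustration: a lambda's parameters have no parentheses, so the trailing comma after `b` below must not force the parameters to expand:

```python
f = lambda a, b,: a + b  # valid syntax; the trailing comma must not force expansion
print(f(1, 2))
```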
2023-07-27 10:22:13 -04:00
Micha Reiser
40f54375cb
Pull in RustPython parser (#6099) 2023-07-27 09:29:11 +00:00
Victor Hugo Gomes
86539c1fc5
[flake8-pyi] Implement PYI046 (#6098)
## Summary
Checks for the presence of unused private `typing.Protocol` definitions.

ref #848 

## Test Plan

Snapshots and manual runs of flake8.
2023-07-27 02:34:56 +00:00
rembridge
d04367a042
call-datetime-without-tzinfo comment (#6105)
## Summary

Updated doc comment for `call_datetime_without_tzinfo.rs`. Online docs
also benefit from this update.

## Test Plan

Checked docs via
[mkdocs](389fe13c93/CONTRIBUTING.md (L267-L296))
2023-07-26 23:21:03 +00:00
Simon Brugman
ffdd653c54
[flake8-use-pathlib] Implement glob (PTH207) (#5939)
While working on the previous lints for `flake8-use-pathlib`, I discovered that
the usage of `glob.glob` is
[widespread](https://grep.app/search?current=7&q=glob.glob%28&filter%5Blang%5D%5B0%5D=Python).
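
A hedged illustration of the usage this lint targets and its `pathlib` counterpart (`src` is a placeholder directory):

```python
import glob
from pathlib import Path

paths = glob.glob("src/*.py")  # PTH207: prefer pathlib's glob
paths = [str(p) for p in Path("src").glob("*.py")]  # pathlib equivalent
```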
2023-07-26 23:15:05 +00:00
rembridge
132f07c27b
whitespace-before-parameters comment (#6103) 2023-07-26 23:01:47 +00:00
Victor Hugo Gomes
c0dbcb3434
[flake8-pyi] Implement PYI018 (#6018)
## Summary

Check for unused private `TypeVar`. See [original
implementation](2a86db8271/pyi.py (L1958)).

```
$ flake8 --select Y018 crates/ruff/resources/test/fixtures/flake8_pyi/PYI018.pyi

crates/ruff/resources/test/fixtures/flake8_pyi/PYI018.pyi:4:1: Y018 TypeVar "_T" is not used
crates/ruff/resources/test/fixtures/flake8_pyi/PYI018.pyi:5:1: Y018 TypeVar "_P" is not used
```

```
$ ./target/debug/ruff --select PYI018 crates/ruff/resources/test/fixtures/flake8_pyi/PYI018.pyi --no-cache

crates/ruff/resources/test/fixtures/flake8_pyi/PYI018.pyi:4:1: PYI018 TypeVar `_T` is never used
crates/ruff/resources/test/fixtures/flake8_pyi/PYI018.pyi:5:1: PYI018 TypeVar `_P` is never used
Found 2 errors.
```
In the file `unused_private_type_declaration.rs`, I'm planning to add
other rules that are similar to `PYI018`, such as `PYI046`, `PYI047`, and
`PYI049`.

ref #848

## Test Plan

Snapshots and manual runs of flake8.
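
For reference, stub contents consistent with the output above (the real fixture has additional leading lines):

```python
from typing import ParamSpec, TypeVar

_T = TypeVar("_T")  # private and never referenced
_P = ParamSpec("_P")  # private and never referenced
```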
2023-07-26 22:56:15 +00:00
Victor Hugo Gomes
788643f718
Add "--select E402" to example snippet in CONTRIBUTING.md (#6108)
## Summary
In Ruff, only a subset of rules is enabled by default. This change
aims to clarify that when adding a new rule, you must explicitly
pass `--select name_of_rule` to ensure the rule gets executed.

This was talked about on Discord a while back.

## Test Plan
Checked docs via mkdocs
2023-07-26 22:48:53 +00:00
Charlie Marsh
64a186272f
Move utf8-encoding-declaration to token-based rules (#6110)
Closes #5979.
2023-07-26 22:42:37 +00:00
Charlie Marsh
8113615534
Add some additional documentation around import categorization (#6107)
Closes https://github.com/astral-sh/ruff/issues/5529.
2023-07-26 22:39:01 +00:00
konsti
ecf4058e52
Fix cargo test -p ruff (#6104) 2023-07-26 22:44:53 +02:00
Zanie Blue
2d2673f613
Add comment regarding class scope short circuit (#6101) 2023-07-26 14:55:05 -05:00
Harutaka Kawamura
564304eba2
Add PT001 documentation (#6023) 2023-07-26 18:05:25 +00:00
Harutaka Kawamura
5b8fc753ec
Add PT024 documentation (#6026) 2023-07-26 13:48:37 -04:00
konsti
13f9a16e33
Rewrite placement logic (#6040)
## Summary
This is a rewrite of the main comment placement logic. `place_comment`
now has three parts:

- place own line comments
  - between branches
  - after a branch
- place end-of-line comments
  - after colon
  - after a branch
- place comments for specific nodes (that include module level comments)

The rewrite fixed three bugs: `class A: # trailing comment` comments now
stay end-of-line, `try: # comment` remains end-of-line and deeply
indented try-else-finally comments remain with the right nested
statement.
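
Written out as Python source, the three cases from these bug fixes (helper values are placeholders):

```python
class A:  # trailing comment stays end-of-line
    ...

try:  # comment stays end-of-line
    value = int("1")
except ValueError:
    pass
else:
    # deeply indented comment stays with the nested statement
    print(value)
finally:
    print("done")
```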

It will be much easier to support more nodes with alternative branches, since
this is abstracted away by `is_node_with_body` and the first/last-child
helpers. Adding new node types can now be done by adding an entry to the
`place_comment` match. The code went from 1526 lines before #6033 to
1213 lines now.

I think it's easier to just read the new `placement.rs` rather than
review the diff.

## Test Plan

The existing fixtures stay the same or improve, plus new ones cover
the bug fixes.
2023-07-26 16:21:23 +00:00
Micha Reiser
2cf00fee96
Remove parser dependency from ruff-python-ast (#6096) 2023-07-26 17:47:22 +02:00
Harutaka Kawamura
99127243f4
Raise PTH201 for Path("") (#6095) 2023-07-26 09:22:46 -04:00
Harutaka Kawamura
77396c6f92
Fix SIM102 to handle indented elif (#6072)

## Summary

The `SIM102` auto-fix fails if `elif` is indented like this:

## Example

```python
def f():
    # SIM102
    if a:
        pass
    elif b:
        if c:
            d
```

```
> cargo run -p ruff_cli -- check --select SIM102 --fix a.py
...
error: Failed to fix nested if: Failed to extract statement from source
a.py:5:5: SIM102 Use a single `if` statement instead of nested `if` statements
Found 1 error.
```

## Test Plan

New test
2023-07-26 14:37:32 +02:00
Micha Reiser
16e1737d1b
Use cursor based lexer (#6012) 2023-07-26 11:32:26 +02:00
Dhruv Manilawala
025fa4eba8
Integrate the new Jupyter AST nodes in Ruff (#6086)
## Summary

This PR adds the implementation for the new Jupyter AST nodes, i.e.,
`ExprLineMagic` and `StmtLineMagic`.

## Test Plan

Add test cases for `unparse` containing magic commands

resolves: #6087
2023-07-26 08:20:30 +00:00