Commit graph

1905 commits

Author SHA1 Message Date
Charlie Marsh
b095b7204b
Add a TypeParams node to the AST (#6261)
## Summary

Similar to #6259, this PR adds a `TypeParams` node to the AST, to
capture the list of type parameters with their surrounding brackets.

If a statement lacks type parameters, the `type_params` field will be
`None`.
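
For reference, a sketch of the PEP 695 syntax the node covers (Python 3.12+; the comments are illustrative, not the field's actual representation):

```python
class Container[T]: ...                  # `type_params` covers the bracketed `[T]`
def first[T](items: list[T]) -> T: ...  # `type_params` covers the bracketed `[T]`
class Plain: ...                         # no type parameters: `type_params` is `None`
```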
2023-08-02 14:12:45 +00:00
Charlie Marsh
981e64f82b
Introduce an Arguments AST node for function calls and class definitions (#6259)
## Summary

This PR adds a new `Arguments` AST node, which we can use for function
calls and class definitions.

The `Arguments` node spans from the left (open) to right (close)
parentheses inclusive.

In the case of classes, the `Arguments` node is optional (an `Option`), to differentiate
between:

```python
# None
class C: ...

# Some, with empty vectors
class C(): ...
```

In this PR, we don't really leverage this change (except that a few
rules get much simpler, since we don't need to lex to find the start and
end ranges of the parentheses, e.g.,
`crates/ruff/src/rules/pyupgrade/rules/lru_cache_without_parameters.rs`,
`crates/ruff/src/rules/pyupgrade/rules/unnecessary_class_parentheses.rs`).

In future PRs, this will be especially helpful for the formatter, since
we can track comments enclosed on the node itself.
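
As an assumed illustration of the call case (made-up names, not taken from the PR):

```python
def foo(x, y=0): ...

foo(1, y=2)  # the `Arguments` node spans `(1, y=2)`, parentheses included
```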

## Test Plan

`cargo test`
2023-08-02 10:01:13 -04:00
Ran Benita
0d62ad2480
Permit ClassVar and Final without subscript in RUF012 (#6273)
Fix #6267.
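
A hedged illustration of the change (class and attribute names are made up):

```python
from typing import ClassVar, Final


class Config:
    defaults: ClassVar = {}            # bare `ClassVar`: now permitted
    constants: Final = []              # bare `Final`: now permitted as well
    options: ClassVar[list[str]] = []  # subscripted form, permitted as before
```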
2023-08-02 12:58:44 +00:00
Harutaka Kawamura
b4f224ecea
Fix links in docs (#6265)
## Summary

Before:

<img width="1031" alt="Screen Shot 2023-08-02 at 15 57 10"
src="171a21d5-01a5-4aa5-8079-4e7f8a59ade8">

After:

<img width="1031" alt="Screen Shot 2023-08-02 at 15 58 03"
src="afd1b9b7-89e0-4e38-a4a6-e3255b62f021">


## Test Plan

Manual inspection
2023-08-02 09:42:25 +02:00
Charlie Marsh
7842c82a0a
Preserve end-of-line comments on import-from statements (#6216)
## Summary

Ensures that we keep comments at the end-of-line in cases like:

```python
from foo import (  # comment
  bar,
)
```

Closes https://github.com/astral-sh/ruff/issues/6067.
2023-08-01 18:58:05 +00:00
Charlie Marsh
9c708d8fc1
Rename Parameter#arg and ParameterWithDefault#def fields (#6255)
## Summary

This PR renames...

- `Parameter#arg` to `Parameter#name`
- `ParameterWithDefault#def` to `ParameterWithDefault#parameter` (such
that `ParameterWithDefault` has a `default` and a `parameter`)

## Test Plan

`cargo test`
2023-08-01 14:28:34 -04:00
Charlie Marsh
adc8bb7821
Rename Arguments to Parameters in the AST (#6253)
## Summary

This PR renames a few AST nodes for clarity:

- `Arguments` is now `Parameters`
- `Arg` is now `Parameter`
- `ArgWithDefault` is now `ParameterWithDefault`

For now, the attribute names that reference `Parameters` directly are
changed (e.g., on `StmtFunctionDef`), but the attributes on `Parameters`
itself are not (e.g., `vararg`). We may revisit that decision in the
future.

For context, the AST node formerly known as `Arguments` is used in
function definitions. Formally (outside of the Python context),
"arguments" typically refers to "the values passed to a function", while
"parameters" typically refers to "the variables used in a function
definition". E.g., if you Google "arguments vs parameters", you'll get
some explanation like:

> A parameter is a variable in a function definition. It is a
placeholder and hence does not have a concrete value. An argument is a
value passed during function invocation.

We're thus deviating from Python's nomenclature in favor of a scheme
that we find to be more precise.
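
In code, the distinction looks like this (made-up example):

```python
def greet(name):  # `name` is a parameter: a variable in the function definition
    print(f"Hello, {name}!")


greet("Ferris")  # "Ferris" is an argument: a value passed at the call site
```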
2023-08-01 13:53:28 -04:00
Charlie Marsh
a82eb9544c
Implement Black's rules around newlines before and after class docstrings (#6209)
## Summary

Black allows up to one blank line _before_ a class docstring, and
enforces one blank line _after_ a class docstring. This PR implements
that handling. The cases in
`crates/ruff_python_formatter/resources/test/fixtures/ruff/statement/class_definition.py`
match Black identically.
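
A minimal sketch of the behavior described above (illustrative, not taken from the fixtures):

```python
class Widget:

    """Up to one blank line before the docstring is allowed."""

    def method(self):  # exactly one blank line is enforced after the docstring
        ...
```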
2023-08-01 13:33:01 -04:00
konsti
1df7e9831b
Replace .map_or(false, $closure) with .is_some_and(closure) (#6244)
**Summary**
[Option::is_some_and](https://doc.rust-lang.org/stable/std/option/enum.Option.html#method.is_some_and)
and
[Result::is_ok_and](https://doc.rust-lang.org/std/result/enum.Result.html#method.is_ok_and)
are new methods in Rust 1.70. I find them way more readable than
`.map_or(false, ...)`.

The changes are `s/.map_or(false,/.is_some_and(/g`, then manually
switching to `is_ok_and` where the value is a Result rather than an
Option.

**Test Plan** n/a
2023-08-01 19:29:42 +02:00
Micha Reiser
debfca3a11
Remove Parse trait (#6235) 2023-08-01 18:35:03 +02:00
Charlie Marsh
83fe103d6e
Allow generic tuple and list calls in __all__ (#6247)
## Summary

Allows, e.g., `__all__ = list[str]()`.

Closes https://github.com/astral-sh/ruff/issues/6226.
2023-08-01 12:01:48 -04:00
Charlie Marsh
928ab63a64
Add empty lines before nested functions and classes (#6206)
## Summary

This PR ensures that if a function or class is the first statement in a
nested suite that _isn't_ a function or class body, we insert a leading
newline.

For example, given:

```python
def f():
    if True:

        def register_type():
            pass
```

We _want_ to preserve the newline, whereas today, we remove it.

Note that this only applies when the function or class doesn't have any
leading comments.

Closes https://github.com/astral-sh/ruff/issues/6066.
2023-08-01 15:30:59 +00:00
Charlie Marsh
1a85953129
Don't require docstrings in .pyi files (#6239)
Closes https://github.com/astral-sh/ruff/issues/6224.
2023-08-01 10:02:57 -04:00
Charlie Marsh
743118ae9a
Bump version to 0.0.282 (#6241) 2023-08-01 13:21:33 +00:00
Charlie Marsh
0753017cf1
Revert "Expand scope of quoted-annotation rule (#5766)" (#6237)
This is causing some problems, so we'll just revert for now.

Closes https://github.com/astral-sh/ruff/issues/6189.
2023-08-01 09:03:02 -04:00
Charlie Marsh
29fb655e04
Fix logger-objects documentation (#6238)
Closes https://github.com/astral-sh/ruff/issues/6234.
2023-08-01 12:57:56 +00:00
Micha Reiser
f45e8645d7
Remove unused parser modes
## Summary

This PR removes the `Interactive` and `FunctionType` parser modes that are unused by Ruff.

## Test Plan

`cargo test`
2023-08-01 13:10:07 +02:00
Micha Reiser
7c7231db2e
Remove unsupported type_comment field
## Summary

This PR removes the `type_comment` field, which our parser doesn't support.

## Test Plan

`cargo test`
2023-08-01 12:53:13 +02:00
Micha Reiser
4ad5903ef6
Delete type-ignore node
## Summary

This PR removes the type ignore node from the AST because our parser doesn't support it, and just having it around is confusing.

## Test Plan

`cargo build`
2023-08-01 12:34:50 +02:00
konsti
c6986ac95d
Consistent CommentPlacement conversion signatures (#6231)
**Summary** Allow passing any node to `CommentPlacement::{leading,
trailing, dangling}` without manually converting. Conversely, restrict
the comment to the only type we actually pass.

**Test Plan** No changes.
2023-08-01 12:01:17 +02:00
Micha Reiser
ecfdd8d58b
Add static assertions to nodes (#6228) 2023-08-01 11:54:49 +02:00
David Szotten
07468f8be9
format ExprJoinedStr (#5932) 2023-08-01 08:26:30 +02:00
David Szotten
ba990b676f
add DebugText for self-documenting f-strings (#6167) 2023-08-01 07:55:03 +02:00
Harutaka Kawamura
44a8d1c644
Add PT021, PT022 and PT023 docs (#6143) 2023-08-01 00:41:54 -04:00
Charlie Marsh
88b984e885
Avoid detecting continuations at non-start-of-line (#6219)
## Summary

Previously, given:

```python
a = \
  5;
```

When detecting continuations starting at the offset of the `;`, we'd
flag the previous line as a continuation. We should only flag a
continuation if there isn't leading content prior to the offset.

Closes https://github.com/astral-sh/ruff/issues/6214
2023-08-01 00:20:29 -04:00
Charlie Marsh
bf584c6d74
Remove use of SmallVec in unnecessary-literal-union (#6221)
I prefer to use this on an as-needed basis.
2023-08-01 04:03:58 +00:00
Konrad Listwan-Ciesielski
6ea3c178fd
Add DTZ002 documentation (#6146)
## Summary

Adds documentation for DTZ002. Related to
https://github.com/astral-sh/ruff/issues/2646.

## Test Plan

`python scripts/test_docs_formatted.py`
2023-08-01 04:00:50 +00:00
Charlie Marsh
764d35667f
Avoid PERF401 false positive on list access in loop (#6220)
Closes https://github.com/astral-sh/ruff/issues/6210.
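
A hedged guess at the shape of the case (not taken from the issue): the loop body reads the list it is appending to, so it cannot be rewritten as a list comprehension and should not be flagged.

```python
values = []
for x in range(10):
    if x not in values:  # accesses `values` while still building it
        values.append(x)
```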
2023-08-01 03:56:53 +00:00
Charlie Marsh
ff9ebbaa5f
Skip trivia when searching for named exception (#6218)
Closes https://github.com/astral-sh/ruff/issues/6213.
2023-08-01 03:42:30 +00:00
Micha Reiser
38b5726948
formatter: WithNodeLevel helper (#6212) 2023-07-31 21:22:17 +00:00
Charlie Marsh
615337a54d
Remove newline-insertion logic from JoinNodesBuilder (#6205)
## Summary

This PR moves the "insert empty lines" behavior out of
`JoinNodesBuilder` and into the `Suite` formatter. I find it a little
confusing that the logic is split between those two formatters right
now, and since this is _only_ used in that one place, IMO it is a bit
simpler to just inline it and use a single approach to tracking state
(right now, both are stateful).

The only other place this was used was for decorators. As a side effect,
we now remove blank lines in both of these cases, which is a known but
intentional deviation from Black (which preserves the empty line before
the comment in the first case):

```python
@foo

# Hello
@bar
def baz():
    pass

@foo

@bar
def baz():
    pass
```
2023-07-31 16:58:15 -04:00
Charlie Marsh
6ee5cb37c0
Reset model state when exiting deferred visitors (#6208)
## Summary

Very subtle bug related to the AST traversal. Given:

```python
from __future__ import annotations

from logging import getLogger

__all__ = ("getLogger",)


def foo() -> None:
    pass
```

We end up visiting the `-> None` annotation, then reusing the state
snapshot when we go to visit the `__all__` exports, so when we visit
`"getLogger"`, we think we're inside of a deferred type annotation.

This PR changes all the deferred visitors to snapshot and restore the
state, which is a lot safer -- that way, the visitors avoid modifying
the current visitor state. (Previously, they implicitly left the visitor
state set to the state of the _last_ thing they visited.)

Closes https://github.com/astral-sh/ruff/issues/6207.
2023-07-31 19:46:52 +00:00
konsti
0fddb31235
Use tracing for format_dev (#6177)
## Summary

[tracing](https://github.com/tokio-rs/tracing) is a library for logging,
tracing and related features that has a large ecosystem. Using
[tracing-subscriber](https://docs.rs/tracing-subscriber) and
[tracing-indicatif](https://github.com/emersonford/tracing-indicatif),
we get a nice logging output that you can configure with `RUST_LOG`
(e.g. `RUST_LOG=debug`) and a live look into the formatter progress.

Default:
*(screenshot)*

`RUST_LOG=debug`:
*(screenshot)*

It's easy to see in this output which files take a disproportionate
amount of time.

*(screen recording)*

It opens up further integration with the tracing ecosystem,
[tracing-timing](https://docs.rs/tracing-timing/latest/tracing_timing/)
and [tokio-console](https://github.com/tokio-rs/console) can, e.g., show
histograms, and the JSON output allows us to build better pipelines than
grepping a log file.

One caveat is using `parent: None` for the logging statements, because
tracing-subscriber does not allow deactivating the span without also
reimplementing all the other log message formatting, and we don't need
span information, especially since it would currently show the progress
bar span.

## Test Plan

n/a
2023-07-31 19:14:01 +00:00
konsti
a7aa3caaae
Rename formatter_progress to formatter_ecosystem_checks (#6194)
Rename `scripts/formatter_progress.sh` to
`formatter/formatter_ecosystem_checks.sh` since the new name fits the
actual task better.
2023-07-31 18:33:12 +00:00
konsti
e52b636da0
Log configuration in ruff_dev (#6193)
**Summary** This includes two changes:
 * Allow setting `-v` in `ruff_dev`, using the `ruff_cli` implementation
 * Log (via `debug!`) which Ruff configuration strategy was used

This is a byproduct of debugging #6187.

**Test Plan** n/a
2023-07-31 17:52:38 +00:00
konsti
9063f4524d
Fix formatting of trailing unescaped quotes in raw triple quoted strings (#6202)
**Summary** This prevents us from turning `r'''\""'''` into
`r"""\"""""`, which is invalid syntax.

This PR fixes CI, which is currently broken on main (in a way that still
passes on linter PRs and allows merging formatter PRs, but it's bad to
have a job be red). Once merged, I'll make the formatter ecosystem
checks a required check.

**Test Plan** Added a regression test.
2023-07-31 19:25:16 +02:00
Charlie Marsh
dbd60b2cf5
Bump version to 0.0.281 (#6195) 2023-07-31 13:21:43 -04:00
Charlie Marsh
7eb2ba47cc
Add empty line after import block (#6200)
## Summary

Ensures that, given:

```python
import os
x = 1
```

We format like:

```python
import os

x = 1
```
2023-07-31 12:01:45 -04:00
Dhruv Manilawala
cb34e6d322
Avoid parenthesizing comprehension element (#6198)
## Summary

This PR adds a new precedence level for the comprehension element, which
fixes the generator so that it no longer adds parentheses around the
comprehension element every time.

The new precedence level is `COMPREHENSION_ELEMENT` and it should occur after
the `NAMED_EXPR` precedence level because named expressions are always parenthesized.
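
As an illustrative sketch (made-up snippets), the generated code now only parenthesizes the element when required:

```python
data = [1, 2, 3]

[x + 1 for x in data]     # plain elements are left unparenthesized
[(y := x) for x in data]  # named expressions keep their parentheses
```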

This matches the behavior of Python `ast.unparse` and tested with the
following snippet:

```python
import ast

code = ""
ast.unparse(ast.parse(code))
```

## Test Plan

Add a bunch of test cases for all the valid nodes at that position.

fixes: #5777
2023-07-31 20:56:42 +05:30
Harutaka Kawamura
0274de1fff
Preserve backslash in raw string literal (#6152) 2023-07-31 12:48:17 +00:00
konsti
a540933bc9
Print log when formatter ecosystem checks fail (#6187)
**Summary** Print the errors when the formatter ecosystem checks fail.
I'm not happy that we currently collect the log in the first place, but
this is the less invasive change, and we need it to unblock reviewing
#6152.

**Test Plan**
1547787940
2023-07-31 14:45:38 +02:00
Micha Reiser
311a1f9ec4
Remove len from JoinCommaSeparatedBuilder (#6185) 2023-07-31 12:19:47 +00:00
Luc Khai Hai
b95fc6d162
Format bytes string (#6166)
## Summary

Format bytes string

Closes #6064
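
A hedged example of the kind of normalization involved, assuming bytes literals follow the same quote preferences as plain string literals:

```python
x = b'hello'   # input
x = b"hello"   # formatted
```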

## Test Plan

Added a fixture based on the string one.
2023-07-31 10:46:40 +02:00
Charlie Marsh
de898c52eb
Avoid falsely marking non-submodules as submodule aliases (#6182)
## Summary

We have some code to ensure that if an aliased import is used, any
submodules should be marked as used too. This comment says it best:

```rust
// If the name of a submodule import is the same as an alias of another import, and the
// alias is used, then the submodule import should be marked as used too.
//
// For example, mark `pyarrow.csv` as used in:
//
// ```python
// import pyarrow as pa
// import pyarrow.csv
// print(pa.csv.read_csv("test.csv"))
// ```
```

However, it looks like when we go to look up `pyarrow` (of `import
pyarrow as pa`), we aren't checking to ensure the resolved binding is
_actually_ an import. This was causing us to attribute `print(rm.ANY)`
to `def requests_mock` here:

```python
import requests_mock as rm

def requests_mock(requests_mock: rm.Mocker):
    print(rm.ANY)
```

Closes https://github.com/astral-sh/ruff/issues/6180.
2023-07-30 22:16:25 +00:00
Charlie Marsh
76741cac77
Add global and nonlocal formatting (#6170)
## Summary

Adds `global` and `nonlocal` formatting, without the "deviation from
black" outlined in the linked issue, which I'll do separately.

See: https://github.com/astral-sh/ruff/issues/4798.

## Test Plan

Added a fixture in the Ruff-specific directory since the Black fixtures
don't seem to cover this.
2023-07-29 14:39:42 +00:00
Charlie Marsh
5d9814d84d
Remove parentheses around some walrus operators (#6173)
## Summary

Closes https://github.com/astral-sh/ruff/issues/5781

## Test Plan

Added cases to
`crates/ruff_python_formatter/resources/test/fixtures/ruff/expression/named_expr.py`
one-by-one and adjusted the condition as needed.
2023-07-29 10:06:26 -04:00
Charlie Marsh
4231ed2fc3
Skip partial duplicates when applying multi-edit fixes (#6144)
## Summary

Right now, if we have two fixes that have an overlapping edit, but not
an _identical_ set of edits, they'll conflict, causing us to do another
linter traversal. Here, I've enabled the fixer to support partially
overlapping edits, which (as an example) let's us greatly reduce the
number of iterations required in the test suite.

The most common case here is that in which a bunch of edits need to
import some symbol, and then use that symbol, but in different ways. In
that case, all edits will have a common fix (to import the symbol), but
deviate in some way. With this change, we can do all of those edits in
one pass.

Note that the simplest way to enable this was to store sorted edits on
`Fix`. We don't allow modifying the edits on `Fix` once it's
constructed, so this is an easy change, and allows us to avoid a bunch
of clones and traversals later on.

Closes #5800.
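
A minimal sketch of the idea in Python (not Ruff's actual data structures): two fixes that share a byte-for-byte identical edit but differ elsewhere can be applied in the same pass, whereas genuinely overlapping, non-identical edits still defer to the next iteration.

```python
Edit = tuple[int, int, str]  # (start, end, replacement) -- hypothetical shape

# Both fixes want to insert the same import, then make distinct edits elsewhere.
fix_a: list[Edit] = [(0, 0, "import collections\n"), (120, 135, "collections.OrderedDict()")]
fix_b: list[Edit] = [(0, 0, "import collections\n"), (200, 210, "collections.Counter()")]


def compatible(a: list[Edit], b: list[Edit]) -> bool:
    """Edits conflict only if they overlap *and* are not identical."""
    for ea in a:
        for eb in b:
            if ea == eb:
                continue  # identical edits collapse into a single application
            if ea[0] < eb[1] and eb[0] < ea[1]:
                return False  # real overlap: apply one fix now, the other next pass
    return True


print(compatible(fix_a, fix_b))  # True: both fixes land in a single traversal
```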
2023-07-29 12:11:57 +00:00
Charlie Marsh
badbfb2d3e
Skip BOM when determining Locator's line starts (#6159)
## Summary

If a file has a BOM, the import sorter _always_ reports the imports as
unsorted. The acute issue is that we detect that the line has leading
content (before the imports), which we always consider a violation.
Rather than fixing that one site, this PR instead makes `.line_start`
BOM-aware.

Fixes https://github.com/astral-sh/ruff/issues/6155.
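
A minimal illustration of the idea (character offsets rather than Ruff's byte offsets): the first line's start should point past the BOM, so the import at the top of the file is no longer seen as having leading content.

```python
BOM = "\ufeff"


def line_starts(source: str) -> list[int]:
    start = len(BOM) if source.startswith(BOM) else 0  # skip the BOM if present
    starts = [start]
    for i, ch in enumerate(source):
        if ch == "\n":
            starts.append(i + 1)
    return starts


print(line_starts(BOM + "import os\nimport sys\n"))  # [1, 11, 22]
```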
2023-07-29 11:47:13 +00:00
Dhruv Manilawala
44bdf20221
[pep8-naming]: New config option extend-ignore-names (#6169)
## Summary

This PR adds a new config option for the `pep8-naming` plugin,
`extend-ignore-names`, which extends the default values of the
`ignore-names` option.

resolves: #6050
2023-07-29 17:11:04 +05:30
Dhruv Manilawala
3c99fbf808
Implement --diff for Jupyter Notebooks (#6149)
## Summary

Implement `--diff` for Jupyter Notebooks

## Test Plan

1. Use `crates/ruff/resources/test/fixtures/jupyter/isort.ipynb` as a
test case
and add a markdown cell in between the code cells to check that the diff
   outputs the correct cell index.
2. Run the command:
`cargo run --bin ruff --package ruff_cli -- check --no-cache --isolated
--select=ALL crates/ruff/resources/test/fixtures/jupyter/isort.ipynb
--fix --diff`

<details><summary>Example output:</summary>
<p>

```diff
--- /Users/dhruv/playground/ruff/notebooks/test.ipynb:cell 0
+++ /Users/dhruv/playground/ruff/notebooks/test.ipynb:cell 0
@@ -1,3 +0,0 @@
-from pathlib import Path
-import random
-import math
--- /Users/dhruv/playground/ruff/notebooks/test.ipynb:cell 4
+++ /Users/dhruv/playground/ruff/notebooks/test.ipynb:cell 4
@@ -1,5 +1,3 @@
-from typing import Any
-import collections
 # Newline should be added here
 def foo():
     pass

--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 8
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 8
@@ -1,8 +1,7 @@
 import pprint
 import tempfile
 
-from IPython import display
 import matplotlib.pyplot as plt
-
 import tensorflow as tf
-import tensorflow_datasets as tfds
+import tensorflow_datasets as tfds
+from IPython import display
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 10
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 10
@@ -1,5 +1,4 @@
 import tensorflow_models as tfm
 
 # These are not in the tfm public API for v2.9. They will be available in v2.10
-from official.vision.serving import export_saved_model_lib
-import official.core.train_lib
+from official.vision.serving import export_saved_model_lib
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 13
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 13
@@ -1,5 +1,5 @@
-exp_config = tfm.core.exp_factory.get_exp_config('resnet_imagenet')
-tfds_name = 'cifar10'
+exp_config = tfm.core.exp_factory.get_exp_config("resnet_imagenet")
+tfds_name = "cifar10"
 ds,ds_info = tfds.load(
 tfds_name,
 with_info=True)
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 15
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 15
@@ -6,12 +6,12 @@
 # Configure training and testing data
 batch_size = 128
 
-exp_config.task.train_data.input_path = ''
+exp_config.task.train_data.input_path = ""
 exp_config.task.train_data.tfds_name = tfds_name
-exp_config.task.train_data.tfds_split = 'train'
+exp_config.task.train_data.tfds_split = "train"
 exp_config.task.train_data.global_batch_size = batch_size
 
-exp_config.task.validation_data.input_path = ''
+exp_config.task.validation_data.input_path = ""
 exp_config.task.validation_data.tfds_name = tfds_name
-exp_config.task.validation_data.tfds_split = 'test'
+exp_config.task.validation_data.tfds_split = "test"
 exp_config.task.validation_data.global_batch_size = batch_size
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 17
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 17
@@ -1,16 +1,16 @@
 logical_device_names = [logical_device.name for logical_device in tf.config.list_logical_devices()]
 
-if 'GPU' in ''.join(logical_device_names):
-  print('This may be broken in Colab.')
-  device = 'GPU'
-elif 'TPU' in ''.join(logical_device_names):
-  print('This may be broken in Colab.')
-  device = 'TPU'
+if "GPU" in "".join(logical_device_names):
+  print("This may be broken in Colab.")
+  device = "GPU"
+elif "TPU" in "".join(logical_device_names):
+  print("This may be broken in Colab.")
+  device = "TPU"
 else:
-  print('Running on CPU is slow, so only train for a few steps.')
-  device = 'CPU'
+  print("Running on CPU is slow, so only train for a few steps.")
+  device = "CPU"
 
-if device=='CPU':
+if device=="CPU":
   train_steps = 20
   exp_config.trainer.steps_per_loop = 5
 else:
@@ -20,9 +20,9 @@
 exp_config.trainer.summary_interval = 100
 exp_config.trainer.checkpoint_interval = train_steps
 exp_config.trainer.validation_interval = 1000
-exp_config.trainer.validation_steps =  ds_info.splits['test'].num_examples // batch_size
+exp_config.trainer.validation_steps =  ds_info.splits["test"].num_examples // batch_size
 exp_config.trainer.train_steps = train_steps
-exp_config.trainer.optimizer_config.learning_rate.type = 'cosine'
+exp_config.trainer.optimizer_config.learning_rate.type = "cosine"
 exp_config.trainer.optimizer_config.learning_rate.cosine.decay_steps = train_steps
 exp_config.trainer.optimizer_config.learning_rate.cosine.initial_learning_rate = 0.1
 exp_config.trainer.optimizer_config.warmup.linear.warmup_steps = 100
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 21
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 21
@@ -1,14 +1,14 @@
 logical_device_names = [logical_device.name for logical_device in tf.config.list_logical_devices()]
 
 if exp_config.runtime.mixed_precision_dtype == tf.float16:
-    tf.keras.mixed_precision.set_global_policy('mixed_float16')
+    tf.keras.mixed_precision.set_global_policy("mixed_float16")
 
-if 'GPU' in ''.join(logical_device_names):
+if "GPU" in "".join(logical_device_names):
   distribution_strategy = tf.distribute.MirroredStrategy()
-elif 'TPU' in ''.join(logical_device_names):
+elif "TPU" in "".join(logical_device_names):
   tf.tpu.experimental.initialize_tpu_system()
-  tpu = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='/device:TPU_SYSTEM:0')
+  tpu = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="/device:TPU_SYSTEM:0")
   distribution_strategy = tf.distribute.experimental.TPUStrategy(tpu)
 else:
-  print('Warning: this will be really slow.')
+  print("Warning: this will be really slow.")
   distribution_strategy = tf.distribute.OneDeviceStrategy(logical_device_names[0])
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 23
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 23
@@ -1,5 +1,3 @@
 with distribution_strategy.scope():
   model_dir = tempfile.mkdtemp()
   task = tfm.core.task_factory.get_task(exp_config.task, logging_dir=model_dir)
-
-#  tf.keras.utils.plot_model(task.build_model(), show_shapes=True)
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 24
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 24
@@ -1,4 +1,4 @@
 for images, labels in task.build_inputs(exp_config.task.train_data).take(1):
   print()
-  print(f'images.shape: {str(images.shape):16}  images.dtype: {images.dtype!r}')
-  print(f'labels.shape: {str(labels.shape):16}  labels.dtype: {labels.dtype!r}')
+  print(f"images.shape: {images.shape!s:16}  images.dtype: {images.dtype!r}")
+  print(f"labels.shape: {labels.shape!s:16}  labels.dtype: {labels.dtype!r}")
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 27
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 27
@@ -1 +1 @@
-plt.hist(images.numpy().flatten());
+plt.hist(images.numpy().flatten())
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 29
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 29
@@ -1,2 +1,2 @@
-label_info = ds_info.features['label']
+label_info = ds_info.features["label"]
 label_info.int2str(1)
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 31
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 31
@@ -10,9 +10,6 @@
     if predictions is None:
       plt.title(label_info.int2str(labels[i]))
     else:
-      if labels[i] == predictions[i]:
-        color = 'g'
-      else:
-        color = 'r'
+      color = "g" if labels[i] == predictions[i] else "r"
       plt.title(label_info.int2str(predictions[i]), color=color)
     plt.axis("off")
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 35
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 35
@@ -1,3 +1,3 @@
-plt.figure(figsize=(10, 10));
+plt.figure(figsize=(10, 10))
 for images, labels in task.build_inputs(exp_config.task.validation_data).take(1):
   show_batch(images, labels)
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 37
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 37
@@ -1,7 +1,7 @@
 model, eval_logs = tfm.core.train_lib.run_experiment(
     distribution_strategy=distribution_strategy,
     task=task,
-    mode='train_and_eval',
+    mode="train_and_eval",
     params=exp_config,
     model_dir=model_dir,
     run_post_eval=True)
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 38
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 38
@@ -1 +0,0 @@
-#  tf.keras.utils.plot_model(model, show_shapes=True)
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 40
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 40
@@ -1,4 +1,4 @@
 for key, value in eval_logs.items():
     if isinstance(value, tf.Tensor):
       value = value.numpy()
-    print(f'{key:20}: {value:.3f}')
+    print(f"{key:20}: {value:.3f}")
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 42
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 42
@@ -4,5 +4,5 @@
 
 show_batch(images, labels, tf.cast(predictions, tf.int32))
 
-if device=='CPU':
-  plt.suptitle('The model was only trained for a few steps, it is not expected to do well.')
+if device=="CPU":
+  plt.suptitle("The model was only trained for a few steps, it is not expected to do well.")
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 45
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 45
@@ -1,8 +1,8 @@
 # Saving and exporting the trained model
 export_saved_model_lib.export_inference_graph(
-    input_type='image_tensor',
+    input_type="image_tensor",
     batch_size=1,
     input_image_size=[32, 32],
     params=exp_config,
     checkpoint_path=tf.train.latest_checkpoint(model_dir),
-    export_dir='./export/')
+    export_dir="./export/")
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 47
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 47
@@ -1,3 +1,3 @@
 # Importing SavedModel
-imported = tf.saved_model.load('./export/')
-model_fn = imported.signatures['serving_default']
+imported = tf.saved_model.load("./export/")
+model_fn = imported.signatures["serving_default"]
--- /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 49
+++ /Users/dhruv/playground/ruff/notebooks/image_classification.ipynb:cell 49
@@ -1,10 +1,10 @@
 plt.figure(figsize=(10, 10))
-for data in tfds.load('cifar10', split='test').batch(12).take(1):
+for data in tfds.load("cifar10", split="test").batch(12).take(1):
   predictions = []
-  for image in data['image']:
-    index = tf.argmax(model_fn(image[tf.newaxis, ...])['logits'], axis=1)[0]
+  for image in data["image"]:
+    index = tf.argmax(model_fn(image[tf.newaxis, ...])["logits"], axis=1)[0]
     predictions.append(index)
-  show_batch(data['image'], data['label'], predictions)
+  show_batch(data["image"], data["label"], predictions)
 
-  if device=='CPU':
-    plt.suptitle('The model was only trained for a few steps, it is not expected to do better than random.')
+  if device=="CPU":
+    plt.suptitle("The model was only trained for a few steps, it is not expected to do better than random.")

Would fix 61 errors.
```

</p>
</details> 

resolves: #4727
2023-07-29 04:22:56 +00:00