Ruff v0.0.285 is out now. Install it from PyPI, or your package manager of choice:
pip install --upgrade ruff
As a reminder: Ruff is an extremely fast Python linter, written in Rust. Ruff can be used to replace Flake8 (plus dozens of plugins), isort, pydocstyle, pyupgrade, and more, all while executing tens or hundreds of times faster than any individual tool.
View the full changelog on GitHub, or read on for the highlights.
Stabilized support for Jupyter Notebooks
Experimental support for Jupyter Notebooks was added to Ruff in v0.0.276.
With support for IPython magic commands, --diff for notebooks, and some changes to rule behavior, Ruff's support for Jupyter Notebooks is considered stable and ready for production use in v0.0.285.
Note that Jupyter Notebooks are still not linted by default; they require opt-in by including them in your target paths:
[tool.ruff]
include = ["*.py", "*.pyi", "**/pyproject.toml", "*.ipynb"]
See the docs on Jupyter Notebook discovery for details.
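Once notebooks are included, they're linted by your usual invocation; as a quick sanity check, you can also point Ruff at a notebook directly (the path here is a placeholder):
ruff check path/to/notebook.ipynb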
Jupyter Notebook support was contributed by @dhruvmanila.
Support for IPython magic commands
IPython magic commands are a powerful feature in Jupyter Notebooks, allowing you to update IPython settings, run system commands, access Python help menus, and much more. Initially, Ruff could not parse code containing magic commands, typically indicated by a leading %, %%, or !, since they are not valid Python. Consequently, cells containing magic commands were skipped during analysis, which could result in incorrect diagnostics.
For example, given the following cells in a notebook:
Cell 1
import pandas as pd
%matplotlib inline
Cell 2
df = pd.read_csv("foo.csv")
In previous versions of Ruff, an undefined-name (F821) violation would be raised in the second cell, since the import is defined in a skipped cell. Now, Ruff supports parsing magic commands in Jupyter Notebooks, and cells containing magic commands are included in the analysis, eliminating a common source of false positives.
Diff support for Jupyter Notebooks
When using Ruff to apply fixes to violations, it's often useful to preview the changes with the --diff flag. However, displaying changes to Jupyter Notebooks is challenging due to differences between the contents of a raw and rendered notebook file. Now, Ruff displays diffs of rendered cells, allowing you to easily validate changes to your notebooks.
For example, here's a diff applying fixes for I001, F841, and EM101 to a notebook:
--- /Users/ruff/notebooks/test.ipynb:cell 1
+++ /Users/ruff/notebooks/test.ipynb:cell 1
@@ -1,2 +1,2 @@
-from pathlib import Path
-import sys
+import sys
+from pathlib import Path
--- /Users/ruff/notebooks/test.ipynb:cell 4
+++ /Users/ruff/notebooks/test.ipynb:cell 4
@@ -1,2 +1,2 @@
 def foo():
-    unused_variable = "hello world"
+    pass
--- /Users/ruff/notebooks/test.ipynb:cell 6
+++ /Users/ruff/notebooks/test.ipynb:cell 6
@@ -1,2 +1,3 @@
 if foo is None:
-    raise ValueError("foo cannot be `None`")
+    msg = "foo cannot be `None`"
+    raise ValueError(msg)
Would fix 3 errors.
Rule changes for Jupyter Notebooks
Jupyter Notebooks have slightly different semantics than regular Python. During the experimental period, we discovered a couple of rules that needed to be updated to behave differently in notebooks.
The unused-variable (F841) fix will no longer remove the entire statement if the variable is assigned to a Jupyter magic command.
For example, if you have an unused variable assigned to a shell command as follows:
an_unused_variable = !ping google.com
We would previously suggest deleting the entire line, but will now suggest removing just the variable to preserve the command's side effects:
- an_unused_variable = !ping google.com
+ !ping google.com
Jupyter Notebooks allow you to await asynchronous functions outside of asynchronous contexts. Consequently, yield-outside-function (F704) has been updated to allow top-level usage of await in Jupyter Notebooks instead of raising a violation.
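For example, a cell like the following, a minimal sketch, now passes at the top level of a notebook without an F704 violation:
import asyncio

# Top-level await is valid in a notebook cell, so Ruff no longer flags it.
await asyncio.sleep(0)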
Rule changes contributed by @dhruvmanila in #6141 and @charliermarsh in #6607
New rule: pytest-unittest-raises-assertion (PT027)
What does it do?
Checks for uses of exception-related assertion methods from the unittest module.
Why does it matter?
To enforce the assertion style recommended by pytest, pytest.raises is preferred over the exception-related assertion methods in unittest, like assertRaises.
For example, given the following snippet:
import unittest

class TestFoo(unittest.TestCase):
    def test_foo(self):
        with self.assertRaises(ValueError):
            raise ValueError("foo")
Here, the unittest-style context manager is used, but the pytest-style equivalent should be used instead:
import unittest

import pytest

class TestFoo(unittest.TestCase):
    def test_foo(self):
        with pytest.raises(ValueError):
            raise ValueError("foo")
See pytest documentation: Assertions about expected exceptions for background.
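As an aside, pytest.raises can also assert on the exception message via its match argument (interpreted as a regular expression); a minimal sketch:
import pytest

def test_foo():
    # match is applied with re.search against the exception's string form.
    with pytest.raises(ValueError, match="foo"):
        raise ValueError("foo")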
This rule is derived from flake8-pytest-style.
Contributed by @harupy.
New rule: pytest-duplicate-parametrize-test-cases (PT014)
What does it do?
Checks for duplicate test cases in pytest.mark.parametrize.
Why does it matter?
Duplicate test cases are redundant and should be removed.
For example, given the following snippet:
import pytest

@pytest.mark.parametrize(
    ("param1", "param2"),
    [
        (1, 2),
        (1, 2),
    ],
)
def test_foo(param1, param2):
    ...
The test case (1, 2) is duplicated, which increases test time without increasing coverage.
Instead, omit the redundant case:
import pytest

@pytest.mark.parametrize(
    ("param1", "param2"),
    [
        (1, 2),
    ],
)
def test_foo(param1, param2):
    ...
See pytest documentation: How to parametrize fixtures and test functions for background.
This rule is derived from flake8-pytest-style.
Contributed by @harupy.
New rule: banned-module-level-imports (TID253)
What does it do?
Checks for module-level imports that should instead be imported lazily (e.g., within a function definition, an if TYPE_CHECKING: block, or some other nested context).
Why does it matter?
Some modules are expensive to import. For example, importing torch or tensorflow can introduce a noticeable delay in the startup time of a Python program.
In such cases, you may want to enforce that the module is imported lazily, as needed, rather than at the top of the file, e.g., by inlining the import into the function that uses it so that the module is only imported when necessary.
For example, given the following snippet:
import tensorflow as tf

def show_version():
    print(tf.__version__)
tensorflow will be imported when your module is imported, but it's only needed when show_version is called. Instead, delay the import:
def show_version():
    import tensorflow as tf
    print(tf.__version__)
This rule requires configuring module names via the flake8-tidy-imports.banned-module-level-imports setting.
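For example, to enable the rule and ban module-level imports of torch and tensorflow (the module names here are just illustrations; list whichever imports are slow in your project):
[tool.ruff]
extend-select = ["TID253"]

[tool.ruff.flake8-tidy-imports]
banned-module-level-imports = ["torch", "tensorflow"]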
This rule is derived from flake8-tidy-imports.
Contributed by @durumu.
New rule: bad-dunder-name (W3201)
This rule is available in the Ruff nursery.
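Nursery rules are opt-in and must be selected by their exact code; a category prefix alone won't enable them. For example:
[tool.ruff]
extend-select = ["W3201"]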
What does it do?
Checks for any misspelled dunder method name, and for any method defined with a __...__ name that's not one of the pre-defined methods (i.e., Python's standard dunder methods).
Why does it matter?
A misspelled dunder method name may cause your code to not function as expected. And since dunder methods are associated with customizing the behavior of a class in Python, introducing a dunder method such as __foo__ that diverges from the standard dunder methods could confuse someone reading the code.
For example, given the following snippet:
class Foo:
    def __init_(self):
        ...
The __init_ method is likely a typo of __init__ and should be written as:
class Foo:
    def __init__(self):
        ...
Or given the following snippet:
class Foo:
    def __bar__(self):
        ...
__bar__ is not a known dunder method and should probably not be used. Instead, define it as a private method:
class Foo:
    def _bar(self):
        ...
This rule is derived from pylint.
Contributed by @LaBatata101.
New rule: subprocess-run-check (W1510)
What does it do?
Checks for uses of subprocess.run without an explicit check argument.
Why does it matter?
By default, subprocess.run does not check the return code of the process it runs, which can lead to silent failures. Instead, consider passing check=True to raise an exception if the process fails, or set check=False explicitly to mark the behavior as intentional.
For example, given the following snippet:
import subprocess
subprocess.run(["ls", "nonexistent"]) # No exception raised.
If the command fails, no exception will be raised in Python. Set check to ensure a failed command is detected:
import subprocess
subprocess.run(["ls", "nonexistent"], check=True) # Raises exception.
Or indicate that you don't care whether it fails by explicitly setting check to False:
import subprocess
subprocess.run(["ls", "nonexistent"], check=False) # Explicitly no check.
See Python documentation: subprocess.run for background.
This rule is derived from pylint.
Contributed by @tjkuson.
New rule: quadratic-list-summation (RUF017)
What does it do?
Checks for the use of sum() to flatten lists of lists, which has quadratic complexity.
Why does it matter?
The use of sum() to flatten lists of lists is quadratic in the number of lists, as sum() creates a new list for each element in the summation.
Instead, consider using another method of flattening lists to avoid quadratic complexity. The following methods are all linear in the number of lists:
functools.reduce(operator.iconcat, lists, [])
list(itertools.chain.from_iterable(lists))
[item for sublist in lists for item in sublist]
For example, given the following snippet:
lists = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
joined = sum(lists, [])
Summing the lists will not be performant when there are many of them. Instead, consider this linear-time implementation:
import functools
import operator
lists = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
functools.reduce(operator.iconcat, lists, [])
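The itertools-based option from the list above is likewise linear in the number of lists; a quick sketch:
import itertools

lists = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
# chain.from_iterable yields each element of each sublist in order.
flattened = list(itertools.chain.from_iterable(lists))  # [1, 2, 3, 4, 5, 6, 7, 8, 9]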
See How Not to Flatten a List of Lists in Python or How do I make a flat list out of a list of lists? for background.
Contributed by @evanrittenhouse.