Ruff v0.0.278 is out now. Install it from PyPI, or your package manager of choice:

pip install --upgrade ruff

As a reminder: Ruff is an extremely fast Python linter, written in Rust. Ruff can be used to replace Flake8 (plus dozens of plugins), isort, pydocstyle, pyupgrade, and more, all while executing tens or hundreds of times faster than any individual tool.

View the full changelog on GitHub, or read on for the highlights.

Invalid # noqa directives now emit warnings

Previously, if Ruff encountered an invalid suppression comment (also known as a # noqa directive), it would either silently ignore it or, in some cases, treat it as a blanket suppression comment (i.e., ignore all violations on a given line).

Ruff will now emit a warning for each invalid suppression comment it encounters.

For example, given:

import os  # noqa: unused-import

Previously, Ruff would treat this as equivalent to # noqa, and ignore all violations on the line. Now, Ruff will emit a warning:

warning: Invalid `# noqa` directive on line 1: expected a comma-separated list of codes (e.g., `# noqa: F401, F841`).

Ruff's # noqa parser now also supports optional whitespace between each token in the directive.
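
For instance, a directive with extra spacing around the colon (a hypothetical example) is now parsed as intended rather than rejected:

```python
import os  # noqa : F401
```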

flake8: noqa comments now support suppressing specific codes

Ruff supports the use of both # flake8: noqa and, equivalently, # ruff: noqa to suppress all violations in a given file.

Previously, Ruff also supported # ruff: noqa: F401 to suppress all F401 violations in a given file. However, Ruff did not support # flake8: noqa: F401, instead treating it as equivalent to # flake8: noqa. While confusing, this behavior was consistent with Flake8, which doesn't support suppressing specific codes with # flake8: noqa.

Ruff now supports # flake8: noqa: F401 as a code-specific suppression comment, in addition to # ruff: noqa: F401.
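
As a sketch, a file-level directive like the following (hypothetical module contents) would suppress only unused-import violations for the whole file, while leaving every other rule active:

```python
# flake8: noqa: F401
# Only F401 (unused import) is suppressed file-wide; other rules still apply.

import os
import sys
```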

isort's known-first-party and friends now accept globs

Settings like known-first-party now accept globs, in addition to module names. For example, the following would mark any modules that start with my_module_ as first-party:

[tool.ruff.isort]
known-first-party = ["my_module_*"]

New rule: unnecessary-list-allocation-for-first-element (RUF015)

What does it do?

Checks for uses of list(...)[0] that can be replaced with next(iter(...)).

Why does it matter?

Calling list(...) eagerly materializes the entire collection into a new list, which can be very expensive for large collections. If you only need the first element, you can use next(iter(...)) to fetch it lazily without creating a new list.

Note that migrating from list(...)[0] to next(iter(...)) can change the behavior of your program in two ways:

  1. First, list(...) will eagerly evaluate the entire collection, while next(iter(...)) will only evaluate the first element. As such, any side effects that occur during iteration will be delayed.
  2. Second, list(...)[0] will raise IndexError if the collection is empty, while next(iter(...)) will raise StopIteration.
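
Both differences can be sketched with a small generator (a hypothetical example, not from the rule's documentation):

```python
def noisy():
    """Generator with a visible side effect on each step."""
    for i in range(3):
        print(f"yielding {i}")  # side effect during iteration
        yield i

# list(...) exhausts the generator (three prints); next(iter(...)) stops
# after the first element (one print).
first_eager = list(noisy())[0]
first_lazy = next(iter(noisy()))

# Empty collections fail with different exceptions:
try:
    list([])[0]
except IndexError:
    eager_error = "IndexError"

try:
    next(iter([]))
except StopIteration:
    lazy_error = "StopIteration"
```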

For example, given the following snippet:

head = list(range(1000000000000))[0]

Instead, use next(iter(...)):

head = next(iter(range(1000000000000)))

This rule is autofixable:

@@ -245,7 +245,7 @@ def _parse_setuptools_arguments(
         names = ", ".join(plat_names)
         msg = f"--plat-name is ambiguous: {names}"
         raise BuildError(msg)
-    plat_name = list(plat_names)[0]
+    plat_name = next(iter(plat_names))

Contributed by @evanrittenhouse.

New rule: invalid-index-type (RUF016)

What does it do?

Checks for indexed access to lists, strings, tuples, bytes, and comprehensions using a type other than an integer or slice.

Why does it matter?

Only integers or slices can be used as indices to these types. Using other types will result in a TypeError at runtime and a SyntaxWarning at import time.

For example, given the following snippet:

var = [1, 2, 3]["x"]

Instead, use an integer or slice as the index:

var = [1, 2, 3][0]
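
The runtime failure is easy to reproduce; indexing through a variable (to sidestep the compile-time warning on constant expressions) fails with a TypeError:

```python
data = [1, 2, 3]
try:
    data["x"]
except TypeError as exc:
    # e.g. "list indices must be integers or slices, not str"
    message = str(exc)
```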

Contributed by @zanieb.

New rule: re-sub-positional-args (B034)

What does it do?

Checks for calls to re.sub, re.subn, and re.split that pass count, maxsplit, or flags as positional arguments.

Why does it matter?

Passing count, maxsplit, or flags as positional arguments to re.sub, re.subn, or re.split can lead to confusion, as most functions in the re module accept flags as the third positional argument, while re.sub, re.subn, and re.split have different signatures.

Instead, pass count, maxsplit, and flags as keyword arguments.

For example, given the following snippet:

import re

re.split("pattern", "replacement", 1)

Instead, pass maxsplit as a keyword argument:

import re

re.split("pattern", "replacement", maxsplit=1)
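
The confusion is easy to trigger. A reader used to re.search("pattern", "string", re.IGNORECASE) might pass a flag in re.sub's third argument position, where it is silently bound to count instead (re.IGNORECASE has the integer value 2):

```python
import re

# Intended: case-insensitive substitution. Actual: re.IGNORECASE (== 2)
# is bound to `count`, so only the first two matches are replaced.
result = re.sub("a", "b", "aaa", re.IGNORECASE)

# What was meant:
intended = re.sub("a", "b", "aaa", flags=re.IGNORECASE)
```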

This rule is derived from flake8-bugbear.

Contributed by @charliermarsh.

New rule: unnecessary-literal-union (PYI030)

What does it do?

Checks for the presence of multiple literal types in a union.

Why does it matter?

Literal types accept multiple arguments, and it is clearer to specify the values as a single literal.

For example, given the following snippet:

from typing import Literal

field: Literal[1] | Literal[2]

Instead, use a single Literal type:

from typing import Literal

field: Literal[1, 2]
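
At runtime, the combined form carries both values as arguments of one Literal, which typing.get_args makes visible (a small illustrative sketch):

```python
from typing import Literal, get_args

Field = Literal[1, 2]

# A single Literal holds all allowed values in one place.
values = get_args(Field)
```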

This rule is derived from flake8-pyi.

Contributed by @zanieb.

New rule: type-name-incorrect-variance (PLC0105)

What does it do?

Checks for type names that do not match the variance of their associated type parameter.

Why does it matter?

PEP 484 recommends the use of the _co and _contra suffixes for covariant and contravariant type parameters, respectively (while invariant type parameters should not have any such suffix).

For example, given the following snippet:

from typing import TypeVar

T = TypeVar("T", covariant=True)
U = TypeVar("U", contravariant=True)
V_co = TypeVar("V_co")

Instead, use the _co and _contra suffixes:

from typing import TypeVar

T_co = TypeVar("T_co", covariant=True)
U_contra = TypeVar("U_contra", contravariant=True)
V = TypeVar("V")

This rule is derived from pylint.

Contributed by @tjkuson.

New rule: typevar-bivariance (PLC0131)

What does it do?

Checks for TypeVar and ParamSpec definitions in which the type is both covariant and contravariant.

Why does it matter?

By default, Python's generic types are invariant, but can be marked as either covariant or contravariant via the covariant and contravariant keyword arguments. While the API does allow you to mark a type as both covariant and contravariant, this is not supported by the type system, and should be avoided.

Instead, change the variance of the type to be either covariant, contravariant, or invariant. If you want to describe both covariance and contravariance, consider using two separate type parameters.

For context: an "invariant" generic type only accepts values that exactly match the type parameter; for example, list[Dog] accepts only list[Dog], not list[Animal] (superclass) or list[Bulldog] (subclass). This is the default behavior for Python's generic types.

A "covariant" generic type accepts subclasses of the type parameter; for example, Sequence[Animal] accepts Sequence[Dog]. A "contravariant" generic type accepts superclasses of the type parameter; for example, Callable[[Dog], None] accepts Callable[[Animal], None], since callables are contravariant in their argument types.
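
The contravariant case can be sketched with a callback (the Animal/Dog classes here are hypothetical, for illustration only):

```python
from typing import Callable


class Animal:
    pass


class Dog(Animal):
    pass


def describe(animal: Animal) -> str:
    # Works on any Animal, so it is safe wherever a Dog-handler is expected.
    return type(animal).__name__


def visit_kennel(handler: Callable[[Dog], str]) -> str:
    return handler(Dog())


# Callable is contravariant in its argument types: a Callable[[Animal], str]
# is accepted where a Callable[[Dog], str] is required.
result = visit_kennel(describe)
```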

For example, given the following snippet:

from typing import TypeVar

T = TypeVar("T", covariant=True, contravariant=True)

Instead, use a single variance:

from typing import TypeVar

T_co = TypeVar("T_co", covariant=True)
T_contra = TypeVar("T_contra", contravariant=True)

This rule is derived from pylint.

Contributed by @tjkuson.

Bug fixes