TL;DR: In collaboration with PyTorch, NVIDIA, and Quansight, we're releasing an experimental build of uv with support for "wheel variants". Variants make it easier to distribute and install Python packages built for diverse hardware or software setups, like GPUs or SIMD support.

You can try it today by installing PyTorch 2.8.0 with the new experimental build. "Variant-enabled" uv will automatically select the appropriate PyTorch build based on your machine's GPU:

# On Linux or macOS.
curl -LsSf https://astral.sh/uv/install.sh | INSTALLER_DOWNLOAD_URL=https://wheelnext.astral.sh sh
uv venv
uv pip install torch

# On Windows.
powershell -c { $env:INSTALLER_DOWNLOAD_URL = 'https://wheelnext.astral.sh'; irm https://astral.sh/uv/install.ps1 | iex }
uv venv
uv pip install torch

If you've ever had trouble installing PyTorch, or JAX, or FlashAttention, then you'll understand that it's often harder than it should be to distribute and install packages built for diverse software or hardware setups, like GPUs. The WheelNext project aims to solve this problem.

The desired end state for WheelNext is that uv pip install torch should "just work" by installing the appropriate version of PyTorch based on your hardware: the CUDA 12.8 build if you have a compatible NVIDIA GPU, the ROCm build for AMD GPUs, the XPU build for Intel hardware, etc.

It's like uv's --torch-backend=auto flag, but based on a generalized design that could be applied across the Python ecosystem.
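
For comparison, here's how that existing, PyTorch-specific flag is used today:

# Let uv pick the PyTorch index based on the detected accelerator.
uv pip install torch --torch-backend=auto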

Today, in collaboration with PyTorch, NVIDIA, and Quansight, we're releasing an experimental build of uv with support for wheel variants, one of the key pillars of the WheelNext design. You can try it out today to install the variant-enabled builds of PyTorch 2.8.0 via the uv installer.

# Install the variant-enabled build of uv.
curl -LsSf https://astral.sh/uv/install.sh | \
  INSTALLER_DOWNLOAD_URL=https://wheelnext.astral.sh sh

# Install PyTorch.
uv venv
uv pip install torch

Wheel variants are highly experimental and have not yet been proposed under Python's standards process. We plan to submit a proposal later this year; for now, the aim is to stress-test the initial design. As such, variant support is currently implemented in a proof-of-concept, standalone build of uv. To restore the default uv behavior, re-run the installer:

# On Linux or macOS.
curl -LsSf https://astral.sh/uv/install.sh | sh

# On Windows.
powershell -c "irm https://astral.sh/uv/install.ps1 | iex"

Motivation

Python's ability to interface with native code is one of its superpowers. NumPy, for example, is largely C code, exposed through a Python API. When you install, import, and call NumPy functions, you're ultimately running optimized C code from Python. Today, many of the most popular Python packages include C, C++, or Rust code, exposed through a Python API and distributed as a Python package.

Compiling these "extension modules", however, can be slow. (Imagine building PyTorch from scratch every time you need to install it!) In response, the Python packaging ecosystem has moved towards distributing pre-compiled, pre-built artifacts (known as "wheels") that you can unpack and run without any additional work.

Since these modules often contain native code, authors tend to build and distribute many different wheels for a single package version, to support different end-user architectures (ARM vs. x86), operating systems (Linux vs. macOS vs. Windows), Python versions (3.12 vs. 3.13), and more. The latest NumPy release, for example, includes 50 different wheels, with names like numpy-2.3.1-cp313-cp313-macosx_11_0_arm64.whl to encode those various dimensions.

When an installer like uv or pip is asked to install NumPy, it'll look at all the available wheels, cross-reference them against the user's system, and install a compatible build. For example, on my machine, uv would look for a wheel that's compatible with a macOS ARM machine.
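
As a rough sketch of that matching logic (uv's actual implementation is written in Rust and differs in detail), the packaging library on PyPI implements the same tag rules:

# Illustrative only: check whether a wheel's tags match the current machine.
from packaging.tags import sys_tags
from packaging.utils import parse_wheel_filename

filename = "numpy-2.3.1-cp313-cp313-macosx_11_0_arm64.whl"
name, version, _build, wheel_tags = parse_wheel_filename(filename)

# `sys_tags()` yields every (python, abi, platform) tag the current
# interpreter supports, ordered from most to least specific.
compatible = any(tag in wheel_tags for tag in sys_tags())
print(f"{name} {version} is {'compatible' if compatible else 'not compatible'}")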

By design, the set of dimensions on which a wheel can be classified is fixed: you can include the supported Python version (cp313), the operating system (macosx_11_0), and the architecture (arm64), but nothing more.

Over time, though, software and hardware have become more and more diverse.

GPUs are the best example here: when you build PyTorch, you're not only building for a specific architecture and operating system but also a specific accelerator ecosystem (e.g., NVIDIA CUDA, AMD ROCm, Intel XPU).

Yet the wheel specification, as it exists today, has no way to encode that information. If I see a PyTorch build named torch-2.4.0-cp310-cp310-linux_x86_64.whl, does that represent a CUDA build? If so, is it compatible with CUDA 12.8? A ROCm build? A CPU-only build? There's no way for an installer like uv to know.

By necessity, libraries like PyTorch and JAX have devised creative solutions to this problem, but they all have their limitations.
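
For example, a common workaround today is to hand-pick an accelerator-specific index (the URLs below are PyTorch's current CUDA 12.8 and CPU-only indexes; the right one depends on your machine and the PyTorch release):

# Hand-pick the CUDA 12.8 build...
uv pip install torch --index-url https://download.pytorch.org/whl/cu128
# ...or the CPU-only build.
uv pip install torch --index-url https://download.pytorch.org/whl/cpu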

Wheel variants

This is the problem that wheel variants are designed to solve: enabling package authors to encode these build "variants" through standardized packaging metadata that installers can understand.

For example, the PyTorch team already builds wheels for NVIDIA, AMD, and Intel hardware. With wheel variants, they could encode that information in the built artifact, and installers could automatically select the "right" build for a given user's machine, all through standards-compliant workflows.

We're continuing to iterate on the design of wheel variants, but the gist of it is as follows:

  • Any vendor can publish a "provider plugin" (as a Python package) to perform feature detection. For example, NVIDIA can publish a plugin that, when invoked, reports back on any installed NVIDIA GPUs and drivers.
  • Package authors can encode the "properties" necessary to install a given wheel variant. For example, PyTorch's CUDA-enabled wheel could encode that a CUDA-compatible GPU is required to install it.
  • Package installers can query for this information at resolve- and install-time. For example, uv would query for the list of available PyTorch variants, run the NVIDIA provider to detect the user's GPU state, and select the appropriate wheel by cross-referencing the two.
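
To make that concrete, here's a deliberately simplified, hypothetical sketch of the flow. The real provider-plugin interface, property names, and matching rules are still being designed and will differ:

# Hypothetical sketch only: not the actual WheelNext plugin API.

def detect_gpu_properties() -> dict[str, str]:
    """Stand-in for a vendor's provider plugin (e.g., NVIDIA's)."""
    # A real plugin would query the driver; hard-coded here for illustration.
    return {"gpu_vendor": "nvidia", "cuda_driver": "12.8"}

# Each wheel variant declares the properties it needs, most specific first.
variant_requirements = {
    "cu128": {"gpu_vendor": "nvidia", "cuda_driver": "12.8"},
    "cpu": {},  # fallback: no special requirements
}

# The installer cross-references detected properties against each variant.
detected = detect_gpu_properties()
for label, required in variant_requirements.items():
    if all(detected.get(key) == value for key, value in required.items()):
        print(f"selected variant: {label}")
        break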

In other words, running uv pip install torch from the WheelNext build should select the appropriate PyTorch wheel "automatically" by introspecting your GPU setup, just as it would for your operating system and architecture.

With today's release, the variant-enabled build of uv implements this design, and is capable of installing both the CPU-only and NVIDIA CUDA builds of PyTorch from the experimental PyTorch index:

# Install the variant-enabled build of uv.
curl -LsSf https://astral.sh/uv/install.sh | \
  INSTALLER_DOWNLOAD_URL=https://wheelnext.astral.sh sh

# Install PyTorch.
uv venv
uv pip install torch

The wheel variant design is very general — it's in no way specific to GPUs, and would apply equally well to CPU instruction sets (e.g., for SIMD support in libraries like Pillow) or any other "variant" that might be relevant to a built artifact. It's also not specific to PyTorch — it could be applied to any package that builds or wants to build for diverse hardware or software combinations.

But delivering variant-enabled PyTorch builds, with a variant-enabled package installer, is a big first step.

WheelNext

WheelNext is a joint effort between PyTorch, NVIDIA, Quansight, Astral, and a variety of other partners, aimed at evolving the Python packaging ecosystem to solve some of the outstanding problems around distributing packages that include native code.

While it's early days, we've been buoyed by how collaborative the effort has been from the start and the momentum that's been building over the past few months.

Going forward, we'll continue to refine the design and stress-test its applicability to more packages and packaging scenarios, with an eye towards submitting a PEP before the end of the year. (We expect the wheel variant proposal to undergo significant iteration up to and through the PEP process.)

Thank you to Jonathan Dekhtiar (NVIDIA), Eli Uriegas (Meta), Andrey Talman (Meta), Ralf Gommers (Quansight), Michał Górny (Quansight), their colleagues, and all other collaborators for your efforts in moving WheelNext forward. I'd also like to thank Konstantin Schütze from the Astral team for implementing WheelNext support in uv and for his contributions to the overall WheelNext design.