
PyTorch 2.9.1 + CUDA 12.6

Generate a production-ready Dockerfile with verified compatibility

Configuration Summary

Framework: PyTorch 2.9.1
CUDA Version: 12.6
Python Support: 3.10, 3.11, 3.12
Min Driver: >=560.35.05

Note: Stable production environment, well suited to Hopper/Ampere architectures

What's in PyTorch 2.9.1

  • Official CUDA 12.8 (cu128) wheels with Blackwell (10.0) native support
  • Python 3.10-3.12 support (3.9 deprecated)
  • Enhanced Hopper (H100/H200) and Blackwell (B200/GB200) architecture optimizations
  • cuDNN 9.x performance improvements with Ubuntu 24.04
  • Advanced torch.compile() with improved inductor optimizations (see the sketch after this list)
  • Enhanced FlexAttention API for efficient custom attention patterns

Performance: Up to 3x faster on Blackwell GPUs compared to PyTorch 2.4
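
A minimal sketch of the torch.compile() path mentioned above (the tiny MLP and shapes are placeholders, not from any official example; assumes a CUDA-capable GPU):

import torch
import torch.nn as nn

# Placeholder model; any ordinary nn.Module works the same way.
model = nn.Sequential(nn.Linear(1024, 4096), nn.GELU(), nn.Linear(4096, 1024)).cuda()

# torch.compile wraps the module; the first call triggers inductor compilation,
# later calls reuse the generated kernels.
compiled = torch.compile(model, mode="max-autotune")

x = torch.randn(8, 1024, device="cuda")
with torch.inference_mode():
    out = compiled(x)
print(out.shape)  # torch.Size([8, 1024])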

Best For

PyTorch 2.9.1 with CUDA 12.8 (cu128)

  • Blackwell B200/GB200 GPU deployments (latest 2025 hardware)
  • Hopper H100/H200 production inference and training
  • Modern Python 3.11/3.12 environments with Ubuntu 24.04
  • LLM inference requiring maximum CUDA 12.8 performance

CUDA 12.6

  • Stable production environments for Hopper/Ampere
  • Ubuntu 22.04 deployments with proven stability
  • Alternative when the CUDA 12.8 driver requirements are not met
Note: No native Blackwell support; consider CUDA 12.8 for B200/GB200
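
If you are unsure whether a machine needs the cu128 build, a quick capability check is enough. This is a rough sketch that assumes PyTorch is already installed and uses compute capability 10.0 as the Blackwell cutoff mentioned above:

import torch

if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    print(f"{torch.cuda.get_device_name(0)}: compute capability {major}.{minor}")
    if (major, minor) >= (10, 0):
        print("Blackwell-class GPU detected: prefer the cu128 wheels.")
    else:
        print("cu126 wheels cover this GPU (Ampere/Ada/Hopper).")
else:
    print("No CUDA device visible to PyTorch.")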

Generate Dockerfile

Configuration

Local GPU or CPU environment

Stable production environment, well suited to Hopper/Ampere architectures

Requires NVIDIA Driver >=560.35.05
Dockerfile
# syntax=docker/dockerfile:1
# ^ Required for BuildKit cache mounts and advanced features

# Generated by DockerFit (https://tools.eastondev.com/docker)
# PYTORCH 2.9.1 + CUDA 12.6 | Python 3.11
# Multi-stage build for optimized image size

# ==============================================================================
# Stage 1: Builder - Install dependencies and compile
# ==============================================================================
FROM nvidia/cuda:12.6.3-cudnn-devel-ubuntu22.04 AS builder

# Build arguments
ARG DEBIAN_FRONTEND=noninteractive

# Environment variables
ENV PYTHONUNBUFFERED=1
ENV PYTHONDONTWRITEBYTECODE=1
ENV TORCH_CUDA_ARCH_LIST="8.0;8.6;8.9;9.0"

# Install Python 3.11 from deadsnakes PPA (Ubuntu 22.04)
RUN apt-get update && apt-get install -y --no-install-recommends \
        software-properties-common \
    && add-apt-repository -y ppa:deadsnakes/ppa \
    && apt-get update && apt-get install -y --no-install-recommends \
        python3.11 \
        python3.11-venv \
        python3.11-dev \
        build-essential \
        git \
    && rm -rf /var/lib/apt/lists/*

# Create virtual environment
ENV VIRTUAL_ENV=/opt/venv
RUN python3.11 -m venv $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"

# Upgrade pip
RUN pip install --no-cache-dir --upgrade pip setuptools wheel

# Install PyTorch with BuildKit cache
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install torch torchvision torchaudio \
        --index-url https://download.pytorch.org/whl/cu126

# Install project dependencies
COPY requirements.txt .
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install -r requirements.txt

# ==============================================================================
# Stage 2: Runtime - Minimal production image
# ==============================================================================
FROM nvidia/cuda:12.6.3-cudnn-runtime-ubuntu22.04 AS runtime

# Labels
LABEL maintainer="Generated by DockerFit"
LABEL version="2.9.1"
LABEL description="PYTORCH 2.9.1 + CUDA 12.6"

# Environment variables
ENV PYTHONUNBUFFERED=1
ENV PYTHONDONTWRITEBYTECODE=1
ENV NVIDIA_VISIBLE_DEVICES=all
ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility

# Install Python 3.11 runtime from deadsnakes PPA (Ubuntu 22.04)
RUN apt-get update && apt-get install -y --no-install-recommends \
        software-properties-common \
    && add-apt-repository -y ppa:deadsnakes/ppa \
    && apt-get update && apt-get install -y --no-install-recommends \
        python3.11 \
        libgomp1 \
    && apt-get remove -y software-properties-common \
    && apt-get autoremove -y \
    && rm -rf /var/lib/apt/lists/*

# Create non-root user for security
ARG USERNAME=appuser
ARG USER_UID=1000
ARG USER_GID=$USER_UID
RUN groupadd --gid $USER_GID $USERNAME \
    && useradd --uid $USER_UID --gid $USER_GID -m $USERNAME

# Copy virtual environment from builder
COPY --from=builder --chown=$USERNAME:$USERNAME /opt/venv /opt/venv
ENV VIRTUAL_ENV=/opt/venv
ENV PATH="$VIRTUAL_ENV/bin:$PATH"

# Set working directory
WORKDIR /app

# Copy application code
COPY --chown=$USERNAME:$USERNAME . .

# Switch to non-root user
USER $USERNAME

# Expose port
EXPOSE 8000

# Default command
CMD ["python", "main.py"]
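
A typical build-and-run sequence for this Dockerfile (the pytorch-app tag is just a placeholder); BuildKit must be enabled for the cache mounts to work:

DOCKER_BUILDKIT=1 docker build -t pytorch-app .
docker run --gpus all -p 8000:8000 pytorch-app

On recent Docker releases BuildKit is the default builder, so the DOCKER_BUILDKIT=1 prefix is usually optional.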

Frequently Asked Questions

What NVIDIA driver version do I need?

For PyTorch 2.9.1 with CUDA 12.6, you need NVIDIA driver version 560.35.05 or higher.

Run nvidia-smi to check your current driver version.
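
If you only want the version string (for scripting), nvidia-smi can query it directly:

nvidia-smi --query-gpu=driver_version --format=csv,noheader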

Which Python version should I use?

PyTorch 2.9.1 supports Python versions: 3.10, 3.11, 3.12.

We recommend using Python 3.11 for the best balance of compatibility and features.

How do I verify GPU access in the container?

After building your image, run:

docker run --gpus all your-image python -c "import torch; print(torch.cuda.is_available())"

This should print True if the GPU is accessible.
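
For a slightly more detailed check (your-image is again a placeholder for your built tag), you can also print the device name and the CUDA version the installed wheels target:

docker run --gpus all your-image python -c "import torch; print(torch.cuda.get_device_name(0), torch.version.cuda)"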