Recommended Driver >=555.42.06

TensorFlow 2.19.0 + CUDA 12.5

Generate a production-ready Dockerfile with verified compatibility

Configuration Summary

Framework
TensorFlow 2.19.0
CUDA Version
12.5
Python Support
3.10, 3.11, 3.12
Min Driver
>=555.42.06

Note: Current stable release; the pip CUDA packages are recommended

Install Command
pip install tensorflow[and-cuda]==2.19.0
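To confirm that the pinned version actually landed in your environment, a small helper like the following can be used. This is not part of the generated output; the function name is illustrative, and it simply queries package metadata without importing TensorFlow:

```python
from importlib import metadata

def installed_tf_version():
    """Return the installed TensorFlow version string, or None if it is absent."""
    try:
        return metadata.version("tensorflow")
    except metadata.PackageNotFoundError:
        return None

# After `pip install tensorflow[and-cuda]==2.19.0`, this should print "2.19.0".
print(installed_tf_version())
```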

What's in TensorFlow 2.19.0

  • Current stable 2025 production release
  • Robust CUDA 12.5 support with cuDNN 9.x
  • Optimized for H100/H200 inference and training
  • Multi-backend Keras 3 (including a JAX backend) for enhanced performance
  • Stable Keras 3 API for production use

Best For

Use Cases

  • Production environments requiring proven stability
  • Hopper GPU deployments (H100/H200)
  • Enterprise ML pipelines with Keras 3
  • Large-scale distributed training workloads

CUDA 12.5 Advantages

  • Latest TensorFlow with newest CUDA features
  • Full support for Hopper (H100) and Ada Lovelace GPUs
  • Maximum XLA compilation performance

Generate Dockerfile

Configuration

Local GPU or CPU environment

Current stable release; the pip CUDA packages are recommended

Requires NVIDIA Driver >=555.42.06
Dockerfile
# syntax=docker/dockerfile:1
# ^ Required for BuildKit cache mounts and advanced features

# Generated by DockerFit (https://tools.eastondev.com/docker)
# TENSORFLOW 2.19.0 + CUDA 12.5 | Python 3.10
# Multi-stage build for optimized image size

# ==============================================================================
# Stage 1: Builder - Install dependencies and compile
# ==============================================================================
FROM python:3.10-slim-bookworm AS builder

# Build arguments
ARG DEBIAN_FRONTEND=noninteractive

# Environment variables
ENV PYTHONUNBUFFERED=1
ENV PYTHONDONTWRITEBYTECODE=1

# Create virtual environment
ENV VIRTUAL_ENV=/opt/venv
RUN python -m venv $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"

# Upgrade pip
RUN pip install --no-cache-dir --upgrade pip setuptools wheel

# Install TensorFlow with pip CUDA packages (no system CUDA needed)
# This installs CUDA/cuDNN via pip, avoiding dual CUDA dependency
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install tensorflow[and-cuda]==2.19.0

# Install project dependencies
COPY requirements.txt .
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install -r requirements.txt

# ==============================================================================
# Stage 2: Runtime - Minimal production image
# ==============================================================================
FROM python:3.10-slim-bookworm AS runtime

# Labels
LABEL maintainer="Generated by DockerFit"
LABEL version="2.19.0"
LABEL description="TENSORFLOW 2.19.0 + CUDA 12.5"

# Environment variables
ENV PYTHONUNBUFFERED=1
ENV PYTHONDONTWRITEBYTECODE=1
ENV NVIDIA_VISIBLE_DEVICES=all
ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility

# Create non-root user for security
ARG USERNAME=appuser
ARG USER_UID=1000
ARG USER_GID=$USER_UID
RUN groupadd --gid $USER_GID $USERNAME \
    && useradd --uid $USER_UID --gid $USER_GID -m $USERNAME

# Copy virtual environment from builder
COPY --from=builder --chown=$USERNAME:$USERNAME /opt/venv /opt/venv
ENV VIRTUAL_ENV=/opt/venv
ENV PATH="$VIRTUAL_ENV/bin:$PATH"

# Set working directory
WORKDIR /app

# Copy application code
COPY --chown=$USERNAME:$USERNAME . .

# Switch to non-root user
USER $USERNAME

# Expose port
EXPOSE 8000

# Default command
CMD ["python", "main.py"]
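If you deploy the image with Docker Compose rather than plain docker run --gpus all, GPU access can be requested in the service definition. A minimal sketch, in which the service and image names are placeholders:

```yaml
services:
  app:
    image: your-image          # built from the Dockerfile above
    ports:
      - "8000:8000"            # matches the EXPOSE in the Dockerfile
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all       # or an integer to limit the number of GPUs
              capabilities: [gpu]
```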

Frequently Asked Questions

What NVIDIA driver version do I need?

For TensorFlow 2.19.0 with CUDA 12.5, you need NVIDIA driver version 555.42.06 or higher.

Run nvidia-smi to check your current driver version.
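nvidia-smi prints the driver version as a dotted string (e.g. 555.42.06), and comparing such strings lexically is unreliable (for instance, "560.35.03" sorts before "555.42.06" as text is not guaranteed). A small sketch of a numeric comparison, assuming the dotted format above; the helper names are illustrative:

```python
def parse_driver_version(version: str) -> tuple:
    """Split a dotted NVIDIA driver version like '555.42.06' into integers."""
    return tuple(int(part) for part in version.split("."))

def meets_minimum(current: str, minimum: str = "555.42.06") -> bool:
    """True if the installed driver satisfies the minimum for CUDA 12.5."""
    return parse_driver_version(current) >= parse_driver_version(minimum)

print(meets_minimum("560.35.03"))  # → True
print(meets_minimum("550.54.14"))  # → False
```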

How do I install TensorFlow with CUDA support?

TensorFlow 2.19.0 uses the following installation command:

pip install tensorflow[and-cuda]==2.19.0

Since TensorFlow 2.15, the CUDA and cuDNN libraries can be installed as pip packages via the tensorflow[and-cuda] extra, so no system-wide CUDA toolkit is required.

How do I verify GPU access in the container?

After building your image, run:

docker run --gpus all your-image python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"

This should show available GPU devices.