Driver >=545.23.08
TensorFlow 2.19.0 + CUDA 12.3
Generate a production-ready Dockerfile with verified compatibility
Configuration Summary
Framework
TensorFlow 2.19.0
CUDA Version
12.3
Python Support
3.10, 3.11, 3.12
Min Driver
>=545.23.08
Note: Compatible with older drivers
Install Command
pip install tensorflow[and-cuda]==2.19.0
What's in TensorFlow 2.19.0
- Current stable 2025 production release
- Robust CUDA 12.5 support with cuDNN 9.x
- Optimized for H100/H200 inference and training
- JAX backend support for enhanced performance
- Stable Keras 3 API for production use
Best Use Cases
- Production environments requiring proven stability
- Hopper GPU deployments (H100/H200)
- Enterprise ML pipelines with Keras 3
- Large-scale distributed training workloads
CUDA 12.3 Advantages
- Modern data center GPUs (A100, A10G)
- Good balance of features and stability
- Cloud platform compatibility
Generate Dockerfile
Configuration
Local GPU or CPU environment
Compatible with older drivers
Requires NVIDIA Driver >=545.23.08
Dockerfile
```dockerfile
# syntax=docker/dockerfile:1
# ^ Required for BuildKit cache mounts and advanced features

# Generated by DockerFit (https://tools.eastondev.com/docker)
# TENSORFLOW 2.19.0 + CUDA 12.3 | Python 3.10
# Multi-stage build for optimized image size

# ==============================================================================
# Stage 1: Builder - Install dependencies and compile
# ==============================================================================
FROM python:3.10-slim-bookworm AS builder

# Build arguments
ARG DEBIAN_FRONTEND=noninteractive

# Environment variables
ENV PYTHONUNBUFFERED=1
ENV PYTHONDONTWRITEBYTECODE=1

# Create virtual environment
ENV VIRTUAL_ENV=/opt/venv
RUN python -m venv $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"

# Upgrade pip
RUN pip install --no-cache-dir --upgrade pip setuptools wheel

# Install TensorFlow with pip CUDA packages (no system CUDA needed)
# This installs CUDA/cuDNN via pip, avoiding dual CUDA dependency
# (spec is quoted so the shell never glob-expands the brackets)
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install "tensorflow[and-cuda]==2.19.0"

# Install project dependencies
COPY requirements.txt .
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install -r requirements.txt

# ==============================================================================
# Stage 2: Runtime - Minimal production image
# ==============================================================================
FROM python:3.10-slim-bookworm AS runtime

# Labels
LABEL maintainer="Generated by DockerFit"
LABEL version="2.19.0"
LABEL description="TENSORFLOW 2.19.0 + CUDA 12.3"

# Environment variables
ENV PYTHONUNBUFFERED=1
ENV PYTHONDONTWRITEBYTECODE=1
ENV NVIDIA_VISIBLE_DEVICES=all
ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility

# Create non-root user for security
ARG USERNAME=appuser
ARG USER_UID=1000
ARG USER_GID=$USER_UID
RUN groupadd --gid $USER_GID $USERNAME \
    && useradd --uid $USER_UID --gid $USER_GID -m $USERNAME

# Copy virtual environment from builder
COPY --from=builder --chown=$USERNAME:$USERNAME /opt/venv /opt/venv
ENV VIRTUAL_ENV=/opt/venv
ENV PATH="$VIRTUAL_ENV/bin:$PATH"

# Set working directory
WORKDIR /app

# Copy application code
COPY --chown=$USERNAME:$USERNAME . .

# Switch to non-root user
USER $USERNAME

# Expose port
EXPOSE 8000

# Default command
CMD ["python", "main.py"]
```
Frequently Asked Questions
What NVIDIA driver version do I need?
For TensorFlow 2.19.0 with CUDA 12.3, you need NVIDIA driver version 545.23.08 or newer.
Run nvidia-smi to check your current driver version.
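To automate that check, the dotted driver version reported by nvidia-smi can be compared numerically against the minimum. A minimal sketch — the parsing helpers are illustrative, not part of any NVIDIA tooling:

```python
def parse_driver_version(version: str) -> tuple[int, ...]:
    """Turn a dotted NVIDIA driver version like '545.23.08' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def driver_is_sufficient(installed: str, minimum: str = "545.23.08") -> bool:
    """True if the installed driver meets or exceeds the minimum version."""
    return parse_driver_version(installed) >= parse_driver_version(minimum)

if __name__ == "__main__":
    import subprocess
    # Ask nvidia-smi for the driver version in machine-readable form
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=driver_version", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    installed = out.stdout.strip().splitlines()[0]
    verdict = "OK" if driver_is_sufficient(installed) else "too old for CUDA 12.3"
    print(f"Driver {installed}: {verdict}")
```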
How do I install TensorFlow with CUDA support?
TensorFlow 2.19.0 uses the following installation command:
pip install tensorflow[and-cuda]==2.19.0
Since TensorFlow 2.15, the CUDA and cuDNN libraries have been bundled via the tensorflow[and-cuda] extra, so no separate system-wide CUDA installation is required.
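Once the wheel is installed, the bundled CUDA and cuDNN versions can be read back with tf.sysconfig.get_build_info(). A small sketch — the formatting helper is illustrative, and the TensorFlow import is deferred so the helper works even where TensorFlow is absent:

```python
def summarize_build(info: dict) -> str:
    """Format the CUDA-related fields of a TensorFlow build-info mapping."""
    cuda = info.get("cuda_version", "unknown")
    cudnn = info.get("cudnn_version", "unknown")
    gpu = info.get("is_cuda_build", False)
    return f"CUDA build: {gpu} | CUDA {cuda} | cuDNN {cudnn}"

if __name__ == "__main__":
    # Requires tensorflow[and-cuda] to be installed in this environment
    import tensorflow as tf
    print(summarize_build(dict(tf.sysconfig.get_build_info())))
```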
How do I verify GPU access in the container?
After building your image, run:
docker run --gpus all your-image python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
This should show available GPU devices.