Ready-to-go cloud GPUs

  • Per-second billing
  • Rent fractions of an A100 to save costs
  • Run GPUs in virtual machines or in containers
  • Nvidia A100 GPUs 2x cheaper than on AWS or Google Cloud
  • Located in the EU
  • Cluster of Nvidia A100 GPUs

    Templates with popular machine learning tools

    TensorFlow · PyTorch · Keras · Caffe · Caffe2 · MXNet · ONNX · Jupyter · H2O.ai · fast.ai

    Choose the best GPU for deep learning in the cloud

    Card | Price per hour*** | FP32 (ML benchmarks score)** | FP64, TFLOPS (peak) | Memory, GB | CUDA cores
    A100 (by Puzl) | $1.60 | 1 | 9.7 | 40 (HBM2) | 13824
    Half of A100 (by Puzl, based on Nvidia MIG)**** | $0.80 (x0.5) | 0.57 | 4.85 (x0.5) | 20 (HBM2) | 5925 (x0.43)
    Quarter of A100 (by Puzl, based on Nvidia MIG)**** | $0.40 (x0.25) | 0.284 | 2.43 (x0.25) | 10 (HBM2) | 3950 (x0.29)
    A4000 | $0.46 (x0.29) | 0.347 | 0.6 (x0.06) | 16 (GDDR6) | 6144 (x0.44)
    A5000 | $0.84 (x0.52) | 0.546 | 0.87 (x0.09) | 24 (GDDR6) | 8192 (x0.59)
    Tesla T4 | $0.26 (x0.16) | 0.138 | 0.25 (x0.03) | 16 (GDDR6) | 2560 (x0.19)
    Tesla V100 | $1.17 (x0.73) | 0.415 | 7 (x0.72) | 16 (HBM2) | 5120 (x0.37)

    Values in parentheses are ratios relative to the A100.

    * Benchmarks were run on instances with 1 GPU, 16 GB RAM, 4 vCPUs, and fast storage with similar IOPS and bandwidth.

    ** Based on the average normalized GPU score across ResNet, Inception, and AlexNet benchmarks. Scores are normalized to the A100 (the A100 scores 1).

    *** The minimum on-demand market price per 1 GPU, taken from the public price lists of popular cloud and hosting providers. Information is current as of February 2022.

    **** Values for MIG-based GPUs are approximate. See Nvidia's MIG documentation for more details.
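
    The "Half of A100" and "Quarter of A100" entries are MIG slices of a full A100. As a rough illustration of how such a fraction is typically requested on Kubernetes, here is a minimal pod sketch assuming the standard NVIDIA device plugin with the "mixed" MIG strategy, where the half-A100 profile (3g.20gb) is exposed as nvidia.com/mig-3g.20gb; the exact resource name and image on Puzl may differ.

```yaml
# Hypothetical sketch: request the half-A100 (3g.20gb) MIG slice for one container.
# "nvidia.com/mig-3g.20gb" is the standard NVIDIA device-plugin resource name under
# the mixed MIG strategy; the name actually exposed by Puzl may differ.
apiVersion: v1
kind: Pod
metadata:
  name: mig-half-a100-demo
spec:
  restartPolicy: Never
  containers:
    - name: check-gpu
      image: pytorch/pytorch:latest   # illustrative image; any CUDA-enabled image works
      command: ["nvidia-smi", "-L"]   # list the visible GPU/MIG devices and exit
      resources:
        limits:
          nvidia.com/mig-3g.20gb: 1   # half of an A100: 20 GB of HBM2
```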

    Try our new service

    CI Runners for GitLab
    Request Nvidia A100 GPUs directly from your GitLab pipelines. Turn GitLab into a powerful MLOps platform! (See the sketch below.)
    Pay only for the resources consumed by your pipelines.
    Use Nvidia A100 GPUs in your GitLab CI/CD.
    No vendor lock-in: if you are unsatisfied, you can quickly switch to other options.
    GITLAB is a trademark of GitLab Inc. in the United States and other countries and regions.
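
    For example, a pipeline job that trains a model on one of these GPUs might look like the minimal sketch below. It is an assumption-laden illustration: the runner tag puzl-a100 and the image are placeholders, and the actual tags and settings come from how your Puzl CI runner is registered in GitLab.

```yaml
# Hypothetical .gitlab-ci.yml sketch: a training job routed to a GPU runner.
# The tag "puzl-a100" and the image are illustrative placeholders; use the tags
# and settings provided when you register your Puzl runner in GitLab.
stages:
  - train

train-model:
  stage: train
  image: pytorch/pytorch:latest      # any CUDA-enabled image
  tags:
    - puzl-a100                      # assumption: tag that routes this job to an A100 runner
  script:
    - nvidia-smi                     # confirm the GPU is visible inside the job
    - python train.py --epochs 10    # your training entrypoint
```

    With per-second billing, a job like this is charged only for the time the pipeline actually consumes resources.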
    Save with committed usage

    Commit to consistent usage of GPU-hours and get reserved GPUs at a discounted price.

    • 1-month commitment: -10%, $1.44 instead of $1.60 for 1 GPU-hour of Nvidia A100
    • 6-month commitment: -20%, $1.28 instead of $1.60 for 1 GPU-hour of Nvidia A100
    • 2-year commitment: -30%, $1.12 instead of $1.60 for 1 GPU-hour of Nvidia A100
    • 3-year commitment: -40%, $0.96 instead of $1.60 for 1 GPU-hour of Nvidia A100

    Running on professional server platforms with latest-generation AMD EPYC™ CPUs

    AMD EPYC™ 7502 3.3GHz

    *AMD, the AMD Arrow logo, AMD EPYC, and combinations thereof are trademarks of Advanced Micro Devices, Inc.

    AMD EPYC™ CPU

    With 2nd-generation AMD EPYC™ processors, you can allocate up to 196 vCPUs and 10 GPUs in a single instance.

    Flexible NVMe®-based data storage

    Fast, reliable storage for your datasets and trained models, extensible at runtime up to 4 TB.
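
    On Kubernetes this kind of runtime expansion is normally done by raising the requested size on a PersistentVolumeClaim. The sketch below assumes Puzl exposes an expandable NVMe StorageClass; the class name puzl-nvme is a placeholder.

```yaml
# Hypothetical PersistentVolumeClaim sketch. "puzl-nvme" is a placeholder StorageClass
# name; runtime expansion requires a class created with allowVolumeExpansion: true.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datasets
spec:
  storageClassName: puzl-nvme
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Gi    # to extend at runtime, patch this value upward (up to 4Ti here)
```

    With a CSI driver that supports online expansion, the volume grows without recreating the claim or the pod.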

    Fast ECC RAM

    DDR4 ECC 2.9 GHz memory with flexible allocation of up to 1 TB.

    Zero infrastructure

    Instant start
    Run Kubernetes pods in seconds (see the sketch below). Launch your code in containers or in virtual machines.
    Kubernetes out of the box
    You get full access to a separate Kubernetes namespace, pre-configured with all the security policies you need.
    Fair costs
    Pay only for the resources used by your running pods; there is no cluster maintenance fee.
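
    As an illustration, a minimal pod manifest matching the benchmark shape from the table footnote (1 GPU, 4 vCPUs, 16 GB RAM) is sketched below. The namespace name is a placeholder, and nvidia.com/gpu is the standard resource name of the NVIDIA Kubernetes device plugin, which may differ in your Puzl namespace.

```yaml
# Hypothetical pod sketch: 1 A100, 4 vCPUs, 16 GB RAM (the benchmark instance shape).
# "your-namespace" is a placeholder; "nvidia.com/gpu" is the standard NVIDIA
# device-plugin resource name and may differ on Puzl.
apiVersion: v1
kind: Pod
metadata:
  name: train-job
  namespace: your-namespace
spec:
  restartPolicy: Never
  containers:
    - name: trainer
      image: pytorch/pytorch:latest   # any CUDA-enabled image from the templates
      command: ["python", "train.py"] # your training entrypoint
      resources:
        requests:
          cpu: "4"
          memory: 16Gi
        limits:
          cpu: "4"
          memory: 16Gi
          nvidia.com/gpu: 1           # one full A100
```

    Because billing is per second and per pod, you pay only while this pod is actually running.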

    Easy to deploy

    No need to learn yet another cloud API: Kubernetes is a single, unified way to deploy your applications.