
GPU Access

Covalent Cloud provides access to a variety of GPUs, as tabulated below.

Executing tasks on GPU resources requires assigning a GPU-equipped cloud executor to the tasks in question. Cloud executors specify a modular set of resources like vCPUs, GPUs, memory, and storage, as well as the software environment (i.e. Python version, Python packages, and any other libraries).

Here’s an example of a cloud executor that specifies 4x H100 GPUs.

import covalent as ct
import covalent_cloud as cc

gpu_executor = cc.CloudExecutor(
    num_cpus=24,
    num_gpus=4,
    gpu_type="h100",
    memory=49152,     # 48 GB of RAM, specified in MB
    time_limit=3600,  # 1 hour, specified in seconds
)

@ct.electron(executor=gpu_executor)
def train_model(model_id, data, parameters):
    # Your model training code here
    ...

GPU Types

The following types of GPUs are currently supported in Covalent Cloud. Note that memory refers to normal RAM, whereas vRAM refers to a GPU’s internal memory.

| gpu_type | GPU Type | vRAM per GPU | Max num_cpus | num_gpus | Max memory |
|---|---|---|---|---|---|
| "h100" | H100 80GB | 80 GB | 252 | 1, 2, 4, 8 | 1440 GB |
| "a100-80g" | A100 80GB | 80 GB | 252 | 1, 2, 4, 8 | 960 GB |
| "v100" | V100 | 16 GB | 96 | 1, 4, 8 | 825 GB |
| "l40" | L40 | 48 GB | 252 | 1, 2, 4, 8 | 480 GB |
| "a10" | A10G | 24 GB | 192 | 1, 4, 8 | 825 GB |
| "a6000" | RTX A6000 | 48 GB | 128 | 1, 2, 4, 8 | 480 GB |
| "a4000" | RTX A4000 | 16 GB | 64 | 1, 2, 4, 8, 10 | 240 GB |
| "a5000" | RTX A5000 | 24 GB | 64 | 1, 2, 4, 8 | 240 GB |
| "t4" | T4 | 16 GB | 96 | 1, 4, 8 | 412 GB |

Each GPU type is priced differently. See here for up-to-date GPU pricing.

Cloud executor parameters

Each parameter in a CloudExecutor instance specifies a relevant resource, whether hardware, memory, or time. With the exception of gpu_type, the value of each parameter reflects the amount of that resource that will be available to an electron assigned the given executor.

| name | type | default value | interpretation of default value |
|---|---|---|---|
| num_cpus | int | 1 | task execution uses 1 vCPU |
| memory | int or str | 1024 | task execution uses 1024 MB of RAM |
| num_gpus | int | 0 | task execution uses no GPUs |
| gpu_type | str | '' | no GPU type specified (must be set when num_gpus > 0) |
| env | str | 'default' | task executes in the user's default software environment |
| time_limit | int, str, or timedelta | 1800 | task execution will be cancelled after 30 minutes |
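To illustrate how the typed time_limit values relate, the standalone helper below (not part of the covalent_cloud API) normalizes the int and timedelta forms to seconds; per the table, the integer default of 1800 corresponds to 30 minutes. The accepted string format is not shown here, so the sketch leaves it out:

```python
from datetime import timedelta

def time_limit_seconds(time_limit) -> int:
    """Normalize an int (seconds) or timedelta time_limit to whole seconds.

    Illustrative only; covalent_cloud performs its own parsing, including
    a string form whose exact format is not covered by this sketch.
    """
    if isinstance(time_limit, timedelta):
        return int(time_limit.total_seconds())
    if isinstance(time_limit, int):
        return time_limit
    raise TypeError(f"Unsupported time_limit type: {type(time_limit).__name__}")

print(time_limit_seconds(1800))                # 1800 -> the 30-minute default
print(time_limit_seconds(timedelta(hours=2)))  # 7200
```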