Deploying Services from Workflows

Function services integrate seamlessly with Covalent workflows. Calling a @cc.service-decorated function inside a @ct.lattice adds a task to the workflow graph that deploys the service and waits for it to reach an active state.

Tip

You can also deploy standalone function services, without Covalent workflows.

In-workflow deployments are especially useful for hosting custom fine-tuned models. Suppose we have the following workflow task (a.k.a. an "electron"):

import covalent as ct
import covalent_cloud as cc

ft_task_executor = cc.CloudExecutor(
    env="llm-fine-tuning",
    gpu_type=cc.cloud_executor.GPU_TYPE.A100,
    num_gpus=1,
    memory="64GB",
    num_cpus=2,
    time_limit="2 hours and 30 minutes",
)

@ct.electron(executor=ft_task_executor)
def fine_tune_model(model_id, dataset_id):
    # run fine-tuning code here
    # save model at some path
    # ...
    return fine_tuned_model_path

For example, suppose llm_service (used below) is a function service like the one defined here. Simply calling llm_service(new_model_path) inside the lattice will deploy the fine-tuned model created by the preceding task.
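
For reference, llm_service might be defined roughly as follows. This is only a sketch: the executor settings, environment name, model-loading code, and the /generate endpoint are illustrative assumptions, not part of this workflow.

ft_service_executor = cc.CloudExecutor(
    env="llm-fine-tuning",
    gpu_type=cc.cloud_executor.GPU_TYPE.A100,
    num_gpus=1,
    memory="64GB",
    num_cpus=4,
    time_limit="3 hours",
)

@cc.service(executor=ft_service_executor, name="Fine-Tuned LLM Service")
def llm_service(model_path):
    # Initializer: load the fine-tuned model from the attached volume.
    # (Illustrative; any framework or loading logic can be used here.)
    from transformers import pipeline
    pipe = pipeline("text-generation", model=model_path)
    return {"pipe": pipe}

@llm_service.endpoint("/generate")
def generate(pipe, prompt=None):
    # Generate a completion for the given prompt.
    return pipe(prompt)

With such a service in scope, the lattice below chains fine-tuning and deployment: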

cpu_executor = cc.CloudExecutor(env="llm-fine-tuning")

@ct.lattice(executor=cpu_executor, workflow_executor=cpu_executor)
def fine_tune_and_deploy_service(model_id, dataset_id):

    # Step 1: Fine-tune the model.
    new_model_path = fine_tune_model(model_id, dataset_id)

    # Step 2: Deploy the fine-tuned model as a function service.
    ft_llm_client = llm_service(new_model_path)

    # Step 3: Return the service client to the user.
    return ft_llm_client

The workflow can then be dispatched as usual. Note that we specify a volume to store the fine-tuned model.

Caution

It’s important to include the same volume used by the service when dispatching this workflow! See here for more information.
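
The volume itself can be created (or retrieved, if it already exists) with cc.volume; the name below is an arbitrary example.

# Persistent volume where the fine-tuned model weights are stored.
my_volume = cc.volume("fine-tuned-models")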

dispatch_id = cc.dispatch(fine_tune_and_deploy_service, volume=my_volume)(
    model_id="NousResearch/Meta-Llama-3-8B",
    dataset_id="NousResearch/json-mode-eval",
)

res = cc.get_result(dispatch_id, wait=True)
res.result.load()

# Extract client from workflow result.
fine_tuned_llm_client = res.result.value
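
The client exposes the service's endpoints as methods. Assuming the /generate endpoint from the sketch above, usage might look like this (the prompt and the teardown call are illustrative):

# Query the deployed fine-tuned model via its endpoint.
response = fine_tuned_llm_client.generate(prompt="Summarize JSON mode in one sentence.")
print(response)

# Tear down the service when it is no longer needed.
fine_tuned_llm_client.teardown()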

Neither the cc.deploy() nor the cc.get_deployment() utility is necessary inside Covalent workflows. Calling the service function triggers deployment automatically, and the workflow always waits for the service to reach an active state before returning.
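
For comparison, deploying the same service outside a workflow would use those utilities explicitly. A rough sketch, where existing_model_path is a placeholder for a model already saved on the volume, and assuming cc.deploy accepts a volume argument analogous to cc.dispatch:

# Standalone deployment (no workflow involved).
deployment = cc.deploy(llm_service, volume=my_volume)(existing_model_path)

# Wait until the service reaches an active state.
deployment = cc.get_deployment(deployment, wait=True)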