Getting Started
Welcome to Covalent Cloud! This guide will help you quickly deploy your compute applications, whether they are AI apps or anything else. With Covalent Cloud, you can run batch compute jobs, create scalable API endpoints, and orchestrate complex workflows, all directly from Python and without the burden of managing any infrastructure.
Installation and Initial Setup
Start by installing the Covalent Cloud SDK:
pip install -U covalent-cloud
Create an account, retrieve your API key from the dashboard, and set it in your local environment:
import covalent as ct
import covalent_cloud as cc
cc.save_api_key("your-api-key-here")
Environment Setup
Skip the hassle of managing Docker containers. With Covalent Cloud, you can directly create and manage Python environments. This approach is simpler, scalable, and keeps your environments always available across the platform. Create and reuse as many as you need with ease.
cc.create_env(
    name="sklearn-env",
    pip=["scikit-learn", "yfinance"],
    conda={"channels": ["anaconda"],
           "dependencies": ["matplotlib"]},
    wait=True,
)
For more on creating and managing environments, see our Environment Management Guide.
Define Compute Resources
Cloud Executors are modular specifications of the compute resources each task needs. Note that no resources are created when you define an executor; they are only provisioned when a function attached to that executor is run.
cpu_ex = cc.CloudExecutor(env="sklearn-env",
                          num_cpus=2,
                          memory="8GB",
                          time_limit="2 hours")
To utilize GPUs, specify the GPU type and the number of GPUs:
gpu_ex = cc.CloudExecutor(env="sklearn-env",
                          num_cpus=24,
                          num_gpus=4,
                          gpu_type="h100",
                          time_limit="30 minutes")
Learn more about defining resources and other GPU types in our Compute Resource Guide.
- Compute Jobs
- Inference API endpoints
- Orchestrating Compute Workflows
Running batch jobs on Covalent Cloud is straightforward. By adding a single decorator to your Python functions, you can dispatch and execute them on the specified compute resources. Click here to learn more about job submissions. Here's how to train a stock predictor using the environment and resources we defined earlier:
import numpy as np
import yfinance as yf
import sklearn
import sklearn.metrics
import sklearn.model_selection
import sklearn.svm
import covalent_cloud as cc
import covalent as ct
@ct.lattice(executor=cpu_ex, workflow_executor=cpu_ex)
@ct.electron(executor=cpu_ex)
def fit_svr_model_and_evaluate(ticker, n_chunks, C=1):
    # Fetch and prepare data
    data = yf.download(ticker, start='2022-01-01', end='2023-01-01')['Close'].values
    X, y = np.array([data[i:i+n_chunks] for i in range(len(data)-n_chunks)]), data[n_chunks:]
    # Split data into train and test sets
    X_train, X_test, y_train, y_test = sklearn.model_selection.train_test_split(X, y, test_size=0.2, shuffle=False)
    # Fit SVR model
    model = sklearn.svm.SVR(C=C).fit(X_train, y_train)
    # Predict and calculate accuracy
    predictions = model.predict(X_test)
    accuracy = sklearn.metrics.mean_squared_error(y_test, predictions)
    return model, accuracy
runid = cc.dispatch(fit_svr_model_and_evaluate)('AAPL', n_chunks=6, C=10)
result = cc.get_result(runid, wait=True)
result.result.load()
print(result.result.value)
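As an aside, the windowing step inside fit_svr_model_and_evaluate can be sketched in isolation. The toy price series below stands in for the yfinance download; each window of n_chunks consecutive prices becomes a feature row, and the next price becomes its target:

```python
import numpy as np

# Toy closing-price series standing in for the yfinance download
data = np.array([10.0, 11.0, 12.0, 13.0, 14.0, 15.0])
n_chunks = 3

# Each row of X is a window of n_chunks consecutive prices;
# each entry of y is the price immediately following that window.
X = np.array([data[i:i + n_chunks] for i in range(len(data) - n_chunks)])
y = data[n_chunks:]

print(X.shape)  # (3, 3)
print(y)        # [13. 14. 15.]
```

This is why the service later rejects inputs whose length differs from n_chunks: the model only ever sees feature vectors of exactly that size.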
Full Code
import numpy as np
import yfinance as yf
import sklearn
import sklearn.metrics
import sklearn.model_selection
import sklearn.svm
import covalent_cloud as cc
import covalent as ct
# Save your API key
cc.save_api_key("your-api-key-here")
# Define environment
cc.create_env(
    name="sklearn-env",
    pip=["scikit-learn", "yfinance"],
    conda={"channels": ["anaconda"],
           "dependencies": ["matplotlib"]},
    wait=True,
)
# Define executor
cpu_ex = cc.CloudExecutor(env="sklearn-env",
                          num_cpus=2,
                          memory="8GB",
                          time_limit="2 hours")
@ct.lattice(executor=cpu_ex, workflow_executor=cpu_ex)
@ct.electron(executor=cpu_ex)
def fit_svr_model_and_evaluate(ticker, n_chunks, C=1):
    # Fetch and prepare data
    data = yf.download(ticker, start='2022-01-01', end='2023-01-01')['Close'].values
    X, y = np.array([data[i:i+n_chunks] for i in range(len(data)-n_chunks)]), data[n_chunks:]
    # Split data into train and test sets
    X_train, X_test, y_train, y_test = sklearn.model_selection.train_test_split(X, y, test_size=0.2, shuffle=False)
    # Fit SVR model
    model = sklearn.svm.SVR(C=C).fit(X_train, y_train)
    # Predict and calculate accuracy
    predictions = model.predict(X_test)
    accuracy = sklearn.metrics.mean_squared_error(y_test, predictions)
    return model, accuracy
runid = cc.dispatch(fit_svr_model_and_evaluate)('AAPL', n_chunks=6, C=10)
result = cc.get_result(runid, wait=True)
result.result.load()
print(result.result.value)
Deploying scalable API services with Covalent Cloud is just as easy. You can define services and endpoints directly in Python, making it simple to serve models or other applications. Distributed REST endpoints and authentication are handled for you, so there is no need to write proxy services or stand up additional infrastructure. Once deployed, you get a ready-to-use, scalable endpoint with authentication for your production applications. To learn more, check here.
Example: Serving a Predictive Model
Here’s how to deploy a Scikit-Learn model as an API service:
import numpy as np
import yfinance as yf
import sklearn
import sklearn.svm
import covalent_cloud as cc
import covalent as ct
@cc.service(executor=cpu_ex, name='stock_prediction', auth=False)
def stock_prediction(ticker, n_chunks, C=1):
    data = yf.download(ticker, start='2022-01-01', end='2023-01-01')['Close'].values
    X, y = np.array([data[i:i+n_chunks] for i in range(len(data)-n_chunks)]), data[n_chunks:]
    # Fit SVR model
    model = sklearn.svm.SVR(C=C).fit(X, y)
    return {'model': model, 'n_chunks': n_chunks}

@stock_prediction.endpoint('/predict')
def predict(model, n_chunks, stock_price):
    if len(stock_price) != n_chunks:
        return {'error': 'Invalid input size'}
    return model.predict([stock_price]).tolist()

predictor = cc.deploy(stock_prediction)(ticker='AAPL', n_chunks=6, C=10)
To interact with the service, use the direct client from the deployment:
predictor.reload()  # Reload predictor to update the model after deployment
stock_price = [135.21000671, 135.27000427, 137.86999512, 141.11000061, 142.52999878, 141.86000061]  # 143.9600067138672
predictor.predict(stock_price=stock_price)[0]
# 143.10442415896003
Or use your favorite REST client:
curl -s -X POST \
  "https://fn.prod.covalent.xyz/0664c0f4af7d37dbf2a468a61/predict" \
  -d '{"stock_price": [135.21000671, 135.27000427, 137.86999512, 141.11000061, 142.52999878, 141.86000061]}'
# 143.10442415896003
Finally, make sure to tear down the deployment:
predictor.teardown()
For more information on deploying and managing API services, see our Service Deployment Guide.
Full Code
import numpy as np
import yfinance as yf
import sklearn
import sklearn.svm
import covalent_cloud as cc
import covalent as ct
# Save your API key
cc.save_api_key("your-api-key-here")
# Define environment
cc.create_env(
    name="sklearn-env",
    pip=["scikit-learn", "yfinance"],
    conda={"channels": ["anaconda"],
           "dependencies": ["matplotlib"]},
    wait=True,
)
# Define executor
cpu_ex = cc.CloudExecutor(env="sklearn-env",
                          num_cpus=2,
                          memory="8GB",
                          time_limit="2 hours")
@cc.service(executor=cpu_ex, name='stock_prediction', auth=False)
def stock_prediction(ticker, n_chunks, C=1):
    data = yf.download(ticker, start='2022-01-01', end='2023-01-01')['Close'].values
    X, y = np.array([data[i:i+n_chunks] for i in range(len(data)-n_chunks)]), data[n_chunks:]
    model = sklearn.svm.SVR(C=C).fit(X, y)
    return {'model': model, 'n_chunks': n_chunks}

@stock_prediction.endpoint('/predict')
def predict(model, n_chunks, stock_price):
    if len(stock_price) != n_chunks:
        return {'error': 'Invalid input size'}
    return model.predict([stock_price]).tolist()
predictor = cc.deploy(stock_prediction)(ticker='AAPL', n_chunks=6, C=10)
# Interact with the service using the direct client from the deployment
predictor.reload() # Reload predictor to update the model after deployment
stock_price = [135.21000671, 135.27000427, 137.86999512, 141.11000061, 142.52999878, 141.86000061]
print(predictor.predict(stock_price=stock_price)[0])
# 143.10442415896003
# Finally, teardown the deployment
predictor.teardown()
Covalent Cloud makes it easy to combine batch jobs and services into seamless workflows. This allows you to execute complex sequences of compute-dependent tasks efficiently and deploy the results as API services. Here's how you can use workflows and deployments together. To learn more, check here.
Let us train multiple Scikit-Learn models with different parameters, select the one with the best accuracy, and deploy it as an API service.
1. Define tasks
import numpy as np
import yfinance as yf
import sklearn
import sklearn.metrics
import sklearn.model_selection
import sklearn.svm
import covalent_cloud as cc
import covalent as ct
@ct.electron(executor=cpu_ex)
def fit_svr_model_and_evaluate(ticker, n_chunks, C=1):
    # Fetch and prepare data
    data = yf.download(ticker, start='2022-01-01', end='2023-01-01')['Close'].values
    X, y = np.array([data[i:i+n_chunks] for i in range(len(data)-n_chunks)]), data[n_chunks:]
    # Split data into train and test sets
    X_train, X_test, y_train, y_test = sklearn.model_selection.train_test_split(X, y, test_size=0.2, shuffle=False)
    # Fit SVR model
    model = sklearn.svm.SVR(C=C).fit(X_train, y_train)
    # Predict and calculate accuracy
    predictions = model.predict(X_test)
    accuracy = sklearn.metrics.mean_squared_error(y_test, predictions)
    return model, n_chunks, accuracy
@ct.electron(executor=cpu_ex)
def choose_best(model_accuracies):
    best_model = None
    best_n_chunks = None
    best_accuracy = float('inf')
    for model, n_chunks, accuracy in model_accuracies:
        if accuracy < best_accuracy:
            best_model = model
            best_accuracy = accuracy
            best_n_chunks = n_chunks
    return best_model, best_n_chunks, best_accuracy
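Since choose_best keeps the tuple with the lowest mean squared error, its selection logic is equivalent to a min() over the accuracy field. A quick sketch with dummy tuples (strings standing in for fitted models):

```python
# Dummy (model, n_chunks, accuracy) tuples; strings stand in for fitted models
model_accuracies = [
    ("model_a", 5, 2.4),
    ("model_b", 10, 1.1),
    ("model_c", 15, 3.7),
]

# Keep the entry with the lowest mean squared error, as choose_best does
best_model, best_n_chunks, best_accuracy = min(model_accuracies, key=lambda t: t[2])
print(best_model, best_n_chunks)  # model_b 10
```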
2. Define deployment service
@cc.service(executor=cpu_ex, name='stock_prediction', auth=False)
def deploy_model(model, n_chunks):
    return {'model': model, 'n_chunks': n_chunks}

@deploy_model.endpoint('/predict')
def predict(model, n_chunks, stock_price):
    if len(stock_price) != n_chunks:
        return {'error': 'Invalid input size'}
    return model.predict([stock_price]).tolist()
3. Define the workflow
@ct.lattice(executor=cpu_ex, workflow_executor=cpu_ex)
def workflow(ticker, n_chunks_list, Cs):
    model_accuracies = []
    for n_chunks in n_chunks_list:
        for C in Cs:
            model_accuracies.append(fit_svr_model_and_evaluate(ticker, n_chunks, C))
    best_model, best_n_chunks, best_accuracy = choose_best(model_accuracies)
    return deploy_model(best_model, best_n_chunks)
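The nested loops in the workflow enumerate the full parameter grid, so each (n_chunks, C) pair becomes its own training task that Covalent can schedule independently. A minimal sketch of the grid size for the values dispatched below:

```python
from itertools import product

n_chunks_list = [5, 10, 15]
Cs = [0.1, 1, 10]

# Every (n_chunks, C) combination trains its own model
combos = list(product(n_chunks_list, Cs))
print(len(combos))  # 9
```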
4. Run the workflow
runid = cc.dispatch(workflow)(ticker='AAPL', n_chunks_list=[5, 10, 15], Cs=[0.1, 1, 10])
result = cc.get_result(runid, wait=True)
result.result.load()
print(result.result.value)
Finally, let's test the workflow:
predictor = result.result.value
stock_price = [135.21000671, 135.27000427, 137.86999512, 141.11000061, 142.52999878]
predictor.predict(stock_price=stock_price)[0]
# 143.53946728925536
# or use a REST curl command:
# !curl -s -X POST \
#   "https://fn.prod.covalent.xyz/0664c156df7d37dbf2a468a63/predict" \
#   -d '{"stock_price": [135.21000671, 135.27000427, 137.86999512, 141.11000061, 142.52999878]}'
Full Code
import numpy as np
import yfinance as yf
import sklearn
import sklearn.metrics
import sklearn.model_selection
import sklearn.svm
import covalent_cloud as cc
import covalent as ct
# Save your API key
cc.save_api_key("your-api-key-here")
# Define environment
cc.create_env(
    name="sklearn-env",
    pip=["scikit-learn", "yfinance"],
    conda={"channels": ["anaconda"],
           "dependencies": ["matplotlib"]},
    wait=True,
)
# Define executor
cpu_ex = cc.CloudExecutor(env="sklearn-env",
                          num_cpus=2,
                          memory="8GB",
                          time_limit="2 hours")
@ct.electron(executor=cpu_ex)
def fit_svr_model_and_evaluate(ticker, n_chunks, C=1):
    # Fetch and prepare data
    data = yf.download(ticker, start='2022-01-01', end='2023-01-01')['Close'].values
    X, y = np.array([data[i:i+n_chunks] for i in range(len(data)-n_chunks)]), data[n_chunks:]
    # Split data into train and test sets
    X_train, X_test, y_train, y_test = sklearn.model_selection.train_test_split(X, y, test_size=0.2, shuffle=False)
    # Fit SVR model
    model = sklearn.svm.SVR(C=C).fit(X_train, y_train)
    # Predict and calculate accuracy
    predictions = model.predict(X_test)
    accuracy = sklearn.metrics.mean_squared_error(y_test, predictions)
    return model, n_chunks, accuracy
@ct.electron(executor=cpu_ex)
def choose_best(model_accuracies):
    best_model = None
    best_n_chunks = None
    best_accuracy = float('inf')
    for model, n_chunks, accuracy in model_accuracies:
        if accuracy < best_accuracy:
            best_model = model
            best_accuracy = accuracy
            best_n_chunks = n_chunks
    return best_model, best_n_chunks, best_accuracy
@cc.service(executor=cpu_ex, name='stock_prediction', auth=False)
def deploy_model(model, n_chunks):
    return {'model': model, 'n_chunks': n_chunks}

@deploy_model.endpoint('/predict')
def predict(model, n_chunks, stock_price):
    if len(stock_price) != n_chunks:
        return {'error': 'Invalid input size'}
    return model.predict([stock_price]).tolist()
@ct.lattice(executor=cpu_ex, workflow_executor=cpu_ex)
def workflow(ticker, n_chunks_list, Cs):
    model_accuracies = []
    for n_chunks in n_chunks_list:
        for C in Cs:
            model_accuracies.append(fit_svr_model_and_evaluate(ticker, n_chunks, C))
    best_model, best_n_chunks, best_accuracy = choose_best(model_accuracies)
    return deploy_model(best_model, best_n_chunks)
runid = cc.dispatch(workflow)(ticker='AAPL', n_chunks_list=[5, 10, 15], Cs=[0.1, 1, 10])
result = cc.get_result(runid, wait=True)
result.result.load()
print(result.result.value)
# Test the workflow
predictor = result.result.value
stock_price = [135.21000671, 135.27000427, 137.86999512, 141.11000061, 142.52999878]
print(predictor.predict(stock_price=stock_price)[0])
# 143.53946728925536
# or use REST Curl command
# !curl -s -X POST \
# "https://fn.prod.covalent.xyz/0664c156df7d37dbf2a468a63/predict" \
# -d '{"stock_price":[135.21000671, 135.27000427, 137.86999512, 141.11000061, 142.52999878]}'