Using Databricks

The Orchestrate SDK integrates easily with the Databricks platform, enabling analytics workflows and a Medallion Architecture.

Getting Started

To get started, first create an account on Databricks.

Go to the Workspace tab and create a “Notebook”. (You can optionally also create a folder for your Orchestrate work.)

Installing the SDK

Add a code cell and install the Orchestrate Python SDK in your notebook using the %pip magic command.

Install SDK in Databricks
%pip install orchestrate-api

Add your API Key

You will need to provide your API Key to Databricks. In production, you will want to store your key securely; for example, you might use the Databricks Secrets storage. For testing purposes, you can specify your API key as a Python variable.

Reference API Key for Testing
API_KEY = "YOUR_API_KEY" # WARNING: This is for example purposes only. Store your key securely in production.
API_KEY = "YOUR_API_KEY" # WARNING: This is for example purposes only. Store your key securely in production.

Analyzing Data with the Notebook

You can call Python SDK methods easily from the notebook:

Calling SDK Methods
from orchestrate import OrchestrateApi

api = OrchestrateApi(api_key=API_KEY)
result = api.terminology.classify_condition(code="119981000146107", system="SNOMED")

From there, you can analyze the data, create tabular data displays, and more. Databricks notebooks let you mix Python code with Markdown, SQL, R, and Scala, and you can import external libraries. See the Databricks notebook documentation for more information.
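For example, a minimal sketch (assuming pandas is available on your cluster and using the SNOMED code from the example above as sample data) collects classification results into a DataFrame and renders it with Databricks' built-in display() function:

Display Classification Results as a Table
import pandas as pd

# Sample SNOMED codes for illustration only -- replace or extend with your own.
codes = ["119981000146107"]

rows = []
for code in codes:
    # The exact shape of the returned result depends on the Orchestrate SDK;
    # here it is stored as-is in the DataFrame.
    result = api.terminology.classify_condition(code=code, system="SNOMED")
    rows.append({"code": code, "classification": result})

display(pd.DataFrame(rows))  # Databricks renders this as an interactive table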

Select “Run All” to execute all the notebook cells (including the Orchestrate SDK installation and API Key definition).