How it Works

Telepath turns database data into fully-deployed machine learning models. The following overview of the product lifecycle explains how each stage of the process works.

End-to-End Lifecycle

Creating and using predictive models with Telepath is an end-to-end process that begins with your raw data and ends with a fully-hosted API endpoint. It's helpful to think of the process as three sequential stages:

  1. Declare Resources - the stage where you use code to define the model you want to create and which data should be used to train it.
  2. Model Training - the stage where the training data is read from your database and the machine learning model is auto-generated by Telepath.
  3. Predictions - the stage where you make predictions with your model via API.

Read more about each stage below.

Declare Resources

In Telepath, “Resources” are the declarative code snippets that define the type of model you want to create and the data that should be used to train it. You create Resources by writing a small amount of code with the Telepath SDK.

You can think of the Resources code as a recipe that Telepath follows whenever it trains a model. The “recipe” analogy fits because a Resource, like a recipe, only describes an item to be created; it is not itself an instance of that item. And every time the recipe is used, a new and separate instance of the item is created. For example, when you define a Pipeline Resource in Telepath, you're defining the query that Telepath should use to read data from your database. The Pipeline describes how the data should be read, but it doesn't actually contain any data. And each time Telepath uses the Pipeline to read data, a separate copy of the data is extracted.
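The recipe-versus-instance distinction can be sketched in plain TypeScript. Note that `PipelineSpec` and `runPipeline` below are illustrative stand-ins, not part of the Telepath SDK:

```typescript
// A "recipe": a plain description of how data should be read.
// It holds a query string but contains no data itself.
interface PipelineSpec {
  name: string;
  query: string;
}

const spec: PipelineSpec = {
  name: 'my-pipeline',
  query: 'SELECT * FROM users',
};

// "Using the recipe": each run produces a new, separate copy of the data.
function runPipeline(spec: PipelineSpec): string[] {
  // In Telepath this would execute spec.query against your database;
  // here we fake a result set to illustrate the point.
  return [`rows for: ${spec.query}`];
}

const firstCopy = runPipeline(spec);
const secondCopy = runPipeline(spec);
// Same recipe, two independent result sets:
console.log(firstCopy !== secondCopy); // → true
```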

Creating a Resources File

You declare resources by creating a Resources file that contains all of the Resources code. It looks something like this:

// Import the Resource classes from the Telepath SDK
import { Source } from '@telepath/telepath'

// Declare the Source resource
new Source('my-source', {
  sourceConnectionSlug: 'my-connection'
});
Note

Telepath expects the Resources file to be named index.ts and be located in the same directory as your telepath.yml file. This can be overridden by modifying the entrypoint property in your telepath.yml.
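For example, if your Resources file lives somewhere other than the default location, a telepath.yml along these lines would point Telepath at it. The path shown is illustrative, and any other configuration keys your project needs are omitted here:

```yaml
# telepath.yml
# Override the default Resources file location (index.ts next to this file)
entrypoint: ./src/resources.ts
```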

Deploy Resources

Once the resources have been defined, you need to “deploy” them to Telepath's backend. During this process, Telepath reads your Resources file and creates the backend infrastructure needed to use those resources to train your models.

You deploy your resources using the Telepath CLI command: telepath deploy.

Whenever you make changes to your Resources file, you must re-deploy to Telepath, and the Telepath backend will update itself accordingly.

Model Training

During the Model Training stage, Telepath reads training data from your database and sends it to the AutoML engine, which automatically generates a custom machine learning model and an API endpoint for accessing the model.

In keeping with the “recipe” analogy from above, Model Training is like using a recipe to actually cook something. Every time you train a model, it uses the same Resources (recipe), but it creates a new and separate instance of the model.

You can initiate the Model Training process by using the Telepath CLI command telepath model train or by clicking the “Train New Model” button in the Telepath UI.

Once Model Training is complete, you can access details about your Model and Prediction API in the Telepath UI.

Predictions

Once a model has been trained, it will automatically be deployed as a Prediction API. This is an ordinary HTTP endpoint that returns a prediction in response to a POST request. The Prediction API serves real-time predictions, so it can be called directly from your application.
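As a sketch, building such a POST request in TypeScript might look like the following. The payload shape, header, and endpoint URL are assumptions for illustration; the real contract is in the generated documentation on your model's detail page:

```typescript
// Illustrative request payload; the actual field names come from your
// model's generated documentation in the Telepath UI.
interface PredictionRequest {
  features: Record<string, string | number>;
}

// Minimal shape of the options object passed to fetch().
type RequestOptions = {
  method: string;
  headers: Record<string, string>;
  body: string;
};

// Build the POST options for a prediction call.
function buildPredictionRequest(body: PredictionRequest): RequestOptions {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(body),
  };
}

const options = buildPredictionRequest({
  features: { plan: 'pro', monthly_sessions: 42 },
});

// Send it with fetch (this endpoint URL is hypothetical):
// const res = await fetch('https://example.com/models/my-model/predict', options);
// const prediction = await res.json();
```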

Every Prediction API comes with a unique URL and documentation which can be found on the model details page in the Telepath UI.