Kash Pourdeilami

Hello there! I'm Kash (Khashayar) Pourdeilami. I'm currently working on Terrene. Terrene simplifies big data infrastructure by automating the training and deployment of deep neural networks. With Terrene, you can load data from any data source and run predictive analytics on it.

Let's say I want to load data from a SQL database and model the data inside it. All I have to do with Terrene is link my database and specify which columns to use as inputs and labels:

from terrene.store import SQLDatabaseManager
from terrene.transfer import SQLDataManager
from terrene.enrich import PredictiveModelManager

# register the SQL database as a store in the workspace
store_manager = SQLDatabaseManager(
    credentials=credentials, workspace=workspace)
store = store_manager.create(
    name="default database",
    description="default warehouse for my workspace")

# pull the training data out of the store with a query
input_dataset_manager = SQLDataManager(
    credentials=credentials, workspace=workspace)
query = "SELECT * FROM users;"
training_dataset = input_dataset_manager.create(
    name="my file", description="training dataset",
    query=query, store=store)

# create the model, specifying the input and label columns
predictive_model_manager = PredictiveModelManager(
    credentials=credentials, workspace=workspace)
model = predictive_model_manager.create(
    name="my predictive model", description="predictive model",
    input_variables="col1, col2, col3, col4", output_variables="col5")

model.train(dataset=training_dataset)

The really cool thing about this is that I don't have to do any manual feature engineering. Let's say one of my columns has values like D56 and C32; Terrene will automatically break that column up into two columns containing the integer and character parts of the original values. Another cool thing Terrene does is automatically augment the dataset (e.g. adding dark spots to images, rotating them, etc.).
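To make the column split concrete, here's a minimal pandas sketch of that kind of transformation; this is purely illustrative and not Terrene's internal implementation:

import pandas as pd

df = pd.DataFrame({"code": ["D56", "C32", "A07"]})

# split each value into its character prefix and its integer part
df["code_char"] = df["code"].str.extract(r"([A-Za-z]+)", expand=False)
df["code_int"] = df["code"].str.extract(r"(\d+)", expand=False).astype(int)

print(df)
#   code code_char  code_int
# 0  D56         D        56
# 1  C32         C        32
# 2  A07         A         7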

One of the big problems a lot of companies currently face is deploying and optimizing their models. Terrene automatically creates REST endpoints for trained models, which can be accessed from Google Sheets, Excel, Android, iOS, etc.
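The endpoint URL and payload below are hypothetical (the real values come from Terrene when a model is deployed), but calling a deployed model over REST would look something like this:

import requests

# hypothetical URL and API key, shown only to illustrate the idea
response = requests.post(
    "https://api.terrene.example/models/my-predictive-model/predict",
    headers={"Authorization": "Bearer <api-key>"},
    json={"col1": 1.0, "col2": 0.5, "col3": 3, "col4": "D56"},
)
print(response.json())  # e.g. {"col5": ...}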

I can also re-train my models on multiple datasets. Let's say my model has made a bunch of predictions and written the results to a database; all I have to do is fill in the "actual" values in the database and retrain the model:

# `endpoint` is the model's deployed REST endpoint; point it at the
# table where the predictions are written
endpoint.store = store
endpoint.table = "predictions"
endpoint.save()

query = "SELECT * FROM predictions;"
new_dataset = input_dataset_manager.create(
    name="my file", description="training dataset",
    query=query, store=store)

# retrain on the corrected data, optionally tuning the optimizer
model.train(dataset=new_dataset, optimizer={
    "type": "adam", "lr": 0.0001})

The newly trained model is immediately accessible over the same REST endpoint, without having to redistribute the iOS and Android apps through the App Store and Google Play.
