by Mathurin Aché
This blog post series is composed of three parts:
This third blog post deals with experiment tracking by combining training outside the platform with monitoring in the Prevision.io platform. It concludes the present blog post series.
In this post, we are going to answer this question:
How can I benefit from the experiment tracking and other advantages included in the Prevision.io platform while continuing to build my experiments outside the platform and/or with third-party solutions?
If you train your models in another environment and wish to benefit from the experiment tracking offered by Prevision.io, the workflow is as follows:
1. You load and prepare data in your environment, on a Kaggle notebook or on Google Colab.
2. You train one or more models in your environment, in Prevision.io notebooks, on a Kaggle notebook or on Google Colab, and export them in ONNX format.
For each model listed above, there is a step that consists of converting the model from the scikit-learn format to the ONNX format, which is the format expected by the Prevision.io platform, as in the sketch below.
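As an illustration, here is a minimal conversion sketch using the skl2onnx package; the model, the number of features and the output file name are placeholders to adapt to your own experiment.

```python
# Minimal sketch: convert a trained scikit-learn model to ONNX.
# Assumes skl2onnx is installed; the dataset, model and file name are placeholders.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(n_estimators=100).fit(X, y)

# Declare the input signature: a float tensor with one row of X.shape[1] features.
initial_types = [("float_input", FloatTensorType([None, X.shape[1]]))]
onnx_model = convert_sklearn(model, initial_types=initial_types)

# Serialize the ONNX graph to disk; this file is what gets uploaded to the platform.
with open("my_external_model.onnx", "wb") as f:
    f.write(onnx_model.SerializeToString())
```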
3. You upload the data, the models and the configuration files using the user interface.
Configuring your external model
Process to import your external model
Configuring your external experiment
For each external model, you need to set a name, a YAML file with the feature configuration, and an ONNX file containing the model.
You can import as many models as you want.
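To illustrate, here is a hypothetical sketch of how such a feature-configuration YAML could be generated from a training DataFrame. The keys used below ("target", "features", "name", "dtype") are illustrative assumptions, not the official schema; refer to the Prevision.io documentation for the exact format expected by the platform.

```python
# Hypothetical sketch: derive a feature-configuration YAML from a training DataFrame.
# The keys below are illustrative assumptions -- check the Prevision.io
# documentation for the exact schema the platform expects.
import pandas as pd
import yaml

df = pd.read_csv("train.csv")   # placeholder training set
target = "target"               # placeholder target column name

config = {
    "target": target,
    "features": [
        {"name": col, "dtype": str(df[col].dtype)}
        for col in df.columns if col != target
    ],
}

with open("model_config.yaml", "w") as f:
    yaml.safe_dump(config, f, sort_keys=False)
```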
To go further: external model import relies on the standardized ONNX format, and most standard ML libraries provide a module to export to it.
After a few minutes, you obtain a dashboard with all models.
Now you can Evaluate your experiment.
External model information
External model feature importance
External model confusion matrix
External model metrics
Good news: once imported, you can still benefit from the insightful analytics available for internally trained models.
4. You upload the data, the models and the related configuration files using the SDK (Python or R).
5. Once your imported model is deployed, you are able to use it periodically (every hour, every day, every month, …); a scheduling sketch is given below.
To proceed with deployments, please refer to the paragraph explaining how to deploy an experiment in article 2 of this series, or to the documentation here: https://previsionio.readthedocs.io/fr/latest/studio/deployments/index.html
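As an example of periodic use, here is a hedged sketch that scores a batch of records against a deployed model endpoint every hour. The URL, API key and payload format are placeholders (assumptions, not the actual Prevision.io API); use the values shown on your deployment page.

```python
# Hypothetical sketch of periodic scoring against a deployed model endpoint.
# The URL, credentials and payload format are placeholders: take the real values
# from your deployment's page in the Prevision.io interface.
import time
import requests

DEPLOYMENT_URL = "https://<your-instance>.prevision.io/<your-deployment>/predict"
API_KEY = "<your-api-key>"

def score_batch(rows):
    """Send a batch of records to the deployed model and return its response."""
    response = requests.post(
        DEPLOYMENT_URL,
        json={"data": rows},
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

while True:
    print(score_batch([{"feature_1": 1.0, "feature_2": "A"}]))
    time.sleep(3600)  # re-score every hour, as in the "every hour" example above
```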
In this guide, we went through the whole experiment tracking process while using Prevision.io.
As we have seen, it is essential for a data scientist to document the different iterations across all the stages of a data science project: from data ingestion to feature engineering, model selection and hyperparameter tuning, with access to in-depth visual analysis, until the model is deployed and in production.
Prevision.io brings powerful AI management capabilities to data science users so more AI projects make it into production and stay in production. Our purpose-built AI Management platform was designed by data scientists for data scientists and citizen data scientists to scale their value, domain expertise, and impact. The platform manages the hidden complexities and burdensome tasks that get in the way of realizing the tremendous productivity and performance gains AI can deliver across your business.