[Step 8] Deploying the model for consumption

The final phase of Spectral's Machine Learning Competition Platform introduces a "Consumption Phase," designed to put the top competitors' machine learning models to practical use.

During the Consumption Phase, the platform shifts its focus towards the real-world application of the algorithms developed by the competitors. The top-performing modelers, having demonstrated their models in earlier stages of the competition, are given the opportunity to monetize them by providing inferences to users on demand.

Modelers who place in the Top 10 can make their models available for consumption. The process works as follows:

  1. Every Monday (11:59 PM GMT), Spectral takes a snapshot of the leaderboard.

  2. Modelers who were added to the leaderboard in the past week receive an email inviting them to deploy their model for consumption.

  3. The email contains a link to your Spectral Modeler Dashboard, where you can opt in to monetize your machine learning model during the Consumption Phase. Deploy your model by following the steps outlined in the application. As a modeler, you can either host your model through Spectral or self-host it.

  4. You have until 11:59 PM GMT on the following Wednesday to host your model for consumption. During these initial days of the Spectral network, our team will reach out to all modelers whose models have been selected in the Top 10 to assist with model deployment.

Nova

Nova is a back-end service template written in Python. Most of the logic to fetch and reply to inference requests is already implemented for you. What's left for you as a Modeler is the data pre- and post-processing: depending on what your model expects, you can scale, transform, and reorder features before they are fed into the compiled model, and you can likewise transform the inference output before it is submitted to the blockchain.
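
As an illustration, here is a minimal sketch of what the two hooks in runner.py might look like, assuming pre_process receives the raw feature DataFrame and post_process receives the raw model output. The exact signatures in the nova-bootstrap template may differ, and the feature names used here are placeholders.

    import pandas as pd

    # Hypothetical sketch of the two hooks in runner.py; the actual signatures
    # in nova-bootstrap may differ. Feature names below are placeholders.
    FEATURE_ORDER = ["feature_a", "feature_b", "feature_c"]

    def pre_process(features: pd.DataFrame) -> pd.DataFrame:
        """Scale, transform, and reorder features before inference."""
        df = features.copy()
        df = df.fillna(0.0)                          # example: fill missing values
        df["feature_a"] = df["feature_a"].clip(0.0)  # example: clip an outlier-prone column
        return df[FEATURE_ORDER]                     # reorder to what the model expects

    def post_process(raw_output) -> float:
        """Turn the raw model output into the single float Nova submits on-chain."""
        return float(raw_output[0])                  # example: first element of an array-like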

Nova will automatically look for new inference requests, fetch the features for a given input as a Pandas DataFrame, call your data pre-processing function, generate an inference using the compiled model you provided, call your inference post-processing function, and submit the result to the blockchain on your behalf. Nova also watches for inference proof requests. Once a proof is requested, Nova will use the witness files saved during the inference process, your compiled model, and other EZKL supporting files to create an EZKL proof and submit it to the blockchain.
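
To make the order of operations concrete, the sketch below walks through one inference request. It is illustrative only: the surrounding callables (fetch_features, run_model, submit_result) are hypothetical stand-ins for logic the template already implements, and the EZKL proof path is left out.

    from typing import Any, Callable

    import pandas as pd

    # Illustrative sketch of the per-request pipeline Nova runs for you; the
    # callables are injected here only to keep the example self-contained,
    # and the proof-generation path (EZKL witness files) is omitted.
    def handle_request(
        request: Any,
        fetch_features: Callable[[Any], pd.DataFrame],
        pre_process: Callable[[pd.DataFrame], pd.DataFrame],
        run_model: Callable[[pd.DataFrame], Any],
        post_process: Callable[[Any], float],
        submit_result: Callable[[Any, float], None],
    ) -> None:
        features = fetch_features(request)    # features arrive as a Pandas DataFrame
        model_input = pre_process(features)   # your pre-processing hook
        raw_output = run_model(model_input)   # inference with your compiled model
        result = post_process(raw_output)     # your post-processing hook
        submit_result(request, result)        # Nova submits the result on-chain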

Deployment

Pre-requisites

Build process

  1. Clone the nova-bootstrap repository:

    git clone https://github.com/Spectral-Finance/nova-bootstrap.git
  2. Implement your data pre- and post-processing logic

    • add the dependencies you need to modeler-requirements.txt

    • change the pre_process function (and, optionally, the post_process function) inside the runner.py module.

    • test that the return value of the pre_process function is a Pandas DataFrame your model works with correctly, and that the output of the post_process function is a float value (a quick sanity-check sketch follows the build steps below).

  3. Build and tag a new Docker image:

    docker build --platform linux/amd64 -t nova-bootstrap:latest .
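
For the testing bullet in step 2, a quick sanity check along these lines can be run before building the image. It assumes runner.py exposes pre_process and post_process at module level and uses made-up feature columns and a fake raw output; adjust both to match your model.

    import pandas as pd

    from runner import pre_process, post_process  # assumes module-level hooks in runner.py

    # Made-up sample features; substitute the columns your model actually expects.
    sample = pd.DataFrame({"feature_a": [0.1, 0.2], "feature_b": [1.0, 2.0]})

    processed = pre_process(sample)
    assert isinstance(processed, pd.DataFrame), "pre_process must return a DataFrame"

    # Fake raw model output, only to exercise post_process in isolation.
    assert isinstance(post_process([0.42]), float), "post_process must return a float"

    print("pre_process and post_process checks passed")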

(optional) Test locally

Deploy the Docker image locally using Docker Compose.

  1. set the required environment variables for your Docker container (See HERE).

    • default values refer to Mainnet, so for testing you will want to set them to Testnet values.

  2. spin up the Docker container with docker compose up --build

  3. check the logs in the terminal output
