Model Validation

Introduction

All submitted models are evaluated against rigorous criteria so that underperforming models are kept out of the network. Each model is scored with standard ML validation metrics computed on the predictions it generates for a hold-out testing dataset.
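
As a rough illustration, the sketch below scores hold-out predictions with common classification metrics. The metric set, the 0.5 label threshold, and the function name evaluate_holdout are assumptions for a binary-classification Challenge, not Spectral's actual criteria.

    # Illustrative hold-out scoring for a binary classifier.
    # Metrics and the 0.5 threshold are assumptions, not Spectral's criteria.
    import numpy as np
    from sklearn.metrics import roc_auc_score, log_loss, accuracy_score

    def evaluate_holdout(y_true, y_pred_proba):
        """Score predicted probabilities against hold-out labels."""
        y_true = np.asarray(y_true)
        y_pred_proba = np.asarray(y_pred_proba)
        return {
            "auc": roc_auc_score(y_true, y_pred_proba),
            "log_loss": log_loss(y_true, y_pred_proba),
            # Hard labels derived at an assumed 0.5 decision threshold.
            "accuracy": accuracy_score(y_true, (y_pred_proba >= 0.5).astype(int)),
        }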

After submitting a model, Modelers must provide the predictions it generates on the out-of-time validation dataset supplied by the Validators.

Details

  • Download the validation dataset through Spectral's CLI

  • [Optional] Fetch any additional data required by the submitted model

  • Perform the same pre-processing, feature engineering, and feature selection steps as those performed during the model training phase

  • Make predictions using the submitted model on the processed validation dataset

  • [If required] Transform the model predictions into any other format required for submission (refer to the Challenge Page for submission requirements), e.g., predicted labels

  • Prepare the submission file as required for the competition

  • Submit the submission file through the CLI

  • Modelers can submit multiple times for each Challenge's validation data; however, only the latest submission is considered for model validation (see the sketch after this list)
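
The sketch below strings these steps together for a hypothetical binary-classification Challenge. The file names, the preprocess placeholder, the submission columns, and the 0.5 threshold are all assumptions: the dataset download and the final submission happen through Spectral's CLI, and the required file format is defined on the Challenge Page.

    # End-to-end sketch of the steps above. Paths, column names, and the
    # preprocessing stub are placeholders; consult the Challenge Page for
    # the actual submission requirements.
    import joblib
    import pandas as pd

    def preprocess(df: pd.DataFrame) -> pd.DataFrame:
        """Stand-in for the Modeler's own pipeline: must mirror the
        preprocessing, feature engineering, and feature selection
        used during training."""
        return df.drop(columns=["id"])

    # 1. Validation dataset, already downloaded via the CLI (path is hypothetical).
    validation = pd.read_csv("validation_dataset.csv")

    # 2. Apply the exact feature pipeline used during training.
    features = preprocess(validation)

    # 3. Load the submitted model and generate predictions.
    model = joblib.load("submitted_model.joblib")
    probabilities = model.predict_proba(features)[:, 1]

    # 4. [If required] Transform probabilities into the required format,
    #    e.g., predicted labels at an assumed 0.5 threshold.
    labels = (probabilities >= 0.5).astype(int)

    # 5. Write the submission file; column names here are illustrative only.
    submission = pd.DataFrame({"id": validation["id"], "prediction": probabilities})
    submission.to_csv("submission.csv", index=False)

The submission.csv produced here would then be submitted through the CLI, as described in the final steps of the list.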

Validation Criteria

Validators will compare the submitted model predictions against the live data observed during the Pause Window. Models will be evaluated against the Performance Benchmarks as specified on the Competition Page and in the following sub-pages.

Models that do not pass the Performance Benchmarks are eliminated from the Challenge.
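
As an illustration only, a benchmark check might look like the sketch below. The choice of AUC and the 0.65 threshold are hypothetical stand-ins for whatever Performance Benchmarks the Competition Page specifies.

    # Illustrative pass/fail check against a single benchmark.
    # The metric and threshold are hypothetical, not Spectral's values.
    from sklearn.metrics import roc_auc_score

    AUC_BENCHMARK = 0.65  # assumed threshold for this sketch

    def passes_benchmark(live_labels, submitted_predictions) -> bool:
        """Compare submitted predictions against the outcomes observed
        live during the Pause Window."""
        return roc_auc_score(live_labels, submitted_predictions) >= AUC_BENCHMARK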
