ML model monitoring is the process of checking the performance of deployed ML models. It is done by comparing the model's predicted outputs with the actual outcomes once they become known.
The process of ML model monitoring can be done in two ways:
Monitoring at runtime: This type of monitoring is done continuously while the ML model serves live predictions.
Monitoring offline: This type of monitoring is done in batches, by collecting a deployed model's predictions and outcomes and analyzing them after the fact.
Monitoring ML models is essential because it helps identify the model’s errors and fix them. It also helps to identify the areas where the model needs improvement.
ML model performance monitoring relies on various metrics, such as:
– Accuracy: The percentage of all predictions that were correct.
– Precision: The percentage of predicted positives that were actually positive (TP / (TP + FP)).
– Recall: The percentage of actual positives that the model correctly identified (TP / (TP + FN)).
– F1 Score: The harmonic mean of precision and recall, which balances the two rather than weighting either one more heavily.
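The four metrics above can be computed directly from the counts of correct and incorrect predictions. As a minimal sketch (the function name and example labels are illustrative, not from any particular library):

```python
def classification_metrics(y_true, y_pred, positive=1):
    """Compute accuracy, precision, recall, and F1 for a binary classifier."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)

    accuracy = correct / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    # F1 is the harmonic mean of precision and recall.
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}
```

In practice a library such as scikit-learn provides these metrics out of the box; the point here is only to make the definitions concrete.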
TYPES OF ML MODEL PERFORMANCE MONITORING
1. In-depth monitoring: In this approach, human experts supervise the ML model's behavior directly. Experienced monitors can often spot warning signs and anticipate a deterioration in the model's performance before it happens.
2. Automated monitoring: An automated system watches over the ML model, checks that it is doing its job correctly, and automatically notifies the model owner when something goes wrong.
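The core of such an automated system is a check that compares a tracked metric against a threshold and fires a notification when it is breached. A minimal sketch, assuming the notification channel is just a logger (a real system might send email or page an on-call engineer):

```python
import logging

logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger("model_monitor")

def check_and_alert(metric_name, value, threshold, notify=logger.warning):
    """Notify the model owner when a tracked metric falls below its threshold."""
    if value < threshold:
        notify("ALERT: %s dropped to %.3f (threshold %.3f)",
               metric_name, value, threshold)
        return True
    return False
```

The `notify` parameter is a hypothetical hook so the same check can feed any alerting backend.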
THE USES OF ML MODEL PERFORMANCE MONITORING
ML Model Performance Monitoring is a crucial part of the data engineering process. It helps to identify and fix any issues in the model. The ML model monitoring process includes:
Monitoring the training process to identify any errors or anomalies
Monitoring the validation process to identify any errors or anomalies
Monitoring the production process to identify any errors or anomalies
Monitoring the performance of a model over time to identify any changes in the model’s behavior or performance
ML Model Performance Monitoring is a crucial part of the data scientist’s job.
The ML model monitoring process includes:
Monitoring performance metrics
Identifying potential improvements for the model.
To monitor the performance metrics of an ML model, it is essential to track how well the model performs on unseen data. A good way to do this is through a dashboard that displays training and testing metrics on an ongoing basis.
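The data behind such a dashboard can be as simple as timestamped metric rows appended on every evaluation run. A minimal sketch, assuming a CSV sink (an in-memory buffer here; a real setup would write to a file or a metrics store the dashboard reads from):

```python
import csv
import datetime
import io

def log_metrics(writer, split, metrics):
    """Append one timestamped row per metric so a dashboard can plot them over time."""
    for name, value in metrics.items():
        writer.writerow({
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "split": split,  # "train" or "test"
            "metric": name,
            "value": value,
        })

# Hypothetical usage with made-up metric values:
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["timestamp", "split", "metric", "value"])
writer.writeheader()
log_metrics(writer, "train", {"accuracy": 0.94})
log_metrics(writer, "test", {"accuracy": 0.88})
```

Plotting "value" against "timestamp", grouped by split, gives exactly the ongoing train-vs-test view described above.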
There are two common ways to identify anomalies in the data: the first is to examine the probability the model assigns to a given label (low-confidence predictions are suspect); the second is to review the mean squared error and the variance of the predictions made by the model.
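Both checks are straightforward to implement. A minimal sketch of the two approaches (function names and thresholds are illustrative):

```python
from statistics import mean, variance

def low_confidence(probabilities, threshold=0.6):
    """First approach: flag predictions whose top-class probability is low."""
    return [i for i, probs in enumerate(probabilities) if max(probs) < threshold]

def prediction_stats(y_true, y_pred):
    """Second approach: track the MSE and the variance of the predictions."""
    mse = mean((t - p) ** 2 for t, p in zip(y_true, y_pred))
    return {"mse": mse, "prediction_variance": variance(y_pred)}
```

A sudden jump in either the share of low-confidence predictions or the MSE/variance figures is a signal worth investigating.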
Monitoring ML models is a crucial part of the data stewardship process. It helps to ensure that the models are performing as expected. The data stewards should monitor the model performance regularly, for example, once a week or once a month. They also monitor the model performance when there is an update to the model or an update to any of its parameters.
Monitoring the ML model can be done by looking at the model's performance on a validation set. If performance changes, it could indicate overfitting or drift, and the model may need to be retrained or evaluated on a different dataset. Performance on a validation set is often summarized as an error rate: the percentage of validation examples the model predicts incorrectly.
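The validation check described above can be sketched in a few lines. Both functions and the 5-point tolerance are assumptions for illustration, not a standard:

```python
def validation_error_rate(y_true, y_pred):
    """Error rate on the validation set: fraction of incorrect predictions."""
    wrong = sum(1 for t, p in zip(y_true, y_pred) if t != p)
    return wrong / len(y_true)

def needs_retraining(baseline_error, current_error, tolerance=0.05):
    """Flag the model when validation error drifts above baseline by more than tolerance."""
    return current_error - baseline_error > tolerance
```

Comparing the current error rate against a baseline captured at deployment time turns "any changes in performance" into a concrete, automatable check.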
Data stewards generally keep an eye on the following:
How often are the forecasts generated?
How accurately was the forecast generated in comparison to historical values?
What is the quality of data in each bucket (e.g., how many observations per month)?