Performance Metrics in Neural Architecture Search
There are currently two main sources of performance metrics used in hannah's NAS subsystem.
- Backend generated metrics: these metrics are returned by the backend's `profile` method. They are usually generated by running the neural networks either on real target hardware or on accurate simulators. We currently do not enforce accuracy requirements on the reported metrics, but we consider them as golden reference results for the evaluation and, if necessary, the training of the performance estimators, so they should be as accurate as possible (see the sketch after this list).
- Estimator generated metrics: estimators (predictors) can provide metrics before the neural networks have been trained. Predictors are used in the presampling phases of the neural architecture search; they are not and will not be used outside of neural architecture search.
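As an illustration, a backend's `profile` call typically returns a small set of measured metrics. The following is a minimal sketch only; the backend class name, metric names, and return type are assumptions made for illustration and do not reflect hannah's actual backend API.

```python
from dataclasses import dataclass


# Hypothetical result container; hannah's real backends may use a different structure.
@dataclass
class ProfilingResult:
    latency_ms: float   # measured end-to-end latency on the target
    energy_mj: float    # measured or simulated energy per inference
    memory_kb: float    # peak memory footprint


class MyBoardBackend:
    """Sketch of a backend that profiles a network on real hardware or a simulator."""

    def profile(self, module, *inputs) -> ProfilingResult:
        # A real backend would deploy `module`, run it with `inputs`, and read back
        # counters from the device or simulator. Here we only show the shape of the call.
        latency, energy, memory = self._run_on_target(module, inputs)
        return ProfilingResult(latency_ms=latency, energy_mj=energy, memory_kb=memory)

    def _run_on_target(self, module, inputs):
        raise NotImplementedError("Device or simulator specific")
```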
There are two subclasses of predictors.
- Machine learning based predictors: these predictors provide an interface consisting of `predict`, `update`, `load`, and `train`.
- Analytical predictors: the interface of these predictors only contains `predict`.
The predictor interfaces are defined in `hannah.nas.performance_prediction.protocol` as Python protocols.
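For illustration, a minimal sketch of what these two protocol interfaces could look like is shown below, using `typing.Protocol`. The class names and method signatures here are assumptions for illustration; the authoritative definitions are the ones in `hannah.nas.performance_prediction.protocol`.

```python
from typing import Any, Mapping, Protocol, runtime_checkable


@runtime_checkable
class Predictor(Protocol):
    """Analytical predictors: metrics are computed directly from the model description."""

    def predict(self, model: Any, input: Any = None) -> Mapping[str, float]:
        """Return estimated metrics (e.g. latency, memory) for an untrained network."""
        ...


@runtime_checkable
class FitablePredictor(Predictor, Protocol):
    """Machine learning based predictors: additionally support training and persistence."""

    def load(self, result_folder: str) -> None:
        """Load a previously trained predictor from disk."""
        ...

    def update(self, new_data: Any, input: Any = None) -> None:
        """Incrementally refine the predictor with newly measured (golden) results."""
        ...

    def train(self, dataset: Any) -> None:
        """Train the predictor from scratch on a dataset of measured results."""
        ...
```

Defining the interfaces as protocols means predictors are matched structurally: any class that provides the required methods can be used, without having to inherit from a common base class.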