TensorRT Backend
For deployment on NVIDIA targets, we support a TensorRT backend. Currently, the TensorRT backend always compiles for the first GPU of the local system.
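Because TensorRT engines are built for the CUDA device that is active at build time, "the first GPU" here means device 0. The following is only an illustrative sketch of how a caller could check which device that is; it assumes PyTorch is available in the environment and is not part of the backend itself:

```python
# Illustrative only: the backend currently always targets CUDA device 0,
# i.e. the first GPU reported by the local system.
import torch

if torch.cuda.is_available():
    torch.cuda.set_device(0)
    print("TensorRT engines will be built for:", torch.cuda.get_device_name(0))
else:
    print("No CUDA device found; there is no GPU for the TensorRT backend to compile for.")
```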
Installation
TensorRT is unfortunately not compatible with installation via Poetry and must be installed separately: `pip install tensorrt`
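Since the package is installed outside of Poetry, it can be useful to verify that it is importable from the project environment. A minimal check, assuming the standard `tensorrt` pip package:

```python
# Minimal sanity check that the separately installed TensorRT package is usable.
import tensorrt as trt

print("TensorRT version:", trt.__version__)

# Creating a logger and a builder confirms that the native TensorRT libraries load.
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
print("TensorRT builder initialized:", builder is not None)
```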
Configuration
The backend supports the following configuration options; an illustrative configuration sketch follows the list.
- val_batches: 1 (number of batches used for validation)
- test_batches: 1 (number of batches used for test)
- val_frequency: 10 (run the backend every n validation epochs)
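The options are plain numeric settings. As a minimal sketch of how the defaults fit together (the class and field names below are illustrative, not the project's actual configuration schema):

```python
from dataclasses import dataclass


@dataclass
class TensorRTBackendConfig:
    """Hypothetical container for the backend options listed above."""

    val_batches: int = 1     # number of batches used for validation
    test_batches: int = 1    # number of batches used for test
    val_frequency: int = 10  # run the backend every n validation epochs
```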
TODO:
- [ ] remote execution support
- [ ] profiling and feedback support