Reducing .tflite Model Size
All of the model-zoo .tflite models I see are no more than 3MB in size, and they run fine on an Edge TPU. However, when I train my own object detection model, the .pb file is 60MB and the converted .tflite is still around 20MB. Why is it so much larger, and how can I reduce it?
Solution 1:
The issue here may be that you are using two different config files, one for each step:
python train.py \
...
--pipeline_config_path=training/ssd_mobilenet_v1_coco.config
python export_tflite_ssd_graph.py \
--pipeline_config_path=training/ssd_mobilenet_v2_coco.config \
...
Was this intentional? If not, both steps should point at the same pipeline config (a sketch follows below). Also, it looks like you deployed the model immediately after training without compiling it for the Edge TPU; a compiler invocation is sketched after the pipeline steps at the end of this answer. Please refer to this doc for more info on the edgetpu_compiler: https://coral.withgoogle.com/docs/edgetpu/compiler/
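If the mismatch was unintentional, a minimal sketch of the two steps sharing one config could look like the following (the config name, the checkpoint number, and the output directory are placeholders, not taken from your setup):
# Both steps point at the same pipeline config
python train.py \
--logtostderr \
--train_dir=training/ \
--pipeline_config_path=training/ssd_mobilenet_v2_coco.config
# Export a TFLite-compatible frozen graph from the trained checkpoint
python export_tflite_ssd_graph.py \
--pipeline_config_path=training/ssd_mobilenet_v2_coco.config \
--trained_checkpoint_prefix=training/model.ckpt-XXXX \
--output_directory=tflite_model/ \
--add_postprocessing_op=true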
AFAIK, a 20MB model should run just fine as long as it meets all of the requirements listed in the Edge TPU docs:
- Tensor parameters are quantized (8-bit fixed-point numbers); a conversion sketch that produces such a model follows this list.
- Tensor sizes are constant at compile-time (no dynamic sizes).
- Model parameters (such as bias tensors) are constant at compile-time.
- Tensors are either 1-, 2-, or 3-dimensional. If a tensor has more than 3 dimensions, then only the 3 innermost dimensions may have a size greater than 1.
- The model uses only operations supported by the Edge TPU. The supported operations are listed here: https://coral.withgoogle.com/docs/edgetpu/models-intro/#supported-operations
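On the quantization requirement: with the TF1 Object Detection API, a fully 8-bit model typically comes from quantization-aware training (a graph_rewriter { quantization { ... } } block in the pipeline config) followed by a tflite_convert call roughly like the sketch below. The file names, the 300x300 input shape, and the mean/std values are assumptions for a typical SSD MobileNet; the input/output array names are the defaults written by export_tflite_ssd_graph.py.
# Convert the exported graph to a fully 8-bit quantized .tflite
tflite_convert \
--graph_def_file=tflite_model/tflite_graph.pb \
--output_file=tflite_model/detect.tflite \
--input_shapes=1,300,300,3 \
--input_arrays=normalized_input_image_tensor \
--output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' \
--inference_type=QUANTIZED_UINT8 \
--mean_values=128 \
--std_dev_values=128 \
--allow_custom_ops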
Your whole pipeline should be to:
1) Train the model
2) Convert to .tflite
3) Compile for the Edge TPU (the step that actually delegates the work onto the TPU)
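For step 3, the compiler is a single command. Assuming the detect.tflite produced by the conversion sketch above, it writes detect_edgetpu.tflite (plus a .log summarizing which ops were mapped to the Edge TPU) alongside the input, or into a directory chosen with -o:
edgetpu_compiler tflite_model/detect.tflite
# or compile into a specific output directory
edgetpu_compiler -o compiled_model/ tflite_model/detect.tflite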
Hope this helps.