VectorBlox AI
Overview
VectorBlox Accelerator Software Development Kit
The VectorBlox Accelerator SDK contains tools that compile a neural network description from TensorFlow or ONNX into a Binary Large Object (BLOB). The BLOB is stored in flash and loaded into DDR memory during execution.
CoreVectorBlox IP
Development Flow
Step 1: Prepare your trained model |
Use the Python scripts provided in the SDK to convert your trained model into an optimized INT8 representation called a Binary Large Object (BLOB). Run the BLOB through the VectorBlox Simulator to verify the network's accuracy and to confirm the network was converted successfully.
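The INT8 conversion above rests on quantization: mapping floating-point weights onto 8-bit integers with a scale factor. The sketch below illustrates the idea with a simple symmetric per-tensor scheme; it is not the VectorBlox SDK's actual implementation, and the function names are hypothetical.

```python
# Illustrative sketch of symmetric INT8 quantization, the kind of
# transformation applied when converting a float model for an INT8
# accelerator. NOT the VectorBlox SDK API; names are hypothetical.

def quantize_int8(weights):
    """Map float weights to INT8 values plus a per-tensor scale factor."""
    scale = max(abs(w) for w in weights) / 127.0 or 1.0  # avoid zero scale
    quantized = [max(-128, min(127, round(w / scale))) for w in weights]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate float values, e.g. for accuracy checking."""
    return [q * scale for q in quantized]

weights = [0.5, -1.27, 0.031, 1.0]
q, s = quantize_int8(weights)
approx = dequantize(q, s)  # close to the original weights, within scale/2
```

Verifying the dequantized values against the original model's outputs is essentially what the simulator step does at the level of the whole network.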
Step 2: Prepare your hardware |
The PolarFire Video Kit is configured to run as an AI-enabled smart camera. The SDK includes a pre-compiled kit image bitstream. Write the bitstream into the PolarFire FPGA using the FlashPro programmer included with the kit, then write the BLOB generated in Step 1 into the kit's SPI flash.
Step 3: Write your embedded code |
Use the provided embedded C/C++ code in the SoftConsole IDE to generate and program the hex file. Connect the video kit to an HDMI monitor and power it on. The embedded code can be modified to load and run multiple CNN BLOBs, switch CNNs dynamically on the fly, or load CNNs sequentially for simultaneous inferencing.
Documentation
| Document | Description |
| --- | --- |
| CoreVectorBlox IP Handbook | CoreVectorBlox Libero IP datasheet |
| VectorBlox Programmers Guide | VectorBlox SDK installation and tool-flow environment guide |
| VectorBlox Demo Guide | Getting started guide for VectorBlox on the PolarFire Video Kit |
| CoreVectorBlox IP Release Notes | Release notes for the current CoreVectorBlox IP version |
Getting Started
Deployment Options
PolarFire Video Kit
Smart Camera Reference Design using the PolarFire Video Kit
1. A video frame is received via MIPI CSI-2
2. and stored in DDR via the AXI4 interconnect.
3. Before inference, the frame is read back from DDR,
4. converted from RAW to RGB and from RGB to planar R, G, B, then written back into DDR.
5. The CoreVectorBlox engine runs inference on the R, G, B arrays and writes the results back into DDR.
6. Mi-V sorts the probabilities, creates an overlay frame with bounding boxes, classification results, FPS, etc., and stores the frame in DDR.
7. The original video frame is read, blended with the overlay frame, and sent out to an HDMI display.
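The interleaved-to-planar conversion in step 4 can be sketched as follows. This is an illustrative model of the transformation, not the reference design's actual memory layout; the frame dimensions and list-based representation are assumptions for clarity.

```python
# Illustrative sketch of step 4: splitting an interleaved RGB frame
# (R,G,B,R,G,B,...) into separate planar R, G, and B arrays, the layout
# a CNN engine typically consumes. Dimensions are hypothetical.

def rgb_to_planar(frame, width, height):
    """Split an interleaved RGB byte sequence into three planar channel lists."""
    r_plane, g_plane, b_plane = [], [], []
    for i in range(0, width * height * 3, 3):
        r_plane.append(frame[i])       # red sample of this pixel
        g_plane.append(frame[i + 1])   # green sample
        b_plane.append(frame[i + 2])   # blue sample
    return r_plane, g_plane, b_plane

# A 2x2 frame: four pixels stored in interleaved order.
frame = [10, 20, 30,  11, 21, 31,  12, 22, 32,  13, 23, 33]
r, g, b = rgb_to_planar(frame, 2, 2)
# r == [10, 11, 12, 13], g == [20, 21, 22, 23], b == [30, 31, 32, 33]
```

In the reference design this step is performed in hardware on frames in DDR; the sketch only shows the data reordering involved.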