Ticket Name: RTOS/TDA2: Running a model on multiple cores
Query Text:
Part Number: TDA2 Tool/software: TI-RTOS Hi, I cannot find where "coreID" is used in the Vision SDK. When the NET.BIN and PRM.BIN are read, what is the role of the "coreID"? And if I set layersGroupId = 4, what does that mean in the Vision SDK? Thank you. BR, Tianxing
Responses:
Hi, This "coreID" is not used in VSDK for TIDL usecases, so you can ignore it. The "layersGroupId" is not a single value; it indicates which layers in the net are processed together as a group, so this parameter needs to be set for every layer in the net. Please refer to the config files in the TIDL OD usecase.
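(For illustration only, not code from the SDK or the import tool: a minimal C sketch of what "layers processed together as a group" means. Each layer carries a group id, and all layers sharing an id form one group that is handed to one core as a unit. Every name below is hypothetical.)

#include <stdio.h>

/* Hypothetical per-layer record: only the group id matters for this sketch. */
typedef struct {
    const char *name;
    int layersGroupId;   /* 0 = handled outside TIDL, 1/2/... = processing groups */
} Layer;

int main(void)
{
    /* A toy net split into two groups, mirroring the idea that, for example,
     * group 1 runs on an EVE and group 2 runs on a DSP. */
    Layer net[] = {
        { "data", 0 }, { "conv1", 1 }, { "conv2", 1 },
        { "concat", 2 }, { "detectionOutput", 2 }, { "output", 0 },
    };
    int numLayers = (int)(sizeof(net) / sizeof(net[0]));
    int i;

    for (i = 0; i < numLayers; i++) {
        printf("layer %-16s -> group %d\n", net[i].name, net[i].layersGroupId);
    }
    return 0;
}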
Hi, Can I choose whether a layer executes on the DSP or on the EVE when the TIDL usecase runs? Thank you.
Hi, That depends on the use case. For the TIDL OD usecase you can set all layers to run on EVE except the Concat, Flatten and detectionOutput layers. How to set this is shown in the import config files of SSD. Thanks, Praveen
Hi Praveen, Could you give me a sample? Thank you. BR, Tianxing
Attached is the sample import config file: tidl_import_JDetNet.txt

# Default - 0
randParams = 0
# 0: Caffe, 1: TensorFlow, Default - 0
modelType = 0
# 0: Fixed quantization by training framework, 1: Dynamic quantization by TIDL, Default - 1
quantizationStyle = 1
# quantRoundAdd/100 will be added while rounding to integer, Default - 50
quantRoundAdd = 25
numParamBits = 8
# 0 : 8bit Unsigned, 1 : 8bit Signed, Default - 1
inElementType = 0
inputNetFile = "..\..\test\testvecs\config\caffe_jacinto_models\trained\image_detection\jdetNet_768x320\deploy.prototxt"
inputParamsFile = "..\..\test\testvecs\config\caffe_jacinto_models\trained\image_detection\jdetNet_768x320\ti-jdetNet_768x320.caffemodel"
outputNetFile = "..\..\test\testvecs\config\tidl_models\jdetnet\tidl_net_jdetNet_ssd.bin"
outputParamsFile = "..\..\test\testvecs\config\tidl_models\jdetnet\tidl_param_jdetNet_ssd.bin"
rawSampleInData = 1
preProcType = 4
sampleInData = "..\..\test\testvecs\input\trace_dump_0_768x320.y"
tidlStatsTool = "..\quantStatsTool\eve_test_dl_algo.out.exe"
layersGroupId = 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 2 2 2 2 2 2 0
conv2dKernelType = 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
Thanks, Praveen
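(A side note on the sample above, reading it against the earlier reply: 0 appears to mark the data/output layers that TIDL does not process, 1 the layers intended for EVE, and 2 the Concat/Flatten/detectionOutput-style layers intended for DSP. The short standalone C sketch below only counts how many layers fall into each group of such a list; it is not part of the import tool, and the list should be re-checked against your own file.)

#include <stdio.h>

int main(void)
{
    /* The layersGroupId values from the sample config above, retyped here. */
    int groupId[] = {
        0,
        1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
        2,1,2,1,2,1,2,1,2,1,2,1,2,1,2,1,2,1,
        2,2,2,2,2,2,2,
        0
    };
    int counts[3] = { 0, 0, 0 };
    int n = (int)(sizeof(groupId) / sizeof(groupId[0]));
    int i;

    for (i = 0; i < n; i++) {
        counts[groupId[i]]++;
    }
    printf("group 0 (not processed by TIDL): %d layers\n", counts[0]);
    printf("group 1 (EVE):                   %d layers\n", counts[1]);
    printf("group 2 (DSP):                   %d layers\n", counts[2]);
    return 0;
}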
Hi Praveen, If the core that each layer runs on is set by the layersGroupId in the config file, does 1 stand for EVE and 2 for DSP? When I read the source code of tidlModelImport, the layersGroupId is assigned to coreID as follows: tIDLNetStructure.TIDLLayers[tiLayerIndex].coreID = gParams.layersGroupId[tiLayerIndex]; tIDLNetStructure.TIDLLayers[tiLayerIndex].layersGroupId = gParams.layersGroupId[tiLayerIndex]; (on line 4161 of the function caffe_import()). Could I make the layers run on EVE1, EVE2, DSP1 or DSP2 through the layersGroupId? BR, Tianxing
Yes, you can do that by assigning a unique layersGroupId to each of the cores. For example, if you wish to run your network on two EVE and two DSP cores, then you can assign the layersGroupId as follows: 1 for EVE1, 2 for EVE2, 3 for DSP1, 4 for DSP2. Please note that the same mapping should be used when running inference. Thanks, Praveen
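(A minimal sketch, in plain C and outside the SDK, of what "the same mapping should be used when running inference" means: each core's TIDL instance is created for exactly one group id and only executes the layers whose layersGroupId matches. The struct, field and function names below are hypothetical illustrations, not the actual VSDK/TIDL API; check the TIDL create parameters in your SDK headers for the real field that plays the role of currLayersGroupId here.)

#include <stdio.h>

/* Hypothetical mirror of the import-time assignment quoted above:
 * coreID / layersGroupId are copied per layer from the import config. */
typedef struct {
    int coreID;
    int layersGroupId;
} TidlLayerDesc;

/* Hypothetical: the instance created for one core runs only the layers
 * whose group id matches the id that core owns, and skips the rest. */
static void runGroupOnThisCore(const TidlLayerDesc *layers, int numLayers,
                               int currLayersGroupId)
{
    int i;
    for (i = 0; i < numLayers; i++) {
        if (layers[i].layersGroupId != currLayersGroupId) {
            continue;   /* this layer belongs to another core */
        }
        printf("group %d executes layer %d\n", currLayersGroupId, i);
    }
}

int main(void)
{
    /* 1 = EVE1, 2 = EVE2, 3 = DSP1, 4 = DSP2 -- the mapping suggested above. */
    TidlLayerDesc net[] = {
        {1, 1}, {1, 1}, {2, 2}, {2, 2}, {3, 3}, {4, 4},
    };
    int n = (int)(sizeof(net) / sizeof(net[0]));

    runGroupOnThisCore(net, n, 1);   /* what the EVE1 instance would run */
    runGroupOnThisCore(net, n, 3);   /* what the DSP1 instance would run */
    return 0;
}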
Hi Praveen, Thank you for your reply. What do you mean by "the same mapping should be used when running inference"? Does that mean that when I create the chain, I should use EVE1, EVE2, DSP1 and DSP2? Best Regards, Tianxing
Yes. Thanks, Praveen
Hi Praveen, However, I can't find the relationship between the config file and the SDK code. Best Regards, Tianxing
Please refer to the TIDL and VSDK documents for a better understanding. Also, if you search the E2E forum you may find more of the information you are looking for. Thanks, Praveen