How can I get the JSON keypoints as an output file when I upload an image?
In this Space I get an OpenPose image as output when I pass an image as input. However, for my requirement I need JSON as the output instead. Can you suggest how I can do that?
I will release an updated version that provides JSON downloading soon.
Thanks for the response. When can I expect the updated version, or do you have any suggestions on how I can do it myself?
Thank you for using SJTU-TES.
I have updated the Space so that users can download the OpenPose runtime results in JSON format; you can try uploading images to get the results.
If you need to run OpenPose locally and get the results, I believe reading https://github.com/Hzzone/pytorch-openpose/issues/45 will be helpful to you.
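For running it locally, here is a minimal sketch of how body keypoints can be dumped to JSON with pytorch-openpose. The `Body` class and the model/image paths below follow that repository's demo script and are assumptions; adjust them to your own checkout.

```python
# Sketch: run pytorch-openpose (https://github.com/Hzzone/pytorch-openpose)
# on one image and save the body keypoints to a JSON file.
import json

import cv2
from src.body import Body  # from the pytorch-openpose repository

body_estimation = Body('model/body_pose_model.pth')  # pretrained body model (path as in the repo demo)

ori_img = cv2.imread('images/demo.jpg')        # BGR image, as in the demo script
candidate, subset = body_estimation(ori_img)   # numpy arrays with the detected keypoints

# Serialize in the same format the Space returns: candidate + subset.
with open('pose.json', 'w') as f:
    json.dump({'candidate': candidate.tolist(), 'subset': subset.tolist()}, f)
```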
{"candidate": [[408.0, 153.0, 0.9660332202911377, 0.0], [390.0, 345.0, 0.9222150444984436, 1.0], [265.0, 333.0, 0.8901361227035522, 2.0], [215.0, 574.0, 0.864933431148529, 3.0], [224.0, 807.0, 0.8668944239616394, 4.0], [517.0, 356.0, 0.9019652009010315, 5.0], [544.0, 601.0, 0.8676999807357788, 6.0], [577.0, 821.0, 0.8900860548019409, 7.0], [310.0, 784.0, 0.7015151381492615, 8.0], [252.0, 1016.0, 0.10410969704389572, 9.0], [485.0, 791.0, 0.7013563513755798, 10.0], [376.0, 123.0, 0.9572119116783142, 11.0], [438.0, 122.0, 0.914131224155426, 12.0], [340.0, 151.0, 0.8056827187538147, 13.0], [474.0, 149.0, 0.8303753137588501, 14.0]], "subset": [[0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, -1.0, 10.0, -1.0, -1.0, 11.0, 12.0, 13.0, 14.0, 26.08822213933809, 15.0]]}
I am getting the JSON in this format. Is there any update that would provide face, right hand, left hand, and body as separate keys? Also, what do the 4 values in each array above mean? I thought it might be a bounding box. Can you clarify this?
candidate: each row is [x, y, score, id]
subset: in each row, entries 0-17 are indices into candidate (one per joint), entry 18 is the total score, and entry 19 is the number of detected parts
The joint corresponding to each id is as follows:
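For reference, the ids should follow the standard 18-keypoint body ordering used by pytorch-openpose; please verify against the repository's source (e.g. src/util.py):

```python
# id -> joint name for the 18 body keypoints (OpenPose/COCO-18 ordering,
# as used by pytorch-openpose; verify against the repository source).
BODY_18_JOINTS = {
    0: "nose",            1: "neck",
    2: "right_shoulder",  3: "right_elbow",  4: "right_wrist",
    5: "left_shoulder",   6: "left_elbow",   7: "left_wrist",
    8: "right_hip",       9: "right_knee",  10: "right_ankle",
    11: "left_hip",      12: "left_knee",   13: "left_ankle",
    14: "right_eye",     15: "left_eye",
    16: "right_ear",     17: "left_ear",
}
```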
In subset you may find some -1 values. A -1 in subset means that no corresponding keypoint was detected at that position. For example, if subset[i][0] is -1, then for the i-th detected person the first keypoint (the nose) was not found. This may be because the keypoint is occluded, the image quality is low, or the model is not confident enough about that point.
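Putting the above together, here is a small sketch of how the downloaded JSON could be turned into named keypoints per person, skipping joints marked -1. The file name and the joint-name list are assumptions based on the ordering shown above.

```python
# Sketch: read the downloaded candidate/subset JSON and print named keypoints
# for each detected person, skipping joints that were not detected (-1).
import json

JOINT_NAMES = ["nose", "neck", "right_shoulder", "right_elbow", "right_wrist",
               "left_shoulder", "left_elbow", "left_wrist", "right_hip",
               "right_knee", "right_ankle", "left_hip", "left_knee",
               "left_ankle", "right_eye", "left_eye", "right_ear", "left_ear"]

with open('pose.json') as f:
    data = json.load(f)

candidate, subset = data['candidate'], data['subset']

for person_id, person in enumerate(subset):
    keypoints = {}
    for joint_id in range(18):          # entries 0-17 index into candidate
        idx = int(person[joint_id])
        if idx == -1:                   # this joint was not detected
            continue
        x, y, score, _ = candidate[idx]
        keypoints[JOINT_NAMES[joint_id]] = {'x': x, 'y': y, 'score': score}
    total_score, num_parts = person[18], person[19]
    print(f"person {person_id}: {num_parts} parts, score {total_score:.2f}")
    print(keypoints)
```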
About face and hand estimation: hand estimation is included in the code, but it is not overlaid on the output image. You can refer to the source code or https://github.com/CMU-Perceptual-Computing-Lab/openpose for more information.
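As a sketch based on the demo script in https://github.com/Hzzone/pytorch-openpose, you could run the hand estimator yourself and write body and hand keypoints under separate JSON keys. The model paths and the `util.handDetect` helper below are taken from that repository's demo and should be checked against your local checkout; to my knowledge that repository does not include a face estimator (the original CMU OpenPose does).

```python
# Sketch: run body + hand estimation with pytorch-openpose and save the
# results under separate "body" and "hands" keys in one JSON file.
import json

import cv2
import numpy as np
from src import util
from src.body import Body
from src.hand import Hand

body_estimation = Body('model/body_pose_model.pth')
hand_estimation = Hand('model/hand_pose_model.pth')

ori_img = cv2.imread('images/demo.jpg')
candidate, subset = body_estimation(ori_img)

hands = []
# util.handDetect proposes hand crops (x, y, w, is_left) from the body pose
for x, y, w, is_left in util.handDetect(candidate, subset, ori_img):
    peaks = hand_estimation(ori_img[y:y + w, x:x + w, :])   # 21 hand keypoints
    # shift crop-local coordinates back to the full image (0 means "not found")
    peaks[:, 0] = np.where(peaks[:, 0] == 0, peaks[:, 0], peaks[:, 0] + x)
    peaks[:, 1] = np.where(peaks[:, 1] == 0, peaks[:, 1], peaks[:, 1] + y)
    hands.append({'is_left': bool(is_left), 'peaks': peaks.tolist()})

with open('pose_with_hands.json', 'w') as f:
    json.dump({'body': {'candidate': candidate.tolist(),
                        'subset': subset.tolist()},
               'hands': hands}, f)
```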
Thank you for the support. Can I know how to get separate keypoints, such as pose, right hand, left hand, and face, in the JSON file? Could you please help me out with that?