06-06-2023, 09:28 PM
We have a couple of objects in the scene (loaded from OBJ/STL files), say, a Coke can and a pen. We can already take a simulated RGB image of these objects. However, we would also like to simulate a semantic segmentation. Specifically, this is an image corresponding to the RGB image where every pixel is black, red, or green: red if the pixel corresponds to the Coke can, green if it corresponds to the pen, and black otherwise. I have attached a picture demonstrating the type of output I'm aiming for.
The dumb way to do this is to set visibility=False for every item in the scene except the Coke can, and then take an image. Repeat for the pen. Then binarize both images and composite them on top of each other. But I'm wondering if RoboDK has some existing functionality to accomplish this?
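For what it's worth, the binarize-and-composite step of the "dumb way" can be sketched in plain NumPy. This is only a sketch under my own assumptions (the helper name and the background threshold are made up); the two single-object renders themselves would come from the simulated camera after toggling visibility, which is not shown here:

```python
import numpy as np

def composite_segmentation(can_img, pen_img, bg_threshold=10):
    """Combine two single-object renders into one segmentation map.

    can_img, pen_img: HxWx3 uint8 arrays, each rendered with only one
    object visible against a dark background. bg_threshold is an assumed
    cutoff separating the object from the black background.
    Returns an HxWx3 uint8 image: red = can, green = pen, black = rest.
    """
    # A pixel belongs to an object if any channel is above the threshold.
    can_mask = can_img.max(axis=2) > bg_threshold
    pen_mask = pen_img.max(axis=2) > bg_threshold

    seg = np.zeros_like(can_img)
    seg[can_mask] = (255, 0, 0)   # Coke can -> red
    seg[pen_mask] = (0, 255, 0)   # pen -> green (wins where masks overlap)
    return seg
```

Assigning the pen mask last means the pen takes precedence wherever the two binarized masks overlap; flip the order if the can should occlude instead.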