RoboDK Forum
Converting keypoints to world coordinates - Printable Version

+- RoboDK Forum (//www.sinclairbody.com/forum)
+-- Forum: RoboDK (EN) (//www.sinclairbody.com/forum/Forum-RoboDK-EN)
+--- Forum: RoboDK API (//www.sinclairbody.com/forum/Forum-RoboDK-API)
+--- Thread: Converting keypoints to world coordinates (/Thread-Converting-keypoints-to-world-coordinates)



Converting keypoints to world coordinates - Nour - 04-29-2022

Hi,
I'm trying to build a robotic system that:
scans holes in a space
goes to each hole center for image capture

but I'm having trouble converting image coordinates from the camera to world coordinates.

The final code sequence would be:

Robot at home inspection position
robot calls a scanning function (mounting holes in my code)
The code moves towards each circle
camera takes a picture
the robot goes back to the home position and moves a translation distance

I attached the code.
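That sequence could be sketched roughly as follows. Every function name here is a hypothetical placeholder for a routine in the attached code; the real versions would wrap RoboDK motion commands and the camera driver.

```python
# Rough sketch of the sequence described above. All the functions passed
# in are hypothetical placeholders; real versions would wrap RoboDK
# moves (MoveJ/MoveL) and the camera driver.

def inspect_part(go_home, scan_mounting_holes, move_to, snapshot, translate):
    """Run one scan-and-capture pass and return the captured images."""
    go_home()                         # robot at home inspection position
    holes = scan_mounting_holes()     # scanning function: detected hole positions
    images = []
    for hole in holes:
        move_to(hole)                 # move towards the circle
        images.append(snapshot())     # camera takes a picture
    go_home()                         # back to the home position
    translate()                       # shift by the translation distance
    return images
```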


RE: Converting keypoints to world coordinates - Sam - 05-02-2022

Hi Nour,

One of the challenges with vision is accurately converting from a 2D still image to 3D space.
For this to be accurate, you need a reference distance within the image, ideally at the same distance / focus point as your features.

This example exploits the fact that the distance from the camera to the features is consistent, which allows us to find a constant pixel/mm ratio and camera height experimentally.
//www.sinclairbody.com/doc/en/PythonAPI/examples.html#d-pose-estimation-of-a-known-object
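In other words, once the camera height is fixed, a single calibrated ratio maps pixel offsets to millimetres. A minimal sketch of that idea, assuming a camera looking straight down (the ratio value and resolution below are made-up example numbers, not values from the linked example):

```python
# Sketch of the constant pixel/mm approach: with a fixed camera-to-feature
# distance, one experimentally calibrated ratio applies everywhere in the
# image. MM_PER_PX and the resolution are made-up example values.

MM_PER_PX = 0.25          # found experimentally (e.g. ruler in the image)
IMG_W, IMG_H = 1280, 960  # camera resolution in pixels

def pixel_to_camera_mm(u, v):
    """Convert a keypoint (u, v) in pixels to an (x, y) offset in mm
    from the camera's optical axis (the image centre)."""
    dx_px = u - IMG_W / 2.0
    dy_px = v - IMG_H / 2.0
    return dx_px * MM_PER_PX, dy_px * MM_PER_PX

# A keypoint 100 px right of and 40 px below the image centre:
x_mm, y_mm = pixel_to_camera_mm(IMG_W / 2 + 100, IMG_H / 2 + 40)
print(x_mm, y_mm)  # 25.0 10.0
```

The resulting (x, y) offset is in the camera frame; composing it with the camera's pose in the robot/world frame gives the world coordinates of the keypoint.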

What exactly are you trying to achieve?
Are you trying to do path correction with a camera to be dead centre with the mounting holes?
How far from the mounting holes is your programmed path?
Are you trying to record the mounting holes positions in 3D space?
How accurate does your application need to be?


RE: Converting keypoints to world coordinates - Nour - 05-02-2022

Hi Sam,
My objective is to have the camera's optical axis directly above the center of each hole.
The distance is variable, i.e. it is whatever distance makes the hole fill the camera image before the snapshot is taken.
It needs to be accurate, as the photo taken will measure the tilt of the hole to the nearest degree.
I think I'm recording the holes in 2D space. My intention is to have the robot, from a home position, scan a part for holes. The holes are then stored in a matrix, then the robot moves near each circle and centers the optical axis on the hole center so a snapshot can be taken for later post-processing. Is there any way to do that?
I have checked the link. Is there a way to transfer the coordinates from images to world coordinates?
I attached the file below as an example

(05-02-2022, 12:50 PM)Sam Wrote:Hi Nour,

One of the challenges with vision is accurately converting from a 2D still image to 3D space.
For this to be accurate, you need a reference distance within the image, ideally at the same distance / focus point as your features.

This example exploits the fact that the distance from the camera to the features is consistent, which allows us to find a constant pixel/mm ratio and camera height experimentally.
//www.sinclairbody.com/doc/en/PythonAPI/examples.html#d-pose-estimation-of-a-known-object

What exactly are you trying to achieve?
Are you trying to do path correction with a camera to be dead centre with the mounting holes?
How far from the mounting holes is your programmed path?
Are you trying to record the mounting holes positions in 3D space?
How accurate does your application need to be?



RE: Converting keypoints to world coordinates - Sam - 05-02-2022

There are a lot of assumptions, and it is quite difficult for me to assess your needs.
Looking at your other posts, it seems that you are scanning holes on a wing and I already provided a partial answer here:
//www.sinclairbody.com/forum/Thread-Camera-robot-detection-possibility

Finding a blob and its relative distance from the camera axis is fairly easy to do in pixels.
However, can you guarantee that the camera is perpendicular to the hole's surface and that the relative distance from the camera to the holes is constant?

If so, you can retrieve the pixel/mm ratio in the XY plane experimentally. For instance, place a ruler over the hole with the camera at your target, take a snapshot, and divide the ruler length in pixels by the ruler length in mm. That should be good enough to iterate and converge on the hole centre. The examples in our documentation have all the bits and pieces needed to achieve this.
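The iterate-and-converge part might look like the sketch below. It assumes the experimentally calibrated pixel/mm ratio from the ruler snapshot; `detect_hole_offset_px` and `move_camera_xy_mm` are hypothetical stand-ins for your blob detection and robot motion.

```python
# Minimal sketch of the iterate-and-converge idea, assuming a constant
# pixel/mm ratio. detect_hole_offset_px() and move_camera_xy_mm() are
# hypothetical stand-ins for blob detection and robot motion (MoveL etc.).

MM_PER_PX = 0.25   # experimental pixel/mm ratio from the ruler snapshot
TOL_MM = 0.1       # stop when the hole centre is this close to the axis
MAX_ITER = 10

def center_on_hole(detect_hole_offset_px, move_camera_xy_mm):
    """Iteratively move the camera until the hole centre sits on the
    optical axis. Returns the number of iterations used."""
    for i in range(MAX_ITER):
        du_px, dv_px = detect_hole_offset_px()     # offset from image centre
        dx_mm, dy_mm = du_px * MM_PER_PX, dv_px * MM_PER_PX
        if abs(dx_mm) < TOL_MM and abs(dy_mm) < TOL_MM:
            return i
        move_camera_xy_mm(dx_mm, dy_mm)            # step towards the hole
    raise RuntimeError("did not converge on the hole centre")
```

Even if the calibrated ratio is a few percent off, each pass shrinks the remaining error, which is why iterating converges where a single jump would not.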

If not, you need to consider a lot of variables.
  • How do you align the Z axis of the camera to be perpendicular with the holes? Consider using the moments of the blobs.
  • How do you dynamically find the distance from the camera to the hole? Consider having a reference measurement/marker.
  • How do you compensate for defects?
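On the moments point: a circular hole viewed off-perpendicular projects to an ellipse, and the blob's second-order central moments recover the ellipse orientation and its minor/major axis ratio (for an ideal thin circular hole, that ratio equals the cosine of the tilt angle). A self-contained sketch using a synthetic binary mask in place of a real camera image:

```python
import numpy as np

# Sketch: estimate a blob's orientation and axis ratio from its image
# moments. A circular hole tilted away from the camera appears as an
# ellipse whose minor/major axis ratio ~ cos(tilt) for a thin hole.

def blob_moments(mask):
    """Centroid, orientation (rad) and minor/major axis ratio of a blob."""
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()
    mu20 = ((xs - cx) ** 2).mean()            # second-order central moments
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)   # major-axis angle
    # Eigenvalues of the covariance matrix give the squared semi-axes
    # (up to a common scale), so their ratio gives the axis ratio.
    common = np.sqrt(((mu20 - mu02) / 2) ** 2 + mu11 ** 2)
    lam1 = (mu20 + mu02) / 2 + common
    lam2 = (mu20 + mu02) / 2 - common
    ratio = np.sqrt(lam2 / lam1)              # 1.0 = circle seen head-on
    return (cx, cy), theta, ratio

# Synthetic image: an ellipse with semi-axes 60 px and 30 px, i.e. a
# circle tilted by about arccos(0.5) = 60 degrees.
yy, xx = np.mgrid[0:200, 0:200]
mask = ((xx - 100) / 60.0) ** 2 + ((yy - 100) / 30.0) ** 2 <= 1.0
(cx, cy), theta, ratio = blob_moments(mask)
print(ratio)  # ~0.5
```

In practice you would get the mask from thresholding or blob detection (e.g. OpenCV), but the moment arithmetic is the same.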