

The technology used today is increasingly precise, which demands high accuracy and reliability in industrial measurements. Vision measurement meets these needs for geometric elements as well as shapes and surfaces, allowing parts to be controlled with high precision and speed, whether in desktop systems or large installations with fixed sensors or sensors mounted on robotic tools.

Compared with other technologies, vision measurement can measure parts with very small features, or parts made of soft or delicate materials, capturing a large amount of data from each feature in a short time for fast evaluation with high accuracy and repeatability.

The Orbita Ingenieria Metrology System can be adapted to any industrial system, ensuring high accuracy and reliability in the measurements performed. Its sensors are designed to be accurate, reliable and easy to incorporate into manufacturing processes. To minimise integration time, the sensors include tools for automatic detection of devices connected to the network, and the user interface makes sensor configuration and maintenance simple and intuitive.

All sensors incorporate their own light source and advanced exposure control, ensuring robustness against changes in ambient light. The 3D Guidance software provides algorithms optimised for the different finishes and materials used in industry.

Integration

The Orbita sensors are designed for easy integration and adaptation to all robotic guidance solutions. They can be deployed in two different configurations:

Dynamic: mounted on the robot tool.
Static: located on a fixed stand at the workstation.

Depending on the characteristics of the desired solution and the chosen technology, one type of integration or the other is selected, each yielding different results.

Measurement process

Stage 1 – Measurement acquisition

First, an image is acquired in order to locate the feature to be measured; the features whose 3D position is to be determined must lie within the captured image. Image processing techniques are used to calculate the X and Y pixel coordinates of the feature. From these, the line passing through the camera's focal centre and the real point is projected. This line is called the projection line and is referenced to the camera's coordinate system.
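The back-projection step above can be sketched as follows. This is a minimal illustration assuming a standard pinhole camera model; the intrinsic matrix `K` and the pixel coordinates are made-up values, not parameters of the Orbita sensors:

```python
import numpy as np

def projection_line(u, v, K):
    """Back-project pixel (u, v) into the line through the camera's
    focal centre, expressed in the camera coordinate system.
    K is the 3x3 intrinsic matrix; returns (origin, direction)."""
    pixel = np.array([u, v, 1.0])
    direction = np.linalg.inv(K) @ pixel    # ray direction in camera frame
    direction /= np.linalg.norm(direction)  # unit vector
    origin = np.zeros(3)                    # focal centre = camera origin
    return origin, direction

# Illustrative intrinsics: 1000 px focal length, principal point (640, 480)
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 480.0],
              [   0.0,    0.0,   1.0]])
origin, direction = projection_line(800.0, 600.0, K)
```

Every 3D point along this line projects onto the same pixel, which is why a second constraint (the support plane of Stage 2) is needed to recover depth.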

Stage 2 – Depth of measurement

The built-in laser makes a cross-shaped projection. The first step is to identify the pixels of the laser rays within the acquired image. Four points are then extracted and the plane containing them is calculated. This plane is called the support plane and is referenced to the camera's extrinsic coordinate system.
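The support-plane calculation can be sketched as a plane fit to the extracted points. The least-squares SVD fit and the sample points below are illustrative assumptions; the source does not specify how Orbita computes the plane:

```python
import numpy as np

def support_plane(points):
    """Fit the plane containing the extracted laser-cross points.
    points: (N, 3) array in the camera frame, N >= 3, not collinear.
    Returns (normal, d) with the plane defined by normal . x + d = 0."""
    centroid = points.mean(axis=0)
    # The singular vector for the smallest singular value of the
    # centred points is the plane normal (least-squares fit).
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    d = -normal @ centroid
    return normal, d

# Illustrative points lying on the plane z = 0.5 (camera frame)
points = np.array([[0.0, 0.0, 0.5],
                   [1.0, 0.0, 0.5],
                   [0.0, 1.0, 0.5],
                   [1.0, 1.0, 0.5]])
normal, d = support_plane(points)
```

Using four points rather than the minimum three makes the fit overdetermined, which averages out pixel-extraction noise on the laser rays.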

Stage 3 – Intersection and transformation

The intersection between the projection line and the support plane gives the 3D point corresponding to the feature sought. Because this measurement point is referenced to the camera's extrinsic coordinate system, a transformation is required to express it in a global coordinate system, invariant to the location of the sensor. The required transformation matrix is calculated for this purpose from the information the robot returns about its current position.
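The intersection and transformation steps can be sketched as follows. The rotation `R` and translation `t` standing in for the robot-reported sensor pose are illustrative assumptions:

```python
import numpy as np

def intersect_line_plane(origin, direction, normal, d):
    """Intersect the projection line (origin + s * direction) with the
    support plane (normal . x + d = 0), both in the camera frame."""
    s = -(normal @ origin + d) / (normal @ direction)
    return origin + s * direction

def camera_to_global(point_cam, R, t):
    """Transform a camera-frame point into the global coordinate
    system using the sensor pose reported by the robot."""
    return R @ point_cam + t

# Ray along the optical axis, support plane at z = 0.5 (camera frame)
point_cam = intersect_line_plane(np.zeros(3),
                                 np.array([0.0, 0.0, 1.0]),
                                 np.array([0.0, 0.0, 1.0]), -0.5)
# Assumed sensor pose: identity rotation, translation (1, 2, 3)
point_world = camera_to_global(point_cam, np.eye(3),
                               np.array([1.0, 2.0, 3.0]))
```

Because the pose comes from the robot at measurement time, the same code covers both the dynamic (tool-mounted) and static (fixed-stand) configurations: only the source of `R` and `t` changes.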

After the coordinate-system transformation has been performed, the system is ready to measure any parameter or feature on any of the axes.