UAI Annotator - Algorithmic Data Annotation Tool

Powered by the automation engine for unmatched automation rates.

UAI Annotator is our annotation tooling for camera, lidar and sensor fusion data. Superpowered by the automation engine for data annotation, UAI Annotator delivers unprecedented automation rates, accelerating the training and testing of AI-based systems such as assisted and autonomous driving.

The Automation Leap in Data Labeling

Getting a piece of assisted & autonomous driving technology on the road takes a lot of labeled frames. Even with the best tools and the best labelers, there is only so much you can do before the costs go through the roof.

Zero-Touch Annotation™ is our trademarked technology that uses deep neural networks for object & attribute detection to automate labeling, dramatically cutting the costs of manual annotation and quality assurance.
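For illustration, a pre-labeling loop of this kind is often structured as follows: a detector proposes labels, high-confidence proposals are accepted automatically and only low-confidence ones are routed to human review. The Python sketch below shows that generic pattern only; it is not the actual Zero-Touch Annotation pipeline, and the detector interface and the 0.9 threshold are assumptions.

def pre_label(frames, detector, review_queue, confidence_threshold=0.9):
    # detector(frame) is assumed to yield dicts with boxes, attributes and a score.
    annotations = []
    for frame in frames:
        for detection in detector(frame):
            if detection["score"] >= confidence_threshold:
                annotations.append(detection)            # accepted automatically
            else:
                review_queue.append((frame, detection))  # sent to manual QA
    return annotations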

Why the industry chooses us

We are proud to work with the most innovative and technologically advanced mobility companies and suppliers. Let us introduce our team of experts to your next project.

Dr. Florian Faion, Research Scientist LiDAR Perception

Highly accurate annotation is an indispensable prerequisite for supervised machine learning. We rely on the labeling service and tools from understand.ai.

EnBW Barriersystems
Lars Ehlkes, Machine Learning Engineer

EnBW Barriersystems appreciates the high quality of image annotations by understand.ai - the personal communication, fast response and fit of individual user needs combined with fair pricing is just outstanding!

LTTS
Indrajit Sen, Vice President and Regional Head, DACH

There’s a lot of automation inside. The architecture of the UAI tooling takes full advantage of AI. The way it’s structured makes it scalable, able to support large teams of labelers and parallel annotation. It’s boosting our throughput capabilities and making our customers happy.

Eindhoven University of Technology
Prof. Veronika Cheplygina

understand.ai responded very quickly and were able to provide high-quality annotations within the same day, for an appropriate price. I would definitely recommend understand.ai to researchers in a similar position.

Sensagrate
Darryl Keaton II, Founder and President

Data annotation can be difficult and expensive without a dependable partner who's easy to onboard with your needs. The willingness of understand.ai to take customer feedback and apply it quickly to the business, and the flexibility of their tools for cost and throughput growth, helped us get where we wanted.

Volkswagen
Dr. Peter Schlicht, Project Manager AI-Technologies for automated driving

understand.ai is not only software, you also get a dedicated team that is knowledgeable and confident to exchange competencies and creative solution-oriented approaches, which definitely enriched our work.

Image: 3D bounding boxes in a lidar point cloud - before (raw point cloud) and after annotation (3D boxes in point cloud)

3D LiDAR Bounding Boxes

LiDAR is an active optical sensor technology that scans its surroundings to obtain highly accurate x, y and z measurements. It emits laser pulses, measures the light reflected back to the receiver, and combines the resulting time-of-flight distances with GPS and INS information to construct a 3D point cloud of reflective obstacles. LiDAR adds complementary information and is very strong at detecting pedestrians and other non-metallic objects at night, a scenario in which camera and RADAR sensors often fail.
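To make the time-of-flight geometry concrete, here is a minimal Python sketch that converts a single LiDAR return into a Cartesian point in the sensor frame. The function name and the use of beam azimuth/elevation angles are assumptions for illustration; the GPS/INS transform into a world frame is left out.

import numpy as np

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def lidar_return_to_point(time_of_flight_s, azimuth_rad, elevation_rad):
    # The pulse travels to the target and back, so range is half the round trip.
    rng = SPEED_OF_LIGHT * time_of_flight_s / 2.0
    # Spherical beam angles to Cartesian coordinates in the sensor frame.
    x = rng * np.cos(elevation_rad) * np.cos(azimuth_rad)
    y = rng * np.cos(elevation_rad) * np.sin(azimuth_rad)
    z = rng * np.sin(elevation_rad)
    return np.array([x, y, z])

# Example: a return after ~200 ns corresponds to a target roughly 30 m away.
print(lidar_return_to_point(200e-9, azimuth_rad=0.1, elevation_rad=0.02))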

Our UAI Annotator covers all your needs for fast, high-quality and reliable labeling of 3D bounding boxes, or cuboids, in LiDAR data, through a highly automated tool designed to give your labelers superpowers. Thanks to superior 3D point cloud handling, millions of points are processed smoothly, navigated with ease and integrated seamlessly into the workflow.
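For illustration, a 3D box label of this kind is usually represented by a centre position, box dimensions and a heading angle. The sketch below is a generic cuboid structure with assumed field names, not the UAI Annotator export format.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Cuboid3D:
    # Centre of the box in the LiDAR frame, in metres.
    x: float
    y: float
    z: float
    # Box dimensions in metres.
    length: float
    width: float
    height: float
    # Heading around the vertical axis, in radians.
    yaw: float
    category: str = "car"
    # Stable id that links the same object across frames of a sequence.
    track_id: Optional[int] = None

car = Cuboid3D(x=12.4, y=-1.8, z=0.9, length=4.6, width=1.9, height=1.5, yaw=0.03, track_id=17)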

Image: 2D bounding boxes - camera image before (raw measured data) and after annotation (bounding boxes)

2D Bounding Boxes

The most commonly used annotation type is the 2D bounding box. Bounding boxes are easy to feed into machine learning models and faster to annotate than other annotation types.

Unlike segmentation, bounding boxes may also cover invisible parts of the classified object by approximating occlusions and truncations. Because bounding boxes are inherently instance-aware, your algorithms gain a better understanding of individual objects and can, if you want, track specific objects throughout a sequence. Bounding boxes are most often used for testing and validation of new sensors or for tracking objects in sequential data.
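A typical amodal 2D box record of the kind described above could look like the sketch below; the field names and value conventions are assumptions, not a UAI schema.

from dataclasses import dataclass
from typing import Optional

@dataclass
class BoundingBox2D:
    # Box corners in image coordinates; the box may extend over occluded
    # or truncated parts of the object (amodal annotation).
    x_min: float
    y_min: float
    x_max: float
    y_max: float
    category: str
    occlusion: float = 0.0          # fraction of the object hidden by other objects
    truncation: float = 0.0         # fraction of the object cut off by the image border
    track_id: Optional[int] = None  # stable id across frames of a sequence

pedestrian = BoundingBox2D(512.0, 300.0, 580.0, 470.0, "pedestrian", occlusion=0.3, track_id=4)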

Image: camera image before (raw measured data) and after semantic segmentation

2D & 3D Semantic Segmentation

Depending on the raw data, bounding boxes can contain noise in the form of background and occlusions. Semantic segmentation tackles this by annotating every pixel with the class of the object it belongs to. It is therefore the closest to a true representation of reality in 2D and 3D space as far as class assignments are concerned. It is also more versatile, since it makes it easy to distinguish between objects such as road, lanes and curbs, and to track instances of them throughout the sequence.

UAI is the only tooling provider capable of annotating and exporting 2D segmentations in both pixel-based and polygon-based modes. UAI Annotator removes the complexity of semantic segmentation by automating the tricky bits, reducing the costs associated with the annotation process and its set-up.
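The relationship between the two modes can be illustrated by rasterizing a polygon label into a per-pixel class mask. This is a generic sketch using Pillow and NumPy, not the UAI export pipeline, and the class id is hypothetical.

import numpy as np
from PIL import Image, ImageDraw

def polygon_to_mask(polygon, image_size, class_id):
    # polygon: list of (x, y) vertices in image coordinates.
    # image_size: (width, height) of the camera image.
    # Returns a uint8 array where covered pixels carry class_id and the rest 0.
    mask = Image.new("L", image_size, 0)
    ImageDraw.Draw(mask).polygon(polygon, fill=class_id)
    return np.array(mask, dtype=np.uint8)

# Example: a road region (hypothetical class id 7) in a 1920x1080 image.
road = [(0, 1079), (500, 600), (1400, 600), (1919, 1079)]
mask = polygon_to_mask(road, (1920, 1080), class_id=7)
print(mask.shape, np.unique(mask))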

Image: camera image without (raw measured data) and with annotated polylines

2D Polyline Annotation

Polylines are used to annotate road lanes and other open-ended or closed objects so that they are recognizable to perception algorithms. Polyline annotation enables the accurate detection of the path ahead of an autonomous vehicle. Polylines are also used for self-localization of vehicles in High-Definition (HD) maps.

Polylines are an essential part of training data sets for reliable and safe self-driving AI models. UAI Annotator can annotate any traffic environment with enhanced meta information describing road conditions, surface or colour, markings and more.
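As a sketch of what such an enriched polyline label can carry, the structure below lists a few plausible attributes; the field names and values are assumptions, not the UAI Annotator schema.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class PolylineAnnotation:
    # Ordered (x, y) vertices in image coordinates; open-ended unless closed is True.
    points: List[Tuple[float, float]]
    lane_type: str = "dashed"     # e.g. "solid", "dashed", "double"
    color: str = "white"          # colour of the marking
    surface: str = "asphalt"      # road surface / condition
    closed: bool = False          # True for closed contours such as traffic islands

lane = PolylineAnnotation(points=[(210.0, 1079.0), (480.0, 620.0), (655.0, 410.0)])
print(len(lane.points), lane.lane_type)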

Image: sensor fusion - a lidar point cloud scan and the corresponding camera picture

3D Sensor Fusion Annotation

Sensor fusion is commonly defined as the fusion of inputs from multiple sensors, such as radar, lidar and cameras, with the goal of creating a virtual model of the environment around the vehicle. The combination of different sensor types allows for a much more complete representation of the real world than a single sensor could possibly provide.
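A core building block of camera-lidar fusion is projecting lidar points into the camera image using the extrinsic and intrinsic calibration. The following is a minimal NumPy sketch of that step; the matrix names are assumptions.

import numpy as np

def project_lidar_to_image(points_lidar, T_cam_from_lidar, K):
    # points_lidar: (N, 3) points in the lidar frame.
    # T_cam_from_lidar: 4x4 extrinsic transform from lidar to camera frame.
    # K: 3x3 camera intrinsic matrix.
    homo = np.hstack([points_lidar, np.ones((points_lidar.shape[0], 1))])
    pts_cam = (T_cam_from_lidar @ homo.T).T[:, :3]
    in_front = pts_cam[:, 2] > 0                    # keep points ahead of the camera
    uvw = (K @ pts_cam[in_front].T).T
    return uvw[:, :2] / uvw[:, 2:3]                 # perspective division to pixels

# Example with identity extrinsics (points already in the camera frame) and a
# simple pinhole camera (fx = fy = 1000, cx = 960, cy = 540).
K = np.array([[1000.0, 0.0, 960.0], [0.0, 1000.0, 540.0], [0.0, 0.0, 1.0]])
pts = np.array([[0.5, 0.2, 10.0], [-1.0, 0.0, 25.0]])
print(project_lidar_to_image(pts, np.eye(4), K))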

Understand.ai specializes in sensor fusion projects and can provide an unprecedented level of automation in this area.

The Leader in Automation

This is the automation engine for data annotation. Tooling designed by AI-natives to cut costs and to accelerate the training and testing of your assisted & autonomous driving technology.

Ever-Improving Automation Rates

Image: data annotation automation rates