Zero-Touch Annotation™ Automation

Zero errors, zero delays, zero worries about your data annotation budget.

The Automation Engine

With algorithmic annotations and automated workflows, the understand.ai automation engine gives labelers superpowers. Ramp up your annotation throughput and achieve the highest quality at the best price with Zero-Touch Annotation™ technology.

Let’s Talk About Costs

Getting a piece of assisted & autonomous driving technology on the road requires a lot of labeled frames. Even with the best tools and the best labelers, there is only so much you can do before the costs go through the roof.

Thanks to the automation levels of Zero-Touch Annotation™, you’ll be able to reduce the cost of manual work dramatically.

Why the industry chooses us

We are proud to work with the most innovative and technologically advanced mobility companies and suppliers. Let us introduce our team of experts to your next project.

Dr. Florian Faion, Research Scientist LiDAR Perception

Highly accurate annotation is an indispensable prerequisite for supervised machine learning. We rely on the labeling service and tools from understand.ai.

Lars Ehlkes, Machine Learning Engineer

EnBW Barriersystems appreciates the high quality of image annotations by understand.ai - the personal communication, fast response and fit to individual user needs combined with fair pricing are just outstanding!

Prof. Veronika Cheplygina, Eindhoven University of Technology

understand.ai responded very quickly and were able to provide high-quality annotations within the same day, for an appropriate price. I would definitely recommend understand.ai to researchers in a similar position.

Darryl Keaton II, Founder and President

Data annotation can be difficult and expensive without a dependable partner who is easy to onboard with your needs. The willingness of understand.ai to take customer feedback and apply it quickly to the business, and the flexibility of their tools for cost and throughput growth, helped get us where we wanted to be.

Dr. Peter Schlicht, Project Manager AI-Technologies for Automated Driving

understand.ai is not only software; you also get a dedicated team that is knowledgeable and confident in exchanging competencies and creative, solution-oriented approaches, which definitely enriched our work.

[Image: 3D boxes in the point cloud vs. the raw point cloud]

3D Annotation Tooling

LiDAR is an active optical sensor technology that scans the environment to obtain highly accurate x, y and z measurements. The sensor emits laser pulses that are reflected back to the receiver by the objects they hit; from the measured time of flight, combined with GPS and INS information, it constructs a 3D point cloud of reflective obstacles. LiDAR adds complementary information and is very strong at detecting pedestrians and other non-metallic objects at night, a scenario in which camera and RADAR sensors often fail.
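To make the time-of-flight principle concrete, here is a minimal sketch of how a single return could be converted into a range and a 3D point in the sensor frame; the function names, beam-angle convention and numbers are illustrative assumptions, not understand.ai internals.

# Minimal, illustrative sketch of the LiDAR time-of-flight principle.
# All names and values are assumptions for demonstration, not understand.ai code.
import math

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def range_from_round_trip(round_trip_time_s: float) -> float:
    """The pulse travels to the object and back, so the range is half the path."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

def point_from_return(round_trip_time_s: float, azimuth_rad: float, elevation_rad: float):
    """Convert one return (round-trip time plus beam angles) into x, y, z in the sensor frame.
    In a real pipeline the point would additionally be transformed into a world frame
    using GPS/INS pose information."""
    r = range_from_round_trip(round_trip_time_s)
    x = r * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = r * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = r * math.sin(elevation_rad)
    return x, y, z

# A return after roughly 0.2 microseconds corresponds to a range of about 30 m.
print(point_from_return(2e-7, math.radians(15), math.radians(1)))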

Our UAI Annotator covers all your needs for fast, high-quality and reliable labeling of 3D bounding boxes, or cuboids, for LiDAR, through a highly automated tool designed to give your labelers superpowers. Thanks to superior 3D point cloud handling, millions of points are processed smoothly, navigated with ease and integrated seamlessly into the workflow.
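As a rough illustration of what such a cuboid label can look like, here is a hypothetical data structure; the field names and conventions are our own assumptions, not the UAI Annotator's export format.

# Hypothetical sketch of a 3D cuboid (bounding box) label in a LiDAR point cloud.
# Field names and conventions are illustrative assumptions, not the UAI Annotator format.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Cuboid3D:
    label: str                          # object class, e.g. "car" or "pedestrian"
    center: Tuple[float, float, float]  # (x, y, z) center of the box in meters
    size: Tuple[float, float, float]    # (length, width, height) in meters
    yaw: float                          # heading around the vertical axis, in radians
    track_id: int                       # stable ID so the same object can be followed across frames

box = Cuboid3D(label="car", center=(12.4, -3.1, 0.9), size=(4.5, 1.9, 1.6), yaw=0.12, track_id=7)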

[Image: bounding box annotation vs. raw measured data]

2D Annotation Tooling - Bounding Boxes

The most commonly used annotation type is the 2D bounding box. Bounding boxes are easy to use with machine learning models and faster to annotate than other annotation types.

Unlike segmentation, bounding boxes may also cover invisible parts of the classified object by approximating occlusions or truncations. Due to the inherent instance-awareness of bounding boxes, your algorithms will get a better understanding of the concept of a specific object and can, if you want, track certain objects throughout the sequence. Bounding boxes are most often used for testing and validating new sensors or for tracking objects in sequential data.
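A minimal sketch of what such an instance-aware 2D box label could look like is shown below; the fields, including the occlusion, truncation and track-ID attributes, are illustrative assumptions rather than a fixed understand.ai schema.

# Hypothetical sketch of a 2D bounding box label with occlusion/truncation flags
# and a track ID; names are illustrative assumptions, not an understand.ai schema.
from dataclasses import dataclass

@dataclass
class BoundingBox2D:
    label: str        # object class, e.g. "pedestrian"
    x_min: float      # left edge in image pixels
    y_min: float      # top edge in image pixels
    x_max: float      # right edge in image pixels
    y_max: float      # bottom edge in image pixels
    occluded: bool    # True if part of the object is hidden behind another object
    truncated: bool   # True if the object extends beyond the image border
    track_id: int     # the same ID across frames lets the object be tracked in a sequence

box = BoundingBox2D("pedestrian", 412.0, 188.5, 463.0, 301.0,
                    occluded=True, truncated=False, track_id=3)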

[Image: semantic segmentation vs. raw measured data]

2D & 3D Semantic Segmentation

Since the world is not made of boxes, we also offer a more precise method to annotate your data - semantic segmentation.

Depending on the raw data, bounding boxes can contain noise in the form of background and occlusions. Semantic segmentation tackles this: every pixel that belongs to one of your selected object classes is annotated with that class. It is therefore the closest you can get to a true representation of reality in 2D & 3D space as far as class assignments are concerned. It is also more versatile, since it becomes easy to distinguish between objects such as road, lanes and curbs and to track instances of them throughout a sequence, and it allows annotating classes that cannot be split into instances, e.g. groups of pedestrians. Pixelwise & voxelwise semantic segmentation is often used for training neural networks on annotated images, videos or scans.
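The idea of a per-pixel class assignment can be sketched as a small class-ID mask; the class names, IDs and values below are purely illustrative assumptions.

# Illustrative sketch of pixelwise semantic segmentation: every pixel carries a class ID.
# The class map and the tiny mask are assumptions for demonstration only.
import numpy as np

CLASS_IDS = {"background": 0, "road": 1, "lane_marking": 2, "curb": 3, "pedestrian": 4}

# A tiny 4x6 "image" whose mask assigns a class to every pixel.
mask = np.array([
    [1, 1, 2, 2, 1, 3],
    [1, 1, 2, 2, 1, 3],
    [1, 1, 1, 1, 4, 4],
    [0, 0, 1, 1, 4, 4],
], dtype=np.uint8)

# Per-class pixel counts are a common sanity check on annotation coverage.
for name, class_id in CLASS_IDS.items():
    print(name, int((mask == class_id).sum()))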

[Image: annotated polylines vs. raw measured data]

2D & 3D Annotation Tooling - Polylines

Polylines are used to annotate road lanes and other open-ended or closed objects so that they are recognizable to perception algorithms. Polyline annotation enables the accurate detection of the path ahead of an autonomous vehicle.

They are also used for self-localization of vehicles in High-Definition (HD) Maps. Polylines are an essential part of training data sets for reliable and safe self-driving AI models.
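A hypothetical polyline label for a lane boundary might look like the sketch below; the field names and the open/closed flag are assumptions for illustration only.

# Hypothetical sketch of a polyline label for a lane boundary; field names are
# illustrative assumptions, not an understand.ai export format.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Polyline:
    label: str                          # e.g. "lane_marking_solid"
    points: List[Tuple[float, float]]   # ordered vertices, here in image pixels
    closed: bool                        # True for closed outlines, False for open-ended lanes

left_lane = Polyline(
    label="lane_marking_solid",
    points=[(120.0, 720.0), (260.0, 540.0), (355.0, 410.0), (420.0, 320.0)],
    closed=False,
)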

The Leader in Automation

This is the automation engine for data annotation. Tooling designed by AI-natives to cut costs and to accelerate the training and testing of your assisted & autonomous driving technology.

Ever-Improving Automation Rates