Real-world Scenario Generation in Comparison with Object-list-based Scenarios

7 minute read

What we discussed so far in Scenario Generation

In our last blog post, we described why and how we generate real-world scenarios from measurement data at large scale for testing and validating automated driving functions. We also gave an insight into how this approach can improve and accelerate the development process in general. We described the challenge of validating automated driving functions: the test space for such functions is enormous, and it is not feasible to perform all the tests on public roads. We showed how important simulation is for validating and testing autonomous driving functions and why real-world scenarios are crucial for this task.

In this post, we want to show you the difference between scenarios generated from object lists produced by the perception stack of an autonomous vehicle on the one hand, and scenarios generated from raw data recorded by different sensors on the other. We also highlight and contrast the advantages and disadvantages of both approaches.

Real-world Scenarios vs. Object-list-based Scenarios

Object lists – what are they and where do they come from?

When we talk about object lists here, we mean a list of objects derived from the perception stack of autonomous (prototype) vehicles or from individual sensors that are able to detect and track objects on the road such as pedestrians, bicyclists, cars, trucks, buses, and so on. In the case of individual sensors, this could be a camera, a LiDAR sensor, or a radar sensor, each detecting objects on the road based on its own physical principle. In the respective electronic control units (ECUs) of each sensor, different algorithms make sure that the right objects are detected and tracked across timestamps, which also enables predicting a potential trajectory for the next few seconds.
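To make this more concrete, here is a minimal sketch of what one entry in such an object list could look like. All field names, units, and the constant-velocity prediction are illustrative assumptions, not a real sensor interface:

```python
from dataclasses import dataclass

# Hypothetical sketch of a single entry in a sensor object list.
# Field names and units are illustrative, not a real sensor interface.
@dataclass
class TrackedObject:
    track_id: int          # stable ID so the object can be followed over time
    object_class: str      # e.g. "car", "truck", "pedestrian", "bicyclist"
    timestamp_us: int      # measurement time in microseconds
    x_m: float             # longitudinal position relative to the ego vehicle [m]
    y_m: float             # lateral position relative to the ego vehicle [m]
    vx_mps: float          # longitudinal velocity [m/s]
    vy_mps: float          # lateral velocity [m/s]

def predict_position(obj: TrackedObject, horizon_s: float) -> tuple:
    """Constant-velocity guess of where the object will be in horizon_s seconds."""
    return (obj.x_m + obj.vx_mps * horizon_s,
            obj.y_m + obj.vy_mps * horizon_s)

car = TrackedObject(track_id=7, object_class="car", timestamp_us=0,
                    x_m=20.0, y_m=-3.5, vx_mps=5.0, vy_mps=0.0)
ahead = predict_position(car, 2.0)  # → (30.0, -3.5)
```

The stable `track_id` is what lets the ECU follow an object over consecutive timestamps and extrapolate a short-horizon trajectory, as described above.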

Currently, these detection and tracking features are mainly based on traditional computer vision algorithms, in the case of a camera for example. These kinds of object lists are mostly used for advanced driver assistance systems these days. In the case of autonomous (prototype) vehicles, we talk about object lists that are generated by sensor fusion algorithms. These algorithms take input from different sensors (camera, LiDAR, radar, …) and fuse either the raw data (early fusion) or the object lists of the individual sensors (late fusion) to generate one so-called environment model, which includes all information about the vehicle's surroundings, including all traffic participants that are present. We can think of this as one big object list across all sensor modalities.
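The late-fusion idea can be sketched in a few lines: each sensor delivers its own object list, and detections that lie close together are merged into one environment-model object. This is a toy distance rule for illustration only; real fusion stacks use probabilistic data association and tracking filters:

```python
# Toy illustration of "late fusion": detections from two per-sensor object
# lists that lie within max_dist_m of each other (Manhattan distance) are
# merged into one environment-model object. Illustrative only.
def late_fusion(camera_objs, radar_objs, max_dist_m=2.0):
    fused, used = [], set()
    for c in camera_objs:
        match = None
        for i, r in enumerate(radar_objs):
            if i not in used and abs(c["x"] - r["x"]) + abs(c["y"] - r["y"]) <= max_dist_m:
                match = i
                break
        if match is not None:
            used.add(match)
            r = radar_objs[match]
            # average positions, keep the camera's class label and the radar's velocity
            fused.append({"x": (c["x"] + r["x"]) / 2, "y": (c["y"] + r["y"]) / 2,
                          "cls": c["cls"], "v": r["v"]})
        else:
            fused.append(dict(c, v=None))  # camera-only detection
    fused.extend(r for i, r in enumerate(radar_objs) if i not in used)  # radar-only
    return fused

camera = [{"x": 10.0, "y": 0.0, "cls": "car"}]
radar = [{"x": 10.5, "y": 0.2, "v": 12.0}, {"x": 40.0, "y": 3.0, "v": 20.0}]
env_model = late_fusion(camera, radar)  # one fused car + one radar-only object
```

Early fusion would instead combine the raw sensor data (images, point clouds, radar returns) before any objects are extracted, which is out of scope for a sketch this small.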

The generation of scenarios from object lists

The straightforward way of generating scenarios for simulation is, of course, to take the object lists generated by the sensors or the perception stack of the vehicle and transform them into a format readable by the simulation environment in use. As most prototype vehicles record these object lists anyway, this is an easy and cheap way to replay encounters from test drives in simulation. Developers and testers can take them as they are and run replay simulations out of the box. But this solution comes with drawbacks as well. Firstly, only replay scenarios are possible, which means there is no way to simulate variants of the encountered scenarios. Secondly, the perception in the vehicles is not perfect. This has many reasons, but the main one is that the perception is bound to real time and limited to the constrained compute resources inside the vehicle, which in turn constrains the algorithms themselves. The result is imperfect object lists containing false positives and false negatives: some objects appear in the object list that were not present in reality (false positives), while other objects that were there in reality are missing from the list (false negatives).
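How flawed an in-vehicle object list is can be quantified by matching it against a ground-truth list and counting what is left over. The following is a simplified sketch with a plain distance-based matching rule; real evaluation pipelines match per frame with overlap (IoU) criteria:

```python
# Toy evaluation of a perceived object list against ground truth: each
# perceived object (x, y) is matched to the nearest unmatched ground-truth
# object within max_dist_m. Leftover perceived objects are false positives,
# leftover ground-truth objects are false negatives. Illustrative only.
def count_perception_errors(ground_truth, perceived, max_dist_m=1.0):
    unmatched_gt = list(ground_truth)
    false_positives = 0
    for p in perceived:
        match = next((g for g in unmatched_gt
                      if abs(g[0] - p[0]) + abs(g[1] - p[1]) <= max_dist_m), None)
        if match is not None:
            unmatched_gt.remove(match)
        else:
            false_positives += 1          # perceived object with no real counterpart
    false_negatives = len(unmatched_gt)   # real objects the perception missed
    return false_positives, false_negatives

gt = [(10.0, 0.0), (40.0, 3.0), (60.0, -3.0)]     # three cars in reality
perceived = [(10.2, 0.1), (80.0, 0.0)]            # one hit, one phantom
fp, fn = count_perception_errors(gt, perceived)    # → (1, 2)
```

Both error types matter for scenario generation: a phantom box adds a non-existent obstacle to the simulated scene, while a missed car removes a real one.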

Scene with box in magenta (false positive) and multiple missed cars (false negatives) in the middle

When these flawed object lists are used to create simulation scenarios, the false positives and false negatives are directly reflected in the scenario. The algorithm under test is therefore challenged with a completely different situation, in terms of semantics and criticality, than it faced in the real-world test drive. As a consequence, insights generated with such a scenario are not really transferable to reality.

Flaws of in-vehicle object-list based perception lead to flaws in the scenario simulation

Scenarios from real-world raw data measurements

In contrast to scenarios generated from object lists, scenarios generated from raw data can reflect real ground-truth scenarios: all objects and traffic participants that were present in reality are also reflected in the simulatable scenario. Not only are all the objects (on our side of the highway) there, their trajectories are also reproduced very precisely.

Scene with complete and precise coverage of all relevant objects

This can be realized by an AI-supported toolchain that is neither bound to real time nor constrained by limited compute resources and can therefore generate much more accurate object lists, with nearly no false positives or false negatives. On top of that, humans can be kept in the loop to guide the system at points where it is not confident enough to determine all objects on its own, or where human guidance adds value for important and/or hard decisions. This yields even more precise ground-truth object lists, which can then be transformed into the simulation environment. The result is a very accurate replay of encountered situations, which lets engineers check whether an update of their algorithm improved the driving function in one specific situation. The next step is to test the function across many situations to see where the driving function now performs better and where perhaps worse, assessing its overall performance over many real-world scenarios.
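The human-in-the-loop idea boils down to routing: detections the offline toolchain is confident about go straight into the ground-truth object list, while low-confidence detections are queued for manual review. The threshold value and field names below are illustrative assumptions, not a description of a specific product pipeline:

```python
# Sketch of confidence-gated human-in-the-loop review: detections below the
# threshold are routed to a manual review queue instead of being accepted
# automatically. Threshold and field names are illustrative assumptions.
def route_detections(detections, confidence_threshold=0.9):
    auto_accepted, needs_review = [], []
    for det in detections:
        if det["confidence"] >= confidence_threshold:
            auto_accepted.append(det)     # confident enough: accept automatically
        else:
            needs_review.append(det)      # uncertain: send to a human annotator
    return auto_accepted, needs_review

detections = [{"cls": "car", "confidence": 0.98},
              {"cls": "pedestrian", "confidence": 0.62}]
auto, review = route_detections(detections)  # car accepted, pedestrian reviewed
```

Because the toolchain runs offline, review effort can be spent exactly where it adds the most value, for example on vulnerable road users or ambiguous detections.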

Real-world scenarios close the precision gap between real-world and simulation


In this blog post, we discussed two possible ways to generate simulation scenarios from real-world situations. The first solution, using object lists from the perception stack of the vehicle or from single sensors, is cheap but comes with many limitations and drawbacks when it comes to testing and validating Level 3 to Level 5 driving functions. The approach of generating scenarios from recorded raw data has many advantages in comparison, including higher accuracy and coverage of detected and tracked objects, and therefore better transferability to the real world of insights gathered by simulating these scenarios.

Scalability is similar for both approaches, and object-list-based scenarios are significantly cheaper. But in every other category that matters, scenario generation from raw measured data is by far the superior solution.

Scenario Generation from raw data (solid line) outperforms object-list-based Scenarios (dotted line)

Would you like to learn more about Scenario Generation from real-world data? Get in touch!

Dominik Dörr (Product Manager Scenarios)