How AI is shaping the future of simulation
Author: Seth DeLand, MathWorks
03 October 2022
As its capabilities and reach continue to evolve, Artificial Intelligence (AI) plays an increasingly important role for engineers, who are now routinely tasked with integrating AI into the systems they design.
However, this presents its own set of challenges – after all, an AI model can only be as effective as the data it is trained on. As engineers explore new ways to create effective AI models, the combination of AI and simulation can help to solve the challenges of data quality, development time and reliability.
Firstly, simulation addresses the challenge of insufficient data: simulation models can synthesise training data that is difficult or expensive to collect in the real world. Secondly, AI models can be used as approximations for computationally expensive and complex high-fidelity simulations. Finally, AI models are being used in embedded systems for applications such as controls, signal processing, and embedded vision, where simulation has become key to the design process.
The following explores this in more detail.
Addressing data quality
Recently, the development of data-centric AI has brought renewed focus to the importance of training data. An AI model is only as good as the data used to train it, and ‘bad’ data can cost an engineer hours spent trying to determine why the model is not working, with no promise of insightful results.
It has been shown that time spent improving the training data, rather than tweaking the AI model’s architecture and parameters, often yields larger improvements in accuracy. However, collating good, clean real-world data is time-consuming and difficult. And while most AI models are static (they use fixed parameter values), engineers must be mindful of a constant stream of new data that is not necessarily captured in the training set.
Therefore, the use of simulation to augment existing training data has multiple benefits:
• Computational simulation is much less costly than physical experiments
• The engineer has full control over the environment, and can simulate scenarios that are difficult or dangerous to create in the real world
• Simulation gives access to internal states that might not be measured in an experimental setup, which can be very useful when debugging poor AI model performance in certain situations
With an AI model’s performance so dependent on the quality of its data, engineers can improve outcomes through an iterative process: simulate data, update the model, observe which conditions it predicts poorly, and collect more simulated data for those conditions.
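As a rough sketch of that loop, the Python example below (using scikit-learn purely for illustration; the simulate function, the operating points and the model choice are hypothetical stand-ins for a real physics simulation and a real training pipeline) retrains a model a few times, each time topping up the training set with simulated data for the operating condition it currently predicts worst.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

def simulate(conditions, n_samples=200):
    # Stand-in for a physics simulation: features drawn around the requested
    # operating condition, with a known response plus noise.
    X = conditions + 0.1 * np.random.randn(n_samples, len(conditions))
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.05 * np.random.randn(n_samples)
    return X, y

# Start from a small initial data set (here itself simulated, for simplicity)
X_train, y_train = simulate(np.array([0.0, 0.0]))
model = RandomForestRegressor(n_estimators=100, random_state=0)

operating_points = [np.array([1.5, 0.0]), np.array([0.0, 2.0]), np.array([1.5, 2.0])]
for _ in range(3):                                     # a few augmentation rounds
    model.fit(X_train, y_train)
    # Find the operating point where the current model performs worst...
    errors = []
    for cond in operating_points:
        X_val, y_val = simulate(cond, n_samples=50)
        errors.append(mean_absolute_error(y_val, model.predict(X_val)))
    worst = operating_points[int(np.argmax(errors))]
    # ...and augment the training set with simulated data for that condition.
    X_new, y_new = simulate(worst)
    X_train = np.vstack([X_train, X_new])
    y_train = np.concatenate([y_train, y_new])

model.fit(X_train, y_train)                            # final model on the augmented data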
Using tools such as Simulink and Simscape, engineers can generate simulated data that mirrors real-world scenarios. Because that data is produced in the same environment in which they build their AI models, they can automate more of the process and avoid switching between toolchains.
Implementing time-effective AI models
When designing algorithms that interact with physical systems, such as an algorithm to control a hydraulic valve, a simulation model of the system enables rapid design iteration. Whether it is called a ‘plant model’ (in controls), a ‘channel model’ (in wireless) or an ‘environment model’ (in reinforcement learning), this model recreates, with the necessary accuracy, the physical system the algorithm interacts with.
The problem is that, to achieve that necessary accuracy, engineers have historically built high-fidelity models from first principles. For complex systems these models take a long time to build, and they are slow to run. Long simulation times allow for fewer design iterations and leave little time to evaluate potentially better design alternatives.
This is where AI comes in. Engineers can use an AI model (a reduced-order model) to approximate their high-fidelity model, or even bypass creating a physics-based model altogether and train the AI model directly from experimental data. This reduced-order model is much less computationally expensive than the first-principles model, giving the engineer time to explore more of the design space. And if a physics-based model of the system does exist, it can still be used later to validate the design arrived at with the AI model.
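A minimal sketch of the idea, again in generic Python rather than any particular toolchain: a Gaussian process surrogate is fitted to a modest number of runs of an expensive simulation (represented here by the hypothetical high_fidelity_sim function) and then evaluated thousands of times, cheaply, to sweep the design space.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def high_fidelity_sim(params):
    # Placeholder for the expensive physics-based simulation.
    x, y = params
    return np.sin(3 * x) * np.exp(-y) + 0.05 * np.random.randn()

rng = np.random.default_rng(0)
X_train = rng.uniform(0, 1, size=(50, 2))             # 50 expensive simulation runs
y_train = np.array([high_fidelity_sim(p) for p in X_train])

# Fit the reduced-order (surrogate) model to the simulation results
surrogate = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
surrogate.fit(X_train, y_train)

# Thousands of cheap surrogate evaluations for design-space exploration
X_candidates = rng.uniform(0, 1, size=(10000, 2))
y_pred, y_std = surrogate.predict(X_candidates, return_std=True)
best_design = X_candidates[np.argmin(y_pred)]         # candidate to verify later against the full model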
Also, advances in the AI space, such as neural ODEs (Ordinary Differential Equations), combine AI training techniques with models that have physics-based principles embedded within them. These are useful when an engineer wishes to retain certain aspects of the physical system, while approximating the rest of the system with a more data-centric approach.
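The sketch below gives a flavour of that hybrid approach, assuming a simple spring-damper system whose basic physics is written out explicitly while a small neural network (built here with PyTorch, purely for illustration) learns only the residual dynamics from measured or simulated trajectories. The coefficients, network size and training loop are illustrative assumptions, not a recipe.

import torch
import torch.nn as nn

class HybridODE(nn.Module):
    """dx/dt = known_physics(x) + neural_correction(x), for state x = [position, velocity]."""
    def __init__(self):
        super().__init__()
        self.k = 1.0      # assumed spring constant (known physics, kept fixed)
        self.c = 0.1      # assumed damping coefficient (known physics, kept fixed)
        self.correction = nn.Sequential(  # learns what the simple physics misses
            nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 2))

    def forward(self, x):
        pos, vel = x[..., 0:1], x[..., 1:2]
        physics = torch.cat([vel, -self.k * pos - self.c * vel], dim=-1)
        return physics + self.correction(x)

def integrate(model, x0, dt, steps):
    # Explicit-Euler roll-out; a real workflow would likely use an adaptive ODE solver.
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] + dt * model(xs[-1]))
    return torch.stack(xs)

def train(model, x0, x_measured, dt, epochs=200):
    # x_measured: (steps+1, batch, 2) trajectories from experiment or simulation; x0: (batch, 2)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        x_pred = integrate(model, x0, dt, x_measured.shape[0] - 1)
        loss = torch.mean((x_pred - x_measured) ** 2)
        loss.backward()
        opt.step()
    return model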
Using AI to enhance algorithm efficacy
When designing algorithms for applications like control systems, engineers have come to rely on simulation. Often, they develop virtual sensors: estimators that calculate, from the sensors that are available, a value that is not directly measured. A variety of approaches are used, including linear models and Kalman filters.
However, these methods can struggle to capture the nonlinear behaviour present in many real-world systems. As a result, engineers are turning to AI-based approaches that have the flexibility to model that complexity. They use data (either measured or simulated) to train an AI model that predicts the unobserved state from the observed states, and then integrate that model with the rest of the system.
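A minimal virtual-sensor sketch follows, with synthetic data standing in for logged signals (the ‘measured’ inputs and the underlying relationship are assumptions chosen only to make the example self-contained): a small neural network learns to estimate the unmeasured quantity from the signals that are available.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(5000, 3))                 # measured signals (synthetic stand-ins)
y = np.tanh(X[:, 0]) + 0.3 * X[:, 1] * X[:, 2]         # unobserved state with nonlinear behaviour

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
virtual_sensor = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
virtual_sensor.fit(X_tr, y_tr)
print("validation R^2:", virtual_sensor.score(X_te, y_te))

# At run time, the trained network estimates the unmeasured state from the
# available sensor readings:
estimate = virtual_sensor.predict(X_te[:1])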
The AI model is then included in the control algorithm that ends up on the physical hardware. That hardware usually has tight performance and memory limitations, and the algorithm typically needs to be programmed in a lower-level language such as C/C++. These requirements restrict the types of machine learning model appropriate for such applications, so engineers may need to try several models and compare the trade-offs in accuracy and on-device performance.
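One way to weigh those trade-offs is sketched below, again with illustrative synthetic data: several candidate model types are trained on the same virtual-sensor task and compared on held-out accuracy and a rough parameter-count proxy for memory footprint. A real deployment would also profile execution time on the target hardware.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(5000, 3))                 # measured signals (synthetic)
y = np.tanh(X[:, 0]) + 0.3 * X[:, 1] * X[:, 2]         # quantity to estimate
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

candidates = {
    "linear model": LinearRegression(),
    "small tree": DecisionTreeRegressor(max_depth=6, random_state=0),
    "neural network": MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0),
}

for name, model in candidates.items():
    model.fit(X_tr, y_tr)
    if hasattr(model, "coefs_"):            # neural network: count weights and biases
        size = sum(w.size for w in model.coefs_) + sum(b.size for b in model.intercepts_)
    elif hasattr(model, "tree_"):           # decision tree: count nodes
        size = model.tree_.node_count
    else:                                   # linear model: count coefficients
        size = model.coef_.size + 1
    print(f"{name}: held-out R^2 = {model.score(X_te, y_te):.3f}, parameters ~ {size}")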
Reinforcement learning takes this a step further: it learns the entire control strategy, not just the estimator. This is a powerful technique for some challenging applications, such as robotics and autonomous systems, but training such a policy requires an accurate model of the environment, which may not be readily available, as well as the computational power to run a large number of simulations.
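At its simplest, learning a control strategy from a simulated environment looks something like the sketch below: tabular Q-learning on a toy ‘drive to the set point’ task, with the environment, rewards and hyperparameters all chosen for illustration. Practical applications involve continuous states, far richer environment models and correspondingly more computation.

import numpy as np

n_states, n_actions = 11, 2        # positions 0..10; actions 0 = move left, 1 = move right
goal = 5                           # the set point the learned "controller" should reach
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1
rng = np.random.default_rng(0)

def step(state, action):
    # Simulated environment: deterministic motion, reward for reaching the set point.
    next_state = int(np.clip(state + (1 if action == 1 else -1), 0, n_states - 1))
    reward = 1.0 if next_state == goal else -0.01
    return next_state, reward, next_state == goal

for _ in range(2000):              # many cheap simulated episodes
    state = int(rng.integers(n_states))
    done = False
    while not done:
        # Epsilon-greedy action selection
        action = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[state]))
        next_state, reward, done = step(state, action)
        target = reward + (0.0 if done else gamma * np.max(Q[next_state]))
        Q[state, action] += alpha * (target - Q[state, action])
        state = next_state

policy = np.argmax(Q, axis=1)      # learned strategy: which way to move from each position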
In addition to virtual sensors and reinforcement learning, AI algorithms are increasingly used in embedded vision, audio and signal processing, and wireless applications. For example, an AI algorithm can detect road markings to help an automated vehicle stay centred in its lane. In a hearing device, AI algorithms can help to enhance speech and suppress noise. In a wireless application, they can apply digital predistortion to offset nonlinearities in a power amplifier.
In all of these, AI algorithms are part of the larger system. Simulation is used for integration testing to ensure that the overall design meets requirements.
AI is driving the future of simulation
As models evolve to serve increasingly complex applications, AI and simulation together will become even more essential parts of the engineer’s toolbox. Tools like Simulink and MATLAB have empowered engineers to optimise workflows and cut development time by letting them develop, test and validate models accurately and affordably before hardware is introduced.
As more and more engineers gain the ability to create and test AI models in this way, the approach will only continue to grow in use and help to underpin future innovation.