Why automating intelligence is critical to technology innovators
Author: Jeff Harris, Keysight Technologies
01 September 2022
Today's newest products are increasingly complex due to evolving standards, applications distributed across multiple cloud environments, and users who increasingly expect an “all in one” experience. This added complexity directly impacts development validation and test coverage for new products, increasing the pressure on development teams.
With so many potential vectors to test, you might expect automated design and test procedures to be mainstream. However, according to a recent Forrester study commissioned by Keysight, 89 percent of companies still employ manual processes, and only 11 percent fully automate their test matrices today. While full automation adoption is still low, companies do see value in automation: 75 percent report some automation, and nearly half want to automate fully within the next three years.
With design and test automation poised to dramatically impact innovation cycles, we asked Jeff Harris, Vice President, Portfolio and Global Marketing at Keysight Technologies, to expand on the state of automation in the industry today and where automation will transform the development life cycle.
Why did Keysight commission a Forrester Test and Measurement Automation Thought Leadership Study?
In December 2021, we commissioned Forrester Consulting to evaluate the use of data integration, analytics, artificial intelligence (AI), and machine learning (ML) in a typical product development cycle. Forrester surveyed 406 development leaders and asked them a series of questions related to how much they currently employ AI and ML in their product development process. The survey included developers across North America, EMEA, and APAC. We really wanted to understand the scalability of test innovation in a typical product development process.
Are organisations satisfied with their current testing system?
On the surface, we were encouraged to hear that most organisations report they are satisfied with their current development approaches, with 86 percent being moderately to very satisfied. When we looked underneath, though, we found that those same organisations report that 84 percent of projects and designs are either complex, multilayer sub-systems or integrated systems, most of which are not being tested.
Despite this apparent satisfaction, the study showed that companies feel pressure to do more, especially when asked about the future. For example, within three years, nearly half (45 percent) of companies would consider a fully automated test approach, and 72 percent expect to use at least augmented automation, where automation replaces some, but not all, of their test processes.
Currently, only one in 10 companies uses full automation in its development process, but we expect the COVID-19 pandemic to accelerate the adoption of remote development and automated test sequencing. We also expect much greater use of digital twins as development teams strive to keep working together from different locations.
Can you give an example of how companies can evolve to a more automated approach?
Here is a good example of where digital twin models would accelerate development processes and dramatically extend test coverage. Any good software developer can emulate a sequence of test IP traffic. However, emulating a wide range of protocols across all the libraries of different user applications, with traffic mixes in which many users exchange different kinds of traffic at once, is much more complex. At a certain scale, it is easier for developers to use emulators, otherwise known as digital twins.
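To make the scale problem concrete, here is a minimal sketch of a hand-rolled traffic mix; the applications, protocols, and weights are hypothetical and not drawn from any Keysight tool.

```python
import random

# Hypothetical illustration: a hand-rolled "traffic mix" for a handful of users.
# Each profile pairs an application protocol with a share of total traffic and
# a typical payload size; a real mix would also need session state, timing
# jitter, protocol handshakes, and per-release application behaviour.
TRAFFIC_MIX = [
    {"app": "video_stream", "protocol": "QUIC",   "share": 0.45, "payload_bytes": 1350},
    {"app": "voip_call",    "protocol": "RTP",    "share": 0.15, "payload_bytes": 160},
    {"app": "web_browsing", "protocol": "HTTP/2", "share": 0.25, "payload_bytes": 900},
    {"app": "file_sync",    "protocol": "TLS",    "share": 0.15, "payload_bytes": 1400},
]

def next_flow(rng: random.Random) -> dict:
    """Pick the next flow to emit, weighted by each application's traffic share."""
    profiles = [p["app"] for p in TRAFFIC_MIX]
    weights = [p["share"] for p in TRAFFIC_MIX]
    chosen = rng.choices(profiles, weights=weights, k=1)[0]
    return next(p for p in TRAFFIC_MIX if p["app"] == chosen)

if __name__ == "__main__":
    rng = random.Random(42)
    for _ in range(5):
        flow = next_flow(rng)
        print(f"{flow['app']:<12} over {flow['protocol']:<6} ({flow['payload_bytes']} B)")
```

Multiply this by hundreds of simultaneous users, evolving protocol versions, and every application library a product must support, and maintaining the script by hand becomes exactly the bottleneck an emulator removes.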
Even a traditional C programmer starting from a blank screen and adding low-level libraries struggles in a fast-changing development world. The complexity ramp-up is a new challenge that requires higher-level tools: tools that take measurements and simulations into account as more systems and processes are cloned into digital twins backed by deeper simulations and emulators.
Automation is rapidly becoming a must-have requirement. Currently, a fully manual test plan based on human data entry, some Python or graphical programming, and Excel sheets can only cover a small portion of possible user stories, and each element has to be updated manually for every software release.
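A small sketch shows why manual plans fall behind; the parameter dimensions and counts below are invented for illustration and are not taken from the Forrester study.

```python
from itertools import product

# Hypothetical parameter space for a connected product; the dimensions and
# counts are illustrative only.
protocols     = ["5G NR", "Wi-Fi 6E", "Ethernet"]
clouds        = ["AWS", "Azure", "on-prem"]
firmware      = ["v1.2", "v1.3", "v2.0-beta"]
user_profiles = ["enterprise", "consumer", "IoT gateway"]

full_matrix = list(product(protocols, clouds, firmware, user_profiles))

# A manually maintained Excel-style plan might spell out a few dozen rows,
# every one of which must be revisited at each software release.
manual_plan_rows = 25

print(f"Full combination matrix : {len(full_matrix)} cases")
print(f"Manual plan coverage    : {manual_plan_rows} cases "
      f"({manual_plan_rows / len(full_matrix):.0%})")
```

Even this toy example leaves roughly two thirds of the combinations untested, and every new parameter dimension multiplies the gap.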
New generations of software need to reflect application knowledge, advanced analytics algorithms, and measurement depth, wrapped in a framework of AI and ML to solve complex problems.
How is the number of tests and test times increasing due to the complexity?
The introduction of multi-cloud environments, the still-evolving 5G standards, high-speed I/O protocols and intertwined application layering has increased the number of tests (77 percent) and length of time to test (67 percent) for companies. For illustration purposes, let's assume that a product used to be tested in one minute with 30 tests, and now, it takes one minute 45 seconds and over 50 tests.
While these numbers sound small, when both the time per test and the number of tests increase at the same time, a developer's validation matrix expands multiplicatively rather than linearly. The result is a choice between increasing both validation and production testing or cutting the reach of the test matrix. One choice dramatically increases unit costs, while the other dramatically increases operational risk. Neither is a good choice.
How can companies react when test times and the number of tests nearly double?
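As a back-of-the-envelope check, reading the illustration above as per-test figures (an assumption; the units are not spelled out), the combined effect looks like this:

```python
# Back-of-the-envelope version of the illustration above, reading the figures
# as per-test numbers (an assumption about the units).
old_tests, old_seconds_per_test = 30, 60      # "one minute with 30 tests"
new_tests, new_seconds_per_test = 50, 105     # "one minute 45 seconds and over 50 tests"

old_total = old_tests * old_seconds_per_test  # 1,800 s  = 30 minutes
new_total = new_tests * new_seconds_per_test  # 5,250 s  = 87.5 minutes

print(f"Old run: {old_total / 60:.1f} min, new run: {new_total / 60:.1f} min "
      f"({new_total / old_total:.1f}x longer)")
```

Because both factors grow at once, total validation time roughly triples even in this modest example; adding further parameters multiplies it again.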
While test automation software is part of the answer and definitely needed, it is not enough. Automation is only as good as the analytics and insights it produces. In the Forrester survey, respondents disclosed that their test routines covered "more than needed" over half the time. Automation can reduce the time it takes to test, but it does not solve the question of test reach, quality, and coverage. Automating intelligent, farther-reaching test sequences backed by analytics and insights would address both test speed and test reach at the same time.
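As one illustration of what analytics-guided sequencing can mean in practice, here is a minimal sketch of test selection driven by simple analytics; the scoring heuristic, test names, and time budget are invented for illustration and are not a Keysight algorithm.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    recent_failure_rate: float   # fraction of recent runs that failed
    touches_changed_code: bool   # does it exercise code changed in this release?
    runtime_s: int

def prioritise(tests: list[TestCase], budget_s: int) -> list[TestCase]:
    """Spend the time budget on the tests most likely to reveal a problem,
    instead of re-running a fixed sequence that covers 'more than needed'."""
    scored = sorted(
        tests,
        key=lambda t: t.recent_failure_rate + (0.5 if t.touches_changed_code else 0.0),
        reverse=True,
    )
    selected, spent = [], 0
    for t in scored:
        if spent + t.runtime_s <= budget_s:
            selected.append(t)
            spent += t.runtime_s
    return selected

suite = [
    TestCase("5g_handover",     0.20, True,  300),
    TestCase("tls_renegotiate", 0.02, False, 120),
    TestCase("cloud_failover",  0.10, True,  600),
    TestCase("legacy_report",   0.00, False, 240),
]
print([t.name for t in prioritise(suite, budget_s=900)])
# -> ['5g_handover', 'cloud_failover']
```

A real system would feed the scores from measurement analytics and ML models rather than a hand-written heuristic, but the principle is the same: intelligence decides what to run, automation runs it.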
What is Keysight doing to address these complex test issues?
Keysight created a model to describe how we combine deep measurement expertise with intelligent analytics and insights, and the knowledge of how to use those insights to automate workflows. The end-to-end process is known as automating intelligence. These three ingredients – measurement depth, analytics and insights, and workflow automation – are at the core of every piece of software Keysight delivers.
Automating intelligence builds on the industry's deepest measurement technologies and simulations to provide faster insights that developers can use to get to market with greater speed and lower risk. Whether measuring power and ground, waveform signal quality, high-speed data I/Os, network integrity or application delivery, we think about what it will take to help customers speed their development processes.
And as individual technologies become more interconnected and products have more required interfaces, Keysight will continue to help with:
• automating intelligent insights across new workflows,
• accelerating the engineering development approach, and
• automating the necessary tests (leveraging advanced analytics, AI, and ML).
How will automation help solve these challenges?
Keysight sees all of this – the complexity, the reliability demands, the interactions, and the faster pace of new technology introduction – as presenting a new development paradigm for companies wanting to push the boundaries of innovation: a need for farther-reaching automation and digital twin development environments to accelerate the speed of delivery and improve product quality.
Hardware developers have long relied on emulation environments as part of the layout before prototyping. Using digital twins reduces the number of design variables by allowing them to measure the impact of different operating environments, conditions, and protocol evolutions against known good references. Similarly, software developers use scrum methods and test in emulation sandboxes to incrementally build and deploy new features in smaller groupings, while limiting the number of variables.
The rising complexity of product interactions – evolving communication protocols, evolving cloud platforms, continuous software and firmware updates – poses real challenges for developers because each represents a slew of new variables. Using automation and continuously updated digital twins wherever possible lets development teams reduce the variables related to their specific design. Reducing design variables against 'known good' digital twin references increases the likelihood that the innovation that works in practice matches what the developer envisioned.
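As a rough sketch of the 'known good' reference idea, the comparison below flags only the metrics where a measurement drifts outside the digital twin's tolerance band; the metric names, reference values, and tolerances are hypothetical.

```python
# Compare measured metrics against a digital-twin reference and flag anything
# outside tolerance, so only the genuinely suspect variables get investigated.
REFERENCE = {           # values the twin predicts for the current operating point
    "eye_height_mV": 120.0,
    "jitter_ps": 4.5,
    "throughput_gbps": 9.8,
}
TOLERANCE = {           # acceptable deviation from the reference, per metric
    "eye_height_mV": 10.0,
    "jitter_ps": 0.8,
    "throughput_gbps": 0.3,
}

def deviations(measured: dict[str, float]) -> dict[str, float]:
    """Return metrics whose measured value falls outside the twin's tolerance band."""
    return {
        name: measured[name] - ref
        for name, ref in REFERENCE.items()
        if abs(measured[name] - ref) > TOLERANCE[name]
    }

measured = {"eye_height_mV": 104.0, "jitter_ps": 4.9, "throughput_gbps": 9.7}
print(deviations(measured))   # {'eye_height_mV': -16.0} -> one variable left to chase
```

Everything that matches the reference can be ruled out, leaving the team to debug the one variable that actually moved.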
How does intelligence factor in?
If you were building a new office building, you would want a top-ranked architect and contractor. You count on their intelligence in measuring every aspect and giving you thoughtful design choices. Electronics are no different. The more precise the measurement system and the better the understanding of the team that builds your test system, the more you trust their results.
With the exponential increase in complexity and fragmentation, it’s necessary to move from simply ‘collecting data’ to making sure your measurement system has the intelligence you can rely on for better decision-making and insights. Next-generation software supports this change by using advanced analytics based on an understanding of real applications and AI algorithms that continually process data, learn, and evolve to improve decision quality. These actionable outputs and insights deliver better design choices for your development teams.
Can an automating intelligence approach reduce time to market?
In the study we commissioned, Forrester identified that the three most significant contributors to time to market were shared data across teams, better analytics on current test and measurement data, and software tools across the product development lifecycle. Together, these factors can speed development and validation times, creating a faster time-to-market cycle. Organisations need to understand and address all three:
• Data – Keysight believes that next-generation software will require improved data integration and sharing on all fronts:
- Physical – better measurement data and understanding of industry standards
- Digital – simulations and emulations, as well as combining physical and digital data from digital twins into a hybrid model with data fusion from several sources (see the sketch after this list)
• Better analytics – Automation is a requirement for the future, but analytics have the potential to revolutionise test organisations with the advances of AI and ML paired with deep application knowledge to reveal true insights. "What to measure" is now a critical stumbling block for many as the permutations of complex parameters continue to grow at an unprecedented rate
• Software tools across the product development lifecycle – there’s a need for higher-level tools that can follow products, remove artificial barriers between design and test, and automate the workflow
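As a rough sketch of the data-fusion point under 'Digital' above, the snippet below merges physical bench measurements with digital-twin predictions into one hybrid view; the metrics and values are hypothetical.

```python
# Fuse physical measurement data with digital-twin simulation results into one
# hybrid view, keyed by test condition. All values are illustrative.
physical = {   # bench measurements: frequency (GHz) -> insertion loss (dB)
    2.4: -1.2,
    5.0: -2.1,
    6.0: -2.9,
}
simulated = {  # digital-twin predictions for the same sweep
    2.4: -1.1,
    5.0: -2.0,
    6.0: -2.4,
    7.1: -3.3,   # simulated point with no physical measurement yet
}

hybrid = []
for freq in sorted(set(physical) | set(simulated)):
    meas, sim = physical.get(freq), simulated.get(freq)
    hybrid.append({
        "freq_GHz": freq,
        "measured_dB": meas,
        "simulated_dB": sim,
        "delta_dB": None if meas is None or sim is None else round(meas - sim, 2),
    })

for row in hybrid:
    print(row)
```

Rows with a large delta point to where the twin needs recalibrating, while rows with only a simulated value show coverage the physical bench has not reached yet.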
What is the difference between automating intelligence and artificial intelligence?
Artificial intelligence is only one building block needed to automate intelligence. How a measurement is taken, how it is analysed, and how disjointed parts of a design connect are all critical to intelligently automating any design, build, or operation.
While AI can successfully identify patterns, groups, relations, and outcomes, it is only as good as its designer in understanding the impact of changes in industry standards, frequency bands, legislation, measurement limitations, customer use conditions, etc. This is why the more complete approach to automating intelligence is vital.
What is success? Are organisations ready?
The survey highlighted that, for developers, the trifecta of automation and AI benefits comprises increased productivity, the ability to simulate product function and performance, and the automation and simulation of bug fixes.
In addition, there are wider business benefits stemming from adopting innovative test automation technologies that include:
• A higher-quality product that increases customer satisfaction
• Ability to reduce product time to market
• More agile and efficient product development cycles
Development teams that have already adopted Keysight’s Automating Intelligence approach are reaping these benefits today. Some examples include:
• NASA uses Keysight’s intelligent test automation in the Orion space programme to ensure that onboard software and equipment work as expected without faults. By deploying intelligent automation, NASA can accelerate the delivery and quality of its complex, mission-critical software system
• Oxford University Hospitals NHS Foundation Trust is world-renowned for its excellence in healthcare, training, and research. It turned to Keysight to automate regression testing, populate training domains, automate appointment scheduling, and add patient observations. The reliability, predictability, and consistency of the automation have helped the hospital improve productivity and accelerate the pace of testing
• FUJIFILM Group turned to Keysight to automate testing of its software embedded in medical devices. The platform enabled it to achieve efficiencies and ensure a high-quality and reliable product