External Contributor

Unlocking Data to Speed Up Product Design

By: Fatma Kocer, IndustryWeek

November 9, 2020

Machine learning can increasingly extract knowledge from past simulations, leaving more time for innovation.

How many of us can afford to reinvent the wheel? Manufacturers must deal with ever-shorter product lifecycles and intense time-to-market pressures. Yet in many new product development programs, designers are effectively obliged to relearn lessons of the past before they can start exploring new ideas. That’s because, until recently, there’s been little prospect of unlocking the insight contained in often vast archives of computer-based simulation data.

Why is this data so significant? Since it emerged in the late 1970s, simulation has played an increasingly important role in testing design concepts before they are transformed into prototypes and the finished product. In the process, simulation has had a dramatic impact on the speed, efficiency, and accessibility of industrial design.

Now there’s growing recognition of the value that lies in its fast-growing back catalog. By applying lessons learned in previous design cycles to new projects, enterprises can avoid much of the routine simulation that typically characterizes the start of a project. As a result, they stand to slash the time and cost of bringing new products to market.

Rewriting the rules of design

Not for the first time, it’s machine learning (ML) that is rewriting the rules. By enabling designers to extract knowledge from past simulations, ML is allowing them to ‘cut to the chase’ when creating new products. Initial rounds of simulation can be skipped, and the focus placed firmly on innovating and adding value. And it’s still early days for this approach. Leveraging the power of ML is only going to get easier.

When simulation first came onto the scene, each cycle or iteration was so time-consuming and expensive it was used purely to validate a final design, prior to production. Now, the sheer speed of simulation means it is routinely employed at every stage, from initial concept onwards. Designers are free to explore more iterations in a shorter space of time. Outcomes are improved and the risk of mistakes only coming to light late in the day is minimized. But at the same time, products are becoming ever more complex, and regulation increasingly rigorous. Consequently, the number and size of simulations required is growing fast. In some working environments, there also remains a less-than-perfect fit between designers using computer-aided design (CAD) tools and the specialists responsible for running simulations. That adds further delay.

Keeping a lid on time and cost

In trying to keep a lid on new product development costs and time to market, there’s obvious logic to mining old simulation data to see if key design questions have, in effect, already been answered. Of course, when ML is proposed as a means of achieving this, the first thing that comes to mind might be more simulations, not fewer. In this case, however, the aim is the opposite: models trained on past results stand in for many of the full simulations a new project would otherwise require, cutting the time the overall set of iterations takes.

Proving the benefits

A good example of the possibilities is provided by the recent experience of a North American automobile manufacturer. A key objective in the design of any new vehicle hood is to minimize injury risk to pedestrians in the event of a collision. Indeed, in this respect, manufacturers face strict safety regulations. To comply, designers must determine what is known as the head injury criterion (HIC) at up to 50 different points across the hood. What’s more, the HIC needs to be calculated at each point for both a child and an adult, with different rules applying in different markets. As a result, this one aspect of designing a single component would typically require around 150 separate simulations. At around six hours per simulation, each running on multiple CPUs, the design time and computing resources add up quickly.
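The back-of-envelope arithmetic behind those figures is worth making explicit; using only the numbers quoted in the article:

```python
# Rough cost of brute-force HIC validation for one hood design,
# using the figures quoted in the article (illustrative only).
measurement_points = 50   # HIC measured at up to 50 points on the hood
simulations = 150         # child/adult variants plus regional rules
hours_each = 6            # wall-clock hours per simulation

total_hours = simulations * hours_each
print(f"~{total_hours} solver-hours per design iteration")  # ~900 solver-hours
```

At roughly 900 solver-hours per iteration, even a handful of design revisions consumes weeks of compute, which is the cost the ML approach below sets out to avoid.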

To accelerate the process, we extracted metadata from an extensive archive of historical hood simulations and used it to train an ML model. The goal was to determine the individual influence of each design element and to enable quick performance predictions for new design suggestions. The inputs included so-called global inputs, such as the material used for the hood, and local inputs, such as where reinforcements were positioned. Ultimately, the design team was provided with a software tool that allowed them to test, in real time, the likely impact of their design proposals on HIC outputs. It even provided a percentage-based confidence rating, showing just how much insight had been extracted from past simulations. Hundreds, if not thousands, of designs can be explored this way in a matter of hours instead of days.
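The workflow described above can be sketched in miniature. This is not the manufacturer’s actual tool; the feature names, synthetic data, and choice of a random-forest surrogate are all illustrative assumptions, and the confidence figure here is simply derived from the spread of the ensemble’s predictions:

```python
# Illustrative sketch: a surrogate model trained on archived hood-impact
# simulation metadata, scoring new designs in milliseconds instead of hours.
# All feature names, data, and model choices are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Stand-in for metadata extracted from past simulations: global inputs
# (material stiffness, panel thickness) and local inputs (reinforcement
# position), with the simulated HIC value as the target.
n = 500
X = np.column_stack([
    rng.uniform(180, 220, n),  # material stiffness (illustrative units)
    rng.uniform(0.6, 1.2, n),  # panel thickness (mm)
    rng.uniform(0.0, 1.0, n),  # reinforcement x position (normalized)
    rng.uniform(0.0, 1.0, n),  # reinforcement y position (normalized)
])
# Synthetic response: thinner panels and mid-hood reinforcements score worse.
y = (800 + 300 * (1.2 - X[:, 1])
     + 200 * np.abs(X[:, 2] - 0.5)
     + rng.normal(0, 20, n))

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Score a new design proposal in real time.
proposal = np.array([[200.0, 1.0, 0.3, 0.7]])
prediction = model.predict(proposal)[0]

# A rough confidence signal: agreement among the individual trees.
per_tree = np.array([t.predict(proposal)[0] for t in model.estimators_])
confidence = 100 * (1 - per_tree.std() / per_tree.mean())

print(f"Predicted HIC: {prediction:.0f} (confidence ~{confidence:.0f}%)")
```

In practice the archive metadata replaces the synthetic data above, and a wide prediction spread (low confidence) signals a design region the historical simulations never covered, where a full simulation is still warranted.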

Not surprisingly, it’s an approach that is being adopted rapidly. But it doesn’t yet represent a “magic bullet.” For the moment, the vast majority of manufacturers must rely on external experts to extract metadata for each application. What’s more, current processes aren’t scalable. As a result, some manufacturers are holding off, either because they don’t have sufficiently organized archive data to draw on, or simply can’t make a financial case for collaborating with specialists on the development of predictive models for one-off projects.

However, things are changing fast, with off-the-shelf predictive modeling tools in development that will apply ML to historical simulation data without the need for bespoke, labor-intensive processes. Crucially, these won’t be standalone products. They will be embedded within the company’s existing simulation tools, making ML part of a designer’s familiar operating environment, integrated within existing workflows.

For anyone involved in product development, that’s an exciting prospect. And this particular design revolution has another interesting implication. All of a sudden, historical simulation data has become a valuable asset. Organizations with access to it have the potential to reap significant competitive advantages. Overwhelmingly, these are likely to be larger, well-established enterprises, particularly those that were among the early adopters of simulation. For once, data will be something agile startups have to catch up on, and an advantage established players should be keen to grasp with both hands.

Fatma Kocer is vice president, engineering and data science, Altair