How to Integrate Artificial Intelligence into Your Workflow


MathWorks’ Johanna Pingel talks with Senior Editor Bill Wong about how engineers can use artificial intelligence to optimize their workflows.

Engineers are increasingly seeking to integrate AI into their projects, both to improve their results and to stay ahead of their profession’s digital curve. To integrate AI successfully, engineers first need to understand what, exactly, AI is, and how it can fit into their current workflow. It may not be as straightforward as they first believe.

In this Q&A, Electronic Design’s Senior Editor Bill Wong talks with Johanna Pingel, product marketing manager at MathWorks, about how engineers can integrate AI into their projects, and how it can ultimately be used to optimize a complete workflow.

How do you define AI from an engineering perspective?

When engineers discuss AI, they’re usually focusing on AI models, but AI is much more than that. It’s an often-nebulous term that describes an operational strategy supported by machine learning. In engineering terms, “AI” actually spans four steps within a workflow: data preparation, modeling, simulation and testing, and deployment.
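The four stages can be pictured as a simple pipeline. The sketch below is purely illustrative: the stage functions and toy data are hypothetical stand-ins, not MathWorks APIs.

```python
# Purely illustrative pipeline: each stage is a hypothetical stand-in.
def prepare(raw):
    """Data preparation: drop bad samples, scale to [0, 1]."""
    return [x / 10.0 for x in raw if x >= 0]

def model(data):
    """Modeling: a toy 'model' that just averages its inputs."""
    return sum(data) / len(data)

def simulate(m):
    """Simulation and testing: check the 'model' behaves sanely."""
    return 0.0 <= m <= 1.0

def deploy(m):
    """Deployment: package the verified model as an artifact."""
    return {"artifact": m}

data = prepare([3, -1, 7])
m = model(data)
assert simulate(m)  # validate before deploying
artifact = deploy(m)
```

The point of the chain is that each stage consumes what the previous one produced, so a weakness upstream (bad data) surfaces everywhere downstream.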

Which step or steps are most important when incorporating AI into a workflow?

Each step is important. It’s crucial for engineers to remember that, because they often expect to spend most of their time on the second step—developing and fine-tuning AI models.

While modeling is undeniably a key part of the process, it’s neither the beginning nor the end of the integration journey. If any step in practical AI implementation is most important, it’s the first: data preparation. Preparing data well is critical to uncovering issues early and to knowing which parts of the workflow to focus on for the best results.

Of course, the most important step will depend on the specific application. But when in doubt, start with the data.

What else should engineers consider before incorporating AI into their workflow?

Engineers should recognize the value of their existing knowledge. When developing an AI workflow, many believe they lack the skills necessary to incorporate AI into their projects, and that’s rarely true. They have inherent knowledge of the problem they’re trying to solve, and access to data preparation and modeling tools that can help them leverage that expertise, even if they’re not AI experts.

They should also keep in mind that AI is only one part of a much larger system, and all parts must work together for the implementation to be successful.

Walk us through the four steps to developing a complete AI-driven workflow. What role does each step play in successfully incorporating AI into a project?

As mentioned, the first step, data preparation, is arguably the most important. Often, when deep-learning models don’t work as expected, engineers focus on the second stage: fine-tuning the model, tweaking its parameters, and running multiple training iterations. They fail to realize that to be effective, AI models need to be trained on robust, accurate data. Feed the model anything less and it will yield little insight, and the engineer will likely spend hours trying to work out why it isn’t working.

Instead, engineers are better served by focusing on the data they feed into the model. Preprocessing the data and ensuring it’s correctly labeled helps ensure the model can interpret it correctly.
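As a minimal illustration of this stage, the sketch below cleans and scales a batch of raw readings and attaches labels. The `prepare` function, the range limit, and the labeling rule are all hypothetical stand-ins for a real preprocessing and labeling pipeline.

```python
# Hypothetical data-preparation step: clean raw sensor readings,
# scale them, and attach labels before they reach the model.
def prepare(readings, limit=100.0):
    """Drop out-of-range samples, scale the rest to [0, 1], and label."""
    valid = [r for r in readings if 0.0 <= r <= limit]
    lo, hi = min(valid), max(valid)
    scaled = [(r - lo) / (hi - lo) for r in valid]
    # A simple threshold rule stands in for a human or automated labeler.
    return [(x, "high" if x > 0.5 else "low") for x in scaled]

# Two of the five raw readings are invalid and get dropped.
samples = prepare([12.0, 250.0, 75.0, -3.0, 40.0])
```

In practice the cleaning rules come from domain expertise, which is exactly the knowledge engineers already have.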

For example, engineers at construction equipment manufacturer Caterpillar have access to high volumes of field data generated by their machinery’s industry-wide use, but recognize that the sheer volume of data can interfere with their model’s effectiveness. To streamline the process, Caterpillar uses MATLAB to automatically label and integrate data into their machine-learning models, resulting in more promising insights from their field machinery. The process is scalable and gives Caterpillar’s engineers the freedom to apply their domain expertise to the company’s AI models without forcing them to become AI experts themselves.

Once the data is prepared, how important is the next step of the workflow—namely, modeling?

Assuming the data-preparation stage is complete, the engineer’s goal at the modeling stage is to create an accurate, robust model capable of making intelligent decisions based on the data. This is also the stage where engineers decide what form the model should take: machine learning such as a support vector machine (SVM) or decision trees, deep learning such as neural networks, or a combination of the two, choosing whichever option produces the best result for their application and business needs.
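A toy version of that choice might look like the sketch below, which trains two simple candidate classifiers on one-dimensional data and keeps whichever validates better. Both classifiers and the data are hypothetical stand-ins for real candidates such as an SVM or a neural network.

```python
# Hypothetical model selection: train two candidates, keep the better one.
def stump(train):
    """Threshold classifier: splits at the midpoint of the class means."""
    a = [x for x, y in train if y == 0]
    b = [x for x, y in train if y == 1]
    t = (sum(a) / len(a) + sum(b) / len(b)) / 2
    return lambda x: int(x > t)

def one_nn(train):
    """1-nearest-neighbor classifier: copies the closest sample's label."""
    return lambda x: min(train, key=lambda p: abs(p[0] - x))[1]

def accuracy(model, data):
    return sum(model(x) == y for x, y in data) / len(data)

train = [(0.1, 0), (0.2, 0), (0.8, 1), (0.9, 1)]
val = [(0.15, 0), (0.85, 1)]  # held-out validation data
best = max((stump(train), one_nn(train)), key=lambda m: accuracy(m, val))
```

The mechanism, not the models, is the point: candidates are compared on held-out data and the application decides which wins.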

It’s important for engineers to have direct access to algorithms for multiple workflow tasks, such as classification, prediction, and regression. In addition to providing more options, this lets them test their ideas against prebuilt models developed by the broader community, and potentially use one as a starting point.

It’s also crucial for engineers to remember that AI modeling is an iterative step within the workflow. They should track every change they make throughout, as doing so helps identify the parameters that increase the model’s accuracy and keeps results reproducible.
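One lightweight way to track those iterations is to log each run’s parameters and score. The sketch below assumes a hypothetical `record` helper and illustrative numbers; real experiment tracking would persist this log to disk or a database.

```python
# Hypothetical experiment tracking: record each training run's
# parameters and score so results stay reproducible.
import json

log = []

def record(run_id, params, score):
    """Append one run's configuration and result to the log."""
    log.append({"run": run_id, "params": params, "score": score})

record(1, {"lr": 0.1, "epochs": 10}, 0.82)
record(2, {"lr": 0.01, "epochs": 20}, 0.91)

# The log makes it trivial to find which parameters performed best.
best = max(log, key=lambda e: e["score"])
print(json.dumps(best))
```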

Now that we’ve prepared our data and set up a model, where does simulation and testing come in?

This step is the key to validating that an AI model is working properly and, more importantly, working effectively with other systems before it’s deployed in the real world. Engineers must keep in mind that AI models are part of a larger system and must work with all other pieces of that system. Consider an automated driving model: Not only must engineers design a perception system that detects objects such as stop signs, other cars, and pedestrians, but that system must also be integrated with others, like controls, path planning, and localization, to be effective.

The testing stage is essentially an opportunity for engineers to ensure the model they developed is accurate, and the best way to test that model is through simulation, using virtual tools such as Simulink.

At this stage, engineers should ask themselves questions to ensure their model will respond the way it’s supposed to, regardless of the situation. What is the model’s overall accuracy? Is the model performing as expected in every scenario? Does the model cover all edge cases?

By testing for accuracy via simulation, engineers can verify their model’s reliability under all anticipated use cases, avoiding costly redesigns that drain both money and time once a model is deployed.
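Scenario-by-scenario evaluation, in miniature, could look like the sketch below. The threshold model, scenario names, and test cases are hypothetical, chosen so the edge-case scenario deliberately exposes a weakness at the decision boundary.

```python
# Hypothetical scenario-based testing: evaluate the model separately on
# each named scenario, including edge cases, so weak spots stand out.
def evaluate(model, scenarios):
    """Return per-scenario accuracy rather than one blended number."""
    report = {}
    for name, cases in scenarios.items():
        report[name] = sum(model(x) == y for x, y in cases) / len(cases)
    return report

model = lambda x: int(x > 0.5)  # stand-in for a trained model
scenarios = {
    "nominal": [(0.2, 0), (0.9, 1)],
    "edge": [(0.5, 1), (0.51, 1)],  # inputs at the decision boundary
}
report = evaluate(model, scenarios)
```

A single aggregate accuracy would hide the boundary failure; the per-scenario report surfaces it before deployment, which is exactly when a redesign is still cheap.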

So, we’re finally ready to deploy our model. What role does AI play in this final step?

The deployment stage is no longer about the model, which by this point has been verified to process prepared data and extract accurate insights, but about the hardware it will run on and the language it’s implemented in. For example, a model can run directly on a GPU, and automatically generating highly optimized CUDA code can eliminate the coding errors often introduced through manual translation.

Engineers should keep this stage in mind throughout the process, ensuring they ultimately deliver an implementation-ready model compatible with their project’s designated hardware environment, which can range from the cloud to desktop servers to FPGAs.

Here, too, the right tools can make this stage easier. Flexible software capable of generating final code for all of these scenarios enables engineers to deploy a model across multiple environments without rewriting the original code.
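One common way to achieve that portability is to serialize the verified model’s parameters into a neutral format that each target environment can load. The sketch below uses JSON; the parameter names are hypothetical.

```python
# Hypothetical deployment handoff: serialize a verified model's
# parameters into a portable format so the same model can be shipped
# to different targets (cloud service, desktop, embedded code generator).
import json

model_params = {"threshold": 0.5, "scale": [0.0, 1.0]}

# Version the artifact so deployment targets can detect format changes.
artifact = json.dumps({"version": 1, "params": model_params})

# Each target environment restores the same parameters from the artifact.
restored = json.loads(artifact)
```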
