
TwinCAT Machine Learning offers an additional inference engine

Server engine for growing machine learning requirements

With TwinCAT Machine Learning Server as an additional inference engine, TwinCAT Machine Learning also meets the growing demands that machine learning (ML) and deep learning place on industrial applications: ML models are becoming ever more complex, faster execution is expected, and inference engines are required to be more flexible with respect to the ML models they support.

TwinCAT Machine Learning Server is a standard TwinCAT PLC library and a so-called near-real-time inference engine: unlike the two previous engines, it does not run in hard real time but in a separate process on the IPC. In return, the server engine can execute virtually any AI model, with full support for the standardized Open Neural Network Exchange (ONNX) format. In addition, AI-optimized hardware options for this TwinCAT product enable scalable performance.
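To illustrate the ONNX exchange path mentioned above, the following minimal sketch shows how a trained model is typically exported to ONNX on the data-science side; this is a generic PyTorch example, not Beckhoff's tooling, and the network and file name are placeholders. The TwinCAT-side deployment steps are not shown.

```python
# Illustrative sketch: exporting a trained model to the ONNX exchange format
# that an ONNX-capable inference engine can then consume.
import torch
import torch.nn as nn

# Placeholder network standing in for any trained PyTorch model
model = nn.Sequential(
    nn.Linear(4, 16),
    nn.ReLU(),
    nn.Linear(16, 2),
)
model.eval()

# A dummy input fixes the expected tensor shape for the export
dummy_input = torch.randn(1, 4)
torch.onnx.export(
    model,
    dummy_input,
    "example_model.onnx",   # hypothetical file name
    input_names=["input"],
    output_names=["output"],
)
```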

The TwinCAT Machine Learning Server can execute models with classic parallelization on CPU cores, on the integrated GPU of Beckhoff Industrial PCs, or on dedicated GPUs, e.g., from NVIDIA. The result is an inference engine with maximum flexibility in terms of models and high performance in terms of hardware. Applications range from predictive and prescriptive models to machine vision and robotics; examples include image-based methods for sorting and evaluating products, for defect classification, for defect or product localization, and for calculating gripping positions.
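The choice between CPU and GPU execution described above can be sketched in generic terms with the onnxruntime package; the server engine itself is configured from TwinCAT rather than from Python, so this example only illustrates the general idea of selecting an execution target for an ONNX model, and the file and input names are assumptions carried over from the export sketch.

```python
# Illustrative sketch: running an ONNX model on GPU if available, else CPU.
import numpy as np
import onnxruntime as ort

# Prefer a CUDA-capable GPU, fall back to CPU execution
providers = ["CUDAExecutionProvider", "CPUExecutionProvider"]
session = ort.InferenceSession("example_model.onnx", providers=providers)

# Run inference on a dummy input matching the exported model's shape
features = np.random.randn(1, 4).astype(np.float32)
outputs = session.run(None, {"input": features})
print(outputs[0])
```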

Ms. Shana Lambrechts
Beckhoff Automation BV
Klaverbladstraat 11.2/2
3560 Lummen
Belgium

+32 13 2522-00
bab-marketing@beckhoff.com
www.beckhoff.com/nl-be/