
Headquarters South Korea
Beckhoff Automation Co., Ltd.

Daeryung Techno Town III, 12th floor
115, Gasan digital 2-ro
Geumcheon-gu, Seoul 08505, South Korea

+82 2 2107-3242
info-kr@beckhoff.com
www.beckhoff.com/ko-kr/

TwinCAT Machine Learning:

Scalable, open and in real time

Machine learning for all areas of automation

Beckhoff offers a solution for machine learning (ML) and deep learning (DL) that is seamlessly integrated in TwinCAT 3. Both terms are subsumed under the umbrella term artificial intelligence (AI). Machine learning is the branch of AI that has made immense progress in recent years, not least because of the great successes of deep learning. DL, for example, has driven an enormous surge of innovation in the processing of image and speech signals.

TwinCAT Machine Learning builds on many of the well-known advantages of PC-based control: system openness through the use of established standards, and constant performance increases through advances in CPUs or even the use of GPUs as a special performance boost for ML and DL. The inference engines provided for the TwinCAT automation software can be addressed directly from the PLC and are therefore an integral part of a machine application. This merges the worlds of automation technology and data science and lays a foundation for new ideas and applications.

Product Manager, Dr. Fabian Bause, on the fundamental principles and application examples of machine learning

AI applications in industrial automation

Check out the accompanying video for examples of applications that have been successfully implemented on the basis of Beckhoff technology. The applications below have been singled out, not least by the consulting firm McKinsey, as the top fields of application for AI in industry:

  • collaborative and context-aware robotics
  • reducing waste quantities
  • machine optimization
  • machine-integrated quality checking
  • predictive maintenance

Workflow from data collection through training to integration of the trained ML model in the TwinCAT 3 Runtime (XAR).

Workflow with Beckhoff tools: From data to the ML model

The fundamental idea of machine learning is to no longer follow the classic engineering route of designing solutions for specific tasks and then turning these solutions into algorithms. Instead, the sought-after relationship between input and output variables is learned from example data. This approach is particularly promising when variance is expected that is difficult to predict and difficult to react to with classically developed methods. If, for example, the quality of wooden boards or agricultural products is to be evaluated visually, the large variance of these natural products must be accounted for in the evaluation. Deep learning approaches are capable of doing this and promise good results quickly.
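The contrast between hand-coding a relationship and learning it from data can be illustrated with a minimal, purely didactic sketch in plain Python. The data and the linear model here are made up for illustration; a real project would use a framework such as PyTorch or TensorFlow and far richer models:

```python
# Instead of hand-coding y = f(x), we *learn* the relationship from
# example data. Here the (hypothetical) samples follow y = 2x + 1,
# and simple gradient descent recovers that mapping from data alone.
samples = [(x, 2.0 * x + 1.0) for x in range(10)]  # made-up training data

w, b = 0.0, 0.0            # model parameters to be learned
lr = 0.01                  # learning rate
for _ in range(2000):      # gradient descent on the mean squared error
    grad_w = grad_b = 0.0
    for x, y in samples:
        err = (w * x + b) - y
        grad_w += 2 * err * x
        grad_b += 2 * err
    w -= lr * grad_w / len(samples)
    b -= lr * grad_b / len(samples)

# After training, (w, b) approximates the relationship (2.0, 1.0)
# that was never programmed explicitly, only present in the data.
```

The same principle scales up: a deep learning model for visual quality grading is "just" a vastly more expressive version of this parameter fitting, able to absorb the natural variance mentioned above.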

Beckhoff offers a complete, flexible and, above all, open workflow for the entire cycle, from data collection through model training to deployment of the trained model, without lock-in effects.

Data collection

Each application and each IT infrastructure places different demands on the method of collecting machine data: SQL or NoSQL, file-based, local or remote, restricted port openings, cloud-based data lakes, and many more. For all of these scenarios there is a large number of established TwinCAT products available, such as the TwinCAT 3 Database Server TF6420, the TwinCAT 3 Scope Server TF3300, the TwinCAT 3 Analytics Logger TF3500, or the TwinCAT 3 IoT Data Agent TF6720. For image data, TwinCAT Vision even provides an entire product family for image acquisition, image (pre)processing in the PLC, and image storage.

The task of data collection generally falls within the realm of automation specialists. They know the control architecture, the general conditions on the shop floor, and are optimally equipped with the products referenced above to carry out their work efficiently and in line with the needs of the situation.
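The simplest of the scenarios above, file-based local collection, can be sketched in a few lines of plain Python. The file name, signal names, and values are hypothetical, and in a TwinCAT system products such as the Analytics Logger or Database Server would take over this role; the sketch only shows the shape of the task:

```python
import csv
import time
from pathlib import Path

LOG_FILE = Path("machine_data.csv")  # hypothetical local log file

def log_sample(timestamp, temperature, spindle_speed):
    """Append one sample of (made-up) machine data to a CSV file."""
    new_file = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="") as f:
        writer = csv.writer(f)
        if new_file:  # write a header row the first time
            writer.writerow(["timestamp", "temperature", "spindle_speed"])
        writer.writerow([timestamp, temperature, spindle_speed])

# Simulated acquisition loop with invented values:
LOG_FILE.unlink(missing_ok=True)
for i in range(3):
    log_sample(time.time(), 40.0 + i * 0.5, 12000 + i * 10)
```

Whatever the storage backend, the result is the same: a growing, well-structured data set that the training step below can consume.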

Model training

Models are trained based on the supplied data in frameworks such as PyTorch, TensorFlow, SciKit-Learn®, etc. These frameworks, which are established in the data science community, are generally open-source and can therefore be used free of charge. Maximum flexibility is therefore assured and no limits are set in the case of an interdisciplinary project between automation engineers and data scientists – neither within the company nor across company boundaries. Each team member can work in their familiar environment, for example with TwinCAT for the automation specialist and TensorFlow for the data scientist.

Are you in need of a team member for such interdisciplinary projects? We can help. Simply get in touch with your local Beckhoff sales staff or use the contact form. Beckhoff offers support as usual and is also happy to actively help you with the successful completion of your data science project. Should you be interested in the services of a data scientist, we can search for suitable companies for you in our network.

Real-time-capable ML model inference capabilities as a standard module in TwinCAT 3: no separate hardware is required and the functionality is implemented solely in software on the same platform as the remaining control application.

Deployment

The trained ML model can simply be exported in the standardized Open Neural Network Exchange format (ONNX) from any AI framework and handed over to the TwinCAT programmer. This ONNX file contains all information about how input data of the model is linked to output data. The operations described in it can be complex or very simple, computationally intensive or fast to compute.

Depending on the requirements of the application to be implemented and also of the ML model, three TwinCAT products are available to the TwinCAT programmer for inference, i.e. execution of the model with TwinCAT.

With the TF3800 TwinCAT 3 Machine Learning Inference Engine and the TF3810 TwinCAT 3 Neural Network Inference Engine, two products exist for inference in hard real-time, i.e. directly within the TwinCAT runtime. The real-time inference accordingly takes place on the CPU of the IPC. The products can load and execute selected models optimized for real-time use. They are ideally suited for latency-critical applications, such as the use of ML models in closed-loop or open-loop process control; see, e.g., TwinCAT 3 Cogging Compensation.

As a further product for the inference of ML models, the TF3820 TwinCAT 3 Machine Learning Server is available. This component can also make hardware accelerators, such as GPUs, usable for inference. For this purpose, the inference is outsourced to a separate process. The interface is fully integrated into the PLC, which means that the call is made from the PLC and the result is returned to the PLC. The choice of possible AI models is almost unlimited, offering maximum flexibility with regard to both the model selection and the executing hardware. The possibility of hardware acceleration makes this inference engine an ideal product for image processing with deep learning networks, including visual inspection of surfaces, visual inspection for completeness, and locating objects or defects. In addition, the product offers maximum degrees of freedom in the modeling of an ML model, so that even highly specialized models, e.g. for predicting process disturbances or product quality properties, can be executed seamlessly from TwinCAT.
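The call pattern of such a server component, where the control logic hands an input to a separately running inference worker and receives the result back, can be mimicked structurally in plain Python. This is only a sketch: a thread with queues stands in for the separate server process, the actual TF3820 interface is a PLC library, and the "model" here is a made-up placeholder that merely scales its input:

```python
import queue
import threading

requests = queue.Queue()
results = queue.Queue()

def inference_worker():
    """Worker standing in for the separate inference process of the server.

    The 'model' is a placeholder (it doubles each value); in a real setup
    this side could execute an arbitrary model, e.g. on a GPU.
    """
    while True:
        request = requests.get()
        if request is None:              # shutdown signal
            break
        results.put([2.0 * x for x in request])

worker = threading.Thread(target=inference_worker)
worker.start()

# "PLC" side: hand over the input data and wait for the returned result.
requests.put([1.0, 2.0, 3.0])
result = results.get()

requests.put(None)                       # shut the worker down
worker.join()
```

The essential point the sketch preserves: the caller's code stays simple and synchronous, while the executing side is free to use whatever model and hardware it likes.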

Model maintenance

ML models have the property of improving when trained with larger sets of data. Likewise, general conditions can change gradually or suddenly while the machine is being operated. To take account of this, you can update your trained ML models during the runtime of the machine: that is, without stopping the machine, without recompilation, and completely remotely via the standard IT infrastructure. In addition, you can also operate your training environment remotely or locally on the IPC in the operating system context and thus retrain models close to the process, exchange them, and load them with TwinCAT.
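The exchange pattern described above, where retrained parameters are picked up without restarting the consuming application, can be sketched in plain Python. The file name and the trivial linear "model" are hypothetical; in TwinCAT the inference engines perform the model exchange themselves:

```python
import json
from pathlib import Path

MODEL_FILE = Path("model_params.json")  # hypothetical parameter file

def load_params():
    """(Re)load model parameters from disk, e.g. after a retraining step."""
    return json.loads(MODEL_FILE.read_text())

def predict(params, x):
    """Placeholder model: a simple linear map using the current parameters."""
    return params["w"] * x + params["b"]

# An initial model is deployed...
MODEL_FILE.write_text(json.dumps({"w": 1.0, "b": 0.0}))
params = load_params()
before = predict(params, 3.0)

# ...later, a retrained model is written to the same location and
# simply reloaded, without stopping or recompiling the application:
MODEL_FILE.write_text(json.dumps({"w": 2.0, "b": 0.5}))
params = load_params()
after = predict(params, 3.0)
```

The predictions before and after the swap differ, while the calling code is untouched; that separation of model artifact and application logic is what makes runtime updates practical.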

Products

TF3800 | TwinCAT 3 Machine Learning Inference Engine

TF3800

Beckhoff offers a machine learning (ML) solution that is seamlessly integrated into TwinCAT 3. Building on established standards, it brings to ML applications the advantages of system openness familiar from PC-based control. In addition, the TwinCAT solution supports the execution of the machine learning models in real time. Its capabilities provide machine builders with an optimum foundation for enhancing machine performance.

TF3810 | TwinCAT 3 Neural Network Inference Engine

TF3810

Beckhoff offers a machine learning (ML) solution that is seamlessly integrated into TwinCAT 3. Building on established standards, it brings to ML applications the advantages of system openness familiar from PC-based control. In addition, the TwinCAT solution supports the execution of the machine learning models in real time. Its capabilities provide machine builders with an optimum foundation for enhancing machine performance.

TF3820 | TwinCAT 3 Machine Learning Server

TF3820

Beckhoff offers a solution for machine learning (ML) and deep learning (DL) that is seamlessly integrated in TwinCAT 3. The TF3820 TwinCAT 3 Machine Learning Server is a high-performance execution module (inference engine) for trained ML and DL models.

TE5920 | TwinCAT 3 Cogging Compensation for linear motors

TE5920

In applications with linear motors from the AL8000 series, the TE5920 software is used to reduce cogging. Cogging forces are created by the magnetic attraction between the iron core in the primary section and the permanent magnets in the secondary section. This physical effect leads to an unwanted and uneven "locking-in" of the motor, so that applications with extremely high demands on accuracy and synchronism, such as high-precision milling machines or digital printers, can only be implemented to a limited extent. TE5920 compensates for the cogging forces; in addition to the magnetic attraction, cogging forces from the mechanical construction or energy chains can also be taken into account. This extends the range of applications of AL8000 iron-core linear motors and provides an alternative to ironless linear motors.

C6675 | Control cabinet Industrial PC

C6675

The C6675 is a perfect symbiosis of the properties of a C6670 control cabinet industrial server and a C6650 control cabinet Industrial PC. Together with a Beckhoff Control Panel, this produces an ideal combination for a powerful control platform in machine and system engineering with the TwinCAT automation software. The C6675 is equipped with components of the highest performance class, e.g. an Intel® Celeron®, Pentium® or Core™ i3/i5/i7 processor of the latest generation on a Beckhoff ATX motherboard. The housing and cooling concept adopted from the C6670 also enables the use of a GPU accelerator card, among other things. A total of 300 W is available for plug-in cards. Applications in the field of machine learning or vision can thus be realized in an industrial environment.