
Oct 30, 2023

Fast and efficient PLC code generation and more with artificial intelligence

AI-assisted engineering with TwinCAT Chat

With TwinCAT Chat, Large Language Models (LLMs) such as ChatGPT from OpenAI can be conveniently used in the TwinCAT XAE engineering environment for the development of a project. In this interview, Dr. Fabian Bause and Jannis Doppmeier from TwinCAT Product Management describe the most important application considerations and possible efficiency potential from control programming to enterprise management.

Since the introduction of ChatGPT, everyone has been talking about Large Language Models (LLMs). Beckhoff was one of the first suppliers to present an application in the automation sector with TwinCAT Chat at Hannover Messe 2023. What was the feedback from customers at and after the trade show?

Jannis Doppmeier: The feedback from customers was consistently positive. Both management representatives and direct users expressed a high level of interest. A large proportion of customers saw significant potential in this technology for the automation sector. Some even expressed concrete interest in testing a beta version in the future, as soon as it is available. This indicates that there is growing demand for advanced solutions in this segment. With the introduction of TwinCAT Chat, Beckhoff has made an important contribution to the integration of LLMs in industrial applications.

Dr. Fabian Bause, TwinCAT Product Manager, Beckhoff Automation

What fundamental advantages can LLMs offer for the automation engineer on the one hand and for enterprise management on the other?

Jannis Doppmeier: Large Language Models (LLMs) offer a number of benefits for both automation engineers and enterprise management. For automation engineers, LLMs have the potential to revolutionize the development process by automatically generating and completing code. This speeds up the entire process. In addition, you can even have LLMs create personal tutorials and ask specifically for solutions to problems that arise, which speeds up the process of finding solutions. Another advantage is the ability to consistently implement and comply with guidelines and best practices in automation. From an enterprise management perspective, LLMs promote knowledge transfer within the company. They can act as a central knowledge base, storing valuable information and making it available when needed. In addition, LLMs can relieve the pressure on the support team by serving as the first point of contact for customer inquiries. This not only improves response times, but also potentially increases customer satisfaction. Overall, LLMs offer an efficient and innovative solution to numerous challenges in the modern business world.

Are there still technical uncertainties when it comes to using LLMs?

Dr. Fabian Bause: Yes, definitely. There are numerous technical uncertainties, but that is not surprising considering the speed of development at the moment. A key challenge for the automation industry at the moment is the “fantasizing” of LLMs. What is meant by this is that an LLM will also repeatedly generate “made-up” answers that are not necessarily recognizable as such by the user. In the early development phase, for example, we found some motion functions in PLC code generated by TwinCAT Chat that do not exist at all – at least not in TwinCAT. But these are issues that can be addressed and will improve significantly over time.
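One way such issues "can be addressed", as mentioned above, is to validate generated code against the libraries that actually exist. The following Python sketch is purely illustrative and not the actual TwinCAT Chat mechanism: it scans generated Structured Text for motion function block names and flags any that are not in a whitelist. The whitelist is a tiny subset of real Tc2_MC2 block names; `MC_RotateForever` is an invented example of a hallucinated block.

```python
# Illustrative mitigation for "made-up" function blocks in generated code:
# compare MC_* identifiers against a whitelist of blocks known to exist.
# A simple regex scan stands in for a real parser; this is a sketch, not
# the actual TwinCAT Chat implementation.
import re

# Small subset of real Tc2_MC2 motion function blocks (illustrative only).
KNOWN_BLOCKS = {"MC_Power", "MC_MoveAbsolute", "MC_MoveRelative", "MC_Reset", "MC_Halt"}

def unknown_motion_blocks(st_code: str) -> set:
    """Return MC_* identifiers used in the code that are not in the known set."""
    used = set(re.findall(r"\bMC_\w+", st_code))
    return used - KNOWN_BLOCKS

# "MC_RotateForever" is a deliberately invented (hallucinated) block name.
generated = "fbMove : MC_MoveAbsolute; fbSpin : MC_RotateForever;"
print(unknown_motion_blocks(generated))  # flags the nonexistent block
```

A check like this cannot prove the generated logic is correct, but it catches the specific failure mode described in the interview: calls to motion functions that do not exist in TwinCAT.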

And are there uncertainties from a legal point of view too?

Dr. Fabian Bause: Absolutely. The European Union’s AI Act is currently a source of uncertainty. It has not been finally adopted yet, and for that reason alone there is a great deal of uncertainty in the industry. A key challenge for policymakers in regulating AI applications is that political processes are much slower than the rapid pace of advancement in the field of generative AI. It will be interesting to see how a generic regulation will apply to the many AI developments that are still unknown. But there is no doubt that certain regulatory measures are needed.

Will AI applications like TwinCAT Chat be able to replace control programmers with all their creativity in the future?

Dr. Fabian Bause: No, certainly not. It is not our goal to completely replace programmers, nor do the current technical developments imply that this will be the case. Instead, the goal is to provide programmers with better and better tools so that they can work effectively. It’s all about increasing a programmer’s productivity – not least as one of the key ways to combat the skills shortage. If vacancies cannot be filled because there are simply no qualified specialists to be found, AI must be used to ensure continued competitiveness.

Jannis Doppmeier, TwinCAT Product Manager, Beckhoff Automation

What are the technical features of TwinCAT Chat?

Jannis Doppmeier: TwinCAT Chat was developed to offer users a clear advantage over the conventional use of, for example, ChatGPT in the web browser. The key added value lies in its deep integration, especially with regard to the specialized requirements of the automation industry. The core features include the direct integration of the chat function into the development environment (IDE). This greatly simplifies the development process, as communication and code exchange are seamlessly integrated. Furthermore, the basic initialization of our model has been tailored specifically to TwinCAT requests. This way you can ask your specific questions directly and don’t have to tell the model that you are using TwinCAT and that you expect code examples in Structured Text. Another highlight is the ability to easily adopt generated code. This not only saves developers time, but also reduces the human error that can occur during manual transfers. Interaction with TwinCAT Chat has been designed in such a way that the need to type commands is reduced to a minimum. Instead, the user can simply click on pre-tested requests that are specifically designed to improve their workflow. These requests include actions such as:

  • Optimize: The system can make suggestions to increase the performance or improve the efficiency of the code.
  • Document: TwinCAT Chat helps to create comments and documentation so that the code is easier for other team members to understand.
  • Complete: If code fragments are missing or incomplete, our system can generate suggestions to complete them to ensure functionality.
  • Refactor: TwinCAT Chat can restructure code according to defined rules and policies so that it is more in line with company guidelines.

Overall, this system provides an efficient and intuitive user interface that greatly facilitates the development process.
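The tailored model initialization and the pre-tested actions described above can be sketched roughly as follows. This Python sketch is an assumption modeled on common chat-completion message formats; the prompt texts, action names, and data structures are illustrative, not the actual TwinCAT Chat internals.

```python
# Sketch: how an IDE-integrated chat tool might prepend a TwinCAT-specific
# system prompt, so the user does not have to state the context or the
# target language (Structured Text) in every request. All names and prompt
# wording here are assumptions, not the real TwinCAT Chat implementation.

SYSTEM_PROMPT = (
    "You are an assistant for TwinCAT 3 PLC development. "
    "Answer with IEC 61131-3 Structured Text code examples."
)

def build_request(history, user_message):
    """Assemble the message list for a chat-completion call."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    messages.extend(history)
    messages.append({"role": "user", "content": user_message})
    return messages

# The clickable actions are essentially pre-tested prompt templates
# wrapped around the currently selected code:
ACTIONS = {
    "Optimize": "Improve the performance of this code:\n{code}",
    "Document": "Add comments and documentation to this code:\n{code}",
    "Complete": "Complete the missing parts of this code:\n{code}",
    "Refactor": "Refactor this code to match our coding guidelines:\n{code}",
}

def action_request(action, code):
    return build_request([], ACTIONS[action].format(code=code))

msgs = action_request("Document", "nCounter := nCounter + 1;")
print(msgs[0]["role"], "->", msgs[-1]["content"][:40])
```

The design point is that the fixed system prompt and the prompt templates carry the TwinCAT-specific context once, centrally, instead of relying on each user to phrase it correctly.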

In addition to the current focus on supporting PLC code generation, which other areas will gain in importance in the future?

Dr. Fabian Bause: The beauty of LLMs is that, with a little imagination, they can be used universally. In addition to PLC code generation, we are also working on a chatbot that automatically creates a TwinCAT HMI project. The goal is that a user will only have to formulate how they want their HMI to be structured and TwinCAT will generate the entire HMI project in the background. The customer will therefore receive immediate feedback in the form of the visualized HMI. This is made possible by explaining the programming interface for the HMI to the LLM – because in fact it is also just another “language” that can be easily mastered by the LLM.

Another project involves a chatbot interface to our documentation system, which contains many gigabytes of knowledge in the form of documentation. And that is precisely the challenge for our customers: We provide a huge amount of knowledge in text form. And why? Because it is the only way to make information available to hundreds of people at the same time – in other words, written text is simply a tool. The natural way for humans to share information is through language. One person asks, the other understands – or rather interprets – the question and generates an answer based on their experience. That is what we can achieve with LLMs – we ask a question and an LLM is able to interpret that question. No specific keywords need to be used, as the system can cope with questions that may be poorly worded. If the LLM is also granted access to the large Beckhoff library, the model can generate targeted responses. So in the future, we will be able to ask specific questions rather than having to search for answers via keywords.
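The documentation chatbot described here is, in essence, retrieval-augmented generation: find the passages most relevant to a free-form question and hand them to the LLM as context. The following toy sketch illustrates the idea; the word-overlap score stands in for a real embedding search, and the document titles and texts are invented examples, not Beckhoff documentation.

```python
# Toy sketch of retrieval-augmented generation over a documentation set:
# score each passage against the question, keep the best matches, and
# build a prompt that grounds the LLM's answer in those passages.
# Word overlap is a stand-in for real embedding-based similarity.
import re

def score(question, passage):
    """Toy relevance score: number of shared words."""
    q = set(re.findall(r"\w+", question.lower()))
    p = set(re.findall(r"\w+", passage.lower()))
    return len(q & p)

def retrieve(question, passages, k=2):
    """Return the k passages that best match the question."""
    return sorted(passages, key=lambda p: score(question, p["text"]), reverse=True)[:k]

# Invented example passages (not real documentation entries):
docs = [
    {"title": "EtherCAT diagnostics", "text": "How to read EtherCAT diagnostic counters and slave states"},
    {"title": "Motion basics", "text": "Axis configuration and motion function blocks in TwinCAT"},
    {"title": "HMI setup", "text": "Creating a TwinCAT HMI project and binding symbols"},
]

question = "How do I configure an axis for motion?"
context = retrieve(question, docs, k=1)

# The retrieved text would then be placed into the LLM prompt:
prompt = f"Answer using this documentation:\n{context[0]['text']}\n\nQuestion: {question}"
print(context[0]["title"])
```

Because retrieval matches meaning-bearing words (and, in a real system, semantic embeddings) rather than exact keywords, a poorly worded question can still land on the right passage – the property the interview highlights.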

TwinCAT Chat also opens up new ways of working for the user. What does this mean exactly and what are the advantages in practice?

Jannis Doppmeier: Our tool is an innovative solution which increases developer productivity significantly by acting as a digital assistant. Code no longer needs to be manually created line by line. This assistant performs routine tasks that are often time-consuming and repetitive. This gives developers more time and capacity to focus on their core tasks – the actual design and conception of the software. In a market where every advantage counts, our tool offers companies the opportunity to remain competitive despite staff shortages and to meet the increasing demands of their customers.

What is the significance of the language model used?

Dr. Fabian Bause: Currently, several language models from the well-known IT giants are competing with each other, such as ChatGPT from OpenAI, PaLM or Bard from Google, or ERNIE from Baidu. What the major models have in common is that they are all offered as cloud services via an API. Apart from the technical differences, there are regional challenges too. For example, ChatGPT and Google’s LLMs are not accessible from China. This poses a challenge for Beckhoff because the Chinese market plays a central role for us. Furthermore, the integration of a third party’s cloud service into our products entails a strong dependency on this provider: How will the service evolve from a technical perspective? How stable and backward compatible will the developments be? How might the usage costs and privacy policies for the service change in the future? Because of this uncertainty, we are working hard on training our own models – not from scratch, of course, but based on commercially available open LLMs. In this way, we are focusing on a clearly defined, much smaller scope of application rather than competing with general models like ChatGPT.
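The provider-dependency concern raised here is usually addressed in software by coding against a small backend interface, so that a cloud service can later be swapped for a self-hosted open model without rippling through the application. The sketch below illustrates that pattern only; the class and method names are hypothetical and do not reflect any real Beckhoff API.

```python
# Sketch of decoupling an application from a specific LLM provider:
# application code depends only on a small interface, so a cloud backend
# can be exchanged for a self-hosted fine-tuned open model later.
# All names here are illustrative assumptions.
from abc import ABC, abstractmethod

class ChatBackend(ABC):
    """Common interface; concrete backends would wrap a cloud API or a local model."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class LocalStubBackend(ChatBackend):
    """Stand-in for a self-hosted open model (no network call in this sketch)."""
    def complete(self, prompt: str) -> str:
        return f"[local model] reply to: {prompt}"

def ask(backend: ChatBackend, prompt: str) -> str:
    # Only the interface is visible here, so changes in provider, pricing,
    # privacy policy, or regional availability stay behind the backend.
    return backend.complete(prompt)

print(ask(LocalStubBackend(), "Generate a timer in Structured Text"))
```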
