
Trends in corporate AI use and in regulation
AI regulation affecting German companies is currently taking place primarily at European level. With the AI Regulation, a first legislative package came into force last year, and its first requirements have applied directly throughout Europe since the beginning of February 2025. The protection of fundamental rights is at the heart of the AI Regulation: the fundamental rights of all people living in the EU are to be particularly protected when they come into contact with AI. In the first regulatory stage, the EU legislator has therefore prioritized two areas: AI competence and prohibited AI practices.
The AI competence requirement is intended to ensure that AI is used only by companies whose employees are sufficiently familiar with the technology and have received adequate training. The ban on certain AI practices, in particular AI systems that deliberately manipulate or deceive people, is intended to ensure that particularly dangerous AI applications do not enter circulation within the EU in the first place.
Further regulatory stages of the AI Regulation will follow gradually over the next few months and into 2027. The legislator is thus creating an ever-tightening net of regulations for companies to observe.
The EU is aware of the criticism of heavy regulation. This is also shown by the recent failure of the AI Liability Directive, which has been withdrawn entirely as a legislative proposal for the time being. Nevertheless, companies in Europe, and therefore also in Germany, must implement the AI Regulation with its current and future obligations.
Incidentally, an efficient way to meet these obligations could be to use AI itself, for example to draft internal specifications or to pre-fill documentation. AI agents are likely to come increasingly to the fore here. They could complete multi-step processes without human intervention, for example independently opening a file, identifying the relevant data and transferring it to a draft email or spreadsheet.
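To make this concrete, the following is a minimal, rule-based sketch in Python of such a multi-step process: it opens a file, identifies the relevant entries and transfers them into a draft email for human review. It is an illustration, not a description of any real product; the file name, column names and email address are hypothetical placeholders, and a real AI agent would typically drive these steps with a language model rather than fixed rules.

```python
# Minimal sketch of an agent-style multi-step process (hypothetical
# file, columns and address; not a real product or API).
import csv
from email.message import EmailMessage
from pathlib import Path

INVENTORY = Path("ai_inventory.csv")  # hypothetical input file

# Setup only, so the sketch runs on its own: create sample data.
INVENTORY.write_text(
    "system,risk_class,documented\n"
    "chatbot,minimal,yes\n"
    "credit-scoring,high,no\n"
)

# Step 1: open the file and identify the relevant entries,
# here: high-risk systems whose documentation is still missing.
with INVENTORY.open(newline="") as f:
    gaps = [row for row in csv.DictReader(f)
            if row["risk_class"] == "high" and row["documented"] == "no"]

# Step 2: transfer the findings into a draft email.
msg = EmailMessage()
msg["To"] = "compliance@example.com"  # placeholder address
msg["Subject"] = "Documentation gaps in the AI inventory"
msg.set_content("Systems without documentation:\n"
                + "\n".join(row["system"] for row in gaps))

# Step 3: write the draft to disk for human review instead of sending it.
Path("draft_email.eml").write_text(msg.as_string())
print(f"{len(gaps)} gap(s) found; draft email written.")
```

Writing the draft to disk rather than sending it keeps a human in the loop, in keeping with the idea that such agent output should be reviewed before it leaves the company.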
As described above, companies already have to fulfill some obligations under the AI Regulation, and a recent lawsuit by a Dutch foundation for market information research shows that non-compliance can have consequences. The lawsuit was filed against TikTok and X on February 5, 2025, just three days after the AI Regulation’s rules on prohibited AI became applicable. Its subject matter is claims for damages on behalf of potentially affected users, which could amount to billions. According to the plaintiff, the claims are based on the personalization of recommendation systems using intimate personal information, particularly that of children and young people; the AI in dispute is, in the plaintiff’s view, manipulative, misleading and exploitative.
The outcome of these proceedings is completely open. However, the filing of the lawsuit shows one thing: the requirements of the AI Regulation should not be taken lightly, as they form the basis not only for investigations by the authorities but also for claims by private plaintiffs.
Many of our clients have already begun gradually implementing measures to comply with the AI Regulation in recent years. Clearly, companies need a transition phase to adapt to new regulations, as the introduction of GDPR compliance measures already demonstrated.
A steady, considered approach is recommended: it avoids acute pressure to act and hasty implementation. It also allows companies to manage the changes for employees more effectively, both in communicating new compliance obligations and in addressing employees’ concerns about dealing with new technologies.
Our clients are currently training employees, i.e. building up the required AI competence. They also examine AI under development at an early stage to determine whether it falls under a prohibition or, as high-risk AI, requires special risk management; the associated documentation requirements are strict. In addition to the AI Regulation, other rules on data protection, cybersecurity and the protection of business secrets must of course also be observed. The regulatory requirements are therefore diverse, and all of them must be implemented within the company.
These requirements arise primarily from the sheer number of regulations applying to companies and from their overlapping content. For example, the AI Regulation, the GDPR and the Data Act all contain information and documentation obligations that partly overlap but are partly difficult to reconcile. In some constellations, they nevertheless apply simultaneously.
Other requirements arise, for example, from product safety or product liability law. From a legal perspective, it is clear that corporate compliance demands timely and comprehensive adherence to all of them. In practice, however, it is not possible to create a new position and hire a new employee for every new regulation; that would hardly be economically feasible in any case. It usually comes down to management weighing the risks, making the best possible use of existing resources and implementing the requirements as quickly as possible.
To avoid being held liable for a risk decision in hindsight or, in the worst case, even being prosecuted under criminal law, those responsible can protect themselves only by making well-founded, documented decisions and by being able to prove that those decisions were justifiable at the time and beyond reproach. Here, the documentation serves as a kind of “insurance policy” safeguarding decisions on specific questions of corporate organization and product development.
I see another challenge, but also an opportunity, in the Data Act, which I would like to address briefly. The Data Act is intended to promote the data economy and to ensure that companies make the data generated by their products available to users. Under certain conditions, other companies must also be granted access to these “data treasures”. The first reaction is often: “How can we prevent that?” or “That’s a really bad idea!” Such reactions are understandable from the data holder’s point of view. However, I would also like to highlight the opportunity the Data Act presents: German companies should consider at an early stage which raw data they might need from other companies (in the EU) in order to open up new business areas and business models. If they succeed, they can position themselves as pioneers in new markets. In any case, I believe that a “Data Economy — Made in Germany” can strengthen Germany as a business location.
Dr. Siebert has been a German lawyer in Berlin since 1999. He studied in Kiel and Münster and at Emory University in Atlanta, Georgia (USA), and received his doctorate from the University of Constance in 1998. From 1999 to 2024 he practiced at the law firm Büsing, Müffelmann & Theye (BMT), from 2004 as a partner. Since January 2025, he has been Managing Partner at WIPIT Partnerschaft mbB Rechtsanwälte Steuerberater. Dr. Siebert specializes in intellectual property and technology law.
He primarily advises companies in the automotive, mechanical engineering and IT sectors. His practice focuses on research and development, in particular contract drafting and technical compliance. Since 2009, he has advised extensively on eDiscovery in international product liability and patent infringement cases. Data protection, AI regulation and the law of digitalization in general are becoming increasingly important in his advisory work.