Increasing Risk and Legal Regulation
As technology advances, the risks associated with digital threats grow, demanding stronger security measures. At the same time, artificial intelligence is becoming more central to corporate IT systems: it offers efficiency gains, but it also exposes businesses to new vulnerabilities that cybercriminals may exploit, creating both operational and legal challenges.
The field of cybersecurity is increasingly subject to legal regulation, through privacy rules, sector-specific measures, and cross-sectoral regulations. New and stricter laws and regulations are being adopted to address the growing threats, and awareness of these issues is rising in value chains and contracts. The more that critical systems depend on secure digital infrastructure, the greater the consequences of a security breach. Requirements for cybersecurity are therefore rising, both as a matter of industry practice and as a matter of regulatory demand.
In Norway, the Digital Security Act has already been adopted and will enter into force in 2024/2025. It imposes strict security requirements on companies in certain sectors, and more companies will be brought within scope as the NIS 2 Directive is implemented. These rules are part of a broader European shift in which privacy legislation such as the GDPR also plays a critical role.
The new regulations require companies not only to be proactive in their approach to cybersecurity, but also to document and continuously update their security measures. This includes everything from risk assessments and security updates to internal control systems that detail the security measures in place. Even companies not directly covered by such regulations may be affected—for example, suppliers to regulated companies may indirectly be subject to similar requirements for their deliveries.
AI systems offer significant benefits for Norwegian businesses, but also introduce new vulnerabilities that require thorough assessment and appropriate management. In this article, we explore how AI can create vulnerabilities in IT systems and which measures can be implemented to mitigate these risks, in light of increasing legal pressure and strict cybersecurity requirements.
The upcoming EU AI Regulation (also known as the AI Act) will eventually be implemented in Norway, and may also set requirements for Norwegian companies’ use of AI in certain situations. However, the AI Act is not the focus of this article.
How AI Systems Can Create Vulnerabilities in IT Systems
Artificial intelligence (AI) is now being integrated as a core part of many companies’ IT systems. AI is used to automate complex tasks, optimise performance, and support decision-making. Increased reliance on AI also introduces new vulnerabilities that can compromise these systems. Vulnerabilities may arise from the inherent complexity of AI algorithms, the data they process and are trained on, and the way AI is integrated with existing IT systems. Below, we discuss some areas where AI itself can create vulnerabilities in a company’s IT infrastructure.
Data Manipulation and Poisoning Attacks
AI systems, especially those powered by machine learning (ML), depend on large datasets for training and continuous improvement. If these datasets are compromised, significant vulnerabilities can follow. An attacker can manipulate the training data, feeding the system incorrect or misleading input that in turn causes it to produce incorrect outputs. This is known as a data poisoning attack. For example, in a cybersecurity context, an AI system designed to detect intrusions could be trained on falsified data, causing it to misinterpret or ignore certain types of threats.
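To make the mechanism concrete, the sketch below (a hypothetical illustration using scikit-learn on synthetic data, not a real intrusion-detection system) shows how flipping a share of the training labels degrades a simple classifier:

```python
# Illustrative label-flipping poisoning attack on a toy classifier.
# Dataset, model, and the 30% flip rate are all hypothetical choices.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for network traffic: 1 = intrusion, 0 = benign.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_with(labels):
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return model.score(X_test, y_test)

print("clean training data:   ", accuracy_with(y_train))

# The attacker flips 30% of the training labels, so some intrusions
# are learned as "benign" traffic.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
flip = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[flip] = 1 - poisoned[flip]

print("poisoned training data:", accuracy_with(poisoned))
```

In practice the damage is rarely this visible: poisoning is often targeted, leaving overall accuracy intact while specific threats slip through.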
Model Exploitation and Inference Attacks
AI models can be reverse-engineered or exploited by attackers to extract sensitive information. A common attack in this area is a model inversion attack, where an attacker uses the outputs from an AI system to reconstruct the input data. In this way, confidential information such as personal data, trade secrets, or other sensitive details can be reconstructed.
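As a simplified illustration of the idea (not a realistic attack), the sketch below runs gradient ascent against a logistic-regression model trained on scikit-learn's digits dataset and reconstructs a recognisable prototype of one digit class; against a true black-box API, similar gradients can be approximated from repeated confidence queries:

```python
# Simplified model-inversion sketch: recover a representative input for
# a target class using the model's learned parameters. Dataset, step
# size, and iteration count are illustrative choices.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

digits = load_digits()
model = LogisticRegression(max_iter=5000).fit(digits.data, digits.target)

target_class = 3
x = np.zeros(64)  # the attacker starts from a blank 8x8 "image"

# Gradient ascent on the class score. For this linear model the
# gradient is simply the class weight vector; against a black-box API
# it could be estimated with finite-difference confidence queries.
for _ in range(100):
    x += 0.5 * model.coef_[target_class]
    x = np.clip(x, 0, 16)  # stay within the valid pixel range

print(x.reshape(8, 8).round(1))  # a rough prototype of the digit "3"
```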
Adversarial Attacks
One of the more technical challenges is the vulnerability of AI systems to so-called adversarial attacks. These are attacks that cause the AI model to make mistakes by providing it with modified input. For example, an image recognition system can be tricked into misclassifying an object if an attacker introduces small, subtle changes to the image—changes that may not be visible to humans, but can confuse the AI model. This weakness can be exploited to bypass security systems that rely on AI-based facial recognition or other forms of automated classification.
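The sketch below shows the core idea in its simplest form: a fast-gradient-style perturbation against a linear classifier, where the model, dataset, and perturbation budget are all illustrative choices:

```python
# FGSM-style adversarial example against a linear "is this a 7?"
# classifier. Every pixel moves by at most eps, yet the prediction
# can flip. All parameter choices are illustrative.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

digits = load_digits()
X, y = digits.data / 16.0, (digits.target == 7).astype(int)
model = LogisticRegression(max_iter=1000).fit(X, y)

x = X[y == 1][0]      # a genuine "7" from the dataset
w = model.coef_[0]

# Step each pixel slightly in the direction that lowers the score for
# the true class; for a linear model that direction is -sign(w).
eps = 0.15
x_adv = np.clip(x - eps * np.sign(w), 0.0, 1.0)

print("original prediction:   ", model.predict([x])[0])      # 1
print("adversarial prediction:", model.predict([x_adv])[0])  # often 0
print("largest pixel change:  ", np.abs(x_adv - x).max())    # <= eps
```

Defences such as adversarial training exist, but they add cost and rarely remove the weakness entirely.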
Bias and Unintentional Discrimination
AI systems can also create vulnerabilities through unintentional discrimination. If an AI model is trained on data with built-in bias, its outputs may reflect and even reinforce that bias, which can lead a company to make discriminatory decisions. This creates not only legal and ethical exposure; it can also leave security systems with blind spots that attackers can exploit to compromise the company's infrastructure.
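A natural first countermeasure is simply to measure outcomes across groups. The sketch below, using entirely synthetic data and a hypothetical protected attribute, computes group-wise approval rates and a disparate-impact ratio, a common red-flag indicator:

```python
# Minimal fairness check on model outputs: compare positive-decision
# rates across a protected attribute. Data and rates are synthetic.
import numpy as np

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=10_000)  # hypothetical attribute

# Toy "model decisions" with built-in bias against group 1.
approved = rng.random(10_000) < np.where(group == 1, 0.35, 0.60)

rates = [approved[group == g].mean() for g in (0, 1)]
print(f"group 0 approval rate: {rates[0]:.0%}")
print(f"group 1 approval rate: {rates[1]:.0%}")

# The "four-fifths" rule of thumb: ratios well below 0.8 warrant review.
print(f"disparate-impact ratio: {rates[1] / rates[0]:.2f}")
```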
Black-Box Systems and Lack of Transparency
Many AI systems, especially deep learning models, function as "black boxes": systems whose inner workings are not fully understood. This often means that developers cannot easily interpret how the system arrives at a given decision. Such a lack of transparency makes it harder to identify and correct vulnerabilities, and it can also make it difficult to trace whether a system has been manipulated and, if so, how.
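Full transparency is often unrealistic, but a black box can still be audited from the outside. One standard technique is permutation importance, sketched below on a synthetic model (all parameter choices are illustrative): shuffling one input feature at a time and measuring the resulting accuracy drop shows what the model actually relies on, and a sudden shift in that profile can be a sign of manipulation:

```python
# Probing a black-box model with permutation importance: shuffle each
# feature and measure how much held-out accuracy degrades. Model and
# data are synthetic placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

black_box = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(
    black_box, X_test, y_test, n_repeats=10, random_state=0
)

for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: {importance:+.3f}")
```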
Challenges Integrating with Existing IT Systems
AI systems often need to be integrated with existing IT infrastructures, such as databases, cloud platforms, and user interfaces. These integration points can introduce vulnerabilities if they are not properly secured. For example, poorly configured interfaces (APIs) or insufficiently protected communication channels can become entry points for attackers seeking access to a company's IT systems. In addition, AI systems require regular updates and security patches; if these are not applied correctly, the systems remain exposed to known threats.
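Basic hygiene at these integration points goes a long way. The sketch below, in which the endpoint, token variable, and payload shape are hypothetical placeholders, shows a client call to an internal AI service with authentication, enforced TLS validation, and a timeout:

```python
# Defensive client call to an internal AI service. Endpoint, token
# variable, and payload shape are hypothetical examples.
import os
import requests

API_URL = "https://ai.internal.example.com/v1/predict"  # placeholder

def call_model(features: list[float]) -> dict:
    response = requests.post(
        API_URL,
        json={"features": features},
        # Secret read from the environment, never hard-coded.
        headers={"Authorization": f"Bearer {os.environ['AI_API_TOKEN']}"},
        timeout=5,    # fail fast rather than hang on an outage or attack
        verify=True,  # TLS certificate validation (the default, but too
                      # often disabled "temporarily" and never re-enabled)
    )
    response.raise_for_status()  # surface 4xx/5xx errors immediately
    return response.json()
```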
Take Action Now!
In the coming period, stricter security requirements will be introduced for a number of Norwegian companies, both through regulation and through the demands of the businesses they work with.
The Digital Security Act has already been adopted and will enter into force in Norway during 2024/2025. This law imposes regulatory security requirements on companies in a range of critical sectors, with a sanctions system that entails significant exposure in the event of breaches. Over time, more companies will be covered by the NIS 2 regulations, either because they themselves are subject to the rules or because they provide services to such companies.
In our view, all companies should map their own systems, identify vulnerabilities, and, through risk assessments, identify measures that can help increase security, regardless of whether they are directly subject to the regulations.
It is also important to recognise that while AI can optimise and automate many aspects of business operations, the security aspects of these systems must be handled with great care. Companies must therefore be proactive and continuously update their risk assessments and security systems to protect themselves against these threats. This should also be reflected in the company’s policies and agreements related to the use of such systems.
At Hjort, we have extensive experience in developing such internal control systems and are happy to assist.