Assuring compliance with pending AI liability regulation

By Iosif Itkin, Chief Executive Officer of Exactpro

Europe’s laws on automobile liability are based on the principle that drivers must always be in control of their vehicles and responsible for their operation in traffic. However, advancements in artificial intelligence (AI) have created a roadblock – if a self-driving car crashes, who is liable? Efforts are ongoing to modernise and harmonise traffic liability rules for driverless vehicles, but the process currently lags behind the rapid pace of the technology.

There is a similar dynamic underway in financial services. In a step toward closing the gap between innovation and regulation, the European Commission (EC) is expected to publish proposals for regulating AI next year. The question of liability is tricky, given the growing complexity of supply chains for new technologies. An EC working group argued that “strict liability should lie with the person who is in control of the risk connected with the operation of emerging digital technologies and who benefits from their operation.” For many firms, this person is the chief technology officer (CTO) or chief risk officer (CRO).

Given that testing is a necessary part of risk assessment, CTOs and CROs will likely be driven to re-evaluate their approach as proposed AI liability laws progress.

Independent testing is the only way to ensure an AI-based system works as expected before the system goes live. Despite concerns about people being replaced by robots, advancing technology is increasing the volume of tasks that require analytical thinking and decision-making.

Of course, when humans take responsibility for decisions, they are liable for failures, in line with how the EC appears to be approaching a regulatory framework. As revealed in testing, software failures can often be traced to errors explicitly or implicitly introduced by humans, and for AI systems these issues can arise anywhere from training data selection and manual mark-up to algorithm choice and model fine-tuning. The necessary role of humans in AI requires additional effort from CTOs and CROs around risk analysis and prioritisation.

Testing is important for traditional technology, but crucial for AI-based systems, because they inherit and magnify any uncertainty introduced by human involvement in the software life cycle. Software designed to simulate human reasoning can be expected to demonstrate human unpredictability, creating a near-infinite space of input combinations and permutations of conditions.
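
To make the scale of that space concrete, consider a simple illustration (the figures below are hypothetical, not drawn from any particular system):

```python
# Illustrative only: even a modest input space explodes combinatorially.
# Assume a hypothetical trading gateway with 12 order attributes,
# each taking one of 8 values, exercised under 5 market conditions.
order_attributes = 12
values_per_attribute = 8
market_conditions = 5

total_cases = (values_per_attribute ** order_attributes) * market_conditions
print(f"{total_cases:,} distinct input/condition combinations")
# ~3.4e11 cases -- far more than any exhaustive test run could cover
```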

While no single testing process is capable of evaluating all inputs fed to the system under all possible functional and non-functional conditions, by deploying a variety of testing approaches – such as model-based testing, fuzz testing, and datamorphic and metamorphic approaches – CTOs can cover sufficient behaviour patterns to understand an AI application’s capabilities and associated risks.
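
As a simple illustration of the metamorphic approach, the sketch below checks a hypothetical risk-scoring function against a relation its outputs should satisfy, rather than against a known “correct” answer. The function, the relation and the tolerance are assumptions made for the example, not a description of any specific tool:

```python
# A minimal metamorphic-testing sketch (illustrative; the scoring model,
# relation and tolerance below are assumptions, not a vendor's tooling).
import random

def risk_score(order_sizes):
    """Hypothetical model under test: scores a batch of order sizes."""
    # Stand-in for an opaque AI component; the test does not rely on its internals.
    return sum(s ** 0.5 for s in order_sizes) / len(order_sizes)

def test_permutation_invariance(trials=100):
    """Metamorphic relation: reordering the same orders must not change the score."""
    for _ in range(trials):
        orders = [random.randint(1, 10_000) for _ in range(20)]
        shuffled = random.sample(orders, len(orders))
        assert abs(risk_score(orders) - risk_score(shuffled)) < 1e-9, \
            "metamorphic relation violated: score depends on order sequence"

if __name__ == "__main__":
    test_permutation_invariance()
    print("permutation-invariance relation held across all trials")
```

The value of such a relation is that it requires no oracle for the “right” score; it only asserts how outputs must relate to one another, which is often the only kind of check available for AI components.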

CTOs should also consider that AI systems are dependent on data, which is often unintentionally corrupted by human involvement. Independent assessment of the system before it goes live ensures that the data used in testing is free of the preconceptions and implicit assumptions that inevitably creep in during data collection and pre-processing, and that would otherwise carry over into model training and fine-tuning.
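
A minimal sketch of what such an independent check might look like, assuming hypothetical datasets and a simple tolerance threshold, is to compare label balance in the development team’s training data against an independently collected evaluation set:

```python
# A minimal sketch of an independent data check (assumed datasets and
# threshold; not a prescribed methodology). It compares label balance in
# the training data against an independently collected test set, flagging
# skews that preconceptions in data collection could introduce.
from collections import Counter

def label_distribution(labels):
    counts = Counter(labels)
    total = sum(counts.values())
    return {label: count / total for label, count in counts.items()}

def report_skew(train_labels, independent_labels, tolerance=0.10):
    train_dist = label_distribution(train_labels)
    indep_dist = label_distribution(independent_labels)
    for label in sorted(set(train_dist) | set(indep_dist)):
        gap = abs(train_dist.get(label, 0.0) - indep_dist.get(label, 0.0))
        flag = "SKEW" if gap > tolerance else "ok"
        print(f"{label:>12}: train={train_dist.get(label, 0.0):.2f} "
              f"independent={indep_dist.get(label, 0.0):.2f} [{flag}]")

# Hypothetical labels for accepted/rejected orders in the two datasets
report_skew(
    train_labels=["accepted"] * 90 + ["rejected"] * 10,
    independent_labels=["accepted"] * 70 + ["rejected"] * 30,
)
```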

As regulations governing AI liability move ahead, it is important that CTOs do not allow fear of liability to lead them to avoid activities, such as testing, that may uncover failures. Software failures are a valuable source of information about the system under test. Exactpro’s R&D team built a dataset of bug reports describing system failures and used it as the foundation to train an AI model capable of predicting the possible root causes of these failures, their domain and severity. Tools of this type are strategically important, and they suggest that all test execution data should be retained as a large dataset, opening up numerous possibilities for intelligent data analytics.
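
As an illustrative sketch only, and not a description of Exactpro’s actual pipeline, a baseline version of such a tool could be a text classifier trained on bug-report descriptions labelled with their eventual root-cause category:

```python
# Illustrative sketch: a simple text classifier over bug-report descriptions
# (assumed data, labels and model choice; not Exactpro's actual R&D pipeline).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical bug reports paired with their eventual root-cause category
reports = [
    "order rejected after venue failover, stale session state",
    "price feed gap caused model to emit out-of-range quotes",
    "timeout while reconciling fills across gateways",
    "mislabelled training sample led to wrong instrument classification",
]
root_causes = ["state-handling", "data-quality", "latency", "data-quality"]

# TF-IDF features plus a linear classifier: a simple, inspectable baseline
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(reports, root_causes)

new_report = ["quotes drifted out of range after a gap in the reference price feed"]
print(model.predict(new_report))  # with real data, expected to lean towards 'data-quality'
```

In practice the value comes less from any single prediction than from retaining every test execution record, so that failure patterns can be mined across releases.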

Whenever regulators focus on technology, there is a concern that the scrutiny will hinder innovation. While that is certainly a possibility for AI, the necessary assessments can also create the opportunity to discover vital information about the technology and its applications, enabling further innovation. For CTOs willing to embrace rigorous testing to ensure understanding and awareness of the risks presented by their firms’ AI-based systems, preparing for regulatory oversight can create a culture of investigating failures and then leveraging the associated data analytics to develop more robust, higher-quality software.
