SecurityWorldMarket

16/04/2024

Critical steps firms should consider before the new EU AI Act

Stamford, CT (USA)

European Union lawmakers reached a political agreement on the draft artificial intelligence (AI) Act in December 2023. Proposed by the European Commission in April 2021, the draft AI Act, the first binding worldwide horizontal regulation on AI, sets a common framework for the use and supply of AI systems in the EU. With final approval of the Act looming and hefty fines on the cards for those that do not comply, Nader Henein, VP Analyst at leading research company Gartner, suggests that most companies are not prepared to comply with these sweeping AI regulations. Gartner recommends that organisations put in place an AI governance programme to catalogue and categorise AI use cases and address any banned instances as soon as possible.

Nader Henein believes that businesses are not yet prepared to comply with these new regulations. "Many organisations think that because they are not building AI tools and services in-house, they are free and clear. What they don’t realise is that almost every organisation has exposure to the AI Act because they are not only responsible for the AI capabilities they build, but also those capabilities they already bought," he says. Here, Henein explains how the rules around the AI Act will come into force and recommends four key steps that companies should consider in order to organise their activities in relation to the Act.


Time scales

The rules around prohibited AI systems will become effective six months after the AI Act comes into force, and those rules will carry the highest fine tier at €35 million or 7% of global turnover, whichever is higher. Eighteen months later (at the two-year mark), the majority of the rules associated with high-risk AI systems come into force. Those will apply to many enterprise use cases, requiring a fair bit of due diligence and even more of the documentation outlined in this and subsequent guides.
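
As a rough illustration of how that top fine tier scales with company size, the short Python sketch below applies the higher-of-the-two rule; it is a hypothetical calculation, not legal guidance, and the function name is our own.

    # Illustrative only: the prohibited-practices tier is up to EUR 35 million
    # or 7% of global annual turnover, whichever is higher.
    def max_fine_eur(global_turnover_eur: float) -> float:
        return max(35_000_000, 0.07 * global_turnover_eur)

    # A firm with EUR 1 billion in global turnover faces exposure of up to EUR 70 million.
    print(f"{max_fine_eur(1_000_000_000):,.0f}")  # 70,000,000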

Preparation, cataloguing and categorisation

Gartner recommends that the first and most critical step is to discover and catalogue AI-enabled capabilities in enough detail for the subsequent risk assessment. Many organisations have hundreds of AI-driven capabilities deployed within the enterprise; some are purpose-built, but the majority are invisible, embedded across many of the platforms used on a day-to-day basis. Cataloguing requires organisations, providers and developers to undertake the discovery and listing of each AI-enabled system deployed across the enterprise. This will facilitate subsequent categorisation into one of the four risk tiers outlined in the Act: low-risk AI systems, high-risk AI systems, prohibited AI systems and general-purpose AI systems.
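
As a minimal sketch of what a single catalogue entry might capture, consider the Python structure below; the field names and tier labels are illustrative assumptions, not a schema prescribed by the Act.

    from dataclasses import dataclass
    from enum import Enum
    from typing import Optional

    class RiskTier(Enum):
        LOW = "low-risk"
        HIGH = "high-risk"
        PROHIBITED = "prohibited"
        GPAI = "general-purpose"

    @dataclass
    class AICatalogueEntry:
        name: str              # product, service or model name
        provider: str          # vendor name, or "internal" for in-house builds
        deployment_class: str  # "in-the-wild", "embedded", "in-house" or "hybrid"
        purpose: str           # what the capability is used for
        risk_tier: Optional[RiskTier] = None  # assigned during categorisation

    def prohibited_entries(catalogue):
        # Banned use cases carry the shortest compliance deadline, so surface them first.
        return [e for e in catalogue if e.risk_tier is RiskTier.PROHIBITED]

An entry would be created during discovery and its risk tier filled in during categorisation; filtering for prohibited entries then yields the remediation list with the shortest deadline.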

Nader Henein defines four main deployment classes and suggests how to deal with them:

1. AI In-the-wild

AI tools, generative or otherwise, available in the public domain, such as ChatGPT or Bing, that employees are using, formally and informally, for work-related purposes.

  • How to catalogue: This requires employee education and a series of surveys to quickly compile the list of systems in use. In parallel, the IT team may be able to identify additional AI tools in use by looking at the organisation’s web traffic analytics.
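
For the web-traffic side of that discovery, a first pass might simply flag requests to the domains of known public AI tools. The sketch below assumes a plain-text proxy log and a hand-maintained domain list; both are illustrative, not a prescribed method.

    # Hypothetical sketch: scan proxy or web analytics logs for public AI tools.
    KNOWN_AI_DOMAINS = {"chat.openai.com", "www.bing.com", "gemini.google.com"}

    def flag_ai_traffic(log_lines):
        # Return the known AI domains that appear in the given log lines.
        found = set()
        for line in log_lines:
            for domain in KNOWN_AI_DOMAINS:
                if domain in line:
                    found.add(domain)
        return found

    # Example with two made-up proxy log lines.
    logs = [
        "2024-04-16 10:01 user42 GET https://chat.openai.com/ 200",
        "2024-04-16 10:02 user17 GET https://intranet.example/ 200",
    ]
    print(flag_ai_traffic(logs))  # {'chat.openai.com'}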

2. Embedded AI

AI capabilities built into standard solutions and SaaS offerings used within the enterprise. Service providers have been complementing their offerings with AI capabilities for the better part of the past decade, many of which are completely invisible to the organisation, such as machine learning models powering spam or malware detection engines.

  • How to catalogue: Organisations will need to expand their third-party risk management programme and request detailed information from their providers, as embedded AI capabilities may not be obvious.
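
The detail to request can be standardised in a short questionnaire sent to each provider; the questions below are illustrative examples of our own, not a checklist mandated by the Act.

    # Illustrative vendor questions for surfacing embedded AI capabilities.
    VENDOR_AI_QUESTIONS = [
        "Does your product embed any AI or machine-learning models?",
        "What purpose does each model serve (e.g. spam or malware detection)?",
        "What data does each model process, and where is it processed?",
        "Can AI-driven features be disabled or configured by the customer?",
    ]

    def render_questionnaire(vendor):
        # Produce a plain-text questionnaire for one provider.
        lines = [f"AI disclosure questionnaire for {vendor}:"]
        lines += [f"  {i}. {q}" for i, q in enumerate(VENDOR_AI_QUESTIONS, 1)]
        return "\n".join(lines)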

3. AI In-house

AI capabilities trained, tested, and developed internally where the organisation has full visibility over the data, the technologies, and the subsequent tuning made to the models, as well as the purpose for which they are used.

  • How to catalogue: Organisations building and maintaining their own AI models will typically have data scientists and a governance programme to curate the data in scope, making the discovery process seamless and the data needed for the subsequent categorisation readily available.

4. Hybrid AI 

Enterprise AI capabilities that are built in-house using one or more off-the-shelf foundational models, generative or otherwise, complemented with enterprise data.

  • How to catalogue: As this combines external pre-trained models with internal data, the undertaking draws on both the vendor-management questions used for embedded AI and the internal metadata sourcing used for in-house AI to collect the information needed to categorise each use case.
