Martin Ragan, co-founder of Slovak startup Cequence.io, types a query into a simple chatbot interface. In less than 10 seconds, the underlying artificial intelligence, or AI, sifts through thousands of contracts spanning thousands of pages and returns a detailed answer.
“Bullet points refer to a part of the contract that you can quickly navigate to and check the answer,” Ragan told DW. “Risk mitigation is a big part of it: ‘If I did this to supplier X would I have to pay a penalty?’ Now, you can make an educated business decision.”
Ragan, who says the chatbot is 80-90% accurate at implementation and gets better as it learns from more customer data, presented the company’s AI contract management software at the Basque Open Industry exhibition in Bilbao last month.
Mixed Reviews for AI Act
1,000 kilometers (620 miles) to the northeast, in Brussels, talks on the EU’s AI Act are in the final stages of the legislative process.
The bloc’s key legislation to regulate artificial intelligence, tabled by the European Parliament in June, has the overarching goal of putting “guardrails” around the technology’s development and use.
Ragan thinks the AI Act will have both positive and negative consequences.
“The opinions we’re hearing are mixed,” he said. “We work with highly sensitive data. If the rules give our customers more confidence – confidence that their data is safe, and that AI won’t just take it and send it somewhere it shouldn’t – that will have a positive effect on us too.”
It appears unlikely that EU member states and lawmakers will be able to reach an agreement by the December 6 deadline. In mid-November, negotiations on draft rules for so-called foundation models – the large deep-learning neural networks that power chatbots like Cequence.io’s – came to a sudden halt.
Then Germany, France and Italy, the three largest countries in the EU, spoke out against the tiered approach initially envisaged for foundation models. They warned that imposing strict restrictions on these new models would harm the EU’s own champions, such as German OpenAI competitor Aleph Alpha.
Instead, they suggested self-regulation of foundation models through company pledges and codes of conduct.
Unresolved legal questions for SMEs
However, networks such as Digital SME have warned that this proposed self-regulation would shift the responsibility for complying with the rules onto the companies developing and using AI further down the supply chain, particularly small and medium-sized enterprises (SMEs).
The resulting compliance costs could weigh heavily on SMEs, creating legal uncertainty and hindering AI adoption, the European Digital SME Alliance, which represents more than 45,000 enterprises in Europe, said in a statement.
Around 99% of all businesses in the EU are SMEs, employing around 100 million people and accounting for more than half of Europe’s GDP.
Margaret Rudzki, who works at the German Confederation of Crafts and Small Businesses (ZDH), believes addressing the liability issue is important to increase both trust in and adoption of AI.
Margaret Rudzki of ZDH says that establishing liability in case of accidents or damage is a key point in need of clarification. Image: ZDH
“There is huge potential for AI in our businesses, for example when it comes to predictive maintenance,” Rudzki told DW at the European Commission’s flagship conference on SMEs in Bilbao last month. “But it is paramount to determine who is actually responsible for damage in the supply chain, for example when it comes to damage caused by high-risk AI products or systems.”
She said: “Just imagine an AI-powered garage door – the algorithm malfunctions, and it hurts the neighbor’s kid. Who will be responsible? We need to resolve all these thorny legal questions.”
According to Eurostat, the EU statistics agency, only 7% of European SMEs used AI in 2021 – a far cry from the Commission’s 2030 target of 75%.
To regulate or not to regulate
In the debate over a tiered approach versus self-regulation for foundation models, Cequence.io co-founder Martin Ragan sides with Germany, France and Italy in calling for less regulation.
He believes that the reporting duties and other transparency obligations included in the AI Act could put EU companies at a competitive disadvantage.
“My main concern is that this will slow down competitiveness here in the EU compared with the US,” Ragan, who also worked in Silicon Valley for a year, told DW.
“While you wait to get a new model reviewed or approved by EU regulators, someone in the US can come up with the same idea or copy it and bring it to market much faster. That can get you in trouble. It could mean you lose your entire business.”
Different rules for different risk levels
The AI Act proposal distinguishes between four risk levels: from minimal or limited risk, which covers generative AI tools like ChatGPT or Google’s Bard, up to applications lawmakers consider high-risk, such as border control management.
At the top of the scale, technologies like facial recognition tools or social scoring are deemed to pose “unacceptable risks” and would be banned.
One purpose of the Act is to protect democratic processes such as elections from AI-generated deepfakes and other sources of disinformation. This is particularly relevant in light of next year’s EU-wide elections.
Yet if policymakers cannot resolve the sticking points and agree on a final set of principles and rules on December 6, the AI Act could be delayed until after the 2024 EU elections. For Europe’s SMEs, the postponement would mean continued legal uncertainty around artificial intelligence.
Edited by: Christy Pladson