
Opinion: The European AI Act is great but AI doesn't stop at the border


This opinion piece was previously published in Trouw.

Strict European legislation is a step in the right direction but beware of the continent becoming dependent on others, writes Holger Seitz, Patent Attorney at EP&C Patent Attorneys.

New European AI law

The European AI Act came into force this month. It is an important step towards safely and ethically regulating Artificial Intelligence (AI). While Europe once again opts for strict rules, China and the United States are taking a different course. What is the significance of European rules in a world where potentially dangerous technologies know no boundaries?

Technological innovations develop in cycles. AI is the most recent wave and is comparable to the industrial revolution and the rise of the Internet. Those innovations offered huge opportunities but also brought significant risks with them. AI is no different, except that the potential consequences are much greater and the risks more complex.

There is a danger, for example, that countries, groups or individuals with malicious intentions will use AI to develop harmful technologies. They could use sophisticated computers to carry out cyber attacks on national security systems, for instance.

Devastating

These attacks could not only cripple national infrastructures but also escalate international tensions, with potentially devastating consequences. The current cyber confrontations between Russia and Ukraine show how important this type of warfare has become.

Despite the new European legislation, other major risks remain. AI is a global technology. The devices we use every day - ranging from smartphones to smart cars - generate huge amounts of data that are often processed outside Europe. It is hard for us to control what happens to this data there.

Moreover, with the introduction of the strict AI law, the European Union risks falling behind on innovation, because the rules may drive AI companies out of Europe. This could make Europe more dependent on technologies from elsewhere.

United Nations

An important lesson to be learned from history is that safety regulations are often implemented only after new technologies are introduced. The UK, for example, introduced the Factory Act in 1833 in response to the wretched working conditions, especially for children, created by rapid industrialisation.

A more topical example is the global environmental pollution we face today, caused in part by the large-scale use of fossil fuels. Often, we only realise what safety measures are required once an innovation has been in use for some time and the consequences start to materialise.

Limited effectiveness

To avoid these kinds of (to put it mildly) nasty surprises, it is essential to limit the risks of AI in advance and regulate it globally. This requires the harmonisation of laws between countries, i.e. each country retains its own laws and authorities, but these are aligned with international standards through treaties. In the case of patents this is done under the supervision of the World Trade Organisation, among others.

Investing in AI research and development, as Google EMEA president Matt Brittin argued (Trouw, 8 August), is also important to ensure the safety and ethics of AI. The European AI Act is a step in the right direction, but without global cooperation and uniform rules its effectiveness will remain limited.

We will only be able to deploy AI technologies in a safe and ethical manner through international cooperation, ideally under the leadership of the United Nations. The future of AI depends on our willingness to develop consistent global policies.
