The Legal Challenges of Artificial Intelligence

  • January 22, 2019

Even though AI has not yet evolved to the point of mass adoption, 2018 saw a dramatic rise in new tools, software, applications, platforms and improvements in machine-learning algorithms, which will have an impact on the financial, legal, healthcare, mobility, and many other sectors.

The technology will not mature as long as AI keeps facing the barriers it faces today: the lack of a regulatory environment, lack of trust in the technology, general fear of it, lack of risk-mitigation strategies, and outdated systems that cannot support AI. On top of that, there is still a great deal of unresolved complexity in the algorithms and a significant shortage of the data a machine needs for learning – the data are often incomplete, missing, or come in various formats (audio, video, photos, texts, etc.) and of varying quality. Since AI's learning depends on large quantities of good-quality data, we could say that AI is only as good as the data it learns from.

Nevertheless, 2018 brought many innovations and breakthroughs in this area. One of the most significant is the massive use of chatbots in retail, healthcare, banking, and other sectors. Smartphones became intelligent: they can recognise faces and offer other AI-powered functionality. There was a significant rise in the use of drones. Smart apps can find you a group of other people to share a ride with, predict crimes, and much more. There were also some defining events for AI in 2018, such as the accident in which an Uber self-driving car killed a pedestrian, which raised the question of liability, and the Cambridge Analytica scandal, which raised the question of data privacy. On top of that, self-driving buses are set to hit Swedish roads in 2020.

WHAT IS AI?

Let’s see what artificial intelligence is and how it will shape the law in the coming years.

Artificial intelligence is a field of computer science concerned with creating machines that can analyze data, think, speak, recognize, make independent decisions, solve complex problems, learn, and even feel and react without any human help.

Several legal challenges will have to be properly addressed before AI is fully mature. I will mention only a few of the most important ones; some of them have also been addressed in the EU Parliament’s report on artificial intelligence (accessible here: http://www.europarl.europa.eu/doceo/document/A-8-2017-0005_EN.html?redirect#title1).

1. LIABILITY

The current legal framework contains no rules under which robots can be held liable for acts or omissions that cause damage to third parties. However, as the EU Parliament recognizes in the report, the more autonomous robots are, the less they can be considered simple tools in the hands of humans (such as the manufacturer, the operator, the owner, the user, etc.). Robots can be so complex that it is questionable whether the ordinary rules on liability are sufficient, especially in cases where the cause of the damage cannot be traced back to a specific human and where the acts or omissions of the robot that caused the harm could have been avoided.

The EU Parliament sees the solution in establishing a compulsory insurance scheme for producers or owners of robots, similar to the one that already exists for cars. Moreover, the manufacturer, the programmer, the owner, etc. should have the opportunity to benefit from limited liability if they contribute to a compensation fund, or if they jointly take out insurance to guarantee compensation where damage is caused by a robot.

On top of this, the current legal framework also falls short on contractual liability. Where machines are able to choose their counterparties, negotiate contractual terms, conclude contracts and decide whether and how to perform them, the traditional rules may be insufficient.

2. AI PERSONHOOD

Personhood is the quality of being an individual person with rights and obligations. It is the status that underpins rights and concepts such as nationality, citizenship, equality, integrity, liberty, and dignity, and it is a quality that every natural or legal person has. In practice, personhood means that whenever there is a car accident, there is a natural or legal person behind it who can be held liable for the consequences of the accident.

The question often discussed in the past year is whether a machine should be given personhood (rights and liabilities) in the same or a similar way as natural and legal persons, but with its own specific features and implications.

The current legal framework does not hold robots liable per se for acts or omissions that cause damage. Instead, it traces liability back to a specific human (such as the manufacturer, the owner, the operator, etc.) and asks whether that person could have foreseen and avoided the robot’s harmful behavior; if so, that person is held strictly liable for the robot’s acts or omissions. There are also rules on product liability and on liability for harmful actions that could apply to damage caused by robots, but those rules are in many cases insufficient and cannot identify the person liable for a robot’s act or omission.

To close such legal loopholes, the EU Parliament proposes that more complex robots could have their own personhood (so-called “electronic personhood”), which would give them a set of rights and liabilities and would apply in cases where robots make autonomous decisions or otherwise interact with third parties independently. Whether such an approach is right from a legal and ethical point of view remains to be seen.

3. PROTECTION OF DATA PRIVACY & PRIVATE LIFE

For machine learning, the free flow of data is essential if robots are to be utilized to their full potential. On the other hand, the Union’s legal framework for data protection must be fully complied with.

To that end, it is important that robots are developed to be safe, secure and fit for purpose, and that they follow data-processing procedures compliant with existing legislation, confidentiality, anonymity, fair treatment, and due process. There must be a sufficient security layer in the networks in which robots operate to prevent security breaches, cyber-attacks and misuse of personal data, especially when large amounts of data are collected and processed.

There should be mechanisms that enable users to stop the processing of their personal data and to invoke the right to be forgotten. I see a great contradiction at this point: if the machine needs data to learn, how will the erasure of those data affect its learning ability and its operations?

4. INTELLECTUAL PROPERTY RIGHTS

The current legal definitions of creativity and innovation do not take into consideration non-human innovation.

AI will certainly have an impact on the traditional concepts of intellectual property. AI-generated music and other inventions already exist, and they will surely transform the definitions of “authors”, “inventors”, and “artists”, and the concepts of patents, trademarks, copyrights, designs, etc.

5. AGREEMENTS

Nowadays, the contracting process can be quite long and complex: drafting, executing and keeping track of all the data and all the contracts.

AI is already capable of recognizing standard clauses, identifying patterns, suggesting alternatives, extracting data from contracts, etc. With an AI-based contract management system, companies can review contracts more rapidly, organize large amounts of contract data more easily, reduce the potential for contract disputes, and increase the volume of contracts being negotiated.
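
To make the idea of automated clause recognition concrete, here is a minimal, illustrative sketch in Python. Real contract-review tools rely on trained language models rather than hand-written rules; the clause names and regular expressions below are my own assumptions, chosen only to show the principle of flagging standard clauses in contract text.

import re

# Hypothetical patterns for a few standard clauses (illustration only).
CLAUSE_PATTERNS = {
    "governing_law": re.compile(r"governed by the laws? of [A-Z][\w\s]+", re.I),
    "limitation_of_liability": re.compile(r"liability .{0,40}shall not exceed", re.I),
    "termination": re.compile(r"either party may terminate", re.I),
}

def review_contract(text: str) -> dict:
    """Report which standard clauses appear in the contract text."""
    findings = {}
    for name, pattern in CLAUSE_PATTERNS.items():
        match = pattern.search(text)
        findings[name] = match.group(0) if match else None
    return findings

sample = (
    "This Agreement shall be governed by the laws of Slovenia. "
    "Either party may terminate this Agreement with 30 days' notice."
)
print(review_contract(sample))

Even this toy version shows why such systems speed up review: the software surfaces the clauses and the missing ones, and the lawyer only checks the flagged passages.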

AI could also be used for contract negotiation and execution. It will soon be able not only to store, manage and extract data, but also to analyze patterns and propose the best negotiation options for the parties.

It is also worth mentioning at this point that the future of AI is intertwined with blockchain technology, a digital record-keeping system that is transparent and trustless. Smart contracts – computer protocols that can digitally facilitate, negotiate and execute a contract – are one of the foremost applications spurring the development of blockchain platforms. When the technology reaches the point where contracts are made mostly by robots, contract law will have to change significantly and address questions of formation, modification, execution, enforceability, jurisdiction, notaries and authentication, and more.
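
The core idea of a smart contract is that the terms are written as code that executes automatically once the agreed conditions are met. Real smart contracts run on blockchain platforms (for example, written in Solidity); the plain-Python escrow sketch below only mimics the concept, and its names and figures are invented for illustration.

from dataclasses import dataclass

@dataclass
class EscrowContract:
    buyer: str
    seller: str
    price: float
    goods_delivered: bool = False
    paid_out: bool = False

    def confirm_delivery(self) -> None:
        """The buyer confirms delivery, satisfying the contract's condition."""
        self.goods_delivered = True

    def settle(self) -> str:
        """Release payment automatically, but only if the condition is met."""
        if self.goods_delivered and not self.paid_out:
            self.paid_out = True
            return f"Released {self.price} EUR from {self.buyer} to {self.seller}"
        return "Conditions not met; funds remain in escrow"

contract = EscrowContract(buyer="Alice", seller="Bob", price=1000.0)
print(contract.settle())       # funds stay in escrow
contract.confirm_delivery()
print(contract.settle())       # payment released automatically

This is exactly where the legal questions listed above arise: who formed this contract, how can it be modified once deployed, and who is liable if the code executes in a way neither party intended.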

6. COMPETITION LAW

As discussed above, AI needs data to learn. In terms of competition law, a problem could arise from the fact that AI will use all kinds of data to learn and react, especially given access to real-time online data on competitors’ algorithms. This would give robots the opportunity to detect, process and act on that information, placing one company in the same or a better position than its competitor, improving pricing models, offering better services and deal conditions, etc. This should raise an alarm, since many of those reactions could be treated as concerted practices, anti-competitive agreements, or similar.
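
A simplified sketch makes the concern tangible. The pricing logic and numbers below are invented for illustration (a real system would pull live market data): a bot that continuously observes a rival’s price and reacts to it can, together with the rival’s equally reactive bot, drive prices to track each other and start to resemble a concerted practice even though no human ever agreed on anything.

def reprice(own_cost: float, competitor_price: float) -> float:
    """Undercut the observed competitor price, but never sell below cost."""
    target = competitor_price * 0.99   # undercut by 1%
    return max(own_cost, round(target, 2))

own_cost = 8.00
competitor_price = 10.00
for day in range(1, 4):
    own_price = reprice(own_cost, competitor_price)
    print(f"Day {day}: competitor at {competitor_price:.2f}, we post {own_price:.2f}")
    # the competitor's bot reacts in the same way, so the prices track each other
    competitor_price = own_price * 0.99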

CONCLUSION

In this fast-evolving world of autonomous means of transport, medical and care robots, smart bots and other gadgets, we are standing on the frontier of a new revolution. It is vital that we approach the new technology with care and consider all of its legal and ethical implications and effects. The EU Parliament has already addressed the ethical responsibility of engineers and researchers with a code of conduct that sets out the main values: non-maleficence, autonomy, justice, accountability, safety, reversibility and privacy, among others.

As seen above, the behavior of robots will inevitably have civil-law implications, in terms of both contractual and non-contractual liability. There is (or soon will be) a need for a clear, unified legal definition of a robot, for clarity on responsibility for the actions of robots, and potentially also for a separate personhood of robots, in order to ensure legal certainty and transparency for producers and consumers.

If you need an AI legal specialist, contact us here.

Author: Mina Krzisnik, fintech lawyer
