Legal Implications of Artificial Intelligence

June 19, 2021

Robotics and artificial intelligence (AI) are among today's most advanced business realities. Economies are being transformed as processing power grows, algorithms improve, and massive volumes of data become available. According to the International Data Corporation ("IDC"), the artificial intelligence market was predicted to reach $35.8 billion in 2019, up 44 percent from the previous year, and worldwide AI spending was expected to more than double to $79.2 billion by 2022. In this article, we discuss the wide range of legal challenges emerging in connection with the use of artificial intelligence (AI), as well as some possible responses from the law.

Introduction 

Artificial Intelligence (AI) is the capability of computers to simulate human intelligence: machines are programmed to think and to take action toward a specific goal, much as a human would (Copeland, 2018). It is the ability to connect, analyze, model, draw conclusions, and learn to react to changes in the dynamics of a situation. AI has two main branches. Machine learning refers to programs that learn automatically from the data fed into them, without human assistance. Deep learning is automated learning from massive volumes of unstructured data such as text, images, and video.
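
To make the machine-learning idea concrete, the following minimal Python sketch (an illustration added here, not drawn from any cited source) shows a program inferring a rule from example data alone: it is never told the relationship y = 2x + 1, yet it recovers it by repeatedly nudging its parameters to reduce prediction error.

```python
# Minimal illustration of machine learning: the program is never told the
# rule y = 2x + 1; it infers it from example data via gradient descent.

# Training data generated by the hidden rule y = 2x + 1.
data = [(x, 2 * x + 1) for x in range(10)]

# Start with arbitrary guesses for the slope (w) and intercept (b).
w, b = 0.0, 0.0
learning_rate = 0.01

for _ in range(5000):
    for x, y in data:
        prediction = w * x + b
        error = prediction - y
        # Nudge the parameters in the direction that reduces the error.
        w -= learning_rate * error * x
        b -= learning_rate * error

print(f"learned rule: y = {w:.2f}x + {b:.2f}")  # approximately y = 2.00x + 1.00
```

Deep learning applies the same learn-from-data principle at far greater scale, using many-layered neural networks over unstructured text, images, and video.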

The International Data Corporation ("IDC") predicted that global spending on AI would reach US$35.8 billion in 2019, a 44 percent increase over 2018, and that the AI market would continue to grow to nearly US$80 billion by the end of 2022 (IDC, 2019).

Accenture reported in 2017 that AI would have a large and direct effect on the pace of development of 16 sectors across 12 economies, estimating that AI could enhance profitability by an average of 38 percent by 2035 and deliver a total economic boost of US$14 trillion across those 16 sectors (Purdy & Daugherty, 2017).

The key goal and benefit of AI is enhanced decision-making, through machine learning or deep learning, across all aspects of the economy and of everyday life.

As professors Andrew McAfee and Erik Brynjolfsson put it in Machine, Platform, Crowd, the evidence is overwhelming that, whenever the option is available, relying on data and algorithms alone usually leads to better decisions and forecasts than relying on the judgment of even experienced and "expert" humans (McAfee & Brynjolfsson, 2017). The key concern with applying AI to all aspects of life is the scenario in which it slips out of human control and authority, producing unexpected and poor outcomes.

The legal implications of AI

The world has witnessed new and significant legal and moral problems growing out of AI's capabilities. Some have emphasized the need for AI ethicists to explore the negative consequences of where this technological advancement may lead us (Murawski, 2019). The British House of Commons produced a study on automation through robotics and AI in October 2016, highlighting various ethical and legal challenges such as automated decision-making, minimizing bias, privacy, and accountability (House of Commons Science and Technology Committee, 2016). On December 18, 2018, the European Commission's High-Level Expert Group on Artificial Intelligence ("AI HLEG") released the first draft of its Ethics Guidelines for Trustworthy AI (European Commission, 2018). According to the guidelines, trustworthy AI requires both an ethical purpose and technical robustness.

To provide an "ethical purpose," AI's development, deployment, and use should respect fundamental rights and applicable regulation, as well as core principles and values. AI should also be technically robust and reliable, given that its use, even with good intentions, can have unforeseen repercussions.

Privacy

The sheer volume and nature of the data being collected make privacy one of the foremost legal challenges AI users will face. We predict that privacy will remain at the forefront as AI advances: the more data is utilized through AI, the more concerns will arise from its application.

Governments have little choice but to update their privacy laws to reflect public concern about growing data gaps and the unbridled use of data by large firms. Consumers are increasingly worried about their data being misused: seven out of ten Europeans are concerned about their private information being put to improper use.

The EU and authorities worldwide are paying very close attention to AI, recognizing not only its benefits but also its probable hazards and unforeseen consequences.

Out of this rising concern, the European Parliament approved the General Data Protection Regulation ("GDPR"), a comprehensive set of rules meant to keep the personal data of all EU residents secure from unlawful use (Deloitte, 2018). Under the GDPR, enterprises are obligated to inform individuals about the collection and use of their data, and to explain why the data is being acquired and how it will be used to build behavioral and attitudinal profiles of the individuals involved. To put it another way, firms must make crystal clear to consumers what sort of data they collect and how it will be utilized. Concerns have been raised about whether the GDPR may become a roadblock for programmers working on increasingly sophisticated and difficult algorithms. In the United States, attention is turning to whether the federal government should regulate how personal information is used in the AI realm to protect American consumers. Many prominent American firms, such as Apple, believe data regulation is inevitable, and they have begun speaking out on the subject to promote the establishment of legislation in the United States (Fried & Allen, 2018). Accenture, a consulting firm, has presented a framework to help US government agencies examine, develop, and manage AI systems.

Contracts

Due to the unique nature of AI, individuals or companies procuring AI services may want to seek additional contractual safeguards. Historically, software performed exactly as specified; machine learning, however, is not static but constantly evolving. As McAfee and Brynjolfsson observed, machine learning systems become more useful as they grow, run on faster and more specialized hardware, gain access to more data, and incorporate improved algorithms. The more data algorithms process, the more adept they become at recognizing patterns. Parties may therefore consider contractual arrangements that fix the expected functionality of the technology and provide that, if unacceptable problems arise, contractual modifications will follow. These strengthened criteria place a premium on audit rights over the algorithms in AI contracts, appropriate service levels in the agreement, a determination of who owns AI-generated innovations, and indemnification clauses in the event of a malfunction. AI will compel contract drafters to take a more creative approach, requiring them to forecast where machine learning could go.

Torts

A machine's knowledge base is always expanding, allowing it to make increasingly complex decisions based on the data it processes. While the majority of outcomes are predictable, the limits of human supervision leave a distinct possibility of an unexpected or harmful event. The mechanical and artificial character of AI therefore invites fresh legal thinking.

Tort law has historically been the vehicle through which the law has absorbed societal developments, particularly technological advancements. In the past, courts have taken tort law's established analytical framework and applied its principles to the facts before them. The most common tort, negligence, asks whether a person owes another a duty of care, whether the person owing that duty breached it, and whether the breach resulted in damages. Negligence is defined by the principle of reasonable foreseeability: without the benefit of hindsight, the question is whether a reasonable person could foresee or predict the general consequences of his or her actions. The more AI systems diverge from conventional techniques and coding, the more they may exhibit behaviors that are not only surprising to their developers but completely unanticipated. Are we heading toward a scenario in which no one is held responsible for an outcome that harms others, simply because it was unpredictable? Our governments, one would hope, will recognize the necessity of avoiding such an outcome. Where predictability is lacking, the law may shift from a negligence analysis to one of strict liability. The concept of strict liability, often associated with the rule in Rylands v Fletcher, holds that a defendant may be held legally liable even where no intentional or negligent conduct is shown, so long as the defendant's actions caused the plaintiff's injury.

Conclusion 

The fundamental advantage of AI is the help it provides to people making decisions across many areas of life. While AI will significantly increase growth and profitability, legal and moral difficulties have risen alongside its capabilities. Growing data gaps, as well as the unregulated use of data by large businesses, will trigger concerns that compel nations to adopt mechanisms to govern AI systems. AI will force contract drafters to take a more creative approach and to secure additional contractual safeguards, placing a premium on audit rights over algorithms, appropriate service levels in the agreement, determinations of who owns AI-generated innovations, and indemnification clauses in the event of a malfunction. As AI systems deviate further from traditional techniques and coding, more surprising and unforeseen behaviors can be expected. We must be very mindful not to end up in a scenario where no one is responsible for a potentially harmful outcome because of the unpredictability of AI-managed results.

Author: Tan Kim Chong

Bibliography

International Data Corporation (11 March 2019), "Spending on AI Systems", https://www.businesswire.com/news/home/20190311005093/

Copeland, B.J. (August 2018), "Artificial Intelligence"

Mark Purdy and Paul Daugherty (2017), "How AI Boosts Industry Profits and Innovation", Accenture

Andrew McAfee and Erik Brynjolfsson (2017), "Machine, Platform, Crowd: Harnessing Our Digital Future"

John Murawski (March 2019), "Need for AI Ethicists Becomes Clearer as Companies Admit Tech's Flaws"

House of Commons Science and Technology Committee (12 October 2016), "Robotics and artificial intelligence"

European Commission (18 December 2018), "Have your say: European expert group seeks feedback on draft ethics guidelines for trustworthy artificial intelligence"

Deloitte (2018), "AI and risk management"

Ina Fried and Mike Allen (18 November 2018), "Apple CEO Tim Cook calls new regulations 'inevitable'"
