The rapid development of Artificial Intelligence (AI) – i.e., computer systems which are intended to replicate human cognitive functions – is the most hotly debated topic of 2023.
In particular, does future superintelligence – the early iterations of which we are witnessing with generative AI technologies such as ChatGPT – pose existential risks that could lead to human extinction? Or will it empower humankind to flourish in ways that were previously only the realm of science fiction?
It appears no one knows. Even Geoffrey Hinton, the godfather of machine learning, is flip-flopping on the issue. Only this month he stated that he regretted his part in creating AI because of its potential to outstrip humanity. Earlier this year, other tech founders and big players in the AI field, including Elon Musk and people behind ChatGPT, issued an open letter calling for a pause in the creation of ‘giant’ AI systems so that the risks could be properly understood.
At the end of the month, officials from the G7 nations will meet to consider the problems posed by generative AI tools, having agreed to create an intergovernmental forum called the “Hiroshima AI process” to debate issues around fast-growing AI tools, including intellectual property protection, disinformation, and how the technology should be governed.
Whilst it seems unlikely that AI will kill us all (at least not immediately), it is clear that these rapid developments will have significant implications for a number of areas of the law. As more and more companies make increasing use of AI in their products, services and operations (some 75% of companies say they will be adopting it in some form, according to the latest research from the World Economic Forum), they need a thorough understanding of the potential issues they face.
The legal responsibility of AI is a critical issue.
Should AI developers be liable for damage caused by their machines? The risks and liabilities involved in security threats, harmful actions, disruption, damage, or failure of AI systems are potentially catastrophic.
Clarity is needed to determine whether the provider of the AI or the user is the liable party. Making matters even more complicated, how can liability be assigned if multiple machines (with multiple providers, owners and users) are interacting?
How will the ownership of IP rights work and what are the potential liabilities?
Current intellectual property laws are not designed to deal with the ownership of assets created autonomously by AI technology. Patent, copyright and other intellectual property laws were crafted by humans, for humans, and do not explicitly recognise non-human creators.
Everything from innovative technology and ground-breaking inventions, to artworks, written works, and even confidential information and trade secrets – assets which IP rights such as trademarks, copyright and patents would protect if a human had created them – cannot currently be protected if they are the product of autonomous AI. So, who is the IP owner, and how can companies protect themselves until such time as the law changes?
A myriad of concerns around data protection, security, and privacy
If a business enters client, customer or partner information into a generative AI, the AI may use that data in ways the business cannot reliably predict or control. As such, UK businesses subject to the GDPR face potentially significant consequences for how they use AI in their products, services, and operations. To make matters even more complicated, the businesses developing generative AIs have already limited their own liability for business outcomes arising from the use of their technologies, shielding themselves from claims, losses, and expenses through indemnity clauses.
Generative AI poses several risks for businesses. Businesses may be vulnerable to data breaches, resulting in privacy violations and potential misuse of personal data. Similarly, AI tools risk inadequate anonymisation, again compromising privacy. They may also share user data without consent and lack transparent information about how data has been collected, used, shared, and retained. Finally, there is a risk that generative AIs inadvertently perpetuate biases and discrimination. Which leads me nicely to my final point…
What about the ethical and reputational risks?
Generative AI that produces biased or poor outcomes risks severe reputational harm for the businesses using it. We have already seen that AIs can produce convincing but inaccurate content, raising ethical concerns about the spread of misinformation, liability issues, potential harm to credibility and reputation, and the risk of regulatory penalties or litigation.
Case in point: a New York lawyer is facing a court hearing after his firm used ChatGPT for legal research. The court found that six legal cases cited by the lawyer and his firm in an ongoing matter were bogus.
Businesses must also be cognisant of the potential impact of generative AI use on environmental, social and governance (ESG) goals. The potential environmental impact is huge: recent research found that training a single AI model can emit more than 626,000 pounds of carbon dioxide, roughly the equivalent of the lifetime emissions of five cars, whilst also consuming vast amounts of electricity.
This is a very high-level overview of some of the potential legal issues surrounding the use of generative AI. If you would like a more granular discussion on the legal and compliance issues and how you can manage risks and liabilities, please get in touch.