Artificial intelligence and ethical responsibility
BALTIMORE — Can a computer review a contract, conduct legal research or even write a brief?
Legal services companies advertise a variety of programs marketed as artificial intelligence to make attorneys’ work easier and faster, but the boom in so-called AI has not been accompanied by robust ethics considerations. The American Bar Association adopted a resolution last summer that “urges courts and lawyers to address the emerging ethical and legal issues related to the usage of artificial intelligence (‘AI’) in the practice of law.”
Sharon D. Nelson, an attorney and the president of Sensei Enterprises Inc. in Fairfax, Virginia, said discussions need to address bias in programs and technology, as well as the responsibility of attorneys who use the technology.
“I think there is almost no sector of the legal industry that does not want to come up with some sort of standard for ethical AI,” Nelson said. “I think what we’re doing is synthesizing the work of others and saying, as it applies to the practice of law, ‘Here are the things we need to address before we develop a more robust code of conduct.’”
One of attorneys’ major ethical duties related to technology is knowing what it is capable of doing before using it on behalf of clients, according to Frank Pasquale, a professor at the University of Maryland Francis King Carey School of Law.
“My sense is that much of what is marketed as artificial intelligence is, in fact, very incremental improvements on already familiar technology,” he said.
Companies that claim their software can do legal research or write briefs are actually using algorithms whose output people then refine into a final product, he said.
“I think that the key is to be skeptical, to demand unsupervised time with the AI, the alleged AI … to test it out,” he said. “I think also that being able to inspect or be able to get some sense of the underlying data is [beneficial].”
Nelson said attorneys should be skeptical of vendors that claim to have AI technology.
“It’s still a mystery to most attorneys, what it is exactly,” she said. “Everybody wants to call what they’ve got AI.”
John W. Simek, vice president of Sensei Enterprises, said that “true AI [is] trying to mimic how the human mind works.”
Machine learning — a branch of AI that involves the recognition of patterns and the automation of processes based on human input — is the first step to AI, Simek said. In the law, machine learning is used in document review and discovery to sort through files after an attorney has taught the program what to seek.
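The document-review workflow Simek describes — an attorney labels sample documents, then the program learns to sort the rest — is, at its core, supervised text classification. The sketch below is a purely illustrative toy (not any vendor's actual product): a minimal Naive Bayes classifier in plain Python, with hypothetical labels `responsive` and `not_responsive` standing in for an attorney's review decisions.

```python
from collections import Counter
import math

def tokenize(text):
    """Lowercase and strip basic punctuation from each word."""
    words = (w.strip(".,;:!?").lower() for w in text.split())
    return [w for w in words if w]

class ReviewClassifier:
    """Toy Naive Bayes classifier: an attorney labels sample documents,
    and the model scores new files against those labels."""

    def __init__(self):
        self.word_counts = {}        # label -> Counter of word frequencies
        self.doc_counts = Counter()  # label -> number of training documents
        self.vocab = set()           # all words seen during training

    def train(self, text, label):
        tokens = tokenize(text)
        self.word_counts.setdefault(label, Counter()).update(tokens)
        self.doc_counts[label] += 1
        self.vocab.update(tokens)

    def classify(self, text):
        total_docs = sum(self.doc_counts.values())
        best_label, best_score = None, float("-inf")
        for label, counts in self.word_counts.items():
            # Log prior: how common this label is in the training set.
            score = math.log(self.doc_counts[label] / total_docs)
            total_words = sum(counts.values())
            for word in tokenize(text):
                # Log likelihood with add-one smoothing for unseen words.
                score += math.log(
                    (counts[word] + 1) / (total_words + len(self.vocab))
                )
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# An attorney "teaches the program what to seek" with labeled examples:
clf = ReviewClassifier()
clf.train("merger agreement signed by both parties", "responsive")
clf.train("lunch menu for the office party", "not_responsive")
print(clf.classify("draft merger agreement"))  # -> responsive
```

Real e-discovery tools train on thousands of labeled documents and use far richer models, but the structure is the same: human judgment supplies the labels, and the algorithm generalizes the pattern.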
In the absence of specific guidance, many basic tenets of attorney ethics can be applied to the use of AI and other legal technology, Nelson said.
“The big rule that covers them is [the duty of] competence,” Nelson said. “So they have to be competent with the technology and able to use the technology.”
Rules governing confidentiality and the supervision of lawyers and non-lawyers also apply, Nelson said.
Pasquale said attorneys should be duty-bound to disclose when they use some form of AI in a case — both to the court and to clients — though he added that this is currently more of a best practice than a codified ethics rule.
Comparing legal AI to medical innovations, Pasquale pointed out that doctors usually tell patients if they are trying something novel.
“There’s a lot to learn for lawyers from the informed consent doctrine,” he said.