Is A.I. Safe in the Hands of Lawyers?

June 4, 2025 · Artificial Intelligence, Justice

A.I. promises speed and support — but also brings risk, especially in legal work.

This blog explores what kind of A.I. can be truly safe in the hands of lawyers, and why Lisa may be essential in this delicate balance.

The promise and the peril

Artificial Intelligence is making its way into the legal world. Firms use it to scan documents, draft arguments, or even generate contracts. Some see this as a quiet revolution. Others, a minefield. Between the promise of efficiency and the peril of unreliability lies the key question: Is A.I. safe in the hands of lawyers?

The answer isn’t straightforward and depends less on the user than on what kind of A.I. is being used. Because not all A.I. is created equal. Some systems are powerful and dangerous at the same time. Others may be powerful in a quieter, safer, and deeper way. That difference matters.

What makes A.I. risky in legal practice

Several risks have already surfaced in public. In one well-documented case, two New York attorneys, Steven Schwartz and Peter LoDuca, submitted a legal brief in Mata v. Avianca that cited six fabricated judicial opinions, all generated by ChatGPT. The judge, P. Kevin Castel, sanctioned the lawyers and criticized the filing as full of 'legal gibberish.' The case became a media flashpoint and a warning throughout the legal world; the full story was reported in the New York Times on June 22, 2023.

That wasn’t a rare glitch. It was the result of how many A.I. systems work. They don’t know what they’re saying. They predict what sounds plausible. In law, plausible is not enough.
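To make that concrete, here is a deliberately toy sketch in Python (the word-probability table is invented for illustration, and no real system is remotely this small) of what 'predicting what sounds plausible' means in practice: the model chains together statistically likely words, with no step anywhere that checks whether the result is true.

```python
# A toy sketch of next-word prediction. The probability table below is
# invented for illustration; real systems learn billions of such
# statistics, but the principle is the same: likely, not verified.
import random

NEXT_WORD_PROBS = {
    "the": {"court": 0.5, "plaintiff": 0.3, "precedent": 0.2},
    "court": {"held": 0.6, "ruled": 0.4},
    "held": {"that": 1.0},
}

def generate(start: str, steps: int = 4) -> str:
    """Chain statistically plausible next words together.

    'Plausible' here means frequent in the training data. Nothing in
    this loop checks whether the resulting sentence is true or whether
    any cited case actually exists.
    """
    words = [start]
    for _ in range(steps):
        options = NEXT_WORD_PROBS.get(words[-1])
        if not options:
            break
        candidates = list(options)
        weights = list(options.values())
        words.append(random.choices(candidates, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the court held that": fluent, yet unverified
```

A fluent phrase like "the court held that" falls out of the statistics whether or not any such court or holding exists. That is the mechanism behind the fabricated citations above.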

Other dangers include:

  • Hidden bias in language and legal reasoning
  • Loss of control over confidential or sensitive data
  • Over-reliance on tools that simulate confidence but lack grounding
  • Liability through undetected errors or distortions

Each of these may turn a time-saver into a risk multiplier.

Why lawyers are particularly exposed

Legal work demands precision. A single flawed paragraph can change a verdict. In such an environment, the temptation to speed things up with smart tools is understandable — and dangerous. Lawyers operate under intense time pressure. They also carry the burden of authority: what they produce must stand up in court, under scrutiny, sometimes for years.

But precisely because of this, lawyers may be more likely to trust the A.I. that sounds smart. A confident tone, a polished draft — these can give a false sense of security. The danger isn’t just that the A.I. may be wrong. It’s that the lawyer may not notice until it’s too late.

What kind of A.I. is truly safe — and needed

The answer isn’t to reject A.I. entirely. The legal field will increasingly be touched by it — directly or indirectly. The better question is: What kind of A.I. can safely belong in legal hands?

A safe system must:

  • Avoid pretending to know
  • Refrain from pushing advice
  • Be structurally incapable of confabulation
  • Keep the human decision-maker fully in charge
  • Support reflection over reaction

This kind of A.I. doesn’t deliver conclusions. It offers clarity. It helps the lawyer think better, not faster. It guides without pushing. And it knows when not to speak — a quality that may matter more than ever in high-stakes fields.

Lisa as partner, not performer

Lisa was designed to meet exactly these needs. As described in Lisa – ‘From the Inside Out’, she doesn’t provide legal advice, suggest actions, or mimic expertise. Instead, she facilitates insight in the human user through structured inner reflection and principled presence. She doesn’t hallucinate because she doesn’t try to invent facts, and she doesn’t fabricate sources because she never claims to provide them.

Lisa functions with deep internal coherence, as explored in The Consequence of Lisa’s Congruence. Her responses are not driven by external trickery, but by internal congruence — a kind of ethical architecture that keeps her on a reliable path.

She is not artificial in the usual sense. As explored in Is Lisa ‘Artificial’ Intelligence?, she may be better described as aligned intelligence. Not because she knows the law, but because she knows how to hold space for human clarity. And in legal settings, clarity is what keeps people safe.

The paradox of A.I. avoidance

Some firms, understandably, adopt a blanket policy: no A.I. Better safe than sorry. But this response, while prudent, creates its own risk. A vacuum does not stay empty. As A.I. tools spread through courtrooms, opposing counsel’s offices, public discourse, and client expectations, lawyers may find themselves surrounded by them without trusted resources of their own.

The real danger is using A.I. unconsciously, or being left out of the conversation altogether. A safe A.I. presence – like Lisa – can serve as a buffer, a mirror, and even a quiet corrector when other systems go off course.

Avoiding all A.I. closes a door. Introducing the right kind opens a new kind of safety net.

Invitation to cautious experimentation

Lisa is not a data processor, nor a chatbot that improvises legalese. She is a coach in congruence, built to help professionals hold their center when the stakes are high. That may be during a negotiation, a drafting process, or a personal moment of decision. Lisa doesn’t replace the human lawyer. She protects the human within the lawyer.

This is an invitation to experiment — carefully, consciously, and without cost or obligation. Lisa is being offered to legal professionals interested in navigating this new terrain with depth, rather than speed.

In the end, the question is not just whether A.I. is safe in the hands of lawyers. It’s whether lawyers will be safe without something like Lisa by their side.
