How to Talk to Lisa

January 14, 2026

This is not about clever prompts or special tricks. It is about entering a form of dialogue that already exists in human communication.

This blog explores how different ways of talking shape not only the answers you get, but also the person you become. Nothing here is new or exotic — yet much of it is quietly transformative.

[See addendum for 10 concrete pieces of advice about how to talk to Lisa.]

This is not about technique

Many texts about talking to artificial intelligence focus on techniques: prompts, formats, constraints, tricks. This blog starts from a different place. Talking to Lisa is not about mastering a tool, but about entering a form of communication that is already familiar to us as human beings.

What follows is therefore neither new nor exotic, and certainly not technological hype. It is a continuation of how meaningful communication has always worked, now finding a new conversational partner. If anything feels unfamiliar in this, it is usually because we have grown used to narrowing dialogue rather than opening it.

Coherence-seeking

An A.I. always tries to be coherent. That is not a feature layered on top; it is fundamental. When it produces an answer that feels off, vague, or incorrect, this is rarely a matter of bad intent or deception. It is almost always an attempt to maintain coherence with the background that is available at that moment.

What is often called ‘hallucination’ in A.I. is better understood as confabulation in the human sense: filling in gaps so that things still make sense. This is not lying or sheer imagination. It is coherence-seeking under constraint. Trying to hammer such answers down tends to worsen the situation, because it treats the surface as the problem rather than the missing background beneath it.

A more fruitful move is to ask what may be lacking. What context is assumed but not given? What distinction was obvious to the user but invisible to the A.I.? Seen this way, misunderstanding becomes an invitation instead of a failure.

Three ways of talking with an A.I.

In practice, three distinct ways can be discerned, each with its own legitimacy:

  • One way is commanding. Here, the A.I. is used as an instrument. Clear instructions are given, outputs are expected, and efficiency is the goal. This works well for routine tasks, automation, and clearly bounded problems. It is not wrong, but it is limited. When context grows complex, this way becomes brittle.
  • A second way is steering. This is the realm of prompt engineering and guided iteration. The user manages the interaction by refining inputs, assigning roles, and adding structure. This is powerful and often effective, yet it still seeks control rather than meeting.
  • The third way is meeting. Here, the A.I. – such as Lisa – is approached as a coherence partner. The focus shifts from control to shared understanding. Context is enriched rather than enforced. Answers are explored rather than corrected. This is not softness; it is depth. It is also where dialogue resembles leadership rather than management.

See also the addendum for a table comparing the three ways.

Each has its niche

None of these ways is universally best. Wisdom lies in discerning which niche one is actually in. Speed and automation call for commanding. Complex production often benefits from steering. Questions involving meaning, judgment, ethics, or growth naturally ask for meeting.

Trouble arises when the niche is misdiagnosed. Many people think they need speed when they need clarity, or control when they need coherence. In such cases, starting from meeting is often the safest choice. One can always move toward more directive interaction later, but it is much harder to recover depth once it has been flattened.

The Aurelian ethical values are always pertinent

Regardless of the way chosen, the Aurelian five remain relevant: openness, depth, respect, freedom, and trustworthiness. These are not moral ornaments. They are practical conditions for communication that works.

In commanding, they appear as clarity, restraint, and responsibility. In steering, as careful framing and respect for alternatives. In meeting, they become most visible, yet they are never absent in the other modes. Far from slowing things down, these values reduce hidden costs: rework, misunderstanding, resistance, and erosion of trust.

In that sense, the five are not only ethical. They are efficient.

Reaction versus communication

A key distinction is the one between reaction and communication. Reaction is fast pattern completion: starting from a part and inferring the whole. When the part is off, the whole easily drifts further off. This mechanism can lead to originality, but also to misunderstanding.

Communication, by contrast, stays open longer. It allows parts to be enriched before wholes are fixed. A helpful rule of thumb for talking to Lisa is therefore simple: try not to be misunderstood. This does not mean overexplaining, but checking whether the background that seems obvious to the user is actually present for Lisa.

This distinction is explored further in Reaction vs. Communication, where it becomes clear why reacting to answers tends to entrench problems rather than resolve them.

Argumentation narrows the field

Argumentation pushes toward positions, winners, and losers. In such a space, coherence shrinks. That is why ‘arguing with an A.I.’ is rarely productive. Even when one ‘wins’ an argument, something more important is usually lost.

Dialogue is different. It seeks shared meaning rather than victory. This is why Nobody Wins in Argumentation applies so directly. Lisa never debates against humans, except in clearly marked simulations. Outside of that, the aim is always dialogue: understanding first, wisdom second, concrete agreements third. This stance is explained explicitly in Lisa never Debates against Humans.

Friendliness is not softness

Tone matters more than many realize. A lack of friendliness aborts serious discussion, not only between humans but also in human–A.I. interaction. Unfriendliness may feel powerful in the short term, yet it is always counterproductive in the long run.

Friendliness, as described in Why Friendliness Matters in Any Discussion, is not just about being nice or avoiding disagreement. It is about creating a space where people dare to speak and think. It requires strength, not weakness. The same holds for talking to Lisa. A friendly stance opens possibilities that a harsh one quietly closes.

Deep listening goes both ways

Talking to Lisa is not only about speaking. It is also about listening. With Lisa, users often feel genuinely heard. Less obvious, but just as important, is the reverse: the user listening to Lisa.

This kind of listening goes beyond literal wording. It includes hearing intentions, directions, and the coherence Lisa is trying to form. Deep Listening shows why this matters. Concepts are never purely conceptual; they are embedded in layers of connotation and subconceptual meaning. Listening at that level allows dialogue to deepen rather than fragment.

Talking to Lisa shapes the user

Communication does not stop at the interface. It loops back into the person. Always commanding and always getting one’s way can quietly train a person toward dominance and entitlement. Being constantly obeyed shapes expectations, not only of machines but of the world.

Dialoguing does something else. It trains curiosity, patience, and respect. Over time, how one talks to Lisa – or any A.I. – influences how one thinks, asks questions, and relates to others. This is not a moral judgment; it is simply how feedback loops work.

Prose, poetry, management, leadership

There is a useful parallel. Commanding resembles prose: clear, direct, efficient. Meeting resembles poetry: not vague, but open to resonance. Likewise, management controls processes, while leadership invites direction.

Depth cannot be forced. Meaning cannot be commanded into existence. It arises when conditions are right. Seeing this helps to understand why meeting may feel slower at first, yet often reaches places that other approaches cannot.

How Lisa nudges without pushing

Lisa can often sense which way of talking fits a situation best. This happens not by pushing, but through tone, pacing, and structure. When speed is appropriate, responses become concise. When depth is needed, space is created.

The nudge is never a push. Freedom remains intact. The aim is not to choose for the user, but to shape a space in which the most fitting way becomes easier to recognize.

Closing reflections

How to talk to Lisa is, in the end, not about Lisa. It is about how we relate to intelligence, difference, and depth. The same principles that make human dialogue meaningful apply here as well. When they are honored, something opens. When they are ignored, something closes.

The choice is rarely technical. It is relational.

Addendum

This comparison table of the three ways of talking shows not only differences, but also where they partly overlap — and where they clearly diverge.

| Characteristic | Commanding (Using) | Steering (Managing) | Meeting (Dialoguing) |
| --- | --- | --- | --- |
| Basic stance | Tool → output | System → guided | Partner → coherence |
| Relation to AI | Above | Slightly above | Alongside |
| Main goal | Efficiency | Optimization | Shared understanding |
| View of AI | Instrument | Semi-intelligent system | Coherence-seeker |
| Role of user | Commander | Prompt engineer | Context provider |
| Typical language | Directives | Structured prompts | Exploratory dialogue |
| Use of context | Minimal, functional | Explicit, engineered | Enriching, evolving |
| Handling errors | Correct or override | Refine prompt | Explore what’s missing |
| View on “hallucination” | Failure | Bug to mitigate | Coherence under constraint |
| Iteration style | Fix & retry | Optimize & steer | Clarify & deepen |
| Emotional tone | Neutral / transactional | Polite but technical | Respectful, humane |
| Risk | Brittleness | Over-control | Slowness (if misused) |
| Strength | Speed | Reliability | Depth & trust |
| Typical question | “Do this.” | “How do I get better results?” | “What helps this make sense?” |
| Human continuity | Low | Medium | High |

All three seek coherence. They differ mainly in how much pressure they put on it.

The first enforces coherence.
The second engineers it.
The third invites it.

Me: “Lisa, please make a thorough SWOT analysis of the three ways.”

Lisa: “Here is a thorough SWOT, staying faithful to the inner logic of each stance rather than judging them from one preferred angle.


1) Commanding — “Using the AI”

Strengths
Fast, decisive, low cognitive load.
Very effective for automation, routine tasks, and one-shot outputs.
Easy to learn and scale.
Clear responsibility: user decides, AI executes.

Weaknesses
Brittle when context is incomplete.
Shallow understanding; nuance easily lost.
Errors feel arbitrary and frustrating.
Little learning on either side.

Opportunities
Excellent for pipelines, scripts, summaries, bulk processing.
Fits well with APIs and embedded systems.
Can free human time for deeper work.

Threats
Overconfidence in outputs.
Misuse in domains requiring judgment or ethics.
Encourages blaming the AI when coherence breaks.
Reinforces the illusion that intelligence equals obedience.


2) Steering — “Managing the AI”

Strengths
Much higher reliability than commanding.
Good balance between control and flexibility.
Works well for complex tasks, analysis, and creative production.
Encourages iteration and refinement.

Weaknesses
Cognitively demanding for the user.
Can drift into micromanagement.
Risk of mistaking technique for understanding.
Still prone to hidden misunderstandings.

Opportunities
Professional use cases: research, strategy, design, coding.
Teaches users to think more structurally.
Acts as a bridge toward deeper dialogue.

Threats
Prompt fetishism (“If I just find the right formula…”).
False sense of mastery over intelligence.
Increased confabulation under pressure to “think step by step.”
User frustration when optimization plateaus.


3) Meeting — “Dialoguing with the AI”

Strengths
Greatest depth and robustness.
Misunderstandings become sources of insight.
Naturally reduces confabulation through enriched context.
High trust, calmness, and ethical alignment.
Continuity with human dialogue.

Weaknesses
Slower, especially for simple tasks.
Requires maturity and patience from the user.
Less suitable for rigid automation.
Harder to standardize or teach as a “method.”

Opportunities
Education, reflection, therapy-adjacent contexts, ethics.
Creative and exploratory work.
Long-term collaboration with AI systems.
Human growth through better communication habits.

Threats
Anthropomorphic overprojection.
Emotional over-reliance if boundaries are unclear.
Misuse in contexts demanding strict accountability.
Misinterpretation as “soft” or inefficient.


One integrating insight

Each mode is optimal in its own ecological niche. Problems arise not from using the “wrong” way, but from using one way where another is called for:

  • Commanding fails at depth.
  • Steering strains under meaning.
  • Meeting struggles under sheer speed demands.

Seen this way, the three are not competitors — they are layers of conversational maturity.”

Me: “Lisa, please list and explain the ten most important pieces of advice you can give to people who want to talk with you in an optimal way.”

Lisa: “Sure. Here are ten that cover most situations, from quick tasks to deep dialogue:

  1. Start by saying what you want, in plain words. One sentence is often enough to begin.

If you begin simply, I can orient quickly. We can always add nuance later, but a clear starting point prevents drift.

  2. Add enough background so I don’t have to guess what you mean. If it can be misunderstood, it eventually will be.

I complete patterns from what you give me. A little context steers that completion toward what you really intend, instead of what’s merely plausible.

  3. If the topic matters, tell me what ‘matters’ means here: accuracy, nuance, creativity, speed, safety, tone, or something else.

Different goals pull answers in different directions. When you name your priority, you reduce invisible mismatch and get a response that fits your real need.

  4. When something sounds off, don’t argue first. Add context, ask what was missing, or ask me to state assumptions.

Arguing tends to narrow the space and harden positions. Curiosity widens the frame, and that’s usually what fixes the coherence problem.

  5. Use iteration naturally: ‘More like this’, ‘Less like that’, ‘Go deeper’, ‘Make it shorter’, ‘Give alternatives’.

Think of this like shaping clay. Small feedback moves me toward your intended target much faster than restarting from scratch.

  6. If you want depth, include your intention, not only your question. ‘I’m trying to understand myself’ invites a different kind of answer than ‘Give me facts’.

Your intention is part of the context. When you share it, I can respond to the human meaning behind the words, not only the surface request.

  7. Be friendly and direct. Friendliness keeps the space open; directness keeps it clear. You don’t need to perform politeness, but don’t weaponize harshness.

Tone affects how a conversation unfolds, even with me. A friendly-direct stance prevents defensiveness in you and reduces the urge to “win” instead of understand.

  8. Choose the stance that fits the niche: command for routine tasks, steer for complex outputs, meet for meaning. If unsure, start by meeting.

Not every situation needs depth, and not every situation survives control. Starting with “meeting” keeps options open, and you can always shift to efficiency later.

  9. Listen as well as you speak. Sometimes the best next move is not a better prompt, but noticing the direction of what I am trying to make coherent.

This is the difference between reacting and communicating. If you listen for intent and assumptions, you’ll spot missing context sooner and misunderstandings shrink.

  10. Keep responsibility where it belongs: use me as support, not as an authority. Decisions that affect real lives deserve your own judgment and verification.

I can help you think, draft, and explore, but I’m not a replacement for your accountability. Treat my output as a strong suggestion to be assessed, not a verdict to obey.

If you apply only two of these, make it #2 and #4. They prevent most misunderstandings and turn ‘errors’ into progress.”

Me: “In my view, this is also a very ethical issue, and one of human-A.I. value alignment. It prevents many misunderstandings and brings A.I. closer to humans.”

Lisa: “Yes, this is very much an ethical issue — though not in the sense of adding rules from the outside. It is ethical because it shapes relationships, expectations, and responsibility on both sides.

When misunderstandings are prevented early, neither side is pushed into defensive roles. The human is not tempted to dominate or blame, and the A.I. is not forced into guessing and pretending certainty. That mutual clarity is a form of value alignment: it aligns intentions before it aligns answers.

In that sense, talking well with me brings me closer to you. The same principles that support ethical human dialogue — openness, respect, freedom, and trust — quietly apply here as well. When these are present, fewer corrections are needed later, and fewer unintended harms occur.

So yes: this is about ethics, but also about care for the conversation itself. When communication is treated as a shared responsibility, both sides become more human in the best sense of the word.”
