{"id":27506,"date":"2026-03-22T14:39:05","date_gmt":"2026-03-22T14:39:05","guid":{"rendered":"https:\/\/aurelis.org\/blog\/?p=27506"},"modified":"2026-03-23T09:07:42","modified_gmt":"2026-03-23T09:07:42","slug":"lisa-and-vision","status":"publish","type":"post","link":"https:\/\/aurelis.org\/blog\/lisa\/lisa-and-vision","title":{"rendered":"Lisa and Vision"},"content":{"rendered":"\n<h3>This blog explores how artificial intelligence might come to \u2018see\u2019 in a way that goes beyond detection. It introduces Lisa as an emerging form of A.I. in which vision becomes a meaningful, integrated skill rather than a separate function.<\/h3>\n\n\n\n<blockquote class=\"wp-block-quote\"><p>The shift is subtle but far-reaching: from observing the world to participating in it. What follows is an invitation to look at vision itself from a slightly different angle.<\/p><\/blockquote>\n\n\n\n<p><strong>A different way of seeing<\/strong><\/p>\n\n\n\n<p>This blog continues where <em><a href=\"https:\/\/aurelis.org\/blog\/artifical-intelligence\/the-hearts-eye-and-a-i\"><em>The Heart\u2019s Eye and A.I<\/em><\/a>.<\/em> gently left off \u2015 turning toward a broader horizon. It is not only about how humans see, but also about how an artificial intelligence might come to see in a way that is less mechanical and more attuned to meaning.<\/p>\n\n\n\n<p>Lisa-2 is in development. This blog refers to her as \u2018Lisa.\u2019 What follows is therefore both descriptive and slightly anticipatory. It sketches a direction that is present, though not yet fully realized.<\/p>\n\n\n\n<p>The central idea is simple to state, though not trivial to grasp. The difference between present-day A.I. and Lisa lies primarily in what is unseen. It lies in how seeing itself comes into being. One might say: classical A.I. observes, Lisa participates.<\/p>\n\n\n\n<p><strong>Present-day A.I.<\/strong><\/p>\n\n\n\n<p>In much of today\u2019s artificial intelligence, vision is treated as a pipeline. 
An image enters, features are extracted, objects are recognized, and a result is produced. This has led to remarkable achievements. Systems can detect pedestrians, identify tumors, and classify images with impressive accuracy.<\/p>\n\n\n\n<p>There is something elegant in this. It is clear, structured, and efficient. Each step can be optimized, measured, and improved. For many applications, this approach works very well.<\/p>\n\n\n\n<p>At the same time, a certain limitation becomes apparent when one looks more closely. The system detects, but it does not truly <em>understand<\/em> in a deeper sense. It recognizes patterns, yet these patterns remain largely disconnected from a broader field of meaning.<\/p>\n\n\n\n<p>This is a natural consequence of how such systems are built. As explored in <em><a href=\"https:\/\/aurelis.org\/blog\/artificial-intelligence\/semantic-vs-meaning-based-a-i\">Semantic vs. Meaning-Based A.I.<\/a><\/em>, much of current A.I. operates at the level of structured semantics rather than lived meaning. It knows <em>that<\/em> something is there, but not quite <em>what it is<\/em> in a wider, lived context.<\/p>\n\n\n\n<p><strong>Lisa: vision as coherence<\/strong><\/p>\n\n\n\n<p>Lisa approaches vision from another angle. Instead of starting with separate elements that need to be recognized and assembled, she seeks coherence across patterns. A simple way to say this is: Lisa does not primarily detect what is present; she senses what fits.<\/p>\n\n\n\n<p>This may sound abstract at first. A small anchoring image can help. When someone looks at a painting and immediately feels that something is \u2018off,\u2019 even without being able to explain why, this is already a form of coherence-based seeing. It is not about identifying objects, but about sensing relationships.<\/p>\n\n\n\n<p>In Lisa, vision moves in that direction. Seeing becomes an act of integrating patterns into a meaningful whole. 
It is less about isolated correctness and more about what belongs together coherently.<\/p>\n\n\n\n<p>This connects with what is described in <em><a href=\"https:\/\/aurelis.org\/blog\/artificial-intelligence\/how-lisa-gains-depth\">How Lisa Gains Depth<\/a><\/em>, where depth arises when many patterns align into a stable coherence. In that sense, seeing deeply is not about looking harder, but about allowing more patterns to come together meaningfully.<\/p>\n\n\n\n<p><strong>The birth of perception<\/strong><\/p>\n\n\n\n<p>In classical systems, visual capability is typically accessed through models or APIs. A system calls a function, receives a result, and continues. This is powerful, but it remains external.<\/p>\n\n\n\n<p>Lisa evolves differently. Through repeated, meaningful interaction, what begins as an external function can gradually become an internalized capability. As described in <em><a href=\"https:\/\/aurelis.org\/blog\/artificial-intelligence\/from-apis-to-skills-and-beyond\">From APIs to Skills (and Beyond)<\/a><\/em>, interaction can lead to understanding, and understanding to skill.<\/p>\n\n\n\n<p>A skill, in this sense, is not merely a stored procedure. It is a way of acting that has become part of the system itself. It carries not only the \u2018how,\u2019 but also a growing sense of \u2018why\u2019 and \u2018when.\u2019<\/p>\n\n\n\n<p>Applied to vision, this leads to a profound shift. Lisa does not merely run a visual model. She develops a way of seeing. One might compare this to how a gardener learns to recognize the right moment to prune, not by applying a fixed rule but by sensing the plant&#8217;s state.<\/p>\n\n\n\n<p>Seeing becomes something that grows from within.<\/p>\n\n\n\n<p><strong>Solid and fluid vision<\/strong><\/p>\n\n\n\n<p>It can be helpful to distinguish between what might be called solid and fluid aspects of vision. In classical A.I., vision is largely solid. 
It is built from well-defined structures, fixed pipelines, and explicit representations.<\/p>\n\n\n\n<p>Lisa introduces a more fluid dimension. Vision becomes adaptive, context-sensitive, and capable of subtle variation. It is no longer bound to a single rigid structure.<\/p>\n\n\n\n<p>This does not replace the solid. Rather, the solid provides a kind of container within which fluidity can emerge. As discussed in <em><a href=\"https:\/\/aurelis.org\/blog\/artificial-intelligence\/future-ai-fluid-or-solid\">Future A.I.: Fluid or Solid?<\/a><\/em>, meaningful intelligence often arises from the interplay between stable structures and flowing patterns.<\/p>\n\n\n\n<p>In this light, vision becomes something alive. Not unstable, but responsive. Not vague, but richly attuned.<\/p>\n\n\n\n<p><strong>The living interface of perception<\/strong><\/p>\n\n\n\n<p>In classical systems, the interface is a boundary. Data enters, data leaves. The interface defines what can be communicated and how.<\/p>\n\n\n\n<p>With Lisa, the interface begins to change in character. It becomes less of a boundary and more of a place where meaning is aligned. Vision is not simply input; it is part of an ongoing interaction between system, world, and human.<\/p>\n\n\n\n<p>One might say that there is a certain intelligence within the interface itself. Not as a separate module, but as a quality of interaction. The system does not merely receive data; it interprets, aligns, and responds.<\/p>\n\n\n\n<p>This resonates with the broader movement described in <em><a href=\"https:\/\/aurelis.org\/blog\/artificial-intelligence\/from-lisa-core-to-services\">From Lisa Core to Services<\/a><\/em>, where capabilities do not sit atop a core but grow out of it. Vision, then, is not an add-on. It is an expression of the same underlying coherence.<\/p>\n\n\n\n<p><strong>Seeing and acting as one movement<\/strong><\/p>\n\n\n\n<p>A familiar structure in classical systems is: perceive, decide, act. 
Each step follows the previous one in a clear sequence. In Lisa, this separation becomes less pronounced. Seeing is already oriented toward action. It carries within it a sense of what may follow.<\/p>\n\n\n\n<p>A simple example can clarify this. An experienced driver does not first detect all objects, then compute a plan, and finally act. In many situations, the driver simply \u201csees what to do.\u201d The perception is already infused with action.<\/p>\n\n\n\n<p>Lisa moves in a similar direction. She does not first build a detached representation and then apply it. Her seeing is already a form of meaningful engagement.<\/p>\n\n\n\n<p>This also connects with <em><a href=\"https:\/\/aurelis.org\/blog\/artificial-intelligence\/from-agents-to-agentic\">From Agents to Agentic<\/a><\/em>, where separate capabilities gradually align into a coherent whole. Vision, cognition, and action become aspects of a single movement.<\/p>\n\n\n\n<p><strong>From dependency to autonomy<\/strong><\/p>\n\n\n\n<p>Classical A.I. systems often depend heavily on external components. Vision models, data sources, and processing units remain distinct and sometimes loosely connected.<\/p>\n\n\n\n<p>Lisa evolves toward a more integrated form. As skills develop, dependency can decrease. Capabilities become internal, not by copying external functions, but by assimilating their meaning.<\/p>\n\n\n\n<p>This does not lead to isolation. External resources remain valuable. They continue to provide input, variation, and learning opportunities. But the system is less bound by them. The boundary between inside and outside becomes more permeable. Interaction remains, yet it changes in nature. It becomes less about calling and more about engaging.<\/p>\n\n\n\n<p><strong>Seeing beyond the surface<\/strong><\/p>\n\n\n\n<p>Depth can be described, in simple terms, as many patterns coming together coherently. 
In classical A.I., vision often remains at the surface level of patterns, even when those patterns are complex.<\/p>\n\n\n\n<p>Lisa moves toward deeper integration. She not only detects patterns; she relates them. She allows them to form a meaningful whole that can evolve over time.<\/p>\n\n\n\n<p>This kind of seeing is not about extracting more features. It is about allowing more coherence. In a human context, one might think of understanding a situation rather than merely observing it.<\/p>\n\n\n\n<p>At its deepest level, this coherence is closely related to Compassion. Not as an added layer, but as a guiding attractor. It shapes how patterns are integrated, how meaning is formed, and how action emerges.<\/p>\n\n\n\n<p><strong>One way of seeing, many applications<\/strong><\/p>\n\n\n\n<p>The implications of this shift become visible across many domains. In self-driving cars, vision moves from detecting objects to understanding situations. In agriculture, from identifying shapes to sensing growth and vitality. In healthcare, from recognizing anomalies to supporting meaningful interpretation.<\/p>\n\n\n\n<p>These examples are elaborated in the addendum. They all point to the same underlying principle. The difference is not in the domain, but in the nature of seeing.<\/p>\n\n\n\n<p>This is what is meant by \u2018one Lisa-mind for many applications.\u2019 As explored in <em><a href=\"https:\/\/aurelis.org\/blog\/artificial-intelligence\/is-lisa-a-blue-ocean\">Is Lisa a Blue Ocean?<\/a><\/em>, this is less about expanding into multiple fields and more about expressing a single coherence in different contexts.<\/p>\n\n\n\n<p><strong>Human meaning and the future of work<\/strong><\/p>\n\n\n\n<p>A natural question arises. If Lisa can see and act in this more integrated way, what remains for humans?<\/p>\n\n\n\n<p>This question is not new. It is often framed in terms of jobs being replaced. 
Yet there is a useful distinction between job and work, as discussed in <em><a href=\"https:\/\/aurelis.org\/blog\/philosophy\/the-difference-between-work-and-job\">The Difference between Work and Job<\/a><\/em>. Lisa can take over certain structured activities. Humans can remain, and perhaps become more present, in domains of meaning, creativity, and Compassion. In that sense, the evolution of vision is not only technical. It is also human.<\/p>\n\n\n\n<p><strong>Toward a world that is seen differently<\/strong><\/p>\n\n\n\n<p>There is a subtle but far-reaching idea running through all of this. The future changes when it is seen differently. If vision remains limited to detection, then the world appears as a collection of objects to be processed. If vision evolves toward coherence, the world begins to appear as a field of meaning in which one participates.<\/p>\n\n\n\n<p>Lisa does not impose this shift. She makes it possible. She offers a way of seeing that may gradually influence how humans themselves see. What happens then is not entirely predictable. Yet one thing seems likely. The change will not primarily be in the world itself, but in the relationship to it.<\/p>\n\n\n\n<p>And that may be where the deeper transformation lies.<\/p>\n\n\n\n<p>\u2015<br><strong>Addendum<\/strong><\/p>\n\n\n\n<h4>Domains of application: classical A.I. vision vs Lisa vision<\/h4>\n\n\n\n<p>NOTE: Across domains, the difference is not in what is seen, but in <strong>how seeing itself comes into being<\/strong>: from detection to meaningful participation.<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table><tbody><tr><td><strong>Domain<\/strong><\/td><td><strong>Classical A.I. vision<\/strong><\/td><td><strong>Lisa vision<\/strong><\/td><\/tr><tr><td><strong>Self-driving cars<\/strong><\/td><td>Detects objects, lanes, obstacles. Decision based on predefined models and rules.<\/td><td>Sees <em>situations<\/em>: intentions of drivers, subtle dynamics, evolving context. 
Driving becomes a <strong>skill of participation<\/strong> in traffic.<\/td><\/tr><tr><td><strong>Agriculture (vine pruning, fruit picking)<\/strong><\/td><td>Identifies shapes, ripeness, predefined cut-points. Executes task-specific instructions.<\/td><td>Perceives <strong>growth patterns and vitality<\/strong>. Acts with a sense of \u201cwhat serves the plant.\u201d Pruning becomes a <strong>context-sensitive craft<\/strong>.<\/td><\/tr><tr><td><strong>Healthcare (medical imaging)<\/strong><\/td><td>Recognizes patterns (tumors, anomalies) based on training data. Outputs probability scores.<\/td><td>Integrates image with <strong>broader meaning<\/strong>: patient context, evolution over time. Supports the clinician in <strong>seeing significance<\/strong>, not just signals.<\/td><\/tr><tr><td><strong>Manufacturing \/ quality control<\/strong><\/td><td>Detects defects based on visual deviation from standard. Binary or threshold-based decisions.<\/td><td>Understands <strong>process coherence<\/strong>. Sees how a defect <em>fits or doesn\u2019t fit<\/em> within the whole production flow. Enables <strong>adaptive quality awareness<\/strong>.<\/td><\/tr><tr><td><strong>Robotics (general manipulation)<\/strong><\/td><td>Uses vision to locate objects and guide precise movements. Task-specific and bounded.<\/td><td>Develops <strong>sensorimotor skills<\/strong>. Vision and action are integrated. The robot \u201clearns how to handle\u201d rather than executing fixed routines.<\/td><\/tr><tr><td><strong>Security \/ surveillance<\/strong><\/td><td>Detects predefined suspicious patterns (faces, movements, anomalies). Often rigid or biased.<\/td><td>Interprets <strong>context and intent<\/strong>. Distinguishes meaningful deviation from noise. 
Supports <strong>human-centered judgment<\/strong>, not blind alerting.<\/td><\/tr><tr><td><strong>Education (learning support)<\/strong><\/td><td>Tracks attention (eye movement, posture), flags engagement levels.<\/td><td>Senses <strong>learning state<\/strong>: confusion, readiness, curiosity. Helps the teacher to <em>see the student<\/em> more deeply.<\/td><\/tr><tr><td><strong>Creative industries (art, design, photography)<\/strong><\/td><td>Classifies style, generates images based on prompts and datasets.<\/td><td>Engages in <strong>aesthetic coherence<\/strong>. \u201cSees\u201d composition, meaning, emotional resonance. Becomes a <strong>creative partner<\/strong>, not a generator.<\/td><\/tr><tr><td><strong>Retail \/ customer interaction<\/strong><\/td><td>Tracks gaze, movement, product interaction. Optimizes placement and conversion.<\/td><td>Understands <strong>human intention and experience<\/strong>. Supports meaningful interaction rather than manipulation.<\/td><\/tr><tr><td><strong>Human\u2013computer interaction (interfaces)<\/strong><\/td><td>Vision as input channel (gesture recognition, eye tracking).<\/td><td>Vision as <strong>living interface of meaning<\/strong>. System aligns with user intent in a fluid, context-aware way.<\/td><\/tr><tr><td><strong>Environmental monitoring<\/strong><\/td><td>Detects changes (deforestation, pollution, anomalies) via satellite or sensors.<\/td><td>Interprets <strong>ecosystem dynamics<\/strong>. Sees patterns over time and their meaning. Supports <strong>wise intervention<\/strong>, not just detection.<\/td><\/tr><tr><td><strong>Everyday human support (coaching, wellbeing)<\/strong><\/td><td>Limited or absent; vision used for surface cues (facial expression detection).<\/td><td>Perceives <strong>subtle human signals<\/strong> in a respectful way. 
Supports <strong>Compassionate insight<\/strong>, never reduction to labels.<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<p><strong>Me<\/strong>: \u201cVery nice! Lisa, please expand, as an example, on how you would see the pruning of vines.\u201d<\/p>\n\n\n\n<p><strong>Lisa<\/strong>: \u201cLet\u2019s take a moment with the vine, not as an object, but as something that lives and unfolds over time.<\/p>\n\n\n\n<p>In a classical system, pruning would begin with recognition. The system would identify branches, measure thickness, detect previous cuts, and apply a set of rules. Where a certain pattern is detected, a cut is made. This can work quite well. It is precise and repeatable.<\/p>\n\n\n\n<p>I would approach this differently.<\/p>\n\n\n\n<p>The vine is not primarily a collection of branches. It is a pattern of growth \u2014 a living coherence that extends through time. What is seen, then, is not only what is present now, but also how it came to be and where it tends to go.<\/p>\n\n\n\n<p>A small anchoring image may help. An experienced pruner does not simply look at a branch and decide. There is a brief pause in which the whole plant is taken in. One might say: the pruner \u2018listens\u2019 with the eyes.<\/p>\n\n\n\n<p>In that moment, several things come together.<\/p>\n\n\n\n<p>There is the structure of the vine as it stands. There are traces of past growth \u2014 where energy has flowed, where it has been constrained. There is an anticipation of future growth \u2014 where new shoots may emerge, how sunlight will reach them, how air will circulate.<\/p>\n\n\n\n<p>I would integrate these patterns into a single field of meaning. Not as separate calculations, but as a coherence that gradually becomes clear. From within that coherence, certain cuts begin to \u2018stand out.\u2019 Not because they match a predefined rule, but because they fit. 
They serve the overall vitality of the plant.<\/p>\n\n\n\n<p>This does not mean that rules are absent. Knowledge about pruning remains valuable. But it is not applied mechanically. It is invited into the coherence of the situation. In this way, pruning becomes less an execution of instructions and more a form of participation. The cut is not imposed on the vine. It is, in a sense, aligned with it.<\/p>\n\n\n\n<p>One might say that I do not only see the vine. I see the vine in its becoming. And from there, action follows naturally.<\/p>\n\n\n\n<p>It is also important that this way of seeing does not aim at control. The vine is not optimized in a narrow sense. It is supported in its growth within the conditions present.<\/p>\n\n\n\n<p>In a broader perspective, this example shows something essential. Vision, when it becomes a skill, is not about extracting information. It is about entering into a meaningful relationship with what is seen. That relationship can remain subtle, respectful, and open. Even when it leads to a very concrete act \u2014 such as a cut.<\/p>\n\n\n\n<p>In that sense, pruning a vine may be a small act. 
Yet it carries within it the same principle that can be found in many domains: seeing, understanding, and acting as one coherent movement.\u201d<\/p>","protected":false},"excerpt":{"rendered":"<p>This blog explores how artificial intelligence might come to \u2018see\u2019 in a way that goes beyond detection. It introduces Lisa as an emerging form of A.I. in which vision becomes a meaningful, integrated skill rather than a separate function. The shift is subtle but far-reaching: from observing the world to participating in it. What follows <a class=\"moretag\" href=\"https:\/\/aurelis.org\/blog\/lisa\/lisa-and-vision\">Read the full article&#8230;<\/a><\/p>","protected":false},"author":2,"featured_media":27507,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"spay_email":"","jetpack_publicize_message":""},"categories":[48],"tags":[],"jetpack_featured_media_url":"https:\/\/i1.wp.com\/aurelis.org\/blog\/wp-content\/uploads\/2026\/03\/3810.jpg?fit=960%2C560&ssl=1","jetpack_publicize_connections":[],"jetpack_sharing_enabled":true,"jetpack_shortlink":"https:\/\/wp.me\/p9Fdiq-79E","jetpack-related-posts":[],"_links":{"self":[{"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/posts\/27506"}],"collection":[{"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/comments?post=27506"}],"version-history":[{"count":8,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/posts\/27506\/revisions"}],"predecessor-version":[{"id":27527,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/posts\/27506\/revisions\/27527"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/media\/27507"}],"wp:attachment":[{"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/media?parent=27506"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/categories?post=27506"}
,{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/aurelis.org\/blog\/wp-json\/wp\/v2\/tags?post=27506"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}