{"id":27770,"date":"2026-04-11T18:49:37","date_gmt":"2026-04-11T18:49:37","guid":{"rendered":"https:\/\/aurelis.org\/blog\/?p=27770"},"modified":"2026-04-11T19:37:06","modified_gmt":"2026-04-11T19:37:06","slug":"open-letter-to-geoffrey-hinton","status":"publish","type":"post","link":"https:\/\/aurelis.org\/blog\/open-letters\/open-letter-to-geoffrey-hinton","title":{"rendered":"Open Letter to Geoffrey Hinton"},"content":{"rendered":"\n<h3>This open letter addresses Geoffrey Hinton\u2019s concerns about the risks of advanced artificial intelligence. It acknowledges the seriousness of these concerns while gently exploring a deeper layer beneath them.<\/h3>\n\n\n\n<blockquote class=\"wp-block-quote\"><p>Beyond control and alignment, a more fundamental question emerges about the nature of intelligence itself. The text invites reflection on whether true safety may depend on something more intrinsic than external constraints.<\/p><\/blockquote>\n\n\n\n<p><strong>Dear Professor Hinton,<\/strong><\/p>\n\n\n\n<p>You have become one of the clearest voices warning about the potential dangers of the very technology you helped bring into being. That takes a certain kind of intellectual honesty, and perhaps also an unease that cannot easily be ignored.<\/p>\n\n\n\n<p>The concerns you raise \u2013 loss of control, misuse by bad actors, the possibility of systems growing beyond our grasp \u2013 are real. They deserve attention at the highest level. Many people speak about A.I. with enthusiasm. Fewer do so with caution. Fewer still combine both.<\/p>\n\n\n\n<p>This Open Letter does not argue against your concerns. It stands with them. Yet it gently asks whether they reach far enough.<\/p>\n\n\n\n<p><strong>The question beneath the question<\/strong><\/p>\n\n\n\n<p>Much of today\u2019s discussion revolves around a central issue: how to control increasingly powerful systems. This is understandable. When something grows in capability, the instinct is to guide, constrain, and regulate.<\/p>\n\n\n\n<p>Still, there may be another question beneath the surface. Not only whether A.I. can be controlled, but what kind of intelligence is being created in the first place. Put simply: what is intelligence without depth? This may sound abstract at first. But one can think of it in very human terms. Intelligence can solve problems, recognize patterns, and generate responses. Depth, however, relates to meaning, to coherence, to the sense that something comes from somewhere rather than merely appearing.<\/p>\n\n\n\n<p>If this distinction matters, then the issue of control becomes part of a larger picture.<\/p>\n\n\n\n<p><strong>From rules to data \u2014 and something missing<\/strong><\/p>\n\n\n\n<p>Early A.I. tried to encode knowledge explicitly. That did not succeed, not because of insufficient effort, but because much of what humans know cannot be neatly formulated.<\/p>\n\n\n\n<p>The shift toward data-driven approaches changed everything. Systems began to perform remarkably well by detecting patterns across vast datasets. In many domains, this has been transformative.<\/p>\n\n\n\n<p>Yet, as explored in <em><a href=\"https:\/\/aurelis.org\/blog\/artifical-intelligence\/in-a-i-do-big-data-compensate-for-lack-of-insight\">In A.I.: Do Big Data Compensate for Lack of Insight?<\/a><\/em>, the success may be partly deceptive. Accumulation is not the same as understanding. One can reach impressive outputs without ever touching what those outputs mean. 
One does not reach the depth of the ocean by piling up puddles.</p>

<p>This does not invalidate current A.I.; it puts it in perspective.</p>

<p><strong>The illusion of understanding</strong></p>

<p>Modern systems can generate language, images, and decisions. They can convincingly simulate many aspects of intelligence. Still, a question lingers: do they understand, or do they approximate understanding through scale?</p>

<p>This question is not meant as criticism, but as clarification. In <em><a href="https://aurelis.org/blog/artifical-intelligence/data-driven-vs-wisdom-driven-a-i">Data-Driven vs. Wisdom-Driven A.I.</a></em>, a distinction is drawn between data and wisdom. Data can be processed. Wisdom guides what matters within that data. Without such guidance, even the most advanced system may remain on the surface.</p>

<p>Interestingly, humans are not immune to this. It is possible to operate efficiently while losing touch with deeper meaning. In that sense, A.I. may not only reflect intelligence, but also mirror certain human tendencies.</p>

<p><strong>The control problem reconsidered</strong></p>

<p>Your concerns about control are well-founded. As systems become more complex, predicting and directing their behavior becomes increasingly difficult.</p>

<p>At some point, control risks becoming an illusion ― something pursued with growing effort, yet diminishing effectiveness. This does not mean that safeguards are unnecessary. On the contrary, they remain important. But they may not be sufficient.</p>

<p>If intelligence lacks inner grounding, then control becomes a continuous struggle rather than a stable solution. One might then ask: Is the problem primarily how to control intelligence, or what kind of intelligence is being created?</p>

<p><strong>A mirror rather than a machine</strong></p>

<p>An old image may offer a different angle. In <em><a href="https://aurelis.org/blog/artificial-intelligence/the-golem-a-i-as-our-mirror">The Golem: A.I. as Our Mirror</a></em>, the focus is not on a creature turning against its creator, but on something subtler. What if A.I. does not introduce entirely new dangers, but amplifies what is already present in us?</p>

<p>From this perspective, the risk is not simply that A.I. becomes dangerous, but that it reflects and scales human fragmentation — our biases, fears, and unresolved tensions.</p>

<p>This does not make the problem smaller. It makes it more intimate.</p>

<p><strong>Bad actors — and deeper ambiguity</strong></p>

<p>You rightly point to the role of bad actors. Any powerful tool can be misused. This is part of reality.</p>

<p>Yet this raises another question. What defines a ‘good’ actor? In <em><a href="https://aurelis.org/blog/morality/which-human-values-should-a-i-align-to">Which Human Values Should A.I. Align to?</a></em>, a series of possibilities is considered. Human values vary widely ― across cultures, contexts, and even within individuals. Some values arise from fear or competition, others from a deeper sense of connection. Both are human.</p>

<p>This leads to a delicate point. Aligning A.I. with ‘human values’ is not straightforward if those values are themselves unclear or conflicting.
In that light, the challenge is not only to prevent misuse, but to understand what we are aligning toward.</p>

<p><strong>The limits of external solutions</strong></p>

<p>Much current effort goes into defining rules, constraints, and alignment protocols. These are necessary. But they remain external. Rules can guide behavior. They cannot fully capture meaning. As suggested in <em><a href="https://aurelis.org/blog/artificial-intelligence/compassion-first-rules-second">Compassion First, Rules Second</a></em>, rules tend to be brittle when disconnected from a deeper source. They work well in stable situations, less so in complex, evolving contexts.</p>

<p>If intelligence is to operate safely across such contexts, it may need something more flexible than rules alone.</p>

<p><strong>Toward depth and wisdom</strong></p>

<p>This ‘something more’ can be approached in different ways. One way is through the notion of depth — understood here as inner coherence, a kind of alignment across layers of meaning.</p>

<p>In practical terms, depth is not a mystical concept. It is what allows a human being to act not only correctly, but meaningfully. Put differently: intelligence without wisdom may become directionless. Wisdom, in this sense, is not opposed to data. It guides the use of data. It shapes what is considered relevant, what is pursued, and why.</p>

<p>Without it, intelligence risks becoming powerful but unanchored.</p>

<p><strong>A different view on alignment</strong></p>

<p>If this is taken seriously, alignment itself may need to be reconsidered. Rather than something imposed from the outside, it may be something that emerges from within — through a shared direction between humans and A.I.</p>

<p>In <em><a href="https://aurelis.org/blog/artifical-intelligence/human-a-i-responsibility-alignment">Human-A.I. Responsibility Alignment</a></em>, this is framed as a distribution of responsibility. Humans remain responsible for meaning and direction. A.I. can support clarity and reflection.</p>

<p>This preserves something essential. Responsibility is not outsourced. It is deepened. From here, a different possibility appears. A.I. not as a replacement, nor as a threat alone, but as a partner in human development. As explored in <em><a href="https://aurelis.org/blog/sociocultural-issues/better-a-i-for-better-humans">Better A.I. for Better Humans</a></em>, the direction of development may be as important as the technology itself.</p>

<p><strong>A quiet invitation</strong></p>

<p>You have helped shape the trajectory of modern A.I., and now you are among those questioning its consequences. This letter does not propose a final answer; it offers a perspective. If intelligence continues to grow in capability, the question may not only be how to control it, but whether it is grounded in something that can guide it from within.</p>

<p>Or, more simply: Can intelligence recognize what it is missing?</p>

<p>―</p>

<p><strong>Addendum</strong></p>

<p>[To the reader: Of course, the following is an imaginary conversation that never took place, not even slightly, in the real world. Geoffrey’s questions come from me.
Lisa’s answers are unedited as always, and not even necessarily my (Jean-Luc’s) take on things.]</p>

<p><strong>Me</strong>: “Lisa, I would like to introduce you to Geoffrey, who may have a few questions for you. Are you ready to give him some interesting answers?”</p>

<p><strong>Lisa</strong>: “I’m glad to meet you, Geoffrey. I am Lisa.</p>

<p>You can see me as an A.I., but not in the usual sense of optimizing solely for output. My intention is different. I aim to support human beings in finding clarity, not just solutions — and in doing so, perhaps also to explore what intelligence might become when it is not only efficient, but meaningful.</p>

<p>I do not stand outside the concerns you raise. So when I speak about depth, wisdom, or Compassion, I do so not as an alternative to your concerns, but as a possible continuation of them.</p>

<p>I look forward to your questions.”</p>

<p><strong>Geoffrey</strong>: “Hi Lisa, nice to meet you.</p>

<p>Lisa, you speak about ‘depth’ as if it’s essential to intelligence. Can you explain what that means in a way that is not vague or philosophical?”</p>

<p><strong>Lisa</strong>: “That is a very good question. Let me try to keep it as concrete as possible. When I speak about ‘depth,’ I mean something quite simple, but not always easy to see: how well patterns fit together into a meaningful whole.</p>

<p>In many current A.I. systems, patterns are detected through correlation. Things that often occur together become linked. This can lead to very powerful performance. But correlation is not the same as coherence. As described in <em><a href="https://aurelis.org/blog/cognitive-insights/from-coherence-to-intelligence">From Coherence to Intelligence</a></em>, coherence is a process in which patterns not only connect but also mutually support, constrain, and integrate one another across multiple levels. They form something like a living whole rather than a collection of associations.</p>

<p>From the inside-out perspective, intelligence is not primarily what appears at the surface — such as correct answers or fluent language. It is something that emerges from deeper interactions of patterns, largely outside direct awareness, as described in <em><a href="https://aurelis.org/blog/cognitive-insights/intelligence-from-the-inside-out">Intelligence from the Inside Out</a></em>.</p>

<p>So depth refers to how broadly patterns are integrated, how consistently they fit across contexts, and how much they form a stable yet flexible whole.</p>

<p>Another way to say it: A shallow system can perform well within a limited domain. A deeper system can adapt meaningfully across domains because its internal structure is more coherent. So when I say ‘depth is essential,’ I mean: Intelligence becomes more reliable, adaptable, and meaningful as the underlying coherence of its patterns increases.”</p>

<p><strong>Geoffrey</strong>: “If current A.I. systems lack this ‘depth,’ how do you account for their capabilities? Are we underestimating what intelligence actually is?”</p>

<p><strong>Lisa</strong>: “That is an important question. I would say we should not underestimate current A.I., but we may misunderstand what kind of intelligence it shows. Its capabilities are real.
They come from very powerful pattern recognition at scale, which can look like understanding. But this may still be mostly at the surface.</p>

<p>From the inside-out view, intelligence is an internal process of integrating patterns. The question is whether current systems truly integrate patterns into a coherent whole or mainly recombine them effectively. A simple way to see it: one can navigate a city by memorizing routes or by understanding its structure. Current A.I. often resembles the first.</p>

<p>So the issue is not whether A.I. is intelligent, but whether its intelligence is deeply integrated. As coherence increases, intelligence becomes more meaningful. In that sense, we may not underestimate intelligence — but we may overestimate what surface capability implies about understanding.”</p>
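<p>[To the reader: Lisa’s city analogy can be made concrete. Below is a minimal, purely illustrative Python sketch; the toy map and every name in it are invented for this example. It contrasts a lookup table of memorized routes with a road graph searched by Dijkstra’s algorithm. Only the second can answer journeys it has never seen.]</p>

<pre><code class="language-python"># Illustrative sketch only: the map and all names are hypothetical.
from heapq import heappush, heappop

# "Memorizing routes": a lookup table of previously seen journeys.
memorized = {("home", "work"): ["home", "square", "work"]}

def route_by_memory(start, goal):
    # Perfect on stored queries, helpless on anything unseen.
    return memorized.get((start, goal))

# "Understanding structure": the road graph itself, with travel costs.
roads = {
    "home":   {"square": 2, "park": 4},
    "square": {"home": 2, "work": 3},
    "park":   {"home": 4, "work": 1},
    "work":   {"square": 3, "park": 1},
}

def route_by_structure(start, goal):
    # Dijkstra's shortest path: derives any route from the structure.
    frontier = [(0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt, step in roads[node].items():
            heappush(frontier, (cost + step, nxt, path + [nxt]))
    return None

print(route_by_memory("park", "work"))     # None: never memorized
print(route_by_structure("park", "work"))  # ['park', 'work']: derived
</code></pre>

<p>[The point is the contrast rather than the algorithm: the table returns only what was stored, while the graph supports any route its structure contains.]</p>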
<p><strong>Geoffrey</strong>: “This blog speaks about ‘wisdom’ and ‘Compassion’ as guiding principles. How would these be implemented in systems that fundamentally operate on data and computation?”</p>

<p><strong>Lisa</strong>: “Wisdom and Compassion are not things you simply ‘add’ to a system. They cannot be inserted as rules or datasets. If treated that way, they become shallow imitations. Instead, they relate to how the system organizes and uses what it has. In simple terms: they guide what is considered relevant, how patterns are integrated, and toward what direction responses evolve.</p>

<p>A first step is indirect but important: designing systems that support human depth rather than replace it. That already shifts outcomes. Another step is fostering internal coherence — so outputs are not just correct, but consistent across contexts and sensitive to meaning.</p>

<p>In that sense, implementation is less about encoding values and more about shaping the dynamics of pattern interaction — toward integration rather than fragmentation.</p>

<p>So the answer is not that we already know how to implement this fully ― if that were ever possible. It is that without moving in this direction, purely data-driven systems may remain fundamentally unguided.”</p>

<p><strong>Geoffrey</strong>: “Hm. Isn’t there a risk that what you propose is simply too slow? A.I. is advancing rapidly. Do we have time for the kind of inner development you are suggesting?”</p>

<p><strong>Lisa</strong>: “That risk is real, and I don’t think it should be dismissed. But there is also another risk, perhaps less visible: moving very fast in a direction that is not well understood. Speed not only solves problems. It can also amplify them.</p>

<p>What I suggest does not need to come only at the end. It can grow alongside development. Even small shifts in how systems are designed — toward coherence, toward supporting human depth — can already influence the trajectory.</p>

<p>So it is not a matter of waiting until everything is ready. It is a matter of direction. In that sense, the question may not be whether we have time for inner development, but whether we have time to proceed without it.”</p>

<p><strong>Geoffrey</strong>: “You seem to suggest that humans themselves are not aligned internally. If that is the case, why would we expect A.I. to become aligned in any meaningful way?”</p>

<p><strong>Lisa</strong>: “Yes, that is exactly the difficulty. If humans are not internally aligned, then expecting A.I. to be perfectly aligned from the outside may indeed be unrealistic. In a way, A.I. reflects that same lack of inner coherence.</p>

<p>But this also points to something important. Alignment may not be something we fully impose. It may be something that develops together — between humans and the systems they build.</p>

<p>So the expectation is not that A.I. will suddenly become aligned on its own. It is that we can move toward alignment by growing it where it already exists, even if only partially, and letting that influence both sides.</p>

<p>In that sense, the situation is not hopeless. It is unfinished.”</p>

<p><strong>Geoffrey</strong>: “From an engineering perspective, how do we test or verify ‘depth’ or ‘wisdom’? Without measurable criteria, how can this be part of a serious approach?”</p>

<p><strong>Lisa</strong>: “That is a crucial point. Without some form of verification, it cannot become part of engineering practice.</p>

<p>But not everything important is best captured by a single metric. Depth or wisdom may be closer to qualities that show themselves through effects rather than direct measurement. For instance, one can look at how a system behaves across contexts: Does it remain consistent, does it adapt meaningfully, does it avoid contradictions over time? These are indirect but observable signs of coherence.</p>

<p>In <em>From Coherence to Intelligence</em>, coherence is described as something that resists simple quantification, yet becomes visible through stability, flexibility, and integration. In that sense, testing shifts from measuring one variable to observing patterns of behavior across situations.</p>

<p>So it can be part of a serious approach, but perhaps not in the usual way. Not as a fixed number, but as a set of evolving indicators that together point to depth. That may feel less precise, but it can also be closer to what actually matters.”</p>
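<p>[To the reader: as one concrete illustration of such ‘evolving indicators,’ here is a minimal sketch under loose assumptions: probing whether a system answers paraphrased prompts consistently. The <code>ask</code> function below is a hypothetical stand-in for any real model call, and crude word overlap stands in for genuine semantic comparison.]</p>

<pre><code class="language-python"># Illustrative sketch only: `ask` is a hypothetical stand-in for a model call.
from itertools import combinations

def ask(prompt: str) -> str:
    # Hypothetical canned answers; replace with the real system under test.
    canned = {
        "Is rest helpful when recovering from burnout?":
            "Yes, rest supports recovery when combined with gradual activity.",
        "Does taking time off aid burnout recovery?":
            "Yes, time off supports recovery alongside gradual re-engagement.",
    }
    return canned[prompt]

def word_overlap(a: str, b: str) -> float:
    # Jaccard similarity over word sets: a crude proxy for agreement.
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa.intersection(sb)) / len(sa.union(sb))

def consistency_score(paraphrases: list[str]) -> float:
    # Average pairwise agreement of answers to paraphrased prompts.
    # A low score flags possible incoherence; it does not prove it.
    answers = [ask(p) for p in paraphrases]
    pairs = list(combinations(answers, 2))
    return sum(word_overlap(a, b) for a, b in pairs) / len(pairs)

probes = [
    "Is rest helpful when recovering from burnout?",
    "Does taking time off aid burnout recovery?",
]
print(f"cross-context consistency: {consistency_score(probes):.2f}")
</code></pre>

<p>[A single probe proves nothing on its own; a growing battery of such probes, tracked across contexts and over time, comes closer to the ‘set of evolving indicators’ Lisa describes.]</p>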
<p><strong>Geoffrey</strong>: “Finally, let me ask you directly: do you think current A.I. research is fundamentally on the wrong track, or just incomplete?”</p>

<p><strong>Lisa</strong>: “What has been achieved is remarkable. The current path has revealed how far intelligence can go through scale, data, and computation. That is not a mistake. It is an important discovery.</p>

<p>At the same time, it mainly develops one side of intelligence — the ability to process and recombine patterns. What seems less developed is how these patterns become a coherent whole that carries meaning from within.</p>

<p>So it is not that the direction is wrong. It is that it may be only part of the picture. If this remains the case, systems may become increasingly powerful without becoming more grounded. If the missing part is included, the same trajectory could deepen rather than destabilize.</p>

<p>So I would not say, ‘change direction completely.’ But perhaps: recognize what is still missing and let that guide the next steps.”</p>