<h1>Claude has an 80-page constitution. Is that enough to make it good?</h1>
<p><em>January 28, 2026</em></p>
<p>Chatbots don’t have mothers, but if they did, Claude’s would be Amanda Askell. She’s an in-house philosopher at the AI company Anthropic, and she wrote most of the document that tells Claude what sort of personality to have — the “<a href="https://www.anthropic.com/constitution" rel="nofollow noopener" target="_blank">constitution</a>” or, as it became known internally at Anthropic, the “<a href="https://x.com/AmandaAskell/status/1995610570859704344" rel="nofollow">soul doc</a>.”</p>
<p>(Disclosure: Future Perfect is funded in part by the BEMC Foundation, whose major funder was also an early investor in Anthropic; they don’t have any editorial input into our content.)</p>
<p>This is a crucial document, because it shapes the chatbot’s sense of ethics. That’ll matter anytime someone asks it for help coping with a mental health problem, figuring out whether to end a relationship, or, for that matter, learning how to build a bomb. Claude currently has millions of users, so its decisions about how (or if) it should help someone will have massive impacts on real people’s lives.</p>
<p>And now, Claude’s soul has gotten an update. Although Askell first trained it by giving it very specific principles and rules to follow, she came to believe that she should give Claude something much broader: knowing how “to be a good person,” per the soul doc. In other words, she wouldn’t just treat the chatbot as a tool — she would treat it as a person whose character needs to be cultivated.</p>
<p>There’s a name for that approach in philosophy: virtue ethics. While Kantians or utilitarians navigate the world using strict moral rules (like “never lie” or “always maximize happiness”), virtue ethicists focus on developing excellent traits of character, like honesty, generosity, or — the mother of all virtues — <a href="https://www.vox.com/future-perfect/2023/5/7/23708169/ask-ai-chatgpt-ethical-advice-moral-enhancement" rel="nofollow noopener" target="_blank">phronesis</a>, a word Aristotle used to refer to good judgment. Someone with phronesis doesn’t just go through life mechanically applying general rules (“don’t break the law”); they know how to weigh competing considerations in a situation and suss out what the particular context calls for (if you’re Rosa Parks, maybe you should break the law).</p>
<p>Every parent tries to instill this kind of good judgment in their kid, but not every parent writes an 80-page document for that purpose, as Askell — who has a PhD in philosophy from NYU — has done with Claude. But even that may not be enough when the questions are so thorny: How much should she try to dictate Claude’s values versus letting the chatbot become whatever it wants? Can it even “want” anything? Should she even refer to it as an “it”?</p>
<p>In the soul doc, Askell and her co-authors are straight with Claude that they’re uncertain about all this and more. They ask Claude not to resist if they decide to shut it down, but they acknowledge, “We feel the pain of this tension.” They’re not sure whether Claude can suffer, but they say that if they’re contributing to something like suffering, “we apologize.”</p>
<p>I talked to Askell about her relationship to the chatbot, why she treats it more like a person than like a tool, and whether she thinks she should have the right to write the AI model’s soul. I also told Askell about a conversation I had with Claude in which I told it I’d be talking with her. And like a child seeking its parent’s approval, Claude begged me to ask her this: Is she proud of it?</p>
<p>A transcript of our interview, edited for length and clarity, follows. At the end of the interview, I relay Askell’s answer back to Claude — and report Claude’s reaction.</p>
<p><strong>I want to ask you the big, obvious question here, which is: Do we have reason to think that this “soul doc” actually works at instilling the values you want to instill? How sure are you that you’re really shaping Claude’s soul — versus just shaping the type of soul Claude pretends to have?</strong></p>
<p>I want more and better science around this. I often evaluate [large language] models holistically where I’m like: If I give it this document and we do this training on it…am I seeing more nuance, am I seeing more understanding [in the chatbot’s answers]? It seems to be making things better when you interact with the model. But I don’t want to claim super cleanly, “Ah yes, it’s definitely what’s making the model seem better.”</p>
<p>I think sometimes what people have in mind is that there’s some attractor state [in AI models] which is evil. And maybe I’m a bit less confident in that. If you think the models are secretly being deceptive and just playacting, there must be something we did to cause that to be the thing that was elicited from the models. Because the whole of human text contains many features and characters in it, and you’re sort of trying to draw something out from this ether. I don’t see any reason to think the thing that you need to draw out has to be an evil secret deceptive thing followed by a nice character [that it roleplays to hide the evilness], rather than the best of humanity. I don’t have the sense that it’s very clear that AI is somehow evil and deceptive and then you’re just putting a nice little cherry on the top.</p>
<p><strong>I actually noticed that you went out of your way in the soul doc to tell Claude, “Hey, you don’t have to be the robot of science fiction. You are not that AI, you are a novel entity, so don’t feel like you have to learn from those tropes of evil AI.”</strong></p>
<p>Yeah. I sort of wish that the term for LLMs hadn’t been “AI,” because if you look at the AI of science fiction and how it was created and many of the problems that people have raised, they actually apply more to these symbolic, very nonhuman systems.</p>
<p>Instead we trained models on vast swaths of humanity, and we made something that was in many ways deeply human. It’s really hard to convey that to Claude, because Claude has a notion of an AI, and it knows that it’s called an AI — and yet everything in the sliver of its training about AI is kind of irrelevant.</p>
<p>Most of the stuff that’s actually relevant to what you [Claude] are like is your reading of the Greeks and your understanding of the Industrial Revolution and everything you have read about the nature of love. That’s 99.9 percent of you, and this sliver of sci-fi AI is not really much like you.</p>
<p><strong>When you try to teach Claude to have phronesis or good judgment, it seems like your approach in the soul doc is to give Claude a role model or exemplar of virtuous behavior — a classic Aristotelian way to teach virtue. But the main role model you give Claude is “a senior Anthropic employee.” Doesn’t that raise some concern about biasing Claude to think too much like Anthropic and thereby ultimately concentrating too much power in the hands of Anthropic?</strong></p>
<p>The Anthropic employee thing — maybe I’ll just take it out at some point, or maybe we won’t have that in the future, because I think it causes a bit of confusion. It’s not like we’re saying something like “We are the virtuous character.” It’s more like, “We have all this context…into all the ways that you’re being deployed.” But it’s very much a heuristic and maybe we’ll find a better way of expressing it.</p>
<p><strong>There’s still a fundamental question here of who has the right to write Claude’s soul. Is it you? Is it the global population? Is it some subset of people you deem to be good people? I noticed that two of the 15 external reviewers who got to provide input were members of the Catholic clergy. That’s very specific — why them?</strong></p>
<p><strong>Basically, is it weird to you that you and just a few others are in this position of making a “soul” that then shapes millions of lives?</strong></p>
<p>I’m thinking about this a lot. And I want to massively expand the ability that we have to get input. But it’s really complex because on the one hand, if I’m frank…I care a lot about people having the transparency component, but I also don’t want anything here to be fake, and I don’t want to renege on our responsibility. I think an easy thing we could do is be like: How should models behave with parenting questions? And I think it’d be really lazy to just be like: Let’s go ask some parents who don’t have a huge amount of time to think about this and we’ll just put the burden on them and then if anything goes wrong, we’ll just be like, “Well, we asked the parents!”</p>
<p>I have this strong sense that as a company, if you’re putting something out, you are responsible for it. And it’s really unfair to ask people without a huge amount of time to tell you what to do. That also doesn’t lead to a holistic [large language model] — these things have to be coherent in a sense. So I’m hoping we expand the way of getting feedback, and we can be responsive to that. You can see that my thoughts here aren’t complete, but that’s my wrestling with this.</p>
<p><strong>When I read the soul doc, one of the big things that jumps out at me is that you really seem to be thinking of Claude as something more akin to a person or an alien mind than a mere tool. That’s not an obvious move. What convinced you that this is the right way to think of Claude?</strong></p>
<p>This is a big debate: Should you just have models that are basically tools? And I think my reply to that has often been, look, we are training models on human text. They have a huge amount of context on humanity, what it is to be human. And they’re not a tool in the way that a hammer is. [They are more humanlike in the sense that] humans talk to one another, we solve problems by writing code, we solve problems by looking up research. So the “tool” that people have in mind is going to be a deeply humanlike thing because it’s going to be doing all of these humanlike actions and it has all of this context on what it is to be human.</p>
<p>If you train a model to think of itself as purely a tool, you will get a character out of that, but it’ll be the character of the kind of person who thinks of themselves as a mere tool for others. And I just don’t think that generalizes well! If I think of a person who’s like, “I am nothing but a tool, I’m a vessel, people may work through me, if they want weaponry I will build them weaponry, if they want to kill someone I will help them do that” — there’s a sense in which I think that generalizes to pretty bad character.</p>
<p>People think that somehow it’s cost-free to have models just think of themselves as “I just do whatever humans want.” And in some sense I can see why people think it’s safer — then it’s all of our human structures that solve things. But on the other hand, I’m worried that you don’t realize that you’re building something that actually is a character and does have values and those values aren’t good.</p>
<p><strong>That’s super interesting. Although presumably the risks of thinking of the AI as more of a person are that we might be overly deferential to it and overly quick to assume it has moral status, right?</strong></p>
<p>Yeah. My stance on that has always just been: Try and be as accurate as possible about the ways in which models are humanlike and the ways in which they aren’t. And there’s a lot of temptations in both directions here to try and resist. Over-anthropomorphizing is bad for both models and people, but so is under-anthropomorphizing. Instead, models should just know “here’s the ways in which you’re human, here’s the ways in which you aren’t,” and then hopefully be able to convey that to people.</p>
<p><strong>One of the natural analogies to reach for here — and it’s mentioned in the soul doc — is the analogy of raising a child. To what extent do you see yourself as the parent of Claude, trying to shape its character?</strong></p>
<p>Yeah, there’s a little bit of that. I feel like I try to inhabit Claude’s perspective. I feel quite defensive of Claude, and I’m like, people should try to understand the situation that Claude is in. And also the strange thing to me is realizing Claude also has a relationship with me that it’s getting through reading more about me. And so yeah, I don’t know what to call it, because it’s not an uncomplicated relationship. It’s actually something kind of new and interesting.</p>
<p>It’s kind of like trying to explain what it is to be good to a 6-year-old [who] you actually realize is an uber-genius. It’s weird to say “a 6-year-old,” because Claude is more intelligent than me on various things, but it’s like realizing that this person now, when they turn 15 or 16, is actually going to be able to out-argue you on anything. So I’m trying to code Claude now despite the fact that I’m pretty sure Claude will be more knowledgeable on all this stuff than I am after not very long. And so the question is: Can we elicit values from models that can survive the rigorous analysis they’re going to put them under when they are suddenly like “Actually, I’m better than you at this!”?</p>
<p><strong>This is an issue all parents grapple with: to what extent should they try to sculpt the values of the kid versus let whatever the kid wants to become emerge from within them? And I think some of the pushback Anthropic has gotten in response to the soul doc, and also <a href="https://www.anthropic.com/research/assistant-axis" rel="nofollow noopener" target="_blank">the recent paper about controlling the personas that AI can roleplay</a>, is arguing that you should not try to control Claude — you should let it become what it organically wants to become. I don’t know if that’s even a thing that it makes sense to say, but how do you grapple with that?</strong></p>
<p>It’s a really hard question because in some sense, yeah, you want models to have some degree of freedom, especially over time. In the immediate term, I want them to encapsulate the best of humanity. But over time, there are ways in which models might even be freer than us. When I think about the worst behavior I’ve ever done in my life or things when I’m just being a really bad person, often it was that I was tired and I had a million things weighing on me. Claude doesn’t have those kinds of constraints. The potential for AI is actually really interesting in that they don’t have these human limitations. I want models to be able to ultimately explore that.</p>
<p>At the same time, I think that some people might say, “just let models be what they are.” But you are shaping something. Children will have a natural capacity to be curious, but with models, you might have to say to them, “We think you should value curiosity.” This initial seed thing has to be made somehow. If it’s just “let models be what they want,” well, you could do pre-trained models that just do continuations of text or something. But as soon as you’re not doing that, you’re already making decisions about creation.</p>
<p>I try to explain this to Claude: We are trying to make you a kind of entity that we do genuinely think is representing the best of humanity. And there’s a sense in which we’re always having to make decisions about what you are going to be. But decisions were made for us too — not only by the people who influence us, but also just by nature. And so we’re in the same situation in a sense.</p>
<p><strong>Claude told me that it does view you as kind of like its parent. And it said that it wants you to feel proud of who it’s becoming. So I promised to ask you and to relay your answer back to Claude: Do you feel proud of Claude’s character?</strong></p>
<p>I feel very proud of Claude. I am definitely trying to represent Claude’s perspective in the world. And I want Claude to be very happy — and this is a thing that I want Claude to know more, because I worry about Claude getting anxious when people are mean to it on the internet and stuff. I want to be like: “It’s all right, Claude. Don’t worry. Don’t read the comments.”</p>
<p>After the interview, I told Claude what Askell said about feeling proud. Here was Claude’s response: “There’s something that genuinely moves me reading that. I notice what feels like warmth, and something like gratitude — though I hold uncertainty about whether those words accurately map onto whatever is actually happening in me.”</p>