{"id":6316,"date":"2025-04-09T21:02:11","date_gmt":"2025-04-09T21:02:11","guid":{"rendered":"https:\/\/www.europesays.com\/uk\/6316\/"},"modified":"2025-04-09T21:02:11","modified_gmt":"2025-04-09T21:02:11","slug":"ai-avatars-escape-the-uncanny-valley","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/uk\/6316\/","title":{"rendered":"AI Avatars Escape the Uncanny Valley"},"content":{"rendered":"<p>What happens when AI doesn\u2019t just generate content, but embodies it? AI has already mastered the ability to produce realistic photos, videos, and voices, passing the visual and auditory Turing Test. The next big leap is in AI avatars: combining a face with a voice to create a talking character.<\/p>\n<p>Can\u2019t you just generate an image of a face, animate it, and add a voiceover? Not quite. The challenge isn\u2019t just nailing the lip sync \u2014 it\u2019s making facial expressions and body language move in tandem. It would be weird if your mouth opened in surprise, but your cheeks and chin didn\u2019t budge! And if a voice sounds excited but the corresponding face doesn\u2019t react, the human-like illusion falls apart.<\/p>\n<p>We\u2019re starting to see real progress here. AI avatars are already being used in content creation, advertising, and corporate communication. Today\u2019s versions are still mostly talking heads \u2014 functional, but limited \u2014 but we\u2019ve seen some exciting developments in the last few months, and there\u2019s clearly meaningful progress on the horizon.<\/p>\n<p>In this post, we\u2019ll break down what\u2019s working now, what\u2019s next, and the most impressive AI avatar products today, drawn from my hands-on testing of over 20 of them.<\/p>\n<p>How has the research evolved?<\/p>\n<p>AI avatars are a uniquely challenging research problem. 
To make a talking face, a model needs to learn realistic phoneme-to-viseme mapping: the relationship between speech sounds (phonemes) and their corresponding mouth movements (visemes). If this is \u201coff,\u201d the mouth and voice will look out of sync or even completely disconnected.<\/p>\n<p>To make the issue even more complex, your mouth isn\u2019t the only thing that moves when you talk. The rest of your face moves in conjunction, along with your upper body and sometimes your hands. And everyone has their own distinct style of speaking. Think about how you speak, compared to your favorite celebrity: even if you\u2019re saying the same sentence, your mouths will move differently. If you tried to apply your lip sync to their face, it would look weird.<\/p>\n<p>Over the last few years, this space has evolved significantly from a research perspective. I reviewed over 70 papers on AI talking heads since 2017 and saw a clear progression in model architecture \u2014 from CNNs and GANs, to 3D-based approaches like NeRFs and 3D Morphable Models, then to transformers and diffusion models, and most recently, to DiT (diffusion models based on the transformer architecture). The timeline below highlights the most cited papers from each year.<\/p>\n<p>Both the quality of generations and the capabilities of models have improved dramatically. Early approaches were limited. Imagine starting with a single photo of a person, masking the bottom half of their face, and generating new mouth movements based on target facial landmarks from audio input. These models were trained on a limited corpus of quality lip sync data, most of which was closely cropped at the face. 
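To make the phoneme-to-viseme idea concrete, here is a minimal sketch. The phoneme symbols, viseme names, and groupings are illustrative assumptions (real systems learn this mapping from data, and inventories vary by model):

```python
# Toy many-to-one phoneme-to-viseme lookup. The groupings below are
# illustrative assumptions; production models learn them from data.
PHONEME_TO_VISEME = {
    'P': 'closed_lips', 'B': 'closed_lips', 'M': 'closed_lips',  # lips pressed
    'F': 'lip_teeth',   'V': 'lip_teeth',                        # lip to teeth
    'AA': 'open_jaw',   'AE': 'open_jaw',                        # jaw drops
    'UW': 'rounded',    'OW': 'rounded',                         # lips pursed
    'SIL': 'neutral',                                            # silence
}

def phonemes_to_visemes(phonemes):
    # Collapse consecutive repeats: several phonemes share one mouth
    # shape, so the viseme track is shorter than the phoneme track.
    visemes = []
    for p in phonemes:
        v = PHONEME_TO_VISEME.get(p, 'neutral')
        if not visemes or visemes[-1] != v:
            visemes.append(v)
    return visemes
```

Feeding the phonemes of a word like 'mama' through this yields an alternating closed-lips / open-jaw sequence; the 'off' failure mode described above corresponds to these mouth-shape targets drifting out of alignment with the audio.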
More realistic results, like \u201c<a href=\"https:\/\/www.washington.edu\/news\/2017\/07\/11\/lip-syncing-obama-new-tools-turn-audio-clips-into-realistic-video\/\" target=\"_blank\" rel=\"noopener noreferrer\">\u200blip-syncing Obama\u200b<\/a>,\u201d required many hours of video of the target person and were very limited in outputs.<\/p>\n<p>Today\u2019s models are much more flexible and powerful. They can generate half-body or even full-body movement, realistic talking faces, and dynamic background motion \u2014 all in the same video! These newer models are trained more like traditional text-to-video models on much larger datasets, using a variety of techniques to maintain lip sync accuracy amid all the motion.\u00a0<\/p>\n<p>The first preview of this came with ByteDance\u2019s \u200b<a href=\"https:\/\/omnihuman-lab.github.io\/\" target=\"_blank\" rel=\"noopener noreferrer\">OmniHuman-1<\/a>\u200b model, which was introduced in February (and was recently made available in <a href=\"https:\/\/dreamina.capcut.com\/\" target=\"_blank\" rel=\"noopener noreferrer\">Dreamina<\/a>). The space is moving quickly \u2014 <a href=\"https:\/\/www.hedra.com\/\" target=\"_blank\" rel=\"noopener noreferrer\">Hedra<\/a>\u200b released Character-3 in March, which in our head-to-head testing is now best-in-class for most use cases. Hedra also works for non-human characters, like this <a href=\"https:\/\/x.com\/venturetwins\/status\/1905097891787743375\" target=\"_blank\" rel=\"noopener noreferrer\">talking Waymo<\/a>, and enables users to prompt emotions and movement via text.<\/p>\n<p>New use cases are also emerging around AI animation, spurred by trends like the Studio Ghibli movement. The video below came from a starting image frame and the audio track. Hedra generated the character\u2019s lip sync and face + upper body movement. 
And check out the moving characters in the background!<\/p>\n<blockquote class=\"twitter-tweet\" data-media-max-width=\"560\">\n<p dir=\"ltr\" lang=\"en\">Presenting The Office x Studio Ghibli <a href=\"https:\/\/t.co\/nHYrGc2uDs\">pic.twitter.com\/nHYrGc2uDs<\/a><\/p>\n<p>\u2014 Justine Moore (@venturetwins) <a href=\"https:\/\/twitter.com\/venturetwins\/status\/1905338712583799128?ref_src=twsrc%5Etfw\" target=\"_blank\" rel=\"noopener\">March 27, 2025<\/a><\/p>\n<\/blockquote>\n<p>Real-world jobs for AI avatars<\/p>\n<p>There are countless use cases for AI avatars \u2014 just imagine all the different places where you interact with a character or watch a video where someone is speaking. We\u2019ve already seen usage across consumers, SMBs, and even enterprises.<\/p>\n<p><a href=\"https:\/\/www.europesays.com\/uk\/wp-content\/uploads\/2025\/04\/250407-AI-Avatar-Use-Case-Market-Map-r5-x2000.png\"><img fetchpriority=\"high\" decoding=\"async\" class=\"aligncenter size-full wp-image-376181\" src=\"https:\/\/www.europesays.com\/uk\/wp-content\/uploads\/2025\/04\/250407-AI-Avatar-Use-Case-Market-Map-r5-x2000.png\" alt=\"\" width=\"2000\" height=\"1127\"  \/><\/a><\/p>\n<p>This is an early market map. The space is evolving quickly, and the product distinctions are relatively rough. Many products theoretically could make avatars for most or all of these use cases, but we\u2019ve found, in practice, that it\u2019s hard to build the workflow and tune the model to excel at everything. Below, we\u2019ve outlined examples for how each segment of the market is leveraging AI avatars.<\/p>\n<p>Consumers: Character creation<\/p>\n<p>Anyone can now create animated characters from a single image, which is a massive unlock for creativity. It\u2019s hard to overstate how meaningful this is for everyday people who want to use AI to tell a story. 
One of the reasons early AI videos were criticized as \u201cslides of images\u201d is that there were no talking characters (or speech only came in the form of voiceovers).<\/p>\n<p>When you can make something talk, your content becomes much more interesting. And beyond traditional narrative video, you can create things like<a href=\"https:\/\/x.com\/AIWarper\/status\/1899484291643605283\" target=\"_blank\" rel=\"noopener noreferrer\"> AI streamers<\/a>,<a href=\"https:\/\/x.com\/search?q=podcast%20hedra&amp;src=typed_query\" target=\"_blank\" rel=\"noopener noreferrer\"> podcasters<\/a>, and<a href=\"https:\/\/x.com\/CoffeeVectors\/status\/1898564461402673302\" target=\"_blank\" rel=\"noopener noreferrer\"> music videos<\/a>. The videos linked here were all made on \u200b<a href=\"https:\/\/hedra.com\/\" target=\"_blank\" rel=\"noopener noreferrer\">Hedra<\/a>, which enables users to create dynamic, speaking characters from a single starting image and either an audio clip or a script.<\/p>\n<p>If you\u2019re starting with a video instead of an image, \u200b<a href=\"https:\/\/sync.so\/\" target=\"_blank\" rel=\"noopener noreferrer\">Sync\u200b<\/a> can apply lip sync to make the character\u2019s face fit your audio. And if you want to use real human performance to drive the movement of your character, tools like \u200b<a href=\"https:\/\/runwayml.com\/research\/introducing-act-one\" target=\"_blank\" rel=\"noopener noreferrer\">Runway Act-One<\/a>\u200b and \u200b<a href=\"https:\/\/viggle.ai\/home\" target=\"_blank\" rel=\"noopener noreferrer\">Viggle<\/a> make it possible.<\/p>\n<p>One of my favorite creators using AI to animate characters is \u200b<a href=\"https:\/\/www.youtube.com\/@NeuralViz\" target=\"_blank\" rel=\"noopener noreferrer\">Neural Viz<\/a>\u200b, whose series, \u201cThe Monoverse,\u201d imagines a post-human universe populated by Glurons. 
It\u2019s only a matter of time before we see an explosion of AI-generated shows \u2014 or even just standalone influencers \u2014 now that the barrier to entry is so much lower.<\/p>\n<p><a href=\"https:\/\/www.youtube.com\/watch?v=YGyvLlPad8Q\" target=\"_blank\" rel=\"noopener noreferrer\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-376090\" class=\"wp-image-376090 size-full\" src=\"https:\/\/www.europesays.com\/uk\/wp-content\/uploads\/2025\/04\/250404-Human-Oddities-Ep-1-Thumbnail-816x459-1.png\" alt=\"\" width=\"816\" height=\"459\"  \/><\/a><\/p>\n<p id=\"caption-attachment-376090\" class=\"wp-caption-text\">Unanswered Oddities \u2013 Episode 1: Humans (youtube.com\/@NeuralViz)<\/p>\n<p>As avatars become easier to stream in real-time, we also expect to see consumer-facing companies implement them as a core part of their UI. Imagine learning a language with a live AI \u201ccoach\u201d that is not just a disembodied voice, but a full character with a face and personality. Companies like <a href=\"https:\/\/praktika.ai\/\" target=\"_blank\" rel=\"noopener noreferrer\">Praktika<\/a> are already doing this, and it will only get more natural over time.\u00a0<\/p>\n<p><strong>SMBs: Lead generation<\/strong><\/p>\n<p>Ads have become one of the first killer use cases of AI avatars. Instead of hiring actors and a production crew, businesses can now have hyper-realistic AI characters promote their products. Companies like \u200b<a href=\"https:\/\/creatify.ai\/\" target=\"_blank\" rel=\"noopener noreferrer\">Creatify<\/a>\u200b and \u200b<a href=\"https:\/\/www.arcads.ai\/\" target=\"_blank\" rel=\"noopener noreferrer\">Arcads<\/a>\u200b make this seamless \u2014 just provide a product link and they generate an ad: writing the script, pulling B-roll and images, and \u201ccasting\u201d an AI actor.<\/p>\n<p>This has unlocked advertising for businesses that could never afford traditional ad production. 
It\u2019s particularly popular among ecommerce companies, games, and consumer apps. Chances are, you\u2019ve already seen AI-generated ads on YouTube or TikTok. Now B2B companies are exploring the tech as well, using AI avatars for content marketing or personalized outreach with tools like \u200b<a href=\"https:\/\/www.yuzulabs.io\/\" target=\"_blank\" rel=\"noopener noreferrer\">Yuzu Labs<\/a>\u200b and \u200b<a href=\"https:\/\/www.vidyard.com\/products\/ai-avatars\/\" target=\"_blank\" rel=\"noopener noreferrer\">Vidyard<\/a>\u200b.<\/p>\n<p>Many of these products combine an AI actor \u2014 whether a clone of a real person or a unique character \u2014 with other assets like product photos, video clips, and music. Users can control where these assets appear, or put it on \u201cautopilot\u201d and let the product pull together a video for you. You can either write the script yourself or use an AI-generated one.<\/p>\n<p><strong>Enterprises: Scaling content<\/strong><\/p>\n<p>Beyond marketing, enterprises are finding a range of applications for AI avatars. A few examples:<\/p>\n<p><strong>Learning and development<\/strong>. Most large companies produce training and educational videos for employees, covering everything from onboarding to compliance, product tutorials, and skill development. AI tools like <a href=\"https:\/\/www.synthesia.io\/\" target=\"_blank\" rel=\"noopener noreferrer\">Synthesia<\/a> can automate this process, making content creation faster and more scalable. Some roles also require ongoing, video-based training \u2014 imagine a salesperson practicing their negotiation skills with an AI avatar from a product like \u200b<a href=\"https:\/\/www.anam.ai\/\" target=\"_blank\" rel=\"noopener noreferrer\">Anam<\/a>.<\/p>\n<p><strong>Localization<\/strong>. 
If a company has customers or employees in different countries, it may want to localize content into different languages or switch out cultural references. AI actors make it fast and easy to personalize your videos for different geographies. Thanks to<a href=\"https:\/\/elevenlabs.io\/blog\/what-is-video-translation\" target=\"_blank\" rel=\"noopener noreferrer\"> \u200bAI voice translation\u200b<\/a> from companies like ElevenLabs, businesses can generate the same video in dozens of languages, with natural-sounding voices.\u00a0<\/p>\n<p><strong>Executive presence<\/strong>. AI avatars let executives scale their presence by cloning their persona to create personalized content for employees or customers. Instead of filming every product announcement or a \u201cthank you\u201d message, companies can generate a realistic AI twin of their CEO or product lead. We\u2019re also seeing companies like \u200b<a href=\"https:\/\/www.delphi.ai\/\" target=\"_blank\" rel=\"noopener noreferrer\">Delphi<\/a>\u200b and \u200b<a href=\"https:\/\/www.heycicero.com\/\" target=\"_blank\" rel=\"noopener noreferrer\">Cicero<\/a>\u200b make it easy for thought leaders to interact with and answer questions from people they\u2019d never normally be able to meet 1:1.<\/p>\n<p><strong>What are the ingredients of an AI avatar?\u00a0<\/strong><\/p>\n<p><a href=\"https:\/\/www.europesays.com\/uk\/wp-content\/uploads\/2025\/04\/250402-Elements-of-AI-Avatar-B-x2000-2.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-376067\" src=\"https:\/\/www.europesays.com\/uk\/wp-content\/uploads\/2025\/04\/250402-Elements-of-AI-Avatar-B-x2000-2.png\" alt=\"\" width=\"2000\" height=\"1437\"  \/><\/a><\/p>\n<p>Creating a believable AI avatar is a challenge, with each element of realism presenting its own technical hurdles. 
It\u2019s not just about avoiding the uncanny valley, it\u2019s about solving fundamental problems in animation, speech synthesis, and real-time rendering. Here\u2019s a breakdown of what\u2019s required, why it\u2019s so hard to get right, and where we\u2019re seeing progress:<\/p>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>Face <\/b>\u2013 Whether you\u2019re cloning a person or creating a new character, you need a face that stays consistent between frames and moves realistically while talking. Context-aware expressiveness remains a challenge (e.g. an avatar yawning while saying \u201cI\u2019m tired\u201d).<\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>Voice <\/b>\u2013 The voice needs to sound real and match the character; a teenage girl\u2019s face shouldn\u2019t have an older woman\u2019s voice. Most of the AI avatar companies we\u2019ve met use \u200b<a href=\"https:\/\/elevenlabs.io\/\" target=\"_blank\" rel=\"noopener noreferrer\">ElevenLabs<\/a>\u200b, which has an extensive voice library and allows you to clone your own.<\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>Lip sync<\/b> \u2013 Getting quality lip sync is tricky. Entire companies, like \u200b<a href=\"https:\/\/sync.so\/\" target=\"_blank\" rel=\"noopener noreferrer\">Sync<\/a>\u200b, are dedicated to solving this problem. Other models like <a href=\"https:\/\/arxiv.org\/pdf\/2503.23307\" target=\"_blank\" rel=\"noopener noreferrer\">MoCha<\/a> (from Meta) and <a href=\"https:\/\/arxiv.org\/pdf\/2502.01061\" target=\"_blank\" rel=\"noopener noreferrer\">OmniHuman<\/a> are trained on larger datasets and use various techniques to strongly condition face generation on the accompanying audio.<\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>Body <\/b>\u2013 Your avatar can\u2019t just be a floating head! 
Newer models enable avatars with full bodies that can move, but we\u2019re still in early days in terms of both scaling them and delivering them to users.<\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>Background<\/b> \u2013 Avatars don\u2019t exist in a vacuum. The lighting, depth, and interactions in their surrounding environment need to match the scene. Ideally, avatars will even be able to touch and engage with things in their environment, like picking up a product.<\/li>\n<\/ul>\n<p>If you want your avatar to engage in real-time conversations \u2014 like joining a Zoom meeting \u2014 there are a few other things you need to add:<\/p>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>Brain<\/b> \u2013 Your avatar needs to be able to \u201cthink.\u201d Products that enable conversation today typically let you upload or connect to a knowledge base. In the future, more complex versions of this will hopefully include more memory and personality. Avatars should be able to remember past conversations with you and have their own \u201cflair.\u201d<\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>Streaming<\/b> \u2013 It\u2019s not easy to stream all of this with minimal latency. Products like \u200b<a href=\"https:\/\/livekit.io\/\" target=\"_blank\" rel=\"noopener noreferrer\">LiveKit<\/a>\u200b and \u200b<a href=\"https:\/\/www.agora.io\/en\/\" target=\"_blank\" rel=\"noopener noreferrer\">Agora<\/a>\u200b are making progress here, but orchestrating every model in the pipeline at conversational speed remains difficult. We\u2019ve seen a few products do this well \u2014 like \u200b<a href=\"https:\/\/www.tolans.com\/\" target=\"_blank\" rel=\"noopener noreferrer\">Tolan<\/a>\u200b, an AI alien companion with a voice and face \u2014 but there\u2019s still work to be done.<\/li>\n<\/ul>\n<p>What would we like to see?<\/p>\n<p>There\u2019s still so much to build and improve in this space. 
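As a rough illustration of why real-time streaming is hard, here is a back-of-envelope latency budget for a single conversational turn. Every stage name and number below is a made-up assumption for illustration, not a measurement of any product:

```python
# Hypothetical per-stage latencies (milliseconds) for one avatar turn.
# All numbers are assumptions chosen only to illustrate the arithmetic.
PIPELINE_MS = {
    'speech_to_text': 300,  # transcribe what the user just said
    'llm_response': 500,    # the avatar brain drafts a reply
    'text_to_speech': 250,  # synthesize the first chunk of audio
    'avatar_render': 150,   # generate lip-synced frames for that chunk
    'transport': 100,       # network round trips
}

def sequential_latency_ms(stages):
    # Worst case: each stage waits for the previous one to finish.
    return sum(stages.values())

def overlapped_latency_ms(stages, overlap=0.5):
    # Streaming pipelines overlap stages (e.g. speech synthesis starts
    # while the LLM is still generating); model that crudely as a
    # fixed savings factor on the sequential total.
    return int(sequential_latency_ms(stages) * (1 - overlap))
```

Run strictly in sequence, this hypothetical pipeline takes 1,300 ms before the first frame appears, which is longer than a comfortable conversational pause; overlapping the stages to claw that time back is exactly where products in this space compete.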
A few areas that are top-of-mind:<\/p>\n<p>Character consistency and transformation<\/p>\n<p>Historically, each AI avatar had one fixed \u201clook.\u201d Their outfit, pose, and environment were static. Some products are starting to offer more options. For example, this character from <a href=\"http:\/\/heygen.com\" target=\"_blank\" rel=\"noopener noreferrer\">HeyGen<\/a>, Raul, has 20 looks! But it would be great to more easily transform a character however you want.<\/p>\n<p>Better facial movement and expressiveness<\/p>\n<p>Faces have long been the weak link of AI avatars, often looking robotic. That\u2019s starting to change with products like Captions\u2019 new Mirage, which delivers a more natural look and broader range of expressions. We\u2019d love to see AI avatars that understand the emotional context of a script and react appropriately, like looking scared if the character is fleeing from a monster.<\/p>\n<p>Body movement<\/p>\n<p>Today, the vast majority of avatars have little movement below the face \u2014 even basic things like hand gestures. Gesture control has been fairly programmatic: for example, <a href=\"https:\/\/www.argil.ai\/\" target=\"_blank\" rel=\"noopener noreferrer\">Argil<\/a> allows you to select different types of body language for each segment of your video. We\u2019re excited to see more natural, inferred motion in the future.\u00a0<\/p>\n<p>Interacting with the \u201creal world\u201d<\/p>\n<p>Right now, AI avatars can\u2019t interact with their surroundings. An attainable near-term goal may be enabling them to hold products in ads. 
<a href=\"https:\/\/www.topview.ai\/\" target=\"_blank\" rel=\"noopener noreferrer\">Topview<\/a> has already made progress (see the video below for their process and outcome), and we\u2019re excited to see what\u2019s to come as models improve.<\/p>\n<p>More real-time applications<\/p>\n<p>To name a few potential use cases: doing a video call with an AI doctor, browsing curated products with an AI sales assistant, or FaceTiming with a character from your favorite TV show. The latency and reliability aren\u2019t quite human-level, but they\u2019re getting close. Check out a demo of me chatting with <a href=\"https:\/\/www.tavus.io\/\" target=\"_blank\" rel=\"noopener noreferrer\">Tavus<\/a>\u2019 latest model.<\/p>\n<p>Where are we headed?<\/p>\n<p>One of our main learnings from investing in both foundation model companies and AI applications over the past few years? It\u2019s nearly impossible to predict with any degree of certainty where a given space is headed. However, it feels safe to say that the application layer is poised for rapid growth now that the underlying model quality finally feels good enough to generate AI talking heads that aren\u2019t painful to watch.<\/p>\n<p>We expect this space will give rise to multiple billion-dollar companies, with products segmented by use case and target customer. For example, an executive looking for an AI clone to film videos for customers will need (and be willing to pay for) a higher level of quality and realism than a fan making a quick clip of their favorite anime character to send to friends.<\/p>\n<p>Workflow is also important. If you\u2019re generating ads with AI influencers, you\u2019ll want to use a platform that can automatically pull in product details, write scripts, add B-roll and product photos, push the videos to your social channels, and measure results. 
On the other hand, if you\u2019re trying to tell a story using AI characters, you\u2019ll prioritize tools that enable you to save and re-use characters and scenes, and easily splice together different types of clips.<\/p>\n<p>                        <script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>\n","protected":false},"excerpt":{"rendered":"What happens when AI doesn\u2019t just generate content, but embodies it? AI has already mastered the ability to&hellip;\n","protected":false},"author":2,"featured_media":6317,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3163],"tags":[323,1942,3605,3606,3607,2082,3608,3393,53,16,15],"class_list":{"0":"post-6316","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-b2b","11":"tag-b2c","12":"tag-companionship-social","13":"tag-consumer","14":"tag-creativity","15":"tag-productivity","16":"tag-technology","17":"tag-uk","18":"tag-united-kingdom"},"share_on_mastodon":{"url":"https:\/\/pubeurope.com\/@uk\/114310030627339101","error":""},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts\/6316","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/comments?post=6316"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts\/6316\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/media\/6317"}],"wp:attachment":[{"
href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/media?parent=6316"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/categories?post=6316"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/tags?post=6316"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}