{"id":310796,"date":"2025-08-02T02:21:16","date_gmt":"2025-08-02T02:21:16","guid":{"rendered":"https:\/\/www.europesays.com\/uk\/310796\/"},"modified":"2025-08-02T02:21:16","modified_gmt":"2025-08-02T02:21:16","slug":"metas-photorealistic-codec-avatars-now-have-changeable-hairstyles","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/uk\/310796\/","title":{"rendered":"Meta&#8217;s Photorealistic &#8216;Codec Avatars&#8217; Now Have Changeable Hairstyles"},"content":{"rendered":"<p>Meta&#8217;s prototype photorealistic &#8216;Codec Avatars&#8217; now support changeable hairstyles, separately modeling the head and hair.<\/p>\n<p>For around a decade now, Meta has been researching and developing the technology it calls Codec Avatars, photorealistic digital representations of humans driven in real-time by the face and eye tracking of VR headsets. The\u00a0<a href=\"https:\/\/www.uploadvr.com\/mark-zuckerberg-lex-fridman-interview-photorealistic-codec-avatars\/\" target=\"_blank\" rel=\"noopener\">highest-quality prototype<\/a>\u00a0achieves the remarkable feat of crossing the uncanny valley, in\u00a0<a href=\"https:\/\/www.uploadvr.com\/vr-killer-app-avatar-telepresence\/\" target=\"_blank\" rel=\"noopener\">our experience<\/a>.<\/p>\n<p>The goal of Codec Avatars is to deliver social presence, the subconscious feeling that you&#8217;re truly with another person, despite them not physically being there. No shipping technology today can do this. 
Video calls don&#8217;t even come close.<\/p>\n<p>In the interview where that prototype was demonstrated, it&#8217;s likely the avatars were being decoded and rendered by a high-end PC, after both participants underwent a long scan in a multi-camera array.<\/p>\n<p>To eventually ship Codec Avatars, Meta has been working on increasing the system&#8217;s realism and adaptability, reducing the real-time rendering requirements, and making it possible to generate them with a <a href=\"https:\/\/www.uploadvr.com\/meta-researchers-generate-photorealistic-avatars-from-just-four-selfies\/\" target=\"_blank\" rel=\"noopener\">smartphone scan<\/a>.<\/p>\n<p>Generating a Codec Avatar originally required a massive custom capture array of more than 100 cameras and hundreds of lights, but last year Meta moved to using this only to train a &#8216;universal model&#8217;. After this, new Codec Avatars can be generated from a selfie video in which you rotate your head. However, for full-quality Codec Avatars, this capture takes around an hour to process on a high-end server GPU.<\/p>\n<p>A Universal Relightable Gaussian Codec Avatar generated by a phone scan, rendered in real-time on PC VR last year.<\/p>\n<p>While Meta had shown off lower-quality Codec Avatars generated by a smartphone scan <a href=\"https:\/\/www.uploadvr.com\/meta-codec-avatars-iphone-scan\/\" target=\"_blank\" rel=\"noopener\">as early as 2022<\/a>, last year&#8217;s work brought this advantage to the higher-quality Codec Avatars by moving to a Gaussian splatting approach.<\/p>\n<p>In recent years, Gaussian splatting has done for realistic volumetric rendering what large language models (LLMs) did for chatbots, propelling the technology from an expensive niche to shipping products like\u00a0<a href=\"https:\/\/www.uploadvr.com\/varjo-teleport-2-0-best-in-class\/\" target=\"_blank\" rel=\"noopener\">Varjo Teleport<\/a>\u00a0and\u00a0<a href=\"https:\/\/www.uploadvr.com\/niantics-gaussian-splat-scaniverse-is-now-an-app-on-the-quest-store\/\" 
target=\"_blank\" rel=\"noopener\">Niantic&#8217;s Scaniverse<\/a>.<\/p>\n<p>These newer Gaussian Codec Avatars are also inherently relightable, making them highly suitable for practical use in VR and mixed reality.<\/p>\n<p>Apple is also using Gaussian splatting for its <a href=\"https:\/\/www.uploadvr.com\/apple-executive-publicly-shows-visionos-26-persona\/\" target=\"_blank\" rel=\"noopener\">new Personas<\/a> in <a href=\"https:\/\/www.uploadvr.com\/visionos-26-announced-apple-vision-pro-wwdc25\/\" target=\"_blank\" rel=\"noopener\">visionOS 26<\/a>, which aren&#8217;t quite at the same quality as Meta&#8217;s research, but are actually available in a shipping product.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/www.europesays.com\/uk\/wp-content\/uploads\/2025\/08\/Meta-Codec-Avatars-HairCUP-research-structure.jpg\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"2000\" height=\"729\" \/><\/p>\n<p>Meta&#8217;s latest research, presented in <a href=\"https:\/\/arxiv.org\/pdf\/2507.19481?ref=uploadvr.com\" target=\"_blank\" rel=\"noopener\">a paper<\/a> called &#8220;HairCUP: Hair Compositional Universal Prior for 3D Gaussian Avatars&#8221;, builds on the Gaussian Codec Avatars work from last year by adding a compositional split between the head and hair.<\/p>\n<p>In a shipping system, this would allow the user to swap out their hairstyle from a library of options, or their own prior scans, without needing to perform a new face scan.<\/p>\n<p>By its nature, the new approach also improves the transition between hair and face, such as along the fringe, and could better support hats in future.<\/p>\n<p>Meta is getting closer than ever to shipping Codec Avatars as an actual feature of its Horizon OS headsets. However, there are still multiple roadblocks.<\/p>\n<p>For starters, neither Quest 3 nor Quest 3S has eye tracking or face tracking, and there&#8217;s no indication that Meta plans to imminently launch another headset with these capabilities. 
Quest Pro had both, but <a href=\"https:\/\/www.uploadvr.com\/quest-pro-officially-discontinued\/\" target=\"_blank\" rel=\"noopener\">was discontinued<\/a> at the start of this year.<\/p>\n<p>The other issue is the rendering requirements. While Meta showed off lower-quality Codec Avatars rendered by a Quest 2 <a href=\"https:\/\/www.uploadvr.com\/meta-codec-avatars-might-be-coming-to-quest\/\" target=\"_blank\" rel=\"noopener\">years ago<\/a>, the higher-quality versions have to date been rendered by PC graphics cards. Apple Vision Pro proves that it&#8217;s possible to render Gaussian avatars on-device, but Quest 3 is slightly less powerful, and Meta lacks Apple&#8217;s full end-to-end control of the hardware and software stack.<\/p>\n<p>One possibility is that Meta launches a rudimentary flatscreen version of Codec Avatars first, to let you join WhatsApp and Messenger video calls with a more realistic form than\u00a0<a href=\"https:\/\/www.uploadvr.com\/quest-v76-ptc-meta-avatar-as-virtual-webcam\/\" target=\"_blank\" rel=\"noopener\">your Meta Avatar<\/a>.<\/p>\n<p>Meta Connect 2025 will take place on\u00a0<a href=\"https:\/\/www.uploadvr.com\/meta-connect-2025-date-announced\/\" target=\"_blank\" rel=\"noopener\">September 17 and 18<\/a>, and the company might share more about its progress on Codec Avatars then.<\/p>\n
For around a&hellip;\n","protected":false},"author":2,"featured_media":310797,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3162],"tags":[53,16,15,3243,3244],"class_list":{"0":"post-310796","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-virtual-reality","8":"tag-technology","9":"tag-uk","10":"tag-united-kingdom","11":"tag-virtual-reality","12":"tag-vr"},"share_on_mastodon":{"url":"https:\/\/pubeurope.com\/@uk\/114956788821551948","error":""},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts\/310796","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/comments?post=310796"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts\/310796\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/media\/310797"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/media?parent=310796"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/categories?post=310796"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/tags?post=310796"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}