{"id":163494,"date":"2025-06-06T17:25:14","date_gmt":"2025-06-06T17:25:14","guid":{"rendered":"https:\/\/www.europesays.com\/uk\/163494\/"},"modified":"2025-06-06T17:25:14","modified_gmt":"2025-06-06T17:25:14","slug":"ar-vr-glasses-taking-shape-with-new-chips","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/uk\/163494\/","title":{"rendered":"AR\/VR Glasses Taking Shape With New Chips"},"content":{"rendered":"<p>More augmented reality (AR), virtual reality (VR), and mixed reality (MR) wearables are coming, but how they are connected, and where image and other data are processed, are still in flux.<\/p>\n<p>Ray-Ban Meta AI glasses, for example, look like classic eyeglasses, but they rely on a tethered smartphone for such functions as taking pictures, AI voice assistance, and object identification. In contrast, the Apple Vision Pro mixed reality headset has sufficient built-in compute and battery power to operate as a standalone device, providing both AR and VR functions, but it is relatively heavy and cumbersome. As a result, few consumers besides gamers and those seeking new ways to work may be willing to wear the goggles in daily life.<\/p>\n<p>\u201cMy opinion is that the next big market after mobile is AR wearables, but not VR wearables,\u201d said Vitali Liouti, senior director of segment strategy, product management at <a href=\"https:\/\/semiengineering.com\/entities\/imagination-technologies\/\" target=\"_blank\" rel=\"noopener\">Imagination Technologies<\/a>. \u201cThere\u2019s a good reason for that. If I use Vision Pro in flights, people look at me weirdly. If I wear the Ray-Ban Meta glasses, no one even notices. I take photos all the time. 
The nice thing about wearables like the Ray-Bans is they\u2019re comfortable, they\u2019re natural, and like mobile phones, they\u2019re easy to work.\u201d<\/p>\n<p>When it comes to these devices, function determines form as much as form determines function.<\/p>\n<p>\u201cAR glasses have temporarily shifted away from specialized optics or overlaying displays,\u201d said Amol Borkar, director of product management and marketing for Tensilica DSPs in the Silicon Solutions Group at <a href=\"https:\/\/semiengineering.com\/entities\/cadence-design-systems\/\" target=\"_blank\" rel=\"noopener\">Cadence<\/a>. \u201cInstead, they resemble regular spectacles equipped with a camera and microphone to capture verbal prompts. This simplifies the design significantly, as illustrated by products such as Meta\u2019s Ray-Ban Stories.\u201d<\/p>\n<p>In contrast, VR goggles tend to be bulkier and heavier because they fully enclose the eyes to block out the real world. That requires more processing power. \u201cVR systems generally incur higher costs due to their focus on providing an immersive experience,\u201d Borkar said. \u201cSince VR goggles do not allow for pass-through vision, images must be rendered at high frame rates on high-resolution screens (often OLED or fast-switching LCD) with high refresh rates (90Hz\u2013120Hz, or more) to ensure smooth visuals. Additionally, VR systems require highly accurate tracking of head, hand, and eye movements to reflect them in the display render correctly. Failure to achieve this may result in motion sickness and a subpar user experience.\u201d<\/p>\n<p><strong>Battery vs. 
cord<\/strong><br \/>The simplest way to give users more AI, AR, and eventually VR functions in small-form-factor glasses is to keep them tethered to a phone or other gateway device for the necessary compute power.<\/p>\n<p>\u201cIt remains to be seen what will happen, but the predominant situation will be that you will have your wearable, and it will leverage the extremely advanced, extremely expensive chipset of the mobile phone around those AI models,\u201d said Imagination\u2019s Liouti. \u201cHere\u2019s where we have not just the challenge, but the opportunity. The more performant the mobile phone chipset becomes in running local models, the more performant the wearables will become. One thing will help the other grow.\u201d<\/p>\n<p>Others agree that a phone or gateway device will continue to play a key role.<\/p>\n<p>\u201cThere are two schools of camp,\u201d said John Weil, vice president and general manager of IoT and edge AI processors at <a href=\"https:\/\/semiengineering.com\/entities\/synaptics\/\" target=\"_blank\" rel=\"noopener\">Synaptics<\/a>. \u201cIf the thing you put on your head has more compute, less phone is needed. But if I was a betting man, the trend is more phone. The first one is trying to take a mobile processor and embed a device on your head. The other is what I call semi-custom products that are specifically optimized on the vision and audio modalities, and taking that, digitizing that, and using the cell phone as the compute element. One is you try to do more on the AR\/VR headset turnkey \u2014 both modalities, vision and audio \u2014 and bring it back out to the physical world, and you have limited battery life. But now imagine you can manipulate the modalities. You can do vision, you can do speech, you can do audio. You can do all of that, but you need a secondary computer \u2014 your cell phone, in this example \u2014 and then a third level might go all the way back to the cloud. Think of it as distributed computing. 
You need a tiny amount of vision and audio in the glass, and then you hop to the phone, and then you hop from the phone to the cloud and then back. Depending on the human latency that you need in milliseconds, depends on how far back you can go, and that will dramatically improve battery power.\u201d<\/p>\n<p>In the second scenario, the glasses serve as the visual instrument to interact with a phone. \u201cInstead of pulling the phone out of your pocket, the next logical step is you\u2019ll just wear glasses,\u201d Weil noted. \u201cThe analogy I use is the way you buy a smart monitor and you hook it to your computer over, say, Thunderbolt. You walk in, sit down at your desk, plug in, get a monitor. That monitor has various capabilities. Today, your AR\/VR things are going to become smart monitors to your cell phone, so most people are shifting toward the cell phone as the primary compute.\u201d<\/p>\n<p>In a recent <a href=\"https:\/\/www.counterpointresearch.com\/insight\/post-insight-research-notes-blogs-rayban-meta-smart-glasses-drive-global-smart-glasses-market-surge-in-2024-fuelling-momentum-in-2025-with-projected-60-cagr-through-2029\/\" target=\"_blank\" rel=\"noopener\">report<\/a>, Counterpoint defined smart glasses as being tethered to a smartphone, while AI glasses have their own computing processor with a dedicated unit for processing AI tasks, such as a neural processing unit, data processing unit, or other AI accelerator to run tiny AI models and perform on-device AI tasks.<\/p>\n<p><img data-recalc-dims=\"1\" fetchpriority=\"high\" decoding=\"async\" class=\"alignnone wp-image-24258917\" src=\"https:\/\/www.europesays.com\/uk\/wp-content\/uploads\/2025\/06\/Counterpoint-Smart-Glasses-screenshot.png\" alt=\"\" width=\"806\" height=\"355\"  \/><\/p>\n<p><strong>Fig. 1: Defining AI glasses vs. smart glasses. 
Source: Counterpoint Global Smart Glasses Ecosystem &amp; Market Trends, January 2025<\/strong><\/p>\n<p>By this definition, AI glasses have limited AR and no VR capability. So wearables have a way to go if users want more AR\/VR without having to wear goggles.<\/p>\n<p>\u201cYou can communicate with Ray-Bans right now, but it\u2019s a bit clunky,\u201d said Imagination\u2019s Liouti. \u201cAt some point we\u2019re going to see the tree, and it\u2019s going to tell you, \u2018This is a cherry blossom tree. This needs watering.\u2019 I get really geeky about it, because it\u2019s such an amazing experience, but these technologies need a few generations to come to the level where mobile phones are. People forget that mobile phones have gone through 50 iterations. That\u2019s what wearables need.\u201d<\/p>\n<p>Others agree. \u201cWhile the first wave of smart glasses focused on capturing moments, the next generation will be about interpreting and understanding them,\u201d said Parag Beeraka, senior director of consumer computing, client line of business at <a href=\"https:\/\/semiengineering.com\/entities\/arm\/\" target=\"_blank\" rel=\"noopener\">Arm<\/a>. \u201cAs the demand for smaller, smarter, always-on devices offering premium user experiences grows, AI-first wearables will not just be connected and voice-enabled, but will use agentic AI to reason, predict, assist, and adapt \u2014 allowing us to mix both the physical and virtual world together.\u201d<\/p>\n<p>For example, you\u2019ll be able to step outside and ask the AI to find a coffee place and your glasses will guide you there. \u201cThe next generation of XR [extended reality] smart glasses will be capable of interpreting user behavior and reading the environment in real-time, making them a context-aware assistant,\u201d said Beeraka. 
\u201cEdge AI platforms are evolving to support advanced, power-efficient inference and on-device reasoning in ultra-low power form factors, making heterogeneous compute central to the success of future wearables.\u201d<\/p>\n<p>Leading-edge 3D-ICs may be the missing piece that lets smaller glasses have more functions.<\/p>\n<p>\u201cWhether compute is in the headset or in a separate device depends on the form factor,\u201d said Marc Swinnen, director of product marketing at <a href=\"https:\/\/semiengineering.com\/entities\/ansys\/\" target=\"_blank\" rel=\"noopener\">Ansys<\/a>. \u201cHow big is this going to be? That has always been one of the attractions of 3D-IC. You can shrink the form factor of your system. Instead of having a PCB board with four or five chips on there, you scrunch them together. If you want to make these VR sets realistic and commercially viable, there\u2019s a huge drive toward making custom silicon that does exactly what you need, the way you do it, and is as efficient as possible in power. Nobody wants their ear burning as the thing gets hot on power, on speed, on application software. For example, Meta has a division that is working on silicon for their VR headsets. It\u2019s a testament to the power of silicon these days. The bespoke silicon has become really, really central to the success of so many of these entire companies or divisions in companies.\u201d<\/p>\n<p>Smaller manufacturing technology nodes also may make it possible to incorporate more compute power into the smaller form factors and make them self-contained without a tethered device, said Cadence\u2019s Borkar. \u201cHowever, this involves significant costs. 
Reducing the size to or below 7nm is very expensive, and the current total addressable market (TAM) or return on investment (ROI) in the AR\/VR sector does not yet fully justify such an investment, even for the big players.\u201d<\/p>\n<p>In terms of custom products, TSMC recently showed a concept for AR glasses, while Bloomberg <a href=\"https:\/\/www.bloomberg.com\/news\/articles\/2025-05-08\/apple-is-developing-specialized-chips-for-glasses-new-macs-and-ai-servers\" target=\"_blank\" rel=\"noopener\">reported<\/a> that Apple is designing specialized chips for smart glasses, likely to be manufactured by TSMC. Meanwhile, GlobalFoundries is working on MicroLED displays built on two wafers, a front plane based on GaN LEDs and a CMOS backplane, to enable better smart glasses <a href=\"https:\/\/gf.com\/blog\/enabling-the-next-generation-of-smart-glasses\/\" target=\"_blank\" rel=\"noopener\">micro-displays<\/a>.<\/p>\n<p><img loading=\"lazy\" data-recalc-dims=\"1\" decoding=\"async\" class=\"alignnone wp-image-24258901\" src=\"https:\/\/www.europesays.com\/uk\/wp-content\/uploads\/2025\/06\/Fig2_ARVR.png\" alt=\"\" width=\"684\" height=\"326\"  \/><\/p>\n<p><strong>Fig. 2: TSMC\u2019s concept for AR glasses chips, as shown at its recent North America Technology Symposium. Source: TSMC<\/strong><\/p>\n<p>Overall, the solution is likely to include a mix of advanced chips, heterogeneous compute, and offloading some functions to another device.<\/p>\n<p>\u201cAs AR\/VR glasses become more compact, lightweight, and smarter, we\u2019re seeing a dynamic blend of compute models,\u201d said Arm\u2019s Beeraka. \u201cWhile many experiences will leverage the tethered device \u2014 typically a smartphone or wearable device \u2014 for offloading heavy processing, the ambition for truly standalone smart glasses is accelerating. 
Both approaches demand heterogeneous compute architectures to enable efficient processing across a range of diverse workloads, including sensor fusion, AI-driven perception, and real-time rendering. Architectural breakthroughs and AI processing shifts are already facilitating the high-performance, low-power balancing act that makes sleek, mainstream wearables commercially viable. Whether offloaded or on the device itself, edge compute is key to bringing together power efficiency with comfort and usability to deliver immersive experiences without compromising battery life.\u201d<\/p>\n<p><strong>Connectivity challenges for tethered devices<\/strong><br \/>As long as AI\/AR glasses are tethered to a phone or other device, there is the question of which communication standard is best to link the phone to the glasses, and to link the phone to network towers and the cloud. Challenges are compounded when considering 5G and eventually 6G, which will offer the very low latency needed for advanced AR\/VR features.<\/p>\n<p>A full 5G\/6G chipset is expensive and power-hungry, said Ansys\u2019 Swinnen. \u201cIt might not be worth putting the whole telephone communication system into your VR when you could get by with something like Wi-Fi or Bluetooth that\u2019s cheaper and uses less power.\u201d<\/p>\n<p>However, Bluetooth and Wi-Fi also have their limitations. \u201cAs data needs and usage grow, we can expect more standards to be developed,\u201d said Cadence\u2019s Borkar. \u201cThe future will likely be some type of wireless tethering with higher bandwidth and lower latency than Bluetooth while providing the ability to send videos, images, and other human inputs over short distances.\u201d<\/p>\n<p>Others agree the solution lies in new wireless standards. \u201cThese will improve the bandwidth between the AR\/VR instrument and the compute element on your cell phone, or whatever device you\u2019re using,\u201d said Synaptics\u2019 Weil. 
\u201cThere\u2019s a higher data rate Bluetooth standard out now that goes up to 8-megabit speed, so you get the power of low-energy Bluetooth [BLE] at a higher data rate. That gives you a lot more capability so you don\u2019t need a Wi-Fi direct connection.\u201d<\/p>\n<p>To help solve these multi-protocol challenges, Synaptics recently released Wi-Fi, Bluetooth, and Zigbee\/Thread combo SoCs that support high peak speed and low latency for applications including AR\/VR and gaming.<\/p>\n<p>Another option is ultra-wideband (UWB) wireless transceivers, designed to coexist with Bluetooth and Wi-Fi. NXP and SPARK Microsystems are both working on such devices, though the chips would need to be adopted by the AR\/VR companies and the phone companies to solve the 5G\/6G delivery challenge.<\/p>\n<p>\u201cAI glasses cannot have a battery that will be well suited to a comfortable form factor with Wi-Fi technology,\u201d said Fares Mubarak, CEO of SPARK. \u201cAnd Bluetooth, frankly, isn\u2019t good enough to do the kind of connectivity at the latency and at the data rate that they look for.\u201d<\/p>\n<p>Companies will not want to spend billions of dollars deploying 6G infrastructure to get from the cloud to a phone in a single digit millisecond, but then take 140 milliseconds to get from the phone to the VR device, noted Frederic Nabki, co-founder and CTO of SPARK. \u201cInstead of burning all that latency benefit by Bluetooth, they can use Spark technology to close that half-meter between the compute device and your ear with sub-millisecond latency. Now, finally, the promise of 6G is fully kept all the way from the cloud to your ear.\u201d<\/p>\n<p>Another solution is to connect the devices with a cable in order to guarantee better performance and latency. 
For example, the Sony PlayStation VR2 is tethered to the PlayStation 5 gaming console with a USB-C cable, providing both power and data transmission.<\/p>\n<p>\u201cThere\u2019s a transition before XR can be just a pair of glasses,\u201d said Gervais Fong, senior director of product marketing for mobile, automotive, and consumer interfaces at <a href=\"https:\/\/semiengineering.com\/entities\/synopsys-inc\/\" target=\"_blank\" rel=\"noopener\">Synopsys<\/a>. \u201cAs they become more powerful, they will likely need to connect those glasses in certain operating modes to an XR-capable phone. The cable could be USB4 v.2, because if you look at the XR glass designs, I\u2019ve seen anywhere from 12 to 16 different cameras and sensors in those types of glasses, and that consumes a huge amount of bandwidth. Imagine all that video that you have to transmit, either from the cameras sending it down to the phone for processing, or for the phone sending 4K-type resolution to each one of your eyes \u2014 and having the resolution and the frame refresh rate that\u2019s fast enough so that you don\u2019t get dizzy. That consumes a high degree of video bandwidth, so at the same time, it\u2019s going to consume power.\u201d<\/p>\n<p>In terms of channel loss, Synopsys uses margin to meet the worst-case specification across process, voltage, and temperature. \u201cWe don\u2019t know the quality of the cable a person uses, or the channel, or the quality of the PHY on the other side that we\u2019re connecting through the cable, and anything along the line that\u2019s connected there,\u201d Fong said. \u201cIf it is not good quality, and within spec, it\u2019s going to cause problems. The more margin we can have on our side, the better the chance that an SoC designer using our PHY is able to successfully get that signal through to the other end. That, for them, improves interoperability, which is the big thing with USB. When you plug it in, you expect it to work. 
But there\u2019s a lot of work behind the scenes that industry has done so that when you plug it in, it just works.\u201d<\/p>\n<p>Others say a cable is a short-term solution only.<\/p>\n<p>\u201cWhile today\u2019s AR\/VR experiences may benefit from high bandwidth wired standards like USB4 v2.0, cables are unlikely to be the long-term answer,\u201d said Arm\u2019s Beeraka. \u201cThe future of spatial computing relies on freedom of movement and seamless interactions. We expect to see more advanced low-power wireless technologies emerging to bridge this performance gap, delivering the bandwidth and latency required to support immersive experiences without sacrificing comfort or convenience. Platforms that enable the compute efficiency and intelligence required at the edge will be crucial to enabling a wireless future.\u201d<\/p>\n<p><strong>Enabling 6G through edge AI and edge compute<\/strong><br \/>The low latency and high determinism of edge AI technology are creating new use cases for AR\/VR, as well as improving existing use cases through AI.<\/p>\n<p>\u201cThere are two major things driving the adoption of edge AI,\u201d said Steve Tateosian, senior vice president of IoT, consumer, and industrial MCUs at <a href=\"https:\/\/semiengineering.com\/entities\/infineon-technologies\/\" target=\"_blank\" rel=\"noopener\">Infineon<\/a>. \u201cOne, latency goes down drastically. I\u2019m not going to say it\u2019s zero latency, but from a human perspective, it\u2019s zero latency to act, to engage with a device locally, as opposed to going to the cloud. Two is determinism. Especially when we\u2019re talking about human-machine interaction, we as humans naturally have an expectation around how we interact with our environment. 
For example, if you walk into a dark room, your expectation when you flip the light switch on is that the light comes on immediately, and you\u2019re not standing there in the dark for three seconds wondering, \u2018Do I need to turn this light switch on again?&#8217;\u201d<\/p>\n<p>Others agree that really short latencies are key. \u201cWith the delay time, when you have a VR or AR headset on and turn your head, the maximum lag you can tolerate is going to be a millisecond or it becomes really confusing to your brain, because all the overlays are trying to catch up,\u201d said Shawn Carpenter, 5G\/6G program director at Ansys. \u201cBecause you also have this time-of-flight delay, from where you\u2019re operating to wherever the information is being processed to put the overlay down for you, you can\u2019t have the actual edge processing be too far away.\u201d<\/p>\n<p>What that probably means is that with 6G, very-high-performance computing will be brought directly to the base station. \u201cThen you can have just the time-of-flight delay,\u201d Carpenter said. \u201cPhysics imposes this delay time for how long it physically takes to transmit the signal to the nearest base station. If the signal then has to go to some data center in Virginia and then come back to you, that\u2019s going to impose delays that are dead on arrival. The concept won\u2019t work. You\u2019re going to probably have something like a set of GPUs or TPUs right there at the base station.\u201d<\/p>\n<p>In addition to processing delays, a further challenge is that a lot of bandwidth will be needed in order to get data through the pipe. \u201cIf you don\u2019t have a large number of devices that you\u2019re trying to communicate to, you could probably get [6G-enabled VR] done within the home with Wi-Fi 7,\u201d he said. \u201cThen the question is whether your pipe out of the home gives you enough bandwidth, or whether you\u2019re going to need to have some kind of fixed wireless access. 
If you have fiber to the home, you may be able to get some of that done, but you\u2019re probably going to have to communicate with a fairly extensive computing resource to do the overlay, to do all the AI that recognizes what you\u2019re looking at.\u201d<\/p>\n<p>For example, an aircraft mechanic wearing VR goggles to see inside a virtual aircraft engine will want AI to label things \u2014 what\u2019s hot, what\u2019s electric. Something is going to have to do that computing, and it\u2019s probably not going to be a PC, said Carpenter. \u201cProbably it doesn\u2019t have enough power, so you\u2019re going to have to connect to a computing resource. You probably could get it done with two hops, one within the home, and then another one to the local fixed wireless station, provided you have a high enough bandwidth access point to do something like that. You could use your phone as that relay device, as well.\u201d<\/p>\n<p>Cellular is a mixed bag. While 4G LTE and sub-6 GHz 5G are slow but relatively predictable, millimeter-wave 5G and 6G require line of sight. \u201cThe higher you go in frequency, the shorter the range is, and UWB is no exception to physics,\u201d said Mubarak. But the challenge can be solved through complementary technologies. For example, if a person is using AR glasses for gaming, they want high-resolution audio with the lowest latency on the headset, and UWB would provide it while Bluetooth would be inactive \u2014 unless the person got a phone call. \u201cLet\u2019s say you lose your game. You get up, walk to the kitchen, you lose UWB connectivity. Then the device can, with the proper application layer, switch from ultra-wideband to a compressed solution such as Bluetooth, and the consumer will end up having their seamless solution.\u201d<\/p>\n<p>However, with a good line-of-sight connection, or with an array of well-placed repeaters, 6G could be the enabler of next-gen AR\/VR capabilities. 
\u201cWe\u2019re not talking about a virtual reality kind of situation or assisted virtual,\u201d said Ron Squiers, solution networking specialist at <a href=\"https:\/\/semiengineering.com\/entities\/mentor-a-siemens-business\/\" target=\"_blank\" rel=\"noopener\">Siemens EDA<\/a>. \u201cWe\u2019re talking about holographic images being in the room with you and not noticing any difference about the person that you\u2019re talking to.\u201d<\/p>\n<p><strong>Conclusion<\/strong><br \/>Like AI, we are only just starting to see the huge scope of AR\/VR applications, but the shrinking from goggles to glasses will take time.<\/p>\n<p>\u201cClearly people are moving to these AI glasses, which are not a ski mask like the first-gen devices,\u201d said SPARK\u2019s Nabki. \u201cThey want rich interaction with the AI. They don\u2019t want you to type. They want you to talk to it. They want you to interact with it. They want you to show it images.\u201d<\/p>\n<p>The bottom line is, \u201cNo matter how good this VR headset is, if you have a thick cable coming off and there\u2019s a box the size of a fridge that needs to run it, it\u2019s never going to be a commercial success, no matter how well it works,\u201d said Ansys\u2019 Swinnen. \u201cThe silicon is critical to making it work.\u201d<\/p>\n<p>It also remains to be seen what killer application will make VR a must-have technology. \u201cBoth AR and VR technologies need a solid ecosystem to thrive,\u201d said Cadence\u2019s Borkar. \u201cAs a self-contained system, VR calls for its own ecosystem and applications. Despite numerous attempts, it has seen limited success and remains a niche market without a definitive \u2018killer\u2019 app or use case. 
When considered an extension of a phone or PC, VR might seem like just an enhanced display, possibly not worth the cost for some customers.\u201d<\/p>\n<p><strong>Related Reading<\/strong><br \/><a href=\"https:\/\/semiengineering.com\/wearable-connectivity-ai-enable-new-use-cases\/\" target=\"_blank\" rel=\"noopener\">Wearable Connectivity, AI Enable New Use Cases<\/a><br \/>New types of wearables and devices can record bodily data or simulate the senses without needing to meet stringent med-tech rules.<br \/><a href=\"https:\/\/semiengineering.com\/three-way-race-to-3d-ics\/\" target=\"_blank\" rel=\"noopener\">Three-Way Race To 3D-ICs<\/a><br \/>Intel, TSMC, and Samsung are developing a broad set of technologies and relationships that will be required for the next generation of AI chips.<\/p>\n<p><\/p>\n","protected":false},"excerpt":{"rendered":"More augmented reality (AR), virtual reality (VR), and mixed reality (MR) wearables are coming, but how they are&hellip;\n","protected":false},"author":2,"featured_media":163495,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3162],"tags":[53,16,15,3243,3244],"class_list":{"0":"post-163494","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-virtual-reality","8":"tag-technology","9":"tag-uk","10":"tag-united-kingdom","11":"tag-virtual-reality","12":"tag-vr"},"share_on_mastodon":{"url":"https:\/\/pubeurope.com\/@uk\/114637591298394638","error":""},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts\/163494","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europe
says.com\/uk\/wp-json\/wp\/v2\/comments?post=163494"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts\/163494\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/media\/163495"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/media?parent=163494"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/categories?post=163494"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/tags?post=163494"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}
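The connectivity numbers running through the article, sub-millisecond UWB versus roughly 140 ms over Bluetooth, and 4K-per-eye rendering at 90 to 120 Hz, can be sanity-checked with back-of-envelope arithmetic. A minimal Python sketch; the hop latencies and the 24-bit color depth are illustrative assumptions, not figures from the article's sources:

```python
# Back-of-envelope checks on the bandwidth and latency figures quoted above.
# Assumptions (not from the article): uncompressed 24-bit video, a ~5 ms
# 6G phone-to-edge hop, and a 0.5 ms UWB glasses-to-phone hop.

def raw_video_gbps(width: int, height: int, bits_per_pixel: int, fps: int) -> float:
    """Uncompressed video bandwidth in gigabits per second."""
    return width * height * bits_per_pixel * fps / 1e9

def path_latency_ms(hops_ms: list[float]) -> float:
    """Total latency across a glasses -> phone -> edge path."""
    return sum(hops_ms)

# One 4K eye buffer at 120 Hz is ~24 Gbps raw, so two eyes need ~48 Gbps:
# far beyond Bluetooth or Wi-Fi, but within a USB4-class wired link.
per_eye_gbps = raw_video_gbps(3840, 2160, 24, 120)

# Nabki's point in numbers: a single-digit-millisecond 6G hop is wasted if
# the last half-meter to the glasses costs ~140 ms over classic Bluetooth.
uwb_path_ms = path_latency_ms([0.5, 5.0])
bluetooth_path_ms = path_latency_ms([140.0, 5.0])
```

At these raw rates, even heavy compression leaves little headroom for Bluetooth-class links, which is consistent with the article's expectation of new higher-bandwidth wireless standards or USB4-class cables during the transition to standalone glasses.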