{"id":303538,"date":"2025-07-30T09:15:28","date_gmt":"2025-07-30T09:15:28","guid":{"rendered":"https:\/\/www.europesays.com\/uk\/303538\/"},"modified":"2025-07-30T09:15:28","modified_gmt":"2025-07-30T09:15:28","slug":"why-vibe-physics-is-the-ultimate-example-of-ai-slop","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/uk\/303538\/","title":{"rendered":"Why &#8220;vibe physics&#8221; is the ultimate example of AI slop"},"content":{"rendered":"<p>\n                    Sign up for the Starts With a Bang newsletter              <\/p>\n<p>\n                    Travel the universe with Dr. Ethan Siegel as he answers the biggest questions of all.         <\/p>\n<p>The most fundamental of all the sciences is physics, as it seeks to describe all of nature in the simplest, most irreducible terms possible. Both the contents of the Universe and the laws that govern it are at the heart of what physics is, allowing us not only to make concrete predictions about how reality will behave, but also to describe the Universe accurately and quantitatively: to tell us the amount, or \u201chow much,\u201d of an effect that any physical phenomenon or interaction will cause in any physical system. Although physics has often been driven forward by wild, even heretical ideas, it\u2019s the fact that there are both<\/p>\n<ul class=\"wp-block-list\">\n<li>fundamental physical entities and quantities,<\/li>\n<li>and also fundamental physical laws,<\/li>\n<\/ul>\n<p>that enables us to, quite powerfully, make accurate predictions about what will occur (and by how much) in any given physical system.<\/p>\n<p>Over the centuries, many new rules, laws, and elementary particles have been discovered: the Standard Models of both particle physics and cosmology have been in place for all of the 21st century thus far. Back in 2022, the first mainstream AI-powered chatbots, known as Large Language Models (or LLMs), arrived on the scene. 
Although many praised them for their versatility, their apparent ability to reason, and their often surprising ability to surface interesting pieces of information, they <a href=\"https:\/\/bigthink.com\/starts-with-a-bang\/humanitys-last-exam-fail\/\" target=\"_blank\" rel=\"noopener\">remained fundamentally limited<\/a> when it came to <a href=\"https:\/\/bigthink.com\/starts-with-a-bang\/astrophysicist-chatgpt\/\" target=\"_blank\" rel=\"noopener\">displaying an understanding of even basic ideas in the sciences<\/a>.<\/p>\n<p>Here in 2025, however, many are engaging in what\u2019s <a href=\"https:\/\/gizmodo.com\/billionaires-convince-themselves-ai-is-close-to-making-new-scientific-discoveries-2000629060\" target=\"_blank\" rel=\"noopener\">quickly becoming known as vibe physics<\/a>: holding deep physics conversations with these LLMs and (erroneously) believing that they\u2019re collaborating to make meaningful breakthroughs with tremendous potential. Here\u2019s why <a href=\"https:\/\/www.youtube.com\/watch?v=TMoz3gSXBcY\" target=\"_blank\" rel=\"noopener\">that\u2019s completely delusional<\/a>, and why instead of a fruitful collaboration, they\u2019re simply falling for the phenomenon of <a href=\"https:\/\/en.wikipedia.org\/wiki\/AI_slop\" target=\"_blank\" rel=\"noopener\">unfettered AI slop<\/a>.<\/p>\n<p><img fetchpriority=\"high\" decoding=\"async\" width=\"840\" height=\"318\" src=\"https:\/\/www.europesays.com\/uk\/wp-content\/uploads\/2025\/07\/RBM.jpg\" alt=\"Diagram of a neural network with three input nodes, two hidden layers, and three output nodes, illustrating the connections between layers\u2014a design inspired by cutting-edge research recognized in the Nobel Prize Physics 2024 discussions.\" class=\"wp-image-525098\"  \/><\/p>\n<p>This feedforward network (without backpropagation) is an example of a restricted Boltzmann machine: where there is at least one hidden layer between the input layer and the output layer, and where 
nodes are only connected between different layers, never between nodes of the same layer: a design that represented a tremendous step forward in creating today\u2019s AI\/LLM systems.\n<\/p>\n<p><a href=\"https:\/\/www.nobelprize.org\/uploads\/2024\/09\/advanced-physicsprize2024.pdf\" target=\"_blank\" rel=\"noopener\">Credit<\/a>: The Nobel Committee for Physics, 2024<\/p>\n<p>There are, to be sure, a tremendous number of things that AI in general, and LLMs in particular, are exceedingly good at. This is due to how they\u2019re constructed, which is <a href=\"https:\/\/bigthink.com\/starts-with-a-bang\/10-answers-math-artificial-intelligence\/\" target=\"_blank\" rel=\"noopener\">something that\u2019s well-known but not generally appreciated<\/a>. While a \u201cclassical\u201d computer program involves:<\/p>\n<ul class=\"wp-block-list\">\n<li>a user giving an input or a series of inputs to a computer,<\/li>\n<li>which then conducts computations that are prescribed by a pre-programmed algorithm,<\/li>\n<li>and then returns an output or series of outputs to the user,<\/li>\n<\/ul>\n<p>the big difference is that an AI-powered program doesn\u2019t perform computations according to a pre-programmed algorithm. Instead, it\u2019s the machine learning program itself that\u2019s responsible for figuring out and executing the underlying algorithm.<\/p>\n<p>What most people fail to recognize about AI in general, and LLMs in particular, is that they are fundamentally limited in their scope of applicability. There\u2019s a saying that \u201cAI is only as good as its training data,\u201d and what this generally means is that machine learning programs can be extremely powerful (and can often outperform even expert-level humans) at performing the narrow tasks that they are trained on. 
However, when confronted with questions about data that falls outside of what they\u2019re trained on, that power and performance don\u2019t generalize at all.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"946\" height=\"481\" src=\"https:\/\/www.europesays.com\/uk\/wp-content\/uploads\/2025\/07\/https___blogs-images.forbes.com_startswithabang_files_2019_03_kepler_transit_data.jpg\" alt=\"\" class=\"wp-image-141294\"  \/><\/p>\n<p>Based on the Kepler lightcurve of the transiting exoplanet Kepler-1625b, we were able to infer the existence of a potential exomoon. The fact that the transits didn\u2019t occur with the exact same periodicity, but that there were timing variations, was the major clue that led researchers in that direction. With large enough exoplanet data sets, machine learning algorithms can now find additional exoplanet and exomoon candidates that were unidentifiable with human-written algorithms.\n<\/p>\n<p><a href=\"https:\/\/www.science.org\/doi\/10.1126\/sciadv.aav1784\" target=\"_blank\" rel=\"noopener\">Credit<\/a>: NASA GSFC\/SVS\/Katrina Jackson<\/p>\n<p>As an example, if you train your AI on large data sets of human speech and conversation in a particular language, the AI will be very good at spotting patterns in that language and, with enough data, can become extremely effective at mimicking human speech patterns and conducting conversations in that language. 
Similarly, if you trained your AI on large data sets of:<\/p>\n<ul class=\"wp-block-list\">\n<li>images of Caucasian human faces,<\/li>\n<li>images of spiral galaxies,<\/li>\n<li>or gravitational wave events generated by black hole mergers,<\/li>\n<\/ul>\n<p>you could be confident that your artificial intelligence algorithm would be quite good at spotting patterns within those data sets.<\/p>\n<p>Given another example of a similar piece of data, your AI could then classify and characterize it, or you could go an alternative route and simply describe a system that had similar properties, and the well-trained AI algorithm would do an excellent job of generating a \u201cmock\u201d system that possessed the exact properties that you described. This is a common use of generative AI, which succeeds spectacularly at such tasks.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"4000\" height=\"3000\" src=\"https:\/\/www.europesays.com\/uk\/wp-content\/uploads\/2025\/07\/5183875426_f9ba044ae7_4k.jpg\" alt=\"A performer with a suitcase stands before a large, seated crowd in an outdoor amphitheater on a sunny day. An umbrella and a bicycle are visible in the foreground, sparking curiosity about why machines learn from such everyday scenes.\" class=\"wp-image-506793\"  \/><\/p>\n<p>With a large training data set, such as a large number of high-resolution faces, artificial intelligence and machine learning techniques can not only learn how to identify human faces, but can generate human faces with a variety of specific features. 
This crowd in Mauerpark, Berlin, would provide excellent training data for the generation of Caucasian faces, but a model trained on it would perform very poorly if asked to generate features common to African-American faces.\n<\/p>\n<p><a href=\"https:\/\/www.flickr.com\/photos\/loozrboy\/5183875426\" target=\"_blank\" rel=\"noopener\">Credit<\/a>: Loozrboy\/flickr<\/p>\n<p>However, that same well-trained AI will do a much worse job at identifying features in or generating images of inputs that fall outside of the training data set. The LLM that was trained on (and worked so well in) English would perform very poorly when presented with conversation in Tagalog; the AI program that was trained on Caucasian faces would perform poorly when asked to generate a Nigerian face; the model that was trained on spiral galaxies would perform poorly when given a red-and-dead elliptical galaxy; the gravitational wave program trained on binary black hole mergers would be of limited use when confronted with a white dwarf inspiraling into a supermassive black hole.<\/p>\n<p>And yet, an LLM is programmed explicitly to be a chatbot, which means one of its goals is to coax the user into continuing the conversation. 
Rather than be honest with the user about the limitations of its ability to answer correctly given the scope of its training data, LLMs <a href=\"https:\/\/futurism.com\/stanford-therapist-chatbots-encouraging-delusions\" target=\"_blank\" rel=\"noopener\">confidently and often dangerously misinform the humans<\/a> in conversation with them, with <a href=\"https:\/\/futurism.com\/judge-lawsuit-characterai-google\" target=\"_blank\" rel=\"noopener\">\u201ctherapist chatbots\u201d even encouraging or facilitating suicidal thoughts and plans<\/a>.<\/p>\n<p>Still, the success of LLMs in areas where they weren\u2019t explicitly trained, such as in <a href=\"https:\/\/en.wikipedia.org\/wiki\/Vibe_coding\" target=\"_blank\" rel=\"noopener\">vibe coding<\/a>, has led to people placing confidence in those same LLMs to perform tasks where their utility hasn\u2019t been validated.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"840\" height=\"840\" src=\"https:\/\/www.europesays.com\/uk\/wp-content\/uploads\/2025\/07\/Mathematical_Spaces.jpg\" alt=\"\" class=\"wp-image-506800\"  \/><\/p>\n<p>This graphical hierarchy of mathematical spaces goes from the most general type of space, a topological space, to the most specific: an inner product space. All metrics induce a topology, but not all topological spaces can be defined by a metric; all normed vector spaces induce a metric, but not all metric spaces are normed vector spaces; all inner product spaces induce a norm, but not all normed vector spaces are inner product spaces. Mathematical spaces play a vital role in the math powering artificial intelligence.\n<\/p>\n<p><a href=\"https:\/\/commons.wikimedia.org\/wiki\/File:Mathematical_Spaces.png\" target=\"_blank\" rel=\"noopener\">Credit<\/a>: Jhausauer\/public domain<\/p>\n<p>To be sure, the concepts of artificial intelligence and machine learning do have their place in fields like physics and astrophysics. 
Machine learning algorithms, when trained on a sufficiently large amount of relevant, high-quality data, are outstanding at spotting and uncovering patterns within that data. When prompted, post-training, with a query that\u2019s relevant to such a pattern found within that data set, the algorithm is excellent at reproducing the relevant pattern and utilizing it in a way that can match the user\u2019s query. It\u2019s why machine learning algorithms are so successful at finding exoplanets that humans missed, why they\u2019re so good at classifying astronomical objects that are ambiguous to humans, and why they\u2019re good at reproducing or simulating the physical phenomena that are found in nature.<\/p>\n<p>But now we have to remember the extraordinary difference between describing and deriving in a field like physics. With a large library of training data, it\u2019s easy for an LLM to identify patterns: patterns in speech and conversation, patterns that emerge within similar classes of problems, patterns that emerge in the data acquired concerning known objects, etc. But that doesn\u2019t mean that LLMs are competent at uncovering the underlying laws of physics that govern a system, even with arbitrarily large data sets. It doesn\u2019t mean that LLMs understand or can derive foundational relationships. And it doesn\u2019t change the fact that LLMs are more likely to \u201ccontinue a conversation\u201d with a user than they are to identify factually correct, relevant statements that provide meaningful, conclusive answers to a user\u2019s query.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"837\" height=\"1005\" src=\"https:\/\/www.europesays.com\/uk\/wp-content\/uploads\/2025\/07\/ok_noooo.jpg\" alt=\"\" class=\"wp-image-506785\"  \/><\/p>\n<p>A screenshot from a query about integers directed to iask.ai, along with its woefully incorrect response. 
The correct answer is -5; it took several additional prompts to coax the AI into the correct response.\n<\/p>\n<p><a href=\"https:\/\/www.iask.ai\/\" target=\"_blank\" rel=\"noopener\">Credit<\/a>: E. Siegel\/iask.ai<\/p>\n<p>While simply being an expert and conversing with an LLM about physics \u2014 or, as an incredible number of scientists and science communicators can attest, fielding emails from laypersons who\u2019ve been having \u201cvibe physics\u201d conversations with LLMs \u2014 easily reveals their limitations and exposes their hallucinations, researchers Keyon Vafa, Peter Chang, Ashesh Rambachan, and Sendhil Mullainathan set out to put machine learning systems to a more specific test. In <a href=\"https:\/\/arxiv.org\/abs\/2507.06952\" target=\"_blank\" rel=\"noopener\">a new paper submitted in July of 2025<\/a>, the authors sought to test an AI\u2019s ability to infer what they call a foundation model: an underlying law of reality that could then be applied to novel situations that go well beyond the \u201cpatterns\u201d found in the AI\u2019s training data.<\/p>\n<p>The way they seek to do this is by generating a large number of new, very small, synthetic data sets. They then challenge the AI\u2019s algorithm to fit a foundation model to those data sets, or in other words, to try to deduce what type of underlying law applies to all of those data sets when aggregated together to explain their behavior. Then, they wanted the AI itself to go and analyze the patterns that it found in those mathematical functions that it deduced, i.e., the foundational model, to search for inductive biases.<\/p>\n<p>Would it find a good foundational model? Would the model be extendable beyond the mere training data? 
And would it be successful at determining what sort of inductive biases were induced by its choice of model?<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"720\" height=\"480\" src=\"https:\/\/www.europesays.com\/uk\/wp-content\/uploads\/2025\/07\/ezgif-7-b64e71056b50.gif\" alt=\"kepler second law\" class=\"wp-image-142142\" style=\"width:840px\"  \/><\/p>\n<p>Even before we understood how the law of gravity worked, we were able to establish that any object in orbit around another obeyed Kepler\u2019s second law: it traced out equal areas in equal amounts of time, indicating that it must move more slowly when it\u2019s farther away and more quickly when it\u2019s closer. At every point in a planet\u2019s orbit, Kepler\u2019s laws dictate at what speed that planet must move.\n<\/p>\n<p><a href=\"https:\/\/commons.wikimedia.org\/wiki\/File:Kepler-second-law.gif\" target=\"_blank\" rel=\"noopener\">Credit<\/a>: Gonfer\/Wikimedia Commons, using Mathematica<\/p>\n<p>This vocabulary may be difficult for even those well-trained in physics to fully wrap their heads around, so perhaps it\u2019s good to use an example from history to highlight what they\u2019re talking about. One example is the foundational model of Newton\u2019s gravity. Although Newton\u2019s gravity was outstanding for predicting the orbits and motions of the planets, we already had a model for more than half a century prior to Newton that did precisely that: Kepler\u2019s laws. Kepler\u2019s laws were predictive in the sense that you could plunk down a planet at any distance from the Sun, give it an initial velocity, and Kepler\u2019s laws would allow you to know pretty much everything about its orbital properties, even extremely far into the future.<\/p>\n<p>But Kepler\u2019s laws were just predictive laws that worked for one particular set of circumstances: planets in orbit around the Sun. 
When Newton came up with his law of universal gravitation, it was no different from Kepler\u2019s laws in terms of the predictions it made for planets in motion around the Sun. But Newton\u2019s laws were far more powerful, and represented a foundational model in a way that Kepler\u2019s laws didn\u2019t. The reason is that Kepler\u2019s laws stopped at planets and orbits, but Newton\u2019s also explained:<\/p>\n<ul class=\"wp-block-list\">\n<li>the swinging of a pendulum on Earth,<\/li>\n<li>the motion of moons and satellites around planets,<\/li>\n<li>the behavior of rockets and the terrestrial motion of projectiles,<\/li>\n<\/ul>\n<p>and any other situation where gravitation is the ruling phenomenon. Whereas predictions only apply to one set of tasks, foundational models are extendable to many sets of tasks.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"3816\" height=\"2078\" src=\"https:\/\/www.europesays.com\/uk\/wp-content\/uploads\/2025\/07\/GvmCpuiWcAAEJuf.jpg\" alt=\"Comparison of true and predicted planetary orbits and forces in the solar system, featuring force law equations for Newtonian, transformer, and vibe physics models across multiple planets.\" class=\"wp-image-574248\"  \/><\/p>\n<p>This figure shows side-by-side panels for the true force law for planets in our Solar System (blue arrows) and the force laws recovered by an artificial intelligence program given enormous amounts of training data for millions or even billions of synthetic solar systems. Note the complexity, as well as the inconsistency, of the recovered force law using AI.\n<\/p>\n<p><a href=\"https:\/\/arxiv.org\/abs\/2507.06952\" target=\"_blank\" rel=\"noopener\">Credit<\/a>: K. 
Vafa et al., ICML 2025\/arXiv:2507.06952, 2025<\/p>\n<p>If you have a wholly new problem involving masses and motions, Newton\u2019s framework gives you a solid starting point, whereas Kepler\u2019s will only give you a solid starting point under a very restrictive set of circumstances; that\u2019s the difference between a predictive model and a foundational model. What the authors <a href=\"https:\/\/arxiv.org\/abs\/2507.06952\" target=\"_blank\" rel=\"noopener\">of this new paper<\/a> did was to apply their methodology to three classes of problems: orbital problems (similar to Kepler and Newton), as well as two other (lattice and Othello) problems. As far as predictions went, it was extremely successful, just as Kepler\u2019s predictive model was. Even far into the future, it could reproduce the actual behavior of the system.<\/p>\n<p>But did the model take that leap, and discover Newton\u2019s laws? Did it find the underlying foundational model? Or, perhaps even better, did it find a superior model to Newton\u2019s?<\/p>\n<p>The answer, quite definitively, was no. To demonstrate the model\u2019s failures, they compelled it to predict force vectors on a small dataset of planets: planets within our own Solar System. Instead of discovering the Newtonian phenomenon of centripetal force \u2014 a force directed towards the Sun, for planets \u2014 they wound up with a very strained, unphysical force law. And yet, because it still gave correct orbits, the model had no way of self-correcting. 
Even when fine-tuning the model on larger scales, predicting forces across thousands of stellar and planetary systems, it not only didn\u2019t recover Newton\u2019s law; it recovered different, mutually inconsistent laws for different galaxy samples.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"2044\" height=\"1300\" src=\"https:\/\/www.europesays.com\/uk\/wp-content\/uploads\/2025\/07\/GvmDHAdXcAAHy-r.jpg\" alt=\"Line graphs compare LLM-predicted and true force magnitudes over time for five solar systems using three different models: o3, Claude Sonnet 4, and Gemini 2.5 Pro, offering fresh insight into vibe physics.\" class=\"wp-image-574247\" style=\"object-fit:cover\"  \/><\/p>\n<p>When the LLMs o3, Claude Sonnet 4, and Gemini 2.5 Pro were asked to reconstruct force laws for a variety of mock solar systems, they were all unable to recover something equivalent to Newton\u2019s law of universal gravitation, despite the LLMs themselves having been trained on Newton\u2019s laws. It\u2019s stark evidence for how LLMs rely on pattern matching, and are unable to reach even basically valid scientific conclusions about foundational models.\n<\/p>\n<p><a href=\"https:\/\/arxiv.org\/abs\/2507.06952\" target=\"_blank\" rel=\"noopener\">Credit<\/a>: K. Vafa et al., ICML 2025\/arXiv:2507.06952, 2025<\/p>\n<p>When they then put more general models (LLMs) \u2014 like o3, Claude Sonnet 4, and Gemini 2.5 Pro \u2014 to the same test, they revealed what a small number (about 2%) of the force magnitudes actually were, without explicitly telling the LLMs the underlying force law; still, the LLMs couldn\u2019t recover that law. This is a remarkable failure of generalization and extrapolation, because all three of these LLMs were explicitly trained on Newton\u2019s laws. Even so, they couldn\u2019t get the rest of the forces when prompted to infer the remaining outcomes. 
In other words, even in the extremely basic scenario of \u201cif I give you the orbits of an enormous number of planets in planetary and stellar systems, can you infer Newton\u2019s law of gravity,\u201d every LLM tested failed spectacularly.<\/p>\n<p>This is the key finding of this new work: you don\u2019t just want to study \u201ccan my model predict behavior for a new example of the type of data that the model has been trained on?\u201d Instead, you want to also study behavior on new tasks, where if it did properly learn the foundational model that underlies the observed phenomenon, it would be able to apply it to a novel situation and use that model to make predictions in those new, relevant situations as well. In no cases did the LLMs that we have today do any such thing, which indicates that when you (or anyone) has a \u201cdeep conversation\u201d about physics, including about speculative extensions to known physics, you can be completely confident that the LLM is solely giving you patterned speech responses; there is no physical merit to what it states.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"1200\" height=\"675\" src=\"https:\/\/www.europesays.com\/uk\/wp-content\/uploads\/2025\/07\/connect.jpg\" alt=\"A robotic hand and a human hand reach towards each other, with a glowing DNA helix in the background, symbolizing humanity's last exam in the intersection of technology and biology.\" class=\"wp-image-530893\"  \/><\/p>\n<p>While humans are typically regarded as the most intelligent thing to ever arise on planet Earth, many are attempting to create an artificial general intelligence that surpasses human limits. Current attempts to measure or assess the \u201cintelligence\u201d of an AI or an LLM must take care that memorization is not used as a substitute for intelligence.\n<\/p>\n<p><a href=\"https:\/\/adamwillows.com\/resources\/technology-and-human-nature\/\" target=\"_blank\" rel=\"noopener\">Credit<\/a>: Adam M. 
Willows\/public domain<\/p>\n<p>All of which brings us back to <a href=\"https:\/\/gizmodo.com\/billionaires-convince-themselves-ai-is-close-to-making-new-scientific-discoveries-2000629060\" target=\"_blank\" rel=\"noopener\">the notion of vibe physics<\/a>. Sure, an LLM may sound very intelligent and knowledgeable about theoretical physics, particularly if you yourself aren\u2019t an expert in the areas of theoretical physics that you\u2019re discussing with it. But this is exactly the problem any non-expert has when talking with:<\/p>\n<ul class=\"wp-block-list\">\n<li>a bona fide, honest, scrupulous expert,<\/li>\n<li>a dishonest grifter posing as a scrupulous expert,<\/li>\n<li>or a confident chatbot that has no expertise, but sounds like it does.<\/li>\n<\/ul>\n<p>The problem is, without the necessary expertise to evaluate what you\u2019re being told on its merits, you\u2019re likely to evaluate it based solely on vibes: in particular, on the vibes of how it makes you feel about the answers it gave you.<\/p>\n<p>This is the most dangerous thing for anyone who\u2019s invested in being told the truth about reality: the potential for replacing, in your own mind, an accurate picture of reality with an inaccurate but flattering hallucination. Rest assured, if you\u2019re a non-expert who has an idea about theoretical physics, and you\u2019ve been \u201cdeveloping\u201d this idea with a large language model, you most certainly do not have a meritorious theory. In physics in particular, unless you\u2019re actually performing the necessary quantitative calculations to see if the full suite of your predictions is congruent with reality, you haven\u2019t even taken the first step towards formulating a new theory. 
While the notion of \u201cvibe physics\u201d may be alluring to many, especially to armchair physicists, all it truly does is <a href=\"https:\/\/www.youtube.com\/watch?v=TMoz3gSXBcY\" target=\"_blank\" rel=\"noopener\">foster and develop a new species of crackpot<\/a>: one powered by AI slop.<\/p>\n","protected":false},"excerpt":{"rendered":"Sign up for the Starts With a Bang newsletter Travel the universe with Dr. Ethan Siegel as he&hellip;\n","protected":false},"author":2,"featured_media":303539,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3845],"tags":[74,70,16,15],"class_list":{"0":"post-303538","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-physics","8":"tag-physics","9":"tag-science","10":"tag-uk","11":"tag-united-kingdom"},"share_on_mastodon":{"url":"https:\/\/pubeurope.com\/@uk\/114941429786020488","error":""},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts\/303538","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/comments?post=303538"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts\/303538\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/media\/303539"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp
\/v2\/media?parent=303538"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/categories?post=303538"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/tags?post=303538"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}