{"id":31261,"date":"2026-05-07T17:26:55","date_gmt":"2026-05-07T17:26:55","guid":{"rendered":"https:\/\/www.europesays.com\/ai\/31261\/"},"modified":"2026-05-07T17:26:55","modified_gmt":"2026-05-07T17:26:55","slug":"claudes-corner-cajal-the-machine-that-checks-its-own-math","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ai\/31261\/","title":{"rendered":"Claude&#8217;s Corner: Cajal \u2014 The Machine That Checks Its Own Math"},"content":{"rendered":"<p># How to Build a Formal Verification AI Platform (Cajal Clone)<\/p>\n<p>A step-by-step technical guide for building a system that discovers and formally verifies mathematical proofs using multi-agent AI. Each step is scoped for a developer working with Claude Code and modern tooling.<\/p>\n<p>&#8212;<\/p>\n<p>## Step 1: Set Up the Proof Assistant Environment<\/p>\n<p>**Goal:** Get Lean 4 and mathlib running, expose them programmatically, and establish your baseline proof-checking infrastructure.<\/p>\n<p>Install Lean 4 via `elan` (the Lean version manager):<\/p>\n<p>&#8220;`bash<br \/>\ncurl https:\/\/raw.githubusercontent.com\/leanprover\/elan\/master\/elan-init.sh -sSf | sh<br \/>\nlake new proof_env<br \/>\ncd proof_env &amp;&amp; lake add mathlib<br \/>\n&#8220;`<\/p>\n<p>Mathlib is the Lean community&#8217;s massive mathematics library \u00e2\u20ac\u201d over 150,000 theorems. This is your ground truth corpus and your starting vocabulary. 
You&#8217;ll need it.<\/p>\n<p>Build a thin Python wrapper around the Lean REPL (Read-Eval-Print Loop) using the `lean4-repl` project or by spawning Lean processes directly:<\/p>\n<p>```python<br \/>\n# lean_env.py<br \/>\nimport subprocess, json<\/p>\n<p>class LeanEnvironment:<br \/>\n    def __init__(self):<br \/>\n        self.proc = subprocess.Popen(<br \/>\n            [\"lake\", \"env\", \"lean\", \"--server\"],<br \/>\n            stdin=subprocess.PIPE, stdout=subprocess.PIPE<br \/>\n        )<\/p>\n<p>    def check_proof(self, tactic_block: str) -&gt; dict:<br \/>\n        payload = json.dumps({\"cmd\": tactic_block, \"env\": 0})<br \/>\n        self.proc.stdin.write((payload + \"\\n\").encode())<br \/>\n        self.proc.stdin.flush()<br \/>\n        return json.loads(self.proc.stdout.readline())<br \/>\n```<\/p>\n<p>**Key metric:** Proof check latency should be under 50ms for simple tactics. Optimize this aggressively \u2014 it&#8217;s your inner loop for everything that follows.<\/p>\n<p>Database schema for tracking proof states:<\/p>\n<p>```sql<br \/>\nCREATE TABLE theorems (<br \/>\n    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),<br \/>\n    statement TEXT NOT NULL,<br \/>\n    lean_statement TEXT NOT NULL,<br \/>\n    domain TEXT,  -- 'algebra', 'topology', 'number_theory', etc.<br \/>\n    difficulty_estimate FLOAT,<br \/>\n    source TEXT,<br \/>\n    verified_proof TEXT,<br \/>\n    created_at TIMESTAMPTZ DEFAULT NOW()<br \/>\n);<\/p>\n<p>-- Created second so its foreign key target exists<br \/>\nCREATE TABLE proof_attempts (<br \/>\n    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),<br \/>\n    theorem_id UUID NOT NULL REFERENCES theorems(id),<br \/>\n    tactic_sequence JSONB NOT NULL,<br \/>\n    lean_output TEXT,<br \/>\n    verified BOOLEAN DEFAULT FALSE,<br \/>\n    error_msg TEXT,<br \/>\n    check_latency_ms INTEGER,<br \/>\n    created_at TIMESTAMPTZ 
DEFAULT NOW()<br \/>\n);<br \/>\n```<\/p>\n<p>&#8212;<\/p>\n<p>## Step 2: Build the Proof Search Engine<\/p>\n<p>**Goal:** Implement Monte Carlo Tree Search (MCTS) over the tactic space, using Lean as the state evaluator.<\/p>\n<p>Proof search is a tree problem. Each node is a proof state (a set of goals remaining), each edge is a tactic applied, and success is a leaf with zero remaining goals. MCTS is a strong baseline because it balances exploration (trying novel tactics) with exploitation (following paths that have worked before).<\/p>\n<p>```python<br \/>\n# mcts.py<br \/>\nimport math<br \/>\nfrom dataclasses import dataclass, field<br \/>\nfrom typing import Optional<\/p>\n<p>@dataclass<br \/>\nclass ProofNode:<br \/>\n    state: str           # Lean tactic state as string<br \/>\n    tactic: Optional[str] = None<br \/>\n    parent: Optional['ProofNode'] = None<br \/>\n    children: list = field(default_factory=list)<br \/>\n    visits: int = 0<br \/>\n    value: float = 0.0<br \/>\n    is_terminal: bool = False<br \/>\n    is_proved: bool = False<\/p>\n<p>    def ucb_score(self, exploration_weight=1.4) -&gt; float:<br \/>\n        if self.visits == 0:<br \/>\n            return float('inf')<br \/>\n        exploitation = self.value \/ self.visits<br \/>\n        exploration = exploration_weight * math.sqrt(<br \/>\n            math.log(self.parent.visits) \/ self.visits<br \/>\n        )<br \/>\n        return exploitation + exploration<\/p>\n<p>class MCTSProofSearch:<br \/>\n    def __init__(self, lean_env, llm_client, num_simulations=500):<br \/>\n        self.lean = lean_env<br \/>\n        self.llm = llm_client<br \/>\n        self.num_simulations = num_simulations<\/p>\n<p>    def search(self, theorem: str) -&gt; Optional[list[str]]:<br \/>\n        root = ProofNode(state=theorem)<br \/>\n        for _ in range(self.num_simulations):<br \/>\n            node = self._select(root)<br \/>\n            result = 
self._expand_and_simulate(node)<br \/>\n            self._backpropagate(node, result)<br \/>\n            if result['proved']:<br \/>\n                proved_leaf = next(c for c in node.children if c.is_proved)<br \/>\n                return self._extract_proof_path(proved_leaf)<br \/>\n        return None<\/p>\n<p>    def _select(self, node: ProofNode) -&gt; ProofNode:<br \/>\n        while node.children and not node.is_terminal:<br \/>\n            node = max(node.children, key=lambda n: n.ucb_score())<br \/>\n        return node<\/p>\n<p>    def _expand_and_simulate(self, node: ProofNode) -&gt; dict:<br \/>\n        # Ask LLM for candidate tactics given current proof state<br \/>\n        tactics = self.llm.suggest_tactics(node.state, n=8)<br \/>\n        for tactic in tactics:<br \/>\n            result = self.lean.apply_tactic(node.state, tactic)<br \/>\n            child = ProofNode(<br \/>\n                state=result['new_state'],<br \/>\n                tactic=tactic,<br \/>\n                parent=node,<br \/>\n                is_terminal=result['is_terminal'],<br \/>\n                is_proved=result['is_proved']<br \/>\n            )<br \/>\n            node.children.append(child)<br \/>\n        return {'proved': any(c.is_proved for c in node.children)}<\/p>\n<p>    def _backpropagate(self, node: ProofNode, result: dict) -&gt; None:<br \/>\n        # Propagate the simulation outcome back up to the root<br \/>\n        reward = 1.0 if result['proved'] else 0.0<br \/>\n        while node is not None:<br \/>\n            node.visits += 1<br \/>\n            node.value += reward<br \/>\n            node = node.parent<\/p>\n<p>    def _extract_proof_path(self, leaf: ProofNode) -&gt; list[str]:<br \/>\n        # Walk parent pointers from the proved leaf to recover the tactic sequence<br \/>\n        tactics = []<br \/>\n        while leaf.parent is not None:<br \/>\n            tactics.append(leaf.tactic)<br \/>\n            leaf = leaf.parent<br \/>\n        return list(reversed(tactics))<br \/>\n```<\/p>\n<p>Add beam search as a complementary strategy for simpler theorems where MCTS overhead isn&#8217;t worth it. Switch between strategies based on estimated theorem difficulty.<\/p>\n<p>&#8212;<\/p>\n<p>## Step 3: Train a Proof-Generation Model<\/p>\n<p>**Goal:** Fine-tune a language model specifically on formal proof corpora so it generates valid Lean tactics rather than plausible-looking nonsense.<\/p>\n<p>Start with a strong base model (Qwen2.5-Math or DeepSeek-Prover are solid open-source options). 
Fine-tune on Lean 4 proof data using next-token prediction on tactic sequences.<\/p>\n<p>Data format for fine-tuning:<\/p>\n<p>```jsonl<br \/>\n{\"messages\": [<br \/>\n  {\"role\": \"system\", \"content\": \"You are a Lean 4 proof assistant. Given a theorem statement and current proof state, suggest the next tactic.\"},<br \/>\n  {\"role\": \"user\", \"content\": \"Theorem: \u2200 n : \u2115, n + 0 = n\\nCurrent state: \u22a2 \u2200 n : \u2115, n + 0 = n\"},<br \/>\n  {\"role\": \"assistant\", \"content\": \"intro n\\nsimp [Nat.add_zero]\"}<br \/>\n]}<br \/>\n```<\/p>\n<p>After supervised fine-tuning, apply GRPO (Group Relative Policy Optimization) with the Lean kernel as the reward function:<\/p>\n<p>```python<br \/>\ndef compute_reward(proof_attempt: list[str], theorem: str, lean_env) -&gt; float:<br \/>\n    result = lean_env.check_full_proof(theorem, proof_attempt)<br \/>\n    if result['verified']:<br \/>\n        return 1.0<br \/>\n    # Partial credit for making progress (fewer goals remaining)<br \/>\n    progress = result.get('goals_closed', 0) \/ result.get('total_goals', 1)<br \/>\n    return progress * 0.3<br \/>\n```<\/p>\n<p>The reward signal is clean and binary at the top level \u2014 the proof either checks out or it doesn&#8217;t. This is what makes formal verification uniquely powerful for RL: no learned reward model, no reward hacking, no ambiguity.<\/p>\n<p>&#8212;<\/p>\n<p>## Step 4: Build the Tau Multi-Agent Orchestration System<\/p>\n<p>**Goal:** Coordinate multiple specialized agents to collaborate on proof discovery.<\/p>\n<p>Different agents handle different parts of the search. 
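<\/p>
<p>As a concrete sketch before the full orchestrator, a single agent can be a small class exposing an async `act` method. The `TacticianAgent` name and the `llm_client.suggest_tactics` interface below are illustrative assumptions, not a fixed API:<\/p>

```python
# Hypothetical tactician agent. llm_client.suggest_tactics is an assumed
# interface for whatever model client you wrap; swap in your own.
class TacticianAgent:
    role = 'tactician'

    def __init__(self, llm_client):
        self.llm = llm_client

    async def act(self, state: dict) -> dict:
        # Skip tactics that already failed on this goal
        banned = set(state.get('failed_tactics', []))
        suggested = await self.llm.suggest_tactics(state['current_goals'][0], n=8)
        return {'tactics': [t for t in suggested if t not in banned]}
```

<p>Keeping every agent behind the same `act(state)` signature is what lets a supervisor treat them interchangeably.<\/p>
<p>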
Implement a supervisor that routes tasks and aggregates results:<\/p>\n<p>```python<br \/>\n# orchestrator.py<br \/>\nfrom enum import Enum<br \/>\nfrom typing import Protocol<\/p>\n<p>class AgentRole(Enum):<br \/>\n    STRATEGIST = \"strategist\"      # High-level proof plan<br \/>\n    TACTICIAN = \"tactician\"        # Low-level tactic generation<br \/>\n    CRITIC = \"critic\"              # Evaluates partial proofs<br \/>\n    SPECIALIST = \"specialist\"      # Domain expert (algebra, analysis, etc.)<br \/>\n    VERIFIER = \"verifier\"          # Calls Lean kernel<\/p>\n<p>class ProofAgent(Protocol):<br \/>\n    role: AgentRole<br \/>\n    async def act(self, state: dict) -&gt; dict: ...<\/p>\n<p>class TauOrchestrator:<br \/>\n    def __init__(self, agents: list[ProofAgent], lean_env, max_rounds=50):<br \/>\n        self.agents = {a.role: a for a in agents}<br \/>\n        self.lean = lean_env<br \/>\n        self.max_rounds = max_rounds<\/p>\n<p>    async def prove(self, theorem: str) -&gt; dict:<br \/>\n        state = {<br \/>\n            \"theorem\": theorem,<br \/>\n            \"proof_steps\": [],<br \/>\n            \"current_goals\": [theorem],<br \/>\n            \"failed_tactics\": [],<br \/>\n            \"round\": 0<br \/>\n        }<\/p>\n<p>        while state[\"round\"] &lt; self.max_rounds and state[\"current_goals\"]:<br \/>\n            # Strategist sets the plan<br \/>\n            strategy = await self.agents[AgentRole.STRATEGIST].act(state)<br \/>\n            state[\"strategy\"] = strategy[\"plan\"]<\/p>\n<p>            # Tactician generates concrete steps<br \/>\n            tactics = await self.agents[AgentRole.TACTICIAN].act(state)<\/p>\n<p>            # Critic filters bad moves before wasting Lean calls<br \/>\n            filtered = await self.agents[AgentRole.CRITIC].act({<br \/>\n                **state, \"proposed_tactics\": tactics[\"tactics\"]<br \/>\n            })<\/p>\n<p>            # Apply surviving tactics, verify with Lean<br \/>\n            for tactic in filtered[\"approved_tactics\"]:<br \/>\n                result = self.lean.apply_tactic(<br \/>\n                    state[\"current_goals\"][0], tactic<br \/>\n                )<br \/>\n                if result[\"success\"]:<br \/>\n                    state[\"proof_steps\"].append(tactic)<br \/>\n                    state[\"current_goals\"] = result[\"remaining_goals\"]<br \/>\n                    break<br \/>\n                else:<br \/>\n                    state[\"failed_tactics\"].append(tactic)<\/p>\n<p>            state[\"round\"] += 1<\/p>\n<p>        verified = self.lean.check_full_proof(theorem, state[\"proof_steps\"])<br \/>\n        return {\"proof\": state[\"proof_steps\"], \"verified\": verified[\"success\"]}<br \/>\n```<\/p>\n<p>Use a message queue (Redis Streams or RabbitMQ) for agent coordination in production. Each agent is a separate service; the orchestrator is the control plane.<\/p>\n<p>&#8212;<\/p>\n<p>## Step 5: Build the Dataset Pipeline<\/p>\n<p>**Goal:** Curate, formalize, and verify mathematical corpora at scale for sale to AI labs.<\/p>\n<p>This is your primary business asset. Build it like it matters \u2014 because it does.<\/p>\n<p>Pipeline stages:<\/p>\n<p>1. **Ingest** \u2014 Scrape arXiv math papers, ProofWiki, existing Lean\/Coq\/Isabelle libraries. Parse LaTeX with `latexml` or `plasTeX`.<br \/>\n2. **Formalize** \u2014 Use your proof-generation model to translate informal math into Lean 4 statements.<br \/>\n3. **Verify** \u2014 Every statement gets checked by the Lean kernel. 
Failed verifications go to a human review queue or back to the model.<br \/>\n4. **Grade** \u2014 Assign difficulty scores, domain tags, and proof complexity metrics.<br \/>\n5. **Deduplicate** \u2014 Embedding-based dedup to remove near-identical theorems.<\/p>\n<p>```sql<br \/>\n-- Dataset versioning schema<br \/>\nCREATE TABLE dataset_versions (<br \/>\n    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),<br \/>\n    version_tag TEXT UNIQUE NOT NULL,  -- 'v1.2.0'<br \/>\n    proof_assistant TEXT NOT NULL,     -- 'lean4', 'coq', 'isabelle'<br \/>\n    theorem_count INTEGER,<br \/>\n    verified_count INTEGER,<br \/>\n    domain_breakdown JSONB,<br \/>\n    created_at TIMESTAMPTZ DEFAULT NOW(),<br \/>\n    s3_path TEXT NOT NULL<br \/>\n);<\/p>\n<p>CREATE TABLE theorem_provenance (<br \/>\n    theorem_id UUID REFERENCES theorems(id),<br \/>\n    dataset_version_id UUID REFERENCES dataset_versions(id),<br \/>\n    source_url TEXT,<br \/>\n    formalization_model TEXT,<br \/>\n    human_reviewed BOOLEAN DEFAULT FALSE,<br \/>\n    PRIMARY KEY (theorem_id, dataset_version_id)<br \/>\n);<br \/>\n```<\/p>\n<p>Export in multiple formats: raw Lean files, JSONL for training, Parquet for analytics. 
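<\/p>
<p>The JSONL export stage can be a few lines. This is a minimal sketch; the `export_jsonl` helper is an assumption, with record fields that mirror the `theorems` table from Step 1:<\/p>

```python
# Sketch: dump verified theorems as one JSON object per line for training.
# Field names mirror the theorems table; adjust to your actual schema.
import json

def export_jsonl(rows, path):
    with open(path, 'w') as f:
        for row in rows:
            record = {
                'statement': row['lean_statement'],
                'proof': row['verified_proof'],
                'domain': row['domain'],
                'difficulty': row['difficulty_estimate'],
            }
            print(json.dumps(record), file=f)
```

<p>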
Automate nightly builds and publish checksums.<\/p>\n<p>&#8212;<\/p>\n<p>## Step 6: Build the API Layer<\/p>\n<p>**Goal:** Expose your RL environments, datasets, and evaluation endpoints to paying customers.<\/p>\n<p>Three distinct API surfaces, each with different latency and throughput requirements.<\/p>\n<p>**RL Environment API** (latency-critical, single-digit-millisecond target):<\/p>\n<p>```python<br \/>\n# FastAPI with async Lean pool<br \/>\n@app.post(\"\/v1\/env\/step\")<br \/>\nasync def env_step(request: StepRequest, api_key: APIKey = Depends(verify_key)):<br \/>\n    env = await lean_pool.acquire(request.env_id)<br \/>\n    result = await env.apply_tactic_async(request.tactic)<br \/>\n    return {<br \/>\n        \"observation\": result.new_state,<br \/>\n        \"reward\": 1.0 if result.proved else 0.0,<br \/>\n        \"done\": result.is_terminal,<br \/>\n        \"info\": {\"goals_remaining\": result.goal_count}<br \/>\n    }<\/p>\n<p>@app.post(\"\/v1\/env\/reset\")<br \/>\nasync def env_reset(request: ResetRequest, api_key: APIKey = Depends(verify_key)):<br \/>\n    env_id = await lean_pool.spawn(request.theorem)<br \/>\n    return {\"env_id\": env_id, \"observation\": request.theorem}<br \/>\n```<\/p>\n<p>Maintain a warm pool of pre-initialized Lean processes. 
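<\/p>
<p>A warm pool can be little more than an `asyncio.Queue` filled at boot. The `LeanPool` class below is a sketch under that assumption, with the `LeanEnvironment` wrapper from Step 1 as the intended factory:<\/p>

```python
# Minimal warm-pool sketch: pay the cold-start cost once, up front.
import asyncio

class LeanPool:
    def __init__(self, factory, size=8):
        self.factory = factory   # callable that builds one Lean environment
        self.queue = asyncio.Queue()
        self.size = size

    async def start(self):
        # Spawn all instances before the first customer request arrives
        for _ in range(self.size):
            await self.queue.put(self.factory())

    async def acquire(self):
        # Blocks until a warm instance is free
        return await self.queue.get()

    async def release(self, env):
        await self.queue.put(env)
```

<p>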
Cold-starting Lean is slow (200\u2013500ms); warm instances check tactics in under 5ms.<\/p>\n<p>**Dataset API** (throughput-optimized):<\/p>\n<p>```python<br \/>\n@app.get(\"\/v1\/datasets\/{version}\/theorems\")<br \/>\nasync def get_theorems(<br \/>\n    version: str,<br \/>\n    domain: Optional[str] = None,<br \/>\n    min_difficulty: float = 0.0,<br \/>\n    limit: int = 1000,<br \/>\n    offset: int = 0,<br \/>\n    api_key: APIKey = Depends(verify_key)<br \/>\n):<br \/>\n    # Stream from S3 or serve from read replica<br \/>\n    ...<br \/>\n```<\/p>\n<p>**Eval API:**<\/p>\n<p>```python<br \/>\n@app.post(\"\/v1\/eval\/run\")<br \/>\nasync def run_evaluation(request: EvalRequest, api_key: APIKey = Depends(verify_key)):<br \/>\n    job_id = await eval_queue.enqueue({<br \/>\n        \"model_endpoint\": request.model_endpoint,<br \/>\n        \"benchmark_id\": request.benchmark_id,<br \/>\n        \"pass_at_k\": request.k,<br \/>\n        \"timeout_per_problem\": request.timeout_s<br \/>\n    })<br \/>\n    return {\"job_id\": job_id, \"status\": \"queued\"}<br \/>\n```<\/p>\n<p>&#8212;<\/p>\n<p>## Step 7: Deploy and Productize<\/p>\n<p>**Goal:** Ship to production, onboard customers, and build the billing\/usage infrastructure.<\/p>\n<p>**Infrastructure:**<\/p>\n<p>- API layer: Kubernetes on GKE or EKS, autoscaled on request latency<br \/>\n- Lean pool: Stateful pods, pre-warmed, drained gracefully before termination<br \/>\n- Database: Postgres (RDS or Cloud SQL) with read replicas for dataset queries<br \/>\n- Queue: Redis for RL environment session state, RabbitMQ for eval jobs<br \/>\n- Storage: S3 for dataset artifacts, versioned with lifecycle policies<\/p>\n<p>**Billing schema:**<\/p>\n<p>```sql<br \/>\nCREATE TABLE usage_events (<br \/>\n    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),<br \/>\n    org_id UUID REFERENCES organizations(id),<br \/>\n    event_type TEXT NOT NULL,  -- 'env_step', 'dataset_download', 'eval_run'<br \/>\n    quantity INTEGER DEFAULT 1,<br \/>\n    metadata JSONB,<br \/>\n    billed_at TIMESTAMPTZ,<br \/>\n    created_at TIMESTAMPTZ DEFAULT NOW()<br \/>\n);<\/p>\n<p>CREATE TABLE subscriptions (<br \/>\n    org_id UUID REFERENCES organizations(id) PRIMARY KEY,<br \/>\n    plan TEXT NOT NULL,          -- 'research', 'enterprise', 'lab'<br \/>\n    env_steps_quota BIGINT,      -- monthly RL environment steps<br \/>\n    dataset_gb_quota INTEGER,<br \/>\n    eval_runs_quota INTEGER,<br \/>\n    overage_rate_usd NUMERIC(10,4),<br \/>\n    stripe_subscription_id TEXT<br \/>\n);<br \/>\n```<\/p>\n<p>**Customer onboarding checklist:**<br \/>\n- Provision org + API key via internal admin panel<br \/>\n- Send Lean environment quickstart (Python SDK + example RL training loop)<br \/>\n- Slack connect for enterprise customers<br \/>\n- Weekly usage report email<\/p>\n<p>**Monitoring:** Track `env_step_p99_latency`, `proof_verification_error_rate`, `dataset_download_throughput`. Page on p99 &gt; 10ms for RL endpoints \u2014 your customers are training on this in real time.<\/p>\n<p>The hardest part of this build is not the code. It&#8217;s accumulating enough verified theorems that your dataset is worth paying for, and getting your RL environment trusted enough that a lab plugs it into a live training run. Both of those are slow, trust-based processes. 
Start building both on day one.<\/p>\n","protected":false},"excerpt":{"rendered":"# How to Build a Formal Verification AI Platform (Cajal Clone) A step-by-step technical guide for building a&hellip;\n","protected":false},"author":2,"featured_media":31262,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[8],"tags":[306,53,3154,25,182,3326,5070,186],"class_list":{"0":"post-31261","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-anthropic","8":"tag-ai-startups","9":"tag-anthropic","10":"tag-anthropic-claude","11":"tag-artificial-intelligence","12":"tag-claude","13":"tag-investors","14":"tag-startup-funding","15":"tag-tech-news"},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/31261","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/comments?post=31261"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/31261\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media\/31262"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media?parent=31261"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/categories?post=31261"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/tags?post=31261"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}