{"id":778359,"date":"2026-05-06T23:50:11","date_gmt":"2026-05-06T23:50:11","guid":{"rendered":"https:\/\/www.europesays.com\/us\/778359\/"},"modified":"2026-05-06T23:50:11","modified_gmt":"2026-05-06T23:50:11","slug":"plasticity-and-language-in-the-anaesthetized-human-hippocampus","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/us\/778359\/","title":{"rendered":"Plasticity and language in the anaesthetized human hippocampus"},"content":{"rendered":"<p>Patient recruitment<\/p>\n<p>Experiments were conducted according to protocol guidelines approved by the Institutional Review Board for Baylor College of Medicine and Affiliated Hospitals (H-50885 for the Neuropixels recordings and H-18112 for the EMU recordings). All of the recruited patients for the Neuropixels recordings were diagnosed with drug-resistant temporal lobe epilepsy and were scheduled to undergo an anteromesial temporal lobectomy for seizure control. All of the patients provided written informed consent to participate in the study and were aware that participation was voluntary and would not affect their clinical course. Included patients\u2019 age ranged from 25\u201354 years old (average, 39.6\u2009\u00b1\u200911.8), with three female and four male patients. Four resections were on the left side, and three were on the right. In one individual (p3), recordings were performed in the middle temporal lobe before resection. None of the patients reported explicit memory of intraoperative events after the case when discussed in the post-operative care unit or while recovering in the hospital the next day.<\/p>\n<p>Note that we include for comparison purposes a cohort of awake patients listening to podcast stimuli. These patients were recruited from patients undergoing invasive recordings in the EMU at Baylor St Luke\u2019s Hospital. Details on methods for this group of patients were reported previously<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 21\" title=\"Franch, M. et al. A vectorial code for semantics in human hippocampus. Preprint at bioRxiv &#010;                https:\/\/doi.org\/10.1101\/2025.02.21.639601&#010;                &#010;               (2025).\" href=\"http:\/\/www.nature.com\/articles\/s41586-026-10448-0#ref-CR21\" id=\"ref-link-section-d67402416e2284\" rel=\"nofollow noopener\" target=\"_blank\">21<\/a>,<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 34\" title=\"Katlowitz, K. A. et al. Attention is all you need (in the brain): semantic contextualization in human hippocampus. Preprint at bioRxiv &#010;                https:\/\/doi.org\/10.1101\/2025.06.23.661103&#010;                &#010;               (2025).\" href=\"http:\/\/www.nature.com\/articles\/s41586-026-10448-0#ref-CR34\" id=\"ref-link-section-d67402416e2287\" rel=\"nofollow noopener\" target=\"_blank\">34<\/a>,<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" title=\"Zhu, H. et al. Semantic axes in the brain support analogical representations. Preprint at bioRxiv &#10;                https:\/\/doi.org\/10.64898\/2026.01.28.702241&#10;                &#10;               (2026).\" href=\"#ref-CR52\" id=\"ref-link-section-d67402416e2290\">52<\/a>,<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" title=\"Chavez, A. G. 
et al. Mirror manifolds: partially overlapping neural subspaces for speaking and listening. Preprint at bioRxiv &#10;                https:\/\/doi.org\/10.1101\/2025.09.20.677504&#10;                &#10;               (2025).\" href=\"#ref-CR53\" id=\"ref-link-section-d67402416e2290_1\">53<\/a>,<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 54\" title=\"Yan, X. et al. Shared neural geometries for bilingual semantic representations. Preprint at bioRxiv &#010;                https:\/\/doi.org\/10.1101\/2025.11.16.688726&#010;                &#010;               (2025).\" href=\"http:\/\/www.nature.com\/articles\/s41586-026-10448-0#ref-CR54\" id=\"ref-link-section-d67402416e2293\" rel=\"nofollow noopener\" target=\"_blank\">54<\/a>.<\/p>\n<p>Neuropixels data acquisition set-up and intraoperative recordings<\/p>\n<p>Neuropixels 1.0-S probes (IMEC) with 384 recording channels (total recording contacts\u2009=\u2009960, usable recording contacts\u2009=\u2009384) were used for recordings (dimensions: 70\u2009\u03bcm width, 100\u2009\u03bcm thickness, 10\u2009mm length). The Neuropixels probe, consisting of both the recording shank and the headstage, were individually sterilized with ethylene oxide (Bioseal)<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 6\" title=\"Coughlin, B. et al. Modified Neuropixels probes for recording human neurophysiology in the operating room. Nat. Protoc. 18, 2927&#x2013;2953 (2023).\" href=\"http:\/\/www.nature.com\/articles\/s41586-026-10448-0#ref-CR6\" id=\"ref-link-section-d67402416e2305\" rel=\"nofollow noopener\" target=\"_blank\">6<\/a>. Our intraoperative data acquisition system included a custom-built rig including a PXI chassis affixed with an IMEC\/Neuropixels PXIe Acquisition module (PXIe-1071) and National Instruments DAQ (PXI-6133) for acquiring neuronal signals and any other task-relevant analogue\/digital signals respectively. Our recording rig was certified by the Biomedical Engineering at Baylor St Luke\u2019s Medical Center, where the intraoperative recording experiments were conducted. A high-performance computer (10-core processor) was used for neural data acquisition using open-source software such as SpikeGLX 3.0 and OpenEphys v.0.6x for data acquisition (the action potential (AP) band was band-pass filtered from 0.3\u2009kHz to 10\u2009kHz and acquired at 30\u2009kHz sampling rate; the LFP band was band-pass filtered from 0.5\u2009Hz to 500\u2009Hz and acquired at a 2,500\u2009Hz sampling rate). We used a short-map probe channel configuration for recording, selecting the 384 contacts located along the bottom third of the recording shank.<\/p>\n<p>Audio was played through a separate computer using pregenerated .wav files and captured at 30\u2009kHz or 1,000\u2009kHz on the NIDAQ through a coaxial cable splitter that sent the same signal to speakers adjacent to the patient. MATLAB (MathWorks) in conjunction with a LabJack (LabJack U6) was used to generate a continuous TTL pulse of which the width was modulated by the current timestamp and recorded on both the neural and audio datafiles. Online synchronization of the AP and LFP files was performed by the OpenEphys recording software. 
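This offline alignment amounts to a one-dimensional linear fit between the two clocks. A minimal sketch in Python (the original analysis used MATLAB), assuming the TTL pulse times have already been reconstructed on both systems; variable names and the simulated drift are illustrative:

```python
# Fit audio_time ~= scale * neural_time + offset from shared TTL pulse times.
import numpy as np

def fit_clock_alignment(ttl_neural_s, ttl_audio_s):
    """Linear regression between the two systems' TTL timestamps (seconds)."""
    scale, offset = np.polyfit(ttl_neural_s, ttl_audio_s, deg=1)
    return scale, offset

def neural_to_audio_time(t_neural_s, scale, offset):
    return scale * t_neural_s + offset

# Example with a simulated 30 ppm clock drift and a 1.2 s start offset.
rng = np.random.default_rng(0)
ttl_neural = np.sort(rng.uniform(0, 1800, size=200))   # pulse times, neural clock
ttl_audio = 1.2 + (1 + 30e-6) * ttl_neural             # same pulses, audio clock
scale, offset = fit_clock_alignment(ttl_neural, ttl_audio)
print(f"scale={scale:.8f}, offset={offset:.3f} s")
# Large residuals here would prompt the visual inspection of aligned traces
# described above.
```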
Acute intraoperative recordings were conducted in brain tissue designated for resection on purely clinical grounds. The probe was positioned using a ROSA ONE Brain (Zimmer Biomet) robotic arm and lowered into the brain 5–6 mm from the ependymal surface using an AlphaOmega system (Alpha Omega Engineering). The penetration was monitored through online visualization of the neuronal data and through direct visualization with the operating microscope (Kinevo 900). Reference and ground signals on the Neuropixels probe were acquired through sterile needles placed in the scalp (separate needles inserted at distinct scalp locations for ground and reference, respectively).

For all patients (n = 7), we conducted neuronal recordings under general anaesthesia for at most 30 min, as per the experimental protocol. All of the patients were under total intravenous anaesthesia, with propofol as the main anaesthetic for each experimental protocol (Extended Data Table 1). Inhaled anaesthetics were used only for induction and were stopped at least 1 h before recordings. The anaesthesiologist titrated the anaesthetic infusion rates so that the BIS monitor (Medtronic) value stayed between 45 and 60 for the duration of the surgical case (ref. 55). Notably, BIS values range from 0 (completely comatose) to 100 (fully awake), with standard intraoperative values between 40 and 60. Anaesthetic depth was stable across the brief duration of the experiment. First, recordings took place several hours after anaesthesia induction and several hours before the end of the procedure, so patients were well into the stable portion of the surgery. Second, the anaesthesiologist maintained active monitoring and stably controlled anaesthesia levels.

For patients p4, p5 and p6, we recorded neuronal activity during passive auditory stimulus presentation. For p4, a sequence of pure tones (f1 = 1 kHz, f2 = 3 kHz) was presented with an 80–20 probability distribution, with the less frequent tone serving as the auditory oddball stimulus (n = 300 trials). For p5 and p6, we counterbalanced the tones: a sequence of pure tones (f1 = 200 Hz, f2 = 5 kHz) was presented with an 80–20 probability distribution, and the tone frequency designated as the auditory oddball was switched halfway through (first half, n = 150 trials, f2 as oddball; second half, n = 150 trials, f1 as oddball). We interleaved a washout period (30 trials) between the two counterbalanced blocks, using the same auditory stimuli presented at a 50–50 probability distribution. The pure-tone stimuli were presented for 100 ms, and the intertrial interval for the auditory oddball task was drawn at random from between 1 and 3 s.
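For concreteness, the counterbalanced schedule can be sketched as below; this is a minimal Python reconstruction assuming independent per-trial draws (the exact per-block tone counts may have been fixed rather than sampled), not the stimulus code used in the experiment:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_block(n_trials, p_f1, rng):
    """Return one tone label per trial ('f1' or 'f2') with P(f1) = p_f1."""
    return np.where(rng.random(n_trials) < p_f1, "f1", "f2")

block1 = make_block(150, p_f1=0.8, rng=rng)   # f2 (20%) is the oddball
washout = make_block(30, p_f1=0.5, rng=rng)   # 50/50, no oddball defined
block2 = make_block(150, p_f1=0.2, rng=rng)   # f1 (20%) is the oddball
sequence = np.concatenate([block1, washout, block2])

# 100 ms tones separated by a random 1-3 s intertrial interval.
itis = rng.uniform(1.0, 3.0, size=sequence.size)
onsets = np.concatenate([[0.0], np.cumsum(0.1 + itis)[:-1]])
print(sequence[:10], onsets[:10].round(2))
```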
Sound stimuli for the auditory oddball task consisted of high- and low-pitched tones. The low-pitched tone was 100 ms in duration at 200 Hz, approximating a square wave; the high-pitched tone was 100 ms in duration at 5 kHz, also approximating a square wave. These stimuli were constructed to have distinct perceived pitch and a salient onset structure, and the stimulus waveforms were matched in amplitude. Sounds were delivered in stereo using a sound delivery system calibrated in the testing suite (B&K type 4939-A-011 calibration microphone and NEXUS conditioning amplifier). Both speakers had a relatively flat frequency response (±5 dB) across the frequency range used (200–6,000 Hz) and no high- or low-frequency roll-off.

For patients p6, p8, p9 and p11, we also recorded neuronal activity during podcast episodes. Patient p6 listened to three stories, each approximately 7 min long, taken from The Moth Radio Hour (https://themoth.org/podcast): Wild Women and Dancing Queens, My Father's Hands, and Juggling and Jesus. Each episode consists of a single speaker narrating an autobiographical story. Patient p8 listened to Why We Should NOT Look for Aliens—The Dark Forest, an educational video created by the Kurzgesagt group (https://www.youtube.com/watch?v=xAUJYP8tnRE). The selected stories were chosen to be varied, engaging and linguistically rich.

Micro-CT

As the recordings were performed only in tissue planned for resection, we first removed a small cube of tissue around the probe and then proceeded with the remainder of the resection. The cube specimens were processed according to previously described methods (ref. 56). In brief, resected specimens were fixed in 4% PFA for 16 h at 4 °C. They were then stabilized using a modified stability buffer (mStability) containing 4% acrylamide (Bio-Rad, 1610140), 0.25% (w/v) VA044 (Wako Chemical, 017-19362), 0.05% (w/v) saponin (Millipore-Sigma, 84510) and 0.1% sodium azide (Millipore-Sigma, S2002). The samples were equilibrated in the hydrogel solution for 16 h at 4 °C before undergoing cross-linking at −90 kPa and 37 °C for 3 h. After cross-linking, excess hydrogel solution was removed and the specimens were washed four times with 1× PBS. Next, the samples were immersed in 0.1 N iodine and incubated with gentle agitation for 24 h at room temperature before being embedded in agarose and imaged using a Zeiss Xradia Context micro-CT at 3 µm per voxel resolution. The acquired back-projection images were reconstructed using Scout-and-Scan Reconstructor (Carl Zeiss, v.16.8) and converted to NRRD format using the Harwell Automated Recon Processor (HARP, v.2.4.1) (ref. 57), an open-source, cross-platform application developed in Python. The 3D volumes were analysed, and optical sections captured, using 3D Slicer (ref. 58). All tissue was inspected with a microscope by S.R.H. and her laboratory, and no abnormalities were reported.

Neuronal data processing

Patients did not experience seizures during the surgery (probably owing to propofol anaesthesia), so no seizure-related data cleaning was performed. The absence of seizures was confirmed by review of the waveform activity by a trained neurologist.

Motion correction

We used previously developed and validated motion estimation and interpolation algorithms to correct for motion artefacts arising from brain movement (ref. 59). Motion was estimated with the DREDge software package (Decentralized Registration of Electrophysiology Data; https://github.com/evarol/DREDge) using motion traces obtained from the raw LFP and/or AP band data, fine-tuned for individual recordings. Motion correction was then implemented using interpolation methods (https://github.com/williamunoz/InterpolationAfterDREDge). Both the AP and LFP band data were motion corrected and used for further preprocessing and analysis. If the estimated motion led to no improvement in spike locations, spike sorting proceeded with the motion correction built into Kilosort 4, without separate interpolation.

Unit extraction and classification

Automated spike detection and clustering were performed with Kilosort 2.0 if motion correction had already been applied using the DREDge algorithm, or with Kilosort 4.0 (ref. 60) if motion correction was not applied separately. Manual curation of spike clusters was performed using the open-source software Phy (ref. 61). Unit quality metrics were calculated using SpikeInterface (ref. 62); clusters were considered single units if they had a d′ greater than 1 and fewer than 3% of spikes violated a 2 ms interspike-interval refractory period.
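As a minimal illustration of the refractory-period criterion, the raw violation fraction for one cluster can be computed as below; SpikeInterface implements this metric (and d′) with corrected estimators, so this sketch only shows the underlying count:

```python
import numpy as np

def isi_violation_fraction(spike_times_s, refractory_s=0.002):
    """Fraction of interspike intervals shorter than the refractory period."""
    isis = np.diff(np.sort(spike_times_s))
    return np.mean(isis < refractory_s)

# Synthetic cluster: 5,000 spikes over a 10 min recording.
spikes = np.sort(np.random.default_rng(2).uniform(0, 600, size=5000))
frac = isi_violation_fraction(spikes)
accept = frac < 0.03  # single-unit criterion used above (together with d' > 1)
print(f"ISI violations: {100 * frac:.2f}% -> {'keep' if accept else 'reject'}")
```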
LFP data

LFP data were band-pass filtered between 0.1 and 20 Hz and aligned to task events to extract local ERPs. The LFP amplitude in specific bands was calculated by first band-pass filtering the raw signal within defined frequency limits (for example, 70–150 Hz for high gamma) and then taking the absolute value of the Hilbert-transformed complex signal. Given the high correlation between adjacent channels, only ten channels equally spanning the length of the probe were used to calculate statistics.
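A sketch of this band-amplitude computation in Python (the original analysis used MATLAB); the filter order and the use of zero-phase filtering are assumptions, not the authors' exact settings:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_amplitude(lfp, fs, f_lo, f_hi, order=4):
    """Band-pass the signal, then take the analytic-signal magnitude."""
    b, a = butter(order, [f_lo, f_hi], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, lfp)        # zero-phase band-pass filter
    return np.abs(hilbert(filtered))      # |Hilbert transform| = band amplitude

fs = 2500.0                               # LFP sampling rate from above (Hz)
t = np.arange(0, 10, 1 / fs)
lfp = np.sin(2 * np.pi * 100 * t) * (1 + 0.5 * np.sin(2 * np.pi * 0.5 * t))
high_gamma = band_amplitude(lfp, fs, 70, 150)  # 70-150 Hz high gamma
print(high_gamma.mean())
```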
Neuronal data analysis

All analyses were performed using custom MATLAB code unless otherwise noted.

Motion analysis

The motion-corrected location estimates were obtained at a 250 Hz sampling frequency using the DREDge algorithm and downsampled to 10 Hz. The power spectrum of the estimated motion was computed using Welch's overlapped segment averaging estimator for frequencies between 0.1 and 3 Hz. The amount of motion was defined as the root mean squared deviation of the probe centre's location trace relative to its average location.

Tone responses

Both single units and multiunits were used for all analyses. A tone-responsive neuron was defined as one showing a statistically significant increase in average firing rate in the first second after tone onset (shifted by 50 ms to account for the neural processing delay to the hippocampus) relative to the preceding 200 ms baseline (α = 0.05, Wilcoxon signed-rank test). For visualization, peri-stimulus average firing rates were smoothed with a causal Gaussian filter (s.d., 150 ms); all statistical analyses were performed on the raw spike counts. Response onset latency was computed as the time to the peak response, and a two-component Gaussian mixture model was fit to the distribution of latencies. Given the trough between the two peaks at 291 ms and evidence that the average oddball response occurred in the first segment, a 0–300 ms window was used for analyses characterizing tone and oddball selectivity.
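The responsiveness test can be sketched as a paired comparison of post-onset and baseline rates per trial (Python rather than the original MATLAB; the window bookkeeping and demo data are illustrative):

```python
import numpy as np
from scipy.stats import wilcoxon

def is_tone_responsive(spike_times_s, tone_onsets_s, alpha=0.05):
    """Paired Wilcoxon test: 50-1,050 ms response rate vs 200 ms baseline rate."""
    base = np.array([np.sum((spike_times_s >= t - 0.2) & (spike_times_s < t))
                     for t in tone_onsets_s]) / 0.2
    resp = np.array([np.sum((spike_times_s >= t + 0.05) & (spike_times_s < t + 1.05))
                     for t in tone_onsets_s]) / 1.0
    stat, p = wilcoxon(resp, base, alternative="greater")
    return p < alpha, p

rng = np.random.default_rng(3)
onsets = np.arange(2, 302, 2.0)              # 150 tones, one every 2 s
background = np.sort(rng.uniform(0, 305, 3000))
evoked = np.concatenate([onsets + rng.uniform(0.05, 0.3, onsets.size)
                         for _ in range(2)]) # two extra evoked spikes per tone
responsive, p = is_tone_responsive(np.sort(np.concatenate([background, evoked])),
                                   onsets)
print(responsive, p)
```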
Neural tuning

To determine response tuning properties, we modelled trial responses in the peristimulus period using general linear regression models. Neural data in the 0–300 ms analysis window were used for the tuning analyses. Unit response was defined as the average firing rate, LFP power as the root mean squared value of the band-pass-filtered LFP, and gamma power as the average gamma-band amplitude. All response vectors were z-scored to allow comparison of neural response modulation across units and channels. The independent variables were effects-coded for tone type (frequency 1 versus frequency 2), trial type (standard versus oddball) and an interaction term (conjunctive coding). We set the α level at 0.05 to determine whether the β coefficient for each independent variable was statistically significant.
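A minimal Python sketch of this effects-coded regression (statsmodels here stands in for the original MATLAB implementation; coefficients and data are synthetic):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 330
tone = rng.choice([-1.0, 1.0], n)    # effects coding: f1 = -1, f2 = +1
trial = rng.choice([-1.0, 1.0], n)   # standard = -1, oddball = +1
y = 0.4 * tone + 0.6 * trial + 0.3 * tone * trial + rng.normal(size=n)
y = (y - y.mean()) / y.std()         # z-scored response, as in the text

# Design matrix: intercept, tone type, trial type, interaction (conjunctive).
X = sm.add_constant(np.column_stack([tone, trial, tone * trial]))
fit = sm.OLS(y, X).fit()
for name, beta, p in zip(["intercept", "tone", "trial", "tone x trial"],
                         fit.params, fit.pvalues):
    print(f"{name:13s} beta={beta:+.3f} p={p:.4f} {'*' if p < 0.05 else ''}")
```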
Neuronal population coding

To determine the information content present in the population, an SVM with a linear kernel was trained using tenfold cross-validation for 200 iterations. Accuracy for each iteration was defined as the average accuracy across the folds. Significant coding was defined as the distribution over the 200 iterations being statistically different from 0.5 (chance). Algorithm validation was performed by shuffling the dataset and confirming that the classifier then performed at chance level. Subsampling was used to avoid performance bias from the unbalanced dataset (that is, more standard trials than oddball trials). To investigate the neuronal population response dynamics for tone and oddball encoding as a function of time, we used sets of sequential trials (50 trials) from each of the two counterbalanced blocks (100 trials in total); for example, the first set used trials 1:50 and 181:230, whereas the last set used trials 101:150 and 281:330. Decoding analyses were also run separately for early versus late trials (first 75 versus last 75 trials within a 150-trial block) for tone and oddball encoding, respectively.
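A sketch of this decoding procedure with scikit-learn (the original was MATLAB; the class-balanced subsampling and shuffle control follow the description above, while the feature construction is synthetic):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def decode(X, y, n_iter=200, rng=None, shuffle=False):
    """Linear-SVM accuracy over n_iter class-balanced subsamples, 10-fold CV."""
    rng = rng or np.random.default_rng(0)
    accs = np.empty(n_iter)
    n_min = min(np.sum(y == c) for c in np.unique(y))
    for i in range(n_iter):
        # Subsample to balance classes (more standards than oddballs).
        idx = np.concatenate([rng.choice(np.flatnonzero(y == c), n_min,
                                         replace=False) for c in np.unique(y)])
        yi = rng.permutation(y[idx]) if shuffle else y[idx]
        accs[i] = cross_val_score(SVC(kernel="linear"), X[idx], yi, cv=10).mean()
    return accs

rng = np.random.default_rng(5)
y = (rng.random(300) < 0.2).astype(int)             # ~20% oddball trials
X = rng.normal(size=(300, 40)) + 0.8 * y[:, None]   # population vectors
print(decode(X, y)[:5].round(3))                    # true labels: above chance
print(decode(X, y, shuffle=True).mean().round(3))   # shuffled control: ~0.5
```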
Neuronal response learning dynamics

Next, to determine the neural mechanism underlying the statistical learning required for oddball detection, we evaluated single-trial response dynamics across the neuronal population. For each trial, we generated a neuronal response population vector. We then computed the Euclidean distance, \(\Vert {\bf{u}}-{\bf{v}}\Vert \), and the cosine angle, \(\arccos ({\bf{u}}\cdot {\bf{v}}/(\Vert {\bf{u}}\Vert \,\Vert {\bf{v}}\Vert ))\), between the mean vector across all standard trials and each individual oddball population vector, evaluating each as a function of the oddball index.

Mixed-effects models

Where applicable, we used mixed-effects models to quantify how task conditions affect spike counts and other neurophysiological variables while accounting for the hierarchical structure of the data (multiple subjects, neurons and channels). For analyses of spike counts, we summed spikes over equivalent durations across task conditions and fit a GLME model with a log link function, modelling spike counts as Poisson distributed. For LFP variables, an LME model was used. All analyses used a random-effects structure with neurons or channels nested within participant.

Continuous-rate RNN model

We implemented a continuous-rate RNN and trained it to perform an oddball detection task closely mirroring the one used for the experimental dataset. The network contains 200 recurrently connected units (80% excitatory and 20% inhibitory) and is governed by the following equations:

$${\tau }_{i}\,\frac{d{x}_{i}}{dt}=-{x}_{i}(t)+W\cdot r(t)+u(t)$$

$${r}_{i}(t)=\frac{1}{1+{e}^{-{x}_{i}(t)}}$$

$$o(t)={W}_{{\rm{out}}}\cdot r(t)$$

where τi is the synaptic decay time constant, xi(t) is the synaptic current variable of neuron i at timepoint t, W is the recurrent connectivity matrix (N × N, that is, 200 × 200) and u(t) is the input to the network at timepoint t. u is a 2 × 200 matrix, in which the first dimension is the number of input channels and the second dimension is the total number of timepoints. The firing rate of a unit was estimated by passing the synaptic current variable x through a standard logistic sigmoid function, and the output o of the network was computed as a linear weighted sum over the entire population of units.

In each trial, the network receives input mimicking auditory signals. The input consists of two signal streams, each representing a distinct auditory tone (tone A versus tone B; Fig. 3f,g); only one tone is presented per trial. The model was trained to produce an output approaching +1 when tone A was presented and an output approaching −1 when tone B was presented. To closely replicate the experimental task design, we used three sequential contexts during network training: in the first stage, tone A was presented predominantly (80% of trials); in the second stage, tones A and B were presented in equal proportion (50/50); and in the third stage, tone B was predominant (80%).

We optimized the network parameters, including the recurrent connectivity, readout weights and synaptic decay time constants, using gradient descent via backpropagation through time (BPTT). The network was required to achieve over 95% task accuracy in the current context before a new context was introduced.
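A minimal forward simulation of these dynamics (Euler integration of the equations above) is sketched below; the weight scales, Dale-constrained sign structure and time constants are illustrative assumptions, and BPTT training is omitted:

```python
import numpy as np

rng = np.random.default_rng(6)
N, T, dt = 200, 200, 5e-3                  # units, timesteps, Euler step (s)
tau = rng.uniform(0.02, 0.1, N)            # per-unit synaptic time constants
sign = np.ones(N); sign[160:] = -1         # 80% excitatory, 20% inhibitory
W = np.abs(rng.normal(0, 1 / np.sqrt(N), (N, N))) * sign[None, :]
W_in = rng.normal(0, 1.0, (N, 2))          # two tone-input channels
W_out = rng.normal(0, 1 / np.sqrt(N), N)   # linear readout weights

u = np.zeros((2, T)); u[0, 20:40] = 1.0    # present tone A for 100 ms
x = np.zeros(N); outputs = []
for t in range(T):
    r = 1 / (1 + np.exp(-x))               # logistic firing rate r(t)
    dx = (-x + W @ r + W_in @ u[:, t]) / tau
    x = x + dt * dx                        # Euler step of tau dx/dt = ...
    outputs.append(W_out @ r)              # readout o(t)
print(np.round(outputs[-5:], 3))
```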
To evaluate the model's ability to decode both tone identity and oddball context, we performed linear SVM decoding using population activity from the recurrent units. For each decoding analysis, we generated 100 trials per condition. A linear SVM classifier was trained on 70% of the trials and tested on the remaining 30%, and this procedure was repeated 100 times to estimate decoding accuracy. Separate SVM classifiers were trained for tone identity and for oddball context.

Neuronal data analysis: natural language stimuli

Natural language stimuli

All of the patients were native English speakers. The podcast played during the task was automatically transcribed using Assembly AI (https://www.assemblyai.com/). The transcribed words and corresponding timestamps output by Assembly AI were converted to a TextGrid and loaded into Praat (ref. 63). The original .wav file was also loaded into Praat, and the spectrograms, labels and timestamps were manually checked and corrected to ensure that the word onset and offset times were accurate; this process was repeated by a second reviewer to validate the timestamps. The corrected words and timestamps were exported from Praat as a TextGrid, converted to an Excel file and loaded into MATLAB and Python for further analysis.

Natural language statistics

Word frequency was defined based on a corpus of movie subtitles spanning a total of 51 million words (ref. 22). Words that did not elicit a response during the duration of the word were excluded from this analysis. To compare relative contributions to the firing rate, a linear model was trained to estimate the logarithm of the firing rate from the logarithms of each word's duration and corpus frequency. Word surprisal values were calculated using the GPT-2 large model (ref. 64) from the Hugging Face Transformers library (ref. 65), computing the negative log probability of each word conditioned on the preceding context. Specifically, surprisal was defined as

$$\mathrm{surprisal}({w}_{i})=-\log P({w}_{i}|{w}_{i-1},{w}_{i-2},\ldots ,{w}_{1})$$

where P(wi | wi−1, …, w1) is the probability of word i given all preceding words.
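This computation can be sketched with the Transformers API as below; the alignment of sub-word tokens to transcript words is simplified here to GPT-2's byte-pair word boundaries, which may differ from the authors' alignment:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2-large")
model = GPT2LMHeadModel.from_pretrained("gpt2-large").eval()

text = "The quick brown fox jumps over the lazy dog"
enc = tok(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**enc).logits
# Negative log probability of each token given its preceding context.
logp = torch.log_softmax(logits[0, :-1], dim=-1)
token_ids = enc.input_ids[0, 1:]
surprisal = -logp[torch.arange(token_ids.numel()), token_ids]

# Sum sub-word surprisals within each word; in GPT-2's byte-pair encoding,
# tokens beginning with the marker 'G-dot' start a new word.
words, totals = [], []
for t_id, s in zip(token_ids, surprisal):
    piece = tok.convert_ids_to_tokens(t_id.item())
    if piece.startswith("\u0120") or not words:
        words.append(piece.lstrip("\u0120")); totals.append(float(s))
    else:
        words[-1] += piece; totals[-1] += float(s)
print(list(zip(words, [round(t, 2) for t in totals])))
```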
We used the pre-trained fastText Word2Vec model in MATLAB to extract word embeddings for all words in our dataset (refs. 66,67). This pretrained model provides 300-dimensional word-embedding vectors, trained on 16 billion tokens from Wikipedia, the UMBC webbase corpus and https://statmt.org, that capture semantic relationships between words. Notably, Word2Vec is a non-contextual embedder, so all instances of the same word share the same representation. Some words, such as the surname Harwood or proper nouns like Applebee's, did not have word embeddings and were discarded from the analysis. A simple linear model was trained to predict the firing rate of individual neurons from the semantic matrices using tenfold cross-validation, with accuracy defined as the correlation between the true and predicted firing rates. Words with firing rates of 0 Hz or above 25 Hz were removed from this analysis. To prevent overfitting, principal component analysis (PCA) was used to reduce the dimensionality of the semantic embedding vectors to the components accounting for 30% of the variance before modelling; this threshold was chosen as the minimum of the model's r.m.s.e., balancing under- and overfitting. To predict future or previous words, the alignment between the words and neural activity was shifted forwards or backwards, respectively.
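A minimal Python analogue of this pipeline (the original used MATLAB; the embeddings and firing rates below are synthetic stand-ins):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

rng = np.random.default_rng(7)
emb = rng.normal(size=(2000, 300))                  # one 300-d vector per word
w = rng.normal(size=300) * (rng.random(300) < 0.05) # sparse semantic tuning
rate = emb @ w + rng.normal(scale=2.0, size=2000)   # firing rate per word

# Keep the principal components explaining 30% of the variance, as above.
Z = PCA(n_components=0.30, svd_solver="full").fit_transform(emb)

# Tenfold cross-validated linear prediction of firing rate.
preds = np.empty_like(rate)
for train, test in KFold(n_splits=10, shuffle=True, random_state=0).split(Z):
    preds[test] = LinearRegression().fit(Z[train], rate[train]).predict(Z[test])
print("components:", Z.shape[1],
      "accuracy (r):", np.corrcoef(rate, preds)[0, 1].round(3))
```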
The relationship between prediction accuracy and lag was then fit with a piecewise exponential decay:

$$r(i)={\beta }_{0}\,{e}^{-i/{\beta }_{1}}\quad {\rm{for}}\,i\ge 0$$

$$r(i)={\beta }_{0}\,{e}^{i/{\beta }_{2}}\quad {\rm{for}}\,i < 0$$

where β0 is the amplitude of the correlation at lag 0, and β1 and β2 are the time constants of the decay for positive and negative lags, respectively.

Word embedding, semantic clustering and part-of-speech classification

To identify the natural semantic categories present in our word data, all unique words heard by the participants were clustered into groups using a word-embedding approach (refs. 21,68). We used the same 300-dimensional embeddings as in the GLM analysis above. To compute and visualize semantic clusters, we first applied t-distributed stochastic neighbour embedding to the word embeddings, reducing the dimensionality of each unique word based on its cosine distance to all other words and thereby reflecting semantic similarity; words with similar meanings thus have similar 2D coordinates. We then applied the k-means clustering algorithm to these 2D word representations and visualized the clustered words on a 2D word map (12 clusters). Each semantic cluster was then manually inspected, assigned a distinct label and adjusted for accuracy; for example, words bordering the edges of clusters were sometimes misgrouped and were manually corrected. The final 12 semantic categories were body parts, places, emotional words, mental words, social words, objects, visual words, numerical words, actions, identity words, function words and proper nouns. Correction for multiple comparisons was performed using the Benjamini–Hochberg approach. An SVM was trained for each semantic category (versus all other categories) using a radial basis function kernel; model training and accuracy metrics were weighted by the relative frequency of each group, using tenfold cross-validation and 200 iterations.

To extract the part of speech (POS) of each word in the dataset, we used an automated pipeline built on Stanford CoreNLP, a natural language processing toolkit (ref. 25). We initialized a CoreNLPParser with the 'pos' tag type, which specializes in POS tagging. The transcript was first segmented into sentences based on punctuation; each sentence was then tokenized and passed through the CoreNLPParser's tagging function, which analyses the context and structure of each sentence and assigns the appropriate POS tag to each word. The 15 POS types were noun, adjective, numeral, determiner, conjunction, preposition or subordinating conjunction, auxiliary, possessive pronoun, pronoun, adverb, particle, interjection, verb, wh-word and existential. POS types with fewer than 45 words were removed from the analysis, and a similar SVM was used for POS classification.

Probe localization

Intraoperative navigation (StealthStation navigation platform, Medtronic) was used to label the probe entry site after the probe was removed from the brain. RAVE (ref. 51) was used to transform patient-specific coordinates into MNI152 average space and to plot them on a glass brain with hippocampal segmentation.

Ethics statement

Experiments were conducted according to protocol guidelines approved by the Institutional Review Board for Baylor College of Medicine and Affiliated Hospitals (H-50885 for the Neuropixels recordings and H-18112 for the EMU recordings).

Reporting summary

Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.