{"id":103408,"date":"2025-07-30T00:19:14","date_gmt":"2025-07-30T00:19:14","guid":{"rendered":"https:\/\/www.europesays.com\/us\/103408\/"},"modified":"2025-07-30T00:19:14","modified_gmt":"2025-07-30T00:19:14","slug":"white-house-ai-action-plan-potential-implications-for-health-care","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/us\/103408\/","title":{"rendered":"White House AI Action Plan: Potential Implications for Health Care"},"content":{"rendered":"<p>On July 23, 2025, the Trump Administration issued an <a rel=\"noopener nofollow\" href=\"https:\/\/www.whitehouse.gov\/wp-content\/uploads\/2025\/07\/Americas-AI-Action-Plan.pdf\" target=\"_blank\">artificial intelligence (AI) action plan<\/a> titled \u201cWinning the Race: America\u2019s AI Action Plan\u201d (the Plan) to guide AI innovation in the U.S. The Plan includes 90 policy recommendations that will shape future AI guidance and policies impacting a range of entities and industry sectors, including health care\/life sciences and entities involved in clinical research.<\/p>\n<p>As summarized in our recent <a rel=\"noopener nofollow\" href=\"https:\/\/www.crowell.com\/en\/insights\/client-alerts\/white-house-ai-action-plan-seeks-to-establish-dominance-boost-innovation-and-scrutinize-regulations\" target=\"_blank\">client alert<\/a>, the Plan establishes three pillars to guide the development of \u201cAmerican AI\u201d: 1) accelerate AI innovation; 2) build American AI infrastructure, and 3) lead in international AI diplomacy and security. The Plan states that the U.S. must achieve global dominance in AI and contains recommendations on promoting innovation, ensuring economic competitiveness, and advancing national security. The Plan also identifies several health-specific issues, including support for scientific research and innovation, data quality and privacy issues, and AI standards development efforts. 
In the summary below, we highlight policy recommendations and directives for specific agencies included in the Plan that may impact health care\/life science and research entities.<\/p>\n<p><strong>Deregulation and Interaction with State Law<\/strong><\/p>\n<p>In contrast to the previous administration, the Trump Administration is taking a \u201cderegulatory approach\u201d to guide AI development. To this end, it seeks to remove \u201cbureaucratic red tape\u201d and \u201conerous\u201d regulations. The Plan states that the federal government should not allow federal funding for AI to be directed toward states with \u201cburdensome AI regulations that waste these funds,\u201d but further states that it should \u201cnot interfere with states\u2019 rights to pass prudent laws that are not unduly restrictive to innovation.\u201d<\/p>\n<p>The Plan recommends that the Office of Science and Technology Policy (OSTP) issue a Request for Information to receive public feedback about federal regulations that hinder AI innovation and adoption, and then take appropriate action in response. Building on President Trump\u2019s Executive Order (EO) on \u201c<a rel=\"noopener nofollow\" href=\"https:\/\/www.whitehouse.gov\/presidential-actions\/2025\/01\/unleashing-prosperity-through-deregulation\/\" target=\"_blank\">Unleashing Prosperity Through Deregulation<\/a>,\u201d the Plan directs the Office of Management and Budget (OMB) to work with federal agencies to identify and revise or repeal regulations and guidance that it deems may unnecessarily hinder AI development or deployment. 
It recommends that OMB work with federal agencies that have AI-related discretionary funding programs to ensure that they consider a state\u2019s AI regulatory climate when making funding decisions and \u201climit funding if the state\u2019s AI regulatory regimes may hinder the effectiveness of that funding or award.\u201d<\/p>\n<p>Additionally, the Plan directs the Federal Communications Commission (FCC) to evaluate whether state AI regulations interfere with its ability to implement its obligations and authorities. It also directs the review of all Federal Trade Commission (FTC) investigations, final orders, consent decrees, and injunctions commenced under the previous administration to ensure that they do not unduly burden AI innovation.<\/p>\n<p>Notably, the Plan seeks to discourage (but does not define) \u201cburdensome\u201d regulation of AI by proposing to reduce federal support for states that have AI regulations that contravene the Trump Administration\u2019s position. This recommendation follows an unsuccessful legislative attempt to include a ten-year moratorium on state regulation of AI, which was proposed as part of the House-passed version of the One Big Beautiful Bill Act (<a href=\"https:\/\/www.congress.gov\/bill\/119th-congress\/house-bill\/1\" rel=\"nofollow noopener\" target=\"_blank\">H.R. 1<\/a>).<\/p>\n<p>In recent years, several states have enacted legislation to govern entities\u2019 development and deployment of AI at the state level. For example, Utah and Colorado were among the first states to enact comprehensive AI statutes, which define and govern \u201chigh-risk AI\u201d, including systems used in health care and clinical settings. 
The Colorado law <a rel=\"noopener nofollow\" href=\"https:\/\/leg.colorado.gov\/bills\/sb24-205\" target=\"_blank\">requires<\/a> deployers to use \u201creasonable care\u201d to protect consumers from \u201cany known or reasonably foreseeable risks of algorithmic discrimination\u201d from the use of the high-risk AI system. Given the current Administration\u2019s priorities, if the Plan\u2019s policy recommendations are implemented in guidance, conflicting federal and state directives may impact entities\u2019 compliance programs. Moreover, some state AI programs include funding for entities to invest in AI projects. While the extent to which federal funding will be tied to existing state AI programs remains unclear, entities in states with stricter AI regulations may encounter reduced eligibility for federal support.<\/p>\n<p><strong>Enable AI Adoption and Build Scientific Datasets<\/strong><\/p>\n<p>The Plan seeks to foster a culture of AI innovation and to create high-quality, AI-ready datasets. It proposes establishing AI Centers of Excellence (i.e., regulatory sandboxes) around the country where entities can rapidly deploy and test AI tools. These efforts would be enabled by several federal agencies such as the Food and Drug Administration (FDA). The Plan also recommends that the National Institute of Standards and Technology (NIST) launch several sector-specific initiatives, including in health care, to convene a broad range of public, private, and academic stakeholders to develop national standards for AI systems.<\/p>\n<p>The Plan\u2019s recommendations under this section may have certain implications for the FDA\u2019s regulation of AI-enabled medical devices and other AI-related FDA activities. 
Under the previous administration, the FDA <a rel=\"noopener nofollow\" href=\"https:\/\/www.cmhealthlaw.com\/2025\/01\/fda-proposes-framework-to-assess-ai-model-output-credibility-to-support-regulatory-decision-making\/\" target=\"_blank\">issued<\/a> draft guidance to provide recommendations on the use of AI intended to support a regulatory decision about a drug or biological product\u2019s safety, effectiveness, or quality. Previous guidance focused on advancing transparency and ensuring that comprehensive, representative datasets are used to train AI. It remains to be seen whether future efforts will build on this prior guidance and activity.<\/p>\n<p>The Plan makes several recommendations related to AI datasets, including directing the National Science and Technology Council (NSTC) Machine Learning and AI Subcommittee to make recommendations on minimum data quality standards for the use of biological, materials science, chemical, physical, and other scientific data modalities in AI model training. It directs OMB to promulgate regulations on presumption of accessibility and expanding secure access, as required in the Confidential Information Protection and Statistical Efficiency Act of 2018, to increase access to federal statistical data. The Plan\u2019s data recommendations may have implications for data privacy and security issues, especially as entities navigate complying with established federal and state regulations.<\/p>\n<p><strong>Remove Ideological Bias and DEI<\/strong><\/p>\n<p>In line with previous Trump Administration actions, the Plan seeks to advance free speech and ensure that AI procured by the federal government does not reflect \u201csocial engineering agendas.\u201d The Plan recommends that NIST revise the AI Risk Management Framework to eliminate references to \u201cmisinformation\u201d, Diversity, Equity, and Inclusion (DEI), and climate change. 
It also recommends updating procurement guidelines to ensure that the government only contracts with frontier large language model (LLM) developers who ensure that their systems are \u201cobjective and free from top-down ideological bias.\u201d On the same day that the White House issued the Plan, President Trump signed an EO titled \u201c<a rel=\"noopener nofollow\" href=\"https:\/\/www.whitehouse.gov\/presidential-actions\/2025\/07\/preventing-woke-ai-in-the-federal-government\/\" target=\"_blank\">Preventing Woke AI in the Federal Government<\/a>\u201d to prevent AI models that incorporate \u201cideological biases or social agendas,\u201d including DEI.<\/p>\n<p>Given the lack of guidance or definitions around the terminology used in this part of the Plan, it is unclear how health care\/life science and research entities can comply with these recommendations. Guidance and other clarifying notices on this issue from federal agencies, most notably the Department of Health and Human Services (HHS) and the FDA, should be monitored.<\/p>\n<p><strong>Invest in AI-Enabled Science<\/strong><\/p>\n<p>The Plan includes several recommendations designed to enable basic research to support entities\u2019 AI-enabled scientific advancement. Many of the recommendations focus on public-private partnerships and government action to facilitate partnerships between organizations, including the use of Focused-Research Organizations (FROs), which are non-profit entities designed to tackle specific scientific or technological challenges that require coordinated effort and produce public goods. Through a collaboration of federal partners, including the National Science Foundation (NSF), the Plan recommends investing in automated cloud-enabled labs for a range of scientific fields, built by the private sector and federal agencies. 
It recommends the use of long-term agreements to support FROs or others using AI and other emerging technologies to make fundamental scientific advancements. The Plan also includes policy recommendations related to data, including proposals to incentivize researchers to release higher-quality datasets and to require federally funded researchers to disclose AI models that use non-proprietary, non-sensitive datasets. These recommendations signal that increased data-sharing among federal agencies may soon take place, creating another potential point of tension between federal recommendations and state laws and regulations around data privacy and cybersecurity.<\/p>\n<p><strong>Invest in Biosecurity<\/strong><\/p>\n<p>The Plan highlights the importance of biosecurity efforts to prevent malicious actors from taking advantage of advancements in biology. The Plan proposes a multi-tiered approach designed to screen for malicious actors and would require all institutions that receive federal funding to use \u201cnucleic acid synthesis tools and synthesis providers that have robust nucleic acid sequence screening and customer verification procedures.\u201d It also includes recommendations to facilitate data sharing between nucleic acid synthesis providers and to enable national security-related AI evaluations. These recommendations may impact public institutions that work with or provide contracting services around sequencing, in addition to private entities that may receive National Institutes of Health (NIH) funding but offer commercial products related to sequencing.<\/p>\n<p><strong>Takeaways<\/strong><\/p>\n<p>The Trump Administration\u2019s AI Action Plan may shift compliance requirements for a wide variety of healthcare entities as they continue to develop and deploy AI. In the coming months, entities should expect to see agency activity to implement the Plan in addition to federal AI initiatives and opportunities. 
In addition to monitoring developments coming out of the AI Action Plan, these entities should also begin examining their AI governance plans as well as identifying state law compliance obligations to harmonize compliance efforts. Crowell will continue to monitor federal and state AI developments as they become available. Please reach out if you have any questions.<\/p>\n","protected":false},"excerpt":{"rendered":"On July 23, 2025, the Trump Administration issued an artificial intelligence (AI) action plan titled \u201cWinning the Race:&hellip;\n","protected":false},"author":3,"featured_media":103409,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[35],"tags":[210,1141,1142,67,132,68],"class_list":{"0":"post-103408","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-health-care","8":"tag-health","9":"tag-health-care","10":"tag-healthcare","11":"tag-united-states","12":"tag-unitedstates","13":"tag-us"},"share_on_mastodon":{"url":"https:\/\/pubeurope.com\/@us\/114939321515099470","error":""},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/posts\/103408","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/comments?post=103408"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/posts\/103408\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/media\/103409"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/media?parent=103408"}],"wp:term":[{"taxonomy":
"category","embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/categories?post=103408"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/tags?post=103408"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}