Harrell Pledges Seattle Will Be AI Leader, Plan Fuzzy on Details » The Urbanist (September 18, 2025)

<p><a href="https://www.theurbanist.org/wp-content/uploads/2025/09/Harrell-AI-presser-Sundberg.jpg" target="_blank" rel="noopener"><img loading="lazy" decoding="async" width="696" height="348" src="https://www.europesays.com/us/wp-content/uploads/2025/09/Harrell-AI-presser-Sundberg-696x348.jpg" alt="" title="Harrell AI presser Sundberg"/></a>This month, Mayor Bruce Harrell sketched out a Seattle AI Plan outlining how government would use AI tools, though many questions remained to be hashed out. (Amy Sundberg)</p>

<p>Last week, Mayor Bruce Harrell announced the City’s new <a href="https://seattle.gov/documents/Departments/Tech/Privacy/AI/City%20of%20Seattle%202025-2026%20AI%20Plan.pdf" target="_blank" rel="noopener">AI Plan</a>, which looks to harness the current artificial intelligence (AI) boom in the tech sector by integrating AI into the City’s operations, public services, and civic engagement.
But critics worry about the ethical implications of AI use in public services, as well as its potential impacts on public workers and documented environmental harms.</p>

<p>“AI will be harnessed to accelerate permitting and housing, improve public safety, enhance the responsiveness of services for Seattle residents, and enable more accessible and plain-language interactions to remove barriers,” the plan states.</p>

<p>The plan includes an AI training program for City employees and partnerships with academia, industry, and communities. Its definition of AI is broad, and because of its high-level nature, it often lacks specifics.</p>

<p>The City will also be adding a new AI Leadership position to its Information Technology Department (ITD).</p>

<p>“We are trying to be very intentional about positioning Seattle as a national leader in responsible artificial intelligence implementation, make no mistake about that,” Harrell said at last week’s press event. “We believe it will position us to be not only a strong city in terms of our values, but as a port city, as a maritime city, as a biotech leader, as a high tech leader, really fits into our fabric as a city.”</p>

<p>The City also unveiled its new associated <a href="https://www.seattle.gov/documents/departments/tech/privacy/ai/artificial_intelligence_policy-pol211%20-%20signed.pdf" target="_blank" rel="noopener">AI Policy</a>, which lays out rules for City employees’ use of AI, including a list of prohibited uses. It requires attribution to an AI tool for generated images, videos, and source code; attribution for text generated by AI is required if it is “used substantively in a final product.”</p>

<p>The policy says employees must conduct a racial equity toolkit before using AI in a new way, and departments are supposed to work to understand environmental impacts of specific tools prior to procurement.
How these rules will work in practice, and whether they will dissuade certain AI use cases, remains unclear.</p>

<p>The Chief Technology Officer, currently Rob Lloyd, is responsible for enforcement of this policy.</p>

<p>Seattle’s many AI pilots</p>

<p>The City has engaged in 42 AI pilots thus far, 24 of which are currently ongoing.</p>

<p>Of the 18 completed pilots, the majority were designated for “general” use and included AI applications such as Fathom AI, Beautiful.ai, Otter.ai, Hootsuite, and AI Calendar that perform basic functions such as transcription, calendar coordination, and content creation and coordination.</p>

<p><img fetchpriority="high" decoding="async" width="1024" height="627" src="https://www.europesays.com/us/wp-content/uploads/2025/09/AI-Plan-presser-with-Harrell-Sundberg-1024x627.jpg" alt="" class="wp-image-200253"/>Seattle’s Chief Technology Officer Rob Lloyd appears with the mayor at his AI plan announcement. (Amy Sundberg)</p>

<p>One completed pilot by the Seattle Police Department (SPD) tested CaseGuard for redacting body-worn video for public disclosure. The Seattle Fire Department (SFD) tested the tool Corti both to provide audio analysis of non-emergency calls and to support “Public Safety Triaging, Documentation, Quality Improvement &amp; Training.”</p>

<p>Of the 24 ongoing pilots, 11 are being conducted within ITD. Four are currently running within the Seattle Department of Construction &amp; Inspections (SDCI) with the goal of streamlining and speeding up the permitting process. The Seattle Department of Transportation (SDOT) is testing software to identify intersections where engineering improvements can reduce the risk of crashes.</p>

<p>ITD spokesperson Megan Ebb told The Urbanist that the Community Assisted Response &amp; Engagement (<strong>CARE</strong>) department is also looking into using AI tools to analyze call prioritization data, which might help it better dispatch the right resources to the right call. However, this is not yet an active pilot.</p>

<p>SPD has two new pilots: one is an Amazon Q/Bedrock business chatbot “for general business, ’low-risk’ use cases” and the other is with <a href="https://c3.ai/products/c3-law-enforcement/" target="_blank" rel="noopener">C3 AI</a>, which ITD says is being used to more easily reference policy information.
Ebb said C3 AI is not being used as a tool to assist basic criminal investigations at this time.</p>

<p>“With AI being a new technology, SPD is gradually working on responsible governance for the use of these tools in the future,” Ebb told The Urbanist. “As part of this process, the department is testing platforms in limited use cases, such as summarizing survey results to help present information in easier to understand ways.”</p>

<p>Amazon published <a href="https://aws.amazon.com/blogs/publicsector/enhancing-public-safety-operations-with-amazon-bedrock/" target="_blank" rel="noopener">a blog post</a> earlier this summer about possible public safety applications of Bedrock, focusing on the tool receiving noise complaint calls and automatically generating incident reports. Other use cases mentioned in the post include reporting of traffic incidents without injuries, abandoned vehicle reporting, and graffiti and vandalism reporting.</p>

<p>ITD did not share any specific current use cases with The Urbanist.</p>

<p>When asked more about the Amazon AI pilot, Ben Dalgetty, another ITD spokesperson, said SPD is currently developing policy and technical guardrails for this technology. “Future uses of this technology may have implications for the identification of complex trends and patterns used to prevent crime but a lot of work needs to be done before SPD is comfortable employing it in this way,” Dalgetty said.</p>

<p>This description sounds dangerously close to AI-fueled predictive policing, which <a href="https://www.forensicscolleges.com/blog/predictive-policing-artificial-intelligence" target="_blank" rel="noopener">studies have shown can lead to</a> the disproportionate targeting of minority communities.
Depending on the data used by the AI tool, this bias can be further exacerbated in a number of ways.</p>

<p>A number of U.S. Senators sent <a href="https://www.wyden.senate.gov/imo/media/doc/letter_to_doj_predictive_policing_and_title_vi_1242024.pdf" target="_blank" rel="noopener">a letter</a> to the U.S. Department of Justice at the beginning of 2024, asking for grants funding predictive policing projects to be halted.</p>

<p>“Mounting evidence indicates that predictive policing technologies do not reduce crime. Instead, they worsen the unequal treatment of Americans of color by law enforcement,” the senators wrote. And later: “The continued use of such systems creates a dangerous feedback loop: biased predictions are used to justify disproportionate stops and arrests in minority neighborhoods, which further biases statistics on where crimes are happening.”</p>

<p>Meanwhile, ITD’s recent annual learning conference featured two Oracle executives demonstrating an AI software interface for police case record management. The software involved AI transcription of audio and video files as well as summaries of text.</p>

<p>“There’s tremendous issues here, especially in the policing context, especially when we’re talking about going from something that’s originally an audio recording,” said Dr. Emily Bender, a linguistics professor at the University of Washington who specializes in computational linguistics.</p>

<p>Bender explained that AI transcription technology doesn’t work as well when the language used diverges from the “prestige” standard, which can then cause more errors.
AI models also have biases in the data on which they’re trained.</p>

<p>“The generative AI system is set up to basically output plausible-looking text based on the immediate input plus all of its pre-training data, and so it is very likely to output additional details that just look plausible in context but bear no resemblance to what happened,” Bender said.</p>

<p>Officers might not be given the time and space to correct transcription and other errors in such an AI system after the fact. Such a system would also harm accountability, as reports it generates could be presented in a court of law as an account from the involved officer, even though they did not originate as the officer’s own account of what actually happened.</p>

<p>There has recently been community concern around SPD’s use of generative AI in communicating with the public. <a href="https://publicola.com/2025/09/16/police-department-acknowledges-using-ai-but-says-it-isnt-substantive-enough-to-label/" target="_blank" rel="noopener">PubliCola reported</a> on an Office of Police Accountability (OPA) complaint from an anonymous community member alleging that SPD used AI to generate public-facing materials, including blog posts and a statement from new Chief Shon Barnes. While a well-known AI detection tool found the above examples to have been likely completely or partially written by AI, SPD denied using generative AI in “a substantive way” for communications.</p>

<p>This example shows a weakness in the City’s AI Policy, which is vague about what a substantive use of AI text generation entails. If the SPD communications above really were generated with AI tools, there should have been attribution acknowledging as much.
As PubliCola reported, the OPA referred the complaint as a <a href="https://content.govdelivery.com/accounts/WASEATTLE/bulletins/3153877" target="_blank" rel="noopener">“supervisor action</a>,” meaning that there will be no consequences beyond training or coaching.</p>

<p><a href="https://www.cascadepbs.org/news/2025/08/wa-city-officials-are-using-chatgpt-to-write-government-documents/" target="_blank" rel="noopener">Cascade PBS recently reported</a> that in Everett, even after the city adopted an AI policy requiring attribution of AI-generated text, city staff hasn’t always followed the guidelines.</p>

<p>In addition to AI tools sometimes providing incorrect information (known as “hallucinations”), the use of AI could further erode trust in government, a particular problem for SPD as it seeks to rebuild trust in the community. Some critics have argued AI-generated text has no place in government communications.</p>

<p>“There are no appropriate use cases for synthetic text,” said Bender. “Setting up a system that is just designed to mimic the way people use language can only be harmful.”</p>

<p>Other concerns about City use of AI</p>

<p>The example of cities like Everett calls into question the actual implementation of the City’s AI Plan.</p>

<p>The recent release of OpenAI’s newest chatbot model, GPT-5, which upon launch was <a href="https://www.bloodinthemachine.com/p/gpt-5-is-a-joke-will-it-matter" target="_blank" rel="noopener">unable to draw</a> an accurate map of the United States or create an accurate list of U.S. Presidents, has accelerated criticism of generative AI and raised questions about the future of this technology.</p>

<p>Even early into its deployment, AI has faced criticism for its environmental impacts.
The data centers that power AI require large amounts of freshwater for cooling, and <a href="https://www.bloomberg.com/graphics/2025-ai-impacts-data-centers-water-data/" target="_blank" rel="noopener">Bloomberg News found</a> that two-thirds of data centers built since 2022 are in locations already experiencing water stress. The power needs of data centers are also expected to grow, with <a href="https://www.mckinsey.com/industries/private-capital/our-insights/how-data-centers-and-the-energy-sector-can-sate-ais-hunger-for-power" target="_blank" rel="noopener">McKinsey writing</a> that power needs in the United States will be three times higher than current capacity within five years, increasing <a href="https://www.reuters.com/technology/artificial-intelligence/how-ai-cloud-computing-may-delay-transition-clean-energy-2024-11-21/" target="_blank" rel="noopener">reliance on fossil fuels</a>.</p>

<p>Promises that AI can readily replace human work have not necessarily materialized in practice. For example, the Swedish company Klarna began laying off about 700 employees in 2022 in order to replace them with AI, only to decide to rehire human employees this spring.</p>

<p>Harrell emphasized that the City doesn’t intend to use AI to replace its workers and pledged to keep union leaders in the loop.</p>

<p>“We work collaboratively with our labor partners,” Harrell said.
“And as we look at certain tasks that could possibly be replaced by AI, we always make sure, as we sit with a human-centered approach, we work with our labor partners and make sure that these discussions are open and transparent.”</p>

<p>The AI Plan adds that “[a]s intelligent systems begin to automate routine and/or administrative tasks, job roles will indeed refocus on higher-value, creative, people-facing, and decision-making responsibilities.”</p>

<p>The Urbanist reached out to PROTEC17, the union that represents many of the City’s workers. “We have recently been made aware of the City’s AI plan, and are in the process of gathering information and analyzing the impacts to PROTEC17 members,” its Executive Director Karen Estevenin said. “While we are concerned with many aspects around how AI could impact the workforce — including any reduction in positions — we also are interested in exploring smart, safe, and effective uses for AI that could support the work and improve the working conditions of City employees.”</p>

<p>Another issue that could impact City workers is the potential for AI use to deskill them.
A recent paper <a href="https://www.thelancet.com/journals/langas/article/PIIS2468-1253(25)00133-5/abstract" target="_blank" rel="noopener">in The Lancet Gastroenterology &amp; Hepatology</a> found that gastroenterologists who regularly used AI for polyp detection during colonoscopies became <a href="https://time.com/7309274/ai-lancet-study-artificial-intelligence-colonoscopy-cancer-detection-medicine-deskilling/" target="_blank" rel="noopener">deskilled within six months</a>, with their detection rate for a certain type of polyp dropping from 28% to 22% when working without AI.</p>

<p>Earlier this summer, <a href="https://arxiv.org/pdf/2506.08872v1" target="_blank" rel="noopener">MIT’s Media Lab found</a> that the use of large language models (LLMs) for essay writing “came at a cognitive cost,” showing weaker cognitive engagement compared to people using only their brains or a search engine.</p>

<p>“If the idea is to do these pilots and evaluate, then the City really ought to be evaluating impacts on deskilling of the workforce and the quality of service that can be offered,” said Bender.</p>

<p>There is also the question of accountability when using AI in the public sector.
While both Harrell and the AI Plan are clear about keeping a human in the loop when using these tools, those workers could end up serving as <a href="https://estsjournal.org/index.php/ests/article/view/260/177" target="_blank" rel="noopener">moral crumple zones</a>, taking the hit for AI errors.</p>

<p>“Sometimes the person who is the human in the loop ends up taking the impact when something goes wrong, and they are effectively protecting the larger organization that decided to do the automation,” said Bender.</p>

<p>The push to adopt AI</p>

<p>In spite of the possible perils, Harrell is enthusiastic about incorporating AI into the City of Seattle.</p>

<p>“Artificial intelligence is more than just a buzzword in Seattle – it’s a powerful tool we are harnessing to build a better city for all,” said Harrell. “By using this technology intentionally and responsibly, we are fostering a nation-leading AI economy, creating jobs and opportunities for our residents, while making progress on creating a more innovative, equitable, and efficient future for everyone.”</p>

<p>Last year Harrell was invited to serve on the Department of Homeland Security’s Artificial Intelligence Safety and Security Board, whose <a href="https://www.dhs.gov/archive/artificial-intelligence-safety-and-security-board" target="_blank" rel="noopener">stated purpose</a> is to develop recommendations for infrastructure stakeholders to leverage AI responsibly and to prevent and prepare for AI-related disruptions to critical services.
Harrell <a href="https://www.usmayors.org/2022/07/22/nations-mayors-launch-standing-committee-on-technology-and-innovation-to-strengthen-city-broadband-deployment-cybersecurity-defenses-and-digital-services/" target="_blank" rel="noopener">also serves</a> as the Chair of the U.S. Conference of Mayors’ Standing Committee on Technology and Innovation.</p>

<p>During his speech at the State of Downtown event hosted by the Downtown Seattle Association this February, Harrell spoke of his concern around cybersecurity and AI, while also seeming to <a href="https://www.theurbanist.org/2025/02/27/harrell-calls-musk-thiel-smart-innovators-tells-downtown-businesses-hes-fighting-for-them/" target="_blank" rel="noopener">praise right-wing technology leaders</a> who funded Trump’s campaign.</p>

<p>“We know that our current president surrounds himself by some of the smartest innovators around,” Harrell said. “When we drop names like Andreessen or Peter Thiel or David Sacks or Elon Musk, these are smart innovators.”</p>

<p>Ironically, Musk presents a strong cautionary tale for the use of AI LLMs in government with his work earlier this year at the Department of Government Efficiency (DOGE). In spite of Musk and DOGE’s stated goal of efficiency, <a href="https://www.theatlantic.com/politics/archive/2025/05/musk-doge-spending-cuts/682736/" target="_blank" rel="noopener">The Atlantic found</a> that the U.S. government actually spent more money this February and March than it did in the same months last year.</p>

<p>Instead, DOGE and its role in facilitating the adoption of LLMs at the federal level have handed unprecedented power to the wealthy tech CEOs who control the contracted models: namely, Mark Zuckerberg (<a href="https://www.wired.com/story/doge-used-meta-ai-model-review-fork-emails-from-federal-workers/" target="_blank" rel="noopener">Meta</a>), Peter Thiel (<a href="https://economictimes.indiatimes.com/news/international/us/palantir-to-create-vast-federal-data-platform-tying-together-millions-of-americans-private-records-stock-jumps/articleshow/121521062.cms" target="_blank" rel="noopener">Palantir</a>), and Musk (<a href="https://www.benzinga.com/general/social-media/25/02/43535256/palantir-adds-elon-musks-ai-chatbot-grok-to-aip" target="_blank" rel="noopener">xAI</a>). The AI systems currently in use by the federal government <a href="https://www.techpolicy.press/musk-ai-and-the-weaponization-of-administrative-error/" target="_blank" rel="noopener">are leading to</a> less accountability, less transparency, and the creation of a vast surveillance state.</p>

<p>In contrast, Harrell is selling Seattle as a leader in responsible AI implementation.</p>

<p>“You see the controls that we’ve put in for our AI policy, and our plan is to say that it has to go through responsible use,” said Lloyd. “There is a security process, there is a privacy consideration. And as we go through that, we are also saying that we will enable AI to make the City of Seattle able to solve civic challenges.”</p>

<p>The AI Plan calls for the creation of a Citywide AI Governance Group that will be responsible for providing input and guidance on the direction of the AI Plan and its priorities.
This group will be convened later this fall after the AI Leadership position is filled.</p>

<p>The Office of Civil Rights did not participate in the development of the AI Plan. However, it will be asked to participate in the governance group, Ebb said.</p>

<p>Bender said it will be important for the City to get specific about what it wants to automate, avoid synthetic text machines, and choose use cases in a well-tested and sensible way that provides for clear accountability.</p>

<p>Meanwhile, with the City facing a structural budget deficit, it could be tempting to use AI tools as a Band-Aid to keep things running. But that strategy is likely to create greater disparities.</p>

<p>“If you think about who has the ability to opt out of poorly provisioned city services, it’s the wealthy,” Bender said. “And everybody else is going to be stuck with what we’re doing collectively. So why don’t we do it well?”</p>

<p>Amy Sundberg is the publisher of Notes from the Emerald City, a weekly newsletter on Seattle politics and policy with a particular focus on public safety, police accountability, and the criminal legal system. She also writes science fiction, fantasy, and horror novels. She is particularly fond of Seattle’s parks, where she can often be found walking her little dog.</p>