Professor Nadiyah J. Humber

When UConn Law Professor Nadiyah J. Humber set out to examine artificial intelligence in housing, employment, and lending, she focused on one central question: Does AI affect the racial wealth gap, and if so, how?

In the United States, wealth determines long‑term security, influencing housing stability, educational opportunity, and intergenerational mobility. As of 2022, the median White household held approximately $285,000 in wealth, compared to about $45,000 for the median Black household. Because housing, employment, and access to credit are among the primary drivers of wealth‑building, the growing use of artificial intelligence in these systems raises urgent questions about whether technology will help close or deepen the divide.

A new report, coauthored by Professor Humber and Professor Yvette Pappoe of the University of the District of Columbia David A. Clarke School of Law and sponsored by The Leadership Conference on Civil and Human Rights, shows how AI tools can unintentionally magnify longstanding inequities when left unchecked. The work reflects UConn Law’s mission of community, equity, and access to justice and underscores the school’s leadership in understanding how emerging technologies shape opportunities in people’s everyday lives.

AI: Enemy or Useful Tool?

Across housing, employment, and lending, the research reveals how artificial intelligence increasingly shapes economic opportunity for people of color, often with limited transparency and little room for human judgment.

“Artificial intelligence is not the enemy,” Humber says, reflecting on the report’s findings. “It’s a tool. But how it’s designed and used matters, and in some cases, it’s reinforcing existing patterns of exclusion.”

Renters described automated tenant‑screening systems that were difficult to navigate and offered no opportunity to explain individual circumstances. Job seekers reported submitting numerous applications and hearing back from only a few employers, as automated résumé filters relied on criteria shaped by the biased data on which they were trained. These decisions often triggered cascading financial strain: housing delays led to credit‑card debt, prolonged job searches depleted savings, and each setback harmed credit profiles.

What Comes Next

While the findings reveal significant risks, Humber emphasizes that artificial intelligence also presents an opportunity if its use is guided intentionally. The need for these guardrails is underscored by what participants in the study described, such as systems that moved quickly, offered little explanation, and rarely allowed for meaningful review.

“The opportunity exists,” she says. “If the tools were designed differently, more inclusively, they could meaningfully address issues around the wealth gap, opportunities for AI literacy, wealth creation, housing, and employment.” The report offers policy recommendations for technology developers, users, and lawmakers, including calls for greater transparency, human review of automated decisions, and the use of more inclusive data models. The research is informing advocacy efforts and legislative conversations already underway at the national level.

Leading the Conversation on AI and the Law

For Humber, ensuring that AI systems promote rather than undermine equitable access is a responsibility that sits squarely within legal education, and one that UConn Law is taking up through research, teaching, and public engagement.

Scholarly work includes Professor Kiel Brennan‑Marquez’s examination of the limits of automation in law itself. His research explores whether artificial intelligence can meaningfully replicate forms of legal judgment rooted in discretion, mercy, and moral responsibility, qualities that resist formalization even as legal systems increasingly rely on rule‑based technologies.

UConn Law reinforces this intellectual leadership through teaching and practice-based instruction. The curriculum includes courses such as Artificial Intelligence Ethics and Governance, Artificial Intelligence and Social Impact, Cyberlaw, Data Privacy Law, and Cybersecurity and Privacy Compliance, equipping students to understand both the mechanics and consequences of AI‑driven decision‑making. This past fall, Matthew Lowe joined the faculty as a Visiting Professor from Practice, bringing real‑world expertise in AI governance, privacy, and cybersecurity shaped by his work as in‑house counsel in the technology sector.

Beyond the classroom, the Law School’s Insurance Law Center, led by Director Professor Travis Pantin, hosted a groundbreaking national conference on AI, Insurance Law, and Regulation, convening leading scholars, regulators, and industry experts to examine how insurance can shape responses to AI risk.

Together, these efforts show how UConn Law is helping to shape real‑world conversations about how artificial intelligence should be governed and used in ways that affect people’s everyday lives.