The push to ensure artificial intelligence is deployed responsibly and ethically has largely come from academic researchers and legislators. That's about to change.
The newly formed Center for Responsible Artificial Intelligence and Governance (CRAIG) combines academic rigor with real-world industry expertise to solve some of the most pressing AI challenges, experts involved in the research said. Northeastern University associate professor of philosophy and CRAIG member John Basl called it a first-of-its-kind National Science Foundation-funded research effort. From technical questions around privacy to issues of regulation, CRAIG is tackling it all in a way that hasn't been done before.
“Companies don’t really have the infrastructure for that,” said Basl, who represents one of four partner universities leading CRAIG. “What companies have the infrastructure for is the compliance bit, complying with existing laws. So, the idea was to create a center that was drawing on industry challenges but bringing in academia to bear on those solutions. … This will be a call to arms to get that done.”
In addition to Northeastern, faculty from Ohio State University, Baylor University and Rutgers University form the core of CRAIG's research arm. Meta, Nationwide, Honda Research, Cisco, Worthington Steel and Bread Financial are already involved on the industry side, with more partners being brought into the center.
Typically, responsible AI is the first element that gets cut out of any AI-related project at a company, said Cansu Canca, director of Northeastern’s Responsible AI Practice. CRAIG addresses that core tension by letting private industry partners identify problems they face in this area. The researchers at CRAIG then propose research projects that address those specific challenges.
“With this setup, you can claim with confidence that the research is really done with the same academic rigor that we hold ourselves subject to in academia, without raising any doubt that industry influences our research or our objectivity,” Canca said.
Northeastern University experts John Basl and Cansu Canca represent one of four academic institutions involved in CRAIG's partnerships with industry leaders like Meta. Photos by Matthew Modoono/Northeastern University
Among the many challenges CRAIG researchers aim to address around responsible AI is homogenization. When a single AI model makes all the decisions in an industry or sector, its biases are applied everywhere, and certain people can be shut out of the equation entirely.
“Hiring managers might not like people with purple shoes, but some other hiring managers might,” Basl said. “But if it’s all the same hiring manager, purple shoe people are out.”
Through CRAIG, Basl aims to find ways to measure and mitigate this kind of narrow, one-size-fits-all decision-making, which can have real consequences in AI applications for finance and insurance.
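Basl's example lends itself to a toy simulation. The sketch below is purely illustrative, not CRAIG's code or methods, and every number in it (the share of purple-shoe candidates, the managers' tastes, the shared model's penalty) is invented. It compares a pool of hiring managers with varied preferences against a single shared screening model:

```python
import random

random.seed(0)

# Hypothetical illustration of Basl's "purple shoes" point:
# 1,000 candidates, 10% of whom wear purple shoes.
candidates = [{"purple_shoes": random.random() < 0.10} for _ in range(1000)]

def make_manager():
    """Each human hiring manager has an idiosyncratic taste:
    some dislike purple shoes, some are indifferent, some like them."""
    bias = random.choice([-0.3, 0.0, 0.3])
    return lambda c: random.random() < 0.5 + (bias if c["purple_shoes"] else 0)

# Heterogeneous world: each candidate is screened by one of many managers.
managers = [make_manager() for _ in range(50)]
hetero_hired = [c for c in candidates if random.choice(managers)(c)]

# Homogenized world: one shared model that penalizes purple shoes
# screens every single candidate.
shared_model = lambda c: random.random() < (0.2 if c["purple_shoes"] else 0.5)
homog_hired = [c for c in candidates if shared_model(c)]

def purple_rate(hired):
    """Share of hires who wear purple shoes."""
    return sum(c["purple_shoes"] for c in hired) / len(hired)

print(f"purple-shoe share of hires, many managers: {purple_rate(hetero_hired):.2%}")
print(f"purple-shoe share of hires, one model:     {purple_rate(homog_hired):.2%}")
```

In a run like this, the many-manager pool hires purple-shoe candidates at roughly their natural rate because individual tastes wash out, while the single model's penalty applies to every candidate everywhere. That is the homogenization risk Basl describes.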
It sounds like a simple model, but Basl and Canca said this combination of academic expertise and industry reach doesn't exist anywhere else. Connecting academic research to real-world applications is always a challenge, and that's especially true in a field as new as responsible AI.
When companies do turn to researchers, they are usually in the initial stages of implementing responsible AI and looking for more basic solutions. For academics well-versed in the field who want to investigate more experimental questions, that mismatch can be challenging, according to Canca.
“Having a center, a structure, a mandate that really focuses on responsible AI, that allows you to do novel work and allows you to connect this novel work to actual industry applications, get feedback from the application, that’s fantastic,” Canca said.
For academia and the AI industry alike, CRAIG is a win-win, according to Basl. The collaboration doesn't just help solve significant issues around responsible AI; it also creates the next generation of workers trained specifically to tackle those very questions. Over the next five years, CRAIG will support 30 Ph.D. researchers, along with Northeastern co-ops and hundreds of additional students through summer programs and coursework.
The real promise of CRAIG, Canca explained, is the opportunity to put that talent and expertise to use at industry leaders like Meta, whose technology has such a seismic impact.
“The dream would be to have this really grow, to really create a broader industry group and a broader researcher community so that we can set standards, we can build new tools, we can agree on which tool is the best method to use on which problem instead of always having this experimental style in responsible AI,” Canca said.