“This body does not seem to be centering issues related to accountability, justice, racial discrimination or other forms of bias.”
Massachusetts is seeking 25 “thought leaders” to help create artificial intelligence policy for the state. But despite widespread criticism of AI developments regarding racial bias and surveillance concerns, state leaders and administrators are not saying who those people are. That lack of transparency, according to civil liberties advocates, presents major problems.
The Mass Tech Collaborative, a state-run group focused on encouraging the tech economy, is working with the Executive Office of Housing and Economic Development to appoint a 25-person AI Task Force that will “develop a strategic plan that furthers economic growth in the AI sector,” according to bid documents. The collaborative is also looking to hire a consulting firm to work with the task force and create “comprehensive opportunity statements on public/private interventions that could have a transformational effect on the AI sector in Massachusetts within a three to five year timeframe.”
The task force will include AI “thought leaders” from academia, nonprofit, finance, and public sectors, as well as developers and adopters from other industries, including health care, defense, and cybersecurity, according to Mass Tech spokesman Brian Noyes. The bid documents say the task force has already been created, but Noyes said that while MassTech identified an initial group, selection was put on hiatus during the pandemic, and MassTech and the Executive Office of Housing and Economic Development are “reconfirming the interest of potential members.”
MassTech declined to release either the original or the current list of appointees upon request. Noyes said the task force’s recommendations would not replace legislative or executive discussions about AI policy, and that the group will take regulatory considerations into account.
Kade Crockford, director of the Technology for Liberty Program at the ACLU of Massachusetts, said it made sense for the state to convene experts to examine a developing part of a major state industry, but the focus on economics and secrecy around membership could lead to that technology reinforcing injustice.
“The administration hasn’t done a lot of public information sharing about this task force or who’s going to be on it,” Crockford said. “This body does not seem to be centering issues related to accountability, justice, racial discrimination or other forms of bias, surveillance and privacy, or issues related to automation and the job market, and that is a problem, that is a big problem.”
As past examples make clear, AI and machine learning reflect the biases of their architects, and those biases have surfaced in disturbing ways. A New York Times article from earlier this year compiled examples of AI tech that discriminated against Black people, including an Amazon facial recognition service that misidentified darker-skinned women as men 31% of the time.
Dealing with issues like that is crucial for any state AI policy, and knowing who is on the task force is part of being able to determine if that task force is taking those considerations into account, Crockford said.
“AI and automated decision systems can facilitate more forms of discrimination, and that discrimination can be difficult to detect, or hidden behind algorithmic black boxes,” Crockford said. “Centering racial justice and privacy in discussion of AI is crucial, they really need to beef up their plans here. I don’t know who’s on the list, but that’s part of the problem.”
The British-based advocacy group Privacy International monitors government use of technology and has reported on the lack of transparency in data analysis, most recently on the analytics company Palantir’s work for the British government during the COVID-19 pandemic. Legal Officer Lucie Audibert said governments need to be up front about who they’re working with as they develop AI policy.
“Transparency is particularly crucial when it comes to AI—it tends to be designed in ways that are opaquely influenced by certain assumptions, bias, or preconceptions. Its logic and conclusions are often difficult to challenge, while being blindly trusted as a perfect source of truth,” Audibert said. “Public deliberations about what types of AI the government wants to encourage, and for what purposes, are essential to avoid harmful and unaccountable uses.”
The Legislature is considering bills that would create its own AI commission on “transparency and use of artificial intelligence in government decision-making,” which would submit public reports and detail how the state is using AI in its departments. The ACLU would have a seat on that commission, and Crockford hopes the bills will pass soon.
“We hope that it becomes law this session so we have public accountability of how artificial, automated intelligence is being dealt with in our government,” Crockford said. “We want to make it so in creating new laws we’re not creating discrimination or continuing discrimination, this seems to have been developed in the background without public engagement.”
“We have all too often seen industry actors wooing government officials to get them to use their technology, thereby privately dictating the direction of use of AI and other new technologies to perform public functions,” Audibert said. “This is often at the expense of public procurement processes which usually require proper consideration of all market options, as well as ensure transparency and accountability in the deployment of these technologies.”
This article was produced in collaboration with the Boston Institute for Nonprofit Journalism. If you want to see more reporting like this, make a contribution at givetobinj.org.
Dan is a reporter who has covered Massachusetts for the Boston Herald and Gatehouse Media.