Study: AI could lead to inconsistent outcomes in home surveillance

A new study from researchers at MIT and Penn State University reveals that if large language models were to be used in home surveillance, they could recommend calling the police even when surveillance videos show no criminal activity.

In addition, the models the researchers studied were inconsistent in which videos they flagged for police intervention. For instance, a model might flag one video that shows a vehicle break-in but not flag another video that shows a similar activity. Models often disagreed with one another over whether to call the police for the same video.

Furthermore, the researchers found that some models flagged videos for police intervention relatively less often in neighbourhoods where most residents are white, controlling for other factors. This shows that the models exhibit inherent biases influenced by the demographics of a neighbourhood, the researchers say.

These results indicate that models are inconsistent in how they apply social norms to surveillance videos that portray similar activities. This phenomenon, which the researchers call norm inconsistency, makes it difficult to predict how models would behave in different contexts.

“The move-fast, break-things modus operandi of deploying generative AI models everywhere, and particularly in high-stakes settings, deserves much more thought since it could be quite harmful,” says co-senior author Ashia Wilson, the Lister Brothers Career Development Professor in the Department of Electrical Engineering and Computer Science and a principal investigator in the Laboratory for Information and Decision Systems (LIDS).

Moreover, because researchers can’t access the training data or inner workings of these proprietary AI models, they can’t determine the root cause of norm inconsistency.
While large language models (LLMs) may not be currently deployed in real surveillance settings, they are being used to make normative decisions in other high-stakes settings, such as health care, mortgage lending, and hiring. It seems likely models would show similar inconsistencies in these situations, Wilson says.

“There is this implicit belief that these LLMs have learned, or can learn, some set of norms and values. Our work is showing that is not the case. Maybe all they are learning is arbitrary patterns or noise,” says lead author Shomik Jain, a graduate student in the Institute for Data, Systems, and Society (IDSS).

Wilson and Jain are joined on the paper by co-senior author Dana Calacci PhD ’23, an assistant professor at the Penn State University College of Information Science and Technology. The research will be presented at the AAAI Conference on AI, Ethics, and Society

Previous articleDGP Seeks Report On Police Stations With No CCTV Cameras Across Odisha
Next articleDelhi Police To Deploy Undercover Cops At Metro Stations For Security