University Researchers Who Built a CSAM Scanning System Say It Is ‘Dangerous’ Technology

Researchers at Princeton University who built an image scanning system are warning Apple that the technology the Cupertino firm plans to use to scan iPhone users’ photo libraries for child sexual abuse material (CSAM) is “dangerous.”

Jonathan Mayer, an assistant professor of computer science and public affairs at Princeton University, and Anunay Kulshrestha, a researcher at the Princeton University Center for Information Technology Policy, wrote an op-ed for The Washington Post discussing their experiences with building image detection technology.

The researchers’ project was designed to identify CSAM in end-to-end encrypted services. They say they recognize the “value of end-to-end encryption, which protects data from third-party access,” but are also concerned about CSAM “proliferating on encrypted platforms.”

The Princeton researchers say they were looking for a middle ground in which CSAM could be detected while end-to-end encryption remained intact.

We sought to explore a possible middle ground, where online services could identify harmful content while otherwise preserving end-to-end encryption. The concept was straightforward: If someone shared material that matched a database of known harmful content, the service would be alerted. If a person shared innocent content, the service would learn nothing. People couldn’t read the database or learn whether content matched, since that information could reveal law enforcement methods and help criminals evade detection.

Knowledgeable observers argued a system like ours was far from feasible. After many false starts, we built a working prototype. But we encountered a glaring problem.
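
At its simplest, the content matching the researchers describe is a lookup of an image fingerprint against a database of known-content fingerprints. The sketch below illustrates only that basic idea, using a made-up `knownFingerprints` set and a plain SHA-256 digest; it is not Apple’s NeuralHash or the researchers’ prototype, both of which layer perceptual hashing and cryptographic protocols on top so that clients cannot read the database and the service learns nothing about non-matching content.

```swift
import CryptoKit
import Foundation

// Illustrative sketch only: a naive lookup of an image fingerprint against a
// set of known-content fingerprints. Real systems use perceptual hashes that
// survive resizing and re-encoding, plus cryptographic protocols that hide
// the database from clients and hide non-matches from the server.

// Hypothetical database of known-content fingerprints (hex-encoded SHA-256).
let knownFingerprints: Set<String> = [
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
]

/// Computes a cryptographic fingerprint of raw image bytes.
/// Unlike a perceptual hash, SHA-256 matches only byte-identical files.
func fingerprint(of imageData: Data) -> String {
    SHA256.hash(data: imageData)
        .map { String(format: "%02x", $0) }
        .joined()
}

/// Reports whether the image's fingerprint appears in the known-content set.
func matchesKnownContent(_ imageData: Data) -> Bool {
    knownFingerprints.contains(fingerprint(of: imageData))
}

// Empty input hashes to the well-known SHA-256 of empty data, which is the
// sample entry above, so this prints "true".
print(matchesKnownContent(Data()))
```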

Apple’s plan to detect known CSAM images stored in iCloud Photos has proven to be controversial and has raised concerns from security researchers, academics, privacy groups, and others about the system potentially being abused by governments as a form of mass surveillance.

Apple employees have also reportedly raised concerns internally over the company’s plan.

Mayer and Kulshrestha said they were disturbed by the possibility that governments could use such a system to detect content other than CSAM.

A foreign government could, for example, compel a service to out people sharing disfavored political speech. That’s no hypothetical: WeChat, the popular Chinese messaging app, already uses content matching to identify dissident material. India enacted rules this year that could require pre-screening content critical of government policy. Russia recently fined Google, Facebook and Twitter for not removing pro-democracy protest materials.

We spotted other shortcomings. The content-matching process could have false positives, and malicious users could game the system to subject innocent users to scrutiny.

We were so disturbed that we took a step we hadn’t seen before in computer science literature: We warned against our own system design, urging further research on how to mitigate the serious downsides….

Apple has attempted to address these concerns with an FAQ page and additional documents explaining how the system will work and stating that it will refuse demands to expand the image-detection system beyond CSAM images. However, Apple has not said that it would pull out of a market rather than obey a court order.

(Via MacRumors)