Apple algorithms – private technology for criminal investigation

In recent years, Apple has been developing software that can detect child pornographic material on Apple devices. Although the prevention of child abuse ranks high on the agendas of politicians and policy makers, critics fear that this new software may, in the long term, open the door to a world of total surveillance. In this week’s blog we will examine how the software works and what the potential risks are.

Last week, Apple announced that in the United States its products running iOS 15 or higher will come equipped as standard with software developed to detect content of a child pornographic nature, thus providing ‘filters’ for the protection of children.

Apple hopes to achieve this goal of child protection by filtering images sent and received by minors – children under the age of 13 – through Apple’s messaging service and, if questionable content is detected, notifying their parents of the exchange. Additionally, Apple will be filtering pictures uploaded to its cloud service before allowing them to be stored. Red-flagged images will then be examined by Apple employees, who decide whether or not they qualify as child pornography. If so, Apple will forward the singled-out images to criminal investigation and law enforcement agencies for final assessment.
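To make the Messages part of this flow concrete, the sketch below models the decision logic as we understand it from Apple’s description. It is a hypothetical illustration: the account fields, the classifier stub and the returned actions are our own assumptions, not Apple’s code or API.

```python
# Hypothetical sketch of the Messages filtering flow described above -- not Apple's code.
# The classifier is a stub; on a real device an on-device model inspects the image locally.

from dataclasses import dataclass

@dataclass
class ChildAccount:
    age: int
    parent_notifications_on: bool  # opt-in family setting (assumed)

def looks_sexually_explicit(image: bytes) -> bool:
    """Stand-in for the on-device image classifier."""
    return False  # stub: a real classifier would analyse the image content

def handle_message_image(account: ChildAccount, image: bytes) -> str:
    """Return the action taken for one image sent to or by this account."""
    if account.age >= 13 or not looks_sexually_explicit(image):
        return "deliver as usual"                        # no intervention
    if account.parent_notifications_on:
        return "blur image, warn child, notify parents"  # exchange reported to parents
    return "blur image and warn child"

print(handle_message_image(ChildAccount(age=12, parent_notifications_on=True), b"example"))
```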

Opinion, however, is divided, not on Apple’s intentions, but on the potential effects of the technology, which critics fear can easily be used for purposes of oppression by totalitarian regimes.

In this blog we will try to answer the question of whether it is socially desirable for businesses like Apple to take on responsibilities of this nature and, if so, whether this is the right way to do it. To answer this question, we will first look at how the software actually operates. Next, we will briefly discuss a few commonly raised criticisms.

The new software

Apple has uploaded a 12-page document explaining the inner workings of its software, creating the general impression that every effort has been made to preserve the highest possible level of privacy-friendliness. So how exactly does the software work? First of all, Apple does not actually get to look at all images stored on any given smartphone. Instead, the software scans images directly on the Apple device itself, in both scenarios: whether the pictures are sent through Apple’s messaging service or selected for upload to iCloud Photos.

At the heart of the software is a system of so-called ‘hashes’. Every image on a phone is assigned a hash: a string of numbers derived from, and linked to, that specific content. The software ships with a reference set of hashes computed from known child pornographic pictures in a database maintained by the U.S. National Center for Missing & Exploited Children. When a user tries to upload a picture to iCloud whose hash matches one associated with an image from that ‘control’ database, the software will recognise it for what it is and pick it from the stack. Pictures with no matching hash remain encrypted and are never examined or even seen by Apple.
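The matching step can be illustrated with a deliberately simplified sketch. Apple’s actual system computes a perceptual ‘NeuralHash’ and compares it to a blinded database on the device, so the plain lookup below is only an analogy; the hash values and function names are invented for the example.

```python
# Simplified illustration of hash matching against a reference database -- not Apple's
# implementation. A real perceptual hash tolerates resizing and re-encoding; SHA-256
# is used here only to keep the example short and runnable.

import hashlib

# Fictional reference set standing in for hashes supplied by a clearinghouse such as NCMEC.
KNOWN_HASHES = {
    hashlib.sha256(b"known-abusive-image-1").hexdigest(),
    hashlib.sha256(b"known-abusive-image-2").hexdigest(),
}

def image_hash(image: bytes) -> str:
    """Stand-in for the on-device perceptual hash of an image."""
    return hashlib.sha256(image).hexdigest()

def matches_known_material(image: bytes) -> bool:
    """True only if the image's hash appears in the reference database."""
    return image_hash(image) in KNOWN_HASHES

# Non-matching pictures are never flagged; they stay encrypted and unexamined.
print(matches_known_material(b"holiday photo"))             # False
print(matches_known_material(b"known-abusive-image-1"))     # True
```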

Whenever a suspicious hash is detected, the image is, by way of a double check, immediately scanned one more time. Only if the software still signals the potential presence of child pornographic material at that second scan is the encryption lifted, at which point human intervention decides whether or not the image qualifies as child pornography. If so, the relevant authorities will be notified.
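Put together, the upload pipeline described here boils down to a flag, a re-check and a manual review, roughly as in the sketch below. The function and its inputs are our own simplification; Apple’s published design adds further safeguards, such as requiring a threshold number of matches before anything can be reviewed.

```python
# Hedged sketch of the flag-and-review flow described in the text. The inputs and the
# returned outcomes are illustrative assumptions, not Apple's published interfaces.

def process_upload(first_scan_flagged: bool,
                   second_scan_flagged: bool,
                   reviewer_confirms: bool) -> str:
    """Return the fate of one uploaded image under the flow described above."""
    if not first_scan_flagged:
        return "stored encrypted, never examined"
    if not second_scan_flagged:
        return "stored encrypted, never examined"   # double check did not confirm the match
    # Only at this point is the encryption lifted and the image shown to a human reviewer.
    if reviewer_confirms:
        return "forwarded to the relevant authorities"
    return "dismissed as a false positive"

print(process_upload(True, True, False))   # flagged twice, but dismissed on human review
```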

Risks?

In a public letter addressed to Apple, various privacy experts, researchers and civil rights activists have expressed their concerns about Apple’s new software. Apple itself has stated that the software does nothing to lessen the effectiveness of its chat service encryption, but that has not reassured privacy organisations, which still have their doubts. And rightly so, as illustrated by the recent Pegasus affair, in which even iPhones, in spite of their reputation for ‘watertight’ encryption, fell victim to undetected hacks.

Another often mentioned risk is that this new software opens the door to increasing surveillance and censorship. Today, the software may be used for a cause most people will readily subscribe to, but isn’t there an intrinsic danger of shifting boundaries, maybe not tomorrow but definitely in the long term? Currently, Apple is making a point of emphasising that the software will only be used in the U.S. and for the exclusive purpose of combatting child pornography. Critics, however, are afraid that further down the road Apple may end up accommodating the demands of totalitarian regimes, if only to protect its market share in specific countries or regions where the software could easily be used to identify gay people or track down critical journalists.

As the Electronic Frontier Foundation (EFF) has pointed out, the new Apple scanning software may be used for undesirable purposes in Western democratic societies as well. This is illustrated by the Australian practice of allowing law enforcement agencies to request technical assistance in criminal investigations, which in practical terms authorises them, for instance, to hack into the mobile phones of persons under scrutiny. Now that Apple is effectively pre-installing software that looks over users’ shoulders, the EFF is concerned that Western democracies, too, may reconsider their options in the arena of close citizen monitoring.

To return to our question of whether it is desirable for Apple to assume responsibilities in matters of this sensitive nature, it is probably fair to say that there is no simple yes or no by way of an answer. Strong action against child abuse is, in and of itself, more than understandable and justified, but only if the long-term effects on society of the means chosen to achieve that end are taken into account and taken very seriously.

With tech companies increasingly taking on essential social roles and responsibilities, it is becoming more and more crucial to openly engage in the debate on their specific place in our societies, especially when among the tasks they choose to assume are activities in very specific areas of social relevance, like criminal investigation.

Apart from the broad social debate these developments call for, it is also important to consider the extent of public support. Do people really expect or even want tech companies to deploy this sort of software for these types of purposes? Or is society already accustomed to technically advanced businesses looking over everyone’s shoulder?

Lastly, it is important to remember that the effectiveness of this kind of technology in preventing child abuse is as yet unproven, not least because of the media hype surrounding it. By now, everyone who has anything to fear from it knows all there is to know about the new software. So what is to stop them from simply avoiding Apple devices and switching to other channels to continue their illegal activities? There is a chance that, instead of achieving its intended purpose, the software will in the end turn out only to have jeopardised, if not actually infringed on, the freedom and privacy of law-abiding Apple users.

Darinka Zarić

Darinka Zarić is a legal counsel at The Privacy Factory. She is drawn to legal issues surrounding the digital society, especially in the field of privacy law and the use of big data. She is currently pursuing the master’s programme Internet, Intellectual Property and IT-Law at the Vrije Universiteit Amsterdam.
