Ethics Quiz: Apple Thinks Of The Children


Last week, Apple announced a plan to introduce new technology that will allow it to scan iPhones for images related to the sexual abuse and exploitation of children. Like so many technologies, however, these tools, which are scheduled to become operational soon, can be put to less admirable uses.

Apple’s innovation will allow parents to have their children’s iMessage accounts scanned by Apple for sexual images sent or received. Parents would be notified if this material turns up on the phones of children under 13. All children will be warned if they seek to view or share a sexually explicit image. The company will also scan the photos adults store on their iPhones and check them against records of known child sexual abuse material provided by organizations like the National Center for Missing and Exploited Children.

Cool, right? After all, “Think of the children!!” (Rationalization #58) But while Apple promises to use this technology only to search for child sexual abuse material, the same technology can be used for other purposes and without the phone owner’s consent. The government could work with Apple to use the same technology to acquire other kinds of images or documents stored on computers or phones. The technology could be used to monitor political views or “hate speech.”

Computer scientist Matthew Green, writing with security analyst Alex Stamos, warns,

“The computer science and policymaking communities have spent years considering the kinds of problems raised by this sort of technology, trying to find a proper balance between public safety and individual privacy. The Apple plan upends all of that deliberation. Apple has more than one billion devices in the world, so its decisions affect the security plans of every government and every other technology company. Apple has now sent a clear message that it is safe to build and use systems that directly scan people’s personal phones for prohibited content.”

Your Ethics Alarms Ethics Quiz of the Day:

Does the single beneficial use of the Apple technology make it ethical to place individual privacy at risk?

 

34 thoughts on “Ethics Quiz: Apple Thinks Of The Children”

  1. No, it’s the responsibility of parents to monitor their children’s internet use and set boundaries. As for an adult’s phone, Apple needs to stay out of it. Let the police get a search warrant rather than have government use Big Tech to get around protections against unreasonable search and seizure.

    • I definitely agree with your answer; unfortunately, I suspect images on iPhones and other smart phones, as well as images stored in “the cloud,” are already being scanned. This step is just to get official consent by putting it in the user agreement. I read one of those user agreements once, and although I’m not a lawyer, it was plain to me that they reserved the right to do just about anything. That was back in 2008; once the two-year contract was up, I got rid of the phone and haven’t had a smart phone since.

      Even my wife didn’t know that her pictures store GPS coordinates (unless you specifically turn that off). I loaded one of the photos she took of me onto my PC and showed her – even the image tool I was using had a button, “show location on Google Maps.” There it was, our house on street view.
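      For anyone curious, here’s a rough sketch of how easy it is to pull those coordinates out of a photo’s EXIF data. This assumes Python with the Pillow library installed, and the filename is just a placeholder:

      ```python
      # Illustrative sketch: read GPS coordinates from a JPEG's EXIF metadata.
      # Assumes the Pillow library; "photo.jpg" is a placeholder filename.
      from PIL import Image
      from PIL.ExifTags import TAGS, GPSTAGS

      def photo_gps(path):
          exif = Image.open(path)._getexif() or {}
          # Locate the GPSInfo block among the numeric EXIF tags.
          gps_raw = next((v for tag, v in exif.items() if TAGS.get(tag) == "GPSInfo"), None)
          if not gps_raw:
              return None
          gps = {GPSTAGS.get(tag, tag): v for tag, v in gps_raw.items()}

          def to_decimal(dms, ref):
              degrees, minutes, seconds = (float(x) for x in dms)
              value = degrees + minutes / 60 + seconds / 3600
              return -value if ref in ("S", "W") else value

          return (to_decimal(gps["GPSLatitude"], gps["GPSLatitudeRef"]),
                  to_decimal(gps["GPSLongitude"], gps["GPSLongitudeRef"]))

      print(photo_gps("photo.jpg"))  # e.g. (38.8895, -77.0353), precise enough to find a house
      ```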

      You can pretty much conclude that all data on smart phones is already being scanned. I’m a retired SW engineer, which doesn’t make me an expert on smart phone apps and user agreements; however, there’s a lot more going on “under the hood” than most people realize.

  2. Absolutely not. Parents should be checking their children’s phones without Apple’s nanny assistance, although in my opinion any parent who buys a child a smart phone is asking for trouble. They weren’t around when my daughter was a teen, but I assure you she would not have had one. She didn’t get a cell phone until she was sixteen and started driving.
    A law enforcement officer should continue to have to convince a judge that probable cause exists to issue a search warrant to obtain this type of information. With good investigative techniques, probable cause is usually not a hard bar to reach. One of my retired detective friends busted dozens of child pornographers and child sexual exploitation offenders using traditional investigative techniques and without invasive, indiscriminate high-tech snooping. This slope isn’t just slippery, it is greased and damn near vertical.

  3. Absolutely not. Unfortunately, far too many techies believe their own BS. Anyone who follows developments in this field knows that virtually every tech improvement is designed to capture more and more insight into human behavior for the purpose of selling or psychological manipulation. Apple is merely couching a technology that threatens privacy as a tool of protection, just like all the other legislation recently passed for our benefit.

  4. I wonder: is there a reasonable expectation of privacy with cellphone technology? After all, a cellphone is a glorified CB radio, right? Instead of bouncing signals off antennae, we’re bouncing them off satellites. Those data, under our user agreements, are probably subject to whatever the platforms want to do with them.

    Here is an example: Lord Remington Winchester has had a skin condition, so I have been searching for remedies. I get a ton of ads for skin and fur supplements every day. Why? Because Google and Amazon and eBay trade my searches between themselves and social media platforms all day long. Here, Apple, that grand bastion of liberty and freedom, is just putting in writing what it has already been doing.

    If the 2020 election demonstrated anything, it is the coordination between Big Tech and Big Government, where freedom of thought is directly threatened. This policy sounds like a great idea, right? I don’t trust Apple or any other big tech company at all. It will be abused and it will be used against unpopular opinion and beliefs.

    So, is it ethical? No. Is it inevitable? I am afraid so.

    jvb

    • JVB
      Apple’s main selling point is privacy through its encryption technology. They went so far as to deny the FBI access to the San Bernardino terrorist’s phone data. So if their selling proposition is based on encryption technology, that would lead consumers to believe they have an expectation of privacy.

      • Their selling point is that they do the compliance things they need to do by giving you the most control and ownership of your content and proactively restricting what they have access to…. which is exactly what they’re doing here. These new features further that selling proposition; they do not detract from it.

        • Tim.
          My response to JVB was in keeping with your statement about controlling access by anyone other than the owner of the device. The expectation of privacy arises from those features.
          However, why would I want Apple to have the capacity to scan anything other than their own software on my phone? If they have that capacity through this technology, it is not giving me control to restrict what they can see. In my opinion, this should be a stand-alone app that a parent can place on a specific phone, if it is to be used at all.

          • This is 2 technologies:

            1) A scanner of photos you are choosing to place onto Apple’s Servers, only when they are sent to Apple’s Servers.

            2) An optional built-in feature, that can be activated or deactivated by parents, to scan incoming and outgoing messages in the Messages app for sexual content.

            In both use cases, you have full control. Don’t deploy the message restrictions, don’t use Apple’s servers. In no use case does anything “on the phone” get scanned. Just things coming IN or going OUT under specific circumstances.

  5. Creepy, creepy, creepy. So, Apple can go through family photos and determine which involve child abuse? Will they go through Joe, I am not demented, Biden’s photos and decide he’s a sexual harasser? Will I get thrown in jail for standing next to my fifteen-year-old granddaughter and putting my arm around her? Assholes. I remember when our daughter was at Georgetown and I went to visit her on a weekend and we stayed in a B&B on the Eastern Shore of Maryland. We were having breakfast and we joked about how most of the people in the breakfast room probably assumed I was her sugar daddy. Assholes. Arrogant assholes. Don’t these people have the ability to think one or two steps down the line? Are they so terminally and tragically arrogant?

  6. No. This technology renders all Apple security worthless. Apple advertises that the contents of its phones are encrypted to the point that not even the NSA can hack them (the FBI even went to court to compel Apple to assist in the decryption of a suspect’s phone; Apple refused to cooperate, and the court ruled the case moot when the FBI found a private vendor to successfully hack the phone).

    If Apple installs software that scans the encrypted hard drive contents, then the encryption is moot. The scanning software would need access to the encryption keys, thus rendering encryption worthless.

    In a broader way though, the phone operating system already has access to the keys (at least when the user is logged in). Apple controls the operating system, and it is only trust that holds Apple to account for not exploiting access to the data.

    The scanning function steps away from that promise not to abuse access to private data. Presumably, it will only compare data “signatures” locally. A signature is a simplification of a file that permanently obscures the file’s contents but is unique to that file. By comparing locally, the phone downloads the signatures of prohibited content (in a form that cannot be unscrambled) and compares them to the signatures of files on the phone. Content would only be flagged if it matched, and non-matched content would never be sent to Apple’s servers. This protects against the risk of Apple analyzing the signatures of non-prohibited content.
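    A rough sketch of that local-comparison idea, using an ordinary cryptographic hash purely for illustration (Apple’s actual system reportedly uses a perceptual “NeuralHash”; the blacklist entry and folder name here are made up):

    ```python
    # Illustration only: one-way "signatures" compared locally against a downloaded
    # blacklist, so non-matching content never needs to leave the phone.
    import hashlib
    from pathlib import Path

    # Hypothetical blacklist of signatures (hex digests) shipped to the device.
    BLACKLISTED_SIGNATURES = {
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    }

    def signature(path: Path) -> str:
        """A one-way fingerprint: unique to the file, but the contents can't be recovered."""
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def scan_locally(photo_dir: Path) -> list:
        """Compare each local photo's signature against the downloaded blacklist."""
        return [p for p in photo_dir.glob("*.jpg") if signature(p) in BLACKLISTED_SIGNATURES]

    print(scan_locally(Path("Photos")))
    ```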

    However, Apple could change its policies at any time, potentially sending signatures or even raw files to its servers for “analysis.” Implementing the scanning breaks the trust that Apple won’t further erode its policies. The scanning is unethical because it contradicts the privacy the company has promised and marketed over the past decade.

  7. I don’t understand: if a private business can do this, what’s to stop the government from contracting it out on its behalf? After all, that is pretty much what is happening, except the government isn’t the one asking for it.

  8. In response to the question: No, emphatically. I won’t retread all the arguments already given; they should be enough.

    Now, one note about the technology, at least the second half of it, based on some reports I’ve seen from credible sources. While I cannot vouch for the details, I’d be very surprised if Apple’s approach is radically different.

    The detection of known CP files is done by generating a fingerprint from known criminal files and comparing it to the fingerprints of the files in the user’s device or iCloud account. This fingerprint cannot be dependent on the exact contents of the file, as any sort of image processing would make the new file undetectable without affecting visual quality. This trick has already been used to defeat YouTube piracy filters both for video and audio. Now they have more robust ones, but some other larger manipulations like reflecting the image, adding a frame, etc. still work sometimes.

    In the Apple photo case, the way it currently works is by first running a low-pass filter to eliminate details, textures, noise, etc. and leave just the contours; then the resolution is pared down to the next “standard dimensions” used in the blacklisted image set; finally, a transformation from color to black and white (or in some cases grayscale, with just 4 or 8 shades) is applied. This final image is fingerprinted, and those fingerprints are what gets compared.
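    To make that concrete, here is a toy version of such a fingerprint. This is a generic “average hash,” not Apple’s actual algorithm; it assumes the Pillow library, and the filenames are made up:

    ```python
    # Toy perceptual fingerprint: downscale to throw away detail, drop color,
    # then record which pixels are brighter than the average.
    from PIL import Image

    def average_hash(path, size=8):
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        return "".join("1" if p > mean else "0" for p in pixels)

    def hamming(a, b):
        """Bits that differ; a small distance means visually similar images."""
        return sum(x != y for x, y in zip(a, b))

    h1 = average_hash("original.jpg")
    h2 = average_hash("recompressed_copy.jpg")
    print(hamming(h1, h2))  # small for re-saves/resizes, larger for unrelated images
    ```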

    If you’ve been following, you’re probably thinking, “it would be very easy to create an innocent image that matches a known-bad one,” and you would be right. There are proofs of concept for this attack, even automated ones that generate hundreds of colliding images per second. Some have thrown in GPT-generated images for the fun of it, making the system close to useless if the attack is deployed at scale.

    And that’s even before you think of the false positives, where the options are that either the user gets reported to the FBI or an image gets reviewed by a human, in violation of most users’ privacy expectations.

    This system is terrible if it works as designed, and as far as I can tell it won’t, so it’s just virtue signaling, adding costs and building a worse user experience for nothing.

  9. Corporate America is doing the bidding of a government that is leaning more totalitarian every day, and these companies are intentionally infringing upon individual rights to help that totalitarian-leaning government. They will not stop until we the people do something about it, and the only thing I can think of to do is to start boycotting all the companies that infringe on individual rights, even if it “hurts” to do so.

    • I’m with you philosophically, but I can’t see how that can work practically. I’ve never been in Apple’s grip, but I’m in Google’s, and I know perfectly well they’re no better. I did switch to Brave browser and Duck Duck Go search engine, but I’m still on an Android phone, using Gmail, Google Photos, and Docs, so I’m not going to pretend I’m pure. But the thing is that Google (and Apple, and to an extent Microsoft and Amazon) have captured huge market shares that make the “hurt” from refusing to use at least one of them a major handicap. Even if you or I can and will soak that, it’s a strong enough detriment that I can’t see enough people willing to do it to make a boycott successful.

      What we need are more alternatives. Like I said, I easily switched to Brave browser and Duck Duck Go. You can recommend people switch to those, and the loss of productivity/features is inconsequential enough that you could, conceivably, get enough people to make that minor sacrifice. But for photo hosting or smartphone OS, if only the most dedicated .01% is willing to go without entirely, a boycott simply won’t work.

  10. If Apple has this capacity and bills it as a benefit for parents, then Apple has the ability to make it an app available directly to parents so they can directly protect their own children.

    *that’s the ethical use of this*

  11. Wow. Jack, honestly, I wish you had done a lot more research before putting this post up. When you consider what is actually happening with Apple’s proposal, you realize they’re trying to reinvent the wheel to restore privacy to the users of their products and services. Google, Microsoft, every single server storage company has to have human reviewers scrolling through accounts looking for bad images. That’s a full privacy breach.

    Apple’s design is to stop the manual human reviews and provide automation so the focus can be on the bad images, not everything that exists. There are so many points to make on this, it’s hard to know where to start. But first, let’s start with Apple’s compliance obligations and responsibilities.

    1) Apple offers “cloud space,” i.e., server space, by operating offsite servers. If Apple allows their servers to be used for CSAM storage or for copyrighted material like pirated Hollywood movies, they get into trouble. It is their compliance responsibility to ensure that isn’t happening, much like a bank has a responsibility to know its customer isn’t a specific person on an SDN (Specially Designated Nationals) list.

    Knowing that, only images that you are uploading to an Apple server are scanned, not the entire contents of the device. You can think of this as a customs checkpoint. Only things coming into the USA are scanned. You can have your illegal cocaine in Mexico, but if you want to bring it through customs to the US, it’s going to be checked to see if it’s illegal cocaine. Same concept. If you want your photo to go from your iPhone to Apple’s iCloud Photo server, it’s going to get reviewed to ensure it’s not a *KNOWN* image of CSAM (Child Sexual Abuse Material). As I said before, other services do this, but they do it exclusively on the server side, after you’ve sent them the photo, by giving human reviewers access to look at your photos or by training an AI to look at such images and see if it thinks your photo is the same.

    So, as Alex stated above, Apple is employing a process of “fingerprinting” a photo. This is called a “hash.” Basically, known CSAM photos are hashed into a fingerprint (think of it as a long string of 1s and 0s that is unique to each picture ever taken). The fingerprint hash is created by an algorithm. Those hashes are combined into a database of hashes and stored on your phone. Then, you opt to move a photo onto Apple’s server. The algorithm runs against your photo and makes a fingerprint, and that fingerprint is compared to the database of fingerprints. If there is a match, the photo is tagged with a security certificate. If no match, no certificate. Either way, the photo still goes to iCloud. If you accumulate 30 matches of known CSAM, then your account is referred for human review, and even then, it’s only the 30 photos that matched. If human review of those photos affirms the problem, it’s further escalated for law enforcement intervention.
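    In rough, pseudocode-ish Python, that flow looks something like the sketch below. Every name and data structure here is a stand-in I made up; the real system tags matches with cryptographic “safety vouchers” rather than using a plain counter:

    ```python
    from dataclasses import dataclass, field

    REVIEW_THRESHOLD = 30  # per the description above: human review only after 30 matches

    @dataclass
    class Account:
        matched_photos: list = field(default_factory=list)

    def refer_for_review(photos):
        # Only the matched photos are escalated, not the whole library.
        print(f"{len(photos)} matched photos referred for human review")

    def upload_to_icloud(photo_id: str, fingerprint: str, known_csam: set, account: Account):
        """Runs only when the user chooses to send a photo to iCloud Photos."""
        if fingerprint in known_csam:
            # Matching photos are tagged (Apple calls these "safety vouchers").
            account.matched_photos.append(photo_id)
        # The photo is uploaded either way; nothing is blocked at this stage.
        if len(account.matched_photos) >= REVIEW_THRESHOLD:
            refer_for_review(account.matched_photos)
    ```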

    All that, just to say, if you have CSAM images, you’re on notice that all you have to do is not upload your criminal content to Apple Servers.

    So what about false positives? Well, what’s the risk here? That the private photo you entrusted to Apple by putting it out in the cloud gets a human review after you’ve had 30 false positives? To what end? The human review ascertains the false positive (which is unlikely in the first place) and they move on. Using a different service would have subjected your entire library to human review. Apple’s process limited your library exposure only to the false positives, and only after 30 of them.

    2) Child protection. If parents want to employ this option, it’s not what the alarmists make it out to be. This is not “scanning your kid’s phone for sexual content.” No, this is simply a gatekeeper on iMessage. If you turn on this feature, when your kid gets a photo that the gatekeeper thinks is a nude, iMessage asks the kid if they’re sure they want to see it, and it further tells them that if they choose to see it, their parent will be notified that they opted to look at something that was flagged. It’s a tool that parents can put to use to stay somewhat engaged in their child’s life. They don’t have to use it, but it at least gives parents a choice, particularly if they’re the ones paying for the phone and paying for the internet service.
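    If it helps, the opt-in flow as I understand it boils down to something like this (all names here are made up for illustration; the actual on-device classifier and notification plumbing are Apple’s and not public):

    ```python
    from dataclasses import dataclass

    @dataclass
    class ChildAccount:
        age: int
        safety_enabled: bool  # the parent's opt-in switch

    def looks_explicit(image_bytes: bytes) -> bool:
        """Stand-in for the on-device classifier; nothing leaves the phone."""
        return False

    def confirm(prompt: str) -> bool:
        return input(prompt + " [y/N] ").strip().lower() == "y"

    def notify_parents(child: ChildAccount) -> None:
        print("Parents notified (stays within the family's accounts).")

    def receive_image(image_bytes: bytes, child: ChildAccount) -> bool:
        """Returns True if the image is displayed in Messages."""
        if not child.safety_enabled or not looks_explicit(image_bytes):
            return True                                   # feature off or nothing flagged
        if not confirm("This photo may be sensitive. View it anyway?"):
            return False                                  # image stays blurred
        if child.age < 13:
            if not confirm("Your parents will be notified if you view this. Continue?"):
                return False
            notify_parents(child)
        return True
    ```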

    But again, this only works on Apple’s messaging app. If a kid has TikTok, Signal, WhatsApp, Twitter, Facebook, Instagram… then this doesn’t really matter. Apple isn’t scanning the kid’s photo library, camera app, or any third-party app. It only affects the app Apple itself services, and only if the parent opts in to the scheme.

    What this does is help close an avenue of messaging used for targeted harassment and for grooming victims. I would need someone to explain to me how that’s a bad thing in all instances and why it shouldn’t be an available OPTION for a parent to employ with their 10-year-old, knowing that anyone who objects to it can simply not turn it on in the first place.

    • Do I really deserve a “Wow” for this? In the first sentence, I posted a link to the article arguing that there were serious privacy issues (another Times article on the same issue came out today), and frankly, I don’t understand enough about computers to benefit from more research. I do know that the question of whether any technological advance is unethical because it can be abused is a difficult one. That’s why this is an Ethics Quiz. But because part of the answer depends on the trustworthiness of the tech company involved, I am inherently leery.

    • Tim, absent the “Wow,” which is unnecessary, I think your points are very solid and well made.

      According to what I have read, the only things that may be scanned are things that go through Apple’s servers, or are stored there. The novel thing, and people just don’t understand the difference, is the iMessage software.

      People think of iMessage as just another implementation of SMS texting or MMS, but it’s not. In fact, they are distinct technologies — iMessage is an Apple-hosted messaging service completely separate from SMS. Keep in mind that this is far from widely known, and people can be forgiven for thinking that Apple is intruding on their private communications because of it.

      Now, you can’t scan SMS the way Apple wants to do iMessage because they are sent through the common carrier data network, and common carriers protect themselves by allowing all content through their network. That has to be the subject of a legal process before it can be looked at.

      It’s important to note that if you don’t sync your Apple iPhone images or use the iMessage service to send images, Apple will not be scanning your phone for illegal images, nor, most likely, can they. That would be a privacy violation. But if you use their servers to sync or send images, they have a reasonable argument to consider the legality of those images. It’s not a slam dunk as you suggest — the caselaw on this is not well developed, but I can understand their concern.

      But Apple’s servers are not a common carrier, and just as you say, they are subject to potential legal penalties if they don’t police illegal content. If the phone’s images are synced with Apple’s service, it’s the exact same problem. Your use of the Specially Designated Nationals list is not an exact comparison (and not one most people would get), but it’s close enough for me. More accurate would be a blog that allowed people to store illegal images on it, even if they weren’t published to a broader audience.

      So thanks for pointing all that out.

      • Sorry about my initial reaction, but I feel it’s a valid reaction.

        Your description of iMessage and the Communication Safety feature is inaccurate. With this feature, Apple is using a different approach to identifying “nudes.” They aren’t scanning their iMessage servers for CSAM, because iMessage uses an end-to-end encryption model. This is, to some extent, why they couldn’t help with the San Bernardino case: the messages aren’t readable or scannable on their iMessage servers; they can only be processed or accessed once they are decrypted on one end of the transmission.

        If you *elect to deploy* this Communication Safety feature, any image received in the Messages app will be scanned for the potential that it is a “nude”. Sure, it will have a ton of false positives given the broad scope of the scan.

        However, this scan is happening “on device”, not at Apple. A positive match under this feature isn’t about identifying CSAM or illegal images, just “harmful” images. The result of this scan is threefold:

        1) This photo may be harmful, would you like to ignore it?
        2) You’ve said you want to see it, are you sure?
        3) Ok, here’s your image and your parents have been notified that you elected to see it so that you all can talk about it further if necessary.

        End of story. The results of this scan are not sent anywhere outside of the household.

        Does that help further understanding of this feature?

        • However, this scan is happening “on device”, not at Apple. A positive match under this feature isn’t about identifying CSAM or illegal images, just “harmful” images.

          Ah. Well, first of all thank you, I learned something I didn’t know about iMessage, probably because I haven’t used Apple products in decades. Once I read that it was handled by Apple’s servers, I didn’t consider that it might be stored there encrypted.

          So that changes things significantly. My reading of the case law would protect Apple from any charges regarding images on their servers if the images are encrypted and unrecoverable. In effect, this is plausible deniability.

          End of story. The results of this scan are not sent anywhere outside of the household.

          I find this much creepier now that I have been corrected about how it works. What they are deploying is effectively spyware, and the permission model is insufficient justification in my view. I am opposed to any such digital intrusion into people’s lives by third parties, regardless of their supposed benign nature. Yes, even children. If parents want to protect their kids that way, give them feature phones and disable MMS.

          I had thought Apple was trying to protect themselves from unreasonable liability, and I find that much more agreeable than trying to insert nanny spyware on children’s phones.

          I’m again’ it.

          • Calling it spyware is probably the best description. We can only really talk about how it’s properly used, not misused. After a little more reading, the notification for parents only happens for child accounts that are 13 & under. Older than that, they let the child make the decision and it goes no further.

            It’s a way to provide the user with information about harmful content, child exploitation, and grooming.

            So beyond that, Apple is in the business of selling their products, not shirking responsibility and telling their potential customer to go buy their kid a “feature phone”. Nor does Apple have an interest in making and selling a feature phone. If they can build this safety feature into the software and leave the decisions up to the parents (who are financing the whole debacle), then they’ve done their part.

            So, misuse? Could a controlling husband set up his untrustworthy wife with a child account on a phone he provides her in order to get notifications of nudes she sends or receives? Sure….but there’s a lot more going on there than the Communication Safety feature.

            Consider my position as the father of a 10-year-old. I buy a phone for her to use, but I don’t want her making app purchases, because then I’m at risk of her buying $10k+ worth of gems on Farmville or some ridiculous thing. So I lock down the App Store and require permission requests. With the App Store locked down, I decide which apps she can install and use, and conveniently, I don’t allow social media apps or accounts or other communication apps like Signal or WhatsApp. All she has is Apple’s Messages.

            So, what do I do in the absence of this new feature? I could 1) force her to show me her messages every day, but run the risk that she receives something bad and deletes it before I inspect what’s going on; or 2) simply sign into her account on another iDevice like an iPad, capture all of the communications in duplicate, and peruse them at my leisure.

            See, the child’s “privacy” was just an illusion to begin with and some parents sure do feel bad about being good parents and reading everything. But what if there was some way you could trust your kid and have assurance that if they were sending nudes or receiving nudes that you could be notified? Well in that case, you might actually stop being a “helicopter parent”, give a little trust and a little more privacy because you had the peace of mind granted by such a feature.

            Overall, we could talk about the ethics of invading “private conversations” of our children, but the only reason my child has a phone/device at all is because I told her “if I give this to you, there are no private conversations”. That was the agreement entered at the start, so, yes, I feel “ethically covered”.

            • My takeaways:

              This is all a perfect reason not to give children iPhones. Along with “There is no such thing as good spyware,” of course.

              As to children’s privacy being an illusion, I don’t necessarily disagree. But enlisting spyware to invade that privacy is a bridge too far. Better to restrict what they can do with technology.
