It’s not the only one, but still…
Assembly Bill 1831, introduced by California Assemblyman Marc Berman (D–Palo Alto) this month, would expand the state’s definition of child pornography to include “representations of real or fictitious persons generated through use of artificially intelligent software or computer-generated means, who are, or who a reasonable person would regard as being, real persons under 18 years of age, engaging in or simulating sexual conduct.”
Does Berman comprehend why the possession of child pornography is a crime in the first place? Clearly not. Somebody please explain to him that the criminal element in child porn is the abuse of living children required to make it. The theory, which I have always considered something of a stretch but whose utilitarian ethical argument I can accept, is that those who purchase or otherwise show a proactive fondness for such “art” in effect aid, abet, encourage and make possible the continued criminal abuse and trafficking of minors. It is not that such photos, films and videos cause one to commit criminal acts against children. That presumption slides down a slippery slope that would justify banning everything from Mickey Spillane novels to “The Walking Dead.”
The bill specifies that such AI-generated child porn must meet the state’s definition of obscenity to be illegal: material that “to the average person, applying contemporary statewide standards, appeals to the prurient interest”; “depicts or describes sexual conduct in a patently offensive way”; and “taken as a whole, lacks serious literary, artistic, political, or scientific value.” But in the 1982 case New York v. Ferber, the U.S. Supreme Court held that the only reason the First Amendment doesn’t protect child pornography is that the state has a special interest in protecting children from harm. Eight years later, in Osborne v. Ohio, the Court decreed that a ban on private possession of child pornography was constitutionally permissible. In both cases, the Court’s reasoning focused on the injury that child pornography’s production and dissemination inflicts on the children whose abuse it documents.
The ban on possession that the proposed bill embraces ignores the Supreme Court’s reasonable and well-grounded conclusion that “a State has no business telling a man” what he can look at while “sitting alone in his own house.” The sole exception is pornography created by harming and abusing real, live children, not, presumably, computer-created images whose creation harms no one.
I’m sure you can see what’s coming: an argument that AI programs that create realistic depictions of children being sexually abused must be, or probably are, or at least might be, trained on actual depictions of child sexual abuse, just as “Lexi Lov,” the busty blonde AI-created model who chats with paying subscribers in 30 different languages, 24 hours a day, was born of images of actual women, photographed while, well, being beautiful. Here’s “Lexi”…
That argument, however, would send obscenity law hurtling backwards in time to when mere possession of pornographic cartoons would risk arrest and prosecution. I really thought we had gotten past the time when government morals police tried to command what we could or couldn’t look at in our own homes.
________________
Source: Reason


Sorry, skimming here as I get dinner ready, but I do not think this was addressed:
Does the First Amendment apply to AI-generated materials?
Yes, this bill relates to POSSESSION of AI porn, so I think your point is ultimately correct.
However, just as the copyright issues with the monkey selfie were covered here, AI-generated materials could create legal issues (maybe more along the lines of copyright than the First Amendment variety).
If AI-generated material probably has no First Amendment protection (or does it? The First Amendment really restricts government action, not personal speech), this law might pass the initial smell test. But once I saw it was about possession, I concluded the law fails, though not for First Amendment reasons (as you suggest).
-Jut
But I have to believe, at least until SkyNet takes over, that any court will regard AI as a tool of human beings, and thus no more the “creator” of fake child porn than the Moog Synthesizer is the composer of music, or the Etch-a-Sketch is the artist. In all three cases, it’s a human being “speaking” through a mechanical device, and thus the creation cannot be banned or criminalized.
How do you discern what was AI-generated and what was exploiting children? It seems like an easy defense: just claim it was AI. Would it not be possible for someone to use real children but use AI to make them slightly different? Then the problem is how to prosecute those with images of real children if they claim they believed the images were artificially created.
These are just questions from a lay person.
The foundation is the invalidity of minors’ consent in this subject matter. Because their consent is invalid, there is no justification to take sexually explicit photographs of them, to record sexually explicit videos of them, or to possess such images or videos.
Can the subject later consent to the use of these images after growing up? No appellate court has answered that question.
I don’t think it would matter what they believed, just whether it could be proven a child was exploited.
I can’t imagine what it would be like to prosecute someone for having CP if the police/FBI/whoever first have to be able to produce the actual child. Then, if they are able to find the child, would a defense attorney argue that the child’s likeness was used but the actions depicted were all AI-generated, if the child didn’t testify as to what acts took place?
It definitely goes too far.
I can understand a law against the use of AI to take sexually explicit photographs, or record sexually explicit video, of underage persons. It would foreclose the “the AI did it” defense, which might persuade a jury or even a judge.
But there is a clear difference between a teacher secretly recording a sexual tryst with his 14-year-old student, and someone drawing a crude cartoon of a 14-year-old girl getting knocked up by her teacher.
Pardon my ignorance, but AI imagery is created without original source material (unlike, e.g., Xerox copies), ¿no? So, if the AI imagery is not taken from actual sexual abuse of children, how can someone be prosecuted for possession of child porn, notwithstanding the policy reasons Jack stated? Doesn’t a crime require intent, means, opportunity, and an actual instrumentality with which to commit the crime? Am I guilty of drug trafficking if the substance is actually oregano?
jvb
JVB,
Actually, AI is what is used to create deep fakes. I was just listening to Kurt the CyberGuy on Fox, and he was explaining that some high school girls had their legitimate images captured and then manipulated to show them in the nude. These images were circulated throughout the community.
Thus, if you capture a legitimate image and then manipulate it by altering some basic characteristics, is it that original person anymore? Under Jack’s theory it is not, and thus it cannot be deemed child porn if the adulterated image was then further manipulated to show the AI image engaged in some sexual activity.
This law may not be the correct law, but the advent of high-powered computing in the hands of bad people who will create deep fakes, child porn, or some other product that exploits another without consent needs to be addressed. How? I don’t know.
Well, using deep fakes and otherwise innocent photos of victims found on the web would be offenses for such material’s creators, not for anyone who sees or acquires it, I would hope.
I can agree that the creators caused the offense, but if people who consume child porn possess real exploitative materials yet believe, or claim as a defense, that they are AI-generated, this undermines the goal of prosecuting consumers of child porn in order to reduce the exploitation.