News Coverage

Posted by: Facebook Observer in child pornography, Examiner.com, media, pedophilia
It was surprising to see the extent of the recent news coverage about Facebook's problems with child pornography traders.  I'm glad that people are concerned about the problem of child pornography, but I would like to make a few remarks.

First, social networks are attractive to pedophiles and a number of other "problem" users because they simply provide the opportunity to anonymously connect with like-minded people.  Many child pornography collectors have obsessive traits.  How many news stories have you read about someone getting caught with only a few child pornography images?  Generally, it's a matter of hundreds or even thousands.  Facebook is one of the biggest players in the social networking arena and it also offers features like image and video sharing and private groups.  So it wasn't a matter of if pedophiles would show up on Facebook, it was a matter of when.

Facebook initially seemed focused on growth and new technological features, and may not have anticipated that it would also attract child pornography traders until it had ended up with quite a few of them.  The problem was that by that time, there were millions of accounts, so Facebook had to come up with automated ways of finding users who did not want to be found.  Automatically shutting down accounts annoyed the people behind them, but they could always just create new profiles, and many of them did.  To make things even more interesting, the problem users came from a number of countries, some of which are a lot more proactive about fighting child pornography than others.

For the most part, the pedophiles trading child pornography on Facebook seemed pretty stupid -- not too far removed from the idiots sharing it on open P2P networks.  The ones on Darknet seemed smarter and scarier.  I'd challenge the WND writers and anyone else interested in fighting child pornography to investigate that and try to come up with viable solutions.

PhotoDNA

Posted by: Facebook Observer in child pornography, Examiner.com, Facebook response, media
According to numerous articles and press releases, Facebook will be using a new software package called PhotoDNA, developed by Microsoft and donated to the National Center for Missing & Exploited Children (NCMEC).  It uses a technique called "robust hashing" and is based on work by Dartmouth's Hany Farid.

It's easy to demand that something be done about child pornography, but much harder to actually curb it.  Computer vision is useful for simple things (like reading license plate numbers), but it has its limitations.  Most people have encountered CAPTCHA, the anti-spam technology that asks you to read a series of distorted letters or do something similar.  Why does that work?  Because humans can recognize things in images that computers can't.

However, it is possible for a computer to tell whether an image is identical to known child pornography.  Law enforcement and ISPs currently use a cryptographic hash function known as SHA-1.  ISPs calculate the SHA-1 hashes of images uploaded to their servers, compare them to the hash values of known child pornography, and take action if they detect a match.  An automated approach like this is very important for big players like Facebook, which may have millions of images uploaded every day; having humans review all content isn't feasible.
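
To make that concrete, here's roughly what exact-match screening looks like; a minimal Python sketch, with a placeholder standing in for the hash list that NCMEC or law enforcement would actually supply.

    import hashlib

    # Placeholder for the hash list that NCMEC/law enforcement would supply.
    # (The value below is just the SHA-1 of an empty file.)
    KNOWN_HASHES = {
        "da39a3ee5e6b4b0d3255bfef95601890afd80709",
    }

    def sha1_of_file(path):
        """Compute the SHA-1 digest of a file, reading it in chunks."""
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    def screen_upload(path):
        """Return True if the uploaded file exactly matches a known image."""
        return sha1_of_file(path) in KNOWN_HASHES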

The problem with SHA-1 hashing is that it only works if one file is exactly identical to another, bit for bit.  So let's say someone takes a known CP image and slightly crops or resizes it.  The automated matching breaks because the files are no longer exactly the same.  This is what PhotoDNA addresses.  Note that even PhotoDNA only works with known CP images, so if a predator creates his own CP and uploads it, the system won't recognize it.  (This happened with John Huitema, who recently pleaded guilty to victimizing a 2-year-old girl and producing child pornography.)
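
The fragility is easy to demonstrate.  In this little experiment (using the Pillow imaging library, with a hypothetical photo.jpg), cropping a single row of pixels and re-encoding produces a completely different digest, so an exact-match check like the one above never fires.

    import hashlib
    from io import BytesIO
    from PIL import Image  # Pillow imaging library

    original = open("photo.jpg", "rb").read()  # hypothetical file

    # Crop one pixel row off the bottom and re-encode as JPEG.
    img = Image.open(BytesIO(original))
    cropped = img.crop((0, 0, img.width, img.height - 1))
    buf = BytesIO()
    cropped.save(buf, format="JPEG")

    # The two digests will not match, even though the images look identical.
    print(hashlib.sha1(original).hexdigest())
    print(hashlib.sha1(buf.getvalue()).hexdigest())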

The information about how PhotoDNA works is pretty limited, since it comes mostly from Microsoft press releases, so there are still a number of unanswered questions.  It sounds like the image is converted to grayscale, resized to a standard size, and broken up into small blocks, and a digital signature is calculated for each block.  This is supposed to detect images that have been altered, and Microsoft says its testing has yielded promising results.
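
Based on that description, the general idea might look something like the toy sketch below.  To be clear, the actual PhotoDNA algorithm is not public; the per-block feature and the matching threshold here are my own guesses, for illustration only.

    import numpy as np
    from PIL import Image

    def block_signature(path, size=64, blocks=8):
        """Toy 'robust hash': grayscale, fixed size, mean intensity per block."""
        img = Image.open(path).convert("L").resize((size, size))
        a = np.asarray(img, dtype=np.float32)
        step = size // blocks
        return np.array([a[y:y + step, x:x + step].mean()
                         for y in range(0, size, step)
                         for x in range(0, size, step)])

    def is_match(sig_a, sig_b, threshold=10.0):
        """Fuzzy compare: small per-block differences still count as a match.
        The threshold is invented for this sketch, not taken from PhotoDNA."""
        return float(np.abs(sig_a - sig_b).mean()) < threshold

Because the comparison is a distance rather than an equality test, a slightly cropped or resized copy can still land within the threshold, which is exactly what exact hashing can't do.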

It does sound like PhotoDNA will be more robust than SHA-1 hashing, but there's not much information available on its limitations or future directions.  It's also not possible for regular people to download the software and play with it.  Some image manipulation involves discarding a lot of information.  Would PhotoDNA be able to detect a preview-sized image of known child pornography that someone was advertising on another site?  The "digital fingerprint" seems to rely a lot on edge detection and intensity.  What happens if someone alters a photo in a way that changes the edge information, e.g., by adding another object or lettering?

Some of the Facebook problem children have also shared CP videos.  It does not sound like PhotoDNA is currently being applied to video, and I'd be curious what the potential is in this area.  Many video codecs use I-frames (fully self-contained frames), which could probably be analyzed the same way still images are.
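
If anyone wanted to experiment along those lines, one approach would be to decode just the keyframes and run an image signature over each of them.  Here's a sketch using PyAV (Python bindings for FFmpeg); the signature function is the same toy one from the sketch above, not anything Microsoft has described.

    import av           # PyAV, Python bindings for FFmpeg
    import numpy as np

    def image_signature(img, size=64, blocks=8):
        """Same toy block signature as above, taking a PIL image directly."""
        a = np.asarray(img.convert("L").resize((size, size)), dtype=np.float32)
        step = size // blocks
        return np.array([a[y:y + step, x:x + step].mean()
                         for y in range(0, size, step)
                         for x in range(0, size, step)])

    def keyframe_signatures(video_path):
        """Decode a video and yield a signature for each keyframe (I-frame)."""
        container = av.open(video_path)
        for frame in container.decode(video=0):
            if frame.key_frame:                # True for I-frames
                yield image_signature(frame.to_image())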

Ironically, Windows Live has apparently had some major problems with child pornography.  Has Microsoft tried applying PhotoDNA to its own network?  If not, why not?  If so, why doesn't it work better?

Fox News New York had a story about Facebook users sharing child pornography.
FBI officials describe illegal photo sharing on social networks as "rampant." Nickolas Savage, assistant security chief of the FBI's cyber division, says pedophiles exchanging pictures on social networks can feed a vicious cycle.
"They can meet other people like themselves, and go off and validate their behavior," Savage says. "When they trade with others there's always a sense they need more material."
Child predators even steal innocent pictures of children that could come from their parents' Facebook profiles and unlocked photo albums.
Stolen or illegal images can be reported to Facebook right on the site. The company removes them. But Bechard thinks the company should do more.
"They shut somebody out, but they don't lock the door," he says. "They just come back right in as another profile, putting up the same images and trading the same information with other pedophiles."
This is consistent with what we've seen.  For example, "Jimmy Lemoni" (ab)uses the grou.ps social network to create a new group dedicated to child pornography.  For obvious reasons, the URL has been redacted.


His friend "Ddeby Cchrm" came back with a new account very quickly after Facebook shut down the previous one.  Given the interests, it probably shouldn't be too surprising that "Ddeby" also seems to be involved in the group that Jimmy was advertising.

Lydia Cacho, a Mexican journalist who has won awards from Amnesty International, the International Women's Media Foundation, and UNESCO, is tackling the issue of child pornography on social networks.  She is calling on Facebook to report child pornography to the police.  Facebook's position has been that it does so when the material comes from a country that has an agreement with the U.S. to prosecute it.

As I've said before, one technical problem here is that large social networks may not always be aware of child pornography because not all of it can be automatically detected.  If a user who lives in the U.S. posts a file that the system recognizes as known child pornography, that user probably will be reported to law enforcement.  But if the image does not match any known child pornography hashes, the system will not automatically flag it no matter how bad it is.  Since many of the child pornography traders seem to use closed and secret groups for posting their material, they're also unlikely to be reported by other users.

From what I understand, most account shutdowns happen as a result of an automated process, not a human review.  The system is capable of detecting certain anomalies that are likely to be associated with abusive accounts, but it is unlikely to be smart enough to recognize accounts that need to be reported to law enforcement.  Once an account is shut down, the clock starts ticking and the information is automatically deleted after 90 days.

Lydia Cacho, a Mexican journalist renowned for her coverage of human trafficking issues, discusses pedophiles and social networks.  The original column is up at http://www.eluniversal.com.mx/columnas/88554.html and the Google translation is here.  She makes some excellent points: the social networks themselves have not done anything wrong, but they are often used by predators.  Shutting down the offending accounts is a start, and Facebook has certainly made an effort.  However, as she points out, the users are often back within 24 hours sharing the same materials, and the problem requires a coordinated effort among law enforcement agencies in many countries.

It might be helpful if social networks made it easier for users to explicitly report child exploitation.  For example, if an image or video was specifically flagged as child pornography, it might be useful to take a hash or some other kind of digital fingerprint.  The CP traders we've seen are often repetitive, and I'd guess that many of them post the same images and videos again and again.  That data, plus whatever hashes the government shares, would help automatically detect at least some known CP, as in the sketch below.
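
A minimal sketch of that idea, with invented function names: when a reviewer confirms a flagged image, its fingerprint goes into a local blocklist that is checked on every later upload.  I've used a plain SHA-1 here for simplicity; a robust hash like the one discussed in the PhotoDNA post would also catch edited reposts.

    import hashlib

    confirmed_fingerprints = set()  # grows as reviewers confirm reports

    def fingerprint(image_bytes):
        """Exact SHA-1 for simplicity; a robust hash would also catch
        edited reposts of the same material."""
        return hashlib.sha1(image_bytes).hexdigest()

    def on_confirmed_report(image_bytes):
        """A reviewer confirmed a flagged image: remember its fingerprint."""
        confirmed_fingerprints.add(fingerprint(image_bytes))

    def on_upload(image_bytes):
        """Block re-uploads of previously confirmed material."""
        if fingerprint(image_bytes) in confirmed_fingerprints:
            return "blocked"
        return "allowed"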

Another One Bites the Dust

Posted by: Facebook Observer in arrests, child pornography, media
Jerry Cannon, the user we knew as "Terry Lewis," has also been arrested.  He was quite persistent, and more than a little creepy.  This is his profile picture, which was recycled in more than one account.  We were aware of 7 accounts, and the news story said that he had 13.


A lot of the users on his friends list had profile pictures depicting girls and young women, and he was quite chatty.  He kept a collection of child pornography in his private albums and apparently hoped that if he granted his friends access, they'd reciprocate by taking "private" pictures of themselves.  Of course, most of his Facebook friends probably weren't the attractive young girls in their profile photos.  One of them was an author investigating human trafficking, and another turned him in to the FBI.  Then again, it's also unlikely that he really owned a "multi-million dollar business."

We contacted Kentucky ICAC about him, and they were responsive and professional.  However, the FBI had already begun an investigation when the other Facebook user reported him.  It turns out that "Terry Lewis" was actually Jerry L. Cannon, formerly the pastor at God's House in Dry Ridge, KY.  The news story is at http://blog.al.com/live/2011/02/kentucky_pastor_facebook_pornography.html and the criminal complaint is at http://www.scribd.com/doc/49175841/tlewis.