How can facial-data abuse be combated at the technical level?
Researching and developing detection technology to "reverse" face swaps
In September this year, Facebook announced that it would provide a toggle that lets users turn face recognition on or off. If a user disables face recognition, Facebook will stop automatically tagging that user in images and stop suggesting that others tag them.
In addition, Facebook's chief technology officer, Mike Schroepfer, announced on the company blog that Facebook is partnering with Microsoft and researchers from MIT, Oxford, and other universities to organize a "Deepfake Detection Challenge," which will explore how to detect deepfake face-swap videos through shared data sets and benchmarks. The stated goal of the challenge is to produce a tool that can detect whether a video has been manipulated and that anyone can operate easily. Facebook will also back the challenge with data sets, funding, and prize money to encourage broader participation, reportedly investing more than 10 million US dollars.
Truepic, a California startup, is also working on the problem of AI face-swap abuse: it provides an image-verification service built around its mobile applications. Photos taken through its apps are analyzed and stamped with a watermarked URL, location data, and other metadata; a copy of each image is saved, and query-based verification is supported to establish authenticity and reliability.
When a photo is captured in Truepic's iOS or Android apps (or in an app embedding its SDK), Truepic verifies that the captured image has not been altered and watermarks it with a timestamp, geocode, URL, and other metadata. Truepic's secure server stores a version of the image, assigns it a six-digit code and a URL, and adds a record to a blockchain.
Users can publish their Truepic images or verification links to prove that their images are genuine. Viewers can follow the watermarked URL on an image and compare it with the version saved in Truepic's library to confirm that it is authentic and unmodified. Truepic is currently working with Qualcomm to build its technology into mobile-phone hardware.
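The capture-and-verify flow described above can be sketched in a few lines. The Python below is a minimal illustration, not Truepic's actual implementation: the in-memory store, the six-digit code, and the verify URL are hypothetical stand-ins for the company's proprietary server and blockchain record.

```python
import hashlib
import secrets
import time

# Illustrative stand-ins for Truepic's secure server and blockchain ledger.
image_store = {}   # six-digit code -> original image bytes
ledger = []        # append-only list of capture records

def register_capture(image_bytes: bytes, lat: float, lon: float) -> dict:
    """Fingerprint an image at capture time and record it immutably."""
    record = {
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
        "timestamp": time.time(),
        "geocode": (lat, lon),
    }
    code = f"{secrets.randbelow(10**6):06d}"            # six-digit lookup code
    record["url"] = f"https://verify.example.com/{code}"  # hypothetical URL
    image_store[code] = image_bytes                     # server keeps a copy
    ledger.append(record)                               # append-only record
    return record

def verify(image_bytes: bytes, code: str) -> bool:
    """Check a presented image against the version saved at capture time."""
    original = image_store.get(code)
    return (original is not None and
            hashlib.sha256(image_bytes).digest() == hashlib.sha256(original).digest())
```

The essential design point is that the fingerprint is taken at the moment of capture, so any later edit, however subtle, changes the hash and fails verification.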
Serelay, a British startup, has taken a similar approach. The company has built a Twitter-style verification system for photos and videos, marking a photo as "genuine" at the moment it is taken. While capturing a photo or video, Serelay records data such as the phone's relationship to nearby cell towers and GPS satellites.
Meanwhile, the US Department of Defense is studying image-forensics technology. The idea is to look for inconsistencies in pictures and videos, such as mismatched lighting, shadows, and camera noise. Detecting sophisticated deepfakes requires further verification methods, such as a "point capture" approach that looks for evidence of tampering by observing inconsistencies between facial expressions and head movements. The researchers are also trying to automate authentication, letting computer algorithms perform the detection. These forensic methods can be applied to photos and videos taken decades ago as well as to those recently captured with smartphones or digital cameras.
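As a rough illustration of the "camera noise" cue mentioned above, the sketch below estimates per-patch noise energy with a simple high-pass filter and flags patches whose statistics deviate sharply from the rest of the frame; spliced or synthesized regions often carry different sensor noise than the surrounding image. This is a toy example of the general idea, not the Defense Department's actual tooling.

```python
import numpy as np
from scipy.ndimage import laplace

def noise_map(gray: np.ndarray, patch: int = 32) -> np.ndarray:
    """Estimate per-patch noise energy via a Laplacian high-pass residual."""
    residual = laplace(gray.astype(np.float64))
    rows, cols = gray.shape[0] // patch, gray.shape[1] // patch
    energy = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            block = residual[r * patch:(r + 1) * patch,
                             c * patch:(c + 1) * patch]
            energy[r, c] = block.std()   # noisier blocks score higher
    return energy

def suspicious_patches(gray: np.ndarray, z: float = 3.0) -> np.ndarray:
    """Flag patches whose noise deviates strongly from the image median."""
    e = noise_map(gray)
    med = np.median(e)
    mad = np.median(np.abs(e - med)) + 1e-9   # robust spread estimate
    return np.abs(e - med) / mad > z          # boolean map of outliers
```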
Matt Turek, who leads the media forensics program at the US Department of Defense's DARPA, believes that deepfake makers constantly adapt to each new detection technique, so no single algorithm or technical solution will defeat them in one stroke; what is needed is a comprehensive set of solutions. Whether by proactively watermarking images or by exposing forgeries through "fault-finding," each of these approaches is a necessary means of countering deepfake counterfeiting.
Starting from the essence of AI security: fixing the latent risks of applied AI technology
Some companies are also trying to forge defense technologies with a higher technical bar, confronting AI-driven forgery at the level of adversarial security itself.
For example, the RealAI team, incubated from Tsinghua University's Institute for Artificial Intelligence, reports that because the fake frames generated by deepfake tools carry "unnatural" textures, it trained a neural network on massive volumes of video to learn what textures look like under normal conditions, then used it to detect inconsistent textures in forged video. With this technique, fake videos can be screened frame by frame with an accuracy above 90%.
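RealAI has not published its model, but the frame-by-frame texture approach it describes can be outlined roughly as follows. The network architecture, the threshold, and the majority-vote rule here are illustrative assumptions, not the team's actual design.

```python
import torch
import torch.nn as nn

class TextureDetector(nn.Module):
    """Small CNN that scores a frame for 'unnatural' texture (a placeholder
    architecture standing in for RealAI's unpublished model)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),      # global texture summary
        )
        self.head = nn.Linear(64, 1)

    def forward(self, x):                 # x: (N, 3, H, W) batch of frames
        f = self.features(x).flatten(1)
        return torch.sigmoid(self.head(f))  # per-frame fake probability

def score_video(frames: torch.Tensor, model: TextureDetector,
                threshold: float = 0.5) -> float:
    """Classify frame by frame; return the fraction of frames judged fake."""
    with torch.no_grad():
        probs = model(frames).squeeze(1)
    return (probs > threshold).float().mean().item()
```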
Scholars at the University of California, Riverside have also proposed a new algorithm for detecting deepfake-forged images. One component of the algorithm is a set of recurrent neural networks that segment a suspect image into small patches and examine those patches pixel by pixel. After training on thousands of deepfake images, the network learns to spot characteristics of forgery at the single-pixel level.
The other component passes the whole image through a series of encoding filters, which let the algorithm consider the image at a larger, more holistic level. The algorithm then compares the pixel-by-pixel output with the results of the higher-level encoding-filter analysis. When these parallel analyses trigger an alarm in the same region of the image, that region is flagged as a possible deepfake forgery.
At present the algorithm can distinguish unmodified images from forged ones with an accuracy between 71% and 95%, depending on the sample data set used, but it has not yet been extended to deepfake video. That is the team's next step: extending the algorithm to video by examining how the image changes frame to frame and whether detectable patterns emerge from those changes.
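A minimal sketch of the two-branch idea, assuming a PyTorch setup: one branch scores the image pixel by pixel, the other encodes it at a coarser level, and only regions where both branches raise an alarm are flagged. The layer choices below are placeholders, not the Riverside team's published architecture.

```python
import torch
import torch.nn as nn

class PixelBranch(nn.Module):
    """Fine branch: produces a per-pixel tamper score for small patches."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.net(x)                       # (N, 1, H, W) score map

class GlobalBranch(nn.Module):
    """Coarse branch: encoding filters that view the whole image at once."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 1, 5, stride=2, padding=2), nn.Sigmoid(),
        )
    def forward(self, x):
        y = self.net(x)                          # downsampled score map
        return nn.functional.interpolate(        # upsample back to full size
            y, size=x.shape[2:], mode="bilinear", align_corners=False)

def tamper_mask(image, pixel_branch, global_branch, threshold=0.5):
    """Flag only regions where BOTH parallel analyses raise an alarm."""
    with torch.no_grad():
        fine = pixel_branch(image)
        coarse = global_branch(image)
    return (fine > threshold) & (coarse > threshold)
```

Requiring agreement between the fine and coarse analyses is what suppresses false alarms: a single noisy pixel score, or a single coarse blur, is not enough on its own.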
In addition, Gfycat, the GIF and short-video sharing site that has served as a major source of deepfake material, has trained two AI models based on the platform's GIF search data and tools. One is named Project Angora, after a long-haired cat; the other is Project Maru, after a short-haired cat.
Functionally, Angora, the long-haired cat, has a good memory and quick paws: it can rapidly locate the original version, or alternative versions, of a face-swapped video. Maru, the short-haired cat, has sharper senses and covers Angora's blind spots. For example, Angora cannot handle new content that has never been indexed, but Maru can still recognize and flag it. Moreover, if Maru judges that the person in a clip closely resembles a celebrity, it scans the "face" frame by frame for signs of fraud; after all, however good a face swap is, it is hard to make it seamless in every single frame.
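Maru's frame-by-frame face check can be approximated with per-frame face embeddings: a face swap that slips in a few imperfect frames shows up as outlier distances to the reference identity. The sketch below assumes embeddings already produced by some external face recognizer (not shown); it is an illustration of the idea, not Gfycat's actual system.

```python
import numpy as np

def frame_consistency(face_embeddings: np.ndarray,
                      reference: np.ndarray,
                      tol: float = 0.35):
    """Compare per-frame face embeddings against a reference identity.

    face_embeddings: (num_frames, dim) array, one embedding per frame.
    reference: (dim,) embedding of the celebrity the face resembles.
    Returns per-frame cosine distances and the indices of outlier frames.
    """
    ref = reference / np.linalg.norm(reference)
    emb = face_embeddings / np.linalg.norm(face_embeddings,
                                           axis=1, keepdims=True)
    dist = 1.0 - emb @ ref              # cosine distance per frame
    outliers = np.where(dist > tol)[0]  # frames that don't match the identity
    return dist, outliers
```

Even a handful of outlier frames is a strong tamper signal, since a genuine clip of one person should stay consistently close to that person's embedding throughout.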
However, these two AI models can only catch forgeries; they cannot reverse them. And if a forged video is the only copy on the entire web, with no original to match against, the algorithms are helpless. To handle the cases the AI cannot, Gfycat also employs human reviewers, and it uses data such as shared location and upload location to help judge whether content is maliciously forged.
As security experts put it, no data is 100% secure; there is only 100% effort at defense. Network security is a continuous arms race that inevitably drives both attack and defense forward. People should therefore not lean too heavily on algorithms billed as highly accurate: an over-trusted detection algorithm can itself be weaponized by those trying to spread disinformation.
Identifying fake pictures and videos with blockchain technology
AI is not the only technology that can be turned against this technical problem; blockchain can also help expose fake pictures.
In July this year, the New York Times, a media institution with more than a century of history, announced that it would use blockchain technology to combat fake news and unveiled the blockchain project it is developing, the News Provenance Project. The project is a blockchain network built on Hyperledger, developed jointly by the New York Times and IBM Garage, to create and share "metadata" for news photos.
This "metadata" includes a news photo's time and place of capture, its photographer, and its full editing and publication history. With this information, media outlets and users can judge whether a picture has been doctored, for example with Photoshop, and thus assess the authenticity of the associated reporting.
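The News Provenance Project runs on Hyperledger, but the core idea, an append-only chain of photo hashes and metadata, can be illustrated with a toy ledger like the one below. Everything here is a simplified, hypothetical stand-in for the real network.

```python
import hashlib
import json
import time

class ProvenanceLedger:
    """Toy append-only hash chain recording photo metadata."""
    def __init__(self):
        self.blocks = []

    def record(self, photo_bytes: bytes, metadata: dict) -> dict:
        """Append a block linking this photo's hash and metadata to the chain."""
        prev = self.blocks[-1]["block_hash"] if self.blocks else "0" * 64
        entry = {
            "photo_sha256": hashlib.sha256(photo_bytes).hexdigest(),
            "metadata": metadata,     # time, place, photographer, edit history
            "prev_hash": prev,        # chaining makes past records tamper-evident
            "recorded_at": time.time(),
        }
        entry["block_hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.blocks.append(entry)
        return entry

    def verify_photo(self, photo_bytes: bytes) -> bool:
        """Check whether a photo matches any recorded original, bit for bit."""
        digest = hashlib.sha256(photo_bytes).hexdigest()
        return any(b["photo_sha256"] == digest for b in self.blocks)
```

Because each block's hash covers the previous block's hash, rewriting any historical record would break every later link, which is what makes the shared metadata trustworthy to downstream readers.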
In this "no secrets" era, we cannot hold back the wave of data-driven innovation, but it is time to be vigilant about how facial data is collected and used, and to defend our legitimate privacy rights more carefully. Given how widely face information is already being abused, the problem cannot be solved merely by ordinary Internet users improving their judgment and network-security awareness; after all, mass media literacy is not cultivated overnight. Industry self-discipline must therefore be tightened, and external regulation must move ahead in step, so that the loopholes are closed before accidents happen.