Deepfakes

"Deepfakes" is a term coined in 2017 for techniques, heinous or genius depending on who you ask, used to superimpose one person's face onto another's body, most often a porn star's. It works by combining existing images and videos with source material using a machine learning technique called a generative adversarial network. The result: non-consensual porn videos, fake news, and other malicious hoaxes, with the main societal concern being whether people can actually tell real from fake.
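For the nerds among you, here's a minimal sketch of the adversarial idea behind a GAN: a generator learns to produce fakes while a discriminator learns to call them out, and each one's failures train the other. This is a toy illustration in PyTorch, not any actual deepfake app's face-swap pipeline; the image size, network shapes, and learning rates are assumptions made just for the example.

```python
import torch
import torch.nn as nn

# Toy GAN: the generator tries to fool the discriminator,
# the discriminator tries to tell real images from generated ones.
IMG_PIXELS = 64 * 64  # assumed tiny grayscale "faces" for this sketch
NOISE_DIM = 100

generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_PIXELS), nn.Tanh(),        # outputs a fake image
)
discriminator = nn.Sequential(
    nn.Linear(IMG_PIXELS, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),              # probability input is real
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_faces):
    batch = real_faces.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator on real and freshly generated images.
    noise = torch.randn(batch, NOISE_DIM)
    fakes = generator(noise).detach()             # don't backprop into G here
    d_loss = loss_fn(discriminator(real_faces), real_labels) + \
             loss_fn(discriminator(fakes), fake_labels)
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator to make the discriminator say "real".
    noise = torch.randn(batch, NOISE_DIM)
    g_loss = loss_fn(discriminator(generator(noise)), real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()
```

The point of the tug-of-war: the better the discriminator gets at spotting fakes, the more convincing the generator has to become, which is exactly why this tech keeps improving.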

The history of Deepfakes


Deepfakes all started when an anonymous Redditor known as "deepfakes" began uploading videos of celebrities' faces on porn actors' bodies. The technology involved tools that could insert a face into existing footage frame by frame, a possibly glitchy process that has since evolved to feature political figures, TV personalities, and now anyone.

Unfortunately, celebrities were the easiest targets, especially because the technology required more than a few images. Currently, however, the underlying deepfake software is getting hotter and more effective, drawing interest from companies working on augmented reality, like Samsung and Google.

There has also been a reported breakthrough in controlling depth perception in video footage, which is one of the easier ways to distinguish real videos from fakes. The most saddening bit is that people have not been sensitized enough; probably 70% of the world cannot separate a deepfake from a real clip.

Samsung researchers have also recently pointed towards creating avatars for games and video conferences. This avatar technology requires only a few images, and it is basically the most recent advancement leading to the generation of full-body avatars.

Are Deepfakes obvious?


While various personalities, for instance Hwang, have cast doubt on claims that technological advances will in time make deepfakes indistinguishable, the real question is just how much damage the victims, who at this point might be you or I, will have incurred before a clip is actually proven to be fake.

Well, there have already been concerns about people harassing others with threats to leak fakes of them. And judging from the falling costs and the better-equipped, more skilled creators, deepfakes might affect lots of people in the future.

The worrying factor is that there is no reliable fake-detecting software available just yet, let alone the fact that most people believe what they see on the media or social platforms, say for example the Belgian climate fake featuring Trump, or the doctored Pelosi video. All of which raises the question of what action should be taken afterwards, with Pelosi herself claiming, "If it seems clearly intended to defraud the listener to think something pejorative, it seems obvious to take it down," she says. "But then once you go down that path, you fall into a line-drawing dilemma."

Deepfakes 2019 & public concerns


Thing is, technology doesn't just vanish into thin air because someone somewhere wishes it would. What actually happens is that, like a newborn babe, it keeps growing from one generation to the next, combining itself with other technological advancements, whether software or hardware. No wonder Samsung has recently developed an easier way to fake that doesn't require much imagery, meaning your Facebook profile pics might actually be more than enough to get you dancing nude, fucking, or getting fucked across the internet ASAP!

There are concerns, of course, with major sites including Pornhub and Reddit banning fakes on the grounds that it is porn whose subjects have not consented to viewership. The biggest fear yet is that this technology could actually be used to undermine democracy: picture a public figure in a fabricated clip, styled as a grainy cell-phone video so that imperfections are overlooked, and timed for the right political heat to shape public opinion. And with no solution found so far ahead of the 2020 elections, political leaders have called for legislation banning the "malicious use" of deepfakes.

According to Robert Chesney, a professor of law at the University of Texas, however, political disruption doesn't even require cutting-edge technology; it could just as well result from lower-quality stuff aimed at sowing discord, not necessarily at fooling anyone. The most practical example is the three-minute clip of House Speaker Nancy Pelosi circulating on Facebook, appearing to show her drunkenly slurring her words in public, which clearly displays a lack of ethics.


By reducing the number of photos required, Samsung's method might mean even bigger problems, because the results look more real. The cost and time involved may still complicate things, but as Chesney says, "Some people might have felt a little insulated by the anonymity of not having much video or photographic evidence online." Unfortunately for them, the technology, called "few-shot learning," will be able to get past that, perhaps because it does most of the heavy computational lifting ahead of time. Rather than being trained on, say, Trump-specific footage, this system is fed a far larger amount of video that includes diverse people, the idea being that it will learn the basic contours of human heads and facial expressions. From there, the neural network can apply what it knows to manipulate a given face based on only a few photos, as in the case of the Mona Lisa.
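To make that "heavy lifting ahead of time" idea concrete, here's a minimal sketch of a few-shot setup. To be clear, FaceEmbedder, FaceGenerator, and every shape and dimension below are hypothetical names and numbers invented for illustration, not Samsung's actual models; the point is only the shape of the pipeline: pretrain big, then adapt to a new face from a handful of photos.

```python
import torch
import torch.nn as nn

EMBED_DIM = 128
POSE_DIM = 68 * 2  # e.g. 68 facial landmarks as the pose/expression input

class FaceEmbedder(nn.Module):
    """Compresses a few reference photos of one person into an identity vector."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, EMBED_DIM)

    def forward(self, photos):                  # photos: (k, 3, H, W)
        feats = self.conv(photos).flatten(1)    # (k, 64)
        return self.fc(feats).mean(dim=0)       # average over the k "shots"

class FaceGenerator(nn.Module):
    """Renders the identity in a target pose; pretrained on many people."""
    def __init__(self, out_pixels=64 * 64 * 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(EMBED_DIM + POSE_DIM, 512), nn.ReLU(),
            nn.Linear(512, out_pixels), nn.Tanh(),
        )

    def forward(self, identity, pose):
        return self.net(torch.cat([identity, pose]))

# The heavy lifting happens ahead of time: both networks get trained on a
# large corpus of diverse faces (not shown). At "attack time", a few
# photos of one target are all it takes:
embedder, generator = FaceEmbedder(), FaceGenerator()
few_photos = torch.randn(3, 3, 64, 64)       # stand-in for 3 target photos
identity = embedder(few_photos)
new_pose = torch.randn(POSE_DIM)             # stand-in for a desired expression
fake_frame = generator(identity, new_pose)   # the target, in a pose they never made
```

Notice where the photos of the victim enter: only at the very end, which is why a handful of Facebook profile pics can be enough.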

This technique is actually similar to methods that have revolutionized how neural networks learn things like language: massive data sets teach them generalizable principles. That has in turn given rise to models such as OpenAI's GPT-2, which crafts written language so fluently that its creators decided against releasing the full model, out of their own reservations, or should I say the fear of it being used to spread fake news.


At present, if it were adapted for malicious use, this particular strain of chicanery would be easy to spot, according to Siwei Lyu, a professor at the State University of New York at Albany who studies deepfake forensics under a DARPA program. The otherwise impressive demo misses finer details, like Marilyn Monroe's famous mole, which vanishes into thin air as she throws her head back to laugh. The researchers also haven't addressed countless other hitches, including how to properly sync audio to the deepfake or how to iron out glitchy backgrounds. And although Lyu's group has bridged that second challenge in a video fusing Obama's face onto an impersonator singing Pharrell Williams' "Happy," the Albany researchers say they will not release the method, for everyone's safety.
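That vanishing mole hints at how forensics can work: real faces keep their fine details from frame to frame, while sloppy fakes drop them intermittently. Here's a toy sketch of that one idea, watching a small patch for suspicious flicker. This is emphatically not Lyu's method (which isn't public); it assumes the frames are already face-aligned, and the thresholds are made up for the example.

```python
import numpy as np

def patch_consistency_scores(frames, box):
    """Toy forensic cue: how stable is a small facial patch (say, where a
    mole should be) across frames? Spikes suggest a detail blinking in
    and out, one of the artifacts a forensic analyst looks for.

    frames: list of HxW grayscale arrays, assumed face-aligned.
    box: (top, left, height, width) of the patch to watch.
    """
    t, l, h, w = box
    patches = np.stack([f[t:t+h, l:l+w].astype(float) for f in frames])
    mean_patch = patches.mean(axis=0)
    # Per-frame deviation from the average appearance of the patch.
    return np.abs(patches - mean_patch).mean(axis=(1, 2))

# Usage with stand-in data (random frames, purely illustrative):
frames = [np.random.rand(128, 128) for _ in range(30)]
scores = patch_consistency_scores(frames, box=(60, 40, 8, 8))
suspicious = scores > scores.mean() + 2 * scores.std()  # crude threshold
```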

If you asked me, I'd say this is proof enough that deepfakes will stop at nothing, not today and not in the coming future. The reality is that deepfakes are here in 2019, getting more realistic and easier to pull off thanks to advances in technology and software from companies such as Samsung.

PornDude’s final words

Deepfakes in 2019 seem to be coming at everyone pretty strong, with ever more realistic clips that could incriminate or even demoralize anyone. The bitter truth is that as much as we love celebrity porn fakes today, tomorrow we might lose our jobs over our own fake sex tapes.