My first experiment with AI in games was an air hockey game I published sometime in 2018. It was fairly rudimentary – the AI would just chase the puck and try to hit it. There was no real “intelligence” behind it.
In the new “Learning Edition” of Air Hockey, the Blue puck is still “dumb” with the same basic “follow the puck and hit it” logic as before, and its movement speed & update rate (how often it senses the puck) don’t change.
However, the Red puck is now a “smart”, learning AI – it uses the same “follow and hit” logic as the Blue puck, but alters its speed and update rate based on its performance.
Will the “smart” Red puck be victorious after an hour of play, or will the dumb Blue puck foil it through luck and physics?
If you want to know even more, there’s lots of detail on the Red puck’s “smart” logic in this text file…
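For illustration only, the Red puck’s adaptive tuning might look something like the minimal Python sketch below – every name, threshold and step size here is my own invention, since the real parameters live in that text file:

```python
# Hypothetical sketch of the Red puck's adaptive logic – all values
# are guesses, not the actual numbers used in the game.

class RedPuck:
    def __init__(self):
        self.speed = 1.0            # movement speed multiplier
        self.update_interval = 0.5  # seconds between puck "sensing" updates

    def record_goal(self, scored_by_red: bool) -> None:
        """Nudge speed and sensing rate based on how the match is going."""
        if scored_by_red:
            # Doing well: back off slightly so it doesn't run away with it.
            self.speed = max(0.5, self.speed - 0.05)
            self.update_interval = min(1.0, self.update_interval + 0.05)
        else:
            # Conceded a goal: move faster and sense the puck more often.
            self.speed = min(2.0, self.speed + 0.1)
            self.update_interval = max(0.1, self.update_interval - 0.05)
```

Under this sketch, a Red puck that keeps conceding gets faster and more attentive (up to a cap), while one that dominates eases off – one plausible reading of “alters its speed and update rate based on its performance”.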
The automated programs certainly make mapping the swapped face onto the original video much easier, but they do seem to struggle with side views. That’s why, in the clip above, I’ve swapped General Orumov for Timothy Dalton – his face is shot mostly from the front – rather than swapping out Brosnan’s Bond, whose face is mostly shown at an angle.
In recent months, AI-assisted face-swapping has been the hot topic across the internet. From the moment the technology was unleashed in a dark corner of Reddit, to its now widespread family-friendly fame, the ethics and impact of face-swapping (or “deepfakes”) have been hugely controversial.
The process has even been automated as of Feb 2018, but before OpenFaceSwap came along it was a much more involved task. In those early days, I acquired the necessary source code and dependencies – Google’s TensorFlow machine learning platform, the Keras library that gives TensorFlow a friendlier Python interface, and the OpenCV computer-vision library – in an attempt to swap Pierce Brosnan for Timothy Dalton in a clip from the movie GoldenEye.
The process is simple in theory – find a video that you want to face-swap, extract images of the face you want to replace, feed them to TensorFlow along with images of the face you want to insert, and let the AI do the rest.
To explain in more detail: TensorFlow treats the face you want to replace as “incorrect”, and the face you want to insert as “correct”. As the AI learns what “incorrect” and “correct” faces look like, it gradually changes the “incorrect” face to look like the “correct” one, and you end up with a library of face-swapped images.
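As a loose, purely illustrative analogy – not the actual deepfake architecture – that “gradually changes the incorrect face” idea can be pictured as nudging one set of pixel values toward a target, a small step per iteration:

```python
# Toy analogy only: move an "incorrect" image (a list of pixel values)
# a fraction of the way toward the "correct" target each "iteration",
# loosely mirroring how training reshapes the generated face over time.

def nudge(incorrect, correct, step=0.1):
    """One toy iteration: shift each pixel 10% of the way to the target."""
    return [i + step * (c - i) for i, c in zip(incorrect, correct)]

incorrect = [0.0, 0.2, 0.9]   # stand-in pixels for the face being replaced
correct = [1.0, 0.5, 0.1]     # stand-in pixels for the face being inserted

for _ in range(50):           # many iterations -> a much closer match
    incorrect = nudge(incorrect, correct)
```

After 50 toy iterations every “pixel” sits within about 0.01 of its target – the same flavour of slow convergence, minus the millions of parameters the real training involves.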
The next step is to reinsert the face-swapped images into the video – and here’s the blurry, half-Brosnan half-Dalton abomination I ended up with:
Controversy aside, this didn’t make it onto the main portfolio site because of the poor quality – but that’s what this Skunkworks Blog is for!