When Google revealed Project Gameface, the company was proud to show off a hands-free, AI-powered gaming mouse that, according to its announcement, “enables people to control a computer’s cursor using their head movement and facial gestures.” It may not have been the first AI-based gaming tool, but it was certainly one of the first to put AI in the hands of players rather than developers.
The project was inspired by Lance Carr, a quadriplegic video game streamer who uses a head-tracking mouse as part of his gaming setup. After Carr’s existing hardware was lost in a fire, Google stepped in to create an open source, highly configurable, low-cost alternative to expensive replacement hardware, powered by machine learning. While AI is proving divisive more broadly, we set out to discover whether AI, when used for good, could be the future of gaming accessibility.
It’s important to define AI and machine learning in order to understand how they work in Gameface. The two terms are often used interchangeably, but they don’t mean quite the same thing.
“AI is a concept,” Laurence Moroney, AI advocacy lead at Google and one of the minds behind Gameface, tells Startup. “Machine learning is a technique you use to implement that concept.”
Machine learning, then, fits under the umbrella of AI, along with implementations like large language models. But where familiar applications like OpenAI’s ChatGPT and Stability AI’s Stable Diffusion are generative, machine learning is characterized by systems that learn and adapt without explicit instruction, drawing inferences from patterns in data.
Moroney explains how this works in Gameface as a series of machine learning models. “The first was to be able to detect where a face is in an image,” he says. “The second was, once you had an image of a face, to be able to understand where obvious points (eyes, nose, ears, etc.) are.”
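Both of those stages are handled by Google’s MediaPipe face-tracking models, which Gameface builds on. As a rough illustration of the pipeline rather than Gameface’s actual code, here is how the detect-then-landmark steps look with MediaPipe’s Python face mesh solution; the webcam index and the landmark printed at the end are assumptions made for the sketch:

```python
import cv2
import mediapipe as mp

# Model one finds the face; model two places ~470 landmark points on it.
face_mesh = mp.solutions.face_mesh.FaceMesh(
    static_image_mode=True,  # single frame here, not a video stream
    max_num_faces=1,         # one player, one face
    refine_landmarks=True,   # finer detail around the eyes and lips
)

cap = cv2.VideoCapture(0)    # default webcam (assumed device index)
ok, frame = cap.read()
cap.release()

if ok:
    # MediaPipe expects RGB; OpenCV captures BGR.
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_face_landmarks:
        landmarks = results.multi_face_landmarks[0].landmark
        nose = landmarks[1]  # index 1 is the nose tip in the mesh topology
        print(f"Nose tip at ({nose.x:.2f}, {nose.y:.2f}), normalized coords")
```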
After this, another model can map and decipher gestures from those points, assigning them to mouse inputs.
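That last step is simpler than it sounds. The sketch below is an illustrative guess at how tracked points might drive a cursor, not Gameface’s actual configuration: the threshold, the landmark indices, and the third-party pyautogui library are all assumptions made for the example.

```python
import pyautogui  # third-party cursor control; pip install pyautogui

MOUTH_OPEN_THRESHOLD = 0.08  # illustrative value, not a Gameface default

def mouth_openness(landmarks):
    """Lip gap relative to face height, so the measure is distance-invariant."""
    upper, lower = landmarks[13], landmarks[14]    # inner-lip midpoints
    forehead, chin = landmarks[10], landmarks[152]
    return abs(lower.y - upper.y) / abs(chin.y - forehead.y)

def apply_gestures(landmarks):
    # Head movement steers the cursor via the nose tip...
    screen_w, screen_h = pyautogui.size()
    nose = landmarks[1]
    pyautogui.moveTo(nose.x * screen_w, nose.y * screen_h)
    # ...and a facial gesture, here an open mouth, becomes a mouse input.
    if mouth_openness(landmarks) > MOUTH_OPEN_THRESHOLD:
        pyautogui.click()
```

In Gameface itself, which gestures map to which inputs, and how pronounced a gesture needs to be before it registers, are exposed as user-facing settings; that is what makes it so highly configurable.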
It’s an explicitly assistive implementation of AI, as opposed to those often touted as making human input redundant. Indeed, this is how Moroney suggests AI is best applied, to broaden “our capacity to do things that weren’t previously feasible.”
This sentiment extends beyond Gameface’s potential to make gaming more accessible. AI, Moroney suggests, can have a major impact not only on accessibility for players, but also on the way developers create accessibility solutions.
“Anything that lets developers be orders of magnitude more effective at solving classes of problems that were previously infeasible,” he says, “can only be beneficial in the accessibility, or any other, space.”
This is something developers are already beginning to understand. Artem Koblov, creative director of Perelesoq, tells Startup that he wants to see “more resources directed toward solving routine tasks, rather than creative invention.”
Doing so allows AI to take on time-consuming technical processes. With the right applications, AI could create a leaner, more permissive development cycle, one in which it both helps with the mechanical implementation of accessibility solutions and leaves developers more time to consider them.
“As a developer, you want to have as many tools that can help you make your job easier,” says Conor Bradley, creative director of Soft Leaf Studios. He points to gains in current implementations of AI in accessibility, including “real-time text-to-speech and speech-to-text generation, and speech and image recognition.” And he sees potential for future developments. “In time, I can see more and more games making use of these powerful AI tools to make our games more accessible.”
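To make one of those concrete: a bare-bones version of the speech-to-text tooling Bradley mentions can be assembled from off-the-shelf parts. This sketch uses the third-party SpeechRecognition package and its free Google Web Speech backend, both chosen for the example rather than anything any studio has confirmed shipping:

```python
import speech_recognition as sr  # pip install SpeechRecognition (plus PyAudio)

# Illustrative live-captioning loop: listen for a phrase, print a caption.
recognizer = sr.Recognizer()
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)  # calibrate to room noise
    audio = recognizer.listen(source, phrase_time_limit=5)

try:
    # Free web backend; requires an internet connection.
    caption = recognizer.recognize_google(audio)
    print(f"[CAPTION] {caption}")
except sr.UnknownValueError:
    print("[CAPTION] (inaudible)")
except sr.RequestError as err:
    print(f"[CAPTION] (speech service unavailable: {err})")
```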