Intel could be heading towards an AI-powered frame generation future, thanks to a research group's work, dubbed ExtraSS
18.12.2023 - 13:20
/ pcgamer.com
Researchers at the University of California and Intel have developed a complex algorithm that leverages AI and some clever routines to extrapolate new frames, claiming lower input latency than current frame generation methods while retaining good image quality. There's no indication that Intel plans to implement the system in its Arc GPUs just yet, but if the work continues, we could well see Intel-powered frame generation in the near future.
Announced at this year's SIGGRAPH Asia event in Australia (via Wccftech), the work comes from a group of researchers at the University of California, sponsored and supported by Intel, who developed a system that artificially creates frames to boost the performance of games and other real-time rendering applications.
More commonly known as frame generation, the concept has been familiar to us all since Nvidia included it in its DLSS 3 package in 2022. That system uses a deep learning neural network, along with some fancy optical flow analysis, to examine two rendered frames and produce an entirely new one, which is inserted between them. Technically, this is frame interpolation, and it's been used in the world of TVs for years.
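To see where the generated frame sits in the chain, here's the simplest possible illustration of interpolation: an in-between frame made from two real ones. DLSS 3's optical-flow-guided version is far more sophisticated than this midpoint blend; the frame names and values below are purely illustrative.

```python
# Toy illustration only: interpolate a new "frame" as the midpoint of two
# real frames. Each frame is just a short list of pixel values here.
frame_a = [0.0, 0.5, 1.0]   # pixels of rendered frame N
frame_b = [0.5, 1.0, 1.5]   # pixels of rendered frame N+1

# The generated frame is inserted *between* the two real frames.
frame_mid = [(a + b) / 2 for a, b in zip(frame_a, frame_b)]

sequence = [frame_a, frame_mid, frame_b]  # order the frames are displayed in
```

The key point is the ordering: both real frames must exist before the middle one can be computed, which is exactly what causes the latency issue discussed below.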
Earlier this year, AMD offered its version of frame generation in FSR 3, but rather than relying on AI to do the heavy lifting, its engineers built the mechanism to run entirely through shaders.
However, AMD's and Nvidia's frame generation technologies share a problem: an increase in latency between a player's input and seeing it in action on screen. This happens because two full frames have to be rendered before the interpolated one can be generated and then shoehorned into the chain of frames.
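A back-of-the-envelope timeline shows why. The numbers and function names below are hypothetical, not taken from any vendor's implementation; the sketch just captures the structural difference between holding a frame back for interpolation and shipping it immediately with extrapolation.

```python
# Hypothetical timing sketch: why interpolation adds a frame of latency.
RENDER_MS = 16.0  # assumed time to render one real frame (~60 fps)

def present_time_interpolation(n):
    # Real frame n can only be shown after frame n+1 has also been rendered,
    # because the interpolated frame between them needs both as input.
    return (n + 1) * RENDER_MS

def present_time_extrapolation(n):
    # Real frame n is shown as soon as it finishes rendering; the generated
    # frame is built from past frames and slotted in afterwards.
    return n * RENDER_MS

# Under these assumptions, interpolation delays every real frame by one
# full render interval relative to extrapolation.
added_latency = present_time_interpolation(3) - present_time_extrapolation(3)
```

In this simplified model the penalty is one whole render interval; in practice the exact cost depends on pacing and how the generated frame is scheduled.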
The new method proposed by Intel and UoC is rather different. For a start, it's three techniques rolled into one long algorithm. The initial stage eschews motion vectors and optical flow analysis, instead relying on some clever mathematics to examine the geometry buffers created while rendering previous frames.
That stage makes a partially complete new frame which is then fed into the next stage in the whole process, along with other data. Here, a small neural network is used to finish off the missing parts. The outputs from stages one and two are then run through the final step, involving another neural network.
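The three stages above can be sketched loosely in code. To be clear, everything here is an illustrative placeholder: the function names, the hole-filling rule, and the final blend are all invented for the sketch, and the paper's actual warping maths and neural networks are far more involved.

```python
# Loose structural sketch of a three-stage extrapolation pipeline (all names
# and rules are placeholders, not the paper's actual method).
H, W = 4, 4  # a tiny stand-in "frame"

def stage1_warp(prev_frame, motion):
    """Reproject the previous frame using per-pixel motion derived from the
    geometry buffers (no optical flow). Unreachable pixels become holes."""
    warped = [[None] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            dy, dx = motion[y][x]
            sy, sx = y - dy, x - dx
            if 0 <= sy < H and 0 <= sx < W:
                warped[y][x] = prev_frame[sy][sx]
    return warped  # a partially complete new frame

def stage2_inpaint(partial):
    """Stand-in for the small neural network that finishes the missing parts;
    here holes are simply patched with the average of the valid pixels."""
    valid = [v for row in partial for v in row if v is not None]
    mean = sum(valid) / len(valid)
    return [[mean if v is None else v for v in row] for row in partial]

def stage3_refine(warped, inpainted):
    """Stand-in for the final network that merges the outputs of stages
    one and two into the extrapolated frame."""
    return [[i if w is None else (w + i) / 2
             for w, i in zip(wr, ir)]
            for wr, ir in zip(warped, inpainted)]

prev = [[float(y * W + x) for x in range(W)] for y in range(H)]
still = [[(0, 0)] * W for _ in range(H)]  # zero motion: a static scene
warped = stage1_warp(prev, still)
frame = stage3_refine(warped, stage2_inpaint(warped))
```

With zero motion the pipeline simply reproduces the previous frame, which is the sanity check you'd expect; the interesting cases are the disoccluded regions that stage two has to invent.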
It's all far too complex to go into detail here, but the result is what matters: a generated frame that's extrapolated from previous frames and inserted after them. You'll still get a little input latency but, in theory, less than with AMD and Nvidia's methods, because real frames are presented the moment they're rendered.
If this