Can we please stop with this shit?
The ideal framerate booster was already invented: it's called asynchronous space warp (ASW).
Frames are rendered by the GPU at whatever rate it can manage, and then the latest frame is "updated" via reprojection at the display's refresh rate, based on the latest input.
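In spirit it's something like this minimal sketch (big simplifying assumptions on my part: pure camera rotation, small angles, and a flat 2D image shift instead of a true depth-aware warp; the names are made up for illustration, not how Oculus's ASW actually works):

```python
# Minimal sketch of reprojection as a framerate booster. Assumes pure
# camera rotation, small angles, and a simple 2D shift instead of a
# real depth-aware warp. Illustrative only; real ASW runs on the GPU.
import numpy as np

def reproject(last_frame: np.ndarray,
              yaw_delta: float, pitch_delta: float,
              fov_x: float, fov_y: float) -> np.ndarray:
    """Shift the last rendered frame to match the latest input.

    last_frame: HxWx3 image rendered at the old camera orientation.
    yaw_delta/pitch_delta: rotation (radians) since it was rendered.
    """
    h, w, _ = last_frame.shape
    # Small-angle approximation: rotation maps to a pixel offset.
    dx = int(round(yaw_delta / fov_x * w))
    dy = int(round(pitch_delta / fov_y * h))
    warped = np.zeros_like(last_frame)
    # Copy the overlapping region; the uncovered border is exactly the
    # part that gets stretched/smeared in real implementations.
    src = last_frame[max(0, -dy):h - max(0, dy),
                     max(0, -dx):w - max(0, dx)]
    warped[max(0, dy):max(0, dy) + src.shape[0],
           max(0, dx):max(0, dx) + src.shape[1]] = src
    return warped

# Display-loop idea: render real frames as fast as the GPU allows, but
# every vsync present reproject(latest_real_frame, input_since_render).
```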
Here is LTT demoing it two years ago.
It blows my mind that we're wasting time with fucking frame generation, when a better way to achieve the same result has been used for VR (where adding latency is a GIANT no-no) for nearly a decade.
This is a hilariously bad take for anything that isn't VR. Async warping causes frame smearing on detail that is really noticeable when the screens aren't so close that your peripheral blind spots mask it.
It's an excellent tool in the toolbox, but to pretend that async reprojection "solved" this kind of means you don't understand the problem itself…
Edit: also, the LTT video is very cool as a proof of concept, but it absolutely demonstrates my point regarding smearing. There are also many, MANY cases where a clean frame with legible information would be preferable to a lower-latency smeared frame.
Thank you for being rude.
I’m not pretending it solves anything other than the job of increasing the perceived responsiveness of a game.
There are a variety of potential ways to fill in the missing peripheral data, or even occluded data, other than simply stretching the edge of the image, some of which very much overlap with what DLSS and frame generation are doing.
My core argument is simply that it is superior to frame generation. If you’re gonna throw in fake frames, reprojection beats interpolation.
Frame generation is completely unfit for purpose, because while it may spit out more frames, it makes games feel LESS responsive, not more.
ASW does the opposite. Both are “hacky” and “fake” but one is clearly superior in terms of the perceived experience.
One lets me feel like the game is running faster; the other makes the game look like it runs faster while making it feel slower.
This solution by Intel is better, essentially because it works more like ASW than other implementations of frame generation do.
Frame reprojection lacks motion data. It's in the name: it is reprojecting the last frame. Frame generation uses the interval between real frames, feeds in motion vector data, and estimates movement.
If I'm trying to follow a ball moving across the screen while not moving my mouse, reprojection is flat-out worse, because it is reprojecting the last frame, where nothing moved. The sequence is Frame 1, Frame 1RP, then Frame 2, and Frames 1 and 1RP have the ball in the exact same place. If I move my viewpoint, the perspective will feel correct: the viewport edges will blur and the reprojection will map to the new perspective, which feels better for head tracking in VR. But for information delivery there is no new data, not even a guess. It's still the same frame, just in a different point in space, until the next real frame comes in.
With frame generation, if I'm watching this ball again, it looks more like Frame 1 (real), Frame 1G (estimate), Frame 2 (real). Now Frame 1 and Frame 1G contain different data, and 1G is built on motion vector data between the real frames. It's not 100% accurate, but it's an educated guess at where the ball is going between Frame 1 and Frame 2. If I move my viewpoint, it doesn't feel as responsive as reprojection, but the extra generated middle frame helps with motion tracking in action.
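To put toy numbers on the ball example (purely illustrative, assuming constant ball speed and a generated frame exactly halfway between two real frames):

```python
# Toy numbers for the ball example. Assumes the ball moves right at a
# constant 100 px per tick and real frames arrive every 2 ticks, with
# one in-between frame produced either way. Purely illustrative.
def real_ball_x(t: float) -> float:
    return 100.0 * t  # ground truth position

frame1_x = real_ball_x(0.0)  # real Frame 1: ball at x = 0
frame2_x = real_ball_x(2.0)  # real Frame 2: ball at x = 200

# Reprojection: Frame 1RP re-shows Frame 1's content. The camera can
# shift if you move the mouse, but the ball itself hasn't moved within
# the image -- no new information about the scene.
frame1_rp_x = frame1_x  # still 0

# Frame generation: Frame 1G is interpolated using motion vectors
# between the two real frames, so the ball gets an estimated
# in-between position (at the cost of waiting for Frame 2).
motion = frame2_x - frame1_x          # 200 px across the interval
frame1_g_x = frame1_x + 0.5 * motion  # estimate: 100

print(frame1_rp_x, frame1_g_x)  # 0.0 vs 100.0 at the midpoint
```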
The real answer is to use frame generation with low-latency configurations, and also enable reprojection in the game engine if possible. Then you have the best of both worlds. For VR, the headset is the viewport, so it's handled at the driver level. But for flat games, the viewport is a detached virtual camera, so the game dev has to expose it and set up reprojection, or Nvidia and AMD need to build some kind of DLSS/FSR-like hook for devs to utilize.
But if you could do both at once, that would be very cool: you would get the most responsive feel in terms of lag between input and action on screen, while also getting motion updates faster than a full render pass. So yes, Intel's solution is a step in that direction. But ASW is not in itself a solution, especially for high-motion scenes with lots going on graphically. There is a reason the demo engine in the LTT video was extremely basic: if you loaded it up with particle effects and the heavy rendering you see in high-end titles, the smearing from reprojection would look awful without rules and bounding on it.
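Very roughly, a combined pipeline might look like the sketch below. Every helper here (interpolate_frames, reproject, poll_input, present) is a hypothetical stand-in; no current engine or driver exposes exactly this:

```python
# Hypothetical "framegen + reprojection" presentation step. All of the
# helpers passed in are stand-ins; this only shows the ordering of the
# two techniques, not a real API.
def present_next_frame(frame_a, frame_b, motion_vectors, pose_at_render,
                       interpolate_frames, reproject, poll_input, present):
    # 1. Motion update (the frame-generation part): estimate an
    #    in-between frame from two real frames plus engine-supplied
    #    motion vectors, so objects like the ball keep moving.
    mid_frame = interpolate_frames(frame_a, frame_b, motion_vectors, t=0.5)

    # 2. Responsiveness update (the ASW part): warp that frame to the
    #    camera pose from the freshest input sample. This is the step
    #    that requires the engine to expose its virtual camera.
    latest_pose = poll_input()
    present(reproject(mid_frame, pose_at_render, latest_pose))
```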
I REALLY love how, in this AI-friendly article, they're using a picture of Aloy. 5/7, no notes.
This is great, and I hope this technology can be implemented on older hardware that just barely misses today's high system requirements.
I hope this is not used as a crutch by developers to hide really bad optimization and performance, as they have already been doing with upscalers like FSR/DLSS.
No, I fucking hope not. Older games rendered an actual frame. Modern engines render a noisy, extremely ugly mess and rely on temporal denoising and frame generation (which is why most modern games only show off scenes with static scenery and a very slow-moving camera).
Just render the damn thing properly in the first place!
Depends on what you want to render. High-FPS targets combined with fast movement, where the human eye is the bottleneck, are a perfect case for interpolation. In such a case the bad frames aren't really seen.
No, it depends on how you want to render it. Older games still had most of today's effects. It's just that everyone is switching to Unreal, whose focus isn't games anymore, and which IMO looks really bad on anything except a 4090, if that. Nobody is putting in the work for an optimized engine; there is no "one size fits all". They do this to save money in development, not because it's better.
ffs even the noisy image isn’t always at native resolution anymore.
I think you are misunderstanding, because I agree with you when the game's minimum hardware requirements are met.
I am saying I hope this technology can be used so that hardware below the minimum requirements could still get decently playable framerates on newer titles, with the obvious drawback of decreased visual quality. I agree that upscaling, particularly TAA and its related effects, should not be used to lower system requirements when the real problem is that the developers did not design their game well or leaned on ugly effects. But I think this can be useful for old systems, or perhaps for integrated graphics chips, depending on how the technology works. That was what I meant; sorry I was not clear enough initially.
They're really trying everything besides just not bloating their games to shit and optimising them.
As a guy who absolutely hates those "60fps 4K anime fight scene re-renders", I hope to dear god this isn't used for that in the future.