Nvidia is working on an AI-accelerated method that creates three-dimensional, animated scenes from two-dimensional photos. The technology has the potential to turn the games market upside down by permanently changing how games are built and saving huge amounts of storage. Read more about it below.
For many gamers today, a graphics card's feature set is a decisive criterion when choosing a specific model. That no longer primarily means rasterization performance, but also support for upscalers such as DLSS, FSR or XeSS, ray tracing performance, and which codecs and libraries are supported. Nvidia offers a tangible advantage here with CUDA, but there are other special features as well. One of them is “Neural Radiance Fields” (NeRFs).
Behind the acronym is an AI-accelerated method of creating a fully animated, three-dimensional scene from multiple photos. The algorithm is fed photos taken from different perspectives and angles and reconstructs a three-dimensional scene from the two-dimensional images in real time. Similar to DLSS 3.0, missing or non-overlapping sections are filled in by the AI so that a fluid scene can emerge. NeRFs are not an Nvidia-exclusive invention and are also being researched by other companies.
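To make the idea a little more concrete: a NeRF trains a small neural network to map a 3D position (and viewing direction) to a color and a density, and an image is then produced by accumulating those samples along each camera ray. The sketch below, a simplified illustration rather than Nvidia's actual implementation, shows two building blocks described in the original NeRF research: the sin/cos positional encoding of coordinates and the volume-rendering step that composites sampled colors into one pixel. All function and variable names here are illustrative.

```python
import numpy as np

def positional_encoding(p, num_freqs=4):
    """Map coordinates to sin/cos features at increasing frequencies,
    which helps a small network represent fine, high-frequency detail."""
    freqs = (2.0 ** np.arange(num_freqs)) * np.pi
    scaled = np.outer(freqs, p).ravel()
    return np.concatenate([np.sin(scaled), np.cos(scaled)])

def composite_ray(colors, densities, deltas):
    """Classic volume-rendering quadrature along one camera ray:
    colors    - (N, 3) RGB predicted at each sample point
    densities - (N,)   predicted density (sigma) at each sample
    deltas    - (N,)   distance between consecutive samples
    Returns the composited pixel color and the per-sample weights."""
    alphas = 1.0 - np.exp(-densities * deltas)                        # opacity per sample
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))    # light surviving to each sample
    weights = trans * alphas                                          # contribution of each sample
    return weights @ colors, weights
```

For example, if the first sample along a ray is very dense (an opaque surface), it receives nearly all the weight and samples behind it contribute almost nothing, which is exactly how the learned 3D representation turns back into an ordinary 2D image.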
However, Nvidia is the company best positioned to bring this technology and PC gaming together, so that it is no longer used mainly for movies, as has mostly been the case so far. The main advantage would be that highly complex animations, which currently take up a lot of storage space, would no longer need to be stored and could instead be computed by the AI. As useful as the technology may seem, there is so far no marketable concept for the gaming sector, and it could be a long time before one appears. Anyone who wants to explore NeRFs in the meantime can use smartphone apps such as Luma AI to get a first impression of the technology.