Nvidia reveals the secret sauce behind its DLSS technology
Nvidia reveals the secret behind its DLSS technology: a chuffing big supercomputer
At its Editor's Day at CES 2025, Nvidia revealed one of the secrets behind the success of DLSS, its AI upscaling technology. For six years, Nvidia has been continuously improving DLSS using a supercomputer with thousands of GPUs. That's the secret sauce behind the PC's most advanced AI upscaler.
With DLSS 4, Nvidia has moved away from Convolutional Neural Networks (CNNs) to a new transformer-based AI model. The new model features double the parameters of Nvidia's older DLSS models, delivering improved image stability, reduced ghosting, smoother edges, and more detail in motion. These gains were made possible by the enormous amount of GPU power Nvidia has been able to dedicate to training its new AI models.
For six years, Nvidia has had a supercomputer thinking about nothing but DLSS. With DLSS 4, that effort has clearly borne fruit.
How is it that we’ve been able to make progress with DLSS over the years? You know, it’s been a six-year, continuous learning process for us.
Actually, we have a big supercomputer at Nvidia, with many thousands of our latest and greatest GPUs, that is running 24/7, 365 days a year improving DLSS. And it’s been doing that for six years.
– Brian Catanzaro, Nvidia’s VP of Applied Deep Learning Research – via PCGamer
Supercomputing isn’t Nvidia’s only secret sauce for DLSS; its evolving data set is a huge factor
Nvidia’s DLSS supercomputer alone isn’t the secret sauce behind the quality of its AI upscaler. Nvidia has been actively finding problems with DLSS so that its AI models can solve them. To do this, Nvidia maintains a database of examples of what good graphics look like, alongside examples of problematic DLSS upscales.
With this evolving data set and an AI that is continuously trying to solve DLSS' problems, Nvidia has been able to rapidly improve the quality of its AI upscaling solution. That means we can expect DLSS to look even better in the future, which is great news for PC gamers.
What we’re doing during this process is we’re analysing failures. When the DLSS model fails, it looks like ghosting or flickering or blurriness. And, you know, we find failures in many of the games we’re looking at and we try to figure out what’s going on, why does the model make the wrong choice about how to draw the image there?
We then find ways to augment our training data set. Our training data sets are always growing. We’re compiling examples of what good graphics looks like and what difficult problems DLSS needs to solve.
We put those in our training set, and then we retrain the model, and then we test across hundreds of games in order to figure out how to make DLSS better. So, that’s the process.
– Brian Catanzaro, Nvidia’s VP of Applied Deep Learning Research – via PCGamer
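The closed loop Catanzaro describes (find failures, grow the training set, retrain, re-test) can be sketched as a toy Python loop. To be clear, everything here is a hypothetical illustration of the workflow as described, not Nvidia's actual pipeline: the scenario strings and the `find_failures`/`retrain` helpers are invented stand-ins, and the "model" is just a set of scenarios it has learned to handle.

```python
# Toy sketch of a failure-driven training loop (hypothetical, not Nvidia's code).

def find_failures(model, scenarios):
    # "Analysing failures": scenarios the model does not yet handle,
    # standing in for ghosting, flickering, or blurry frames found in games.
    return [s for s in scenarios if s not in model]

def retrain(training_set):
    # Toy "retraining": the model simply learns every example it has seen.
    return set(training_set)

def improvement_loop(scenarios, iterations=3):
    training_set = []  # the "always growing" data set
    model = set()      # toy model: the scenarios it can upscale cleanly
    for _ in range(iterations):
        failures = find_failures(model, scenarios)  # test across games
        training_set.extend(failures)               # augment the data set
        model = retrain(training_set)               # retrain on the new set
    return model, training_set

model, data = improvement_loop(["ghosting", "flickering", "blurriness"])
print(find_failures(model, ["ghosting", "flickering", "blurriness"]))  # []
```

After the loop runs, the scenarios that initially failed have been folded into the training set and the toy model no longer fails on them, which is the essence of the process described above: each round of testing feeds the next round of training.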
Nvidia has dedicated a lot of resources to the continued evolution of DLSS. DLSS 4 is the first version of DLSS to use Nvidia's new transformer-based AI models, so we can expect this new model to improve over time.
You can join the discussion on Nvidia’s DLSS Supercomputer on the OC3D Forums.