so i just saw that notch, the creator of minecraft, is talking about nvidia's dlss tech on social media. basically, he's saying it doesn't make sense to him: if your graphics card is too slow to run a game at a decent frame rate, why use that same hardware to run a neural network to generate extra frames? it's an interesting point, but i'm not sure i entirely agree with him.

one thing that caught my attention is that notch seems to be focusing on the frame generation part of dlss, which is just one piece of the tech (the core of dlss is upscaling a lower-resolution frame to a higher one). some people in the comments are pointing out that dlss runs on dedicated tensor cores, which are separate from the shader cores doing regular rasterization, so it's not competing for exactly the same hardware. this makes sense to me, since different parts of a gpu are designed for specific tasks.
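the upscaling argument is easy to sketch with a toy cost model. the numbers below are completely made up for illustration (not real benchmarks): raster cost scales roughly with pixel count, while the upscale pass is modeled as a small, mostly fixed cost since it runs on dedicated units.

```python
# toy frame-time model: why rendering at a lower resolution plus a
# neural upscale can beat rendering at native resolution.
# all constants here are illustrative, not measured.

def frame_time_ms(width, height, ns_per_pixel=4.0, upscale_ms=0.0):
    """raster time proportional to pixel count, plus any fixed upscale cost."""
    return width * height * ns_per_pixel / 1e6 + upscale_ms

native = frame_time_ms(3840, 2160)                    # render at 4k directly
upscaled = frame_time_ms(1920, 1080, upscale_ms=1.5)  # render 1080p, upscale to 4k

print(f"native 4k:  {native:.1f} ms")   # ~33.2 ms
print(f"1080p + nn: {upscaled:.1f} ms") # ~9.8 ms
```

the point isn't the exact numbers; it's that shading a quarter of the pixels saves far more time than the upscale network costs, especially when the network runs on hardware the rasterizer wasn't using anyway.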

i love browsing through the comments on these kinds of posts, because you always get some really insightful responses. someone pointed out that dlss shifts the load to a different part of the pipeline, which can actually lead to better overall performance. another person made a funny comparison to anti-aliasing, saying that if notch's logic applied to dlss, it would have to apply to plenty of other graphics algorithms too. it's always cool to see people breaking down complex tech like this and explaining it in simple terms.

as for me, i'm not sure what to think about dlss just yet. i've seen some pretty impressive demos of it in action, but at the same time, i can understand why notch might be skeptical. maybe the real question is whether we should be focusing on graphics cards that push raw raster performance, or whether it's better to prioritize machine learning.

read more: full article on www.pcgamer.com

What do you think about this?