What is RTX technology, how does it work, and how could it influence the development of video games? Remember the first time you had the opportunity to run a PC game on a 3D accelerator? The differences between the software-rendered version of a title like the classic Quake and the same game running on a 3D accelerator like the old 3dfx Voodoo were enormous, like night and day, maybe even bigger than the jump from monochrome displays to color ones (CGA, EGA and, later, VGA).
The late 1990s marked a new milestone in the evolution of real-time PC graphics, with NVIDIA launching the GeForce 256, marketed as the world’s first graphics processing unit (GPU): a chip capable of performing transform & lighting (T&L) operations in hardware, lifting demanding workloads off the “shoulders” of the processor.
Visual Impact
The visual impact could not be ignored: the capabilities offered by such technology allowed more frequent use of higher resolutions (compared to the then-usual 640 × 480 and 800 × 600), as well as rendering at a 32-bit color depth, with a far more generous palette than 16-bit output (over 16 million colors versus only 65,536), without a significant loss of performance. In fact, this step left its mark above all on performance, “rewriting the rules” for game developers: the video chip began to take over tasks that, until then, had depended exclusively on the processor.
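As a quick sanity check on those color figures, the count of distinct colors at a given depth is simply a power of two:

```python
# Number of distinct colors at a given color depth: 2 ** bits.
# Note: "32-bit" color typically stores 24 bits of actual color
# (8 per RGB channel) plus 8 bits of alpha/padding, hence the
# "over 16 million colors" figure quoted above.
for bits in (16, 24):
    print(f"{bits}-bit color: {2 ** bits:,} colors")
# 16-bit color: 65,536 colors
# 24-bit color: 16,777,216 colors
```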
Just two years later, NVIDIA changed the rules of the game again by launching the GeForce 3 chips, which introduced programmable shading. This technological evolution was quickly adopted by the entire industry, and an NVIDIA graphics chip with similar features was built into the first generation of the Xbox console. We composed this brief introduction precisely to highlight such crucial moments in the history of 3D graphics on the PC.
Ray tracing vs. rasterization
It is enough to look around at the real world to understand, in broad terms, how the propagation of light works: light sources emit “rays” of light which, depending on the materials they strike, can be reflected in other directions, while the areas they fail to reach are what we call “shadows”.
And when it comes to 3D computer graphics, no element lends more credibility than realistically rendered lighting. On the principles described above, the ray tracing rendering technique was born: it follows “rays” of virtual light through a 3D scene in order to illuminate it realistically.
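To make the principle concrete, here is a deliberately minimal sketch (a toy illustration, not any vendor’s implementation): for each pixel we cast a ray into the scene, find the nearest sphere it hits, then cast a second “shadow ray” toward the light to see whether the hit point is illuminated.

```python
import math

def hit_sphere(origin, direction, center, radius):
    """Distance along a normalized ray to a sphere, or None on a miss."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c  # quadratic 'a' is 1: direction is normalized
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-4 else None

def in_shadow(point, light_pos, spheres):
    """Cast a 'shadow ray' from the hit point toward the light source."""
    to_light = [l - p for l, p in zip(light_pos, point)]
    dist = math.sqrt(sum(v * v for v in to_light))
    direction = [v / dist for v in to_light]
    for center, radius in spheres:
        t = hit_sphere(point, direction, center, radius)
        if t is not None and t < dist:
            return True  # something blocks the light: the point is shadowed
    return False

# A toy scene: one sphere, one light, one primary ray per "pixel".
spheres = [((0.0, 0.0, -3.0), 1.0)]
light = (5.0, 5.0, 0.0)
eye = (0.0, 0.0, 0.0)
ray = (0.0, 0.0, -1.0)  # already normalized

t = hit_sphere(eye, ray, *spheres[0])
if t is not None:
    hit = [e + t * d for e, d in zip(eye, ray)]
    print("hit at", hit, "shadowed:", in_shadow(hit, light, spheres))
```

A real ray tracer repeats this for millions of rays per frame, with further bounces for reflections and refractions, which is exactly where the enormous computational cost comes from.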
In other words, to get an image as close as possible to reality, the makers of static 3D scenes can afford to “spend time”: rendering such projects can take hours, depending on the complexity of what is displayed, even on hardware configurations well above average. When moving 3D graphics with this level of detail are needed (such as special effects in some movies), genuine render farms are used, with dozens or hundreds of computers working together to produce the desired effect frame by frame (a movie usually runs at 24 frames per second).
Applying such a scenario to a video game, where in most cases the player can change the perspective at will and a minimum framerate of at least 30 frames per second must be maintained, was long considered impossible, especially since gamers have access neither to render farms nor, very often, even to top-end individual hardware.
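A quick back-of-the-envelope calculation shows how brutal that constraint is:

```python
# Per-frame time budget at common film and real-time frame rates.
for fps in (24, 30, 60):
    print(f"{fps} fps -> {1000 / fps:.1f} ms per frame")
# 24 fps -> 41.7 ms; 30 fps -> 33.3 ms; 60 fps -> 16.7 ms
# An offline render may spend hours on a single frame: several
# orders of magnitude more time than a game is ever allowed.
```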
Thus, a compromise was reached: a rendering technique called “rasterization”, which greatly simplifies the problem. This is not an exact explanation, but in plain terms, rasterization projects a 3D scene onto a two-dimensional plane corresponding to the display area of a screen; the calculations needed to light and render the resulting image are much cheaper to perform and, implicitly, far more “affordable” for the hardware found in our everyday computers.
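A minimal sketch of the projection step at the heart of rasterization (the focal length and the toy triangle below are illustrative assumptions): each 3D vertex is divided by its depth to land on the 2D image plane.

```python
def project(vertex, focal_length=1.0):
    """Perspective-project a 3D point onto a 2D image plane.

    This is the core of rasterization: x and y shrink with distance
    (the divide by z), which is what makes far objects look smaller.
    """
    x, y, z = vertex
    return (focal_length * x / z, focal_length * y / z)

# A toy triangle 5 units in front of the camera (z grows away from it).
triangle = [(-1.0, 0.0, 5.0), (1.0, 0.0, 5.0), (0.0, 1.0, 5.0)]
print([project(v) for v in triangle])
# The resulting 2D points are then filled in ("rasterized") pixel by
# pixel, with lighting approximated locally instead of tracing rays.
```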
NVIDIA RTX: “Hybrid” ray tracing
This is where the new Turing architecture used by the newly launched NVIDIA GeForce RTX video card range comes into play. With this generation, NVIDIA’s engineers set out to achieve the “impossible” and bring the visual benefits of ray tracing to current and future video games. To reach that goal, they resorted to a “hybrid” approach: the “skeleton” of a three-dimensional scene is still rendered through traditional rasterization, while the calculations for certain portions of the image (such as reflections or shadows) are performed by ray tracing, with a precision far above the standards we have been accustomed to so far.
For example, let’s stop for a moment on reflections in games, typically found on surfaces such as water, puddles, mirrors, etc. Until the advent of RTX, game makers had plenty of alternative techniques for achieving this kind of effect: simplistic ones, such as cube mapping (sampling a pre-rendered cube whose six inner faces approximate the surrounding environment), or more complex ones, such as “render to texture”, an additional rendering pass applied to the reflective surface’s texture rather than to the screen itself.
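The core of a cube-map lookup is just the reflection formula; here is a minimal sketch (the example vectors are assumptions for illustration):

```python
def reflect(direction, normal):
    """Reflect an incoming direction about a surface normal:
    R = D - 2(D.N)N.

    In cube mapping, R is used to pick which of the cube's six inner
    faces (and which texel on it) stands in for the reflected
    environment. No second rendering pass is needed, which is why the
    technique is cheap, but also why the reflection cannot react to
    moving objects the way a ray-traced reflection can.
    """
    d = sum(di * ni for di, ni in zip(direction, normal))
    return tuple(di - 2.0 * d * ni for di, ni in zip(direction, normal))

# A view ray hitting a flat, upward-facing surface (normal = +Y):
print(reflect((0.0, -1.0, 1.0), (0.0, 1.0, 0.0)))  # -> (0.0, 1.0, 1.0)
```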
Enter Artificial Intelligence
In addition to real-time ray tracing effects, the Turing architecture behind the new NVIDIA GeForce RTX video cards stands out thanks to its new Tensor cores, designed to handle Artificial Intelligence processing tasks.
Artificial Intelligence (AI) is an increasingly common term in the everyday vocabulary of technology enthusiasts. While Google uses AI routines to control a wide range of parameters in the Android 9 Pie operating system, and companies like Huawei use AI-assisted cameras to determine the nature of the object being photographed, NVIDIA draws on its expertise to propose a new method of image post-processing: DLSS (Deep Learning Super Sampling).
DLSS has a dual role in the games in which it is used. On the one hand, DLSS can replace certain current anti-aliasing techniques (which remove “jagged edges” from 3D scenes), such as TAA (Temporal Anti-Aliasing), while providing better results. The principle is simple: “behind closed doors”, NVIDIA “trains” an artificial neural network by presenting it with thousands of screenshots of a given game, both in raw versions, unprocessed in any way, and in variants to which a 64x supersampling filter was applied (a very, very demanding anti-aliasing method in hardware terms, but also a very accurate one). After analyzing these samples, the network learns to approximate the supersampling filter on its own when given a raw image.
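A heavily simplified sketch of that training setup (the tiny network, random stand-in data, and hyperparameters below are illustrative assumptions, not NVIDIA’s actual pipeline): the model learns to map raw frames to their 64x supersampled counterparts.

```python
import torch
from torch import nn

# Toy stand-in for the DLSS idea: learn raw frame -> supersampled frame.
# A real network would be far larger; this only shows the training
# relationship between the two sets of screenshots described above.
model = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, kernel_size=3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Hypothetical batch: raw frames and their 64x-supersampled "ground
# truth" versions (random tensors here, standing in for screenshots).
raw = torch.rand(8, 3, 64, 64)
supersampled = torch.rand(8, 3, 64, 64)

for step in range(100):
    prediction = model(raw)          # the network's guess at the clean frame
    loss = loss_fn(prediction, supersampled)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Once trained, only the cheap forward pass runs on the player’s machine, which is why the approach is so much lighter than brute-force supersampling.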
The task is then taken over by the Tensor cores, which, based on the trained network, can apply the DLSS filter in games that support this technology. The hardware resources consumed on the end user’s side are thus much lower than if a brute-force solution of similar quality were attempted. In fact, optimizing the distribution of hardware resources while a game runs is also the basis of the second role DLSS can play: rendering the game internally at a lower resolution and letting the network upscale the result to the target resolution, trading a small amount of image accuracy for a substantial gain in performance.
Variable Rate Shading
Starting from the way in which, in a VR game, the user’s gaze is always directed toward the center of their perspective, and from the fact that elements visible only in peripheral vision do not need the same level of detail as those in focus, NVIDIA introduced, with the RTX range, the idea of Variable Rate Shading (VRS, for short).
As the name suggests, Variable Rate Shading performs full-quality shading only in certain parts of each image: the important ones, those that attract attention, those in motion, and so on. VRS comes with three implementation methods. The first of these, Motion Adaptive Shading (MAS), prioritizes objects the player constantly looks at on the screen, to the detriment of fast-moving ones; in a racing game, for example, the model of the car being driven stays sharp, while the surrounding environment, which at high speed can hardly be observed in detail anyway, is shaded less precisely.
A second method of using VRS is Content Adaptive Shading (CAS), where the objects in the game’s field of view are ranked according to their level of complexity, visibility, and so on. Less detailed surfaces, such as bare walls or shaded areas that remain unchanged from one frame to the next, do not need as accurate a representation: their shading information can be omitted, calculated with less accuracy, or reused from previous frames. To maximize a game’s performance, developers also have the option of combining the effects of the two methods, MAS and CAS.
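A minimal sketch of the decision VRS automates (the tile structure, thresholds, and scores below are made-up illustrations, not NVIDIA’s API): each screen tile gets a coarser shading rate the faster it moves or the less detail it contains.

```python
def shading_rate(motion_px_per_frame, detail_score):
    """Pick a shading rate for one screen tile.

    Returns how many pixels share a single shading computation:
    1 = full rate, 4 = one shade per 2x2 block, 16 = per 4x4 block.
    Thresholds are invented for illustration; real implementations
    expose a fixed set of rates through the graphics API.
    """
    if motion_px_per_frame > 8.0 or detail_score < 0.2:
        return 16   # fast-moving or featureless: coarsest shading
    if motion_px_per_frame > 2.0 or detail_score < 0.5:
        return 4    # moderately busy: quarter-rate shading
    return 1        # static, detailed, attention-grabbing: full rate

# MAS-style input: tile motion in pixels/frame; CAS-style input: detail.
tiles = [(0.5, 0.9), (12.0, 0.7), (1.0, 0.1)]  # (motion, detail) pairs
print([shading_rate(m, d) for m, d in tiles])  # -> [1, 16, 16]
```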
The third method, Foveated Rendering, is less common, as it depends on tracking the point at which the player is looking, using an eye-tracking device such as those from Tobii, in games played on a traditional monitor. Obviously, the area of interest thus determined receives more attention, to the detriment of the other areas.
Performance and a new “zero point”
If you have followed all of the above carefully, you can easily see that one of NVIDIA’s priorities for the GeForce RTX video card range has been to implement as many effective optimization methods as possible for future games, encouraging developers to allocate their resources in a more organized manner than they may have done in the past.
And this focus on optimization is no accident: as we mentioned at the beginning of this article, real-time ray tracing is very demanding on the hardware, and the performance gap between running with these effects enabled (RTX On) and disabled (RTX Off) is, in turn, more than obvious. It can go so far that, in Battlefield V, the most affordable model (the GeForce RTX 2060) cannot maintain a constant framerate of 60 frames per second with RTX on, even with detail settings lowered and at a modest 1080p (1920 × 1080) resolution.
On a final note, we could say that we find ourselves at a new “zero point” for 3D graphics in PC games: this reorientation toward real-time ray tracing has the potential to change forever not just the way users consume digital entertainment products, but also how developer studios design them. And, as with any beginning of a road, the pitfalls and possible obstacles we will encounter should not discourage us; the goal, in the end, is to reach the final destination. It remains to be seen whether we will get there with the current generation of video chips or whether we will have to wait for these technologies to mature. What matters is that the journey has begun: RTX On.