NVIDIA GeForce RTX 4090 Founders Edition Review
The NVIDIA GeForce RTX 4090 is the first video card released to the public using NVIDIA's new Ada Lovelace architecture. It has been a little over two years since NVIDIA last introduced a new generation of consumer video cards. You have probably already seen plenty of renders and photos of the dimensional monstrosity that is the GeForce RTX 4090, but now it is our turn to show you its performance and what it means for the end user.
NVIDIA GeForce RTX 4090 Founders Edition – Ada Lovelace, a New Architecture
NVIDIA seeks to maintain its leadership in ray tracing, a field in which it has been quite successful in recent years, delivering substantial if incremental improvements. NVIDIA's ray tracing technology and its related products, such as DLSS 2.0, are synonymous with ray tracing in the minds of most gamers, despite AMD's efforts to popularize FidelityFX. And NVIDIA intends to keep it that way.
To that end, the Ada Lovelace architecture promises up to 2 times the speed of its predecessor (Ampere) in rasterized games, and up to 4 times in titles that take advantage of ray tracing. Part of this comes from a new Ada RT Core, which doubles the speed of ray-triangle intersection tests and also adds two new hardware blocks. An Opacity Micromap engine enables ray tracing of alpha-tested geometry twice as fast as Ampere, while a Displaced Micro-Mesh engine generates micro-triangles from displacement data to create additional geometry with lower performance and storage costs.
There are also improvements to the ordering of shader execution, which NVIDIA says it has measured at up to 44% more performance in Cyberpunk 2077 with ray tracing in Overdrive mode. Finally, Ada Lovelace brings DLSS 3, which promises to double frame rates compared to DLSS 2 while maintaining or exceeding native image quality.
AD102 GPU
The graphics chip powering the RTX 4090 is the AD102, which will be followed by the AD103 and AD104 in the coming months. The full AD102 GPU includes 12 graphics processing clusters (GPCs), 72 texture processing clusters (TPCs), 144 streaming multiprocessors (SMs), and a 384-bit memory interface with 12 32-bit memory controllers. With this, the full AD102 reaches 18,432 CUDA cores, 144 RT cores, 576 Tensor cores and 576 texture units.
Each GPC contains a rasterization engine, 6 TPCs, 12 SMs and 16 raster operations units (ROPs), divided into two blocks. Each SM, in turn, contains 128 CUDA cores, a third-generation Ada RT core, four fourth-generation Ada Tensor cores, four texture units, a 256 KB register file, and 128 KB of shared L1 memory.
On the other hand, each Ada Lovelace RT core includes an Opacity Micromap engine and a Displaced Micro-Mesh engine, as mentioned earlier. The first evaluates opacity micromaps, which accelerate alpha traversal, while the second works with displaced micro-meshes, used to ray-trace geometrically complex objects with lower build time and storage costs.
Like previous GPUs, the AD10x SM is divided into four processing blocks (or partitions). Each partition contains a 64 KB register file, an L0 instruction cache, a warp scheduler, a dispatch unit, 16 CUDA cores dedicated to FP32 operations (up to 16 FP32 operations per clock), 16 CUDA cores that can process FP32 or INT32 operations (16 FP32 operations per clock OR 16 INT32 operations per clock), a fourth-generation Tensor core, four load/store units, and a special function unit (SFU) that executes transcendental and graphics interpolation instructions.
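As a quick sanity check (our own arithmetic, not an NVIDIA figure), the full-chip totals above follow directly from the per-GPC and per-SM counts; the RTX 4090 itself ships a slightly cut-down AD102 with 128 SMs enabled, as the specification table below shows.

```python
# Sanity check: full AD102 totals derived from the per-GPC / per-SM figures above.
GPCS = 12
SMS_PER_GPC = 12
CUDA_PER_SM = 128          # 4 partitions x (16 FP32 + 16 FP32/INT32) cores
TENSOR_PER_SM = 4
TEX_UNITS_PER_SM = 4
L1_KB_PER_SM = 128

sms = GPCS * SMS_PER_GPC                          # 144 SMs
print("SMs:           ", sms)
print("CUDA cores:    ", sms * CUDA_PER_SM)       # 18432
print("RT cores:      ", sms)                     # one per SM -> 144
print("Tensor cores:  ", sms * TENSOR_PER_SM)     # 576
print("Texture units: ", sms * TEX_UNITS_PER_SM)  # 576
print("Total L1 (KB): ", sms * L1_KB_PER_SM)      # 18432 KB
```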
Memory and power efficiency
Each Ada Lovelace SM has 128 KB of L1 memory, which can be configured as cache or shared memory. In total, the AD102 GPU carries 18,432 KB of L1 cache, versus 10,752 KB for the Ampere-generation GA102 GPU. The really big jump is in the L2 cache: the AD102 has 98,304 KB, compared to 6,144 KB on the GA102. This should bring a substantial performance improvement in ray tracing and in applications in general. As for VRAM, NVIDIA says it worked with Micron to reach speeds of 22.4 Gbps, compared to 19.5 Gbps on the RTX 3090.
With TSMC's 4N manufacturing node, NVIDIA packs 70% more CUDA cores into the AD102 than into the GA102, for a total of 76.3 billion transistors. The company also claims this is its most power-efficient chip: with the same power as the RTX 3090 Ti, it offers more than twice the performance.
NVIDIA GeForce RTX 4090 – Ada Lovelace (xanxogaming)

| Graphics card | NVIDIA GeForce RTX 3090 Ti | NVIDIA GeForce RTX 4090 |
| --- | --- | --- |
| CUDA cores | 10752 | 16384 |
| GPCs | 7 | 11 |
| TPCs | 42 | 64 |
| SMs | 84 | 128 |
| Boost clock (MHz) | 1860 | 2520 |
| FP32 TFLOPS | 40 | 82.6 |
| Tensor cores | 336 (third generation) | 512 (fourth generation) |
| Tensor TFLOPS (FP8) | N/A | 660.6 / 1321.2 (with sparsity) |
| RT cores | 84 (second generation) | 128 (third generation) |
| RT TFLOPS | 78.1 | 191 |
| Texture units | 336 | 512 |
| Texture fill rate (Gigatexels/s) | 625 | 1290.2 |
| ROPs | 112 | 176 |
| Pixel fill rate (Gigapixels/s) | 208.3 | 443.5 |
| Memory type and size | 24 GB GDDR6X | 24 GB GDDR6X |
| Memory data rate | 21 Gbps | 21 Gbps |
| Memory bandwidth | 1008 GB/s | 1008 GB/s |
| L1/shared cache | 10752 KB | 16384 KB |
| L2 cache | 6144 KB | 73728 KB |
| TGP | 450 W | 450 W |
| Transistor count | 28.3 billion | 76.3 billion |
| Die size | 628.4 mm² | 608.5 mm² |
| Fabrication process | Samsung 8nm 8N NVIDIA custom process | TSMC 4N NVIDIA custom process |
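For those curious where the theoretical throughput figures in the table come from, they can be reproduced with the usual peak formulas (2 FP32 operations per CUDA core per clock, one texel per texture unit per clock, one pixel per ROP per clock). A small sketch using the table's own core counts and boost clocks; this is the standard back-of-the-envelope calculation, not NVIDIA's exact methodology:

```python
# Theoretical peak throughput from core counts and boost clock (GHz).
def peaks(cuda_cores, texture_units, rops, boost_ghz):
    fp32_tflops = 2 * cuda_cores * boost_ghz / 1000.0  # FMA = 2 ops per core per clock
    texel_rate = texture_units * boost_ghz             # Gigatexels/s
    pixel_rate = rops * boost_ghz                       # Gigapixels/s
    return fp32_tflops, texel_rate, pixel_rate

print(peaks(10752, 336, 112, 1.860))  # RTX 3090 Ti -> (~40.0, ~625, ~208.3)
print(peaks(16384, 512, 176, 2.520))  # RTX 4090    -> (~82.6, ~1290.2, ~443.5)
```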
DLSS 3 with Optical Flow acceleration
Much of the work behind DLSS 3 in Ada Lovelace centers on optical flow. Optical flow estimation is commonly used in computer vision to measure the direction and magnitude of the apparent motion of pixels between consecutively rendered graphics or video frames. In 3D graphics and video, typical use cases include reducing latency in virtual and augmented reality, improving the smoothness of video playback, improving compression efficiency, and video camera stabilization.
Optical flow is superficially similar to the motion estimation component of video encoding, but with much higher accuracy and consistency requirements, so different algorithms are used. Since Ampere, NVIDIA GPUs have included a separate optical flow accelerator (OFA). The OFA in Ada Lovelace delivers 300 teraOPS (TOPS) of optical flow work, more than twice the speed of Ampere's.
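To make the concept more tangible, here is a minimal CPU-side sketch using OpenCV's Farnebäck dense optical flow estimator. It only illustrates what a per-pixel motion vector field is; it has nothing to do with the dedicated hardware OFA path NVIDIA uses for DLSS 3, and the input file name is hypothetical.

```python
import cv2
import numpy as np

# Two consecutive frames (grayscale) from any video source.
cap = cv2.VideoCapture("gameplay.mp4")  # hypothetical input file
ok1, frame1 = cap.read()
ok2, frame2 = cap.read()
assert ok1 and ok2, "could not read two frames"
prev_gray = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)
next_gray = cv2.cvtColor(frame2, cv2.COLOR_BGR2GRAY)

# Dense optical flow: one (dx, dy) motion vector per pixel.
flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)
magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
print("mean apparent motion (pixels):", float(np.mean(magnitude)))
```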
This is how DLSS 3 achieves both precision and performance in frame generation, promising twice the FPS compared to Ampere; combined with the new RT cores and other technologies, Ada Lovelace could offer up to four times the speed of previous GPUs. NVIDIA also claims it can deliver higher FPS in games with a CPU bottleneck. For example, in Microsoft Flight Simulator, which is bottlenecked by its physics simulation and draw distances, they claim up to twice the performance with DLSS 3.
Video encoding and streaming
Building on the advances showcased with Ampere and NVIDIA Broadcast, Ada Lovelace includes native support for AV1 video encoding, which is said to be 40% more efficient than the popular H.264 codec. In practical terms, this should allow streaming smooth, good-quality 1440p video while gaming, versus the 1080p that has been the norm so far with Ampere.
NVIDIA also announced a collaboration with OBS Studio to add support for the AV1 codec, which should arrive later this year, as well as improved effects for NVIDIA Broadcast. Improvements are also expected for Discord, with the possibility of streaming in AV1 on that platform. Other features include dual NVENC encoders, able to encode 8K 60 FPS video, or up to four 4K 60 FPS videos simultaneously. Finally, Ada Lovelace GPUs feature a fifth-generation hardware video decoder, with support for MPEG-2, VC-1, H.264 (AVCHD), H.265 (HEVC), VP8, VP9 and AV1 formats.
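As an illustration of what hardware AV1 encoding looks like in practice, a hedged sketch: an FFmpeg build with NVENC AV1 support exposes an av1_nvenc encoder that a capture or streaming pipeline can call. The file names and bitrate below are placeholders, not settings we tested.

```python
import subprocess

# Hypothetical example: re-encode a gameplay capture to AV1 using the GPU's NVENC
# AV1 encoder. Assumes an FFmpeg build that exposes av1_nvenc (RTX 40-series GPU).
subprocess.run([
    "ffmpeg", "-i", "capture.mp4",   # placeholder input recording
    "-c:v", "av1_nvenc",             # Ada's hardware AV1 encoder
    "-b:v", "8M",                    # placeholder bitrate for a 1440p stream
    "-c:a", "copy",
    "capture_av1.mp4",
], check=True)
```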
NVIDIA GeForce RTX 4090 – Photos and unboxing
Synthetic and gaming benchmarks (1080p, 1440p, 2160p)
With the announcement and release of the new generation of video cards, we have to update our benchmark suite (once again). The best processor we have on hand for the necessary tests is the Intel Core i9-12900KF.
A statistics reminder...
Before detailing our test system, let's recap what AVG FPS and 1% LOW mean.
AVG FPS (Average FPS): As the name says, it is the average number of frames per second over a specific sequence. It is the most commonly used measure, but it does not tell the whole story, since there are FPS drops.
1% LOW: Within the full frames-per-second dataset of a specific sequence, sorted ascending, the 1% LOW is the value at the lowest 1% mark. In simpler terms, it represents the FPS drops you actually feel within a specific sequence.
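For reference, here is a minimal sketch of how both metrics can be derived from a log of per-frame render times. It follows the definition above (ascending sort, value at the 1% mark); other capture tools may compute the 1% LOW slightly differently.

```python
import numpy as np

def avg_and_1pct_low(frame_times_ms):
    """Average FPS and 1% LOW from per-frame render times (milliseconds)."""
    times = np.asarray(frame_times_ms, dtype=float)
    fps_per_frame = np.sort(1000.0 / times)                   # instantaneous FPS, ascending
    avg_fps = len(times) * 1000.0 / times.sum()               # total frames / total seconds
    low_1pct = fps_per_frame[int(len(fps_per_frame) * 0.01)]  # value at the lowest 1% mark
    return avg_fps, low_1pct

# Hypothetical capture: mostly ~7 ms frames plus a handful of 20 ms stutters.
sample = [7.0] * 985 + [20.0] * 15
avg, low = avg_and_1pct_low(sample)
print(f"AVG FPS: {avg:.1f} | 1% LOW: {low:.1f}")  # the stutters drag the 1% LOW well below the average
```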
Also, we suggest you read any additional comments about the gaming experience (stutters) that may or may not have occurred during testing.
Benchmarks (GPU Benchmarks – Raster – 2022)
Our revamped benchmark suite is built around the best processor we have in stock, the Intel Core i9-12900KF. We use this processor because it is the one that creates the least bottleneck for the GPUs tested in scenarios where the limiter may be the CPU.
The goal is to extract 100% of the performance of the video card under test, the NVIDIA GeForce RTX 4090 Founders Edition. However, there are scenarios where the GPU will not scale any further at lower resolutions (1080p and even 1440p).
Also, we have upgraded to Windows 11, but we have disabled VBS (Virtualization-Based Security), because in the middle of our tests we noticed that it cost considerable performance in certain scenarios or caused stuttering.
CPU: Intel Core i9-12900KF (Power Limiters Disabled) (https://amzn.to/3fXVGCb)
Board: Z690 Maximus Hero (BIOS 2004)
RAM: Kingston Fury DDR5 5200C40 – 2x16GB
Video card (what we are testing): NVIDIA GeForce RTX 4090 Founders Edition
Operating system: Windows 11 Home Edition 22H2 – VBS OFF
Liquid cooling: Custom Water Loop (EKWB)
SSD: HP FX900 1TB + Samsung 970 EVO PLUS 2TB
Driver: NVIDIA Press Driver
Power supply: EVGA P+ 1300W
Gaming – Rasterization
All tests are done at the highest quality available, unless otherwise specified.
Let's see the first title, Assassin's Creed: Origins.
Assassin's Creed: Origins (1080p, 1440p, 2160p)
Game Engine: AnvilNext 2.0
Borderlands 3 (1080p, 1440p, 2160p)
Game Engine: Unreal Engine 4
Control (1080p, 1440p, 2160p)
Game Engine: Northlight Engine
Death Stranding (1080p, 1440p, 2160p)
Game Engine: Decima
F1 2022 (1080p, 1440p, 2160p)
Game Engine: EGO “Boost” Engine 4.0
God of War (1080p, 1440p, 2160p)
Game Engine: Proprietary
Shadow of the Tomb Raider DX12 (1080p, 1440p, 2160p)
Game Engine: Foundation
Shadow of War (1080p, 1440p, 2160p)
Game Engine: LithTech Jupiter EX
Strange Brigade DX12 + Async (1080p, 1440p, 2160p)
Game Engine: Asura
The Witcher 3 (1080p, 1440p, 2160p)
Game Engine: REDengine 3
DLSS3 – DLSS Frame Generation shows quite a bit of potential
As we mentioned at the beginning, DLSS3 is a big change that comes with the new GPU architecture, Ada Lovelace. At the moment, there are no games available to the public with it (although the first ones should arrive in the coming weeks), but NVIDIA and some developers showed us what the big change coming with the new GPUs offers:
-DLSS Frame Generation
DLSS Frame Generation offers several benefits over conventional DLSS2; among them, it can overcome scenarios where there are bottlenecks. We have tested three games to see the potential of DLSS3:
-Cyberpunk 2077
-Microsoft Flight Simulator
-A Plague Tale: Requiem
All builds are internal, so they are not final public releases. Let's start with Cyberpunk 2077.
Cyberpunk 2077 – Some bugs to be fixed by the developer
Cyberpunk 2077 has been heavily promoted by NVIDIA, as this title will receive a future update that expands its Ray Tracing effects and adds support for DLSS3. Unfortunately, we found some "bugs" with the internal build, related to DLSS2 (super resolution).
At 1440p and 1080p, we saw light artifacts (flashes) during testing. At 2160p those artifacts were much less noticeable, and we did not see this issue in Microsoft Flight Simulator or in A Plague Tale: Requiem.
I lean toward this being a bug on the developer's side, and I have reported it to NVIDIA (they have managed to reproduce it). That said, although I am no expert in image-quality comparisons, to the naked eye the quality looks quite good (with DLSS Frame Generation).
However, since this is still in development, there should be things to improve on NVIDIA's side (in terms of the DLSS driver) as well as in the developer's implementation. Here are two images of the GeForce RTX 4090 using DLSS3 and Ray Tracing with DLSS3.
It is interesting to see that the performance impact of enabling Ray Tracing is not that substantial once DLSS3 is active: only a 12.86% reduction compared to DLSS3 alone. The increase from using DLSS3 itself, on the other hand, is brutal.
Here we have the following graph measuring performance between the different options at 2160p:
-Full raster (no RT)
-Raster + Ray Tracing
-Ray Tracing + DLSS2
-Ray Tracing + DLSS3
-DLSS3 (no RT)
Note that super resolution (what was known as DLSS2) is set to Auto, which here means Performance mode.
The numbers speak for themselves, but the most substantial improvement comes when comparing rasterization with Ray Tracing enabled versus Ray Tracing + DLSS3.
The improvement is 239.53%.
This is quite substantial. Now, to see the generational change between DLSS2 and DLSS3 (remember that DLSS3 is DLSS2 + DLSS Frame Generation): activating DLSS Frame Generation takes the average FPS from 99 to 146, an increase of 47.47%.
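For clarity, these percentages are just the usual relative-change calculation; a quick check against the DLSS2-to-DLSS3 jump quoted above:

```python
def pct_gain(before_fps, after_fps):
    """Relative improvement in average FPS, in percent."""
    return (after_fps - before_fps) / before_fps * 100.0

print(round(pct_gain(99, 146), 2))  # DLSS2 -> DLSS3 at 2160p: 47.47
```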
In this scene, where we have verified that there is a bottleneck at 1080p, activating DLSS3 (DLSS2 + Frame Generation) gets past that bottleneck and hits roughly 310 average FPS. Mind you, we are not suggesting using a GeForce RTX 4090 at 1080p; we just want to show what DLSS Frame Generation brings to the table.
The most important thing: THE PERFORMANCE. DLSS3 breaks several molds, such as removing bottlenecks generated by the processor, while also offering more performance on top of the AI upscaling (DLSS2) already available on NVIDIA GeForce RTX 20 and 30 series cards.
Now it's the turn of Microsoft Flight Simulator.
Microsoft Flight Simulator – More FPS thanks to DLSS3 (Frame Generation)
Microsoft Flight Simulator is a game that has always been limited by the processor. No matter how good a video card one had in the past, FPS at various resolutions was capped. This changes completely with DLSS3 and its Frame Generation feature.
A picture is worth a thousand words, as they say...
90 FPS Average using TAA at 2160p (NVIDIA GeForce RTX 4090)
192 FPS Average using DLSS3 at 2160p (NVIDIA GeForce RTX 4090)
What was once processor-limited disappears with DLSS3 enabled in Microsoft Flight Simulator. Several players of this particular title who are pilots (and who I know have high-end hardware) will probably be interested in DLSS3.
The results are more than impressive.
By the way, we didn't see artifacts (such as light flashes) in Microsoft Flight Simulator.
A Plague Tale: Requiem – Beautiful gameplay and performance increases with DLSS3
While Ray Tracing is not necessarily to everyone's liking, higher FPS is welcomed by everyone. Thanks to NVIDIA and ASOBO Studio for early access and their quick implementation of DLSS3 in the game. It is a title that, personally, has piqued my curiosity to play, thanks to its visuals, which take your breath away from the moment you move the main character.
It's a title we'll try to add to our benchmark list once the final version of the game is released to the public.
The performance increase with DLSS3 over traditional DLSS2 is noticeable, and it is quite a demanding game at 2160p. The game does not offer Ray Tracing, and it will probably be one of the most demanding games of 2022. Remember that DLSS super resolution is on Auto, so it uses Performance mode.
Jumping from TAA (raster only) to DLSS3, there is an improvement from 71 average FPS to 169 (an increase of 138.02%), and although there is an improvement with DLSS2 as well, DLSS3 looks like the new go-to option for performance while maintaining image quality.
A Plague Tale: Requiem – NVIDIA GeForce RTX 4090 – TAA (Raster)
A Plague Tale: Requiem – NVIDIA GeForce RTX 4090 – DLSS3
Productivity (Under construction)
Temperatures/sound levels/consumption and overclock (Under construction)
Based on the tests we have had time to verify, the temperature, noise, consumption and overclocking story is different from what it usually was. The factory TGP of the NVIDIA GeForce RTX 4090 Founders Edition is 450W, and it can be raised to 600W.
Interestingly, under demanding gaming loads, the video card does not reach 600W. Let's see what NVIDIA's Founders Edition solution has in store for us this generation.
Consumption
Normally we would start with the measured temperatures of the NVIDIA GeForce RTX 4090, but I think that, on this occasion, consumption is more relevant. Unless there is something unusual, this should apply to all RTX 4090 cards.
We used A Plague Tale: Requiem at 2160p to measure consumption, and the maximum TGP does not exceed 410-420W at factory settings. Raising the slider to a 600W TGP and applying a manual overclock, consumption only increases by about 30W (440-450W).
This is quite curious, since on Ampere cards, when the power slider was increased, consumption usually got close to the modified TGP.
This time it is not so.
Average Consumption (TGP=450W) – 410-420W – NVIDIA GeForce RTX 4090 Founders
Average Consumption (TGP=600W) – 440-450W – NVIDIA GeForce RTX 4090 Founders
That means that for gamers, an 850-1000W power supply is more than enough for a GeForce RTX 4090, if this factory-settings behavior holds for most games. The only exception was the FurMark donut stress test; that has been the only scenario where I have reached 600W with a manual overclock.
Temperatures – Improved cooling that is finally competitive
Sometimes they say size matters, and the NVIDIA GeForce RTX 4090 is no exception. Although the cooling solution NVIDIA uses for its flagship is generous in size, it delivers quite low temperatures, never before seen on higher-end Founders Edition video cards.
We could even say that, on this occasion, the GeForce RTX 4090 Founders Edition is a strong competitor against the models of its partners.
After 20 minutes of stress in God of War (2160p) at the highest quality, the maximum temperature of the NVIDIA GeForce RTX 4090 Founders Edition was 64.7 degrees Celsius. The memory temperature did not exceed 74 degrees Celsius, and the GPU hotspot reached 71.9 degrees Celsius.
For those who have used Founders Edition video cards in the past, cooling was their weak point, but the change of manufacturing process to TSMC, along with the engineering behind the new Founders Edition heatsink, brings NVIDIA's reference video cards to this new level.
Sound levels (Under construction)
Still under construction, but we have noticed coil whine on this video card in some scenarios. Aside from the coil whine, the noise generated by the cooler is low.
Overclock (Under construction)
I have settled on a base configuration and will probably try to push the frequency further, but at the moment the card is overclocked by +250 MHz on the core clock and +1000 MHz on the memory clock. This makes the card boost up to 3000 MHz, but we only observed a performance improvement of around 5%.
It may be the silicon lottery. It should be noted that the power limit was raised to 600W (TGP).
Final Analysis – Godzilla Has Arrived, NVIDIA GeForce RTX 4090 Founders Edition
Many have been waiting for the release of the new video cards, and the NVIDIA GeForce RTX 4090 is the first to arrive. We didn't get to review the NVIDIA GeForce RTX 3090 Ti, but if we had, the review probably would have been pretty negative: a video card with little difference from the NVIDIA GeForce RTX 3090 and a substantially higher price, released at a time when video card prices didn't make much sense, especially at the high end.
Now comes the NVIDIA GeForce RTX 4090, and this video card makes far more sense than the NVIDIA GeForce RTX 3090 Ti and GeForce RTX 3090 ever did. The RTX 4090 carries a pretty hefty price tag for normal mortals, but NVIDIA knows this, since it is a top-of-the-range product the company considers focused on gaming and/or content creation.
The NVIDIA GeForce RTX 4090 comes in at the same price as the RTX 3090 did at launch, and while we don't have an RTX 3090 on hand, we do have the closest thing, the NVIDIA GeForce RTX 3080 Ti, with very similar performance (US$1119 MSRP in the US).
Time to review the GeForce RTX 4090 versus its predecessors.
Performance difference in rasterization (RTX 4090 versus the world)
Due to time constraints and having to retest everything after updating the test bench, we will compare the RTX 4090 against:
-NVIDIA GeForce RTX 3080 Ti
-NVIDIA GeForce RTX 3080
-NVIDIA GeForce RTX 3060 Ti
We will add more information in the coming weeks and will have more data for the launch of the 12GB and 16GB NVIDIA GeForce RTX 4080. But now, time for the analysis.
NVIDIA GeForce RTX 4090 Founders Edition ($1599) versus (xanxogaming)

| RTX 4090 advantage vs. | 1080p | 1440p | 2160p |
| --- | --- | --- | --- |
| NVIDIA GeForce RTX 3080 Ti ($1119) | 29.45% | 55.15% | 77.55% |
| NVIDIA GeForce RTX 3080 ($699) | 36.73% | 67.29% | 93.84% |
| NVIDIA GeForce RTX 3060 Ti ($399) | 90.83% | 149.33% | 199.16% |
If in its day we said the NVIDIA GeForce RTX 3080 was the first video card with which the end user could play at 4K UHD 60 FPS without compromises, the NVIDIA GeForce RTX 4090 is the card with which end users can finally play with very few compromises at 4K UHD 144 FPS in the most demanding games. And that's without counting that most upcoming games will probably support DLSS3, an innovation of the new NVIDIA architecture.
The NVIDIA GeForce RTX 4090 performs 77.55% faster than an RTX 3080 Ti based on the games we've tested at 2160p. Undoubtedly, NVIDIA wanted to make a statement with the GeForce RTX 4090 and target the audience that would not hesitate to buy a US$1599 video card to play comfortably on a 4K UHD monitor or TV with a 120 or 144 hertz refresh rate.
We have some long-standing titles in our databank, but even in those games the RTX 4090 hits a bottleneck at 1440p. While it could be used at that resolution, it's clear that 2160p is the target. It also makes little sense to buy a video card at this cost for resolutions like 1920×1080.
Cost per FPS – Rasterization (Analysis)
For the cost-per-FPS fans, we couldn't skip this metric. As we've mentioned, the GeForce RTX 4090 makes the most sense at resolutions like 2160p, so we'll start our review at that resolution.
Compared to the NVIDIA GeForce RTX 3080 Ti (and we can include the RTX 3090 and 3090 Ti here), the cost per FPS of the GeForce RTX 4090 is lower than all of the video cards just mentioned. Mind you, this is only in rasterization, and we are not even putting DLSS3 on the table.
NVIDIA GeForce RTX 4090 Founders Edition ($1599) – Cost per FPS (xanxogaming)

| Card | 1080p | 1440p | 2160p |
| --- | --- | --- | --- |
| NVIDIA GeForce RTX 4090 ($1599) | $6.45 | $6.80 | $9.80 |
| NVIDIA GeForce RTX 3080 Ti ($1119) | $5.85 | $7.38 | $12.18 |
| NVIDIA GeForce RTX 3080 ($699) | $3.86 | $4.97 | $8.30 |
| NVIDIA GeForce RTX 3060 Ti ($399) | $3.07 | $4.23 | $7.32 |
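The metric itself is simply the card's price divided by the average FPS it reaches at the given resolution. A small sketch; the FPS figure below is back-calculated from the table purely for illustration, not a measured result:

```python
def cost_per_fps(price_usd, avg_fps):
    """Dollars paid per average frame per second at a given resolution."""
    return price_usd / avg_fps

# Back-calculated illustration: $1599 at roughly 163 average FPS (2160p) lands around $9.8 per FPS.
print(round(cost_per_fps(1599, 163), 2))
```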
Cost per FPS tells only part of the story: if we focus on 2160p, the RTX 3060 Ti wins the category, but looking at its actual performance at that resolution, it really doesn't have enough power for 4K UHD. Compared to the NVIDIA GeForce RTX 3080 Ti, the GeForce RTX 4090 achieves a 19.54% reduction in this metric.
The GeForce RTX 3060 Ti will remain a price/performance card for the 1080p/1440p segment for a while, and it won't be until its successor comes out (and depending on its price) that it will be displaced.
The GeForce RTX 3080 makes a good argument at 2160p, but as we said in our review two years ago, the RTX 3080 offered smooth 4K UHD 60 FPS gaming for the first time, while the RTX 4090 nearly offers 4K UHD 144 FPS.
Having made this clear, let's move on to talk about DLSS 3.
DLSS3 – What's Coming Soon
We tested several games, even as internal builds, trying DLSS3 and DLSS Frame Generation, which makes a big difference in performance compared to DLSS2. I hope that by the launch of the GeForce RTX 4080 we will already have games with DLSS3 in their public release versions; there is a difference between an internal build and a public one.
Having said that, what DLSS3 offers is pretty awesome. While we did see some visual issues in Cyberpunk 2077, in Flight Simulator and A Plague Tale we did not observe any problem with the naked eye. The performance improvement was simply surprising in every game we tested. We suggest you read the DLSS3 section.
DLSS Frame Generation increases performance even in scenarios where there is a processor bottleneck, and it does so in a substantial way. Microsoft Flight Simulator is a demonstration of this, and the visual quality is quite good. A Plague Tale is the game that really blew me away, both for its beauty and for the FPS improvement at 2160p with DLSS3.
A Plague Tale is the first AAA game in a long time that I've been excited to play, both because of the visuals and because of the performance improvement with DLSS3. I hope the version released to the public ships with DLSS3 implemented.
Productivity – Quite a few improvements
We have done few productivity tests, but enough of the basics to form an opinion of the NVIDIA GeForce RTX 4090. In tasks like Blender, the performance increase is enormous, and with the new encoders that Ada Lovelace brings, every content creator who monetizes with their GPU is going to aim for an RTX 4090.
I hope in the following weeks I can do more tests (the RTX 4090 is going on tour), so we will come back with more details, but professionals in this target audience can rest easy about the performance of this new release.
Conclusion – New top of the range has arrived with RTX 4090
The improvements offered by the NVIDIA GeForce RTX 4090 are significant. The performance differences between an RTX 3080, 3080 Ti, 3090 and 3090 Ti were minimal, but those cards did coexist on the market. The RTX 4090 sets a new level and beats all of these cards by a lot. In certain games there was up to a 100% improvement (rasterization only), with an average of 77% higher FPS at 2160p.
This is without taking into consideration the improvement in productivity tasks where GPU acceleration is used, programs that take advantage of Ray Tracing and DLSS and especially, what the future holds with DLSS3.
Unlike Ray Tracing, which many are not excited about, DLSS3 with Frame Generation offers a new level of performance, and I hope the industry embraces it in the games released in the coming months and beyond.
The price NVIDIA asks for the RTX 4090 is $100 more than what it asked for the RTX 3090, although much less than the RTX 3090 Ti. We have to emphasize this, since our opinion of the RTX 3090 Ti was not very good. The RTX 3090 also didn't make much sense for gaming compared to the 3080 and 3080 Ti; it was a card that made more sense for professional-level content creation.
Having said that, we believe NVIDIA is targeting users who are looking for the ultimate video card, with the highest performance possible. This card doesn't make ANY sense for 1080p in the vast majority of cases, and even at 1440p the 4090 doesn't make much sense either, although the argument is better there. Gaming at 2160p (4K UHD) with a 144 hertz monitor is where the NVIDIA GeForce RTX 4090 has its niche: it is the only video card that delivers the best experience, and it is well above an RTX 3080, 3080 Ti, 3090 or 3090 Ti in this category. The same goes for users with large high-refresh-rate TVs who want the best image fidelity. It is literally a monster, as is the price.
AMD will announce its new graphics cards next month, but it faces pretty strong competition in the RTX 4090, so we're guessing they'll compete on price (a guess). The advantage of DLSS3 and its adoption by developers will probably be what defines who takes the market share in high-end video cards. The 77% generational jump (at 2160p) is the bar we will see whether they can match; if not, the RTX 4090 will be without competition in this category (both in performance and price).
As for consumption, at the time of testing, unless one is going to run the FurMark donut or some other stress tool, the consumption of the RTX 4090 is lower than expected (even below the 450W TGP).
If you don't see AMD comparisons in the review, it's because the company doesn't sample video cards for us so we can compare them to NVIDIA cards.
Intel has nothing to compete with in this price and performance bracket, so it will take at least a generation to see if the company can offer something to rival this monstrosity.
I think NVIDIA has come out wanting to make it clear that it dominates the gaming market, as well as content creation, with the NVIDIA GeForce RTX 4090, and it seems there will be a clear distinction in performance versus the 12GB and 16GB RTX 4080 judging by the specifications, but that is something we will see in a future review.