March 18, 2025

NVIDIA GeForce RTX 5080 Founders Edition Review

And this is the second release in NVIDIA's launch lineup: the GeForce RTX 5080 Founders Edition, which marks the debut of the Blackwell family of video cards for consumers (both go on sale tomorrow). The GeForce RTX 5090 had a mixed reception, and my biggest concern with that model is its high power consumption of 575 watts TGP, along with other criticisms. The GeForce RTX 5080, on the other hand, has a more moderate TGP (360 W) compared to the 5090, though it is still an increase over the 320 W of the RTX 4080 and RTX 3080 (+12.5%).

Fortunately, there is no price increase for the end user: its cost is lower than the RTX 4080's launch price ($1199), matching the reference price of the RTX 4080 Super ($999).

My expectations are not very high for this review, but there should at least be a small improvement (even if slight) in the price/performance ratio with this release.

The NVIDIA GeForce RTX 5080 Founders Edition video card has been sampled by NVIDIA GeForce LATAM.


Blackwell Architecture - NVIDIA GeForce RTX 50 Series

For NVIDIA there is no turning back: the new Blackwell architecture for graphics cards employs more and better techniques and tools based on artificial intelligence, neural network processing, and generative AI engines. The green company seeks to further leverage its expertise in enterprise AI without neglecting the classic improvements: base specifications, energy efficiency, support for new codecs and video outputs, a new GDDR7 memory type, and more.

Still, the star of Blackwell is the Deep Learning Super Sampling technology, popularly known as DLSS, which in its fourth generation promises up to twice as many frames per second as DLSS 3 or 3.5. There will also be a deeper look at "neural shaders," which will work with AI models trained by developers to generate "approximate" images even faster than previous-generation ray tracing. We also have an update to the DLSS Ray Reconstruction technique, which reduces the number of rays needed to generate ray-traced lighting, plus another group of techniques that we will see later.

Let's start with the basics: the specifications of the graphics cards, shown below.

| Graphics card | GeForce RTX 3080 | GeForce RTX 4080 | GeForce RTX 5080 |
|---|---|---|---|
| GPU code name | GA102 | AD103 | GB203 |
| GPU architecture | NVIDIA Ampere | NVIDIA Ada Lovelace | NVIDIA Blackwell |
| GPCs | 6 | 7 | 7 |
| TPCs | 34 | 38 | 42 |
| SMs | 68 | 76 | 84 |
| CUDA cores / SM | 128 | 128 | 128 |
| CUDA cores / GPU | 8704 | 9728 | 10752 |
| Tensor cores / SM | 4 (3rd Gen) | 4 (4th Gen) | 4 (5th Gen) |
| Tensor cores / GPU | 272 (3rd Gen) | 304 (4th Gen) | 336 (5th Gen) |
| RT cores | 80 (2nd Gen) | 76 (3rd Gen) | 84 (4th Gen) |
| GPU boost clock (MHz) | 1710 | 2505 | 2617 |
| Peak FP32 TFLOPS (non-Tensor)¹ | 34.1 | 48.7 | 56.3 |
| Peak FP16 TFLOPS (non-Tensor)¹ | 34.1 | 48.7 | 56.3 |
| Peak BF16 TFLOPS (non-Tensor)¹ | 34.1 | 48.7 | 56.3 |
| Peak INT32 TOPS (non-Tensor)¹ | 17 | 24.4 | 56.3 |
| RT TFLOPS | 58.1 | 112.7 | 170.6 |
| Peak FP4 Tensor TFLOPS with FP32 accumulate (FP4 AI TOPS) | N/A | N/A | 900.4/1801² |
| Peak FP8 Tensor TFLOPS with FP16 accumulate¹ | N/A | 389.9/779.8² | 450.2/900.4² |
| Peak FP8 Tensor TFLOPS with FP32 accumulate¹ | N/A | 194.9/389.8² | 225.1/450.2² |
| Peak FP16 Tensor TFLOPS with FP16 accumulate¹ | 119.1/238.2² | 194.9/389.8² | 225.1/450.2² |
| Peak FP16 Tensor TFLOPS with FP32 accumulate¹ | 59.5/119² | 97.5/195² | 112.6/225.1² |
| Peak BF16 Tensor TFLOPS with FP32 accumulate¹ | 59.5/119² | 97.5/195² | 112.6/225.1² |
| Peak TF32 Tensor TFLOPS¹ | 29.8/59.6² | 48.7/97.4² | 56.3/112.6² |
| Peak INT8 Tensor TOPS¹ | 238.1/476.2² | 389.9/779.8² | 450.2/900.4² |
| Frame buffer memory size and type | 10 GB GDDR6X | 16 GB GDDR6X | 16 GB GDDR7 |
| Memory interface | 320-bit | 256-bit | 256-bit |
| Memory clock (data rate) | 19 Gbps | 22.4 Gbps | 30 Gbps |
| Memory bandwidth | 760 GB/s | 716.8 GB/s | 960 GB/s |
| ROPs | 96 | 112 | 112 |
| Pixel fill rate (Gigapixels/s) | 164.2 | 280.6 | 293.1 |
| Texture units | 272 | 304 | 336 |
| Texel fill rate (Gigatexels/s) | 465.1 | 761.5 | 879.3 |
| L1 data cache / shared memory | 8704 KB | 9728 KB | 10752 KB |
| L2 data cache | 5120 KB | 65536 KB | 65536 KB |
| Register file size | 17408 KB | 19456 KB | 21504 KB |
| Video engines | 1x NVENC (7th Gen), 1x NVDEC (5th Gen) | 2x NVENC (8th Gen), 1x NVDEC (5th Gen) | 2x NVENC (9th Gen), 2x NVDEC (6th Gen) |
| TGP (Total Graphics Power) | 320 W | 320 W | 360 W |
| Transistor count | 28.3 billion | 45.9 billion | 45.6 billion |
| Die size | 628.4 mm² | 378.6 mm² | 378 mm² |
| Fabrication process | Samsung 8nm (NVIDIA custom 8N) | TSMC 4nm (NVIDIA custom 4N) | TSMC 4nm (NVIDIA custom 4N) |
| PCI Express interface | Gen 4 | Gen 4 | Gen 5 |

¹ Peak rates based on GPU boost clock. ² Effective throughput using sparsity.

The GeForce RTX 5080 uses the GB203 GPU. It includes 7 graphics processing clusters (GPCs), 42 texture processing clusters (TPCs), 84 streaming multiprocessors (SMs), and a 256-bit memory interface. Each of the 7 GPCs contains a raster engine, 6 TPCs, 12 SMs, and 16 raster operation (ROP) units.

GB202

In total, the GB203 GPU has 64MB of L2 cache. This hardware underpins the architecture's "neural rendering" approach. The main attraction is DLSS 4, with multi frame generation and lower latency, enabled by improved RTX techniques (SER 2.0) and AI-based image generation and transformation.

Blackwell SM

Finally, each Blackwell streaming multiprocessor (SM) includes 128 CUDA cores, one fourth-generation RT core, four fifth-generation Tensor cores, 4 texture units, a 256KB register file, and 128KB of shared memory/L1 cache. The FP32 and INT32 cores have also been unified, allowing each to perform either operation as needed. Additionally, the number of texture units has increased from 304 on the RTX 4080 to 336 on the RTX 5080, and bilinear texel rates have risen from 761.5 Gigatexels per second on the 4080 to 879.3 Gigatexels per second on the 5080.
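As a quick sanity check, the fill-rate figures in the spec table follow directly from unit counts and the boost clock; a minimal sketch in Python:

```python
# Cross-checking the RTX 5080 fill rates from the spec table:
# texel rate = texture units x boost clock, pixel rate = ROPs x boost clock.
boost_clock_ghz = 2.617   # RTX 5080 boost clock (GHz)
texture_units = 336
rops = 112

texel_rate = texture_units * boost_clock_ghz   # Gigatexels/sec
pixel_rate = rops * boost_clock_ghz            # Gigapixels/sec

print(round(texel_rate, 1), round(pixel_rate, 1))  # 879.3 293.1
```

The same arithmetic with the RTX 4080's 304 texture units and 2505 MHz boost clock reproduces its 761.5 Gigatexels/sec figure.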

One of the most striking points is, without a doubt, the debut of GDDR7 graphics memory in NVIDIA gaming GPUs. After two generations of cards with GDDR6X, NVIDIA introduces this new memory generation with no less than 32GB on the RTX 5090, 16GB on the RTX 5080 and RTX 5070 Ti, and 12GB on the RTX 5070.

Key improvements to GDDR7 include PAM3 signaling technology that promises improved signal-to-noise ratio and doubles the density of independent channels, resulting in greater memory bandwidth and improved energy efficiency.

GDDR7 PAM

Fifth-generation Tensor Cores now support the FP4 data format, which requires less than half the memory thanks to a compression type with virtually zero loss in quality compared to FP16.

Blackwell's fourth-generation RT cores double the throughput of ray intersection tests, a high-frequency operation when generating ray-traced images, so ray-traced frames should be generated faster. This generation also includes an opacity micromap engine, which helps reduce the computation needed in alpha-tested shaders. Other dedicated ray tracing features include Mega Geometry, a triangle cluster intersection engine, and linear swept spheres for rendering fine geometry such as hair.

Mega Geometry aims to increase geometric detail in applications that use ray tracing. In particular, for graphics engines such as Unreal Engine 5 that use more advanced level-of-detail systems, it allows ray tracing with greater fidelity, improving the quality of shadows, reflections, and indirect lighting.

Mega Geometry is expected to be available across the DirectX 12, Vulkan, and OptiX 9.0 APIs, and on all RTX graphics cards from the Turing architecture (the RTX 20 generation) onward.

On the other hand, we have Shader Execution Reordering (SER) 2.0. This technology reorganizes the GPU's parallel threads for greater processing efficiency. It helps lighten ray tracing workloads such as divergent memory access and path tracing, and routes work to the tensor or shader cores. Games and APIs already take advantage of this technique in ray tracing, and the new version promises better results.

RTX Blackwell RT Shader

The AI ​​Management Processor (AMP) is a context-programmable scheduler on the GPU that improves process scheduling in Windows, reducing the contextual load on the GPU. Using a RISC-V processor, AMP works with the Windows architecture to reduce latency and improve memory management, reducing CPU load for task scheduling and helping to reduce bottlenecks, while improving frame rates and multitasking in Windows. 

And on the graphics rendering front, we finally come to DLSS 4, the crown jewel of Blackwell. NVIDIA promises multi frame generation with improved performance and lower memory usage than previous versions of the technology, as well as improvements to earlier DLSS techniques such as frame generation, ray reconstruction, super resolution, and deep learning anti-aliasing.

With the combination of hardware, architecture, and software improvements, NVIDIA promises a 40% higher frame rate than DLSS 3, using 30% less video memory, with a model that only needs to run once per frame. Optical-flow frame generation is now AI-based instead of relying on dedicated hardware, also reducing the cost of frame generation and integration. A new pacing system, Flip Metering, shifts frame-pacing logic to the display engine, allowing the GPU to improve the accuracy of display timings.

RTX Blackwell DLSS 4 Multi Frame Generation Diagram

DLSS Super Resolution (SR) boosts performance by using AI to produce higher resolution frames from a lower resolution input. DLSS samples multiple low-resolution images and uses motion data and feedback from previous frames to construct high-quality images. The final product of the transformer model is more stable, with less ghosting, more motion image detail, and improved anti-aliasing compared to previous versions of DLSS. 

Ray Reconstruction (RR) improves image quality by using AI to generate additional pixels in ray-traced intensive scenes. DLSS replaces manual denoisers with an AI network trained on NVIDIA supercomputers that generates higher quality pixels between sampled rays. In ray-traced intensive content, the transformer model for RR further improves quality, especially in scenes with challenging lighting. In fact, all common artifacts of typical denoisers are significantly reduced. 

Deep Learning Anti-Aliasing (DLAA) provides improved image quality using an AI-based anti-aliasing technique. DLAA uses the same super-resolution technology developed for DLSS, constructing a more realistic, high-quality image at native resolution. The result provides greater temporal stability, motion details, and smoother edges in a scene.

Neural shaders are a technology NVIDIA is introducing with Blackwell, unifying traditional shaders with AI in parts of the rendering process, partially at first and, presumably, fully in the future. Tensor cores are now accessible to graphics shaders, combined with the scheduling optimizations in SER 2.0 (Shader Execution Reordering), so that AI graphics with neural filtering capabilities and AI models, including generative AI, can run simultaneously in next-generation games.

RTX Neural Rendering

Neural shaders allow us to train neural networks to learn efficient approximations of complex algorithms that calculate how light interacts with surfaces, decompress super-compressed video, predict indirect illumination from limited real data, and approximate subsurface light scattering. The potential applications of neural shaders are still largely unexplored, meaning that new applications may be found.

Among the other integrated techniques are "RTX Neural Materials". Here, AI replaces the original mathematical model of a material or texture with an approximation, promising "cinema-quality" frames at gaming speeds while using less video memory and fewer graphics card resources.

RTX Neural Materials

Another technique is RTX Neural Texture Compression, which leverages neural networks accessed through neural shaders to compress and decompress material textures more efficiently than traditional methods. Then there is the so-called "Neural Radiance Cache" (NRC), which uses a neural shader to cache and approximate radiance information. By training a neural network, complex lighting information can be stored and used to create high-quality global illumination and dynamic lighting effects in real time.

Then we have RTX Skin, a technique NVIDIA developed based on the subsurface scattering used to render film imagery. It allows light passing through translucent surfaces such as skin to be rendered subtly or intensely, depending on the game's requirements, using ray tracing.

And RTX Neural Faces takes a simple rasterized face plus rough 3D pose data and uses a generative AI model to infer a more natural-looking face.

For video and encoding features, there is support for chroma-subsampled 4:2:2 video, which has lower data requirements than the 4:4:4 standard, making it practical for working with HDR content. The ninth-generation NVENC encoder also delivers higher-quality AV1 and HEVC video. The RTX 5090 graphics card supports up to three encoders and two decoders. A sixth-generation NVDEC hardware decoder is also included.

Finally, Blackwell supports DisplayPort 2.1b with up to 80 Gbps of bandwidth. This allows refresh rates of up to 165 Hz at 8K resolution and no less than 420 Hz at 4K.

Photos - NVIDIA GeForce RTX 5080 Founders Edition

Synthetic benchmarks, productivity and gaming (1080p, 1440p, 2160p)

With the release and announcement of next-gen graphics cards, we have to update our test bench (once again). The best gaming processor at the moment is the AMD Ryzen 7 9800X3D.

A statistics reminder...

AVG FPS (Average Frames Per Second):

This is the average number of frames per second during a benchmark. It represents the overall performance of the graphics card and shows how smooth a game will be on average.

  • Importance: It allows you to compare overall performance between cards, but does not reflect possible drops or instability.

1% Percentile:

Measures the average of the lowest FPS values (the worst 1% of frames). It indicates performance drops and overall stability.

  • Importance: It reveals how consistent the experience is. A low 1% Percentile implies potential stuttering, even if the average is high.

Relationship:

The AVG FPS shows overall speed, while the 1% Percentile reflects fluidity. Together, they offer a complete performance assessment.

The new tests are measured with MsBetweenDisplay.
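For clarity, this is how the two statistics above are typically derived from a frame-time log; a small illustrative sketch (the frame times are invented):

```python
# Hypothetical frame-time capture in milliseconds (one value per presented frame).
frame_times_ms = [16.7, 16.9, 16.5, 40.0, 16.8, 17.0, 16.6, 33.3, 16.7, 16.9] * 10

# Average FPS: total frames divided by total elapsed time in seconds.
avg_fps = len(frame_times_ms) / (sum(frame_times_ms) / 1000.0)

# 1% percentile low: average FPS of the slowest 1% of frames.
fps_per_frame = sorted(1000.0 / ft for ft in frame_times_ms)
worst = fps_per_frame[:max(1, len(fps_per_frame) // 100)]
low_1_percent = sum(worst) / len(worst)
```

In this made-up capture the average lands around 48 FPS, but the occasional 40 ms spike drags the 1% low down to 25 FPS, which is exactly the stutter the average hides.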

Benchmarks (GPU Benchmarks – 2025)

The revamped test bench features the best processor I have on hand, the AMD Ryzen 7 9800X3D. We use this processor because it generates the smallest bottleneck for the tested GPUs in scenarios where the CPU may be the limiter (video link).

The goal is to extract 100% of the performance of the video card under test, the NVIDIA GeForce RTX 5080 Founders Edition. However, there are scenarios where the GPU will not scale further at lower resolutions (1080p and even 1440p).

I'm using Windows 11 24H2, but I've disabled VBS (Virtualization-Based Security), as it either slows down performance in certain scenarios or causes stuttering.

CPU: AMD Ryzen 7 9800X3D (https://amzn.to/4h5d7eR)
Board: ROG STRIX B650E-E GAMING WIFI (BIOS 3057) (https://amzn.to/4abMKAY)
RAM: CORSAIR VENGEANCE RGB DDR5 RAM 32GB (2x16GB) 6000MHz CL30 AMD EXPO Intel XMP (https://amzn.to/404P6gk)
Video card (under test): NVIDIA GeForce RTX 5080 Founders Edition (Link: https://amzn.to/3Pe4Vx6)
Operating system: Windows 11 Home Edition 24H2 – VBS OFF
Liquid cooling: DeepCool Mystique 360
SSD: FN970 1TB M.2 2280 PCIe Gen4 x4 NVMe 1.4 (https://amzn.to/3PuXPn8)
Driver: NVIDIA Press Driver 572.12
Power supply: NZXT C1200 ATX 3.1 (https://amzn.to/3ChugT4)

3DMark Time Spy Extreme

3DMark Speed ​​Way

Vray Benchmark 6 (RTX)

Blender

AI - MLPerf Client 0.5 - Inference Test

MLPerf is a set of tests created by MLCommons, a consortium that includes experts from Harvard, Stanford, NVIDIA, and Google, among others. These tests evaluate the performance of advanced GPUs, and now, with MLPerf-Client v0.5 for Windows, users can measure how their PCs and laptops drive generative large language models (LLMs) with INT4 inference.

LLMs are fundamental to generative artificial intelligence, but evaluating their performance across different systems can be tricky. MLPerf-Client simplifies this by producing clear, comparable results, helping users understand how popular models perform in real desktop tasks:

    • Content generation
    • Creative writing
    • Light summarization
    • Moderate summarization

The benchmark uses Meta's Llama2-7B model, known for its accessibility and similarity to modern architectures. It also takes advantage of technologies such as ONNXRuntime-GenAI and the DirectML EP to run models on a variety of hardware.

The tests generate two key metrics: the average time to generate the first token (smaller is better), measured in seconds (s), and the average generation rate of subsequent tokens (larger is better), measured in tokens per second (tok/s). These metrics provide a clear view of your system's generative AI performance.
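To make the two metrics concrete, here is a minimal sketch of how they fall out of raw token timestamps; the timestamps are invented for illustration:

```python
# Hypothetical arrival times (seconds since the prompt was submitted)
# of each generated token.
token_times = [0.114, 0.120, 0.126, 0.133, 0.139, 0.145]

# Time to first token: smaller is better.
ttft = token_times[0]

# Generation rate of the subsequent tokens: larger is better (tok/s).
subsequent_tokens = len(token_times) - 1
tok_per_s = subsequent_tokens / (token_times[-1] - token_times[0])
```

With these made-up numbers, the first token arrives after 0.114 s and the remaining five stream out at roughly 161 tok/s, in the same ballpark as the RTX 5080's scores in the table below.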

MLPerf-Client 0.5
| Test | Metric | RTX 5080 | RTX 4080S | % change (RTX 5080 vs RTX 4080S) |
|---|---|---|---|---|
| Total | Average time to first token (s) | 0.114 | 0.151 | -24.50% |
| Total | Average token generation rate (tok/s) | 158.75 | 136.26 | 16.51% |
| Content generation | Average time to first token (s) | 0.067 | 0.083 | -19.28% |
| Content generation | Average token generation rate (tok/s) | 170.53 | 146.96 | 16.04% |
| Creative writing | Average time to first token (s) | 0.094 | 0.136 | -30.88% |
| Creative writing | Average token generation rate (tok/s) | 161.33 | 138.64 | 16.37% |
| Summary, light | Average time to first token (s) | 0.139 | 0.186 | -25.27% |
| Summary, light | Average token generation rate (tok/s) | 156.49 | 134.3 | 16.52% |
| Summary, moderate | Average time to first token (s) | 0.191 | 0.25 | -23.60% |
| Summary, moderate | Average token generation rate (tok/s) | 147.5 | 125.99 | 17.07% |

Gaming – Rasterization

All tests are done at the highest quality available, unless otherwise specified.

Let's see the first title, A Plague Tale: Requiem.

A Plague Tale: Requiem (1080p, 1440p, 2160p)

Game Engine: Proprietary

Alan Wake 2 (1080p, 1440p, 2160p)

Game Engine: Northlight Engine

Baldur's Gate 3 (1080p, 1440p, 2160p)

Game Engine: Divinity Engine 4.0

Black Myth: Wukong (1080p, 1440p, 2160p)

Game Engine: Unreal Engine 5

Borderlands 3 (1080p, 1440p, 2160p)

Game Engine: Unreal Engine 4

CS2 (1080p, 1440p, 2160p)

Game Engine: Source 2

F1 24 (1080p, 1440p, 2160p)

Game Engine: EGO Engine 4.0

God of War (1080p, 1440p, 2160p)

Game Engine: Proprietary

Marvel's Spider-Man Remastered (1080p, 1440p, 2160p)

Game Engine: Proprietary

Shadow of the Tomb Raider DX 12 (1080, 1440p, 2160p)

Game Engine: Foundation

Shadow of War (1080, 1440p, 2160p)

Game engine: LithTech Jupiter EX

Star Wars: Jedi Survivor (1080p, 1440p, 2160p)

Game Engine: Unreal Engine 4

Strange Brigade DX12 + Async (1080p, 1440p, 2160p)

Game Engine: Asura

Warhammer 40,000: Space Marine 2 (1080p, 1440p, 2160p)

Game Engine: Swarm Engine

DLSS 4 Multi Frame Generation (2X, 3X, 4X)

DLSS 4 Multi Frame Generation is the evolution of NVIDIA's frame generation technology. Unlike previous versions, it can generate several additional frames for each rendered frame, thanks to a new AI model (a transformer) and the Flip Metering hardware component present in the GeForce RTX 50 series. This combination allows a considerable increase (3X, 4X) in frame rate without drastically increasing latency. Additionally, adjustments have been made to improve visual quality in fast scenes and reduce artifacts (according to NVIDIA).

Conventional DLSS Frame Generation works at 2X.
DLSS Multi Frame Generation works at 3X and 4X.

Alan Wake 2 (DLSS 4 Multi Frame Generation) – The title that finally exposed the weakness of FG and MFG

Let's start with Alan Wake 2, one of the two tested titles where visual quality declined significantly when using frame generation, whether conventional (2X) or the new Multi Frame Generation (3X and 4X) on GeForce RTX 50 series cards. I need to check in more detail whether the transformer model generates artifacts without frame generation in this particular title; however, it was immediately apparent that activating any frame generation mode produced visual artifacts, with no slow-motion test needed. These errors, or severe ghosting, are very noticeable with FG/MFG in Alan Wake 2. While FPS increases when it is activated, the visual experience leaves much to be desired. In Cyberpunk 2077 the situation was much better, but we will discuss that separately.

Still, here are the results of the benchmark scene at 2160p:

RTX 5080 Alan Wake 2 Multi Frame Generation

Starting with rasterization only, the new GeForce RTX 5080 records 53 FPS on average. Then, using upscaling (Quality mode) and the Ultra ray tracing preset, frames drop from 53 to 31 when enabling all available technologies except FG/MFG. Activating FG 2X raises the average FPS to 57, with MFG 3X it reaches 83, and with MFG 4X it hits 107 frames on average. This could be considered the holy grail if it weren't for the poor viewing experience, where the character "generates" artifacts when running.

The cause of this problem seems to be the low base average frame rate (without FG/MFG), which is around 30. Apparently, if FPS is below 60, any frame generation mode in Alan Wake 2 makes the artifacts very visible, in this case around the main character when running. The lower the FPS, the worse it looks, due to lack of information (fewer frames).

To corroborate this, I lowered DLSS from Quality to Ultra Performance mode, thus using a base resolution of 720p. In this scenario, the base FPS (before applying FG/MFG) is much higher and, when enabling FG/MFG, the ghosting and artifacts are much less noticeable compared to the previous upscaling mode.

Cyberpunk 2077 (Multi Frame Generation) – A more stable experience

We arrive at Cyberpunk 2077, where the ghosting and artifacts caused by low FPS combined with frame generation were considerably less noticeable than in Alan Wake 2. That does not mean the problem does not exist, but it does not reach the same severity.

Following the same methodology as in the previous game, we establish a rasterization baseline. The GeForce RTX 5080 averages 69 FPS in the test. However, when activating all NVIDIA technologies except frame generation, frames drop from 69 to 41 FPS, a reduction of 40%.

RTX 5080 Cyberpunk 2077 Multi Frame Generation

Activating frame generation in 2X mode, FPS went up from 41 to 72. In 3X mode, the average increased to 102 FPS, and with 4X mode it reached 130 FPS. In theory this increase is positive, but ghosting was still observed, although to a lesser extent than in Alan Wake 2.

To rule out that the problem was limited to Alan Wake 2, I changed the upscaling preset from Quality to DLAA, which reduced base FPS. With this change, artifacts were immediately noticeable on NPCs, confirming that the problem mainly manifests when base FPS is below 60.

Consumption – Not Much of an Increase (NVIDIA GeForce RTX 5080 Founders Edition)

The data used to measure consumption comes from the NVIDIA sensor, which only records the GPU's energy draw. This means it does not include total system consumption or the losses upstream of the GPU power stages, nor what is drawn through the PCIe slot. Below is a summary table of average and maximum consumption, focusing on 4K UHD (2160p) with rasterization, where the GPU is usually at its maximum gaming load.

Consumption in games
| Game (2160p) | RTX 5080 avg (W) | RTX 5080 max (W) | RTX 4080 Super avg (W) | RTX 4080 Super max (W) |
|---|---|---|---|---|
| A Plague Tale: Requiem | 335 | 342 | 306 | 309 |
| Alan Wake 2 | 324 | 344 | 303 | 313 |
| Baldur's Gate 3 | 304 | 329 | 286 | 299 |
| Black Myth: Wukong | 321 | 327 | 296 | 305 |
| Borderlands 3 | 333 | 341 | 289 | 298 |
| F1 24 | 317 | 326 | 298 | 305 |
| God of War | 335 | 341 | 291 | 295 |
| Marvel's Spider-Man Remastered | 276 | 297 | 268 | 288 |
| Shadow of the Tomb Raider | 317 | 329 | 286 | 296 |
| Shadow of War | 294 | 327 | 278 | 304 |
| Star Wars: Jedi Survivor | 327 | 338 | 301 | 310 |
| Strange Brigade | 332 | 350 | 280 | 301 |
| Warhammer 40,000: Space Marine 2 | 304 | 315 | 278 | 287 |

Averaging the average consumption across the 13 games yields a 9.55% increase over the GeForce RTX 4080 Super. This increase is below the TGP growth between the RTX 5080 and RTX 4080 Super, which is 12.5% per the official specification. This suggests that although energy consumption is higher, the overall efficiency of the RTX 5080 is slightly better than expected.
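The 9.55% figure can be reproduced directly from the average-consumption columns of the table above:

```python
# Average in-game consumption (W) for the 13 games, taken from the table above.
rtx_5080 = [335, 324, 304, 321, 333, 317, 335, 276, 317, 294, 327, 332, 304]
rtx_4080s = [306, 303, 286, 296, 289, 298, 291, 268, 286, 278, 301, 280, 278]

avg_5080 = sum(rtx_5080) / len(rtx_5080)     # ~316.8 W
avg_4080s = sum(rtx_4080s) / len(rtx_4080s)  # ~289.2 W
increase_pct = (avg_5080 / avg_4080s - 1) * 100

print(round(increase_pct, 2))  # 9.55
```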

Relative performance – NVIDIA GeForce RTX 5080 vs RTX 4080 Super

When analyzing the relative performance of the new NVIDIA GeForce RTX 5080, taking the GeForce RTX 4080 Super as the 100% reference, we observe an improvement of 17.87% in pure rasterization. While this increase is not mediocre, it isn't impressive either, since a larger generational improvement (25%+?) is generally expected. For those who doubt this margin, just review the jump between the RTX 3080 and the RTX 4080 Super, where the gain was notably larger.

This moderate growth is not surprising either, given that NVIDIA has chosen to keep the same manufacturing process for this generation.

Furthermore, the GeForce RTX 4090 still holds a moderate advantage over the RTX 5080: 15.95% higher performance in rasterization across 14 games at 2160p. Fortunately, the price of the RTX 5080 has not increased, preserving the reference price of its predecessor, the RTX 4080 Super ($999).

I do not include 1440p tables, since during benchmarking we noticed abnormal behavior in the RTX 5080's performance at this resolution. A clear example is Space Marine 2, where the results in our test scene were lower than expected, with no apparent reason to explain the lack of scaling. Maybe I'll post the post-embargo data (I need some sleep, mate!).

This incident has already been reported to NVIDIA LATAM, with the expectation that they can investigate and escalate the case to determine whether it is a driver problem or some other technical anomaly with this title.

NVIDIA GeForce RTX 5080 vs. GeForce RTX 4080 Super – Game-by-Game Analysis (%) at 2160p

This section was written after the embargo lifted and after a well-deserved rest. During that time, I took the trouble to re-benchmark the GeForce RTX 4080 Super (I believe I used a previous driver, hotfix 566.45, for the original measurement) and the GeForce RTX 5080, both tested again with the end-user driver (572.16). I did this to confirm that I had not made any mistakes when benchmarking, collecting the data, or entering it into the spreadsheet, including checking the percentage formula used.

In summary, the percentage change between the original measurements and the new data is practically zero, that is, it is within the margin of error (less than 1%).

In simple terms, I was not wrong in the original data, and I have updated the figures (at 2160p) for both the GeForce RTX 5080 and the GeForce RTX 4080 Super.

The average rasterization improvement at 2160p, going from the RTX 4080 Super to the RTX 5080, is 17.49% across 14 games.

RTX 4080 Super vs RTX 5080

Deeper Analysis – Rasterization Improvements, Game by Game (2160p)
| Rasterization (2160p) | AVG FPS (RTX 5080) | AVG FPS (RTX 4080S) | % difference |
|---|---|---|---|
| A Plague Tale: Requiem | 72 | 61 | 18.03% |
| Alan Wake 2 | 53 | 47 | 12.77% |
| Baldur's Gate 3 | 120 | 109 | 10.09% |
| Black Myth: Wukong | 37 | 32 | 15.63% |
| Borderlands 3 | 117 | 98 | 19.39% |
| CS2 | 206 | 168 | 22.62% |
| F1 24 | 187 | 164 | 14.02% |
| God of War | 122 | 102 | 19.61% |
| Marvel's Spider-Man Remastered | 142 | 130 | 9.23% |
| Shadow of the Tomb Raider | 146 | 129 | 13.18% |
| Shadow of War | 143 | 123 | 16.26% |
| Star Wars: Jedi Survivor | 73 | 64 | 14.06% |
| Strange Brigade DX12 | 253 | 200 | 26.50% |
| Warhammer 40,000: Space Marine 2 | 96 | 77 | 24.68% |

First, I extracted the relevant information: the average FPS of each game for the GeForce RTX 5080 and GeForce RTX 4080 Super. Using this data, I calculated the percentage difference for each title and then reorganized the information from lowest to highest to identify which games have shown the least impact from the architecture change (with the current driver) and which have seen the greatest performance improvements.
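A minimal sketch of that calculation, using three of the games from the table above:

```python
# AVG FPS pairs (RTX 5080, RTX 4080S) for a few games from the 2160p table.
avg_fps = {
    "Marvel's Spider-Man Remastered": (142, 130),
    "Alan Wake 2": (53, 47),
    "Strange Brigade DX12": (253, 200),
}

# Percentage difference of the 5080 over the 4080S, per title.
diffs = {
    game: round((new - old) / old * 100, 2)
    for game, (new, old) in avg_fps.items()
}

# Reorder from smallest to largest improvement.
ranked = sorted(diffs.items(), key=lambda item: item[1])
```

Applied to all 14 games, this is what produces the sorted list below.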

| RTX 5080 vs RTX 4080S (2160p) | % difference |
|---|---|
| Marvel's Spider-Man Remastered | 9.23% |
| Baldur's Gate 3 | 10.09% |
| Alan Wake 2 | 12.77% |
| Shadow of the Tomb Raider | 13.18% |
| F1 24 | 14.02% |
| Star Wars: Jedi Survivor | 14.06% |
| Black Myth: Wukong | 15.63% |
| Shadow of War | 16.26% |
| A Plague Tale: Requiem | 18.03% |
| Borderlands 3 | 19.39% |
| God of War | 19.61% |
| CS2 | 22.62% |
| Warhammer 40,000: Space Marine 2 | 24.68% |
| Strange Brigade DX12 | 26.50% |

For example, in titles like Marvel's Spider-Man Remastered the improvement was modest, while a less popular game, Strange Brigade (using DX12 and Async Compute at 2160p), showed a bigger jump. I use Strange Brigade as a "sandbox" to analyze generational changes in GPU compute, since it scales quite well. Other games, such as CS2, benefited noticeably at the maximum graphics preset, while in Alan Wake 2 the changes were less pronounced.

Finally, instead of using an average to summarize these percentages, I decided to use the median.

The median is simply the value found in the middle when you sort all the data from smallest to largest; that is, half of the values fall below it and the other half above it.

Why does this matter for a technology outlet? Because, unlike the average, the median is not affected by extreme or atypical values. When we analyze percentages or performance figures and there are very high or very low numbers that could distort the average, the median gives us a more accurate picture of typical performance.

The median for this dataset is 15.94%, meaning the typical generational improvement in pure rasterization is around that percentage.
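The figure can be verified with Python's statistics module over the 14 percentages listed above:

```python
import statistics

# Per-game improvements (%) from the sorted table above.
gains = [9.23, 10.09, 12.77, 13.18, 14.02, 14.06, 15.63,
         16.26, 18.03, 19.39, 19.61, 22.62, 24.68, 26.50]

# With 14 values, the median is the mean of the 7th and 8th sorted values.
median_gain = statistics.median(gains)  # (15.63 + 16.26) / 2 = 15.945
```

Note that the Strange Brigade outlier (26.50%) barely moves the median, which is exactly the robustness argued for above.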

Now it's time to update the cost per FPS, as I observed a small change in the total frames per second for both the GeForce RTX 5080 and the GeForce RTX 4080 Super.

Cost per FPS – 2160p – NVIDIA GeForce RTX 5080 Founders Edition

Comparing cost per FPS between the original RTX 4080 and the new GeForce RTX 5080 would not make sense except as a historical analysis, since the 4080 reached EOL (End of Life) and was replaced by the RTX 4080 Super. The most relevant comparison is with the latter, since both share the same reference price.

While I have pointed out the limitations of Multi Frame Generation observed when testing the GeForce RTX 5080, and considering that the generational performance gap fell below normal expectations, cost per FPS is the first positive point of this release. This aspect is crucial for the end consumer, especially since the RTX 5080 offers price/performance comparable to the GeForce RTX 4070 Super at 2160p.

Remember that NVIDIA's RTX 5080 offers a better experience at 2160p.

Cost per frame (2160p)
xanxogaming
GeForce RTX 4070 Super ($599) $7.88
GeForce RTX 5080 ($999) $7.88
GeForce RTX 4060 Dual OC ($299) $8.03
GeForce RTX 4070 ($549) $8.30
GeForce RTX 4060 Ti ($399) $8.52
GeForce RTX 4070 Ti Super TUF ($849) $9.21
GeForce RTX 4080 Super ($999) $9.29
GeForce RTX 3080 ($699) $9.40
GeForce RTX 5090 ($1999) $10.23
GeForce RTX 4090 ($1599) $10.88
GeForce RTX 3080 Ti ($1199) $14.69

Final Analysis – A Better Launch Than the GeForce RTX 5090 (Updated)

Make no mistake: the GeForce RTX 5090 is still the most powerful card on the market (and also the one that demands the most power). However, my main criticism lies in its price increase and limited availability, which make it an object of desire for many enthusiasts yet inaccessible to the majority, especially given its cost, almost double that of the GeForce RTX 5080. Later on, I'll go into more detail about what type of user might justify purchasing an RTX 5090.

Now, focusing on the GeForce RTX 5080, I have good news: its price has not increased, the increase in consumption is moderate (less than 10%), and it delivers an average improvement of 17.49% over its direct predecessor, the GeForce RTX 4080 Super. Although other outlets have reported lower results (even below 10%), my tests at 2160p position the RTX 5080 as an attractive option in terms of price/performance for those who want to play at this resolution without breaking the bank, as long as the price stays close to the reference price (excluding tariffs, freight, and taxes).

Cost per frame – 2160p
(Updated – XanxoGaming)
GeForce RTX 4070 Super ($599) $7.88
GeForce RTX 5080 ($999) $7.92
GeForce RTX 4060 Dual OC ($299) $8.03
GeForce RTX 4070 ($549) $8.30
GeForce RTX 4060 Ti ($399) $8.52
GeForce RTX 4070 Ti Super TUF ($849) $9.21
GeForce RTX 4080 Super ($999) $9.30
GeForce RTX 3080 ($699) $9.40
GeForce RTX 5090 ($1999) $10.23
GeForce RTX 4090 ($1599) $10.88
GeForce RTX 3080 Ti ($1199) $14.69

The GeForce RTX 4070 Super offers a better price/performance ratio, but its experience at 2160p is far from optimal (which is why it is recommended for 1440p). The RTX 5080's value is undoubtedly the biggest draw for a user considering a first purchase in this price range, which makes this launch, in general terms, superior to that of the RTX 5090.

The RTX 5090 achieved 33% more performance than the RTX 4090, but that came with a 25% price increase and truly enormous power consumption. The generational leap in performance is mediocre in real gaming tests, and today I was also able to see one of the main weaknesses of the Frame Generation and Multi Frame Generation technologies.

If the base FPS is below 60, enabling these technologies will result in artifacts and ghosting on characters or NPCs. On the RTX 5090, this issue was less noticeable, simply because the card generates more base frames, which mitigates the visual impact of such errors.

I'm not sure whether NVIDIA will be able to fix, or at least mitigate, this Achilles' heel of its frame generation technology, but I hope it receives enough feedback from various sources to work on future improvements in its more advanced models.

Also, I should point out that in some tests on my bench, the new Blackwell series cards (at least the RTX 5090 and RTX 5080) are somewhat bottlenecked at 1440p. The results at this resolution fall below what might be expected, making these cards hard to recommend for that use at the moment. It may be a driver issue; I'm not entirely sure. I hope that future cards, like the GeForce RTX 5070 Ti and 5070, show better optimization for 1440p (and also 1080p) and that, if there is a driver issue with the 5080/5090 at these resolutions, it is fixed as soon as possible (which will require re-benchmarking both cards 😊).

XanxoGaming Gold


Who would you recommend the GeForce RTX 5090 and RTX 5080 for?

  • GeForce RTX 5090:

    • Professionals who rely on their GPU for intensive workloads and for whom rendering time is crucial.
    • Gamers with no budget restrictions looking for the best on the market, regardless of cost.
    • Users with 4K 240Hz UHD monitors, capable of taking full advantage of the power of this GPU.
  • GeForce RTX 5080:

    • The best price/performance option for gamers who want to enter the world of 2160p without spending a fortune (remember that the RTX 5090 costs twice as much but only offers 54.90% more performance), as long as the market price of the RTX 5080 is reasonable.
    • A direct replacement for the RTX 4080 Super, which this model renders obsolete if both are priced similarly. Those who bought one recently probably made a bad decision, as it was no secret that the RTX 5080 would outperform it (+17.41% average, with a median of 15.94%).
    • Users with 4K 144Hz UHD monitors, who can truly take advantage of the power of this GPU.
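The value trade-off quoted for the RTX 5080 can be checked with simple arithmetic: doubling the price for roughly 54.9% more performance raises the cost per frame by about 29%, consistent with the review's cost-per-frame figures ($10.23 vs $7.92). A quick sketch:

```python
# Ratios taken from the figures quoted in this review.
price_ratio = 1999 / 999   # RTX 5090 vs RTX 5080 reference price (~2x)
perf_ratio = 1.5490        # RTX 5090 offers +54.90% performance

# Cost per frame scales as price / performance, so the relative value is:
value_penalty = price_ratio / perf_ratio
print(f"RTX 5090 costs about {(value_penalty - 1) * 100:.0f}% more per frame")
```

In other words, the price grows faster than the performance, which is exactly why the 5090 sits lower in the cost-per-frame ranking despite being the fastest card.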

Finally, Multi Frame Generation requires further analysis and discussion with NVIDIA, especially around the weaknesses I have identified in this technology. While on paper and in measurements it shows good results, the visual experience suffers when base FPS are low, something that needs to be addressed in future optimizations.

If you feel that this generation has not met expectations, that it should have performed better (I include myself here), or that it has left a bad taste in your mouth, this is the result of the lack of competition in the high-end video card segment. Remember that the GeForce RTX 50 series uses the same manufacturing process as the GeForce RTX 40 series. When a single company dominates the market, generational improvements tend to be more conservative and, in the worst case, technological innovation stagnates. Without pressure from AMD or other competitors in the high-end segment, the evolution in performance, efficiency, and technology has not been as aggressive as in previous generations. As for AI in gaming, let's wait and see how Multi Frame Generation evolves, but I will not treat it as the main reason to buy, rather as a differentiating bonus over the previous generation (for now).

NVIDIA GeForce RTX 5080 Founders Edition - Review
  • Performance
  • Temperatures
  • Noise
  • Consumption
  • Price
  • Innovation
Overall
3.8

Summary

The NVIDIA GeForce RTX 5080 Founders Edition improves rasterization performance by 17.87% over the RTX 4080 Super (2160p) while maintaining the $999 price. With a 9.55% increase in power consumption and a 360W TGP, the generational advance is modest. Still, it establishes itself as a solid choice for 2160p gaming, offering an attractive price/performance ratio for those looking to upgrade older hardware, provided the market price is reasonable.

Pros

-There was no increase in the reference price for the RTX 5080 this generation.
-The increase in consumption was slight (less than 10%).
-DLSS 4 Multi Frame Generation looks promising in our tests, although we found scenarios where it loses its charm.
-Better price/performance than the GeForce RTX 4080 Super, if the price stays in line with the reference price.
-Now comes with DisplayPort 2.1b with UHBR20 and HDMI 2.1b.

Cons

-Performance at 1440p is below expectations.
-Few games use DLSS 4 MFG natively at launch; several require a tweak via the NVIDIA App.
-Less generational advancement than expected, and performance below a GeForce RTX 4090.