Nvidia RTX 3070 review: AMD’s stopwatch just started ticking a lot louder
- Sam Machkovech
Talking about the RTX 3070, Nvidia’s latest $499 GPU launching Thursday, October 29, is tricky in terms of the timing of today’s review embargo. As of right now, the RTX 3070 is the finest GPU in this price sector by a large margin. In 24 hours, that could change—perhaps drastically.
Ahead of AMD’s big October 28 event, dedicated to its RDNA 2 GPU line, Nvidia gave us an RTX 3070 Founders Edition to test however we saw fit. This is the GPU Nvidia absolutely needed to reveal before AMD shows up in (expectedly) the same price and power range.
Inside of an Nvidia-only bubble, this new GPU is a sensation. Pretty much every major RTX 2000-series card overshot with proprietary promises instead of offering brute force worth its inflated costs. Yet without AMD nipping at its heels, Nvidia’s annoying strategy seemed to be the right call: the company established the RTX series’ exclusive, bonus processing cores as a major industry option without opposition, then got to wait a full year before competing with significant power jumps and delectable price cuts.
Last month’s RTX 3080 saw that strategy bear incredible fruit—even if ordering that $699 GPU is still seemingly impossible. But what happens when Nvidia scales down the Ampere promise to a $499 product that more people can afford? And how will that compare to whatever AMD likely has to offer in the same range?
Future-proofing around the 1440p threshold
Nvidia GeForce RTX 3070
We can only answer some of those questions today. (Until Nvidia proves otherwise, we assume that availability will continue to be a massive asterisk for this and all other RTX 3000-series cards.) In good news, at least, the RTX 3070 gets off to a roaring start by rendering its 2019 sibling, the RTX 2070 Super, moot. Both debuted at $499, but the newer option typically approaches, and occasionally bests, the RTX 2080 Ti (whose $1,199 MSRP in 2018 sure feels like a kick in the ray-traced teeth nowadays).
But the RTX 3070’s price-to-performance ratio comes with one significant caveat: a not-so-future-proofed VRAM capacity of 8GB, shipping in the not-as-blistering category of GDDR6. That matches the best RTX 2000-series cards but is surpassed by the higher-speed GDDR6X VRAM in pricier RTX 3000-series GPUs.
| | RTX 3080 FE | RTX 3070 FE | RTX 2080 Ti FE | RTX 2080 Super | RTX 2070 Super | GTX 1080 Ti |
| --- | --- | --- | --- | --- | --- | --- |
| Memory bus width | 320-bit | 256-bit | 352-bit | 256-bit | 256-bit | 352-bit |
| Memory size | 10GB GDDR6X | 8GB GDDR6 | 11GB GDDR6 | 8GB GDDR6 | 8GB GDDR6 | 11GB GDDR5X |
| MSRP at launch | $699 | $499 | $1,199 | $699 | $499 | $699 |
The thing is, “future-proofed” for PC gaming is relative. What’s going to matter in 3D processing in the near future, both for the games you love and the systems you run them on? If you’re set on having the crispest native 4K rendering for the foreseeable future, the RTX 3070 doesn’t leapfrog the 2080 Ti, particularly with a VRAM allotment that could stress any games that ship with 4K-specific texture packs.
But if you’re favoring a lower-resolution panel, perhaps 1440p or a widescreen 1440p variant—and Steam’s worldwide stats make that a safe assumption—then your version of future-proofing revolves more around processing power and ray-tracing potential. In those respects, the RTX 3070 currently looks like the tippy-top option for a “top-of-the-line” 1440p system… with the bonus of Nvidia’s Deep Learning Super-Sampling (DLSS) for surprisingly competitive fidelity in 4K resolutions, should gamers upgrade their monitor between now and the next GPU generation. (Until AMD shows us otherwise, Nvidia’s proprietary DLSS 2.0 pipeline remains the industry’s leading upscaling option, and game studios have started embracing it in droves.)
In other words, if you’re more interested in high frame rates on resolutions less than 4K, and you want GPU overkill for such a CPU-bound gaming scenario, the RTX 3070 is this year’s best breathing-room option for the price… at least, unless AMD announces an even more compelling proposition on October 28.
Strong, but not the 2080 Ti topper we expected
The above collection of game benchmarks mostly mirrors the ones I used for my RTX 3080 review, and once again, these tests err on the side of graphical overkill. You may have zero interest in using an RTX 3070 with 4K resolutions or maximum graphical slider values, and that’s understandable. Instead, these tests are designed to stress the GPU as much as possible to present the clearest comparisons between the listed cards. Look less at the FPS values and more at the relative percentages of difference. (The exception comes from “DLSS” tests, which I’ll get to.)
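That “relative percentages, not raw FPS” framing boils down to a simple calculation. A minimal sketch, using hypothetical placeholder card names and FPS values rather than any numbers measured for this review:

```python
# Compare GPUs by relative performance deltas rather than raw frame rates.
# The card names and FPS values below are illustrative placeholders,
# not measured results from this review.
benchmarks = {
    "Card A": 60.0,  # average FPS in some shared benchmark
    "Card B": 45.0,
}

baseline = benchmarks["Card B"]
for card, fps in benchmarks.items():
    delta = (fps / baseline - 1) * 100  # percent faster/slower than the baseline
    print(f"{card}: {fps:.1f} fps ({delta:+.1f}% vs. Card B)")
```

The point of normalizing this way is that the percentage gap holds up even when your own rig, settings, or resolution shift every absolute FPS number up or down.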
Even though this year’s $499 RTX 3070 clearly exceeds the power of last year’s $699 RTX 2080 Super, I tested it against last year’s $499 RTX 2070 Super, as well, to show exactly what a difference a year makes in terms of price-to-power proposition. The percentage differential between the 70-suffix GPUs varies based on what kind of software you’re testing, but the most massive surge in performance can be found when ray-tracing effects are toggled at pure 4K resolution. Wolfenstein Youngblood, in particular, sees the 3070 double the 2070 Super’s frame rates in its ray-tracing benchmarks.
While Nvidia has made benchmarking claims that put the RTX 3070 ahead of the RTX 2080 Ti, that doesn’t necessarily bear out in my testing. This is largely because the RTX 2080 Ti Founders Edition shipped in 2018 with a remarkable capacity for safe overclocking. The 3070 FE, like its 2070 Super sibling, has little such headroom for either its core or memory clocks, as determined by the test-at-every-step automation of programs such as EVGA Precision X1. The 3070 performed nearly identically with or without a scant Precision X1 overclock applied, so I’ve left its OC tests out of this roundup. Remember: as Nvidia’s Founders Editions go, generally, so go other vendors’ variants. So we’re not sure other vendors will squeeze much more out of the same model.
Thus, the 2080 Ti still pulls ahead in most, but not all, of the above gaming benchmarks, whether or not ray tracing is enabled. When comparing both cards’ specs, this difference checks out, since the newer 3070 cuts back on certain components for efficiency’s sake (not to mention that dip in VRAM capacity): 184 “third-generation” Tensor cores in the 3070, versus 544 older Tensor cores in the 2080 Ti, and 46 “second-generation” RT cores in the 3070, versus 68 older RT cores in the 2080 Ti. The bigger RTX 3000-series cards beat the 2080 Ti in both the quantity and the generation of those cores, so they get the clearer wins; in the 3070, that newer-generation-but-fewer-cores trade finally fails to win out in certain testing scenarios. Nothing tragic, mind you, but worth noting in case you’d hoped for across-the-board wins against the 2080 Ti.
Size, ports, noise
The RTX 3070’s efficiency figures into its size reduction, down to 9.5″ in length (242mm) from the RTX 2070 Super’s 10.5″ (but not quite as small as the original RTX 2070’s 9″ length). Like other 3000-series FEs, the RTX 3070 utilizes what Nvidia calls a “flow-through” design that pulls cool air from below and pushes hot air out in two directions: through its “blower,” out the same side as its DisplayPort and HDMI connections, and upward in the same direction as your motherboard’s other components. Basically, the size reduction may help you cram an RTX 3070 into a smaller case, but you’ll still want to guarantee considerable airflow.
Speaking of connections, they’re identical to what you’ll find on the RTX 3080: three for DisplayPort, one for HDMI 2.1. (If you missed it, Nvidia quietly dumped the VR-friendly USB Type-C “VirtualLink” port found in most RTX 2000-series cards from this year’s GPU generation, perhaps owing to how few VR headset manufacturers bothered supporting it.) Additionally, the 3070 continues the RTX 3000-series trend of employing a smaller 12-pin connector for power, though it ships with an adapter for today’s common 8-pin PSU standard. In the 3070’s case, it only requires one 8-pin connection to a PSU, not two (or a mix of 8-pin and 6-pin), even though it maxes out at a 220W power draw. (The 2070 Super requires one 8-pin and one 6-pin connector with a power draw maximum of 215W.)
And when Nvidia brags that the RTX 3070 runs quieter, the company means it. While I lack solid decibel-measuring equipment to tell you exactly how much quieter this card runs than its competition, it’s safe to say that its full-load mix of fan noise and operational hum probably won’t be the loudest component in your system. And with my ear directly up to it, its noticeable noise certainly wasn’t louder than, say, a PlayStation 4 Pro. (Nvidia has described its noise level as “up to 16dBA quieter” than the original RTX 2070 Founders Edition.)
Thoughts on 1440p, ray tracing, and DLSS
The above benchmarks make clear that 4K/60fps performance in newer PC games, with all settings maxed out, isn’t a given on the RTX 3070. But it’s important to note that many of these tests include overkill settings for things like anti-aliasing, shadow resolution, and even “maximum” ray-tracing effects, all meant to guarantee maximum GPU impact for the sake of accurate comparisons between the GPUs. In the real world, you can safely drop most of these from “ultra,” “extreme,” or “insane” while still exceeding most console ports’ settings and remaining barely discernible from those over-the-top maximums, and the results often land darned close to 4K/60.
Scale down to a resolution like 1440p and you’ll hope for frame rates that take advantage of monitors rated for 144Hz and above. One good indicator of the RTX 3070’s capabilities is Borderlands 3, a particularly demanding (and arguably inefficient) game that doesn’t leverage Nvidia-specific GPU perks while packing its scenes with dynamic lighting, alpha particle effects, cel-shaded detail, and massive draw distances. When put through its benchmark wringer at 1440p on my testing rig (i7-8700K OC’ed to 4.7GHz, 32GB DDR4-3000 RAM), BL3 averages 99.5fps at the “high” settings preset or 88.0fps at “ultra.” Not 144fps, mind you, but I think of BL3 as a good “floor” for performance, easily outdone by older and more efficient 3D games.
Without ray tracing turned on in 3D games from the past few years, RTX 3070’s frame rates have easily surpassed 80fps with tons of bells and whistles enabled at 1440p resolution, and they’ve easily gone higher with every drop in settings from there. But what happens on the RTX 3070 with ray tracing turned on?
As of press time, there’s an interesting combined trend for just about everything I’ve tested with some version of DirectX Ray Tracing (DXR): the harmonious pairing of Nvidia’s latest “DLSS 2.0” standard. Should you run GPU-pounders like last year’s Control or this month’s Watch Dogs Legion at near-max settings and 1440p resolution, plus ray tracing enabled, you can expect frame rates at roughly 50-55fps on the RTX 3070. But a funny thing has happened with DLSS 2.0: much improved support for DLSS upscaling from 906p to 1440p. Last year, I would’ve told you that you were crazy to upscale from anything lower than 1440p, in terms of pixel smudginess. But now? Just take a look:
When testing at 1440p, Control has seen its DLSS 2.0 translation of tiny details, particularly text on posters, improve compared to native rendering plus temporal anti-aliasing (TAA). Meanwhile, WDL’s benchmark is keen on adding rain to its mix, which is clever on Ubisoft’s part; this is the exact kind of detail that DLSS has struggled to render in games like Death Stranding, yet in this newer game, rain materializes almost identically when its 906p signal is upscaled with DLSS’s machine-learning wizardry.
With both of these games’ DLSS modes toggled at this 906p upscale, frame rates jump to the 78-84fps range… and that’s with ray tracing enabled (“high” RT settings in Control, “medium” RT settings in WDL).
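To put that 906p figure in context, here’s a quick back-of-envelope sketch of the pixel savings involved. The 906p and 1440p resolutions come from the tests above; the 16:9 width arithmetic is my own illustrative assumption, not Nvidia’s exact internal render target:

```python
# Rough pixel-count arithmetic for DLSS upscaling, using the resolutions
# discussed above: a 906p internal render reconstructed to a 1440p output.
# Assumes 16:9 frames; the computed widths are illustrative approximations.

def pixels(height, aspect=16 / 9):
    """Approximate pixel count for a frame of the given height and aspect ratio."""
    return int(round(height * aspect)) * height

internal = pixels(906)   # pixels the GPU actually shades per frame
output = pixels(1440)    # pixels DLSS reconstructs the frame to

print(f"internal: {internal:,} px, output: {output:,} px")
print(f"the GPU shades roughly {internal / output:.0%} of the output pixels")
```

Shading only about 40 percent of the final frame’s pixels is where those frame-rate jumps come from; the tensor-core reconstruction fills in the rest.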
A masterful game of GPU dominoes
Nvidia really couldn’t have set these dominoes up any better. Its RTX line of GPUs has separate components to handle the above fancy features—dedicated ray-tracing cores and dedicated “tensor” cores to handle ML-assisted computation. The way its ray-tracing cores work lines up neatly with industrywide standards like DXR, which means it’s a drop in the programming budget to implement those in ways that will work on competitors’ GPUs and on brand-new gaming consoles. And the tensor cores’ upscaling methods line up neatly with TAA, a particularly common anti-aliasing standard that Nvidia’s DLSS effectively piggybacks off. As of DLSS 2.0, the model does not require game-specific coding to work (though developers still have to partner with Nvidia to implement it).
For Nvidia gamers, then, the ray-tracing proposition going forward is clear: if you want to turn it on, you’ll almost certainly have the simultaneous option of toggling the efficiency of Nvidia’s dedicated RT cores and the efficiency of their DLSS implementation. In terms of pixel fidelity, DLSS 2.0 has pretty much proven to be a wash, with games generally enjoying a mix of sharper and blurrier elements depending on the scene (neither egregious, with the notable exception of Death Stranding’s peskiest, super-detailed moments like cut scene details and screen-filling rain). And that’s a wash visually, not computationally; the proof is in the frame rate pudding.
We still don’t know if AMD can possibly compete when its future cards have their ray-tracing modes turned on. Maybe we’re in for a $500-ish scenario where AMD can beat Nvidia’s rendering performance in a game like Borderlands 3 at a better price-to-performance ratio, only to lose out on the same performance gains with ray tracing turned on. Having tested Watch Dogs Legion over the past week, I can safely say its RT perks—as slathered over a massive, open-world city full of reflective surfaces and other handsome light-bounce effects—are difficult to disable now that I have a midrange GPU that can reasonably handle said effects at “1440p.”
Meaning, I could turn them off… but I no longer want to. It’s hard to go back to plain ol’ rasterization after seeing so many light sources realistically emerge no matter what time of day or scenario I’m in. As I pilot a drone past a shiny office building, or drive a shiny, future-London car past a beautiful landmark, I see objects in WDL reflect or bounce light in ways that acknowledge objects or light sources that aren’t otherwise on the screen. This is what ray tracing does: it accounts for every nearby light bounce, even from sources off screen, rendering the whole world whether you can see it directly or not.
Plus, if you have dreams of one day toggling ray-tracing power at 4K with this card, WDL on an RTX 3070 at “high” settings gets up to a 58fps average in 4K resolution with RT at “medium,” so long as I use DLSS to upscale to 4K from… wait for it… 1440p native. Those upscaling results are convincingly crisp, as well.
Thus, as I said in the beginning, your definition of a “future-proofed” GPU will likely drive your interest in what the RTX 3070 has to offer for $499. We’re about to see even more interesting ray tracing in games—including at least one we’re not allowed to talk about yet. You’ll have to take our word for it, in terms of how exciting it is to live inside of some games’ ray-traced worlds.
If that’s not your bag, due to visual preferences or budgetary reasons, I get it. But it remains to be seen whether a cheaper RTX card can deliver the same future-proofing in the 1080p range or whether AMD will arrive with a perfect amount of budget-minded power and ray tracing—or even a butt-kicker of a card that skips ray tracing altogether in favor of powerful, traditional 3D rendering for a damned good price. For now, in the 1440p range, Nvidia has the clear lead… for at least 24 hours.
Listing image by Sam Machkovech