NVIDIA GeForce GTX 680 Graphics Performance
Reviews - Featured Reviews: Video Cards
Written by Olin Coles   
Thursday, 22 March 2012

NVIDIA GeForce GTX 680 Kepler Video Card Performance

The world's most powerful GPU consumes less electricity and runs cooler than all competing graphics solutions.

Manufacturer: NVIDIA
Product Name: GeForce GTX 680
Suggested Retail Price: $499.99 MSRP

Full Disclosure: The product sample used in this article has been provided by NVIDIA.

Of the many platforms available for gamers to enjoy video games, there's no question that the highest quality graphics come from the PC. While game developers might not consider PC gaming as lucrative as entertainment consoles, companies like NVIDIA use desktop graphics to set the benchmark for the smaller, more compact designs that make it into notebooks, tablets, and smartphones. NVIDIA's Kepler GPU architecture is an example of this, delivering unprecedented performance while operating cooler and consuming far less power than previous flagship discrete graphics cards. In this article Benchmark Reviews tests the NVIDIA GeForce GTX 680 video card, equipped with a 28nm GK104 Kepler GPU, and compares it against the best DirectX 11 video cards available. Featuring NVIDIA's new GPU Boost technology, the GeForce GTX 680 can dynamically adjust power and clock speeds based on real-time application demands.
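
As a rough illustration of the GPU Boost behavior just described, the card steps its core clock up from the 1006 MHz base toward the 1058 MHz boost clock whenever measured board power stays under the power target. This is an assumed control loop for illustration only, not NVIDIA's actual driver logic; the step size is hypothetical.

```python
# Illustrative sketch of GPU Boost (assumed logic, not NVIDIA's algorithm).
BASE_CLOCK_MHZ = 1006   # GTX 680 base clock
BOOST_CLOCK_MHZ = 1058  # GTX 680 rated boost clock
POWER_TARGET_W = 195    # GTX 680 TDP

def boost_clock(power_draw_w, current_mhz=BASE_CLOCK_MHZ, step=13):
    """Return the next clock step given measured board power (hypothetical step size)."""
    if power_draw_w < POWER_TARGET_W and current_mhz + step <= BOOST_CLOCK_MHZ:
        return current_mhz + step  # headroom available: step the clock up
    if power_draw_w >= POWER_TARGET_W and current_mhz - step >= BASE_CLOCK_MHZ:
        return current_mhz - step  # at the power limit: step back down
    return current_mhz

print(boost_clock(170))                    # light load: steps up to 1019
print(boost_clock(200, current_mhz=1058))  # power-limited: steps down to 1045
```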

NVIDIA's GeForce GTX 680 is the first graphics card designed around their next-generation Kepler GPU architecture, which adopts key aspects of the previous Fermi architecture. Building on the 32-core Streaming Multiprocessor (SM) that Fermi used in the GeForce GTX 580, NVIDIA optimized Kepler for twice the performance per watt with an innovative 192-core streaming multiprocessor (referred to as SMX) that trades the double-speed processor clock for more processor cores. Utilizing eight SMX units, the GeForce GTX 680 Kepler GPU boasts 1536 total CUDA cores, which manage shader, texture, geometry, and compute tasks. A reengineered memory subsystem reduces the pipeline penalty of feeding this many cores, and allows the GTX 680 to reach memory data rates up to 6.0 Gbps. Combined, these GPU architecture improvements offer impressive performance gains while improving overall power efficiency, yet they represent only a small portion of the new technology in this launch.

NVIDIA-GeForce-GTX-680-Video-Card-Kit.jpg

In addition to the new Kepler GPU architecture and NVIDIA GPU Boost technology, the GeForce GTX 680 ushers in refinements to the user experience: FXAA and adaptive VSync technology result in less chop, stutter, and tearing in on-screen motion. Overclockers might see their enthusiast experiments threatened by the presence of NVIDIA GPU Boost technology, but its dynamically adjusted power and clock speed profiles can be supplemented with additional overclocking or shut off completely. Adaptive VSync, on the other hand, is a welcome addition for all users - from the gamer to the casual computer user. This new technology dynamically disables vertical sync (when enabled) whenever the frame rate drops too low to properly sustain it, thereby reducing stutter and tearing artifacts. Finally, NVIDIA is introducing TXAA, a film-style anti-aliasing technique with a mix of hardware anti-aliasing, a custom CG film-style AA resolve, and an optional temporal component for better image quality.
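
The adaptive VSync decision reduces to a simple rule, sketched below as an assumed simplification of the driver behavior: keep VSync on while the renderer can sustain the monitor's refresh rate, and drop it when frame rate falls below, trading a little tearing for the elimination of the stutter caused by falling to half refresh.

```python
# Minimal sketch of adaptive VSync (assumed logic, not NVIDIA driver code).
def vsync_enabled(fps, refresh_hz=60):
    """VSync stays on only while the renderer sustains the refresh rate."""
    return fps >= refresh_hz

for fps in (75, 60, 45):
    print(fps, "->", "VSync on" if vsync_enabled(fps) else "VSync off")
```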

NVIDIA targets the top-end enthusiast segment with their premium GeForce GTX 680 discrete graphics card, a segment occupied by only the most dedicated (and affluent) PC gamers. In order to best illustrate the GTX 680's performance, we use the most demanding PC video game titles and benchmark software available. Video frame rate performance is tested against a large collection of competing desktop graphics products, such as the AMD Radeon HD 7970 (Tahiti). Crysis Warhead compares DirectX 10 performance levels, joined by newer DirectX 11 benchmarks such as 3DMark11, Batman: Arkham City, Battlefield 3, and Unigine Heaven 3.0.

GeForce GTX-Series Product Family

Graphics Card: GeForce GTX 550 Ti / GeForce GTX 460 / GeForce GTX 560 Ti / GeForce GTX 570 / GeForce GTX 580 / GeForce GTX 590 / GeForce GTX 680
GPU Transistors: 1.17 Billion / 1.95 Billion / 1.95 Billion / 3.0 Billion / 3.0 Billion / 6.0 Billion total / 3.54 Billion
Graphics Processing Clusters: 1 / 2 / 2 / 4 / 4 / 8 total / 4
Streaming Multiprocessors: 4 / 7 / 8 / 15 / 16 / 32 total / 8
CUDA Cores: 192 / 336 / 384 / 480 / 512 / 1024 total / 1536
Texture Units: 32 / 56 / 64 / 60 / 64 / 128 total / 128
ROP Units: 24 / 24 (768MB) or 32 (1GB) / 32 / 40 / 48 / 96 total / 32
Graphics Clock (Fixed Function Units): 900 MHz / 675 MHz / 822 MHz / 732 MHz / 772 MHz / 607 MHz / 1006-1058 MHz
Processor Clock (CUDA Cores): 1800 MHz / 1350 MHz / 1644 MHz / 1464 MHz / 1544 MHz / 1215 MHz / 1006-1058 MHz
Memory Clock (Clock Rate/Data Rate): 1025/4200 MHz / 900/3600 MHz / 1001/4008 MHz / 950/3800 MHz / 1002/4016 MHz / 854/3414 MHz / 1502/6008 MHz
Total Video Memory: 1024MB GDDR5 / 768MB or 1024MB GDDR5 / 1024MB GDDR5 / 1280MB GDDR5 / 1536MB GDDR5 / 3072MB GDDR5 / 2048MB GDDR5
Memory Interface: 192-bit / 192-bit (768MB) or 256-bit (1GB) / 256-bit / 320-bit / 384-bit / 384-bit / 256-bit
Total Memory Bandwidth: 98.4 GB/s / 86.4 or 115.2 GB/s / 128.3 GB/s / 152.0 GB/s / 192.4 GB/s / 327.7 GB/s / 192.26 GB/s
Texture Filtering Rate (Bilinear): 28.8 / 37.8 / 52.6 / 43.9 / 49.4 / 77.7 / 128.8 GigaTexels/s
GPU Fabrication Process: 40 nm / 40 nm / 40 nm / 40 nm / 40 nm / 40 nm / 28 nm
Output Connections: 2x Dual-Link DVI-I + 1x Mini HDMI on the GTX 550 Ti through GTX 580; 3x Dual-Link DVI-I + 1x Mini DisplayPort on the GTX 590; 2x Dual-Link DVI-I + 1x HDMI 1.4a + 1x DisplayPort 1.2 on the GTX 680
Form Factor: Dual-Slot (all models)
Power Input: 6-Pin / 2x 6-Pin / 2x 6-Pin / 2x 6-Pin / 6-Pin + 8-Pin / 2x 8-Pin / 2x 6-Pin
Thermal Design Power (TDP): 116W / 150W (768MB) or 160W (1GB) / 170W / 219W / 244W / 365W / 195W
Recommended PSU: 400 Watts / 450 Watts / 500 Watts / 550 Watts / 600 Watts / 700 Watts / 550 Watts
GPU Thermal Threshold: 100°C / 104°C / 100°C / 97°C / 97°C / 97°C / 98°C

Chart Courtesy of Benchmark Reviews

First Look: GeForce GTX 680

From a distance, the NVIDIA GeForce GTX 680 looks a lot like the video cards of previous generations; unless you're close enough to notice the details, the flagship models all appear to be about the same overall size. At 1.5" tall (double-slot), 3.9" wide, and 10.0" long, the GeForce GTX 680 keeps a similar profile, but it's actually slightly shorter than the NVIDIA GeForce GTX 570, GeForce GTX 580, AMD Radeon HD 6970, and Radeon HD 7970 (each 10.5" long).

NVIDIA-GeForce-GTX-680-Video-Card-Top.jpg

A rear-mounted 60mm (2.4") blower fan with a slight offset takes advantage of the chamfered depression to draw cool air into the angled fan shroud, allowing more air to reach the intake whenever two or more video cards are combined in close-proximity SLI configurations. NVIDIA's add-in card partners with engineering resources of their own may fit the GTX 680 with custom cooling solutions, but most brands are likely to adopt the cool-running reference design.

NVIDIA-GeForce-GTX-680-Video-Card-Corner.jpg

Specified at a 195W Thermal Design Power, the GeForce GTX 680 operates at a lower power level than the previous four generations of GTX flagship products. Because TDP demands have been reduced, NVIDIA's GeForce GTX 680 also carries reduced power supply requirements: rather than the traditional eight-pin and six-pin PCI-E power combination, the GeForce GTX 680 requires two six-pin PCI-E connections - identical to the GeForce GTX 570. Like previous GeForce shroud designs, the GeForce GTX 680 retains an exhaust vent near the header panel.

NVIDIA-GeForce-GTX-680-Video-Card-Angle.jpg

The GTX 680 reference design offers two simultaneously functional dual-link DVI (DL-DVI) connections, a full-size HDMI 1.4a output, and a DisplayPort 1.2 connection. Add-in partners may elect to remove or further extend any of these video interfaces, but most will likely retain the original engineering. Only one of these video cards is necessary to drive triple-display NVIDIA 3D Vision Surround functionality, using both DL-DVI connections and either the HDMI or DisplayPort connection for the third output. All of these video interfaces consume exhaust-vent real estate, but this has very little impact on cooling because the 28nm Kepler GPU generates less heat than past GeForce processors, and because NVIDIA intentionally positions the heatsink far enough from these vents to equalize exhaust pressure.

NVIDIA-GeForce-GTX-680-Video-Card-IO.jpg

As with past-generation GeForce GTX series graphics cards, the GTX 680 is capable of two- and three-card SLI configurations. Because the GeForce GTX 680 is a PCI-Express 3.0-compliant device, the added bandwidth could come into demand as future games and applications make use of these resources. Most games work well at moderate settings on a single GeForce GTX 680, but multi-card SLI configurations are perfect for gamers wanting to experience high-performance video games played at their highest quality settings with all the bells and whistles enabled.

NVIDIA-GeForce-GTX-680-Video-Card-SLI.jpg

In our next section, we disassemble the GeForce GTX 680 for a more detailed look and inspect the internal component technology that NVIDIA used to build this Kepler-based video card...

NVIDIA Kepler GPU Details

Like any high-performance machine, it's what hides under the hood that counts. On the outside, NVIDIA's GeForce GTX 680 is merely another video card that shares a similar profile with almost every other product released over the past four years. But on the inside, NVIDIA's codename "Kepler" GPU architecture reshapes the internal landscape. Once the plastic shroud is removed, you'll notice a large heatsink covering half the PCB while a 60mm blower motor fan is positioned nearby. Beneath the cooling equipment is a 12-layer printed circuit board (PCB) to ensure the highest signal integrity, and to help disperse heat more effectively across the PCB.

NVIDIA-GeForce-GTX-680-Video-Card-Heatsink-Top.jpg

NVIDIA borrows embedded heat-pipe technology, made popular on high-performance CPU coolers, for the GTX 680's thermal management system. Kepler's lower TDP already reduces heat output, so the larger and more expensive hollow vapor-chamber design found on the GTX 580 became unnecessary. The thermal management system on the GeForce GTX 680 actually falls somewhere between that of the GeForce GTX 560 Ti and the GTX 570.

NVIDIA-GeForce-GTX-680-Video-Card-Shroud-Removed.jpg

With the heatsink removed, NVIDIA's GK104 28nm processor is exposed. Packed with 1536 CUDA cores across eight SMX units, the GK104 GPU is assigned a base clock frequency of 1006 MHz, which boosts to 1058 MHz when needed. Each SMX cluster comprises 192 CUDA cores, 16 texture units, and a PolyMorph Engine 2.0. The increase in cores, balanced against the reduction in core speed, helps Kepler achieve far more efficient performance per watt (2x, according to NVIDIA) compared to the Fermi architecture.
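
That trade-off can be sanity-checked with some back-of-the-envelope arithmetic - our own calculation from the spec-table figures, not an NVIDIA-published comparison. Peak single-precision throughput is cores x clock x 2, since each CUDA core can retire one fused multiply-add (two floating-point operations) per cycle:

```python
def peak_gflops(cores, clock_mhz):
    # Each CUDA core retires one fused multiply-add (2 FLOPs) per cycle
    return cores * clock_mhz * 2 / 1000.0

gtx580 = peak_gflops(512, 1544)   # Fermi: fewer cores on a double-speed shader clock
gtx680 = peak_gflops(1536, 1006)  # Kepler: three times the cores at the base clock

# Performance per watt, using the 244W (GTX 580) and 195W (GTX 680) TDPs
ratio = (gtx680 / 195) / (gtx580 / 244)

print(round(gtx580))    # 1581 GFLOPS
print(round(gtx680))    # 3090 GFLOPS
print(round(ratio, 2))  # 2.45
```

By this rough measure the GTX 680 nearly doubles raw throughput while drawing less power, which is consistent with NVIDIA's 2x performance-per-watt claim.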

NVIDIA-GeForce-GTX-680-Video-Card-PCB-Angle.jpg

The memory subsystem has been tweaked on the GeForce GTX 680, allowing the 2048MB GDDR5 video frame buffer to produce 192.26 GB/s of total memory bandwidth at an impressive 6008 MHz data rate. Four memory controllers combine eight GDDR5 ICs into a 256-bit memory interface, which moves data at record speed yet still operates more efficiently than previous designs; together with 128 texture units running at the base clock, this supports a bilinear texture fill rate of 128.8 GigaTexels per second.
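
Both headline figures follow directly from the specifications; here is a quick check of the quoted numbers using our own arithmetic:

```python
# Memory bandwidth: bus width (bytes) x effective data rate
bus_width_bits = 256
data_rate_mhz = 6008  # effective GDDR5 transfer rate
bandwidth_gbs = bus_width_bits / 8 * data_rate_mhz / 1000
print(round(bandwidth_gbs, 2))  # 192.26 GB/s

# Bilinear texture fill rate: texture units x base clock
texture_units = 128
base_clock_mhz = 1006
fill_rate_gtexels = texture_units * base_clock_mhz / 1000
print(round(fill_rate_gtexels, 1))  # 128.8 GigaTexels/s
```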

NVIDIA-GeForce-GTX-680-Video-Card-PCB-Top.jpg

Examining the printed circuit board (PCB) reveals a few new changes, namely the absence of an aluminum plate heatsink cooler and the inclusion of a Richtek Technology Corporation RT8802A multi-phase synchronous PWM advanced digital power controller with over-volting capability. We examine power consumption later on in this article, using 3DMark11 to represent real-world loads.

NVIDIA-GeForce-GTX-680-Video-Card-PCB.jpg

In the next section, we detail our test methodology and give specifications for all of the benchmarks and equipment used in our testing process...

VGA Testing Methodology

The Microsoft DirectX-11 graphics API is native to the Microsoft Windows 7 operating system, which serves as the primary O/S for our test platform. DX11 is also available as a Microsoft Update for Windows Vista, so our test results apply to both versions of the operating system. The majority of benchmark tests used in this article measure DX11 performance; however, some high-demand DX10 tests have also been included.

In each benchmark test, one 'cache run' is conducted, followed by five recorded test runs. Results are collected at each setting with the highest and lowest results discarded; the remaining three results are averaged and displayed in the performance charts on the following pages.
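
The trimmed-average scoring just described can be sketched as follows (illustrative code with made-up sample frame rates):

```python
# Drop the best and worst of five recorded runs, then average the remaining three.
def benchmark_score(runs):
    assert len(runs) == 5, "methodology calls for exactly five recorded runs"
    trimmed = sorted(runs)[1:-1]  # discard the highest and lowest results
    return sum(trimmed) / len(trimmed)

print(benchmark_score([57, 60, 59, 62, 61]))  # 60.0
```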

A combination of synthetic and video game benchmark tests have been used in this article to illustrate relative performance among graphics solutions. Our benchmark frame rate results are not intended to represent real-world graphics performance, as this experience would change based on supporting hardware and the perception of individuals playing the video game.

Intel X79 Express Test System

GPU-Z_NVIDIA_GeForce_GTX-680.gif

DirectX-10 Benchmark Applications

  • Crysis Warhead v1.1 with HOC Benchmark
    • Settings: Airfield Demo, Very High Quality, 4x AA, 16x AF

DirectX-11 Benchmark Applications

  • 3DMark11 Professional Edition by Futuremark
    • Settings: Performance Level Preset, 1280x720, 1x AA, Trilinear Filtering, Tessellation Level 5
  • Aliens vs Predator Benchmark 1.0
    • Settings: Very High Quality, 4x AA, 16x AF, SSAO, Tessellation, Advanced Shadows
  • Batman: Arkham City
    • Settings: 8x AA, 16x AF, MVSS+HBAO, High Tessellation, Extreme Detail, PhysX Disabled
  • Battlefield 3
    • Settings: Ultra Graphics Quality, FOV 90, 180-second Fraps Scene
  • Gugila GroundWiz RTS 2.1 Demo: Alpine
    • Settings: DirectX 11 Renderer, 1280x720p Resolution, Tessellation Normal, Shadow Mapping 1024, CPU 1t, 60-Second Duration
  • Lost Planet 2 Benchmark 1.0
    • Settings: Benchmark B, 4x AA, Blur Off, High Shadow Detail, High Texture, High Render, High DirectX 11 Features
  • Metro 2033 Benchmark
    • Settings: Very-High Quality, 4x AA, 16x AF, Tessellation, PhysX Disabled
  • Unigine Heaven Benchmark 3.0
    • Settings: DirectX 11, High Quality, Extreme Tessellation, 16x AF, 4x AA

PCI-Express Graphics Cards

Graphics Card: GeForce GTX 570 / Radeon HD 6970 / GeForce GTX 580 / Radeon HD 7970 / GeForce GTX 680 / Radeon HD 6990 / GeForce GTX 590
GPU Cores: 480 / 1536 / 512 / 2048 / 1536 / 3072 total / 1024 total
Core Clock (MHz): 732 / 880 / 772 / 925 / 1006 (1187 OC) / 830/880 / 608
Shader Clock (MHz): 1464 / N/A / 1544 / N/A / 1058 Boost (1240 OC) / N/A / 1215
Memory Clock (MHz): 950 / 1375 / 1002 / 1375 / 1502 (1600 OC) / 1250 / 854
Memory Amount: 1280MB GDDR5 / 2048MB GDDR5 / 1536MB GDDR5 / 3072MB GDDR5 / 2048MB GDDR5 / 4096MB GDDR5 / 3072MB GDDR5
Memory Interface: 320-bit / 256-bit / 384-bit / 384-bit / 256-bit / 256-bit / 384-bit
  • NVIDIA GeForce GTX 570 (732 MHz GPU/1464 MHz Shader/950 MHz 1280MB GDDR5 - Forceware 296.10)
  • AMD Radeon HD 6970 (880 MHz GPU/1375 MHz vRAM - AMD Catalyst 12.3)
  • NVIDIA GeForce GTX 580 (772 MHz GPU/1544 MHz Shader/1002 MHz vRAM - Forceware 296.10)
  • AMD Radeon HD 7970 (925 MHz GPU/1375 MHz vRAM - AMD Catalyst 12.3)
  • NVIDIA GeForce GTX 680 (1006 MHz GPU/1058 MHz Boost/1502 MHz vRAM - Forceware 300.99 Beta)
  • NVIDIA GeForce GTX 680 Overclocked (1187 MHz GPU/1240 MHz Boost/1600 MHz vRAM - Forceware 300.99 Beta)
  • AMD Radeon HD 6990 (830/880 MHz GPU/1250 MHz vRAM - AMD Catalyst 12.3)
  • NVIDIA GeForce GTX 590 (772 MHz GPU/1544 MHz Shader/1002 MHz vRAM - Forceware 296.10)

DX10: Crysis Warhead

Crysis Warhead is an expansion pack based on the original Crysis video game. Crysis Warhead is based in the future, where an ancient alien spacecraft has been discovered beneath the Earth on an island east of the Philippines. Crysis Warhead uses a refined version of the CryENGINE2 graphics engine. Like Crysis, Warhead uses the Microsoft Direct3D 10 (DirectX-10) API for graphics rendering.

Benchmark Reviews uses the HOC Crysis Warhead benchmark tool to test and measure graphic performance using the Airfield 1 demo scene. This short test places a high amount of stress on a graphics card because of detailed terrain and textures, but also for the test settings used. Using the DirectX-10 test with Very High Quality settings, the Airfield 1 demo scene receives 4x anti-aliasing and 16x anisotropic filtering to create maximum graphic load and separate the products according to their performance.

Using the highest quality DirectX-10 settings with 4x AA and 16x AF, only the most powerful graphics cards are expected to perform well in our Crysis Warhead benchmark tests. DirectX-11 extensions are not supported in Crysis: Warhead, and SSAO is not an available option.

  • Crysis Warhead v1.1 with HOC Benchmark
    • Settings: Airfield Demo, Very High Quality, 4x AA, 16x AF

Crysis_Warhead_Benchmark.jpg

Crysis Warhead Benchmark Test Results


DX11: 3DMark11

Futuremark's 3DMark11 is the latest addition to the 3DMark benchmark series built by Futuremark Corporation. 3DMark11 is a PC benchmark suite designed to test DirectX-11 graphics card performance without vendor preference. Although 3DMark11 includes the vendor-neutral Bullet Open Source Physics Library instead of NVIDIA PhysX for the CPU/Physics tests, Benchmark Reviews concentrates on the four graphics-only tests in 3DMark11, run with the medium-level 'Performance' preset.

The 'Performance' level setting applies 1x multi-sample anti-aliasing and trilinear texture filtering to a 1280x720p resolution. The tessellation detail, when called upon by a test, is preset to level 5, with a maximum tessellation factor of 10. The shadow map size is limited to 5 and the shadow cascade count is set to 4, while the surface shadow sample count is at the maximum value of 16. Ambient occlusion is enabled, and preset to a quality level of 5.

3DMark11-Performance-Test-Settings.png

  • Futuremark 3DMark11 Professional Edition
    • Settings: Performance Level Preset, 1280x720, 1x AA, Trilinear Filtering, Tessellation Level 5

3dMark2011_Performance_GT1-2_Benchmark.jpg

3dMark2011_Performance_GT3-4_Benchmark.jpg

3DMark11 Benchmark Test Results


DX11: Aliens vs Predator

Aliens vs. Predator is a science-fiction first-person shooter, developed by Rebellion and published by Sega for Microsoft Windows, Sony PlayStation 3, and Microsoft Xbox 360. Aliens vs. Predator utilizes Rebellion's proprietary Asura game engine, which had previously found its way into the studio's Rogue Warrior. The self-contained benchmark tool is used for our DirectX-11 tests, which push the Asura game engine to its limit.

In our benchmark tests, Aliens vs. Predator was configured to use the highest quality settings with 4x AA and 16x AF. DirectX-11 features such as Screen Space Ambient Occlusion (SSAO) and tessellation have also been included, along with advanced shadows.

  • Aliens vs Predator
    • Settings: Very High Quality, 4x AA, 16x AF, SSAO, Tessellation, Advanced Shadows

Aliens-vs-Predator_DX11_Benchmark.jpg

Aliens vs Predator Benchmark Test Results


DX11: Batman Arkham City

Batman: Arkham City is a third-person action game that continues the storyline set forth in Batman: Arkham Asylum, which launched for game consoles and PC back in 2009. Based on an updated Unreal Engine 3 game engine, Batman: Arkham City enjoys DirectX 11 graphics, using multi-threaded rendering to produce life-like tessellation effects. While the gaming-console versions of Batman: Arkham City deliver high-definition graphics at either 720p or 1080i, you'll only get the highest-quality graphics and special effects on the PC.

In an age when developers give game consoles priority over the PC, it's becoming difficult to find games that show off the stunning visual effects and lifelike quality possible from modern graphics cards. Fortunately, Batman: Arkham City is a game that does amazingly well on both platforms, while at the same time making it possible to cripple the most advanced graphics card on the planet by offering extremely demanding NVIDIA 32x CSAA and full PhysX capability. Also available to PC users (with NVIDIA graphics) is FXAA, a shader-based image filter that achieves results similar to MSAA yet requires less memory and processing power.

Batman: Arkham City offers varying levels of PhysX effects, each with its own set of hardware requirements. You can turn PhysX off, or enable the 'Normal' level, which introduces GPU-accelerated PhysX elements such as debris particles, volumetric smoke, and destructible environments into the game, while the 'High' setting adds real-time cloth and paper simulation. Particles exist everywhere in real life, and this PhysX effect is seen in many aspects of the game to add back that same sense of realism. PC gamers who are enthusiastic about graphics quality shouldn't skimp on PhysX: DirectX 11 makes it possible to enjoy many of these effects, and PhysX helps bring them to life in the game.
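
The tiers described above can be summarized as a simple cumulative mapping - an illustrative summary of the in-game settings, not data exported from the game:

```python
# Illustrative summary of Batman: Arkham City's PhysX tiers (per the text above).
PHYSX_EFFECTS = {
    "Off": [],
    "Normal": ["debris particles", "volumetric smoke", "destructible environments"],
    "High": ["debris particles", "volumetric smoke", "destructible environments",
             "real-time cloth", "paper simulation"],
}

for level, effects in PHYSX_EFFECTS.items():
    print(level, "->", ", ".join(effects) or "none")
```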

  • Batman: Arkham City
    • Settings: 8x AA, 16x AF, MVSS+HBAO, High Tessellation, Extreme Detail, PhysX Disabled

Batman-Arkham-City-Benchmark.jpg

Batman: Arkham City Benchmark Test Results


DX11: Battlefield 3

In Battlefield 3, players step into the role of elite U.S. Marines. As the first boots on the ground, players will experience heart-pounding missions across diverse locations including Paris, Tehran, and New York. As a U.S. Marine in the field, periods of tension and anticipation are punctuated by moments of complete chaos. As bullets whiz by, walls crumble, and explosions force players to the ground, the battlefield feels more alive and interactive than ever before.

The graphics engine behind Battlefield 3 is called Frostbite 2, which delivers realistic global-illumination lighting along with dynamic, destructible environments. The game uses a hardware terrain-tessellation method that allows a high number of detailed triangles to be rendered entirely on the GPU when near the terrain. This allows for a very low memory footprint, and relies on the GPU alone to expand the low-resolution data into highly realistic detail.

Using Fraps to record frame rates, our Battlefield 3 benchmark test uses a three-minute capture on the 'Secure Parking Lot' stage of Operation Swordbreaker. Relative to the online multiplayer action, these frame rate results are nearly identical to daytime maps with the same video settings.

  • Battlefield 3
    • Settings: Ultra Graphics Quality, FOV 90, 180-second Fraps Scene

Battlefield-3_Benchmark.jpg

Battlefield 3 Benchmark Test Results


DX11: Gugila GroundWiz RTS

Gugila's GroundWiz RTS application showcases real-time shader technology. In DirectX 11 tests, terrain rendering uses displacement, tessellation and higher detail ground surfaces. GroundWiz RTS is optimized for parallel computing using multiple CPUs and GPU shaders to achieve real-time performance.

Procedural displacement tessellation is supported on DirectX 11-compatible graphics cards. This feature adds a great amount of terrain detail, which is especially noticeable on rocks and mountainous terrain. The amount of tessellation is user-controllable and should be adjusted to the speed of the graphics card.

Another important aspect is procedural terrain roughness, controllable per ground layer. Terrain roughness affects lighting via normal mapping, and also layer distribution. Optimized routines in the GroundWiz RTS Terrain Map make it possible to render a large layer tree (16 layers and more) in real time. The current version is optimized for graphics cards that support Shader Model 3.0 and above.

  • Gugila GroundWiz RTS 2.1 Demo: Alpine
    • Settings: DirectX 11 Renderer, 1280x720p Resolution, Tessellation Normal, Shadow Mapping 1024, CPU 1t, 60-Second Duration

Gugila-GroundWiz-Alpine_DX11_Benchmark.jpg

Gugila GroundWiz Alpine Benchmark Test Results

EDITOR'S NOTE 22 March 2012: AMD representatives and their PR firm were both contacted nearly one week prior to publication of this article, alerting them to the failure of their Radeon HD 7900 series with the Gugila GroundWiz benchmark using DirectX 11 rendering. To date, no response has been received and no driver update has been posted. It remains unclear why the R7900 series functions with the DX9 version of this test, but fails in DX11 mode.


DX11: Lost Planet 2

Lost Planet 2 is the second installment in the saga of the planet E.D.N. III, set ten years after the story of Lost Planet: Extreme Condition. The snow has melted, and the lush jungle life of the planet has emerged with aggressive flora and fauna. With the new environment comes the addition of DirectX-11 technology to the game.

Lost Planet 2 takes advantage of DX11 features including tessellation and displacement mapping on water, level bosses, and player characters. In addition, soft body compute shaders are used on 'Boss' characters, and wave simulation is performed using DirectCompute. These cutting edge features make for an excellent benchmark for top-of-the-line consumer GPUs.

The Lost Planet 2 benchmark offers two different tests, which serve different purposes. This article uses tests conducted on benchmark B, which is designed to be a deterministic and effective benchmark tool featuring DirectX 11 elements.

  • Lost Planet 2 Benchmark 1.0
    • Settings: Benchmark B, 4x AA, Blur Off, High Shadow Detail, High Texture, High Render, High DirectX 11 Features

Lost-Planet-2_DX11_Benchmark.jpg

Lost Planet 2 Benchmark Test Results


DX11: Metro 2033

Metro 2033 is an action-oriented video game combining survival horror and first-person shooter elements. The game is based on the novel Metro 2033 by Russian author Dmitry Glukhovsky. It was developed by 4A Games in Ukraine and released in March 2010 for Microsoft Windows. Metro 2033 uses the 4A game engine, developed by 4A Games. The 4A Engine supports DirectX-9, 10, and 11, along with NVIDIA PhysX and GeForce 3D Vision.

The 4A engine is multi-threaded such that only PhysX has a dedicated thread; it uses a task model without any pre-conditioning or pre/post-synchronizing, allowing tasks to be completed in parallel. The engine can utilize a deferred shading pipeline, employs tessellation for greater performance, and offers HDR (complete with blue shift), real-time reflections, color correction, film grain and noise, and multi-core rendering.

Metro 2033 features superior volumetric fog, double PhysX precision, object blur, sub-surface scattering for skin shaders, parallax mapping on all surfaces, and greater geometric detail with less aggressive LODs. Using PhysX, the engine delivers destructible environments, cloth and water simulation, and particles that can be fully affected by environmental factors.

NVIDIA has been diligently working to promote Metro 2033, and for good reason: it's one of the most demanding PC video games we've ever tested. When the former flagship GeForce GTX 480 struggles to produce 27 FPS with DirectX-11 anti-aliasing turned down to its lowest setting, you know that only the strongest graphics processors will generate playable frame rates. All of our tests enable Advanced Depth of Field and Tessellation effects, but disable advanced PhysX options.

  • Metro 2033 Benchmark
    • Settings: Very-High Quality, 4x AA, 16x AF, Tessellation, PhysX Disabled

Metro-2033_DX11_Benchmark.jpg

Metro 2033 Benchmark Test Results


DX11: Unigine Heaven 3.0

The Unigine Heaven benchmark is a free, publicly available tool that unleashes DirectX-11 graphics capabilities on Windows 7 or updated Vista operating systems. It reveals the enchanting magic of floating islands with a tiny village hidden in the cloudy skies, and its interactive mode puts the experience of exploring this intricate world within reach. Through its advanced renderer, Unigine was one of the first to showcase art assets with tessellation, bringing compelling visual finesse, utilizing the technology to its full extent, and exhibiting the possibilities of enriched 3D gaming.

The distinguishing feature of the Unigine Heaven benchmark is hardware tessellation: a scalable technology for automatic subdivision of polygons into smaller and finer pieces, which lets developers give their games a more detailed look almost free of charge in terms of performance. Thanks to this procedure, the rendered image approaches the boundary of true-to-life visual perception: a virtual reality conjured by your hand.

Since only DX11-compliant video cards can run the Heaven benchmark properly, only those products that meet the requirement have been included.

  • Unigine Heaven Benchmark 3.0
    • Settings: DirectX 11, High Quality, Extreme Tessellation, 16x AF, 4x AA

Unigine_Heaven_DX11_Benchmark.jpg

Heaven Benchmark Test Results

Graphics Card | GPU Cores | Core Clock (MHz) | Shader Clock (MHz) | Memory Clock (MHz) | Memory Amount | Memory Interface
GeForce GTX 570 | 480 | 732 | 1464 | 950 | 1280MB GDDR5 | 320-bit
Radeon HD 6970 | 1536 | 880 | N/A | 1375 | 2048MB GDDR5 | 256-bit
GeForce GTX 580 | 512 | 772 | 1544 | 1002 | 1536MB GDDR5 | 384-bit
Radeon HD 7970 | 2048 | 925 | N/A | 1375 | 3072MB GDDR5 | 384-bit
GeForce GTX 680 | 1536 | 1006 (1187 OC) | 1058 Boost (1240 OC) | 1502 (1600 OC) | 2048MB GDDR5 | 256-bit
Radeon HD 6990 | 3072 total | 830/880 | N/A | 1250 | 4096MB GDDR5 | 256-bit
GeForce GTX 590 | 1024 total | 608 | 1215 | 854 | 3072MB GDDR5 | 384-bit

VGA Power Consumption

In this section, PCI-Express graphics cards are isolated for idle and loaded electrical power consumption. In our power consumption tests, Benchmark Reviews utilizes an 80-PLUS GOLD certified OCZ Z-Series Gold 850W PSU, model OCZZ850. This power supply unit has been tested to provide over 90% typical efficiency by Chroma System Solutions. To measure isolated video card power consumption, Benchmark Reviews uses the Kill-A-Watt EZ (model P4460) power meter made by P3 International. In this particular test, all power consumption results were verified with a second power meter for accuracy.

The power consumption statistics discussed in this section are absolute maximum values, and may not represent real-world power consumption created by video games or graphics applications.

A baseline measurement is taken without any video card installed on our test computer system, which is allowed to boot into Windows 7 and rest idle at the login screen before power consumption is recorded. Once the baseline reading has been taken, the graphics card is installed and the system is again booted into Windows and left idle at the login screen before taking the idle reading. Our final loaded power consumption reading is taken with the video card running a stress test using graphics test #4 on 3DMark11. Below is a chart with the isolated video card power consumption (system without video card subtracted from measured combined total) displayed in Watts for each specified test product:
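The isolated figures in the chart below are simply the measured system total minus the no-card baseline. Here is a minimal sketch of that subtraction; the baseline and total readings are hypothetical (the article does not publish its exact no-card measurement), chosen so the deltas match the GTX 680 entries in the chart:

```python
def isolated_power(measured_total_w, baseline_w):
    """Video-card-only draw: system-with-card minus system-without-card."""
    return measured_total_w - baseline_w

# Hypothetical wall-meter readings, in watts:
baseline   = 96    # system idle at the login screen, no discrete card
idle_total = 110   # idle at the login screen with the card installed
load_total = 339   # peak during 3DMark11 graphics test #4

idle_card = isolated_power(idle_total, baseline)   # 14 W
load_card = isolated_power(load_total, baseline)   # 243 W
```

Because both readings share the same baseline system, any fixed measurement bias in the meter largely cancels out of the reported per-card figures.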

Video Card Power Consumption by Benchmark Reviews

VGA Product Description (sorted by combined total power) | Idle Power | Loaded Power
NVIDIA GeForce GTX 480 SLI Set | 82 W | 655 W
NVIDIA GeForce GTX 590 Reference Design | 53 W | 396 W
ATI Radeon HD 4870 X2 Reference Design | 100 W | 320 W
AMD Radeon HD 6990 Reference Design | 46 W | 350 W
NVIDIA GeForce GTX 295 Reference Design | 74 W | 302 W
ASUS GeForce GTX 480 Reference Design | 39 W | 315 W
ATI Radeon HD 5970 Reference Design | 48 W | 299 W
NVIDIA GeForce GTX 690 Reference Design | 25 W | 321 W
ATI Radeon HD 4850 CrossFireX Set | 123 W | 210 W
ATI Radeon HD 4890 Reference Design | 65 W | 268 W
AMD Radeon HD 7970 Reference Design | 21 W | 311 W
NVIDIA GeForce GTX 470 Reference Design | 42 W | 278 W
NVIDIA GeForce GTX 580 Reference Design | 31 W | 246 W
NVIDIA GeForce GTX 570 Reference Design | 31 W | 241 W
ATI Radeon HD 5870 Reference Design | 25 W | 240 W
ATI Radeon HD 6970 Reference Design | 24 W | 233 W
NVIDIA GeForce GTX 465 Reference Design | 36 W | 219 W
NVIDIA GeForce GTX 680 Reference Design | 14 W | 243 W
Sapphire Radeon HD 4850 X2 11139-00-40R | 73 W | 180 W
NVIDIA GeForce 9800 GX2 Reference Design | 85 W | 186 W
NVIDIA GeForce GTX 780 Reference Design | 10 W | 275 W
NVIDIA GeForce GTX 770 Reference Design | 9 W | 256 W
NVIDIA GeForce GTX 280 Reference Design | 35 W | 225 W
NVIDIA GeForce GTX 260 (216) Reference Design | 42 W | 203 W
ATI Radeon HD 4870 Reference Design | 58 W | 166 W
NVIDIA GeForce GTX 560 Ti Reference Design | 17 W | 199 W
NVIDIA GeForce GTX 460 Reference Design | 18 W | 167 W
AMD Radeon HD 6870 Reference Design | 20 W | 162 W
NVIDIA GeForce GTX 670 Reference Design | 14 W | 167 W
ATI Radeon HD 5850 Reference Design | 24 W | 157 W
NVIDIA GeForce GTX 650 Ti BOOST Reference Design | 8 W | 164 W
AMD Radeon HD 6850 Reference Design | 20 W | 139 W
NVIDIA GeForce 8800 GT Reference Design | 31 W | 133 W
ATI Radeon HD 4770 RV740 GDDR5 Reference Design | 37 W | 120 W
ATI Radeon HD 5770 Reference Design | 16 W | 122 W
NVIDIA GeForce GTS 450 Reference Design | 22 W | 115 W
NVIDIA GeForce GTX 650 Ti Reference Design | 12 W | 112 W
ATI Radeon HD 4670 Reference Design | 9 W | 70 W
* Results are accurate to within +/- 5W.

The GeForce GTX 680 accepts two 6-pin PCI-E power connections for normal operation, and will not activate the display unless proper power has been supplied. NVIDIA recommends a 550W power supply unit for stable operation with the GTX 680, which should include both required 6-pin PCI-E connections without the use of adapters.

If you're familiar with how electronics function, it will come as no surprise that less power consumption equals less heat output, evidenced by our results below...

GeForce GTX 680 Temperatures

This section reports our temperature results with the GeForce GTX 680 under idle and maximum load conditions. During each test a 20°C ambient room temperature is maintained from start to finish, as measured by digital temperature sensors located outside the computer system. GPU-Z is used to read the GPU-reported temperature at idle and under load. Using a modified version of FurMark's "Torture Test" to generate maximum thermal load, we record peak GPU temperature in high-power 3D mode. FurMark does two things extremely well: it drives the thermal output of any graphics processor much higher than any video game realistically could, and it does so with consistency every time. FurMark works great for testing the stability of a GPU as the temperature rises to the highest possible output.

The temperatures illustrated below are absolute maximum values, and do not represent real-world temperatures created by video games or graphics applications:

Video Card | Idle Temp | Loaded Temp | Loaded Noise | Ambient
ATI Radeon HD 5850 | 39°C | 73°C | 7/10 | 20°C
NVIDIA GeForce GTX 460 | 26°C | 65°C | 4/10 | 20°C
AMD Radeon HD 6850 | 42°C | 77°C | 7/10 | 20°C
AMD Radeon HD 6870 | 39°C | 74°C | 6/10 | 20°C
ATI Radeon HD 5870 | 33°C | 78°C | 7/10 | 20°C
NVIDIA GeForce GTX 560 Ti | 27°C | 78°C | 5/10 | 20°C
NVIDIA GeForce GTX 570 | 32°C | 82°C | 7/10 | 20°C
ATI Radeon HD 6970 | 35°C | 81°C | 6/10 | 20°C
NVIDIA GeForce GTX 580 | 32°C | 70°C | 6/10 | 20°C
NVIDIA GeForce GTX 590 | 33°C | 77°C | 6/10 | 20°C
AMD Radeon HD 6990 | 40°C | 84°C | 8/10 | 20°C
NVIDIA GeForce GTX 680 | 26°C | 75°C | 3/10 | 20°C

As we've already mentioned on the pages leading up to this section, NVIDIA's Kepler architecture yields a much more efficient GPU than previous designs. This becomes evident in the extremely low idle temperature and the modest loaded temperature. What's even more impressive than these results is how quietly the GeForce GTX 680 operates, barely changing levels from silent to almost silent as it reaches full load. Even with an open computer case exposing the video card, it's difficult to hear the blower fan make any noise at all. While NVIDIA should be proud of updating their product line with the fastest graphics processor on the planet, I'm happy they also made it one of the quietest-running flagship video cards we've ever tested.

GeForce GTX 680 Conclusion

IMPORTANT: Although the rating and final score mentioned in this conclusion are made to be as objective as possible, please be advised that every author perceives these factors differently at various points in time. While we each do our best to ensure that all aspects of the product are considered, there are often times unforeseen market conditions and manufacturer changes which occur after publication that could render our rating obsolete. Please do not base any purchase solely on our conclusion as it represents our product rating specifically for the product tested, which may differ from future versions of the same product. Benchmark Reviews begins our conclusion with a short summary for each of the areas that we rate.

NVIDIA has designed its latest GPU with several goals: operate faster, offer more features, deliver more functionality, use less energy, and generate less heat. These days, consumers generally react favorably to any product that delivers impressive performance gains over competing alternatives, so NVIDIA's rather large shopping list of goals could serve them very well in the marketplace... especially since they delivered beyond most expectations. There will still be multi-GPU graphics cards to contend with, but as far as single-GPU solutions go, the GeForce GTX 680 captures star status as the best graphics card available on the market.

Fanboys often argue one brand against another based on personal attachment, but as an industry critic it's difficult to avoid agreement when our tests prove NVIDIA video cards offer a better total graphics solution than the closest competition. As of this launch, that competition comes in the shape of AMD's Radeon HD 7970: a video card that costs $50 more, consumes more electricity, produces more heat, and trails in frame rate performance. After running each video card through fifteen different benchmark tests, the FPS results almost always favored NVIDIA's GeForce GTX 680. Let's look at the breakdown:

In the DirectX 10 game Crysis Warhead, the GeForce GTX 680 and Radeon HD 7970 appear even at 1680x1050, but once the strain of 1920x1080 is added the GTX 680 pulls ahead by 7 FPS. DirectX 11 tests followed the trend, with the GeForce GTX 680 leading significantly in most cases. In one of the few exceptions, Aliens vs Predator gave a noteworthy lead to AMD Radeon products over their NVIDIA counterparts. The demanding DX11 graphics of Batman: Arkham City made use of Kepler's optimized architecture, delivering a staggering lead to the GeForce GTX 680 over every other graphics card tested. Battlefield 3 continued the run, pushing the stock GTX 680 more than 10 FPS beyond the Radeon HD 7970. Lost Planet 2 played well on all graphics cards when set to high quality with 4x AA, yet the GeForce GTX 680 still surpassed the Radeon HD 7970 by 12 FPS before an overclock sent it another 10 FPS higher. Metro 2033 is another demanding game that requires high-end graphics to enjoy quality settings, but like AvP this game really took to the Radeon HD 7970 and pushed it 4-6 FPS ahead of the GTX 680.

Synthetic benchmark tools offered a similar read on these products, mirroring the results seen in our video game tests. Futuremark's 3DMark11 benchmark suite strained our high-end graphics cards with only mid-level settings displayed at 720p, forcing the less-powerful Radeon HD 7970 to trail the GeForce GTX 680 by nearly 10 FPS. Then there was the Gugila GroundWiz RTS demo, which uses its Alpine scene to cripple graphics cards... and cripple it did: this benchmark is so demanding that we had to run tests at 1280x720 just to get decent frame rate results. NVIDIA's GeForce GTX 680 did extremely well, but it's no contest when the only card that fails the test is your competition's flagship model. Unfortunately, AMD did not consider this issue worthy of a response, even though I reported it nearly a week prior to publication. Finally, the Unigine Heaven 3.0 benchmark confirmed what we've seen in most other tests: NVIDIA's GeForce GTX 680 leading the AMD Radeon HD 7970 in stock form, and then leaping well past it once overclocked to maximum GPU Boost.

NVIDIA-GeForce-GTX-680-Video-Card-Kit.jpg

Appearance is a much more subjective matter, especially since this particular rating doesn't have any quantitative benchmark scores to fall back on. NVIDIA's GeForce GTX series has used a recognizable design over the past two years, and with the exception of more angular corners, the GTX 680 looks very similar to the GTX 580 and 570 models. Some add-in card partners may offer their own unique cooling solution design, but this might not happen with the GeForce GTX 680 since it operates so efficiently and exhausts nearly all of its heated air outside the computer case. Expect most partners to dress up the original reference design by placing exciting graphics over the fan shroud or using colored plastic components. While looks might mean a lot to some consumers, keep in mind that this product outperforms the competition while generating much less heat and producing very little noise.

Construction is one area where NVIDIA continually shines, and thanks in part to extremely quiet operation paired with more efficient cores that consume less energy and emit less heat, I'm confident the GeForce GTX 680 will continue this tradition. Reducing the flagship model to two 6-pin PCI-E power connections is a step in the right direction, while tweaking heatsink and fan placement to optimize cooling performance proves there are still ways to improve on a commonplace technology. Better yet, consumers now have a single-GPU solution capable of driving three monitors in 3D Vision Surround, thanks to two DL-DVI ports with supplementary HDMI and DisplayPort output.

Defining value at the premium-priced high-end segment isn't easy, because hardware enthusiasts know that they're going to pay top dollar to own the top product. Even still, rating value is like chasing a fast-moving target, so please believe me when I say that prices change by the minute in this industry. The premium-priced GeForce GTX 680 "Kepler" graphics card demonstrates NVIDIA's ability to innovate the graphics segment while maintaining a firm lead in their market, but it comes at a cost. As of launch day, 22 March 2012, the GeForce GTX 680 has been assigned a $499 MSRP. For those with an impeccable memory, think back to November 2010, when the GeForce GTX 580 also launched at the exact same $499 MSRP (it is still available at Newegg for around $400). So with regard to value, the GeForce GTX 680 delivers more features and better performance than the less-powerful AMD Radeon HD 7970 that currently sells for $550, and matches the frame rate performance of the older, less efficient GTX 590 while costing slightly less. Comparing one card's value to another based solely on video frame rate is a fool's game, because features and functionality run off the chart with the GTX 680. Furthermore, only one video card can offer multi-display 3D gaming, PhysX technology, GPU Boost, FXAA, and now TXAA.

The GeForce GTX 680 is the ultimate enthusiast graphics card intended for affluent top-end gamers, but I see this product becoming so popular that it draws more interest than previous flagship models. Our test sample took the standard 1006/1058 MHz GPU clock and easily reached a 1187/1240 MHz overclock without any additional voltage. Add this to the record-setting 6.0 GHz GDDR5 memory clock (which we also overclocked to 6.4 GHz) and vSync on everything becomes a possibility... especially with NVIDIA Adaptive VSync now available to smooth the frame rate gaps. A single GeForce GTX 680 video card is enough to display millions of pixels at the speed of light, so imagine the graphics quality settings possible with two combined into an SLI set.
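For reference, the overclocking headroom described above works out to roughly 18% on the core clock, about 17% on the boost clock, and under 7% on memory. A quick sketch of the arithmetic:

```python
def pct_gain(new, old):
    """Percentage increase of new over old."""
    return 100.0 * (new - old) / old

core_oc   = pct_gain(1187, 1006)   # base clock: 1006 MHz -> 1187 MHz
boost_oc  = pct_gain(1240, 1058)   # boost clock: 1058 MHz -> 1240 MHz
memory_oc = pct_gain(6.4, 6.0)     # effective memory: 6.0 GHz -> 6.4 GHz
```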

Overall I'm quite impressed with the NVIDIA GeForce GTX 680, but it's the 28nm GK104 'Kepler' GPU that really has my attention. This article has covered many of the new product features and added functionality made possible through Kepler, but imagine beyond the GTX 680. By reducing the TDP footprint to an easily manageable 175W operating range, it won't take much effort to combine two of these GPUs into the yet-to-be-announced GeForce GTX 690. I can picture it now: 4GB of GDDR5 video frame buffer memory pushed to 6.0 GHz, combined with two Kepler GPUs operating at 880 MHz before GPU Boost... and it would still run cool and quiet with a combined 300W TDP. Give it a few months, and we'll see how accurate my prediction was. EDITOR'S NOTE: As it turns out, I was extremely close: NVIDIA GeForce GTX 690 Video Card Features

So what do you think of the NVIDIA GeForce GTX 680 Kepler graphics card, and do you plan to buy one?



Comments 

 
# RE: NVIDIA GeForce GTX 680 Kepler Video Card Performance - danwat1234 2012-03-22 07:43
I hope all of you buying these high end GPUs put them to good use when your not gaming! Folding@home or another distributed computing project, do your part to help science!
Peace out
 
 
# RE: RE: NVIDIA GeForce GTX 680 Kepler Video Card Performance - bob 2012-03-23 02:05
I hope you are joking. These gpu are bad for gpgpu like boinc. 7970 are great!
 
 
# RE: RE: RE: NVIDIA GeForce GTX 680 Kepler Video Card Performance - Olin Coles 2012-03-23 08:37
Based on what factual evidence? I notice the 7970 couldn't even run a DirectX 11 tessellation test, so how will it compete with GPGPU tests?
 
 
# On what Evidence! Are You Blind, Deaf, Stupid & Insane? - tophat killer 2012-03-27 09:55
REady any, and I mean any direct compute review of kepler and tahati,
tahati beats gk104 by 10 to over 500%. Nv castrated direct computer in gk104.
 
 
# Is there going to be a difference? - Christopher Fields 2012-03-22 08:04
I have 2 GTX 580's in SLI and they are water cooled. After months of rumors you guys have seemed to clear things up. I read things like "3 Times more powerful than a GTX 580 OC" to "The Card will be 8-9" Long".

I guess my question is this. My monitor is a 60Hz LED 32" 1920x1080 with a 5ms refresh rate. I play Battlefield 3 100% of the time. If I buy this card will my experience be any different?

I can prob answer this one myself.........no. Nvidia makes great products and the nice thing is when you buy a top end card you can usually skip a generation and wait for the next knock out contender. I would say if you have a GTX480 Series to upgrade. But if you have a Single GTX580 then put on the brakes and wait for the next show.

This of course is just my opinion. This is a great article, I always like reading your stuff Olin. Thanks for the time & effort you guys put into these reviews.
 
 
# RE: Is there going to be a difference? - Olin Coles 2012-03-22 08:27
As a BF3 player myself (Das Capitolin), I can say that you're not going to SEE a significant difference between an overclocked GTX 580 and the new GTX 680. That being said, Active vSync is something worth considering, as are the extremely low sound levels and heat output. Since you've already purchased the 580's, I see no solid reason to upgrade. But if someone's deciding between the two, the GTX 680 easily wins my vote.
 
 
# RE: RE: Is there going to be a difference? - Christopher Fields 2012-03-22 08:49
I added you Olin, S1W3A3T7
 
 
# Very good POW - ReSeRe 2012-03-22 12:35
i'm talking about skiping 1 generation GPU (and usually CPU too). Yes that's my philosophy either.
now i'm BF3in' w/ 460 1Gb SLI on an (pretty old but excellent LCD) EIZO 1600/1200. high settings (not ultra). Actually i could go ultra BUT with a drastic tweak down AA and other 1, maybe 2 settings.
In BF3 my nick: mantuitoru.

Once again, good review.
Long live the competition, anyway. in this case nVidia vs AMD.
 
 
# gk110 - godrilla 2012-03-22 08:43
I'm going to wait for gk110 chip according to fudzilla will be out by July , currently using classified gtx 580 ultra.
 
 
# RE: NVIDIA GeForce GTX 680 Kepler Video Card Performance - roy 2012-03-22 10:04
Good card and excellent review! I love the low power usage and the new tech(txaa,adaptive v-sync,28nm,gpu boost,etc). A little concerned by the 256mb memory bus.
I wanna upgrade but until Nvidia (must have Phys-x) releases a single gpu card that can run Metro 2033 at max settings (4xAA,16xAF,very high quality,advanced DOF on and phys-x on) and average 60fps at 1920x1200 I guess I am stuck with my GTX480 sli for a long while.
 
 
# RE: NVIDIA GeForce GTX 680 Kepler Video Card Performance - claydough 2012-03-22 13:34
at 3x the cuda cores than the card that was supposed to replace the 560ti that was supposed to be released first the benchmarks do not even seem to reflect the numbers of that affordable version? :(

Rumors however that this is the card was supposed to replace the 560ti but the performance was so good that they just gave it the 680 designation is very unsettling...

So...
What happened to the consumer friendly mid powered card that was supposed to be released first?

What the heck happened to the Kepler super powered card?

Better than 480 to 580 but is this just another incremental release or hardware without drivers?

Or worse yet...
Did we just get version screwed?
 
 
# RE: RE: NVIDIA GeForce GTX 680 Kepler Video Card Performance - Olin Coles 2012-03-22 16:16
I don't recall NVIDIA ever making a statement on Kepler, it's roadmap, or what it was intended to replace. In other words, consider the source.
 
 
# RE: NVIDIA GeForce GTX 680 Kepler Video Card Performance - claydough 2012-03-22 13:50
Maybe the rumors of gtx 780 were true?
If this is the replacement for the gtx 560ti..
and the gtx780 is coming out at the end of this year...
Then that would be fine with me!
If the following gtx 780 benchmarks are real:

wccftech.com/leaked-nvidia-generation-performance-slide-pits-upcoming-flagship-gk100-kepler-based-geforce-gtx-780-gtx-580/

Like I said in next gen console/xbox 720 article:
Anything without the benefit of present day maxwell development is sure to lead to a another very console limiting 10 year cycle!
 
 
# This.. - luay 2012-03-23 10:26
AMD replaced their mid-end product names with the higher end when they released their 6000 series, so not a first or even second time it has been done. This is a 560ti replacement and it will be marketed to death as a high-end product now and mid-end after July with a price reduction when the 580 replacement comes around.
If this card sells well, console makers should take note and try to cut a deal with Nvidia instead of AMD. I think this is why MS and Sony were not publicizing any future commitments of delivering next-gen consoles..
 
 
# RE: NVIDIA GeForce GTX 680 Kepler Video Card Performance - Doug Dallam 2012-03-22 14:40
NVIDIA GeForce GTX 680 Reference Design

Load: 243 W

WOW?!

Also consider, extrapolating from your price comparison of other nVidia top end offerings, the GTX295 also came in at 500+ USD. That's pretty incredible.
 
 
# awesome - cube 2012-03-22 19:50
If i buy this card, it means i will have to buy a new motherboard, LG2011 which means ill have to buy a new CPU and of course, the newest RAM. so all in all, the only thing i get to keep is my SSD and HDD. At the same time, i will also get a bran new case. so... the games i currently play and will play in the future dont require this much power.

this is what happens when devs dumb down the #ing games for consoles. people like me that used to upgrade every other cycle are now holding off and keeping their system alot longer. Means less PC part sales. =/ hope # will get better for us.
 
 
# RE: awesome - David Ramsey 2012-03-22 19:54
You don't have to buy an LGA2011 motherboard to use this card...it will work fine in any PCI-E x16 slot in older motherboards.
 
 
# RE: RE: awesome - cube 2012-03-22 19:58
but they must have PCI Express 3.0 right?
 
 
# RE: RE: RE: awesome - Olin Coles 2012-03-22 20:02
Just like PCI-E 2.0 is backward compatible with PCI-E 16x, PCI-E 3.0 is backwards to 2.0 and 16x as well.
 
 
# RE: RE: RE: RE: awesome - cube 2012-03-22 20:03
yeah, but what about later on when games do take advantage of 3.0, if ever... then ill have to buy a new mobo for that.
 
 
# RE: RE: RE: RE: RE: awesome - David Ramsey 2012-03-22 20:08
That's a long way off in the future. Current games don't come anywhere near to saturating a PCI-E 2.0 x16 slot.
 
 
# RE: RE: RE: RE: RE: RE: awesome - cube 2012-03-22 20:34
can you please go to the forums under hardware. I posted a question and need some help.
 
 
# RE: RE: RE: RE: RE: RE: awesome - claydough 2012-03-26 13:30
I like to emphasize the only factor keeping any hardware pipe from melting away from saturation is compromise and economics. The artist and imagination exists today for any game that is possible with tech at the end of the 21rst century. Creativity is both cpu and gpu limited!

The game that is possible in your lifetime is only limited by the rate of hardware acceptance. ( keep buying till cinematic levels of hyper real realtime rendering is possible... after that wait until holograms become a reasonable reality? )
 
 
# RE: RE: RE: RE: RE: RE: RE: awesome - Christopher Fields 2012-03-26 13:35
WTF? Holograms?..................I think we are a way off from gaming on holograms, lol, 3D Gaming has just barely been breaking the surface their Spock, lol. But it would be cool. PS beware of SUPER NERDS!!!!
 
 
# MrSteven 2012-03-24 14:50
I find it hard to believe that this is the replacement for the GTX 560ti when it costs £430. Ouch! Budget card my ass!
 
 
# Bought 2: Selling Both - kzinti1 2012-03-27 23:02
I realized that this is merely a replacement for mid-range cards.
Even though it beats the best of the 5xx series, it's still only a mid-range card.
The REAL high performers will be out in a couple of months.
Nvidia's hype-machine got me once again.
I should know better by now.
 
 
# Adam - Adam 2012-04-01 16:03
I have one of these on order. Had 2 GTX 480 in SLi. This one card should easily beat them. My GTX 480 ran under water @ 900/1800/1848. They would out bench a stock 580 as they were the same GPU. Just tweaked a little.

This is a whole NEW GPU. Like Fermi was. The 670 has already been annouced and waterblock manufactors are shipping the bocks. Expect theor next release to be a lower version. Also Laptops are shipping with mobile versions. Check nVidia's homepage.

The next release will be the 780. Time range anything from 6 - 12 months. AMD could influence this.

I have my waterblock on order. I will also be SLi'ing them from about 2 months in.
 
 
# RE: NVIDIA GeForce GTX 680 Video Card Performance - stelios 2012-05-14 21:56
i did a test to see where im at with those settings and i got 46.7fps gtx580 @962Mhz compared to 39 is a 20% increase in performance and gtx 680 is 56fps compared to a stock gtx 580 is 41%increase in performance but compared to gtx 580 oc in my case 22-23% performance increase pretty good if u ask me
 
 
# RE: NVIDIA GeForce GTX 680 Graphics Performance - ramin 2012-11-11 07:42
NVIDIA's newest flagship card (680) is superior to the HD 7970 in almost every way. Whether it is performance, power consumption, noise, features, price or launch day availability, it currently owns the road and won't be looking over its shoulder for some time to come.
 
 
# RE: RE: NVIDIA GeForce GTX 680 Graphics Performance - David Ramsey 2012-11-11 09:38
Hm, no, not necessarily. The 7970 is superior in some games (as these benchmarks show), and NVIDIA still can't match the 7970's lower power usage, especially in multi-card setups where Radeons not being used are all but disabled-- sub 5-W power consumption, fan stopped, etc.-- when they're not needed (i.e. if you're not playing a game.)

Still, I'd agree GTX680 performance is better overall, and they still own the world in GPGPU.
 
