ATI Radeon 5870 Eyefinity6 Review
This is the day all Eyefinity fans (current and potential) have been waiting for - the formal unveiling of the Radeon HD 5870 Eyefinity6 (E6) card. ATI first teased this card when it initially unveiled the Radeon 5000 series and its Eyefinity technology. The Eyefinity6 card was seen by many members of the WSGF as the trifecta in multi-monitor gaming:
- Eyefinity technology (i.e., multi-monitor gaming without the need for extra hardware a la the Matrox TripleHead2Go)
- Support for more than three displays, giving more configuration options
- 2GB of VRAM
The E6 card is the last piece in the Radeon 5000 series lineup. In the time since the E6 announcement, the product landscape has changed and the E6 may not be the holy grail users once thought it to be:
- The 5970 has been released and the Catalyst 10.2 drivers have made Eyefinity and CrossFireX co-exist peacefully.
- ATI’s partners are now moving beyond reference designs and 2GB versions of the HD 5870 (as well as other cards) are being released.
Do these additional options hold the performance key for users with “just three screens” (irony fully intended)? Do you need 2GB of VRAM for 3x1 Eyefinity, or is it only relevant for more than three screens? Are any of the options for more than three screens usable for gaming?
We hope to answer all these questions (and more) over the course of this review.
Architecture & Specs
The Radeon HD 5870 Eyefinity6 carries almost exactly the same specs as the original HD 5870. Aside from the doubled frame buffer, the only differences are the TDP and the power requirements. The need for more power is no surprise, given there is twice as much RAM to power and six ports to keep active. To ensure the card is properly fed, ATI has moved from a 6+6 to an 8+6 pin connector configuration. The price also comes in slightly higher than the original HD 5870, with the E6 carrying an MSRP of $479.
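As a quick sanity check on the connector change, the sketch below adds up the standard PCIe power ceilings (75W from the slot, 75W per 6-pin, 150W per 8-pin). These are generic spec figures we are assuming, not numbers ATI has published for this board.

```python
# Assumed PCIe power delivery limits (generic spec figures, not ATI data).
PCIE_SLOT_W = 75   # power available through the x16 slot itself
SIX_PIN_W   = 75   # per 6-pin PCIe power connector
EIGHT_PIN_W = 150  # per 8-pin PCIe power connector

hd5870    = PCIE_SLOT_W + SIX_PIN_W + SIX_PIN_W    # 6+6 layout -> 225 W ceiling
hd5870_e6 = PCIE_SLOT_W + SIX_PIN_W + EIGHT_PIN_W  # 8+6 layout -> 300 W ceiling

print(f"HD 5870 (6+6):    {hd5870} W available")
print(f"HD 5870 E6 (8+6): {hd5870_e6} W available")
```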
Below is a spec table comparing the original HD 5870, the HD 5970 and the E6.
| Card | GPUs | Transistors | Max Memory | Shaders | Core Clock (MHz) | Mem Clock (MHz) | Idle TDP (W) | Max TDP (W) | MSRP |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ATI Radeon HD 5970 | 2 | 2 x 2.15B | 2 x 1GB | 2 x 1600 | 725 | 1000 | 51 | 294 | $699 |
| ATI Radeon HD 5870 E6 | 1 | 2.15B | 2GB | 1600 | 850 | 1200 | 34 | 228 | $479 |
| ATI Radeon HD 5870 | 1 | 2.15B | 1GB | 1600 | 850 | 1200 | 34 | 228 | $399 |
A Tale of Two Markets
As referenced on the previous page, the Eyefinity6 brings two unique features over the existing HD 5870. These features are the support for more than three monitors, and the inclusion of 2GB of VRAM on the card. Beyond the productivity benefits, this card has appeal to two distinct gaming markets.
Beyond Three Screens
The first market is obviously those customers who wish to use more than three screens for productivity and/or gaming. Given that the original HD 5870 and its 1GB of VRAM were taxed in many games at max settings, we’ll assume that 2GB of VRAM is essential for moving past three screens.
The original HD 5870 had four available connections, but only three could be used simultaneously. The E6 allows the user to set up six screens in a multitude of configurations. This new array of options challenges our notions of multi-monitor gaming and opens its door to more gamers.
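To illustrate the kind of configurations on offer, here is a small sketch of our own (the `group_resolution` helper and the panel sizes are assumptions, and bezel compensation is ignored) that works out the combined desktop size of a few plausible display groups:

```python
# Hypothetical helper: combined resolution of a uniform grid of identical
# panels in a display group (bezel compensation ignored).
def group_resolution(cols, rows, panel_w, panel_h):
    return cols * panel_w, rows * panel_h

layouts = {
    "3x1 of 1920x1200": group_resolution(3, 1, 1920, 1200),
    "3x2 of 1920x1080": group_resolution(3, 2, 1920, 1080),
    "6x1 of 1920x1080": group_resolution(6, 1, 1920, 1080),
}

for name, (w, h) in layouts.items():
    print(f"{name}: {w}x{h} ({w * h / 1e6:.1f} megapixels)")
```

Even before any eye candy, a six-screen group pushes nearly twice the pixels of a 3x1 setup, which frames the single-GPU question below.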
But will one GPU be enough? Will the gamer who pursues this display configuration have enough horsepower with a single-GPU card, or is CrossFireX a foregone conclusion?
Better Performance on Three Screens
The second market is the current Eyefinity gamer, who probably cut their teeth in the world of the Matrox TripleHead2Go. They know firsthand the demands of gaming at up to 5040x1050, and Eyefinity now allows for 5760x1200 and beyond.
The WSGF showed in previous benchmarking of the NVIDIA GTX275 that moving from 896MB to 1792MB of VRAM was beneficial in providing a “smoother” experience. Overall average fps didn’t improve noticeably, but “jitters” and “hiccups” from texture swaps were markedly reduced, resulting in smoother gameplay. We were never able to test a 1GB GTX285 against a 2GB GTX285, so we couldn’t positively determine the “sweet spot.” Is it at 1GB, or just above? Or is it all the way at 2GB?
Additionally, those tests were in the days of DX9 and a resolution limit of 5040x1050. We’re now in a world with far fewer limits. We easily game at 5760x1200, and fill our games with tessellation, ambient occlusion and “God rays.” Given all these changes, do the old assumptions and findings still stand?
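To put rough numbers behind the VRAM question, here is a back-of-the-envelope estimate. The assumptions are ours: 32-bit color, a color front/back buffer plus a depth buffer (three render targets), and an optional MSAA multiplier; textures, geometry and driver overhead, which usually dominate, are deliberately left out.

```python
BYTES_PER_PIXEL = 4  # assumed 32-bit color

def render_target_mb(width, height, targets=3, msaa=1):
    """Rough render-target footprint in MB: color front/back buffers plus a
    depth buffer, scaled by the MSAA sample count. Textures and geometry
    are not included."""
    return width * height * BYTES_PER_PIXEL * targets * msaa / 1024**2

resolutions = {
    "5040x1050 (TH2G era)":      (5040, 1050),
    "5760x1200 (3x1 Eyefinity)": (5760, 1200),
    "5760x2160 (3x2 Eyefinity)": (5760, 2160),
}

for label, (w, h) in resolutions.items():
    print(f"{label}: ~{render_target_mb(w, h):.0f} MB plain, "
          f"~{render_target_mb(w, h, msaa=4):.0f} MB with 4x MSAA")
```

Even at 3x2 with 4x MSAA, the render targets alone stay well under 1GB; the pressure the earlier GTX275 tests pointed to came from texture swaps, which this sketch deliberately ignores.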