A few weeks ago, AMD launched the Radeon RX 500 series, which is mostly a refresh of the previous-generation RX 400 series launched less than a year ago. Mostly, because the Radeon RX 500 series is based on the very same Polaris architecture as its predecessor, albeit on a slightly more mature process. Whatever your stance on rebadges, it’s clear that AMD has decided to double down on its Polaris architecture in 2017, at least for the midrange and entry-level segments, until its RX Vega GPUs appear later on to cover the high-end market. So like it or not, Polaris is here to stay for the foreseeable future.
Today, we’re going to take a look at one of the first graphics cards in the new RX 500 series lineup, the Radeon RX 570 – specifically the ASUS STRIX RX 570 which features RGB lighting for all the customization your heart desires.
Radeon RX 570 Specifications
| Model | Radeon RX 470 | Radeon RX 570 | Radeon RX 580 |
|---|---|---|---|
| Codename | Polaris 10 | Polaris 20 | Polaris 20 |
| Core Clock (MHz) | 1206/926 | 1244/1168 (**1300**) | 1340/1257 |
| Memory Clock | 6.6 Gbps | 7.0 Gbps | 7.0/8.0 Gbps |
| Launch Price | $179 | $169 (**$189**) | $199/$229 |

*ASUS non-reference specifications highlighted in bold.
Taking a look at the spec chart above, we can see that the Radeon RX 570 features the exact same specs as the previous-generation Radeon RX 470. However, a more mature process does come with some refinements, and one of those is higher clocks. As expected, the ASUS STRIX RX 570 goes beyond the reference RX 570, featuring a factory boost clock of 1300MHz, which rises further to 1310MHz in OC mode.
Let’s take a closer look.
[section label=”A Closer Look”]
A Closer Look
Starting with the packaging for the ASUS STRIX RX 570, we get a nice look at the card itself, the ASUS AURA SYNC badge indicating that the card features RGB lighting, and the colorful STRIX logo.
Included in the packaging, we get a user manual, a quick installation guide, a driver installation disc (don’t use this — download the latest drivers from the AMD website instead), and a pair of velcro cable ties, which is a very thoughtful addition. Finally, we have a set of adhesive-backed decals which can be stuck to cutouts on the GPU’s shroud, if you’re into that sort of thing. I appreciate the customization, but considering the front of the fan shroud isn’t even visible in 99% of cases, it seems a little unnecessary.
The ASUS STRIX RX 570 features a dual 90mm fan setup using ASUS’ patented “Wing-Blade” 0dB fan design, and the fans are IP5X-certified dust resistant. The design allows for more air pressure at the edges of the blades, which ASUS claims increases airflow by up to “105%” while operating up to “3x quieter” than reference cards. The 0dB feature means the fans stop spinning entirely (0 RPM) when the card is idle or not under intense load.
Around the back, we see ASUS has omitted a backplate, which is a little disappointing to see, especially at this price point. Thankfully, they’ve included a small support bracket spanning roughly 80% of the card, which helps increase rigidity. Still, I can’t help but think a backplate would have better served this purpose.
The card measures just over 9.5″ long and features a fully black color scheme with the only color coming from the RGB ASUS ROG logo on the side of the card. The card receives power from a single 8-pin PCIe power connector, which is fairly standard.
Taking a look at the rear I/O, we see that ASUS has opted for a different configuration than usual: a single DisplayPort 1.4, a single HDMI 2.0, and 2x dual-link DVI connectors. I really can’t imagine why ASUS would choose to include two DVI connectors and only a single DisplayPort, since most GPUs in this class include up to three DisplayPorts.
After removing the cooling shroud from the PCB, we get a better look at the internal layout of the graphics card and the cooling system itself. Taking a look at the cooler, we find ASUS’ signature DirectCU II cooling technology, which basically means there are two 6mm copper heatpipes that directly contact the GPU core itself. The heatsink is rather small considering the size of the fans, and the fact that it only includes two heatpipes is a bit concerning for a premium-priced RX 570.
Next, we have a shot of the Polaris 20 GPU, which looks identical to the Polaris 10 GPU found on the RX 470 we reviewed earlier this year. In addition, there are also 8x 512MB Elpida GDDR5 memory chips, which give us our 4GB framebuffer. These memory chips aren’t exactly well known for their overclocking potential or tolerance for high heat, so the fact that they are entirely passively cooled means overclocking is likely a non-starter.
Taking a look at the graphics card’s power delivery system, we see that rather than the reference 6-phase power design, we actually have three phases controlled by a Digi+ ASP1300 (International Rectifier) controller. Those three phases are doubled using three IR3598 doublers, each driving two pairs of UBIQ-made MOSFETs: one M3054 N-channel MOSFET on the high side and two M3056 N-channel MOSFETs on the low side.
[section label=”Testing Setup and Methodology”]
Haswell-E X99 Test Bench
| CPU | Intel Core i7 5960X @ 4.2GHz |
|---|---|
| Motherboard | Asrock X99 OC Formula |
| Memory | Crucial Ballistix Elite 16GB DDR4-2666 |
| Boot Drive | Samsung 850 EVO 500GB M.2 SSD |
| Storage Drive | ADATA Premier SP610 1TB SSD |
| Power Supply | DEEPCOOL DQ1250 |
| CPU Cooler | DEEPCOOL GamerStorm Captain 360 |
| Case | Phanteks Enthoo Pro |
| Operating System | Windows 10 Pro |
Testing Methodology – 2017 Update 2
With major GPU releases from both AMD and NVIDIA that fully support the latest APIs such as DirectX 12 and Vulkan, and a number of new games available which take advantage of these new features, we decided it was time to update our GPU testing suite to more accurately represent the modern gaming landscape. Replacing FRAPS in our toolset is OCAT (Open Capture and Analytics Tool), an open-source GUI frontend for PresentMon. The results captured by OCAT are identical to those of PresentMon; however, the ease of use provided by the GUI and its overlay greatly reduces the time our testing process takes.
Like any good graphics test suite, ours is constantly evolving to suit the current gaming landscape, with incremental changes adding new titles and benchmarks we believe are necessary. Our second update for 2017 replaces Project CARS with Sniper Elite 4, which brings with it support for DirectX 12 as well as a number of optimizations for multi-GPU setups.
The following games/benchmarks will be tested:
- 3DMark FireStrike
- 3DMark Time Spy
- Rise of the Tomb Raider (DX11 & DX12)
- HITMAN (DX11 & DX12)
- Deus Ex: Mankind Divided (DX11 & DX12)
- Ashes of the Singularity (DX12)
- Gears of War 4 (DX12)
- Grand Theft Auto V
- The Witcher 3: Wild Hunt
- Sniper Elite 4 (DX11 & DX12)
- DOOM (Vulkan)
All titles will be benchmarked a minimum of three times per configuration, with the average of those results displayed in our graphs. Performance will be measured in average FPS as well as the 99th percentile (1% low) and 99.9th percentile (0.1% low). Those results are derived from the frame time data recorded by OCAT; however, they are converted from milliseconds (ms) to frames per second (FPS) in order to simplify things. Data gathered from benchmarking tools is analyzed using Microsoft Excel.
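The milliseconds-to-FPS conversion is simple arithmetic: a frame that took t ms corresponds to 1000/t FPS, and the “1% low” figure is just the FPS equivalent of the 99th-percentile frame time. Here’s a minimal sketch of that math in Python — the `fps_summary` helper and its simple index-based percentile are our own illustration, not OCAT’s actual output format:

```python
def fps_summary(frame_times_ms):
    """Reduce one run's frame times (in ms) to average FPS plus the
    1% low (99th percentile) and 0.1% low (99.9th percentile) FPS."""
    times = sorted(frame_times_ms)               # ascending: fastest frames first
    n = len(times)
    avg_fps = 1000.0 * n / sum(times)            # n frames rendered in sum(times) ms
    p99_ms = times[min(n - 1, int(n * 0.99))]    # 99th percentile frame time
    p999_ms = times[min(n - 1, int(n * 0.999))]  # 99.9th percentile frame time
    return avg_fps, 1000.0 / p99_ms, 1000.0 / p999_ms

# Example: 990 smooth frames (16.7 ms) with 10 slow frames (33.3 ms) mixed in.
avg, low1, low01 = fps_summary([16.7] * 990 + [33.3] * 10)
```

A wide gap between the average and the 1% low numbers indicates noticeable stuttering even when the average looks healthy, which is why we report both.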
All titles are tested at “High” to “Very High” settings at resolutions to be determined by the GPU’s market-segment. In the case of this review, it will be 1080p.
All benchmarks are conducted using the latest available drivers at the time of testing. In the case of this review, they are the AMD Radeon Crimson ReLive Edition 17.4.3 and the NVIDIA GeForce Game Ready 381.65 drivers.
The new 3DMark, now referred to as simply 3DMark, is Futuremark’s latest update to its popular benchmark series. The updated 3DMark now includes multiple benchmarks for cross-platform support as well as updated graphics to push the latest graphics cards to their limits.
[section label=”Deus Ex: Mankind Divided (DX11/DX12)”]
Deus Ex: Mankind Divided
Mankind Divided is the latest entry in the legendary first-person stealth-action series Deus Ex. Developed by Eidos Montreal, it is built on their brand-new Dawn Engine, which is based on IO Interactive’s impressive Glacier 2 engine. It features support for both DX11 and DX12, like most of Square Enix’s other 2016 titles, and like those titles it was ported to PC by Nixxes. It is a fairly graphically intensive game and easily one of the most demanding in our test suite.
[section label=”Rise of the Tomb Raider (DX11/DX12)”]
Rise of the Tomb Raider
The follow-up to Crystal Dynamics’ award-winning Tomb Raider reboot, Rise of the Tomb Raider takes users across the world to a variety of exotic locales with even more tombs. With physically-based rendering, HDR and adaptive tone mapping, deferred lighting with localized Global Illumination for realistic lighting, and volumetric lighting that enables God Rays and light shafts, Rise of the Tomb Raider is hard even on high-end modern graphics cards.
We test Tomb Raider using the game’s built-in benchmarking tool.
[section label=”Grand Theft Auto V”]
Grand Theft Auto V
The hotly anticipated PC release of Rockstar Games’ fifth installment of their Grand Theft Auto franchise easily proves once again that when it comes to open-world games, no one does it better. With lots of new features and graphical enhancements built specifically for the PC version, it’s no wonder it took them so long to optimize it. With advanced features such as tessellation, ambient occlusion, realistic shadows, and lighting, mixed with the largest open-world map in franchise history, this is one beautiful, well-optimized PC title.
We test Grand Theft Auto 5 using the last scene in the game’s built-in benchmarking tool.
[section label=”The Witcher 3: Wild Hunt”]
The Witcher 3: Wild Hunt
CD Projekt RED’s The Witcher series has long been credited as some of the most beautiful and graphically demanding PC titles, and its latest installment, The Witcher 3: Wild Hunt, is no exception. Beautiful, large open-world environments, detailed character designs, high-resolution textures, and advanced features such as God Rays and Volumetric Fog, in addition to a slew of post-processing effects, make this one really impressive-looking game.
We test The Witcher 3 with a 60-second lap around the first village you come across in the campaign. This is one of the best places for testing, as it exhibits some of the game’s most graphically intense features, such as God Rays and Volumetric Fog. The test loop also offers very little variance, making it a very repeatable benchmark, which is difficult to find in most open-world games.
[section label=”Sniper Elite 4 (DX11/DX12)”]
Sniper Elite 4
Sniper Elite 4 is the latest sequel in the hit third-person, stealth shooter franchise by developer Rebellion. The latest installment brings with it support for DirectX 12 as well as a massive increase to the game’s overall map size and scale. The game also features a re-worked rendering engine which allows for even more over the top slow-motion deaths, which are a signature of the series.
We test on “High Settings” and start our benchmark during the first mission of the game when you arrive at the village. Tests are conducted in 60-second runs.
[section label=”DOOM (Vulkan)”]
DOOM
DOOM is the latest installment of id Software’s legendary series. The new DOOM brings with it the latest graphics technologies and a fully uncapped frame rate thanks to the all-new id Tech 6 game engine. With some of the best graphics available on PC today, DOOM brings to life its hellish environments and demons for a truly jaw-dropping experience.
We test DOOM using a 60-second run during the first mission of the game.
[section label=” HITMAN (DX11/DX12)”]
HITMAN
The follow-up to 2012’s Hitman: Absolution, simply titled HITMAN, is the latest game in the series by developer IO Interactive, taking users on an episodic adventure complete with online competition against other players. The new game also brings with it support for Microsoft’s latest DirectX 12 API.
We test HITMAN using the game’s built-in benchmarking utility.
[section label=”Gears of War 4 (DX12)”]
Gears of War 4
Gears of War 4 is the latest installment in Microsoft’s long-running third-person shooter franchise. As the first of the series to launch simultaneously on both Xbox consoles and PCs, Gears of War 4 is based on Epic’s Unreal Engine 4 and fully supports DX12, making it one of the best-optimized PC titles of 2016, offering a plethora of graphics options which can be adjusted for scaling across a large variety of hardware.
We test Gears of War 4 using the game’s built-in benchmark utility.
[section label=”Ashes of the Singularity (DX12)”]
Ashes of the Singularity
Oxide’s sci-fi RTS game Ashes of the Singularity is a staple in any graphics review, as it provides one of the best implementations of the latest DirectX 12 API, complete with Asynchronous Compute for improved performance and even explicit multi-GPU support, which allows mixing multiple graphics cards for additional performance.
[section label=”Overclocking”]
Overclocking
Using AMD’s WattMan overclocking utility, which is built into the Radeon Crimson software package, we were able to overclock our sample from 1300MHz on the GPU clock to 1380MHz. I attempted to overclock the memory but found that anything above the rated specifications was completely unstable, as I originally expected; the Elpida memory chips on this GPU just aren’t geared toward overclocking. Still, the increase on the GPU core clock is impressive, although not out of the norm for the RX 570.
In our overclocking results, we see that while the synthetic 3DMark tests all perform marginally better with the overclock dialed in, our game tests are a bit more inconclusive. Two of the three titles we tested did perform better with the overclock; however, Rise of the Tomb Raider is strangely worse on the low side, but basically the same on average.
[section label=”Temperatures”]
For idle temperature, we’ll be taking a reading after the graphics card has been idle for 5 minutes following a cold boot. Load temperatures are taken after a full 30-minute burn using Furmark.
While some of you may not be huge fans of Furmark, as it creates an ultra-heavy, unrealistic load on the graphics card, we feel it’s a more useful tool because it differentiates between graphics cards that have extremely well-designed coolers and ones with cooling solutions that merely pass the test, if you will. Most games these days generally don’t create enough of a load to even exceed the temperatures where the fans would spin up on most custom coolers, so it’s difficult to adequately rank cooling solutions without using a tool like Furmark. During all tests, the GPU intake air temperature is approximately 25° Celsius.
Temperature-wise, the card’s cooling had no issues keeping it relatively cool at stock. That said, while idle temperatures are lower, the card did reach temperatures significantly higher than the RX 470 Red Devil, which is a bit disappointing, but not surprising considering the rather anemic cooling solution. Once overclocked, the cooling solution proved even more problematic, reaching temperatures of 86°C under load. However, this wasn’t an issue while gaming, as we never saw any signs of thermal throttling and noise levels were still relatively quiet.
[section label=”Power Consumption”]
Power Consumption
For power consumption testing, we’ll be measuring full system power while idle along with full system power with the graphics card running at full load using Furmark. All power consumption measurements are taken at the outlet with a simple P3 Kill A Watt meter.
Taking a look at the above chart, we can see that the ASUS STRIX RX 570 does indeed draw more power than both the reference RX 480 and the PowerColor RX 470 Red Devil. Once overclocked, it draws even more power than the EVGA GTX 1070 SC2, which is a significantly more powerful card. This wouldn’t be a huge issue if it wasn’t for the fact that in all our tests the ASUS STRIX RX 570 didn’t really manage to outpace the RX 470 Red Devil and even lost to it in some cases.
[section label=”Conclusions”]
Conclusions
So after a wide range of tests, I’m left a little baffled by this card. Despite being based on a more mature version of the same silicon, performance of the ASUS STRIX RX 570 seems to just match or slightly lag behind that of the PowerColor RX 470 Red Devil. This is very disappointing considering they are essentially the exact same GPU; the ASUS card even has the benefit of being clocked higher, which in and of itself should translate to better performance across the board, and yet the results speak for themselves. I can only assume this is due to the Red Devil simply being a better implementation in terms of cooling capacity, memory chips, etc.
The ASUS STRIX RX 570 isn’t necessarily a bad card: it offers decent aesthetics, solid 1080p gaming performance, and it even runs fairly cool. That said, it is very difficult to recommend at its current price of $189. If this were marketed as a more basic RX 570 priced closer to the $170 MSRP, I’d have no issues recommending it. However, the rather lackluster cooling solution, the lack of a backplate, and mixed performance make it a tough sell as a premium RX 570 card.
Had ASUS done away with the fancy RGB lighting and dropped the price, or provided a better cooling solution and a backplate, I’d probably have had a better experience with the card. On top of that, the odd configuration of display connectivity options leaves me a bit confused. I know it is unlikely that a user at this price point would be looking to invest in a triple-monitor setup, but including only a single DisplayPort on a gaming graphics card at this price, in this year, is just strange.
This isn’t necessarily a mark against the RX 570 in general, as I’d assume that a better implementation would at least outperform the likes of the RX 470 in most cases, which is what you’d expect. However, we’ll have to test another few cards to find out. We have a PowerColor RX 570 Red Devil on the bench next, so hopefully that will yield better results. As for the ASUS STRIX RX 570, I can’t really recommend it unless the price is cut closer to the RX 570’s MSRP.
Available on: Amazon
Sample provided by: AMD