The list of GPU-accelerated software we test with at Techgage was modest, but over the past few years, it's grown quite a bit. Some of the additions have come by way of reader suggestion, including MAGIX's Vegas. In fact, we'd wager that we've received more requests for Vegas performance than any other software suite.
We first tested with Vegas a little over a year ago, tackling straightforward AVC and HEVC encodes in our workstation GPU content. With community feedback, our original tests were deemed too lacking, leaving a lot of information on the table. That was true, and it's the reason we revamped our test scripts, added some new encode tests, and also playback tests.
We're not going to claim that the performance testing seen on this page is the best it can be, but it's improving over time as we become more accustomed to the software. We benchmark with about 25 pieces of software, and we're masters of very few. If you have ideas for further improving our Vegas tests, please leave a comment.
Considerations & Potential Performance Roadblocks
April 30 Addendum: After this article was published, MAGIX released the fifth major update to Vegas Pro 16, build 424. We performed follow-up testing with this version, and have found that the issues detailed here continue to persist.
Work on this text started in December, following publication of our Radeon Pro WX 8200 evaluation and a some good suggestions from the group. We overhauled our check scripts, and obtained again to work on efficiency testing – only to run into a direct roadblock.
From the get-go, AMD Radeon GPUs had no problems in our testing, but the story was totally different for NVIDIA. As an alternative of 60 FPS playback with a LUT FX filter, we as an alternative saw 1~5 FPS. One thing seemed broken, so we reached out to MAGIX and NVIDIA to figure out what was happening. We continued to speak with both corporations a number of occasions since December, however nothing improved on the performance entrance.
After placing twenty graphics cards by way of the gauntlet of revised exams, extra was revealed concerning the difficulty. We hate to point out a “broken” performance graph, nevertheless it’s essential to spotlight as a result of the difficulty continues to survive.
Poor LUT performance on Quadro/TITAN without a Vegas profile added to NVIDIA Control Panel
Every GeForce card performed its LUT jobs without issue, but all of the Quadros, and both TITANs, delivered performance that makes a $400 GPU look no less capable than a $2,500 one. In a similar vein, the same LUT encode would throw an error on some occasions, stating that the GPU was out of memory, even though that wasn't the case (system memory was also fine).
Certain NVIDIA GPUs made LUT encoding troublesome
Despite the oddities in testing, all twenty of the GPUs had their data compiled. Graphs were created, and then writing was underway… until we found a fix that's only sort of a fix. In the NVIDIA Control Panel, at least on Quadro and TITAN, a profile for Vegas is not automatically created. Normally, this shouldn't matter, but if you are experiencing these same issues and manually add the application (as seen in the image below), the problem goes away. Again, sort of.
After simply adding the Vegas profile to the NVIDIA Control Panel, the issue of poor LUT performance disappeared. Encode times of 900 seconds dropped to 300, and 5 FPS playback changed to 60 FPS. The kicker? You can keep every single setting in this new profile as "Use global setting". You just need the profile to exist.
Given this bizarre behavior, it seems like it'd make sense for NVIDIA to automatically add Vegas as a profile, and it probably should… but adding this profile actually comes with a caveat. After adding the profile, our LUT and playback performance improved, but every other encode suffered a small drop in performance. This was consistent across a number of GPUs. An encode that might have taken 2m 30s originally, for example, would become 2m 36s.
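Spotting that kind of small regression is just a matter of bookkeeping. Here's a minimal Python sketch of the comparison – the project names and timings below are illustrative placeholders, not our actual benchmark data:

```python
# Sketch: compare per-project encode times measured without and with an
# application profile present in the NVIDIA Control Panel. All numbers
# here are hypothetical, chosen only to mirror the behavior described
# above (LUT improves dramatically, other encodes regress slightly).

def profile_deltas(without_profile, with_profile):
    """Return {project: (delta_seconds, percent_change)} per project."""
    deltas = {}
    for project, before in without_profile.items():
        change = with_profile[project] - before
        deltas[project] = (change, 100.0 * change / before)
    return deltas

without_profile = {"LUT": 900.0, "AVC": 150.0, "HEVC": 165.0}
with_profile    = {"LUT": 300.0, "AVC": 156.0, "HEVC": 171.0}

for project, (delta, pct) in profile_deltas(without_profile, with_profile).items():
    direction = "faster" if delta < 0 else "slower"
    print(f"{project}: {abs(delta):.0f}s {direction} ({pct:+.1f}%)")
```

Run over real timings, the same tabulation makes it obvious whether the profile is a net win for your particular workload.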
We're not well-versed enough in MAGIX's or NVIDIA's software designs to comment much further, but we feel like the ultimate fix has to come through an update to NVIDIA's driver. The fact that performance is respectable on GeForce but abysmal on Quadro and TITAN by default suggests a proper fix shouldn't be that difficult.
Tests & Hardware
We use a number of different projects for both our encode and playback tests. To gauge basic AVC and HEVC encode performance, an identical scene is encoded to each format on all GPUs. To then give the GPU some real work to do, two more projects are used: one with a LUT filter, and another with Median. The Median test carries over to our CPU encode tests.
Here are the basic specs of our test rigs and chosen processors:
Because this article focuses on the performance of a workstation application, all relevant CPUs and GPUs (that we have) are included for testing here. Some gaming GPUs were tossed in for expanded testing, with specific reasons for each. The GTX 1660 Ti was included as it's the top-end non-RTX GeForce based on Turing, while the RTX 2060 was included because it's the lowest-end RTX GeForce based on Turing. Conversely, the RTX 2080 Ti gives us a look at top-end performance with an NVIDIA gaming GPU. On the AMD side, the included gaming rivals are the Vega-based Radeon VII and RX Vega 64, and the Polaris-based RX 590. To complete the picture, both the Pascal-based TITAN Xp and Turing-based TITAN RTX from NVIDIA are also included.
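At its core, a suite like this is just wall-clock timing of every (project, codec) render. A minimal Python sketch of that harness idea follows – the render command is a deliberate placeholder, since we're not showing Vegas's actual scripting interface here:

```python
# Sketch of a wall-clock timing harness for encode tests. The command
# that launches a render is supplied by the caller (a placeholder here,
# not an actual Vegas command line).
import subprocess
import time

def time_encode(command):
    """Run one render command and return elapsed wall-clock seconds."""
    start = time.perf_counter()
    subprocess.run(command, check=True)
    return time.perf_counter() - start

def run_suite(projects, codecs, command_for):
    """Time every (project, codec) combination.

    command_for(project, codec) must build the CLI invocation that
    renders that project to that codec.
    """
    results = {}
    for project in projects:
        for codec in codecs:
            results[(project, codec)] = time_encode(command_for(project, codec))
    return results
```

Repeating each combination and keeping the median would further guard against one-off stalls skewing a result.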
GPU Encode Performance
For our first set of results, we see an obvious detriment to AMD's older Polaris-based graphics cards, including the WX 3100 ~ WX 7100, and RX 590. Any modern GPU beyond those is going to deliver good performance, with the top end of NVIDIA's range leading the pack.
HEVC fares quite a bit differently:
With HEVC, there's little difference across the entire range of GPUs, with the previously struggling low-end WX series cards finding better positions on this chart. Ultimately, every GPU takes a bit longer to encode with HEVC than AVC (apart from those aforementioned Radeons).
With every GPU working properly, the LUT FX encode chart looks quite a bit different from the skewed one shown earlier in the article. AMD has some clear strengths in Vegas, with the Vega-based cards sitting comfortably at the top. NVIDIA's bigger Quadros and TITANs sit beneath those, while the AMD Polaris series of cards once again find themselves held back due to their dated VCE encoder.
The LUT filter is really not that demanding in the grand scheme, but the Median FX sure is. With the chart below, we can see nice scaling from top to bottom, with AMD once again shining bright at the top:
AMD's performance overall is really impressive, save for perhaps the Polaris cards that struggle in multiple tests. They struggle less in this Median test overall, though. Essentially, the more graphics horsepower you have, the better, but AMD's Radeons have a distinct advantage.
CPU Encode Performance
Graphics cards scale quite well in Vegas, but CPUs do as well. Clearly, there's a huge difference in the results above between the lowly 4-core Ryzen and the 18- and 32-core top dogs. That's a great thing, but ultimately pretty much expected, given how long CPUs have been optimized for. It's important to note some anomalies as well, though, mostly with the top AMD Ryzen Threadripper chips. These are known to behave oddly with some video encoders, thanks in part to less-than-ideal thread management in Windows. We talk more about this in our Coreprio article. The 24-core 2970WX should place higher than the 16-core Intel chip in a test like this, and likewise, the 32-core 2990WX should dominate an Intel 18-core – but neither is true here.
As we saw in our Blender 2.80 performance deep-dive last month, some software tasks favor the GPU so heavily that you can get by with a smaller CPU if your GPU is powerful. With Vegas, GPU encoding is clearly faster overall, but the CPU is still heavily involved, so the better your CPU and GPU, the faster your encodes are going to happen. Case in point:
For its price, the 2400G is a great processor, but for those who want to get serious work done, it'd clearly pay off to opt for a higher-end option. Even the 8-core 2700X delivers big gains over the 4-core 2400G. It's only at the really high end of core counts where minimal benefits are seen when moving up a model or two.
To capture playback performance, we used the same Median project as above, and a slightly edited LUT one, configured the viewport for Best (Full), and recorded 30-second stints of playback. For each run, the entire 30 seconds was allowed to play through twice, with the third run being recorded. This was done to help smooth out hiccups that can really throw off tabulated test results.
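Tabulating minimum and average FPS from a recorded run is straightforward once you have per-frame timestamps. Here's a generic Python sketch of that calculation – this illustrates the idea, not the exact tooling we used – where min FPS is taken over one-second windows so a single hiccup shows up clearly instead of vanishing into the run-wide average:

```python
# Sketch: derive minimum and average FPS from an ordered list of
# per-frame timestamps (seconds) captured during a playback run.
from collections import Counter

def fps_stats(frame_times, window=1.0):
    """Return (min_fps, avg_fps) for one recorded playback run."""
    start = frame_times[0]
    duration = frame_times[-1] - start
    # Average over the whole run: intervals per elapsed second.
    avg_fps = (len(frame_times) - 1) / duration
    # Minimum over fixed windows: a stall leaves one window sparse,
    # which is exactly the hiccup a run-wide average would hide.
    counts = Counter(int((t - start) // window) for t in frame_times)
    min_fps = min(counts.values()) / window
    return min_fps, avg_fps
```

For example, a steady 60 FPS run reports (60.0, 60.0), while a run with a half-second stall in the middle keeps a similar average but drops its minimum sharply.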
LUT, as mentioned above, isn't as demanding as Median, but you'll still want a sufficient GPU to ensure that you'll be able to play back at 4K/60 without issue. Below are the minimum and average FPS results from those runs:
The majority of the GPUs here could deliver suitable performance, while AMD's Polaris cards fell back a bit – though again, not to as extreme a degree as they could have. Things change quite a bit with the much heavier Median test:
These graphs require a bit of an explanation. Remember the big "Considerations & Potential Performance Roadblocks" section from earlier? Oddities continued into the Median playback test, where many GPUs simply could not play back the scene without a serious struggle. If every GPU behaved the same way, the test could easily be declared too grueling, but many GPUs survive.
As with the poor LUT performance seen from NVIDIA GPUs earlier in the article (with default NVIDIA Control Panel settings), some of the GeForces that fared fine there didn't here. That "0" for the WX 3100 isn't a typo, either. Median in particular is a strenuous effect, and will require a beefy GPU for reliable playback at good quality.
There's another kicker here. Even with some of the GPUs that placed at the bottom of these charts, there were times when we experienced better playback performance during testing, but it'd never persist for long. There's clearly more optimization that can be done somewhere. We can honestly say it doesn't make sense that an RX 590 scores better in this test than much faster GPUs. Look at the RTX 5000's 18 FPS and RTX 4000's 3 FPS to see how sporadic performance can be. We're dealing with video here, but Median FX is in effect graphics rendering, and you only need to quickly look at one of our related articles on that to understand how things can scale.
A lot of benchmarking went into this article, so it's a good thing that lots of interesting information came from it. Conversely, it's unfortunate that so much of the NVIDIA performance in our GPU testing was so hit-or-miss, with GeForces largely performing better than Quadro and TITAN, and only a half-fix available to get things working better. We're hoping this situation won't last for long, or there will be an awful LUT of headaches out there.
Given the current situation with NVIDIA and Vegas right now, Radeon wins as the go-to choice for the software. Even against NVIDIA's GPUs that seemed to behave fine from the get-go, AMD's top chips led the pack. These Radeon strengths have carried over from version 15. We didn't encounter issues with NVIDIA GPUs in that version, but that could be due to the simple fact that we only ran basic tests at that point. Once filters become involved, the situation changed a lot for NVIDIA.
It can be argued that CPUs are technically better for encoding, but GPU-based encoding has come a long way over the years, so it's becoming increasingly unlikely that you'll notice a quality difference between the two kinds of encodes. You'll notice the encode time differences, however. In our testing, adding a GPU gave a 300% speed-up in most comparisons.
There are some workstation scenarios where the GPU is the only thing that matters, but that's not the case with Vegas. While the GPU is extremely important, the better your CPU, the better the overall encode performance, so a good GPU and CPU together will deliver a great experience.
We're never done testing, so we'll undoubtedly be revisiting Vegas again down the road. Hopefully right after NVIDIA performance and stability improves…