Ashes of the Singularity PC Performance Review

Developer FAQ - What happened to Mantle Support?

Oxide Games was one of the first developers to support the new generation of graphics APIs, adopting AMD's Mantle API before DirectX 12 and Vulkan were even announced.

The new Nitrous Engine is designed specifically for modern graphics APIs, and Oxide Games has been working tirelessly to deliver a game engine that makes the best use of modern technology, from multi-core CPU support to cross-vendor multi-GPU support.

Oxide Games is a genuine collection of gaming enthusiasts, working to support all of these new features partly to satisfy their own curiosity while building one of the most efficient DirectX 11, and now DirectX 12, game engines ever designed.

Now, with DirectX 12 here, Oxide has dropped support for AMD's Mantle API, moving those resources to DirectX 12 and Vulkan, as both of these new APIs support multiple GPU vendors. Oxide say that their time on the Mantle API was not wasted, as the knowledge gained from working with it is one of the reasons they are among the leading developers using modern graphics APIs today.

 

Ashes of the Singularity Beta Phase 2 DirectX 12 Performance Review

 

Below is a selection of questions that Oxide Games answered for us regarding Ashes of the Singularity and their new benchmarking tool, confirming that they do not have a business agreement with either GPU manufacturer to add support for DirectX 12.

 

Is Oxide still supporting Mantle?

Oxide is migrating the effort spent on Mantle to supporting the upcoming Vulkan API. We have no solid timetable for Vulkan support at this time, however.

 

I’ve heard you allow source access to vendors. Is this true?

Yes. Oxide and Stardock want our game to run as fast as possible and with as few issues as possible on everyone’s hardware. Thus, we have an open door policy. For security reasons, we can’t dive into details, but we should be clear that this level of source access is almost unprecedented in the game industry. It is not common industry practice to share source code with IHVs.

 

Does Oxide/Stardock have some sort of business deal with any IHV with regards to Async Compute? Is Oxide promoting this feature because of some kind of marketing deal?

No. We have no marketing or business agreement to pursue or implement this feature. We pursued multiple command queues, also known as async compute, because it is a new capability in D3D12 and Windows 10. That is, we implemented it entirely of our own accord, out of curiosity. Oxide is committed to exploiting as many capabilities of DX12 as possible.
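As a back-of-the-envelope illustration (not Oxide's actual code, and with entirely hypothetical timings), the benefit of a second command queue can be modelled as overlapping independent graphics and compute work instead of running it back-to-back on a single queue:

```python
# Toy model of why async compute helps: with one queue, graphics and
# compute work run back-to-back each frame; with separate queues and
# otherwise-idle hardware units, they can (ideally) overlap.

graphics_ms = 10.0  # per-frame graphics work (hypothetical)
compute_ms = 4.0    # per-frame compute work (hypothetical)

serial_ms = graphics_ms + compute_ms       # single queue: 14 ms per frame
overlap_ms = max(graphics_ms, compute_ms)  # ideal overlap: 10 ms per frame

print(f"serial: {serial_ms} ms, overlapped: {overlap_ms} ms "
      f"({serial_ms / overlap_ms:.2f}x speedup)")
```

In practice the overlap is never perfect, since both queues contend for shared GPU resources, so real gains sit somewhere between the two figures.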


In the previous benchmark, were you using async compute?

We had very basic support for this feature. During development of multi-GPU support, we realized that some of the lessons learned and code written could be applied to async compute. Thus, Benchmark 2 has a much more advanced implementation of this feature.

 

Does Oxide optimize specifically for any hardware?

Oxide primarily optimizes at an algorithmic level, not for any specific hardware. We also take care to avoid the proverbial known “glass jaws” that every piece of hardware has. However, we do not write or tune our code with any specific GPU in mind. We find this is simply too time-consuming, and we must run on a wide variety of GPUs. We believe our code is very typical of a reasonably optimized PC game.


How much performance should I gain from a second graphics card in my computer?

This depends on your video cards. We expect around 70% scaling if you use two of the same card. Mixing cards, however, can vary the results: you will never get more than twice the speed of the slowest video card, so if one card is much faster than the other, you may be better off just using the faster card alone. If you are mixing and matching cards, we recommend running the benchmark in single-GPU mode first, then pairing cards which have similar single-GPU scores.
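Those rules of thumb can be turned into a rough calculator. This is only a sketch of the answer above: the function names are ours, and the 85% efficiency figure is an assumption chosen so that two matched cards give the quoted ~70% net gain.

```python
def dual_gpu_estimate(fps_a, fps_b, efficiency=0.85):
    """Estimate dual-GPU frame rate from each card's single-GPU score.

    Alternate-frame rendering is paced by the slower card, so the ceiling
    is twice its single-GPU frame rate, scaled by an efficiency factor
    (0.85 here reproduces ~70% scaling for two identical cards).
    """
    slower = min(fps_a, fps_b)
    return 2 * slower * efficiency


def worth_pairing(fps_a, fps_b):
    """True if the estimated pair beats the faster card running alone."""
    return dual_gpu_estimate(fps_a, fps_b) > max(fps_a, fps_b)


# Two identical 60 fps cards: ~102 fps, a 70% gain.
print(dual_gpu_estimate(60, 60), worth_pairing(60, 60))
# A 30 fps card paired with a 100 fps card: ~51 fps, worse than the
# faster card alone, so the pairing is not worth it.
print(dual_gpu_estimate(30, 100), worth_pairing(30, 100))
```

This is why the answer recommends benchmarking each card in single-GPU mode first: the slower card's score, not the faster one's, dominates the estimate.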


Why do multiple GPUs matter?

Multiple GPU configurations are increasingly common amongst gamers. Moreover, multi-GPU support allows users with a reasonably new video card to greatly improve their performance by buying a second card, even if it is a different brand or model. This will begin to matter more as gamers migrate to 4K and higher-resolution displays.

 


Most Recent Comments

04-04-2016, 16:34:06

Relayer
Do you have 390(X), 970, 980 cards on hand to test? It would be good to see how these perform since so many gamers own them. In some games lately Hawaii has been getting pretty close to Fury(X) performance.

Thanks,

04-04-2016, 16:44:49

WYP
Quote:
Originally Posted by Relayer View Post
Do you have 390(X), 970, 980 cards on hand to test? It would be good to see how these perform since so many gamers own them. In some games lately Hawaii has been getting pretty close to Fury(X) performance.

Thanks,
Sadly I do not have an R9 390 or GTX 970 for performance testing.

All of the GPUs that I use for testing have been bought and paid for by me, the writer, and were not samples from an external party or sponsor. Hopefully we can get hold of more GPUs for gaming content in the future, especially when the next generation of GPUs is released.

04-04-2016, 21:33:03

NeverBackDown
Quote:
Right now only AMD GCN GPUs support Asynchronous Compute in their GPU drivers, though Nvidia is rumored to be adding support for this function to Maxwell in the future with a driver update, though this remains unconfirmed by Nvidia.
AMD supports it on a hardware level as well as driver level. Nvidia won't be releasing a driver for it, driver can't make up for hardware losses, it's not possible for Nvidia to implement such a feature.

I find it kind of funny that now AMD in DX11 are either slightly behind or just ahead. Nvidia really are struggling with this title, explains why they called Oxide out before. Hopefully as they push out more content it gets more optimized and less CPU limited in future updates/patches.

05-04-2016, 08:32:58

SPS
I'd be more interested to see different CPUs used, from both Intel and AMD's offerings. Though I do appreciate that you don't have access to everything or necessarily the time either.

Quote:
Originally Posted by NeverBackDown View Post
AMD supports it on a hardware level as well as driver level. Nvidia won't be releasing a driver for it, driver can't make up for hardware losses, it's not possible for Nvidia to implement such a feature.

I find it kind of funny that now AMD in DX11 are either slightly behind or just ahead. Nvidia really are struggling with this title, explains why they called Oxide out before. Hopefully as they push out more content it gets more optimized and less CPU limited in future updates/patches.
Oxide most likely focused on GCN optimization which kind of explains why Nvidia benefit on DX11. Not sure how you know Nvidia can't support async compute, they don't really share architecture notes.

05-04-2016, 11:18:59

NeverBackDown
Quote:
Originally Posted by SPS View Post
I'd be more interested to see different CPUs used, from both Intel and AMD's offerings. Though I do appreciate that you don't have access to everything or necessarily the time either.



Oxide most likely focused on GCN optimization which kind of explains why Nvidia benefit on DX11. Not sure how you know Nvidia can't support async compute, they don't really share architecture notes.
Even in DX11, they aren't much better as everything gets to higher resolutions or settings. I doubt it's mostly AMD-focused; it uses asynchronous compute. Nvidia doesn't have it and therefore probably resorts to context switching to do both compute and graphics work, which adds latency and therefore decreases framerate. AMD called out Nvidia at GDC, saying you can't support it at a driver level. In addition, it's been what, 6 months, and we've had no comment or hints from Nvidia about this "rumored" magic driver. I think it's more likely that got started by people saying, "wait for them to release a driver for the game". I don't even think Nvidia has a proper driver for it yet, or won't for a while. Which makes sense, since they called out Oxide saying it's not representative of a real DX12 title and begged them not to use asynchronous compute. That's just them being sore losers tbh. Nothing wrong with admitting that they can't support it but will try to get the best performance possible anyway. They haven't done this for any other DX12 title; it's just because they lose by far in this one.
