
Nvidia's AD102 Lovelace GPU reportedly offers gamers up to 18,432 CUDA cores

Nvidia's next-generation architecture will be BIG


Rumour has it that Nvidia's next-generation graphics architecture will deliver a huge performance leap over today's Ampere products, with 3DCenter.org and kopite7kimi claiming that the company's AD102 GPU core will feature 18,432 FP32 units.

Nvidia's next-generation GPU architecture will reportedly be called Lovelace, after Ada Lovelace. This next-generation graphics architecture is also rumoured to use 5nm lithography and power Nvidia's RTX 40 series of products.

If the specifications from kopite7kimi are accurate, Nvidia's AD102 GPU will feature 71.4% more CUDA cores than Nvidia's GA102 (Ampere) GPU. This factor alone should make Lovelace a huge generational leap over Ampere, assuming that Nvidia can offer the same (or higher) performance levels from each Lovelace FP32 unit. 
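
For anyone who wants to sanity-check that figure, the maths is straightforward. GA102 is widely reported to have 10,752 FP32 units (a figure assumed here, as it isn't stated above), so a rough comparison looks like this:

    # Back-of-the-envelope comparison of the rumoured AD102 core count
    # against GA102's widely reported figure (assumed, not from the article).
    ga102_fp32 = 10752   # GA102 (RTX 3090) FP32 units
    ad102_fp32 = 18432   # rumoured AD102 FP32 units
    increase = (ad102_fp32 / ga102_fp32 - 1) * 100
    print(f"AD102 offers {increase:.1f}% more FP32 units than GA102")  # ~71.4%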

At this time, it is unknown when Nvidia will release a post-Ampere graphics architecture. Nvidia hasn't released their entire Ampere lineup yet, and mobile Ampere products have yet to reach the consumer market. Even if we discount the possibility of an Ampere-based RTX 30 Super series refresh, Nvidia is unlikely to release a next-generation graphics architecture until mid-to-late 2021 at the earliest.

Rumour has it that Nvidia is working on RTX 3070 Ti and RTX 3080 Ti products, which are both due to release in early 2021. These products would negate the need for an Ampere RTX Super refresh, making it likely that Nvidia plans to release RTX 40 series products and a new GPU architecture sooner than expected.

Reports of Nvidia's accelerated product launches could be due to AMD's recent RDNA 2 products, which have proven to be very competitive with Nvidia's RTX 30 series offerings. Have AMD forced Nvidia to accelerate their roadmap? 

You can join the discussion on Nvidia's rumoured AD102 GPU on the OC3D Forums


Most Recent Comments

28-12-2020, 11:06:01

AlienALX
Bah, so they didn't name it after Linda then

I don't think AMD have accelerated it. Ampere has been a wash since its conception. Poor yields, wrong company making the dies and so on.

28-12-2020, 12:16:03

AngryGoldfish
Quote:
Originally Posted by AlienALX
Bah, so they didn't name it after Linda then

I don't think AMD have accelerated it. Ampere has been a wash since its conception. Poor yields, wrong company making the dies and so on.
I've heard rumours that Nvidia will be using Samsung's 5nm process, not TSMC's.

I think it's more complicated than Samsung's process node just being terrible. Ampere is not a terrible architectural leap. It's nowhere near as good as past generations, but in and of itself it's decent enough. Availability is poor, but so is everything using TSMC's 7nm, which has been available for ages now. Power consumption is pretty bad with Ampere, but there are loads of cores, and it's not exactly miles behind RDNA2. And that high power draw clearly hasn't made the cards impossible to cool.

28-12-2020, 12:44:36

Dicehunter
Meh... means nothing if only a minority of actual customers can get them without going to extraordinary lengths due to scalpers and retailers price gouging.

28-12-2020, 12:47:01

AlienALX
Quote:
Originally Posted by AngryGoldfish
I've heard rumours that Nvidia will be using Samsung's 5nm process, not TSMC's.

I think it's more complicated than Samsung's process node just being terrible. Ampere is not a terrible architectural leap. It's nowhere near as good as past generations, but in and of itself it's decent enough. Availability is poor, but so is everything using TSMC's 7nm, which has been available for ages now. Power consumption is pretty bad with Ampere, but there are loads of cores, and it's not exactly miles behind RDNA2. And that high power draw clearly hasn't made the cards impossible to cool.
Then they will have the same issues. IE, Samsung's 5nm is more akin to TSMC's 7.

Will it be enough to kick AMD's ass? yeah, probably. However, if they remain with Samsung AMD will have a chance to drop with TSMC and stay in line.

Bit annoying if you think about "What if?". IE, what if Nvidia stuffed all of that core tech onto a 5nm TSMC die. Better clocks, better power consumption ETC.

28-12-2020, 14:59:53

AngryGoldfish
Quote:
Originally Posted by AlienALX
Then they will have the same issues. IE, Samsung's 5nm is more akin to TSMC's 7.

Will it be enough to kick AMD's ass? yeah, probably. However, if they remain with Samsung AMD will have a chance to drop with TSMC and stay in line.

Bit annoying if you think about "What if?". IE, what if Nvidia stuffed all of that core tech onto a 5nm TSMC die. Better clocks, better power consumption ETC.
Yeah, I know what you mean. But I still think there is more at play with their decisions than consciously choosing Samsung's process over TSMC as some sort of artificial drip feeding technique to milk consumers over longer periods of time. It could be that the only way for Nvidia to cram that many cores is with Samsung's process. Maybe their design specifically 'allows' it. Or maybe it's the only way to make the cards even remotely affordable. And if Nvidia are committed to increasing the raw horsepower and not necessarily working on clock speed and IPC improvements, maybe it makes sense for Nvidia to use Samsung. I know it sounds funny at this stage, but maybe it guarantees Nvidia will have a distinct advantage over the myriad of other companies using TSMC when it comes to wafer supply and design. Nvidia like to be distinct as we know. And maybe there is a disagreement between Nvidia and TSMC that cannot be resolved at this time. Maybe there's not enough 7 or 5nm to cater to everyone and Nvidia are forced to use Samsung. It just seems to me like there is more at play. It seems to me that if Nvidia could use TSMC's 5nm and crush AMD, but they don't, then there is a valid reason for it that we don't or may not ever know about. Nvidia don't want to stay competitive; they want to crush their competition.
