Hugo Hacker News

Intel’s Arc GPUs will compete with GeForce and Radeon in early 2022

Exmoor 2021-08-16 16:56:12 +0000 UTC [ - ]

As TFA rightly points out, unless something drastically changes in the next ~6mo, Intel is going to launch into the most favorable market situation we've seen in our lifetimes. Previously, the expectation was that they needed to introduce something competitive with the top-end cards from nVidia and AMD. With basically all GPUs out of stock currently, they really just need to introduce something competitive with almost anything on the market to be able to sell as much as they can ship.

015a 2021-08-16 19:02:15 +0000 UTC [ - ]

Yup; three other points I'd add:

1) I hate to say "year of desktop Linux" like every year, but with the Steam Deck release later this year, and Valve's commitment to keep investing in and collaborating on Proton to ensure wide-ranging game support, Linux gaming is going to grow substantially throughout 2022, if only due to the new devices added by Steam Decks.

Intel has always had fantastic Linux video driver support. If Arc is competitive with the lowest end current-gen Nvidia/AMD cards (3060?), Linux gamers will love it. And, when thinking about Steam Deck 2 in 2022-2023, Intel becomes an option.

2) The current-gen Nvidia/AMD cards are insane. They're unbelievably powerful. But, here's the kicker: the Steam Deck is 720p. Go out and buy a brand new Razer/Alienware/whatever gaming laptop, and the most common resolution even on the high-end models is 1080p (w/ high refresh rate). The Steam Hardware Survey puts 1080p as the most common resolution, and IT'S NOT EVEN REMOTELY CLOSE to #2 [1] (720p 8%, 1080p 67%, 1440p 8%, 4K 2%) (did you know more people use Steam on macOS than on a 4K monitor? lol)

These Nvidia/AMD cards are unprecedented overkill for most gamers. People are begging for cards that can run games at 1080p; Nvidia went straight to 4K, even showing off 8K gaming on the 3090, and now they can't even deliver any cards that run 720p/1080p. Today, we've got AMD releasing the 6600 XT, advertising it as a beast for 1080p gaming [2]. This is what people actually want: affordable and accessible cards to play games on (whether they can keep the 6600 XT in stock remains to be seen, of course). Nvidia went straight Icarus with Ampere; they shot for the sun, and couldn't deliver.

3) More broadly, geopolitical pressure in East Asia, and specifically Taiwan, should concern investors in any company that relies heavily on TSMC (AMD & Apple being the two big ones). Intel may start by fabbing Arc there, but they uniquely have the capacity to bring that production to the West.

I am very, very long INTC.

[1] https://store.steampowered.com/hwsurvey/Steam-Hardware-Softw...

[2] https://www.pcmag.com/news/amd-unveils-the-radeon-rx-6600-xt...

pjmlp 2021-08-16 19:11:29 +0000 UTC [ - ]

Steam will hardly change the 1% status of GNU/Linux desktop.

Many forget that most studios don't bother to port their Android games to GNU/Linux, even though those games are mostly written using the NDK (so plain ISO C and C++, GL, Vulkan, OpenSL, ...), yet there is still no GNU/Linux version, because the market just isn't there.

015a 2021-08-16 19:57:27 +0000 UTC [ - ]

The first wave of Steam Decks sold out in minutes. They're now pushing back delivery to Q2 2022. The demand for the device is pretty significant; not New Console large, but it's definitely big enough to be visible in the Steam Hardware Survey upon release later this year, despite the vast size of Steam's overall playerbase.

Two weeks ago, the Hardware Survey reported Linux breaching 1% for the first time ever [1], for reasons not related to the Deck (in fact, it's not obvious WHY Linux has been growing; disappointment in the Win11 announcement may have caused it, but in short, it's healthy, natural, long-term growth). I would put real money up that Linux will hit 2% by the January 2022 survey, and 5% by January 2023.

Proton short-circuits the porting argument. It works fantastically for most games, with zero effort from the devs.

We're not talking about Linux being the majority. But it's definitely looking like it will see growth over the next decade.

[1] https://www.tomshardware.com/news/steam-survey-linux-1-perce...

pjmlp 2021-08-16 20:07:07 +0000 UTC [ - ]

It took 20 years to reach 1%, so...

I believed in it back in the Loki golden days; nowadays I'd rather bet on macOS, Windows, mobile OSes and game consoles.

It remains to be seen how the Steam Deck fares versus the Steam Machines of yore.

onli 2021-08-16 20:54:26 +0000 UTC [ - ]

Don't let history blind you to the now ;)

It's way better now than it was back then. There was a long period of good ports, which combined with the Steam for Linux client made Linux gaming a real thing already. But instead of fizzling out like the last time there were ports, now Linux transitioned to "Run every game" without needing a port. Some exceptions, but they are working on it and compatibility is huge.

This will grow slowly but steadily now, and is ready to explode if Microsoft makes one bad move (like the crazy Windows 11 hardware requirements, but we'll see).

The biggest danger to that development is GPU prices, and the Intel GPUs can only help there. A competent $200 model is desperately needed to keep the PC alive as a gaming platform. It has to run on fumes - on old hardware - now.

pjmlp 2021-08-17 04:51:48 +0000 UTC [ - ]

Just like everyone would migrate to GNU/Linux because of DirectX 10 being Vista only, or OpenGL being replaced by Metal?

onli 2021-08-17 08:14:48 +0000 UTC [ - ]

Vista brought a lot of migration to Linux indeed. And the Mac being incapable of playing games is a factor for quite a few people. But until recently, gaming on Linux was limited.

No one said everyone though.

pjmlp 2021-08-17 08:48:14 +0000 UTC [ - ]

Yeah otherwise it would be even less than 1% nowadays.

onli 2021-08-17 22:04:07 +0000 UTC [ - ]

Obviously. That 1% gives an estimate of 1.2 million people btw - not a small group.

ineedasername 2021-08-16 23:27:32 +0000 UTC [ - ]

What's the rate of increase though? Has it been linear, 0.05% per year? No: it has increased about 0.25% since 2019. It's not just that Linux is increasing, it's that its rate of increase is also increasing.

pjmlp 2021-08-17 06:42:06 +0000 UTC [ - ]

For me that is wishful thinking that ignores increases in other platforms.

ineedasername 2021-08-17 12:53:58 +0000 UTC [ - ]

How is it wishful thinking? It's a proportional increase. Meaning that it has increased in both absolute and relative terms, and done so at a faster rate in the last two years. I'm stating an observation of the data. I don't see room for wishful thinking in that.

Other platforms can increase while Linux also increases, the two aren't mutually exclusive.

I'm not saying it's going to take over the market. It's a huge long shot that it will ever really rival Windows. Even if it makes significant headway on Steam, that also doesn't necessarily translate to a corresponding change in OS market share. But the pending arrival of the Steam Deck combined with an enormous increase in game compatibility has nonetheless set the stage for significant gains for Linux with Steam.

Which part of the above observations are wishful thinking?


reitzensteinm 2021-08-16 21:22:28 +0000 UTC [ - ]

Not disagreeing with your overall point, but it's pretty rare for people to port their mobile game to PC even when they're using Unity and all you have to do is figure out the controls. Which you've probably already got a rough version of anyway, just from developing the game.

Smithalicious 2021-08-17 07:20:31 +0000 UTC [ - ]

I think talking about Android games is moving the goalposts a bit. There's no market for Android games on Windows either.

On my end, and my Year of Linux Desktop started almost a decade ago, the Linux experience has not only never been better (of course) but has improved faster in recent years than ever before, and gaming is one of the fastest improving areas.

Proton is a fantastic piece of software.

pjmlp 2021-08-17 08:49:47 +0000 UTC [ - ]

Not at all.

Android only seems to count as Linux when bragging about how Linux has won, yet pointing out that games for the Android/Linux distribution don't get made available on GNU/Linux distributions is somehow moving the goalposts.

trangus_1985 2021-08-16 22:27:55 +0000 UTC [ - ]

>Steam will hardly change the 1% status of GNU/Linux desktop.

I agree. But it will change the status in the listings. Steam Deck and SteamOS appliances should be broken out into their own category, and I could easily see them overtaking the Linux desktop.

ZekeSulastin 2021-08-16 19:30:59 +0000 UTC [ - ]

… Nvidia did release lower-end cards that target the same market and price point as the 6600 XT a lot earlier than AMD though. As far as MSRP goes, the 3060 and 3060 Ti bracket the 6600 XT's $380 at $329 and $399 (not that MSRP means a thing right now) and similarly bracket its performance, and even the MSRP was not received well in conjunction with the 1080p marketing. Both manufacturers have basically told the mid and low range market to buy a console, even if you are lucky enough to get an AMD reference or Nvidia FE card.

Revenant-15 2021-08-16 21:58:02 +0000 UTC [ - ]

I've happily taken their advice and have moved to an Xbox Series S for a good 80% of my gaming needs. What gaming I still do on my PC consists mainly of older games, emulators and strategy games. Although I've been messing with Retroarch/Duckstation on my Xbox, and it's been quite novel and fun to be playing PS1 games on a Microsoft console.

kyriakos 2021-08-17 05:38:46 +0000 UTC [ - ]

Same here. I was looking to buy a GPU, and for half the cost of a decent one (at current market prices) I instead bought an Xbox Series X and a Game Pass subscription. My gaming needs have been covered since February and Game Pass keeps delivering, while GPU prices are still too high for a not-very-serious gamer like myself.

Revenant-15 2021-08-17 13:10:48 +0000 UTC [ - ]

The only game I've had to buy was Forza Motorsport 7, which is deeply discounted because it's leaving the service. Oh, and I couldn't resist Splinter Cell (the original Xbox version) and Beyond Good and Evil, which are also discounted to like 4 Euros. I might just end up buying the whole Splinter Cell series.

Otherwise, Game Pass has given me access to a list of 100 games installed that I'm slowly playing through. Today we're getting Humankind (on PC) and in two days Twelve Minutes. It's a ridiculously good deal.

ineedasername 2021-08-16 23:30:01 +0000 UTC [ - ]

> playing PS1 games on a Microsoft console.

That's just wrong. There should be some sort of matter/anti-matter reaction there.

Revenant-15 2021-08-17 13:46:46 +0000 UTC [ - ]

You'd think so, but somehow it hasn't happened. I'm just waiting for PCSX2 to mature a bit more on Xbox, then I'll be playing PS2 games on my Xbox too.

anthk 2021-08-17 15:18:26 +0000 UTC [ - ]

You could do that since the OG Xbox days.

And SNES/MegaDrive.

iknowstuff 2021-08-16 22:06:28 +0000 UTC [ - ]

Intel sells expensive CPUs which are becoming useless thanks to ARM - as much in consumer devices as they are in datacenters, with big players designing their own ARM chips. GPUs are their lifeboat. Three GPU players is better than two, but I don't see much of a reason to be long Intel.

pkulak 2021-08-16 23:06:32 +0000 UTC [ - ]

You’re overestimating the role of the architecture. X86 is just fine, and is perfectly competitive at comparable node generations. Don’t believe everything Apple tells you. ;)

dublin 2021-08-17 01:26:54 +0000 UTC [ - ]

I don't believe much of anything Apple tells me. The x86 is fine, but any reasons to prefer it to other architectures are disappearing fast. As someone who's suffered (literally suffered) due to Intel's abysmal and execrable graphics performance in the past, I don't expect that they'll exactly blow out this market.

One of the biggest reasons I want a real next-gen ARM-based Surface Pro is that I want to put Intel in the rearview mirror forever. I didn't hate Intel until I started buying Surfaces, then I realized that everything that sucks about the Surface family is 100% Intel's fault, from beyond-buggy faulty power management (cutting advertised battery life by more than half) to buggy and broken graphics ("Intel droppings" or redraw artifacts on the screen) to "integrated" graphics performance that just simply sucks so bad it's unusable for simple 3D CAD, much less gaming.

rowanG077 2021-08-17 19:38:29 +0000 UTC [ - ]

The fact that every x86 CPU in ultrabooks throttles itself into the ground, while the M1 in the MacBook Pro is almost impossible to throttle, shows this is a lie.

pkulak 2021-08-18 00:16:02 +0000 UTC [ - ]

Can you think of any differences in those two CPUs, other than architecture, that may be responsible for that?

Hint: TSMC's new 5nm process is being used in a laptop CPU by exactly one company at the moment.

rowanG077 2021-08-18 08:40:52 +0000 UTC [ - ]

It has nothing to do with 5nm. 5nm doesn't magically make the CPU output less heat than the cooling can handle.

SCUSKU 2021-08-16 23:08:01 +0000 UTC [ - ]

IMO, the long position on Intel is basically a bet on China invading Taiwan, or the US gov't subsidizing Intel. Both are certainly extreme events, but given Chinese military expansion and US increase in gov't spending, it doesn't seem impossible.

ineedasername 2021-08-16 23:22:04 +0000 UTC [ - ]

China doesn't even have to invade: just an ever incrementally increasing amount of soft pressure, punctuated by slightly larger actions calibrated to fall just below a provocation that requires a response. See their naval exercises for just one example.

abledon 2021-08-18 16:54:38 +0000 UTC [ - ]

Didn't they already seed the government with Chinese officials last year? There were tons of riots etc... but there are so many 'bad' things in the news that we don't focus on it anymore.

naravara 2021-08-17 13:55:49 +0000 UTC [ - ]

> These Nvidia/AMD cards are unprecedented overkill for most gamers. People are begging for cards that can run games at 1080p, Nvidia went straight to 4K, even showing off 8K gaming on the 3090, and now they can't even deliver any cards that run 720p/1080p. Today, we've got AMD releasing the 6600XT, advertising it as a beast for 1080p gaming [2]. This is what people actually want; affordable and accessible cards to play games on (whether they can keep the 6600xt in stock remains to be seen, of course). Nvidia went straight Icarus with Ampere; they shot for the sun, and couldn't deliver.

Do they need dedicated GPUs at all then? My impression has been that if you just want to play at 1080p and modest settings, modern integrated GPUs can do fine for most games.

015a 2021-08-17 19:18:40 +0000 UTC [ - ]

It depends on the game; for something like CS:GO or Overwatch, an integrated GPU @ 1080p is fine. For something like Cyberpunk, Apex Legends, or Call of Duty, it's really not enough for an enjoyable experience.

brian_herman 2021-08-17 01:35:06 +0000 UTC [ - ]

There are also more reasons to buy Nvidia GPUs than just their speed: they have great driver support, features like RTX Voice which cleans up audio, and other proprietary features like DLSS. Though I would argue that desktop gaming is becoming a thing of the past because mobile gaming is that much more powerful and easier to use and set up.

ungamedplayer 2021-08-17 15:11:50 +0000 UTC [ - ]

I'd love to believe what you are saying is true, but I just can't bring myself to play a game requiring accurate inputs on mobile. It is a disaster.

wonwars 2021-08-17 02:36:19 +0000 UTC [ - ]

Here on the elite tech news website, we were celebrating clipboard and screenshot tools just three months back, and you dream about the year of the Linux desktop.

I've said it before and I'll say it again: the biggest problem with Linux and OSS is the fanatics and deluded dreamers.

BuckRogers 2021-08-18 01:02:44 +0000 UTC [ - ]

I don't own any INTC, just index funds right now, but I tip my hat to you on Intel. Buying Intel today is essentially like gobbling up American apple pie. If you love China, buy paper design firms AMD or Apple. If you like the USA, buy Intel or maybe a paperweight in Nvidia, as they're diversified at Samsung and in talks to produce at Intel.

I'm expecting China-Taiwan-US tensions to increase, and all these outsourcing profit seeking traitors will finally eat crow. As if "just getting rich" wasn't enough for them.

My stock bets and opinions don't have to necessarily be pro-American (certainly wouldn't buy anything outright anti-American). But I love it when they align like today.

Looking forward to picking up an Intel Arc.

ayngg 2021-08-16 17:59:39 +0000 UTC [ - ]

I thought they were using TSMC for their GPUs, which means they'll be part of the same bottleneck that is affecting everyone else.

ineedasername 2021-08-16 23:31:42 +0000 UTC [ - ]

That depends on how much of TSMC's capacity they've reserved, and which node they are manufacturing on. I'm guessing that a low end GPU doesn't need a 5nm node. I think these are 7nm, which is still going to be booked tightly but probably not as heavily bottlenecked as smaller nodes.

davidjytang 2021-08-16 18:17:42 +0000 UTC [ - ]

I believe Nvidia doesn't use TSMC, or at least doesn't only use TSMC.

dathinab 2021-08-16 20:29:58 +0000 UTC [ - ]

Independent of the question around TSMC, they are still affected because:

- Shortages and price hikes caused by various effects are not limited to the GPU chiplet but also hit most other parts on the card.

- In particular, it also affects the RAM they are using, which can be a big deal wrt. pricing and availability.

mkaic 2021-08-16 18:18:56 +0000 UTC [ - ]

30 series Nvidia cards are on Samsung silicon iirc

tylerhou 2021-08-16 18:55:46 +0000 UTC [ - ]

The datacenter cards (which are about half of their revenue) are running on TSMC.

monocasa 2021-08-16 18:29:43 +0000 UTC [ - ]

Yeah, Samsung 8nm, which is basically Samsung 10nm++++.

abraae 2021-08-16 18:39:11 +0000 UTC [ - ]

10nm--?

monocasa 2021-08-16 18:53:16 +0000 UTC [ - ]

The '+' in this case is a common process node trope where improvements to a node over time that involve rules changes become Node+, Node++, Node+++, etc. So this is a node that started as Samsung 10nm, but they made enough changes to it that they started marketing it as 8nm. When they started talking about it, it wasn't clear if it was a more manufacturable 7nm or instead a 10nm with lots of improvements, so I drop the 10nm++++ to help give some context.

ayngg 2021-08-16 19:56:29 +0000 UTC [ - ]

Yeah, they use Samsung for their current series but are planning to move to TSMC for the next, iirc.

mastax 2021-08-17 00:00:38 +0000 UTC [ - ]

Currently the bottleneck is substrates, not TSMC.

YetAnotherNick 2021-08-16 19:34:46 +0000 UTC [ - ]

Except Apple

teclordphrack2 2021-08-16 19:22:59 +0000 UTC [ - ]

If they purchased a slot in the queue then they will be fine.

voidfunc 2021-08-16 17:05:52 +0000 UTC [ - ]

Intel has the manufacturing capability to really beat up Nvidia. Even if the cards don’t perform like top-tier cards they could still win bigly here.

Very exciting!

opencl 2021-08-16 17:10:48 +0000 UTC [ - ]

Intel is not even manufacturing these; they are TSMC 7nm, so they are competing for the same fab capacity that everyone else is using.

judge2020 2021-08-16 17:24:06 +0000 UTC [ - ]

*AMD/Apple is using. Nvidia's always-sold-out Ampere-based gaming chips are made in a Samsung fab.

https://www.pcgamer.com/nvidia-ampere-samsung-8nm-process/

Yizahi 2021-08-16 18:39:28 +0000 UTC [ - ]

Nvidia would also use TSMC 7nm, since it is much better than Samsung 8nm. So potentially they are also waiting on TSMC availability.

judge2020 2021-08-16 19:53:19 +0000 UTC [ - ]

How is it 'much better'? 7nm is not better than 8nm because it has a smaller number - the number doesn't correlate strongly with transistor density these days.

kllrnohj 2021-08-16 22:26:48 +0000 UTC [ - ]

Did you bother trying to do any research or comparison between TSMC's 7nm & Samsung's 8nm, or did you just want to make the claim that numbers are just marketing? Numbers alone were not what was being talked about here, but two specific fab processes, so the "it's just a number!" mistake wasn't obviously being made in the first place.

But Nvidia has Ampere on both TSMC 7nm (GA100) and Samsung's 8nm (GA102). The TSMC variant has a significantly higher density at 65.6M / mm² vs. 45.1M / mm². Comparing across architectures is murky, but we also know that the TSMC 7nm 6900 XT clocks a lot higher than the Samsung 8nm RTX 3080/3090 while also drawing less power. There's of course a lot more to clock speeds & power draw in an actual product than the raw fab transistor performance, but it's still a data point.

So there's both density & performance evidence to suggest TSMC's 7nm is meaningfully better than Samsung's 8nm.

Even going off of marketing names, Samsung has a 7nm as well and they don't pretend their 8nm is just one-worse than the 7nm. The 8nm is an evolution of the 10nm node while the 7nm is itself a new node. According to Samsung's marketing flowcharts, anyway. And analysis suggests Samsung's 7nm is competitive with TSMC's 7nm.

IshKebab 2021-08-16 19:29:59 +0000 UTC [ - ]

TSMC has a 56% market share. The next closest is Samsung at 18%. I think that's enough to say that everyone uses them without much hyperbole.

ineedasername 2021-08-16 23:36:00 +0000 UTC [ - ]

This is the point where commenters enter into a pedantic discussion about whether 56% is better suited to the term everyone or the term most. :)

paulmd 2021-08-16 17:38:09 +0000 UTC [ - ]

if NVIDIA cards were priced as ridiculously as AMD cards they'd be sitting on store shelves too

kllrnohj 2021-08-16 17:50:36 +0000 UTC [ - ]

Nvidia doesn't price any cards other than the Founders Editions, which you'll notice they both drastically cut down availability on and also didn't do at all for the "price sensitive" mid-range tier.

Nvidia's pricing as a result is completely fake. The claimed "$330 3060" in fact starts at $400 and rapidly goes up from there, with MSRPs on 3060s as high as $560.

paulmd 2021-08-16 18:07:42 +0000 UTC [ - ]

I didn't say NVIDIA directly prices cards, did I? Doesn't sound like you are doing a very good job of following the HN rule - always give the most gracious possible reading of a comment. Nothing I said directly implied that they did, you just wanted to pick a bone. It's really quite rude to put words in people's mouths, and that's why we have this rule.

But a 6900XT is available for $3100 at my local store... and the 3090 is $2100. Between the two it's not hard to see why the NVIDIA cards are selling and the AMD cards are sitting on the shelves: the AMD cards are 50% more expensive for the same performance.

As for why that is - which is the point I think you wanted to address, and decided to try and impute into my comment - who knows. Prices are "sticky" (retailers don't want to mark down prices and take a loss) and AMD moves fewer cards in general. Maybe that means that prices are "stickier for longer" with AMD. Or maybe it's another thing like Vega where AMD set the MSRP so low that partners can't actually build and sell a card for a profit at competitive prices. But in general, regardless of why - the prices for AMD cards are generally higher, and when they go down the AMD cards sell out too. The inventory that is available is available because it's overpriced.

(and for both brands, the pre-tariff MSRPs are essentially a fiction at this point apart from the reference cards and will probably never be met again.)

RussianCow 2021-08-16 19:02:44 +0000 UTC [ - ]

> But a 6900XT is available for $3100 at my local store... and the 3090 is $2100.

That's just your store being dumb, then. The 6900 XT is averaging about $1,500 brand new on eBay[0] while the 3090 is going for about $2,500[1]. Even on Newegg, the cheapest in-stock 6900 XT card is $1,700[2] while the cheapest 3090 is $3,000[3]. Everything I've read suggests that the AMD cards, while generally a little slower than their Nvidia counterparts (especially when you factor in ray-tracing), give you way more bang for your buck.

> the prices for AMD cards are generally higher

This is just not true. There may be several reasons for the Nvidia cards being out of stock more often than AMD: better performance; stronger brand; lower production counts; poor perception of AMD drivers; specific games being optimized for Nvidia; or pretty much anything else. But at this point, pricing is set by supply and demand, not by arbitrary MSRPs set by Nvidia/AMD, so claiming that AMD cards are priced too high is absolutely incorrect.

[0]: https://www.ebay.com/sch/i.html?_from=R40&_nkw=6900xt&_sacat...

[1]: https://www.ebay.com/sch/i.html?_from=R40&_nkw=3090&_sacat=0...

[2]: https://www.newegg.com/p/pl?N=100007709%20601359957&Order=1

[3]: https://www.newegg.com/p/pl?N=100007709%20601357248&Order=1

kruxigt 2021-08-17 00:37:56 +0000 UTC [ - ]

> But at this point, pricing is set by supply and demand.

If one is sold out and the other is not, that indicates that this is not completely the case.

Animats 2021-08-16 17:33:16 +0000 UTC [ - ]

Oh, that's disappointing. Intel has three 7nm fabs in the US.

There's a lot of fab capacity under construction. 2-3 years out, semiconductor glut again.

BuckRogers 2021-08-16 17:55:18 +0000 UTC [ - ]

This is a problem for AMD especially, but also Nvidia. Not so much for Intel. They're just budging in line with their superior firepower. Intel even bought out first dibs on TSMC 3nm out from under Apple. I'll be interested to see the market's reaction to this once everyone realizes that Intel is hitting AMD where it hurts and sees the inevitable outcome.

This is one of the smartest moves by Intel: make their own stuff and consume production from all their competitors, which do nothing but paper designs. Nvidia and especially AMD took a risk not being in the fabrication business, and now we'll see the full repercussions. It's a good play (outsourcing) in good times, not so much when things get tight like today.

wmf 2021-08-16 18:06:49 +0000 UTC [ - ]

> This is a problem for AMD especially

Probably not. AMD has had their N7/N6 orders in for years.

> They're just budging in line with their superior firepower. Intel even bought out first dibs on TSMC 3nm out from under Apple.

There's no evidence this is happening and people with TSMC experience say it's not happening.

> Nvidia and especially AMD took a risk not being in the fabrication business

Yes, and it paid off dramatically. If AMD had stayed with their in-house fabs (now GloFo) they'd probably be dead on 14nm now.

BuckRogers 2021-08-16 18:12:46 +0000 UTC [ - ]

Do you have sources for any of your claims? Other than going fabless being a fantastic way to cut costs and management challenges while increasing long-term supply-chain risk, none of that is anything I've heard. Here are sources for my claims.

AMD on TSMC 3nm for Zen5. Will be squeezed by Intel and Apple- https://videocardz.com/newz/amd-3nm-zen5-apus-codenamed-stri...

Intel consuming a good portion of TSMC 3nm- https://www.msn.com/en-us/news/technology/intel-locks-down-a...

I see zero upside in these developments for AMD, and to a lesser degree for Nvidia, who are better diversified with Samsung and also rumored to be in talks to fabricate at Intel as well.

AnthonyMouse 2021-08-16 21:31:41 +0000 UTC [ - ]

> Will be squeezed by Intel and Apple

This doesn't really work. If there is more demand, they'll build more fabs. It doesn't happen overnight -- that's why we're in a crunch right now -- but we're talking about years of lead time here.

TSMC is also not stupid. It's better for them for their customers to compete with each other instead of having to negotiate with a monopolist, so their incentive is to make sure none of them can crush the others.

> I see zero upside with these developments for AMD, and to a lesser degree, Nvidia

If Intel uses its own fabs, Intel makes money and uses the money to improve Intel's process which AMD can't use. If Intel uses TSMC's fabs, TSMC makes money and uses the money to improve TSMC's process which AMD does use.

BuckRogers 2021-08-17 06:35:42 +0000 UTC [ - ]

>TSMC is also not stupid. It's better for them for their customers to compete

It depends how much money is involved. I don't think TSMC, or any business, is the master chess player you're envisioning, losing billions in order to worry about AMD's woes. Rather, they'll consider AMD's troubles something AMD will have to sort out on its own in due time. They're on their own.

Now, AMD and TSMC do have a good relationship. But large infusions of wealth corrupt even the most stalwart companions. This is one of those things that people don't need to debate; we'll see the results on 3nm. At a minimum, it looks like Intel is going to push AMD out of TSMC's leading nodes. There's no way to size this up as good news for AMD.

>If Intel uses TSMC's fabs, TSMC makes money and uses the money to improve TSMC's process which AMD does use.

TSMC is going to make money without Intel anyway. Choking AMD may not be their intention, but it's certainly a guaranteed side effect. Intel makes so much product that they are capable of using up their own capacity, and others'. And now the GPUs are coming. Intel makes 10 times the profit AMD does per year. If AMD didn't have an x86 license, no one would utter these two companies' names in the same sentence.

wmf 2021-08-16 18:29:58 +0000 UTC [ - ]

I expect AMD to start using N3 after Apple and Intel have moved on to N2 (or maybe 20A in Intel's case) in 2024 so there's less competition for wafers.

flenserboy 2021-08-16 17:09:54 +0000 UTC [ - ]

Indeed. Something that's affordable and hits even RX 580 performance would grab the attention of many. Good enough really is good enough when supply is low and prices are high.

abledon 2021-08-16 17:13:00 +0000 UTC [ - ]

It seems AMD manufactures almost all of its 7nm parts at TSMC, but Intel has a factory coming online next year in Arizona... https://en.wikipedia.org/wiki/List_of_Intel_manufacturing_si...

I could see gov/Military investing/awarding more contracts based on these 'locally' situated plants

humanistbot 2021-08-16 17:21:46 +0000 UTC [ - ]

Nope, the wiki is wrong. According to Intel, the facility in Chandler, AZ will start being built next year, but won't be producing chips until 2024. See https://www.anandtech.com/show/16573/intels-new-strategy-20b...

pankajdoharey 2021-08-16 17:23:44 +0000 UTC [ - ]

What about design capabilities? If they had it in them, what were they doing all these years? I mean, since 2000 I can't remember a single GPU from Intel that wasn't already behind the market.

Tsiklon 2021-08-16 21:10:15 +0000 UTC [ - ]

Raja Koduri is Intel’s lead architect for their new product line; prior to this he was the lead of the Radeon Technologies Group at AMD, successfully delivering Polaris, Vega and Navi. Navi is AMD’s current GPU product architecture.

Things seem promising at this stage.


deaddodo 2021-08-16 18:25:28 +0000 UTC [ - ]

Where do you get that idea? The third-party fabs have far greater production capacity[1]. Intel isn't even in the top five.

They're a shared resource; however, if you're willing to pay the money, you could monopolize their resources and outproduce anybody.

1 - https://epsnews.com/2021/02/10/5-fabs-own-54-of-global-semic...

wtallis 2021-08-16 19:54:17 +0000 UTC [ - ]

You're looking at the wrong numbers. The wafer capacity of memory fabs, and of logic fabs that are only equipped for older nodes, isn't relevant to the GPU market. So Micron, SK hynix, Kioxia/WD and a good chunk of Samsung and TSMC capacity are irrelevant here.


pier25 2021-08-16 17:52:13 +0000 UTC [ - ]

Exactly. There are plenty of people that just want to upgrade an old GPU and anything modern would be a massive improvement.

I'm still rocking a 1070 for 1080p/60 gaming and would love to jump to 4K/60 gaming but just can't convince myself to buy a new GPU at current prices.

mey 2021-08-16 18:02:47 +0000 UTC [ - ]

I refuse to engage with the current GPU pricing insanity, so my 5900X is currently paired with a GTX 960. When Intel enters the market it will be another factor in driving pricing back down, so I might play Cyberpunk in 2022...

deadmutex 2021-08-16 19:51:10 +0000 UTC [ - ]

If you really want to play Cyberpunk on PC, and don't want to buy a new GPU.. playing it on Stadia is an option (especially if you have a GPU that can support VP9 decoding). I played it at 4K/1080p, and it looked pretty good. However, I think if you want the best graphics fidelity (i.e. 4K RayTracing), then you probably do want to just get a high end video card.

Disclosure: Work at Google, but not on Stadia.

jholman 2021-08-17 06:22:15 +0000 UTC [ - ]

It's always a quiet little pleasure when someone uses "disclosure" correctly. :)

mey 2021-08-17 00:36:34 +0000 UTC [ - ]

Spouse bought me a copy on Steam before understanding that I needed a new system to play it. Being a single-player experience, I can wait for it.

I did advise several co-workers to go that route.

leeoniya 2021-08-16 19:04:51 +0000 UTC [ - ]

I wanna get a good Alyx setup to finally try VR, but with the GPU market the way it is, it looks like my RX 480 4GB will be sticking around for another 5 years - it's more expensive now (used) than it was 4 years ago, and even then it was already 2 years old. Batshit crazy; no other way to describe it :(

chaosharmonic 2021-08-16 17:29:29 +0000 UTC [ - ]

Given that timeline and their years of existing production history with Thunderbolt, Intel could also feasibly beat both of them to shipping USB4 on a graphics card.

pankajdoharey 2021-08-16 17:47:32 +0000 UTC [ - ]

I suppose the better thing to do would be to ship an APU, besting both Nvidia on GPU and AMD on CPU? But can they?

moss2 2021-08-17 13:53:06 +0000 UTC [ - ]

I thought the problem was a chip shortage. How is Intel going to solve this? Won't they just also run out of stock instantly?


rasz 2021-08-16 18:48:00 +0000 UTC [ - ]

You would think that. GamersNexus did try Intel's finest, and it doesn't look pretty.

https://www.youtube.com/watch?v=HSseaknEv9Q We Got an Intel GPU: Intel Iris Xe DG1 Video Card Review, Benchmarks, & Architecture

https://www.youtube.com/watch?v=uW4U6n-r3_0 Intel GPU A Real Threat: Adobe Premiere, Handbrake, & Production Benchmarks on DG1 Iris Xe

It's below a GT 1030, with a lot of issues.

agloeregrets 2021-08-16 19:16:41 +0000 UTC [ - ]

DG1 isn't remotely related to Arc. For one, it's not even using the same node nor architecture.

deaddodo 2021-08-16 20:11:23 +0000 UTC [ - ]

That's not quite true. The Arc was originally known as the DG2 and is the successor to the DG1. So to say it isn't "remotely related" is a bit misleading, especially since we have very little information on the architecture.

trynumber9 2021-08-16 20:53:08 +0000 UTC [ - ]

For some comparison, that's a 30W 80 EU part using 70GB/s memory. DG2 is supposed to be a 512 EU part with over 400GB/s memory. GPUs generally scale pretty well with EU count and memory bandwidth. Plus it has a different architecture which may be even more capable per EU.

phone8675309 2021-08-16 19:15:24 +0000 UTC [ - ]

The DG1 isn’t designed for gaming, but it is better than integrated graphics.

deburo 2021-08-16 19:19:47 +0000 UTC [ - ]

Just to add to that, DG1 was comparable to integrated graphics, just in a discrete form factor. It was a tiny bit better because of higher frequency, I think. But even then it wasn't better in all cases, if I recall correctly.

pitaj 2021-08-16 18:56:49 +0000 UTC [ - ]

This won't be the same as the DG1

ineedasername 2021-08-16 23:16:54 +0000 UTC [ - ]

Yep, basically anything capable of playing new games on lower settings at 720p, and > 3yr old games at better settings, should be highly competitive in the low-end gaming market. Especially in laptops, where they might be a secondary machine for gamers with a high-end desktop.

dheera 2021-08-16 19:04:36 +0000 UTC [ - ]

> will compete with GeForce

> which performs a lot like the GDDR5 version of Nvidia's aging, low-end GeForce GTX 1030

Intel is trying to emulate what NVIDIA did a decade ago. Nobody in the NVIDIA world speaks of GeForce and GTX anymore; RTX is where it's at.

ineedasername 2021-08-16 23:12:31 +0000 UTC [ - ]

Yep, basically anything capable of playing new games at low settings, and > 3yr old games at better settings, should be highly competitive in the low-end gaming market.

NonContro 2021-08-16 19:47:07 +0000 UTC [ - ]

How long will that situation last though, with Ethereum 2.0 around the corner and the next difficulty bomb scheduled for December?

https://www.reddit.com/r/ethereum/comments/olla5w/eip_3554_o...

Intel could be launching their cards into a GPU surplus...

That's discrete GPUs though, presumably the major volumes are in laptop GPUs? Will Intel have a CPU+GPU combo product for laptops?

dathinab 2021-08-16 20:37:32 +0000 UTC [ - ]

It's not "just" a shortage of GPU's but all kinds of components.

And it's also not "just" caused by miners.

But that means if they are really unlucky they could launch into a situation where there is a surplus of good second hand graphic cards and still shortages/price hikes on the GPU components they use...

Through as far as I can tell they are more targeting OEM's (any OEM instead of a selected few), and other large customers, so it might not matter too much for them for this release (but probably from the next one after one-ward it would).

errantspark 2021-08-16 20:01:47 +0000 UTC [ - ]

> How long will that situation last though

Probably until at least 2022, because the shortage of GPUs isn't solely because of crypto. Until we generally get back on track tricking sand into thinking, we're not going to be able to saturate demand.

> Will Intel have a CPU+GPU combo product for laptops?

What? Obviously the answer is yes, how could it possibly be no? CPU+GPU combo is the only GPU related segment where Intel currently has a product.

NonContro 2021-08-17 00:08:23 +0000 UTC [ - ]

To be more specific, Intel currently allocates a fair chunk of their die area to iGPUs. Will that no longer be the case when Intel manufactures their own dedicated GPUs? It seems like a waste of silicon.

orra 2021-08-16 19:57:47 +0000 UTC [ - ]

Alas: Bitcoin.

cinntaile 2021-08-16 20:05:38 +0000 UTC [ - ]

You don't mine bitcoin with a GPU, those days are long gone.

hughrr 2021-08-16 18:22:32 +0000 UTC [ - ]

GPU stock is rising and prices falling. It’s too late now.

YetAnotherNick 2021-08-16 19:41:30 +0000 UTC [ - ]

No, they aren't. They are trading at 250% of MSRP. See this data:

https://stockx.com/nvidia-nvidia-geforce-rtx-3080-graphics-c...

RussianCow 2021-08-16 20:48:37 +0000 UTC [ - ]

Anecdotally, I've noticed prices falling on the lower end. My aging RX 580 was worth over $400 used at the beginning of the year; it now goes for ~$300. The 5700 XT was going for close to $1k used, and is more recently selling for $800-900.

With that said, I don't know if it's a sign of the shortage coming to an end; I think the release of the Ryzen 5700G with integrated graphics likely helped bridge the gap for people who wanted low-end graphics without paying the crazy markups.

Revenant-15 2021-08-17 13:49:37 +0000 UTC [ - ]

I remember RX 580s going for 170 Euros before the pandemic. I can only hope that prices reach sane levels sooner than later, but I suspect we're going to see at least another year of outlandish prices.

rejectedandsad 2021-08-16 18:32:06 +0000 UTC [ - ]

I still can’t get a 3080, and the frequency of drops seems to have decreased. Where are you seeing increased stock?


hughrr 2021-08-16 18:50:36 +0000 UTC [ - ]

Can get a 3080 tomorrow in UK no problems at all.

mhh__ 2021-08-16 18:59:12 +0000 UTC [ - ]

Can get but still very expensive.

mhh__ 2021-08-16 18:58:16 +0000 UTC [ - ]

If they come out swinging here they could have the most deserved smugness in the industry for a good while. People have been rightly criticising them but wrongly writing them off.

jeswin 2021-08-16 17:04:34 +0000 UTC [ - ]

If Intel provides as much Linux driver support as they do for their current integrated graphics lineup, we might have a new favourite among Linux users.

r-bar 2021-08-16 17:52:50 +0000 UTC [ - ]

They also seem to be the most willing to open up their GPU sharding API, GVT-g, based on their work with their existing Xe GPUs. The performance of their implementation in the first generation was a bit underwhelming, but it seems like the intention is there.

If Intel is able to put out something reasonably competitive that supports GPU sharding, it could be a game changer. It could change the direction of the ecosystem and force Nvidia and AMD to bring sharding to their consumer-tier cards. I am stoked to see where this new release takes us.

Level1Linux has a (reasonably) up-to-date overview of the state of the GPU ecosystem that does a much better job outlining the potential of this tech.

https://www.youtube.com/watch?v=IXUS1W7Ifys
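
For anyone who hasn't poked at GVT-g: the "sharding" happens through the kernel's mediated-device (mdev) interface. Below is a rough sketch of creating a vGPU slice; the PCI address and the i915-GVTg type name are just typical values I'm assuming (they vary by GPU and kernel, and this needs root), so treat it as an illustration rather than a recipe.

    # Sketch: create a GVT-g vGPU instance via the mdev sysfs interface.
    # Assumptions: integrated GPU at 0000:00:02.0, GVT-g enabled in the i915
    # driver, and a vGPU type named i915-GVTg_V5_4 available. Run as root.
    import uuid
    from pathlib import Path

    gpu = Path("/sys/bus/pci/devices/0000:00:02.0")
    vgpu_type = gpu / "mdev_supported_types" / "i915-GVTg_V5_4"

    vgpu_id = str(uuid.uuid4())
    (vgpu_type / "create").write_text(vgpu_id)   # kernel creates the mediated device

    # The slice can then be handed to a VM, e.g. with QEMU:
    #   -device vfio-pci,sysfsdev=/sys/bus/mdev/devices/<vgpu_id>
    print("created vGPU", vgpu_id)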

stormbrew 2021-08-16 17:16:13 +0000 UTC [ - ]

This is the main reason I'm excited about this. I really hope they continue the very open approach they've used so far, but even if they start going binary blob for some of it like Nvidia and (now to a lesser extent) AMD have, at least they're likely to properly implement KMS and other things, because that's what they've been doing already.

jogu 2021-08-16 17:44:46 +0000 UTC [ - ]

Came here to say this. This will be especially interesting if there's better support for GPU virtualization to allow a Windows VM to leverage the card without passing the entire card through.

modeless 2021-08-16 19:52:16 +0000 UTC [ - ]

This would be worth buying one for. It's super lame that foundational features like virtualization are used as leverage for price discrimination by Nvidia, and hopefully new competition can shake things up.

kop316 2021-08-16 17:57:29 +0000 UTC [ - ]

This was my thought too. If their Linux driver support for this is as good as for their integrated GPUs, I will be switching to Intel GPUs.

heavyset_go 2021-08-16 18:48:33 +0000 UTC [ - ]

Yep, their WiFi chips have good open source drivers on Linux, as well. It would be nice to have a GPU option that isn't AMD for open driver support on Linux.

dcdc123 2021-08-16 19:23:05 +0000 UTC [ - ]

A long time Linux graphics driver dev friend of mine was just hired by Intel.

Nexxxeh 2021-08-17 18:49:51 +0000 UTC [ - ]

With the Steam Deck coming out, running Linux in a high-profile gaming device with AMD graphics, hopefully it'll turn into a mild Linux GPU driver arms race. AMD and Valve are both working on improving Linux support for AMD hardware, GPU and CPU.

holoduke 2021-08-16 22:03:06 +0000 UTC [ - ]

Well. Every single AAA game is reflected in the GPU drivers. I bet they need to work on Windows drivers first. They will surely need to write tons of custom driver mods for hundreds of games.

byefruit 2021-08-16 16:51:49 +0000 UTC [ - ]

I really hope this breaks Nvidia's stranglehold on deep learning. Some competition would hopefully bring down prices at the compute high-end.

AMD doesn't even seem to be trying on the software side at the moment. ROCm is a mess.

pjmlp 2021-08-16 20:25:46 +0000 UTC [ - ]

You know how to break it?

With modern tooling.

Instead of forcing devs to live in the prehistoric days of C dialects and printf debugging, provide polyglot IDEs with graphical debugging tools capable of single-stepping GPU shaders, and a rich library ecosystem.

Khronos got the message too late and now no one cares.

rowanG077 2021-08-16 17:07:00 +0000 UTC [ - ]

I think this situation can only be fixed by moving up into languages that compile to vendor-specific GPU languages. Just treat CUDA, OpenCL, Vulkan compute, Metal compute (??), etc. as the assembly of graphics cards.

pjmlp 2021-08-16 20:32:36 +0000 UTC [ - ]

That is just part of the story.

CUDA wiped out OpenCL because it went polyglot as of version 3.0, while OpenCL kept insisting that everyone should write in a C dialect.

They also provide great graphical debugging tools and libraries.

Khronos waited too long to introduce SPIR, and in traditional Khronos fashion, waited for the partners to provide the tooling.

One could blame NVidia, but it isn't as if the competition has done a better job.

hobofan 2021-08-16 18:09:23 +0000 UTC [ - ]

Barely anyone is writing CUDA directly these days. Just add support in PyTorch and TensorFlow and you've covered probably 90% of the deep learning market.
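
For what it's worth, that's also why framework-level code stays portable: the ROCm builds of PyTorch expose the GPU through the same torch.cuda API as the CUDA builds. A minimal sketch (nothing here is vendor-specific; it just uses whatever backend the installed wheel was built for):

    # Backend-agnostic PyTorch: the same code runs on a CUDA build or a ROCm
    # build, since ROCm wheels surface HIP devices via the torch.cuda API.
    import torch

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    model = torch.nn.Linear(128, 10).to(device)
    x = torch.randn(32, 128, device=device)
    loss = model(x).sum()
    loss.backward()          # autograd runs on whichever backend is present
    print(device, loss.item())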

hprotagonist 2021-08-16 18:18:10 +0000 UTC [ - ]

and ONNX.

T-A 2021-08-16 17:15:30 +0000 UTC [ - ]

snicker7 2021-08-16 18:51:53 +0000 UTC [ - ]

Currently only supported by Intel.

jjcon 2021-08-16 16:58:43 +0000 UTC [ - ]

I wholeheartedly agree. PyTorch did recently release AMD support, which I was happy to see (though I have not tested it). I'm hoping there is more to come.

https://pytorch.org/blog/pytorch-for-amd-rocm-platform-now-a...
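
If you do give the ROCm wheels a try, here's a quick sanity check (a small sketch; as far as I know torch.version.hip is set on ROCm builds and None on CUDA builds, and the GPU still shows up under the torch.cuda API either way):

    # Check which backend the installed PyTorch wheel targets.
    import torch

    print("cuda runtime:", torch.version.cuda)  # e.g. "11.1", or None on ROCm builds
    print("hip runtime: ", torch.version.hip)   # e.g. "4.2",  or None on CUDA builds
    if torch.cuda.is_available():
        print("device 0:", torch.cuda.get_device_name(0))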

byefruit 2021-08-16 17:16:11 +0000 UTC [ - ]

Unfortunately that support is via ROCm, which doesn't support the last three generations (!) of AMD hardware: https://github.com/ROCm/ROCm.github.io/blob/master/hardware....

dragontamer 2021-08-16 17:21:55 +0000 UTC [ - ]

ROCm supports Vega, Vega 7nm, and CDNA just fine.

The issue is that AMD has split their compute into two categories:

* RDNA -- consumer cards. A new ISA with new compilers / everything. I don't think it's reasonable to expect AMD's compilers to work on RDNA, when such large changes have been made to the architecture. (32-wide instead of 64-wide. 1024 registers. Etc. etc.)

* CDNA -- based off of Vega's ISA. Despite being "legacy ISA", it's pretty modern in terms of capabilities. MI100 is competitive against the A100. CDNA is likely going to run the Frontier and El Capitan supercomputers.

------------

ROCm focused on CDNA. They've had compilers emit RDNA code, but it's not "official" and still buggy. But if you went for CDNA, that HIP / ROCm stuff works well enough for Oak Ridge National Labs.

Yeah, CDNA is expensive ($5k for MI50 / Radeon VII, and $9k for MI100). But that's the price of full-speed scientific-oriented double-precision floating point GPUs these days.

paulmd 2021-08-16 18:24:14 +0000 UTC [ - ]

> ROCm supports Vega, Vega 7nm, and CDNA just fine.

yeah, but that's exactly what OP said - Vega is three generations old at this point, and that is the last consumer GPU (apart from VII which is a rebranded compute card) that ROCm supports.

On the NVIDIA side, you can run at least basic tensorflow/pytorch/etc on a consumer GPU, and that option is not available on the AMD side; you have to spend $5k to get a GPU that their software actually supports.

Not only that but on the AMD side it's a completely standalone compute card - none of the supported compute cards do graphics anymore. Whereas if you buy a 3090 at least you can game on it too.

Tostino 2021-08-16 19:10:38 +0000 UTC [ - ]

I really don't think people appreciate enough that for developers to care to learn about building software for your platform, you need to make it accessible for them to run that software. That means "run on the hardware they already have". AMD really needs to push to get ROCm compiling for RDNA-based chips.

slavik81 2021-08-16 19:14:52 +0000 UTC [ - ]

There's unofficial support in the rocm-4.3.0 math-libs for gfx1030 (6800 / 6800 XT / 6900 XT). rocBLAS also includes gfx1010, gfx1011 and gfx1012 (5000 series). If you encounter any bugs in the {roc,hip}{BLAS,SPARSE,SOLVER,FFT} stack with those cards, file GitHub issues on the corresponding project.

I have not seen any problems with those cards in BLAS or SOLVER, though they don't get tested as much as the officially supported cards.

FWIW, I finally managed to buy an RX 6800 XT for my personal rig. I'll be following up on any issues found in the dense linear algebra stack on that card.

I work for AMD on ROCm, but all opinions are my own.

BadInformatics 2021-08-16 21:24:56 +0000 UTC [ - ]

I've mentioned this on other forums, but it would help to have some kind of easily visible, public tracker for this progress. Even a text file, set of GitHub issues or project board would do.

Why? Because as-is, most people still believe support for gfx1000 cards is non-existent in any ROCm library. Of course that's not the case as you've pointed out here, but without any good sign of forward progress, your average user is going to assume close to zero support. Vague comments like https://github.com/RadeonOpenCompute/ROCm/issues/1542 are better than nothing, but don't inspire that much confidence without some more detail.

FeepingCreature 2021-08-16 18:43:22 +0000 UTC [ - ]

You don't think it's reasonable to expect machine learning to work on new cards?

That's exactly the point. ML on AMD is a third-class citizen.

dragontamer 2021-08-16 18:45:16 +0000 UTC [ - ]

AMD's MI100 has those 4x4 BFloat16 and FP16 matrix multiplication instructions you want, with PyTorch and TensorFlow compiling down into them through ROCm.
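
For illustration, this is the kind of op that exercises them; a small sketch, assuming a PyTorch build and a GPU/stack with BF16 support (MI100 on ROCm, or Ampere on CUDA), where the GEMM should lower to the vendor BLAS kernels that use those matrix instructions, depending on library versions:

    # BF16 matrix multiply in PyTorch (requires BF16 support in the GPU stack).
    import torch

    if torch.cuda.is_available():   # true on both CUDA and ROCm builds
        a = torch.randn(1024, 1024, device="cuda").to(torch.bfloat16)
        b = torch.randn(1024, 1024, device="cuda").to(torch.bfloat16)
        c = a @ b                   # BF16 GEMM
        print(c.dtype, c.shape)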

Now don't get me wrong: $9000 is a lot for a development system to try out the software. NVidia's advantage is that you can test out the A100 by writing software for cheaper GeForce cards at first.

NVidia also makes it easy with the DGX computer to quickly get a big A100-based computer. AMD you gotta shop around with Dell vs Supermicro (etc. etc.) to find someone to build you that computer.

byefruit 2021-08-16 17:44:57 +0000 UTC [ - ]

That makes a lot more sense, thanks. They could do with making that a lot clearer on the project.

Still handicaps them compared to Nvidia where you can just buy anything recent and expect it to work. Suspect it also means they get virtually no open source contributions from the community because nobody can run or test it on personal hardware.

dragontamer 2021-08-16 17:49:56 +0000 UTC [ - ]

NVidia can support anything because they have a PTX translation layer between cards, and invest heavily in PTX.

The assembly language changes with each generation of cards; the PTX "pseudo-assembly" instructions get recompiled into each new generation's assembly code.

---------

AMD has no such technology. When AMD's assembly language changes (ex: from Vega to RDNA), it's a big compiler change. AMD managed to keep the ISA mostly compatible from the 7xxx GCN 1.0 series in the early 10s all the way to Vega 7nm in the late 10s... but RDNA's ISA change was pretty massive.

I think its only natural that RDNA was going to have compiler issues.

---------

AMD focused on Vulkan / DirectX support for its RDNA cards, while its compute team focused on continuing "CDNA" (which won large supercomputer contracts). So that's just how the business ended up.

blagie 2021-08-16 18:21:18 +0000 UTC [ - ]

I bought an ATI card for deep learning. I'm a big fan of open source. Less than 12 months later, ROCm dropped support. I bought an NVidia, and I'm not looking back.

This makes absolutely no sense to me, and I have a Ph.D:

"* RDNA -- consumer cards. A new ISA with new compilers / everything. I don't think its reasonable to expect AMD's compilers to work on RDNA, when such large changes have been made to the architecture. (32-wide instead of 64-wide. 1024 registers. Etc. etc.) * CDNA -- based off of Vega's ISA. Despite being "legacy ISA", its pretty modern in terms of capabilities. MI100 is competitive against the A100. CDNA is likely going to run Frontier and El Capitan supercomputers. ROCm focused on CDNA. They've had compilers emit RDNA code, but its not "official" and still buggy. But if you went for CDNA, that HIP / ROCm stuff works enough for the Oak Ridge National Labs. Yeah, CDNA is expensive ($5k for MI50 / Radeon VII, and $9k for MI100). But that's the price of full-speed scientific-oriented double-precision floating point GPUs these days.

I neither know nor care what RDNA, CDNA, A100, MI50, Radeon VII, MI100, or all the other AMD acronyms are. Yes, I could figure it out, but I want plug-and-play, stability, and backwards-compatibility. I ran into a whole different minefield with AMD. I'd need to run old ROCm, downgrade my kernel, and use a different card to drive monitors than for ROCm. It was a mess.

NVidia gave me plug-and-play. I bought a random NVidia card with the highest "compute level," and was confident everything would work. It does. I'm happy.

Intel has historically had great open source drivers, and if it gives better plug-and-play and open source, I'll buy Intel next time. I'm skeptical, though. The past few years, Intel has had a hard time tying their own shoelaces. I can't imagine this will be different.

dragontamer 2021-08-16 18:28:19 +0000 UTC [ - ]

> Yes, I could figure it out, but I want plug-and-play, stability, and backwards-compatibility

It's right there in the ROCm introduction.

https://github.com/RadeonOpenCompute/ROCm#Hardware-and-Softw...

> ROCm officially supports AMD GPUs that use following chips:

> GFX9 GPUs

> "Vega 10" chips, such as on the AMD Radeon RX Vega 64 and Radeon Instinct MI25

> "Vega 7nm" chips, such as on the Radeon Instinct MI50, Radeon Instinct MI60 or AMD Radeon VII, Radeon Pro VII

> CDNA GPUs

> MI100 chips such as on the AMD Instinct™ MI100

--------

The documentation of ROCm is pretty clear that it works on a limited range of hardware, with "unofficial" support at best on other sets of hardware.

blagie 2021-08-16 19:41:42 +0000 UTC [ - ]

Only...

(1) There are a million different ROCm pages and introductions

(2) Even that page is out-of-date, and e.g. claims unofficial support for "GFX8 GPUs: Polaris 11 chips, such as on the AMD Radeon RX 570 and Radeon Pro WX 4100," although those were randomly disabled after ROCm 3.5.1.

... if you have a Ph.D in AMD productology, you might be able to figure it out. If it's merely in computer science, math, or engineering, you're SOL.

There are now unofficial guides to downgrading to 3.5.1, only 3.5.1 doesn't work with many modern frameworks, and you land in a version incompatibility mess.

These aren't old cards either.

Half-decent engineer time is worth $350/hour, all in (benefits, overhead, etc.). Once you've spent a week futzing with AMD's mess, you're behind by the cost of ten NVidia A4000 cards which Just Work.

As a footnote, I suspect in the long term, small purchases will be worth more than the supercomputing megacontracts. GPGPU is wildly underutilized right now. That's mostly a gap of software, standards, and support. If we can get that right, every computer have many teraflops of computing power, even for stupid video chat filters and whatnot.

dragontamer 2021-08-16 19:58:18 +0000 UTC [ - ]

> Half-decent engineer time is worth $350/hour, all in (benefits, overhead, etc.). Once you've spent a week futzing with AMD's mess, you're behind by the cost of ten NVidia A4000 cards which Just Work.

It seems pretty simple to me if we're talking about compute. The MI-cards are AMD's line of compute GPUs. Buy an MI-card if you want to use ROCm with full support. That's MI25, MI50, or MI100.

> As a footnote, I suspect in the long term, small purchases will be worth more than the supercomputing megacontracts. GPGPU is wildly underutilized right now. That's mostly a gap of software, standards, and support. If we can get that right, every computer have many teraflops of computing power, even for stupid video chat filters and whatnot.

I think you're right, but the #1 use of these devices is running video games (aka: DirectX and Vulkan). Compute capabilities are quite secondary at the moment.

blagie 2021-08-17 11:11:29 +0000 UTC [ - ]

> It seems pretty simple to me if we're talking about compute. The MI-cards are AMD's line of compute GPUs. Buy an MI-card if you want to use ROCm with full support. That's MI25, MI50, or MI100.

For tasks which require that much GPU, I'm using cloud machines. My dev desktop would like a working GPU 24/7, but it doesn't need to be nearly that big.

If I had my druthers, I would have bought an NVidia 3050, since it has adequate compute, and will run under $300 once available. Of course, anything from the NVidia consumer line is impossible to buy right now, except at scalper prices.

I just did a web search. The only card from that series I can find for sale, new, was the MI100, which runs $13k. The MI50 doesn't exist, and the MI25 can only be bought used on eBay. Corporate won't do eBay. Even the MI100 would require an exception, since it's an unauthorized vendor (Amazon doesn't have it).

Combine that with poor software support, and an unknown EOL, and it's a pretty bad deal.

> I think you're right, but the #1 use of these devices is running video games (aka: DirectX and Vulkan). Compute capabilities are quite secondary at the moment.

Companies should maximize shareholder value. Right now:

- NVidia is building an insurmountable moat. I already bought an NVidia card, and our software already has CUDA dependencies. I started with ROCm. I dropped it. I'm building developer tools, and if they pick up, they'll carry a lot of people.

- It will be years before I'm interested in trying ROCm again. I was oversold, and AMD underdelivered.

- Broad adoption is limited by lack of standards and mature software.

It's fine to say compute capabilities are secondary right now, but I think that will limit AMD in the long term. And I think lack of standards is to NVidia's advantage right now, but it will hinder long-term adoption.

If I were NVidia, I'd make:

- A reference open-source CUDA implementation which makes CUDA code 100% compatible with Xe and Radeon

- License it under GPL with a CLA, so any Intel and AMD enhancements are open and flow back

- Have nominal optimizations in the open-source reference implementation, while keeping the high-performance optimizations proprietary (and only for NVidia GPUs)

This would encourage broad adoption of GPGPU, since any code I wrote would work on any customer machine, Intel, AMD, or NVidia. On the other hand, it would create an unlevel playing field for NVidia, since as the copyright holder, only NVidia could have proprietary optimizations. HPC would go to NVidia, as would markets like video editing or CAD.

wmf 2021-08-16 18:11:20 +0000 UTC [ - ]

Hopefully CDNA2 will be similar enough to RDNA2/3 that the same software stack will work with both.

dragontamer 2021-08-16 18:13:35 +0000 UTC [ - ]

I assume the opposite is going on.

Hopefully the RDNA3 software stack is good enough that AMD decides that CDNA2 (or CDNA-3) can be based off of the RDNA-instruction set.

AMD doesn't want to piss off its $100 million+ customers with a crappy software stack.

---------

BTW: AMD is reporting that parts of ROCm 4.3 are working with the 6900 XT GPU (suggesting that RDNA code generation is beginning to work). ROCm 4.0+ has had a lot of GitHub check-ins that suggest AMD is now actively working on RDNA code generation. It's not officially written into the ROCm documentation yet; it's mostly discussions in ROCm GitHub issues that are noting these changes.

It's not official support and it's literally years late. But it's clear what AMD's current strategy is.
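For anyone who wants to see what their own card reports, here's a minimal sketch, assuming a ROCm build of PyTorch is installed (ROCm builds expose the AMD GPU through the torch.cuda namespace):

    import torch

    # On ROCm builds of PyTorch, torch.version.hip is a version string and the AMD GPU
    # is reached through the torch.cuda API; on CUDA builds, torch.version.hip is None.
    print("HIP runtime:", torch.version.hip)
    print("GPU visible:", torch.cuda.is_available())
    if torch.cuda.is_available():
        print("Device:", torch.cuda.get_device_name(0))

If it prints a device name, the stack at least sees the card; whether a long training run stays up is, as discussed elsewhere in the thread, a different question.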

lvl100 2021-08-16 17:42:36 +0000 UTC [ - ]

I agree 100% and if Nvidia’s recent showing and puzzling focus on “Omniverse” is any indication, they’re operating in a fantasy world a bit.

rektide 2021-08-16 17:20:29 +0000 UTC [ - ]

ROCm seems to be tolerably decent, if you are willing to spend a couple hours, and, big if, if HIP supports all the various libraries you were relying on. CUDA has a huge support library, and ROCm has been missing not just the small-fry stuff but a lot of the core stuff in that library.

Long term, AI (& a lot of other interests) need to serve themselves. CUDA is excellently convenient, but long term I have a hard time imagining there being a worthwhile future for anything but Vulkan. There don't seem to be a lot of forays into writing good all-encompassing libraries in Vulkan yet, nor many more specialized AI/ML Vulkan libraries, so it feels like we more or less haven't started really trying.

dnautics 2021-08-16 17:22:28 +0000 UTC [ - ]

Is there any indication that ROCm has solved its stability issues? I wasn't doing the testing myself, but the reason we rejected ROCm a while back (2 years?) was that you could get segfaults hours into an ML training run, which is... frustrating, to say the least, and not easily identifiable in quickie test runs (or CI, if ML did more CI).

rektide 2021-08-16 19:51:16 +0000 UTC [ - ]

Lot of downvotes. Anyone have any opinion? Is CUDA fine forever? Is there something other than Vulkan we should also try? Do you think AMD should solve every problem CUDA solves for their customers too? What gives here?

I see a lot a lot a lot of resistance to the idea that we should start trying to align to Vulkan. Here & elsewhere. I don't get it, it makes no sense, & everyone else using GPUs is running as fast as they can towards Vulkan. Is it just too soon too early in the adoption curve, or do ya'll think there are more serious obstructions long term to building a more Vulkan centric AI/ML toolkit? It still feels inevitable to me. What we are doing now feels like a waste of time. I wish ya'll wouldn't downvote so casually, wouldn't just try to brush this viewpoint away.

BadInformatics 2021-08-16 21:50:53 +0000 UTC [ - ]

> Do you think AMD should solve every problem CUDA solves for their customers too?

They had no choice. Getting a bunch of HPC people to completely rewrite their code for a different API is a tough pill to swallow when you're trying to win supercomputer contracts. Would they have preferred to spend development resources elsewhere? Probably, they've even got their own standards and SDKs from days past.

> everyone else using GPUs is running as fast as they can towards Vulkan

I'm not qualified to comment on the entirety of it, but I can say that basically no claim in this statement is true:

1. Not everyone doing compute is using GPUs. Companies are increasingly designing and releasing their own custom hardware (TPUs, IPUs, NPUs, etc.)

2. Not everyone using GPUs cares about Vulkan. Certainly many folks doing graphics stuff don't, and DirectX is as healthy as ever. There have been bits and pieces of work around Vulkan compute for mobile ML model deployment, but it's a tiny niche and doesn't involve discrete GPUs at all.

> Is it just too soon too early in the adoption curve

Yes. Vulkan compute is still missing many of the niceties of more developed compute APIs. Tooling is one big part of that: writing shaders using GLSL is a pretty big step down from using whatever language you were using before (C++, Fortran, Python, etc).
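To make the "step down" concrete, here's roughly what the starting point looks like in a high-level framework (PyTorch picked as an assumed example); getting the same result through Vulkan means hand-writing a GLSL compute shader plus all the buffer, descriptor and pipeline plumbing around it:

    import torch

    # A few lines of framework code; the vendor compute stack picks the kernels,
    # manages device memory and handles synchronization behind the scenes.
    x = torch.randn(1_000_000, device="cuda")   # "cuda" also targets ROCm builds
    y = torch.randn(1_000_000, device="cuda")
    z = 2.0 * x + y                             # a saxpy, dispatched to tuned elementwise kernels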

> do ya'll think there are more serious obstructions long term to building a more Vulkan centric AI/ML toolkit

You could probably write a whole page about this, but TL;DR yes. It would take at least as much effort as AMD and Intel put into their respective compute stacks to get Vulkan ML anywhere near ready for prime time. You need to have inference, training, cross-device communication, headless GPU usage, reasonably wide compatibility, not garbage performance, framework integration, passable tooling and more.

Sure these are all feasible, but who has the incentive to put in the time to do it? The big 3 vendors have their supercomputer contracts already, so all they need to do is keep maintaining their 1st-party compute stacks. Interop also requires going through Khronos, which is its own political quagmire when it comes to standardization. Nvidia already managed to obstruct OpenCL into obscurity, why would they do anything different here? Downstream libraries have also poured untold millions into existing compute stacks, OR rely on the vendors to implement that functionality for them. This is before we even get into custom hardware like TPUs that don't behave like a GPU at all.

So in short, there is little inevitable about this at all. The reason people may have been frustrated by your comment is because Vulkan compute comes up all the time as some silver bullet that will save us from the walled gardens of CUDA and co (especially for ML, arguably the most complex and expensive subdomain of them all). We'd all like it to come true, but until all of the aforementioned points are addressed this will remain primarily in pipe dream territory.

rektide 2021-08-17 02:21:04 +0000 UTC [ - ]

The paradox I identify in your comments is where you start & where you end. The start is that AMD's only choice is to re-embark & re-do the years & years of hard work, to catch up.

The end is decrying how impossible & hard it is to imagine anyone ever reproducing anything like CUDA in Vulkan:

> Sure these are all feasible, but who has the incentive to put in the time to do it?

To talk to the first though: what choice do we have? Why would AMD try to compete by doing it all again as a second party? It seems like, with Nvidia so dominant, AMD and literally everyone else should realize their incentive is to compete, as a group, against the current unquestioned champion. There needs to be some common ground that the humble opposition can work from. And, from what I see, Vulkan is that ground, and nothing else is remotely competitive or interesting.

I really appreciate your challenges, thank you for writing them out. It is real hard, there are a lot of difficulties starting afresh, with a much harder-to-use toolkit than enriched, spiced-up C++ (CUDA) as a starting point. At the same time, I continue to think there will be a sea-change, it will happen enormously fast, & it will take far less real work than the prevailing pessimist's view could ever have begun to encompass. Some good strategic wins to set the stage & make some common use cases viable, good enough techniques to set a mold, and I think the participatory nature will snowball, quickly, and we'll wonder why we hadn't begun years ago.

BadInformatics 2021-08-17 05:17:00 +0000 UTC [ - ]

Saying all the underdog competitors should team up is a nice idea, but as anyone who has seen how the standards sausage is made (or, indeed, has tried something similar) will tell you, it is often more difficult than everyone going their own way. It might be unintuitive, but coordination is hard even when you're not jockeying for position with your collaborators. This is why I mentioned the silver bullet part: a surface level analysis leads one to believe collaboration is the optimal path, but that starts to show cracks real quickly once one starts actually digging into the details.

To end things on a somewhat brighter note, there will be no sea change unless people put in the time and effort to get stuff like Vulkan compute working. As-is, most ML people (somewhat rightfully) expect accelerator support to be handed to them on a silver platter. That's fine, but I'd argue by doing so we lose the right to complain about big libraries and hardware vendors doing what's best for their own interests instead of for the ecosystem as a whole.

at_a_remove 2021-08-16 17:34:58 +0000 UTC [ - ]

I find myself needing, for the first time ever, a high-end video card for some heavy video encoding, and when I look, they're all gone, apparently in a tug of war between gamers and crypto miners.

At the exact same time, I am throwing out a box of old video cards from the mid-nineties (Trident, Diamond Stealth) and from the looks of it you can list them on eBay but they don't even sell.

Now Intel is about to leap into the fray and I am imagining trying to explain all of this to the me of twenty-five years back.

topspin 2021-08-16 18:27:04 +0000 UTC [ - ]

"apparently in a tug of war between gamers and crypto miners"

That, and the oligopoly of AMD and NVidia. Their grip is so tight they dictate terms to card makers. For example: you can't build an NVidia GPU card unless you source the GDDR from NVidia. Between them, the world supply of high-end GDDR is monopolized.

Intel is going to deliver some badly needed competition. They don't even have to approach the top of the GPU high end; just deliver something that will play current games at 1080p at modest settings and they'll have an instant hit. Continuing the tradition of open source support Intel has had with (most) of their GPU technology is something else we can hope for.

noleetcode 2021-08-16 19:40:36 +0000 UTC [ - ]

I will, quite literally, take those old video cards off your hands. I have a hoarder mentality when it comes to old tech and love collecting it.

at_a_remove 2021-08-16 20:19:54 +0000 UTC [ - ]

That involves shipping, though. It wouldn't be worth it to you to have my old 14.4 Kbps modem and all of the attendant junk I have.

cwizou 2021-08-16 17:09:04 +0000 UTC [ - ]

They still aren't saying which part of that lineup they want to compete with, which is a good thing.

I still remember Pat Gelsinger telling us over and over that Larrabee would compete with the high end of the GeForce/Radeon offering back in the day, including when it was painfully obvious to everyone that it definitely would not.

https://en.wikipedia.org/wiki/Larrabee_(microarchitecture)

judge2020 2021-08-16 18:04:26 +0000 UTC [ - ]

Well there's already the DG1 which seems to compete with the low-end. https://www.youtube.com/watch?v=HSseaknEv9Q

tmccrary55 2021-08-16 16:44:47 +0000 UTC [ - ]

I'm down if it comes with open drivers or specs.

TechieKid 2021-08-16 17:06:12 +0000 UTC [ - ]

Phoronix has been covering the Linux driver development for the cards as they happen: https://www.phoronix.com/scan.php?page=search&q=DG2

the8472 2021-08-16 17:06:06 +0000 UTC [ - ]

If they support virtualization like they do on their iGPUs that would be great and possibly drive adoption by power users. But I suspect they'll use that feature for market segmentation just like AMD and Nvidia do.

dragontamer 2021-08-16 17:17:09 +0000 UTC [ - ]

https://software.intel.com/content/dam/develop/external/us/e...

The above is Intel's Gen11 architecture whitepaper, describing how Gen11 iGPUs work. I'd assume that their next-generation discrete GPUs will have a similar architecture (but no longer attached to CPU L3 cache).

I haven't really looked into Intel iGPU architecture at all. I see that the whitepaper has some oddities compared to AMD / NVidia GPUs. It's definitely "more different".

The SIMD units are apparently only 4 x 32-bit wide (compared to 32-wide NVidia / RDNA or 64-wide CDNA). But they can be reconfigured to be 8 x 16-bit wide instead (a feature not really available on NVidia; AMD can do SIMD-inside-of-SIMD and split up its registers once again, but it's a fundamentally different mechanism).

--------

Branch divergence is likely to be less of an issue with SIMD this narrow than on its competitors' wider units. Well, in theory anyway.
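A toy model of that intuition (my own simplification, not anything from Intel's whitepaper): assume each lane independently takes the rare side of a branch with probability p, and a wave that ends up with lanes on both sides has to execute both paths. Narrower waves stay coherent far more often:

    # Expected cost of a potentially divergent branch, relative to executing one path,
    # under a toy model: the wave pays for both sides unless every lane agrees.
    def expected_cost(width, p):
        coherent = p**width + (1 - p)**width   # probability all lanes took the same side
        return 2 - coherent                    # 1 path if coherent, 2 if divergent

    for width in (4, 8, 32, 64):   # Intel Gen11 SIMD4 vs. 32-wide NVidia/RDNA, 64-wide CDNA
        print(width, round(expected_cost(width, p=0.1), 3))
    # -> 4 1.344, 8 1.57, 32 1.966, 64 1.999

Real hardware has reconvergence and masking tricks that complicate this, but the direction of the effect is the point.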

mastax 2021-08-17 00:25:41 +0000 UTC [ - ]

Intel has talked about Xe-LP, which should be a better baseline for Xe-HPG: https://www.anandtech.com/show/15973/the-intel-xelp-gpu-arch...

It's a lot more like the competition, IIRC.

arcanus 2021-08-16 16:47:06 +0000 UTC [ - ]

Always seems to be two years away, like the Aurora supercomputer at Argonne.

stormbrew 2021-08-16 16:58:03 +0000 UTC [ - ]

I know March 2020 has been a very very long month but I'm pretty sure we're gonna skip a bunch of calendar dates when we get out of it.

re-actor 2021-08-16 16:48:50 +0000 UTC [ - ]

Early 2022 is just 4 months away actually

midwestemo 2021-08-16 17:27:53 +0000 UTC [ - ]

Man this year is flying, I still think it's 2020.

smcl 2021-08-16 21:10:51 +0000 UTC [ - ]

Honestly, I've been guilty of treating much of the last year as a loading screen. At times I've been hyper-focussed on doing that lovely personal development we're all supposed to do when cooped up alone at home, and at others just pissing around making cocktails and talking shite with my friends over social media.

So basically what I'm saying is - "same" :D

timbaboon 2021-08-16 17:17:08 +0000 UTC [ - ]

:O ;(

AnimalMuppet 2021-08-16 19:05:50 +0000 UTC [ - ]

Erm, last I checked, four months from now is December 2021.

dubcanada 2021-08-16 16:49:05 +0000 UTC [ - ]

Early 2022 is only like 4-8 months away?

dragontamer 2021-08-16 16:57:04 +0000 UTC [ - ]

Aurora was supposed to be delivered in 2018: https://www.nextplatform.com/2018/07/27/end-of-the-line-for-...

After it was delayed, Intel said that 2020 was when they'd be ready. Spoiler alert: they aren't: https://www.datacenterdynamics.com/en/news/doe-confirms-auro...

We're now looking at 2022 as the new "deadline", but we know that Intel has enough clout to force a new deadline as necessary. They've already slipped two deadlines, what's the risk in slipping a 3rd time?

---------

I don't like to "kick Intel while they're down", but Aurora has been a disaster for years. That being said, I'm liking a lot of their OneAPI tech on paper at least. Maybe I'll give it a shot one day. (AVX512 + GPU supported with one compiler, in a C++-like language that could serve as a competitor to CUDA? That'd be nice... but Intel NEEDS to deliver these GPUs in time. Every delay is eating away at their reputation)

Dylan16807 2021-08-16 20:37:10 +0000 UTC [ - ]

Edit: Okay I had it slightly wrong, rewritten.

Aurora was originally slated to use Phi chips, which are an unrelated architecture to these GPUs. The delays there don't say much about problems actually getting this new architecture out. It's more that they were halfway through making a supercomputer and then started over.

I could probably pin the biggest share of the blame on 10nm problems, which are irrelevant to this architecture.

As far as this architecture goes, when they announced Aurora was switching, they announced 2021. That schedule, looking four years out for a new architecture, has only had one delay of an extra 6 months.

dragontamer 2021-08-16 21:25:07 +0000 UTC [ - ]

> I could probably pin the biggest share of the blame on 10nm problems, which are irrelevant to this architecture.

I doubt that.

If Xeon Phi were a relevant platform, Intel could have easily kept it... continuing to invest in the platform and moving it to 7nm like the rest of Aurora's new design.

Instead, Intel chose to build a new platform from its iGPU architecture. So right there, Intel made a fundamental shift in the way they expected to build Aurora.

I don't know what kind of internal meetings Intel had to choose its (mostly untested) iGPU platform over its more established Xeon Phi line, but that's quite a dramatic change of heart.

------------

Don't get me wrong. I'm more inclined to believe in Intel's decision (they know more about their market than I do), but it's still a massive shift in architecture... with a huge investment into a new software ecosystem (DPC++, OpenMP, SYCL, etc. etc.), a lot of which is largely untested in practice (DPC++ is pretty new, all else considered).

--------

> As far as this architecture goes, when they announced Aurora was switching, they announced 2021. That schedule, looking four years out for a new architecture, has only had one delay of an extra 6 months.

That's fair. But the difference between Aurora-2018 vs Aurora-2021 is huge.

Dylan16807 2021-08-16 23:03:01 +0000 UTC [ - ]

> That's fair. But the difference between Aurora-2018 vs Aurora-2021 is huge.

It is, yes. But none of that difference reflects badly on this new product line. It just reflects badly on Intel in general.

Xe hasn't existed long enough to be "always 2 years away". It's had a pretty steady rollout.


RicoElectrico 2021-08-16 16:47:42 +0000 UTC [ - ]

Meanwhile, they're overloading the name of an unrelated CPU architecture, one incidentally used in older Intel Management Engines.

fefe23 2021-08-16 20:50:58 +0000 UTC [ - ]

Given that the selling point most elaborated on in the press is the AI upscaling, I'm worried the rest of their architecture may not be up to snuff.

jscipione 2021-08-16 17:44:46 +0000 UTC [ - ]

I've been hearing Intel play this tune for years, time to show us something or change the record!

mhh__ 2021-08-16 19:01:06 +0000 UTC [ - ]

They've been playing this for years because it's only really now that they can actually respond to Zen and friends. Intel's competitors had been asleep at the wheel until 2017, and getting a new chip out takes years.

jbverschoor 2021-08-16 18:37:07 +0000 UTC [ - ]

New CEO, so some press releases, but the company remains the same. I am under no illusion that this will change, and definitely not on such short notice.

They’ve neglected almost every market they were in. They’re AltaVista.

Uncle roger says bye bye !

andrewmcwatters 2021-08-16 22:42:56 +0000 UTC [ - ]

Mostly unrelated, but I'm still amazed that if you bought Intel at the height of the Dot-com bubble and held on, you still wouldn't have broken even, even ignoring inflation.

mastax 2021-08-17 00:19:42 +0000 UTC [ - ]

Including dividend reinvestment?

andrewmcwatters 2021-08-17 01:19:15 +0000 UTC [ - ]

Not including. But you make a good point, dividend reinvestment may have taken the edge off since the bubble.

pjmlp 2021-08-16 19:08:47 +0000 UTC [ - ]

I've been seeing such articles since Larrabee; better to wait and see if this time it is actually any better.

bifrost 2021-08-16 17:13:40 +0000 UTC [ - ]

I'd be excited to see if you can run ARC on Intel ARC!

GPU Accelerated HN would be very interesting :)

dleslie 2021-08-16 18:00:21 +0000 UTC [ - ]

The sub-heading is false; I had a dedicated Intel GPU in 1998 by way of the i740.

acdha 2021-08-16 18:43:27 +0000 UTC [ - ]

Was that billed as a serious gaming GPU? I don't remember the i740 as anything other than a low-budget option.

dleslie 2021-08-16 19:55:49 +0000 UTC [ - ]

It was sold as a serious gaming GPU.

Recall that this was an era where GPUs weren't yet a thing; instead there were 2D video cards and 3D accelerators that paired with them. The i740 and TNT paved the way toward GPUs; while I don't recall whether either had programmable pipelines, they both had 2D capability. For budget gamers, it wasn't a _terrible_ choice to purchase an i740 for the combined 2D/3D ability.

acdha 2021-08-16 20:06:04 +0000 UTC [ - ]

I definitely remember that era, I just don't remember that having anything other than an entry-level label. It's possible that this could have been due to the lackluster results — Wikipedia definitely supports the interpretation that the image changed in the months before it launched:

> In the lead-up to the i740's introduction, the press widely commented that it would drive all of the smaller vendors from the market. As the introduction approached, rumors of poor performance started circulating. … The i740 was released in February 1998, at $34.50 in large quantities.

However, this suggests that it was never going to be a top-end contender since it was engineered to hit a lower price point and was significantly under-specced compared to the competitors which were already on the market:

> The i740 was clocked at 66Mhz and had 2-8MB of VRAM; significantly less than its competitors which had 8-32MB of VRAM, allowing the card to be sold at a low price. The small amount of VRAM meant that it was only used as a frame buffer, hence it used the AGP interface to access the system's main memory to store textures; this was a fatal flaw that took away memory bandwidth and capacity from the CPU, reducing its performance, while also making the card slower since it had to go through the AGP interface to access the main memory which was slower than its VRAM.

dleslie 2021-08-16 20:09:11 +0000 UTC [ - ]

It was never aimed at top-end, but that doesn't mean it wasn't serious about being viable as a gaming device.

And it was, I used it for years.

smcl 2021-08-16 21:14:01 +0000 UTC [ - ]

My recollection is that the switchover to referring to them as a "GPU" wasn't integrating 2D and 3D in the same card, but the point where we offloaded MUCH more computation (hardware transform & lighting) to the graphics card itself. So we're talking specifically about when NVidia launched the GeForce 256, a couple of generations after the TNT.

detaro 2021-08-16 18:49:52 +0000 UTC [ - ]

That's how it turned out in practice, but it was supposed to be a serious competitor AFAIK.

Here's an old review: https://www.anandtech.com/show/202/7

smcl 2021-08-16 21:19:59 +0000 UTC [ - ]

It's kind of amazing to me that I never really encountered or read about the i740. I got really into PC gaming in 1997; we got internet that same year, so I read a ton and was hyper-aware of the various hardware that was released, regardless of whether I could actually own any of it (spoiler: as a ~11 year old, no I could not). How did this sneak by me?


astockwell 2021-08-16 16:52:12 +0000 UTC [ - ]

More promises tied not to something in hand but to some amazing future thing. Intel has not learned one bit.

tyingq 2021-08-16 17:05:22 +0000 UTC [ - ]

"The earliest Arc products will be released in "the first quarter of 2022"

That implies they do have running prototypes in-hand.

desktopninja 2021-08-17 01:00:36 +0000 UTC [ - ]

3DFX is joining the party soon ... Matrox, you're up next

xkeysc0re 2021-08-17 02:00:33 +0000 UTC [ - ]

You laugh but look at the current prices for a Voodoo on eBay

dkhenkin 2021-08-16 16:51:18 +0000 UTC [ - ]

But what kind of hash rates will they get?! /s

IncRnd 2021-08-16 17:52:44 +0000 UTC [ - ]

They will get 63 Dooms/Sec.

f6v 2021-08-16 17:11:34 +0000 UTC [ - ]

20 Vitaliks per Elon.

vmception 2021-08-16 16:51:35 +0000 UTC [ - ]

I'm going to add this to the Intel GPU graveyard in advance

dethswatch 2021-08-18 19:02:16 +0000 UTC [ - ]

Cries in Phi

jeffbee 2021-08-16 17:22:49 +0000 UTC [ - ]

Interesting, but the add-in-card GPU market for graphics purposes is so small, it's hard to get worked up about it. The overwhelming majority of GPU units sold are IGPs. Intel owns virtually 100% of the computer (excluding mobile) IGP market and 70% of the total GPU market. You can get almost the performance of Intel's discrete GPUs with their latest IGPs in "Tiger Lake" generation parts. Intel can afford to nibble at the edges of the discrete GPU market because it costs them almost nothing to put a product out there and to a large extent they won the war already.

selfhoster11 2021-08-16 17:48:57 +0000 UTC [ - ]

You must be missing the gamer market that's positively starving for affordable dedicated GPUs.

mirker 2021-08-16 21:07:43 +0000 UTC [ - ]

I would guess that the main point has to be hardware-accelerated features, such as ray tracing. I agree though that it seems pointless to buy a budget GPU when it’s basically a scaled-up iGPU. Perhaps it makes sense if you want a mid-range CPU without an iGPU and you can’t operate it headlessly, or if you have an old PC that needs a mild refresh.