Intel’s Arc GPUs will compete with GeForce and Radeon in early 2022
jeswin 2021-08-16 17:04:34 +0000 UTC [ - ]
r-bar 2021-08-16 17:52:50 +0000 UTC [ - ]
If Intel is able to put out something reasonably competitive that supports GPU sharding, it could be a game changer. It could change the direction of the ecosystem and force Nvidia and AMD to bring sharding to their consumer-tier cards. I am stoked to see where this new release takes us.
Level1Linux has a (reasonably) up-to-date overview of the state of the GPU ecosystem that does a much better job of outlining the potential of this tech.
stormbrew 2021-08-16 17:16:13 +0000 UTC [ - ]
jogu 2021-08-16 17:44:46 +0000 UTC [ - ]
modeless 2021-08-16 19:52:16 +0000 UTC [ - ]
kop316 2021-08-16 17:57:29 +0000 UTC [ - ]
heavyset_go 2021-08-16 18:48:33 +0000 UTC [ - ]
dcdc123 2021-08-16 19:23:05 +0000 UTC [ - ]
Nexxxeh 2021-08-17 18:49:51 +0000 UTC [ - ]
holoduke 2021-08-16 22:03:06 +0000 UTC [ - ]
byefruit 2021-08-16 16:51:49 +0000 UTC [ - ]
AMD doesn't seem to even be trying on the software side at the moment. ROCm is a mess.
pjmlp 2021-08-16 20:25:46 +0000 UTC [ - ]
With modern tooling.
Instead of forcing devs to live in the prehistoric days of C dialects and printf debugging, provide polyglot IDEs with graphical debugging tools capable of single-stepping GPU shaders, plus a rich library ecosystem.
Khronos got the message too late and now no one cares.
rowanG077 2021-08-16 17:07:00 +0000 UTC [ - ]
pjmlp 2021-08-16 20:32:36 +0000 UTC [ - ]
CUDA wiped out OpenCL because CUDA went polyglot as of version 3.0, while OpenCL kept insisting that everyone should write in a C dialect.
They also provide great graphical debugging tools and libraries.
Khronos waited too long to introduce SPIR, and in traditional Khronos fashion, waited for the partners to provide the tooling.
One could blame NVidia, but it isn't as if the competition has done a better job.
hobofan 2021-08-16 18:09:23 +0000 UTC [ - ]
T-A 2021-08-16 17:15:30 +0000 UTC [ - ]
snicker7 2021-08-16 18:51:53 +0000 UTC [ - ]
T-A 2021-08-18 23:46:31 +0000 UTC [ - ]
https://www.urz.uni-heidelberg.de/en/newsroom/oneapi-academi...
jjcon 2021-08-16 16:58:43 +0000 UTC [ - ]
https://pytorch.org/blog/pytorch-for-amd-rocm-platform-now-a...
byefruit 2021-08-16 17:16:11 +0000 UTC [ - ]
dragontamer 2021-08-16 17:21:55 +0000 UTC [ - ]
The issue is that AMD has split their compute into two categories:
* RDNA -- consumer cards. A new ISA with new compilers / everything. I don't think it's reasonable to expect AMD's compilers to work on RDNA, when such large changes have been made to the architecture. (32-wide instead of 64-wide. 1024 registers. Etc. etc.)
* CDNA -- based off of Vega's ISA. Despite being "legacy ISA", it's pretty modern in terms of capabilities. MI100 is competitive against the A100. CDNA is likely going to run the Frontier and El Capitan supercomputers.
------------
ROCm focused on CDNA. They've had compilers emit RDNA code, but it's not "official" and still buggy. But if you went for CDNA, that HIP / ROCm stuff works well enough for Oak Ridge National Labs.
Yeah, CDNA is expensive ($5k for MI50 / Radeon VII, and $9k for MI100). But that's the price of full-speed scientific-oriented double-precision floating point GPUs these days.
paulmd 2021-08-16 18:24:14 +0000 UTC [ - ]
yeah, but that's exactly what OP said - Vega is three generations old at this point, and that is the last consumer GPU (apart from VII which is a rebranded compute card) that ROCm supports.
On the NVIDIA side, you can run at least basic tensorflow/pytorch/etc on a consumer GPU; that option is not available on the AMD side, where you have to spend $5k to get a GPU that their software actually supports.
Not only that, but on the AMD side it's a completely standalone compute card - none of the supported compute cards do graphics anymore. Whereas if you buy a 3090, at least you can game on it too.
Tostino 2021-08-16 19:10:38 +0000 UTC [ - ]
slavik81 2021-08-16 19:14:52 +0000 UTC [ - ]
I have not seen any problems with those cards in BLAS or SOLVER, though they don't get tested as much as the officially supported cards.
FWIW, I finally managed to buy an RX 6800 XT for my personal rig. I'll be following up on any issues found in the dense linear algebra stack on that card.
I work for AMD on ROCm, but all opinions are my own.
BadInformatics 2021-08-16 21:24:56 +0000 UTC [ - ]
Why? Because as-is, most people still believe support for gfx1000 cards is non-existent in any ROCm library. Of course that's not the case as you've pointed out here, but without any good sign of forward progress, your average user is going to assume close to zero support. Vague comments like https://github.com/RadeonOpenCompute/ROCm/issues/1542 are better than nothing, but don't inspire that much confidence without some more detail.
FeepingCreature 2021-08-16 18:43:22 +0000 UTC [ - ]
That's exactly the point. ML on AMD is a third-class citizen.
dragontamer 2021-08-16 18:45:16 +0000 UTC [ - ]
Now don't get me wrong: $9000 is a lot for a development system to try out the software. NVidia's advantage is that you can test out the A100 by writing software for cheaper GeForce cards at first.
NVidia also makes it easy to quickly get a big A100-based machine with the DGX computer. With AMD, you gotta shop around with Dell vs Supermicro (etc. etc.) to find someone to build you that computer.
byefruit 2021-08-16 17:44:57 +0000 UTC [ - ]
Still handicaps them compared to Nvidia where you can just buy anything recent and expect it to work. Suspect it also means they get virtually no open source contributions from the community because nobody can run or test it on personal hardware.
dragontamer 2021-08-16 17:49:56 +0000 UTC [ - ]
The assembly language changes with each generation of cards. PTX is a "pseudo-assembly" that gets recompiled into each new generation's native assembly code.
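To make that concrete, here's a minimal illustrative sketch (mine, not from the thread; the kernel and build flags are just an example of the usual pattern):

    // saxpy.cu -- trivial kernel, used only to show where PTX fits in.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void saxpy(int n, float a, const float* x, float* y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;
        float *x, *y;
        cudaMallocManaged(&x, n * sizeof(float));
        cudaMallocManaged(&y, n * sizeof(float));
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

        saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, x, y);
        cudaDeviceSynchronize();
        printf("y[0] = %f\n", y[0]);  // expect 5.0

        cudaFree(x);
        cudaFree(y);
        return 0;
    }

    // Build with native machine code (SASS) for one generation plus embedded PTX:
    //   nvcc -gencode arch=compute_70,code=sm_70 \
    //        -gencode arch=compute_70,code=compute_70 saxpy.cu -o saxpy
    // On a newer GPU with no matching SASS in the binary, the driver JIT-compiles
    // the embedded PTX into that generation's native ISA at load time, which is
    // why old CUDA binaries keep working across NVidia's ISA changes.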
---------
AMD has no such technology. When AMD's assembly language changes (e.g., from Vega to RDNA), it's a big compiler change. AMD managed to keep the ISA mostly compatible from the 7xxx-series GCN 1.0 cards in the early 10s all the way to Vega 7nm in the late 10s... but RDNA's ISA change was pretty massive.
I think it's only natural that RDNA was going to have compiler issues.
---------
AMD focused on Vulkan / DirectX support for its RDNA cards, while its compute team focused on continuing "CDNA" (which won large supercomputer contracts). So that's just how the business ended up.
blagie 2021-08-16 18:21:18 +0000 UTC [ - ]
This makes absolutely no sense to me, and I have a Ph.D:
"* RDNA -- consumer cards. A new ISA with new compilers / everything. I don't think its reasonable to expect AMD's compilers to work on RDNA, when such large changes have been made to the architecture. (32-wide instead of 64-wide. 1024 registers. Etc. etc.) * CDNA -- based off of Vega's ISA. Despite being "legacy ISA", its pretty modern in terms of capabilities. MI100 is competitive against the A100. CDNA is likely going to run Frontier and El Capitan supercomputers. ROCm focused on CDNA. They've had compilers emit RDNA code, but its not "official" and still buggy. But if you went for CDNA, that HIP / ROCm stuff works enough for the Oak Ridge National Labs. Yeah, CDNA is expensive ($5k for MI50 / Radeon VII, and $9k for MI100). But that's the price of full-speed scientific-oriented double-precision floating point GPUs these days.
I neither know nor care what RDNA, CDNA, A100, MI50, Radeon VII, MI100, or all the other AMD acronyms are. Yes, I could figure it out, but I want plug-and-play, stability, and backwards-compatibility. I ran into a whole different minefield with AMD. I'd need to run old ROCm, downgrade my kernel, and use a different card to drive monitors than for ROCm. It was a mess.
NVidia gave me plug-and-play. I bought a random NVidia card with the highest "compute level," and was confident everything would work. It does. I'm happy.
Intel has historically had great open source drivers, and if it gives better plug-and-play and open source, I'll buy Intel next time. I'm skeptical, though. The past few years, Intel has had a hard time tying their own shoelaces. I can't imagine this will be different.
dragontamer 2021-08-16 18:28:19 +0000 UTC [ - ]
It's right there in the ROCm introduction.
https://github.com/RadeonOpenCompute/ROCm#Hardware-and-Softw...
> ROCm officially supports AMD GPUs that use following chips:
> GFX9 GPUs
> "Vega 10" chips, such as on the AMD Radeon RX Vega 64 and Radeon Instinct MI25
> "Vega 7nm" chips, such as on the Radeon Instinct MI50, Radeon Instinct MI60 or AMD Radeon VII, Radeon Pro VII
> CDNA GPUs
> MI100 chips such as on the AMD Instinct™ MI100
--------
The documentation of ROCm is pretty clear that it works on a limited range of hardware, with "unofficial" support at best on other sets of hardware.
blagie 2021-08-16 19:41:42 +0000 UTC [ - ]
(1) There are a million different ROCm pages and introductions
(2) Even that page is out-of-date, and e.g. claims unofficial support for "GFX8 GPUs: Polaris 11 chips, such as on the AMD Radeon RX 570 and Radeon Pro WX 4100," although those were randomly disabled after ROCm 3.5.1.
... if you have a Ph.D in AMD productology, you might be able to figure it out. If it's merely in computer science, math, or engineering, you're SOL.
There are now unofficial guides to downgrading to 3.5.1, only 3.5.1 doesn't work with many modern frameworks, and you land in a version incompatibility mess.
These aren't old cards either.
Half-decent engineer time is worth $350/hour, all in (benefits, overhead, etc.). Once you've spent a week futzing with AMD's mess, you're behind by the cost of ten NVidia A4000 cards which Just Work.
As a footnote, I suspect in the long term, small purchases will be worth more than the supercomputing megacontracts. GPGPU is wildly underutilized right now. That's mostly a gap of software, standards, and support. If we can get that right, every computer would have many teraflops of computing power, even for stupid video chat filters and whatnot.
dragontamer 2021-08-16 19:58:18 +0000 UTC [ - ]
It seems pretty simple to me if we're talking about compute. The MI-cards are AMD's line of compute GPUs. Buy an MI-card if you want to use ROCm with full support. That's MI25, MI50, or MI100.
> As a footnote, I suspect in the long term, small purchases will be worth more than the supercomputing megacontracts. GPGPU is wildly underutilized right now. That's mostly a gap of software, standards, and support. If we can get that right, every computer have many teraflops of computing power, even for stupid video chat filters and whatnot.
I think you're right, but the #1 use of these devices is running video games (aka: DirectX and Vulkan). Compute capabilities are quite secondary at the moment.
blagie 2021-08-17 11:11:29 +0000 UTC [ - ]
For tasks which require that much GPU, I'm using cloud machines. My dev desktop would like a working GPU 24/7, but it doesn't need to be nearly that big.
If I had my druthers, I would have bought an NVidia 3050, since it has adequate compute and will run under $300 once available. Of course, anything from the NVidia consumer line is impossible to buy right now, except at scalper prices.
I just did a web search. The only card from that series I can find for sale, new, was the MI100, which runs $13k. The MI50 doesn't exist, and the MI25 can only be bought used on eBay. Corporate won't do eBay. Even the MI100 would require an exception, since it's an unauthorized vendor (Amazon doesn't have it).
Combine that with poor software support, and an unknown EOL, and it's a pretty bad deal.
> I think you're right, but the #1 use of these devices is running video games (aka: DirectX and Vulkan). Compute capabilities are quite secondary at the moment.
Companies should maximize shareholder value. Right now:
- NVidia is building an insurmountable moat. I already bought an NVidia card, and our software already has CUDA dependencies. I started with ROCm. I dropped it. I'm building developer tools, and if they pick up, they'll carry a lot of people.
- It will be years before I'm interested in trying ROCm again. I was oversold, and AMD underdelivered.
- Broad adoption is limited by lack of standards and mature software.
It's fine to say compute capabilities are secondary right now, but I think that will limit AMD in the long term. And I think lack of standards is to NVidia's advantage right now, but it will hinder long-term adoption.
If I were NVidia, I'd make:
- A reference CUDA open-source implementation which makes CUDA code 100% compatible with Xe and Radeon
- License it under GPL with a CLA, so any Intel and AMD enhancements are open and flow back
- Have nominal optimizations in the open-source reference implementation, while keeping the high-performance proprietary optimizations NVidia proprietary (and only for NVidia GPUs)
This would encourage broad adoption of GPGPU, since any code I wrote would work on any customer machine, Intel, AMD, or NVidia. On the other hand, it would create an unlevel playing field for NVidia, since as the copyright holder, only NVidia could have proprietary optimizations. HPC would go to NVidia, as would markets like video editing or CAD.
wmf 2021-08-16 18:11:20 +0000 UTC [ - ]
dragontamer 2021-08-16 18:13:35 +0000 UTC [ - ]
Hopefully the RDNA3 software stack is good enough that AMD decides that CDNA2 (or CDNA3) can be based off of the RDNA instruction set.
AMD doesn't want to piss off its $100 million+ customers with a crappy software stack.
---------
BTW: AMD is reporting that parts of ROCm 4.3 are working with the 6900 XT GPU (suggesting that RDNA code generation is beginning to work). I know that ROCm 4.0+ has made a lot of GitHub check-ins that suggest that AMD is now actively working on RDNA code generation. It's not officially written into the ROCm documentation yet; it's mostly the discussions in ROCm GitHub issues that are noting these changes.
It's not official support and it's literally years late. But it's clear what AMD's current strategy is.
lvl100 2021-08-16 17:42:36 +0000 UTC [ - ]
rektide 2021-08-16 17:20:29 +0000 UTC [ - ]
Long term, AI (& a lot of other interests) need to serve themselves. CUDA is excellently convenient, but long term I have a hard time imagining there being a worthwhile future for anything but Vulkan. There don't seem to be a lot of forays into writing good all-encompassing libraries in Vulkan yet, nor many more specialized AI/ML Vulkan libraries, so it feels like we haven't really started trying.
dnautics 2021-08-16 17:22:28 +0000 UTC [ - ]
rektide 2021-08-16 19:51:16 +0000 UTC [ - ]
I see a lot a lot a lot of resistance to the idea that we should start trying to align to Vulkan. Here & elsewhere. I don't get it, it makes no sense, & everyone else using GPUs is running as fast as they can towards Vulkan. Is it just too soon, too early in the adoption curve, or do y'all think there are more serious obstructions long term to building a more Vulkan-centric AI/ML toolkit? It still feels inevitable to me. What we are doing now feels like a waste of time. I wish y'all wouldn't downvote so casually, wouldn't just try to brush this viewpoint away.
BadInformatics 2021-08-16 21:50:53 +0000 UTC [ - ]
They had no choice. Getting a bunch of HPC people to completely rewrite their code for a different API is a tough pill to swallow when you're trying to win supercomputer contracts. Would they have preferred to spend development resources elsewhere? Probably, they've even got their own standards and SDKs from days past.
> everyone else using GPU's is running fast as they can towards Vulkan
I'm not qualified to comment on the entirety of it, but I can say that basically no claim in this statement is true:
1. Not everyone doing compute is using GPUs. Companies are increasingly designing and releasing their own custom hardware (TPUs, IPUs, NPUs, etc.)
2. Not everyone using GPUs cares about Vulkan. Certainly many folks doing graphics stuff don't, and DirectX is as healthy as ever. There have been bits and pieces of work around Vulkan compute for mobile ML model deployment, but it's a tiny niche and doesn't involve discrete GPUs at all.
> Is it just too soon too early in the adoption curve
Yes. Vulkan compute is still missing many of the niceties of more developed compute APIs. Tooling is one big part of that: writing shaders using GLSL is a pretty big step down from using whatever language you were using before (C++, Fortran, Python, etc).
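To give a feel for the gap, here's a hedged sketch of what single-source compute looks like in CUDA (illustrative code, not from the thread); the Vulkan path would mean rewriting the kernel in GLSL in a separate file, compiling it to SPIR-V, and hand-wiring descriptor sets and a compute pipeline on the host side:

    // Single-source compute: the kernel is ordinary templated C++ compiled
    // together with the host code. (Illustrative sketch only.)
    #include <cstdio>
    #include <cuda_runtime.h>

    template <typename T>
    __global__ void scale(T* data, T factor, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= factor;
    }

    int main() {
        const int n = 1024;
        double* d;
        cudaMallocManaged(&d, n * sizeof(double));
        for (int i = 0; i < n; ++i) d[i] = 1.0;

        // Host and device share types and templates; there is no separate shader
        // language, SPIR-V toolchain, descriptor set, or pipeline object to manage.
        scale<<<(n + 127) / 128, 128>>>(d, 2.5, n);
        cudaDeviceSynchronize();
        printf("d[0] = %f\n", d[0]);  // expect 2.5

        cudaFree(d);
        return 0;
    }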
> do ya'll think there are more serious obstructions long term to building a more Vulkan centric AI/ML toolkit
You could probably write a whole page about this, but TL;DR yes. It would take at least as much effort as AMD and Intel put into their respective compute stacks to get Vulkan ML anywhere near ready for prime time. You need to have inference, training, cross-device communication, headless GPU usage, reasonably wide compatibility, not garbage performance, framework integration, passable tooling and more.
Sure these are all feasible, but who has the incentive to put in the time to do it? The big 3 vendors have their supercomputer contracts already, so all they need to do is keep maintaining their 1st-party compute stacks. Interop also requires going through Khronos, which is its own political quagmire when it comes to standardization. Nvidia already managed to obstruct OpenCL into obscurity, why would they do anything different here? Downstream libraries have also poured untold millions into existing compute stacks, OR rely on the vendors to implement that functionality for them. This is before we even get into custom hardware like TPUs that don't behave like a GPU at all.
So in short, there is little inevitable about this at all. The reason people may have been frustrated by your comment is because Vulkan compute comes up all the time as some silver bullet that will save us from the walled gardens of CUDA and co (especially for ML, arguably the most complex and expensive subdomain of them all). We'd all like it to come true, but until all of the aforementioned points are addressed this will remain primarily in pipe dream territory.
rektide 2021-08-17 02:21:04 +0000 UTC [ - ]
The end is decrying how impossible & hard it is to imagine anyone ever reproducing anything like CUDA in Vulkan:
> Sure these are all feasible, but who has the incentive to put in the time to do it?
To talk to the first though: what choice do we have? Why would AMD try to compete by doing it all again as a second party? It seems like, with Nvidia so dominant, AMD and literally everyone else should realize their incentive is to compete, as a group, against the current unquestioned champion. There needs to be some common ground that the humble opposition can work from. And, from what I see, Vulkan is that ground, and nothing else is remotely competitive or interesting.
I really appreciate your challenges, thank you for writing them out. It is real hard; there are a lot of difficulties starting afresh with a much harder-to-use toolkit than enriched, spiced-up C++ (CUDA) as a starting point. At the same time, I continue to think there will be a sea change, it will happen enormously fast, & it will take far less real work than the prevailing pessimist's view could ever have begun to encompass. Some good strategic wins to set the stage & make some common use cases viable, good enough techniques to set a mold, and I think the participatory nature will snowball, quickly, and we'll wonder why we hadn't begun years ago.
BadInformatics 2021-08-17 05:17:00 +0000 UTC [ - ]
To end things on a somewhat brighter note, there will be no sea change unless people put in the time and effort to get stuff like Vulkan compute working. As-is, most ML people (somewhat rightfully) expect accelerator support to be handed to them on a silver platter. That's fine, but I'd argue by doing so we lose the right to complain about big libraries and hardware vendors doing what's best for their own interests instead of for the ecosystem as a whole.
at_a_remove 2021-08-16 17:34:58 +0000 UTC [ - ]
At the exact same time, I am throwing out a box of old video cards from the mid-nineties (Trident, Diamond Stealth) and from the looks of it you can list them on eBay but they don't even sell.
Now Intel is about to leap into the fray and I am imagining trying to explain all of this to the me of twenty-five years back.
topspin 2021-08-16 18:27:04 +0000 UTC [ - ]
That, and the oligopoly of AMD and NVidia. Their grip is so tight they dictate terms to card makers. For example: you can't build an NVidia GPU card unless you source the GDDR from NVidia. Between them, the world supply of high-end GDDR is monopolized.
Intel is going to deliver some badly needed competition. They don't even have to approach the top of the GPU high end; just deliver something that will play current games at 1080p at modest settings and they'll have an instant hit. Continuing the tradition of open source support Intel has had with (most) of their GPU technology is something else we can hope for.
noleetcode 2021-08-16 19:40:36 +0000 UTC [ - ]
at_a_remove 2021-08-16 20:19:54 +0000 UTC [ - ]
cwizou 2021-08-16 17:09:04 +0000 UTC [ - ]
I still remember Pat Gelsinger telling us over and over that Larrabee would compete with the high end of the GeForce/Radeon offering back in the day, including when it was painfully obvious to everyone that it definitely would not.
judge2020 2021-08-16 18:04:26 +0000 UTC [ - ]
tmccrary55 2021-08-16 16:44:47 +0000 UTC [ - ]
TechieKid 2021-08-16 17:06:12 +0000 UTC [ - ]
the8472 2021-08-16 17:06:06 +0000 UTC [ - ]
dragontamer 2021-08-16 17:17:09 +0000 UTC [ - ]
The above is Intel's Gen11 architecture whitepaper, describing how Gen11 iGPUs work. I'd assume that their next-generation discrete GPUs will have a similar architecture (but no longer attached to CPU L3 cache).
I haven't really looked into Intel iGPU architecture at all. I see that the whitepaper has some oddities compared to AMD / NVidia GPUs. It's definitely "more different".
The SIMD units are apparently only 4x32-bit wide (compared to 32-wide NVidia / RDNA or 64-wide CDNA). But they can be reconfigured to be 8x16-bit wide instead (a feature not really available on NVidia; AMD can do SIMD-inside-of-SIMD and split up its registers once again, but it's a fundamentally different mechanism).
--------
Branch divergence is likely to be less of an issue with narrower SIMD than on its competitors' wider SIMD. Well, in theory anyway.
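To spell that out: lanes within one SIMD group that take different sides of a branch force the group to execute both paths with the losing lanes masked off, so wider groups are more likely to mix paths. A hedged sketch, in CUDA syntax only because its 32-wide warp model is the most familiar (the logic isn't vendor-specific):

    // Threads in runs of 4 take the same branch. A 4-wide SIMD unit sees
    // uniform groups and never diverges on this pattern; a 32-wide warp mixes
    // both paths in every group and has to execute both, masking lanes off.
    __global__ void clustered_branch(float* out, const float* in, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        if ((i / 4) % 2 == 0) {
            out[i] = in[i] * 2.0f;         // taken by threads 0-3, 8-11, ...
        } else {
            out[i] = sqrtf(in[i]) + 1.0f;  // taken by threads 4-7, 12-15, ...
        }
    }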
mastax 2021-08-17 00:25:41 +0000 UTC [ - ]
It's a lot more like the competition, IIRC.
arcanus 2021-08-16 16:47:06 +0000 UTC [ - ]
stormbrew 2021-08-16 16:58:03 +0000 UTC [ - ]
re-actor 2021-08-16 16:48:50 +0000 UTC [ - ]
midwestemo 2021-08-16 17:27:53 +0000 UTC [ - ]
smcl 2021-08-16 21:10:51 +0000 UTC [ - ]
So basically what I'm saying is - "same" :D
AnimalMuppet 2021-08-16 19:05:50 +0000 UTC [ - ]
dubcanada 2021-08-16 16:49:05 +0000 UTC [ - ]
dragontamer 2021-08-16 16:57:04 +0000 UTC [ - ]
After it was delayed, Intel said that 2020 was when they'd be ready. Spoiler alert: they aren't: https://www.datacenterdynamics.com/en/news/doe-confirms-auro...
We're now looking at 2022 as the new "deadline", but we know that Intel has enough clout to force a new deadline as necessary. They've already slipped two deadlines; what's the risk in slipping a third time?
---------
I don't like to "kick Intel while they're down", but Aurora has been a disaster for years. That being said, I'm liking a lot of their OneAPI tech on paper at least. Maybe I'll give it a shot one day. (AVX512 + GPU supported with one compiler, in a C++-like language that could serve as a competitor to CUDA? That'd be nice... but Intel NEEDS to deliver these GPUs in time. Every delay is eating away at their reputation)
Dylan16807 2021-08-16 20:37:10 +0000 UTC [ - ]
Aurora was originally slated to use Phi chips, which are an unrelated architecture to these GPUs. The delays there don't say much about problems actually getting this new architecture out. It's more that they were halfway through making a supercomputer and then started over.
I could probably pin the biggest share of the blame on 10nm problems, which are irrelevant to this architecture.
As far as this architecture goes, when they announced Aurora was switching, they announced 2021. That schedule, looking four years out for a new architecture, has only had one delay of an extra 6 months.
dragontamer 2021-08-16 21:25:07 +0000 UTC [ - ]
I doubt that.
If Xeon Phi were a relevant platform, Intel could have easily kept it... continuing to invest in the platform and move it to 7nm like the rest of Aurora's new design.
Instead, Intel chose to build a new platform from its iGPU architecture. So right there, Intel made a fundamental shift in the way they expected to build Aurora.
I don't know what kind of internal meetings Intel had to choose its (mostly untested) iGPU platform over its more established Xeon Phi line, but that's quite a dramatic change of heart.
------------
Don't get me wrong. I'm more inclined to believe in Intel's decision (they know more about their market than I do), but it's still a massive shift in architecture... with a huge investment into a new software ecosystem (DPC++, OpenMP, SYCL, etc. etc.), a lot of which is largely untested in practice (DPC++ is pretty new, all else considered).
--------
> As far as this architecture goes, when they announced Aurora was switching, they announced 2021. That schedule, looking four years out for a new architecture, has only had one delay of an extra 6 months.
That's fair. But the difference between Aurora-2018 vs Aurora-2021 is huge.
Dylan16807 2021-08-16 23:03:01 +0000 UTC [ - ]
It is, yes. But none of that difference reflects badly on this new product line. It just reflects badly on Intel in general.
Xe hasn't existed long enough to be "always 2 years away". It's had a pretty steady rollout.
RicoElectrico 2021-08-16 16:47:42 +0000 UTC [ - ]
fefe23 2021-08-16 20:50:58 +0000 UTC [ - ]
jscipione 2021-08-16 17:44:46 +0000 UTC [ - ]
mhh__ 2021-08-16 19:01:06 +0000 UTC [ - ]
jbverschoor 2021-08-16 18:37:07 +0000 UTC [ - ]
They’ve neglected almost every market they were in. They’re AltaVista.
Uncle Roger says bye bye!
andrewmcwatters 2021-08-16 22:42:56 +0000 UTC [ - ]
mastax 2021-08-17 00:19:42 +0000 UTC [ - ]
andrewmcwatters 2021-08-17 01:19:15 +0000 UTC [ - ]
pjmlp 2021-08-16 19:08:47 +0000 UTC [ - ]
bifrost 2021-08-16 17:13:40 +0000 UTC [ - ]
GPU Accelerated HN would be very interesting :)
dleslie 2021-08-16 18:00:21 +0000 UTC [ - ]
acdha 2021-08-16 18:43:27 +0000 UTC [ - ]
dleslie 2021-08-16 19:55:49 +0000 UTC [ - ]
Recall that this was an era where GPUs weren't yet a thing; instead there were 2D video cards and 3D accelerators that paired with them. The i740 and TNT paved the way toward GPUs; while I don't recall whether either had programmable pipelines, they both had 2D capacity. For budget gamers, it wasn't a _terrible_ choice to purchase an i740 for the combined 2D/3D ability.
acdha 2021-08-16 20:06:04 +0000 UTC [ - ]
> In the lead-up to the i740's introduction, the press widely commented that it would drive all of the smaller vendors from the market. As the introduction approached, rumors of poor performance started circulating. … The i740 was released in February 1998, at $34.50 in large quantities.
However, this suggests that it was never going to be a top-end contender since it was engineered to hit a lower price point and was significantly under-specced compared to the competitors which were already on the market:
> The i740 was clocked at 66Mhz and had 2-8MB of VRAM; significantly less than its competitors which had 8-32MB of VRAM, allowing the card to be sold at a low price. The small amount of VRAM meant that it was only used as a frame buffer, hence it used the AGP interface to access the system's main memory to store textures; this was a fatal flaw that took away memory bandwidth and capacity from the CPU, reducing its performance, while also making the card slower since it had to go through the AGP interface to access the main memory which was slower than its VRAM.
dleslie 2021-08-16 20:09:11 +0000 UTC [ - ]
And it was, I used it for years.
smcl 2021-08-16 21:14:01 +0000 UTC [ - ]
detaro 2021-08-16 18:49:52 +0000 UTC [ - ]
Here's an old review: https://www.anandtech.com/show/202/7
smcl 2021-08-16 21:19:59 +0000 UTC [ - ]
astockwell 2021-08-16 16:52:12 +0000 UTC [ - ]
tyingq 2021-08-16 17:05:22 +0000 UTC [ - ]
That implies they do have running prototypes in-hand.
desktopninja 2021-08-17 01:00:36 +0000 UTC [ - ]
xkeysc0re 2021-08-17 02:00:33 +0000 UTC [ - ]
dkhenkin 2021-08-16 16:51:18 +0000 UTC [ - ]
vmception 2021-08-16 16:51:35 +0000 UTC [ - ]
jeffbee 2021-08-16 17:22:49 +0000 UTC [ - ]
selfhoster11 2021-08-16 17:48:57 +0000 UTC [ - ]
mirker 2021-08-16 21:07:43 +0000 UTC [ - ]
Exmoor 2021-08-16 16:56:12 +0000 UTC [ - ]
015a 2021-08-16 19:02:15 +0000 UTC [ - ]
1) I hate to say "year of desktop Linux" like every year, but with the Steam Deck release later this year, and Valve's commitment to continue investing and collaborating on Proton to ensure wide-ranging game support, Linux gaming is going to grow substantially throughout 2022, if only due to the new devices added by Steam Decks.
Intel has always had fantastic Linux video driver support. If Arc is competitive with the lowest end current-gen Nvidia/AMD cards (3060?), Linux gamers will love it. And, when thinking about Steam Deck 2 in 2022-2023, Intel becomes an option.
2) The current-gen Nvidia/AMD cards are insane. They're unbelievably powerful. But here's the kicker: Steam Deck is 720p. You go out and buy a brand new Razer/Alienware/whatever gaming laptop, and the most common resolution even on the high-end models is 1080p (w/ high refresh rate). The Steam Hardware Survey puts 1080p as the most common resolution, and IT'S NOT EVEN REMOTELY CLOSE to #2 [1] (720p 8%, 1080p 67%, 1440p 8%, 4K 2%) (did you know more people use Steam on macOS than on a 4K monitor? lol)
These Nvidia/AMD cards are unprecedented overkill for most gamers. People are begging for cards that can run games at 1080p; Nvidia went straight to 4K, even showing off 8K gaming on the 3090, and now they can't even deliver any cards that run 720p/1080p. Today, we've got AMD releasing the 6600 XT, advertising it as a beast for 1080p gaming [2]. This is what people actually want: affordable and accessible cards to play games on (whether they can keep the 6600 XT in stock remains to be seen, of course). Nvidia went straight Icarus with Ampere; they shot for the sun and couldn't deliver.
3) More broadly, geopolitical pressure in east asia, and specifically taiwan, should be concerning investors in any company that relies heavily on TSMC (AMD & Apple being the two big ones). Intel may start by fabbing Arc there, but they uniquely have the capacity to bring that production to the west.
I am very, very long INTC.
[1] https://store.steampowered.com/hwsurvey/Steam-Hardware-Softw...
[2] https://www.pcmag.com/news/amd-unveils-the-radeon-rx-6600-xt...
pjmlp 2021-08-16 19:11:29 +0000 UTC [ - ]
Many forget that most studios don't bother to port their Android games to GNU/Linux, even though those games are mostly written using the NDK (so plain ISO C and C++, GL, Vulkan, OpenSL, ...), because the market just isn't there.
015a 2021-08-16 19:57:27 +0000 UTC [ - ]
Two weeks ago, the Hardware Survey reported Linux breaching 1% for the first time ever [1], for reasons not related to the Deck (in fact, it's not obvious WHY Linux has been growing; disappointment in the Win11 announcement may have caused it, but in short, it's healthy, natural, long-term growth). I would put real money up that Linux will hit 2% by the January 2022 survey, and 5% by January 2023.
Proton short-circuits the porting argument. It works fantastically for most games, with zero effort from the devs.
We're not talking about Linux being the majority. But it's definitely looking like it will see growth over the next decade.
[1] https://www.tomshardware.com/news/steam-survey-linux-1-perce...
pjmlp 2021-08-16 20:07:07 +0000 UTC [ - ]
I believed in it back in the Loki golden days; nowadays I'd rather bet on macOS, Windows, mobile OSes and game consoles.
It remains to be seen how the Steam Deck fares versus the Steam Machines of yore.
onli 2021-08-16 20:54:26 +0000 UTC [ - ]
It's way better now than it was back then. There was a long period of good ports, which combined with the Steam for Linux client made Linux gaming a real thing already. But instead of fizzling out like the last time there were ports, now Linux transitioned to "Run every game" without needing a port. Some exceptions, but they are working on it and compatibility is huge.
This will grow slowly but steadily now, and is ready to explode if Microsoft makes one bad move (like crazy Windows 11 hardware requirements, but we'll see).
The biggest danger to that development is GPU prices; the Intel GPUs can only help there. A competent 200-bucks model is desperately needed to keep the PC as a gaming platform alive. It has to run on fumes - on old hardware - now.
pjmlp 2021-08-17 04:51:48 +0000 UTC [ - ]
onli 2021-08-17 08:14:48 +0000 UTC [ - ]
No one said everyone though.
pjmlp 2021-08-17 08:48:14 +0000 UTC [ - ]
onli 2021-08-17 22:04:07 +0000 UTC [ - ]
ineedasername 2021-08-16 23:27:32 +0000 UTC [ - ]
pjmlp 2021-08-17 06:42:06 +0000 UTC [ - ]
ineedasername 2021-08-17 12:53:58 +0000 UTC [ - ]
Other platforms can increase while Linux also increases, the two aren't mutually exclusive.
I'm not saying it's going to take over the market. It's a huge long shot that it will ever really rival Windows. Even if it makes significant headway on Steam, that also doesn't necessarily translate to a corresponding change in OS market share. But the pending arrival of the Steam Deck combined with an enormous increase in game compatibility has nonetheless set the stage for significant gains for Linux with Steam.
Which part of the above observations are wishful thinking?
2021-08-16 21:38:51 +0000 UTC [ - ]
reitzensteinm 2021-08-16 21:22:28 +0000 UTC [ - ]
Smithalicious 2021-08-17 07:20:31 +0000 UTC [ - ]
On my end, and my Year of Linux Desktop started almost a decade ago, the Linux experience has not only never been better (of course) but has improved faster in recent years than ever before, and gaming is one of the fastest improving areas.
Proton is a fantastic piece of software.
pjmlp 2021-08-17 08:49:47 +0000 UTC [ - ]
Android only seems to be Linux when bragging about how Linux has won, yet pointing out that games on the Android/Linux distribution don't get made available on GNU/Linux distributions is moving goalposts.
trangus_1985 2021-08-16 22:27:55 +0000 UTC [ - ]
I agree. But it will change the status in the listings. Steam Deck and SteamOS appliances should be broken out into their own category, and I could easily see them overtaking the Linux desktop.
ZekeSulastin 2021-08-16 19:30:59 +0000 UTC [ - ]
Revenant-15 2021-08-16 21:58:02 +0000 UTC [ - ]
kyriakos 2021-08-17 05:38:46 +0000 UTC [ - ]
Revenant-15 2021-08-17 13:10:48 +0000 UTC [ - ]
Otherwise, I have a list of 100 games installed that I'm slowly playing through that Game Pass has given me access to. Now today we're getting Humankind (on PC) and in two days Twelve Minutes. It's a ridiculously good deal.
ineedasername 2021-08-16 23:30:01 +0000 UTC [ - ]
That's just wrong. There should be some sort of matter/anti-matter reaction there.
Revenant-15 2021-08-17 13:46:46 +0000 UTC [ - ]
anthk 2021-08-17 15:18:26 +0000 UTC [ - ]
And SNES/MegaDrive.
iknowstuff 2021-08-16 22:06:28 +0000 UTC [ - ]
pkulak 2021-08-16 23:06:32 +0000 UTC [ - ]
dublin 2021-08-17 01:26:54 +0000 UTC [ - ]
One of the biggest reasons I want a real next-gen ARM-based Surface Pro is that I want to put Intel in the rearview mirror forever. I didn't hate Intel until I started buying Surfaces, then I realized that everything that sucks about the Surface family is 100% Intel's fault, from beyond-buggy faulty power management (cutting advertised battery life more than in half) to buggy and broken graphics ("Intel droppings" or redraw artifacts on the screen) to "integrated" graphics performance that just simply sucks so bad it's unusable for simple 3D CAD, much less gaming.
rowanG077 2021-08-17 19:38:29 +0000 UTC [ - ]
pkulak 2021-08-18 00:16:02 +0000 UTC [ - ]
Hint: TSMC's new 5nm process is being used in a laptop CPU by exactly one company at the moment.
rowanG077 2021-08-18 08:40:52 +0000 UTC [ - ]
SCUSKU 2021-08-16 23:08:01 +0000 UTC [ - ]
ineedasername 2021-08-16 23:22:04 +0000 UTC [ - ]
abledon 2021-08-18 16:54:38 +0000 UTC [ - ]
naravara 2021-08-17 13:55:49 +0000 UTC [ - ]
Do they need dedicated GPUs at all then? My impression has been that if you just want to play at 1080p and modest settings, modern integrated GPUs can do fine for most games.
015a 2021-08-17 19:18:40 +0000 UTC [ - ]
brian_herman 2021-08-17 01:35:06 +0000 UTC [ - ]
ungamedplayer 2021-08-17 15:11:50 +0000 UTC [ - ]
wonwars 2021-08-17 02:36:19 +0000 UTC [ - ]
I've said it before and I'll say it again: the biggest problem with Linux and OSS is the fanatics and deluded dreamers.
BuckRogers 2021-08-18 01:02:44 +0000 UTC [ - ]
I'm expecting China-Taiwan-US tensions to increase, and all these outsourcing profit seeking traitors will finally eat crow. As if "just getting rich" wasn't enough for them.
My stock bets and opinions don't have to necessarily be pro-American (certainly wouldn't buy anything outright anti-American). But I love it when they align like today.
Looking forward to picking up an Intel Arc.
ayngg 2021-08-16 17:59:39 +0000 UTC [ - ]
ineedasername 2021-08-16 23:31:42 +0000 UTC [ - ]
davidjytang 2021-08-16 18:17:42 +0000 UTC [ - ]
dathinab 2021-08-16 20:29:58 +0000 UTC [ - ]
- Shortages and price hikes caused by various effects are not limited to the GPU chip itself but also hit most other parts on the card.
- In particular, it also affects the RAM they are using, which can be a big deal wrt. pricing and availability.
mkaic 2021-08-16 18:18:56 +0000 UTC [ - ]
tylerhou 2021-08-16 18:55:46 +0000 UTC [ - ]
monocasa 2021-08-16 18:29:43 +0000 UTC [ - ]
abraae 2021-08-16 18:39:11 +0000 UTC [ - ]
monocasa 2021-08-16 18:53:16 +0000 UTC [ - ]
ayngg 2021-08-16 19:56:29 +0000 UTC [ - ]
mastax 2021-08-17 00:00:38 +0000 UTC [ - ]
YetAnotherNick 2021-08-16 19:34:46 +0000 UTC [ - ]
teclordphrack2 2021-08-16 19:22:59 +0000 UTC [ - ]
voidfunc 2021-08-16 17:05:52 +0000 UTC [ - ]
Very exciting!
opencl 2021-08-16 17:10:48 +0000 UTC [ - ]
judge2020 2021-08-16 17:24:06 +0000 UTC [ - ]
https://www.pcgamer.com/nvidia-ampere-samsung-8nm-process/
Yizahi 2021-08-16 18:39:28 +0000 UTC [ - ]
judge2020 2021-08-16 19:53:19 +0000 UTC [ - ]
kllrnohj 2021-08-16 22:26:48 +0000 UTC [ - ]
But Nvidia has Ampere on both TSMC 7nm (GA100) and Samsung's 8nm (GA102). The TSMC variant has a significantly higher density at 65.6M / mm² vs. 45.1M / mm². Comparing across architectures is murky, but we also know that the TSMC 7nm 6900 XT clocks a lot higher than the Samsung 8nm RTX 3080/3090 while also drawing less power. There's of course a lot more to clock speeds & power draw in an actual product than the raw fab transistor performance, but it's still a data point.
So there's both density & performance evidence to suggest TSMC's 7nm is meaningfully better than Samsung's 8nm.
Even going off of marketing names, Samsung has a 7nm as well and they don't pretend their 8nm is just one-worse than the 7nm. The 8nm is an evolution of the 10nm node while the 7nm is itself a new node. According to Samsung's marketing flowcharts, anyway. And analysis suggests Samsung's 7nm is competitive with TSMC's 7nm.
IshKebab 2021-08-16 19:29:59 +0000 UTC [ - ]
ineedasername 2021-08-16 23:36:00 +0000 UTC [ - ]
paulmd 2021-08-16 17:38:09 +0000 UTC [ - ]
kllrnohj 2021-08-16 17:50:36 +0000 UTC [ - ]
Nvidia's pricing as a result is completely fake. Like the claimed "$330 3060" in fact starts at $400 and rapidly goes up from there, with MSRPs on 3060s as high as $560.
paulmd 2021-08-16 18:07:42 +0000 UTC [ - ]
But a 6900 XT is available for $3100 at my local store... and the 3090 is $2100. Between the two it's not hard to see why the NVIDIA cards are selling and the AMD cards are sitting on the shelves: the AMD cards are 50% more expensive for the same performance.
As for why that is - which is the point I think you wanted to address, and decided to try and impute into my comment - who knows. Prices are "sticky" (retailers don't want to mark down prices and take a loss) and AMD moves fewer cards in general. Maybe that means that prices are "stickier for longer" with AMD. Or maybe it's another thing like Vega where AMD set the MSRP so low that partners can't actually build and sell a card for a profit at competitive prices. But in general, regardless of why - the prices for AMD cards are generally higher, and when they go down the AMD cards sell out too. The inventory that is available is available because it's overpriced.
(and for both brands, the pre-tariff MSRPs are essentially a fiction at this point apart from the reference cards and will probably never be met again.)
RussianCow 2021-08-16 19:02:44 +0000 UTC [ - ]
That's just your store being dumb, then. The 6900 XT is averaging about $1,500 brand new on eBay[0] while the 3090 is going for about $2,500[1]. Even on Newegg, the cheapest in-stock 6900 XT card is $1,700[2] while the cheapest 3090 is $3,000[3]. Everything I've read suggests that the AMD cards, while generally a little slower than their Nvidia counterparts (especially when you factor in ray-tracing), give you way more bang for your buck.
> the prices for AMD cards are generally higher
This is just not true. There may be several reasons for the Nvidia cards being out of stock more often than AMD: better performance; stronger brand; lower production counts; poor perception of AMD drivers; specific games being optimized for Nvidia; or pretty much anything else. But at this point, pricing is set by supply and demand, not by arbitrary MSRPs set by Nvidia/AMD, so claiming that AMD cards are priced too high is absolutely incorrect.
[0]: https://www.ebay.com/sch/i.html?_from=R40&_nkw=6900xt&_sacat...
[1]: https://www.ebay.com/sch/i.html?_from=R40&_nkw=3090&_sacat=0...
[2]: https://www.newegg.com/p/pl?N=100007709%20601359957&Order=1
[3]: https://www.newegg.com/p/pl?N=100007709%20601357248&Order=1
kruxigt 2021-08-17 00:37:56 +0000 UTC [ - ]
If one is sold out and the other is not, that indicates this is not completely the case.
Animats 2021-08-16 17:33:16 +0000 UTC [ - ]
There's a lot of fab capacity under construction. 2-3 years out, semiconductor glut again.
BuckRogers 2021-08-16 17:55:18 +0000 UTC [ - ]
This is one of the smartest moves by Intel: make their own stuff and consume production capacity from all their competitors, which do nothing but paper designs. Nvidia and especially AMD took a risk not being in the fabrication business, and now we'll see the full repercussions. It's a good play (outsourcing) in good times, not so much when things get tight like today.
wmf 2021-08-16 18:06:49 +0000 UTC [ - ]
Probably not. AMD has had their N7/N6 orders in for years.
> They're just budging in line with their superior firepower. Intel even bought out first dibs on TSMC 3nm out from under Apple.
There's no evidence this is happening and people with TSMC experience say it's not happening.
> Nvidia and especially AMD took a risk not being in the fabrication business
Yes, and it paid off dramatically. If AMD stayed with their in-house fabs (now GloFo) they'd probably be dead on 14nm now.
BuckRogers 2021-08-16 18:12:46 +0000 UTC [ - ]
AMD on TSMC 3nm for Zen5. Will be squeezed by Intel and Apple- https://videocardz.com/newz/amd-3nm-zen5-apus-codenamed-stri...
Intel consuming a good portion of TSMC 3nm- https://www.msn.com/en-us/news/technology/intel-locks-down-a...
I see zero upside with these developments for AMD, and to a lesser degree, Nvidia, who are better diversified with Samsung and also rumored to be in talks with fabricating at Intel as well.
AnthonyMouse 2021-08-16 21:31:41 +0000 UTC [ - ]
This doesn't really work. If there is more demand, they'll build more fabs. It doesn't happen overnight -- that's why we're in a crunch right now -- but we're talking about years of lead time here.
TSMC is also not stupid. It's better for them for their customers to compete with each other instead of having to negotiate with a monopolist, so their incentive is to make sure none of them can crush the others.
> I see zero upside with these developments for AMD, and to a lesser degree, Nvidia
If Intel uses its own fabs, Intel makes money and uses the money to improve Intel's process which AMD can't use. If Intel uses TSMC's fabs, TSMC makes money and uses the money to improve TSMC's process which AMD does use.
BuckRogers 2021-08-17 06:35:42 +0000 UTC [ - ]
It depends how much money is involved. I don't think TSMC, or any business, is the master chess player you're envisioning, losing billions in order to worry about AMD's woes. Rather, they'll consider AMD's troubles as something AMD will have to sort out on its own in due time. They're on their own.
Now, AMD and TSMC do have a good relationship. But large infusions of wealth corrupt even the most stalwart companions. This is one of those things that people don't need to debate; we'll see the results on 3nm. At a minimum, it looks like Intel is going to push AMD out of TSMC's leading nodes. There's no way to size this up as good news for AMD.
>If Intel uses TSMC's fabs, TSMC makes money and uses the money to improve TSMC's process which AMD does use.
TSMC is going to make money without Intel anyway. Choking AMD may not be their intention, but it's certainly a guaranteed side effect of it. Intel makes so much product that they are capable of using up their own capacity, and others'. And now the GPUs are coming. Intel makes 10 times the profit AMD does per year. If AMD didn't have an x86 license, no one would utter these two companies' names in the same sentence.
wmf 2021-08-16 18:29:58 +0000 UTC [ - ]
flenserboy 2021-08-16 17:09:54 +0000 UTC [ - ]
abledon 2021-08-16 17:13:00 +0000 UTC [ - ]
I could see government/military investing in or awarding more contracts based on these 'locally' situated plants.
humanistbot 2021-08-16 17:21:46 +0000 UTC [ - ]
pankajdoharey 2021-08-16 17:23:44 +0000 UTC [ - ]
Tsiklon 2021-08-16 21:10:15 +0000 UTC [ - ]
Things seem promising at this stage.
2021-08-16 17:13:11 +0000 UTC [ - ]
deaddodo 2021-08-16 18:25:28 +0000 UTC [ - ]
They're a shared resource; however, if you're willing to pay the money, you could monopolize their resources and outproduce anybody.
1 - https://epsnews.com/2021/02/10/5-fabs-own-54-of-global-semic...
wtallis 2021-08-16 19:54:17 +0000 UTC [ - ]
2021-08-16 23:36:44 +0000 UTC [ - ]
pier25 2021-08-16 17:52:13 +0000 UTC [ - ]
I'm still rocking a 1070 for 1080p/60 gaming and would love to jump to 4K/60 gaming but just can't convince myself to buy a new GPU at current prices.
mey 2021-08-16 18:02:47 +0000 UTC [ - ]
deadmutex 2021-08-16 19:51:10 +0000 UTC [ - ]
Disclosure: Work at Google, but not on Stadia.
jholman 2021-08-17 06:22:15 +0000 UTC [ - ]
mey 2021-08-17 00:36:34 +0000 UTC [ - ]
I did advise several co-workers to go that route.
leeoniya 2021-08-16 19:04:51 +0000 UTC [ - ]
chaosharmonic 2021-08-16 17:29:29 +0000 UTC [ - ]
pankajdoharey 2021-08-16 17:47:32 +0000 UTC [ - ]
moss2 2021-08-17 13:53:06 +0000 UTC [ - ]
2021-08-17 18:32:09 +0000 UTC [ - ]
rasz 2021-08-16 18:48:00 +0000 UTC [ - ]
https://www.youtube.com/watch?v=HSseaknEv9Q We Got an Intel GPU: Intel Iris Xe DG1 Video Card Review, Benchmarks, & Architecture
https://www.youtube.com/watch?v=uW4U6n-r3_0 Intel GPU A Real Threat: Adobe Premiere, Handbrake, & Production Benchmarks on DG1 Iris Xe
It's below a GT 1030, with a lot of issues.
agloeregrets 2021-08-16 19:16:41 +0000 UTC [ - ]
deaddodo 2021-08-16 20:11:23 +0000 UTC [ - ]
trynumber9 2021-08-16 20:53:08 +0000 UTC [ - ]
phone8675309 2021-08-16 19:15:24 +0000 UTC [ - ]
deburo 2021-08-16 19:19:47 +0000 UTC [ - ]
pitaj 2021-08-16 18:56:49 +0000 UTC [ - ]
ineedasername 2021-08-16 23:16:54 +0000 UTC [ - ]
dheera 2021-08-16 19:04:36 +0000 UTC [ - ]
> which performs a lot like the GDDR5 version of Nvidia's aging, low-end GeForce GTX 1030
Intel is trying to emulate what NVIDIA did a decade ago. Nobody in the NVIDIA world speaks of GeForce and GTX anymore; RTX is where it's at.
ineedasername 2021-08-16 23:12:31 +0000 UTC [ - ]
NonContro 2021-08-16 19:47:07 +0000 UTC [ - ]
https://www.reddit.com/r/ethereum/comments/olla5w/eip_3554_o...
Intel could be launching their cards into a GPU surplus...
That's discrete GPUs though; presumably the major volumes are in laptop GPUs? Will Intel have a CPU+GPU combo product for laptops?
dathinab 2021-08-16 20:37:32 +0000 UTC [ - ]
And it's also not "just" caused by miners.
But that means if they are really unlucky they could launch into a situation where there is a surplus of good second-hand graphics cards and still shortages/price hikes on the GPU components they use...
Though as far as I can tell they are mainly targeting OEMs (any OEM instead of a selected few) and other large customers, so it might not matter too much for them for this release (but it probably would from the next one onward).
errantspark 2021-08-16 20:01:47 +0000 UTC [ - ]
Probably until at least 2022, because the shortage of GPUs isn't solely because of crypto. Until we generally get back on track tricking sand into thinking, we're not going to be able to saturate demand.
> Will Intel have a CPU+GPU combo product for laptops?
What? Obviously the answer is yes; how could it possibly be no? CPU+GPU combo is the only GPU-related segment where Intel currently has a product.
NonContro 2021-08-17 00:08:23 +0000 UTC [ - ]
orra 2021-08-16 19:57:47 +0000 UTC [ - ]
cinntaile 2021-08-16 20:05:38 +0000 UTC [ - ]
hughrr 2021-08-16 18:22:32 +0000 UTC [ - ]
YetAnotherNick 2021-08-16 19:41:30 +0000 UTC [ - ]
https://stockx.com/nvidia-nvidia-geforce-rtx-3080-graphics-c...
RussianCow 2021-08-16 20:48:37 +0000 UTC [ - ]
With that said, I don't know if it's a sign of the shortage coming to an end; I think the release of the Ryzen 5700G with integrated graphics likely helped bridge the gap for people who wanted low-end graphics without paying the crazy markups.
Revenant-15 2021-08-17 13:49:37 +0000 UTC [ - ]
rejectedandsad 2021-08-16 18:32:06 +0000 UTC [ - ]
2021-08-16 18:44:19 +0000 UTC [ - ]
hughrr 2021-08-16 18:50:36 +0000 UTC [ - ]
mhh__ 2021-08-16 18:59:12 +0000 UTC [ - ]
mhh__ 2021-08-16 18:58:16 +0000 UTC [ - ]