Wgpu-0.10 released: WebGPU implementation now in pure Rust

_cart 2021-08-18 21:50:00 +0000 UTC [ - ]

As the lead developer of Bevy Engine (which uses wgpu), this release excites me for a number of reasons:

* A smaller (and less abstracted) codebase means that we can more easily extend wgpu with the features we need now and in the future (e.g. XR, ray tracing, exposing raw backend APIs). The barrier to entry is so much lower.

* It shows that the wgpu team is receptive to our feedback. There was a point during our "new renderer experiments" where we were considering other "flatter" gpu abstractions for our new renderer. They immediately took this into account and kicked off this re-architecture. There were other people with similar feedback so I can't take full credit here, but the timing was perfect.

* A pure Rust stack means that our builds are even simpler. Combine that with Naga for shader reflection and compilation, and we can remove a lot of the "build quirks" in our pipeline that come from non-Rust dependencies. Windows especially suffers from this type of build weirdness, and I'm excited to no longer need to deal with that.

* The "risk" of treating wgpu as our "main gpu abstraction layer" has gone way down thanks to the last few points. As a result, we have decided to completely remove our old "abstract render layer" in favor of wgpu. This means that wgpu is no longer a "bevy_render backend". It is now bevy_render's core gpu abstraction. This makes our code smaller, simpler, and more compatible with the wider wgpu ecosystem.

* There is a work-in-progress WebGL2 backend for the new wgpu. This will hopefully remove the need for the third-party bevy_webgl2 backend (which has served us well, but has its own quirks and complexities).

timClicks 2021-08-18 22:08:58 +0000 UTC [ - ]

Wow, this feels like a win-win-win for wgpu, Bevy, and the Rust ecosystem in general.

CyberRabbi 2021-08-18 22:22:29 +0000 UTC [ - ]

> This means that wgpu is no longer a "bevy_render backend". It is now bevy_render's core gpu abstraction.

Experience has shown me that you will eventually regret this decision

_cart 2021-08-18 23:06:00 +0000 UTC [ - ]

Snarky answer first: my experience tells me that I won't regret this decision :)

Real answer: You can use this argument to justify abstracting out literally anything. Clearly we shouldn't abstract everything, and the details of each specific situation will dictate what abstractions are best (and who should own them). Based on the context that I have, Bevy's specific situation will almost certainly benefit from this decision.

Bevy needs an "abstract gpu layer". We are currently maintaining our own (more limited) "abstract gpu layer" that lives on top of wgpu. Both my own experience and user experiences have indicated that this additional layer provides a worse experience: it is harder to maintain, harder to understand (by nature of being "another layer"), it lacks features, and it adds overhead.

Wgpu _is_ a generic abstraction layer fit for an engine. If I were building an abstraction layer from scratch (with direct API backends), it would look a lot like wgpu. I understand the internals. I have a good relationship with the wgpu project lead. They respond to our needs. The second anything changes to make this situation suboptimal (they make massive API changes we don't like, they don't accept changes we need, they add dependencies we don't like, etc.), I will happily fork the code and maintain it. I know the internals well enough to know that I couldn't do better and that I could maintain it if push comes to shove.

The old Bevy abstraction layer adds nothing. In both situations we are free to extend our gpu abstraction with new features / backends. The only difference is the overall complexity of the system, which will be massively reduced by removing the intermediate layer.

CyberRabbi 2021-08-18 23:23:40 +0000 UTC [ - ]

A situation will inevitably arise where you’ll be unable to support a new platform in a reasonable amount of time, because implementing support for that platform yourself in wgpu would be a significant undertaking and/or distraction. Progress from the community on supporting that platform in wgpu will lag, because you might be the only stakeholder interested in that platform, and/or because modifying wgpu to support it in a non-hacky way may require significant internal restructuring, or would slow down or increase the complexity of the other main platforms. This is a hypothetical situation, but it’s likely to happen eventually.

Maintaining a fork of wgpu simply for your own project will also likely be more effort than necessary, since wgpu is a more general API than Bevy requires.

The core issue is that Bevy uses a subset of wgpu, so your own focused and limited abstraction layer will almost always be easier to implement and maintain. Some call this “the rule of least power”: https://www.w3.org/2001/tag/doc/leastPower.html

The meta-core issue is that you and the wgpu team aren’t 100% aligned on your goals. Maybe you’re aligned on 80% but the 20% will eventually come back to haunt you.

_cart 2021-08-18 23:52:18 +0000 UTC [ - ]

Adding new "backends" to an abstraction always includes the risk of not being compatible with the current abstraction and requiring re-architectures. This would be true with our own abstractions as well. I agree that two separate projects often have different goals, but wgpu's goals are an (almost) complete subset of our goals: a cross-platform modern gpu layer that cleanly abstracts Vulkan/Metal/DX12 and best-effort abstracts older APIs; a Rust-friendly API surface with RAII; limited "lowest common denominator" defaults that run everywhere, with opt-in support for advanced features and for lifting those limits. The biggest divergence is their increased need for safety features to make wgpu a suitable host for WebGPU APIs in browsers, but this is something that still benefits us, because it might ultimately allow Bevy apps to "host" less trusted shader code.

We've been building out the new renderer and we have already used a huge percentage of wgpu's API surface. It will be close to 100% by the time we launch. This proves to me that we do need almost all of the features they provide. Bevy's renderer is modular, and we need to expose a generic (and safe) gpu API to empower users to build new render features. Wgpu's entire purpose is to be that API. I know enough of the details here (because I built my own API, thoroughly reviewed the wgpu code, and did the same for alternatives in the ecosystem) to feel comfortable betting on it. I'm even more comfortable having reviewed Bevy-user-provided wgpu PRs that add major new features to wgpu (XR). If you have specific concerns about specific features, I'm happy to discuss this further.

You can link to as many "rules" as you want, but solving a problem in the real world requires a careful balance of many variables and concerns. Rules like "the rule of least power" should be a guiding principle, not something to be followed at all costs.

nextaccountic 2021-08-19 08:11:24 +0000 UTC [ - ]

Since another commenter had concerns about a PS4/PS5/other-console wgpu backend being proprietary due to SDK restrictions (and, consequently, a Bevy PS4/PS5/other-console port being proprietary), I will ask: does this mean that Bevy for consoles will cost money (apart from the console SDK cost)? Will Bevy for consoles be source-available, as in, developed exactly like current Bevy but under a non-open-source license?

Or actually: is it feasible to license console-specific Bevy code as MIT/Apache and have the only proprietary bits be the console SDK? (This means having Bevy, an open source project, call a console SDK in the open - is that allowed?)

Those are my main concerns regarding Bevy.

gfxgirl 2021-08-19 07:22:15 +0000 UTC [ - ]

I think wgpu is amazing. But to take the previous commenter seriously, I'd think about PS5, Switch/Switch 2, etc. as places where someone will have to write a wgpu implementation (non-open-source, since those SDKs don't allow it) if you ever decide to ship on those platforms.

Jweb_Guru 2021-08-19 16:10:38 +0000 UTC [ - ]

wgpu's recent license change was specifically done to allow these implementations to exist for consoles.

Jasper_ 2021-08-18 23:36:49 +0000 UTC [ - ]

As someone who's developed and maintained platform backends for all sorts of obscure platforms (actually, it's still my day job!), I don't think wgpu backends are going to be difficult to develop for any of the remaining platform APIs still alive.

It's a pretty well-thought-out API that mirrors most of the modern rendering abstractions seen in game engines.

CyberRabbi 2021-08-18 23:46:59 +0000 UTC [ - ]

> platform APIs still alive.

The APIs that currently exist are not the cause of the potential issue I am raising.

epage 2021-08-19 00:17:24 +0000 UTC [ - ]

And without knowing what those future APIs are like, you can't design a future-proof backend API to handle them. You'll design it for past problems, which will be different from future ones.

CyberRabbi 2021-08-19 01:24:16 +0000 UTC [ - ]

What you can do is design a backend for your project that is less general and tailored to your specific problems, so as to increase the likelihood that it is easier to implement on new platforms that may arise. This is why I mentioned the “rule of least power”.

For example, using drawTriangle(p1, p2, p3) instead of drawPolys(TYPE_TRIANGLE, point_list, n_points). The former is unequivocally easier to implement; the latter requires more complexity.
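
Concretely, the two shapes might look like this in Rust (a minimal sketch; all of these names and types are hypothetical, following the example above):

```rust
// Hypothetical API shapes for the comparison above.
#[derive(Clone, Copy)]
struct Point { x: f32, y: f32, z: f32 }

enum PrimitiveType {
    Triangle,
    // A general API tends to grow variants (lines, strips, fans, ...),
    // and every backend has to support all of them.
}

// The narrow API: one fixed code path, trivial to port.
fn draw_triangle(p1: Point, p2: Point, p3: Point) {
    draw_polys(PrimitiveType::Triangle, &[p1, p2, p3]);
}

// The general API: every backend must dispatch on the primitive type
// and validate an arbitrary-length point list.
fn draw_polys(ty: PrimitiveType, points: &[Point]) {
    match ty {
        PrimitiveType::Triangle => assert!(points.len() % 3 == 0),
    }
    // ...issue the actual draw call here...
}

fn main() {
    let p = Point { x: 0.0, y: 0.0, z: 0.0 };
    draw_triangle(p, p, p);
}
```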

dralley 2021-08-19 02:46:57 +0000 UTC [ - ]

And it has been explained to you repeatedly that wgpu is already more or less "least power" - that Bevy already uses most of the API surface and will soon be using nearly all of it.

CyberRabbi 2021-08-19 05:31:28 +0000 UTC [ - ]

Wgpu is the equivalent of “Turing complete” for a graphics API. I think you’re not fully grokking the principle of least power if you consider that a “least power” graphics abstraction.

wtetzner 2021-08-19 12:10:34 +0000 UTC [ - ]

The thing about graphics in general is that typically you want to use whatever the underlying hardware gives you to get the best performance. This can conflict with the rule of least power, but so what?

You can’t optimize for everything, so you decide what tradeoffs make the most sense.

CyberRabbi 2021-08-19 12:44:40 +0000 UTC [ - ]

I think you, and likely most people reading this discussion, are missing the point. Wgpu is fine, and even great, as a general-purpose API, but it’s inappropriate as an application-specific abstraction for a graphics backend. It should absolutely be used to implement such an abstraction, though.

Compare this to the Rust compiler. Rust uses MIR as its intermediate compiler IR for Rust-specific optimizations, because it retains Rust-specific semantics, before compiling down to LLVM bitcode. If it used LLVM bitcode as its native IR, it would be difficult to implement Rust-specific optimization passes. In this analogy, an application-specific graphics API is MIR and wgpu is LLVM.

Jasper_ 2021-08-19 04:50:46 +0000 UTC [ - ]

I know you don't really want to hedge too much on your example, but do note that in your attempt to simplify, you accidentally turned an API which can, in theory, draw many triangles into one that can only draw one triangle at a time. What if I want to parallelize the work of drawing triangles?

Trying to aggressively simplify without understanding the full design space or surface area is not really a great idea.

In the worst case, if you never use, for instance, timestamp queries in your engine, you can half-ass a backend and just no-op the implementation. Lots of game engines do that kind of thing. I've seen so many game engines where half the graphics API implementation was no-op stubs, because we never needed the functionality on that platform.
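
That stubbing pattern might look like this (a hypothetical sketch; the trait and backends are invented for illustration):

```rust
// Hypothetical engine-side backend trait, invented for illustration.
trait GpuBackend {
    fn write_timestamp(&mut self, query_index: u32);
}

struct DesktopBackend;
struct MinimalConsoleBackend;

impl GpuBackend for DesktopBackend {
    fn write_timestamp(&mut self, query_index: u32) {
        // A real backend would record a GPU timestamp query here.
        println!("recorded timestamp query {}", query_index);
    }
}

impl GpuBackend for MinimalConsoleBackend {
    // No-op stub: this platform never needs timestamp queries, so the
    // backend satisfies the interface without implementing the feature.
    fn write_timestamp(&mut self, _query_index: u32) {}
}

fn main() {
    let mut desktop = DesktopBackend;
    let mut console = MinimalConsoleBackend;
    desktop.write_timestamp(0);
    console.write_timestamp(0);
}
```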

CyberRabbi 2021-08-19 05:34:45 +0000 UTC [ - ]

> What if I want to parallelize the work of drawing triangles?

What if your specific application cannot provide inherently parallel workloads and has no need for an abstraction that can accommodate parallel rendering? In that case porting your app to a new platform requires implementing functionality your application does not need.

danShumway 2021-08-19 06:32:05 +0000 UTC [ - ]

> What if your specific application cannot provide inherently parallel workloads and has no need for an abstraction that can accommodate parallel rendering?

Okay, but... what if it does?

It's one thing to talk about "this could be simpler if you don't need more general functionality." But that's also just an assumption that the functionality actually isn't needed. The odds that you have to support a platform/application that both can't handle parallelization and would be substantially held back by the option even existing don't seem higher than the odds that you'll run into a platform that requires parallelization for decent performance.

It feels kind of glib to just state with such certainty that a cross-platform game engine is never going to need to draw arbitrary polygons. That doesn't seem to me like a safe assumption at all.

I agree with GP here:

> Trying to aggressively simplify without understanding the full design space or surface area is not really a great idea.

In many cases, needing to no-op some functionality for one or two platforms may end up being a lot better than a situation where you need to hack a bunch of functionality on top of an API that fundamentally is not designed to support it. It's a little bit annoying for simpler platforms, but simpler platforms are probably not your biggest fear when thinking about support. The first time you need to do something other than linearly draw triangles, for any platform you want to support at all, even just one of them, the API you propose suddenly becomes more complicated and harder to maintain than a single `drawPolys` method would be.

This is not saying that abstraction or narrowing design space should never happen. It's just saying, understand what the design space is before you decide that you're never going to need to support something. I expect that the Bevy core team has spent a decent amount of time thinking about what kinds of GPU operations they're likely to need for both current and future platforms.

CyberRabbi 2021-08-19 12:48:24 +0000 UTC [ - ]

> It feels kind of glib to just state with such certainty that a cross-platform game engine is never going to need to draw arbitrary polygons.

It’s an example.

Jweb_Guru 2021-08-19 00:25:13 +0000 UTC [ - ]

There are only like 21 resource types in the whole WebGPU API. Which of these do you think are going to go away?

* Adapter, Device, Queue, Surface

* Buffer, Texture, TextureView, Sampler, QuerySet, BindGroup

* ShaderModule, BindGroupLayout, PipelineLayout, RenderPipeline, ComputePipeline

* CommandEncoder, RenderPassEncoder, ComputePassEncoder, RenderBundleEncoder, RenderBundle, CommandBuffer

Adapter, Device, and Surface are abstractions that would be needed no matter what the GPU architecture. What else is there that's even fishy... QuerySet, maybe? The various kinds of encoders are already abstractions on some platforms, and I guess there are cases where there might be better resource binding abstractions, but I'm not sure what could cause this set of supported features to change on the hardware, other than completely dropping the rasterization pipeline (in which case you can just emulate everything with compute shaders and buffers anyway). And the relationships between these structures are pretty well-specified; I doubt that will change either.

So at that point, at least if you want to stay cross platform, you're talking about things like the specific implementation of buffer mapping or tracing or whatever, which is the sort of thing it's relatively easy to add as an extension (or refactor to make more configurable).
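
For context, here is roughly how the first few of those types fit together in wgpu (a sketch only; the exact field names and `Default` impls vary a bit between wgpu versions, so check the docs for yours):

```rust
// Instance -> Adapter -> Device/Queue, as in the type list above.
async fn init() -> (wgpu::Device, wgpu::Queue) {
    // Instance: the entry point; selects which native backends to consider.
    let instance = wgpu::Instance::new(wgpu::Backends::all());

    // Adapter: a physical GPU exposed through one of those backends.
    let adapter = instance
        .request_adapter(&wgpu::RequestAdapterOptions::default())
        .await
        .expect("no suitable GPU adapter found");

    // Device: the logical device that creates Buffers, Textures, and
    // pipelines. Queue: where CommandBuffers get submitted.
    adapter
        .request_device(&wgpu::DeviceDescriptor::default(), None)
        .await
        .expect("device request failed")
}
```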

kvark 2021-08-18 23:36:27 +0000 UTC [ - ]

> The core issue is that Bevy uses a subset of wgpu, so your own focused and limited abstraction layer will almost always be easier to implement and maintain

This isn't obviously true. WebGPU API surface is fairly small (unlike Vulkan), and Bevy may easily use most of it.

Generally speaking, depending on another library always involves some amount of trust. Going the path of zero trust is also possible, but is much less effective.

CyberRabbi 2021-08-18 23:41:41 +0000 UTC [ - ]

> This isn't obviously true. WebGPU API surface is fairly small (unlike Vulkan), and Bevy may easily use most of it.

Small is not the issue. Generality is.

Jweb_Guru 2021-08-18 23:53:06 +0000 UTC [ - ]

Keep in mind that wgpu is intended to be something you can write a game engine in and achieve at least comparable performance to going through something like Vulkan directly. The WebGPU API is intended to be optimizable enough that even low-level applications don't need to reach for their own solutions, and the WebGPU team have worked very hard to find universal and efficient abstractions over the underlying drivers. It also provides unsafe opt-out functionality for cases where the goal of safety clashes too much with the performance needs of some applications (such as using precompiled shaders).

Given all this, what do you think is missing that makes it "too general" for use as the basis of a game engine? Many of the features you're probably thinking of are a nonstarter if you want to be cross-platform, and being efficient cross-platform is a key motivating reason to use wgpu in the first place.

CyberRabbi 2021-08-19 00:02:14 +0000 UTC [ - ]

> Given all this, what do you think is missing that makes it "too general" for use as the basis of a game engine?

My concern isn’t that wgpu’s level of generality makes it unsuitable for implementing a game engine. It’s of course perfectly suitable for implementing a game engine. The concern is around the pitfalls of not architecting your game engine in terms of your own limited graphics abstraction (which may backend to wgpu).
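
Sketched in code, that indirection might look something like this (every name here is hypothetical; wgpu would sit behind the trait as one backend):

```rust
// Hypothetical engine-owned abstraction: only the operations the engine
// actually needs, with wgpu as one implementation behind it.
trait EngineGpu {
    type VertexBuffer;

    fn create_vertex_buffer(&self, contents: &[u8]) -> Self::VertexBuffer;
    fn draw_triangles(&mut self, buffer: &Self::VertexBuffer, vertex_count: u32);
}

// Backed by wgpu today...
struct WgpuGpu {
    // device: wgpu::Device, queue: wgpu::Queue, ...
}

// ...but a new platform only has to cover this narrow surface,
// not all of wgpu/WebGPU.
struct ProprietaryConsoleGpu {
    // handles from a console SDK
}

fn main() {}
```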

nindalf 2021-08-19 06:52:33 +0000 UTC [ - ]

You’ve made like 15 comments on this thread with this idea. Cart is intent on sticking to his approach even after listening to your point of view.

I feel like we could just put this conversation on ice and check back in after 6-12 months. Bevy release notes always reach the top of HN. If they go back to creating an abstraction over wgpu, you can say “I told you so” then. And similarly, if Bevy is able to use this approach to provide support for Android, iOS, and web (in addition to existing support for Windows, Linux, and macOS), then you need to own up to that.

CyberRabbi 2021-08-19 12:52:13 +0000 UTC [ - ]

> You’ve made like 15 comments on this thread with this idea.

Yet other people besides Cart in my thread still fail to properly understand the idea, and instead think I am suggesting not using wgpu at all.

> you can say “I told you so” then

My aim isn’t to say “I told you so”; it’s to make sure people accurately understand the point I am making. Cart seems to understand it, so I have made no further comments to him.

jokethrowaway 2021-08-19 10:24:30 +0000 UTC [ - ]

That depends.

I would agree if it were a dependency on a third-party closed-source solution, but wgpu is a thriving and well-maintained OSS project.

Nothing prevents you from forking it and treating it like your own code in a catastrophic scenario.

CyberRabbi 2021-08-19 13:01:18 +0000 UTC [ - ]

> Nothing prevents you from forking it and treating it like your own code in a catastrophic scenario.

It’s much easier for deployment and distribution purposes to add a level of indirection in your codebase than to maintain a fork of a large project. Maintaining a fork is a full-time job in itself, especially for active projects, where you’ll need to constantly merge new changes and keep up with the evolution of the project's internals via the mailing list or other means.

efnx 2021-08-18 22:17:53 +0000 UTC [ - ]

This is great. I switched my engine's renderer from OpenGL to wgpu, and I cannot overstate how much better the error reporting is. Instead of "working incorrectly" or simply not working and providing a vague error code (like OpenGL), wgpu tells me exactly what I did wrong, which is a lifesaver.

Also - types. Thanks wgpu team!
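
For anyone curious what that looks like in code: wgpu can also surface those descriptive validation errors programmatically via error scopes. A minimal sketch (the deliberately invalid buffer descriptor is just an example):

```rust
// Push an error scope, do something invalid, then pop the scope to get
// a descriptive validation error instead of a bare error code.
async fn demo(device: &wgpu::Device) {
    device.push_error_scope(wgpu::ErrorFilter::Validation);

    // Deliberate mistake: a buffer with no usage flags is invalid.
    let _buffer = device.create_buffer(&wgpu::BufferDescriptor {
        label: Some("bad buffer"),
        size: 16,
        usage: wgpu::BufferUsages::empty(),
        mapped_at_creation: false,
    });

    if let Some(error) = device.pop_error_scope().await {
        // Prints a human-readable explanation of what went wrong.
        eprintln!("validation error: {}", error);
    }
}
```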

porphyra 2021-08-18 22:40:45 +0000 UTC [ - ]

Ah yes, OpenGL's error reporting is notoriously bad. If you forget to call disable on a vertex attribute you might just get a random error 1282 (invalid operation) somewhere down the line in a totally irrelevant part of the code, lol.

skocznymroczny 2021-08-19 09:48:41 +0000 UTC [ - ]

It's much better when you use a debug context, although as usual with OpenGL the details vary between graphics card drivers. I had very good debug messages with Nvidia.

ComputerGuru 2021-08-19 00:32:52 +0000 UTC [ - ]

Does wgpu finally support OpenGL?

kvark 2021-08-19 00:36:20 +0000 UTC [ - ]

Yeah, we support OpenGL ES 3.0, with a few major caveats.

ComputerGuru 2021-08-19 00:39:10 +0000 UTC [ - ]

That’s awesome. Wgpu didn't work on a single Linux machine I tried it on (various hardware/driver combos), back when the OpenGL backend wasn’t available and later when it was first previewed. That was the only reason I used a different abstraction.

adamnemecek 2021-08-19 04:04:17 +0000 UTC [ - ]

What abstraction did you use?

ComputerGuru 2021-08-19 16:35:41 +0000 UTC [ - ]

gfx/gfx_hal, but that was before the complete pivot disguised as a series of updates that left anyone using it for its original purpose SOL.

rhn_mk1 2021-08-19 07:49:12 +0000 UTC [ - ]

Is 3.0 a hard requirement, or can 2.0 also work?

kvark 2021-08-19 12:50:58 +0000 UTC [ - ]

We have to draw the line somewhere. ES 3.0 is already complicated, adding a number of "downlevel" features and limits. ES 2.0 would just be more work that we can't take right now.

skocznymroczny 2021-08-19 09:45:40 +0000 UTC [ - ]

Good news. I switched from OpenGL to WebGPU for my homemade game engine and I am very happy with it. I don't use Rust though, but D; the only time I see Rust code is in the stack traces when I get an error.

Personally, I think there is a lot of potential for WebGPU outside the web/Rust ecosystems. WebGPU strikes a great balance between the awkwardness of OpenGL (global state, suboptimal resource binding patterns) and the verbosity and expertise required by Vulkan (barriers, synchronization, and everything else are needed even for a basic triangle, and it only gets harder from there).

Animats 2021-08-18 23:56:06 +0000 UTC [ - ]

I've been using the previous version for months, underneath Rend3, which handles some of the queuing and threading issues for wgpu. Not for a web client; for complex 3D content on Linux. It's working well. With the new version, I should be able to cross-compile Rust to Windows as well as Linux, and generate a Windows executable without having to use any Microsoft products.

mikevm 2021-08-19 08:39:33 +0000 UTC [ - ]

What's your current opinion on Rust vis-à-vis Go? I remember that you had a few complaints about Rust... I wonder if that has changed? :)

pjmlp 2021-08-19 06:25:03 +0000 UTC [ - ]

The API that Vulkan should have been.

Looking forward to seeing it gain adoption outside of the browsers, which are decades away from offering an API one can rely on without blacklisting.

tenaciousDaniel 2021-08-18 23:05:56 +0000 UTC [ - ]

So I'm familiar with WebGPU, but I'm not really sure why there would be a native (non-web) implementation of it. What are the practical uses of this, as opposed to just using something like Vulkan?

kvark 2021-08-18 23:38:44 +0000 UTC [ - ]

See prior discussion: https://news.ycombinator.com/item?id=23079200

It includes a link to the relevant FOSDEM talk ;)

enos_feedler 2021-08-18 23:11:58 +0000 UTC [ - ]

Cross platform. You could build a rust app with a WebGPU target that runs across web and native.

tenaciousDaniel 2021-08-19 00:17:57 +0000 UTC [ - ]

Ah nice, I see. Any particular reason for Rust being used? Just curious, because I would (naively) expect to see similar projects in Go, C++, etc., but I really only see Rust.

kvark 2021-08-19 00:38:29 +0000 UTC [ - ]

Rust is great for this!

Google has an implementation of WebGPU called Dawn, which is quite impressive. It's written in C++.

jms55 2021-08-19 00:38:02 +0000 UTC [ - ]

This implementation is developed by Mozilla for Firefox (at least, last I checked). Google has an implementation along similar lines in C++ called Dawn, and I'm pretty sure WebKit plans to use Dawn. So you see them in Rust/C++, but no one has written an implementation in Go AFAIK.

dralley 2021-08-19 00:46:45 +0000 UTC [ - ]

A Go implementation would only be usable from Go, whereas Rust and C++ can export a C API that many different languages can use.

Plus, using system libraries (DirectX, Vulkan, Metal, OpenGL) from Go would be a pain, and might suffer worse FFI penalties than just using a higher-level library written in Rust/C++ to begin with.

kvark 2021-08-19 00:40:58 +0000 UTC [ - ]

For all we know, Safari on Windows(!) is going to use Dawn. But anything on Apple platforms isn't quite clear. They may get their own implementation.

kangz 2021-08-19 11:40:39 +0000 UTC [ - ]

I think you meant WebKit on Windows (used for browsers other than Safari). Once Dawn is integrated into WebKit for one OS, it should be easy to extend WebKit to use it on other OSes, so maybe it will be a quick way to have WebGPU implemented in Safari?

kvark 2021-08-19 12:53:08 +0000 UTC [ - ]

Thanks for correcting! For a moment I started wondering if I exposed any confidential info here, phew!

frutiger 2021-08-19 00:59:24 +0000 UTC [ - ]

The Windows version of Safari was discontinued in 2012.

Jweb_Guru 2021-08-19 00:35:50 +0000 UTC [ - ]

It's cross-platform, safe, (reasonably) standardized, and fast (in roughly that order of priority). Even if you only care about the last three, there are not all that many competitors outside of maybe Metal, which is not the API most people are thinking of when they say they don't care about working cross-platform :P Vulkan emulation exists for some older machines, but it's generally slow and janky due to the impedance mismatch of trying to build a very low-level API on top of a high-level one.

user-the-name 2021-08-18 23:44:55 +0000 UTC [ - ]

An API that is more usable than Vulkan, a better fit for modern hardware and faster than OpenGL, and that runs on top of already existing and well-supported platform APIs.

wrnr 2021-08-19 00:17:17 +0000 UTC [ - ]

Then Rust code using WebGPU can compile to wasm. WebGPU does a bit of extra work to make it safe to run code on the GPU from different domains. Also, Vulkan is not supported on most platforms.

astlouis44 2021-08-19 00:26:47 +0000 UTC [ - ]

WebGPU is going to be huge for Unity/Unreal browser games, and instrumental in architecting the open metaverse via the web.

jaas 2021-08-18 22:57:07 +0000 UTC [ - ]

Can/will this be used in Firefox at some point?

kvark 2021-08-18 23:15:22 +0000 UTC [ - ]

Yes, in fact in two different ways.

1. Firefox uses wgpu to implement WebGPU.

2. Applications built on wgpu can be compiled to wasm for the web and run in Firefox Nightly.

This is hampered by the fact that the WebGPU API is still not stable. Hopefully, not for long!

Shadonototro 2021-08-18 23:34:20 +0000 UTC [ - ]

I first thought wgpu was WebGPU. I googled, and it is not:

https://www.w3.org/TR/webgpu/

Naming is important; you want to avoid confusion as much as possible.

user-the-name 2021-08-18 23:45:28 +0000 UTC [ - ]

wgpu is an implementation of WebGPU.

math 2021-08-18 23:45:06 +0000 UTC [ - ]

It is the WebGPU API (targeting both the web and native).

encryptluks2 2021-08-18 21:41:53 +0000 UTC [ - ]

> gfx-rs community’s goal is to make graphics programming in Rust easy, fast, and reliable.

Yet one of their main projects involves reliance on the web for graphics programming. This is the opposite of their stated goal in my opinion.

_cart 2021-08-18 21:56:34 +0000 UTC [ - ]

Yup, wgpu has a WebGPU backend, but it also has first-class native backends for Vulkan, Metal, DX12, and OpenGL ES (with upcoming support for DX11 and WebGL). It is designed to be a cross-platform graphics API. It isn't "web based", just "web compatible". And it has super solid "feature detection" that enables you to opt in to "native only" features like push constants.
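
A sketch of what that feature detection looks like in wgpu, using push constants as the example (field names may differ slightly across wgpu versions):

```rust
// Opt in to a native-only feature (push constants) only when the
// adapter actually reports support, falling back gracefully otherwise.
async fn request_device(
    adapter: &wgpu::Adapter,
) -> Result<(wgpu::Device, wgpu::Queue), wgpu::RequestDeviceError> {
    let mut features = wgpu::Features::empty();
    if adapter.features().contains(wgpu::Features::PUSH_CONSTANTS) {
        // Native-only: unavailable when running against WebGPU in a browser.
        features |= wgpu::Features::PUSH_CONSTANTS;
    }
    adapter
        .request_device(
            &wgpu::DeviceDescriptor {
                label: None,
                features,
                limits: wgpu::Limits::default(),
            },
            None, // no API trace
        )
        .await
}
```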

encryptluks2 2021-08-19 16:26:24 +0000 UTC [ - ]

So then why is this being advertised as a WebGPU implementation if that isn't the focus? In that case the title is misleading, and it should instead just be "Graphics implementation now in pure Rust".

zamadatix 2021-08-18 21:50:45 +0000 UTC [ - ]

You may want to take a look at what exactly WebGPU is and the wgpu repository, particularly the examples subfolder, before drawing conclusions based on the name.

nextaccountic 2021-08-19 08:21:12 +0000 UTC [ - ]

Wgpu-rs doesn't rely on the web. The API is just a simplified Vulkan. The API can be implemented for the web (and Firefox uses wgpu-rs), but it is also adequate for native applications. See https://news.ycombinator.com/item?id=23079200 (which was posted elsewhere in this thread) for more details.

pitaj 2021-08-18 21:46:59 +0000 UTC [ - ]

What are you talking about?

devwastaken 2021-08-18 22:30:52 +0000 UTC [ - ]

Does WebGPU protect against DoS via shaders, crashes, etc.? Games like VRChat, where a user can just make shaders in Unity and have them run on everyone else's computer, could really benefit from such a thing. There's an increase in games that are marketed around letting users create content that is displayed for other users in-game.

gfxgirl 2021-08-19 07:28:24 +0000 UTC [ - ]

That's arguably not wgpu's responsibility; it's usually the OS's. Windows will reset the GPU if a workload doesn't return. Chrome tries to catch it before the OS does, IIRC.

In any case, other than resetting the GPU there is no way to stop DoS via shaders. I can easily write a three-line shader that has a legit use at 1x1 pixels but will DoS your machine at 2000x2000 pixels. It's basically the halting problem. The only known solutions are (a) reset the GPU or (b) design preemptable GPUs.
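
For illustration, a tiny fragment shader along those lines in WGSL (current spec syntax; the iteration count is arbitrary). At 1x1 it's one fragment's worth of work; at 2000x2000 the same loop runs four million times:

```wgsl
// Harmless on a 1x1 render target, but at 2000x2000 this loop body
// executes ~4,000,000 x 1,000,000 times; the GPU hangs until reset.
@fragment
fn fs_main() -> @location(0) vec4<f32> {
    var acc: f32 = 0.0;
    for (var i: u32 = 0u; i < 1000000u; i = i + 1u) {
        acc = acc + sin(f32(i));
    }
    return vec4<f32>(acc, 0.0, 0.0, 1.0);
}
```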

nextaccountic 2021-08-19 08:15:13 +0000 UTC [ - ]

The commenter asked about DoS, but what about the fact that GPU memory isn't protected? What if two different shaders run at the same time on the GPU; will one be able to read the other's private memory?

There are GPUs that support an IOMMU, but they cost a fortune. (Or is an IOMMU a separate feature from just having virtual memory for GPU memory?)

Running user-supplied shaders seems like a security nightmare.

garaetjjte 2021-08-19 11:28:17 +0000 UTC [ - ]

All modern GPUs have MMUs that prevent shaders from accessing arbitrary memory. In the rare cases where one is absent (like the VC4 on the Raspberry Pi), the driver needs to statically analyze shaders and ensure that all memory accesses are clamped.

kangz 2021-08-19 11:46:22 +0000 UTC [ - ]

Other comments explain why that's challenging/impossible in the general case, because GPUs don't have preemption as good as CPUs do, and because of the halting problem. However, for the VRChat use case you could imagine disallowing unbounded loops, so you can estimate the number of instructions and put a reasonable cap on them. This would take care of the DoS, and the WebGPU runtime would take care of the rest of the security (OOB accesses, uninitialized data, etc.).

kvark 2021-08-18 23:09:22 +0000 UTC [ - ]

If there is too much GPU work thrown at a WebGPU implementation, the best it could do is terminate the logical device on timeout.