Wgpu-0.10 released: WebGPU implementation now in pure Rust
efnx 2021-08-18 22:17:53 +0000 UTC [ - ]
Also - types. Thanks wgpu team!
porphyra 2021-08-18 22:40:45 +0000 UTC [ - ]
skocznymroczny 2021-08-19 09:48:41 +0000 UTC [ - ]
ComputerGuru 2021-08-19 00:32:52 +0000 UTC [ - ]
kvark 2021-08-19 00:36:20 +0000 UTC [ - ]
ComputerGuru 2021-08-19 00:39:10 +0000 UTC [ - ]
adamnemecek 2021-08-19 04:04:17 +0000 UTC [ - ]
ComputerGuru 2021-08-19 16:35:41 +0000 UTC [ - ]
rhn_mk1 2021-08-19 07:49:12 +0000 UTC [ - ]
kvark 2021-08-19 12:50:58 +0000 UTC [ - ]
skocznymroczny 2021-08-19 09:45:40 +0000 UTC [ - ]
Personally, I think there is a lot of potential for WebGPU outside of the web/Rust ecosystems. WebGPU strikes a great balance between the awkwardness of OpenGL (global state, suboptimal resource-binding patterns) and the verbosity and expertise required by Vulkan (barriers, synchronization, and everything else are needed even for a basic triangle, and it only gets harder from there).
Animats 2021-08-18 23:56:06 +0000 UTC [ - ]
mikevm 2021-08-19 08:39:33 +0000 UTC [ - ]
pjmlp 2021-08-19 06:25:03 +0000 UTC [ - ]
Looking forward to seeing it gain adoption outside of the browsers, which are decades away from being an API one can rely on, and free of their blacklisting.
tenaciousDaniel 2021-08-18 23:05:56 +0000 UTC [ - ]
kvark 2021-08-18 23:38:44 +0000 UTC [ - ]
Includes a link to the relevant FOSDEM talk ;)
enos_feedler 2021-08-18 23:11:58 +0000 UTC [ - ]
tenaciousDaniel 2021-08-19 00:17:57 +0000 UTC [ - ]
kvark 2021-08-19 00:38:29 +0000 UTC [ - ]
Google has an implementation of WebGPU called Dawn, which is quite impressive. It's written in C++.
jms55 2021-08-19 00:38:02 +0000 UTC [ - ]
dralley 2021-08-19 00:46:45 +0000 UTC [ - ]
Plus, using system libraries (DirectX, Vulkan, Metal, OpenGL) from Go would be a pain and would possibly suffer worse FFI penalties than just using a higher-level library written in Rust/C++ to begin with.
kvark 2021-08-19 00:40:58 +0000 UTC [ - ]
kangz 2021-08-19 11:40:39 +0000 UTC [ - ]
kvark 2021-08-19 12:53:08 +0000 UTC [ - ]
frutiger 2021-08-19 00:59:24 +0000 UTC [ - ]
Jweb_Guru 2021-08-19 00:35:50 +0000 UTC [ - ]
user-the-name 2021-08-18 23:44:55 +0000 UTC [ - ]
wrnr 2021-08-19 00:17:17 +0000 UTC [ - ]
astlouis44 2021-08-19 00:26:47 +0000 UTC [ - ]
jaas 2021-08-18 22:57:07 +0000 UTC [ - ]
kvark 2021-08-18 23:15:22 +0000 UTC [ - ]
1. Firefox uses wgpu to implement WebGPU. 2. Applications built on wgpu can be compiled to wasm for the web and run in Firefox Nightly.
This is hampered by the fact that the WebGPU API is still not stable. Hopefully, not for long!
Shadonototro 2021-08-18 23:34:20 +0000 UTC [ - ]
Naming is important, you want to avoid confusion as much as possible
encryptluks2 2021-08-18 21:41:53 +0000 UTC [ - ]
Yet one of their main projects involves reliance on the web for graphics programming. This is the opposite of their stated goal in my opinion.
_cart 2021-08-18 21:56:34 +0000 UTC [ - ]
encryptluks2 2021-08-19 16:26:24 +0000 UTC [ - ]
zamadatix 2021-08-18 21:50:45 +0000 UTC [ - ]
nextaccountic 2021-08-19 08:21:12 +0000 UTC [ - ]
devwastaken 2021-08-18 22:30:52 +0000 UTC [ - ]
gfxgirl 2021-08-19 07:28:24 +0000 UTC [ - ]
In any case, other than resetting the GPU there is no way to stop a DoS via shaders. I can easily write a 3-line shader that has a legit use at 1x1 pixels but will DoS your machine at 2000x2000 pixels. It's basically the halting problem. The only known solutions are (a) reset the GPU or (b) design preemptable GPUs.
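A back-of-the-envelope model makes the scaling concrete (a sketch; the per-pixel loop count is an arbitrary stand-in for the body of such a shader): a fragment shader body runs once per covered pixel, so the identical shader does 4 million times more work at 2000x2000 than at 1x1.

```rust
// Sketch: why per-pixel shader cost explodes with resolution.
// `iters_per_pixel` is a hypothetical loop count standing in for the
// body of a "3 line" shader; total work scales with width * height.
fn shader_cost(width: u64, height: u64, iters_per_pixel: u64) -> u64 {
    width * height * iters_per_pixel
}

// Ratio of total work between two render-target sizes for the same shader.
fn cost_ratio(w1: u64, h1: u64, w2: u64, h2: u64, iters: u64) -> u64 {
    shader_cost(w2, h2, iters) / shader_cost(w1, h1, iters)
}
```

Since the driver cannot predict the cost of an arbitrary shader before running it (the halting-problem point above), the only recourse once it is dispatched at a large size is a timeout-and-reset.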
nextaccountic 2021-08-19 08:15:13 +0000 UTC [ - ]
There are GPUs that support an IOMMU, but they cost a fortune. (Or is the IOMMU a separate feature from just having virtual memory for GPU memory?)
Running user-supplied shaders seems like a security nightmare.
garaetjjte 2021-08-19 11:28:17 +0000 UTC [ - ]
kangz 2021-08-19 11:46:22 +0000 UTC [ - ]
kvark 2021-08-18 23:09:22 +0000 UTC [ - ]
_cart 2021-08-18 21:50:00 +0000 UTC [ - ]
* A smaller (and less abstracted) codebase means that we can more easily extend wgpu with the features we need now and in the future (e.g. XR, ray tracing, exposing raw backend APIs). The barrier to entry is so much lower.
* It shows that the wgpu team is receptive to our feedback. There was a point during our "new renderer experiments" where we were considering other "flatter" gpu abstractions for our new renderer. They immediately took this into account and kicked off this re-architecture. There were other people with similar feedback so I can't take full credit here, but the timing was perfect.
* A pure rust stack means that our builds are even simpler. Combine that with Naga for shader reflection and compilation and we can remove a lot of the "build quirks" in our pipeline that come from non-rust dependencies. Windows especially suffers from this type of build weirdness and I'm excited to not need to deal with that anymore.
* The "risk" of treating wgpu as our "main gpu abstraction layer" has gone way down thanks to the last few points. As a result, we have decided to completely remove our old "abstract render layer" in favor of wgpu. This means that wgpu is no longer a "bevy_render backend". It is now bevy_render's core gpu abstraction. This makes our code smaller, simpler, and more compatible with the wider wgpu ecosystem.
* There is a work-in-progress WebGL2 backend for the new wgpu. This will hopefully ultimately remove the need for the third party bevy_webgl2 backend (which has served us well, but it has its own quirks and complexities).
timClicks 2021-08-18 22:08:58 +0000 UTC [ - ]
CyberRabbi 2021-08-18 22:22:29 +0000 UTC [ - ]
Experience has shown me that you will eventually regret this decision
_cart 2021-08-18 23:06:00 +0000 UTC [ - ]
Real answer: You can use this argument to justify abstracting out literally anything. Clearly we shouldn't abstract everything, and the details of each specific situation will dictate what abstractions are best (and who should own them). Based on the context that I have, Bevy's specific situation will almost certainly benefit from this decision.

Bevy needs an "abstract gpu layer". We are currently maintaining our own (more limited) "abstract gpu layer" that lives on top of wgpu. Both my own experience and user experiences have indicated that this additional layer provides a worse experience: it is harder to maintain, harder to understand (by nature of being "another layer"), it lacks features, and it adds overhead. Wgpu _is_ a generic abstraction layer fit for an engine. If I was building an abstract layer from scratch (with direct api backends), it would look a lot like wgpu.

I understand the internals. I have a good relationship with the wgpu project lead. They respond to our needs. The second anything changes to make this situation suboptimal (they make massive api changes we don't like, they don't accept changes we need, they add dependencies we don't like, etc.) I will happily fork the code and maintain it. I know the internals well enough to know that I couldn't do better and that I could maintain it if push comes to shove.

The old Bevy abstraction layer adds nothing. In both situations we are free to extend our gpu abstraction with new features / backends. The only difference is the overall complexity of the system, which will be massively reduced by removing the intermediate layer.
CyberRabbi 2021-08-18 23:23:40 +0000 UTC [ - ]
Maintaining a fork of wgpu simply for your own project will also likely be more effort than necessary, since wgpu is a more general API than Bevy requires.
The core issue is that bevy uses a subset of wgpu so your own focused and limited abstraction layer will almost always be easier to implement and maintain. Some call this “the rule of least power” https://www.w3.org/2001/tag/doc/leastPower.html
The meta-core issue is that you and the wgpu team aren’t 100% aligned on your goals. Maybe you’re aligned on 80% but the 20% will eventually come back to haunt you.
_cart 2021-08-18 23:52:18 +0000 UTC [ - ]
We've been building out the new renderer and we have already used a huge percentage of wgpu's api surface. It will be close to 100% by the time we launch. This proves to me that we do need almost all of the features they provide. Bevy's renderer is modular and we need to expose a generic (and safe) gpu api to empower users to build new render features. Wgpu's entire purpose is to be that API. I know enough of the details here (because I built my own api, thoroughly reviewed the wgpu code, and did the same for alternatives in the ecosystem) to feel comfortable betting on it. I'm even more comfortable having reviewed bevy-user-provided wgpu prs that add major new features to wgpu (XR). If you have specific concerns about specific features, I'm happy to discuss this further.
You can link to as many "rules" as you want, but solving a problem in the real world requires a careful balance of many variables and concerns. Rules like "the rule of least power" should be a guiding principle, not something to be followed at all costs.
nextaccountic 2021-08-19 08:11:24 +0000 UTC [ - ]
Or actually: is it feasible to license console-specific Bevy code as MIT/Apache and have the only proprietary bits be the console SDK? (This means having Bevy, an open source project, call a console SDK in the open - is that allowed?)
For me those are my main concerns regarding Bevy.
gfxgirl 2021-08-19 07:22:15 +0000 UTC [ - ]
Jweb_Guru 2021-08-19 16:10:38 +0000 UTC [ - ]
Jasper_ 2021-08-18 23:36:49 +0000 UTC [ - ]
It's a pretty well-thought-out API that mirrors most of the modern rendering abstractions seen in game engines.
CyberRabbi 2021-08-18 23:46:59 +0000 UTC [ - ]
The APIs that currently exist are not the cause of the potential issue I am raising.
epage 2021-08-19 00:17:24 +0000 UTC [ - ]
CyberRabbi 2021-08-19 01:24:16 +0000 UTC [ - ]
For example, using drawTriangle(p1, p2, p3) instead of drawPolys(TYPE_TRIANGLE, point_list, n_points). The former is unequivocally easier to implement; the latter requires more complexity.
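The asymmetry can be sketched in Rust (hypothetical names and return values, not from any real graphics API): the narrow call is trivial to implement as a thin wrapper over the general one, which is the "rule of least power" tradeoff in miniature.

```rust
// Hypothetical, minimal illustration of a narrow API versus a general one.
#[derive(Clone, Copy)]
struct Point {
    x: f32,
    y: f32,
}

enum PrimitiveType {
    Triangle, // a real API would also have quads, fans, strips, ...
}

// General API: more power, but more surface to implement per backend.
// Returns the number of primitives drawn (a stand-in for real GPU work).
fn draw_polys(ty: PrimitiveType, points: &[Point]) -> usize {
    match ty {
        PrimitiveType::Triangle => points.len() / 3,
    }
}

// Narrow API: exactly what this caller needs. Here it is a trivial
// wrapper, but on a new platform it could be reimplemented directly
// without carrying the general machinery along.
fn draw_triangle(p1: Point, p2: Point, p3: Point) -> usize {
    draw_polys(PrimitiveType::Triangle, &[p1, p2, p3])
}
```

The cost of the general form only pays off if some platform or feature actually exercises it, which is exactly the point of contention in this thread.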
dralley 2021-08-19 02:46:57 +0000 UTC [ - ]
CyberRabbi 2021-08-19 05:31:28 +0000 UTC [ - ]
wtetzner 2021-08-19 12:10:34 +0000 UTC [ - ]
You can’t optimize for everything, so you decide what tradeoffs make the most sense.
CyberRabbi 2021-08-19 12:44:40 +0000 UTC [ - ]
Compare this to the Rust compiler. Rust uses MIR as its intermediate compiler IR for Rust-specific optimizations, because it retains Rust-specific semantics, before compiling down to LLVM IR. If it used LLVM IR as its native IR, it would be difficult to implement Rust-specific optimization passes. In this case an application-specific graphics API is analogous to MIR and wgpu is analogous to LLVM.
Jasper_ 2021-08-19 04:50:46 +0000 UTC [ - ]
Trying to aggressively simplify without understanding the full design space or surface area is not really a great idea.
In the worst case, if you never use, for instance, Timestamp Queries in your engine, you can half-ass a backend and just nop the implementation. Lots of game engines do that kind of thing. I've seen so many game engines where half the graphics API implementations were no-op stubs because we never needed the functionality on that platform.
CyberRabbi 2021-08-19 05:34:45 +0000 UTC [ - ]
What if your specific application cannot provide inherently parallel workloads and has no need for an abstraction that can accommodate parallel rendering? In that case porting your app to a new platform requires implementing functionality your application does not need.
danShumway 2021-08-19 06:32:05 +0000 UTC [ - ]
Okay, but... what if it does?
It's one thing to talk about "this could be simpler if you don't need more general functionality." But that's also just kind of an assumption that the functionality actually isn't needed. The odds that you have to support a platform/application that both can't handle parallelization and that will be substantially held back by the option even just existing -- I'm not sure that those odds are actually higher than the odds that you'll run into a platform that requires parallelization for decent performance.
It feels kind of glib to just state with such certainty that a cross-platform game engine is never going to need to draw arbitrary polygons. That doesn't seem to me like a safe assumption at all.
I agree with GP here:
> Trying to aggressively simplify without understanding the full design space or surface area is not really a great idea.
In many cases, needing to no-op some functionality for one or two platforms may end up being a lot better than a situation where you need to hack a bunch of functionality on top of an API that fundamentally is not designed to support it. It's a little bit annoying for simpler platforms, but simpler platforms are probably not your biggest fear when thinking about support. The first time that you need to do something other than linearly draw triangles, for any platform you want to support at all, even just one of them, then the API you propose suddenly becomes more complicated and harder to maintain than a single `drawPolys` method would be.
This is not saying that abstraction or narrowing design space should never happen. It's just saying, understand what the design space is before you decide that you're never going to need to support something. I expect that the Bevy core team has spent a decent amount of time thinking about what kinds of GPU operations they're likely to need for both current and future platforms.
CyberRabbi 2021-08-19 12:48:24 +0000 UTC [ - ]
It’s an example
Jweb_Guru 2021-08-19 00:25:13 +0000 UTC [ - ]
* Adapter, Device, Queue, Surface
* Buffer, Texture, TextureView, Sampler, QuerySet, BindGroup
* ShaderModule, BindGroupLayout, PipelineLayout, RenderPipeline, ComputePipeline
* CommandEncoder, RenderPassEncoder, ComputePassEncoder, RenderBundleEncoder, RenderBundle, CommandBuffer
Adapter, Device, and Surface are abstractions that would be needed no matter what the GPU architecture is. What else is there that's even fishy... QuerySet, maybe? The various kinds of encoders are already abstractions on some platforms, and I guess there are cases where there might be better resource-binding abstractions, but I'm not sure what could cause this set of supported features to change on the hardware other than completely dropping the rasterization pipeline (in which case you can just emulate everything with compute shaders and buffers anyway). And the relationships between these structures are pretty well-specified, so I doubt that will change either.
So at that point, at least if you want to stay cross platform, you're talking about things like the specific implementation of buffer mapping or tracing or whatever, which is the sort of thing it's relatively easy to add as an extension (or refactor to make more configurable).
kvark 2021-08-18 23:36:27 +0000 UTC [ - ]
This isn't obviously true. The WebGPU API surface is fairly small (unlike Vulkan's), and Bevy may easily use most of it.
Generally speaking, depending on another library always involves some amount of trust. Going the path of zero trust is also possible, but it is much less effective.
CyberRabbi 2021-08-18 23:41:41 +0000 UTC [ - ]
Small is not the issue. Generality is.
Jweb_Guru 2021-08-18 23:53:06 +0000 UTC [ - ]
Given all this, what do you think is missing that makes it "too general" for use as the basis of a game engine? Many of the features you're probably thinking of are a nonstarter if you want to be cross-platform, and being efficient cross-platform is a key motivating reason to use wgpu in the first place.
CyberRabbi 2021-08-19 00:02:14 +0000 UTC [ - ]
My concern isn’t that wgpu’s level of generality makes it unsuitable for implementing a game engine. It’s of course perfectly suitable for implementing a game engine. The concern is around the pitfalls of not architecting your game engine in terms of your own limited graphics abstraction (which may backend to wgpu).
nindalf 2021-08-19 06:52:33 +0000 UTC [ - ]
I feel like we could just put this conversation on ice and check back in after 6-12 months. Bevy release notes always reach the top of HN. If they go back to creating an abstraction over wgpu, you can say “I told you so” then. And similarly, if Bevy is able to use this approach to provide support for Android, iOS and web (in addition to existing support for windows, Linux and Mac) then you need to own up to that.
CyberRabbi 2021-08-19 12:52:13 +0000 UTC [ - ]
Yet other people besides Cart in my thread still fail to properly understand the idea and instead think I am suggesting not using wgpu at all.
> you can say “I told you so” then
My aim isn’t to say “I told you so” it’s to make sure people accurately understand the point I am making. Cart seems to understand the point I am making therefore I have made no further comments to him.
jokethrowaway 2021-08-19 10:24:30 +0000 UTC [ - ]
I would agree if it were a dependency on third party closed source solution, but wgpu is a thriving and well maintained OSS project.
Nothing prevents you from forking it and treating it like your own code in a catastrophic scenario.
CyberRabbi 2021-08-19 13:01:18 +0000 UTC [ - ]
It’s much easier for deployment and distribution purposes to add a level of indirection in your codebase than to maintain a fork of a large project. Maintaining a fork is a full-time job in itself, especially for active projects, where you’ll need to constantly merge new changes and keep up with the evolution of the project's internals via the mailing list or other means.