How can WebGL be used for general computing (GPGPU)? [duplicate]
I've heard that you can use WebGL for general computing (GPGPU) by generating textures and using the RGB values (or something like that to run computations). How is this possible, and could someone please provide a simple example with code?

Specifically, I know you can use images and map them to surfaces with textures, but I don't know how to create an arbitrary texture of RGB values in the WebGL shaders. Also, once you've generated a texture, in what way would you use the RGB values to actually do computations? Finally, once the computations have been performed, how do you pass the results back to JavaScript so that they can be used in a meaningful way?

Thank you.

Spatula answered 31/12, 2015 at 2:56 Comment(0)

WebGL (OpenGL) allows you to create shader programs, which let you run your own code on the GPU. Normally this is used to calculate the position and color of what gets displayed. To do this, you feed data to the GPU and then run your own code against that data. As you can imagine, the ability to pass in arbitrary data and run arbitrary instructions means general computation is possible (within limits).

You are also not bound to render directly to the screen (the back buffer); you can render to a texture of your choice. One limitation (in the base spec) is that you can only output a 4×8-bit RGBA value for each pixel. How you compute this 32-bit RGBA value and how you interpret it afterwards is entirely up to you.
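To make the "how you interpret it is up to you" point concrete, here is a minimal CPU-side sketch of treating four 8-bit RGBA channels as one 32-bit unsigned integer and back. The function names are illustrative, not part of any API; this is the same byte layout you would get back from gl.readPixels.

```javascript
// Pack a 32-bit unsigned integer into four 8-bit channels (R, G, B, A)
// and recover it again. This mirrors the layout of a single pixel read
// back from the GPU with gl.readPixels.
function packUint32(value) {
  return new Uint8Array([
    (value >>> 24) & 0xff, // R
    (value >>> 16) & 0xff, // G
    (value >>> 8) & 0xff,  // B
    value & 0xff           // A
  ]);
}

function unpackUint32(rgba) {
  // >>> 0 forces the result back into unsigned 32-bit range.
  return ((rgba[0] << 24) | (rgba[1] << 16) | (rgba[2] << 8) | rgba[3]) >>> 0;
}
```

The round trip is lossless, so any 32-bit quantity can ride inside one RGBA pixel.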

As an example, suppose you want to run many sqrt calculations. First you create a texture with each "pixel" holding an initial value (a float packed into RGBA); then you instruct the GPU to perform a sqrt calculation for every pixel and output the results. The result is garbage as a screen color, but if you write an unpack function that converts RGBA back to a float, you have your GPU-based sqrt calculator!
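The pack/unpack half of that idea can be sketched on the CPU by aliasing a float's bytes as RGBA channels via typed arrays. This is only the byte-level illustration (the actual per-pixel sqrt would run in a GLSL fragment shader); the function names are made up for the example.

```javascript
// A Float32Array and a Uint8Array sharing one 4-byte buffer: the same
// memory viewed as one float or as four RGBA channels.
const f32 = new Float32Array(1);
const bytes = new Uint8Array(f32.buffer);

// "Pack": write the float, read its raw bytes as [R, G, B, A].
function floatToRGBA(x) {
  f32[0] = x;
  return Array.from(bytes);
}

// "Unpack": write the four channels back and read out the float.
function rgbaToFloat(rgba) {
  bytes.set(rgba);
  return f32[0];
}

// Simulate the pipeline: pack the input, "compute" sqrt, use the result.
const packed = floatToRGBA(16.0);
const result = Math.sqrt(rgbaToFloat(packed));
```

In a real WebGL version, floatToRGBA fills the input texture's data array and rgbaToFloat decodes what gl.readPixels returns.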

Also, a texture is nothing more than an array of values packed in a certain format. Try canvas.getContext("2d").getImageData(0,0,10,10) for example. The equivalent in WebGL is gl.readPixels().

No runnable example from me, however; the WebGL API is very verbose, and it takes a lot of boilerplate to get even something simple done.

Harlequinade answered 31/12, 2015 at 4:49 Comment(3)
Thanks a lot for the answer, WacławJasper. I'm currently most of the way through the book WebGL Programming Guide, and I have a good grasp on the general flow of WebGL programs at this point. I have also done several programs with textures, but in every case, it was a matter of taking an image and using a texture to map it to a surface. I'm not quite sure how I need to change my JS or GLSL code to arbitrarily input RGBA values for each pixel. That's the information I was hoping to get. Beyond that, I can imagine how I would use GLSL to multiply multiple textures together, etc. to...Spatula
Do some calculations very quickly. Finally, you helped a lot by referring to the gl.readPixels() function, which I didn't know existed for reading the result of a generated texture/canvas. If you can offer any more pointed information on how to input arbitrary RGBA values into each pixel of a texture, that would be much appreciated. Thank you.Spatula
gl.texImage2D allows you to specify a TypedArray as the data source. A typical RGBA texture is structured as [R,G,B,A,R,G,B,A...], so you can write JS code to build an array and manipulate it directly. I suggest playing around with the 2D context and .getImageData first. Maybe something like a greyscale filter for starters? Keep this in mind though: #7870252Nanananak
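The suggestion in that last comment can be sketched in plain JS: build an [R,G,B,A,...] array of the kind gl.texImage2D accepts and apply a greyscale filter in place. The pixel values and dimensions here are arbitrary, chosen just to illustrate the layout.

```javascript
// A tiny 2x1 RGBA pixel buffer, in the same flat layout used by
// ImageData.data and by gl.texImage2D with gl.RGBA / gl.UNSIGNED_BYTE.
const pixels = new Uint8Array([
  255, 0, 0, 255,  // one red pixel
  0, 0, 255, 255   // one blue pixel
]);

// Greyscale filter: replace R, G, B with a luminance-weighted average
// (Rec. 601 coefficients), leaving alpha untouched.
for (let i = 0; i < pixels.length; i += 4) {
  const grey = Math.round(
    0.299 * pixels[i] + 0.587 * pixels[i + 1] + 0.114 * pixels[i + 2]
  );
  pixels[i] = pixels[i + 1] = pixels[i + 2] = grey;
}
```

The same buffer could then be uploaded with gl.texImage2D, or written back into an ImageData for a 2D canvas.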

© 2022 - 2024 — McMap. All rights reserved.