#StackBounty: #opengl #shaders #textures #flutter Render 3 textures into an RGB texture?

Bounty: 50

Flutter has support for external textures, but they have to be RGB. I want to render YUV video to Flutter.

In OpenGL I would create 3 textures and upload the Y, U, and V planes to their corresponding textures. Then I’d draw to the screen sampling these 3 textures, producing an RGB image.

On Flutter, I need to render to an RGB texture. Is there a way to still do the YUV conversion with OpenGL, and then render into Flutter’s RGB texture?

Maybe rendering to the Y, U, V textures and then rendering to the RGB texture from these 3 textures?
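One common route is to attach the RGB texture as the color attachment of an FBO and draw a full-screen quad whose fragment shader samples the three planes. The per-pixel math that shader performs is just a fixed linear combination; a minimal sketch of it in C (BT.601 full-range coefficients are assumed here, since the correct coefficients depend on the video’s color space):

```c
/* BT.601 full-range YUV -> RGB: the same per-pixel arithmetic a fragment
 * shader would perform after sampling the Y, U, and V textures.
 * All values are normalized to [0, 1]. */
static void yuv_to_rgb(float y, float u, float v,
                       float *r, float *g, float *b) {
    float cb = u - 0.5f; /* center chroma around zero */
    float cr = v - 0.5f;
    *r = y + 1.402f * cr;
    *g = y - 0.344f * cb - 0.714f * cr;
    *b = y + 1.772f * cb;
}
```

In the shader this becomes three `texture()` lookups plus the identical arithmetic, rendered into the FBO whose color attachment is the texture handed to Flutter.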


Get this bounty!!!

#StackBounty: #opengl #2d #sprite #antialiasing create graphics asset, that can be used for different team colors

Bounty: 50

I’m making a 2D game where some graphics assets must be rendered with 2 user-selected team colors.
What workflow do game developers use so that the graphics artist only needs to draw each asset once, while the code lets the end user choose the two team colors the asset will be rendered in?

Note that each color may be drawn with antialiasing against the background (transparent) or against another color.

Rendering is done with OpenGL.

same graphics asset, shown with two different team colors
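A common workflow is a mask-based tint: the artist paints the asset once in neutral colors plus a mask texture whose R and G channels mark where team color 1 and team color 2 apply. Antialiased edges are simply fractional mask values, so blending stays smooth against any background. A sketch of the per-channel blend a fragment shader would perform (names are illustrative, not from any particular engine):

```c
/* Blend one color channel: 'base' is the neutral art, 'c1'/'c2' the two
 * user-selected team colors, 'm1'/'m2' the mask coverage values in [0, 1].
 * Fractional mask values at antialiased edges blend smoothly into the
 * base art, so the artist draws each asset exactly once. */
static float tint_channel(float base, float c1, float c2,
                          float m1, float m2) {
    return base * (1.0f - m1 - m2) + c1 * m1 + c2 * m2;
}
```

In GLSL this is one extra texture sample for the mask plus a `mix`-style blend per channel, so the same sprite renders in any pair of team colors at no authoring cost.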


Get this bounty!!!

#StackBounty: #opengl #rendering #shapes Drawing basic shapes in modern OpenGL, which approach is better?

Bounty: 50

  1. Preallocating a vertex buffer on the GPU, filling it with vertex data, and then drawing it, to replicate legacy OpenGL functions like glVertex2f, glNormal2f, etc. and draw shapes with them.

  2. Sending the vertex data of all the primitive shapes to the GPU once at the start of the program, then drawing the appropriate range of the buffer whenever a shape is needed.

These are the only approaches I could think of, but I’m not sure how optimal either of them is.
Do games and game engines use a similar approach, or is there an even better way to do this?
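Approach 2 is close to what engines commonly do for primitive/debug rendering: pack all primitive meshes into one VBO at startup, record each shape’s `(first, count)` range, and later issue `glDrawArrays` over that range. A sketch of the bookkeeping (shape names and layout are illustrative assumptions):

```c
/* One shared VBO holds all primitive meshes back to back; each shape is
 * later drawn with glDrawArrays(mode, first, count) using its range. */
typedef struct { int first; int count; } ShapeRange;

enum { SHAPE_QUAD, SHAPE_CIRCLE, SHAPE_COUNT };

static ShapeRange shapes[SHAPE_COUNT];

/* Record where a shape's vertices live in the shared buffer; 'cursor'
 * tracks the running vertex offset as shapes are appended at startup. */
static int register_shape(int id, int count, int *cursor) {
    shapes[id].first = *cursor;
    shapes[id].count = count;
    *cursor += count;
    return shapes[id].first;
}

/* at draw time (GL context assumed bound):
 *   glDrawArrays(GL_TRIANGLES, shapes[id].first, shapes[id].count);      */
```

This keeps one buffer bind for all basic shapes and avoids re-uploading vertex data per frame, which is the main cost of approach 1.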


Get this bounty!!!

#StackBounty: #graphics #virtualization #kvm #opengl #virt-manager How to use OpenGL/3D acceleration in virt-manager with ubuntu?

Bounty: 100

Currently on Ubuntu 20.04, both as host and guest, I followed http://ryan.himmelwright.net/post/virtio-3d-vms/ and activated 3D acceleration on the video device and OpenGL on the display, but on VM launch I get

SPICE GL support is local only for now and incompatible with -spice port/tls-port

How can I make it work?

UPDATE:

I set Listen Type to None, like this:

[screenshot of the Spice display settings]

but I get a very glitchy image:

[screenshot of the glitchy guest display]
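The SPICE error means GL output is only supported over a local Unix socket, not a TCP port, so the `<graphics>` element must use a listen type of none. After switching Listen Type to None in virt-manager, the relevant domain XML should look roughly like this (values assumed typical for a local setup):

```xml
<graphics type='spice'>
  <listen type='none'/>
  <gl enable='yes'/>
</graphics>
<video>
  <model type='virtio' heads='1' primary='yes'>
    <acceleration accel3d='yes'/>
  </model>
</video>
```

If glitchy output persists after this change, it is worth checking the host’s virgl/Mesa stack, since the virtio-GPU 3D path depends on virglrenderer support on the host.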


Get this bounty!!!

#StackBounty: #opengl #perspective Understanding the math behind perspective matrix in OpenGL

Bounty: 50

I’ve been trying to figure out the math behind the perspective matrix for 2 weeks now, but I’m failing badly. I understand the theory behind the perspective matrix, but I’m not sure how the math works.

The code:

def perspective(fov: Float, aspect: Float, zNear: Float, zFar: Float) = {
  // h = tan(fov / 2): half the height of the projection plane at distance 1
  val h = Math.tan(fov * 0.5f).toFloat
  val c00 = 1.0f / (h * aspect) // x scale, corrected for aspect ratio
  val c11 = 1.0f / h            // y scale
  val c22 = (zFar + zNear) / (zNear - zFar)        // depth scale
  val c23 = (zFar + zFar) * zNear / (zNear - zFar) // depth offset, i.e. 2 * zFar * zNear / (zNear - zFar)

  Matrix4(
    c00,    0,      0,     0,
    0,    c11,      0,     0,
    0,      0,    c22,   c23,
    0,      0,     -1,     0   // the -1 makes w_clip = -z for the perspective divide
  )
}
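One way to bridge the gap between the frustum picture and this code: the bottom row of the matrix makes $w_{clip} = -z$, so after the perspective divide the third row must map the near plane $z = -z_{near}$ to $-1$ and the far plane $z = -z_{far}$ to $+1$. Imposing those two conditions on $z_{clip} = c_{22}\,z + c_{23}$ recovers both depth coefficients:

```latex
z_{clip} = c_{22}\, z + c_{23}, \qquad w_{clip} = -z

% near plane: z = -z_{near} must divide to -1
\frac{-c_{22}\, z_{near} + c_{23}}{z_{near}} = -1
% far plane: z = -z_{far} must divide to +1
\frac{-c_{22}\, z_{far} + c_{23}}{z_{far}} = +1

% solving the two equations simultaneously:
c_{22} = \frac{z_{far} + z_{near}}{z_{near} - z_{far}}, \qquad
c_{23} = \frac{2\, z_{far}\, z_{near}}{z_{near} - z_{far}}
```

These are exactly `c22` and `c23` in the function, while `c00` and `c11` come from the similar-triangles projection onto the near plane, scaled by $1/\tan(fov/2)$ and the aspect ratio.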

What I understand

I’ve seen video tutorials by Jorge Rodriguez & Arpan Pathak but I cannot fully relate it to the perspective matrix in the function above.

  • Congruent triangles: Following Arpan’s video, I understand that to project a point P = (x, y, z) onto a 2D XY plane I need to create a frustum, which is then used to find the point’s projection P′ = (x′, y′, z′) using the congruent-triangles relationship. Arpan’s video and the final matrix make sense, but I do not see how they relate to the perspective function above. Here is my attempt:

  • [diagram: my attempt at the congruent-triangles derivation]

  • Following the above, I flipped the frustum to the opposite direction to visualise how it would apply in OpenGL, using the perspective function above with its zFar and zNear parameters, but it’s nowhere close to the perspective matrix in the code.

  • [diagram: the flipped frustum with zNear and zFar]


Get this bounty!!!

#StackBounty: #opengl #shaders #shadow-mapping How to fix shadow not casted to terrain when rendering using default and terrain shader …

Bounty: 50

I have a TerrainShader class and a DefaultShader class, as well as an FBO (Frame Buffer Object) shadow map.

The TerrainShader contains all the terrain, light, and shadow related calculations, while the DefaultShader contains the light and shadow calculations for generic objects.

I have successfully cast directional shadows when using the DefaultShader alone with some cube objects and a plane. The problem is that when I render a terrain with the TerrainShader instead, no shadow is cast on the terrain.

Question: am I using the FBO the correct way, or am I doing it wrong?

Solution Idea (Not yet applied)

  • Merge the terrain and default shaders into one, with a flag selecting whether an object or the terrain is being rendered? (Still not sure if this is correct.)

Pseudocode (Current successful implementation)

  • Create shadow map fbo
  • Create default shader
  • Create depth shader
  • bind shadow map fbo
  • clear depth
  • render cubes & plane using depth shader (mvp)
  • unbind shadow map fbo
  • clear color and depth
  • render cubes & plane using default shader

Pseudocode (with Terrain shadow not working)

  • Create shadow map fbo
  • Create default shader
  • Create terrain shader
  • Create depth shader
  • bind shadow map fbo
  • clear depth
  • render cubes & plane using depth shader (mvp) and exclude terrain
  • unbind shadow map fbo
  • clear color and depth
  • render cubes using default shader
  • render terrain using terrain shader
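For shadows to land on the terrain, the terrain pass needs the same shadow inputs the default pass gets: the shadow map’s depth texture bound to a texture unit and the identical light-space matrix uploaded as a uniform. A revised loop along those lines (a sketch, assuming the terrain only needs to receive shadows, not cast them):

  • create shadow map fbo
  • create default, terrain, and depth shaders
  • bind shadow map fbo
  • clear depth
  • render cubes & plane using depth shader (mvp); include terrain here only if it should also cast shadows
  • unbind shadow map fbo
  • clear color and depth
  • bind the shadow map’s depth texture to a texture unit
  • render cubes using default shader (upload light-space matrix + shadow map unit)
  • render terrain using terrain shader (upload the SAME light-space matrix + shadow map unit)

If the terrain shader never binds the depth texture or uses a different light-space matrix, its shadow comparison will always pass and the terrain will appear unshadowed, which matches the symptom described.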


Get this bounty!!!