In my quest to learn more about game development, I’m at the point where I’m looking at the history of rendering via framebuffers, learning the terms used in various graphics apis, and finally taking a stab at learning WGPU in Rust. This is actually turning out much better than I expected. By looking at the history of this terminology, I’m able to gain a better understanding of the background behind the api designs.

Let’s start with the basic terminology:

  • Refresh Rates:

    The term refresh rate traces back to CRT (cathode ray tube) screens. This differs from LCD monitors, where each pixel holds its state until it is instructed to change. On a CRT, the time between when the electron beam reaches the bottom right of the screen and when it starts over at the top left is typically referred to as the vertical blanking interval.

    A cathode ray tube contains one or more electron emitters, each producing a narrow, collimated electron beam with a precise kinetic energy, and a phosphorescent screen to display images. The beams are modulated, accelerated, and deflected onto the screen to create images in the form of electrical waveforms (oscilloscopes), pictures, radar targets, or other phenomena.

    Images are produced by controlling the intensity of each of the three electron beams (one for each additive primary color: red, green, and blue) with the video signal as a reference. The beams are bent by magnetic deflection: a varying magnetic field generated by coils around the neck of the tube and driven by electronic circuits. The electron beam sweeps across the screen one dot at a time, left to right, row by row, until it reaches the bottom right corner of the screen - then starts over at the top left. Since the phosphor dots on the screen lose energy over time, the screen has to be refreshed by exciting those dots again with the electron beam.

  • VBLANK

    The vertical blanking interval is the time between the end of the final visible line of a frame or field and the beginning of the first visible line of the next frame. In cathode ray tubes, a blanking signal is supplied during this period to avoid painting the retrace line, and digital displays usually will not display an incoming data stream during the blanking interval if one is present. Modern CRT circuitry does not require a long blanking interval, and LCDs require none, but the standards were established back when the delay was needed. As far as the GPU is concerned, the vertical blanking interval is what signals when the monitor is ready to receive the next frame.

  • Framebuffer

    The GPU reserves a piece of memory to store the finished rendered frame that will be sent to the monitor. Circuitry on the video card converts this in-memory representation of an image into a video signal that is sent off to the monitor. The buffer itself is then nothing more than the bit representation of each pixel in a given image. Older formats might store only 1 bit per pixel (monochrome), in contrast with modern 24-bit true color formats.

  • Virtual framebuffer

    Virtual framebuffers emulate the functionality of a framebuffer device. They abstract away the physical method of accessing the underlying framebuffer behind a guaranteed memory map that is easy to access.

    This is really useful if you’re working on a Raspberry Pi and need to write graphical applications without a desktop environment. On Linux, the framebuffer device lets you simply write bytes to /dev/fb0 in order to change what is displayed on the screen while a program is running - handy when writing GUI applications on the Raspberry Pi (a minimal sketch follows after this list).

  • Double buffering, VSync, GSync, FreeSync

    With double buffering, from a software rendering perspective, drawing operations are stored in memory until rendering is considered complete - at which point that region is copied to the GPU’s “front buffer”. With page flipping, rather than copying data, both buffers are capable of being displayed by the video card: the display circuitry points at one of the buffers in memory to send to the monitor, and a hardware register on the display controller switches the pointer to the beginning of display data between the two buffers (see the page-flipping sketch after this list).

    Syncing technologies such as vsync, FreeSync, and G-Sync differ in how they coordinate with the monitor’s refresh rate - vsync, for example, relies on the monitor’s fixed refresh rate and resends the same framebuffer when the video card can’t keep up. Screen tearing occurs when the buffer being displayed is switched underneath the display mid-refresh, before the next buffer’s contents have been fully scanned out. As a result, part of the screen shows data from one buffer (read before the switch) and the rest shows data from the new buffer (read after the switch). This can happen when the card isn’t keeping up with the screen’s refresh rate, for instance.

    With G-Sync and FreeSync, on the other hand, the video card drives the refresh rate itself - the next frame is pushed to the display as soon as a framebuffer is finished being written.

  • Swap chains

    A swap chain is a series of virtual framebuffers used by graphics cards and apis for frame rate stabilization and other rendering functions. Swap chains typically live in graphics memory, but can exist in system memory as well. Many graphics apis require one, which is why you see them so often in those initial “draw a triangle” examples these days. A swap chain with two buffers is a double buffer (screen buffer and back buffer).

  • Framebuffer object architecture

    In any graphics library (OpenGL, DirectX, Vulkan, Metal), there is usually a concept for rendering off-screen, such as rendering to a texture, and the ability to pipeline a series of post-processing effects (such as running fragment shaders).

  • Descriptor sets

    Arseny Kapoulkine has a great article on the Vulkan API mental model when it comes to building a renderer. Effectively, Vulkan’s memory model involves GPU-visible descriptors that describe resources (buffers, images, samplers) usable from shaders; descriptor sets group these descriptors so they can be bound together when recording commands.

  • Command buffering and submitting

    A command buffer is a region of memory holding a set of instructions for the graphics processor to render some portion of a scene. These can be generated from bare-metal instructions or from higher- or lower-level apis. A render graph is often seen in association with command buffering as a way to understand how a single frame is constructed across many different layers, apis, shaders, and textures.

  • Render pass

    A render pass is a stage in the rendering pipeline that generates some (possibly incomplete) representation of a scene. The rendering pipeline typically includes model and camera transformations (converting object and vertex coordinates accordingly), lighting, clipping, occlusion, and the window/viewport transformation down to device coordinates.

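To make the virtual framebuffer idea concrete, here’s a minimal sketch of writing raw pixels to /dev/fb0 on Linux. The 1920x1080, 32-bits-per-pixel BGRA layout is an assumption for illustration only - the real geometry and pixel format have to be queried from the framebuffer device, and you need permission to write to it:

use std::fs::OpenOptions;
use std::io::{Seek, SeekFrom, Write};

fn main() -> std::io::Result<()> {
    // Assumed geometry: 1920x1080 at 32 bits per pixel (BGRA). These are placeholder
    // values - the real values must be queried from the framebuffer device.
    let (width, height) = (1920usize, 1080usize);
    let pixel = [0x30u8, 0x20, 0x90, 0xff]; // one pixel: blue, green, red, alpha

    // Build a full frame in memory, then write it out to the framebuffer device.
    let mut frame = Vec::with_capacity(width * height * pixel.len());
    for _ in 0..(width * height) {
        frame.extend_from_slice(&pixel);
    }

    let mut fb = OpenOptions::new().write(true).open("/dev/fb0")?;
    fb.seek(SeekFrom::Start(0))?;
    fb.write_all(&frame)?; // the whole screen becomes a single color
    Ok(())
}
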
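And here’s a toy sketch of the page-flipping idea from the double buffering entry: two in-memory buffers, drawing always targets the back buffer, and a “flip” just changes which one counts as the front. The DoubleBuffer type and its methods are made up for illustration - real hardware flips by rewriting the register that points at the start of display memory:

// Toy model of page flipping - the names here are invented, not a real display API.
struct DoubleBuffer {
    buffers: [Vec<u32>; 2], // two full frames of 32-bit pixels
    front: usize,           // index of the buffer currently being "scanned out"
}

impl DoubleBuffer {
    fn new(width: usize, height: usize) -> Self {
        Self {
            buffers: [vec![0; width * height], vec![0; width * height]],
            front: 0,
        }
    }

    // Drawing always targets the buffer that is not being displayed.
    fn back_mut(&mut self) -> &mut [u32] {
        &mut self.buffers[1 - self.front]
    }

    // Ideally called during the vertical blanking interval so the switch isn't
    // visible mid-refresh (switching mid-refresh is exactly what causes tearing).
    fn flip(&mut self) {
        self.front = 1 - self.front;
    }
}
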
A lot of this terminology might seem daunting at first, but when you start looking at the history it starts to make sense. A graphics system has to let you create windows, adjust the way you want to render things, and pick which device to use under varying conditions - and you want it all to be fast and multithreaded. So you end up with command queuing, compiling programs to store on the GPU, buffering up resources those programs will use, and piping data over to the GPU as inputs to those compiled programs, until finally the output is rendered to a surface you’ve also configured and specified. It’s a lot more involved than just pointing a block of memory at whatever a “display” is.

WGPU

Getting started with wgpu is much simpler once we keep in mind some of the concepts and terms listed above. I’m going to use the following libraries to get going:

  • winit for window management, event loops, events
  • winit_input_helper a wrapper to keep the input state up to date from events passed along by the winit event loop
  • wgpu-rs the Rust library for wgpu, providing general-purpose graphics and cross-platform backend support

Setup a new project with these dependencies (use cargo-edit to quickly manage this):

cargo init wgpu-example && cd wgpu-example

cargo add winit
cargo add winit_input_helper
cargo add wgpu

Next, let’s set things up so we can work with a winit Window.

use winit::event::{VirtualKeyCode};
use winit::event_loop::{ControlFlow, EventLoop};
use winit::window::{Window, WindowBuilder};
use winit_input_helper::WinitInputHelper;

fn main() {
    let mut input = WinitInputHelper::new();

    let event_loop = EventLoop::new();
    let window = WindowBuilder::new()
        .with_title("Rust by Example: WGPU!")
        .build(&event_loop)
        .unwrap();

    event_loop.run(move |event, _, control_flow| {
        if input.update(&event) {
            if input.key_released(VirtualKeyCode::Escape) || input.quit() {
                *control_flow = ControlFlow::Exit;
                return;
            }
        }
    });
}

Great! The app will respond to the event loop, and we can use the input state to determine when to properly exit the application’s control flow. Next, wgpu requires a type implementing the raw-window-handle trait so the graphics backend can access the native window. Winit provides this on Window, so we can carry on to getting an actual graphics device to work with.

If you haven’t already, check out my post on winit and pixels in rust where I go over what winit is doing and what the event loop is for.

struct State {
    surface: wgpu::Surface,
    device: wgpu::Device,
    queue: wgpu::Queue,
    sc_desc: wgpu::SwapChainDescriptor,
    swap_chain: wgpu::SwapChain,
    size: winit::dpi::PhysicalSize<u32>
}

In the above state, we have the following property definitions:

  • Surface

    A platform-specific surface (e.g. a window) onto which rendered images may be presented (each backend uses the raw window handle according to how it needs to render)

  • Device

    An open connection to a graphics and/or compute device, responsible for creating the rendering and compute resources that are used in commands and then sent to a Queue. Methods on a device include features() to list the features available on the device and create_shader_module to generate a shader module from source, among many other creation methods for render pipelines, layouts, bind groups, compute pipelines, textures, samplers, swap chains, and buffers. All creation methods reference the device context and deal with the underlying backend for how these resources become available on the graphics device itself.

  • Queue

    A handle to a command queue on a device, which executes recorded CommandBuffer objects and provides convenient methods for writing to buffers and textures (such as scheduling a data write to a texture or writing directly into a buffer). Queues are what let us schedule graphics commands that are ultimately queued up and executed on the graphics device itself.

  • SwapChainDescriptor: describes how the underlying swap chain will be used. Primarily this includes the usage as an OUTPUT_ATTACHMENT, meaning that the texture will be the resource that is the output of a render pass. Following the swap chain usage is the texture format (Bgra8UnormSrgb or Bgra8Unorm), width and height of the swap chain to match the surface size, and finally the present mode.

    [PresentMode](https://docs.rs/wgpu/0.6.0/wgpu/enum.PresentMode.html), as specified in the SwapChainDescriptor, provides three different ways to instruct the presentation engine on how to wait.

    • Immediate: does not wait for the vertical blanking period; the request is presented immediately (low latency, but visible tearing is likely to occur; falls back to Fifo if unavailable on the selected platform and backend; not optimal for mobile)
    • Mailbox: waits for the next vertical blanking period to update the current image, but frames may be submitted without delay. This is low latency and visible tearing will not be observed. Falls back to Fifo if unavailable and is not optimal for mobile.
    • Fifo: waits for the next vertical blanking period to update the current image. The framerate is capped at the display refresh rate, corresponding to vsync. Tearing cannot be observed. Optimal for mobile.
  • SwapChain represents the image or series of images that will be presented to the Surface. Calling get_current_frame returns the next texture in the swap chain to draw into. When the returned frame is dropped, the swap chain presents that texture to the associated surface.

  • PhysicalSize: the size represented in physical pixels

With that State defined, we can carry on and write an impl for it that creates the Surface and, eventually, the Device.

impl State {
    async fn new(window: &Window) -> Self {
        let instance = wgpu::Instance::new(wgpu::BackendBit::PRIMARY);
        let surface = unsafe { instance.create_surface(window) };
        let adapter = instance.request_adapter(
            &wgpu::RequestAdapterOptions {
                power_preference: wgpu::PowerPreference::Default,
                compatible_surface: Some(&surface)
            }
        ).await.unwrap();
        // ...
    }
}

When requesting an adapter with a power_preference, keep in mind what each preference corresponds to:

  • Default: prefer low power when on battery, high performance when on mains
  • LowPower: adapter that uses the least possible power, often an integrated GPU
  • HighPerformance: adapter that uses highest performance, often a discrete GPU

Next, once we have an adapter and a surface, we can use them to request a device and a queue.

impl State {
    async fn new(window: &Window) -> Self {
        let instance = wgpu::Instance::new(wgpu::BackendBit::PRIMARY);
        let surface = unsafe { instance.create_surface(window) };
        let adapter = instance.request_adapter(
            &wgpu::RequestAdapterOptions {
                power_preference: wgpu::PowerPreference::Default,
                compatible_surface: Some(&surface)
            }
        ).await.unwrap();

        let (device, queue) = adapter.request_device(
            &wgpu::DeviceDescriptor {
                features: wgpu::Features::empty(),
                limits: wgpu::Limits::default(),
                shader_validation: true
            },
            None
        ).await.unwrap();
        // ...
    }
}

When requesting a device, the device descriptor lets us specify what wgpu::Features the device should support as well as what wgpu::Limits to impose and whether to perform shader validation.

Note: the shader_validation flag is temporary, since wgpu plans to build in validation logic that will make it unnecessary.

Next, let’s create the swap chain from the device:

impl State {
    async fn new(window: &Window) -> Self {
        let size = window.inner_size();
        let instance = wgpu::Instance::new(wgpu::BackendBit::PRIMARY);
        let surface = unsafe { instance.create_surface(window) };
        let adapter = instance.request_adapter(
            &wgpu::RequestAdapterOptions {
                power_preference: wgpu::PowerPreference::Default,
                compatible_surface: Some(&surface)
            }
        ).await.unwrap();

        let (device, queue) = adapter.request_device(
            &wgpu::DeviceDescriptor {
                features: wgpu::Features::empty(),
                limits: wgpu::Limits::default(),
                shader_validation: true
            },
            None
        ).await.unwrap();

        let sc_desc = wgpu::SwapChainDescriptor {
            usage: wgpu::TextureUsage::OUTPUT_ATTACHMENT,
            format: wgpu::TextureFormat::Bgra8UnormSrgb,
            width: size.width,
            height: size.height,
            present_mode: wgpu::PresentMode::Fifo
        };

        let swap_chain = device.create_swap_chain(&surface, &sc_desc);

        Self {
            surface,
            device,
            queue,
            sc_desc,
            swap_chain,
            size
        }
    }
}

The main function isn’t async, so we need a way to block on the future returned by State::new. Add the futures crate with cargo add futures and import the following:

use futures::executor::block_on;

Then in the main function you can create the state like so:

let mut state = block_on(State::new(&window));

I’ve skipped the input handling since winit_input_helper takes care of that for us. The only thing we want to do is make sure we recreate the swap chain whenever the window is resized. This is important because the swap chain descriptor was specified with the original physical size in pixels, so if that changes we need a new swap chain to work with. The underlying buffer may end up getting resized as a result, so it’s not something you want to be doing often.

// impl State
fn resize(&mut self, new_size: winit::dpi::PhysicalSize<u32>) {
    self.size = new_size;
    self.sc_desc.width = new_size.width;
    self.sc_desc.height = new_size.height;
    self.swap_chain = self.device.create_swap_chain(&self.surface, &self.sc_desc);
}

Then in the main event code, we simply need to handle the event for when the window has been resized or its scaling has changed.

use futures::executor::block_on;

use winit::event::{Event, VirtualKeyCode, WindowEvent};
use winit::event_loop::{ControlFlow, EventLoop};
use winit::window::{Window, WindowBuilder};
use winit_input_helper::WinitInputHelper;

fn main() {
    let mut input = WinitInputHelper::new();

    let event_loop = EventLoop::new();
    let window = WindowBuilder::new()
        .with_title("Rust by Example: WGPU!")
        .build(&event_loop)
        .unwrap();

    let mut state = block_on(State::new(&window));

    event_loop.run(move |event, _, control_flow| {
        if input.update(&event) {
            if input.key_released(VirtualKeyCode::Escape) || input.quit() {
                *control_flow = ControlFlow::Exit;
                return;
            }
        }

        match event {
            Event::WindowEvent { event, .. } => match event {
                WindowEvent::Resized(physical_size) => {
                    state.resize(physical_size);
                },
                _ => {}
            },
            _ => {}
        }
    });
}

// struct State
// ...
// impl State {
// ...

WGPU: Rendering

Now that we have a device, a surface, and a swap chain that is recreated whenever the window resizes, I can move on to the actual render code. Since operations for the GPU are sent in the form of a command buffer, we need a way to encode these commands as GPU operations. Use the device to create a command encoder, which records the commands into a buffer before they are actually submitted to the GPU for execution.

impl State {
    async fn new(window: &Window) -> Self {
        // ...
    }

    fn render(&mut self) {
        let mut encoder = self.device.create_command_encoder(&wgpu::CommandEncoderDescriptor {
            label: Some("RENDER ENCODER") // label in graphics debugger
        });

        // ...
    }
}

A render pass is the object that exposes the methods where drawing actually occurs, along with its color attachments. The render pass effectively records the operations we pass along to it, and once we are done, we finish the command encoder and submit the resulting commands to the queue.

begin_render_pass takes a RenderPassDescriptor describing which resources will be attached and rendered onto as render targets (including color, depth/stencil, and resolve attachments). These are effectively “images”, but the descriptor is the metadata by which we tell the api what resources each step in a render pass will render onto.

fn render(&mut self) {
    let frame = self.swap_chain.get_current_frame()
        .expect("Timeout getting texture")
        .output;

    let mut encoder = self.device.create_command_encoder(&wgpu::CommandEncoderDescriptor {
        label: Some("RENDER ENCODER") // label in graphics debugger
    });

    let _render_pass = encoder.begin_render_pass(&wgpu::RenderPassDescriptor {
        color_attachments: &[
            wgpu::RenderPassColorAttachmentDescriptor {
                attachment: &frame.view,
                resolve_target: None,
                ops: wgpu::Operations {
                    load: wgpu::LoadOp::Clear(wgpu::Color {
                        r: 0.9,
                        g: 0.2,
                        b: 0.3,
                        a: 1.0
                    }),
                    store: true
                }
            }
        ],
        depth_stencil_attachment: None
    });

    // ...
}

Note that at the beginning of this function, we are getting the next texture to be presented by the swap chain for drawing with get_current_frame.

I noticed that RenderPassColorAttachmentDescriptor provides options for the attachment and a resolve_target, which gives a way to do multisampling - render into a multisampled texture and output a downsampled (resolved) image, as sketched below.
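
Here’s a rough sketch of how I understand that wiring (an assumption on my part, not something this example sets up): msaa_view would be a view of a texture created with the swap chain’s format and size but a sample_count of 4, and any render pipeline used in the pass would need the matching sample count.

// Hypothetical multisampled color attachment: render into `msaa_view` (sample_count: 4)
// and resolve the downsampled result into the swap chain's frame. This would replace
// the color attachment in the render pass descriptor above.
wgpu::RenderPassColorAttachmentDescriptor {
    attachment: &msaa_view,            // the multisampled render target (assumed to exist)
    resolve_target: Some(&frame.view), // where the resolved image ends up
    ops: wgpu::Operations {
        load: wgpu::LoadOp::Clear(wgpu::Color::BLACK),
        store: true
    }
}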

Finally, the operations we attach to this descriptor are built around a generic load/clear type. I’m not quite sure what else is used here other than color at the moment, but it’s convenient that it’s built in a generic way to simply say: load the previous contents of the attachment for subsequent render operations, or clear it to this color.

Depth stencil attachments, from what I can gather, provide a way to limit the area of rendering (in the simplest case of stenciling) and can store the z-value of each generated pixel (in a z-buffer or depth buffer) so that various depth tests can produce a vast number of effects: dissolves, fading, silhouettes, shadow volumes. This area is more complicated than I can lay out here, so I’ll revisit it at some point to gain a better understanding. Techniques like mirroring reflections of a scene and creating soft shadows also make use of depth stencil buffers.
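
For reference, attaching a depth buffer with this version of the API would look roughly like the sketch below. This is only a sketch: it assumes a depth_view that is a TextureView over a Depth32Float texture created with the swap chain’s width, height, and OUTPUT_ATTACHMENT usage, none of which this post’s example actually creates.

// Sketch: clear a depth buffer alongside the color attachment each frame.
// `depth_view` is assumed to exist (a view of a Depth32Float texture sized to the swap chain).
let _render_pass = encoder.begin_render_pass(&wgpu::RenderPassDescriptor {
    color_attachments: &[ /* same color attachment as above */ ],
    depth_stencil_attachment: Some(wgpu::RenderPassDepthStencilAttachmentDescriptor {
        attachment: &depth_view,
        depth_ops: Some(wgpu::Operations {
            load: wgpu::LoadOp::Clear(1.0), // 1.0 = the far plane, so anything drawn is "closer"
            store: true
        }),
        stencil_ops: None // no stencil usage in this sketch
    })
});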

Alright, now that I have the render pass set up, I can add further draw operations, render pipelines, shaders, and all the rest - and finally, when I’m done with a pass, I can submit it to the queue.

fn render(&mut self) {
    let frame = self.swap_chain.get_current_frame()
        .expect("Timeout getting texture")
        .output;

    let mut encoder = self.device.create_command_encoder(&wgpu::CommandEncoderDescriptor {
        label: Some("RENDER ENCODER") // label in graphics debugger
    });

    {
        let _render_pass = encoder.begin_render_pass(&wgpu::RenderPassDescriptor {
            color_attachments: &[
                wgpu::RenderPassColorAttachmentDescriptor {
                    attachment: &frame.view,
                    resolve_target: None,
                    ops: wgpu::Operations {
                        load: wgpu::LoadOp::Clear(wgpu::Color {
                            r: 0.1,
                            g: 0.2,
                            b: 0.3,
                            a: 1.0
                        }),
                        store: true
                    }
                }
            ],
            depth_stencil_attachment: None
        });
    }

    // submit the commands to the queue!
    self.queue.submit(std::iter::once(encoder.finish()));
}

The Queue has other methods for scheduling writes to buffers or textures, but here we just use submit. Submitting to the queue takes an iterator of command buffers, and since the encoder returns a single CommandBuffer when we call finish, I’m using the standard library function std::iter::once to convert that single value into an iterator.

Awesome! Looks like that’s about all you need (aside from all the complexity of shaders, pipelines, and buffers which I don’t think I’m going to get into yet). This lets us create a basic setup of a single render pass, perform a clear and submit the command buffer operations to the queue. The only thing left is to update the main event loop code to actually call render.

Event Loop Request Redraw

Note that drawing to a surface won’t update automatically; you have to request a redraw. You can imagine a web client using requestAnimationFrame, for instance, as a way to say “call my function again to run some render code”. In the case of winit, the event to look at is RedrawRequested. Event::RedrawRequested fires when either the OS has performed an operation that invalidated the surface (such as resizing) or we explicitly call Window::request_redraw, preferably after all of the event loop’s input events have been processed and redraw processing is about to begin (i.e. MainEventsCleared).

fn main() {
    let mut input = WinitInputHelper::new();

    let event_loop = EventLoop::new();
    let window = WindowBuilder::new()
        .with_title("Rust by Example: WGPU!")
        .build(&event_loop)
        .unwrap();

    let mut state = block_on(State::new(&window));

    event_loop.run(move |event, _, control_flow| {
        if input.update(&event) {
            if input.key_released(VirtualKeyCode::Escape) || input.quit() {
                *control_flow = ControlFlow::Exit;
                return;
            }
        }

        match event {
            Event::RedrawRequested(_) => {
                state.render();
            },
            Event::WindowEvent { event, .. } => match event {
                WindowEvent::Resized(physical_size) => {
                    state.resize(physical_size);
                },
                _ => {}
            },
            Event::MainEventsCleared => {
                window.request_redraw();
            },
            _ => {}
        }
    });
}

WGPU Clear

Awesome! Now that I have the basic idea of using a render pass, attachment operations, command buffers, swap chains, surface handles, and device adapters, I can move on to getting something more robust on the screen!

All that terminology, and the actual full code looks like this!

use futures::executor::block_on;

use winit::event::{Event, VirtualKeyCode, WindowEvent};
use winit::event_loop::{ControlFlow, EventLoop};
use winit::window::{Window, WindowBuilder};
use winit_input_helper::WinitInputHelper;

fn main() {
    let mut input = WinitInputHelper::new();

    let event_loop = EventLoop::new();
    let window = WindowBuilder::new()
        .with_title("Rust by Example: WGPU!")
        .build(&event_loop)
        .unwrap();

    let mut state = block_on(State::new(&window));

    event_loop.run(move |event, _, control_flow| {
        if input.update(&event) {
            if input.key_released(VirtualKeyCode::Escape) || input.quit() {
                *control_flow = ControlFlow::Exit;
                return;
            }
        }

        match event {
            Event::RedrawRequested(_) => {
                state.render();
            },
            Event::WindowEvent { event, .. } => match event {
                WindowEvent::Resized(physical_size) => {
                    state.resize(physical_size);
                },
                WindowEvent::ScaleFactorChanged { new_inner_size, .. } => {
                    state.resize(*new_inner_size);
                },
                _ => {}
            },
            Event::MainEventsCleared => {
                window.request_redraw();
            },
            _ => {}
        }
    });
}

struct State {
    surface: wgpu::Surface,
    device: wgpu::Device,
    queue: wgpu::Queue,
    sc_desc: wgpu::SwapChainDescriptor,
    swap_chain: wgpu::SwapChain,
    size: winit::dpi::PhysicalSize<u32>
}

impl State {
    async fn new(window: &Window) -> Self {
        let size = window.inner_size();
        let instance = wgpu::Instance::new(wgpu::BackendBit::PRIMARY);
        let surface = unsafe { instance.create_surface(window) };
        let adapter = instance.request_adapter(
            &wgpu::RequestAdapterOptions {
                power_preference: wgpu::PowerPreference::Default,
                compatible_surface: Some(&surface)
            }
        ).await.unwrap();

        let (device, queue) = adapter.request_device(
            &wgpu::DeviceDescriptor {
                features: wgpu::Features::empty(),
                limits: wgpu::Limits::default(),
                shader_validation: true
            },
            None
        ).await.unwrap();

        let sc_desc = wgpu::SwapChainDescriptor {
            usage: wgpu::TextureUsage::OUTPUT_ATTACHMENT,
            format: wgpu::TextureFormat::Bgra8UnormSrgb,
            width: size.width,
            height: size.height,
            present_mode: wgpu::PresentMode::Fifo
        };

        let swap_chain = device.create_swap_chain(&surface, &sc_desc);

        Self {
            surface,
            device,
            queue,
            sc_desc,
            swap_chain,
            size
        }
    }

    fn resize(&mut self, new_size: winit::dpi::PhysicalSize<u32>) {
        self.size = new_size;
        self.sc_desc.width = new_size.width;
        self.sc_desc.height = new_size.height;
        self.swap_chain = self.device.create_swap_chain(&self.surface, &self.sc_desc);
    }

    fn render(&mut self) {
        let frame = self.swap_chain.get_current_frame()
            .expect("Timeout getting texture")
            .output;

        let mut encoder = self.device.create_command_encoder(&wgpu::CommandEncoderDescriptor {
            label: Some("RENDER ENCODER") // label in graphics debugger
        });

        {
            let _render_pass = encoder.begin_render_pass(&wgpu::RenderPassDescriptor {
                color_attachments: &[
                    wgpu::RenderPassColorAttachmentDescriptor {
                        attachment: &frame.view,
                        resolve_target: None,
                        ops: wgpu::Operations {
                            load: wgpu::LoadOp::Clear(wgpu::Color {
                                r: 0.9,
                                g: 0.2,
                                b: 0.3,
                                a: 1.0
                            }),
                            store: true
                        }
                    }
                ],
                depth_stencil_attachment: None
            });
        }


        // submit the commands to the queue!
        self.queue.submit(std::iter::once(encoder.finish()));
    }
}

Conclusion

Long gone are the days when you could write directly to framebuffer memory and update whatever you wanted on the screen. These days the way we get things done with graphics is to create small compiled programs that run on the GPU and operate on data we also send to the GPU. These come in the form of shaders, with the three types being vertex, fragment, and compute shaders. I’m going to create a separate post specifically for shaders and call this post done. It certainly was fascinating to look at the history of framebuffers and graphics rendering - from CRTs and old video arcade hardware to modern libraries like Vulkan and Metal, where the way we get things done involves a lot more steps and terminology to get used to. In any case, I’m having a lot of fun learning about this, and I hope that in my next article I can start learning about shaders. I’m also really glad I got my first foray into working with wgpu while becoming more and more familiar with Rust as I go along.