# Examples

These serve as a collection of common use-cases. Maybe in the future they can also serve as a sort of tutorial if read in order.

Minimum amount of code necessary to show a window on the screen.

Counts how many times per second we are able to draw to the screen. Our "drawing" here is very trivial: we just render the same empty buffer to the screen for every frame. Doing more work in the render loop will of course drop your frame rate.
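As a sketch of the counting logic (independent of any SDL calls), a frame counter might look like the following. The `makeFpsCounter` name and the injectable clock are illustrative, not part of the example's actual code:

```javascript
// A frame counter: call tick() once per rendered frame; it returns
// the measured frames-per-second once per elapsed second, else null.
// The clock is injectable so the logic can be tested deterministically.
function makeFpsCounter (now = () => performance.now()) {
	let frames = 0
	let last = now()
	return function tick () {
		frames++
		const t = now()
		if (t - last >= 1000) {
			const fps = frames * 1000 / (t - last)
			frames = 0
			last = t
			return fps
		}
		return null
	}
}

// Simulate frames arriving every 17 ms with a fake clock.
let fakeTime = 0
const tick = makeFpsCounter(() => fakeTime)
let reported = null
for (let i = 0; i < 60; i++) {
	fakeTime += 17
	reported = tick() ?? reported
}
console.log(reported) // roughly 59 fps
```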

Fills a buffer with pixel data and draws it to the screen. Note that whenever the window's size changes the buffer has to be resized and refilled. Conversely, observe that if the contents of the screen stay the same, no drawing at all is necessary.
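A minimal sketch of filling such a buffer by hand, assuming a tightly-packed RGBA layout (4 bytes per pixel, stride = width × 4); the gradient and the `fillGradient` helper are illustrative:

```javascript
// Fill an RGBA buffer with a horizontal red-to-blue gradient.
// Stride is the number of bytes per row of pixels.
function fillGradient (width, height) {
	const stride = width * 4
	const buffer = Buffer.alloc(stride * height)
	for (let y = 0; y < height; y++) {
		for (let x = 0; x < width; x++) {
			const offset = y * stride + x * 4
			buffer[offset + 0] = Math.round(255 * x / (width - 1)) // R
			buffer[offset + 1] = 0                                 // G
			buffer[offset + 2] = 255 - buffer[offset]              // B
			buffer[offset + 3] = 255                               // A
		}
	}
	return buffer
}

const buffer = fillGradient(4, 2)
// The real example would then hand this to the window, and would
// rebuild the buffer whenever the window is resized.
```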

If the buffer has different dimensions from the window it will be scaled (and potentially stretched) to match. By default this is done using nearest neighbor sampling, but an optional argument to window.render() controls that behavior. In this example we use a 4 pixel buffer stretched over the entire window using linear filtering.
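To make the effect concrete, here is the per-channel math that linear filtering performs when stretching a tiny buffer: each window pixel is a bilinear blend of the nearest source pixels. This is a standalone illustration of the sampling, not code from the example:

```javascript
// Bilinearly sample a 2x2 single-channel image at fractional
// coordinates (u, v) in [0, 1], with the four values at the corners.
// Linear filtering computes this blend per channel for every
// window pixel, producing a smooth gradient instead of hard edges.
function bilinear ([tl, tr, bl, br], u, v) {
	const top = tl + (tr - tl) * u
	const bottom = bl + (br - bl) * u
	return top + (bottom - top) * v
}

console.log(bilinear([0, 255, 0, 255], 0.5, 0.5)) // 127.5
```

Nearest-neighbor sampling would instead snap `(u, v)` to the closest corner, which is why it preserves hard pixel edges.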

If the buffer has different dimensions from the window it will be scaled (and potentially stretched) to match. This will distort the aspect ratio of the image which is often undesirable. To prevent this we calculate and set the dstRect property. Additionally, since this example is displaying pixel art, we make sure to only scale by an integer multiple of the original image dimensions.
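The rectangle calculation can be sketched as follows; the `fitPixelArt` helper is an illustrative name, but the math (largest integer scale that fits, then center) is exactly the described approach:

```javascript
// Compute a centered destination rectangle that scales the buffer by
// the largest integer factor that still fits inside the window,
// preserving the aspect ratio and the crispness of pixel art.
function fitPixelArt (bufW, bufH, winW, winH) {
	const scale = Math.max(1, Math.min(
		Math.floor(winW / bufW),
		Math.floor(winH / bufH),
	))
	const width = bufW * scale
	const height = bufH * scale
	return {
		x: Math.floor((winW - width) / 2),
		y: Math.floor((winH - height) / 2),
		width,
		height,
	}
}

console.log(fitPixelArt(64, 48, 800, 600))
// { x: 16, y: 12, width: 768, height: 576 }
```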

Instead of filling the buffer by hand we can use the @napi-rs/canvas library which does the same thing using the familiar Canvas API. Note the pixel format that the canvas library's buffer uses.

This is probably not a real use-case, but it serves to demonstrate why @kmamal/gl was necessary. In this example we use the base gl library to fill a buffer with data. The idea is the same as in the previous example, but this time using the 3D WebGL API instead of the 2D Canvas API.

There are 2 problems:

  1. The buffer we get is upside down. The readPixels function lets you specify a rectangle of pixels to copy, but its origin is at the lower-left corner (what's called "cartesian coordinates") vs the more familiar upper-left corner ("screen coordinates"). This is only a minor annoyance since you can arrange your drawing code to accommodate it (as we did in this example).
  2. To render anything this way, you have to send the data to the GPU, then read back the result to memory, only to send it back to the GPU to be displayed. This is inelegant.

(Note also that we had to set accelerated: false for the window because gl interferes with hardware-accelerated windows.)
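If you would rather flip the buffer than rearrange your drawing code, the row-reversal is straightforward. This is an illustrative alternative, not what the example itself does:

```javascript
// Flip a tightly-packed RGBA pixel buffer vertically, converting
// between gl.readPixels' bottom-left origin and screen coordinates.
function flipVertically (buffer, width, height) {
	const stride = width * 4
	const flipped = Buffer.alloc(buffer.length)
	for (let y = 0; y < height; y++) {
		buffer.copy(flipped, (height - 1 - y) * stride, y * stride, (y + 1) * stride)
	}
	return flipped
}

// A 2x2 image: row 0 is all 1s, row 1 is all 2s.
const rows = Buffer.concat([Buffer.alloc(8, 1), Buffer.alloc(8, 2)])
const flipped = flipVertically(rows, 2, 2)
console.log(flipped[0], flipped[8]) // 2 1
```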

Same example, but this time with the @kmamal/gl library. Note that the call to render is gone. It has instead been replaced by the call to gl.swap(). Note also that, because the window is double-buffered, we need to call gl.swap() twice when the window is resized.

Uses the popular regl library to simplify some of that WebGL code.

There are many ways to load static assets into your application. In this example we use FFmpeg (through the ffmpeg-static package) to decode an image and a video into raw pixels that we can render to the screen.

Opens all connected joystick devices and monitors their state.

Opens all connected controller devices and monitors their state.

Plays a 440Hz sine wave for 3 seconds.
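Generating those samples is plain math; a sketch of the synthesis, assuming 32-bit float samples at a 48 kHz sample rate (the actual example's format and rate may differ):

```javascript
// Generate 3 seconds of a 440 Hz sine wave as 32-bit float samples,
// the kind of buffer an audio playback device would then consume.
const FREQUENCY = 440    // Hz (the note A4)
const SAMPLE_RATE = 48000 // samples per second
const DURATION = 3       // seconds

const samples = new Float32Array(SAMPLE_RATE * DURATION)
for (let i = 0; i < samples.length; i++) {
	samples[i] = Math.sin(2 * Math.PI * FREQUENCY * i / SAMPLE_RATE)
}
```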

Listens to the mic and plots the volume of recorded audio on the screen. Note how we have reduced the value of the buffered option to get audio samples more frequently (and with smaller delays) from the hardware.
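One common way to reduce a chunk of recorded samples to a single plottable volume value is the root mean square; this sketch is a generic technique, not necessarily the exact measure the example uses:

```javascript
// Root-mean-square volume of a chunk of float samples -- a simple
// single number describing how loud the chunk is.
function rmsVolume (samples) {
	let sum = 0
	for (let i = 0; i < samples.length; i++) {
		sum += samples[i] * samples[i]
	}
	return Math.sqrt(sum / samples.length)
}

console.log(rmsVolume(new Float32Array([0.5, -0.5, 0.5, -0.5]))) // 0.5
```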

Use headphones for this one or you'll end up in a feedback loop. Listens to the mic and then plays back what it recorded plus an echo effect.
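A minimal feedback-delay echo over a sample buffer might look like this; the delay length and gain are illustrative parameters, not the example's actual values:

```javascript
// Mix a delayed, attenuated copy of the signal back into itself:
// out[i] = in[i] + gain * out[i - delay]. Feeding back the *output*
// (not the input) makes each echo repeat again, decaying by `gain`.
function addEcho (samples, delaySamples, gain) {
	const out = Float32Array.from(samples)
	for (let i = delaySamples; i < out.length; i++) {
		out[i] += gain * out[i - delaySamples]
	}
	return out
}

const input = new Float32Array(8)
input[0] = 1 // a single impulse
const echoed = addEcho(input, 2, 0.5)
console.log(Array.from(echoed)) // [ 1, 0, 0.5, 0, 0.25, 0, 0.125, 0 ]
```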

This is like example #7, but this time with audio files. Again we are going to use ffmpeg-static to decode a .wav file into a raw PCM buffer we can use for playback.
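For context on what the decoded data looks like, here is a sketch of reading the canonical 44-byte WAV header. Real files can contain extra chunks (one reason the example delegates decoding to FFmpeg instead), and the `parseWavHeader` helper is illustrative:

```javascript
// Parse the canonical 44-byte RIFF/WAVE header to find the format
// of the raw PCM data that follows it.
function parseWavHeader (buffer) {
	if (buffer.toString('ascii', 0, 4) !== 'RIFF' ||
			buffer.toString('ascii', 8, 12) !== 'WAVE') {
		throw new Error('not a WAV file')
	}
	return {
		channels: buffer.readUInt16LE(22),
		sampleRate: buffer.readUInt32LE(24),
		bitsPerSample: buffer.readUInt16LE(34),
	}
}

// Build a minimal header in memory to try it out.
const header = Buffer.alloc(44)
header.write('RIFF', 0)
header.write('WAVE', 8)
header.writeUInt16LE(2, 22)      // stereo
header.writeUInt32LE(44100, 24)  // 44.1 kHz
header.writeUInt16LE(16, 34)     // 16-bit samples
console.log(parseWavHeader(header))
// { channels: 2, sampleRate: 44100, bitsPerSample: 16 }
```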

You can only use @kmamal/sdl from the main thread of your Node.js program, but you might still want to offload computationally expensive tasks to worker threads. This example shows how an audio rendering thread might be implemented.

Once SDL has been initialized (which happens automatically when you import @kmamal/sdl) you can no longer change the audio or video driver. Your only option is to restart the process, passing different values to the SDL_VIDEODRIVER and SDL_AUDIODRIVER environment variables. This example shows how you could implement an audio player that can change audio drivers without closing its main window. It does this by keeping the audio playback in a child process, separate from the main window.

Applies random changes to whatever text has been copied to the clipboard.
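One "random change" might be toggling the case of a character, as sketched below; the `mutate` helper is an illustrative name, and the index parameter is only injectable to make the behavior testable:

```javascript
// Toggle the case of one character in a string, at a random index
// by default. The result would be written back to the clipboard.
function mutate (text, index = Math.floor(Math.random() * text.length)) {
	const ch = text[index]
	const swapped = ch === ch.toUpperCase() ? ch.toLowerCase() : ch.toUpperCase()
	return text.slice(0, index) + swapped + text.slice(index + 1)
}

console.log(mutate('hello', 0)) // Hello
```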

Eventually you might want to make your application available for download somewhere. Publishing as an npm package is an option, but requires that your users are already familiar with Node.js. A more traditional option is to just include all your dependencies in a .zip file and distribute that. This example shows how you would set up a project for exactly this use-case. I'm using @kmamal/packager here, but other packagers should work as well. There's also a GitHub workflow file that builds and bundles the project for all supported platforms.

This is an alternative packaging method using the @yao-pkg/pkg package (a fork of the original vercel/pkg). One important thing to keep in mind is that ES modules are not yet supported by pkg, so we have to use the older CommonJS modules. Another important detail is that SDL is dynamically linked, so you need to distribute the library files along with the executable.
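A pkg setup of this shape is usually driven from package.json; this fragment is a sketch with assumed entry-point and target values, not the example's actual configuration (the SDL library files still have to be copied next to the resulting binaries by hand or by a build script):

```json
{
	"main": "src/main.js",
	"bin": "src/main.js",
	"pkg": {
		"targets": ["node18-linux-x64", "node18-macos-x64", "node18-win-x64"]
	}
}
```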

If we did want to use ES modules in our code, we would first have to transform our code before passing it to pkg. This example is similar to the previous one, except that we first run webpack to bundle the project into the build folder, and then run pkg on that. Note that the build folder needs its own package.json file.

// TODO: more