The above image remixes the Hydra code "Filet Mignon" from AFALFL and GLSL shader "Just another cube" from mrange. Licensed under CC BY-NC-SA 4.0 and CC0 respectively.
Patchies is a patcher for audio-visual things that runs on the web. It's made for creative coding; patch objects and code snippets together to make visualizations, soundscapes and artistic explorations 🎨
Try it out at patchies.app - it's open source and free to use 😎
Patchies lets you use the audio-visual tools and libraries that you know (and love!), together in one place. For example:
Try out the above demo which uses P5.js with Hydra to create a random walk shader.
Patchies is designed to mix textual coding and visual patching, using the best of both worlds. Instead of writing long chunks of code or patching together a huge web of small objects, Patchies encourages you to write small and compact programs and patch 'em together.
If you haven't used a patching environment before, patching is a visual way to program by connecting objects together. Each object does something, e.g. generates sound, generates visuals, or computes values. You connect the output of one object to the input of another to create a flow of data. We call the whole visual program a "patch" or "patcher".
This lets you see the program's core composition and its in-between results such as audio, video and message flows, while using tools you're already familiar with that let you do a lot with a bit of code. This is done through Message Passing, Video Chaining and Audio Chaining, which are heavily inspired by tools like Max/MSP, Pure Data, TouchDesigner and VVVV.
"What I cannot create, I do not understand. Know how to solve every problem that has been solved." - Richard Feynman
Patchies is licensed under AGPL-3.0 and builds upon many amazing open source projects. See the complete licenses and attributions for detailed information about all third-party libraries used.
Playing around with the demos first is a nice way to get inspiration and see what Patchies can do, first-hand. Go to "Help" (the button with the question mark on the bottom right), then "demos" to view the list of demos you can play with!
- Enter to create a new object. Type a name to search, e.g. hydra, glsl or p5.
- Arrow Up/Down navigates the list.
- Enter inserts the object.
- Esc closes the menu.
Use Ctrl/Cmd + B or the search icon button on the bottom right to open the Object Browser - a searchable, categorized view of all available objects in Patchies.
See all 100+ objects organized by category (Visual, Audio, Video, Control, etc.), with searchable names and brief descriptions. Drag a random object and see what you can do with it!
- Delete to delete an object.
- Ctrl + C/V to copy and paste an object.
Shift + Enter when in a code editor re-runs the code. This helps you to make changes to the code and see the results right away.
Patchies is designed to be keyboard-first so you can get in the flow. Go to "Help > Shortcuts" to see the full list of keyboard shortcuts.
Click on the bottom handle (outlet) of an object, and drag it all the way to another object's top handle (inlet).
To create shareable links, click on the "Share Link" button on the bottom right. You can also use "Share Patch" from the command palette.
Each object can send messages to other objects, and receive messages from other objects.
In this example, two slider objects send their values to an expr $1 + $2 object, which adds the numbers together. The result is sent as a message to the p5 object, which displays it.
Here are some examples to get you started:
- Create two button objects, and connect the outlet of one to the inlet of another. Clicking the first button sends a bang message ({type: 'bang'}) to the second button, which will flash.
- Create a msg object with the message 'hello world' (you can hit Enter and type m 'hello world'). Mind the quotes.
- Hit Enter again and search for the logger.js preset. Connect them together.
- Click the message box to send 'hello world' to the console object, which will log it to the virtual console.

Most messages in Patchies are objects with a type field. For example, bang is {type: 'bang'}, and start is {type: 'start'}. If you need more properties, then you can add more fields to the object, e.g. {type: 'loop', value: false}.
Typing bang in the message box sends {type: 'bang'} for convenience. If you want to send a string "bang", type in "bang" with quotes. See the message object's documentation for the message box syntax.
In JavaScript-based objects such as js, p5, hydra, canvas, strudel, dsp~, tone~, elem~ and sonic~, you can use the send() and recv() functions to send and receive messages between objects. For example:
// In the source `js` object
send({ type: "bang" });
send("Hello from Object A");
// In the target `js` object
recv((data) => {
  // called once per message: first { type: 'bang' },
  // then "Hello from Object A"
  console.log("Received message:", data);
});
This is similar to the second example above, but using JavaScript code.
The recv callback also accepts the meta argument in addition to the message data. It includes the inlet field which lets you know which inlet the message came from.
You can combine this with send(data, {to: inletIndex}) to send data to only a particular inlet, for example:
recv((data, meta) => {
  send(data, { to: meta.inlet });
});
In the above example, if the message came from inlet 2, it will be sent to outlet 2.
In js, p5, hydra, canvas, dsp~, tone~, elem~ and sonic~ objects, you can call setPortCount(inletCount, outletCount) to set the exact number of message inlets and outlets. For example, setPortCount(2, 1) ensures there are two message inlets and one message outlet.
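For instance, a minimal sketch of a gate that only forwards messages arriving on the left inlet:

```js
setPortCount(2, 1); // two message inlets, one message outlet

recv((data, meta) => {
  // meta.inlet is 0 for the left inlet and 1 for the right inlet
  if (meta.inlet === 0) send(data);
});
```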
See the Message Passing with GLSL section for how to use message passing with GLSL shaders to pass data to shaders dynamically.
You can chain visual objects together to create video effects and compositions, by using the output of a visual object as an input to another.
The above example creates a hydra object and a glsl object, each producing a pattern, and connects them to a hydra object that subtracts the two visuals using src(s0).sub(s1).out(o0).
This is very similar to shader graphs in programs like TouchDesigner, Unity, Blender, Godot and Substance Designer.
To use video chaining:
Try out the presets to get started quickly.
- Passthrough presets (pipe.hydra, pipe.gl) simply pass the visual through without any changes. This is the best starting point for chaining.
- Operation presets (diff.hydra, add.hydra, sub.hydra) perform image operations on two visual inputs, see the hydra section.
- The visual object should have at least one visual inlet and/or outlet, i.e. orange circles on the top and bottom.
- In hydra, you can call setVideoCount(ins = 1, outs = 1) to specify how many visual inlets and outlets you want, as shown in the sketch below. See the hydra section for more details.
- In glsl objects, you can dynamically create sampler2D uniforms. See the glsl section for more details.
- The visual object should have code that takes in a visual source, does something, and outputs a visual. See the above presets for examples.
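For example, a minimal hydra sketch that difference-blends two incoming video sources (assuming both visual inlets are connected):

```js
setVideoCount(2); // two visual inlets, bound to s0 and s1

src(s0).diff(src(s1)).out(o0);
```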
Connect the orange outlet of a source object to the orange inlet of a target object.
For example, connect a p5 object to an orange visual inlet of a pipe.hydra preset, and then connect the hydra object to a pipe.gl preset. You should see the output of the p5 object being passed through the hydra and glsl objects without modification.

Getting lag and slow patches? See the Rendering Pipeline section on how to avoid lag.
Similar to video chaining, you can chain many audio objects together to create audio effects and soundscapes.
Try the above example here. This is a FM synthesis demo that uses a combination of osc~ (sine oscillator), expr (math expression), gain~ (gain control), and fft~ (frequency analysis) objects to create a simple synth with frequency modulation.
For a more fun example, here's a little patch by @kijjaz that uses expr~ to create a funky beat:
If you're not sure where to start, why not build your own drum machine? Try it out! Use the W A S D keys on your keyboard to play some drums 🥁.
If you have used an audio patcher before (e.g. Pure Data, Max/MSP, FL Studio Patcher, Bitwig Studio's Grid), the idea is similar.
You can use these objects as audio sources: strudel, chuck~, ai.tts, ai.music, soundfile~, sampler~, video, dsp~, tone~, elem~, sonic~, as well as the web audio objects (e.g. osc~, sig~, mic~)
Connect your sources to dac~ to hear the audio output, otherwise you will hear nothing. Audio sources do not output audio unless connected to dac~. Use gain~ to control the volume.

You can use these objects to process audio: gain~, fft~, +~, lowpass~, highpass~, bandpass~, allpass~, notch~, lowshelf~, highshelf~, peaking~, compressor~, pan~, delay~, waveshaper~, convolver~, expr~, dsp~, tone~, elem~, sonic~.
Use the fft~ object to analyze the frequency spectrum of the audio signal. See the Audio Analysis section on how to use FFT with your visual objects.
You can use dac~ to output audio to your speakers.
Here is a non-exhaustive list of the objects available in Patchies.
These objects support video chaining and can be connected to create complex visual effects:
p5: creates a P5.js sketch

P5.js is a JavaScript library for creative coding. It provides a simple way to create graphics and animations, but you can do very complex things with it.
If you are new to P5.js, I recommend watching Patt Vira's tutorials on YouTube or on her website. They're fantastic for both beginners and experienced developers.
Read the P5.js documentation to see how P5 works.
See the P5.js tutorials and OpenProcessing for more inspiration.
Note: Patchies uses P5.js v2.x with backward compatibility libraries for v1 features. All existing P5.js v1 sketches should work without modification.
You can call these special methods in your sketch:
- noDrag() disables dragging the whole canvas. You must call this method if you want to add interactivity to your sketch, such as adding sliders or mousePressed events. You can call it in your setup() function. When noDrag() is enabled, you can still drag the "p5" title to move the whole object around.
- noOutput() hides the video output port (the orange outlet at the bottom). This is useful when creating interface widgets that don't need to be part of the video chain.
- setTitle(title) sets the title of the node. Use this to create custom, reusable widgets with meaningful names. Example: setTitle('Color Picker').
- send(message) and recv(callback), see Message Passing.

You can use any third-party packages you want in your sketch, see importing JavaScript packages from NPM. For example:
import ml5 from "npm:ml5";

let classifier;

function preload() {
  classifier = ml5.imageClassifier("MobileNet");
}
You can import shared JavaScript libraries across multiple p5 objects, see sharing JavaScript across multiple js blocks.
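Putting the special methods together, here's a minimal sketch of an interactive p5 widget (the title and sizes are arbitrary):

```js
function setup() {
  createCanvas(200, 120);
  noDrag(); // allow mouse interaction inside the sketch
  setTitle("Pulse"); // custom node title
}

function draw() {
  background(20);

  // grow the circle while the mouse is pressed
  const r = mouseIsPressed ? 60 : 40;
  circle(width / 2, height / 2, r + sin(frameCount * 0.1) * 10);
}
```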
hydra: creates a Hydra video synthesizer

- setVideoCount(ins = 1, outs = 1) creates the specified number of Hydra source ports. For example, setVideoCount(2) initializes the s0 and s1 sources with the first two visual inlets.
- Hydra has four outputs: o0, o1, o2, and o3.
- send(message) and recv(callback) work here, see Message Passing.
- setTitle(title) sets the hydra object title.

Presets:

- pipe.hydra: passes the image through without any changes
- diff.hydra, add.hydra, sub.hydra, blend.hydra, mask.hydra: perform image operations (difference, addition, subtraction, blending, masking) on two video inputs
- filet-mignon.hydra: example Hydra code "Filet Mignon" from AFALFL. Licensed under CC BY-NC-SA 4.0.

glsl: creates a GLSL fragment shader
✨ Try this patch out in the app. Shader is from @dtinth's talk, the power of signed distance functions!
- Connect video sources (p5, hydra, glsl, swgl, bchrn, ai.img or canvas) to the GLSL object via sampler2D video inlets.
- If you define a uniform such as uniform float iMix;, it will create a float inlet for you to send values to.
- If you define a sampler2D such as uniform sampler2D iChannel0;, it will create an orange video inlet for you to connect video sources to.
- Shadertoy shaders work in glsl, as they accept the same uniforms.
- If you define the iMouse uniform (vec4), mouse interaction is automatically enabled:
  - iMouse.xy: current mouse position or last click position
  - iMouse.zw: drag start position (positive when mouse down, negative when mouse up)
    - iMouse.zw > 0 contains the ongoing drag start position
    - iMouse.zw < 0: use abs() to get the last drag start position
  - When iMouse is detected in your code, the node becomes interactive (drag is disabled to allow mouse input)

Presets:

- red.gl: solid red color
- pipe.gl: passes the image through without any changes
- mix.gl: mixes two video inputs
- overlay.gl: puts the second video input on top of the first one
- fft-freq.gl: visualizes the frequency spectrum from audio input
- fft-waveform.gl: visualizes the audio waveform from audio input
- switcher.gl: switches between six video inputs by sending an int message of 0 - 5.

You can send messages into the GLSL uniforms to set the uniform values in real-time. First, create a GLSL uniform using the standard GLSL syntax, which adds two dynamic inlets to the GLSL object.
uniform float iMix;
uniform vec2 iFoo;
You can now send a message of value 0.5 to iMix, and send [0.0, 0.0] to iFoo. When you send messages to these inlets, it will set the internal GLSL uniform values for the object. The type of the message must match the type of the uniform, otherwise the message will not be sent.
If you want to set a default uniform value for when the patch gets loaded, use the loadbang object connected to a msg object or a slider. loadbang sends a bang message when the patch is loaded, which you can use to trigger a msg object or a slider to send the default value to the GLSL uniform inlet.
Supported uniform types are bool (boolean), int (number), float (floating point number), vec2, vec3, and vec4 (arrays of 2, 3, or 4 numbers).
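For example, a sketch of a crossfader written in Shadertoy-style GLSL (which the glsl object accepts, so built-ins like iResolution are assumed to be available); the uniform names are arbitrary:

```glsl
uniform sampler2D iChannel0; // orange video inlet for source A
uniform sampler2D iChannel1; // orange video inlet for source B
uniform float iMix;          // float inlet: send 0.0 - 1.0 to crossfade

void mainImage(out vec4 fragColor, in vec2 fragCoord) {
  vec2 uv = fragCoord.xy / iResolution.xy;

  // blend the two video inputs by iMix
  fragColor = mix(texture(iChannel0, uv), texture(iChannel1, uv), iMix);
}
```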
swgl: creates a SwissGL shader

SwissGL is a wrapper for WebGL2 that lets you create shaders in very few lines of code. Here is how to make a simple animated mesh:
function render({ t }) {
  glsl({
    t,
    Mesh: [10, 10],
    VP: `XY*0.8+sin(t+XY.yx*2.0)*0.2,0,1`,
    FP: `UV,0.5,1`,
  });
}
See the SwissGL examples for inspiration on how to use SwissGL.
canvas: creates a JavaScript canvas (offscreen)

You can use the HTML5 Canvas API to create custom graphics and animations. The rendering context is exposed as ctx in the JavaScript code, so you can use methods like ctx.fill() to draw on the canvas.
You can call these special methods in your canvas code:
- noDrag() disables dragging the node. This allows you to add mouse or touch interactivity to your canvas without accidentally moving the node.
- noOutput() hides the video output port. Useful when creating interface widgets or tools that don't need to be part of the video processing chain.
- setTitle(title) sets the title of the node. Create custom, reusable widgets with meaningful names like setTitle('Spectrogram').
- send(message) and recv(callback), see Message Passing.
- fft() for audio analysis, see Audio Analysis.

This runs on the rendering pipeline using OffscreenCanvas on web workers. This means:

- It chains with other visual objects (glsl, hydra, etc.) without lag. You can draw animations using the canvas API and output them at 60fps.
- There is no access to document or window.
- fft~ inputs have very high delay due to worker message passing.

canvas.dom: creates a JavaScript canvas (main thread)
✨ Try this patch out in the app!
Same as canvas but runs directly on the main thread instead of on the rendering pipeline thread, and comes with some additional features:
- mouse object with properties x, y, down, buttons to get the current mouse position and state.
- onKeyDown(callback) and onKeyUp(callback) to register keyboard event handlers. Events are trapped and won't leak to xyflow (e.g., pressing Delete won't delete the node).
- Full access to browser APIs (document and window).
- setCanvasSize(width, height) to dynamically resize the canvas resolution (e.g., setCanvasSize(500, 500)).
- Same special methods as canvas: noDrag(), noOutput(), setTitle(title), send(message), recv(callback), fft() can all be used in canvas.dom.

When to use canvas.dom instead of canvas:
- Use mouse.x, mouse.y, mouse.down for interactive sketches.
- Use onKeyDown() and onKeyUp() for keyboard-controlled widgets.
- Use document, window and other browser APIs when needed.

Try out these fun and useful presets for inspiration on widgets and interactive controls:
- particle.canvas adds a particle canvas that reacts to your mouse inputs.
- xy-pad.canvas adds an X-Y pad that you can send [x, y] coordinates into to set the position of the crosshair. It also sends [x, y] coordinates to the message outlet when you drag on it.
- rgba.picker and hsla.picker let you pick colors and send them as outputs: [r, g, b, a] and [h, s, l, a] respectively.
- keyboard.example demonstrates keyboard event handling with onKeyDown() and onKeyUp() callbacks.
- The fft.canvas preset takes in analysis output from the fft~ object and draws an FFT plot, similar to fft.p5 but even faster.

Performance trade-offs: canvas.dom runs on the main thread, so it has instant input and FFT reactivity, but heavy drawing can slow down the rest of the patch; canvas runs on the rendering pipeline, which chains with other visuals without lag but adds delay to fft~ input.
bchrn: render the Winamp Milkdrop visualizer (Butterchurn)

- Connect it to other visual objects (hydra and glsl) to derive more visual effects.

img: display images

- string: load the image from the given url.

video: display videos

- bang: restart the video
- string: load the video from the given url.
- play: play the video
- pause: pause the video
- {type: 'loop', value: false}: do not loop the video

iframe: embed web content

- string or {type: 'load', url: 'https://...'}: loads the webpage from the given URL.

bg.out: background output

js: A JavaScript code block

- console.log() to log messages to the virtual console.
- setInterval(callback, ms) to run a callback every ms milliseconds. This is a special version of setInterval that automatically cleans up the interval on unmount. Do not use window.setInterval from the window scope as that will not clean up.
- requestAnimationFrame(callback) to run a callback on the next animation frame. This is a special version of requestAnimationFrame that automatically cleans up on unmount. Do not use window.requestAnimationFrame from the window scope as that will not clean up.
- send() and recv() to send and receive messages between objects. This also works in other JS-based objects. See the Message Passing section above.
- setRunOnMount(true) to run the code automatically when the object is created. By default, the code only runs when you hit the "Play" button.
- setPortCount(inletCount, outletCount) to set the number of message inlets and outlets you want. By default, there is 1 inlet and 1 outlet.
- meta.inlet in the recv callback to distinguish which inlet the message came from.
- send(data, { to: inletIndex }) to send data to a specific inlet of another object.
- await delay(ms) to pause the code for ms milliseconds. For example, await delay(1000) pauses the code for 1 second.

Importing JavaScript packages from NPM

This feature is only available in js, p5, sonic~ and elem~ objects, for now.
You can import any JavaScript package by using the npm: prefix in the import statement.
Note that import * as X is not yet supported.

import Matter from "npm:matter-js";
import { uniq } from "npm:lodash-es";
console.log(Matter); // Matter.js library
console.log(uniq([1, 1, 2, 2, 3, 3])); // [1, 2, 3]
Alternatively, write the dynamic import yourself:
const { uniq } = await import("https://esm.run/lodash-es");
console.log(uniq([1, 1, 2, 2, 3, 3])); // [1, 2, 3]
// or use a shorthand `await esm()` function that does the same thing
const { uniq } = await esm("lodash-es");
console.log(uniq([1, 1, 2, 2, 3, 3])); // [1, 2, 3]
Sharing JavaScript across multiple js blocks

This feature is only available in js, p5, sonic~ and elem~ objects, for now.
You can share JavaScript code across multiple js blocks by using the // @lib <module-name> comment at the top of your code, and exporting at least one constant, function, class, or module.
- Adding // @lib foobar at the top of a code snippet with an exported constant, function, class, or module will register the module as foobar.
- Use the export syntax in your library js object, e.g. export const rand = () => Math.random(). This works for everything: classes, functions, modules.
- Use import { rand } from 'foobar' from other objects that support this feature.

See the following example:
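A minimal sketch, reusing the foobar and rand names from the bullets above. In the library js object:

```js
// @lib foobar
export const rand = () => Math.random();
```

And in any other js object that supports this feature:

```js
import { rand } from "foobar";

console.log(rand()); // e.g. 0.42
```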
expr: mathematical expression evaluator
✨ Try this patch out in the app!
Evaluate mathematical expressions and formulas.
Use the $1 to $9 variables to create inlets dynamically. For example, $1 + $2 creates two inlets for addition, and sends a message with the result each time inlet one or two is updated.
This uses the expr-eval library from silentmatt under the hood for evaluating mathematical expressions.
There are so many mathematical functions and operators you can use here! See the expression syntax section.
Very helpful for control signals and parameter mapping.
You can also create variables, and expressions can span multiple lines. Make sure to use ; to separate statements. For example:
a = $1 * 2;
b = $2 + 3;
a + b;
This creates two inlets, and sends the result of (inlet1 * 2) + (inlet2 + 3) each time inlet one or two is updated.
You can also define functions to make the code easier to read, e.g. add(a, b) = a + b.
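For example, a small sketch that maps inlet one (assumed to range from 0 to 1) onto a frequency range with a helper function:

```
scale(x, lo, hi) = lo + x * (hi - lo);
scale($1, 220, 880);
```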
uxn: Uxn virtual machine
Uxn is a virtual machine for running small programs written in Uxntal, an assembly language for the Uxn stack machine. It conforms to the Varvara device specifications.
Run classic Uxn programs like Orca and Left. Run games like Oquonie and Donsol.
Write and assemble your own Uxntal programs directly in the editor.
Supports video chaining - connect the video outlet to other visual objects (e.g. hydra and glsl) to process the Uxn screen output.
Console output is automatically sent as messages through the message outlet, allowing you to process program output with other objects.
Load ROM files by dropping a .rom file, or use the Load ROM button (folder icon)
"Edit Code" button (code icon) opens the Uxntal assembly code editor.
Shift + Enter or click "Assemble & Load" to compile and run your code.
The "Console" button (terminal icon) shows program output.
"Pause" button pauses and resumes program execution.
The canvas captures keyboard and mouse input for Uxn programs. Click on the canvas to focus it.
Messages
- string (URL): Load ROM from URL
- Uint8Array: Load ROM from raw binary data
- File: Load ROM from file object
- {type: 'load', url: string}: Load ROM from URL

See the Uxn documentation and Uxntal reference to learn how to write Uxn programs.
Check out 100r.co for Uxn design principles.
See Awesome Uxn for cool resources and projects from the Uxn community.
asm: virtual stack machine assembly interpreter

asm lets you write a simple flavor of stack machine assembly to construct concise programs. This was heavily inspired by Zachtronics games like TIS-100 and Shenzhen I/O, where you write small assembly programs to interact with the world and solve problems.
The stack machine module is quite extensive, with over 50 assembly instructions and a rich set of features. There are lots of quality-of-life tools unique to Patchies like color-coded memory region visualizer, line-by-line instruction highlighting, and external memory cells (asm.mem).
See the documentation for the assembly module for the full instruction set and syntax, what the asm object and its friends can do, and how to use it.
Try out my example assembly patch to get a feel for how it works.
python: creates a Python code environment

button: a simple button

- Sends a bang message when clicked.
- any: flashes the button when it receives any message, and sends the bang message out.

msg: message object

- Hit Enter and type m <message> to create a msg object with the given message. For example, m start creates a msg object that sends start when clicked.
- Words (e.g. hello or start) are sent as objects with a type field: i.e. {type: 'hello'} or {type: 'start'}
- Quoted strings (e.g. "hello") are sent as JS strings: "hello"
- Numbers (e.g. 100) are sent as numbers: 100
- Objects (e.g. {foo: 'bar'}) are sent as-is: {foo: 'bar'}
- bang sends the {type: 'bang'} object - this is what button does when you click it
- start sends the {type: 'start'} object
- 'hello world' or "hello world" sends the string 'hello world'
- 100 sends the number 100
- {x: 1, y: 2} sends the object {x: 1, y: 2}
- bang (inlet): outputs the message

slider: numerical value slider

- Hit Enter and type in these short commands to create sliders with specific ranges:
  - slider <min> <max>: integer slider control. Example: slider 0 100
  - fslider <min> <max>: floating-point slider control. Example: fslider 0.0 1.0. fslider defaults to the -1.0 to 1.0 range if no arguments are given.
  - vslider <min> <max>: vertical integer slider control. Example: vslider -50 50
  - vfslider <min> <max>: vertical floating-point slider control. Example: vfslider -1.0 1.0. vfslider defaults to the -1.0 to 1.0 range if no arguments are given.
- bang: outputs the current slider value
- number: sets the slider to the given number within the range and outputs the value

textbox: multi-line text input

- bang: outputs the current text
- string: sets the text to the given string

orca: Orca livecoding sequencer
- Connect to midi.out for MIDI output. You can also try the poly-synth-midi.tone preset, which uses a tone~ node to play back MIDI messages.
- A-Z: Mathematical, logical, and movement operations
- :: MIDI note output (channel, octave, note, velocity, length)
- %: Monophonic MIDI (only one note per channel)
- !: MIDI Control Change
- U: Euclidean rhythm generator (very useful for drum patterns!)
- V: Variables for storing values
- R: Random values
- *: Bang operator to trigger adjacent operators
- #: Comment (halts line)
- ctrl+shift+r resets the frame
- ctrl+f advances one frame (frame-by-frame); you can use this even while paused.
- > increases tempo, < decreases tempo

strudel: Strudel music environment

- Ctrl/Cmd + Enter to re-evaluate the code.
- Connect to a dac~ object to hear the audio output.
- recv only works with a few functions, e.g. setcpm right now. Try recv(setcpm) to automate the cpm value.
- bang or run: evaluates the code and starts playback
- {type: 'set', code: '...'}: sets the code in the editor
- You can create more than one strudel object, but only one will be playing at a time. Use bang or run messages to switch playback between multiple Strudel objects to orchestrate them.

chuck~: creates a ChucK audio programming environment

- Ctrl/Cmd + Enter: replaces the most recent shred.
- Ctrl/Cmd + \: adds a new shred to the shreds list.
- Ctrl/Cmd + Backspace: removes the most recent shred.

object: textual object system

- Hit Enter, and type in the name of the object you want to create.
- Hover over an object's argument (e.g. the gain~ object's gain value, such as 1.0) to see the tooltip.

These objects run on control rate, which means they process messages (control signals), but not audio signals.
- mtof: Convert MIDI note numbers to frequencies
- loadbang: Send bang on patch load
- metro: Metronome for regular timing
- delay: Message delay (not audio)
- adsr: ADSR envelope generator

Most of these objects are easy to re-implement yourself with the js object as they simply emit messages, but they are provided for your convenience! See the sketch below.
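For instance, a minimal re-implementation of metro in a js object (the 500ms interval is arbitrary):

```js
setRunOnMount(true); // start ticking as soon as the object is created

// emit a bang every 500ms; this special setInterval cleans up on unmount
setInterval(() => send({ type: "bang" }), 500);
```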
These objects run on audio rate, which means they process audio signals in real-time. They are represented with a ~ suffix in their names.
Audio Processing:
- gain~: Amplifies audio signals with gain control
- osc~: Oscillator for generating audio waveforms (sine, square, sawtooth, triangle)
- lowpass~, highpass~, bandpass~, allpass~, notch~: Various audio filters
- lowshelf~, highshelf~, peaking~: EQ filters for frequency shaping
- compressor~: Dynamic range compression for audio
- pan~: Stereo positioning control
- delay~: Audio delay line with configurable delay time
- +~: Audio signal addition
- sig~: Generate constant audio signals
- waveshaper~: Distortion and waveshaping effects
- convolver~: Convolution reverb using impulse responses
  - Connect a soundfile~ object to the convolver~ object's message inlet. Then, upload a sound file or send a url as an input message.
  - Send the read message to the soundfile~ object to read the impulse response into the convolver~ object.
- split~: Split multi-channel audio into separate mono channels.
- merge~: Merge multiple mono channels into a single multi-channel audio.
- fft~: FFT analysis for frequency domain processing. See the audio analysis section for how to read the FFT data.
- meter~: Visual audio level meter that shows the loudness of the audio source.

Sound Input and Output:
Try out the drum sequencer: use P to play and K to stop!
- soundfile~: Load and play audio files with transport controls
- sampler~: Sample playback with triggering capabilities
- mic~: Capture audio from microphone input
- dac~: Send audio to speakers

Custom waveforms with the osc~ oscillator
✨ Try this patch out in the app!
The osc~ oscillator object supports custom waveforms using PeriodicWave by sending [real: Float32Array, imaginary: Float32Array] to the type inlet. Both arrays must be Float32Array or TypedArray of the same length (minimum 2).
To try it out:

- Create a js object.
- Connect its message outlet to osc~'s type inlet (the second message inlet from the left).
- Hit 'Run' on the js object to send the arrays to the osc~ object.
- The type property on the object should now say "custom".

setRunOnMount(true);
const real = new Float32Array(64);
const imag = new Float32Array(64);

for (let n = 1; n < 64; n++) {
  real[n] = (2 / (n * Math.PI)) * Math.sin(n * Math.PI * 0.5);
}

send([real, imag]);
waveshaper~
✨ Try this patch out in the app!
Similar to the periodic wave example above, you can also send a wave shaping distortion curve to the curve inlet of the waveshaper~. It expects a single Float32Array describing the distortion curve.
To try it out:

- Create a js object.
- Connect its message outlet to waveshaper~'s curve inlet (the second message inlet from the left).
- Hit 'Run' on the js object to send the array to the waveshaper~ object.
- The curve property on the object should now say "curve".

Here's an example distortion curve:
setRunOnMount(true);

const k = 50;
const s = 44100;
const curve = new Float32Array(s);
const deg = Math.PI / 180;

for (let i = 0; i < s; i++) {
  const x = (i * 2) / s - 1;
  curve[i] = ((3 + k) * x * 20 * deg) / (Math.PI + k * Math.abs(x));
}

send(curve);
You can also build your own oscillators and effects with the dsp~, expr~, tone~, elem~ or sonic~ objects. In fact, the default dsp~, tone~ and elem~ objects are simple sine wave oscillators that work similar to osc~.

expr~: audio-rate mathematical expression evaluator

- Same as expr but runs at audio rate for audio signal processing.
- Hit shift+enter to re-run the expression.
- Connecting a new input to the expr~ object will also re-run the expression.
- Uses the same engine as expr, so the same mathematical expression will work in both expr and expr~.
- Use sig~ if you just need a constant signal.

Variables:

- s: current sample value, a float between -1 and 1
- i: current sample index in buffer, an integer starting from 0
- t: current time in seconds, a float starting from 0
- channel: current channel index, usually 0 or 1 for stereo
- bufferSize: the size of the audio buffer, usually 128
- samples: an array of samples from the current channel
- input: first input audio signal (for all connected channels), a float between -1 and 1
- inputs: every connected input audio signal
- $1 to $9: dynamic control inlets

Examples:

- sin(t * 440 * PI * 2) creates a sine wave oscillator at 440Hz
- random() creates white noise
- s outputs the input audio signal as-is
- s * $1 applies gain control to the input audio signal
- s ^ 2 squares the input audio signal for a distortion effect
- Use $1 to $9 to create dynamic control inlets. For example, $1 * 440 creates one message inlet that controls the frequency of a sine wave oscillator. Try connecting a slider 1 880 object to control the frequency.

Warning: add a compressor~ object with appropriate limiter-esque settings after expr~ to avoid loud audio spikes that can and will damage your hearing and speakers. You have been warned!

dsp~: dynamic JavaScript DSP processor

This is similar to expr~, but it takes in a single process JavaScript function that processes the audio. It essentially wraps an AudioWorkletProcessor. The worklet is always kept alive until the node is deleted.
Try out some patches that use dsp~ to get an idea of its power:
Some presets are also built on top of dsp~:
- snapshot~: takes a snapshot of the incoming audio's first sample and outputs it.

Here's how to make white noise:
function process(inputs, outputs) {
  outputs[0].forEach((channel) => {
    for (let i = 0; i < channel.length; i++) {
      // random values in [-1, 1) for white noise
      channel[i] = Math.random() * 2 - 1;
    }
  });
}
Here's how to make a sine wave oscillator at 440Hz:
function process(inputs, outputs) {
  outputs[0].forEach((channel) => {
    for (let i = 0; i < channel.length; i++) {
      let t = (currentFrame + i) / sampleRate;
      channel[i] = Math.sin(t * 440 * Math.PI * 2);
    }
  });
}
You can use the counter variable that increments every time process is called. There are also a couple more variables from the worklet global that you can use.
const process = (inputs, outputs) => {
  counter; // increments every time process is called
  sampleRate; // sample rate (e.g. 48000)
  currentFrame; // current frame number (e.g. 7179264)
  currentTime; // current time in seconds (e.g. 149.584)
};
You can use $1, $2, ... $9 to dynamically create value inlets. Message sent to the value inlets will be set within the DSP. The number of inlets and the size of the dsp~ object will adjust automatically.
const process = (inputs, outputs) => {
  outputs[0].forEach((channel) => {
    for (let i = 0; i < channel.length; i++) {
      channel[i] = Math.random() * $1 - $2;
    }
  });
};
In addition to the value inlets, we also have messaging capabilities:
- setPortCount(inletCount, outletCount) to set the number of message inlets and outlets.
- setAudioPortCount(inletCount, outletCount) to set the number of audio inlets and outlets.
- setTitle(title) to set the title of the object. Defaults to dsp~.
- setKeepAlive(enabled) to control whether the worklet stays active when not connected.
  - setKeepAlive(true) keeps the worklet processing even when no audio is flowing through it.
  - setKeepAlive(false) lets the worklet stop processing when it's not connected to other audio nodes, which can improve performance.
  - See the snapshot~ and bang~ presets for examples of when to use setKeepAlive.
- send and recv to communicate with the outside world. See Message Passing.

For example:

setPortCount(2);
recv((msg, meta) => {
  if (meta.inlet === 0) {
    // do something
  }
});
You can even use both value inlets and message inlets together in the DSP.
let k = 0;

recv((m) => {
  // you can use value inlets `$1` ... `$9` anywhere in the JavaScript DSP code.
  k = m + $1 + $2;
});

const process = (inputs, outputs) => {
  outputs[0].forEach((channel) => {
    for (let i = 0; i < channel.length; i++) {
      channel[i] = Math.random() * k;
    }
  });
};
tone~: Tone.js synthesis and processing

The tone~ object allows you to use Tone.js to create interactive music. Tone.js is a powerful Web Audio framework that provides high-level abstractions for creating synthesizers, effects, and complex audio routing.
By default, tone~ comes with sample code for a sine oscillator.
The Tone.js context gives you these variables:
- Tone: the Tone.js library
- inputNode: GainNode from the Web Audio API for receiving audio input from other nodes
- outputNode: GainNode from the Web Audio API for sending audio output to connected nodes

In addition to the audio processing capabilities, tone~ also supports messaging:
- setPortCount(inletCount, outletCount) to set the number of message inlets and outlets.
- setTitle(title) to set the title of the object. Defaults to tone~.
- send and recv to communicate with the outside world. See Message Passing.

Try out these presets:
- poly-synth.tone: Polyphonic synthesizer that plays chord sequences
- lowpass.tone: low pass filter
- pipe.tone: directly pipe input to output

Code example:
// Process incoming audio through a filter
const filter = new Tone.Filter(1000, "lowpass");
inputNode.connect(filter.input.input);
filter.connect(outputNode);

// Handle incoming messages to change frequency
recv((m) => {
  filter.frequency.value = m;
});

// Return cleanup function to properly dispose Tone.js objects
return {
  cleanup: () => filter.dispose(),
};
sonic~: SuperCollider synthesis engine

The sonic~ object integrates SuperSonic, which brings SuperCollider's powerful scsynth audio engine to the browser via AudioWorklet.
By default, sonic~ loads and triggers the Prophet synth on message.
The sonic~ context provides:
- sonic: SuperSonic instance for synthesis control
- SuperSonic: class for static methods (e.g., SuperSonic.osc.encode())
- sonicNode: audio node wrapper (sonic.node) for Web Audio connections
- on(event, callback): subscribe to SuperSonic events
- inputNode: audio input GainNode
- outputNode: audio output GainNode

Available events: 'ready', 'loading:start', 'loading:complete', 'error', 'message'
In addition to the synthesis capabilities, sonic~ also supports messaging:
- setPortCount(inletCount, outletCount) to set the number of message inlets and outlets.
- setTitle(title) to set the title of the object. Defaults to sonic~.
- send and recv to communicate with the outside world. See Message Passing.

Load and play a synth:
setPortCount(1);

await sonic.loadSynthDef("sonic-pi-prophet");

recv((note) => {
  sonic.send("/s_new", "sonic-pi-prophet", -1, 0, 0, "note", note, "release", 2);
});
Load and play samples:
await sonic.loadSynthDef("sonic-pi-basic_stereo_player");
await sonic.loadSample(0, "loop_amen.flac");
await sonic.sync();
sonic.send("/s_new", "sonic-pi-basic_stereo_player", -1, 0, 0, "buf", 0, "rate", 1);
See the SuperSonic documentation and scsynth OSC reference for more details.
elem~: Elementary Audio synthesis and processing

The elem~ object lets you use the Elementary Audio library for declarative digital audio signal processing.
By default, elem~ comes with sample code for a simple sine wave oscillator.
The elem~ context gives you these variables:
- el: the Elementary Audio core library
- core: the WebRenderer instance for rendering audio graphs
- node: the AudioWorkletNode for connecting to the Web Audio graph
- inputNode: GainNode from the Web Audio API for receiving audio input from other nodes
- outputNode: GainNode from the Web Audio API for sending audio output to connected nodes

In addition to the audio processing capabilities, elem~ also supports messaging:
- setPortCount(inletCount, outletCount) to set the number of message inlets and outlets.
- setTitle(title) to set the title of the object. Defaults to elem~.
- send and recv to communicate with the outside world. See Message Passing.

Here's how to create a simple phasor:
setPortCount(1);

let [rate, setRate] = core.createRef("const", { value: 440 }, []);

recv((freq) => setRate({ value: freq }));

// also try el.train and el.cycle in place of el.phasor
// first arg is left channel, second arg is right channel
core.render(el.phasor(rate), el.phasor(rate));
csound~: Sound and music computing

The csound~ object allows you to use Csound for audio synthesis and processing. Csound is a powerful, domain-specific language for audio programming with decades of development.
Use only one csound~ object per patch, for now. Creating multiple csound~ objects will break the patch's audio playback. This is a known bug.

You can send messages to control Csound instruments:
- bang: Resume or re-eval Csound code
- play: Resume playback
- pause: Pause playback
- stop: Stop playback
- reset: Reset the Csound instance
- {type: 'setChannel', channel: 'name', value: number}: Set a control channel value
- {type: 'setChannel', channel: 'name', value: 'string'}: Set a string channel value
- {type: 'setOptions', value: '-flagname'}: Set Csound options and reset
- {type: 'noteOn', note: 60, velocity: 127}: Send MIDI note on
- {type: 'noteOff', note: 60, velocity: 0}: Send MIDI note off
- {type: 'readScore', value: 'i1 0 1'}: Send score statements to Csound
- {type: 'eval', code: 'instr 1 ... endin'}: Evaluate Csound code
- number: Set the control channel for the inlet index
- string: Send input messages (or set an option if it starts with -)

midi.in: MIDI input

midi.out: MIDI output

netsend: network message sender

- Type netsend <channelname> to create a netsend object that sends messages to the specified channel name. Example: netsend drywet

netrecv: network message receiver

- Type netrecv <channelname> to create a netrecv object that receives messages from the specified channel name. Example: netrecv drywet

[!CAUTION] API keys are currently stored in localStorage as gemini-api-key for Gemini (for ai.txt, ai.img and ai.music), and celestiai-api-key for ai.tts. This is currently super insecure.
Be very cautious: Patchies allows arbitrary code execution right now with no sandboxing whatsoever, so if you load someone's patch with malicious code, they can steal your API keys. I recommend removing API keys after use, before loading other people's patches.
Please, do not use your main API keys here! Create separate API keys with limited quota for use in Patchies. I plan to work on a backend-based way to store API keys in the future.
In addition, these objects can be hidden from insert object and the object list via "CMD + K > Toggle AI Features" if you prefer not to use AI objects in your patches.
With that in mind, use "CMD + K > Set Gemini API Key" to set your Gemini API key for ai.txt, ai.img and ai.music. You can get the API key from Google Cloud Console.
ai.txt: AI text generation

ai.img: AI image generation

ai.music: AI music generation

ai.tts: AI text-to-speech

markdown: Markdown renderer
✨ Try this patch out in the app!
The fft~ audio object gives you an array of frequency bins that you can use to create visualizations in your patch.
First, create an fft~ object and set the bin size (e.g. fft~ 1024). Then, connect the purple "analyzer" outlet to the visual object's inlet.
Supported objects are glsl, hydra, p5, canvas, canvas.dom and js.
- For glsl: create a sampler2D GLSL uniform inlet and connect the purple "analyzer" outlet of fft~ to it.
- Hit Enter to insert an object, and try out the fft-freq.gl and fft-waveform.gl presets for working code samples.
- To get the waveform, name the uniform uniform sampler2D waveTexture;. Using other uniform names will give you frequency analysis.

You can call the fft() function to get the audio analysis data in the supported JavaScript-based objects: hydra, p5, canvas, canvas.dom and js.
IMPORTANT: Patchies does NOT use standard audio reactivity APIs in Hydra and P5.js. Instead, you must use the fft() function to get the audio analysis data.
fft() defaults to waveform (time-domain analysis). You can also call fft({type: 'wave'}) to be explicit.
fft({type: 'freq'}) gives you frequency spectrum analysis.
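In code:

```js
const wave = fft();                 // waveform, same as fft({ type: 'wave' })
const freq = fft({ type: 'freq' }); // frequency spectrum
```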
Try out the fft.hydra preset for Hydra.
Try out the fft.p5, fft-sm.p5 and rms.p5 presets for P5.js.
Try out the fft.canvas preset for HTML5 canvas with instant audio reactivity.
- The fft.canvas preset uses canvas.dom (main thread), giving you the same tight audio reactivity as p5. Use canvas.dom or p5 for best results.
- The canvas node has a slight FFT delay, but won't slow down your patch when chained with other visual objects.

The fft() function returns an FFTAnalysis class instance which contains helpful properties and methods:
- fft().a: the raw analysis array (un-normalized values from 0 - 255)
- fft().getEnergy('bass') / 255: energy of a named frequency range, normalized to 0 - 1. You can use these frequency ranges: bass, lowMid, mid, highMid, treble.
- fft().getEnergy(40, 200) / 255: energy between two frequencies in Hz
- fft().rms: RMS loudness as a float between 0 and 1
- fft().avg: average energy
- fft().centroid: spectral centroid

Where to call fft():
p5: call in your draw function.
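A minimal sketch that pulses a circle with the bass energy:

```js
function draw() {
  background(0);

  // bass energy, normalized to 0 - 1
  const bass = fft().getEnergy("bass") / 255;
  circle(width / 2, height / 2, bass * 200);
}
```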
canvas and canvas.dom: call in your draw function that is gated by requestAnimationFrame
js: call in your setInterval or requestAnimationFrame callback
setInterval(() => {
  let a = fft().a;
}, 1000);
hydra: call inside arrow functions for dynamic parameters
let a = () => fft().getEnergy("bass") / 255;
src(s0).repeat(5, 3, a, () => a() * 2);
Q: Why not just use standard Hydra and P5.js audio reactivity APIs like a.fft[0] and p5.FFT()?
The p5-sound and a.fft APIs only let you access microphones and audio files. In contrast, Patchies lets you run FFT on any dynamic audio source 😊

Converting Hydra's audio reactivity API to Patchies:
Replace a.fft[0] with fft().a[0] (un-normalized int8 values from 0 - 255)
Replace a.fft[0] with fft().f[0] (normalized float values from 0 - 1)
Instead of a.setBins(32), change the fft bins in the fft~ object instead e.g. fft~ 32
Instead of a.show(), use the below presets to visualize fft bins.
Using the value to control a variable:
- osc(10, 0, () => a.fft[0]*4)
+ osc(10, 0, () => fft().f[0]*4)
.out()
Converting P5's p5.sound API into Patchies:
- Replace p5.Amplitude with fft().rms (rms as a float between 0 - 1)
- Replace p5.FFT with fft()
- Replace fft.analyze() with nothing - fft() is always up to date.
- Replace fft.waveform() with fft({ format: 'float' }).a, as P5's waveform returns a value between -1 and 1. Using format: 'float' gives you a Float32Array.
- Replace fft.getEnergy('bass') with fft().getEnergy('bass') / 255 (normalize to 0 - 1)
- Replace fft.getCentroid() with fft().centroid

[!CAUTION] API keys are currently stored in localStorage as gemini-api-key for Gemini. In addition, this feature is experimental and unstable, and it has a high chance of corrupting and destroying your code and patches without any way to restore them. Back up your node and patch before trying this out!
Press Ctrl/Cmd + I to open the AI object insert/edit prompt. Describe what you want to create in natural language, and the AI will generate the appropriate object with code for you.
When the AI object insert prompt is open, press Ctrl/Cmd+I again to switch between Single Insert and Multi Insert mode.
This feature uses Google Gemini AI to understand your prompt and generate the right object configuration. Make sure to set your Gemini API key in the command palette (Cmd/Ctrl + K → "Set Gemini API Key").
If you dislike AI features (e.g. text generation, image generation, speech synthesis and music generation), you can hide them by opening the command palette with CMD + K and searching for "Toggle AI Features".
This will hide all AI-related objects and features, such as ai.txt, ai.img, ai.tts and ai.music. It also hides the experimental Cmd/Ctrl + I AI object insertion shortcut.
[!TIP] Use objects that run on the rendering pipeline, e.g. hydra, glsl, swgl, canvas and img, to reduce lag.
Behind the scenes, the video chaining feature constructs a rendering pipeline based on framebuffer objects (FBOs), which let visual objects copy data to one another at the framebuffer level, with no back-and-forth CPU-GPU transfers needed. The pipeline makes use of Web Workers, WebGL2, Regl and OffscreenCanvas (for canvas).
It creates a shader graph that streams the low-resolution preview onto the preview panel, while the full-resolution rendering happens in the frame buffer objects. This is much more efficient than rendering everything on the main thread or using HTML5 canvases.
Objects on the rendering pipeline (web worker thread):
hydra, glsl, swgl, canvas and img run entirely on the web worker thread and are very high-performance.

Objects on the main thread:
p5, canvas.dom and bchrn run on the main thread.