The above image remixes the Hydra code "Filet Mignon" from AFALFL and GLSL shader "Just another cube" from mrange. Licensed under CC BY-NC-SA 4.0 and CC0 respectively.
Patchies is a patcher for audio, visual and computational things that runs on the web. It's made for creative coding; patch objects and code snippets together to explore visualizations, soundscapes and computations 🎨
Try it out at patchies.app - it's open source and free to use 😎
Patchies lets you use the audio, visual and computational tools and libraries that you know (and love!), together in one place. For example:
Try out the above demo which uses P5.js with Hydra to create a random walk shader.
Patchies is designed to mix textual coding and visual patching, using the best of both worlds. Instead of writing long chunks of code or patching together a huge web of small objects, Patchies encourages you to write small and compact programs and patch 'em together.
If you haven't used a patching environment before, patching is a visual way to program by connecting objects together. Each object does one thing, e.g. generating sound, rendering visuals, or computing values. You connect the output of one object to the input of another to create a flow of data.
This lets you see the program's core composition and its in-between results, such as audio, video and message flows, while using tools you're already familiar with that let you do a lot with a little code. This is done through Message Passing, Video Chaining and Audio Chaining, which are heavily inspired by tools like Max, Pd, TouchDesigner and VVVV.
"What I cannot create, I do not understand. Know how to solve every problem that has been solved." - Richard Feynman
Playing around with demos first is a nice way to get inspiration and see what Patchies can do, first-hand.
Help / Getting Started.
- Hit `Enter` to create a new object.
- Type the object name, e.g. `hydra` or `glsl` or `p5`.
- `Arrow Up/Down` navigates the list.
- `Enter` inserts the object.
- `Esc` closes the menu.
Use Ctrl/Cmd + O or the search icon button on the bottom right to open the Object Browser - a searchable, categorized view of all available objects in Patchies.
See all 100+ objects organized by category (Video, Audio, Code, Control, UI, etc.), with searchable names and brief descriptions.
You can also browse object presets here. Presets are pre-configured objects that help you get started quickly. Click to insert an object or preset -- pick one at random and play with it!
- `Delete` to delete an object.
- `Ctrl + C/V` to copy and paste an object, or use the "copy/paste" button.
- The Edit Code button opens the code editor.
- `Shift + Enter` in a code editor re-runs the code. This lets you make changes to the code and see the results right away.
Patchies is designed to be keyboard-first so you can get in the flow. Go to "Help > Shortcuts" to see the full list of keyboard shortcuts.
Use the easy connect button to make the handles big and easy to touch, for these use cases:
To use this feature:
To create shareable links, click on the "Share Link" button on the bottom right. You can also use "Share Patch" from the command palette.
Patchies is licensed under AGPL-3.0 and builds upon many amazing open source projects. See the complete licenses and attributions for detailed information about all third-party libraries used.
If you enjoy using Patchies, please consider supporting the open source creators who made it possible. You can view the list of creators to sponsor in-app by going to the "thanks" tab in the help dialog.
Special thanks to the amazing people who helped bring Patchies to life through their continuous support, feedback, and encouragement.
Each object can send messages to other objects, and receive messages from other objects.
In this example, two slider objects send their values to an expr $1 + $2 object, which adds the numbers together. The result is sent as a message to the p5 object, which displays it.
Here are some examples to get you started:
✨ Try this patch out in the app!
- Create two `button` objects, and connect the outlet of one to the inlet of another.
- Clicking the first button sends a `bang` message to the second button, which will flash. A bang is the object `{type: 'bang'}`.
- Create a `msg` object with the message `'hello world'` (you can hit `Enter` and type `m 'hello world'`). Mind the quotes.
- Hit `Enter` again and search for the `logger.js` preset. Connect them together.
- Clicking the message box sends `'hello world'` to the console object, which will log it to the virtual console.

Most messages in Patchies are objects with a `type` field. For example, bang is `{type: 'bang'}`, and start is `{type: 'start'}`. If you need more properties, you can add more fields to the object, e.g. `{type: 'loop', value: false}`.
Typing bang in the message box sends {type: 'bang'} for convenience. If you want to send a string "bang", type in "bang" with quotes. See the message object's documentation for the message box syntax.
In every object that supports writing JavaScript code (e.g. js and p5), you can use the send() and recv() functions to send and receive messages between objects. For example:
// In the source `js` object
send({ type: "bang" });
send("Hello from Object A");
// In the target `js` object
recv((data) => {
// first message: { type: 'bang' }
// second message: "Hello from Object A"
console.log("Received message:", data);
});
This is similar to the second example above, but using JavaScript code.
[!TIP] To see what kind of messages an object is sending out, use the `logger.js` preset. It is a `js` object that runs `recv(m => console.log(m))`, i.e. logs every incoming message to the console. You can add any preset by hitting `Enter` and searching for them.
The recv callback also accepts the meta argument in addition to the message data. It includes the inlet field which lets you know which inlet the message came from.
You can combine this with `send(data, {to: outletIndex})` to send data out of only a particular outlet, for example:
// If the message came from inlet #2, send it out to outlet #2
recv((data, meta) => {
send(data, { to: meta.inlet });
});
In most JavaScript-based objects, you can also call `setPortCount(inletCount, outletCount)` to set the exact number of message inlets and outlets. Example: `setPortCount(2, 1)` ensures there are 2 message inlets and 1 message outlet.
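For instance, here's a minimal `js` object sketch combining the functions above (the routing logic is just an illustration):

// assumes this runs inside a `js` object
setPortCount(2, 1); // 2 message inlets, 1 message outlet

recv((data, meta) => {
  // inlet 0 passes data through; inlet 1 wraps it in a set message
  if (meta.inlet === 0) send(data);
  else send({ type: "set", value: data });
});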
See the Message Passing with GLSL section for how to use message passing with GLSL shaders to pass data to shaders dynamically.
You can chain visual objects together to create video effects and compositions, by using the output of a visual object as an input to another.
The above example creates a `hydra` object and a `glsl` object that each produce a pattern, and connects them to a `hydra` object that subtracts one visual from the other using `src(s0).sub(s1).out(o0)`.
This is very similar to shader graphs in programs like TouchDesigner, Unity, Blender, Godot and Substance Designer.
To use video chaining:
Try out the presets to get started quickly.
- The pipe presets (`pipe.hydra`, `pipe.gl`) simply pass the visual through without any changes. This is the best starting point for chaining.
- The image-operation presets (`diff.hydra`, `add.hydra`, `sub.hydra`) perform operations on two visual inputs, see the hydra section.

The visual object should have at least one visual inlet and/or outlet, i.e. orange circles on the top and bottom.

- In `hydra`, you can call `setVideoCount(ins = 1, outs = 1)` to specify how many visual inlets and outlets you want. See the hydra section for more details.
- In `glsl` objects, you can dynamically create `sampler2D` uniforms. See the glsl section for more details.

The visual object should have code that takes in a visual source, does something, and outputs a visual. See the above presets for examples.
Connect the orange outlets of a source object to the orange inlets of a target object.

For example, connect the video outlet of a `p5` object to an orange visual inlet of a `pipe.hydra` preset, and then connect the hydra object to a `pipe.gl` preset. You should see the output of the `p5` object being passed through the hydra and glsl objects without modification.

Getting lag and slow patches? See the Rendering Pipeline section on how to avoid lag.
Similar to video chaining, you can chain many audio objects together to create audio effects and soundscapes.
✨ Try this patch out in the app!
This is an FM synthesis demo that uses a combination of osc~ (sine oscillator), expr (math expression), gain~ (gain control), and fft~ (frequency analysis) objects to create a simple synth with frequency modulation.
For a more fun example, here's a little patch by @kijjaz that uses mathematical expressions to make a beat in expr~:
If you don't have an idea where to start, why not build your own drum machine? Try it out! Use the W A S D keys on your keyboard to play some drums 🥁.
If you have used an audio patcher before (e.g. Pd, Max, FL Studio Patcher, Bitwig Studio's Grid), the idea is similar.
Use these objects as audio sources: osc~, sig~, mic~, strudel, chuck~, ai.tts, ai.music, soundfile~, sampler~, video, dsp~, tone~, elem~, sonic~
- Connect sources to `dac~` to hear the audio output, otherwise you will hear nothing. Audio sources do not output audio unless connected to `dac~`. Use `gain~` to control the volume.
- Use these objects to process audio: `gain~`, `fft~`, `+~`, `lowpass~`, `highpass~`, `bandpass~`, `allpass~`, `notch~`, `lowshelf~`, `highshelf~`, `peaking~`, `compressor~`, `pan~`, `delay~`, `waveshaper~`, `convolver~`, `expr~`, `dsp~`, `tone~`, `elem~`, `sonic~`.
Use dac~ to output audio to your speakers.
Use the fft~ object to analyze the frequency spectrum of the audio signal. See the Audio Analysis section on how to use FFT with your visual objects.
These rules define what handles can be connected together.
- `fft~` output can connect to message and video inlets.
- `osc~`'s frequency and `gain~`'s gain are both audio param inlets.
- Audio outlets (e.g. `osc~` out and `gain~` out) can connect to audio param inlets.

[!CAUTION] These features are experimental, and thus have a very high chance of corrupting and destroying your code and patches without any way to restore them. Try them on an empty patch or back up your objects.
Try out the above patch in which the AI generates a shader graph of a starfield with hearts 💕
Press Ctrl/Cmd + I to open the object insert/edit prompt. Describe what you want to create in natural language, and the AI will generate or edit the appropriate objects with code for you.
When the AI object insert prompt is open, press Ctrl/Cmd+I again to switch between Single Insert and Multi Insert mode.
[!TIP] AI is 100% optional and opt-in with Patchies. Dislike AI? Hit `Ctrl/Cmd + K` then `Toggle AI Features`. This permanently turns all AI-based nodes and AI generation features off.
Here's how to set it up:
- Press `Cmd/Ctrl + I`.
- Enter your API key and hit `Save & Continue`.
- Use `Ctrl/Cmd + I` or the sparkles button on the bottom right to generate.

This feature uses the `gemini-3-flash-preview` model to understand your prompt and generate the object configuration. API keys are stored in localStorage as `gemini-api-key`, and there is a risk of your API keys being stolen by malicious patches you open.
Here is a non-exhaustive list of the objects available in Patchies.
These objects support video chaining and can be connected to create complex visual effects:
p5: creates a P5.js sketch
✨ Try this patch out in the app. The sketches are Patt Vira's DESSINS Géométriques and Interactive Truchet Tiles tutorials. Her YouTube tutorials are helpful for getting familiar with P5 and for daily inspirations.
P5.js is a JavaScript library for creative coding. It provides a simple way to create graphics and animations, but you can do very complex things with it.
Read the P5.js documentation to see how P5 works.
See the P5.js tutorials and OpenProcessing for more inspiration.
Note: Patchies uses P5.js v2.x with backward compatibility libraries for v1 features. All existing P5.js v1 sketches should work without modification.
You can call these special methods in your sketch:
- `noDrag()` disables dragging the whole canvas. You must call this method if you want to add interactivity to your sketch, such as adding sliders or mousePressed events (see the sketch below). You can call it in your `setup()` function. When `noDrag()` is enabled, you can still drag the "p5" title to move the whole object around.
- `noOutput()` hides the video output port (the orange outlet at the bottom). This is useful when creating interface widgets that don't need to be part of the video chain.
- See Patchies JavaScript Runner for available functions (`send`, `recv`, `setPortCount`, `onCleanup`, etc.).

You can use any third-party packages you want in your sketch, see importing JavaScript packages from NPM.
You can import shared JavaScript libraries across multiple p5 objects, see sharing JavaScript across multiple js blocks.
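Putting `noDrag()` and the runner functions together, here's a minimal interactive sketch (the layout values are arbitrary):

// a p5 object acting as a clickable bang button
function setup() {
  createCanvas(200, 100);
  noDrag(); // allow mouse interaction inside the canvas
}

function draw() {
  background(30);
  fill(200);
  rect(50, 30, 100, 40);
}

function mousePressed() {
  send({ type: "bang" }); // emit a bang from this object's outlet
}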
Please consider supporting the Processing Foundation who maintains p5.js!
hydra: creates a Hydra video synthesizer
- `setVideoCount(ins = 1, outs = 1)` creates the specified number of Hydra source ports. `setVideoCount(2)` initializes `s0` and `s1` sources with the first two visual inlets.
- `setMouseScope('global' | 'local')` sets the mouse tracking scope. `'local'` (default) tracks the mouse within the canvas preview, `'global'` tracks the mouse across the entire screen using screen coordinates.
- Hydra has four outputs: `o0`, `o1`, `o2`, and `o3`.
- `mouse.x` and `mouse.y` provide real-time mouse coordinates (scope depends on `setMouseScope`).
- See Patchies JavaScript Runner for available functions (`send`, `recv`, `setPortCount`, `onCleanup`, etc.).

Presets:
- `pipe.hydra`: passes the image through without any changes
- `diff.hydra`, `add.hydra`, `sub.hydra`, `blend.hydra`, `mask.hydra`: perform image operations (difference, addition, subtraction, blending, masking) on two video inputs
- `filet-mignon.hydra`: example Hydra code "Filet Mignon" from AFALFL. Licensed under CC BY-NC-SA 4.0.

glsl: creates a GLSL fragment shader
✨ Try this patch out in the app. Shader is from @dtinth's talk, the power of signed distance functions!
- Connect video sources (`p5`, `hydra`, `glsl`, `swgl`, `bchrn`, `ai.img` or `canvas`) to the GLSL object via `sampler2D` video inlets.
- If you declare a uniform such as `uniform float iMix;`, it will create a float inlet for you to send values to.
- If you declare a `sampler2D` such as `uniform sampler2D iChannel0;`, it will create an orange video inlet for you to connect video sources to.
- Shaders that use ShaderToy-style uniforms (e.g. `iChannel0`, `iMouse`) generally work in `glsl`, as they accept the same uniforms.
- If your shader declares the `iMouse` uniform (`vec4`), mouse interaction is automatically enabled:
  - `iMouse.xy`: current mouse position or last click position
  - `iMouse.zw`: drag start position (positive when mouse down, negative when mouse up)
  - While dragging, `iMouse.zw > 0` contains the ongoing drag start position
  - When the mouse is up, `iMouse.zw < 0` (use `abs()` to get the last drag start position)
  - When `iMouse` is detected in your code, the node becomes interactive (drag is disabled to allow mouse input)

Presets:
- `red.gl`: solid red color
- `pipe.gl`: passes the image through without any changes
- `mix.gl`: mixes two video inputs
- `overlay.gl`: puts the second video input on top of the first one
- `fft-freq.gl`: visualizes the frequency spectrum from audio input
- `fft-waveform.gl`: visualizes the audio waveform from audio input
- `switcher.gl`: switches between six video inputs by sending an int message of 0 - 5

You can send messages to the GLSL uniforms to set the uniform values in real-time. First, create a GLSL uniform using the standard GLSL syntax, which adds two dynamic inlets to the GLSL object:
uniform float iMix;
uniform vec2 iFoo;
You can now send a message of value 0.5 to iMix, and send [0.0, 0.0] to iFoo. When you send messages to these inlets, it will set the internal GLSL uniform values for the object. The type of the message must match the type of the uniform, otherwise the message will not be sent.
If you want to set a default uniform value for when the patch gets loaded, use the loadbang object connected to a msg object or a slider. loadbang sends a bang message when the patch is loaded, which you can use to trigger a msg object or a slider to send the default value to the GLSL uniform inlet.
Supported uniform types are bool (boolean), int (number), float (floating point number), vec2, vec3, and vec4 (arrays of 2, 3, or 4 numbers).
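As a sketch, a `js` object connected to the `iMix` inlet could animate the uniform in real-time (the 16ms interval is illustrative):

// animates `uniform float iMix` on a connected glsl object
setRunOnMount(true);

const timer = setInterval(() => {
  const t = performance.now() / 1000;
  send((Math.sin(t) + 1) / 2); // a float in 0..1, matching the uniform type
}, 16);

onCleanup(() => clearInterval(timer)); // stop the timer when the object is removed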
swgl: creates a SwissGL shader

SwissGL is a wrapper for WebGL2 to create shaders in very few lines of code. See the API docs for full reference. Here is how to make a simple animated mesh:
function render({ t }) {
glsl({
t,
Mesh: [10, 10],
VP: `XY*0.8+sin(t+XY.yx*2.0)*0.2,0,1`,
FP: `UV,0.5,1`,
});
}
See the SwissGL examples for inspiration on how to use SwissGL.
canvas: creates a JavaScript canvas (offscreen)

You can use the HTML5 Canvas to create custom graphics and animations. The rendering context is exposed as `ctx` in the JavaScript code, so you can use methods like `ctx.fill()` to draw on the canvas.
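For example, a minimal one-shot drawing using the exposed `ctx` (the presets show how to animate; this only demonstrates the drawing context):

// draw a filled circle on the offscreen canvas
ctx.fillStyle = "black";
ctx.fillRect(0, 0, ctx.canvas.width, ctx.canvas.height);
ctx.fillStyle = "aqua";
ctx.beginPath();
ctx.arc(100, 100, 40, 0, Math.PI * 2);
ctx.fill();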
You can call these special methods in your canvas code:
- `noDrag()` disables dragging the node. This allows you to add mouse or touch interactivity to your canvas without accidentally moving the node.
- `noOutput()` hides the video output port. Useful when creating interface widgets or tools that don't need to be part of the video processing chain.
- `fft()` for audio analysis, see Audio Analysis.
- See Patchies JavaScript Runner for available functions (`send`, `recv`, `setPortCount`, `onCleanup`, etc.).

This runs on the rendering pipeline using OffscreenCanvas on web workers. This means:

- You can chain it with other video objects (`glsl`, `hydra`, etc.) without lag. You can draw animations using the canvas API and output them at 60fps.
- There is no access to `document` or `window`.
- `fft~` inputs have very high delay due to worker message passing.

canvas.dom: creates a JavaScript canvas (main thread)
✨ Try this patch out in the app!
Same as canvas but runs directly on the main thread instead of on the rendering pipeline thread, and comes with some additional features:
- `mouse` object with properties `x`, `y`, `down`, `buttons` to get the current mouse position and state.
- `onKeyDown(callback)` and `onKeyUp(callback)` to register keyboard event handlers. Events are trapped and won't leak to xyflow (e.g., pressing Delete won't delete the node).
- Full access to the DOM (`document` and `window`).
- `setCanvasSize(width, height)` to dynamically resize the canvas resolution (e.g., `setCanvasSize(500, 500)`).
- Everything from `canvas`: `noDrag()`, `noOutput()`, `fft()`, plus all Patchies JavaScript Runner functions.

When to use canvas.dom instead of canvas:

- You need `mouse.x`, `mouse.y`, `mouse.down` for interactive sketches.
- You need `onKeyDown()` and `onKeyUp()` for keyboard-controlled widgets.
- You need `document`, `window` and other browser APIs.

Try out these fun and useful presets for inspiration on widgets and interactive controls:

- `particle.canvas` adds a particle canvas that reacts to your mouse inputs.
- `xy-pad.canvas` adds an X-Y pad that you can send `[x, y]` coordinates into to set the position of the crosshair. It also sends `[x, y]` coordinates to the message outlet when you drag on it.
- `rgba.picker` and `hsla.picker` let you pick colors and send them as outputs: `[r, g, b, a]` and `[h, s, l, a]` respectively.
- `keyboard.example` demonstrates keyboard event handling with `onKeyDown()` and `onKeyUp()` callbacks.
- The `fft.canvas` preset takes in analysis output from the `fft~` object and draws an FFT plot, similar to `fft.p5` but even faster.

Performance trade-offs:
textmode and textmode.dom: creates ASCII/text-mode graphics
✨ Try this patch out in the app! Code sample and library by @humanbydefinition
Textmode.js is a library for creating ASCII art and text-mode graphics in the browser using WebGL2. Perfect for creating retro-style visuals, text animations, and creative coding with characters.
There are two flavors of textmode objects with a few differences:
- `textmode`: Runs on the rendering pipeline and is performant when chaining to other video nodes. Features such as mouse interactivity, images/videos and fonts are NOT supported.
- `textmode.dom`: Runs on the main thread. Supports mouse, touch and keyboard interactivity. Supports video and images. Slower when chaining to other video nodes as it requires a CPU-to-GPU pixel copy.

You can call these special methods in your textmode code:

- `noDrag()` disables dragging the node.
- `noOutput()` hides the video output port.
- `setHidePorts(true | false)` sets whether to hide inlets and outlets.
- `fft()` for audio analysis, see Audio Analysis.
- See Patchies JavaScript Runner for available functions (`send`, `recv`, `setPortCount`, `onCleanup`, etc.).

The textmode instance is exposed as `tm` in your code:
tm.setup(() => {
tm.fontSize(16);
tm.frameRate(60);
});
tm.draw(() => {
tm.background(0, 0, 0, 0);
const halfCols = tm.grid.cols / 2;
const halfRows = tm.grid.rows / 2;
for (let y = -halfRows; y < halfRows; y++) {
for (let x = -halfCols; x < halfCols; x++) {
const dist = Math.sqrt(x * x + y * y);
const wave = Math.sin(dist * 0.2 - tm.frameCount * 0.1);
tm.push();
tm.translate(x, y, 0);
tm.char(wave > 0.5 ? "▓" : wave > 0 ? "▒" : "░");
tm.charColor(0, 150 + wave * 100, 255);
tm.point();
tm.pop();
}
}
});
[!CAUTION] If you create too many `textmode` or `textmode.dom` objects, your browser will crash with `Too many active WebGL contexts. Oldest context will be lost.` It seems like textmode might not be sharing the WebGL contexts across `TextModifier` instances.
You can use the textmode.filters.js plugin to apply image filters, e.g. tm.layers.base.filter('brightness', 1.3)
Try these presets for more quick examples: digital-rain.tm, animated-wave.tm, plasma-field.tm, rain.tm, torus.tm and fire.tm
See the Textmode.js documentation to learn how to use the library.
Please consider supporting @humanbydefinition who maintains textmode.js!
three and three.dom: creates Three.js 3D graphics
✨ Try this patch out in the app! It shows how you can use 2D textures from other objects in Three.js.
Three.js is a powerful 3D graphics library for WebGL. Create 3D scenes, animations, and interactive visualizations in the browser.
There are two flavors of three objects with a few differences:
- `three`: Runs on the rendering pipeline and is performant when chaining to other video nodes. Can take video inputs as textures via `getTexture()`.
- `three.dom`: Runs on the main thread. Supports interactivity via OrbitControls or custom handlers. Slower when chaining to other video nodes as it requires a CPU-to-GPU pixel copy.

The `draw()` function should be defined to draw every frame:
const { Scene, PerspectiveCamera, BoxGeometry, Mesh, MeshNormalMaterial } =
THREE;
const scene = new Scene();
const camera = new PerspectiveCamera(75, width / height, 0.1, 1000);
camera.position.z = 2;
const geometry = new BoxGeometry(1, 1, 1);
const material = new MeshNormalMaterial();
const cube = new Mesh(geometry, material);
scene.add(cube);
function draw() {
cube.rotation.x += 0.01;
cube.rotation.y += 0.01;
renderer.render(scene, camera);
}
You can call these special methods in the three object only:
- `getTexture(inlet): THREE.Texture` gets the video input as a Three.js texture. Only works with the `three` object.
- `setVideoCount(ins, outs)` sets the number of video inlets and outlets (for video chaining).

You can call these special methods in the three.dom object only:

- `setCanvasSize(width, height)` resizes the output canvas
- `onKeyDown(callback)` receives keydown events
- `onKeyUp(callback)` receives keyup events

You can call these special methods in both three and three.dom:

- `noDrag()` disables dragging the node.
- `noOutput()` hides the video output port.
- `setHidePorts(true | false)` sets whether to hide inlets and outlets.
- `fft()` for audio analysis, see Audio Analysis.
- See Patchies JavaScript Runner for available functions (`send`, `recv`, `setPortCount`, `onCleanup`, etc.).

As well as these variables:

- `mouse.x` and `mouse.y` provide the mouse position
- `width` and `height` provide the output size

The Three.js context provides these variables:

- `THREE`: the Three.js library
- `renderer: WebGLRenderer`: the WebGL renderer from Three.js

See the Three.js documentation and examples for more inspiration.
Please consider supporting mrdoob on GitHub Sponsors!
bchrn: render the Winamp Milkdrop visualizer (Butterchurn)
- Chain it with other visual objects (`hydra` and `glsl`) to derive more visual effects.

img: display images
- `string`: load the image from the given url.

video: display videos
- `bang`: restart the video
- `string`: load the video from the given url.
- `play`: play the video
- `pause`: pause the video
- `{type: 'loop', value: false}`: do not loop the video

iframe: embed web content
- Hit `Enter` and type `iframe <url>` to create an iframe with a pre-filled URL. Example: `iframe example.com`
- `{type: 'load', url: 'https://...'}`: loads the webpage from the given URL.
- Other incoming messages are forwarded to the iframe via `postMessage`. Use this for communication protocols like WebMIDILink that rely on postMessage.
- The outlet emits `postMessage` events received from the iframe. This allows bidirectional communication between your patch and embedded web content.

bg.out: background output

js: A JavaScript code block
- See Patchies JavaScript Runner for available functions (`send`, `recv`, `setPortCount`, `onCleanup`, etc.) and features (NPM imports, VFS, shared libraries).
- Special methods in the `js` object:
  - `setRunOnMount(true)` to run the code automatically when the object is created. By default, the code only runs when you hit the "Play" button.
  - `flash()` to briefly flash the node's border, useful for visual feedback when processing messages.
- Try the `logger.js` preset which lets you log incoming messages to the console. Useful for debugging.

worker: JavaScript in a Web Worker thread
- The `worker` node runs JavaScript in a dedicated Web Worker thread, allowing CPU-intensive computations to run without blocking the main thread.
- Supports `requestAnimationFrame()` (uses a 60fps setInterval as fallback) and the `// @lib` declaration (libraries must be created in regular `js` nodes).
- `flash()` to briefly flash the node's border, useful for visual feedback when processing messages.
- Libraries declared with `// @lib` in a regular `js` node can be imported in `worker` nodes.

expr: expression evaluator
✨ Try this patch out in the app!
Evaluate expressions and formulas.
Use the $1 to $9 variables to create inlets dynamically. For example, $1 + $2 creates two inlets for addition.
This uses the expr-eval library from silentmatt under the hood for evaluating expressions.
There are so many functions and operators you can use here! See the expression syntax section.
Very helpful for control signals and parameter mapping.
This works with non-numbers too! You can use it to access object fields and work with arrays.
// gets the 'note' field of an object and adds 20 to it
$1.note + 20;
// checks if the 'type' field is noteOn
$1.type == "noteOn";
// perform conditional operations on an object
$1.value > 20 ? "ok" : "no";
// get the 5th index of an array
$1[5];
You can also create variables and they are multi-line. Make sure to use ; to separate statements. For example:
a = $1 * 2;
b = $2 + 3;
a + b;
You can also define functions to make the code easier to read, e.g. add(a, b) = a + b.
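For example, a hypothetical helper that maps a 0..1 input onto a range:

// map $1 (0..1) onto a frequency range
scale(x, lo, hi) = lo + x * (hi - lo);
scale($1, 20, 2000);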
The expr object follows the Max and Pd convention of hot and cold inlets:
- Hot inlet (inlet 0, `$1`): when a message arrives, the expression is evaluated and the result is sent to the outlet.
- Cold inlets (`$2`, `$3`, etc.): when a message arrives, the value is stored but no output is triggered. The stored values are used the next time inlet 0 receives a message.

This allows you to set up multiple values before triggering a computation. Use the trigger object to control the order of execution when you need to update multiple inlets and then trigger the output.
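For example, with `expr $1 + $2` (a hypothetical message sequence):

send 5 to inlet 1 (cold) → stores $2 = 5, no output
send 3 to inlet 0 (hot)  → evaluates 3 + 5, outputs 8
send 4 to inlet 0 (hot)  → evaluates 4 + 5, outputs 9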
filter: conditional message passing

Filter messages based on a JavaScript expression. If the expression evaluates to a truthy value, the message is sent to the first outlet (matched); otherwise, it's sent to the second outlet (no match).
Use $1 to $9 variables like in expr to reference inlet values.
Unlike expr which outputs the result of the expression, filter passes through the original input message when the condition is met (or not met).
// Only pass through messages where type is 'play'
$1.type === "play";
// Filter for note-on messages with velocity above 64
$1.type === "noteOn" && $1.velocity > 64;
// Pass through numbers greater than 100
$1 > 100;
Two outlets: The first outlet emits messages that match the filter condition. The second outlet emits messages that fail to match, allowing you to handle both cases.
Follows the same hot/cold inlet convention as expr: inlet 0 triggers evaluation, other inlets store values.
map: transform messages with JavaScript

Transform incoming messages using JavaScript expressions. The result of the expression is sent to the outlet.
Use $1 to $9 variables like in expr to reference inlet values.
Unlike expr which uses expr-eval, map uses full JavaScript, giving you access to all JS features and some of the runner context (e.g. esm() for NPM imports, llm(), etc.).
// Add 1 to the incoming value (same as expr $1 + 1)
$1 + 1
// Override a field in the incoming message object
{...$1, note: 64}
// Use JavaScript built-in functions
Math.floor($1)
// Use string methods
$1.toUpperCase()
// Use array methods
$1.map(x => x * 2)
Follows the same hot/cold inlet convention as expr: inlet 0 triggers evaluation, other inlets store values.
tap: debug and inspect messages

Execute JavaScript expressions for side effects (like logging) while passing the original message through unchanged.
Perfect for debugging message flow without altering the data.
// Log incoming messages
console.log("received:", $1);
// Log specific fields
console.log("note:", $1.note, "velocity:", $1.velocity);
// Conditional logging
if ($1.type === "noteOn") console.log("Note on!", $1);
The expression result is ignored - the original message always passes through.
Follows the same hot/cold inlet convention as expr: inlet 0 triggers evaluation, other inlets store values.
scan: stateful accumulation

Accumulate values over time using a JavaScript expression (like RxJS scan).
$1 is the accumulator (previous result), $2 is the new input value.
The result becomes the new accumulator and is sent to the outlet.
// Running sum
$1 + $2
// Running maximum
Math.max($1, $2)
// Collect values into array
[...$1, $2]
// Count messages
$1 + 1
// Running average (with count in accumulator)
{ sum: $1.sum + $2, count: $1.count + 1 }
uniq: filter consecutive duplicates

Filters out consecutive duplicate values (like Unix uniq or RxJS distinctUntilChanged).
By default, uses strict equality (===) to compare values.
Optional comparator expression: $1 is the previous value, $2 is the current value. Return true if equal (skip), false if different (pass through).
// Default: strict equality (no expression needed)
// 1 1 1 2 2 3 3 3 4 → 1 2 3 4
// Compare by specific property
$1.id === $2.id;
// Compare by multiple properties
$1.x === $2.x && $1.y === $2.y;
// Custom comparison (e.g., within threshold)
Math.abs($1 - $2) < 0.01;
Second inlet resets the state (forgets the last value).
- Inlet 0: input value (`$2`), triggers evaluation
- Inlet 1: reset/set accumulator. Send `bang` to reset to the initial value, or send a value to set the accumulator directly
- The first input initializes the accumulator (unless `initialValue` is set in data)
peek: display message values
- Type `peek $1.type` or click the code icon to add an expression.
- Use `$1` to reference the incoming message (e.g., `$1.x`, `$1.data.name`).

vue: create user interfaces with Vue
- Pass `createApp({template})` a template string for now, or use hyperscript via `h()` for more complicated things.
- Available in scope: `Vue` (the entire Vue.js module), `createApp`, `ref`, `reactive`, `computed`, `watch`, `watchEffect`, `onMounted`, `onUnmounted`, `nextTick`, `h`, `defineComponent`.
- Call `tailwind(false)` to disable TailwindCSS if you prefer to use your own styles.
- Special methods in `vue` code:
  - `noDrag()` disables dragging the node.
  - See Patchies JavaScript Runner for available functions (`send`, `recv`, `setPortCount`, `onCleanup`, etc.).

dom: create user interfaces with Vanilla JS
- `root` provides the root element that you can modify, e.g. `root.innerHTML = 'hello'`.
- Call `tailwind(false)` to disable TailwindCSS if you prefer to use your own styles.
- Special methods in `dom` code:
  - `noDrag()` disables dragging the node.
  - See Patchies JavaScript Runner for available functions (`send`, `recv`, `setPortCount`, `onCleanup`, etc.).

uxn: Uxn virtual machine
- Chain it with visual objects (`hydra` and `glsl`) to process the Uxn screen output.
- Drag and drop a `.rom` file, or use the Load ROM button (folder icon).
✨ Try this patch out in the app! Code is by Compudanzas' Uxn tutorial. If you like their tutorial, please go support them!
Write and assemble your own Uxntal programs directly in the editor!
- Hit `Shift + Enter` or click "Assemble & Load" to compile and run your code.

Messages:
- `string`: if it starts with `http://` or `https://`, loads a ROM from the URL.
- `bang`: Re-assembles and loads code if available, or reloads the ROM from the URL if available.
- `Uint8Array`: Load ROM from raw binary data
- `File`: Load ROM from a file object
- `{type: 'load', url: string}`: Load ROM from URL

Auto-loading behavior:
See the Uxn documentation and Uxntal reference to learn how to write Uxn programs.
Check out 100r.co for Uxn design principles.
See Awesome Uxn for cool resources and projects from the Uxn community.
Please consider supporting Hundred Rabbits on Patreon for their amazing work on Uxn and Orca!
asm: virtual stack machine assembly interpreter

asm lets you write a simple flavor of stack machine assembly to construct concise programs. This was heavily inspired by Zachtronics games like TIS-100 and Shenzhen I/O, where you write small assembly programs to interact with the world and solve problems:
The stack machine module is quite extensive, with over 50 assembly instructions and a rich set of features. There are lots of quality-of-life tools unique to Patchies like color-coded memory region visualizer, line-by-line instruction highlighting, and external memory cells (asm.mem).
See the documentation for assembly module to see the full instruction sets and syntax, what the asm object and its friends can do, and how to use it.
Try out my example assembly patch to get a feel of how it works.
ruby: creates a Ruby code environment
- `emit data` - send data to all outlets
- `emit data, to: n` - send data to a specific outlet (0-indexed)
- `recv { |data, meta| ... }` - receive messages (data is auto-converted to Ruby types)
- `set_port_count(inlets, outlets)` - configure the number of ports
- `set_title "title"` - set the node's title
- `flash` - flash the node
- `puts`, `p`, `warn` - console output
- Use `emit` instead of `send` (Ruby's built-in `send` method conflicts with JS interop).

# Example: double incoming numbers
recv { |data, meta| emit(data * 2) }
python: creates a Python code environment

button: a simple button
- Sends a `bang` message when clicked.
- `any`: flashes the button when it receives any message, and outputs the bang message.

msg: message object
- Hit `Enter` and type `m <message>` to create a msg object with the given message. `m start` creates a msg object that sends `start` when clicked.
- Plain words (e.g. `hello` or `start`) are sent as objects with a type field: i.e. `{type: 'hello'}` or `{type: 'start'}`
- Quoted strings (e.g. `"hello"`) are sent as JS strings: `"hello"`
- Numbers (e.g. `100`) are sent as numbers: `100`
- Object literals (e.g. `{foo: 'bar'}`) are sent as-is: `{foo: 'bar'}`

Examples:
- `bang` sends the `{type: 'bang'}` object - this is what button does when you click it
- `start` sends the `{type: 'start'}` object
- `'hello world'` or `"hello world"` sends the string `'hello world'`
- `100` sends the number `100`
- `{x: 1, y: 2}` sends the object `{x: 1, y: 2}`

Inlet messages:
- `bang`: outputs the message without storing a new value
- `{type: 'set', value: <value>}`: sets the message without triggering output

You can use placeholders from `$1` - `$9` to send messages with stored variables. This is very helpful if you have a message like `{type: 'noteOn', note: $1, velocity: 100}` and you need the note to be dynamic.
The msg object follows the Max and Pd convention of hot and cold inlets:
- One placeholder (`$1`): A single hot inlet. Sending a value stores it as `$1` and triggers output. Sending a bang triggers output with the current stored value.
- Multiple placeholders (`$1`, `$2`, etc.): The first inlet is hot (`$1`), the rest are cold (`$2`, `$3`, etc.). Cold inlets store values without triggering. Send values to cold inlets first, then trigger via the hot inlet. Use the trigger object to do this.

slider: numerical value slider
- Hit `Enter` and type in these short commands to create sliders with specific ranges:
  - `slider <min> <max>`: integer slider control. Example: `slider 0 100`
  - `fslider <min> <max>`: floating-point slider control. Example: `fslider 0.0 1.0`. fslider defaults to a -1.0 to 1.0 range if no arguments are given.
  - `vslider <min> <max>`: vertical integer slider control. Example: `vslider -50 50`
  - `vfslider <min> <max>`: vertical floating-point slider control. Example: `vfslider -1.0 1.0`. vfslider defaults to a -1.0 to 1.0 range if no arguments are given.
- Inlet messages:
  - `bang`: outputs the current slider value
  - `number`: sets the slider to the given number within the range and outputs the value

textbox: multi-line text input
- `bang`: outputs the current text
- `string`: sets the text to the given string

orca: Orca livecoding sequencer

- Outputs MIDI messages (`noteOn`, `noteOff`, `controlChange`). Connect to `midi.out` for MIDI output to hardware.
- Try the `poly-synth-midi.tone` preset, which uses the `tone~` node to play back MIDI messages with a polyphonic synth.
- `Enter` or `ctrl+f` advances one frame
- `ctrl+shift+r` resets the frame
- `>` increases tempo and `<` decreases tempo

strudel: Strudel music environment
✨ Try this patch out in the app!
- Press `Ctrl/Cmd + Enter` to re-evaluate the code.
- Connect to a `dac~` object to hear the audio output.
- `send` technically works but has a very limited use case, as there are no event emitters in Strudel.
- `recv` only works with a few functions, e.g. `setcpm` right now. Try `recv(setcpm)` to automate the cpm value.
- Inlet messages:
  - `bang` or `run`: evaluates the code and starts playback
  - `{type: 'set', code: '...'}`: sets the code in the editor
  - `{type: 'setFontSize', value: 18}`: sets the font size of the editor.
  - `{type: 'setFontFamily', value: 'JetBrains Mono, monospace'}`: sets the font family of the editor. Fallback fonts are allowed.
  - `{type: 'setStyles', value: {container: 'background: transparent'}}`: sets custom styles for the editor container.
- You can create more than one `strudel` object, but only one will be playing at a time. Use `bang` or `run` messages to switch playback between multiple Strudel objects to orchestrate them.

chuck~: creates a ChucK audio programming environment
✨ Try this patch out in the app! This is from @dtinth's ChucK experiments.
- `Ctrl/Cmd + Enter`: replaces the most recent shred.
- `Ctrl/Cmd + \`: adds a new shred to the shreds list.
- `Ctrl/Cmd + Backspace`: removes the most recent shred.
- ChucK receives audio input, e.g. `adc => PitShift p => dac;`, so you can use ChucK as a filter or for analysis.
- Inlet messages:
  - `bang`, `replace` or `run`: replaces the most recent shred with the current expression
  - `add`: adds the current expression as a new shred
  - `remove`: removes the last shred
  - `stop`: stops all shreds
  - `clearAll`: clears all shreds
  - `{type: 'replace', code: string}`: replaces the most recent shred with the given code
✨ Try this patch out in the app! You can use ChucK for audio analysis and applying filters as it receives audio inputs and can emit events and global variables.
- Declare variables as `global` (e.g. `global int bpm`) and make sure all dependent variables are re-computed in a loop.
- `{type: 'set', key: string, value: any}`: sets a chuck global value / array (can be string, int or float). Type detection can fail, e.g. sending an integer to a `global bpm` float of `140.0` would not work. Try `setInt` or `setFloat` if there is an issue.
- `{type: 'setInt', key: string, value: number}`: sets a chuck global integer value
- `{type: 'setFloat', key: string, value: number}`: sets a chuck global float value
- `{type: 'setIntArray', key: string, value: number[]}`: sets a chuck global integer array
- `{type: 'setFloatArray', key: string, value: number[]}`: sets a chuck global float array
- `{type: 'get', key: string}`: gets a chuck global value (auto-detects type from code) and emits `{key, value}`
- `{type: 'getInt', key: string}`: gets a chuck global integer value and emits `{key, value}`
- `{type: 'getFloat', key: string}`: gets a chuck global float value and emits `{key, value}`
- `{type: 'getString', key: string}`: gets a chuck global string value and emits `{key, value}`
- `{type: 'getIntArray', key: string}`: gets a chuck global integer array and emits `{key, value}`
- `{type: 'getFloatArray', key: string}`: gets a chuck global float array and emits `{key, value}`
- `{type: 'signal', event: string}`: signals an event by name
- `{type: 'broadcast', event: string}`: broadcasts an event by name
- `{type: 'listenOnce', event: string}`: listens for an event once, emits `{event}` when triggered
- `{type: 'listenStart', event: string}`: starts listening for an event continuously, emits `{event}` each time it's triggered
- `{type: 'listenStop', event: string}`: stops listening for the event
- `<<<` print statements are emitted as raw strings from the message outlet
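As a sketch, a `js` object wired to `chuck~`'s message inlet (and its outlet back to the `js` inlet) could drive a `global float bpm` using the messages above:

// set a ChucK global, then read it back
send({ type: "setFloat", key: "bpm", value: 140.0 });
send({ type: "get", key: "bpm" });
recv((m) => console.log("from chuck:", m)); // e.g. { key: 'bpm', value: 140 }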
Tip: you can configure audio devices and their settings by using the settings button on `mic~` and `dac~`.
- `mic~`: Capture audio from microphone input
- `dac~`: Send audio to speakers
- `meter~`: Visual audio level meter that shows the loudness of the audio source.
- `soundfile~`: Load and play audio files with transport controls
  - Drag and drop an audio file onto `soundfile~` to load it. Files are persisted in `soundfile~` by default.
  - You can send the audio buffer from `soundfile~` into `sampler~`, which has more playback capabilities. Samples in `sampler~` persist between reloads too.
  - Messages:
    - `bang`: play from the start of the sample
    - `play`: play from the current position
    - `pause`: pause the playback
    - `stop`: stop the playback and reset the playback position
    - `read`: reads the audio buffer and sends it to the output, see `convolver~`
    - `{type: 'load', url: string}`: loads the audio file or stream by url. Try sending `'https://stream.japanradio.de/live'` to `soundfile~` then `bang` to play a radio station!
Try out the drum sequencer: use `P` to play and `K` to stop!
- `sampler~`: Sample playback with triggering capabilities, see the sampler~ section.
- `split~`: Split multi-channel audio into separate mono channels.
- `merge~`: Merge multiple mono channels into a single multi-channel audio.

object: textual object system
- Hit `Enter`, and type in the name of the object you want to create.
- Hover over an object's inline argument, e.g. the `gain~` object's gain value (e.g. `1.0`), to see the tooltip.

These objects run on control rate, which means they process messages (control signals), but not audio signals.
- `mtof`: Convert MIDI note numbers to frequencies
- `loadbang`: Send a bang on patch load
- `metro`: Metronome for regular timing
- `delay`: Message delay (not audio)
- `debounce`: Waits for a quiet period before emitting the last value (e.g., `debounce 100`)
- `throttle`: Rate-limits messages to at most one per time period (e.g., `throttle 100`)
- `trigger` (alias `t`): Send messages through multiple outlets in right-to-left order
- `adsr`: ADSR envelope generator
- `spigot`: Message gate that allows or blocks data based on a condition
- `uniqby`: Filter consecutive duplicates by a specific key (e.g., `uniqby id` or `uniqby user.name`)
- `webmidilink`: Converts `midi.in` messages to WebMIDILink link level 0 formats. Connect this to `iframe` to send MIDI messages to WebMIDILink-enabled iframes. Try `webmidilink` to make smooth jazz with SpessaSynth; click on the iframe to play sound.

trigger: sends messages in right-to-left order

The `trigger` object (shorthand: `t`) is essential for controlling message order and working with hot/cold inlets. It sends messages through multiple outlets in right-to-left order.
Usage: trigger <type1> <type2> ... or t <type1> <type2> ...
Type specifiers:
- `b` or `bang`: Always sends `{type: 'bang'}`
- `a` or `any`: Passes the input unchanged
- `n` or `f` or `number` or `float`: Passes only if the input is a number
- `l` or `list`: Passes only if the input is an array
- `o` or `object`: Passes only if the input is a plain object (not an array)
- `s` or `symbol`: Passes only if the input is an object with a `type` key or a JS symbol

Example: `t b n` creates two outlets. When it receives the number 42:

- The right outlet sends `42` first
- The left outlet then sends `{type: 'bang'}`

This right-to-left order is crucial for setting up cold inlets before triggering hot inlets. For example, to properly update an `expr $1 + $2` object:
[slider] ──┬──► [t b a] ──► outlet 0 (bang) ──► expr inlet 0 (hot, triggers output)
│ └──► outlet 1 (value) ──► expr inlet 1 (cold, stores value)
The trigger ensures the value reaches the cold inlet ($2) before the bang triggers the hot inlet ($1).
adsr: ADSR envelope generator
✨ Try this patch out in the app! This is a sampler that changes the playback speed depending on which notes you pressed.
The adsr object generates ADSR envelope messages for controlling audio parameters (like gain). It has 6 inlets:
- Sending `1` triggers attack→decay→sustain, `0` triggers release.

Connect the output to an audio parameter inlet (e.g., `gain~`'s gain inlet) to automate the parameter.
Under the hood, adsr sends scheduled messages that automate audio parameters. You can also send these directly from js nodes.
// Trigger envelope (attack → decay → sustain)
send({
type: "trigger",
values: { start: 0, peak: 1, sustain: 0.7 },
attack: { time: 0.02 }, // seconds
decay: { time: 0.1 },
});
// Release envelope
send({ type: "release", release: { time: 0.3 }, endValue: 0 });
// Set value immediately
send({ type: "set", value: 0.5 });
// Set value at a future time (relative, in 0.5s from now)
send({ type: "set", value: 0.5, time: 0.5 });
// Set value at absolute audio context time
send({ type: "set", value: 0.5, time: 1.0, timeMode: "absolute" });
- `curve`: `'linear' | 'exponential' | 'targetAtTime'` (default: linear).
- The `midi-adsr-gain.js` preset shows how you can use MIDI messages to automate the gain parameter. This patch shows how to use this in place of the adsr object.

These objects run on audio rate, which means they process audio signals in real-time. They are represented with a `~` suffix in their names.
Audio Processing:
- `gain~`: Amplifies audio signals with gain control
- `osc~`: Oscillator for generating audio waveforms (sine, square, sawtooth, triangle)
- `lowpass~`, `highpass~`, `bandpass~`, `allpass~`, `notch~`: Various audio filters
- `lowshelf~`, `highshelf~`, `peaking~`: EQ filters for frequency shaping
- `compressor~`: Dynamic range compression for audio
- `pan~`: Stereo positioning control
- `delay~`: Audio delay line with configurable delay time
- `+~`: Audio signal addition
- `sig~`: Generate constant audio signals
- `waveshaper~`: Distortion and waveshaping effects
- `convolver~`: Convolution reverb using impulse responses
  - Connect the message outlet of a `soundfile~` object to the `convolver~` object's message inlet. Then, upload a sound file or send a url as an input message. Send the `read` message to the `soundfile~` object to read the impulse response into the `convolver~` object.
- `fft~`: FFT analysis for frequency domain processing. See the audio analysis section for how to read the FFT data.

osc~ oscillator
✨ Try this patch out in the app!
The osc~ oscillator object supports custom waveforms using PeriodicWave by sending [real: Float32Array, imaginary: Float32Array] to the type inlet. Both arrays must be Float32Array or TypedArray of the same length (minimum 2).
1. Create a `js` object.
2. Connect the `js` object's outlet to `osc~`'s type inlet (the second message inlet from the left).
3. Hit 'Run' on the `js` object to send the arrays to the `osc~` object.
4. The `type` property on the object should say "custom" now.

setRunOnMount(true);
const real = new Float32Array(64);
const imag = new Float32Array(64);
for (let n = 1; n < 64; n++) {
real[n] = (2 / (n * Math.PI)) * Math.sin(n * Math.PI * 0.5);
}
send([real, imag]);
waveshaper~
✨ Try this patch out in the app!
Similar to the periodic wave example above, you can also send a wave shaping distortion curve to the curve inlet of the waveshaper~. It expects a single Float32Array describing the distortion curve.
1. Create a `js` object.
2. Connect the `js` object's outlet to `waveshaper~`'s curve inlet (the second message inlet from the left).
3. Hit 'Run' on the `js` object to send the array to the `waveshaper~` object.
4. The `curve` property on the object should say "curve" now.

Here's an example distortion curve:
setRunOnMount(true);
const k = 50;
const s = 44100;
const curve = new Float32Array(s);
const deg = Math.PI / 180;
for (let i = 0; i < s; i++) {
const x = (i * 2) / s - 1;
curve[i] = ((3 + k) * x * 20 * deg) / (Math.PI + k * Math.abs(x));
}
send(curve);
For more advanced synthesis, check out the `dsp~`, `expr~`, `tone~`, `elem~` or `sonic~` objects. In fact, the default `dsp~`, `tone~` and `elem~` objects are simple sine wave oscillators that work similarly to `osc~`.

sampler~: audio sampler with recording and playback
✨ Try this patch out in the app! This is a sampler that changes the playback speed depending on which notes you pressed.
The sampler~ object records audio from connected sources into a buffer and plays it back with loop points, playback rate, and detune control. It's useful for sampling audio from other nodes, creating loops, and building sample-based instruments.
Messages
- `play` / `bang`: play the recorded sample
- `record`: start recording audio from connected sources
- `end`: stop recording
- `stop`: stop playback
- `loop`: toggle loop and start loop playback
- `{type: 'loop', start: 0.5, end: 2.0}`: set loop points (in seconds) and play
- `loopOn`: enable loop mode
- `{type: 'loopOn', start: 0.5, end: 2.0}`: enable loop with specific points
- `loopOff`: disable loop mode
- `{type: 'setStart', value: 0.5}`: start playback at 0.5 seconds
- `{type: 'setEnd', value: 2.0}`: end playback at 2.0 seconds
- `{type: 'setPlaybackRate', value: 2.0}`: play at double speed
- `{type: 'setPlaybackRate', value: 0.5}`: play at half speed
- `{type: 'setDetune', value: 1200}`: pitch up one octave
- `{type: 'setDetune', value: -1200}`: pitch down one octave

expr~: audio-rate mathematical expression evaluator
- Works like `expr` but runs at audio rate for audio signal processing.
- Hit `shift+enter` to re-run the expression. Clicking out of the `expr~` object will also re-run the expression.
- Uses the same expression syntax as `expr`, so the same mathematical expression will work in both `expr` and `expr~`.
- Use `sig~` if you just need a constant signal.

Available variables:
- `s`: current sample value, a float between -1 and 1
- `i`: current sample index in the buffer, an integer starting from 0
- `t`: current time in seconds, a float starting from 0
- `channel`: current channel index, usually 0 or 1 for stereo
- `bufferSize`: the size of the audio buffer, usually 128
- `samples`: an array of samples from the current channel
- `input`: first input audio signal (for all connected channels), a float between -1 and 1
- `inputs`: every connected input audio signal
- `$1` to `$9`: dynamic control inlets

Examples:
- `sin(t * 440 * PI * 2)` creates a sine wave oscillator at 440Hz
- `random()` creates white noise
- `s` outputs the input audio signal as-is
- `s * $1` applies gain control to the input audio signal
- `s ^ 2` squares the input audio signal for a distortion effect

Use `$1` to `$9` to create dynamic control inlets. For example, `$1 * 440` creates one message inlet that controls the frequency of a sine wave oscillator. Connect a `slider 1 880` object to control the frequency.

Always add a `compressor~` object with appropriate limiter-esque settings after `expr~` to avoid loud audio spikes that can and will damage your hearing and speakers. You have been warned!

dsp~: dynamic JavaScript DSP processor

This is similar to `expr~`, but it takes in a single `process` JavaScript function that processes the audio. It essentially wraps an AudioWorkletProcessor. The worklet is always kept alive until the node is deleted.
Try out some patches that use dsp~ to get an idea of its power:
Some presets are also built on top of dsp~:
- `snapshot~`: takes a snapshot of the incoming audio's first sample and outputs it.

Here's how to make white noise:
function process(inputs, outputs) {
outputs[0].forEach((channel) => {
for (let i = 0; i < channel.length; i++) {
channel[i] = Math.random() * 2 - 1;
}
});
}
Here's how to make a sine wave oscillator at 440Hz:
function process(inputs, outputs) {
outputs[0].forEach((channel) => {
for (let i = 0; i < channel.length; i++) {
let t = (currentFrame + i) / sampleRate;
channel[i] = Math.sin(t * 440 * Math.PI * 2);
}
});
}
You can use the counter variable that increments every time process is called. There are also a couple more variables from the worklet global that you can use.
const process = (inputs, outputs) => {
counter; // increments every time process is called
sampleRate; // sample rate (e.g. 48000)
currentFrame; // current frame number (e.g. 7179264)
currentTime; // current time in seconds (e.g. 149.584)
};
You can use $1, $2, ... $9 to dynamically create value inlets. Message sent to the value inlets will be set within the DSP. The number of inlets and the size of the dsp~ object will adjust automatically.
const process = (inputs, outputs) => {
outputs[0].forEach((channel) => {
for (let i = 0; i < channel.length; i++) {
channel[i] = Math.random() * $1 - $2;
}
});
};
Note: `dsp~` does not use Patchies' JavaScript Runner. It runs in an AudioWorklet (separate thread) which doesn't have access to `window`, DOM APIs, or timing functions like `setTimeout`/`delay`/`setInterval`/`requestAnimationFrame`. This is necessary for real-time audio processing (~345 calls/sec at 44.1kHz).
In addition to the value inlets, we also have messaging capabilities:
- `setPortCount(inletCount, outletCount)` to set the number of message inlets.
- `setAudioPortCount(inletCount, outletCount)` to set the number of audio inlets and outlets.
- `setTitle(title)` to set the title of the object. The default title is `dsp~`.
- `setKeepAlive(enabled)` to control whether the worklet stays active when not connected. `setKeepAlive(true)` keeps the worklet processing even when no audio is flowing through it. `setKeepAlive(false)` lets the worklet stop processing when it's not connected to other audio nodes, which can improve performance. See the `snapshot~` and `bang~` presets for examples on when to use `setKeepAlive`.
- `send` and `recv` to communicate with the outside world. See Message Passing.
- `console.log()` to log messages to the virtual console (forwarded from the AudioWorklet to the main thread).

setPortCount(2);
recv((msg, meta) => {
if (meta.inlet === 0) {
// do something
}
});
You can even use both value inlets and message inlets together in the DSP.
let k = 0;
recv((m) => {
// you can use value inlets `$1` ... `$9` anywhere in the JavaScript DSP code.
k = m + $1 + $2;
});
const process = (inputs, outputs) => {
outputs[0].forEach((channel) => {
for (let i = 0; i < channel.length; i++) {
channel[i] = Math.random() * k;
}
});
};
tone~: Tone.js synthesis and processing

The tone~ object allows you to use Tone.js to create interactive music. Tone.js is a powerful Web Audio framework that provides high-level abstractions for creating synthesizers, effects, and complex audio routing.
By default, tone~ adds sample code for a sine oscillator.
The Tone.js context gives you these variables:
- `Tone`: the Tone.js library
- `inputNode`: a GainNode from the Web Audio API for receiving audio input from other nodes
- `outputNode`: a GainNode from the Web Audio API for sending audio output to connected nodes

In addition to the audio processing capabilities, tone~ also supports messaging. See Patchies JavaScript Runner for available functions (`send`, `recv`, `setPortCount`, `onCleanup`, etc.).
Try out these presets:
- `poly-synth.tone`: Polyphonic synthesizer that plays chord sequences
- `lowpass.tone`: low-pass filter
- `pipe.tone`: directly pipes input to output

Code example:
// Process incoming audio through a filter
const filter = new Tone.Filter(1000, "lowpass");
inputNode.connect(filter.input.input);
filter.connect(outputNode);
// Handle incoming messages to change frequency
recv((m) => {
filter.frequency.value = m;
});
// Return cleanup function to properly dispose Tone.js objects
return {
cleanup: () => filter.dispose(),
};
sonic~: SuperCollider synthesis engine

The sonic~ object integrates SuperSonic, which brings SuperCollider's powerful scsynth audio engine to the browser via AudioWorklet.
By default, sonic~ loads and triggers the Prophet synth on message.
The sonic~ context provides:
- `sonic`: SuperSonic instance for synthesis control
- `SuperSonic`: class for static methods (e.g., `SuperSonic.osc.encode()`)
- `sonicNode`: audio node wrapper (`sonic.node`) for Web Audio connections
- `on(event, callback)`: subscribe to SuperSonic events
- `inputNode`: audio input GainNode
- `outputNode`: audio output GainNode

Available events: `'ready'`, `'loading:start'`, `'loading:complete'`, `'error'`, `'message'`
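For example, subscribing to the lifecycle events listed above (a minimal sketch):

// log when scsynth is ready or fails to load
on("ready", () => console.log("scsynth is ready"));
on("error", (err) => console.error("supersonic error:", err));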
In addition to the synthesis capabilities, sonic~ also supports messaging. See Patchies JavaScript Runner for available functions (send, recv, setPortCount, onCleanup, etc.).
Load and play a synth:
setPortCount(1);
await sonic.loadSynthDef("sonic-pi-prophet");
recv((note) => {
sonic.send(
"/s_new",
"sonic-pi-prophet",
-1,
0,
0,
"note",
note,
"release",
2
);
});
Load and play samples:
await sonic.loadSynthDef("sonic-pi-basic_stereo_player");
await sonic.loadSample(0, "loop_amen.flac");
await sonic.sync();
sonic.send(
"/s_new",
"sonic-pi-basic_stereo_player",
-1,
0,
0,
"buf",
0,
"rate",
1
);
See the SuperSonic documentation and scsynth OSC reference for more details.
Please consider supporting Sam Aaron on Patreon, the creator of Sonic Pi and SuperSonic!
elem~: Elementary Audio synthesis and processing

The elem~ object lets you use Elementary Audio, a library for declarative digital audio signal processing.
By default, elem~ adds a sample code for a simple sine wave oscillator.
The elem~ context gives you these variables:
- `el`: the Elementary Audio core library
- `core`: the WebRenderer instance for rendering audio graphs
- `node`: the AudioWorkletNode for connecting to the Web Audio graph
- `inputNode`: a GainNode from the Web Audio API for receiving audio input from other nodes
- `outputNode`: a GainNode from the Web Audio API for sending audio output to connected nodes

In addition to the audio processing capabilities, elem~ also supports messaging. See Patchies JavaScript Runner for available functions (`send`, `recv`, `setPortCount`, `onCleanup`, etc.).
Here's how to create a simple phasor:
setPortCount(1);
let [rate, setRate] = core.createRef(
"const",
{
value: 440,
},
[]
);
recv((freq) => setRate({ value: freq }));
// also try el.train and el.cycle in place of el.phasor
// first arg is left channel, second arg is right channel
core.render(el.phasor(rate), el.phasor(rate));
csound~: Sound and music computing

[!CAUTION] You must only create one `csound~` object per patch, for now. Creating multiple `csound~` objects will break the patch's audio playback. Deleting the object also stops other objects' audio. These are known bugs.
The csound~ object allows you to use Csound for audio synthesis and processing. Csound is a powerful, domain-specific language for audio programming with decades of development.
You can send messages to control Csound instruments:
- `bang`: Resume or re-eval Csound code
- `play`: Resume playback
- `pause`: Pause playback
- `stop`: Stop playback
- `reset`: Reset the Csound instance
- `{type: 'setChannel', channel: 'name', value: number}`: Set a control channel value
- `{type: 'setChannel', channel: 'name', value: 'string'}`: Set a string channel value
- `{type: 'setOptions', value: '-flagname'}`: Set Csound options and reset
- `{type: 'noteOn', note: 60, velocity: 127}`: Send MIDI note on
- `{type: 'noteOff', note: 60, velocity: 0}`: Send MIDI note off
- `{type: 'readScore', value: 'i1 0 1'}`: Send score statements to Csound
- `{type: 'eval', code: 'instr 1 ... endin'}`: Evaluate Csound code
- `number`: Set the control channel for the inlet index
- `string`: Send input messages (or set an option if it starts with `-`)

midi.in: MIDI input

midi.out: MIDI output

netsend and netrecv: send and receive messages over the network
✨ Try this patch out in the app! This lets you chat over the network. Try clicking on "Share Link" and sending it to your friend!
- Hit `Enter` then type `netsend <channelname>` to create a netsend object that sends messages to the specified channel name, such as `netsend chat`
- Hit `Enter` then type `netrecv <channelname>` to create a netrecv object that receives messages from the specified channel name, such as `netrecv chat`
- Messages sent to `netsend` on a channel are received by every `netrecv` on that channel
- When you create a `netsend` or `netrecv` object, it will attach a `room` parameter to your URL.
- Both sides must have the same `?room=` parameter to be able to connect to each other.
- Remove the `room` parameter to generate a different room to use.
- Use "Share Link" (`Ctrl/Cmd + K` > Share Patch Link) to share the patch with friends. It adds the `room` parameter to your shared link, letting you connect with friends.

Using netsend and netrecv from your own scripts

You can use netsend and netrecv to send and receive messages from your own Node.js and Bun scripts, by using the Trystero library with RTC polyfills such as node-datachannel/polyfill.
Here's an example of an OSC (OpenSoundControl) bridge. You can send messages to a `netsend osc` object to route them to your OSC server.
```js
import { joinRoom } from "trystero";
import { Client } from "node-osc";
import { RTCPeerConnection } from "node-datachannel/polyfill";

const appId = "patchies";
const roomId = "f84df292-3811-4d9b-be54-ce024d4ae1c0"; // your room id!

const room = joinRoom({ appId, rtcPolyfill: RTCPeerConnection }, roomId);
const [netsend, netrecv] = room.makeAction("osc");
const osc = new Client("127.0.0.1", 3333);

room.onPeerJoin((peerId) => console.log("peer joined:", peerId));
room.onPeerLeave((peerId) => console.log("peer left:", peerId));

netrecv((data) => {
  const { address, args } = data;

  // forward the message to the local OSC server
  osc.send(address, ...args, (err) => {
    if (err) console.error(err);
    netsend("osc sent!"); // acknowledge back to the patch
  });
});
```
Here's another example of an Art-Net bridge for controlling DMX-enabled equipment:
```js
import { joinRoom } from "trystero";
import { RTCPeerConnection } from "node-datachannel/polyfill";
import dmxlib from "dmxnet";

const appId = "patchies";
const roomId = "f84df292-3811-4d9b-be54-ce024d4ae1c0"; // your room id!

const room = joinRoom({ appId, rtcPolyfill: RTCPeerConnection }, roomId);

room.onPeerJoin((peerId) => console.log("peer joined:", peerId));
room.onPeerLeave((peerId) => console.log("peer left:", peerId));

const [netsend, netrecv] = room.makeAction("dmx");

const dmxnet = new dmxlib.dmxnet({});
const sender = dmxnet.newSender({
  ip: "127.0.0.1",
  subnet: 0,
  universe: 0,
  port: 6454,
});

netrecv((data, peerId) => {
  if (Array.isArray(data)) {
    for (let frame of data) {
      sender.prepChannel(frame.channel, frame.value);
    }

    sender.transmit();
  }
});
```
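On the Patchies side, a `js` object wired to a `netsend dmx` object could emit frames in the shape this bridge expects: an array of `{ channel, value }` objects. A minimal sketch:

```js
// Send a DMX frame list to the bridge via a connected `netsend dmx` object.
// Each frame sets one DMX channel to a value (0-255).
send([
  { channel: 1, value: 255 }, // DMX channel 1 at full
  { channel: 2, value: 128 }, // DMX channel 2 at half
]);
```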
mqtt: MQTT Client
✨ Try this patch out in the app! This shows how to send and receive messages over MQTT.
- Type `mqtt` in the object box to create the node, then click the gear icon to configure.
- Enter a broker URL (e.g. `wss://test.mosquitto.org:8081/mqtt`) and click Connect.
- Use `loadbang` with `{type: 'connect', url}` to auto-connect after patch load.

Inlet messages:
| Message | Description |
| --- | --- |
| `{type: 'connect', url: 'wss://...'}` | Connect to a broker |
| `{type: 'disconnect'}` | Disconnect from the broker |
| `{type: 'subscribe', topic: '...'}` | Subscribe to a topic |
| `{type: 'unsubscribe', topic: '...'}` | Unsubscribe from a topic |
| `{type: 'publish', topic: '...', message: '...'}` | Publish a message to a topic |
Outlet messages:
| Message | Description |
| --- | --- |
| `{type: 'connected'}` | Successfully connected |
| `{type: 'disconnected'}` | Disconnected from broker |
| `{type: 'message', topic: '...', message: '...'}` | Received a message |
| `{type: 'subscribed', topics: [...]}` | Successfully subscribed |
| `{type: 'unsubscribed', topics: [...]}` | Successfully unsubscribed |
| `{type: 'error', message: '...'}` | An error occurred |
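As a sketch of how these fit together, a `js` object wired to the mqtt node (js outlet into the mqtt inlet, mqtt outlet back into the js inlet) might do the following. The broker URL is the public test broker above; the topic name is just an example:

```js
// Connect, then subscribe and publish once the broker confirms.
send({ type: "connect", url: "wss://test.mosquitto.org:8081/mqtt" });

recv((msg) => {
  if (msg.type === "connected") {
    send({ type: "subscribe", topic: "patchies/demo" }); // example topic
    send({ type: "publish", topic: "patchies/demo", message: "hello!" });
  }

  if (msg.type === "message") {
    console.log(`${msg.topic}: ${msg.message}`);
  }
});
```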
sse: Server-Sent Events

- Type `sse https://example.com/events` to create a node with a pre-filled URL.
- Try `sse https://stream.wikimedia.org/v2/stream/recentchange` to stream recent changes from Wikimedia wikis.
- Send `{type: 'connect', url: string}` to connect and `{type: 'disconnect'}` to disconnect.

tts: Text-to-Speech

- Type `tts` in the object box to create the node, then click the gear icon to select a voice.

Inlet messages:
| Message | Description |
| --- | --- |
| `"text"` (string) | Speak the text |
| `{type: 'setVoice', value: '...'}` | Set the voice by name |
| `{type: 'setRate', value: 0.1-10}` | Set speech rate (default: 1) |
| `{type: 'setPitch', value: 0-2}` | Set pitch (default: 1) |
| `{type: 'setVolume', value: 0-1}` | Set volume (default: 1) |
| `{type: 'stop'}` | Stop current speech |
| `{type: 'pause'}` | Pause current speech |
| `{type: 'resume'}` | Resume paused speech |
Outlet messages:
| Message | Description |
| --- | --- |
| `{type: 'start', text: '...'}` | Speech started |
| `{type: 'end', text: '...'}` | Speech finished |
| `{type: 'error', message: '...'}` | An error occurred |
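For example, a `js` object connected to the tts node's inlet could combine these messages. A minimal sketch:

```js
// Tune the voice, then speak. A plain string is spoken immediately.
send({ type: "setRate", value: 1.2 }); // slightly faster
send({ type: "setVolume", value: 0.8 }); // a bit quieter
send("Hello from Patchies!");
```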
✨ Try this patch out in the app! This shows how to send and receive audio, video and messages via vdo.ninja.
Stream audio, video and messages over WebRTC using VDO.Ninja. These nodes enable real-time collaboration and remote audio/video streaming between Patchies instances, OBS instances, or VDO.Ninja web clients.
vdo.ninja.push: Push audio, video, and messages to a VDO.Ninja room
Inlets:
Outlets:
Settings:
Inlet Messages:
| Message | Description |
| --- | --- |
| `{type: 'connect'}` | Connect using the room/streamId configured in node settings |
| `{type: 'connect', room?, streamId?}` | Connect to a room with the specified values |
| `{type: 'disconnect'}` | Disconnect from the room |
| Any other message | Sent to all peers in the room |
Outlet Messages:
| Message | Description |
| --- | --- |
| `{type: 'connected', room}` | Successfully connected |
| `{type: 'disconnected'}` | Disconnected from room |
| `{type: 'data', data, uuid}` | Received data from a peer |
| `{type: 'track', kind, uuid}` | Received media track |
| `{type: 'streaming', tracks}` | Started streaming with N tracks |
| `{type: 'error', message}` | Connection or streaming error |
vdo.ninja.pull: Pull audio, video, and messages from a VDO.Ninja room
Inlets:
Outlets:
Settings:
Inlet Messages:
| Message | Description |
| --- | --- |
| `{type: 'connect'}` | Connect using the room/streamId configured in node settings |
| `{type: 'connect', room, streamId?}` | Connect to a room with the specified values |
| `{type: 'view', streamId}` | Start viewing a specific stream |
| `{type: 'disconnect'}` | Disconnect from the room |
Outlet Messages:
| Message | Description |
| --- | --- |
| `{type: 'connected', room}` | Successfully connected |
| `{type: 'disconnected'}` | Disconnected from room |
| `{type: 'viewing', streamId}` | Started viewing a stream |
| `{type: 'track', kind, uuid, streamId}` | Received media track |
| `{type: 'message', data, uuid}` | Received data from a peer |
| `{type: 'error', message}` | Connection error |
Tip: In data-only mode, you don't need a stream ID; all peers in the room can exchange messages via mesh networking. In normal mode (with video/audio), you need to specify which stream to view.
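Here's a hedged sketch of data-only messaging from a `js` object wired to a `vdo.ninja.push` node (the room name is just an example):

```js
// Connect in data-only mode, then broadcast once connected.
send({ type: "connect", room: "my-patchies-room" });

recv((msg) => {
  if (msg.type === "connected") {
    send({ hello: "world" }); // non-command messages go to all peers
  }

  if (msg.type === "data") {
    console.log("from peer", msg.uuid, msg.data);
  }
});
```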
> [!CAUTION]
> API keys are stored in localStorage as `gemini-api-key` for Gemini (used by `ai.txt`, `ai.img`, `ai.tts` and `ai.music`). This is super insecure.

Be very cautious: Patchies currently allows arbitrary code execution with no sandboxing whatsoever, so if you load someone's patch containing malicious code, it can steal your API keys. I recommend removing your API keys after use, before loading other people's patches.

Please, do not use your main API keys here! Create separate API keys with limited quota for use in Patchies. I plan to work on a backend-based way to store API keys in the future.
In addition, these objects can be hidden from the insert-object menu and the object list via "CMD + K > Toggle AI Features" if you prefer not to use AI objects in your patches.
With that in mind, use "CMD + K > Set Gemini API Key" to set your Gemini API key for ai.txt, ai.img, ai.tts and ai.music. You can get the API key from Google Cloud Console.
ai.txt: AI text generation

Uses the gemini-3-flash-preview model.

ai.img: AI image generation

Uses the gemini-2.5-flash-image model.

ai.music: AI music generation

Uses the lyria-realtime-exp model.

ai.tts: AI text-to-speech
Inlet messages:
"text" - Generate and play speech for the given text{type: "speak", text: "..."} - Same as above, explicit format{type: "load", text: "..."} - Generate speech without playing (preload){type: "play"} or {type: "bang"} - Play cached audio{type: "stop"} - Stop playback{type: "setVoice", value: "voice-name"} - Set voice (e.g., "en-US-Chirp3-HD-Achernar"){type: "setRate", value: 1.0} - Set speaking rate (0.25-4){type: "setPitch", value: 0} - Set pitch (-20 to 20){type: "setVolume", value: 0} - Set volume gain in dB (-96 to 16)markdown: Markdown rendererMost of the JavaScript-based nodes in Patchies are using the unified JavaScript Runner (JSRunner), which is responsible for executing JavaScript code in a sandboxed environment and providing Patchies-specific features to the code.
markdown: Markdown renderer

Most of the JavaScript-based nodes in Patchies use the unified JavaScript Runner (JSRunner), which executes JavaScript code in a sandboxed environment and provides Patchies-specific features to the code.

The full features of the JavaScript Runner are available in the following objects: js, worker, p5, canvas, canvas.dom, textmode, textmode.dom, three, three.dom, hydra, dom, vue, sonic~, tone~ and elem~.
Some nodes use a single-expression evaluation mode, where the expression is evaluated once for each incoming message. These nodes are filter, map, tap and scan.

Expression nodes cannot use send, onMessage, recv, fft, delay, onCleanup, setInterval, setTimeout or requestAnimationFrame, as they run once on each message and do not allow messaging callbacks.

These functions are available in all JSRunner-enabled nodes:
Console: Use console.log() to log messages to the virtual console (not the browser console).
Timers with auto-cleanup:
- `setInterval(callback, ms)` runs a callback every ms milliseconds. Automatically cleaned up on unmount or code re-execution.
- `setTimeout(callback, ms)` runs a callback after ms milliseconds. Automatically cleaned up on unmount or code re-execution.
- `delay(ms)` returns a Promise that resolves after ms milliseconds. If you stop the js object while awaiting delay(ms), the promise rejects and code execution stops.
- `requestAnimationFrame(callback)` schedules a callback for the next animation frame. Automatically cleaned up on unmount or code re-execution.
- Avoid `window.setInterval`, `window.setTimeout`, or `window.requestAnimationFrame`, as they will not clean up automatically.

Custom cleanup: Use onCleanup(callback) to register a cleanup callback that runs when the node is unmounted or code is re-executed. Useful for disconnecting resources, unsubscribing from events, or any custom cleanup logic.
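A quick sketch tying these together in a `js` object:

```js
// A ticking counter; the interval is cleaned up automatically
// when the node unmounts or the code re-runs.
let count = 0;

setInterval(() => {
  count += 1;
  console.log("tick", count); // logs to the virtual console
}, 1000);

onCleanup(() => {
  console.log("node unmounted or code re-ran");
});
```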
Message passing: Use send(message) and recv(callback) to communicate with other nodes. See Message Passing for details.
Port configuration: Use setPortCount(inletCount, outletCount) to set the number of message inlets and outlets. Use meta.inlet in the recv callback to distinguish which inlet the message came from.
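A minimal sketch, assuming the recv callback receives a meta object with the inlet index as its second argument (as the meta.inlet mention suggests):

```js
// Two message inlets, one outlet; route by inlet index.
setPortCount(2, 1);

recv((msg, meta) => {
  if (meta.inlet === 0) send(msg * 2); // inlet 0: double numbers
  if (meta.inlet === 1) send(String(msg)); // inlet 1: stringify
});
```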
Node title: Use setTitle(title) to set the display title of the node.
Async helpers: Top-level await is supported. Use await delay(ms) to pause execution for ms milliseconds.
Audio analysis: Use fft() to get audio frequency analysis data from a connected fft~ node's message inlet. See Audio Analysis for details.
LLM integration: Use await llm(prompt, options?) to call Google's Gemini API from your code.
- Requires a Gemini API key (Ctrl/Cmd + K > Set Gemini API Key).
- Example: `const response = await llm("Describe this image")`
- Options: `{ imageNodeId?: string, abortSignal?: AbortSignal }`. Pass imageNodeId to include a visual node's output as image context.
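A minimal sketch in a `js` object, assuming your Gemini API key is set:

```js
// Top-level await is supported in JSRunner nodes.
const response = await llm("Write a one-line haiku about patch cables");
console.log(response); // logs to the virtual console
```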
You can import any JavaScript package by using the npm: prefix in the import statement. Note that `import * as X` is not yet supported.

```js
import Matter from "npm:matter-js";
import { uniq } from "npm:lodash-es";

console.log(Matter); // Matter.js library
console.log(uniq([1, 1, 2, 2, 3, 3])); // [1, 2, 3]
```
Alternatively, write the dynamic import yourself:

```js
const { uniq } = await import("https://esm.sh/lodash-es");
console.log(uniq([1, 1, 2, 2, 3, 3])); // [1, 2, 3]
```

```js
// or use the shorthand `await esm()` function that does the same thing
const { uniq } = await esm("lodash-es");
console.log(uniq([1, 1, 2, 2, 3, 3])); // [1, 2, 3]
```
- Use `await getVfsUrl(...)` to load files from the virtual filesystem (VFS) as blob URLs. This lets you use images, videos, fonts, 3D models and other assets that you've uploaded to your patch.
- Use Ctrl/Cmd + K > Toggle Sidebar to toggle the sidebar.
- Use `await fetch(await getVfsUrl(...))` to retrieve the blob.

```js
// In p5:
let img;

async function setup() {
  let url = await getVfsUrl("user://photo.jpg");
  img = await loadImage(url);
}

function draw() {
  image(img, 0, 0);
}
```

```js
// In js or canvas.dom:
const url = await getVfsUrl("user://data.json");
const res = await fetch(url);
const data = await res.json();
```
- Use the `user://` prefix for user-uploaded files.

Sharing code across js blocks

You can share JavaScript code across multiple js blocks by using the `// @lib <module-name>` comment at the top of your code, and exporting at least one constant, function, class, or module.

- Adding `// @lib foobar` on top of the code snippet with an exported constant, function, class, or module will register the module as foobar.
- Use the export syntax in your library js object, e.g. `export const rand = () => Math.random()`. This works for everything: classes, functions, modules.
- Use `import { rand } from 'foobar'` in other objects that support this feature.

See the following example:
✨ Try this patch out in the app!
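If you'd like to see the shape inline, here's a minimal sketch (the `foobar` module and `rand` function are just example names):

```js
// @lib foobar
// This js object registers itself as the shared module "foobar".
export const rand = () => Math.random();
```

```js
// In another js object: import from the shared module.
import { rand } from "foobar";

console.log(rand()); // e.g. 0.4271...
```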
The fft~ audio object gives you an array of frequency bins that you can use to create visualizations in your patch.
First, create an fft~ object and set the bin size (e.g. fft~ 1024). Then, connect the purple "analyzer" outlet to the visual object's inlet.
Supported objects are glsl and swgl, as well as any object using the unified JavaScript Runner, such as canvas.dom, hydra and many more.

For glsl:

- Create a `sampler2D` GLSL uniform inlet and connect the purple "analyzer" outlet of fft~ to it.
- Press Enter to insert an object, and try out the fft-freq.gl and fft-waveform.gl presets for working code samples.
- Name the uniform `uniform sampler2D waveTexture;` to get waveform (time-domain) data. Using other uniform names will give you frequency analysis.

You can call the fft() function to get the audio analysis data in any object using the unified JavaScript Runner.
IMPORTANT: Patchies does NOT use standard audio reactivity APIs in Hydra and P5.js. Instead, you must use the fft() function to get the audio analysis data.
fft() defaults to waveform (time-domain analysis). You can also call fft({type: 'wave'}) to be explicit.
fft({type: 'freq'}) gives you frequency spectrum analysis.
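A small sketch contrasting the two, in a `js` object connected to fft~'s analyzer outlet:

```js
// Compare time-domain and frequency-domain reads.
setInterval(() => {
  const wave = fft({ type: "wave" }).a; // time-domain samples (0-255)
  const freq = fft({ type: "freq" }).a; // frequency bins (0-255)
  console.log(wave[0], freq[0]);
}, 250);
```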
Try out the fft.hydra preset for Hydra.
Try out the fft.p5, fft-sm.p5 and rms.p5 presets for P5.js.
Try out the fft.canvas preset for HTML5 canvas with instant audio reactivity.
- The fft.canvas preset uses canvas.dom (main thread), giving you the same tight audio reactivity as p5. Use canvas.dom or p5 for best results.
- The canvas node has a slight FFT delay, but won't slow down your patch when chained with other visual objects.

The fft() function returns an FFTAnalysis class instance, which contains helpful properties and methods:
- `fft().a`: the raw analysis values as unsigned bytes (0 to 255)
- `fft().getEnergy('bass') / 255`: energy of a named frequency range, normalized to 0-1. You can use these frequency ranges: bass, lowMid, mid, highMid, treble.
- `fft().getEnergy(40, 200) / 255`: energy between two frequencies in Hz
- `fft().rms`: root mean square, a measure of overall loudness (0-1)
- `fft().avg`: average of the analysis values
- `fft().centroid`: the spectral centroid

Where to call fft():
p5: call in your draw function.
canvas and canvas.dom: call in your draw function that is gated by requestAnimationFrame
js: call in your setInterval or requestAnimationFrame callback
```js
setInterval(() => {
  let a = fft().a;
}, 1000);
```
hydra: call inside arrow functions for dynamic parameters
```js
let a = () => fft().getEnergy("bass") / 255;
src(s0).repeat(5, 3, a, () => a() * 2);
```
Q: Why not just use standard Hydra and P5.js audio reactivity APIs like a.fft[0] and p5.FFT()?
A: The p5.sound and a.fft APIs only let you access microphones and audio files. In contrast, Patchies lets you run FFT on any dynamic audio source 😊

Converting Hydra's audio reactivity API into Patchies:
- Replace a.fft[0] with fft().a[0] (un-normalized uint8 values from 0 to 255)
- Replace a.fft[0] with fft().f[0] (normalized float values from 0 to 1)
- Instead of a.setBins(32), change the FFT bins in the fft~ object instead, e.g. fft~ 32
- Instead of a.show(), use the presets below to visualize FFT bins.
Using the value to control a variable:
```diff
- osc(10, 0, () => a.fft[0]*4)
+ osc(10, 0, () => fft().f[0]*4)
  .out()
```
Converting P5's p5.sound API into Patchies:
- Replace p5.Amplitude with fft().rms (RMS as a float between 0 and 1)
- Replace p5.FFT with fft()
- Replace fft.analyze() with nothing: fft() is always up to date.
- Replace fft.waveform() with fft({ format: 'float' }).a, as P5's waveform returns values between -1 and 1. Using format: 'float' gives you a Float32Array.
- Replace fft.getEnergy('bass') with fft().getEnergy('bass') / 255 (normalized to 0-1)
- Replace fft.getCentroid() with fft().centroid

AI is 100% optional and opt-in with Patchies.
Don't want AI? Hit Ctrl/Cmd + K then Toggle AI Features. This permanently turns all AI-based nodes and AI generation features off.
In particular, this will hide all AI-related objects and features, such as ai.txt, ai.img, ai.tts and ai.music. It also disables the experimental Cmd/Ctrl + I AI object insertion shortcut.
> [!TIP]
> Use objects that run on the rendering pipeline, e.g. hydra, glsl, swgl, canvas, textmode, three and img, to reduce lag.
Behind the scenes, the video chaining feature constructs a rendering pipeline based on the use of framebuffer objects (FBOs), which lets visual objects copy data to one another on a framebuffer level, with no back-and-forth CPU-GPU transfers needed. The pipeline makes use of Web Workers, WebGL2, Regl and OffscreenCanvas (for canvas).
It creates a shader graph that streams the low-resolution preview onto the preview panel, while the full-resolution rendering happens in the frame buffer objects. This is much more efficient than rendering everything on the main thread or using HTML5 canvases.
Objects on the rendering pipeline (web worker thread):
- hydra, glsl, swgl, canvas, textmode, three and img run entirely on the web worker thread and are very performant when chaining multiple video objects together, as no CPU-to-GPU pixel copy is required.

Objects on the main thread:

- p5, canvas.dom, textmode.dom, three.dom and bchrn run on the main thread.
- For example, if you chain canvas.dom to bg.out, your FPS will drop by around 10 to 20 FPS. Use "CMD + K > Toggle FPS Monitor" to verify.
- If you chain canvas to bg.out instead, your FPS will not drop at all.