The above image remixes the Hydra code "Filet Mignon" by AFALFL and the GLSL shader "Just another cube" by mrange, licensed under CC BY-NC-SA 4.0 and CC0 respectively.
Patchies is a tool for building interactive audio-visual patches in the browser with JavaScript and GLSL. It's made for creative coding; patch objects and code snippets together to make visualizations, simulations, soundscapes and artistic explorations 🎨
Try it out at patchies.app - it's open source and free to use 😎
Patchies lets you use the audio-visual tools and libraries that you know (and love!), together in one place. For example, you can combine P5.js sketches, Hydra video synths, GLSL shaders, Strudel and ChucK music environments, and Tone.js instruments in a single patch.
Patchies is designed to mix textual coding and visual patching, using the best of both worlds. Instead of writing long chunks of code or patching together a huge web of small objects, Patchies encourages you to write small and compact programs and patch 'em together.
If you haven't used a patching environment before: patching is a visual way to program by connecting objects together. Each object does something, e.g. generates sound, renders visuals, or computes values. You connect the output of one object to the input of another to create a flow of data. We call the whole visual program a "patch" or "patcher".

This lets you see the program's core composition and its in-between results, such as audio, video and message flows, while using tools you're already familiar with that let you do a lot with a bit of code. This is done through Message Passing, Video Chaining and Audio Chaining, which are heavily inspired by tools like Max/MSP, Pure Data, TouchDesigner and VVVV.
Here's a simple Patchies patch that uses Message Passing and Video Chaining together:
It contains a JS random walker (using code from The Nature of Code) which handles `add` and `clear` messages. On each frame, it ticks the walker, then sends the `[x, y]` position to a `p5` object which draws points on the canvas. The `p5` object then pipes the image to a chain of Hydra nodes which mask and diff the visuals.

Try out the patch here to see how it works. Click on the `add` button to add new random walkers, and `clear` to remove all walkers.
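The walker object itself can be tiny. Here's a hypothetical sketch of it (the real patch adapts code from The Nature of Code, and this assumes `add` and `clear` arrive as plain strings):

```js
setPortCount(1, 1) // one message inlet, one outlet

let walkers = []

recv((msg) => {
  if (msg === 'add') walkers.push({x: 100, y: 100})
  if (msg === 'clear') walkers = []
})

// tick each walker and send its [x, y] position downstream
setInterval(() => {
  for (const w of walkers) {
    w.x += Math.random() * 4 - 2
    w.y += Math.random() * 4 - 2
    send([w.x, w.y])
  }
}, 16)
```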
"What I cannot create, I do not understand. Know how to solve every problem that has been solved." - Richard Feynman
- Hit `Enter` to create a new object. Search for the object you want, e.g. `hydra` or `glsl` or `p5`.
  - `Arrow Up/Down` navigates the list. `Enter` inserts the object. `Esc` closes the menu.
- The object browser does the same thing as `Enter`, but it lets you see all objects at a glance 👀
- Hit `Delete` to delete an object.
- Hit `Shift + Enter` while in a code editor to run the code again. This helps you make changes to the code and see the results immediately.
- `Ctrl/Cmd + K` brings up the command palette.
- To create shareable links, click on the "Share Link" button on the bottom right. You can also use "Share Patch" from the command palette.
You can use the Shortcuts button on the bottom right to see a list of shortcuts. Here are some of the most useful ones:
- `Click on object / title`: focus on the object.
- `Drag on object / title`: move the object around.
- `Scroll up`: zoom in.
- `Scroll down`: zoom out.
- `Drag on empty space`: pan the canvas.
- `Enter`: create a new object at the cursor position.
- `Ctrl/Cmd + K`: open the command palette to search for commands.
- `Shift + Enter`: run the code in the code editor within the selected object.
- `Delete`: delete the selected object.
- `Ctrl + C`: copy the selected object.
- `Ctrl + V`: paste the copied object.

Each object can send messages to other objects, and receive messages from them.
In this example, two `slider` objects send out their values to an `expr $1 + $2` object, which adds the numbers together. The result is sent as a message to the `p5` object, which displays it.
Here are some examples to get you started:

- Create two `button` objects, and connect the outlet of one to the inlet of another. Clicking the first button sends a `{type: 'bang'}` message to the second button, which will flash.
- Create a `msg` object with the message `hello world` (you can hit `Enter` and type `m hello world`). Then, hit `Enter` again and search for the `logger.js` preset. Connect them together. Clicking the message sends `hello world` to the console object, which will log it to the virtual console.

In JavaScript-based objects such as `js`, `p5`, `hydra`, `canvas`, `strudel`, `dsp~` and `tone~`, you can use the `send()` and `recv()` functions to send and receive messages between objects. For example:
```js
// In the source `js` object
send('Hello from Object A')
```

```js
// In the target `js` object
recv((data) => {
  // data is "Hello from Object A"
  console.log('Received message:', data)
})
```
This is similar to the second example above, but using JavaScript code.
The `recv` callback also accepts a `meta` argument in addition to the message data. It includes the `inlet` field, which lets you know which inlet the message came from.

You can combine this with `send(data, {to: inletIndex})` to send data to only a particular inlet. For example:

```js
recv((data, meta) => {
  send(data, {to: meta.inlet})
})
```
In the above example, if the message came from inlet 2, it will be sent to outlet 2.
In `js`, `p5`, `hydra`, `canvas`, `dsp~` and `tone~` objects, you can call `setPortCount(inletCount, outletCount)` to set the exact number of message inlets and outlets. For example, `setPortCount(2, 1)` ensures there are 2 message inlets and 1 message outlet.
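For example, here's a sketch of a `js` object that behaves like the `expr $1 + $2` example above, summing whatever arrives on its two inlets:

```js
setPortCount(2, 1) // two message inlets, one outlet

let a = 0
let b = 0

recv((data, meta) => {
  if (meta.inlet === 0) a = data
  if (meta.inlet === 1) b = data
  send(a + b)
})
```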
See the Message Passing with GLSL section for how to use message passing with GLSL shaders to pass data to shaders dynamically.
You can chain visual objects together to create video effects and compositions, by using the output of a visual object as an input to another.
The above example creates a `hydra` object and a `glsl` object that each produce a pattern, and connects them to a `hydra` object that subtracts one visual from the other using `src(s0).sub(s1).out(o0)`.
This is very similar to shader graphs in programs like TouchDesigner, Unity, Blender, Godot and Substance Designer.
To use video chaining:
- Try out the presets to get started quickly.
  - The pipe presets (`pipe.hydra`, `pipe.gl`) simply pass the visual through without any changes. These are the best starting points for chaining.
  - The image-operation presets (`diff.hydra`, `add.hydra`, `sub.hydra`) perform image operations on two visual inputs; see the hydra section.
- The visual object should have at least one visual inlet and/or outlet, i.e. orange circles on the top and bottom.
  - In `hydra`, you can call `setVideoCount(ins = 1, outs = 1)` to specify how many visual inlets and outlets you want. See the hydra section for more details.
  - In `glsl` objects, you can dynamically create sampler2D uniforms. See the glsl section for more details.
- The visual object should have code that takes in a visual source, does something, and outputs a visual. See the above presets for examples.
- Connect the orange outlet of the source object to the orange inlet of the target object.
  - For example, connect a `p5` object to the orange visual inlet of a `pipe.hydra` preset, then connect the `hydra` object to a `pipe.gl` preset. You should see the output of the `p5` object being passed through the `hydra` and `glsl` objects without modification.
- Getting lag and slow patches? See the Rendering Pipeline section on how to avoid lag.
Similar to video chaining, you can chain many audio objects together to create audio effects and soundscapes.
The above example sets up an FM synthesizer audio chain that uses a combination of `osc~` (sine oscillator), `expr` (math expression), `gain~` (gain control), and `fft~` (frequency analysis) objects to create a simple synth with frequency modulation.

For a more fun example, here's a little patch by @kijjaz that uses `expr~` to create a funky beat:
If you have used an audio patcher before (e.g. Pure Data, Max/MSP, FL Studio Patcher, Bitwig Studio's Grid), the idea is similar.
You can use these objects as audio sources: `strudel`, `chuck`, `ai.tts`, `ai.music`, `soundfile~`, `sampler~`, `video`, `dsp~`, `tone~`, as well as the web audio objects (e.g. `osc~`, `sig~`, `mic~`).

- Connect audio sources to `dac~` to hear the audio output, otherwise you will hear nothing. Audio sources do not output audio unless connected to `dac~`. Use `gain~` to control the volume.

You can use these objects to process audio: `gain~`, `fft~`, `+~`, `lowpass~`, `highpass~`, `bandpass~`, `allpass~`, `notch~`, `lowshelf~`, `highshelf~`, `peaking~`, `compressor~`, `pan~`, `delay~`, `waveshaper~`, `convolver~`, `expr~`, `dsp~`, `tone~`.

- Use the `fft~` object to analyze the frequency spectrum of the audio signal. See the Audio Analysis section on how to use FFT with your visual objects.

You can use `dac~` to output audio to your speakers.
Here is a non-exhaustive list of the objects we have in Patchies.
These objects support video chaining and can be connected to create complex visual effects:
`p5`: creates a P5.js sketch

P5.js is a JavaScript library for creative coding. It provides a simple way to create graphics and animations, but you can do very complex things with it.

If you are new to P5.js, I recommend watching Patt Vira's tutorials on YouTube, or on her website. They're fantastic for both beginners and experienced developers.

Read the P5.js documentation to see how P5 works. See the P5.js tutorials and OpenProcessing for more inspiration.

You can call these special methods in your sketch:

- `noDrag()` disables dragging the whole canvas. You must call this method if you want to add interactivity to your sketch, such as sliders or `mousePressed` events. You can call it in your `setup()` function.
  - When `noDrag()` is enabled, you can still drag the "p5" title to move the whole object around.
- `send(message)` and `recv(callback)`: see Message Passing.

You can use any third-party packages you want in your sketch; see importing JavaScript packages from NPM.
```js
import ml5 from 'npm:ml5'

let classifier

function preload() {
  classifier = ml5.imageClassifier('MobileNet')
}
```
You can import shared JavaScript libraries across multiple `p5` objects; see sharing JavaScript across multiple `js` blocks.
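Putting these together, a minimal interactive sketch might look like this (a sketch; `noDrag()`, `send()` and the standard P5 callbacks are described above):

```js
function setup() {
  createCanvas(200, 200)
  noDrag() // let the sketch receive mouse events
}

function draw() {
  background(20)
}

function mousePressed() {
  send([mouseX, mouseY]) // report clicks to connected objects
}
```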
`hydra`: creates a Hydra video synthesizer

- `setVideoCount(ins = 1, outs = 1)` creates the specified number of Hydra source ports.
  - For example, `setVideoCount(2)` initializes the `s0` and `s1` sources with the first two visual inlets.
- The Hydra instance is exposed as `h`. You can render to the four outputs `o0`, `o1`, `o2`, and `o3`.
- `send(message)` and `recv(callback)` work here; see Message Passing.

Presets:

- `pipe.hydra`: passes the image through without any changes
- `diff.hydra`, `add.hydra`, `sub.hydra`, `blend.hydra`, `mask.hydra`: perform image operations (difference, addition, subtraction, blending, masking) on two video inputs
- `filet-mignon.hydra`: example Hydra code "Filet Mignon" from AFALFL. Licensed under CC BY-NC-SA 4.0.
: example Hydra code "Filet Mignon" from AFALFL. Licensed under CC BY-NC-SA 4.0.glsl
: creates a GLSL fragment shaderp5
, hydra
, glsl
, swgl
, bchrn
, ai.img
or canvas
) to the GLSL object via the four visual inlets.uniform float iMix;
, it will create a float inlet for you to send values to.sampler2D
such as uniform sampler2D iChannel0;
, it will create a visual inlet for you to connect video sources to.glsl
, as they accept the same uniforms.red.gl
: solid red colorpipe.gl
: passes the image through without any changesmix.gl
: mixes two video inputsoverlay.gl
: put the second video input on top of the first onefft-freq.gl
: visualizes the frequency spectrum from audio inputfft-waveform.gl
: visualizes the audio waveform from audio inputswitcher.gl
: switches between six video inputs by sending an int message of 0 - 5.You can send messages into the GLSL uniforms to set the uniform values in real-time. First, create a GLSL uniform using the standard GLSL syntax, which adds two dynamic inlets to the GLSL object.
```glsl
uniform float iMix;
uniform vec2 iFoo;
```
You can now send a message of value `0.5` to `iMix`, and `[0.0, 0.0]` to `iFoo`. When you send messages to these inlets, it will set the internal GLSL uniform values for the object. The type of the message must match the type of the uniform, otherwise the message will not be sent.
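For example, you could animate `iMix` from a `js` object connected to the `iMix` inlet (a sketch; any float-producing object works):

```js
let t = 0

// send a float every 50ms; the type matches `uniform float iMix;`
setInterval(() => {
  t += 0.05
  send(Math.abs(Math.sin(t)))
}, 50)
```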
If you want to set a default uniform value for when the patch gets loaded, use the `loadbang` object connected to a `msg` object or a slider. `loadbang` sends a `{type: 'bang'}` message when the patch is loaded, which you can use to trigger a `msg` object or a `slider` to send the default value to the GLSL uniform inlet.
Supported uniform types are `bool` (boolean), `int` (integer), `float` (floating-point number), `vec2`, `vec3`, and `vec4` (arrays of 2, 3, or 4 numbers).
`swgl`: creates a SwissGL shader

SwissGL is a wrapper for WebGL2 that lets you create shaders in very few lines of code. Here is how to make a simple animated mesh:
```js
function render({t}) {
  glsl({
    t,
    Mesh: [10, 10],
    VP: `XY*0.8+sin(t+XY.yx*2.0)*0.2,0,1`,
    FP: `UV,0.5,1`,
  })
}
```
See the SwissGL examples for some inspiration on how to use SwissGL.
`canvas`: creates a JavaScript canvas

You can use the HTML5 Canvas API to create custom graphics and animations. The rendering context is exposed as `ctx` in the JavaScript code, so you can use methods like `ctx.fill()` to draw on the canvas.

You cannot use DOM APIs such as `document` or `window` in the canvas code, because the HTML5 canvas runs as an offscreen canvas on the rendering pipeline.
You can call these special methods in your canvas code:

- `noDrag()` disables dragging the whole canvas. This is needed if you want to add interactivity to your canvas, such as adding sliders. You can call it in your `setup()` function.
- `send(message)` and `recv(callback)`: see Message Passing.
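Putting it together, a minimal animated canvas might look like this (a sketch, assuming a canvas of at least 200x200 pixels):

```js
let t = 0

function draw() {
  t += 0.05
  ctx.clearRect(0, 0, 200, 200)

  // pulsing circle
  ctx.beginPath()
  ctx.arc(100, 100, 40 + Math.sin(t) * 20, 0, Math.PI * 2)
  ctx.fillStyle = 'coral'
  ctx.fill()

  requestAnimationFrame(draw)
}

draw()
```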
`bchrn`: renders the Winamp Milkdrop visualizer (Butterchurn)

- You can pipe the visuals into other visual objects (e.g. `hydra` and `glsl`) to derive more visual effects.

`img`: display images

- Inlet `string`: load the image from the given URL.

`video`: display videos

- Inlet `bang`: restart the video.
- Inlet `string`: load the video from the given URL.
- `{type: 'play'}`: play the video.
- `{type: 'pause'}`: pause the video.
- `{type: 'loop', value: false}`: do not loop the video.

`bg.out`: background output
`js`: a JavaScript code block

- Use `console.log()` to log messages to the virtual console.
- Use `setInterval(callback, ms)` to run a callback every `ms` milliseconds.
  - This is a special version of `setInterval` that automatically cleans up the interval on unmount. Do not use `window.setInterval` from the window scope, as that will not clean up.
- Use `requestAnimationFrame(callback)` to run a callback on the next animation frame.
  - This is a special version of `requestAnimationFrame` that automatically cleans up on unmount. Do not use `window.requestAnimationFrame` from the window scope, as that will not clean up.
- Use `send()` and `recv()` to send and receive messages between objects. This also works in other JS-based objects. See the Message Passing section above.
- Use `setRunOnMount(true)` to run the code automatically when the object is created. By default, the code only runs when you hit the "Play" button.
- Use `setPortCount(inletCount, outletCount)` to set the number of message inlets and outlets you want. By default, there is 1 inlet and 1 outlet.
  - Use `meta.inlet` in the `recv` callback to distinguish which inlet a message came from.
  - Use `send(data, { to: inletIndex })` to send data to a specific inlet of another object.
- Use `await delay(ms)` to pause the code for `ms` milliseconds. For example, `await delay(1000)` pauses the code for 1 second.
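Putting several of these together, here's a sketch of a `js` object that echoes messages from its second inlet after a delay:

```js
setPortCount(2, 1) // two message inlets, one outlet
setRunOnMount(true)

recv(async (data, meta) => {
  if (meta.inlet === 1) {
    await delay(500) // wait 500ms before forwarding
    send(data)
  }
})
```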
This feature is only available in `js` and `p5` objects, for now.
You can import any JavaScript package by using the `npm:` prefix in the import statement. Note that `import * as X` is not yet supported.

```js
import Matter from 'npm:matter-js'
import {uniq} from 'npm:lodash-es'

console.log(Matter) // Matter.js library
console.log(uniq([1, 1, 2, 2, 3, 3])) // [1, 2, 3]
```
Alternatively, write the dynamic import yourself:
```js
const {uniq} = await import('https://esm.run/lodash-es')
console.log(uniq([1, 1, 2, 2, 3, 3])) // [1, 2, 3]
```

Or use the shorthand `await esm()` function, which does the same thing:

```js
const {uniq} = await esm('lodash-es')
console.log(uniq([1, 1, 2, 2, 3, 3])) // [1, 2, 3]
```
Sharing JavaScript across multiple `js` blocks

This feature is only available in `js` and `p5` objects, for now.

You can share JavaScript code across multiple `js` blocks by using the `// @lib <module-name>` comment at the top of your code.

- `// @lib foobar` registers the module as `foobar`. This turns the object into a library object, as shown by the package icon.
- Export values with the regular `export` syntax, e.g. `export const rand = () => Math.random()`. This works for everything: classes, functions, modules.
- Import them in another block with `import { rand } from 'foobar'`.

See the following example:
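For instance, a pair of blocks might look like this (a minimal sketch based on the rules above; `foobar` and `rand` are example names):

```js
// @lib foobar
// this object is now a library named `foobar`
export const rand = () => Math.random()
```

```js
// in another `js` object
import {rand} from 'foobar'

console.log(rand())
```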
`expr`: mathematical expression evaluator

Evaluates mathematical expressions and formulas.

Use the `$1` to `$9` variables to create inlets dynamically. For example, `$1 + $2` creates two inlets for addition, and sends a message with the result each time inlet one or two is updated.

This uses the expr-eval library from silentmatt under the hood to evaluate mathematical expressions. There are many mathematical functions and operators you can use; see the expression syntax section. It's very helpful for control signals and parameter mapping.

Expressions can be multi-line, and you can create variables. Make sure to use `;` to separate statements. For example:
```
a = $1 * 2;
b = $2 + 3;
a + b
```
This creates two inlets, and sends the result of `(inlet1 * 2) + (inlet2 + 3)` each time inlet one or two is updated.

You can also define functions to make the code easier to read, e.g. `add(a, b) = a + b`.
`python`: creates a Python code environment

`asm`: virtual stack machine assembly interpreter

`asm` lets you write a simple flavor of stack machine assembly to construct concise programs. This was heavily inspired by Zachtronics games like TIS-100 and Shenzhen I/O, where you write small assembly programs to interact with the world and solve problems.

The stack machine module is quite extensive, with over 50 assembly instructions and a rich set of features. There are lots of quality-of-life tools unique to Patchies, like the color-coded memory region visualizer, line-by-line instruction highlighting, and external memory cells (`asm.mem`).

See the documentation for the assembly module for the full instruction set and syntax, what the `asm` object and its friends can do, and how to use it.

Try out my example assembly patch to get a feel for how it works.
`button`: a simple button

- Sends a `{type: 'bang'}` message when clicked.
- Inlet `any`: flashes the button when it receives any message, and outputs the `{type: 'bang'}` message.

`msg`: message object

- Hit `Enter` and type `m <message>` to create a `msg` object with the given message.
  - For example, `m {type: 'start'}` creates a `msg` object that sends `{type: 'start'}` when clicked.
- `100` sends the number 100.
- `hello` or `"hello"` sends the string "hello".
- `{type: 'bang'}` sends the object `{type: 'bang'}`. This is what `button` does.
- Inlet `{type: 'bang'}`: outputs the message.
`slider`: numerical value slider

- Hit `Enter` and type in these short commands to create sliders with specific ranges:
  - `slider <min> <max>`: integer slider control. Example: `slider 0 100`
  - `fslider <min> <max>`: floating-point slider control. Example: `fslider 0.0 1.0`. `fslider` defaults to the `-1.0` to `1.0` range if no arguments are given.
  - `vslider <min> <max>`: vertical integer slider control. Example: `vslider -50 50`
  - `vfslider <min> <max>`: vertical floating-point slider control. Example: `vfslider -1.0 1.0`. `vfslider` defaults to the `-1.0` to `1.0` range if no arguments are given.
- Inlet `{type: 'bang'}`: outputs the current slider value.
- Inlet `number`: sets the slider to the given number within the range and outputs the value.

`textbox`: multi-line text input

- Inlet `{type: 'bang'}`: outputs the current text.
- Inlet `string`: sets the text to the given string.
`strudel`: Strudel music environment

- Hit `Ctrl/Cmd + Enter` to re-evaluate the code.
- Connect to a `dac~` object to hear the audio output.
- `recv` only works with a few functions, e.g. `setcpm`, right now. Try `recv(setCpm)` to automate the cpm value.
`chuck`: creates a ChucK audio programming environment

- `Ctrl/Cmd + Enter`: replaces the most recent shred.
- `Ctrl/Cmd + \`: adds a new shred to the shreds list.
- `Ctrl/Cmd + Backspace`: removes the most recent shred.
`object`: textual object system

- Hit `Enter`, and type in the name of the object you want to create.
- Hover over an object's arguments (e.g. the `gain~` object's gain value, such as `1.0`) to see the tooltip.

These objects run at control rate, which means they process messages (control signals), but not audio signals:
- `mtof`: Convert MIDI note numbers to frequencies
- `loadbang`: Send bang on patch load
- `metro`: Metronome for regular timing
- `delay`: Message delay (not audio)
- `adsr`: ADSR envelope generator

Most of these objects are easy to re-implement yourself with the `js` object, as they simply emit messages, but they are provided for your convenience!
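For instance, here's a sketch of a bare-bones `metro` built from a `js` object:

```js
setRunOnMount(true)

// emit a bang every 500ms, like `metro 500`
setInterval(() => {
  send({type: 'bang'})
}, 500)
```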
These objects run at audio rate, which means they process audio signals in real time. They are represented with a `~` suffix in their names.
Audio Processing:
- `gain~`: Amplifies audio signals with gain control
- `osc~`: Oscillator for generating audio waveforms (sine, square, sawtooth, triangle)
- `lowpass~`, `highpass~`, `bandpass~`, `allpass~`, `notch~`: Various audio filters
- `lowshelf~`, `highshelf~`, `peaking~`: EQ filters for frequency shaping
- `compressor~`: Dynamic range compression for audio
- `pan~`: Stereo positioning control
- `delay~`: Audio delay line with configurable delay time
- `+~`: Audio signal addition
- `sig~`: Generates constant audio signals
- `waveshaper~`: Distortion and waveshaping effects
- `convolver~`: Convolution reverb using impulse responses
  - To load an impulse response, connect a `soundfile~` object to the `convolver~` object's `message` inlet. Then, upload a sound file or send a URL as an input message.
  - Send a `{type: "read"}` message to the `soundfile~` object to read the impulse response into the `convolver~` object.
- `split~`: Split multi-channel audio into separate mono channels.
- `merge~`: Merge multiple mono channels into a single multi-channel audio signal.
- `fft~`: FFT analysis for frequency domain processing. See the audio analysis section for how to read the FFT data.
- `meter~`: Visual audio level meter that shows the loudness of the audio source.

Sound Input and Output:
- `soundfile~`: Load and play audio files with transport controls
  - Use `soundurl~ <url>` to load audio files and streams from URLs directly.
  - Example: `soundurl~ http://stream.antenne.de:80/antenne` streams Antenne Bayern live radio.
- `sampler~`: Sample playback with triggering capabilities
- `mic~`: Capture audio from microphone input
- `dac~`: Send audio to speakers

To build your own audio processors, use the `dsp~`, `expr~` or `tone~` objects. In fact, the default `dsp~` and `tone~` objects are simple sine wave oscillators that work similar to `osc~`.
`expr~`: audio-rate mathematical expression evaluator

- Similar to `expr`, but runs at audio rate for audio signal processing.
- Hit `Shift + Enter` to re-run the expression. Clicking outside of the `expr~` object will also re-run the expression.
- Uses the same expression engine as `expr`, so the same mathematical expression will work in both `expr` and `expr~`.
- Use `sig~` if you just need a constant signal.

These variables are available in the expression:

- `s`: current sample value, a float between -1 and 1
- `i`: current sample index in buffer, an integer starting from 0
- `t`: current time in seconds, a float starting from 0
- `channel`: current channel index, usually 0 or 1 for stereo
- `bufferSize`: the size of the audio buffer, usually 128
- `samples`: an array of samples from the current channel
- `input`: first input audio signal (for all connected channels), a float between -1 and 1
- `inputs`: every connected input audio signal
- `$1` to `$9`: dynamic control inlets

Examples:

- `sin(t * 440 * PI * 2)` creates a sine wave oscillator at 440Hz
- `random()` creates white noise
- `s` outputs the input audio signal as-is
- `s * $1` applies gain control to the input audio signal
- `s ^ 2` squares the input audio signal for a distortion effect

Use `$1` to `$9` to create dynamic control inlets. For example, `sin(t * $1 * PI * 2)` creates one message inlet that controls the frequency of a sine wave oscillator; connect a `slider 1 880` object to control the frequency.

Be careful with your volumes! Add a `compressor~` object with appropriate limiter-esque settings after `expr~` to avoid loud audio spikes that can and will damage your hearing and speakers. You have been warned!
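Putting it together, here's a sketch of a tremolo effect on the input signal, with one control inlet (`$1`) for the tremolo rate in Hz (using only the `s`, `t` and `$1` variables described above):

```
s * (0.5 + 0.5 * sin(t * $1 * PI * 2))
```

Connect a `slider 1 20` to the `$1` inlet to sweep the tremolo rate.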
`dsp~`: dynamic JavaScript DSP processor

This is similar to `expr~`, but it takes in a single `process` JavaScript function that processes the audio. It essentially wraps an `AudioWorkletProcessor`. The worklet is always kept alive until the node is deleted.

Try out some patches that use `dsp~` to get an idea of its power.

Some presets are also built on top of `dsp~`:

- `snapshot~`: takes a snapshot of the incoming audio's first sample and outputs it.

Here's how to make white noise:
```js
function process(inputs, outputs) {
  outputs[0].forEach((channel) => {
    for (let i = 0; i < channel.length; i++) {
      // random values in the -1..1 range
      channel[i] = Math.random() * 2 - 1
    }
  })
}
```
Here's how to make a sine wave oscillator at 440Hz:
```js
function process(inputs, outputs) {
  outputs[0].forEach((channel) => {
    for (let i = 0; i < channel.length; i++) {
      let t = (currentFrame + i) / sampleRate
      channel[i] = Math.sin(t * 440 * Math.PI * 2)
    }
  })
}
```
You can use the `counter` variable, which increments every time `process` is called. There are also a couple more variables from the worklet global scope that you can use:
```js
const process = (inputs, outputs) => {
  counter // increments every time process is called
  sampleRate // sample rate (e.g. 48000)
  currentFrame // current frame number (e.g. 7179264)
  currentTime // current time in seconds (e.g. 149.584)
}
```
You can use `$1`, `$2`, ... `$9` to dynamically create value inlets. Messages sent to the value inlets set the corresponding values within the DSP. The number of inlets and the size of the `dsp~` object adjust automatically.
```js
const process = (inputs, outputs) => {
  outputs[0].forEach((channel) => {
    for (let i = 0; i < channel.length; i++) {
      channel[i] = Math.random() * $1 - $2
    }
  })
}
```
In addition to the value inlets, we also have messaging capabilities:

- `setPortCount(inletCount, outletCount)` sets the number of message inlets and outlets.
- `setAudioPortCount(inletCount, outletCount)` sets the number of audio inlets and outlets.
- `setTitle(title)` sets the title of the object.
- `dsp~` supports `send` and `recv` to communicate with the outside world. See Message Passing.

```js
setPortCount(2)

recv((msg, meta) => {
  if (meta.inlet === 0) {
    // do something
  }
})
```
You can even use both value inlets and message inlets together in the DSP.
```js
let k = 0

recv((m) => {
  // you can use value inlets `$1` ... `$9` anywhere in the JavaScript DSP code
  k = m + $1 + $2
})

const process = (inputs, outputs) => {
  outputs[0].forEach((channel) => {
    for (let i = 0; i < channel.length; i++) {
      channel[i] = Math.random() * k
    }
  })
}
```
`tone~`: Tone.js synthesis and processing

The `tone~` object allows you to use Tone.js to create interactive music. Tone.js is a powerful Web Audio framework that provides high-level abstractions for creating synthesizers, effects, and complex audio routing.

By default, `tone~` starts with sample code for a sine oscillator.

The Tone.js context gives you these variables:

- `Tone`: the Tone.js library
- `inputNode`: a Web Audio API GainNode for receiving audio input from other nodes
- `outputNode`: a Web Audio API GainNode for sending audio output to connected nodes

Try out these presets:

- `poly-synth.tone`: polyphonic synthesizer that plays chord sequences
- `lowpass.tone`: low-pass filter
- `pipe.tone`: directly pipes input to output

Code example:
```js
// Process incoming audio through a filter
const filter = new Tone.Filter(1000, 'lowpass')
inputNode.connect(filter.input.input)
filter.connect(outputNode)

// Handle incoming messages to change frequency
recv((m) => {
  filter.frequency.value = m
})

// Return cleanup function to properly dispose Tone.js objects
return {
  cleanup: () => filter.dispose(),
}
```
`midi.in`: MIDI input

`midi.out`: MIDI output

`netsend`: network message sender

- Use `netsend <channelname>` to create a `netsend` object that sends messages to the specified channel name. Example: `netsend drywet`

`netrecv`: network message receiver

- Use `netrecv <channelname>` to create a `netrecv` object that receives messages from the specified channel name. Example: `netrecv drywet`
> [!CAUTION]
> API keys are currently stored in localStorage as `gemini-api-key` for Gemini (for `ai.txt`, `ai.img` and `ai.music`), and `celestiai-api-key` for `ai.tts`. This is currently super insecure.

Be very cautious: Patchies currently allows arbitrary code execution with no sandboxing whatsoever, so if you load someone's patch with malicious code, they can steal your API keys. I recommend removing API keys after use, before loading other people's patches.

Please, do not use your main API keys here! Create separate API keys with limited quota for use in Patchies. I plan to work on a backend-based way to store API keys in the future.
In addition, these objects can be hidden from the insert-object menu and the object list via "CMD + K > Toggle AI Features" if you prefer not to use AI objects in your patches.
With that in mind, use "CMD + K > Set Gemini API Key" to set your Gemini API key for `ai.txt`, `ai.img` and `ai.music`. You can get the API key from Google Cloud Console.

`ai.txt`: AI text generation

`ai.img`: AI image generation

`ai.music`: AI music generation

`ai.tts`: AI text-to-speech

`markdown`: Markdown renderer
The `fft~` audio object gives you an array of frequency bins that you can use to create visualizations in your patch.

First, create an `fft~` object and set the bin size (e.g. `fft~ 1024`). Then, connect the purple "analyzer" outlet to the visual object's inlet.

Supported objects are `glsl`, `hydra`, `p5`, `canvas` and `js`.
To use FFT with GLSL:

- Create a `sampler2D` GLSL uniform inlet and connect the purple "analyzer" outlet of `fft~` to it.
- Hit `Enter` to insert an object, and try out the `fft-freq.gl` and `fft-waveform.gl` presets for working code samples.
- To get waveform (time-domain) data, name the uniform `uniform sampler2D waveTexture;`. Using other uniform names will give you frequency analysis.
You can call the `fft()` function to get the audio analysis data in the supported JavaScript-based objects: `hydra`, `p5`, `canvas` and `js`.
IMPORTANT: Patchies does NOT use the standard audio-reactivity APIs in Hydra and P5.js. Instead, you must use the `fft()` function to get the audio analysis data.

- `fft()` defaults to waveform (time-domain) analysis. You can also call `fft({type: 'wave'})` to be explicit.
- `fft({type: 'freq'})` gives you frequency-spectrum analysis.

Try out the `fft.hydra` preset for Hydra. Try out the `fft-capped.p5`, `fft-full.p5` and `rms.p5` presets for P5.js. Try out the `fft.canvas` preset for HTML5 canvas.
- There is a delay for worker-based objects such as `hydra` and `canvas` in retrieving the audio analysis data, so the audio reactivity will not be as tight as `p5`.
- `canvas` will not slow down your patch if you chain it with other visual objects like `hydra` or `glsl`, thanks to running on the rendering pipeline.
The `fft()` function returns an `FFTAnalysis` class instance, which contains helpful properties and methods:

- `fft().a`: the raw analysis values
- `fft().getEnergy('bass') / 255`: the energy of a named frequency range, normalized to 0 - 1. You can use these frequency ranges: `bass`, `lowMid`, `mid`, `highMid`, `treble`.
- `fft().getEnergy(40, 200) / 255`: the energy of a custom frequency range in Hz
- `fft().rms`: root mean square (loudness), a float between 0 and 1
- `fft().avg`: the average of the analysis values
- `fft().centroid`: the spectral centroid

Where to call `fft()`:
- `p5`: call it in your `draw` function.
- `canvas`: call it in your `draw` function, gated by `requestAnimationFrame`.
- `js`: call it in your `setInterval` or `requestAnimationFrame` callback:

  ```js
  setInterval(() => {
    let a = fft().a
  }, 1000)
  ```

- `hydra`: call it inside arrow functions for dynamic parameters:

  ```js
  let a = () => fft().getEnergy('bass') / 255
  src(s0).repeat(5, 3, a, () => a() * 2)
  ```
Q: Why not just use the standard Hydra and P5.js audio-reactivity APIs like `a.fft[0]` and `p5.FFT()`?

The `p5-sound` and `a.fft` APIs only let you access microphones and audio files. In contrast, Patchies lets you run FFT on any dynamic audio source 😊

Converting Hydra's audio-reactivity API into Patchies:
- Replace `a.fft[0]` with `fft().a[0]` (un-normalized uint8 values from 0 - 255)
- Replace `a.fft[0]` with `fft().f[0]` (normalized float values from 0 - 1)
- Instead of `a.setBins(32)`, change the FFT bins in the `fft~` object instead, e.g. `fft~ 32`
- Instead of `a.show()`, use the presets below to visualize the FFT bins.
Using the value to control a variable:

```diff
- osc(10, 0, () => a.fft[0]*4)
+ osc(10, 0, () => fft().f[0]*4)
  .out()
```
Converting P5's p5.sound API into Patchies:

- Replace `p5.Amplitude` with `fft().rms` (RMS as a float between 0 - 1)
- Replace `p5.FFT` with `fft()`
- Replace `fft.analyze()` with nothing; `fft()` is always up to date.
- Replace `fft.waveform()` with `fft({ format: 'float' }).a`, as P5's waveform returns values between -1 and 1. Using `format: 'float'` gives you a Float32Array.
- Replace `fft.getEnergy('bass')` with `fft().getEnergy('bass') / 255` (normalized to 0 - 1)
- Replace `fft.getCentroid()` with `fft().centroid`
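Putting the conversion together, a minimal audio-reactive `p5` sketch might look like this (a sketch, assuming the purple analyzer outlet of an `fft~` object is connected to the `p5` object):

```js
function setup() {
  createCanvas(200, 200)
}

function draw() {
  background(0)

  // normalized bass energy, 0 - 1
  const bass = fft().getEnergy('bass') / 255
  circle(width / 2, height / 2, 20 + bass * 150)
}
```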
If you dislike AI features (e.g. text generation, image generation, speech synthesis and music generation), you can hide them by activating the command palette with `CMD + K`, then searching for "Toggle AI Features". This will hide all AI-related objects and features, such as `ai.txt`, `ai.img`, `ai.tts` and `ai.music`.
> [!TIP]
> Use objects that run on the rendering pipeline, e.g. `hydra`, `glsl`, `swgl`, `canvas` and `img`, to reduce lag.
Behind the scenes, the video chaining feature constructs a rendering pipeline based on framebuffer objects (FBOs), which let visual objects copy data to one another at the framebuffer level, with no back-and-forth CPU-GPU transfers needed. The pipeline makes use of Web Workers, WebGL2, Regl and OffscreenCanvas (for `canvas`).

It creates a shader graph that streams a low-resolution preview onto the preview panel, while the full-resolution rendering happens in the framebuffer objects. This is much more efficient than rendering everything on the main thread or using HTML5 canvases.
Objects such as `hydra`, `glsl`, `swgl`, `canvas` and `img` run entirely on the web worker thread and are therefore very high-performance.

In contrast, objects such as `p5` and `bchrn` run on the main thread: on each frame, we have to create an image bitmap on the main thread, then transfer it to the web worker thread for rendering. This is much slower than using FBOs and can cause lag if you have many `p5` or `bchrn` objects in your patch.