Minimal reproduction showing that @tanstack/[email protected] produces linear read-cost growth (and therefore quadratic aggregate work) on rapid state changes. The cause: the Svelte adapter's default mergeOptions wraps the previous merge result in a new object whose property getters walk [prev, next] on every property access.
The Svelte adapter's createTable sets up a $effect.pre that, on every reactive tick, calls:

```ts
table.setOptions((prev) => mergeObjects(prev, mergedOptions));
```

table.setOptions then runs:

```ts
newOptions = functionalUpdate(updater, table.options);                  // = mergeObjects(prev, closure)
mergedOptions = table.options.mergeOptions(table.options, newOptions);  // default = mergeObjects(prev, newOptions)
table.optionsStore.setState(() => mergedOptions);
```
So table.options after N ticks is a chain of nested mergeObjects results. Each top-level getter walks its two sources; if the key isn't in the "new" side, the walk descends into prev which has its own getter chain — and so on.
Two classes of keys behave differently:

- User-provided keys (data, columns, state, enableRowSelection, getRowId, etc.) are in the adapter's closure-captured mergedOptions. Every getter walk finds them on the first source check. Read cost is constant, regardless of chain depth.
- Feature defaults (keys injected by each feature's getDefaultTableOptions, e.g. onExpandedChange, onColumnVisibilityChange, onRowSelectionChange, onSortingChange, onColumnFiltersChange, getColumnCanGlobalFilter, isMultiSortEvent) live only in the initial plain store state {...defaultOptions, ...tableOptions} at the bottom of the chain. Every read walks the full chain. Read cost is O(chain depth).

Real row-model pipelines (filtering, sorting, expansion) read these defaults many times per row per tick. After N keystrokes, chain depth is O(N), each read is O(N), and the pipeline runs over O(M) rows → O(N × M) work per tick, O(N² × M) aggregate.
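The two read costs can be sketched with a simplified stand-in for table-core's mergeObjects (a Proxy-based model for illustration only; the real implementation uses getter descriptors over a source list, but the read-walk shape is the same):

```ts
// Simplified model of the getter chain (NOT table-core's actual source):
// each merge layer checks `next` first, then falls through to `prev`.
let hops = 0; // counts how many merge layers a single read traverses

function mergeObjects(prev: any, next: any): any {
  return new Proxy({}, {
    get(_target, key) {
      hops++;
      return key in next ? next[key] : prev[key];
    },
  });
}

// Bottom of the chain: feature defaults, set once at table creation.
const base = { onExpandedChange: () => {}, data: [] as number[] };
// Closure-captured user options, re-merged on every reactive tick.
const closure = { data: [1, 2, 3], columns: [] as string[] };

let options: any = base;
for (let tick = 0; tick < 100; tick++) {
  options = mergeObjects(options, closure);
}

hops = 0;
void options.data; // user key: found on the first source check
const topCost = hops; // 1, regardless of depth

hops = 0;
void options.onExpandedChange; // default key: walks every layer
const bottomCost = hops; // 100, i.e. O(chain depth)
```

Reading a user key costs one hop no matter how deep the chain is; reading a feature default costs one hop per layer.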
On this repro with 500 rows and seven features active, using the default adapter merger:
| keystrokes | chain depth | onExpandedChange (20k reads) | enableRowSelection (20k reads) |
|---|---|---|---|
| 0 | 2 | ~2 ms | ~0.2 ms |
| 10 | 23 | ~55 ms | ~0.2 ms |
| 50 | 103 | ~94 ms | ~0.2 ms |
| 100 | 203 | ~275 ms | ~1.5 ms |
User-key reads stay flat. Default-only reads grow linearly with depth.
Provide a mergeOptions implementation that resolves each key once and stores the result as a plain data descriptor — no getter chain on the output:
```ts
createTable({
  mergeOptions: (prev, next) => {
    const result: Record<PropertyKey, unknown> = {};
    const allKeys = new Set([...Reflect.ownKeys(prev), ...Reflect.ownKeys(next)]);
    for (const key of allKeys) {
      const fromNext = (next as Record<PropertyKey, unknown>)[key];
      const value =
        fromNext !== undefined
          ? fromNext
          : (prev as Record<PropertyKey, unknown>)[key];
      result[key] = value;
    }
    return result;
  },
  // ...rest of options
});
```
Reactivity is preserved: the adapter's $effect.pre fires setOptions on every state/data change, so each merge happens with fresh values and table.options is repopulated with fresh snapshots before any consumer reads it within the tick.
With this merger, onExpandedChange reads drop from 275 ms / 20k reads at 100 keystrokes to ~0 ms — a true O(1).
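As a sanity check on the workaround's shape, a standalone copy of the same merge (names here are illustrative, not the adapter's) can be chained many times and still yield a flat object of plain data properties:

```ts
// Standalone copy of the workaround merger, for illustration.
const flatMerge = (prev: object, next: object) => {
  const result: Record<PropertyKey, unknown> = {};
  const allKeys = new Set([...Reflect.ownKeys(prev), ...Reflect.ownKeys(next)]);
  for (const key of allKeys) {
    const fromNext = (next as Record<PropertyKey, unknown>)[key];
    result[key] =
      fromNext !== undefined
        ? fromNext
        : (prev as Record<PropertyKey, unknown>)[key];
  }
  return result;
};

// A default set once at the bottom, then 100 simulated ticks on top.
const defaults = { onExpandedChange: "default-handler" };
let options: Record<PropertyKey, unknown> = { ...defaults };
for (let tick = 0; tick < 100; tick++) {
  options = flatMerge(options, { data: [tick] });
}

// The default survives 100 merges, and every key is a plain data
// descriptor: reads never walk a chain.
const desc = Object.getOwnPropertyDescriptor(options, "onExpandedChange")!;
```

Each merge resolves every key eagerly, so the output never captures its inputs; the previous tick's object becomes garbage immediately.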
A mergeOptions that copies descriptors (preserving the getters but flattening the top object) halves the constant because each merge adds one getter hop instead of two. But the chain of closures still grows linearly with N:
```ts
// Not enough — still O(N) per read, just with a smaller constant.
mergeOptions: (prev, next) => {
  const result = {};
  const copy = (s: object) => {
    for (const key of Reflect.ownKeys(s)) {
      const desc = Object.getOwnPropertyDescriptor(s, key);
      if (desc) Object.defineProperty(result, key, { ...desc, configurable: true });
    }
  };
  copy(prev);
  copy(next);
  return result;
}
```
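The halved constant can be checked by modeling mergeObjects with per-key getters (a simplified stand-in for table-core's internal mergeObjects, used here only to count getter hops) and simulating N ticks of what setOptions does:

```ts
let hops = 0; // getter invocations for a single read

// Simplified stand-in for table-core's mergeObjects: one getter per key,
// checking `next` first, then `prev`. Not the library's actual source.
function mergeObjects(prev: any, next: any): any {
  const out: any = {};
  const keys = new Set<PropertyKey>([...Reflect.ownKeys(prev), ...Reflect.ownKeys(next)]);
  for (const key of keys) {
    Object.defineProperty(out, key, {
      enumerable: true,
      configurable: true,
      get() {
        hops++;
        return key in next ? (next as any)[key] : (prev as any)[key];
      },
    });
  }
  return out;
}

// The descriptor-copying merger from the snippet above.
function descriptorMerge(prev: object, next: object): any {
  const result: any = {};
  for (const source of [prev, next]) {
    for (const key of Reflect.ownKeys(source)) {
      const desc = Object.getOwnPropertyDescriptor(source, key);
      if (desc) Object.defineProperty(result, key, { ...desc, configurable: true });
    }
  }
  return result;
}

const closure = { data: [1] }; // closure-captured user options
const N = 50;

// Per tick, setOptions computes newOptions = mergeObjects(prev, closure),
// then merged = mergeOptions(prev, newOptions).
function runTicks(merger: (p: object, n: object) => any): number {
  let prev: any = { onExpandedChange: () => {} }; // feature default at the bottom
  for (let i = 0; i < N; i++) {
    const newOptions = mergeObjects(prev, closure);
    prev = merger(prev, newOptions);
  }
  hops = 0;
  void prev.onExpandedChange; // read a default key once
  return hops;
}

const defaultCost = runTicks(mergeObjects);       // two getter hops per layer
const descriptorCost = runTicks(descriptorMerge); // one getter hop per layer
```

The default merger pays two hops per layer (outer merge, then newOptions), the descriptor copy one (only newOptions' getter), but both grow linearly with N.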
To run the repro:

```sh
pnpm install
pnpm dev
```
Change the adapter so setOptions doesn't wrap prev at all. The correct semantics: on each reactive tick, store a fresh resolved snapshot of mergedOptions (plus feature defaults for any keys mergedOptions doesn't provide), not a growing chain of getters.
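A sketch of those semantics, with invented names (featureDefaults, tick) purely for illustration, not the adapter's actual code:

```ts
type Options = Record<string, unknown>;

// Feature defaults, captured once when the table is created.
const featureDefaults: Options = { onExpandedChange: () => {} };

let options: Options = { ...featureDefaults };

// What the adapter's per-tick update would do: build a fresh, fully
// resolved plain snapshot instead of wrapping the previous options in
// another getter layer.
function tick(mergedOptions: Options): void {
  options = { ...featureDefaults, ...mergedOptions };
}

for (let i = 0; i < 100; i++) {
  tick({ data: [i], columns: [] });
}

// After any number of ticks: flat object, defaults intact, no getters.
const desc = Object.getOwnPropertyDescriptor(options, "onExpandedChange")!;
```

One caveat: object spread lets an explicitly-undefined key in mergedOptions shadow a default, so an adapter-level fix would likely want the same `!== undefined` guard as the workaround merger above.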