Press shortcut → speak → get text. Free and open source ❤️
Whispering turns your speech into text with a single keyboard shortcut. Press the shortcut, speak, and your words appear wherever you're typing. No window switching, no clicking around.
I built this because I was tired of paying $30/month for transcription apps that are basically API wrappers. With Whispering, you bring your own API key and pay cents directly to providers. I use it 3-4 hours daily and pay about $3/month.
The math is simple: transcription APIs cost $0.02-0.36/hour. Subscription apps charge $30/month. That's a 10-100x markup for a middleman you don't need.
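As a worked example using my own usage: 3 hours/day × 30 days × $0.02/hour = $1.80/month with Groq's cheapest model, versus a $30 subscription.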
Note: Whispering is designed for quick transcriptions, not long recordings. For extended recording sessions, use a dedicated recording app.
Want to see the voice coding workflow? Check out this 3-minute demo showing how I use Whispering with Claude Code for faster development.
Choose from multiple transcription providers (Groq at $0.02/hour is my favorite). The app supports voice-activated mode for hands-free operation; just talk and it transcribes. You can set up AI transformations to automatically format your text, fix grammar, or translate languages.
Everything is stored locally on your device. Your audio goes directly from your machine to your chosen API provider. No middleman servers, no data collection, no tracking.
Built with Svelte 5 and Tauri, so it's tiny (~22MB) and starts instantly. The codebase is clean and well-documented if you want to contribute or learn.
Takes about 2 minutes to get running.
Choose your operating system below and click the download link:
Architecture | Download | Requirements |
---|---|---|
Apple Silicon | Whispering_7.0.0_aarch64.dmg | M1/M2/M3 Macs |
Intel | Whispering_7.0.0_x64.dmg | Intel-based Macs |
Not sure which Mac you have? Click the Apple menu → About This Mac. Look for "Chip" or "Processor":
- Apple M1/M2/M3 → Use Apple Silicon version
- Intel Core → Use Intel version
Download the `.dmg` file for your architecture, open it, and drag Whispering into your Applications folder. If macOS reports that the app is damaged or can't be opened, run `xattr -cr /Applications/Whispering.app` in Terminal.

Installer Type | Download | Description |
---|---|---|
MSI Installer | Whispering_7.0.0_x64_en-US.msi | Standard Windows installer (recommended) |
EXE Installer | Whispering_7.0.0_x64-setup.exe | Alternative installer option |
Download and run the `.msi` installer (recommended). Whispering will appear in your Start Menu when complete.
Package Format | Download | Compatible With |
---|---|---|
AppImage | Whispering_7.0.0_amd64.AppImage | All Linux distributions |
DEB Package | Whispering_7.0.0_amd64.deb | Debian, Ubuntu, Pop!_OS |
RPM Package | Whispering-7.0.0-1.x86_64.rpm | Fedora, RHEL, openSUSE |
AppImage (Universal)
wget https://github.com/braden-w/whispering/releases/latest/download/Whispering_7.0.0_amd64.AppImage
chmod +x Whispering_7.0.0_amd64.AppImage
./Whispering_7.0.0_amd64.AppImage
Debian/Ubuntu
wget https://github.com/braden-w/whispering/releases/latest/download/Whispering_7.0.0_amd64.deb
sudo dpkg -i Whispering_7.0.0_amd64.deb
Fedora/RHEL
wget https://github.com/braden-w/whispering/releases/latest/download/Whispering-7.0.0-1.x86_64.rpm
sudo rpm -i Whispering-7.0.0-1.x86_64.rpm
Links not working? Find all downloads at GitHub Releases
No installation needed! Works in any modern browser.
Note: The web version doesn't have global keyboard shortcuts, but otherwise works great for trying out Whispering before installing.
Right now, I personally use Groq for almost all my transcriptions.
💡 Why Groq? The fastest models, super accurate, generous free tier, and unbeatable price (as cheap as $0.02/hour using `distil-whisper-large-v3-en`).
🙌 That's it! No credit card required for the free tier. You can start transcribing immediately.
Once your Groq API key is in Whispering's settings, press `Cmd+Shift+;` (anywhere) and say "Testing Whispering".

🎉 Success! Your words are now in your clipboard. Paste anywhere!
This happens due to macOS App Nap, which suspends background apps to save battery.
The simplest fix is to keep Whispering in the foreground, in front of other apps. You can resize it to a smaller window or use Voice Activated mode for minimal disruption.
If you accidentally clicked "Don't Allow" when Whispering asked for microphone access, here's how to fix it:
If you accidentally blocked microphone permissions, use the Registry solution:
- Registry Cleanup (Recommended): Open the Registry Editor (`regedit`) and clean up Whispering's stale permission entries; the specific keys are listed in the issue linked below.
- Delete App Data: Navigate to `%APPDATA%\..\Local\com.bradenwong.whispering`, delete this folder, then reinstall.
- Windows Settings: Settings → Privacy & security → Microphone → Enable "Let desktop apps access your microphone"
See Issue #526 for more details.
Take your transcription experience to the next level with these advanced features:
Choose from multiple transcription providers based on your needs for speed, accuracy, and privacy:
- Groq (my pick): `distil-whisper-large-v3-en` ($0.02/hr), `whisper-large-v3-turbo` ($0.04/hr), `whisper-large-v3` ($0.06/hr)
- OpenAI: `whisper-1` ($0.36/hr), `gpt-4o-transcribe` ($0.36/hr), `gpt-4o-mini-transcribe` ($0.18/hr)
- ElevenLabs: `scribe_v1`, `scribe_v1_experimental`
- Speaches: fully local transcription, free and private
Transform your transcriptions automatically with custom AI workflows:
Quick Example: Format Text

Create a transformation step that sends your transcript to `Claude Sonnet 3.5` (or your preferred AI) with a formatting prompt. The prompt walks through:

- Core Principles
- Formatting Guidelines
- Punctuation & Grammar
- Structure & Organization
- Intelligent Corrections
- Special Handling
- Preserve Original Intent
- Output Format: Return the formatted text

and closes with the reminder: "You're a translator from spoken to written form, not an editor trying to improve the content. Make it readable while keeping it real."
What can transformations do?
Example workflow: Speech → Transcribe → Fix Grammar → Translate to Spanish → Copy to clipboard
You'll need additional API keys for AI transformations. Choose from these providers based on your needs:
- OpenAI: `gpt-4o`, `gpt-4o-mini`, `o3-mini`, and more
- Anthropic: `claude-opus-4-0`, `claude-sonnet-4-0`, `claude-3-7-sonnet-latest`
- Google: `gemini-2.5-pro`, `gemini-2.5-flash`, `gemini-2.5-flash-lite`
- Groq: `llama-3.3-70b-versatile`, `llama-3.1-8b-instant`, `gemma2-9b-it`, and more

Hands-free recording that starts when you speak and stops when you're done.
Two ways to enable VAD:
Option 1: Quick toggle on homepage
Option 2: Through settings
How it works: recording starts automatically when you begin speaking and stops when you're done. Perfect for dictation without holding keys!
Change the recording shortcut to whatever feels natural:
`F1`, `Cmd+Space+R`, `Ctrl+Shift+V`
I was paying $30/month for a transcription app. Then I did the math: the actual API calls cost about $0.36/hour. At my usage (3-4 hours/day), I was paying $30 for what should cost $3.
That's when I realized these apps are just middlemen. They take your audio, send it to OpenAI's Whisper API, and charge you 10x markup. Plus your recordings go through their servers, get stored who knows where, and you're locked into their ecosystem.
So I built Whispering to cut out the middleman. You bring your own API key, your audio goes directly to the provider, and you pay actual costs. No subscription, no data collection, no lock-in. Just transcription at cost.
The code is open source because I believe tools this fundamental should be free. Companies pivot, get acquired, or shut down. But open source is forever.
With Whispering, you pay providers directly instead of marked-up subscription prices:
Service | Cost per Hour | Light Use (20 min/day) | Moderate Use (1 hr/day) | Heavy Use (3 hr/day) | Traditional Tools |
---|---|---|---|---|---|
`distil-whisper-large-v3-en` (Groq) | $0.02 | $0.20/month | $0.60/month | $1.80/month | $15-30/month |
`whisper-large-v3-turbo` (Groq) | $0.04 | $0.40/month | $1.20/month | $3.60/month | $15-30/month |
`gpt-4o-mini-transcribe` (OpenAI) | $0.18 | $1.80/month | $5.40/month | $16.20/month | $15-30/month |
Local (Speaches) | $0.00 | $0.00/month | $0.00/month | $0.00/month | $15-30/month |
Whispering stores as much data as possible locally on your device, including recordings and text transcriptions. This approach ensures maximum privacy and data security. Here's an overview of how data is handled:
Local Storage: Voice recordings and transcriptions are stored in IndexedDB, which serves both as blob storage for audio and as the database for your text and transcription history.
Transcription Service: The only data sent elsewhere is your recording, and only to the external transcription service you choose (Groq, OpenAI, ElevenLabs, or a local Speaches server).
Transformation Service (Optional): Whispering includes configurable transformation settings that let you pipe transcription output into custom transformation flows, including AI-powered steps.
When using AI-powered transformations, your transcribed text is sent to your chosen LLM provider using your own API key. All transformation configurations, including prompts and step sequences, are stored locally in your settings.
You can change both the transcription and transformation services in the settings to ensure maximum local functionality and privacy.
Most apps are middlemen charging $30/month for API calls that cost pennies. With Whispering, you bring your own API key and pay providers directly. Your audio goes straight from your device to the API - no servers in between, no data collection, no subscriptions.
There isn't one. I built this for myself and use it every day. The code is open source so you can verify exactly what it does. No telemetry, no premium tiers, no upsells.
Svelte 5 + Tauri. The app is tiny (~22MB), starts instantly, and uses minimal resources. The codebase is clean and well-documented if you want to learn or contribute.
Yes - use the Speaches provider for local transcription. No internet, no API keys, completely private.
With Groq (my favorite): $0.02-$0.06/hour. With OpenAI: $0.18-$0.36/hour. Local transcription: free forever. I use it 3-4 hours daily and pay about $3/month total.
Your recordings stay on your device in IndexedDB. When you transcribe, audio goes directly to your chosen provider using your API key. No middleman servers. For maximum privacy, use local transcription.
Yes - set up AI transformations to fix grammar, translate languages, or reformat text. Works with any LLM provider.
Desktop: Mac (Intel & Apple Silicon), Windows, Linux. Web: Any modern browser at whispering.bradenwong.com.
Open an issue on GitHub. I actively maintain this and respond quickly.
Whispering showcases the power of modern web development as a comprehensive example application:
Note: The browser extension is temporarily disabled while we stabilize the desktop app.
- A namespaced, RPC-style query layer (e.g., `rpc.recordings.getAllRecordings`)

Whispering uses a clean three-layer architecture that achieves extensive code sharing between the desktop app (Tauri) and web app. This is possible because of how we handle platform differences and separate business logic from UI concerns.
Quick Navigation: Service Layer | Query Layer | Error Handling
┌─────────────┐ ┌─────────────┐ ┌──────────────┐
│ UI Layer │ --> │ Query Layer│ --> │ Service Layer│
│ (Svelte 5) │ │ (TanStack) │ │ (Pure) │
└─────────────┘ └─────────────┘ └──────────────┘
↑ │
└────────────────────┘
Reactive Updates
The service layer contains all business logic as pure functions with zero UI dependencies. Services don't know about reactive Svelte variables, user settings, or UI state—they only accept explicit parameters and return `Result<T, E>` types for consistent error handling.
The key innovation is build-time platform detection. Services automatically choose the right implementation based on the target platform:
// Platform abstraction happens at build time
export const ClipboardServiceLive = window.__TAURI_INTERNALS__
? createClipboardServiceDesktop() // Uses Tauri clipboard APIs
: createClipboardServiceWeb(); // Uses browser clipboard APIs
// Same interface, different implementations
export const NotificationServiceLive = window.__TAURI_INTERNALS__
? createNotificationServiceDesktop() // Native OS notifications
: createNotificationServiceWeb(); // Browser notifications
This design enables 97% code sharing between desktop and web versions. The vast majority of the application logic is platform-agnostic, with only the thin service implementation layer varying between platforms. Services are incredibly testable (just pass mock parameters), reusable (work identically anywhere), and maintainable (no hidden dependencies).
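To make "same interface, different implementations" concrete, here is a hedged sketch of what a shared service contract can look like. The method name, plugin import, and the omission of error mapping are simplifications for illustration, not the repo's exact API:

```ts
import { Ok, type Result } from 'wellcrafted/result';

// One contract, two platform implementations chosen at build time.
type ClipboardService = {
	copyToClipboard(text: string): Promise<Result<void, Error>>;
};

export function createClipboardServiceWeb(): ClipboardService {
	return {
		async copyToClipboard(text) {
			// Browser implementation: the async Clipboard API.
			await navigator.clipboard.writeText(text);
			return Ok(undefined);
		},
	};
}

export function createClipboardServiceDesktop(): ClipboardService {
	return {
		async copyToClipboard(text) {
			// Desktop implementation: Tauri's clipboard plugin.
			const { writeText } = await import('@tauri-apps/plugin-clipboard-manager');
			await writeText(text);
			return Ok(undefined);
		},
	};
}
```

Either implementation satisfies the same `ClipboardService` type, which is what lets the `ClipboardServiceLive` selection above stay a one-line ternary.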
To calculate the actual code sharing percentage, I analyzed the codebase:
# Count total lines of code in the app
find apps/app/src -name "*.ts" -o -name "*.svelte" -o -name "*.js" | \
grep -v node_modules | xargs wc -l
# Result: 22,824 lines total
# Count platform-specific implementation code
find apps/app/src/lib/services -name "*desktop.ts" -o -name "*web.ts" | \
xargs wc -l
# Result: 685 lines (3%)
# Code sharing calculation
# Shared code: 22,824 - 685 = 22,139 lines (97%)
This minimal platform-specific code demonstrates how the architecture maximizes code reuse while maintaining native performance on each platform.
→ Learn more: Services README | Constants Organization
The query layer is where reactivity gets injected on top of pure services. It wraps service functions with TanStack Query and handles two key responsibilities:
Runtime Dependency Injection - Dynamically switching service implementations based on user settings:
// From transcription query layer
async function transcribeBlob(blob: Blob) {
const selectedService = settings.value['transcription.selectedTranscriptionService'];
switch (selectedService) {
case 'OpenAI':
return services.transcriptions.openai.transcribe(blob, {
apiKey: settings.value['apiKeys.openai'],
model: settings.value['transcription.openai.model'],
});
case 'Groq':
return services.transcriptions.groq.transcribe(blob, {
apiKey: settings.value['apiKeys.groq'],
model: settings.value['transcription.groq.model'],
});
}
}
Optimistic Updates - Using the TanStack Query client to manipulate the cache for optimistic UI. By updating the cache, reactivity automatically kicks in and the UI reflects these changes, giving you instant optimistic updates.
It's often unclear where exactly you should mutate the cache with the query client—sometimes at the component level, sometimes elsewhere. By having this dedicated query layer, it becomes very clear: we co-locate three key things in one place: (1) the service call, (2) runtime settings injection based on reactive variables, and (3) cache manipulation (also reactive). This creates a layer that bridges reactivity with services in an intuitive way. It also cleans up our components significantly because we have a consistent place to put this logic—now developers know that all cache manipulation lives in the query folder, making it clear where to find and add this type of functionality:
// From recordings mutations
createRecording: defineMutation({
resultMutationFn: async (recording: Recording) => {
const { data, error } = await services.db.createRecording(recording);
if (error) return Err(error);
// Optimistically update cache - UI updates instantly
queryClient.setQueryData(['recordings'], (oldData) => {
if (!oldData) return [recording];
return [...oldData, recording];
});
return Ok(data);
},
})
This design keeps all reactive state management isolated in the query layer, allowing services to remain pure and platform-agnostic while the UI gets dynamic behavior and instant updates.
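Reads follow the same shape as the mutation above. Here is a hedged sketch of how something like `rpc.recordings.getAllRecordings` might be defined; `defineQuery`, `resultQueryFn`, and the `services.db.getAllRecordings` counterpart to `createRecording` follow the naming used in this README, but the exact signatures in the repo may differ:

```ts
// Hypothetical read-path definition in the query layer (sketch only)
getAllRecordings: defineQuery({
	queryKey: ['recordings'],
	resultQueryFn: async () => {
		// Pure service call; the query layer adds caching and reactivity on top
		const { data, error } = await services.db.getAllRecordings();
		if (error) return Err(error);
		return Ok(data);
	},
}),
```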
→ Learn more: Query README | RPC Pattern Guide
The query layer also transforms service-specific errors into `WhisperingError` types that integrate seamlessly with the toast notification system. This happens inside `resultMutationFn` or `resultQueryFn`, creating a clean boundary between business logic errors and UI presentation:
// Service returns domain-specific error
const { data, error: serviceError } = await services.manualRecorder.startRecording(...);
if (serviceError) {
// Query layer transforms to UI-friendly WhisperingError
return Err(WhisperingError({
title: '❌ Failed to start recording',
description: serviceError.message, // Preserve detailed message
action: { type: 'more-details', error: serviceError }
}));
}
Whispering uses WellCrafted, a lightweight TypeScript library I created to bring Rust-inspired error handling to JavaScript. I built WellCrafted after using the effect-ts library when it first came out in 2023—I was very excited about the concepts but found it too verbose. WellCrafted distills my takeaways from effect-ts and refines them by leaning into native JavaScript syntax, making it a natural fit for this use case. Unlike traditional try-catch blocks that hide errors, WellCrafted makes all potential failures explicit in function signatures using the `Result<T, E>` pattern.

Key benefits in Whispering:

- Fallible functions return `Result<T, E>`, making errors impossible to ignore
- `TaggedError` objects include error names, messages, context, and causes

This approach ensures robust error handling across the entire codebase, from service layer functions to UI components, while maintaining excellent developer experience with TypeScript's control flow analysis.
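For readers new to the pattern, here is a minimal, self-contained sketch of `Result<T, E>` in action. The `parseJson` helper is purely illustrative, but the `Ok`/`Err` constructors and the `{ data, error }` destructuring mirror the service code later in this README:

```ts
import { Ok, Err, type Result } from 'wellcrafted/result';

type ParseError = { name: 'ParseError'; message: string };

// Failures become values at the boundary instead of escaping as exceptions.
function parseJson(text: string): Result<unknown, ParseError> {
	try {
		return Ok(JSON.parse(text));
	} catch (e) {
		return Err({ name: 'ParseError', message: e instanceof Error ? e.message : String(e) });
	}
}

const { data, error } = parseJson('{"hello": "world"}');
if (error) {
	// TypeScript narrows the type here, so the error branch can't be silently skipped.
	console.error(`${error.name}: ${error.message}`);
} else {
	console.log(data);
}
```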
git clone https://github.com/braden-w/whispering.git
cd whispering
pnpm i
To run the desktop app and website:
cd apps/app
pnpm tauri dev
If you have concerns about the installers or want more control, you can build the executable yourself. This requires more setup, but it ensures that you are running the code you expect. Such is the beauty of open-source software!
cd apps/app
pnpm i
pnpm tauri build
Find the executable in apps/app/target/release
We welcome contributions! Whispering is built with care and attention to clean, maintainable code.
- Error handling is explicit: `Result<T, E>` return types, structured `TaggedError` objects, and comprehensive error context

→ New to the codebase? Start with the Architecture Deep Dive to understand how everything fits together.
Note: WellCrafted is a TypeScript utility library I created to bring Rust-inspired error handling to JavaScript. It makes errors explicit in function signatures and ensures robust error handling throughout the codebase.
We'd love to expand Whispering's capabilities with more transcription and AI service adapters! Here's how to add a new adapter:
Overview of the adapter system:
- Transcription services (`services/transcription/`): Convert audio to text
- Completion services (`services/completion/`): Power AI transformations in the transformation pipeline
- Query layer (`query/`): Provides reactive state management and runtime dependency injection

Adding a new transcription service involves four main steps:
Create the service implementation in `apps/app/src/lib/services/transcription/`:
// apps/app/src/lib/services/transcription/your-service.ts
import { WhisperingErr, type WhisperingError } from '$lib/result';
import type { Settings } from '$lib/settings';
import { Err, Ok, tryAsync, type Result } from 'wellcrafted/result';
// Define your models directly in the service file
export const YOUR_SERVICE_MODELS = [
{
name: 'model-v1',
description: 'Description of what makes this model special',
cost: '$0.XX/hour',
},
{
name: 'model-v2',
description: 'A faster variant with different trade-offs',
cost: '$0.YY/hour',
},
] as const;
export type YourServiceModel = (typeof YOUR_SERVICE_MODELS)[number];
export function createYourServiceTranscriptionService() {
return {
async transcribe(
audioBlob: Blob,
options: {
prompt: string;
temperature: string;
outputLanguage: Settings['transcription.outputLanguage'];
apiKey: string;
modelName: (string & {}) | YourServiceModel['name'];
// Add any service-specific options
}
): Promise<Result<string, WhisperingError>> {
// Validate API key
if (!options.apiKey) {
return WhisperingErr({
title: '🔑 API Key Required',
description: 'Please enter your YourService API key in settings.',
action: {
type: 'link',
label: 'Add API key',
href: '/settings/transcription',
},
});
}
// Make the API call
const { data, error } = await tryAsync({
try: () => yourServiceClient.transcribe(audioBlob, options),
mapError: (error) => WhisperingErr({
title: '❌ Transcription Failed',
description: error.message,
action: { type: 'more-details', error },
}),
});
if (error) return Err(error);
return Ok(data.text.trim());
}
};
}
export const YourServiceTranscriptionServiceLive = createYourServiceTranscriptionService();
Don't forget to export your service in `apps/app/src/lib/services/transcription/index.ts`:
import { YourServiceTranscriptionServiceLive } from './your-service';
export {
// ... existing exports
YourServiceTranscriptionServiceLive as yourservice,
};
And add the API key field to the settings schema in `apps/app/src/lib/settings/settings.ts`:
'apiKeys.yourservice': z.string().default(''),
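The service config and query layer below also reference a `transcription.yourservice.model` setting, so you'll likely want a matching key in the same schema (a hedged sketch, assuming it follows the existing pattern):

```ts
'transcription.yourservice.model': z.string().default('model-v1'),
```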
Update the service configuration in `apps/app/src/lib/constants/transcription/service-config.ts`:
import { YourServiceIcon } from 'lucide-svelte';
import {
YOUR_SERVICE_MODELS,
type YourServiceModel,
} from '$lib/services/transcription/your-service';
// Add to the imports at the top
type TranscriptionModel = OpenAIModel | GroqModel | ElevenLabsModel | YourServiceModel;
// Add to TRANSCRIPTION_SERVICE_IDS
export const TRANSCRIPTION_SERVICE_IDS = [
'OpenAI',
'Groq',
'speaches',
'ElevenLabs',
'YourService', // Add here
] as const;
// Add to TRANSCRIPTION_SERVICES array
{
id: 'YourService',
name: 'Your Service Name',
icon: YourServiceIcon,
models: YOUR_SERVICE_MODELS,
defaultModel: YOUR_SERVICE_MODELS[0],
modelSettingKey: 'transcription.yourservice.model',
apiKeyField: 'apiKeys.yourservice',
type: 'api',
}
Wire up the query layer in `apps/app/src/lib/query/transcription.ts`:
// Add to the switch statement in transcribeBlob function
case 'YourService':
return services.transcriptions.yourservice.transcribe(blob, {
outputLanguage: settings.value['transcription.outputLanguage'],
prompt: settings.value['transcription.prompt'],
temperature: settings.value['transcription.temperature'],
apiKey: settings.value['apiKeys.yourservice'],
modelName: settings.value['transcription.yourservice.model'],
});
Update the settings UI in `apps/app/src/routes/(config)/settings/transcription/+page.svelte`:
<!-- Add after other service conditionals -->
{:else if settings.value['transcription.selectedTranscriptionService'] === 'YourService'}
<LabeledSelect
id="yourservice-model"
label="YourService Model"
items={YOUR_SERVICE_MODELS.map((model) => ({
value: model.name,
label: model.name,
...model,
}))}
selected={settings.value['transcription.yourservice.model']}
onSelectedChange={(selected) => {
settings.value = {
...settings.value,
'transcription.yourservice.model': selected,
};
}}
renderOption={renderModelOption}
/>
<YourServiceApiKeyInput />
{/if}
Create the API key input component in `apps/app/src/lib/components/settings/api-key-inputs/YourServiceApiKeyInput.svelte`:
<script lang="ts">
import { LabeledInput } from '$lib/components/labeled/index.js';
import { Button } from '$lib/components/ui/button/index.js';
import { settings } from '$lib/stores/settings.svelte';
</script>
<LabeledInput
id="yourservice-api-key"
label="YourService API Key"
type="password"
placeholder="Your YourService API Key"
value={settings.value['apiKeys.yourservice']}
oninput={({ currentTarget: { value } }) => {
settings.value = { ...settings.value, 'apiKeys.yourservice': value };
}}
>
{#snippet description()}
<p class="text-muted-foreground text-sm">
You can find your YourService API key in your <Button
variant="link"
class="px-0.3 py-0.2 h-fit"
href="https://yourservice.com/api-keys"
target="_blank"
rel="noopener noreferrer"
>
YourService dashboard
</Button>.
</p>
{/snippet}
</LabeledInput>
And export it from `apps/app/src/lib/components/settings/index.ts`:
export { default as YourServiceApiKeyInput } from './api-key-inputs/YourServiceApiKeyInput.svelte';
Also update `apps/app/src/lib/constants/transcription/index.ts` to re-export your models:
export {
YOUR_SERVICE_MODELS,
type YourServiceModel,
} from '$lib/services/transcription/your-service';
AI transformations in Whispering use completion services that can be integrated into transformation workflows. Here's how to add a new AI provider:
Create the completion service in `apps/app/src/lib/services/completion/`:
// apps/app/src/lib/services/completion/your-provider.ts
import { WhisperingErr, type WhisperingError } from '$lib/result';
import { Err, Ok, tryAsync, type Result } from 'wellcrafted/result';
export function createYourProviderCompletionService() {
return {
async complete(options: {
apiKey: string;
model: string;
systemPrompt: string;
userPrompt: string;
temperature?: number;
}): Promise<Result<string, WhisperingError>> {
// Validate API key
if (!options.apiKey) {
return WhisperingErr({
title: '🔑 API Key Required',
description: 'Please add your YourProvider API key.',
});
}
// Make the completion request
const { data, error } = await tryAsync({
try: () => yourProviderClient.complete(options),
mapError: (error) => WhisperingErr({
title: '❌ Completion Failed',
description: error.message,
action: { type: 'more-details', error },
}),
});
if (error) return Err(error);
return Ok(data.text);
}
};
}
export const YourProviderCompletionServiceLive = createYourProviderCompletionService();
Register the service in `apps/app/src/lib/services/completion/index.ts`:
import { YourProviderCompletionServiceLive } from './your-provider';
export {
// ... existing exports
YourProviderCompletionServiceLive as yourprovider,
};
Wire up the transformation handler in `apps/app/src/lib/query/transformer.ts`:
// Add a new case in the handleStep function's prompt_transform switch statement
case 'YourProvider': {
const { data: completionResponse, error: completionError } =
await services.completions.yourprovider.complete({
apiKey: settings.value['apiKeys.yourprovider'],
model: step['prompt_transform.inference.provider.YourProvider.model'],
systemPrompt,
userPrompt,
});
if (completionError) {
return Err(completionError.message);
}
return Ok(completionResponse);
}
Add the API key to the settings schema in `apps/app/src/lib/settings/settings.ts`:
'apiKeys.yourprovider': z.string().default(''),
Update transformation types to include your provider models and configuration
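What that involves depends on how the transformation step schema is organized, but conceptually it mirrors the transcription setup: a model list plus a step field that the handler above can read. A hedged sketch follows; the names and file locations are illustrative, not the repo's actual layout:

```ts
// Models your provider exposes for prompt transformations (illustrative names)
export const YOUR_PROVIDER_MODELS = [
	'your-model-large',
	'your-model-mini',
] as const;

export type YourProviderModel = (typeof YOUR_PROVIDER_MODELS)[number];

// Plus a corresponding field on the prompt_transform step schema, matching the
// key read in the query layer above, e.g.:
// 'prompt_transform.inference.provider.YourProvider.model': z.string().default(YOUR_PROVIDER_MODELS[0]),
```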
Always use the `WhisperingErr` helper for user-facing errors:
// Good: User-friendly error with action
return WhisperingErr({
title: '⏱️ Rate Limit Reached',
description: 'Too many requests. Please try again in a few minutes.',
action: {
type: 'link',
label: 'View rate limits',
href: 'https://yourservice.com/rate-limits',
},
});
// Handle different error types
if (error.status === 401) {
return WhisperingErr({
title: '🔑 Invalid API Key',
description: 'Your API key appears to be invalid or expired.',
action: {
type: 'link',
label: 'Update API key',
href: '/settings/transcription',
},
});
}
// Use with tryAsync for automatic error mapping
const { data, error } = await tryAsync({
try: () => apiClient.makeRequest(),
mapError: (error) => WhisperingErr({
title: '❌ Request Failed',
description: error.message,
action: { type: 'more-details', error },
}),
});
Create a test file alongside your service:
// apps/app/src/lib/services/transcription/your-service.test.ts
import { describe, it, expect } from 'vitest';
import { createYourServiceTranscriptionService } from './your-service';
describe('YourService Transcription', () => {
it('should handle missing API key', async () => {
const service = createYourServiceTranscriptionService();
const result = await service.transcribe(new Blob(), {
apiKey: '',
// other options
});
expect(result.error).toBeDefined();
expect(result.error?.title).toContain('API Key Required');
});
// Add more tests
});
When submitting a PR for a new adapter, include:
- Example `.env` entries if needed

We're excited to see what services you'll integrate! Feel free to open an issue first to discuss your adapter idea.
git checkout -b feature/your-feature-name
git push origin your-branch-name
When preparing a new release, use our version bumping script to update all necessary files:
# Update version across all project files
bun run bump-version <new-version>
# Example:
bun run bump-version 7.0.1
This script automatically updates:
- The root `package.json`
- The app's `package.json` (`apps/app`)
- The Tauri configuration (`tauri.conf.json`)
- The Rust crate manifest (`Cargo.toml`)

After running the script, follow the displayed instructions to commit, tag, and push the changes.
Feel free to suggest and implement any features that improve usability—I'll do my best to integrate contributions that make Whispering better for everyone.
Whispering is released under the MIT License. Use it, modify it, learn from it, and build upon it freely.
If you encounter any issues or have suggestions for improvements, please open an issue on the GitHub issues tab or contact me via [email protected]. I really appreciate your feedback!
This project is supported by amazing people and organizations:
Transcription should be free, open, and accessible to everyone. Join us in making it so.
Thank you for using Whispering and happy writing!