Adds an opt-in setting that silences the default output device while a dictation session is active, so media playing in other apps cannot bleed into the microphone and corrupt transcriptions.

Implementation:

- SystemAudioDucker (Utilities/SystemAudioDucker.swift): reads and writes `kAudioDevicePropertyMute` on the current default output device via CoreAudio. Saves the device's pre-session mute state and restores it on stop, so the user's volume is never touched. Devices that don't support the mute property (some Bluetooth sinks) are silently ignored.
- AppState: new `muteSystemAudio` Bool setting persisted in `UserDefaults`; `duck()` is called right before `AVAudioRecorder` starts (with a matching restore in the error path), and `restore()` is called as soon as recording stops, before the transcription wait.
- HomeView: "Mute system audio while dictating" toggle added to the Transcription section alongside the existing punctuation toggle.

Validated: build succeeds, the toggle persists across restarts, and other audio is silenced on hotkey press and restored when dictation ends.
Closes #1
## Problem
When dictating with Wave, media playing in other apps — music, podcasts, video — bleeds into the microphone. The transcription model hears both the user's voice and background audio, which degrades accuracy and breaks the "fast, quiet dictation" experience Wave is built around.
## Implementation
### Wave/Utilities/SystemAudioDucker.swift (new file)

A focused CoreAudio helper that reads and writes `kAudioDevicePropertyMute` on the current default output device:

- `duck()` — snapshots the device's current mute state, then mutes it
- `restore()` — reapplies the saved state

Using the hardware mute property rather than changing the volume means the user's volume level is never altered. Devices that don't support mute (some Bluetooth sinks) are silently ignored via the CoreAudio return value.
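As a rough sketch of the approach (not the PR's actual code), the helper boils down to resolving the default output device, checking that it supports the mute property, and reading/writing a `UInt32` mute flag:

```swift
import CoreAudio

/// Illustrative sketch, assuming the duck()/restore() shape described above.
final class SystemAudioDucker {
    private var savedMute: UInt32?
    private var mutedDevice: AudioDeviceID?

    private var muteAddress = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyMute,
        mScope: kAudioDevicePropertyScopeOutput,
        mElement: kAudioObjectPropertyElementMain)  // kAudioObjectPropertyElementMaster pre-macOS 12

    private func defaultOutputDevice() -> AudioDeviceID? {
        var deviceID = AudioDeviceID(kAudioObjectUnknown)
        var size = UInt32(MemoryLayout<AudioDeviceID>.size)
        var address = AudioObjectPropertyAddress(
            mSelector: kAudioHardwarePropertyDefaultOutputDevice,
            mScope: kAudioObjectPropertyScopeGlobal,
            mElement: kAudioObjectPropertyElementMain)
        let status = AudioObjectGetPropertyData(
            AudioObjectID(kAudioObjectSystemObject), &address, 0, nil, &size, &deviceID)
        return status == noErr ? deviceID : nil
    }

    /// Snapshot the device's current mute state, then mute it.
    func duck() {
        guard let device = defaultOutputDevice(),
              AudioObjectHasProperty(device, &muteAddress) else { return }  // unsupported sinks: no-op
        var mute: UInt32 = 0
        var size = UInt32(MemoryLayout<UInt32>.size)
        guard AudioObjectGetPropertyData(device, &muteAddress, 0, nil, &size, &mute) == noErr
        else { return }
        savedMute = mute
        mutedDevice = device
        var on: UInt32 = 1
        AudioObjectSetPropertyData(device, &muteAddress, 0, nil, size, &on)
    }

    /// Reapply the saved mute state; the volume level itself is never touched.
    func restore() {
        guard let device = mutedDevice, var saved = savedMute else { return }
        let size = UInt32(MemoryLayout<UInt32>.size)
        AudioObjectSetPropertyData(device, &muteAddress, 0, nil, size, &saved)
        savedMute = nil
        mutedDevice = nil
    }
}
```

Restoring the *saved* flag rather than unconditionally unmuting matters: if the user had already muted the device themselves, `restore()` leaves it muted.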
### AppState.swift

- New `muteSystemAudio: Bool` setting, persisted in `UserDefaults`.
- `SystemAudioDucker.duck()` called immediately before `AVAudioRecorder` starts in `startDictation()`.
- `SystemAudioDucker.restore()` called in both the normal stop path (`stopDictationAndPaste()`) and the recording-failure error path, so audio is always restored even if recording fails to start.

### HomeView.swift

- `Toggle("Mute system audio while dictating", …)` added to the existing Transcription section alongside the punctuation toggle. Off by default — no behaviour change for existing users.

## Validation

- Build succeeds.
- The toggle persists across restarts.
- Other audio is silenced on hotkey press and restored when dictation ends.
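For reference, the call-site wiring described under AppState.swift and HomeView.swift might look roughly like the sketch below. The property and method names come from the PR text; the recorder helpers (`startRecorder()`, `stopRecorder()`) and the `appState` binding are placeholders for illustration:

```swift
import SwiftUI

final class AppState: ObservableObject {
    // Persisted setting; bool(forKey:) defaults to false, so the feature is off by default.
    @Published var muteSystemAudio: Bool {
        didSet { UserDefaults.standard.set(muteSystemAudio, forKey: "muteSystemAudio") }
    }
    private let ducker = SystemAudioDucker()

    init() {
        muteSystemAudio = UserDefaults.standard.bool(forKey: "muteSystemAudio")
    }

    func startDictation() {
        if muteSystemAudio { ducker.duck() }         // silence output before the mic opens
        do {
            try startRecorder()                      // placeholder for the AVAudioRecorder start
        } catch {
            if muteSystemAudio { ducker.restore() }  // error path: audio is always restored
        }
    }

    func stopDictationAndPaste() {
        stopRecorder()                               // placeholder for stopping the recorder
        if muteSystemAudio { ducker.restore() }      // restore before the transcription wait
        // … await transcription, then paste the result …
    }
}

// In HomeView's Transcription section, next to the punctuation toggle:
// Toggle("Mute system audio while dictating", isOn: $appState.muteSystemAudio)
```

Restoring before the transcription wait (rather than after pasting) keeps the silent window as short as possible, since playback no longer matters once the microphone has stopped capturing.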