AVSpeechSynthesizer does not speak after using SFSpeechRecognizer

So I built a simple app that does speech recognition using SFSpeechRecognizer and displays the converted speech as text in a UITextView. Now I'm trying to make the phone speak that displayed text, but for some reason it doesn't work: AVSpeechSynthesizer's speak function only works before SFSpeechRecognizer has been used. For instance, when the app launches it has some welcome text displayed in the UITextView; if I tap the speak button, the phone speaks the welcome text. Then if I record (for speech recognition), the recognized speech is displayed in the UITextView. Now I want the phone to speak that text, but unfortunately it doesn't.

Here is the code:

import UIKit
import Speech
import AVFoundation


class ViewController: UIViewController, SFSpeechRecognizerDelegate, AVSpeechSynthesizerDelegate {

    @IBOutlet weak var textView: UITextView!
    @IBOutlet weak var microphoneButton: UIButton!

    private let speechRecognizer = SFSpeechRecognizer(locale: Locale.init(identifier: "en-US"))!

    private var recognitionRequest: SFSpeechAudioBufferRecognitionRequest?
    private var recognitionTask: SFSpeechRecognitionTask?
    private let audioEngine = AVAudioEngine()

    override func viewDidLoad() {
        super.viewDidLoad()

        microphoneButton.isEnabled = false

        speechRecognizer.delegate = self

        SFSpeechRecognizer.requestAuthorization { (authStatus) in

            var isButtonEnabled = false

            switch authStatus {
            case .authorized:
                isButtonEnabled = true

            case .denied:
                isButtonEnabled = false
                print("User denied access to speech recognition")

            case .restricted:
                isButtonEnabled = false
                print("Speech recognition restricted on this device")

            case .notDetermined:
                isButtonEnabled = false
                print("Speech recognition not yet authorized")
            }

            OperationQueue.main.addOperation() {
                self.microphoneButton.isEnabled = isButtonEnabled
            }
        }
    }

    @IBAction func speakTapped(_ sender: UIButton) {
        let string = self.textView.text
        let utterance = AVSpeechUtterance(string: string!)
        let synthesizer = AVSpeechSynthesizer()
        synthesizer.delegate = self
        synthesizer.speak(utterance)
    }
    @IBAction func microphoneTapped(_ sender: AnyObject) {
        if audioEngine.isRunning {
            audioEngine.stop()
            recognitionRequest?.endAudio()
            microphoneButton.isEnabled = false
            microphoneButton.setTitle("Start Recording", for: .normal)
        } else {
            startRecording()
            microphoneButton.setTitle("Stop Recording", for: .normal)
        }
    }

    func startRecording() {

        if recognitionTask != nil {  //1
            recognitionTask?.cancel()
            recognitionTask = nil
        }

        let audioSession = AVAudioSession.sharedInstance()  //2
        do {
            try audioSession.setCategory(AVAudioSessionCategoryRecord)
            try audioSession.setMode(AVAudioSessionModeMeasurement)
            try audioSession.setActive(true, with: .notifyOthersOnDeactivation)
        } catch {
            print("audioSession properties weren't set because of an error.")
        }

        recognitionRequest = SFSpeechAudioBufferRecognitionRequest()  //3

        guard let inputNode = audioEngine.inputNode else {
            fatalError("Audio engine has no input node")
        }  //4

        guard let recognitionRequest = recognitionRequest else {
            fatalError("Unable to create an SFSpeechAudioBufferRecognitionRequest object")
        } //5

        recognitionRequest.shouldReportPartialResults = true  //6

        recognitionTask = speechRecognizer.recognitionTask(with: recognitionRequest, resultHandler: { (result, error) in  //7

            var isFinal = false  //8

            if result != nil {

                self.textView.text = result?.bestTranscription.formattedString  //9
                isFinal = (result?.isFinal)!
            }

            if error != nil || isFinal {  //10
                self.audioEngine.stop()
                inputNode.removeTap(onBus: 0)

                self.recognitionRequest = nil
                self.recognitionTask = nil

                self.microphoneButton.isEnabled = true
            }
        })

        let recordingFormat = inputNode.outputFormat(forBus: 0)  //11
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer, when) in
            self.recognitionRequest?.append(buffer)
        }

        audioEngine.prepare()  //12

        do {
            try audioEngine.start()
        } catch {
            print("audioEngine couldn't start because of an error.")
        }

        textView.text = "Say something, I'm listening!"

    }

    func speechRecognizer(_ speechRecognizer: SFSpeechRecognizer, availabilityDidChange available: Bool) {
        if available {
            microphoneButton.isEnabled = true
        } else {
            microphoneButton.isEnabled = false
        }
    }
}
Gelasias answered 26/10, 2016 at 19:33 Comment(3)
Show. Your. Code. – Blackamoor
@Blackamoor I added the code. The original speech-to-text code was from an appcoda tutorial: appcoda.com/siri-speech-framework – Gelasias
I found this link very useful. It contains complete source code for speech to text and then text to speech using AVSpeechSynthesizer. – Machos

The problem is that when you start speech recognition, you set the audio session category to Record. You cannot play any audio (including speech synthesis) while the session category is Record.
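
In other words, the session must be switched back to a playback-capable category before calling speak. A minimal sketch of the idea, reusing the question's speakTapped action (an untested sketch using the Swift 3-era constants from the question, not the answerer's exact fix):

@IBAction func speakTapped(_ sender: UIButton) {
    // Sketch: switch the shared session to a playback-capable category
    // before speaking; the Record category alone cannot produce output.
    let audioSession = AVAudioSession.sharedInstance()
    do {
        try audioSession.setCategory(AVAudioSessionCategoryPlayback)
        try audioSession.setActive(true)
    } catch {
        print("Could not reconfigure the audio session: \(error)")
    }

    let utterance = AVSpeechUtterance(string: textView.text)
    let synthesizer = AVSpeechSynthesizer()
    synthesizer.delegate = self
    synthesizer.speak(utterance)
}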

Blackamoor answered 26/10, 2016 at 20:24 Comment(3)
But if you look at the microphoneTapped function triggered by tapping the mic: if the audio engine is running, it stops it and ends the audio. Am I missing something here? – Gelasias
I'm not saying to remove the audio session category part. You need more audio session management, not less. – Blackamoor
I'm setting the session category to Record while creating a session, but it's still not playing audio. – Bannerman

You should change this line of the startRecording method from:

try audioSession.setCategory(AVAudioSessionCategoryRecord)            

to:

try audioSession.setCategory(AVAudioSessionCategoryPlayAndRecord)
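
On newer SDKs, where these string constants are deprecated, the equivalent call would look roughly like this (a sketch, not part of the original answer; see the comments below for the defaultToSpeaker variant that several people used against the low-volume issue):

do {
    // .playAndRecord allows both the microphone tap and synthesizer output.
    try audioSession.setCategory(.playAndRecord, mode: .measurement, options: [.defaultToSpeaker])
    try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
} catch {
    print("audioSession properties weren't set because of an error.")
}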
Worrywart answered 7/11, 2016 at 9:9 Comment(8)
This works perfectly. But I noticed that the text-to-speech audio is quieter the second time (and on consecutive runs), and I don't know why. – Terzas
I agree with Samuel Méndez; I am facing the same issue. – Subjectify
@SamuelMéndez Are you using an iPhone 7+ by chance? – Huffish
@Huffish No, it was an iPad 4th gen. – Terzas
Is there any solution for the low-volume audio? – Bannerman
Did anybody solve the low-volume issue? I figured out that it switches to the small speaker, as that's the default for playAndRecord, but it only works once on the normal speaker even if I set the options to defaultToSpeaker every time I start the speech recognizer. – Nagoya
try audioSession.setCategory(.playAndRecord, mode: .default, options: [.defaultToSpeaker, .allowBluetoothA2DP]) – Timberwork
@DigvijaysinhGida, thanks so much for your comment, which prompted me to tinker with other parameters. – Disaccharide

Use the code below to fix the issue:

let audioSession = AVAudioSession.sharedInstance()
do {
    try audioSession.setCategory(AVAudioSessionCategoryPlayback)
    try audioSession.setMode(AVAudioSessionModeDefault)
} catch {
    print("audioSession properties weren't set because of an error.")
}

Use the above code in the following way:

@IBAction func microphoneTapped(_ sender: AnyObject) {
    if audioEngine.isRunning {
        audioEngine.stop()
        recognitionRequest?.endAudio()

        // Switch the session back to playback so the next
        // text-to-speech call can actually produce sound.
        let audioSession = AVAudioSession.sharedInstance()
        do {
            try audioSession.setCategory(AVAudioSessionCategoryPlayback)
            try audioSession.setMode(AVAudioSessionModeDefault)
        } catch {
            print("audioSession properties weren't set because of an error.")
        }

        microphoneButton.isEnabled = false
        microphoneButton.setTitle("Start Recording", for: .normal)
    } else {
        startRecording()
        microphoneButton.setTitle("Stop Recording", for: .normal)
    }
}

Here, after stopping the audio engine, we set the audio session category to AVAudioSessionCategoryPlayback and the mode to AVAudioSessionModeDefault. The next time you call the text-to-speech method, it will work fine.
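
To keep the two configurations in one place, you could wrap the switching in a small helper (a sketch under the same assumptions as the code above; configureAudioSession is a hypothetical name, not from the original answer):

// Hypothetical helper: one place to switch between the two session setups.
private func configureAudioSession(forRecording recording: Bool) {
    let audioSession = AVAudioSession.sharedInstance()
    do {
        if recording {
            // Recognition path, as in startRecording.
            try audioSession.setCategory(AVAudioSessionCategoryRecord)
            try audioSession.setMode(AVAudioSessionModeMeasurement)
        } else {
            // Playback path, so AVSpeechSynthesizer can produce sound again.
            try audioSession.setCategory(AVAudioSessionCategoryPlayback)
            try audioSession.setMode(AVAudioSessionModeDefault)
        }
        try audioSession.setActive(true, with: .notifyOthersOnDeactivation)
    } catch {
        print("audioSession properties weren't set because of an error.")
    }
}

You would then call configureAudioSession(forRecording: true) at the top of startRecording and configureAudioSession(forRecording: false) right after stopping the audio engine.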

Laux answered 29/3, 2017 at 16:14 Comment(2)
This comment helped me solve my issue and didn't leave me with the audio changing volume. It seems the important part is resetting the audio session category and mode once you're finished with recognition. Thanks for sharing this info. – Impenitent
Thanks, this saved a lot of time. I was searching the web for the error and not noticing that it only happened after activating the recognizer. I thought this was a bug in 11.0.1, but it is not. – Montcalm

When using STT (speech-to-text), you have to set the audio session like this:

AVAudioSession *avAudioSession = [AVAudioSession sharedInstance];

if (avAudioSession) {
    [avAudioSession setCategory:AVAudioSessionCategoryRecord error:nil];
    [avAudioSession setMode:AVAudioSessionModeMeasurement error:nil];
    [avAudioSession setActive:true withOptions:AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation error:nil];
}

When using TTS (text-to-speech), set the audio session again, like this:

[regRequest endAudio];

AVAudioSession *avAudioSession = [AVAudioSession sharedInstance];
if (avAudioSession) {
    [avAudioSession setCategory:AVAudioSessionCategoryPlayback error:nil];
    [avAudioSession setMode:AVAudioSessionModeDefault error:nil];
}

It works perfectly for me, and it also solves the low-audio problem.
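
For completeness, a rough Swift equivalent of the same switching (a hedged translation, using the Swift 3-era constants seen elsewhere in this thread, not code from the original answer):

// Before speech-to-text (STT):
let avAudioSession = AVAudioSession.sharedInstance()
do {
    try avAudioSession.setCategory(AVAudioSessionCategoryRecord)
    try avAudioSession.setMode(AVAudioSessionModeMeasurement)
    try avAudioSession.setActive(true, with: .notifyOthersOnDeactivation)
} catch {
    print("Audio session setup for recording failed: \(error)")
}

// Before text-to-speech (TTS):
recognitionRequest?.endAudio()
do {
    try avAudioSession.setCategory(AVAudioSessionCategoryPlayback)
    try avAudioSession.setMode(AVAudioSessionModeDefault)
} catch {
    print("Audio session setup for playback failed: \(error)")
}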

Jeffries answered 8/1, 2018 at 13:18 Comment(2)
I agree with this. AVAudioSessionModeMeasurement should be examined if one experiences very low volume and/or problems switching between AVSpeechSynthesizer and SFSpeechRecognizer. – Endolymph
Yeah, that helps improve the app's efficiency. – Jeffries

Try this:

audioSession.setCategory(AVAudioSessionCategoryRecord) 
Woodprint answered 15/12, 2016 at 9:37 Comment(2)
Give some explanation. – Bridgman
Why should the OP "try this"? A good answer will always have an explanation of what was done and why it was done that way, not only for the OP but for future visitors to SO who may find this question and read your answer. – Marchetti
