Adding Speech Recognition Capabilities to your NativeScript app


Does speech recognition still suck?

It doesn't. Watch this 24-second video so you can literally take my word for it:



Wow, iOS speech recognition is really impressive!

I know, right?! The nice thing is it works equally well on Android and neither of these require any external SDK - it's all built into the mobile operating systems nowadays.

I'm convinced, let's replace all text input by speech!

Sure, knock yourself out! Add the plugin like any other and read on:
$ tns plugin add nativescript-speech-recognition

Availability check

With the plugin installed, let's make sure the device has speech recognition capabilities before trying to use it (certain older Android devices may not):

// import the plugin
import { SpeechRecognition } from "nativescript-speech-recognition";

// note: don't name this class "SpeechRecognition" as well,
// or it will shadow the class we just imported
class SpeechRecognitionDemo {

  // instantiate the plugin
  private speechRecognition = new SpeechRecognition();

  public checkAvailability(): void {
    this.speechRecognition.available().then(
        (available: boolean) => console.log(available ? "YES!" : "NO"),
        (err: string) => console.log(err)
    );
  }
}

Starting 👂 and stopping 🙉 listening

Now that we've made sure the device supports speech recognition, we can start listening for voice input. To help the device recognize what the user says, we tell it which language to expect; by default the device language is used.

We also pass in a callback that gets invoked whenever the device has interpreted one or more spoken words.

This example builds on the previous one and shows how to start and stop listening:

// import the plugin and its transcription type
import { SpeechRecognition, SpeechRecognitionTranscription } from "nativescript-speech-recognition";

class SpeechRecognitionDemo {

  // instantiate the plugin
  private speechRecognition = new SpeechRecognition();

  public checkAvailability(): void {
    this.speechRecognition.available().then(
        (available: boolean) => console.log(available ? "YES!" : "NO"),
        (err: string) => console.log(err)
    );
  }

  public startListening(): void {
    this.speechRecognition.startListening({
      // optional, uses the device locale by default
      locale: "en-US",
      // this callback will be invoked repeatedly during recognition
      onResult: (transcription: SpeechRecognitionTranscription) => {
        console.log(`User said: ${transcription.text}`);
        console.log(`User finished?: ${transcription.finished}`);
      }
    }).then(
        (started: boolean) => console.log(`started listening`),
        (errorMessage: string) => console.log(`Error: ${errorMessage}`)
    );
  }

  public stopListening(): void {
    this.speechRecognition.stopListening().then(
        () => console.log(`stopped listening`),
        (errorMessage: string) => console.log(`Stop error: ${errorMessage}`)
    );
  }
}
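Because `onResult` fires repeatedly while the user speaks, you'll usually see several partial transcriptions before the final one arrives with `finished: true`. Here's a minimal sketch of how you might keep only the final result; the `SpeechRecognitionTranscription` interface is restated locally and the callback is invoked by hand (instead of by the plugin) purely so the example is self-contained:

```typescript
// Mirrors the shape of the plugin's transcription results.
interface SpeechRecognitionTranscription {
  text: string;
  finished: boolean;
}

class TranscriptionCollector {
  private finalText = "";

  // Pass this as the onResult callback; partial results are ignored,
  // only the final transcription is kept.
  onResult = (t: SpeechRecognitionTranscription): void => {
    if (t.finished) {
      this.finalText = t.text;
    }
  };

  get result(): string {
    return this.finalText;
  }
}

const collector = new TranscriptionCollector();

// Simulated partial results, refined as the device keeps listening:
collector.onResult({ text: "speech", finished: false });
collector.onResult({ text: "speech recognition", finished: false });
collector.onResult({ text: "speech recognition rocks", finished: true });

console.log(collector.result); // → "speech recognition rocks"
```

In a real app you'd pass `collector.onResult` as the `onResult` option of `startListening`, and bind `collector.result` (or each intermediate `transcription.text`, for a live-dictation feel) to a Label in your view.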

iOS user consent

On iOS the startListening function will trigger two prompts: one asking the user to allow Apple to analyze their voice input, and another asking for permission to use the microphone.

The contents of these "consent popups" can be amended by adding fragments like these to app/App_Resources/iOS/Info.plist:

<!-- Speech recognition usage consent -->
<key>NSSpeechRecognitionUsageDescription</key>
<string>My custom recognition usage description. Overriding the default empty one in the plugin.</string>
<!-- Microphone usage consent -->
<key>NSMicrophoneUsageDescription</key>
<string>My custom microphone usage description. Overriding the default empty one in the plugin.</string>

Have feedback?

As usual, compliments and marriage proposals can be added to the comments. Problems related to the plugin can go to the GitHub repository. Enjoy!
