Matching wildcard/dictation in Microsoft Speech Grammar
I'm using Microsoft Speech API to load a grxml grammar:

Grammar grammar = new Grammar(file);
grammar.Enabled = true;

SpeechRecognitionEngine sre = GetEngine();
sre.LoadGrammarAsync(grammar);

Based on MSDN, I cannot find a tag that matches a wildcard / free-form spoken text, like:

<item>My message is {dictation}</item>

It seems to be available in code with a DictationGrammar and AppendDictation(). It's also available in WSRMacro XML using *, but I do not know how to do it in XML.

The GARBAGE rule skips the text, but I need to recognize it.

Am I missing something?
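One SRGS-level possibility (an assumption on my part, not confirmed anywhere in this thread): the desktop System.Speech engine is said to expose its built-in dictation topic through the special ruleref URI grammar:dictation (the URI that SrgsRuleRef.Dictation produces), which would allow a grammar along these lines — the server/Kinect engine would reject it:

```xml
<grammar version="1.0" xml:lang="en-US" root="main"
         xmlns="http://www.w3.org/2001/06/grammar">
  <rule id="main">
    <item>My message is</item>
    <!-- Special URI: desktop (System.Speech) engines are reported to
         resolve this to the built-in dictation topic; server engines
         such as the one Kinect uses do not support it. -->
    <ruleref uri="grammar:dictation"/>
  </rule>
</grammar>
```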

Kreis answered 23/8, 2012 at 22:55 Comment(0)

If you're using the Kinect speech engine, you cannot use dictation at all; the engine simply doesn't support it.

For more details, you can look at my answer to this question.

Refine answered 25/9, 2012 at 20:16 Comment(4)
But in the C# API there is a DictationGrammar and a WildcardGrammar. I could achieve my goal if I "hardcode" it. In fact, I activate a dictation grammar for some special cases (even if it is bad, I agree)Kreis
The C# API works with both the desktop engine and the server engine. The desktop engine supports DictationGrammar and WildcardGrammar; the server engine does not.Refine
Kinect uses Microsoft.Speech, not System.Speech, it seems; although you could probably grab the audio from Kinect and feed it to System.Speech somehow (but I think you would need to train the recognition engine if you go with System.Speech)Swacked
Btw, it seems Microsoft united their previously separate installers for the Server (accessed via Microsoft.Speech in .NET) and Client (accessed via System.Speech in .NET) speech runtimes into the Microsoft Speech Platform Runtime (version 11), found at microsoft.com/en-us/download/details.aspx?id=27225; the corresponding SDK is at microsoft.com/en-us/download/details.aspx?id=27226. The runtime version that was labeled "for servers" in the past is non-trainable (though it has a setting to turn acoustic-model adaptation on/off) and does not accept free speech, only commandsSwacked

For my project SARAH:

  • I load all the XML grammars
  • Then I create a dictation grammar
  • Some user actions enable/disable the dictation mode

I know there should be a better way to do it, since WSRMacro uses '*', but I don't know how.

Might be a clue
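The toggle described above might be sketched like this (a sketch only, assuming the System.Speech desktop engine; SARAH's actual code surely differs, and the class and method names here are illustrative):

```csharp
using System.Speech.Recognition;

class DictationToggle
{
    private readonly SpeechRecognitionEngine sre = new SpeechRecognitionEngine();
    private readonly DictationGrammar dictation = new DictationGrammar();

    public DictationToggle(string grxmlFile)
    {
        // Load the command grammar from the grxml file, as in the question.
        sre.LoadGrammar(new Grammar(grxmlFile));

        // Load the dictation grammar too, but keep it disabled until needed.
        dictation.Enabled = false;
        sre.LoadGrammar(dictation);

        sre.SetInputToDefaultAudioDevice();
        sre.RecognizeAsync(RecognizeMode.Multiple);
    }

    // Called from whatever user action should switch dictation on or off.
    public void SetDictationMode(bool on)
    {
        dictation.Enabled = on;
    }
}
```

Grammar.Enabled can be flipped on an already-loaded grammar, so the command grammar stays active the whole time and only the dictation grammar is toggled.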

Kreis answered 21/9, 2012 at 17:24 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.