I think there is some confusion because you are using the Kinect SDK while some of the answers here concern the related SDKs: System.Speech (part of .NET) and Microsoft.Speech (distributed with a variety of Microsoft server products and the Server Speech Platform). From your comments on the other answers, it seems the Kinect SDK uses the Microsoft.Speech namespace, and your app must reference the Microsoft.Speech.dll that came with the Kinect SDK.
Just to help clarify a few things (I hope):
System.Speech is a core .NET API, and a recognizer that implements it is included in Windows 7. It is a client (desktop) recognizer: it can be trained for specific users and includes a dictation grammar.
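To make the distinction concrete, here is a minimal sketch of the desktop recognizer doing free-form dictation via System.Speech. This assumes Windows 7 (or Vista) with the built-in recognizer available and a project reference to System.Speech.dll:

```csharp
using System;
using System.Speech.Recognition;

class DictationDemo
{
    static void Main()
    {
        using (var recognizer = new SpeechRecognitionEngine())
        {
            // Only the desktop recognizer supports a dictation grammar;
            // this is one of the key differences from Microsoft.Speech.
            recognizer.LoadGrammar(new DictationGrammar());
            recognizer.SetInputToDefaultAudioDevice();

            recognizer.SpeechRecognized += (s, e) =>
                Console.WriteLine("Heard: " + e.Result.Text);

            // Recognize continuously until the user presses Enter.
            recognizer.RecognizeAsync(RecognizeMode.Multiple);
            Console.ReadLine();
        }
    }
}
```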
Microsoft.Speech is a similar but slightly different .NET API. Recognizers that implement Microsoft.Speech are part of various server products, such as UCMA and the Microsoft Server Speech Platform.
As you point out, Microsoft.Speech is also the API used for the Kinect recognizer. This is documented in the MSDN link Philipp Schmid mentioned in a comment, "Speech C# How To (Kinect)". I have not worked with Kinect, but this makes sense, since the Kinect recognizer doesn't require speaker training.
These resources are a bit out of date because they predate Kinect, but they may still be helpful:
Microsoft.Speech and System.Speech are similar, but different. See What is the difference between System.Speech.Recognition and Microsoft.Speech.Recognition?
To get started with .NET speech, there is a very good article that was published a few years ago at http://msdn.microsoft.com/en-us/magazine/cc163663.aspx. It is probably the best introductory article I’ve found so far. It is a little out of date, but very helpful. (The AppendResultKeyValue method was dropped after the beta.) The article uses the System.Speech namespace, but most of it can be mapped directly to Microsoft.Speech.
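To illustrate that mapping, here is a hedged sketch of the same recognition pattern under Microsoft.Speech, using a command-and-control grammar as a Kinect app would. The specific phrases are made up for illustration, the default `SpeechRecognitionEngine` constructor assumes a Microsoft.Speech recognizer is installed (you may need to pass a `RecognizerInfo` from `InstalledRecognizers()`), and the actual Kinect audio wiring is in the MSDN sample rather than shown here:

```csharp
using System;
using Microsoft.Speech.Recognition;

class KinectCommandDemo
{
    static void Main()
    {
        // No dictation grammar here: with Microsoft.Speech you enumerate
        // the phrases you want to recognize yourself.
        var commands = new Choices("red", "green", "blue");
        var builder = new GrammarBuilder(commands);

        using (var recognizer = new SpeechRecognitionEngine())
        {
            recognizer.LoadGrammar(new Grammar(builder));

            recognizer.SpeechRecognized += (s, e) =>
                Console.WriteLine("Command: " + e.Result.Text);

            // For Kinect, you would call SetInputToAudioStream(...) with the
            // stream from the Kinect audio source instead of a default device;
            // see the "Speech C# How To (Kinect)" sample for the exact setup.
            recognizer.RecognizeAsync(RecognizeMode.Multiple);
            Console.ReadLine();
        }
    }
}
```

Note that aside from the `using` directive and the input-stream setup, the code is nearly identical to its System.Speech counterpart, which is why articles written against System.Speech remain useful.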