Rhetorical question: why on earth would anyone who tries to live without Google Apps use such a feature?
You should do some research on how speech recognition works. This is a good start:
If you think about it for a moment, and reflect on the term “statistics” and the buzzword “big data”, you might realise that most of these services do not run on your local ARM processor: they transmit audio data to a large cluster, which processes it and returns the text.
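To make the privacy angle concrete, here is a minimal sketch of the kind of request such a client might send upstream. Every field name and the location parameter are my own assumptions for illustration; real services (Google, Nuance) each define their own wire format.

```python
import base64
import json

def build_recognition_request(audio_bytes, locale="en-US", location=None):
    """Sketch of the payload a hypothetical cloud ASR client might POST.

    All field names are assumptions for illustration, not any real API.
    """
    payload = {
        "config": {"language": locale, "sample_rate_hz": 16000},
        # The raw audio leaves your device, base64-encoded for JSON transport.
        "audio": base64.b64encode(audio_bytes).decode("ascii"),
    }
    if location is not None:
        # Some keyboards also attach location metadata to each request.
        payload["metadata"] = {"location": location}
    return json.dumps(payload)

# One second of silence at 16 kHz, 16-bit mono: 32000 bytes of zeros.
request = build_recognition_request(b"\x00" * 32000, location=(52.52, 13.40))
print(len(request) > 32000)  # base64 inflates the audio by roughly a third
```

The point is simply that everything you speak, plus whatever metadata the app chooses to add, travels to somebody else's cluster before any text appears on your screen.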
You can also watch Google’s own PR video. At least it has some facts in it.
At some point it will become possible to do this on mobile devices without “cloud” services (aka other people’s computers). But it will take some time to get there, and some processing power. Then again, this stuff moves faster than a bullet train. Just remember, this was five years ago:
My advice: if you use these features, be aware of which service you choose, which clusters are behind it doing the analysis, who collects the data, and what data the app collects besides audio.
Swype, for example, uses Nuance, as quite probably does Apple, and maybe even Google, AFAIK (though both have their own research groups working on speech recognition, of course, and enough servers to run this shit). Swype also constantly reports your location, to improve the service.
tl;dr: speech recognition is non-trivial. If you want it to work for text entry, you will need Google and/or Nuance. Alternatively, wait a few years.