Here's an interesting article about how Apple fell behind Google and Amazon in speech recognition because it stuck with hidden Markov models.
https://www.engadget.com/2017/06/07/how-apple-reinvigorated-its-ai-aspirations-in-under-a-year/

Fast forward to 2014. Apple is at the end of its rope with Siri's listening and comprehension issues. The company realizes that minor tweaks to Siri's processes can't fix its underlying problems and a full reboot is required. So that's exactly what they did. The original Siri relied on hidden Markov models -- a statistical tool used to model time series data (essentially reconstructing the sequence of states in a system based only on the output data) -- to recognize temporal patterns in handwriting and speech recognition.
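To make "reconstructing the sequence of states based only on the output data" concrete, here is a minimal sketch of Viterbi decoding for a toy HMM in Python with NumPy. The states, observations, and probabilities are made-up illustrative values, not anything from Siri's actual models.

```python
import numpy as np

# Toy HMM: hidden states might be phonemes, observations acoustic features.
# All numbers below are made-up illustrative values.
states = ["s0", "s1"]
start_p = np.array([0.6, 0.4])                 # P(initial state)
trans_p = np.array([[0.7, 0.3],                # P(next state | current state)
                    [0.4, 0.6]])
emit_p = np.array([[0.5, 0.4, 0.1],            # P(observation | state)
                   [0.1, 0.3, 0.6]])

def viterbi(obs):
    """Return the most likely hidden-state sequence given only the observations."""
    n_states, T = len(states), len(obs)
    prob = np.zeros((T, n_states))             # best log-prob of any path ending in state j at time t
    back = np.zeros((T, n_states), dtype=int)  # backpointers to recover that path

    prob[0] = np.log(start_p) + np.log(emit_p[:, obs[0]])
    for t in range(1, T):
        for j in range(n_states):
            scores = prob[t - 1] + np.log(trans_p[:, j]) + np.log(emit_p[j, obs[t]])
            back[t, j] = np.argmax(scores)
            prob[t, j] = np.max(scores)

    # Trace the best path back from the final timestep.
    path = [int(np.argmax(prob[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(back[t, path[-1]])
    return [states[i] for i in reversed(path)]

print(viterbi([0, 1, 2, 2]))  # e.g. ['s0', 's0', 's1', 's1']
```

Note the key limitation the article alludes to: the transition matrix only conditions on the immediately preceding state, so longer-range context is invisible to the model.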
The company replaced and supplemented these models with a variety of machine learning techniques, including deep neural networks and "long short-term memory networks" (LSTMNs). These neural networks are effectively more generalized versions of the Markov model. However, because they possess memory and can track context -- as opposed to simply learning patterns as Markov models do -- they're better equipped to understand nuances like grammar and punctuation and return a result closer to what the user really intended.
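The "memory" here is the cell state an LSTM carries across the whole sequence, so each step can depend on everything seen so far rather than only on the previous state as in a first-order Markov model. Below is a rough single-cell sketch of the standard LSTM equations in NumPy (randomly initialized weights, untrained), just to show where that carried state lives; the sizes and values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
input_size, hidden_size = 4, 8

# Randomly initialized weights for one LSTM cell (illustration only, untrained).
W = rng.standard_normal((4 * hidden_size, input_size + hidden_size)) * 0.1
b = np.zeros(4 * hidden_size)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev):
    """One LSTM timestep: the cell state c carries long-range context forward."""
    z = W @ np.concatenate([x, h_prev]) + b
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)   # input, forget, output gates
    g = np.tanh(g)                                 # candidate cell update
    c = f * c_prev + i * g                         # memory: mix of old and new information
    h = o * np.tanh(c)                             # hidden state exposed to the next step
    return h, c

# h and c are threaded through every step of the sequence, which is what lets
# the network track context a Markov model would have already forgotten.
h = np.zeros(hidden_size)
c = np.zeros(hidden_size)
for x in rng.standard_normal((5, input_size)):
    h, c = lstm_step(x, h, c)
print(h.shape, c.shape)  # (8,) (8,)
```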