Google Now may soon be able to work offline: Report
According to reports, Google could be working on making Google Now, its speech-recognition-based assistant, work for Android users even when they are offline.
According to a report by Gizmodo, speech recognition typically relies heavily on cloud computing because the processing power and memory it requires are enormous. The report reveals that in a recent paper, Google engineers describe how they used deep machine learning techniques to build a lightweight speech recognition program that resides on the smartphone itself.
The paper describes the research project as ‘a large vocabulary speech recognition system that is accurate, has low latency, and yet has a small enough memory and computational footprint to run faster than real-time on a Nexus 5 Android smartphone.’
On the technical side, the paper says the engineers employed a quantised Long Short-Term Memory (LSTM) acoustic model trained with connectionist temporal classification (CTC) to directly predict phoneme targets, and further reduced its memory footprint using an SVD-based compression scheme. The paper adds that the system achieves a 13.5 per cent word error rate on an open-ended dictation task, running at a median speed seven times faster than real time.
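The paper itself does not publish code, but the SVD trick it names is a standard one: a large weight matrix is replaced by the product of two much smaller low-rank factors. A minimal sketch in Python, assuming a generic NumPy setup and made-up layer sizes (not Google's actual model), looks like this:

```python
import numpy as np

def svd_compress(weights, rank):
    """Approximate a weight matrix W (m x n) by the product of two
    low-rank factors, storing rank*(m + n) values instead of m*n."""
    u, s, vt = np.linalg.svd(weights, full_matrices=False)
    u_r = u[:, :rank] * s[:rank]   # fold singular values into the left factor
    v_r = vt[:rank, :]
    return u_r, v_r

# Hypothetical sizes, for illustration only: a 1024 x 512 recurrent
# weight matrix reduced to rank 64 (~5x fewer parameters).
w = np.random.randn(1024, 512).astype(np.float32)
u_r, v_r = svd_compress(w, rank=64)
print("original:", w.size, "compressed:", u_r.size + v_r.size)
```

Combined with quantisation (storing weights in fewer bits), this is how a model that would normally need a server can be squeezed into a phone's memory.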
Tech Times reports that Google trained the system on 2,000 hours of anonymised Google voice search traffic, encompassing roughly 100 million requests, and added YouTube background noise to better simulate real-life conditions. The report adds that since the system relies on machine learning, the more it is used, the better it will get at anticipating user needs, preferences and habits.
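As an illustration of that noise-augmentation step, here is a minimal sketch (a hypothetical helper, not Google's pipeline) of how background noise is typically mixed into clean speech at a chosen signal-to-noise ratio:

```python
import numpy as np

def add_noise(speech, noise, snr_db):
    """Mix a noise clip into a speech clip at a target
    signal-to-noise ratio (in decibels)."""
    noise = np.resize(noise, speech.shape)  # loop/trim noise to match length
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2)
    # Scale so 10*log10(speech_power / scaled_noise_power) == snr_db.
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + scale * noise

# Toy example: one second of fake 16 kHz "speech" corrupted at 10 dB SNR.
speech = np.random.randn(16000).astype(np.float32)
noise = np.random.randn(8000).astype(np.float32)
noisy = add_noise(speech, noise, snr_db=10.0)
```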
There have been no details on when the changes will be rolled out, but one can hope to see some of this functionality in Android N.