Just recently, a requirement popped up for speech-to-text conversion capability in our Angular application. We have analysts who visit client sites, and they wanted the ease of simply dictating their review(s) of a client meeting directly into an input field, rather than having to log in to the app and then upload a … The requirement sounds pretty simple on the surface of it! Easier said than done!

Let's delve into the nitty-gritty of the situation right away! Since we have an enterprise Azure subscription, the logical choice was to implement the above using the Microsoft Cognitive Speech Service. Concretely, the requirement was to:

1. Have a rich text box input field with a 'mic' icon, letting the user click the icon and start the dictation.
2. Use the Microsoft Speech SDK to translate the speech and output the text content into the rich text box as the user speaks (dictates) his review into the microphone.

The application architecture that we have is, roughly, a Microservices API layer with microservices for purposes like Cognitive services, Elasticsearch services, etc. Microsoft offers different flavors of speech-to-text conversion, so logically the speech-to-text functionality was to go into the Cognitive microservice API if implemented on the server side.

Coming back to our original problem at hand: to baseline the implementation, a POC was in order. Please go through the GitHub project for details.
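The browser-side dictation flow described above can be sketched roughly as follows. This is a minimal sketch, not the actual POC code: the `appendPhrase` helper is a hypothetical name, the key/region values are placeholders, and the SDK wiring (shown in the comment) assumes the official `microsoft-cognitiveservices-speech-sdk` npm package.

```typescript
// Pure helper (hypothetical): append a finalized phrase to the dictated transcript
// that backs the rich text box.
function appendPhrase(transcript: string, phrase: string): string {
  const clean = phrase.trim();
  if (clean.length === 0) return transcript;       // ignore empty recognition results
  return transcript.length > 0 ? transcript + " " + clean : clean;
}

// The real wiring against the Speech SDK would look roughly like this
// (commented out here so the sketch stays self-contained; "<key>"/"<region>"
// are placeholders for your Azure subscription values):
//
//   import * as sdk from "microsoft-cognitiveservices-speech-sdk";
//
//   const speechConfig = sdk.SpeechConfig.fromSubscription("<key>", "<region>");
//   const audioConfig  = sdk.AudioConfig.fromDefaultMicrophoneInput();
//   const recognizer   = new sdk.SpeechRecognizer(speechConfig, audioConfig);
//
//   recognizer.recognized = (_s, e) => {
//     if (e.result.reason === sdk.ResultReason.RecognizedSpeech) {
//       // push each finalized phrase into the rich text box model
//       richTextValue = appendPhrase(richTextValue, e.result.text);
//     }
//   };
//
//   // started when the user clicks the mic icon
//   recognizer.startContinuousRecognitionAsync();
```

In an Angular component, `richTextValue` would simply be a bound property on the rich text box, so the transcript appears as the analyst dictates.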