How Flutter and Google Cloud Platform came together to solve a ‘Learning’ Problem

How about the Technical Side of it?

I decided that Flutter was the way to go for mobile app development. But I also needed something that could convert text to speech with high accuracy and the right accent. That’s when I decided to go with the Cloud APIs on Google Cloud Platform (GCP).


Setup Google Cloud Text to Speech API

Google Cloud Text-to-Speech API, powered by DeepMind’s WaveNet, is amazing technology that can synthesize speech that mimics a real person’s voice. We need to do some quick setup to get it working.

  • First, sign up for Google Cloud Platform with your Gmail ID. Follow the quickstart guide, and make sure to enable the Text-to-Speech API for your project.

Note: The API key is confidential. Please do not share it with anyone, as it could be misused.
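If you prefer the command line, the setup above can be sketched with the gcloud CLI (this assumes the CLI is installed and authenticated; the key’s display name is arbitrary):

```shell
# Enable the Cloud Text-to-Speech API for the current project.
gcloud services enable texttospeech.googleapis.com

# Create an API key for calling the REST endpoints; keep it secret.
gcloud services api-keys create --display-name="tts-demo-key"
```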

Text to Speech APIs

Google Cloud Text-to-Speech provides native client libraries, but they cannot be used from Android and iOS apps. Instead, the service exposes a REST API to interact with the Cloud Text-to-Speech API. There are two primary endpoints:

  • /voices – Returns the list of all voices available in the Cloud Text-to-Speech API for the user to select from.
  • /text:synthesize – Performs text-to-speech synthesis using the input text, voice, and audioConfig we provide.

For my app, I used the /text:synthesize endpoint alone. If you would like to give users the choice of voice and accent, the /voices endpoint can be used to list all the supported voices to select from.
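To make the request shape concrete, here is a small sketch (in Python purely for illustration; the app itself calls the same endpoint from Dart). The field names input, voice, and audioConfig follow the public REST reference; the voice name and the API-key query parameter are example values you would substitute:

```python
import base64

# The synthesis endpoint; an API key from the GCP console is passed as a
# query parameter (example value, substitute your own).
SYNTHESIZE_URL = "https://texttospeech.googleapis.com/v1/text:synthesize?key=YOUR_API_KEY"

def build_synthesize_request(text, language_code="en-US", voice_name="en-US-Wavenet-D"):
    """Build the JSON body for the /text:synthesize endpoint.

    The voice name here is just one of the WaveNet voices; /voices lists
    the full set.
    """
    return {
        "input": {"text": text},
        "voice": {"languageCode": language_code, "name": voice_name},
        "audioConfig": {"audioEncoding": "MP3"},
    }

def decode_audio(response_json):
    """The API returns the audio as a base64 string under 'audioContent'."""
    return base64.b64decode(response_json["audioContent"])
```

POST the body returned by build_synthesize_request to SYNTHESIZE_URL, then run decode_audio on the JSON response to get raw MP3 bytes you can write to a file and play.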

At the Flutter side

Now that we have set things up successfully with the Google Cloud Text-to-Speech API, we can use it from the Flutter code.

  • Add dependencies. The Flutter project’s pubspec.yaml will use two external dependencies:

    audioplayer: A Flutter audio plugin (ObjC/Java) to play remote or local audio files.

    path_provider: A Flutter plugin for finding commonly used locations on the filesystem.
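In pubspec.yaml, the two dependencies above would be declared roughly like this (the version constraints are illustrative; pick the latest versions from pub.dev):

```yaml
dependencies:
  flutter:
    sdk: flutter
  # Plays the MP3 returned by the synthesize endpoint.
  audioplayer: ^0.8.1
  # Locates a writable directory for saving the audio file.
  path_provider: ^1.6.0
```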