
How To Build Your Own Action For Google Home Using API.AI

For the holidays, the owner of (and my boss at) thirteen23 gave each employee a Google Home device. If you don’t already know, Google Home [1] is a voice-activated speaker powered by Google Assistant [2] and a competitor to Amazon’s line of Alexa products [3]. I already have the Amazon Echo, and as Director of Technology at thirteen23, I love tinkering with software for new products. For the Amazon Echo, you can create what are called “skills,” which allow you to build custom interactions when speaking to the device.

I’ve really enjoyed learning how to build my own skills for Alexa [4]. Now that Google Home is on the market, Google has its own platform for building custom interactions, similar to skills, called “actions” [5]. I checked it out and found that creating and deploying a basic Google action [6] is extremely simple.

If you have a Google Home, you may have played with its prebuilt mad libs. Mad Libs [7] is a game in which one player prompts the others for a list of words to substitute for blanks in a story, before reading the often comical or nonsensical story aloud. I’ll use this game to show you how to build your own action for Google Home. Below, I’ve detailed the steps to build a custom mad lib action, explained why certain steps are important, and shown how they fit into the world of voice services. After this exercise, you will better understand voice services and be on your way to programming actions for Google Home.

Google Actions And API.AI

One notable difference between developing skills for Alexa and actions for Google Home is the software you use to set up the actual product. Amazon has a barebones web form built specifically for Alexa skills. Google, on the other hand, bought API.AI in September 2016 [12], right before it released Home, and requires you to use this platform to create your action. There is a short learning curve with API.AI [13], and the interface takes a little getting used to, but it works pretty well and has a lot more built-in power than Alexa’s development portal. You can also do a lot more with API.AI beyond Google actions. For this tutorial, though, we will use it primarily to create a Google action.

To start off, we will create an API.AI account, create a new agent (which will eventually become our Google action) and give it a name.

Step 1: Create an API.AI Account

Note: If you have a Google Home, make sure the API.AI account is the same Google account logged into that device! Otherwise, you won’t be able to test it on the actual hardware.

Go to API.AI [14] and click “Sign up free.” I signed in with Google because I always have Gmail open. Once signed in, you should see an interface similar to the one below. Click “Create agent,” and let’s get started!

[Image: Home screen of API.AI]

Step 2: Name Your Agent

The first thing you need to do is name your agent. An API.AI agent [17] represents a conversational interface for your application, device or bot. For this tutorial, our agent represents a conversation to gather words for a happy-birthday mad lib. You can’t have any spaces in the name, so let’s call the agent HappyBirthdayMadLib.

Leave the agent type as “public,” add a description if you want, and click “Save.”

[Image: A new HappyBirthdayMadLib agent]

Intents

An intent [20] allows users to say what they want to do and lets the system figure out which activity matches what was said. This is another area in which API.AI and Alexa’s skill-building forms differ greatly: API.AI has a tool specifically for creating these intents, whereas Amazon requires you to upload a raw intent schema [21]. You should see a screen like the one below.

[Image: Working with intents]
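Conceptually, an intent is just a mapping from example user phrases to a named action with typed parameters. The sketch below is illustrative only, not API.AI’s actual data model; the intent name and entities anticipate the make_madlib intent built later in this tutorial, and the matcher is a deliberately dumb stand-in for the service’s machine learning.

```python
# Illustrative only: a rough conceptual model of an intent -- example
# user phrases mapped to an intent name plus typed parameters.
# (API.AI builds this for you in its UI; Amazon's Alexa portal asks
# for a raw JSON schema instead.)
intent = {
    "name": "make_madlib",
    "user_says": ["Laura", "A name is Laura"],
    "parameters": [
        {"name": "name1", "entity": "@sys.given-name", "required": True},
        {"name": "noun1", "entity": "@sys.any", "required": True},
    ],
}

def matches(utterance, intent):
    """Toy matcher: does the utterance look like one of the examples?

    The real service generalizes beyond its examples with machine
    learning; this exact-match check only makes the idea concrete.
    """
    return utterance.strip().lower() in (u.lower() for u in intent["user_says"])

print(matches("laura", intent))   # an example phrase -> True
```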

We will now create a welcome intent to introduce the Google action and an intent to gather the words in our mad lib. Our last step in this section is to create the final response — the mad lib!

Step 3: Default Welcome Intent

Let’s now focus on the default welcome intent. (For now, you can ignore the default fallback intent [24].) The default welcome intent is what fires when your action is invoked through the Google Home device. For instance, when the user says “Hey, Google, open the happy birthday mad lib,” the agent knows to kick off the default welcome intent, which introduces your game to the person speaking to the Google Home device.

Click on “Default welcome intent,” scroll to the bottom, and you should see the screen below:

[Image: The default welcome intent]

If you mouse over the responses shown above, a little trash can will appear on each line. Click the trash cans to delete all of those prebaked responses. Then, let’s write a custom welcome text response:

Hello, and welcome to the Happy Birthday Mad Lib. Let’s begin. Give me the name of a female friend.

Click “Save,” and we are done with the default welcome intent! If you see a screen like the one below, then you are on track.

[Image: Create a custom default welcome intent]

Step 4: Create a New Intent

Click the “Intents” item in the left-hand menu. You will see “Default fallback intent” and “Default welcome intent” listed. We need a new intent to gather words for our mad lib, so click “Create intent” in the upper-right corner. The first thing to do on this new screen is name the intent: make_madlib.

Step 5: “User Says” Content

Our next step is to populate the “User says” area. “User says” phrases define what users need to say to trigger an intent, and they ultimately feed the machine-learning side of Google Home; the documentation explains [29] why example answers like these help. We only need a couple here. They will be answers to the request stated in the default welcome intent: “Give me the name of a female friend.” Using the name of one of your friends (or Laura), enter these two values:

  • Laura
  • A name is Laura

If your entries look like what you see below, then go ahead and click “Save,” and we will move on to creating our action.

[Image: Creating your own make_madlib intent]

Step 6: Defining Your Action

In order to gather all of the words we need for the happy-birthday mad lib, we will need to create an action. An action [32] corresponds to the step your application will take when a specific intent has been triggered by a user’s input.

We need to enter an action name in the field. Enter this name: make_madlib.

For our mad lib action, we will gather several words. These will be our parameters [33] for the action. Parameters consist of all of the data we need in order to complete our action. Given that this is a mad lib, we will need to gather various parts of speech, such as nouns and adjectives. These are our parameters for the action of creating a mad lib! So, let’s create some parameters. Each one is required, so check the “Required” box on each.

  1. Edit the given-name entry, changing the name to name1. Check it as required, and enter the prompt “Give me a name of a female friend”.
  2. Create a new parameter named noun1 of entity @sys.any, a value of $noun1, mark it as required, and enter the prompt “Give me a noun”.
  3. Create a new parameter named adjective1 of entity @sys.any, a value of $adjective1, mark it as required, and enter the prompt “Give me an adjective”.
  4. Create a new parameter named noun2 of entity @sys.any, a value of $noun2, mark it as required, and enter the prompt “Give me another noun”.
  5. Create a new parameter named number1 of entity @sys.number, a value of $number1, mark it as required, and enter the prompt “Give me a number”.
  6. Create a new parameter named adjective2 of entity @sys.any, a value of $adjective2, mark it as required, and enter the prompt “Give me an adjective”.
  7. Create a new parameter named name2 of entity @sys.given-name, a value of $name2, mark it as required, and enter the prompt “Give me a name of another friend”.
  8. Create a new parameter named noun3 of entity @sys.any, a value of $noun3, mark it as required, and enter the prompt “Give me another noun”.
  9. Create a new parameter named bodypart1 of entity @sys.any, a value of $bodypart1, mark it as required, and enter the prompt “Name a part of the body”.
  10. Create a new parameter named noun4 of entity @sys.any, a value of $noun4, mark it as required, and enter the prompt “Last one. Give me another noun”.
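The ten parameters above eventually reach your action as a flat name-to-value map. The sketch below is hypothetical (the helper is mine, not part of API.AI), but it shows why marking each parameter as required matters: the agent keeps prompting the user until nothing is missing.

```python
# The parameter names defined in this step. Marking each as "required"
# means API.AI will keep prompting the user until every one has a value.
REQUIRED = [
    "name1", "noun1", "adjective1", "noun2", "number1",
    "adjective2", "name2", "noun3", "bodypart1", "noun4",
]

def missing_parameters(params):
    """Return the required parameter names that are absent or empty."""
    return [name for name in REQUIRED if not params.get(name)]

# Mid-conversation, only two answers have been collected so far:
collected = {"name1": "Laura", "noun1": "birthday"}
print(missing_parameters(collected))  # the eight words still to prompt for
```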

I realize this is a lot. Are you still with me? Great. It is important to name everything exactly as outlined above. At this point, your screen should look like this:

[Image: Creating an action and parameters for your make_madlib intent]

Step 7: Create the Response

Move down to the “Response” section and add this content:

Friends, this is a surprise party for $name1. We are here to celebrate her $noun1. All of her most $adjective1 friends are here, including me, her devoted and faithful $noun2. I must say that she doesn’t look a day over $number1. Naturally, we have some $adjective2 presents for her. $name2 bought her a beautiful copper $noun3 that she can wear on her lovely $bodypart1. Now, let’s all sing together: “Happy $noun4 day to you!”
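API.AI will read this response back with every $placeholder replaced by the word the user supplied. The substitution can be sketched with Python’s `string.Template`, whose `$name` placeholder syntax happens to match; the story is abbreviated here, and the words are made up.

```python
from string import Template

# A sketch of the substitution API.AI performs on the response text:
# every $parameter is replaced by the value the user supplied.
RESPONSE = Template(
    "Friends, this is a surprise party for $name1. We are here to "
    "celebrate her $noun1. Now, let's all sing together: "
    "\"Happy $noun4 day to you!\""
)

# Example words a user might have supplied during the conversation:
words = {"name1": "Laura", "noun1": "birthday", "noun4": "cake"}
print(RESPONSE.safe_substitute(words))
```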

We’re done! This final part should look like what you see below:

[Image: Insert the response text]

Note: See the bottom of the screen in the image above, where “End conversation” is checked under the “Actions on Google” heading? That checkbox only appears after you complete the next step. Once it shows up, come back to this intent and check the box.

Integration

In order to test what we’ve created on a Google Home device or in the simulator, we need to connect this new mad lib agent to the “Actions on Google” integration. API.AI offers many different integrations [38], such as Facebook Messenger, Slack, Skype, Alexa and Cortana; remember that API.AI wasn’t built specifically for Google Home. For this new action to work on your Google Home, we need to integrate it with “Actions on Google.”

In the next step, we will enable the API.AI agent to work with the Google Home device by integrating it with Google actions.

Step 8: Integrate With Actions on Google

Click on “Integrations” in the left-hand menu. The very top-left item should be “Actions on Google.” Click on that item to open up the settings. You should see something like the screen below:

[Image: The “Actions on Google” integration]

Turn it on by flipping the toggle in the upper-right corner, then click the “Create project” button in the lower right. This takes you to the Actions on Google site, where you can set all of the publishing parameters for your action and test it in the simulator. Next, you can try it out in the Google Home Web Simulator [41].

Testing Your Action

You can test your new action in a couple of ways. You can use the Google Home Web Simulator, which allows you to test in a browser without an actual Google Home device, or you can test on a device that is logged in with the same account. Let’s test it in the simulator first.

In these final steps, you will test your new Google action in a simulator and, if you have access to one, an actual Google Home device!

Step 9: Testing on the Google Home Web Simulator

To test your Google action, run it in the Google Home Web Simulator [41]. Go ahead and open that tool. You will see the screen below. Click “Start.”

[Image: The Google Home Web Simulator]

On the next screen, you can type text in the “Dialog” area on the left, hit return and hear a result. On the right side of the screen, under “Log,” you will see the resulting JSON generated behind the scenes. This JSON is generated by the Google Home device upon hearing the user’s command. For this simulator, it comes from the text that you typed in the “Dialog” area. If you scroll down in the “Log” area, under the “Response” subheading, you will see the JSON returned from API.AI after it has processed your JSON request.
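The field names below follow API.AI’s v1 response format as I understand it, so treat this as an illustration of the shape you will see in the “Log” pane rather than a specification.

```python
import json

# An approximation of the "Response" JSON shown in the Log pane:
# which action was resolved, which parameters were captured, and
# the speech API.AI sends back to the device.
raw = json.dumps({
    "result": {
        "action": "make_madlib",
        "parameters": {"name1": "Laura", "noun1": "birthday"},
        "fulfillment": {"speech": "Friends, this is a surprise party for Laura."},
    }
})

response = json.loads(raw)
print(response["result"]["action"])                  # which action fired
print(response["result"]["fulfillment"]["speech"])   # what the device will say
```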

Put on some headphones or turn up your speakers! Type the following text in the “Dialog” field: “Talk to my test app.”

Once you do that and hit return, you should hear the start of your mad lib action, and the screen will look like what you see below:

[Image: Testing your action in the simulator]

Go through the rest of the mad lib to finish it up, and the result should look similar to what you see below, but with your own words. Our next step will be to test it on an actual Google Home device.

[Image: Final result of your action in the simulator]

Step 10: Testing on a Google Home Device

To test the mad lib on a Google Home device, all you need to do is log into the device with the same account you used to authorize your action for testing in the simulator. Once you authorize your action for previewing, it automatically becomes available on the Google Home device assigned to that account. Remember that your API.AI account must be under the same Google account for this to work!

In step 8, if you set your invocation name to “Happy birthday mad lib,” then try invoking the action you just built by saying to your device, “OK Google, open happy birthday mad lib.”

Once you do this, Google Home should say the welcome intent (from step 3) back to you:

Hello, and welcome to the Happy Birthday Mad Lib. Let’s begin. Give me the name of a female friend.

From here, you can give Google Home the name of a female friend, and follow the rest of the prompts until it reads the story back to you!

For this example, we won’t be deploying the action for public use, because we would all be deploying the same one. But if you do come up with a unique action and want to put it out there for the world to use, you can do so by following the simple steps in the documentation [49]. As with Alexa skills, the Google team will need to review your action before accepting it, so be patient!

What’s Next?

In this article, you learned how to create a basic action for Google Home. You learned about the API.AI platform for creating a Google action and how to set up all parameters in order for your action to work properly. Now you have a basic understanding of how to build custom functionality for the Google Home device, and with this knowledge you can explore more complex ideas for applications running on Google Home.

This example is one of the most basic ways to build an action for Google Home. What if your action is a bit more complicated? In that case, you would build your own custom webhook to handle tasks such as querying a database or looking up user data. Luckily, the API.AI interface provides an easy way to use a webhook to pull in what the user says, make decisions based on that input and give a response. See Google’s tutorial on GitHub [50] for how to create an action with a webhook.
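As a rough sketch of what such a fulfillment endpoint does (this is not Google’s sample code, and the field names follow API.AI’s v1 webhook format, which may have changed since): API.AI POSTs a JSON payload describing the resolved intent, and your code returns the text to speak.

```python
import json

def handle_webhook(body):
    """Hypothetical fulfillment handler: raw request body in, JSON reply out.

    Wire this into any HTTP framework you like; API.AI only cares
    about the JSON going back and forth.
    """
    payload = json.loads(body)
    params = payload["result"]["parameters"]
    # Here you could query a database, look up user data, and so on.
    speech = "Happy birthday, {}!".format(params.get("name1", "friend"))
    return json.dumps({"speech": speech, "displayText": speech})

print(handle_webhook('{"result": {"parameters": {"name1": "Laura"}}}'))
```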

Good luck, and most of all, have fun! If you have any questions, ask them in the comments section below.

(da, vf, yk, al, il)

Footnotes

  1. https://madeby.google.com/home/
  2. https://assistant.google.com/
  3. https://www.amazon.com/alexa-smart-home/b?node=13575751011
  4. https://medium.com/hello-thirteen23/just-the-facts-alexa-71a04b836d7f#.p7r4zc5tp
  5. https://developers.google.com/actions/
  6. https://developers.google.com/actions/
  7. https://en.wikipedia.org/wiki/Mad_Libs
  8. https://www.smashingmagazine.com/2017/05/intrusive-interstitials-guidelines-avoid-google-penalty/
  9. https://www.smashingmagazine.com/2016/12/progressive-web-amps/
  10. https://www.smashingmagazine.com/2014/08/responsive-web-design-google-analytics/
  11. https://www.smashingmagazine.com/2014/08/targeting-mobile-users-through-google-adwords/
  12. https://api.ai/blog/2016/09/19/api-ai-joining-google/
  13. https://api.ai/
  14. https://api.ai/
  15. https://www.smashingmagazine.com/wp-content/uploads/2017/03/action-google-home-image1-large-opt.jpg
  16. https://www.smashingmagazine.com/wp-content/uploads/2017/03/action-google-home-image1-large-opt.jpg
  17. https://docs.api.ai/docs/concept-agents
  18. https://www.smashingmagazine.com/wp-content/uploads/2017/03/action-google-home-image2-large-opt.jpg
  19. https://www.smashingmagazine.com/wp-content/uploads/2017/03/action-google-home-image2-large-opt.jpg
  20. https://docs.api.ai/docs/concept-intents
  21. https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/alexa-skills-kit-interaction-model-reference
  22. https://www.smashingmagazine.com/wp-content/uploads/2017/03/action-google-home-image3-large-opt.jpg
  23. https://www.smashingmagazine.com/wp-content/uploads/2017/03/action-google-home-image3-large-opt.jpg
  24. https://docs.api.ai/docs/concept-intents#fallback-intent
  25. https://www.smashingmagazine.com/wp-content/uploads/2017/03/action-google-home-image4-large-opt.jpg
  26. https://www.smashingmagazine.com/wp-content/uploads/2017/03/action-google-home-image4-large-opt.jpg
  27. https://www.smashingmagazine.com/wp-content/uploads/2017/03/action-google-home-image5-large-opt.jpg
  28. https://www.smashingmagazine.com/wp-content/uploads/2017/03/action-google-home-image5-large-opt.jpg
  29. https://docs.api.ai/docs/concept-intents#user-says
  30. https://www.smashingmagazine.com/wp-content/uploads/2017/03/action-google-home-image6-large-opt.jpg
  31. https://www.smashingmagazine.com/wp-content/uploads/2017/03/action-google-home-image6-large-opt.jpg
  32. https://docs.api.ai/docs/concept-actions
  33. https://docs.api.ai/docs/concept-actions#section-defining-parameters-manually
  34. https://www.smashingmagazine.com/wp-content/uploads/2017/03/action-google-home-image7-large-opt.jpg
  35. https://www.smashingmagazine.com/wp-content/uploads/2017/03/action-google-home-image7-large-opt.jpg
  36. https://www.smashingmagazine.com/wp-content/uploads/2017/03/action-google-home-image8-large-opt.jpg
  37. https://www.smashingmagazine.com/wp-content/uploads/2017/03/action-google-home-image8-large-opt.jpg
  38. https://docs.api.ai/docs/integrations
  39. https://www.smashingmagazine.com/wp-content/uploads/2017/05/integration1-step-8-large-opt.png
  40. https://www.smashingmagazine.com/wp-content/uploads/2017/05/integration1-step-8-large-opt.png
  41. https://developers.google.com/actions/tools/web-simulator
  42. https://developers.google.com/actions/tools/web-simulator
  43. https://www.smashingmagazine.com/wp-content/uploads/2017/03/action-google-home-image10-large-opt.jpg
  44. https://www.smashingmagazine.com/wp-content/uploads/2017/03/action-google-home-image10-large-opt.jpg
  45. https://www.smashingmagazine.com/wp-content/uploads/2017/03/action-google-home-image11-large-opt.jpg
  46. https://www.smashingmagazine.com/wp-content/uploads/2017/03/action-google-home-image11-large-opt.jpg
  47. https://www.smashingmagazine.com/wp-content/uploads/2017/03/action-google-home-image12-large-opt.jpg
  48. https://www.smashingmagazine.com/wp-content/uploads/2017/03/action-google-home-image12-large-opt.jpg
  49. https://developers.google.com/actions/distribute/deploy
  50. https://github.com/actions-on-google/apiai-silly-name-maker-webhook-nodejs


Tom has over 15 years of experience directing and architecting client/server technologies. He manages technical solutions and the technology team at thirteen23. When not in the office, he’s out and about drumming for the band San Saba County.

Comments

  1. Mike Hughes (May 31, 2017): Step 8 – I get a modal, but it does not look like your screenshot. When I flip the switch, it takes me off to the Actions on Google console.

     Reply: I see what you mean. Google and API.AI have changed the way you integrate it for Google Home. I’ll look at the changes to the steps and get this tutorial updated. Were you able to finish it out?
  2. James Hunter (June 29, 2017): The API does not work, and this post no longer matches it. Google has changed the scenarios; it used to be easy, and the changes have made it harder for users.

     Reply: In my experience, digging into this, the API and documentation have completely gone off the edge. It’s incredibly difficult to follow, very ambiguous, and more often than not contradictory. What in the world is going on over there?! Things either aren’t explained, or are defined and explained in two different ways on different pages.

     Reply: I’ve attempted to use both ActionsSdkApp and API.AI, and both are insufferable.
  3. What’s the issue with the API? It’s not working anymore…
  4. Tom, have you really been architecting client/server technologies for over 15 years? I’m not sure…

     Reply: Yeah, I currently lead a development team in Austin, Texas. Were you asking to be mean, or asking out of curiosity?
  5. James Hunter (June 30, 2017): Yeah… he’s good at drumming, not at API solutions…

     Reply: James, is there something I can help you out with? I fixed the screens in this article to correspond with changes on API.AI. The difficult thing is that, because it is such a new technology, the interface for API.AI is constantly changing. Also, thanks for the compliments on my drumming. I think.
  6. LOL, he’s dumping the IT field like the drum he used to play. ;)

     Reply: I’m not sure I understand. I am currently working at a shop in Austin, Texas, doing software development and design. Were you having issues with building the Google action that I can help with?
  7. Loving the article. Rather than testing the action in a Google Home simulator, is it possible to test it straight up on a mobile phone? Also, is it possible to change the wake word on Google Home or Google Assistant using Snowboy?

     Reply: Hey Tanway! Thanks for the note. I don’t know of a way to test it on a mobile phone. As for changing the wake words, Google isn’t allowing it yet, but rumor has it they may be open to it in the future.

     Reply: From what I can tell, the action should work on your phone the same way it does on the real Google Home.
  8. Thanks, Tom, this was quite easy to follow and worked nicely in the test simulator. Now I’m off to buy the physical device!

     Reply: Awesome. Thanks, sammo!

     Reply: It’s easy to build, and the steps work in the simulator, but there is no way to move your command onto an actual Google Home device. Don’t buy it, or you’ll lose your money.
  9. Tom, thank you for the detailed steps. I was able to get it working. The only change was in the last step, waking up the actual device: instead of saying “OK Google, open…,” I had to say, “OK Google, talk to my test app.”

     It worked just fine. The users above might have faced issues where you actually have to enable testing: when you click on “Integrations” and then “Actions on Google,” you have to click “Test,” then go to the simulator and make sure it’s active. That way, you can test on the simulator as well as the device.

     Overall, I am very happy with the explanation; it’s a great way to get started and get an idea of how you can connect this device to your own custom voice commands. Thanks again.
  10. Great article, Tom! I guess Google has changed some of the screens and options, but that’s expected in a new product. Your article has given me enough pointers to get started. Do you have a similar one on using Google actions to read from Google Drive?
