The Google Interactive Canvas control could be the silver bullet that unlocks the commercial potential of smart displays, beginning a revolution in how we use the internet through natural language.
From a technical perspective, the Interactive Canvas Control (ICC) is essentially a voice-enabled IFrame, with a few restrictions: developers have no access to the camera or microphone, for now!
Google Assistant Interactive Canvas Demo Video
It's early days for the ICC, and as such I could not find any examples out there, so I decided to write my own. You can download my basic example of how to create a Google Assistant Interactive Canvas control using C# .NET from Github.
How Does the Google Interactive Canvas Control Work?
The best way to describe how the new voice-enabled Google Assistant control works is through a diagram:
So, what is going on in the diagram above?
- User speaks a command to the Interactive Canvas control via a smart display or phone.
- User intent is determined by Dialogflow and passed on to your C# .NET Web API for an answer.
- Web API produces an HTML response based on your custom logic.
- Google passes the data within your HTML response into your IFrame HTML page.
- User sees the response via your HTML page.
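The webhook side of the flow above (steps 3 and 4) ultimately boils down to returning a JSON payload that tells the Assistant which IFrame page to load and what state to pass into it. Here is a minimal sketch, shown in JavaScript for brevity (my Github example does the equivalent in C#). The field names such as `htmlResponse` and `updatedState` follow the Interactive Canvas developer preview and may differ in later API versions; the URL and state values are hypothetical:

```javascript
// Hypothetical sketch of the JSON a Canvas webhook returns (steps 3-4).
// Field names are an assumption based on the Interactive Canvas developer
// preview - check the current Actions on Google docs for your version.
function buildCanvasResponse(speech, url, state) {
  return {
    payload: {
      google: {
        richResponse: {
          items: [
            // What the Assistant speaks back to the user.
            { simpleResponse: { textToSpeech: speech } },
            // The IFrame page to load, plus the data passed into it.
            { htmlResponse: { url: url, updatedState: state } }
          ]
        }
      }
    }
  };
}

// Example: answering a hypothetical "show me shoes" intent.
const response = buildCanvasResponse(
  "Here are some shoes.",
  "https://example.com/canvas/index.html", // hypothetical hosting URL
  { category: "shoes" }
);
```

The key design point is that your Web API never renders the display itself; it only describes what to say and what data to hand over, and the smart display does the rest.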
That's about it! My code example of the Google Assistant Interactive Canvas on Github is deliberately simple, and I have fully documented all aspects so you can gain a greater understanding of what is going on.
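On the IFrame side (step 5), your page registers callbacks with the Interactive Canvas JavaScript library so it is notified whenever your webhook sends new state. A minimal sketch, assuming a page element with id `output`; the exact callback shape is from the Interactive Canvas docs at the time of writing and may change, and `applyState` is a hypothetical helper of my own:

```javascript
// Hypothetical helper: turn the state object sent by the webhook
// into the text the user should see on the smart display.
function applyState(state) {
  return state && state.message ? state.message : "Waiting for input...";
}

// Register with the Interactive Canvas API when running inside the
// Assistant runtime; the guard lets the same file load in a plain browser.
if (typeof interactiveCanvas !== "undefined") {
  interactiveCanvas.ready({
    onUpdate(data) {
      // Preview builds passed the state directly; later builds pass an array.
      const state = Array.isArray(data) ? data[0] : data;
      document.getElementById("output").textContent = applyState(state);
    }
  });
}
```

Keeping the display logic in a plain function like `applyState` also makes the page testable outside a smart display.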
For a more 'commercialised' example of this technology, and how I think businesses can use it to start to monetise this medium, have a read of how I think the Google Assistant can be used for ecommerce on smart displays such as the new Google Nest Hub Max.