WearML Embedded

There are two different methods for adapting your existing application to work with WearHF using WearML Embedded.

The first method involves adding WearML directives to the layout to tweak the application’s current functionality to work on the HMT.

The second method involves using intents to launch built-in applets, which provide a way for users to perform more complex tasks hands-free.

Using WearML in this way can turn a clumsy-to-use application into one that works efficiently and intuitively in hands-free mode.

 

Directives

WearML Embedded provides a series of metadata directives that can be used to optimize an application, at the source code level, for hands-free use on the HMT class of devices. It consists of a list of directives, associated with an existing user interface, that guide the speech recognizer to interpret the UI more efficiently.

When building a standard Android application, a developer typically builds up a user interface screen by assembling UI components onto a canvas. This can be done using a drag-and-drop visual editor such as Android Studio, or programmatically in code.

The result is a hierarchical list of ViewGroups (containers) and Views (UI elements inside containers). The HMT operating system, WearHF, works by traversing these hierarchical lists at runtime and extracting features relevant for hands-free control.

For example, if WearHF detects a Button embedded in the UI tree, it will copy the button text and send it to the speech recognizer to listen for. But often there are clickable buttons on the screen that have no text associated with them, typically image buttons. In cases such as these, WearHF will automatically offer a speech command in the form “SELECT ITEM 1” and overlay a numeric index next to the control; the user must then say “SELECT ITEM 1” to activate the button.

For simple UI trees the system works well: button texts, checkboxes, and other clickable items are analyzed and passed to the speech recognizer. For the most part the user can speak the names of the onscreen controls, but in a few cases may have to resort to the “SELECT ITEM 1” notation.

However, the real clumsiness appears with complex UI trees. Some UI trees may contain 30 or more clickable buttons, many of them graphic icons with no text associated with them. For such screens the user is presented with a mass of numeric indexes to speak.

WearML Embedded allows the developer to add hints and optimizations to their current application at the source code level. These hints are ignored by other Android systems and only picked up when the application runs on the HMT. In this way developers can write a single application designed for touch screens that runs just as well in the hands-free environment of the HMT.
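
As a rough sketch of what this looks like in practice, a directive can be attached to a view through its content description, which standard Android ignores but WearHF reads at runtime. The directive strings used below (hf_override, hf_no_number) and the layout resources are illustrative assumptions; the WearML Embedded API lists the actual directives and their exact syntax.

    import android.os.Bundle
    import android.widget.ImageButton
    import androidx.appcompat.app.AppCompatActivity

    class MainActivity : AppCompatActivity() {

        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)
            setContentView(R.layout.activity_main)

            // An image button has no text, so WearHF would normally fall back
            // to a "SELECT ITEM 1" command. A WearML directive in the content
            // description gives it a spoken command instead. The directive
            // names here are assumptions; see the WearML Embedded API for the
            // exact syntax.
            val scanButton = findViewById<ImageButton>(R.id.scan_button)
            scanButton.contentDescription = "hf_override:SCAN CODE|hf_no_number"
        }
    }

The same description can also be set on the view in the layout XML, so the hint travels with the view wherever it is reused.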

Tutorial
The tutorial shows how to add speech directives to a view.

API
The WearML Embedded API shows all the available directives.

Intents
The WearML Intents page lists all the available intents that WearHF provides.

Applets

The HMT comes with a number of built-in applets which allow the user to perform common tasks. The applets have been designed specifically for the HMT and provide a completely hands-free experience.

It is possible to interact with these applets from inside an application to quickly add functionality and provide users with a solution they will already be familiar with.

While applets provide greater functionality than WearML directives, using them does mean you will have to handle the case where your application runs on devices other than the HMT, where the applets are not available.
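
One straightforward way to manage this, sketched below, is to check at runtime whether anything on the device can handle an applet's intent before launching it, and fall back to standard touch-screen behaviour otherwise. This relies only on ordinary Android intent resolution, not on any HMT-specific API.

    import android.content.Context
    import android.content.Intent

    // Returns true if some installed component can handle the given intent.
    // On devices other than the HMT the applets are not installed, so this
    // check lets the application fall back to its touch-screen behaviour.
    fun isAppletAvailable(context: Context, intent: Intent): Boolean =
        intent.resolveActivity(context.packageManager) != null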

 

The following applets are available to all developers:

Camera Applet

This example shows how to launch the camera from an application and how to display the picture the user takes.
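
As a rough sketch of the general shape, the code below assumes the camera applet answers the standard Android image-capture intent (fired here through the TakePicturePreview contract) and that the layout contains an ImageView with the id photo_view; both are assumptions, and the Camera Applet example is the reference.

    import android.os.Bundle
    import android.widget.ImageView
    import androidx.activity.result.contract.ActivityResultContracts
    import androidx.appcompat.app.AppCompatActivity

    class CameraExampleActivity : AppCompatActivity() {

        // TakePicturePreview fires the standard image-capture intent, which
        // the camera applet is assumed to handle, and returns a thumbnail.
        private val takePicture =
            registerForActivityResult(ActivityResultContracts.TakePicturePreview()) { bitmap ->
                bitmap?.let { findViewById<ImageView>(R.id.photo_view).setImageBitmap(it) }
            }

        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)
            setContentView(R.layout.activity_camera_example)
            takePicture.launch(null)
        }
    }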

Document Applet

This example shows how to open documents and images in the document viewer from an application.
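
A minimal sketch of one way this could look, assuming the document viewer responds to standard ACTION_VIEW intents; the URI and MIME type are placeholders, and the Document Applet example is the reference.

    import android.app.Activity
    import android.content.Intent
    import android.net.Uri

    // Asks the system to view a document; on the HMT the document viewer
    // applet is assumed to handle the request.
    fun openDocument(activity: Activity, uri: Uri, mimeType: String = "application/pdf") {
        val intent = Intent(Intent.ACTION_VIEW).apply {
            setDataAndType(uri, mimeType)
            addFlags(Intent.FLAG_GRANT_READ_URI_PERMISSION)
        }
        activity.startActivity(intent)
    }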

Movie Applet

This example shows how to open videos in the movie viewer from an application.
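
The pattern is much the same as the document sketch above, again assuming the movie viewer responds to a standard ACTION_VIEW intent, this time with a video MIME type.

    import android.app.Activity
    import android.content.Intent
    import android.net.Uri

    // Same ACTION_VIEW pattern as the document sketch, with a video MIME type.
    fun playVideo(activity: Activity, uri: Uri) {
        val intent = Intent(Intent.ACTION_VIEW).apply {
            setDataAndType(uri, "video/mp4")
            addFlags(Intent.FLAG_GRANT_READ_URI_PERMISSION)
        }
        activity.startActivity(intent)
    }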

Barcode Applet

This example shows how to launch the barcode scanner from an application and how to read the response once the user has scanned a code.
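
A rough sketch of the launch-and-read-response flow follows. The action and extra names are placeholders rather than confirmed WearHF identifiers; the Barcode Applet example documents the real values.

    import android.app.Activity
    import android.content.Intent
    import android.os.Bundle
    import androidx.appcompat.app.AppCompatActivity

    class BarcodeExampleActivity : AppCompatActivity() {

        companion object {
            // Placeholder action and extra names; confirm them against the
            // Barcode Applet example before relying on them.
            private const val ACTION_SCAN = "com.realwear.barcodereader.intent.action.SCAN_BARCODE"
            private const val EXTRA_RESULT = "com.realwear.barcodereader.intent.extra.RESULT"
            private const val REQUEST_SCAN = 1
        }

        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)
            startActivityForResult(Intent(ACTION_SCAN), REQUEST_SCAN)
        }

        override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
            super.onActivityResult(requestCode, resultCode, data)
            if (requestCode == REQUEST_SCAN && resultCode == Activity.RESULT_OK) {
                val code = data?.getStringExtra(EXTRA_RESULT)
                // Use the scanned value, for example to look up a record.
            }
        }
    }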

Keyboard and Dictation Applet

This example shows how to accept input from the user using either a keyboard or dictation.
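
For illustration, the sketch below collects spoken input with Android's generic RecognizerIntent; the HMT's own keyboard and dictation applet may expose a dedicated intent, which the Keyboard and Dictation Applet example covers.

    import android.app.Activity
    import android.content.Intent
    import android.os.Bundle
    import android.speech.RecognizerIntent
    import androidx.appcompat.app.AppCompatActivity

    class DictationExampleActivity : AppCompatActivity() {

        private val requestDictation = 2

        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)
            val intent = Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH).putExtra(
                RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM
            )
            startActivityForResult(intent, requestDictation)
        }

        override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
            super.onActivityResult(requestCode, resultCode, data)
            if (requestCode == requestDictation && resultCode == Activity.RESULT_OK) {
                val spoken = data?.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS)?.firstOrNull()
                // Use the dictated text, for example to fill a form field.
            }
        }
    }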

Text to Speech Applet

This example shows how to use the HMT’s built-in text-to-speech engine to read out text to the user.
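
The sketch below goes through the standard Android TextToSpeech API, which uses whichever engine the device provides; whether the HMT also exposes a dedicated WearHF intent for this is covered by the Text to Speech Applet example.

    import android.content.Context
    import android.speech.tts.TextToSpeech
    import java.util.Locale

    // Wraps the standard Android TextToSpeech API; on the HMT this is served
    // by the device's built-in engine.
    class Speaker(context: Context) : TextToSpeech.OnInitListener {

        private var ready = false
        private val tts = TextToSpeech(context, this)

        override fun onInit(status: Int) {
            ready = status == TextToSpeech.SUCCESS
            if (ready) tts.setLanguage(Locale.US)
        }

        fun speak(text: String) {
            if (ready) tts.speak(text, TextToSpeech.QUEUE_FLUSH, null, "wearml-example")
        }

        fun shutdown() = tts.shutdown()
    }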