© Szymon Rozga 2018
Szymon Rozga, Practical Bot Development, https://doi.org/10.1007/978-1-4842-3540-9_6

6. Diving into the Bot Builder SDK

Szymon Rozga, Port Washington, New York, USA

In the previous chapter, we built a simple bot that can utilize an existing LUIS application and the Bot Builder SDK to enable a conversational flow for a calendar bot. As it stands, the bot is useless. It responds with text describing what it understood from user input, but it does not accomplish anything of substance. We’re building up to connecting our bot to the Google Calendar API, but in the meantime, we need to figure out what tools the Bot Builder SDK puts at our disposal for creating meaningful conversational experiences.

In this chapter, we will elaborate on some of the techniques we used in our Chapter 5 code and more thoroughly explore some of the Bot Builder SDK features. We will examine how the SDK stores state; how it builds messages with rich content, actions, and cards; and how it lets us customize channel behavior, dialog behavior, and user action handling. Lastly, we will look at how best to group bot functionality into reusable components.

Conversation State

As mentioned throughout the previous chapters, a good conversational engine will store each user and conversation’s state so that whenever a user communicates with the bot, the right state of the conversation flow is retrieved, and there is a coherent experience for the user. In the Bot Builder SDK, this state is, by default, stored in memory via the aptly named MemoryBotStorage. Historically, state was stored in a cloud endpoint; however, this has been deprecated. Every so often, we may run into a reference to the state service in some older documentation, so be aware that it no longer exists.

The state for every conversation is composed of three buckets accessible to bot developers. We introduced all of them in the previous chapter, but to reiterate they are as follows:
  • userData: Data for a user across all conversations in a channel

  • privateConversationData: Private user data scoped to a conversation

  • conversationData: Data for a conversation, shared for any users who are part of the conversation

In addition, as a dialog is executing, we have access to its state object, referred to as dialogData. Any time a message is received from a user, the Bot Builder SDK will retrieve the user’s state from the state storage, populate the three data objects plus dialogData on the session object, and execute the logic for the current step in the conversation. Once all responses are sent out, the framework will save the state back into the state storage.

let entry = new et.EntityTranslator(session.dialogData.addEntry);
if (!entry.hasDateTime) {
    entry.setEntity(results.response);
}
session.dialogData.addEntry = entry;

In some of the code from the previous chapter, there were instances where we had to re-create a custom object from dialogData and then store the object back into dialogData. The reason is that saving an object into dialogData (or any of the other state containers) turns it into a plain JavaScript object, as if it had been round-tripped through JSON.stringify; methods and getters are stripped. Trying to invoke any method on session.dialogData.addEntry in the previous code, before re-creating the typed object, would therefore cause an error.
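This behavior is easy to reproduce outside the SDK. The standalone illustration below (not SDK code; the Entry class is our own) shows a class instance losing its getter after a JSON round-trip, which is conceptually what the state containers do:

```javascript
// Illustration: instances stored as state keep only plain data, because
// state storage serializes them to JSON. Getters and methods live on the
// prototype and do not survive the round-trip.
class Entry {
    constructor(data) { Object.assign(this, data); }
    get hasDateTime() { return this.dateTime !== undefined; }
}

const entry = new Entry({ subject: 'Standup' });
console.log(entry.hasDateTime);     // false -- the getter works here

// Simulate what saving to and loading from state storage does
const restored = JSON.parse(JSON.stringify(entry));
console.log(restored.hasDateTime);  // undefined -- the getter is gone
console.log(restored.subject);      // 'Standup' -- plain data survives
```

This is why the code above wraps session.dialogData.addEntry in a fresh EntityTranslator before calling any of its methods.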

The storage mechanism is defined by an interface called IBotStorage.

export interface IBotStorage {
    getData(context: IBotStorageContext, callback: (err: Error, data: IBotStorageData) => void): void;
    saveData(context: IBotStorageContext, data: IBotStorageData, callback?: (err: Error) => void): void;
}

The ChatConnector class that we instantiate when building a new instance of a bot installs the default MemoryBotStorage instance, which is a great option for development. The SDK allows us to provide our own implementation to replace the default functionality, something you will most likely want to do in a production deployment, as this ensures that state is persisted instead of being erased any time your instance restarts. For instance, Microsoft provides two additional implementations of the interface: a NoSQL implementation for Azure Cosmos DB1 and an implementation for Azure Table Storage.2 Both are Azure services available through the Azure Portal. You can find the two storage implementations in the botbuilder-azure Node package, documented at https://github.com/Microsoft/BotBuilder-Azure . You can also write your own IBotStorage implementation and register it with the SDK; doing so is a matter of following the simple IBotStorage interface.

const bot = new builder.UniversalBot(connector, (session) => {
    // ... Bot code ...
})
.set('storage', storageImplementation);
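To make the custom-implementation option concrete, here is a minimal sketch backed by an in-process Map. The class name and the key scheme are our own; only the getData/saveData signatures come from the interface, and a production version would persist to a database and split the three buckets by the scope flags on the context:

```javascript
// A sketch of a custom IBotStorage implementation backed by a Map.
// (MapBotStorage is a made-up name; a real implementation would persist to a
// database and handle the userData/privateConversationData/conversationData
// buckets according to the flags on the IBotStorageContext.)
class MapBotStorage {
    constructor() {
        this.store = new Map();
    }

    key(context) {
        // Naive key: one entry per user per conversation.
        return [context.userId, context.conversationId].join('|');
    }

    getData(context, callback) {
        const data = this.store.get(this.key(context)) || {};
        callback(null, data);
    }

    saveData(context, data, callback) {
        this.store.set(this.key(context), data);
        if (callback) callback(null);
    }
}

// It would then be registered just like the Azure implementations:
// bot.set('storage', new MapBotStorage());
```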

Messages

In the previous chapter, our bot communicated to the user by sending text messages using either the session.send or session.endDialog method. This is fine, but it limits our bot a fair amount. A message between a bot and a user is composed of a variety of pieces of data that we ran into in the “Bot Builder SDK Basics” section in the previous chapter.

The Bot Builder IMessage interface defines what a message is really composed of.

interface IEvent {
    type: string;
    address: IAddress;
    agent?: string;
    source?: string;
    sourceEvent?: any;
    user?: IIdentity;
}
interface IMessage extends IEvent {
    timestamp?: string;              // UTC Time when message was sent (set by service)
    localTimestamp?: string;         // Local time when message was sent (set by client or bot, Ex: 2016-09-23T13:07:49.4714686-07:00)
    summary?: string;                // Text to be displayed by as fall-back and as short description of the message content in e.g. list of recent conversations
    text?: string;                   // Message text
    speak?: string;                  // Spoken message as Speech Synthesis Markup Language (SSML)
    textLocale?: string;             // Identified language of the message text.
    attachments?: IAttachment[];     // This is placeholder for structured objects attached to this message
    suggestedActions: ISuggestedActions; // Quick reply actions that can be suggested as part of the message
    entities?: any[];                // This property is intended to keep structured data objects intended for Client application e.g.: Contacts, Reservation, Booking, Tickets. Structure of these object objects should be known to Client application.
    textFormat?: string;             // Format of text fields [plain|markdown|xml] default:markdown
    attachmentLayout?: string;       // AttachmentLayout - hint for how to deal with multiple attachments Values: [list|carousel] default:list
    inputHint?: string;              // Hint for clients to indicate if the bot is waiting for input or not.
    value?: any;                     // Open-ended value.
    name?: string;                   // Name of the operation to invoke or the name of the event.
    relatesTo?: IAddress;            // Reference to another conversation or message.
    code?: string;                   // Code indicating why the conversation has ended.
}

For this chapter, we will be most interested in the text, attachments, suggestedActions, and attachmentLayout as they form the basis of a good conversational UX.

To create a message object in code, we create a builder.Message object and assign its properties, as in the following example. The message can then be passed to the session.send method.

const reply = new builder.Message(session)
    .text('Here are some results for you')
    .attachmentLayout(builder.AttachmentLayout.carousel)
    .attachments(cards);
session.send(reply);

Likewise, when a message comes into your bot, the session object contains a message object. Same interface. Same type of data. But, this time, it is coming in from the channel rather than from the bot.

const bot = new builder.UniversalBot(connector, [
    (session) => {
        const input = session.message.text;
    }]);

Note that IMessage inherits from IEvent, which means it has a type field. This field is set to message for an IMessage, but there are other events that may come from either the framework or a custom app.

Some of the other event types that the bot framework supports, based on channel support, are the following:
  • conversationUpdate: Raised when a user has been added or removed from a conversation or some metadata about the conversation has changed; used for group chat management.

  • contactRelationUpdate: Raised when the bot was either added or removed from a user’s contact list.

  • typing: Raised when a user is typing a message; not supported by all channels.

  • ping: Raised to figure out if the bot endpoint is available.

  • deleteUserData: Raised when the user requests to have their user data deleted.

  • endOfConversation: Raised when a conversation has ended.

  • invoke: Raised when a request is sent for the bot to perform some custom logic. For example, some channels may need to invoke a function on the bot and expect a response. The Bot Framework would send this request as an invoke request, expecting a synchronous HTTP reply. This is not a common scenario.

We can register a handler for each event type by using the on method on the UniversalBot. The resulting conversation with a bot that handles events can provide more immersive conversational experiences for your users (Figure 6-1).

const bot = new builder.UniversalBot(connector, [
    (session) => {
    }
]);
bot.on('conversationUpdate', (data) => {
    if (data.membersAdded && data.membersAdded.length > 0) {
        if (data.address.bot.id === data.membersAdded[0].id) return;
        const name = data.membersAdded[0].name;
        const msg = new builder.Message().address(data.address);
        msg.text('Welcome to the conversation ' + name + '!');
        msg.textLocale('en-US');
        bot.send(msg);
    }
});
bot.on('typing', (data) => {
    const msg = new builder.Message().address(data.address);
    msg.text('I see you typing... You\'ve got me hooked! Reel me in!');
    msg.textLocale('en-US');
    bot.send(msg);
});
Figure 6-1. A bot responding to typing and conversationUpdate events

Addresses and Proactive Messages

In the message interface, the address property uniquely represents a user in a conversation. It looks like this:

interface IAddress {
    channelId: string;              // Unique identifier for channel
    user: IIdentity;                // User that sent or should receive the message
    bot?: IIdentity;                // Bot that either received or is sending the message
    conversation?: IIdentity;       // Represents the current conversation and tracks where replies should be routed to.
}

The importance of an address is that we can use it to send a message proactively, outside the scope of a dialog. For example, we could create a process that sends a message to a random address every five seconds. This message has zero effect on the user’s dialog stack.

const addresses = {};
const bot = new builder.UniversalBot(connector, [
    (session) => {
        const userid = session.message.address.user.id;
        addresses[userid] = session.message.address;
        session.send('Give me a couple of seconds');
    }
]);
function getRandomInt(min, max) {
    return Math.floor(Math.random() * (max - min + 1)) + min;
}
setInterval(() => {
    const keys = Object.keys(addresses);
    if (keys.length == 0) return;
    const r = getRandomInt(0, keys.length-1);
    const addr = addresses[keys[r]];
    const msg = new builder.Message().address(addr).text('hello from outside dialog stack!');
    bot.send(msg);
}, 5000);

If we do want to modify the dialog stack, perhaps by calling into a complex dialog operation, we can utilize the beginDialog method on the UniversalBot object.

setInterval(() => {
    var keys = Object.keys(addresses);
    if (keys.length == 0) return;
    var r = getRandomInt(0, keys.length-1);
    var addr = addresses[keys[r]];
    bot.beginDialog(addr, "dialogname", { arg: true});
}, 5000);

The significance of these concepts is that external events in disparate systems can begin affecting the state of a user’s conversation within the bot. We will see this applied in the context of OAuth web hooks in the next chapter.

Rich Content

Rich content can be sent to the user using the attachments functionality in the Bot Builder IMessage interface. In the Bot Builder SDK, an attachment is simply a name, a content URL, and a MIME type.3 A message in the Bot Builder SDK accepts zero or more attachments. It is up to the bot connectors to translate that message into something that the channel will understand. Not every type of message and attachment is supported by every channel, so be careful when creating attachments of various MIME types.

For example, to share an image, we can use the following code:

const bot = new builder.UniversalBot(connector, [
    (session) => {
        session.send({
            text: "Here, have an apple.",
            attachments: [
                {
                    contentType: 'image/jpeg',
                    contentUrl: 'https://upload.wikimedia.org/wikipedia/commons/thumb/1/15/Red_Apple.jpg/1200px-Red_Apple.jpg',
                    name: 'Apple'
                }
            ]
        })
    }
]);
Figure 6-2 shows the resulting user interface in the emulator, and Figure 6-3 shows it in Facebook Messenger. We could imagine similar rendering in other platforms.
Figure 6-2. Emulator image attachment

Figure 6-3. Facebook Messenger image attachment

The following code sends an audio file attachment, which can be played right from within the messaging channel.

const bot = new builder.UniversalBot(connector, [
    (session) => {
        session.send({
            text: "Here, have some sound!",
            attachments: [
                {
                    contentType: 'audio/ogg',
                    contentUrl: 'https://upload.wikimedia.org/wikipedia/en/f/f4/Free_as_a_Bird_%28Beatles_song_-_sample%29.ogg',
                    name: 'Free as a bird'
                }
            ]
        })
    }
]);
Figure 6-4 shows the result in the emulator, and Figure 6-5 shows it in Facebook Messenger.
Figure 6-4. An OGG sound file attachment in the Emulator

Figure 6-5. An OGG sound file attachment in Facebook Messenger

Whoops! It seems like OGG4 files are not supported. This is a good example of Bot Framework behavior when our bot sends an invalid message to Facebook or any other channel. We will investigate this further in the “Channel Errors” section later in this chapter. My console error log has this message:

Error: Request to 'https://facebook.botframework.com/v3/conversations/1912213132125901-1946375382318514/activities/mid.%24cAAbqN9VFI95k_ueUOVezaJiLWZXe' failed: [400] Bad Request
If we look at the error list on the Bot Framework Messenger channel page, we should find another clue, like the one in Figure 6-6.
Figure 6-6. Bot Framework error for an OGG sound file on Messenger

OK, so they make it somewhat easy to diagnose the problem. We know we must provide a different file format. Let’s try an MP3.

const bot = new builder.UniversalBot(connector, [
    (session) => {
        session.send({
            text: "Ok have a vulture instead!",
            attachments: [
                {
                    contentType: 'audio/mp3',
                    contentUrl: 'http://static1.grsites.com/archive/sounds/birds/birds004.mp3',
                    name: 'Vulture'
                }
            ]
        })
    }
]);
You can see the resulting Emulator and Facebook Messenger renderings in Figure 6-7 and Figure 6-8.
Figure 6-7. Emulator MP3 file attachment

Figure 6-8. Facebook Messenger MP3 file attachment

The Emulator still produces a link, but Messenger has a built-in audio player you can utilize! The experience with video is similar: Messenger will provide a built-in video player right within the conversation.
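For reference, a video attachment follows the same shape as the image and audio examples above. The URL below is a placeholder, not a real asset:

```javascript
// Sketch of a video attachment message; the contentUrl is a placeholder.
const videoMessage = {
    text: 'Here, have a video!',
    attachments: [
        {
            contentType: 'video/mp4',
            contentUrl: 'https://example.com/clips/sample.mp4',
            name: 'Sample clip'
        }
    ]
};
// Inside a dialog step, this would be sent with session.send(videoMessage).
```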

Exercise 6-1

Experimenting with Attachments

The goal of this exercise is to write a simple bot that can send different types of attachments to users and observe the behavior of the emulator and another channel, like Facebook Messenger.
  1. Create a basic bot using the echo bot as a starting point.

  2. From the bot function, send different types of attachments in your message such as JSON, XML, or file. Experiment with some types of rich media such as video. How does the emulator render these types of attachments? How about Messenger?

  3. Try sending an image to the bot from the emulator. What data does the incoming message contain? Is this any different from when you send an image via Messenger?

Attachments are an easy way to share all kinds of rich content with your users. Use them wisely to create colorful and engaging conversational experiences.

Buttons

Bots can also send buttons to users. A button is a distinct call to action for a user to perform a task. Each button has a label associated with it, as well as a value. A button also has an action type, which will determine what the button does with the value when the button is clicked. The three most common types of actions are open URL, post back, and IM back. Open URL typically opens a web view within the messaging app or a new browser window in a desktop setting. Both post back and IM back send the value of the button as a message to the bot. The difference between the two is that clicking the post back should not display a message from the user in the chat history, whereas the IM back should. Not all channels implement both types of buttons.

const bot = new builder.UniversalBot(connector, [
    (session) => {
        const cardActions = [
            builder.CardAction.openUrl(session, 'http://google.com', "Open Google"),
            builder.CardAction.imBack(session, "Hello!", "Im Back"),
            builder.CardAction.postBack(session, "Hello!", "Post Back")
        ];
        const card = new builder.HeroCard(session).buttons(cardActions);
        const msg = new builder.Message(session).text("sample actions").addAttachment(card);
        session.send(msg);
    }
]);

Note that in the previous code we used a CardAction object. A CardAction is an encapsulation of the data we discussed earlier: an action type, a title, and a value. The channel connectors will usually render a CardAction into a button on the individual platforms.

Figure 6-9 shows what running this code looks like in the emulator, and Figure 6-10 shows it in Facebook Messenger. If we click the Open Google button in the emulator, it opens the web page in your default browser. We first click Im Back, and then once we receive the response card, we click Post Back. Note that Im Back sent a message and the message appears in the chat history, whereas the Post Back button sent a message that the bot responds to, but the message does not appear in the chat history.
Figure 6-9. A sampling of Bot Builder button behaviors in the emulator

Messenger works a bit differently.5 Let’s look at the mobile app behavior. If we click Open Google, a web view shows up that covers about 90 percent of the screen. If we click Im Back and Post Back, the app exhibits the same behavior for both: Messenger supports only post back, and the message value is never shown to the user. The chat history contains only the title of the button that was clicked.
Figure 6-10. Sampling of button behaviors in Facebook Messenger

The Bot Builder SDK supports the following action types:
  • openUrl: Opens a URL in a browser

  • imBack: Sends a message to the bot from the user, which is visible to all conversation participants

  • postBack: Sends a message to the bot from the user, which may not be visible to all conversation participants

  • call: Places a call

  • playAudio: Plays an audio file within the bot interface

  • playVideo: Plays a video file within the bot interface

  • showImage: Shows an image within the bot interface

  • downloadFile: Downloads a file to the device

  • signin: Kicks off an OAuth flow

Of course, not all channels support all types. In addition, channels may natively support functionality that the Bot Builder SDK does not. For example, Figure 6-11 shows the documentation for the actions Messenger supports through its button templates as of the time of this writing. We will look at utilizing native channel functionality later in this chapter.
Figure 6-11. Messenger button template types

In the Bot Builder SDK, every card action can be created by using the static factory methods in the CardAction class. Here is the relevant code from the Bot Builder source:

    CardAction.call = function (session, number, title) {
        return new CardAction(session).type('call').value(number).title(title || "Click to call");
    };
    CardAction.openUrl = function (session, url, title) {
        return new CardAction(session).type('openUrl').value(url).title(title || "Click to open website in your browser");
    };
    CardAction.openApp = function (session, url, title) {
        return new CardAction(session).type('openApp').value(url).title(title || "Click to open website in a webview");
    };
    CardAction.imBack = function (session, msg, title) {
        return new CardAction(session).type('imBack').value(msg).title(title || "Click to send response to bot");
    };
    CardAction.postBack = function (session, msg, title) {
        return new CardAction(session).type('postBack').value(msg).title(title || "Click to send response to bot");
    };
    CardAction.playAudio = function (session, url, title) {
        return new CardAction(session).type('playAudio').value(url).title(title || "Click to play audio file");
    };
    CardAction.playVideo = function (session, url, title) {
        return new CardAction(session).type('playVideo').value(url).title(title || "Click to play video");
    };
    CardAction.showImage = function (session, url, title) {
        return new CardAction(session).type('showImage').value(url).title(title || "Click to view image");
    };
    CardAction.downloadFile = function (session, url, title) {
        return new CardAction(session).type('downloadFile').value(url).title(title || "Click to download file");
    };

Cards

Another type of Bot Builder attachment is the hero card. In our previous example with button actions, we conveniently ignored the fact that button actions need to be part of a hero card object, but what is that?

The term hero card originates from the racing world. The cards themselves are usually bigger than baseball cards and are designed to promote a race team, specifically the driver and sponsors. They include photos, information about the driver and sponsors, contact information, and so on. But really the concept is reminiscent of typical baseball or Pokémon cards.

In the context of UX design, a card is an organized way of displaying images, text, and actions. Google brought cards to the masses when it introduced the world to its Material Design6 on Android and the Web. Figure 6-12 shows two examples of card design from Google’s Material Design documentation. Notice the distinct usage of images, titles, subtitles, and calls to action.
Figure 6-12. Google’s Material Design card samples

In the context of bots, the term hero card refers to a grouping of an image with text, buttons for actions, and an optional default tap behavior. Different channels will call cards different things. Facebook loosely refers to them as templates. Other platforms just refer to the idea as attaching content to a message. At the end of the day, the UX concepts are the same.

In the Bot Builder SDK, we can create a card using the following code. We also show how this card renders in the emulator (Figure 6-13) and on Facebook Messenger (Figure 6-14).

const bot = new builder.UniversalBot(connector, [
    (session) => {
        const cardActions = [
            builder.CardAction.openUrl(session, 'http://google.com', "Open Google"),
            builder.CardAction.imBack(session, "Hello!", "Im Back"),
            builder.CardAction.postBack(session, "Hello!", "Post Back")
        ];
        const card = new builder.HeroCard(session)
            .buttons(cardActions)
            .text('this is some text')
            .title('card title')
            .subtitle('card subtitle')
            .images([new builder.CardImage(session).url("https://bot-framework.azureedge.net/bot-icons-v1/bot-framework-default-7.png").toImage()])
            .tap(builder.CardAction.openUrl(session, "http://dev.botframework.com"));
        const msg = new builder.Message(session).text("sample actions").addAttachment(card);
        session.send(msg);
    }
]);
Figure 6-13. A hero card as rendered by the emulator

Figure 6-14. Same hero card in Facebook Messenger

Cards are a great way to communicate the results of a bot action invoked by the user. If you would like to display some data with an image and follow-up actions, there is no better way to do so than using cards. The fact that you get only a few different text fields, with limited formatting abilities, means that the UX resulting from this approach can be a bit limited. That is by design. For more complex visualizations and scenarios, you can either utilize adaptive cards or render custom graphics. We will explore both topics in Chapter 11.

The next question is, can we display cards side by side in a carousel style? Of course, we can. A message in the Bot Builder SDK has a property called attachmentLayout. We set this to carousel, add more cards, and we’re done! The emulator (Figure 6-15) and Facebook Messenger (Figure 6-16) take care of laying the cards out in a friendly carousel format. The default attachmentLayout is a list. Using this layout, the cards would appear one below the other. It is not the most user-friendly approach.

const bot = new builder.UniversalBot(connector, [
    (session) => {
        const cardActions = [
            builder.CardAction.openUrl(session, 'http://google.com', "Open Google"),
            builder.CardAction.imBack(session, "Hello!", "Im Back"),
            builder.CardAction.postBack(session, "Hello!", "Post Back")
        ];
        const msg = new builder.Message(session).text("sample actions");
        for(let i=0;i<3;i++) {
            const card = new builder.HeroCard(session)
                .buttons(cardActions)
                .text('this is some text')
                .title('card title')
                .subtitle('card subtitle')
                .images([new builder.CardImage(session).url("https://bot-framework.azureedge.net/bot-icons-v1/bot-framework-default-7.png").toImage()])
                .tap(builder.CardAction.openUrl(session, "http://dev.botframework.com"));
            msg.addAttachment(card);
        }
        msg.attachmentLayout(builder.AttachmentLayout.carousel);
        session.send(msg);
    }
]);
Figure 6-15. A hero card carousel in the emulator

Figure 6-16. Same hero card carousel on Messenger

Cards can be a bit tricky because there are many ways of laying out buttons and images, and each platform has ever so slightly different rules. On some platforms (but not others), openUrl buttons must point to an HTTPS address. There may also be rules limiting the number of buttons per card, the number of cards in a carousel, and image aspect ratios. Microsoft’s Bot Framework will handle all this in the best way it can, but being aware of these limitations will help us debug our bots.
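One practical way to stay within such limits is to split a long list of cards across several carousel messages. The helper below is our own sketch; the default limit of 10 is an example figure (Messenger's carousel cap at the time of writing), so check each channel's documentation:

```javascript
// Sketch: chunk a list of card attachments so that each outgoing carousel
// message stays under a per-channel card limit (10 used here as an example).
function chunkCards(cards, maxPerMessage = 10) {
    const batches = [];
    for (let i = 0; i < cards.length; i += maxPerMessage) {
        batches.push(cards.slice(i, i + maxPerMessage));
    }
    return batches;
}

// Each batch would then become one carousel message:
// chunkCards(cards).forEach(batch => {
//     session.send(new builder.Message(session)
//         .attachmentLayout(builder.AttachmentLayout.carousel)
//         .attachments(batch));
// });
```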

Suggested Actions

We’ve discussed suggested actions in the context of conversational design; they are message-context-specific actions that can be performed immediately after a message is received. If another message comes in, the context is lost, and the suggested actions disappear. This is in contrast to card actions, which stay on the card in the chat history pretty much forever. The typical UX for suggested actions, also referred to as quick replies, is a horizontally laid-out list of buttons along the bottom of the screen.

The code for building suggested actions is similar to that for a hero card, except the only data we need is a collection of CardActions. The types of actions allowed in the suggested actions area depend on the channel. Figure 6-17 and Figure 6-18 show renderings in the emulator and Facebook Messenger, respectively.

msg.suggestedActions(new builder.SuggestedActions(session).actions([
    builder.CardAction.postBack(session, "Option 1", "Option 1"),
    builder.CardAction.postBack(session, "Option 2", "Option 2"),
    builder.CardAction.postBack(session, "Option 3", "Option 3")
]));
Figure 6-17. Suggested actions rendered in the emulator

Figure 6-18. Same suggested actions in Messenger

Suggested action buttons are a great way to keep the conversation going without asking the user to guess what they can type into the text message field.

Exercise 6-2

Cards and Suggested Actions

A dictionary and thesaurus are good inspiration for a bot navigation experience. A user can input a word. The resulting card may show an image of the word and its definition. A button below may allow us to open a reference page, such as on https://www.merriam-webster.com/ . The suggested actions could be a set of buttons with synonyms for the current word. Let's put this kind of interaction in place.
  1. Create an account at https://dictionaryapi.com and establish connectivity. This service provides both the Dictionary and Thesaurus APIs.

  2. Create a bot that looks up a word based on user input using the Dictionary API and responds with a hero card that includes the word and the definition text. Include a button that opens the word’s page on the dictionary website.

  3. Connect to the Thesaurus API to return the first ten synonyms as suggested actions.

  4. As a bonus, use the Bing Image Search API to populate the image in the card. You can get an access key in Azure and use the following sample as a guide: https://docs.microsoft.com/en-us/azure/cognitive-services/bing-image-search/image-search-sdk-node-quickstart .

You now have experience connecting your bot to different APIs and translating those API responses into hero cards, buttons, and suggested actions. Well done!

Channel Errors

In the “Rich Content” section, we noted that when a bad request is sent by our bot to the Facebook Messenger connector, our bot will receive an HTTP error. This error was also printed in the console output of the bot. It seems that the Facebook bot connector is reporting an error from the Facebook APIs back to our bot. That is cool. The additional feature we saw was that the channel detail page in Azure also contained all those errors. Although minor, this is a powerful feature. It allows us to quickly see how many messages were rejected by the API and the error codes. The case we ran into, an unsupported file format, was just one of many possible errors. We would see errors if the message is malformed, if there are authentication issues, or if Facebook rejects the connector message for any other reason. Similar ideas apply to the other connectors. The connectors are generally good at translating Bot Framework activities into something that will not be rejected by the channels, but rejections do happen.

In general, if our bot sends a message to a Bot Framework connector and the message does not appear on the interface, chances are there was an issue with the interaction between the connector and channel, and this online error log will contain information about the failure.

Channel Data

We have mentioned several times that different channels may render messages differently or have different rules about certain items, such as the number of hero cards in a carousel or the number of buttons in a hero card. We have been showing examples of Messenger and emulator renderings, as those channels typically work well. Skype is another one that supports a lot of the Bot Builder features (which makes sense, as both are owned by Microsoft). Slack does not have as much rich support for these features, but its editable messages are a slick feature we will visit in Chapter 8.

For illustration purposes, Figure 6-19 is what the carousel with the suggested actions discussed earlier looks like in Slack.
../images/455925_1_En_6_Chapter/455925_1_En_6_Fig19_HTML.jpg
Figure 6-19

Same Bot Builder object rendered in Slack

That’s not a carousel. There is no such concept in Slack! There are also no cards to speak of; it is just messages with attachments. The images are not clickable either; the default link is displayed above the image. Both the Im Back and Post Back buttons appear to do a post back. There is no concept of suggested actions/quick replies. You can find more information about the Slack Message format online.7

However, the team behind the Bot Builder SDK anticipated that you may want to specify the exact native channel message, distinct from the Bot Framework connector’s default rendering for that channel. The solution is a field on incoming messages that contains the native channel JSON data, and a field on outgoing messages that may contain a native channel JSON response.

The terminology used in the Node SDK is sourceEvent (the C# version of Bot Builder refers to this concept as channelData). The sourceEvent in the Node SDK exists on the IEvent interface. Remember, this is the interface that IMessage implements as well. This means any event from a bot connector may include the raw channel JSON.
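For incoming events, reading the raw payload is just a property access on the message. The following sketch shows one use: detecting a Messenger echo event. The payload shape used here (message.is_echo) comes from Facebook's webhook format and is an assumption to verify against the Messenger documentation.

```javascript
// Sketch: inspect the raw channel JSON attached to an incoming event.
// The Messenger webhook shape used here (message.is_echo) is an
// assumption; other channels attach entirely different JSON.
function isMessengerEcho(message) {
    const raw = message.sourceEvent; // native channel JSON, when provided
    return !!(raw && raw.message && raw.message.is_echo);
}

// e.g., inside a dialog: if (isMessengerEcho(session.message)) { return; }
```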

Let’s look at a feature in Facebook Messenger that is not readily supported by the Bot Framework. By default, cards in Messenger require an image with a 1.91:1 aspect ratio.8 The connector’s default conversion of a hero card uses this template. There is, however, the option of a 1:1 image ratio. The documentation describes other options that the Bot Framework does not expose. For example, Facebook has a specific flag for marking cards as sharable, and you can control the size of the WebView invoked by an openURL button in Messenger. For now, we will stick to modifying the image aspect ratio.

For starters, let’s see the code to send the same card we have been sending using the hero card object but using Facebook’s native format:

const bot = new builder.UniversalBot(connector, [
    (session) => {
        if (session.message.address.channelId == 'facebook') {
            const msg = new builder.Message(session);
            msg.sourceEvent({
                facebook: {
                    attachment: {
                        type: 'template',
                        payload: {
                            template_type: 'generic',
                            elements: [
                                {
                                    title: 'card title',
                                    subtitle: 'card subtitle',
                                    image_url: 'https://bot-framework.azureedge.net/bot-icons-v1/bot-framework-default-7.png',
                                    default_action: {
                                        type: 'web_url',
                                        url: 'http://dev.botframework.com',
                                        webview_height_ratio: 'tall',
                                    },
                                    buttons: [
                                        {
                                            type: "web_url",
                                            url: "http://google.com",
                                            title: "Open Google"
                                        },
                                        {
                                            type: 'postback',
                                            title: 'Im Back',
                                            payload: 'Hello!'
                                        },
                                        {
                                            type: 'postback',
                                            title: 'Post Back',
                                            payload: 'Hello!'
                                        }
                                    ]
                                }
                            ],
                        }
                    }
                }
            });
            session.send(msg);
        } else {
            session.send('this bot is unsupported outside of facebook!');
        }
    }
]);
The rendering (Figure 6-20) looks identical to the rendering using the hero card.
../images/455925_1_En_6_Chapter/455925_1_En_6_Fig20_HTML.jpg
Figure 6-20

Rendering a generic template in Messenger

We set image_aspect_ratio to square, and now Facebook renders it as a square (Figure 6-21)!

const msg = new builder.Message(session);
msg.sourceEvent({
    facebook: {
        attachment: {
            type: 'template',
            payload: {
                template_type: 'generic',
                image_aspect_ratio: 'square',
                // more...
            }
        }
    }
});
session.send(msg);
../images/455925_1_En_6_Chapter/455925_1_En_6_Fig21_HTML.jpg
Figure 6-21

Rendering a generic template with a square image on Messenger

It’s that easy! This is just a taste. In Chapter 8, we will explore using the Bot Framework to integrate with native Slack features.
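Before moving on, here is a sketch of how the other Messenger options mentioned earlier (the sharable flag and the WebView height) can ride along in the same sourceEvent payload. The field names follow Facebook’s generic template documentation; verify them against the current Messenger reference before relying on them.

```javascript
// Sketch: a Messenger generic template payload combining the options the
// Bot Framework does not expose directly. Field names are taken from
// Facebook's template documentation and should be verified there.
function buildGenericTemplate(element) {
    return {
        facebook: {
            attachment: {
                type: 'template',
                payload: {
                    template_type: 'generic',
                    image_aspect_ratio: 'square', // or 'horizontal', the 1.91:1 default
                    sharable: true,               // allow users to share the card
                    elements: [element]
                }
            }
        }
    };
}

const element = {
    title: 'card title',
    image_url: 'https://bot-framework.azureedge.net/bot-icons-v1/bot-framework-default-7.png',
    buttons: [{
        type: 'web_url',
        url: 'http://dev.botframework.com',
        title: 'Open',
        webview_height_ratio: 'compact' // 'compact', 'tall', or 'full'
    }]
};
// usage: msg.sourceEvent(buildGenericTemplate(element)); session.send(msg);
```

As before, the resulting object is handed to msg.sourceEvent(...) and sent via session.send.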

Group Chat

Some types of bots are meant to be used in a group setting. On Messenger, in Twitter direct messages, and on similar platforms, the interaction between a user and a bot is typically one on one. However, some channels, most notably Slack, are focused on collaboration. In that context, your bot needs the ability to converse with multiple users simultaneously, to participate productively in a group conversation, and to handle mention tags correctly.

Some channels will allow the bot to view every single message that is sent between users in a channel. Other channels will only send messages to the bot if it is mentioned (for example, “hey @szymonbot, write a book on bots will ya?”).

If we are in a channel that allows our bot to see all messages in a group setting, our bot could monitor the conversation and silently execute code based on the discussion (because replying to every message on a group conversation is kind of annoying), or it could ignore everything that doesn’t have a mention of the bot. It could also implement a combination of the two behaviors, where the bot is activated by a mention with a certain command and becomes chatty.

In the “Messages” section, we showed the interface for a message. We glossed over the entities list, but it becomes relevant here. One type of entity we may receive from a connector is a mention. The object includes the name and id of the mentioned user and looks as follows:

{
    type: 'mention',
    mentioned: {
        id: '',
        name: ''
    },
    text: ''
};

Facebook does not support this type of entity, but Slack does. We will connect a bot to Slack in Chapter 8; in the meantime, here is code that always replies in a direct message but replies in a group chat only when the bot is mentioned:

const _ = require('lodash'); // the sample uses lodash's find
const bot = new builder.UniversalBot(connector, [
    (session) => {
        const botMention = _.find(session.message.entities, function (e) { return e.type == 'mention' && e.mentioned.id == session.message.address.bot.id; });
        if (session.message.address.conversation.isGroup && botMention) {
            session.send('hello ' + session.message.user.name + '!');
        }
        else if (!session.message.address.conversation.isGroup) {
            // 1 on 1 session
            session.send('hello ' + session.message.user.name + '!');
        } else {
            // silently looking at non-mention messages
            // session.send('bein creepy...');
        }
    }
]);
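The gating logic in the waterfall above can be factored into a small, testable predicate, which keeps the dialog code readable and makes the mention check easy to unit test. This is a sketch; the function name is ours, and it uses only plain JavaScript (no lodash):

```javascript
// Sketch: should the bot reply to this message? Always reply in 1:1
// conversations; in group conversations, reply only when the bot itself
// is mentioned.
function shouldReply(message) {
    const conversation = message.address.conversation;
    const isGroup = !!(conversation && conversation.isGroup);
    if (!isGroup) {
        return true; // direct message
    }
    const botId = message.address.bot.id;
    return (message.entities || []).some(e =>
        e.type === 'mention' && e.mentioned && e.mentioned.id === botId);
}
```

The waterfall then reduces to a single if (shouldReply(session.message)) check.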
Figure 6-22 is what the experience looks like in Slack in a direct conversation.
../images/455925_1_En_6_Chapter/455925_1_En_6_Fig22_HTML.jpg
Figure 6-22

Direct messaging a group chat–enabled bot in Slack

Figure 6-23 shows the behavior in a group chat (excuse the overly original username srozga2).
../images/455925_1_En_6_Chapter/455925_1_En_6_Fig23_HTML.jpg
Figure 6-23

Group chat–enabled bot ignoring messages without a mention

Custom Dialogs

We have constructed our dialogs by using the bot.dialog(…) method. We also discussed the concept of a waterfall. In the calendar bot we started in the previous chapter, each of our dialogs was implemented via waterfalls: a set of steps that will execute in sequence. We can skip some steps or end the dialog before all steps are completed, but the idea of a predefined sequence is key. This logic is implemented by a class in the Bot Builder SDK called WaterfallDialog. If we look at the code behind the dialog(…) call, we will find this bit:

if (Array.isArray(dialog) || typeof dialog === 'function') {
    d = new WaterfallDialog(dialog);
} else {
    d = <any>dialog;
}

What if the conversation piece we would like to encode is not easily represented in a waterfall abstraction? What choices do we have? We can create a custom implementation of a dialog!

In the Bot Builder SDK, a dialog is a class that represents some interaction between the user and the bot. Dialogs can call other dialogs and accept return values from those child dialogs. They live on a dialog stack, not unlike a function call stack. Using the default waterfall helper hides some of these details; implementing a custom dialog brings us closer to the dialog stack reality. The abstract Dialog class from the Bot Builder is shown here:

export abstract class Dialog extends ActionSet {
    public begin<T>(session: Session, args?: T): void {
        this.replyReceived(session);
    }
    abstract replyReceived(session: Session, recognizeResult?: IRecognizeResult): void;
    public dialogResumed<T>(session: Session, result: IDialogResult<T>): void {
        if (result.error) {
            session.error(result.error);
        }
    }
    public recognize(context: IRecognizeDialogContext, cb: (err: Error, result: IRecognizeResult) => void): void {
        cb(null, { score: 0.1 });
    }
}
Dialog is simply an abstract class we can inherit from, with four important methods.
  • begin: Called when the dialog is first placed on the stack.

  • replyReceived: Called any time a message arrives from the user.

  • dialogResumed: Called when a child dialog ends and the current dialog becomes active again. One of the parameters received by dialogResumed is the child dialog’s result object.

  • recognize: Allows us to add custom dialog recognition logic. By default, the Bot Builder SDK provides declarative methods to set up custom global or dialog-scoped recognition. If we would like to add further recognition logic, we can do so by overriding this method. We’ll get more into this in the “Actions” section.

To illustrate the concepts, we create a BasicCustomDialog. Since Bot Builder is written in TypeScript,9 a typed superset of JavaScript, we wrote the subclass in TypeScript, compiled it into JavaScript using the TypeScript compiler (tsc), and then used it in app.js.

Let’s look at the custom dialog’s code. It happens to be TypeScript because inheritance reads more cleanly there; the compiled JavaScript is shown later. When the dialog begins, it sends the “begin” text. When it receives a message, it responds with the “reply received” text. If the user sends the “prompt” text, the dialog asks the user for some text input; it then receives that input in the dialogResumed method, which prints the result. If the user enters “done,” the dialog finishes and returns to the root dialog.

import { Dialog, Session, Prompts } from 'botbuilder'
export class BasicCustomDialog extends Dialog {
    constructor() {
        super();
    }
    // called when the dialog is invoked
    public begin<T>(session: Session, args?: T): void {
        session.send('begin');
    }
    // called any time a message is received
    public replyReceived(session: Session): void {
        session.send('reply received');
        if(session.message.text === 'prompt') {
            Prompts.text(session, 'please enter any text!');
        } else if(session.message.text == 'done') {
            session.endDialog('dialog ending');
        } else {
            // no-op
        }
    }
    public dialogResumed(session: Session, result: any): void {
        session.send('dialog resumed with value: ' + result);
    }
}

We use an instance of the dialog directly in app.js. In the default waterfall, we echo any message, except the “custom” input, which begins the custom dialog.

const bot = new builder.UniversalBot(connector, [
    (session) => {
        if(session.message.text === 'custom') {
            session.beginDialog('custom');
        } else {
            session.send('echo ' + session.message.text);
        }
    }
]);
const customDialogs = require('./customdialogs');
bot.dialog('custom', new customDialogs.BasicCustomDialog());
Figure 6-24 shows what a sample interaction looks like.
../images/455925_1_En_6_Chapter/455925_1_En_6_Fig24_HTML.jpg
Figure 6-24

Interacting with a custom dialog

Incidentally, the Prompts.text, Prompts.number, and other Prompt dialogs are all implemented as custom dialogs.

The compiled JavaScript for the custom dialog is shown next. It is a bit more challenging to reason about, but at the end of the day, it is standard ES5 JavaScript prototype inheritance.10

"use strict";
var __extends = (this && this.__extends) || (function () {
    var extendStatics = Object.setPrototypeOf ||
        ({ __proto__: [] } instanceof Array && function (d, b) { d.__proto__ = b; }) ||
        function (d, b) { for (var p in b) if (b.hasOwnProperty(p)) d[p] = b[p]; };
    return function (d, b) {
        extendStatics(d, b);
        function __() { this.constructor = d; }
        d.prototype = b === null ? Object.create(b) : (__.prototype = b.prototype, new __());
    };
})();
exports.__esModule = true;
var botbuilder_1 = require("botbuilder");
var BasicCustomDialog = /** @class */ (function (_super) {
    __extends(BasicCustomDialog, _super);
    function BasicCustomDialog() {
        return _super.call(this) || this;
    }
    // called when the dialog is invoked
    BasicCustomDialog.prototype.begin = function (session, args) {
        session.send('begin');
    };
    // called any time a message is received
    BasicCustomDialog.prototype.replyReceived = function (session) {
        session.send('reply received');
        if (session.message.text === 'prompt') {
            botbuilder_1.Prompts.text(session, 'please enter any text!');
        }
        else if (session.message.text == 'done') {
            session.endDialog('dialog ending');
        }
        else {
            // no-op
        }
    };
    BasicCustomDialog.prototype.dialogResumed = function (session, result) {
        session.send('dialog resumed with value: ' + result);
    };
    return BasicCustomDialog;
}(botbuilder_1.Dialog));
exports.BasicCustomDialog = BasicCustomDialog;

Exercise 6-3

Implementing a Custom Prompts.number

As an exercise of the concept of a custom dialog, you will now create a custom Prompts.number dialog. This exercise is purely academic; it is interesting to know how framework-level behavior may be implemented.
  1.

    Create a bot with a two-step waterfall that uses the standard Prompts.number to collect a numerical value and send the number back to the user in the second waterfall step. Note that you will be using the response field on the args parameter to the waterfall functions.

     
  2.

    Create a custom dialog that collects user input until it receives a number. You can use parseFloat for the purposes of the exercise. When a valid number is received, call session.endDialogWithResult with an object of the same structure as the one returned by Prompts.number. If the user’s input is invalid, send an error message and ask for a number again.

     
  3.

    In your waterfall, instead of calling Prompts.number, call your new custom dialog. Your waterfall should still work!

     
  4.

    As a bonus, add logic to your custom dialog to allow a maximum of five tries. After that, return a canceled result to your waterfall.

     

You now understand the building blocks of all dialogs in the Bot Builder SDK! We can use this knowledge to build just about any sort of interaction.
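As a hint for step 2 of Exercise 6-3, the parse-and-retry decision can be isolated in a plain function before wiring it into a custom dialog. This is only a sketch: the function name and the string result reasons are illustrative (the SDK's real dialog results use the ResumeReason enum), and parseFloat is deliberately lax, as the exercise allows.

```javascript
// Sketch of the validation core for a custom number prompt.
// Returns an object loosely shaped like a dialog result.
function tryParseNumber(text, attempt, maxAttempts) {
    const value = parseFloat(text);
    if (!isNaN(value)) {
        return { resumed: 'completed', response: value };
    }
    if (attempt >= maxAttempts) {
        return { resumed: 'canceled' }; // give up after too many tries
    }
    return { resumed: 'retry' }; // caller re-prompts the user
}
```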

Actions

We now have a good idea of how powerful abstraction dialogs are and how the Bot Builder SDK manages the dialog stack. One of the key pieces of the framework that we do not have good insight into is how to link user actions to transformations of the dialog stack. At the most basic level, we can write code that simply calls beginDialog. But how do we make that determination based on user input? How can we hook that into the recognizers that we learned about in the previous chapter and specifically LUIS? That is what actions allow us to do.

The Bot Builder SDK contains six types of actions, with two being global and four scoped to a dialog. The two global actions are triggerAction and customAction. We’ve run into triggerAction before. It allows the bot to invoke a dialog when an intent is matched at any point during the conversation, assuming the intent does not match a dialog-scoped action beforehand. These are evaluated any time user input is received. The default behavior is to clear the entire dialog stack before the dialog is invoked.

lib.dialog(constants.dialogNames.AddCalendarEntry, [
    function (session, args, next) {
        ...
]).triggerAction({
    matches: constants.intentNames.AddCalendarEntry
});

Each of our main dialogs in the calendar bot from the previous chapter uses the default triggerAction behavior, except for Help. The Help dialog is invoked on top of the dialog stack, so when it completes, we are back in whatever dialog the user was in to begin with. To achieve this effect, we override the onSelectAction method and specify the behavior we want.

lib.dialog(constants.dialogNames.Help, (session, args, next) => {
...
}).triggerAction({
    matches: constants.intentNames.Help,
    onSelectAction: (session, args, next) => {
        session.beginDialog(args.action, args);
    }
});

A customAction binds directly to the bot object instead of to a dialog. It allows us to bind a function that responds to user input. We don’t get a chance to query the user for more information the way a dialog implementation would, which makes it a good fit for functionality that simply returns a message or performs an HTTP call based on user input. In fact, we could go so far as to rewrite the Help dialog like this. The code looks straightforward, but we lose the encapsulation and extensibility of the dialog model. In other words, we no longer have the logic in its own dialog, with the ability to execute several steps, collect user input, or provide a result to the calling object.

lib.customAction({
    matches: constants.intentNames.Help,
    onSelectAction: (session, args, next) => {
        session.send("Hi, I am a calendar concierge bot. I can help you create, delete and move appointments. I can also tell you about your calendar and check your availability!");
    }
});

The four types of contextual actions are beginDialogAction, reloadAction, cancelAction, and endConversationAction. Let’s examine each one.

BeginDialogAction creates an action that pushes a new dialog on the stack whenever the action is matched. Our contextual help dialogs in the calendar bot used this approach. We created two dialogs: one as the help for the AddCalendarEntry dialog and the second as a help for the RemoveCalendarEntry dialog.

// help message when help requested during the add calendar entry dialog
lib.dialog(constants.dialogNames.AddCalendarEntryHelp, (session, args, next) => {
    const msg = "To add an appointment, we gather the following information: time, subject and location. You can also simply say 'add appointment with Bob tomorrow at 2pm for an hour for coffee' and we'll take it from there!";
    session.endDialog(msg);
});
// help message when help requested during the remove calendar entry dialog
lib.dialog(constants.dialogNames.RemoveCalendarEntryHelp, (session, args, next) => {
    const msg = "You can remove any calendar entry either by subject or by time!";
    session.endDialog(msg);
});

Our AddCalendarEntry dialog can then bind the beginDialogAction to its appropriate help dialog.

lib.dialog(constants.dialogNames.AddCalendarEntry, [
    // code
]).beginDialogAction(constants.dialogNames.AddCalendarEntryHelp, constants.dialogNames.AddCalendarEntryHelp, { matches: constants.intentNames.Help })
.triggerAction({ matches: constants.intentNames.AddCalendarEntry });

Note that the behavior of this action is the same as calling beginDialog manually. The new dialog is placed on top of the dialog stack, and the current dialog is continued when done.

The reloadAction call performs a replaceDialog. replaceDialog is a method on the session object that ends the current dialog and replaces it with an instance of a different dialog. The parent dialog does not get a result until the new dialog finishes. In practice, we can utilize this to restart an interaction or to switch into a more appropriate dialog in the middle of a flow.

Here is the code for the conversation (see Figure 6-25):

lib.dialog(constants.dialogNames.AddCalendarEntry, [
    // code
])
    .beginDialogAction(constants.dialogNames.AddCalendarEntryHelp, constants.dialogNames.AddCalendarEntryHelp, { matches: constants.intentNames.Help })
    .reloadAction('startOver', "Ok, let's start over...", { matches: /^restart$/i })
    .triggerAction({ matches: constants.intentNames.AddCalendarEntry });
../images/455925_1_En_6_Chapter/455925_1_En_6_Fig25_HTML.jpg
Figure 6-25

Sample conversation triggering the reloadAction

CancelAction allows us to cancel the current dialog. The parent dialog will receive a cancelled flag set to true in its resume handler. This allows the dialog to properly act on the cancellation. The code follows (the conversation visualization is shown in Figure 6-26):

lib.dialog(constants.dialogNames.AddCalendarEntry, [
    // code
])
    .beginDialogAction(constants.dialogNames.AddCalendarEntryHelp, constants.dialogNames.AddCalendarEntryHelp, { matches: constants.intentNames.Help })
    .reloadAction('startOver', "Ok, let's start over...", { matches: /^restart$/i })
    .cancelAction('cancel', 'Cancelled.', { matches: /^cancel$/i})
    .triggerAction({ matches: constants.intentNames.AddCalendarEntry });
../images/455925_1_En_6_Chapter/455925_1_En_6_Fig26_HTML.jpg
Figure 6-26

Sample conversation triggering the cancelAction

Lastly, the endConversationAction allows us to bind to the session.endConversation call. Ending a conversation implies that the entire dialog stack is cleared and that all the user and conversation data is removed from the state store. If a user starts messaging the bot again, a new conversation is created without any knowledge of the previous interactions. The code is as follows (Figure 6-27 shows the conversation visualization):

lib.dialog(constants.dialogNames.AddCalendarEntry, [
    // code
])
    .beginDialogAction(constants.dialogNames.AddCalendarEntryHelp, constants.dialogNames.AddCalendarEntryHelp, { matches: constants.intentNames.Help })
    .reloadAction('startOver', "Ok, let's start over...", { matches: /^restart$/i })
    .cancelAction('cancel', 'Cancelled.', { matches: /^cancel$/i})
    .endConversationAction('end', "conversation over!", { matches: /^end!$/i })
    .triggerAction({ matches: constants.intentNames.AddCalendarEntry });
../images/455925_1_En_6_Chapter/455925_1_En_6_Fig27_HTML.jpg
Figure 6-27

A sample conversation triggering an endConversationAction

Extra Notes on Actions

Recall from the previous chapter that each recognizer accepts user input and returns an object with an intent text value and a score. We touched upon the fact that we can use recognizers that determine the intent from LUIS, that use regular expressions, or that implement any custom logic. The matches value in each of the actions we have created is how we specify which recognizer intent an action is interested in. The options object we pass to an action implements the following interface:

export interface IDialogActionOptions {
    matches?: RegExp|RegExp[]|string|string[];
    intentThreshold?: number;
    onFindAction?: (context: IFindActionRouteContext, callback: (err: Error | null, score: number, routeData?: IActionRouteData) => void) => void;
    onSelectAction?: (session: Session, args?: any, next?: Function) => void;
}
Here is what this object contains:
  • matches is the intent name or regular expression, or an array of either, that the action is looking for.

  • intentThreshold is the minimum score a recognizer must assign to an intent for this action to get invoked.

  • onFindAction allows us to invoke custom logic when an action is being checked for whether it should be triggered.

  • onSelectAction allows us to customize the behavior for an action. For instance, use it if you don’t want to clear the dialog stack but would rather place the new dialog on top of the stack, as we did in the Help dialog’s triggerAction earlier.

In addition to this level of customization, the Bot Builder SDK has very specific rules around actions and their precedence. Recall that we’ve looked at global actions, dialog-scoped actions, and a possible recognize implementation on each dialog in our discussion on custom dialogs. The order of action resolution when a message arrives is as follows. First, the system tries to locate the current dialog’s implementation of the recognize function. After that, the SDK looks at the dialog stack, starting from the current dialog all the way to the root dialog. If no action matches along that path, the global actions are queried. This order makes sure that actions closest to the current user experience are processed first. Keep this in mind as you design your bot interactions.
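This precedence can be illustrated with a small simulation. The following is not the SDK’s actual implementation, just a sketch of the lookup order it describes: dialog-scoped actions are consulted from the current dialog down the stack to the root, and global actions are consulted last (the current dialog’s recognize step is omitted here).

```javascript
// Sketch of action-precedence resolution. Each "dialog" carries a map
// from intent name to action name; the last element of the stack is the
// active dialog.
function resolveAction(intent, stack, globalActions) {
    // walk the stack from the current dialog toward the root
    for (let i = stack.length - 1; i >= 0; i--) {
        const actions = stack[i].actions;
        if (actions && actions[intent]) {
            return actions[intent];
        }
    }
    // only then consult globally registered actions
    return globalActions[intent] || null;
}
```

With a stack of [root, AddCalendarEntry], a Help intent resolves to AddCalendarEntry’s contextual help before any global Help action, which matches the behavior we built with beginDialogAction.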

Libraries

Libraries are a way of packaging and distributing related bot dialogs, recognizers, and other functionality. Libraries can reference other libraries, resulting in bots with highly composed pieces of functionality. From the developer perspective, a library is simply a nicely packaged collection of dialogs, recognizers, and other Bot Builder objects with a name and, commonly, a set of helper methods to aid in invoking the dialogs and other library-specific features. In our Calendar Concierge Bot in Chapter 5, each dialog was part of a library related to a high-level bot feature. The app.js code loads all the modules and then installs them into the main bot via the bot.library call.

const helpModule = require('./dialogs/help');
const addEntryModule = require('./dialogs/addEntry');
const removeEntryModule = require('./dialogs/removeEntry');
const editEntryModule = require('./dialogs/editEntry');
const checkAvailabilityModule = require('./dialogs/checkAvailability');
const summarizeModule = require('./dialogs/summarize');
const bot = new builder.UniversalBot(connector, [
    (session) => {
        // code
    }
]);
bot.library(addEntryModule.create());
bot.library(helpModule.create());
bot.library(removeEntryModule.create());
bot.library(editEntryModule.create());
bot.library(checkAvailabilityModule.create());
bot.library(summarizeModule.create());

This is library composition in action: UniversalBot is itself a subclass of Library, and our main UniversalBot library imports six other libraries. A dialog referenced from any other context must be namespaced with the library name as a prefix; from the perspective of the root library or dialogs in the UniversalBot object, invoking another library’s dialog must use a qualified name in the format libName:dialogName. This fully qualified naming is necessary only when crossing library boundaries; within the same library, the prefix is not needed.

A common pattern is to expose a helper method in your module that invokes a library dialog. Think of it as library encapsulation: a library should not know anything about the internals of another library. For example, our help library exposes a method to do just that.

const lib = new builder.Library('help');
exports.help = (session) => {
    session.beginDialog('help:' + constants.dialogNames.Help);
};

Conclusion

Microsoft’s Bot Builder SDK is a powerful bot construction library and conversation engine that helps us develop all types of asynchronous conversational experiences, from simple back-and-forth exchanges to complex bots with a multitude of behaviors. The dialog abstraction is a powerful way of modeling a conversation. Recognizers define the mechanisms our bot uses to translate user input into machine-readable intents, and actions map those recognizer results onto operations on the dialog stack. A dialog is principally concerned with three things: what happens when it begins, what happens when a user’s message is received, and what happens when a child dialog returns its result. Every dialog uses the bot context, called the session, to retrieve the user message and to create responses. A response may be composed of text, video, audio, or images; in addition, cards can produce richer and context-sensitive experiences, and suggested actions keep the user from having to guess what to do next.

In the following chapter, we’ll apply these concepts to integrate our bot with the Google Calendar API, and we’ll take steps to creating a compelling first version of our calendar bot experience.