Next Reaction: Monitor your conversations, get started with CCAI Insights

This year’s Next session AI103, Using CCAI Insights to better understand your customers, introduced a new Conversational AI tool: CCAI Insights. With Contact Center AI Insights, business stakeholders and QA compliance teams can analyze and monitor customer service interactions and patterns in their contact center data.

It gives businesses insights into the topics that are being discussed by their end-users.

You can monitor how those conversations have been handled by the service agent through transcripts, caller sentiment detection, silence detection, entity identification, and topic modeling.

CCAI Insights can be used standalone, but it also seamlessly integrates with the other Contact Center AI Solution products, such as Dialogflow and Agent Assist, as part of our Conversational AI offerings.

The first thing you will have to do is import conversations into your CCAI Insights instance. In a production environment, you will likely have CCAI Insights integrated with your virtual agent and contact center systems, which push conversations to CCAI Insights in real time via the runtime integrations. However, it’s also possible to import existing datasets manually.

Importing a text chat conversation

Let’s start by importing a text conversation between an end-user and a virtual agent into CCAI Insights. Under the hood, the data imported into CCAI Insights is stored in Cloud Spanner. If regionalization matters to you because of enterprise data regulations, it’s good to know that US and EU regionalization is on the roadmap for early next year. There is also a setting to delete the data after a preset period of time (TTL), and all data can be exported via the API, Cloud Data Fusion, or directly to BigQuery.
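The export to BigQuery, for example, can be scripted against the API. The snippet below is a minimal sketch using the Python client library for CCAI Insights (google-cloud-contact-center-insights); the project, dataset, and table names are placeholders.

from google.cloud import contact_center_insights_v1

# Hypothetical project and BigQuery destination names, for illustration only.
PROJECT_ID = "my-project"
LOCATION = "us-central1"

client = contact_center_insights_v1.ContactCenterInsightsClient()

request = contact_center_insights_v1.ExportInsightsDataRequest()
request.parent = client.common_location_path(PROJECT_ID, LOCATION)
request.big_query_destination.project_id = PROJECT_ID
request.big_query_destination.dataset = "insights_export"
request.big_query_destination.table = "conversations"

# The export runs as a long-running operation; wait for it to finish.
operation = client.export_insights_data(request=request)
operation.result(timeout=3600)
print("Export to BigQuery finished")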

You can import it through Google Cloud Storage by pointing to the GCS URI and providing the name of the virtual agent that handled the chat.

As seen in the listing below, your conversation will need a specific JSON format, which defines the text, the timestamp, the user ID, and the role.

  [
   {
      "text":"Is there anything else I can help you with?",
      "user_id":2,
      "start_timestamp_usec":200000,
      "role":"AGENT"
   },
   {
      "text":"No that's it for now, thank you.",
      "user_id":1,
      "start_timestamp_usec":400000,
      "role":"CUSTOMER"
   },
   {
      "text":"Thank you for contacting us. Have a nice day!",
      "user_id":2,
      "start_timestamp_usec":600000,
      "role":"AGENT"
   }
]
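
Once the JSON file is in a Cloud Storage bucket, the conversation can also be created programmatically. Below is a minimal sketch using the Python client library for CCAI Insights (google-cloud-contact-center-insights); the project, bucket, and agent names are placeholders.

from google.cloud import contact_center_insights_v1

# Hypothetical project, bucket, and agent identifiers, for illustration only.
PROJECT_ID = "my-project"
LOCATION = "us-central1"
TRANSCRIPT_URI = "gs://my-bucket/chat-transcript.json"

client = contact_center_insights_v1.ContactCenterInsightsClient()

conversation = contact_center_insights_v1.Conversation()
conversation.data_source.gcs_source.transcript_uri = TRANSCRIPT URI if False else TRANSCRIPT_URI
conversation.medium = contact_center_insights_v1.Conversation.Medium.CHAT
conversation.agent_id = "virtual-agent-1"

conversation = client.create_conversation(
    parent=client.common_location_path(PROJECT_ID, LOCATION),
    conversation=conversation,
)
print(f"Created conversation: {conversation.name}")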

Once the conversation is imported, you can dive into it and press the Start analysis button. This will analyze your conversation transcript and annotate parts of the conversation, such as locations, persons, or objects. Clicking on these entities will highlight the parts of the conversation where they were mentioned.
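
The same analysis can be kicked off through the API. A sketch, assuming the Python client and a placeholder conversation name:

from google.cloud import contact_center_insights_v1

# Hypothetical conversation resource name, for illustration only.
CONVERSATION_NAME = (
    "projects/my-project/locations/us-central1/conversations/my-conversation"
)

client = contact_center_insights_v1.ContactCenterInsightsClient()

# create_analysis starts a long-running operation that annotates entities,
# sentiment, silence, and more on the conversation transcript.
operation = client.create_analysis(parent=CONVERSATION_NAME)
analysis = operation.result(timeout=600)
print(f"Created analysis: {analysis.name}")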

You can imagine that it’s extremely useful for business or contact center managers to get insights into the topics that are being discussed in a call or chat. For example, in the case of a chatbot, are these the topics the chatbot was trained on, or should you come up with a set of intents for them?

In the conversation hub, you can use the filter to include or exclude conversations based on agent ID, transcript, duration, turn count, and more. These filters can be combined to find specific conversations, and it’s possible to label them so you can find them again or review them over a longer period of time.
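
Similar filters are available through the API when listing conversations. A minimal sketch, assuming the Python client and a placeholder agent ID; the exact filter expression is an assumption:

from google.cloud import contact_center_insights_v1

PROJECT_ID = "my-project"  # placeholder
LOCATION = "us-central1"

client = contact_center_insights_v1.ContactCenterInsightsClient()

# Filter conversations by agent ID (one example of a filterable field).
request = contact_center_insights_v1.ListConversationsRequest(
    parent=client.common_location_path(PROJECT_ID, LOCATION),
    filter='agent_id="virtual-agent-1"',
)

for conversation in client.list_conversations(request=request):
    print(conversation.name, conversation.medium)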

Importing a call (audio) conversation

We can do the same for audio recordings. You will need a two-channel audio file with a uniform sample rate and an encoding supported by Cloud Speech-to-Text. Speech-to-Text can then generate a transcript from the audio file.

What’s important is that your transcript matches the Speech-to-Text response format, which contains the parts of a sentence with the start and end timestamps for each word, as shown in the listing below. Each conversational turn is tagged with a channel tag that refers to the speaker on that channel.

  {
   "results":[
      {
         "alternatives":[
            {
               "transcript":"Hello.",
               "confidence":0.9805153,
               "words":[
                  {
                     "startTime":"1s",
                     "endTime":"1.600s",
                     "word":"Hello."
                  }
               ]
            }
         ],
         "channel_tag":1,
         "languageCode":"en-us"
      },
      {
         "alternatives":[
            {
               "transcript":"Hey there",
               "confidence":0.9805153,
               "words":[
                  {
                     "startTime":"3s",
                     "endTime":"3.500s",
                     "word":"Hey"
                  },
                  {
                     "startTime":"3.500s",
                     "endTime":"4s",
                     "word":"there"
                  }
               ]
            }
         ],
         "channel_tag":2,
         "languageCode":"en-us"
      }
   ]
}
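
Creating an audio conversation through the API looks much like the chat example, except that you point to both the audio file and its transcript. A sketch with placeholder URIs:

from google.cloud import contact_center_insights_v1

# Hypothetical bucket and object names, for illustration only.
PROJECT_ID = "my-project"
LOCATION = "us-central1"
AUDIO_URI = "gs://my-bucket/call-recording.wav"
TRANSCRIPT_URI = "gs://my-bucket/call-transcript.json"  # Speech-to-Text response format

client = contact_center_insights_v1.ContactCenterInsightsClient()

conversation = contact_center_insights_v1.Conversation()
conversation.data_source.gcs_source.audio_uri = AUDIO_URI
conversation.data_source.gcs_source.transcript_uri = TRANSCRIPT_URI
conversation.medium = contact_center_insights_v1.Conversation.Medium.PHONE_CALL

conversation = client.create_conversation(
    parent=client.common_location_path(PROJECT_ID, LOCATION),
    conversation=conversation,
)
print(f"Created conversation: {conversation.name}")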

Once you dive into your conversation, you can analyze the audio, and it’s also possible to play the audio recording.

Besides entities, CCAI Insights can also analyze silence and the sentiment of the caller and the agent, for both chat and audio conversations. This is very useful for contact center managers who want to learn from customer escalations.

Importing large datasets

Lastly, you can also import conversations in batches to bring in existing large datasets. You can import these through scripts using the API or via Cloud Data Fusion.
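
As a sketch, such a script could simply loop over the transcript files in a Cloud Storage bucket and create one conversation per file. The bucket and prefix below are placeholders, and a production pipeline would more likely rely on the runtime integrations or Cloud Data Fusion.

from google.cloud import contact_center_insights_v1, storage

# Hypothetical names, for illustration only.
PROJECT_ID = "my-project"
LOCATION = "us-central1"
BUCKET = "my-bucket"
PREFIX = "chat-transcripts/"

insights = contact_center_insights_v1.ContactCenterInsightsClient()
parent = insights.common_location_path(PROJECT_ID, LOCATION)

# List every transcript under the prefix and create one conversation per file.
for blob in storage.Client().list_blobs(BUCKET, prefix=PREFIX):
    conversation = contact_center_insights_v1.Conversation()
    conversation.data_source.gcs_source.transcript_uri = f"gs://{BUCKET}/{blob.name}"
    conversation.medium = contact_center_insights_v1.Conversation.Medium.CHAT
    created = insights.create_conversation(parent=parent, conversation=conversation)
    print(f"Imported {created.name}")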

Topic modeling

A CCAI Insights topic model uses Google’s Natural Language Processing to generate primary topics for each conversation in your dataset. You can then deploy the model to analyze future conversations as they’re imported.

To train your own topic model with good accuracy, you will need a minimum of 10,000 conversations. Then you can start the training. Be aware that training a topic model can take up to 12 hours; it’s an extensive process that compares every conversation against the others to find the most common topics.

Once your model has been successfully trained on customer data, you can deploy it and view the most used topic drivers. Note the screenshot below: who would have known that your human or virtual service agents apparently spend a lot of time answering questions about how people can log in to their accounts!
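
In the API, topic models are exposed as issue models. Training and deploying one could look roughly like the sketch below; the display name and filter are placeholders.

from google.cloud import contact_center_insights_v1

# Hypothetical project and model names, for illustration only.
PROJECT_ID = "my-project"
LOCATION = "us-central1"

client = contact_center_insights_v1.ContactCenterInsightsClient()
parent = client.common_location_path(PROJECT_ID, LOCATION)

# Topic models are called issue models in the API.
issue_model = contact_center_insights_v1.IssueModel()
issue_model.display_name = "my-topic-model"
issue_model.input_data_config.filter = 'medium="CHAT"'

# Training runs as a long-running operation; remember it can take many hours.
issue_model = client.create_issue_model(parent=parent, issue_model=issue_model).result()

# Once trained, deploy the model so new conversations get topic labels.
client.deploy_issue_model(name=issue_model.name).result()
print(f"Deployed issue model: {issue_model.name}")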

Conversation highlights

Smart Highlights automatically detects highlights through keywords and/or phrases in your conversation without requiring additional configuration. It draws from various possible scenarios to detect highlights, such as asking to hold, ensuring that an issue was resolved, a complaint, and more. Any highlights present in a conversation are labeled in the returned transcript at the sentence level. It analyzes each conversation turn and categorizes the user’s intention.

It’s also possible to create your own highlighters by providing keywords. In the screenshot below, you can see a custom highlighter tagging conversational turns discussing money amounts.
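
Custom highlighters are exposed in the API as phrase matchers. A minimal sketch, assuming the Python client and a few placeholder keywords, that fires whenever any one of the keywords appears in a turn:

from google.cloud import contact_center_insights_v1

PROJECT_ID = "my-project"  # placeholder
LOCATION = "us-central1"

client = contact_center_insights_v1.ContactCenterInsightsClient()

# A phrase matcher that matches when any of its rule groups matches.
phrase_matcher = contact_center_insights_v1.PhraseMatcher()
phrase_matcher.display_name = "money-amounts"
phrase_matcher.type_ = contact_center_insights_v1.PhraseMatcher.PhraseMatcherType.ANY_OF
phrase_matcher.active = True

rule_group = contact_center_insights_v1.PhraseMatchRuleGroup()
rule_group.type_ = (
    contact_center_insights_v1.PhraseMatchRuleGroup.PhraseMatchRuleGroupType.ANY_OF
)
for keyword in ["dollar", "refund", "payment"]:  # placeholder keywords
    rule = contact_center_insights_v1.PhraseMatchRule()
    rule.query = keyword
    rule_group.phrase_match_rules.append(rule)
phrase_matcher.phrase_match_rule_groups.append(rule_group)

phrase_matcher = client.create_phrase_matcher(
    parent=client.common_location_path(PROJECT_ID, LOCATION),
    phrase_matcher=phrase_matcher,
)
print(f"Created phrase matcher: {phrase_matcher.name}")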

When using CCAI Insights combined with Dialogflow, it’s possible to create intelligent highlighters using Dialogflow intents.

As you have read in this article, CCAI Insights enables businesses to hear what customers are saying to make data-driven business decisions and increase operational efficiency. To learn more about CCAI Insights, check out the documentation.
