How we built automated support

This is the story of how Kickstarter’s Data team collaborated with the Community Support team and the Engineering team to build a support automation tool called Sassy — and how our workplace collaboration was enriched in the process.

Origins in empathy

Successful companies tend to have a deep, fundamental empathy with their users. At Kickstarter we try to bring empathy to every part of the organization, from the design of human-oriented products to the care and tone of our Community Support emails.

On the Data team we seek to understand our users through the data they generate. This sometimes proves a challenge, as data points are only shadows of the humans that create them.

We also try to have empathy with our colleagues in other departments at Kickstarter. Our Community Support team, for instance, promotes this by holding what’s called a CS rotation, wherein everybody in the company gets a CS training shift and temporarily acts as a CS agent, answering a handful of user support tickets over the course of one week.

I did my CS rotation in the fall of 2015, the first week it was offered. Answering tickets was much harder than I could have imagined, but I found that I loved being able to connect with and help the people who actually use the products that I help build and maintain. There was a thrill that I don’t usually get when analyzing the results of an A/B test.

Not only did this rotation help me gain a better understanding of the patterns and pain points of our users, it was also a great exercise in getting to know the workflows used by our CS team.

While writing my responses to user emails was a slow and painstaking process, seasoned CS agents work at lightning speed. And they have to — the volume of support tickets is high, and it takes a lot of human power to handle them in a timely and accurate manner.

As I learned about the CS workflow, I also learned that the CS team was interested in exploring augmented support — meaning hiring outside contractors to manage certain types of tickets, or possibly even introducing some automation to their workflows. An opportunity for cross-team collaboration started to germinate.

Human automation

Working fast is imperative to our CS team, but just as crucial is making sure our users are treated like human beings. It is important to be able to dig deeper when users write in with tricky tickets, but often these high-touch deep dives can only really begin after the exchange of a few diagnostic troubleshooting messages. These initial emails often solve users’ issues outright.

A large part of community support, I quickly learned, involves responding to lots and lots of straightforward, nearly identical tickets: password resets, payment troubleshooting, and users asking to change their pledge are commonplace. Like many support teams across the industry, Kickstarter’s CS team has created an array of “macros” — pre-written responses to straightforward questions that can be conjured in a few keystrokes — to respond to many of these tickets. Seasoned CS agents have memorized scores of these macros, and can produce the correct one, personalize it, and fire it off in just a few seconds.

This is all done in a customer service software called Zendesk, which, in addition to supporting macros, allows user emails to be tagged, prioritized, and routed to different support agents. A lot of this can be controlled automatically too, using the Zendesk API.
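To make the Zendesk automation concrete: applying tags to a ticket is a single `PUT` to the ticket-update endpoint of the Zendesk REST API. The sketch below only assembles the request without sending it; the subdomain, ticket ID, and tag name are made up for illustration:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class ZendeskTagRequest {
    // Build (but do not send) a Zendesk ticket-update request that applies tags.
    // Zendesk's ticket update endpoint is PUT /api/v2/tickets/{id}.json with a
    // JSON body of the form {"ticket":{"tags":[...]}}.
    static HttpRequest buildTagRequest(String subdomain, long ticketId, String... tags) {
        StringBuilder json = new StringBuilder("{\"ticket\":{\"tags\":[");
        for (int i = 0; i < tags.length; i++) {
            if (i > 0) json.append(',');
            json.append('"').append(tags[i]).append('"');
        }
        json.append("]}}");
        return HttpRequest.newBuilder()
                .uri(URI.create("https://" + subdomain + ".zendesk.com/api/v2/tickets/" + ticketId + ".json"))
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString(json.toString()))
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = buildTagRequest("example", 42L, "sassy_priority_high");
        System.out.println(req.method() + " " + req.uri());
        // → PUT https://example.zendesk.com/api/v2/tickets/42.json
    }
}
```

In a real caller, the request would be sent with authentication headers and the response checked, but the payload shape is the whole trick: tags drive the downstream triggers.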

While the use of macros already saves us a lot of time, we wanted to see if we could go further. The CS team here is small and scrappy, and the time they spend on even the quickest macro-based emails takes away from time that could be spent solving harder problems that require expert — and human — assistance.

Together, the CS and Data teams identified a few use cases where repetitive, time-consuming human actions might be automated with the help of machine learning.

  1. Automatic prioritization: The CS team had recently organized support into a few different levels based on the complexity of issues/tickets. The task of determining ticket priority and routing tickets to the appropriate agent or group could be automated.
  2. Automatic responding: Some common tickets are often answered with simple, unmodified macros. If a machine learning algorithm could reliably identify these tickets, we could respond to certain first-touch tickets automatically, bringing a human into the loop for any subsequent replies.
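These two behaviors amount to a thresholded decision over the classifier’s output. A minimal sketch of that decision rule, in which the label names, tier names, and confidence cutoffs are all invented for illustration:

```java
import java.util.Map;

public class TicketRouter {
    // Possible actions the caller can take on a classified ticket.
    enum Action { AUTO_RESPOND, ROUTE_TIER_1, ROUTE_TIER_2 }

    // Decide what to do with a ticket given label -> probability scores.
    // The "password_reset" label and the 0.9 / 0.5 cutoffs are hypothetical.
    static Action decide(Map<String, Double> scores) {
        String best = null;
        double bestP = 0.0;
        for (Map.Entry<String, Double> e : scores.entrySet()) {
            if (e.getValue() > bestP) { best = e.getKey(); bestP = e.getValue(); }
        }
        boolean autoRespondable = "password_reset".equals(best);
        if (autoRespondable && bestP >= 0.9) return Action.AUTO_RESPOND; // simple macro, high confidence
        if (bestP >= 0.5) return Action.ROUTE_TIER_1;                    // confident, but needs a human
        return Action.ROUTE_TIER_2;                                      // unclear: escalate to an expert
    }

    public static void main(String[] args) {
        System.out.println(decide(Map.of("password_reset", 0.95, "pledge_change", 0.05)));
        // → AUTO_RESPOND
    }
}
```

The key design point is that only a short whitelist of labels is ever eligible for auto-response; everything else falls through to a human.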

With these goals in mind we set out to build a ticket classification service that would be known as Sassy — a portmanteau of Support + Classifier.

The pitch and the service

Up to this point, the idea of automated community support had been the brainchild of only the Data team and the Community Support team. To sell the idea to the product team, I developed a Shiny app prototype of the ticket classification service in R. It had a text box on one side and a list of response macros and relevance scores on the other. As you typed your question, the macro scores would update and the most relevant macros would rise to the top of the list.

Early prototype built with R and Shiny

Behind the scenes was a multi-class logistic regression classifier trained with the glmnet package. I worked with the CS team to group macros into a dozen or so topical buckets, and I used those buckets as the labels in my training set. People liked the prototype, and we decided to build the service for real.

Back on the Engineering side of Kickstarter, we had just begun an initiative to break up our monolithic Rails app into smaller autonomous microservices. After much consideration and pilot testing, our Engineering team decided that these services would be written in Java.

I had some experience writing C++ code, so I was excited to get to spend some time away from R and Ruby to work in the statically typed Java world. Some kind engineers on the Platform team (including someone who was on my CS rotation months before) helped build all the robust plumbing and infrastructure to make the skeleton of a Classifications Java service. It was my job to insert the AI.

Bridging the gap

Here I began to encounter the classic problem in data science of how to bridge the gap between prototype and production — how to connect the thing that trains your model to the thing that serves it.

I knew that R was great for building ML models from raw data, and my prototype was built in R, but we didn’t want to use those R objects to serve requests in our production-level service; that was Java’s job. On the other hand, I had no experience training models in Java, and I wasn’t even sure if there were reliable libraries out there for creating ML models strictly in the Java world.

This is when we came across a machine learning platform called H2O, which is designed to solve exactly this problem. I was able to train my multi-class classifier in R using H2O’s highly customizable ML models (I ended up using a gradient boosting tree model), and then export my trained model as a Plain Old Java Object (POJO) file. This POJO could then be inserted into the Java service without any dependencies — all the logic of the model itself is encoded in the POJO with simple Java classes and containers.
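To give a feel for what this looks like from the Java side: an H2O-generated POJO exposes scoring as plain Java over primitive arrays, with no external dependencies. The stub below imitates that calling shape with a trivial hand-written model; it is not actual H2O-generated code, and the class, feature, and label names are invented:

```java
public class SassyModelStub {
    // H2O-style POJOs expose scoring as plain Java over primitive arrays.
    // This hand-written stand-in "classifies" a two-feature row into two
    // classes with a toy logistic rule, purely to show the calling pattern.
    static final String[] LABELS = {"password_reset", "pledge_change"};

    // data: numeric features; preds: [predicted class index, p(class 0), p(class 1)]
    static double[] score0(double[] data, double[] preds) {
        double z = 2.0 * data[0] - 1.5 * data[1]; // toy linear score
        double p0 = 1.0 / (1.0 + Math.exp(-z));   // logistic link
        preds[1] = p0;
        preds[2] = 1.0 - p0;
        preds[0] = p0 >= 0.5 ? 0 : 1;
        return preds;
    }

    public static void main(String[] args) {
        double[] preds = score0(new double[]{3.0, 0.0}, new double[3]);
        System.out.println(LABELS[(int) preds[0]]);
        // → password_reset
    }
}
```

Because the generated model is just classes like this (only vastly larger), dropping it into a Java service is a matter of calling a method, not wiring up a runtime.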

I wrote a training script to fetch raw data from our support ticket database, clean and tokenize the text in R, train a model in H2O, export that model to a POJO file, and copy that file into the Java source code. The model must pass a series of accuracy tests in order to be released. The logic I had to insert into the Java service to serve requests was pretty simple, and the POJO, though comprising many tens of thousands of lines of generated code, worked like a charm.

A few of the half-million lines of an autogenerated H2O POJO file

This foray into Java land also provided a template for our new service-oriented architecture paradigm. We kept our classes abstract enough to easily extend the Classifications service to other classification tasks that we may need in the future, such as models for predicting project success or spam classifiers.

We deployed the service into the wild in early 2017. It processes Zendesk tickets via a POST request sent from our app containing the ticket text and some other metadata. The Java service then cleans and tokenizes the text, passes it through the logic of the model POJO, and responds with a list of labels and corresponding probabilities. The caller can then take an action based on the response, such as routing the ticket by priority or even invoking Zendesk to auto-respond to a ticket. These actions are triggered by tags that the caller applies to tickets via the Zendesk API.
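The request path above can be sketched end to end: tokenize the ticket text, score it, and respond with label probabilities. Everything in this sketch (the tokenizer, the labels, and the trivial keyword “model” standing in for the real POJO) is invented for illustration:

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ClassificationHandler {
    // Lowercase and split into word tokens, as a stand-in for the
    // service's real cleaning/tokenization step.
    static List<String> tokenize(String text) {
        return Arrays.asList(text.toLowerCase().split("\\W+"));
    }

    // Score tokens into label probabilities. A real deployment would pass
    // the tokenized features through the model POJO here; this keyword
    // counter is purely illustrative.
    static Map<String, Double> classify(String ticketText) {
        List<String> tokens = tokenize(ticketText);
        double pw = tokens.contains("password") ? 1.0 : 0.0;
        double pledge = tokens.contains("pledge") ? 1.0 : 0.0;
        double total = Math.max(pw + pledge, 1.0);
        Map<String, Double> out = new LinkedHashMap<>();
        out.put("password_reset", pw / total);
        out.put("pledge_change", pledge / total);
        return out;
    }

    public static void main(String[] args) {
        System.out.println(classify("I forgot my password, help!"));
        // → {password_reset=1.0, pledge_change=0.0}
    }
}
```

The caller receives this label-to-probability map and decides what, if anything, to do with the ticket.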

Empowered by Sassy

Part of the joy of this project was getting to collaborate with other teams throughout Sassy’s development. This did not end when the product was deployed. The process of modifying the training labels and retraining the model periodically over time is owned by the Community Support team. The decisions as to which macros should be grouped together, which macros should be used as auto-responses, and which macros should never be automated are questions that can only be answered by CS.

I worked with members of the CS team to get them acquainted with GitHub: making and committing changes, and opening and merging pull requests. All the details about model training labels are contained in one straightforward config file, which they own and edit, and the retraining script is run with a simple console command. With a little effort to empower people outside of the Engineering team to use a few engineering tools, Sassy is now able to grow more autonomously — and our separate teams are more closely knit.

Here is an example snippet of the config file used for grouping macros into Sassy labels (the label names shown here are illustrative):

password_trouble:
  - macro_pw_trouble
  - macro_acct_profile_forgot_password

creator_video_help:
  - macro_creator_build_video_resolution
  - macro_creator_build_embed_video_in_update
  - macro_creator_build_insert_video_in_description

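At training time, a config like this can be inverted into a macro-to-label lookup, so that each historical ticket is labeled by the macro an agent used to answer it. A sketch of that inversion (the label names here are invented):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MacroLabelMap {
    // Invert a label -> macros config into a macro -> label lookup, so each
    // historical ticket can be labeled by the macro used to answer it.
    static Map<String, String> invert(Map<String, List<String>> config) {
        Map<String, String> macroToLabel = new HashMap<>();
        for (Map.Entry<String, List<String>> e : config.entrySet())
            for (String macro : e.getValue())
                macroToLabel.put(macro, e.getKey());
        return macroToLabel;
    }

    public static void main(String[] args) {
        Map<String, List<String>> config = Map.of(
            "password_trouble", List.of("macro_pw_trouble", "macro_acct_profile_forgot_password"),
            "creator_video_help", List.of("macro_creator_build_video_resolution"));
        System.out.println(invert(config).get("macro_pw_trouble"));
        // → password_trouble
    }
}
```

Because the mapping lives in one config file, CS can regroup macros without touching any model code, and the next retraining run picks up the new labels automatically.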
CS agents periodically monitor Sassy’s accuracy manually, and every so often CS and Data stakeholders get together to discuss the results. By taking ownership of much of this process, my CS colleagues are developing strong intuition for the strengths and limitations of the ML models we are using. This helps us jointly make more informed updates to the model, and the virtuous cycle continues.

Sassy in the future

Sassy is currently in use, sorting tickets by priority and even auto-responding to certain types of tickets. It will continue to evolve as we harvest better training data, iterate on different models, and explore other product incarnations. For example, with a bit of work, a future iteration of Sassy could power real-time, automated, while-you-type support on the site. We are hopeful for what is to come.

What started as a week of answering tickets on the CS rotation grew into a cross-team collaboration, with Community Support, Data, and Engineering building and deploying a brand-new service. We learned from each other throughout the process, and both our teams and we as individuals are stronger as a result.

Interested in working with Kickstarter data? Come join us! We’re currently hiring a Data Engineer.
