Tags

  • chatbot
  • high availability
  • Heroku buildpacks
  • automation
  • multi-region

30. The Infrastructure Behind Salesforce's Chatbots

Hosted by Jason Salaz, with guests Ian Asaff, Rob Hurring, Joslyn Esser, and Marvin Lam.

Chatbots have a long history woven into the evolution of communication on the Internet, from IRC and XMPP to Slack and Discord. They're scriptable applications that respond to a user's commands, and they rely heavily on APIs to send actions and receive responses. They don't require much in terms of computational power or disk storage, but as with any kind of software, scaling them to support millions of users across the world requires a fail-safe operational strategy. For its customers, Salesforce offers a live agent support product with a chatbot that reacts to customer inquiries. The team behind the bot joins us on this episode of Code[ish] to talk about how they leverage Heroku to meet its multi-region requirements.


Show notes

As part of its product suite for automating a business's needs, Salesforce offers a Live Agent product whose central component is a chatbot that can respond to a user's inquiries on the web or on other messaging platforms, such as Facebook or SMS. A key design requirement of the chatbot is to simulate human interaction by responding quickly. For an engineering team of eight, that means offloading as much operational responsibility as possible to Heroku's platform.

Salesforce's customers exist around the world, from Europe to Asia to the Americas. Heroku provides multi-region offerings for its dynos and databases that don't require much administration or configuration. Not needing to define roles or specifications further simplifies the chatbot team's own architectural decision to design the product as a collection of microservices.

Furthermore, the chatbot team uses Terraform to define the infrastructure requirements of their application. By predefining the necessary add-ons, data services, and private spaces for every application, they are able to quickly spin up new external services whenever necessary.

Transcript

Jason: Today we're going to talk about chatbots with Ian, Rob, Joslyn and Marvin. I'm Jason Salaz, a member of the Heroku support team and a lifelong user of connected network chat services. Let's go around and introduce each of my co-hosts today. Ian.

Ian: My name's Ian. I'm an engineer on the chatbot runtime team and I work on the application side of the product as opposed to the infrastructure side.

Rob: I'm Rob. I also work alongside Ian on the application side, and we work to build out all the individual microservices on top of Heroku.

Joslyn: Yeah, I'm Joslyn. I'm also just a software engineer, focused primarily on the infrastructure side and on empowering the developers with tooling so that they can, you know, be productive.

Marvin: Hi, I'm Marvin. I work alongside everyone else; my focus is on infrastructure as well. We've been moving a lot of our work over to Heroku, and this is our story.

Jason: Awesome. That being said, give me an introduction to this project. How did we get to starting this project?

Rob: So a long, long time ago we had this thing called Live Agent. When our customers needed to handle a support case, they would do a live chat, and everything was backed by a human behind it, which as you can imagine led to a lot of hours. So chatbots came along to kind of fill that need and say, well, why can't we automate a lot of the basic use cases for what people are using support for? And that's kind of the history of the product here: take out some of that human element and leverage the Salesforce platform to hopefully automate and help people without getting anyone else involved.

Jason: Right. There is a bit of an appeal to the conversational element. You know, the service industry at large is something that benefits very highly from this. And it's a matter of having the technology to be able to fulfill so many aspects of this to... I very much don't want to say replace a human, but certainly augment their capabilities for things that happen at just such a high scale.

Ian: Yeah. I think a big part of it too is just being able to give humans tasks that they're good at instead of just clicking, you know, reset password for someone. Let the computer do that and they can listen to a more complicated support case and work there. So yeah, it's definitely not to replace, it's more to give them actual interesting real work to do as opposed to doing boring stuff that could be automated.

Jason: There's something that I've certainly learned in my years of support experience so far, which is that with documentation, because everybody learns and interacts a little bit differently, it is really advantageous to have multiple mediums: written mediums, video mediums. There's obviously a role for purely audio, kind of guided feedback, guided help. And then in the most recent case, the interactivity offered by chatbots makes things a little bit friendlier than just a series of webpages with unchanging images that say go here, do this. Something can be a little bit more multi-step, multi-input, or serve a variety of use cases in one centralized place.

Rob: That's one of the nice things too: having the machine learning behind it as well, to kind of pick up on what you need and drop you in the right place. If we know you're looking for a specific article, or if we can't handle your situation, we can send you to the right area.

Jason: Was there any more explicit purpose for creating them?

Ian: Yeah, I mean, I don't know. I don't know if I want to speculate on product decisions. I know that chatbots were kind of coming up in the market in a big way, and given that businesses are built on Salesforce's platform, it kind of just made sense with the overall direction the market was going. And given the Salesforce data backing it, it was like a no-brainer.

Marvin: I think a lot of people basically pay Salesforce to automate a lot of their business, right? So for us, our specific area is service, and this is one of the ways you can make much better use of it: if I'm a company and I'm interacting with customers, this allows me to help more customers with a good experience, and I don't have to increase my staff. And that's a huge win, right? And that ends up being kind of the target market for us: folks that start growing their customer base, or already have a large base, and want to be able to make better use of resources. That's huge.

Jason: So how are they being used? What kinds of tasks and workflows have they been able to largely take over so far?

Rob: We've got all different ones. The main thing would be something like, where is my order? Something that you could very easily look up in a database and say, "It's on its way. Here's a tracking number." A lot of use cases are in e-commerce, just general inquiries about order status like that. But in reality, I mean, they could be used for anything. We hook into Apex, the whole Salesforce scripting language. So really you can have a chatbot call out to any API and pretty much do anything. They're very open-ended, which is one of the really neat things about the product.

Marvin: Yeah, I think one of the really cool use cases is that every year we have Dreamforce, and there are usually a lot of different questions that come in. Instead of having people staff, like, a chat room to answer questions for people, we had a chatbot there. So if people were looking for directions or schedules, or what talks are coming up, or wanted to get a map, whatever, they were able to just interact with a bot straight out of our app and basically get the information they needed. And it turns out that very few actual human agents really needed to be involved.

Jason: I know I've used an almost uncountable number of chat networks as I think back on my history on the Internet and larger connected networks, and I've used various chatbots. There's one called Smarter Bot, I believe, that's just a webpage frontend that does those things. I've made and interacted with a whole host of IRC bots and XMPP bots. More recently, Discord and Slack are the really prominent ones, and there's no lack of app integrations and bots present on those mediums. So where do these chatbots reside? What medium does an individual go to to begin this conversation?

Rob: From the beginning we had what we call Live Agent, which is our customer support product. Our bot has kind of integrated into that. So it would be a webpage: you'd click a button that says contact us, and now instead of being routed to a human, you get routed to a chatbot. More recently we're opening up other channels. Facebook is going to be a big one, plus a few other messenger channels, SMS. As of right now, yeah, I mean, we're growing the list of channels we support. There have been a couple of spikes working with Alexa so we can plug in there. We're essentially an API.

Marvin: Yeah. I think the most traditional is if I'm a business, I have a website and then my customer comes to our website, clicks on support or anything like that. You can pop up a window, basically a chat window and I think that's the most traditional. But like Rob was saying, we're definitely adding more and more different ways to get to a bot so that our customers can hook it up in whichever way they want.

Jason: So, to use the example of e-commerce you mentioned before, this isn't solely a Salesforce customer speaking to Salesforce. It can also be, I believe you used the term, customers of customers, for individual e-commerce businesses. They can embed this bot functionality that is siloed specifically to their set of data, their set of circumstances, their tasks, what their business actually is, and that sort of information as well.

Marvin: Exactly right, yeah. So our customers, Salesforce's customers, will go in and configure their own bots within their organization. And what we end up serving is their bots to their customers.

Jason: Right.

Marvin: Yeah.

Jason: So you have this large scope across mediums, the vast amount of data that this bot has to be able to access. Why did you choose to use Heroku?

Joslyn: The primary reason... I mean, what's interesting is that on this podcast right now, we have about half the team. So to give a sense of the size of the team: we have eight engineers. Eight engineers to develop this product, and to own the entire product too. And that's a very important piece, the ownership of it: taking calls for it, being on a pager rotation for it, making sure that it's scaling well, and also developing it, making sure that everything gets done on a tight timeline.

Joslyn: All of these things factored into: okay, we have eight people and we're all engineers, we're all developers. Some of us, like me and Marvin, come from a little bit more of an operational background, so we definitely have more experience doing some lower-level operational things. But with a team size of eight, we can't afford doing something more on-premise, or DIY on AWS EC2 and managing our own databases. There's a lot of management and administrative overhead there for a team of eight that needs to develop this product and serve it across the world too. That was a big requirement: Salesforce customers exist not just in the US but in Europe, Tokyo, Sydney, and all over the place.

Joslyn: And Heroku kind of provided all that management. The operational and administrative overhead, database maintenance, all that kind of stuff is handled for us. Then there's the product itself for engineers. Rob and Ian could probably talk more on this, but the tooling for Heroku is just so amazing; it's been developed over the course of many, many years, and the engineers love it. So the operationally minded engineers like myself and Marvin on this podcast, we love it from the angle of: all of these things are handled for us, we're standing on the shoulders of giants. Heroku manages millions of Postgres databases and has expertise there. We could learn Postgres and sort of manage it, but in a team of eight, that's one out of your eight, and then you need someone on backup for it.

Joslyn: So leveraging all that is one side of the coin; the other is just that Heroku's development experience is amazing. Salesforce is very Java-heavy, and we were asked, "Can we kind of stay within that ecosystem of developing Java applications?" And Heroku said, "Yeah, sure." They have a full Java buildpack and a set of documentation for Java engineers to get started on Heroku. You just get up and go, you know. So that's it at a high level: the administrative overhead taken from us, and then also all of the tooling overhead that we don't have to write for engineers. It just empowered us to get stuff done. That's why we chose it.

Rob: I think you said it right: we're a team of eight, and I feel like it's important also to mention that we're running at least 10 applications. I'd have to count, but in terms of the actual engineers versus ops, there are four of us or so, and we're running 10-plus applications across multiple versions, every region, high availability, with zero-downtime deploys. It's actually pretty amazing we were able to scale up this fast, and it's all pretty much due to Heroku and how easy it is to work with. And like Joce was saying, when we're using Kafka or Postgres or Redis, we can trust that it's up, that someone else is making sure it's working. It takes a lot off our backs, I think.

Jason: I think the most surprising detail of that description is that you call it a product, which is certainly a step up from a feature. But it seems a little surprising to me that the ownership is contained in this group, such that you have your own dedicated on-call rotation, and that you're basically an organization all to yourselves for this particular product, which you architected, prototyped, set up, scaled up, and are running entirely amongst yourselves, obviously with additional insight from the rest of the organization as you need it. But operationally, as you said, I am talking to half of the staff of this project so far, and it's very dedicated in scope, but with a huge amount of data backing it that it ultimately interfaces with.

Ian: Yeah. And just to kind of draw a box around what specifically our team is responsible for: this product is pretty large. There's an entire interface that our Salesforce customers use to build and configure their bots. I mean, they literally tell the bot what they want it to do, what Apex calls to make, what part of the inventory system to access, whatever.

Ian: The piece that we focus on is: when the customer describes how they want their bot to work, they generate a file, and our service essentially ingests that file and causes it to run. So we call ourselves the engine, the runtime, you know, the bot runtime. It's basically like a virtual machine: we take bot code and we execute it. And that's the piece of the product that we've built. So it's not eight engineers on the entire chatbot thing; it's eight engineers building and managing essentially the brain, the thing that causes the bot to be out in the world doing stuff.

Jason: Yeah. I like those terms. The engine, the runtime, the... you are the humans behind the automation that the bot provides.

Marvin: Yeah, yeah. I think to me it really is the conversational engine, which drives kind of all of the conversation. But as Ian was saying, there are a lot of other pieces which exist within the Salesforce core infrastructure, which is basically where the UI lives; that's what our customers interact with on a day-to-day basis. There's a lot of work that's done in there as well, and that has many engineers, more than even the size of this team, working on those pieces. And we just drive basically all of the conversation.

Jason: So you've touched on this already, talking about multi-region localities and the amount of resources, but what scale are we actually up to? You mentioned a myriad of applications and all the add-ons that Heroku provides. I'd love to have more specific information on what the actual resource profile that powers all this looks like.

Marvin: So today we are in all the regions which Heroku has capabilities in. That would be six regions around the world, generally two in each geography. In other words, North America has two regions they're matched up with, where they're serving, and Europe has its own, as does APAC. So in order to serve our customers worldwide, we serve our core infrastructure, where our UI is, worldwide, and we put our serving for the conversational piece near where our calling infrastructure is. We integrate very closely with the rest of the Salesforce infrastructure, and hence we co-locate with them in the same region or geography. With that, we use a lot of the add-ons which Heroku provides. There are several which are for our data services, so things like Heroku Postgres, Heroku Redis, Heroku Kafka; all of these are managed services.

Marvin: So we pretty much almost never have to lift a finger. If we were to do nothing, it would just keep running, and that's really the beauty of it, right? That's really what made it worthwhile for us to consume these services, because with a team of eight, that's about all the time that we have, frankly. For a lot of the maintenance as well, Heroku gives us notice to say, "Hey, your Kafka's going to be upgraded. If you want to test, go ahead and test. Here's what you can do: just spin up another cluster, make sure that everything works properly, and oh, everything looks good. You can go ahead and run this command, we'll upgrade it for you, and everything's good." Right? So there are a lot of those things where, if you did nothing, everything would generally just work fine.

Marvin: But you have a lot of options, and you get a lot of notifications that say, "Hey, this is about to happen. You might want to be ready for this just in case." Even the non-Heroku add-ons, right? There are things that we use for logging and for metrics, which are huge for us. Being able to just say, "Hey, this is Logplex, and I can take that data and send it to pretty much whichever logging add-on I want to put on it." That gives me flexibility: where I've gone through and investigated these add-ons, and they meet my security requirements and everything works from an integration standpoint, and I'm cleared to use them, then that allows me to use that particular add-on versus another one which I may not be so confident about. But having that choice is really invaluable.

Jason: Right, yeah. There's the entirety of the Internet ecosystem of services you're able to use, whether that's a homegrown or in-organization solution, whatever fulfills your needs. But there's also the ecosystem of add-ons with managed, centralized billing, one click away and generally as easy to scale as dynos and everything else that we at Heroku maintain ourselves. That's at your fingertips so long as it meets your requirements.

Joslyn: Yeah, and I want to point out too, the amazing thing about that ecosystem is that our team declares our entire system in code, and that includes all of our Heroku resources. We use Terraform incredibly heavily to define the entire stack: all of our Heroku apps, our Heroku Private Spaces, and, included with that, all of our Heroku add-ons. So our data services, any externally provided add-ons, we can declare those in code and know that they're spun up and integrated when we stamp out our entire stack. If we stamp out a new region with Terraform, the actual external service providers will be spun up as well.

Joslyn: Accounts will be created for those at the same time. And so that integration is helpful, rather than needing to go separately sign up for a service, get that account ID, come over, and get it configured for your Heroku apps or whatever it is. You get that same integration in play there. So with a few lines of code we were able to say, "Okay, we want to use New Relic. Let's get an account up for that, integrate it with all the apps, and share the license key that we get from the Heroku add-on with all the apps." We run Terraform and boom, we're done. You know, everything's nicely integrated.

Marvin: Actually, I wanted to give a shout out to David Gee; he's on our team as well. He's out of APAC, so it's like 4:00 AM there and he couldn't make it. But he contributed hugely to the Terraform provider for Heroku, which basically filled in a lot of the gaps for us to be able to use Terraform to drive everything that we do in Heroku. That has given us the ability to pretty much fully automate, like Joce was saying.

Jason: Joce also mentioned earlier the magic of attachments, and even for me personally, that was one of my biggest fascinations as I was learning Heroku. I think this was definitely after my tenure, when I was playing around with resources as well. That bit of magic was particularly special to me because I reflexively had my, you know, small handful of hobby apps, and I would throw a logging add-on on there, I would throw a profiling add-on on there, I would throw a database on there. And then when I came across the attachment article, I pivoted instead to one centralized, basically skeleton app, something that wasn't actually running code but basically owned billing and the core add-ons, and then attached them to every single other one of my apps. And because all of these add-ons are multi-app aware, they come back into the same interface, into the same profile information, preference information, all that kind of stuff.

Jason: And it just simplified everything in such a quick and dramatic fashion for me. It made me so happy. It was so fun to take, exactly as you said, what would otherwise be done manually across that breadth of apps, and just isolate it, just centralize it down to one thing, for everybody's sake, since we all have access to these apps.
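The attachment pattern Jason describes (one skeleton app owns the add-on, every other app attaches to it) is normally a couple of CLI commands, but a minimal sketch against the Heroku Platform API shows what is happening underneath. The app and add-on names here are hypothetical.

```python
# Minimal sketch: attach one centrally owned add-on to several consumer apps.
# Assumes a Platform API token in HEROKU_API_KEY; all names are hypothetical.
import os
import requests

API = "https://api.heroku.com"
HEADERS = {
    "Accept": "application/vnd.heroku+json; version=3",
    "Authorization": f"Bearer {os.environ['HEROKU_API_KEY']}",
}

ADDON = "postgresql-curved-12345"                    # add-on owned by the skeleton app
CONSUMER_APPS = ["bot-engine-eu", "bot-router-eu"]   # apps that should share it

for app in CONSUMER_APPS:
    resp = requests.post(
        f"{API}/addon-attachments",
        headers=HEADERS,
        json={"addon": ADDON, "app": app},
    )
    resp.raise_for_status()
    print(f"attached {ADDON} to {app} as {resp.json()['name']}")
```

The CLI equivalent is `heroku addons:attach`; billing and ownership stay with the app that provisioned the add-on, while each attached app simply consumes it.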

Rob: I mean, just think about running ZooKeeper and Kafka without the add-ons, you know? How much trouble that would be.

Jason: Yeah. And that's just one service in the entire cluster that all of this makes up.

Joslyn: That's a good point, yeah. We use that same exact strategy of spinning up separate apps that own different things, whether it's a data provider, so Postgres or Kafka, or other providers, with one central owner. Yeah, it was someone at Heroku who showed us that neat little trick, and that was huge for overall management. We took that a step further for security reasons. We leverage Heroku config vars very heavily because they're incredibly secure, easy, simple key-value configuration. So for a lot of our secrets, we've made some really clever use of that, leveraging it to great effect to distribute and manage secrets across the stack. Some of those tricks, once you figure them out in the ecosystem, really let you scale beyond a single app. And you might graduate from a single Heroku app to a staging and a production app.

Joslyn: But then you've got to go across six regions, and then you have multiple services, since we adopted this microservice approach. Rob, how many were you saying we've got now, like 10? Anyway.

Rob: Actually it's over 10. But off the top of my head, I can think of 10.

Joslyn: So 10 different apps that all interact with each other within the Private Space. And we use Heroku's internal routing as well; it's a wonderful Private Spaces feature for routing between the services. So all of that kind of stuff has helped us scale within a region and across regions, and between all the developers owning different services, and it makes it simple to do so. That's the beauty of it, how simple it is, without needing all the overhead of managing all those things.
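Joslyn's config var trick for secrets can be sketched in the same spirit: keep the shared secrets on one central app and copy just those keys onto each service app. This is only an illustration of the pattern under assumed names, using the standard config-vars endpoints.

```python
# Minimal sketch: distribute shared secrets from a central app's config vars.
# All app names and keys are hypothetical; HEROKU_API_KEY holds an API token.
import os
import requests

API = "https://api.heroku.com"
HEADERS = {
    "Accept": "application/vnd.heroku+json; version=3",
    "Authorization": f"Bearer {os.environ['HEROKU_API_KEY']}",
}

SECRETS_APP = "bot-secrets-eu"                      # central owner of the secrets
SERVICE_APPS = ["bot-engine-eu", "bot-router-eu"]   # services that consume them
SHARED_KEYS = {"BOT_SIGNING_SECRET", "NEW_RELIC_LICENSE_KEY"}

# Read the central app's config vars and keep only the shared keys.
resp = requests.get(f"{API}/apps/{SECRETS_APP}/config-vars", headers=HEADERS)
resp.raise_for_status()
shared = {k: v for k, v in resp.json().items() if k in SHARED_KEYS}

# Patch those keys onto each service app.
for app in SERVICE_APPS:
    update = requests.patch(
        f"{API}/apps/{app}/config-vars", headers=HEADERS, json=shared
    )
    update.raise_for_status()
    print(f"updated {len(shared)} config vars on {app}")
```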

Jason: We've talked a lot about the amount of time and operational work it's saved you so far, but no road is perfectly smooth. What did you actually bump into? What were the potholes and speed bumps and things that you hit along the way?

Marvin: I think for us, prior to using Heroku for our development, there were a lot of existing processes within the company around software development, right? And anything new takes ramp-up time. It's mainly getting the people that we work with familiar with the whole ecosystem that we were going to be running in. There are a lot of things that you end up having to integrate. So there are some technical challenges, but a lot of them are also process challenges: figuring out, okay, what is the security stance of all of this? How do we integrate safely, and all of that. It's new, so people are not familiar with it, so it took time. But now that we've invested that time into it, for other projects which come along afterward, it's, "Oh hey, we're using the same type of pattern, this is the same type of invocation." Things like that, where they just adopt what we have and it makes it a lot easier for them.

Marvin: But I think that's the main caution: anytime you're doing something new, getting people to buy in can take a bit of time. We've been there, done that, and I think we're better for it at this point. As far as other technical challenges, even some of the things that you've mentioned: we start developing and structure our apps a certain way, and then you find out later, well, maybe it's a little bit better to, like you said, have attachment apps. Then trying to rework some of those while you're in flight is obviously a little more challenging than if you were just to bring it up new. But, you know, for the most part we really interacted with the Heroku APIs just like everybody else in the world does. So there really was not a lot of technical challenge, I think, on the Heroku end, to us being able to get our service online. It really was a very pleasant experience.

Ian: Yeah. And I do remember too, Marvin, one of the biggest challenges we had early on was how to build our applications, so for Heroku, build our slugs, and actually get those running on Heroku's runtime, while building those slugs within our data center, which has the network and firewall access to all of our source code. Our source code is locked down to internal networks, as are our libraries. So, you know, if you're a Ruby app and you need gems, or if you're a Java app and you need JARs, a lot of that internally sourced stuff is locked down. It's only available within our private network.

Ian: And so we couldn't leverage Heroku's build system, right? That runs external to all of that. But because of Heroku's open design, and because buildpacks are open source, and because Heroku's Platform API lets you basically create slugs and release them through an API, we were able to replicate, well, not replicate, because Heroku's is much better, but essentially build Heroku's build system within our own data center in Jenkins, to build the slugs and push them out to Heroku so that we could then release them on Heroku.

Ian: That wouldn't have been possible otherwise. That was a challenge; that was probably one of our hardest things to overcome early on. But guess what, Heroku had the APIs we needed. We didn't have to get any new API; no new APIs were developed for that. It was available, and we just figured out, "Oh wow, I can actually build a slug locally, tarball it up, push it up to Heroku, and release it." And that was huge for us.
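Ian's "build a slug locally, tarball it up, push it, release it" flow maps onto the documented slug and release endpoints of the Platform API. Here's a minimal sketch, assuming the slug tarball was already produced by an internal build job; the app name, process command, and file path are hypothetical.

```python
# Minimal sketch: register a slug, upload the pre-built tarball, and release it.
# Assumes slug.tgz was built elsewhere (e.g. an internal Jenkins job) and that
# HEROKU_API_KEY holds a Platform API token. The app name is hypothetical.
import os
import requests

API = "https://api.heroku.com"
HEADERS = {
    "Accept": "application/vnd.heroku+json; version=3",
    "Authorization": f"Bearer {os.environ['HEROKU_API_KEY']}",
}
APP = "bot-engine-eu"

# 1. Create a slug record and get a pre-signed URL to upload the tarball to.
slug = requests.post(
    f"{API}/apps/{APP}/slugs",
    headers=HEADERS,
    json={"process_types": {"web": "java -jar target/bot-runtime.jar"}},
)
slug.raise_for_status()
slug = slug.json()

# 2. Upload the tarball to the blob URL returned by the slug create call.
with open("slug.tgz", "rb") as f:
    requests.put(slug["blob"]["url"], data=f.read()).raise_for_status()

# 3. Release the uploaded slug on the app.
release = requests.post(
    f"{API}/apps/{APP}/releases", headers=HEADERS, json={"slug": slug["id"]}
)
release.raise_for_status()
print(f"released {APP} as v{release.json()['version']}")
```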

Jason: Right. To tie that all together: the slug, in Heroku terminology, is the resulting packaged application. When you approach Heroku, you bring your own application; you have your code, which typically also has specifications of, as you said, the Ruby runtime and gem dependencies, or Java and JARs. The Heroku build process is: you push your code to Heroku and specify the buildpack. The buildpack, to continue with the Ruby example, puts everything that's needed to run Ruby and most of the key OS internals in place, puts your code in there, and then gathers any and all dependencies, gems and everything else, which are typically publicly available via RubyGems or some other gem server that you provide authentication for, or just a third-party source in general.

Jason: And this process all runs on Heroku, so it has the same security and access rules as Heroku. And as Joce was describing, because we publish buildpacks as repositories on GitHub, you can take all of those tools, all of the version-specific specifications and everything else, and run the build process on your own infrastructure, which gives it privileged access to internal libraries, dependencies, and things where you have more stringent security rules. Then you create the slug externally to Heroku and just give it back to Heroku to host.

Jason: So it's the same process run within a different security context, and it's openly available because, I mean, Ruby and Java and everything else certainly aren't ours; we create the tools to put it all together. And you've implemented a step above due to your specific requirements.

Ian: Yeah, that was beautifully said.

Jason: It's as if I've done this for many years. Rob, Ian, was there anything language specific that you guys ran across?

Rob: Some of the things, like Joce was saying, we had to keep locked down internally, which is a little tough. But it's almost as if you just treat it like Docker or something: as long as we can build it internally, it's perfect, because we don't have to worry about where the artifacts end up or how to expose them. So yeah, like Joce was saying, it managed to work out beautifully because we're able to just publish to those APIs. Takes a lot of stress off our plate, I'd say.

Jason: I think we may have jumped the gun on this, talking about attachments in particular, but what did you learn from this process? What did you take away that was new and specific to this project?

Rob: Nothing really. I mean I know we, we all, we've all used-

Jason: Just another app you know?

Rob: I mean, we've all used Heroku, I think, because most of us come from the Ruby world and have very fond memories of Heroku. And I was just saying to Joce too, I use "app" in a totally different context. I'm talking about the individual applications, but I just looked at our Heroku space and we had 110, maybe even more; I counted quickly. But that's a lot. I mean, for a team of eight, that's pretty impressive: 110-plus apps, managed in code, by eight people, and everybody kind of knows what's going on, with the audit trails and everything. So, you know, Heroku continues to blow me away. It's a great system.

Jason: I take it that means, as you were describing 10 apps earlier, you say "app" to mean frontend, backend, data, and probably even more particular subsets in there. Ten apps, six regions. You said over 110, so we're talking just shy of 20 apps per region that make up this kind of single app identifier that you were describing?

Marvin: Yeah, so we have multiple services; I guess that's probably the more proper term. We run multiple services which all interact with one another, and each one of those services is deployed to the six production regions. We also have a staging environment, and we have our internal environments as well. So if you sum all that stuff up, let's say that's about 110 today.

Ian: Yeah, and that ends up being the 110 Heroku apps. And some of those are at the meta level, like you were mentioning, Joce: like a New Relic app that manages the add-on for New Relic and consolidates billing there, for all the apps to kind of fan out and share. So some of those apps are very lightweight, not running any dynos. Some are just managing some secrets that are shared across the rest of them. So...

Rob: A hundred, say a hundred.

Ian: Let's say a hundred?

Rob: I thought I'd skipped those, but a hundred's close enough. But each one of our services is versioned, well, there are a couple that are versioned, so we're running three different release versions. It would be incredibly difficult to do if it wasn't so easy to see everything at a glance on Heroku.

Jason: Any other fun anecdotes from this process? From building it up and running it so far?

Ian: Well, you know, we're kind of outside the core code system at the company, because we're on Heroku and we built this thing up from scratch, and that allows us to have very fast turnaround times. So a bug was found, and within like an hour we had found the issue, fixed the issue, and released the fix. And when we dropped that note in Slack, someone who works on another part of the organization, where things are a little bit more legacy and crusty, you could like hear their head exploding through Slack. They were like, "You did that in 45 minutes? That's amazing." And, you know, usually there'd be a bit more process involved. But that's my favorite story about it.

Rob: Yeah, we found a bug and we were able to push the fix out as fast as the pipeline could take us, pretty much. We talked about our process, but I don't think we talked about when things really go bad. Between all the monitoring, when we deploy, we pretty much do single-ticket deploys out to production. But one of the beautiful things is how fast it is to roll back when something breaks. All you do is use Heroku to roll back a version, and you pretty much don't affect any customer, which is an amazing thing. You know, that would pretty much have to be hand-rolled in a lot of other setups, but Heroku just kind of has that.

Marvin: I want to also expand on one small thing that Rob said, which is actually really huge for us: whenever we do deploy, it's one feature at a time, because our pipeline can support that velocity, right? We're able to do that versus having to batch up multiple features and then not knowing what broke a given release. So yeah, that flexibility to roll forward and roll back quickly, that is really big. That's much more than I think a lot of other processes internal to a lot of companies can do. But once you see it, and you can see how easy it is to troubleshoot issues and get back from there, you just can't go back.
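For reference, the rollback Rob and Marvin describe is a single `heroku rollback` from the CLI; under the hood it's just a new release that points at an earlier slug. A rough sketch of that idea against the Platform API, with a hypothetical app name (apps with long release histories would also need to page through the release list):

```python
# Rough sketch: roll an app back by re-releasing the previous code release's slug.
# Conceptually what `heroku rollback` does; the app name is hypothetical.
import os
import requests

API = "https://api.heroku.com"
HEADERS = {
    "Accept": "application/vnd.heroku+json; version=3",
    "Authorization": f"Bearer {os.environ['HEROKU_API_KEY']}",
}
APP = "bot-engine-eu"

resp = requests.get(f"{API}/apps/{APP}/releases", headers=HEADERS)
resp.raise_for_status()

# Keep only releases that carry a slug (config-only releases don't), newest first,
# then take the second one: the code release before the current one.
code_releases = sorted(
    (r for r in resp.json() if r.get("slug")),
    key=lambda r: r["version"],
    reverse=True,
)
previous = code_releases[1]

rollback = requests.post(
    f"{API}/apps/{APP}/releases",
    headers=HEADERS,
    json={"slug": previous["slug"]["id"]},
)
rollback.raise_for_status()
print(f"{APP} rolled back to the slug from v{previous['version']}")
```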

Jason: Is there any aspect that allows you folks to take advantage of the bot's capabilities as users?

Joslyn: Yeah, I think the easiest one for us, and we still haven't done this, guys, is a chatbot to actually do a lot of our operational actions. So think about chat ops: I know Heroku probably does a lot of integrations with Slack and things like that, where you can control your deployments, maybe, or do a lot of rolling back or database migrations, or all of those actions that take some sort of manual prompting or manual interaction. We've definitely toyed around with the idea of building a chatbot for ourselves to do that very thing. But I think it's still in development, right, guys? That's still in development.

Rob: Yeah. We need a new Slack channel.

Joslyn: Yeah.

Jason: Take the functionality that Heroku designs and the interfaces, and plug them into this for your own operational use: as you said, deployments, or reacting to anything ongoing, or anything else along those lines.

Joslyn: Yeah, exactly. Because we've scaled so far out, we've had to abstract a lot of that with tooling. And some of that tooling does require you to take action on things, and those are something a chatbot could definitely take care of for you. Now, if it was machine learning based, there's always room for error, though. And so that's where, with chatbots, you have to train it. If you're doing anything where you want to detect intent, you have to train the bot to understand it if you say it one way but not the other way. Right? "I want to roll back to version two." You would want the bot to hopefully roll back to version two, right? And understand what that means, and not misinterpret it without prompting and asking, "Are you sure you meant that? Or did you mean to actually do this?" You want to make sure it's trained well if you do any of that. But even just basic commands, I could see a lot of value in that, to speed up and integrate that with Slack and handle our high-level operations a little bit better.

Rob: It's like the new wave Hubot.

Jason: Ian, Rob, Joslyn, Marvin, my fellow ohana. Thank you guys all very much for your time.

Ian: Thanks, Jason this was fun.

Rob: Thanks for having me.

Joslyn: Thanks a lot, Jason.

Marvin: Yeah that's awesome, thank you.

About code[ish]

A podcast brought to you by the developer advocate team at Heroku, exploring code, technology, tools, tips, and the life of the developer.

Hosted by

Jason Salaz

Cloud Platform Support Engineer, Heroku

Jason is a Heroku Support Engineer specializing in Platform topics, including Dynos, DNS, HTTP, SSL, and plenty more.

With guests

Ian Asaff

Engineer, Salesforce

Ian started as a trainer & telemarketer, but tired of dialing for dollars. He's been building software for 15 yrs for various industries & companies.

Rob Hurring

PMTS Salesforce, Chatbot Runtime Team, Salesforce

Rob has been working as a backend engineer at Salesforce for the past 3 years, focusing primarily on distributed systems.

Joslyn Esser

Software Architect, Salesforce

Being faced with high level problems to solve is what keeps him motivated. Empowering others and seeing them succeed is what keeps him smiling.

Marvin Lam

Principal Software Engineer, Service Cloud, Salesforce

Marvin is a seasoned infrastructure engineer, whose job is quickly disappearing (yay!), thanks to Heroku ... just in time to become a first-time dad!
