58. Capturing and Analyzing Energy Usage Metrics
Hosted by David Morgenthaler, with guests Emmanuel Levijarvi and Teddy Ward.
We're all familiar with using data and analytics to monitor the performance of our applications, but Kevala is applying those software fundamentals in new industries. Kevala tracks energy grids in cities and neighborhoods to map the ways that power is produced, distributed, and consumed. The technology has the potential to decarbonize the energy grid, or at least offer lower energy prices for distributors and consumers. Kevala's engineering lead, Emmanuel Levijarvi, and one of its software engineers, Teddy Ward, talk about how Kevala works and the tools they rely on to reliably predict energy utilization.
David Morgenthaler, an Account Manager at Heroku, interviews two members of Kevala: Emmanuel Levijarvi, its engineering lead, and Teddy Ward, a software engineer. Kevala is building a one-to-one map of a community's energy grid to identify how power is produced and model how it's consumed. They pull data from public sources and aggregate it to reliably predict when energy will be needed, and what an optimal price would be for that energy generation.
Balancing the exact amount of energy that people want alongside the amount that is being produced is an incredibly hard problem for the utility sector. If you don't produce enough, electronics won't work and vehicles can't be charged; if you produce too much, it's either wasted or has the potential to fry wiring and infrastructure. Kevala tries to monitor these figures by using machine learning on models they create, as well as tracking usage throughout the day.
The Kevala engineers also spend some time talking about the dependencies used to achieve their goals. These range from hardware, like the IoT devices which consumers install or the access points they tap into to collect accurate data, to the software services which make up their app, like Auth0 and Postgres. Because of the unreliable nature of energy consumption, Heroku's autoscaling platform allows Kevala to spin up resources in a cost-effective manner.
Links from this episode
- Kevala is a software company focused on analytics for the energy industry
David: Hello and welcome to Code[ish], Heroku's podcast. My name is David Morgenthaler, I am an Account Manager here at Heroku, and I'm joined today by my friends from Kevala.
Emmanuel: Hi, my name is Emmanuel Levijarvi and I lead our engineering team at Kevala.
Teddy: And my name's Teddy Ward, and I'm a Software Engineer.
David: I want to start off quickly by learning a little bit more about Kevala and what you do.
Emmanuel: Sure. We are building what we like to call a one-to-one map of the energy grid: we map the way that power is distributed, we map the way that power is produced, and we try to model how it's consumed. We're just trying to develop a picture of what's going on for regulators and utilities and-
Teddy: Also, solar installers. Basically, I guess we are trying to take all the data that could possibly exist about the electric grid and show it in meaningful ways to people who want to do something with it. Specifically, as our mission would say, people who want to put more clean energy onto the grid.
Emmanuel: This is entirely the brainchild of our founder, Aram Shumavon, and he comes from the California CPUC and learned a lot about this and he is very passionate about decarbonizing the energy grid. And I think I can speak for both of us as feeling the same.
Teddy: Yeah, definitely passionate about decarbonizing the grid which is why I came to Kevala in the first place. And I was pretty quickly won over by Aram's knowledge of-
Emmanuel: And vision.
Emmanuel: I had a personal interest, long before I joined Kevala, in various things related to energy and capturing data via IoT-type devices, and gathering data from a smart meter. And so when I learned about Kevala, I was like, "Oh, wow, I can actually do this at work and not just for fun."
David: Awesome. So, can you guys talk a little bit about some of the ways you go about data collection? How are you aggregating data today?
Teddy: Well, we pull data from a huge variety of sources, whether those are public maps that we can find ways to incorporate in our tool, or data that's collected manually by a team of trained data specialists. Basically, once we have all of that data, we can show it using tools like Leaflet and Mapbox, just to give people a picture of everything that's out there. And then for aggregation, we're either aggregating things spatially. So for example, taking all of the rooftop solar, and let's say that there are 1,000 solar installations that connect to this power line, which at 3:00 p.m. generates a certain peak of solar energy production. And then we also aggregate temporally, so you can check, say, 9:00 p.m., when there's nothing generated by the solar. But on Labor Day 2017, I think it was, when there was a heatwave, everybody's energy usage spiked because it was 105 degrees in San Francisco and everybody that had an AC turned it on. And everybody that didn't, I guess, was going out and buying fans.
David: I think I was lucky enough to be in Italy when that happened.
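The spatial and temporal aggregation Teddy describes can be sketched in a few lines of Python. Everything here is illustrative: the feeder IDs, readings, and schema are invented for the example, not Kevala's actual data model.

```python
from collections import defaultdict

# Hypothetical per-installation readings: (feeder_id, hour_of_day, kW produced).
readings = [
    ("feeder-12", 15, 4.2),
    ("feeder-12", 15, 3.8),
    ("feeder-12", 21, 0.0),
    ("feeder-7",  15, 5.1),
]

def aggregate(readings):
    """Spatial aggregation: total production per feeder.
    Temporal aggregation: total production per hour of day."""
    by_feeder = defaultdict(float)
    by_hour = defaultdict(float)
    for feeder, hour, kw in readings:
        by_feeder[feeder] += kw
        by_hour[hour] += kw
    return dict(by_feeder), dict(by_hour)

by_feeder, by_hour = aggregate(readings)
print(by_feeder)  # totals per power line
print(by_hour)    # totals per hour of day
```

The same shape of roll-up works for consumption data as well, which is what makes a single pipeline able to serve both views.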
Emmanuel: I would add that another one of the sources for data is proprietary data sets, where we will do some analysis on behalf of customers. We ingest that data on their behalf, do analysis that's specific to them, and keep it private to their proprietary versions of our analytics. The math tends to be the same, but the data input is different.
David: So, I'm also interested in understanding a little bit more about how you guys are capturing wholesale energy pricing and working with that.
Emmanuel: A lot of people may not be aware that the price of electric energy is changing constantly throughout the day, and there's actually more than one market in most places related to wholesale energy. So here in California, we have three markets. One is the day-ahead market: if you're a utility or a large energy consumer, you can buy energy a day in advance, at different hours in the day. And then there are also markets where you can buy energy maybe an hour ahead of time. So, the price of energy is changing not only over time, but also depending on where you are. We could have different energy prices here in San Francisco than you do further south on the peninsula, and they may have different energy prices than we do in the wholesale market. And so we capture these and feed them into some of our analysis. For example, when we're modeling batteries: if you're a utility and you want to charge a battery when energy prices are low, you might want to use this type of data to feed into that model.
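The day-ahead pricing idea lends itself to a small sketch: given hypothetical hourly prices, pick the cheapest hours in which to charge a battery. The prices below are invented for illustration, not real market data.

```python
def cheapest_hours(prices, hours_needed):
    """Return the hours of day (sorted) with the lowest day-ahead prices.

    prices: list of hourly prices ($/MWh), index = hour of day.
    """
    ranked = sorted(range(len(prices)), key=lambda h: prices[h])
    return sorted(ranked[:hours_needed])

# Invented 24-hour day-ahead price curve ($/MWh):
day_ahead = [32, 30, 28, 27, 29, 35, 45, 60, 55, 48, 40, 38,
             36, 34, 33, 37, 50, 70, 75, 65, 52, 44, 38, 34]

# Charge a battery that needs 4 hours of charging during the cheapest hours:
print(cheapest_hours(day_ahead, 4))  # [1, 2, 3, 4]
```

A real dispatch model would also account for locational prices and battery constraints, but the core idea is this kind of price-ranked scheduling.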
David: And when you say battery, you're talking about things like electric cars? Or potentially something like a Powerwall?
Emmanuel: So, a little bit larger scale. As a consumer, you're not buying power in the wholesale energy market; you're buying power, usually at more or less a fixed price, from your utility. There's some variation, but these would be the prices that the utility would pay for the power it provides to you as a consumer. But it can apply to electric vehicles. If somebody has a fleet of electric vehicles, say a city or a county, and they're buying energy in bulk, it becomes important what the prices are and that the vehicles are charged at the right time; they can save, or they can spend, a lot more money.
Teddy: We're also interested in electric vehicles and the Tesla Powerwall stuff that you mentioned. For example, the utility might want to build new infrastructure in some part of the state. Instead, one thing that we might suggest is looking into ways that adding more Powerwalls, or whatever the technology is for behind-the-meter storage, could allow you to avoid spending billions of dollars on new infrastructure.
David: So can you guys talk a little bit about... I mean, obviously, you're here as Heroku customers. I'm interested to hear a bit about how you've designed your application stack around some of the challenges that are unique to this industry. And whether you've brought anything unusual, perhaps from your various backgrounds, to this field that might otherwise be foreign to this industry.
Emmanuel: So, I've had previous experience at other companies capturing metrics, and one thing that's very common is to capture the metrics on a particular compute node: what's its CPU, how much memory does it have, on a time-series basis. We found that these tools are extremely applicable to capturing things like wholesale energy prices, or capturing, say, smart meter data, which we also capture, to model how much power particular residences are probably using at a given time. We've also found that some of these things can be bursty, and it's really helpful to be able to spin up a bunch of compute when we need more. We can easily either autoscale it or manually scale it, depending on how we want to handle that. So, say, as we start capturing more and more smart meter data, we can have more and more Heroku frontends. It's been really fun to see how these tools that I've used in different contexts are really applicable to energy.
Teddy: I came from working at Google, actually on payroll timesheets, which isn't that relevant except that it meant that I had experience with a lot of different Google Cloud tools, which we can now apply to an industry that is maybe slower to adopt new technology. The utility sector, I don't want to offend anybody, but it tends to be behind by a couple decades.
David: That makes sense. A lot of large industries are like that, slow to move. As far as a couple other sort of bullet points about how you're capturing data, smart meters, feeders, rooftop solar, how do those all factor in?
Emmanuel: So, all of these things are interconnected, and they can impact each other. Feeders, in case you're not familiar, are the way that energy gets from, say, a substation to your house, more or less. And a feeder is a way of aggregating all of the energy consumption that a whole set of houses use. So, feeders are very important to us because they provide a way of collecting a lot of energy consumption into an aggregate. And then, when we talk about things like rooftop solar and we talk about consumption, these things tend to kind of cancel each other out a little bit. So whereas your refrigerator is consuming, your rooftop PV is producing, and when we look at this as a whole, we kind of look at these as counterbalancing. But there are other things, like electric vehicles, that are also big consumers, so we consider those a little bit separately.
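The counterbalancing Emmanuel describes, consumption offset by rooftop PV, with EVs treated as their own load, amounts to a simple net-load calculation at the feeder level. All names and numbers below are illustrative assumptions.

```python
def feeder_net_load(consumption_kw, pv_kw, ev_kw):
    """Hourly net load on a feeder: household consumption and EV charging
    add load; rooftop PV production counterbalances it."""
    return [c + ev - pv for c, pv, ev in zip(consumption_kw, pv_kw, ev_kw)]

# Illustrative three-hour window (kW), afternoon into evening:
households = [120.0, 150.0, 200.0]
rooftop_pv = [80.0, 40.0, 0.0]    # PV falls off toward evening
ev_charging = [5.0, 30.0, 90.0]   # EVs plug in as people get home

print(feeder_net_load(households, rooftop_pv, ev_charging))  # [45.0, 140.0, 290.0]
```

Even this toy version shows why the evening hours matter: PV disappears just as EV charging ramps up, so net load climbs fastest then.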
Teddy: I mean, it's an incredibly hard problem that the utility sector has to deal with: you always have to be balancing the exact amount of energy that people want to use with the amount that you're producing. I mean, there are batteries, but they haven't taken off enough yet that you can store your electricity for any amount of time. The electricity that's running through our microphones right now was coal less than a second ago, or, probably, since we're in California, it was sunlight, but whatever. So, the power is generated at some giant power plant or a solar farm, and it travels along a transmission line to a substation, which then takes it down to the feeder, which then is disseminated to the people.
Teddy: But then also, more power might be added at the feeder level if you put a solar panel on there. You want to be connecting at the optimal place, to be able to connect into the grid and give what value you can. So, I guess there's a ton of spatial data: you need to know where everything is all the time and exactly how much is being produced at every moment, so that you don't do something like underproduce power and cause a blackout, or overproduce power and melt people's wires in their homes.
Emmanuel: This also relates back to the wholesale price of energy. You may have heard, way back in the day or even more recently, of the California Independent System Operator. They're the ones that manage how much you have to pay for energy at any given spot, and one of their big jobs is to balance consumption with production. Prices are one way to do that.
David: No kidding. So, that's actually a pretty good transition to mapping in general. It's not something I'm super familiar with, but particularly when you talk about, for lack of a better term, the supply chain effect, more or less. Can you talk about mapping, first of all, the electric grid: who has solar energy, where energy prices might be spiking, where people are using lots of energy, where power is being shut off? When you think about those challenges, do those factor into some of the ways that you're collecting data?
Emmanuel: Not so much the way... So, avoiding spikes that would cause instability drives a lot of the data that we're trying to get. We are not yet, anyway, in a position to try to make those choices in real time. We're trying to provide the analytics to say: this is how you avoid having these spikes that would cause an instability, by installing a battery or installing more infrastructure, or whatever the data suggests. We're trying to provide that sort of an analysis.
David: So, to approach this from a slightly different angle: I'm in sales and account management, and when I think about how you sell to your customers or attract new customers, a really critical thing is for them to understand what your value proposition might be. So would you guys mind talking a little bit about, as you've built your product, what the core value prop is? What you've done with the product, both on a software level and potentially at the Heroku level, that's unique and driving value for those customers?
Teddy: We're hoping to allow our customers to do things in an hour, or a couple of hours maybe, that would otherwise take them months. Let's say that you're trying to find a field to build a solar farm on. You'll have to look up all the property records for the county that you're interested in, you'll have to find some satellite imagery or send some poor sap out there to walk down the street and follow the power line back to the substation it goes to. You'll have to look around and see where the other solar farms around here are, because if too many try to latch on to the same power line, then you're going to overload the grid and melt wires.
Teddy: So, you're doing just an amazing amount of looking around and trying to find different data types. And everybody's doing the same thing; everybody is looking for the same data. So on our platform, we're hoping to provide a unified interface that is the same everywhere and the same for everyone. Everybody can see the same thing, in a consistent format, whether you're in Pennsylvania or Oregon. And we use tools like Heroku to make sure that this is consistently available, and all the databases are in the same format. You can download parcels whether it's eight o'clock in the morning or 2:00 a.m., wherever you are in the world, whether you're in France or here. And we can only do that because we're one system using Heroku, which fortunately doesn't have a lot of downtime that we have to worry about.
Emmanuel: So, this has been a trend for the last several years, but we've found, especially in the last few years, that we've been able to almost fully focus on just our analysis and just providing the product. We don't have to worry about whether disks are filling up or whether something is running out of compute; they're just scaling themselves. So, it's really been amazing how much we've been able to do with, frankly, so few people, because there are just whole classes of technical problems that we don't have to worry about. Not that long ago we did have to worry about them, and Heroku is a huge part of that.
David: I mean, strictly from an infrastructure perspective.
Emmanuel: Infrastructure. We don't have any infrastructure staff, we're completely... And it really just frees us to focus on the problem that we're trying to focus on.
Teddy: Whereas the individual counties, or whoever we pull data from, might still have that old infrastructure. A county might still be storing your property records on a CD-
David: Like a-
Teddy: A compact disk.
David: Oh my god.
Teddy: So, we're trying to pull all those things together.
David: My goodness. Can you guys talk a little bit about how you're analyzing this data? Because I mean, first of all, it's got to be an enormous quantity of data you're collecting.
Emmanuel: One of the challenges, and it seemed pretty crazy when I started, and sometimes still seems pretty crazy, is that we're trying to analyze things, as I said at the beginning, at the one-to-one level. So, our analysis doesn't start at, say, a county, or start at a substation and how much load it has. We're trying to say: what's the probable load on every single house on the street? And what's their likelihood of having PV? Or maybe we even know that they have PV, because we've done some analysis to figure that out.
David: And just for the record, PV is?
Emmanuel: Oh, photovoltaics, like rooftop solar.
David: Awesome, thanks.
Emmanuel: And this is really a daunting challenge. We talked about feeders; this is actually pretty hard. Our internal data staff has spent a lot of time figuring out where these are. And this kind of goes into us being, like I said, a one-to-one map. Often, as you might imagine, the probable load isn't exactly right at a given residence. But the great thing is that when you add enough of these up, in aggregate, it starts becoming pretty much spot on, which is the important part.
David: You mean like an average, effectively.
Emmanuel: Not so much an average. I mean, you could think of it a little bit as an average, but we don't have to get any single residence correct as far as its hourly energy consumption. We just model it, but when you add it all up, the feeder starts getting correct, and then the substation is even more correct.
Teddy: It is sort of an average. If you have an electric vehicle and you plug it in, we probably won't know what time you plug in your electric vehicle, which might quintuple the amount of energy that you're using at that moment. We might not know whether it's at 5:03 versus 6:10, depending on what time you got home from work. But if we know that 10% of the people in your neighborhood have electric vehicles, and most of them are going to be getting home at a certain time, and we've trained on real data from our utility customers, then we can provide a glimpse as to what needs to be available at 6:00 p.m. most days.
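The point Teddy and Emmanuel are making, that per-house modeling errors largely wash out in aggregate, is easy to demonstrate numerically. This toy simulation uses made-up loads and error bands, not Kevala's models: each house's modeled load can be off by up to 50%, yet the feeder-level total stays far more accurate.

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

# True hourly loads (kW) for 1,000 hypothetical houses on one feeder:
true_loads = [random.uniform(0.5, 3.0) for _ in range(1000)]

# Model each house with up to +/-50% multiplicative error:
modeled = [load * random.uniform(0.5, 1.5) for load in true_loads]

worst_house_error = max(abs(m - t) / t for m, t in zip(modeled, true_loads))
feeder_error = abs(sum(modeled) - sum(true_loads)) / sum(true_loads)

print(f"worst single-house error: {worst_house_error:.0%}")
print(f"feeder-level error: {feeder_error:.1%}")
```

Individual houses can be wildly wrong while the feeder total lands within a few percent, because the independent errors cancel when summed.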
David: Gotcha. And I just heard you use the word train. So, is there some amount of machine learning going on here?
Emmanuel: Right. So, like I mentioned at the beginning, we have these kind of siloed data sets, and they want us to answer a specific question: can I build a substation here? And we have the actual, non-model data, so we can answer that question specifically for them. But the math is the same, so for areas where it doesn't have to be quite as precise or quite as accurate, we can run modeled data through the same math and use modeled instead of actual.
David: Are these data sets strictly data that's being given to you by your customers, or what percentage of this is publicly available?
Teddy: Your individual usage data, to be clear, is not publicly available, and for good reason, because we don't want bad actors being able to attack everybody with a Nest thermostat or something like that. And we've done a lot of work looking into how to protect people's individual data. There might be publicly available data sets that are aggregated up to a very large scale; you might be able to look up that PG&E customers as a whole used this much energy last year. And you might be able to look up where the power lines are, in some cases, or something like that. We might even be able to look up, depending on how lenient your county is, where your house is and who owns it. But we aren't able to pull "David plugged his car in at 5:06," so we know that David gets home at 5:06, so if we wanted to rob David at 4:30, he wouldn't be home. No, we don't look at anything like that.
David: So, I think in a previous conversation we had, you mentioned something about load and production on a circuit and on a panel. Can you talk a little bit about how that factors in, and also about non-wires alternatives?
Emmanuel: We use models a lot, and the models are informed by real data, but then the analysis uses the model instead of the real data. So, we may look at, say, a PV data set, excuse me, a photovoltaic rooftop data set, that's been made available. And there are a number of models available for PV. So, we know where the PV is, and we know what a model is for PV in this location, and then we can add up all of the individual installations in that location. Then we get the probable production, and we can also feed in weather and that sort of thing to really get closer to what the PV production probably was, without actually measuring it.
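A toy version of the modeling approach Emmanuel describes: take known installed capacity per site, a generic hourly production profile for the location, and a weather-derived factor. The function name, profile, and numbers are illustrative assumptions, not an actual PV model.

```python
def modeled_pv_output(capacities_kw, hourly_profile, weather_factor):
    """Probable aggregate PV production (kW, hour by hour) for all
    installations at one location, without measuring any of them."""
    total_capacity = sum(capacities_kw)
    return [total_capacity * p * weather_factor for p in hourly_profile]

sites = [4.0, 6.5, 3.2]                   # kW of rooftop PV per house
midday_profile = [0.2, 0.6, 0.9, 0.6, 0.2]  # fraction of capacity, hour by hour
overcast = 0.5                            # weather knocks production down

print(modeled_pv_output(sites, midday_profile, overcast))
```

Real PV models fold in panel orientation, temperature, and local irradiance data, but the structure, capacity times a location profile times weather, is the same.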
Teddy: Talking about non-wires alternatives: that's basically trying to avoid building new wires, whether that's a new transmission line, feeder, or substation. It's especially important in California, perhaps, because we've had wires start fires, and so the greater the degree to which we can reduce the number of wires in California, the better. All of that stuff is built based on forecasted growth in an area. They think that Modesto is going to double in size or whatever, I'm making that up. But let's say that they think that; then they might need to build a ton of new electrical infrastructure in Modesto to service all these new people and all their new air conditioners.
David: So, what you're saying is participate in the census?
Teddy: Yeah. That's not what I was saying, but yes. Our models are just sort of an alternative to the utility model, which says, "This is going to grow by 10% next year." Instead, we're showing what we think it's going to be, and we're also saying, "Okay, but what if, instead of building a new transmission line here, you give people a subsidy or something to put solar on their homes?"
David: So effectively, you're providing your customers with clearly actionable insights about the best way to meet their customers' energy needs.
Teddy: Yeah, absolutely.
David: That's awesome. So, I'd love to hear a little bit more now about if you don't mind talking about some of the tools you're using to make mapping software. How are you guys currently using Heroku? How does that all fit into this part right now?
Emmanuel: So yeah, we have, I think, a pretty common architecture for mapping and for geospatial information. We start with geospatial databases; we happen to use Postgres with PostGIS for a lot of our geospatial data. We run our API on top of Heroku, and our applications are also on top of Heroku. And we've found that this works really well for doing the maps, and it makes the types of geospatial queries that we do all the time really pretty effortless, assuming we have enough resources. And then for our frontend stack, as I think Teddy mentioned earlier, we use Leaflet, and that's worked out really well for us.
Teddy: On Heroku, we're able to quickly spin up new application frontends that can all use the same API. So, for example, our data acquisition tool, which our specialists use to input new data, calls Kevala's API, which is running on Heroku. Then we have a separate application, also on Heroku, that shows all of the data to utilities and regulators. And then we have another application that lets you self-serve. Let's say that you just want to pick one house and get its energy profile: that's a public application called site assessor that you can use to hit Kevala's API. So, we can have all these different applications running on Heroku that use a small number of dynos and hit our API, which uses a much larger number.
David: I heard you talking about a bunch of frontend apps. The backend is also on Heroku. Are you also plugged into any other third party services as well? Can you talk a little bit on that?
Teddy: A ton of them. Maybe not a ton but-
David: I would assume some of that is based on that volume of data. I would-
Teddy: We don't have any compute resources, like physical resources, other than our laptops. And so we use a whole set of services. Auth0 handles our authorization. We have databases that are running on Google Cloud. We use AWS for a lot of stuff. We use a whole mix of services, and most of them do what we would otherwise have to hire whole teams of people to manage. So, we've been able to build up our actual application much more quickly by tying all these building blocks together.
David: I'm particularly interested because I think a big part of what we're striving to do here at Heroku is to be extremely good at a smaller slice of things than some of these gargantuan services like AWS, which has, I mean, I can't even keep track, 180 different product offerings or something like that.
David: So, I'm really interested in hearing a little bit more about, for the things that you're running on Heroku, why you chose Heroku for those things, and why you chose these various other services. I mean, you don't have to get all the way down into the weeds, but talk a little bit about that.
Emmanuel: So, productivity. I mean, you really described what we experience with some of these. We use AWS pretty extensively as well, for more specialized things, and we like AWS. But there's just a huge menu of stuff, and I kind of don't even know where to start; we could go to trainings and stuff. But to just get up and running, we really need something focused that handles our whole workflow. And Heroku does that beautifully, from integration with GitHub, which we also use extensively, to creating review apps, to moving our code from review apps into staging, where we can test it all out and make sure it works, and into production. This is a really seamless, hands-off process for us. And not only is it seamless and hands-off, but we understand it, which I think is also very, very important: we actually know what's going on in the process.
Teddy: I mean, the biggest win, I guess, was my first day at Kevala, when I first saw the Heroku review app. I was literally calling friends of mine who were at the gym that night, telling people, "Oh my god, you wouldn't believe this. Never again will I merge something and then get paged, of course, always at 1:00 a.m., telling me that the thing that I did just broke everything." And that was a huge relief from Heroku. So the workflow, like the ability to easily make staging applications and advance them to production, everything is handled perfectly by Heroku. But then we're able to plug into all these other tools as well, Papertrail and Rollbar. Papertrail being our logging software, Rollbar being-
Emmanuel: An exception-
Emmanuel: ... reporter. We like that marketplace; it really is great for us too. We're like, we need a tool that does this, and we're definitely not going to build it. Let's see what we can find.
Teddy: We're an 11-person company, newly. So, we really can't build anything from scratch that we don't absolutely have to.
David: I had heard somebody talk a little bit about time-series databases. I'm interested in understanding a little bit more about how you're building out on those. How does that relate to your use of Postgres at all? Or does it?
Emmanuel: I think for us they're complementary. There are some time-series database plugins for Postgres, but we use one called InfluxDB that we like quite a lot. It's very easy to use and purpose-built for what we need. We were talking about the wholesale energy prices, and we're talking about solar production, energy production, and gathering smart meter data, and all of this, we just put it all in there, and it just keeps working; it's great. And we can get it out easily: weather data, all kinds of things that we capture. It works really well. And of course, we could technically put this in Postgres, but we find it fits a lot more easily into a purpose-built tool. I mentioned this earlier, but I had first come across Influx capturing metrics for things like response times on API calls. It's all time-series data, so it's not like it cares. But it was just really neat to see this transition into a totally new industry for me.
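For readers unfamiliar with InfluxDB, points are written to it in a plain-text line protocol: a measurement name, optional tags, fields, and a timestamp. Here's a minimal sketch formatting a hypothetical smart-meter reading; the measurement, tag, and field names are made up for the example, not Kevala's schema.

```python
def meter_line(meter_id, watts, ts_ns):
    """Format one smart-meter reading as an InfluxDB line-protocol point:
    measurement,tag=value field=value timestamp"""
    return f"smart_meter,meter_id={meter_id} power_w={watts} {ts_ns}"

line = meter_line("abc123", 1520.5, 1465839830100400200)
print(line)  # smart_meter,meter_id=abc123 power_w=1520.5 1465839830100400200
```

The same shape holds whether the measurement is API response times or wholesale energy prices, which is why, as Emmanuel says, the database doesn't care what industry the data comes from.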
Teddy: Especially since Heroku does our metrics.
Emmanuel: Yeah, right. Exactly. A good point.
David: Correct me if I'm wrong here, some of what you're talking about sounds like it might have a little bit of overlap with so-called IoT-
Emmanuel: Sure. Absolutely.
David: ... in the sense of you're working with, I don't know, millions of sensors, billions of sensors.
Emmanuel: So, we are working with real-time data, but it's more aspirational for us in terms of capturing a ton in real-time. We're gradually building up our capability there more and more.
David: Gotcha. So you're not necessarily as concerned right now about pub/sub or messaging queues or things like that.
Emmanuel: We are concerned about them in that we're preparing for the future. We were talking about smart meter data; we have a small set that we're generating ourselves, from our friends' and families' smart meters. And we've set that all up in a pub/sub-type architecture, with the idea that we'd like to expand it out quite a bit in the future. But right now, we're using that partly out of interest, and partly because it helps us with our modeling.
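The pub/sub architecture Emmanuel mentions can be illustrated with a minimal in-process broker: meters publish readings to a topic, and any number of consumers (modeling, storage, alerting) subscribe independently. A real deployment would use a hosted broker such as Kafka or RabbitMQ; this sketch just shows the shape of the pattern.

```python
from collections import defaultdict

class Broker:
    """Toy pub/sub broker: topics map to lists of subscriber callbacks."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, message):
        # Deliver the message to every handler subscribed to this topic.
        for handler in self.subscribers[topic]:
            handler(message)

broker = Broker()
received = []
broker.subscribe("meters/readings", received.append)

# A smart meter publishes a reading; the subscriber sees it:
broker.publish("meters/readings", {"meter_id": "abc123", "power_w": 1520.5})
print(received)
```

The appeal of the pattern is that adding a new consumer, say a real-time model, is just another `subscribe` call; publishers never change.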
Teddy: We have lots of smart meter data, but as far as pulling it in real-time, that's sort of that next big step. Because not only do we want to pull in smart meter data, but we'd like to pull in any smart device data, anything that causes electricity to be used or produced.
David: So, as you guys look to augment the capabilities of your core product going forward. And I heard you talk a little bit about the Elements Marketplace, which can be a little overwhelming as far as the options in there. I mean, obviously, you're looking outside of Heroku as well. But I'm interested to understand, when you're looking to augment your core product, how do you go about deciding what sorts of data-oriented solutions, or whatever it might be, to adopt, to try to address the lowest-hanging fruit?
Emmanuel: It depends on what we're looking for. So, for example, we are very interested in real-time in the next year or so: moving from what I would call periodic data, where we're maybe getting stuff once a day or a couple of times a day, to getting it as a constant flow. And so we're like, "Okay, we want to get into real-time. What are my options? I don't want to manage a Kafka server, so is there Kafka on there? Or is there a RabbitMQ server?" Things that we don't want to manage but we need. And so usually we have an idea of the type of tool that we want, but we'll do a survey of what's available. Then, how much is it going to cost us? How does it integrate with the tooling that we're already using? Can I figure out how to use this relatively quickly? Or is it going to be, "I've got to take a month to figure out how to use it, or go to some training"?
Emmanuel: And so usually, I would say, getting it up and running quickly and without much fuss is a really big win for us. And that usually is the one that tends to win, unless there's some other problem with it. Another example is where we're like, "Well, we have something slow. We should look at caching that," and then we're like, "Well, of course, we don't want to manage a Redis server. So, let's find out what's available," that sort of thing. So, I don't think there's any one thing. But being able to get something up and running that we can just start using right away is really important to us.
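The caching pattern Emmanuel alludes to, wrapping a slow lookup so repeated requests come back instantly, can be sketched in a few lines. The function name, its sleep-based "slow query", and the returned fields are hypothetical stand-ins; in production the cache would typically live in a managed store rather than in-process.

```python
import time
from functools import lru_cache

# Cache results of a slow computation so repeated calls return immediately.
@lru_cache(maxsize=1024)
def grid_load_profile(region):
    # Hypothetical slow query standing in for a database or API call.
    time.sleep(0.1)
    return {"region": region, "peak_kw": 1200}

start = time.time()
grid_load_profile("oakland")   # slow: first call does the real work
first = time.time() - start

start = time.time()
grid_load_profile("oakland")   # fast: second call is served from the cache
second = time.time() - start
print(second < first)  # True
```

The trade-off is the usual one: cached results can go stale, so an expiry policy (or an eviction limit like `maxsize` here) has to match how often the underlying data changes.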
Teddy: We have so many ideas coming into our team all the time for potential projects, because we have so many ongoing projects right now. Basically, anytime I see a post on Hacker News, or something that describes a best practice, we can find a way to apply it to our business. Or if I stumble across a new feature in Google Cloud or a new feature on Heroku, we try to find some way to apply it to a product we're working on. For example, there was a beta feature on Google Cloud recently for computer vision AutoML, as they call it, automated machine learning. It pretty much automatically did a bunch of computer vision for us. So, pretty much as soon as I found out that existed, I was like, "Oh, yeah, we're using this."
David: Before we start to wrap up here, I'd be interested to hear a little bit about advice that you would give to anyone else who is working with similar types or similarly enormous sets of data or mapping. What sort of advice would you give to other companies looking at addressing those kinds of challenges?
Teddy: I don't know if this is actionable advice. But one of the most advantageous things about what we do is that we have a few customers that use our product a lot, or extensively, rather than many customers who each use it just a little bit. So, we have a lot less load on our servers, and a lot less customer relationship management load, because we might have a few dozen customers instead of a few dozen million.
Emmanuel: That's absolutely helped us in terms of our customer support burden. As developers, we are able to handle pretty much all of the customer support and triage of bugs ourselves, just because we don't have three million customers to deal with. So, that's kind of a fortunate aspect. So, I would recommend doing that in the future. No. In terms of advice, for a small company, I think the model of putting prebuilt pieces together, versus trying to build them yourselves, has worked out well for us. I can't really imagine any other way, whether you have a lot of customers or a few, of building a full application with only a few people-
David: Just strictly in terms of speed to market.
Emmanuel: Speed to market, right, or speed to add new features, or speed to fix things that are broken. A lot of what we do, admittedly, is a reaction to some of my prior work experience. I have experienced being paged because disks filled up, and I've experienced trying to fix something I didn't really know anything about. I think those sorts of things make sense in a bigger organization, but when you're small, you just don't have time to figure that out. And it's just a big burden lifted off of us to have all these vendors basically take care of it for us. That, of course, presents its own challenges when vendors have problems, but we've actually found them to be very reliable. We haven't really had much of a problem with any of the services we rely on going down.
David: You're saying with even one of them going down.
Emmanuel: So, you might say, "Well, given how many different services we use, even if they're all very reliable in their own right, collectively, something might always be broken." And I think that's a legitimate concern. We just haven't really experienced it that much.
David: Gotcha. So, it almost sounds like a microservices of vendors.
Emmanuel: Yeah, I think that's actually very apt. There's this whole ecosystem of vendor microservices competing to provide very good services to us. And, frankly, it's wonderful. Again, I think that's a very good way to put it.
David: All right. Well, again, Emmanuel and Teddy, I really want to thank you guys for coming in and joining us today. And, again, this has been Code[ish] from Heroku. Look forward to seeing you for the next one. Thanks so much.
Emmanuel: Thank you.
Teddy: Thank you.
A podcast brought to you by the developer advocate team at Heroku, exploring code, technology, tools, tips, and the life of the developer.
Account Manager, Emerging Business, Heroku
As an account manager, David advises startups, public companies, and non-profits to ensure their apps' long-term viability on the Heroku platform.
Director of Engineering, Kevala, Inc.
Emmanuel leads Kevala's software engineering team. He enjoys mountain biking, learning about clean energy, and time series data.
Senior Software Engineer, Kevala
Teddy is a full-stack engineer from Chicago who is especially interested in user interfaces, clean energy, and gardening.