Why WattShift · Utility Rate Database

I kept needing utility rate data I could actually trust in product work.

Before WattShift, I ran into the same problem again and again: teams trying to ship rate-aware software either leaned on brittle public data, bought tools shaped for a different motion, or slowly became their own tariff-data vendor.

Why I built the database too

The database started as a product need, not a data ambition.

Long before I thought about offering a standalone utility rate database, I was trying to build products that needed to behave intelligently around electricity economics. First that was at Cyberpowered Home, where I was working on the kind of behind-the-meter control problem that companies like Span and Lumen now make more legible to the market. Later it was at Alarm.com, where I was responsible for thermostats and energy. In both settings, I did not need a theory of why utility rates mattered. I needed software that could actually use them.

At Alarm.com especially, we were not coming at this from a naive starting point. We had a strong team, an energy-forward posture, EnergyHub in the family, and a relationship with Genability for what was, at the time, best-in-class utility rate data. Even with that setup, it was still hard to turn rate-aware ambition into a product system we could scale with confidence. A filing revision could quietly change the answer underneath you. A reasonable normalization shortcut could flatten logic that later turned out to matter. A team could get far enough to demo the concept, then realize the operational burden of keeping it right was much bigger than the first integration made it look.

That was the moment the database stopped feeling secondary to me. The problem was not that rate data did not exist. The problem was that product teams rarely got it in a form they could trust, integrate, debug, and keep current without taking on a whole new category of operational work. The more serious you got about shipping rate-aware features, the more the data layer stopped looking like a background input and started looking like the thing that determined whether the feature would survive contact with the real world.

What teams actually end up choosing between
Free sources that are hard to trust
Enterprise tools that are heavy to buy
Internal builds that are heavy to own

None of those are irrational choices. They just all break in different ways when a software team is trying to ship and scale a real product.

Those breakdowns showed up downstream everywhere. Launches got slower because each new territory felt like another research project. Engineers wound up debugging data assumptions instead of product behavior. PMs and operators had to decide whether the reachable impact in a given utility justified another pocket of special-case work. Teams that wanted to make energy a real product capability kept getting pushed toward a bad trade: either simplify the economics until the product got weaker, or absorb a maintenance burden that did not really belong inside the product team.

That is why the database became a product in its own right for me. I was not trying to create a grand standalone story about tariff data. I was trying to make rate-aware software more buildable. What I wanted was revision-aware tariff history, schemas software teams could actually use, and a rollout motion that made sense for people shipping product instead of running a bespoke utility-data operation on the side.

If you want the concrete API surface, coverage posture, and product mechanics, that is what the Utility Rate Database page is for. This page is just the reason I came to believe that a lot of software teams needed that product to exist in the first place.

See the product I wish we had earlier.

If this sounds like a problem you have felt inside real product work, the next step is to look at the product itself and see how WattShift packages that infrastructure burden.