For most people, it was a Thursday. For the internet, it was a crash landing.
Google Cloud Platform suffered a massive global outage — rendering countless services inoperable for hours. Spotify went silent. Discord froze. Google Meet couldn’t meet. And Cloudflare, a key player in DNS and web security infrastructure — hosted on GCP — also went dark. It was a cascading failure of internet services at a scale we rarely see.
If your EV charging infrastructure runs on a CPMS hosted with Google Cloud or dependent on Cloudflare… your users probably couldn’t charge their cars that day.
No app. No RFID. No power.
And no, this isn’t alarmism — it’s architecture.
On June 12, 2025, around 18:00 UTC, Google Cloud's API services began failing due to a major issue in their global backend infrastructure. The problem spanned authentication, DNS, and data handling: the core plumbing that modern cloud-native applications rely on.
Cloudflare’s Workers KV — also hosted on GCP — failed soon after, bringing down not just websites, but the connective tissue that keeps internet services reliable. According to Cloudflare’s own postmortem, the outage lasted over 2.5 hours in some regions. Google’s status page recorded degradation lasting even longer.
Now picture a driver pulling into a charging station during that window. They open the app… it won’t load. They tap their RFID card… nothing happens. They wait. They refresh. They curse. Then they leave — or worse, they’re stranded.
Why? Because the chargers rely on connectivity to a backend. Most Charge Point Management Systems (CPMS) today, especially those following the OCPP (Open Charge Point Protocol) model, are built on a client-server architecture.
That’s great for logging data, real-time updates, smart pricing, integrations, etc. But it also means: if your backend is unreachable, your chargers become useless boxes of plastic and copper.
And it doesn't matter why the backend is unreachable: a cloud provider outage, a DNS failure, a cut fiber line, a dead cellular modem at the site. The result is the same: charging fails.
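The failure mode above can be sketched in a few lines. This is an illustrative toy, not any vendor's actual firmware: a charge point in the pure client-server model defers every authorization to its central system, so when the backend times out, the only possible answer is "no". All class and function names here are hypothetical.

```python
# Minimal sketch of an OCPP-1.6-style authorization flow, showing why a
# purely online charge point cannot start a session during a backend outage.
from dataclasses import dataclass


@dataclass
class BackendResponse:
    status: str  # "Accepted" or "Invalid", as in an OCPP Authorize.conf


class CentralSystem:
    """Stand-in for a cloud-hosted CPMS backend."""

    def __init__(self, reachable: bool):
        self.reachable = reachable

    def authorize(self, id_tag: str) -> BackendResponse:
        if not self.reachable:
            raise TimeoutError("backend unreachable")
        return BackendResponse(status="Accepted")


class ChargePoint:
    """Purely online charge point: every tap requires a backend round trip."""

    def __init__(self, backend: CentralSystem):
        self.backend = backend

    def start_charge(self, id_tag: str) -> bool:
        try:
            resp = self.backend.authorize(id_tag)
        except TimeoutError:
            return False  # no backend answer -> no power, full stop
        return resp.status == "Accepted"


cloud_up = ChargePoint(CentralSystem(reachable=True))
cloud_down = ChargePoint(CentralSystem(reachable=False))
print(cloud_up.start_charge("RFID-1234"))    # True
print(cloud_down.start_charge("RFID-1234"))  # False: same card, same driver
```

The card never changed and the charger never lost power; the only variable was the cloud, yet the driver's outcome flipped from "charging" to "stranded".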
HeyCharge takes a different approach.
From day one, our system was built to not care whether the backend is available. Our SecureCharge technology ensures that chargers and mobile apps can work together even when there’s no internet at all — not just for minutes, but for hours, days, or more.
We do this not by avoiding OCPP, but by rethinking what resilience really means in EV infrastructure.
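To make the contrast concrete, here is a generic sketch of offline-tolerant authorization. This is not HeyCharge's actual SecureCharge implementation (which is proprietary); it simply mirrors the pattern OCPP itself allows via its Local Authorization List: keep a locally synced allow-list on the charger and fall back to it whenever the backend cannot be reached. All names are hypothetical.

```python
# Generic sketch of offline-tolerant authorization: the charger decides
# locally when the cloud is down, instead of refusing outright.
class OfflineTolerantChargePoint:
    def __init__(self, backend_authorize, local_list):
        self.backend_authorize = backend_authorize  # callable; may raise TimeoutError
        self.local_list = set(local_list)           # allow-list synced while online

    def start_charge(self, id_tag: str) -> bool:
        try:
            accepted = self.backend_authorize(id_tag)
            if accepted:
                # keep the local cache warm for the next outage
                self.local_list.add(id_tag)
            return accepted
        except TimeoutError:
            # backend down: fall back to the locally stored decision
            return id_tag in self.local_list


def dead_backend(id_tag: str) -> bool:
    raise TimeoutError("backend unreachable")


cp = OfflineTolerantChargePoint(dead_backend, local_list={"RFID-1234"})
print(cp.start_charge("RFID-1234"))  # True, despite the outage
print(cp.start_charge("RFID-9999"))  # False: unknown tags stay blocked
```

The design choice is the point: authorization data lives on the charger and is refreshed opportunistically, so a cloud outage degrades bookkeeping, not charging.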
So on June 12, when GCP and Cloudflare had a meltdown, HeyCharge-enabled chargers kept humming along. Our users didn’t notice. Our operators didn’t panic. Everything kept working — because we didn’t build our system on a single point of failure.
Google Cloud is excellent. So is Cloudflare. But even the best infrastructure has bad days. The real question is: What does your system do when they do?
If your CPMS depends on 24/7 backend availability just to let someone start a charge, you’re not running a charging system — you’re gambling with user trust.
And that trust is fragile.
Drivers remember when they’re left stranded. Site owners remember when tenants complain. Fleet managers remember when uptime SLAs are broken.
The industry loves to talk about smart charging, V2G, dynamic pricing, carbon-aware routing… but let’s get real:
If the car can’t start charging, none of that matters.
At HeyCharge, our philosophy is simple:
Let charging work no matter what.
Of course, we haven’t figured out how to charge during power outages. Yet. (If you’re building pocket-sized fusion reactors, call us.)
June 12 was a reminder. Not just that the cloud is fallible, but that we need offline-resilient infrastructure for mission-critical services like EV charging.
Because users don’t care about DNS routing tables, cloud quotas, or Kubernetes clusters. They just want to plug in and go.
And that should always be possible.
—
Want to learn more about how we do it? Let’s talk.
#EVcharging #HeyCharge #cloudoutage #resilience #mobility #GCP #Cloudflare #futureproof #OCPP #IoT #smartcity #CPMS #chargingworks