
The Journey of Cloud Run Microservices: From Public Calls to Multi‑Region Challenges

When we first build microservices on Cloud Run in Google Cloud, everything feels smooth. Each service gets a nice URL, requests flow, and things just work. But then reality kicks in: how do these services actually talk to each other at scale?

This is the story of that journey — discovering one problem at a time, fixing it, and then moving to the next.



Step 1: The Obvious Way — Public URLs

At the beginning, Service A calls Service B using its .run.app URL. Easy!
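As a sketch (the service URL below is a placeholder; Cloud Run assigns the real one at deploy time), a public call between services is just an authenticated HTTPS request:

```shell
# Service A calling Service B over its public .run.app URL.
# The URL is a placeholder, not a real endpoint.
curl -s \
  -H "Authorization: Bearer $(gcloud auth print-identity-token)" \
  https://service-b-abc123-uc.a.run.app/api/ping
```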

But wait…

👉 First Problem: Microservices inside the same cloud are communicating as if they’re strangers on the internet.


Step 2: Restricting Traffic — Private Access Needed

We think: “Fine, let’s make services private.” Cloud Run offers ingress settings for exactly this: `all` (the default), `internal`, and `internal-and-cloud-load-balancing`.

Setting ingress to `internal` means only traffic originating inside our VPC (or from certain Google Cloud services) can reach the service. This improves security. But… where’s the private IP?
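A minimal sketch, assuming a service named `service-b` in `europe-west1` (the real names will differ):

```shell
# Allow only internal traffic (VPC and internal load balancers) to reach the service.
gcloud run services update service-b \
  --region=europe-west1 \
  --ingress=internal

# Later, to also admit traffic arriving via an external load balancer:
gcloud run services update service-b \
  --region=europe-west1 \
  --ingress=internal-and-cloud-load-balancing
```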

👉 Second Problem: Without private IPs, we can’t have true internal service-to-service communication.


Step 3: Introducing the Load Balancer

The fix: put Cloud Run behind an internal Load Balancer (ILB).
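One way to wire this up (names and region are placeholders, and the URL map, target proxy and forwarding rule steps are elided) is an internal Application Load Balancer backed by a serverless network endpoint group (NEG):

```shell
# A serverless NEG points the load balancer at the Cloud Run service.
gcloud compute network-endpoint-groups create service-b-neg \
  --region=europe-west1 \
  --network-endpoint-type=serverless \
  --cloud-run-service=service-b

# A regional internal backend service fronts the NEG.
gcloud compute backend-services create service-b-backend \
  --region=europe-west1 \
  --load-balancing-scheme=INTERNAL_MANAGED \
  --protocol=HTTPS

gcloud compute backend-services add-backend service-b-backend \
  --region=europe-west1 \
  --network-endpoint-group=service-b-neg \
  --network-endpoint-group-region=europe-west1

# ...plus a URL map, target HTTPS proxy and internal forwarding rule.
```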

Now, finally, Service A can call Service B through a private entrypoint: traffic stays on internal IPs inside the VPC, latency drops, and the public egress path disappears. But then comes a new snag…

👉 Third Problem: Do we really want to hardcode ILB private IPs everywhere? And what about TLS certificates? They expect hostnames, not IPs.


Step 4: The DNS Layer

Enter Cloud DNS private zones.

Now services call each other by name, not by number. TLS certs validate properly. If the ILB IP changes, DNS handles it transparently.
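On the DNS side, a minimal sketch (zone name, domain, VPC name and IP are placeholders):

```shell
# A private zone visible only inside the VPC.
gcloud dns managed-zones create internal-zone \
  --dns-name="internal.example.com." \
  --visibility=private \
  --networks=my-vpc \
  --description="Private zone for service-to-service names"

# An A record pointing the service name at the ILB's private IP.
gcloud dns record-sets create service-b.internal.example.com. \
  --zone=internal-zone \
  --type=A \
  --ttl=300 \
  --rrdatas=10.0.0.10
```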

We solved discovery. But then, developers ask:

👉 Fourth Problem: What happens when we run apps locally on laptops? Private DNS won’t resolve outside the VPC.


Step 5: Local Development Realities

Locally, the .internal.example.com names fail. Why? Because Cloud DNS private zones live only inside the VPC.

Options emerge:

- Connect laptops to the VPC (Cloud VPN or an IAP tunnel), so private zones resolve just as they do in production.
- Split-horizon DNS: the same hostname resolves to the private ILB inside the VPC and to a public, IAM-protected endpoint outside it.
- Local overrides: an /etc/hosts entry or a local resolver such as dnsmasq that maps the internal names to a reachable address.

Now local dev matches production — one hostname everywhere, but with split resolution.
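The simplest local override is a hosts-file entry that points the internal name at something the laptop can actually reach, for example a local tunnel (the IP and name below are placeholders):

```
# /etc/hosts
# 127.0.0.1 assumes a local tunnel or proxy is forwarding to the real service.
127.0.0.1   service-b.internal.example.com
```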


Step 6: One Load Balancer or Two?

At this point, we have:

- an external entrypoint for public clients, and
- an internal entrypoint (ILB + private DNS) for service‑to‑service calls.

Do we need two load balancers?

The decision depends on your priorities: a single load balancer is simpler to run, while separate external and internal load balancers keep public and private traffic strictly apart.


Step 7: Multi‑Region Deployments

So far, everything is within one region. But what happens when we scale out to multiple regions?

We deploy Cloud Run in several regions, say us-central1 and europe-west1.

Each regional service gets its own URL. Now…
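Deploying the same service to two regions might look like this (project, image and regions are illustrative):

```shell
# Each deploy creates an independent regional service with its own URL.
gcloud run deploy service-b \
  --image=gcr.io/my-project/service-b:latest \
  --region=us-central1

gcloud run deploy service-b \
  --image=gcr.io/my-project/service-b:latest \
  --region=europe-west1
```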

👉 New Problem: How do clients (and other services) know which region to hit?


Step 8: The Latency Dilemma

If a European user calls the US endpoint, latency spikes. If Service A in Europe calls Service B in the US, cross‑continent delays pile up. If one region fails, there’s no automatic failover.

👉 We need smart routing.


Step 9: Enter the Global Load Balancer

Google Cloud’s global external HTTPS Load Balancer solves this:

- a single anycast IP serves clients worldwide,
- each request is routed to the nearest healthy region, and
- if a region goes down, traffic fails over automatically.

Now, clients always get low latency and high availability.
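The external side can reuse the serverless NEG pattern, this time with a global backend service holding one NEG per region (names are placeholders; TLS and URL-map steps are elided):

```shell
# One serverless NEG per regional Cloud Run deployment.
gcloud compute network-endpoint-groups create service-b-neg-us \
  --region=us-central1 --network-endpoint-type=serverless \
  --cloud-run-service=service-b
gcloud compute network-endpoint-groups create service-b-neg-eu \
  --region=europe-west1 --network-endpoint-type=serverless \
  --cloud-run-service=service-b

# A single global backend service; the load balancer steers each client
# to the nearest healthy backend.
gcloud compute backend-services create service-b-global \
  --global --load-balancing-scheme=EXTERNAL_MANAGED
gcloud compute backend-services add-backend service-b-global \
  --global --network-endpoint-group=service-b-neg-us \
  --network-endpoint-group-region=us-central1
gcloud compute backend-services add-backend service-b-global \
  --global --network-endpoint-group=service-b-neg-eu \
  --network-endpoint-group-region=europe-west1
```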

👉 External traffic is solved. But internal traffic?


Step 10: Internal Multi‑Region Calls

ILBs are regional: each region needs its own internal entrypoint, and Service A in one region must somehow decide which regional instance of Service B to call.

👉 Multi‑region introduces new challenges: state management, routing, and cost.


Step 11: DNS + Multi‑Region Awareness

We can extend DNS with region‑aware records, for example service-b.us.internal.example.com and service-b.eu.internal.example.com, each pointing at that region’s ILB. Cloud DNS routing policies (such as geolocation) can even pick the record automatically.

Services choose their regional peer. For more automation, add Traffic Director or a service mesh.
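The “choose your regional peer” logic can be as small as an environment-driven hostname switch. A sketch, assuming hostnames and regions that are placeholders, not values from this article:

```shell
# Pick the nearest Service B hostname from the region this instance runs in.
# REGION would normally come from the deployment environment.
REGION="${REGION:-europe-west1}"

case "$REGION" in
  us-*)     SERVICE_B_HOST="service-b.us.internal.example.com" ;;
  europe-*) SERVICE_B_HOST="service-b.eu.internal.example.com" ;;
  *)        SERVICE_B_HOST="service-b.internal.example.com" ;;
esac

echo "Calling https://${SERVICE_B_HOST}"
```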


Step 12: The Cost Factor

Cross‑region calls aren’t free: Google bills inter‑region network egress, and every cross‑continent hop adds tens of milliseconds of latency.
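A back-of-envelope sketch of why this matters; the per-GiB rate below is a made-up placeholder, not a real price, so check current GCP network pricing:

```shell
# Rough monthly cost of chatty cross-region traffic.
GIB_PER_MONTH=500        # estimated inter-region traffic
RATE_USD_PER_GIB=0.05    # placeholder rate, NOT a real GCP price

COST=$(awk -v g="$GIB_PER_MONTH" -v r="$RATE_USD_PER_GIB" \
  'BEGIN { printf "%.2f", g * r }')
echo "~\$${COST}/month"
```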

👉 Efficiency matters: don’t let microservices chatter across continents.


🏁 The Destination: A Balanced Architecture

Our journey uncovered challenges step by step:

  1. Public URLs → easy but inefficient.
  2. Ingress restrictions → secure but still public path.
  3. Load Balancers → private entrypoints.
  4. Cloud DNS → service discovery.
  5. Local development → VPN or DNS tricks.
  6. One vs two LBs → simplicity vs isolation.
  7. Multi‑region → latency, failover, consistency.
  8. Global LB → external smart routing.
  9. DNS + replication → internal multi‑region solutions.

Key Lesson: Cloud Run is regional and public by default. To build resilient microservices, you need Load Balancers, Cloud DNS, and a multi‑region strategy. Each fix solves one problem, but opens the door to the next challenge.


