Synchronizing Goroutines with Channels: The Bell Pattern in Go
Published on Saturday, Apr 25, 2026
While learning Go, I stumbled upon a fantastic resource: learnxinyminutes.com/go — concise, practical, and gets you writing Go almost immediately. One thing I noticed is that it leans heavily into channels early, before introducing heavier synchronization primitives like sync.WaitGroup or sync.Mutex.
At first, that felt limiting.
But then I ran into a problem that changed how I think about concurrency — and made me realize:
Channels alone are powerful enough to model event-driven systems.
The Problem: A Race Condition at Startup
I wrote a simple program that starts an HTTP server and immediately makes a request to it:
go http.ListenAndServe(":8080", handler)
requestServer()
And got:
connection refused
Why This Happens
This is a classic race condition. When you prefix a function call with go, Go schedules that function to run on a separate goroutine — but the scheduler decides when it actually starts executing. The main goroutine continues immediately without waiting.
So the timeline looks like this:
```
Main goroutine:   go http.ListenAndServe(...) ──→ requestServer()  ← FAILS
                        │
Server goroutine:       └──→ [maybe starts here, maybe later]
```
http.ListenAndServe internally calls net.Listen to bind the port, then begins accepting connections. Both steps take real time. If requestServer() runs before the bind completes, there’s no socket listening on :8080 yet — hence connection refused.
This isn’t a bug in Go’s scheduler; it’s a fundamental property of concurrent execution. You cannot assume ordering across goroutines unless you explicitly establish it.
The Naive Fix — and Why It’s Wrong
time.Sleep(1 * time.Second)
requestServer()
This “works” in practice but has serious problems:
It’s non-deterministic. Under load, CI, or on slow hardware, the server may take longer than 1 second to bind. Your test passes locally and fails in production.
It’s wasteful. You’re sleeping for a full second when the server might be ready in 5 milliseconds.
It’s not expressing intent. The real goal isn’t “wait 1 second.” The real goal is “wait until the server is ready.” time.Sleep is a proxy for that condition, not the condition itself.
You’re guessing time instead of synchronizing on state.
The Shift: From Time-Based to Event-Based Thinking
What we actually want is:
“Run requestServer() after the server is ready to accept connections.”
This is not a timing problem. It’s a synchronization problem — and Go’s channels are purpose-built for exactly this.
Instead of asking “how long does it take?”, we ask: “what event signals readiness?” Then we wait for that event to occur.
The Bell Pattern
```go
func learnWebProgramming() {
    ready := make(chan struct{})
    go func() {
        ln, err := net.Listen("tcp", ":8080")
        if err != nil {
            log.Fatal(err)
        }
        close(ready) // 🔔 ring the bell: port is bound, server is ready
        err = http.Serve(ln, pair{}) // pair is an http.Handler defined elsewhere
        log.Println(err)
    }()
    <-ready // block here until the bell rings
    requestServer()
}
```
Let’s walk through exactly what happens at each step.
Step-by-Step Breakdown
1. Create the signal channel
ready := make(chan struct{})
struct{} is the zero-size type — it carries no data, allocates no memory. This channel exists purely for signaling, not for data transfer. Using struct{} instead of, say, chan bool makes the intent clear: there’s no meaningful value to send, just a notification.
2. Split port binding from serving
ln, err := net.Listen("tcp", ":8080")
This is the crucial insight. Instead of calling http.ListenAndServe (which binds and serves in one blocking call), we split it into two steps:
- net.Listen binds the port and returns a net.Listener. After this line, the OS has reserved :8080 and the server can accept connections — even before we call http.Serve.
- http.Serve(ln, handler) enters the accept loop and blocks forever.
This split gives us a precise moment to signal readiness: after binding, before serving.
3. Signal with close
close(ready)
close broadcasts to every goroutine blocked on <-ready that the channel is done. This is the event trigger — the bell ringing.
Why close instead of ready <- struct{}{}? Two reasons:
- close is a broadcast: every goroutine waiting on <-ready unblocks simultaneously. Sending a value wakes only one receiver.
- close never blocks: a send on an unbuffered channel blocks until someone receives, but close returns immediately regardless of how many receivers are waiting (including zero).
4. Block until ready
<-ready
A receive on a closed channel returns immediately with the zero value. A receive on an open, empty channel blocks. So this line says: “wait here until the channel is closed.”
The Go memory model guarantees that operations before close(ready) in the sending goroutine are visible to the goroutine that observes the close via <-ready. This is a happens-before relationship — you’re not just waiting for time to pass, you’re establishing a causal ordering between goroutines.
5. Execute safely
requestServer()
By the time we reach this line, the server has bound its port. The connection will succeed.
Channel Behavior Reference
| Channel State | Receive Operation | Send Operation |
|---|---|---|
| Open, no value buffered | Blocks until value is sent | Blocks until receiver is ready |
| Open, buffered, has value | Returns value immediately | Blocks if buffer is full |
| Closed | Never blocks: drains any buffered values, then returns zero value | Panics |
| Nil | Blocks forever | Blocks forever |
The key insight here is the third row: receiving from a closed channel never blocks and never panics. This is what makes close ideal for broadcast signaling.
Why Not Use sync.WaitGroup Here?
WaitGroup is the right tool when you want to wait for N goroutines to finish a task. It’s counting down completions.
The Bell Pattern is different: you’re waiting for a goroutine to reach a specific state, not finish entirely. The server goroutine never “finishes” — it runs indefinitely. A WaitGroup would require calling wg.Done() inside the goroutine and then continuing to run, which works but is semantically awkward.
Channels express the intent more clearly: “notify me when you’re ready,” not “tell me when you’re done.”
Why Not Use a sync.Mutex or Condition Variable?
You could use sync.Cond to signal readiness, and in some languages that’s the idiomatic approach. Go’s philosophy, summarized in the proverb:
“Do not communicate by sharing memory; share memory by communicating.”
Channels are the communication mechanism. Using sync.Cond for this pattern would be more verbose and less idiomatic.
Common Pitfalls
Closing a channel twice
close(ready)
close(ready) // panic: close of closed channel
close on an already-closed channel panics at runtime. If multiple goroutines might call close, protect it with sync.Once:
var once sync.Once
// ...
once.Do(func() { close(ready) })
Sending on a closed channel
close(ready)
ready <- struct{}{} // panic: send on closed channel
Once a channel is closed, all sends panic. Close is a one-way, one-time operation.
Forgetting to handle net.Listen errors
ln, err := net.Listen("tcp", ":8080")
if err != nil {
log.Fatal(err) // don't close(ready) — no signal on failure
}
close(ready)
If Listen fails, you must not signal ready. In the snippet above, log.Fatal exits the whole process, so nothing is left waiting. If you instead handled the error without exiting, the main goroutine would block on <-ready forever — arguably correct, since it shouldn’t proceed when startup failed, but silent. In production, you’d want a separate error channel or a context with cancellation.
Real-World Applications
Service startup dependencies
<-dbReady // wait for database connection pool to initialize
<-cacheReady // wait for Redis connection
startHTTPServer()
Fan-out worker pools
```go
ready := make(chan struct{})
var wg sync.WaitGroup
for i := 0; i < numWorkers; i++ {
    wg.Add(1)
    go func() {
        // initialize worker state
        wg.Done() // signal: I'm initialized
        <-ready   // now wait at the gate
        processJobs(jobs)
    }()
}
wg.Wait()    // block until ALL workers are initialized and waiting
close(ready) // now release them all at once
```
Test synchronization
```go
func TestServer(t *testing.T) {
    ready := make(chan struct{})
    go startTestServer(ready)
    <-ready
    resp, err := http.Get("http://localhost:8080/ping")
    // ...
}
```
Eliminates time.Sleep in tests — a common cause of flaky CI pipelines.
Graceful shutdown (done channel)
The same pattern inverted:
```go
sigCh := make(chan os.Signal, 1)
signal.Notify(sigCh, os.Interrupt, syscall.SIGTERM)

done := make(chan struct{})
go func() {
    <-sigCh // wait for OS signal
    close(done)
}()

<-done // a plain receive; a one-case select would be equivalent
cleanup()
```
The Bigger Picture: Event-Driven Thinking
What you’ve built with the Bell Pattern is a primitive event-driven system. Instead of polling (“is the server ready yet? is it ready now?”) or guessing with sleep, you’ve modeled a causal dependency:
[port bound] → [signal emitted] → [request sent]
This pattern scales. In distributed systems, readiness signaling works the same way — a Kubernetes readiness probe, a health check endpoint, a ZooKeeper ephemeral node. The mechanism differs, but the idea is identical: don’t proceed until an event tells you it’s safe.
Go channels make this first-class at the language level, without external libraries or complex primitives.
Summary
| Approach | Deterministic | Correct | Idiomatic |
|---|---|---|---|
| No synchronization | ✗ | ✗ | ✗ |
| time.Sleep | ✗ | Sometimes | ✗ |
| Bell Pattern (close) | ✓ | ✓ | ✓ |
| sync.WaitGroup | ✓ | ✓ | Partial |
| sync.Cond | ✓ | ✓ | ✗ |
The Bell Pattern is small, but it encodes a shift in thinking:
Don’t wait for time. Wait for truth.
What started as a connection refused error turned into an understanding of how Go models causality between goroutines — and how that same model scales all the way to distributed systems.