Concurrency is one of Go's most powerful features. With goroutines and channels, Go makes it easy to write concurrent programs that can handle thousands of tasks simultaneously. In this guide, we'll explore common concurrency patterns that will help you build robust and efficient applications.
Understanding Goroutines
Goroutines are lightweight threads managed by the Go runtime. They're incredibly cheap to create - each starts with a small stack (a few kilobytes) that grows as needed, so you can easily run thousands or even millions of goroutines concurrently.
package main

import (
    "fmt"
    "time"
)

func sayHello(name string) {
    time.Sleep(100 * time.Millisecond)
    fmt.Printf("Hello, %s!\n", name)
}

func main() {
    // Start multiple goroutines
    go sayHello("Alice")
    go sayHello("Bob")
    go sayHello("Charlie")

    // Wait for the goroutines to complete. Sleeping works for a demo,
    // but real code should use sync.WaitGroup (shown below) instead.
    time.Sleep(200 * time.Millisecond)
}

Channels: Communication Between Goroutines
Channels provide a way for goroutines to communicate and synchronize. They're typed conduits that allow you to send and receive values between goroutines safely.
Unbuffered Channels
Synchronous communication - sender blocks until receiver is ready.
ch := make(chan string)

// Sender goroutine
go func() {
    ch <- "Hello from goroutine"
}()

// Receiver blocks until a value arrives
msg := <-ch
fmt.Println(msg)

Buffered Channels
Asynchronous communication with a capacity buffer.
ch := make(chan int, 3)

// Can send 3 values without blocking
ch <- 1
ch <- 2
ch <- 3

// Receive values
fmt.Println(<-ch) // 1
fmt.Println(<-ch) // 2

Common Concurrency Patterns
1. Worker Pool Pattern
Distribute work across a fixed number of workers to limit resource usage and control concurrency.
package main

import (
    "fmt"
    "sync"
)

func worker(id int, jobs <-chan int, results chan<- int, wg *sync.WaitGroup) {
    defer wg.Done()
    for job := range jobs {
        fmt.Printf("Worker %d processing job %d\n", id, job)
        results <- job * 2
    }
}

func main() {
    const numWorkers = 3
    jobs := make(chan int, 10)
    results := make(chan int, 10)
    var wg sync.WaitGroup

    // Start workers
    for i := 1; i <= numWorkers; i++ {
        wg.Add(1)
        go worker(i, jobs, results, &wg)
    }

    // Send jobs, then close the channel so the workers' range loops end
    for j := 1; j <= 9; j++ {
        jobs <- j
    }
    close(jobs)

    // Wait for workers to finish, then close results so the
    // collection loop below can terminate
    wg.Wait()
    close(results)

    // Collect results
    for result := range results {
        fmt.Println("Result:", result)
    }
}

2. Fan-Out, Fan-In Pattern
Fan out work to multiple goroutines that process data in parallel, then fan in their results to a single channel.
// process stands in for any stage that reads from input and returns
// an output channel; this placeholder doubles each value.
func process(input <-chan int) <-chan int {
    out := make(chan int)
    go func() {
        defer close(out)
        for n := range input {
            out <- n * 2
        }
    }()
    return out
}

func fanOut(input <-chan int, workers int) []<-chan int {
    channels := make([]<-chan int, workers)
    for i := 0; i < workers; i++ {
        channels[i] = process(input)
    }
    return channels
}

func fanIn(channels ...<-chan int) <-chan int {
    out := make(chan int)
    var wg sync.WaitGroup
    for _, ch := range channels {
        wg.Add(1)
        go func(c <-chan int) {
            defer wg.Done()
            for n := range c {
                out <- n
            }
        }(ch)
    }
    // Close out only after every fan-in goroutine has drained its channel
    go func() {
        wg.Wait()
        close(out)
    }()
    return out
}

3. Pipeline Pattern
Chain multiple stages of processing where each stage receives input from the previous one.
func generate(nums ...int) <-chan int {
    out := make(chan int)
    go func() {
        for _, n := range nums {
            out <- n
        }
        close(out)
    }()
    return out
}

func square(in <-chan int) <-chan int {
    out := make(chan int)
    go func() {
        for n := range in {
            out <- n * n
        }
        close(out)
    }()
    return out
}

// Usage
c := generate(2, 3, 4)
out := square(c)
for n := range out {
    fmt.Println(n) // 4, 9, 16
}

Best Practices
Close channels from the sender
Only the sender should close a channel, and only when receivers need a completion signal (for example, to end a range loop). Never close a channel from the receiving side.
Use context for cancellation
Implement proper cancellation and timeout using context.Context.
Avoid goroutine leaks
Ensure all goroutines can exit - use proper signaling mechanisms.
Be careful with shared state
Use channels or sync primitives (mutex) to protect shared data.
Conclusion
Go's concurrency model with goroutines and channels makes it straightforward to write efficient concurrent programs. By understanding and applying these patterns - worker pools, fan-out/fan-in, and pipelines - you can build robust, scalable applications that take full advantage of modern multi-core processors. Remember to follow best practices to avoid common pitfalls like goroutine leaks and race conditions.