Ring buffer in Golang
If you use Golang, you've likely encountered slices, the versatile data structure for ordered collections. However, when dealing with limited memory or fixed-size datasets, you might need a more efficient alternative: the ring buffer.
Perhaps the constraints you're facing are different? In Logdy, for example, we faced the challenge of implementing a fixed-size, ordered collection with FIFO semantics, where the primary focus was fast insertion. Using raw slices, we would quickly face memory allocation overhead. There is no native Golang data structure that works efficiently as a FIFO queue: at some point, using slices, you end up copying the entire slice just to remove the first element and append a new one! We knew that slices were not the way to go.
This blog post delves into the key differences between slices and ring buffers, guiding you in choosing the right tool for your needs. We'll also show how a ring buffer has been implemented in Logdy, including a couple of extra methods that enable some interesting use cases.
Key Points
- Slices: Flexible, dynamic arrays that can grow and shrink in size. Offer efficient access and modification of elements.
- Ring Buffers: Fixed-size circular buffers that overwrite older entries when full. Optimized for constant-time additions and removals.
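To make the circular behavior concrete, here is a minimal sketch of a fixed-size ring buffer (the `Ring` type and its methods are our illustrative names, not a standard library API): when the write position reaches the end of the backing array, it wraps around to the start, and a full buffer overwrites its oldest entry.

```go
package main

import "fmt"

// Ring is a minimal fixed-capacity FIFO ring buffer (illustrative sketch).
type Ring struct {
	data  []int
	start int // index of the oldest element
	size  int // number of elements currently stored
}

func NewRing(capacity int) *Ring {
	return &Ring{data: make([]int, capacity)}
}

// Push appends an element; when the buffer is full, it overwrites the oldest one.
func (r *Ring) Push(v int) {
	end := (r.start + r.size) % len(r.data)
	r.data[end] = v
	if r.size < len(r.data) {
		r.size++
	} else {
		// buffer full: the oldest element was just overwritten,
		// so advance start to the next-oldest entry
		r.start = (r.start + 1) % len(r.data)
	}
}

// Pop removes and returns the oldest element.
func (r *Ring) Pop() (int, bool) {
	if r.size == 0 {
		return 0, false
	}
	v := r.data[r.start]
	r.start = (r.start + 1) % len(r.data)
	r.size--
	return v, true
}

func main() {
	r := NewRing(3)
	for i := 1; i <= 5; i++ { // push 5 items into a capacity of 3
		r.Push(i)
	}
	for {
		v, ok := r.Pop()
		if !ok {
			break
		}
		fmt.Println(v) // prints 3, 4, 5: the two oldest items were overwritten
	}
}
```

Note that no element is ever moved in memory: both `Push` and `Pop` only adjust indices, which is what makes the operations constant-time.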
Comparison
- Memory Usage: Slices can be memory-intensive as they require dynamic allocation. Ring buffers maintain a fixed size, making them efficient for resource-constrained environments.
- Operations: Slices excel in random element access and modification. Ring buffers offer faster insertions and deletions due to their circular nature, eliminating the need for element shifting.
- Complexity: Slices are simpler to understand and use, while ring buffers require additional logic for handling the circular indexing.
Choosing the Right Tool
Use slices when
- You need a dynamic data structure that can grow or shrink
- Random access and modification of elements are frequent (not entirely true, we'll show why)
- Memory is not a critical concern
Opt for ring buffers when
- You have a fixed-size data set and need efficient insertions and deletions.
- You care about the performance of push operations
- Memory optimization is crucial.
- You can handle the additional logic for managing the ring buffer.
Challenges for Logdy
When implementing the message buffer for Logdy we had specific constraints for the underlying data structure:
- High performance for inserting elements
- Fixed size
- Fast sequential access
- Access by index (required, but not performance-critical)
- Peeking at N elements starting at a specific index
- Searching for an element within the collection
We knew that a ring buffer could be the answer, and initial tests showed good results. We found an excellent blog post by [Serge Toro](https://www.sergetoro.com/golang-round-robin-queue-from-scratch/) explaining a basic implementation of a ring buffer in Golang, along with code snippets. We used that example as a foundation and expanded it with more features.
Below you can see a benchmark for "pushing" an item to a slice (array) that already contains 100,000 elements (remove the first item, append a new one at the end), and the same operation for the ring buffer implementation. The performance difference is over 2500x (!!!). This is because adding a new element to a full slice (array) in Golang this way requires "rewriting" every element to the previous index before overwriting the last slot. It still incurs zero memory allocations, but the Go runtime has to copy a large region of memory on every push.
```
BenchmarkArray
BenchmarkArray-8     1000000    13844 ns/op    0 B/op    0 allocs/op
BenchmarkRR
BenchmarkRR-8      209373150    5.482 ns/op    0 B/op    0 allocs/op
```
Below is the naive "push" implementation used in the benchmark, illustrating the overhead work that needs to be done:
```go
var ar [100_000]int
size := 0
for n := 0; n < len(elementsToAdd); n++ {
	// if we reached the array capacity
	if size >= 100_000 {
		// shift the array contents one index to the left;
		// this is where most of the time is spent:
		// the Go runtime has to rewrite 100,000 items
		copy(ar[0:], ar[1:])
		size--
	}
	// write the element at the end
	ar[size] = n
	size++
}
```
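For contrast, here is a sketch of what the ring-buffer version of the same loop does (using plain index variables rather than Logdy's actual type): when the buffer is full, it advances a start index and overwrites a single slot instead of moving 100,000 elements.

```go
package main

import "fmt"

func main() {
	const capacity = 100_000
	ar := make([]int, capacity)
	start, size := 0, 0

	for n := 0; n < 150_000; n++ {
		if size >= capacity {
			// buffer full: instead of copying 100,000 elements,
			// just move the logical start one slot forward (O(1))
			start = (start + 1) % capacity
			size--
		}
		// write at the logical end, wrapping around the backing array
		ar[(start+size)%capacity] = n
		size++
	}

	// the oldest surviving element: the first 50,000 pushes were overwritten
	fmt.Println(ar[start]) // prints 50000
}
```

Each iteration touches exactly one slot regardless of the buffer size, which is why the benchmark above shows a constant ~5 ns per push.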
Upgrades to a standard ring buffer
As mentioned, we did a few upgrades to an initial implementation to satisfy the constraints presented. Here is a list of extra methods added.
PushSafe
This method ensures there is always a free spot for an incoming element: if the buffer is full, the oldest element is dropped first, so the push never fails.
```go
func (r *RingQueue[T]) PushSafe(elem T) error {
	if r.isFull {
		r.Pop()
	}
	return r.Push(elem)
}
```
PeekIdx
This method allows peeking at an item at a specific index without removing it.
```go
func (r *RingQueue[T]) PeekIdx(idx int) (T, error) {
	var res T
	if idx < 0 || idx >= cap(r.data) {
		return res, fmt.Errorf("index out of bounds")
	}
	// indices at or beyond the current size point at slots
	// that have not been written yet
	if idx >= r.Size() {
		return res, fmt.Errorf("data not available at index %d yet", idx)
	}
	// translate the logical index into a position in the backing array
	index := (r.start + idx) % len(r.data)
	return r.data[index], nil
}
```
PeekSlice
This method is similar to PeekIdx, but returns all of the elements starting at a specific index.
```go
func (r *RingQueue[T]) PeekSlice(startIdx int) ([]T, error) {
	// bounding by Size() (not capacity) also prevents a negative
	// slice size when the buffer is not yet full
	if startIdx < 0 || startIdx >= r.Size() {
		return nil, fmt.Errorf("start index out of bounds")
	}
	sliceSize := r.Size() - startIdx
	slice := make([]T, sliceSize)
	for i := 0; i < sliceSize; i++ {
		slice[i], _ = r.PeekIdx(startIdx + i)
	}
	return slice, nil
}
```
Scan
This method, as the name suggests, scans through the entire buffer and executes the fn callback for each element. The callback should return a bool that signals whether scanning should be terminated.
```go
func (r *RingQueue[T]) Scan(fn func(elem T, idx int) bool) {
	for i := 0; i < r.Size(); i++ {
		index := (r.start + i) % len(r.data)
		stop := fn(r.data[index], i)
		if stop {
			break
		}
	}
}
```
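To show how these methods fit together, here is a self-contained sketch that reimplements a minimal `RingQueue[T]` with the extended API discussed above (our simplified version for illustration, not Logdy's exact code):

```go
package main

import "fmt"

// RingQueue is a minimal generic ring buffer sketch supporting the
// extended methods discussed in this post.
type RingQueue[T any] struct {
	data  []T
	start int // index of the oldest element
	size  int // number of elements currently stored
}

func NewRingQueue[T any](capacity int) *RingQueue[T] {
	return &RingQueue[T]{data: make([]T, capacity)}
}

func (r *RingQueue[T]) Size() int { return r.size }

func (r *RingQueue[T]) Push(elem T) error {
	if r.size == len(r.data) {
		return fmt.Errorf("queue is full")
	}
	r.data[(r.start+r.size)%len(r.data)] = elem
	r.size++
	return nil
}

func (r *RingQueue[T]) Pop() (T, error) {
	var zero T
	if r.size == 0 {
		return zero, fmt.Errorf("queue is empty")
	}
	elem := r.data[r.start]
	r.start = (r.start + 1) % len(r.data)
	r.size--
	return elem, nil
}

// PushSafe drops the oldest element when full, so the push always succeeds.
func (r *RingQueue[T]) PushSafe(elem T) error {
	if r.size == len(r.data) {
		r.Pop()
	}
	return r.Push(elem)
}

// PeekIdx returns the element at logical index idx (0 = oldest).
func (r *RingQueue[T]) PeekIdx(idx int) (T, error) {
	var zero T
	if idx < 0 || idx >= r.size {
		return zero, fmt.Errorf("index out of bounds")
	}
	return r.data[(r.start+idx)%len(r.data)], nil
}

// Scan visits elements oldest-first until fn returns true.
func (r *RingQueue[T]) Scan(fn func(elem T, idx int) bool) {
	for i := 0; i < r.size; i++ {
		if fn(r.data[(r.start+i)%len(r.data)], i) {
			break
		}
	}
}

func main() {
	q := NewRingQueue[string](3)
	for _, s := range []string{"a", "b", "c", "d"} {
		q.PushSafe(s) // "a" is evicted when "d" arrives
	}
	v, _ := q.PeekIdx(0)
	fmt.Println(v) // oldest element: prints "b"
	q.Scan(func(elem string, idx int) bool {
		fmt.Println(idx, elem)
		return elem == "c" // stop scanning once "c" is found
	})
}
```

The usage in `main` mirrors the Logdy scenario: a bounded log buffer where new messages evict old ones, with indexed peeking and early-exit search over the live contents.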
That's it, we hope you enjoyed the blog post. All of the code is available on GitHub.