Go evolves in the wrong direction
The Go programming language is known for being easy to use. Thanks to its well-thought-out syntax, features and tooling, Go allows writing easy-to-read and easy-to-maintain programs of arbitrary complexity (see this list at GitHub).
Some software engineers call Go “boring” and “outdated”, since it lacks advanced features from other programming languages, such as monads, option types, LINQ, borrow checkers, zero-cost abstractions, aspect-oriented programming, inheritance, function and operator overloading, etc. While these features may simplify writing code for specific domains, they have non-zero costs in addition to their benefits. These features are usually good for a brain workout. But we don’t need additional mental load when dealing with production code, since we are already busy solving business tasks. The main cost of all these features is the increased complexity of the resulting code:
- it becomes harder to understand what’s going on by just reading the code;
- it becomes harder to debug such code, since you need to jump over dozens of non-trivial abstractions before reaching the business logic;
- it becomes harder to add new functionality to such code because of the restrictions these features impose.
This may significantly slow down and even halt the pace of code development. That’s the main reason why Go didn’t have these features in the first place.
Unfortunately, some of these features start appearing in recent Go releases:
- Generics have been added in Go1.18. Many software engineers wanted generics in Go because they thought this would significantly improve their productivity. Two years have passed since the Go1.18 release, but there is no sign of increased productivity. The overall adoption of generics in Go remains low. Why? Because generics aren’t needed in most practical Go code. On the other hand, generics significantly increased the complexity of the Go language itself. Try, for example, understanding all the details of Go type inference after the addition of generics. Its complexity already looks very close to that of C++ type inference :) Another issue is that generics in Go lack essential features which exist in C++ templates. For example, Go generics do not support generic methods on generic types. They also do not support template specialization and template template parameters, plus many other features which are needed for taking full advantage of generic programming. Let’s add these missing features to Go! Wait, we’ll get yet another over-complicated C++ clone then. Then why add partially working generics to Go in the first place? 🤦
- Range over functions, aka iterators, generators or coroutines, is going to be added in Go 1.23 according to this commit. Let’s take a closer look at this “feature”.
Iterators in Go1.23
If you aren’t familiar with iterators in Go, then please read this excellent introduction. In essence, this is syntactic sugar which allows writing for ... range loops over functions with special signatures. This allows writing custom iterators over custom collections and types. This sounds like a great feature, doesn’t it? Let’s try figuring out which practical problem this feature solves. This is outlined here:
There is no standard way to iterate over a sequence of values in Go. For lack of any convention, we have ended up with a wide variety of approaches. Each implementation has done what made the most sense in that context, but decisions made in isolation have resulted in confusion for users.
In the standard library alone, we have archive/tar.Reader.Next, bufio.Reader.ReadByte, bufio.Scanner.Scan, container/ring.Ring.Do, database/sql.Rows, expvar.Do, flag.Visit, go/token.FileSet.Iterate, path/filepath.Walk, runtime.Frames.Next, and sync.Map.Range, hardly any of which agree on the exact details of iteration. Even the functions that agree on the signature don’t always agree about the semantics. For example, most iteration functions that return (T, bool) follow the usual Go convention of having the bool indicate whether the T is valid. In contrast, the bool returned from runtime.Frames.Next indicates whether the next call will return something valid.
When you want to iterate over something, you first have to learn how the specific code you are calling handles iteration. This lack of uniformity hinders Go’s goal of making it easy to move around in a large code base. People often mention as a strength that all Go code looks about the same. That’s simply not true for code with custom iteration.
Again, this sounds legit: a unified way to iterate over various types in Go. But what about backwards compatibility, one of the main strengths of Go? All the existing custom iterators from the standard library mentioned above will remain there forever according to the Go compatibility rules. So, all new Go releases will provide at least two different ways of iterating over various types in the standard library: the old one and the new one. This increases Go programming complexity, since:
- You need to know about both ways of iterating over various types instead of a single way.
- You need to be able to read and maintain the old code, which uses old iterators, and the new code, which may use either old iterators or new iterators, or both iterator types simultaneously.
- You need to choose the appropriate iterator type when you write new code.
Other issues with iterators in Go1.23
The for ... range loop could be applied only to built-in types until Go 1.23: integers (since Go1.22), strings, slices, maps and channels. The semantics of these loops were clear and easy to understand (loops over channels have more complicated semantics, but if you deal with concurrent programming, you should understand them easily).
Since Go1.23, for ... range loops can also be applied to functions with special signatures (aka pull and push functions). This makes it impossible to understand what a given innocent-looking for ... range loop does under the hood just by reading the code. It can do anything, like any function call can. The only difference is that function calls in Go were always explicit, e.g. f(args), while a for ... range loop hides the actual function call. Additionally, it applies non-obvious transformations to the loop body:
- It implicitly wraps the loop body into an anonymous function and implicitly passes this function to the push iterator function.
- It implicitly calls the anonymous pull function and passes the returned results to the loop body.
- It implicitly transforms return, continue, break, goto and defer statements into other non-obvious statements inside the anonymous function passed to the push iterator function.
On top of this, it is unsafe in the general case to use args returned by the iterator function after the loop iteration, since the iterator function can re-use them on the next loop iteration.
Go was known for easy-to-read-and-understand code with explicit code execution paths. This property breaks irreversibly in Go1.23 :( What do we get in exchange? Yet another way to iterate over types, with non-trivial implicit semantics. And this way doesn’t work as advertised when iterating over types which may return an error during iteration (for example, database/sql.Rows, path/filepath.Walk or any other type which performs IO during iteration), since you need to manually check for the iteration error either inside the loop or immediately after it, in the same way as with the old approach.
Even if you use an iterator which cannot return errors, the resulting for ... range loop looks less clear than the old approach with an explicit callback. Which code is easier to understand and debug?
tree.walk(func(k, v string) {
    println(k, v)
})

for k, v := range tree.walk {
    println(k, v)
}
Keep in mind that the latter loop is implicitly converted into the former code with an explicit callback call. Now let’s return something from the loop:
for k, v := range tree.walk {
    if k == "foo" {
        return v
    }
}
It is implicitly converted into hard-to-track code similar to the following:

var vOuter string
needOuterReturn := false
tree.walk(func(k, v string) bool {
    if k == "foo" {
        needOuterReturn = true
        vOuter = v
        return false
    }
    return true
})
if needOuterReturn {
    return vOuter
}
Looks easy to debug :)
This code can break if tree.walk passes v to the callback via an unsafe conversion from a byte slice, so the contents of v can change on the next loop iteration. So the implicitly generated bullet-proof code must use strings.Clone(), which leads to a possibly unnecessary memory allocation and copy:
var vOuter string
needOuterReturn := false
tree.walk(func(k, v string) bool {
    if k == "foo" {
        needOuterReturn = true
        vOuter = strings.Clone(v)
        return false
    }
    return true
})
if needOuterReturn {
    return vOuter
}
The “range over func” feature imposes restrictions on the function signature. These restrictions do not fit all the possible cases when iteration over collection items is needed. This forces software engineers to make a hard choice between ugly hacks for the for ... range loop and writing explicit code which ideally fits the given task.
Conclusions
It is sad that Go has started evolving in the direction of increased complexity and implicit code execution. Probably we need to stop adding features which increase Go complexity and instead focus on the essential Go features: simplicity, productivity and performance. For example, Rust has recently started taking over Go’s share in the performance-critical space. I believe this trend can be reverted if the core Go team focuses on hot-loop optimizations such as loop unrolling and SIMD usage. This shouldn’t affect compilation and linking speed too much, since only a small subset of the compiled Go code needs to be optimized. There is no need to try optimizing all the variations of dumb code; such code will remain slow even after hot loops are optimized. It is enough to optimize only specific patterns which are intentionally written by software engineers who care about the performance of their code.
Go is much easier to use than Rust. Why lose to Rust in the performance race?
Other examples of useful features, which Go could gain without increasing the complexity of the language itself or of the Go code that uses them, are small quality-of-life improvements similar to this one.
Who am I?
I’m a Go software engineer specializing in writing simple, performance-oriented Go code such as VictoriaMetrics, quicktemplate, fastjson, fasthttp, fastcache, easyproto, etc. Thanks to Go, I try to follow the KISS design principle all the time.