Introduction to Go — Part 3

Ido Magor
5 min read · Aug 28, 2020

This part of our Go series will be about the amazing Go concurrency model, and the way we can communicate between concurrent tasks.

I remember that when I started learning Go, the first thing everyone talked about was its concurrency model.

For those who don’t know what concurrency is, here’s a brief summary for you :)

Concurrency means having multiple units of execution in progress at the same time; these units can share state and communicate with each other. In most programming languages today, we achieve concurrency using threads.

This is a very high-level description, of course, so I highly suggest reading more about concurrency and looking at basic programs that use threads, preferably in the programming language you’re most familiar with.

Today’s hardware

With the hardware we have today, we sometimes aren’t using it, and in particular our CPU, in the most efficient way.

It’s not done on purpose; it’s because developing concurrent applications brings many challenges.

The most infamous one is the race condition, and of course, let’s not forget deadlocks.

Because of these challenges, combined with the concurrency models of other languages, we often don’t achieve the highest possible throughput from our CPUs.

Go set out to challenge that area and offer a great concurrency model, one that allows a simple development workflow along with very high throughput.

Don’t get me wrong: many languages today have great, high-performance concurrency models that compete with Go, and some may even beat it.

Some of them even have features that make us choose them specifically for those features, so every language has its own purpose and strengths in this area.

So when we choose our programming language, we need to choose wisely according to our needs.

So without any further ado, let’s see what Go has.

Concurrency and communications

In Go, we don’t have a thread type, or a threading library for that matter.

The reason for that is quite simple.
As we said, Google wanted to recreate the concurrency model from scratch and build something new. That was done in order to allow us to leverage our Cloud machines to the maximum in an elegant way.

Go currently achieves very high throughput when running many concurrent tasks on a machine, and it is often considered the king of the block in that field.

But wait… a moment ago I said there are no threads in Go, so what do we have?

Go has goroutines, which let us create thread-like units of execution (to be accurate, goroutines, not threads), and of course, like many other well-known languages, we have a great tool called channels for communicating between them.

I hope the following example will help you understand it better.
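Here is a minimal sketch of what such a program can look like: a producer goroutine sending values over a channel (the produce function name and the values are just illustrative):

```go
package main

import "fmt"

// produce sends a few values on the channel and closes it when done.
func produce(ch chan int) {
	for i := 1; i <= 5; i++ {
		ch <- i // send a value into the channel
	}
	close(ch) // the producer closes the channel once it has nothing more to send
}

func main() {
	ch := make(chan int) // create an unbuffered channel of ints

	go produce(ch) // the go keyword runs produce as a goroutine

	// Read values until the channel is closed.
	for v := range ch {
		fmt.Println(v)
	}
}
```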

As you can see, we now have another new keyword, go, which marks a function call to be executed as a goroutine.

The channel is created using the make function, which lets us create slices, maps, and channels of whatever element type we want: int, byte, structs, []int, and more…
All we need is to say what type we want, and we can also specify characteristics for it, like the length and capacity of a slice, or the maximum number of values a channel can hold (its buffer size).
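To make that concrete, here is a small sketch of make creating a slice with a fixed capacity and a buffered channel (the sizes are just illustrative):

```go
package main

import "fmt"

func main() {
	nums := make([]int, 0, 10)    // a slice of ints with length 0 and capacity 10
	buffered := make(chan int, 3) // a channel that can hold up to 3 values before a send blocks

	buffered <- 1 // doesn't block, the buffer still has room

	fmt.Println(len(nums), cap(nums), len(buffered), cap(buffered)) // 0 10 1 3
}
```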

The final important note regarding channels is about the relationship between producers and consumers.

close() is called on the producer side, and the reason is that only the producer knows when it will stop sending data.
Because of that, when reading from a channel we can know whether there is actually any data left in it, and handle the end of the stream in a more elegant way.
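For example, the consumer can use the two-value receive form to tell whether the channel was closed and drained (a minimal sketch, with illustrative values):

```go
package main

import "fmt"

func main() {
	ch := make(chan string, 2)
	ch <- "hello"
	ch <- "world"
	close(ch) // only the producer side knows there is nothing more to send

	for {
		v, ok := <-ch // ok is false once the channel is closed and empty
		if !ok {
			fmt.Println("channel closed, no more data")
			break
		}
		fmt.Println(v)
	}
}
```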

Knowing when everyone has finished

If you’re familiar with threads, then I guess you know about their join and detach functions.

Join, as you may know, blocks the currently running thread that calls it, making it wait for the thread it was called on to finish and return a result.
For example, this lets the main thread wait for all other threads to finish their work, and only then exit.

In order to implement something similar in Go, we have the WaitGroup.
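Here is a minimal sketch of how that typically looks (the number of workers and the messages are just illustrative):

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var wg sync.WaitGroup

	for i := 1; i <= 3; i++ {
		wg.Add(1) // tell the WaitGroup one more goroutine is in flight
		go func(id int) {
			defer wg.Done() // mark this goroutine as finished
			fmt.Println("worker", id, "done")
		}(i)
	}

	wg.Wait() // block until every Add has a matching Done
	fmt.Println("all workers finished")
}
```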

As you can see from this example, with a WaitGroup we declare how many goroutines we are waiting for, and then wait for all of them to finish with a single call, instead of a separate call to join for each thread.

In conclusion

Wanna hear something cool?
If you went through this entire 3-part series and understood what I was blabbering about, you can now start developing your first Go program, knowing its basic features.

Go has a lot more features to it, and if you’re planning to keep reading about it, you have quite a journey ahead of you; I hope it will be a fruitful one.

Regarding concurrency, I highly suggest you read about the select keyword.
It allows you to wait on reads from multiple channels at once, and to handle events as data arrives on each of them.
That is kinda cool, because you can build logic like sending events to a goroutine that executes functions on your behalf.
I didn’t want to cover it in this final part because I wanted to keep things as simple as possible, but I hope you got the point :)
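Still, for the curious, a minimal sketch of select handling events from one channel and a quit signal from another could look like this (the channel names are mine):

```go
package main

import "fmt"

func main() {
	events := make(chan string)
	quit := make(chan struct{})

	go func() {
		events <- "do something"
		events <- "do something else"
		close(quit) // signal that no more events are coming
	}()

	// select waits on several channel operations and runs whichever is ready.
	for {
		select {
		case e := <-events:
			fmt.Println("handling event:", e)
		case <-quit:
			fmt.Println("no more events, exiting")
			return
		}
	}
}
```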

I hope you had a great time reading this piece, and if you have any further questions I would be delighted to answer them.
Also, if you have any opinions or suggestions for improving this piece, I would love to hear them :)

Thank you, and have a great time on your journey for knowledge! :)

