How to Dockerize an RTC App or Service

Ido Magor
10 min read · Aug 8, 2021

A short description of how to handle real-time (WebRTC, RTSP, SIP, etc.) applications in Docker containers

Docker is one of the most talked-about technologies in the industry, and for several good reasons.

Unfortunately, when it comes to real-time communications, the relevant information is not as widely available. RTC requires a deep understanding of how networking works at its most basic level, and when you add Docker containers to the mix, you also need to understand more about the OS and how container networking is handled.

So in this post, we’re going to see how to connect these two worlds, understand how that connection works, and look at a few ways to run a Dockerized RTC app.

There’s some RTC background below, but if you’re not familiar with the topic and want a broader picture, I suggest reading some introductory material before starting this post.

A Little Background

In the world of RTC, there are multiple ways to establish a communication channel between two parties, which can be phones, communication servers, recording servers, and many more.

We can make a phone call over cellular networks, do instant messaging with applications like WhatsApp or Facebook Messenger, make VoIP calls over IP networks with IP phones (physical or software), and hold video conversations that involve both audio and video.

But when talking about RTC, there’s the signaling channel, which informs the parties about the flow of the conversation: events like dialing, call transfer, going busy, creating a conference, and many more. The main protocols here are SIP, WebSocket, and XMPP.

On the other side, there’s the data of the communication channel, carried by RTP, together with RTCP, which controls the way that data is transferred.

Because several protocols work together, each one makes assumptions that can break inside Docker containers if not planned well: IP addresses, port accessibility, manipulating data/configuration when HTTP APIs are also involved and we don’t know where the media (RTP) actually is, and more.

This post is built around a concrete case: we want to build a Docker image that runs a SIP server capable of handling RTP, plus a few more applications, so that in the end it works just as it would on a regular machine.

The main thing to focus on is that two or more protocols are involved in a fully functioning session, and from that, to understand how to configure our Docker container so these protocols can communicate properly.

Of course, a few things work differently in WebRTC than in SIP, so if you’d like a post dedicated to WebRTC, I’ll do my best to write it for you :)

Let’s S-T-A-R-T

If we focus on SIP first, we know that the SIP negotiation starts with a SIP INVITE. It carries an SDP that describes the session: the codec, clock rate, IP address, and port of each media endpoint, all of which will be used by RTP once SIP signals it to start.
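For illustration, here is a minimal SDP body of the kind an INVITE might carry (the name, address, and port are made up): the c= line is the IP address the other party should send RTP to, and the m= line carries the port and codec payload type, with a=rtpmap giving the codec name and clock rate.

```
v=0
o=alice 2890844526 2890844526 IN IP4 192.168.1.20
s=Call
c=IN IP4 192.168.1.20
t=0 0
m=audio 49170 RTP/AVP 0
a=rtpmap:0 PCMU/8000
```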

Because RTP is UDP-based and uses information from another protocol (SIP), it simply does what it’s told.

But let’s ask a question… how does a SIP server know which IP address to advertise for receiving the RTP data? 🤔🤔🤔

It simply takes the IP address of the OS it runs on, or reads it from a configuration file, and puts it inside the SDP.

But wait… if it uses an invalid IP, doesn’t that mean the OS/machine the SIP server runs on will never receive the RTP? The answer is YES, you’re correct.

So what do we do? The answer is quite simple, and you already know it: we make sure the SIP server’s configuration tells it which IP address to use when it builds the SDP.

Sounds simple, doesn’t it? It is, but unfortunately not when it comes to Docker containers. A Docker container runs in bridge mode by default, which means it has its own network address (172.17.x.x) and cannot be reached unless we tell Docker to publish ports for us.
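You can see this for yourself. Assuming a hypothetical image named my-sip-server, a container started on the default bridge network gets an address from Docker’s internal range:

```bash
# Start the container on the default bridge network (no ports published)
docker run -d --name sip-test my-sip-server

# Print the address Docker assigned to it, e.g. 172.17.0.2
docker inspect -f '{{ .NetworkSettings.IPAddress }}' sip-test
```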

Regarding publishing its RTP (UDP) ports, we’ll talk about that later; for now, assume all the ports are accessible.

The problem with the IP address is that a packet destined for the container’s address, let’s say 172.17.10.1, would never reach the laptop/machine running the SIP server’s container, because that address is not routable outside the Docker host.

So what do we do?

Hello Network Drivers!

One of the Docker network drivers is called host. This driver makes the container share the host’s network stack, as if the applications inside it were running directly in the OS’s network namespace.

This makes life easy: there’s no extra IP address to configure, because we’re now directly on the host’s network, like one big application.
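Using host mode is just one flag (again with the hypothetical my-sip-server image); the container then binds its ports directly on the host’s interfaces, so no port publishing is needed:

```bash
# Share the host's network stack; SIP and RTP ports are bound
# directly on the host's IP, no -p flags required
docker run -d --network host my-sip-server
```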

But wait… there’s more.

Today we have a problem: Windows and macOS machines don’t support host mode, so a Linux distribution like Ubuntu, for example, is our go-to for running containers in host mode.

But what if I still want to run my containers on a Windows-based machine?

If we want to run a container on Windows, and of course we do, since it’s a basic necessity for our testing purposes, it can be done by publishing the container’s ports to the host.
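Publishing a port maps it from the host into the container. A minimal sketch for the SIP signaling port, still using the hypothetical image name:

```bash
# Publish SIP signaling (5060) over both UDP and TCP to the host
docker run -d -p 5060:5060/udp -p 5060:5060/tcp my-sip-server
```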

But again, there’s more :)

As you might already know, once a container has published its ports, the host knows what to do with traffic arriving on them: any data received on a published port is forwarded to the corresponding container with no problem.

So what exactly is the problem, if we can simply publish the ports we need and move on?

The issue is quite simple: when we publish ports from a container to the host, each port is effectively bound on the host and forwarded into the container, and doing that for many ports is a heavy process.

So just imagine we try to publish not one, not two, not three, but a range of 5k ports. Do you think that will be heavy?

You’ve probably already guessed that the answer is a big Y-E-S, and you might ask yourself why I’m even pointing that out…

The reason is that real-time applications don’t use just one port for signaling, like 5060 for SIP, whatever custom port was chosen for RTSP, or the WebSocket port WebRTC uses to exchange SDPs and other session-setup events; we also have RTP.

Let’s Talk About UDP

RTP is a UDP-based protocol, and for each session between parties there can be multiple RTP connections, one per kind of data, or even a single stream that multiplexes several kinds of data from the same party (multiple SSRCs for different kinds of data).

Each RTP session can carry what is called an SSRC. An SSRC basically identifies a channel of data inside an RTP session, and a session can carry multiple SSRCs. For example, a user can have video, audio, and/or raw binary data channels, in which case a single RTP connection carries three SSRCs.

Because each session needs a dedicated port for receiving RTP data, the application needs a range of available ports to allocate for incoming RTP. Therefore, we need a whole range of ports to be published from the container to the host.

As said before, that operation is not small, and it can sometimes even fail; trying to run a container that publishes 1k ports, for example, is something I personally saw fail during my testing, and that was on a pretty powerful laptop.

So one solution is to limit local environments to only 10–100 ports, which seems pretty reasonable for a container meant for testing, while for deployment environments you can use a wider range or simply choose host mode.
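Docker accepts port ranges in -p, so a small RTP range for local testing could look like the sketch below; the range and image name are only examples, and the server’s own RTP port range must be configured to match what is published.

```bash
# SIP signaling plus a small RTP range, reasonable for local testing
docker run -d \
  -p 5060:5060/udp \
  -p 10000-10100:10000-10100/udp \
  my-sip-server
```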

But to be honest with you, when you deploy containers, you eventually want to get to Kubernetes deployments, and that’s a whole other game. I’m not going to cover that topic in this post, but with Kubernetes you can configure your containers to use a driver called Macvlan, which lets containers appear as if they sit directly on the host NIC; that deserves a whole new post in the future.

But what if I told you there’s one more thing that still won’t work?

Finally, Routing with SDPs and IPs

When a packet is routed over the Internet, it is forwarded according to the destination address in its IP header.

But right now we have our cool container on our laptop, listening on SIP/RTSP/WebSocket ports for the signaling part, and also on 10k UDP ports for the RTP (just for the example).

Once a connection is attempted, SIP/RTSP, or our own WebSocket implementation handling WebRTC sessions, builds an SDP. That SDP contains the media descriptions that tell endpoint B where endpoint A wants the RTP data sent: the codec, clock rate, ports, much more… and the IP!

But wait… which IP does it put inside the SDP?
It might have dawned on you by now: the IP that usually ends up inside the SDP is the container’s IP, from the 172.17.x.x range.

So it basically tells the other party to send the RTP data to an IP address that might even exist on endpoint B’s own machine, if B happens to run a Docker container of its own, possibly even one listening on the very port that A’s SDP advertised.
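In other words, the SDP built inside the container ends up advertising an address that only means something inside that Docker host, along the lines of:

```
c=IN IP4 172.17.0.2
m=audio 10000 RTP/AVP 0
```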

Cool, right? Not really, but it’s cool to know stuff :)

So what do we do?
In cases where we’re building WebRTC solutions and managing the SDP creation ourselves, we can make sure that the IP placed in the SDP for the RTP is the IP of the host machine.

In the case of RTSP/SIP servers, there is in most cases a way to configure the IP we wish to advertise in the SDPs.

We can basically obtain the host machine’s IP when the container starts, save it, and inject it into the relevant configuration file, or use it at runtime to inject it into the SDP however the application expects.
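One simple way to do this (a sketch, assuming a hypothetical my-sip-server image that reads an EXTERNAL_IP environment variable; your actual server will likely spell this differently) is to resolve the host’s address before starting the container and pass it in:

```bash
# Grab the host's primary IPv4 address (Linux; adjust for macOS/Windows)
EXTERNAL_IP=$(hostname -I | awk '{print $1}')

# Hand it to the container so the server can advertise it in its SDPs
docker run -d \
  -e EXTERNAL_IP="$EXTERNAL_IP" \
  -p 5060:5060/udp \
  -p 10000-10100:10000-10100/udp \
  my-sip-server
```

An entrypoint script inside the container can then substitute that value into the server’s configuration file before launching it.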

A Little More

As you’ve seen so far, and might already know, when creating a Docker container that is usually meant to run in production environments, a lot needs to be taken care of for it to run as smoothly as possible.

Because RTC demands more control over network traffic, we also need a better understanding of what Docker containers are capable of, and of how data is routed across the Internet, and especially how it is routed to our container.

In RTC, because we use multiple protocols to define the communication itself, we need to provide the information that tells the other end, which can be a mobile phone or a laptop, which IPs to use, so it can reach the relevant machine running the container.

So at the end of the day, whether you publish ports so the host forwards the relevant traffic to your container, or use the host network driver, you still need to configure the relevant IP to use as your public-facing address.

In Conclusion

As you’ve seen, and might already know, there are many challenges when dealing with real-time applications, especially around routing packets to the container.

We saw that for local containers this can be solved by fetching the host machine’s IP, and by publishing a smaller number of ports once we understood how heavy that operation is for the host machine.

We haven’t talked too deeply about production deployments, but we understood that we can use host mode on Linux machines, or publish ports when we have a powerful machine dedicated only to running our containers. Of course, there are better solutions, namely Kubernetes deployments, which will be covered in another post; many of these challenges can be solved there easily once they are well understood.

In addition to all of that, because the idea behind this post is not that simple, maybe a fuller practical example is needed. If you’d like one, please let me know in the comments or in any other way, and I’ll try to provide it when possible.

I hope you had a great time reading this piece, and if you have any further questions, I would be delighted to answer them.
Also, if you have any opinions or suggestions for improving this piece, I would love to hear them :)

Thank you all for your time and I wish you a great journey!
