Have you ever thought about how you can contribute to your company's strategies by investing in streaming? Technology is being increasingly used for various purposes: remote work, training, courses, entertainment, and sales, among many others.
Most of us rarely go even one day without watching streaming video. This way of consuming content owes its popularity to the availability of video streaming protocols.
Files must be compressed for transport; this is done with a "codec" such as H.264, the most common one. Before files can be transferred, they must also be saved in a "container format" such as .mp4 or .avi.
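As a toy illustration of container formats, here is a sketch (in Python, with hand-built byte strings rather than real files) that guesses a container from a file's first bytes. Real-world detection, and codec inspection in particular, requires proper demuxing; this only shows that containers carry recognizable signatures.

```python
# Illustrative container detection from file signatures.
# MP4-family files carry b"ftyp" at byte offset 4; AVI files start
# with b"RIFF" and carry b"AVI " at offset 8. Codec detection would
# require parsing much deeper into the container.

def detect_container(header: bytes) -> str:
    """Guess the container format from the first 12 bytes of a file."""
    if len(header) >= 8 and header[4:8] == b"ftyp":
        return "mp4"
    if header[:4] == b"RIFF" and header[8:12] == b"AVI ":
        return "avi"
    return "unknown"

# First bytes of a typical MP4 file: box size, "ftyp", major brand.
mp4_header = bytes([0, 0, 0, 24]) + b"ftypisom"
print(detect_container(mp4_header))  # mp4
```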
The source of the video file can be directly from the camera of the broadcasting user in the case of a live broadcast or static files in the case of video on demand (VoD).
As the demand for video streaming continues to grow, thanks in part to increased internet penetration, the number of video streaming platforms is also on the rise. In the 1990s, streaming was mostly limited to sports broadcasts; in the 2000s, the technology began to take off with Flash and RTMP-based streaming. Then came YouTube, Netflix, and other platforms in the 2010s.
The video streaming market is vibrant today, with multiple platforms, providers, and uses, including live audio, movie and game streaming. Along with these developments, the capabilities of video streaming protocols have also expanded.
There are several video streaming protocols in existence today. Some of them can be called obsolete standards, but they still apply. Others, on the contrary, are developing rapidly, primarily due to open source.
Some of the protocols are relatively recent and will take time to become widespread, but they are the ones with the most significant potential to shape the video streaming pattern of the future. Not all protocols support the same codecs.
Below we consider the most common of them.
HLS is the most commonly used streaming protocol today. Apple originally released it in 2009 as an alternative to Flash, which the iPhone did not support. The protocol is compatible with many devices, from desktop browsers, smart TVs, set-top boxes, and Android and iOS mobile devices to HTML5-based video players. Naturally, this allows streaming companies to reach the broadest possible audience.
HLS also supports adaptive bitrate streaming, a technique that adjusts video delivery dynamically to provide the best possible quality for each end user's network conditions.
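To make adaptive bitrate concrete, here is a hedged sketch of how a player might read the bitrate ladder from an HLS master playlist and pick a rendition. The playlist text below is a made-up example, and real players (and real manifests) are considerably more sophisticated:

```python
# A made-up HLS master playlist advertising three bitrate variants.
MASTER_PLAYLIST = """\
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
low/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720
mid/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080
high/index.m3u8
"""

def parse_variants(playlist: str) -> list[dict]:
    """Return the variant streams (bandwidth + URI) from a master playlist."""
    variants, pending = [], None
    for line in playlist.splitlines():
        if line.startswith("#EXT-X-STREAM-INF:"):
            attrs = dict(
                kv.split("=", 1) for kv in line.split(":", 1)[1].split(",")
            )
            pending = int(attrs["BANDWIDTH"])
        elif pending is not None and line and not line.startswith("#"):
            variants.append({"bandwidth": pending, "uri": line})
            pending = None
    return variants

def pick_variant(variants: list[dict], measured_bps: int) -> dict:
    """Pick the highest-bandwidth variant the measured throughput can sustain."""
    eligible = [v for v in variants if v["bandwidth"] <= measured_bps]
    return max(eligible or variants[:1], key=lambda v: v["bandwidth"])

variants = parse_variants(MASTER_PLAYLIST)
print(pick_variant(variants, measured_bps=3_000_000)["uri"])  # mid/index.m3u8
```

Note that this naive attribute parsing would break on quoted values containing commas (such as a CODECS attribute); it is only meant to show the shape of the adaptation logic.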
The only serious drawback of the HLS protocol is its considerable latency. In streaming, latency is the time it takes for delivered content to travel from the source to the viewer, and it grows when large amounts of data are buffered along the way.
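A rough back-of-the-envelope sketch of where that latency comes from in classic segment-based HLS: a player typically buffers a few whole segments before it starts playback, so latency scales with segment duration. The buffer count and overhead figures below are illustrative assumptions, not measurements.

```python
# Why segment-based HLS lags behind a live event: latency roughly
# scales with (segment duration x segments buffered) plus encoding
# and network overhead. All constants here are illustrative.

def estimate_hls_latency(segment_seconds: float,
                         buffered_segments: int = 3,
                         encode_and_network_seconds: float = 2.0) -> float:
    """Rough glass-to-glass latency estimate for classic HLS delivery."""
    return segment_seconds * buffered_segments + encode_and_network_seconds

print(estimate_hls_latency(6.0))  # 20.0 -> roughly 20 s with 6 s segments
print(estimate_hls_latency(2.0))  # 8.0  -> shorter segments cut the delay
```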
MPEG-DASH is one of the newer streaming protocols, developed by the Moving Picture Experts Group (MPEG) as an open alternative to the HLS standard. It is an open, codec-agnostic standard, so it can be used with any audio or video codec.
Like HLS, MPEG-DASH supports adaptive bitrate streaming, allowing viewers to receive the highest quality video, depending on the level their network can support.
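As an illustration, a minimal, made-up DASH manifest (MPD) can be parsed with nothing but the Python standard library to recover the bitrate ladder that drives adaptation. Real manifests contain far more detail than this fragment:

```python
# Reading the bitrate ladder from a minimal, made-up MPEG-DASH manifest.
import xml.etree.ElementTree as ET

MPD = """\
<MPD xmlns="urn:mpeg:dash:schema:mpd:2011">
  <Period>
    <AdaptationSet mimeType="video/mp4">
      <Representation id="360p" bandwidth="800000"/>
      <Representation id="720p" bandwidth="2500000"/>
      <Representation id="1080p" bandwidth="5000000"/>
    </AdaptationSet>
  </Period>
</MPD>
"""

NS = {"dash": "urn:mpeg:dash:schema:mpd:2011"}
root = ET.fromstring(MPD)
# Each Representation advertises the bandwidth a client needs to play it.
ladder = {
    rep.get("id"): int(rep.get("bandwidth"))
    for rep in root.iterfind(".//dash:Representation", NS)
}
print(ladder)  # {'360p': 800000, '720p': 2500000, '1080p': 5000000}
```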
WebRTC is also an open-source project that aims to deliver streaming with real-time latency. Originally built on technology developed for VoIP applications, it became popular in video chat and conferencing applications after Google acquired the underlying technology and open-sourced it.
Some of the most common consumer applications today, such as Google Meet, Discord, Houseparty, GoToMeeting, WhatsApp, and Messenger, use the WebRTC protocol.
What makes WebRTC unique is that it is based on peer-to-peer streaming, which makes it the preferred solution when very low latency is required.
SRT is another open-source protocol developed by streaming technology provider Haivision. This protocol is the preferred protocol for members of the SRT Alliance: a group of companies that includes technology and telecommunications providers. The main advantages that SRT is known for are security, reliability, high compatibility and low latency streaming.
SRT can stream high-quality video even if network conditions are unstable. It is also independent of a single codec, allowing it to be used with any audio or video codec.
RTMP is a protocol already known to many. It was developed by Macromedia (now known as Adobe) to transfer audio and video files between a streaming server and Adobe Flash Player.
But with the phasing out of Flash in 2020, RTMP is now used less for delivering video to viewers and more for ingesting live streams into platforms through RTMP-enabled encoders. This means the video stream from the encoder is sent to the streaming platform via RTMP and then delivered to the end user via the standard HLS protocol.
RTSP is another legacy protocol, developed for the entertainment industry, that is primarily used to establish and manage multimedia sessions between endpoints. Unlike HLS, it does not carry the real-time streaming data itself; it only controls the session.
RTSP servers must work alongside RTP and other protocols to perform their streaming tasks.
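To show this division of labor, here is a small sketch that parses the fixed 12-byte RTP header (as defined in RFC 3550) from a hand-built packet: RTSP negotiates the session, while packets like this carry the media.

```python
# RTSP negotiates the session; the media usually travels in RTP packets.
# This parses the fixed 12-byte RTP header (RFC 3550) from a hand-built
# example packet, not captured network traffic.
import struct

def parse_rtp_header(packet: bytes) -> dict:
    """Unpack the fixed RTP header fields from a raw packet."""
    first, second, seq, timestamp, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version": first >> 6,          # top 2 bits: RTP version (2)
        "marker": bool(second >> 7),    # top bit of second byte
        "payload_type": second & 0x7F,  # remaining 7 bits
        "sequence": seq,
        "timestamp": timestamp,
        "ssrc": ssrc,
    }

# Hand-built packet: version 2, payload type 96 (dynamic), sequence 1000.
pkt = struct.pack("!BBHII", 0x80, 96, 1000, 160_000, 0xDEADBEEF)
hdr = parse_rtp_header(pkt)
print(hdr["version"], hdr["payload_type"], hdr["sequence"])  # 2 96 1000
```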
Although it supports low-latency streaming, incompatibility with most standard devices and browsers may be an issue. You can think of it as a protocol capable of delivering low-latency streaming to a select group of small audiences from a dedicated server.
Because most IP cameras still support RTSP, it remains the standard in video surveillance systems.
As noted above, the Real-Time Messaging Protocol (RTMP) runs on top of the Transmission Control Protocol (TCP). Like RTSP, it was initially developed to transmit audio, video, and other data in real time. Running over TCP gives it a persistent connection between the recording device and the server receiving the data, so users get a consistent and reliable stream from their recording devices.
RTMP is commonly used as a protocol for live-streaming platforms. It converts streams into playable formats by leveraging low-cost encoders.
RTSP and RTMP share many common characteristics and do not compete. The decision to use one over the other depends on the demands of your platform and streaming operation in general.
What's excellent about RTMP and RTSP is that they both offer low latency and can control streams, providing media on demand, in real time, over a stable connection.
However, RTSP is perfect as a cheaper and simpler streaming alternative. It developed significantly due to its widespread use by engineers when RTMP was isolated as a proprietary technology. As mentioned earlier, RTSP is the default with most IP cameras. It's excellent for localized streams and as an input to conferencing or monitoring systems.
While RTSP is beneficial, it has its drawbacks. Streams usually have to be repackaged for friendlier playback, which can introduce latency and delays. Given how often IP cameras are deployed in critical surveillance situations, it's important to overcome those latency issues so playback stays crisp and clear and you can identify what's happening on screen.
One of the best ways to ensure better video delivery is to use Web Real-Time Communications (WebRTC). This API has transcended the streaming scene by converting RTSP feeds into real-time streams displayed in clear quality with no playback issues.
WebRTC is compatible with most browsers and keeps delivery delay under a second. It provides a more consistent viewing experience than RTSP, which, once repackaged for playback, can accumulate up to 20 seconds of latency.
WebRTC works by relaying RTSP content. This approach depends on an effective media server that ingests your IP camera stream and repackages it into WebRTC. You can then access the URL of your web-hosted playback page whenever you want.
RTSP uses commands to send requests from the client to the server. This is all part of controlling and negotiating media streams.
RTSP uses the following commands: OPTIONS, DESCRIBE, SETUP, PLAY, PAUSE, and TEARDOWN.
These are coordinated to present the media in its best possible form. Users can access the content via a generated link once the data has been transferred and repackaged on the server. The ability to play files on demand without physically storing them on your device is one of the biggest reasons why RTSP will continue to play a prominent role in the streaming world.
As a protocol, RTSP is rarely used for direct playback because it does not produce a physical file that plays natively on a device. However, it is compatible with QuickTime Player, 3GPP-compatible mobile devices, and VLC media player.
RTSP is great for low-latency streaming, but it's not optimized for quality of experience and scalability. For this reason, adaptive bitrate streaming is widely used in other contexts, especially when IP cameras are not in operation.
Streaming content can be delivered in two different ways, and combining the two models is often part of a complete strategy for delivering content to customers.
Live streaming is one in which the generated signal is sent in real-time to the public. In this case, there is no need for storage. Audio and video are captured and converted using the encoder, then streamed directly over the internet from servers.
In video on demand (VoD), the recorded content (such as video lessons or podcasts) is stored on servers. As soon as the consumer presses play, the stream of that file starts immediately, with low latency.
The choice of video streaming protocol depends on certain factors that may be important to your business needs. You may want to reach as wide an audience as possible or minimize latency. Of course, you need to pay attention to the security and confidentiality of streams.
Below is a rough guide on how to make a choice based on these factors.
If you want to reach the broadest possible audience with streaming content, a protocol compatible with most devices, platforms, and browsers will do. HLS is the best option in this case. You can even choose it as the default solution if there is any doubt.
HLS provides the most comprehensive coverage for streaming but introduces the most latency in the transmission process. RTMP provides low latency streams but is not compatible with HTML5 video players.
SRT supports low-latency streams, while WebRTC provides real-time latency. If you choose one of these options, be aware that audience reach may suffer, as these protocols are less widely supported in the streaming technology environment.
If you can't compromise on either coverage or latency, one option is Low-Latency HLS (LL-HLS), an extension of HLS designed to keep its broad device compatibility while streaming with much lower latency.
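The trade-offs above can be condensed into a toy decision helper. This mapping is an illustrative simplification of the guidance in this article, not an exhaustive recommendation engine:

```python
# Toy decision helper condensing the protocol trade-offs discussed
# above. The mapping is an illustrative simplification.

def suggest_protocol(need_low_latency: bool, need_wide_reach: bool) -> str:
    """Suggest a streaming protocol from two coarse requirements."""
    if need_low_latency and need_wide_reach:
        return "Low-Latency HLS"
    if need_low_latency:
        return "WebRTC or SRT"
    return "HLS"

print(suggest_protocol(need_low_latency=False, need_wide_reach=True))  # HLS
print(suggest_protocol(need_low_latency=True, need_wide_reach=False))  # WebRTC or SRT
print(suggest_protocol(need_low_latency=True, need_wide_reach=True))   # Low-Latency HLS
```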
If the most important thing is to ensure the integrity and safety of streams on the way to the end user, it is worth using a protocol that provides security features. Most protocols, including the widely used HLS standard, provide secure streaming, but SRT is the protocol with best-in-class security and privacy features.
As discussed earlier, adaptive bitrate allows for the best possible video quality based on network, device, and end-user software capabilities. HLS and MPEG-DASH are the protocols that support this feature. To learn more about adaptive bitrate streaming, you can read our blog.
If you are planning to develop your own video platform, consider a cloud-based VoD content management system or an all-in-one real-time streaming solution that integrates receiving, managing, processing, publishing, and other aspects of video streaming on a single platform.
Developing successful multimedia applications for the internet is a highly challenging problem. Most current streaming protocols are based on either TCP or UDP. Both have advantages and disadvantages.
TCP provides reliable service, packet retransmission, and congestion and flow control. While reliable service is desirable, in TCP's case it comes with drawbacks such as increased latency and throttled throughput.
On each packet loss, TCP's congestion control sharply decreases the transmission rate, which then grows gradually until the next loss occurs. For this reason, TCP is usually avoided for real-time streaming applications.
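The sawtooth behavior described above can be sketched in a few lines of additive-increase / multiplicative-decrease logic; the numbers are illustrative, not a faithful TCP model:

```python
# Sketch of TCP-style additive-increase / multiplicative-decrease:
# the send rate climbs steadily, then halves on each packet loss.
# The constants are illustrative, not a faithful TCP model.

def simulate_aimd(rounds: int, loss_rounds: set[int],
                  start: float = 10.0, increase: float = 1.0) -> list[float]:
    """Return the send rate after each round under AIMD."""
    rate, history = start, []
    for r in range(rounds):
        if r in loss_rounds:
            rate /= 2          # multiplicative decrease on loss
        else:
            rate += increase   # additive increase otherwise
        history.append(rate)
    return history

rates = simulate_aimd(6, loss_rounds={3})
print(rates)  # [11.0, 12.0, 13.0, 6.5, 7.5, 8.5]
```

The drop at round 3 (13.0 down to 6.5) is exactly the rate collapse that makes TCP awkward for real-time media.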
Thus, UDP became the transport of choice for real-time protocols such as RTP and RTCP, even though it is unreliable and has no congestion control. Multicast techniques can efficiently distribute live audio and video to many receivers.
Some techniques currently used to improve the quality of streaming are:
And some problems remain, such as
In this article, we have talked about some protocols that allow the execution of multimedia applications over networks where current protocols have difficulty delivering the characteristics these applications need. There are several solutions with very different mechanisms for current challenges. Multiple data sources, selective packet dropping, congestion control, redundancy in data transmission, and packet forwarding are just a few of the tools of the new streaming protocols.
At Teyuto, we support the most popular streaming protocols, including RTMP for ingest and HLS and MPEG-DASH for output, to ensure the best possible video playback experience for viewers across a wide range of devices and platforms.