This is a common question, and the confusion is understandable: RTSP, RTP, RTCP, WebRTC, and oRTC all perform related functions.
They are all involved in transmitting video and audio data.
First, let’s break things up into four main categories:
- Session setup
- Transmitting video and audio
- Reporting packet loss
- Programming APIs
Session setup
RTSP is primarily used for session setup. Setup includes sharing the destination IP address, the client and server UDP/TCP ports used for sending video and audio data, and selecting the streams to send or receive.
Bonus: SIP and H.323 are also used for setting up real-time media sessions.
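To make the setup step concrete, here is a minimal sketch of the start of an RTSP exchange. RTSP is plain text over TCP, so requests can be built as strings; the camera URL below is a hypothetical example, and the reply is a canned response like a server might send.

```python
def build_describe(url: str, cseq: int) -> str:
    """Build an RTSP DESCRIBE request asking the server for its stream description (SDP)."""
    return (
        f"DESCRIBE {url} RTSP/1.0\r\n"
        f"CSeq: {cseq}\r\n"
        "Accept: application/sdp\r\n"
        "\r\n"
    )

def parse_status(response: str) -> int:
    """Pull the numeric status code out of an RTSP response line."""
    status_line = response.split("\r\n", 1)[0]  # e.g. "RTSP/1.0 200 OK"
    return int(status_line.split(" ")[1])

request = build_describe("rtsp://camera.example/stream1", cseq=1)
reply = "RTSP/1.0 200 OK\r\nCSeq: 1\r\n\r\n"  # canned example reply
print(parse_status(reply))  # 200
```

A real client would follow this with SETUP (to agree on UDP ports) and PLAY requests over the same TCP connection.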
WebRTC and oRTC allow video sessions to be set up too, but communicating the setup information between the two ends of the session is not part of these standards. Typically, a custom RESTful API (that you write) passes the handshake data between the two nodes.
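The out-of-band handshake can be sketched as a toy signaling exchange. Here a plain dict stands in for your custom RESTful service, and the payload dicts are stand-ins for the real SDP offers and answers the browser API would produce; real code would POST and GET these over HTTP or a WebSocket.

```python
# room_id -> {"offer": ..., "answer": ...}; a stand-in for your signaling server
signaling_server = {}

def post_offer(room: str, offer: dict) -> None:
    signaling_server.setdefault(room, {})["offer"] = offer

def get_offer(room: str) -> dict:
    return signaling_server[room]["offer"]

def post_answer(room: str, answer: dict) -> None:
    signaling_server[room]["answer"] = answer

def get_answer(room: str) -> dict:
    return signaling_server[room]["answer"]

# Caller publishes its (fake) offer; callee reads it and replies with an answer.
post_offer("room42", {"type": "offer", "sdp": "v=0 ..."})
offer_seen_by_callee = get_offer("room42")
post_answer("room42", {"type": "answer", "sdp": "v=0 ..."})
print(get_answer("room42")["type"])  # answer
```

Once both ends have exchanged these descriptions, the media itself flows directly between them; the signaling channel is only needed for setup.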
Transmitting video and audio
RTP is used for transmitting video and audio data. It typically sends its data over UDP, but can send it over TCP, or even interleaved in an RTSP session’s TCP connection. RTP is best used for low-latency applications, such as video chats, phone calls, or live security feeds.
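To show what RTP actually puts in front of each chunk of audio or video, here is a minimal sketch of the 12-byte fixed header from RFC 3550. Payload type 96 is a common dynamic payload type; the sequence number, timestamp, and SSRC values below are arbitrary examples.

```python
import struct

def pack_rtp_header(seq: int, timestamp: int, ssrc: int, payload_type: int = 96) -> bytes:
    version = 2                        # RTP version is always 2
    first_byte = version << 6          # padding=0, extension=0, CSRC count=0
    second_byte = payload_type & 0x7F  # marker bit = 0
    # network byte order: flags, flags, 16-bit seq, 32-bit timestamp, 32-bit SSRC
    return struct.pack("!BBHII", first_byte, second_byte, seq, timestamp, ssrc)

def unpack_rtp_header(data: bytes) -> dict:
    first, second, seq, ts, ssrc = struct.unpack("!BBHII", data[:12])
    return {
        "version": first >> 6,
        "payload_type": second & 0x7F,
        "seq": seq,
        "timestamp": ts,
        "ssrc": ssrc,
    }

hdr = pack_rtp_header(seq=1000, timestamp=160000, ssrc=0xDEADBEEF)
print(unpack_rtp_header(hdr))
```

The sequence number lets the receiver detect loss and reordering, and the timestamp lets it play samples out at the right pace even when packets arrive unevenly.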
Reporting packet loss
RTCP is used to report packet loss, and it works in conjunction with RTP. While a sender is transmitting video to a receiver over RTP, the receiver sends RTCP quality reports back to the sender. The sender can use this information to lower its bit rate and reduce packet loss.
Because RTCP is used alongside RTP, WebRTC and oRTC both utilize RTCP.
Programming APIs
WebRTC and oRTC are APIs that expose real-time video and audio to a browser. WebRTC is used by Chrome, Firefox, and Safari, while oRTC is Microsoft-specific. To support all of these browsers, you will need to write the client-side code twice: once against WebRTC and once against oRTC.
Despite the different APIs, WebRTC and oRTC can communicate with each other, because the data they transmit between browsers is the same.