IP multicast is an interesting technology. Its main purpose is to save as much network bandwidth as possible – traffic is sent only to hosts that asked for it (as opposed to broadcast). On the other hand, you need smarter (managed) switches and specific, non-trivial configuration on both routers and switches. It gets even more complicated when you try to make it work over a VPN.
I have a server which can join multicasts using IGMP on one interface and has a public IP on another interface. I wanted to make those multicasts available via VPN.
It has a static route for multicast so it knows where to join them:
route add -net 224.0.0.0/4 dev eth1
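With iproute2 the equivalent should be something like this (same eth1 upstream interface assumed):
ip route add 224.0.0.0/4 dev eth1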
So my first try was to use OpenVPN with a tun device (routed IP packets). I set it up to push the route to clients:
push "route 224.0.0.0 240.0.0.0"
And to preserve the TOS value for QoS:
passtos
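Put together, the relevant part of the server config could look roughly like this (the dev type and push line are from this post; the server subnet is just an illustrative example):
dev tun
server 10.8.0.0 255.255.255.0
push "route 224.0.0.0 240.0.0.0"
passtos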
Next, I needed to make my server join on eth1 and route multicasts to the tun0 interface. Linux has native support for multicast routing, but it needs to be managed by a userspace application which statically or dynamically manages routes. For this I used igmpproxy. It listens for IGMP packets on tun0, joins/leaves multicasts on eth1 and installs multicast routes accordingly.
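A minimal igmpproxy.conf for this setup might look like the following (interface names are taken from this post; the altnet is an assumption and should cover the multicast source network, 10.0.0.0/8 here to match the 10.2.3.1 source below):
phyint eth1 upstream ratelimit 0 threshold 1
    altnet 10.0.0.0/8
phyint tun0 downstream ratelimit 0 threshold 1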
# ip mroute show
(10.2.3.1, 239.12.12.1)          Iif: eth1       Oifs: tun0
This was working OK, but it had one huge disadvantage: OpenVPN treats multicast as broadcast and sends it to all clients. If you have VPN clients with poor network or CPU performance, you can effectively make the VPN unusable for them.
igmpproxy can listen on multiple interfaces (even dynamically created ones), so if each client had its own tun interface, the above problem would disappear, but unfortunately OVPN can't do this. There are VPNs which do this by default (e.g. pptpd – I tested it and it works), but I wanted to stay with OpenVPN.
Solution
So I came up with the idea of using a tap device. Even though OVPN does the same as in TUN mode and broadcasts packets to all clients (there are some efforts to change this), I got a new opportunity: to direct the packets using the Ethernet header.
Solution in 2024
Thanks to Felix Fietkau, there is now a good way to do it using the kernel. You just need to create a bridge device:
brctl addbr br-mcast
ifconfig br-mcast up
Add the tap interface to br-mcast and move its IP to br-mcast, e.g.:
brctl addif br-mcast tap0
ip a del 10.8.0.4/24 dev tap0
ip a add 10.8.0.4/24 dev br-mcast
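On systems without brctl, the whole bridge setup can be done with iproute2 alone; a sketch using the same interface names and addresses as above:
ip link add br-mcast type bridge
ip link set br-mcast up
ip link set tap0 master br-mcast
ip a del 10.8.0.4/24 dev tap0
ip a add 10.8.0.4/24 dev br-mcast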
Configure igmpproxy to do multicast routing to br-mcast (igmpproxy.conf):
phyint br-mcast downstream ratelimit 0 threshold 1
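Note that the upstream side (eth1 in this post's example) still needs its own phyint line in the same file, just as in the tun setup above, e.g.:
phyint eth1 upstream ratelimit 0 threshold 1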
Restart igmpproxy and you should now see working multicast routing to the tap interface (which will be changed to broadcast by OpenVPN) – note the destination multicast MAC address:
tcpdump -i tap0 -n -e | head
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on tap0, link-type EN10MB (Ethernet), capture size 262144 bytes
08:37:18.762873 c2:23:09:78:45:fe > 01:00:5e:0a:2f:07, ethertype IPv4 (0x0800), length 46: 10.8.0.1 > 233.10.47.7: igmp v2 report 233.10.47.7
08:37:18.787610 e6:35:6e:79:aa:e4 > 01:00:5e:0a:2f:07, ethertype IPv4 (0x0800), length 1370: 194.160.9.3.47294 > 233.10.47.7.1234: UDP, length 1328
08:37:18.788768 e6:35:6e:79:aa:e4 > 01:00:5e:0a:2f:07, ethertype IPv4 (0x0800), length 1370: 194.160.9.3.47294 > 233.10.47.7.1234: UDP, length 1328
08:37:18.789711 e6:35:6e:79:aa:e4 > 01:00:5e:0a:2f:07, ethertype IPv4 (0x0800), length 1370: 194.160.9.3.47294 > 233.10.47.7.1234: UDP, length 1328
08:37:18.790646 e6:35:6e:79:aa:e4 > 01:00:5e:0a:2f:07, ethertype IPv4 (0x0800), length 1370: 194.160.9.3.47294 > 233.10.47.7.1234: UDP, length 1328
08:37:18.791578 e6:35:6e:79:aa:e4 > 01:00:5e:0a:2f:07, ethertype IPv4 (0x0800), length 1370: 194.160.9.3.47294 > 233.10.47.7.1234: UDP, length 1328
08:37:18.792720 e6:35:6e:79:aa:e4 > 01:00:5e:0a:2f:07, ethertype IPv4 (0x0800), length 1370: 194.160.9.3.47294 > 233.10.47.7.1234: UDP, length 1328
08:37:18.793725 e6:35:6e:79:aa:e4 > 01:00:5e:0a:2f:07, ethertype IPv4 (0x0800), length 1370: 194.160.9.3.47294 > 233.10.47.7.1234: UDP, length 1328
Now comes the magic: change the bridge port setting:
bridge link set dev tap0 mcast_to_unicast on
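To check that the option took effect (a verification step I would suggest, not part of the original setup), newer iproute2 versions list the per-port multicast options in the detailed bridge output:
bridge -d link show dev tap0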
After this, you can see that the destination MAC is the unicast MAC of the remote party:
tcpdump -i tap0 -n -e | head
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on tap0, link-type EN10MB (Ethernet), capture size 262144 bytes
08:39:07.354869 c2:23:09:78:45:fe > 01:00:5e:0a:2f:07, ethertype IPv4 (0x0800), length 46: 10.8.0.1 > 233.10.47.7: igmp v2 report 233.10.47.7
08:39:07.375981 e6:35:6e:79:aa:e4 > c2:23:09:78:45:fe, ethertype IPv4 (0x0800), length 1370: 194.160.9.3.47294 > 233.10.47.7.1234: UDP, length 1328
08:39:07.376643 e6:35:6e:79:aa:e4 > c2:23:09:78:45:fe, ethertype IPv4 (0x0800), length 1370: 194.160.9.3.47294 > 233.10.47.7.1234: UDP, length 1328
08:39:07.377934 e6:35:6e:79:aa:e4 > c2:23:09:78:45:fe, ethertype IPv4 (0x0800), length 1370: 194.160.9.3.47294 > 233.10.47.7.1234: UDP, length 1328
08:39:07.378922 e6:35:6e:79:aa:e4 > c2:23:09:78:45:fe, ethertype IPv4 (0x0800), length 1370: 194.160.9.3.47294 > 233.10.47.7.1234: UDP, length 1328
08:39:07.379625 e6:35:6e:79:aa:e4 > c2:23:09:78:45:fe, ethertype IPv4 (0x0800), length 1370: 194.160.9.3.47294 > 233.10.47.7.1234: UDP, length 1328
08:39:07.380748 e6:35:6e:79:aa:e4 > c2:23:09:78:45:fe, ethertype IPv4 (0x0800), length 1370: 194.160.9.3.47294 > 233.10.47.7.1234: UDP, length 1328
To automate this, you can use an OpenVPN up script with the following content:
#!/bin/bash
# $1 = tap device name, $4 = local VPN IP (arguments passed by OpenVPN to the up script)
exec >> /var/log/openvpn-up.log
exec 2>&1
ip a del $4 dev $1
brctl addif br-mcast $1
bridge link set dev $1 mcast_to_unicast on
The bridge interface and its IP should be configured in your OS's standard way.
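To hook the script into OpenVPN, the server config needs to allow external scripts and point to it; the path below is just an example:
script-security 2
up /etc/openvpn/br-mcast-up.sh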
Old solution
The idea is the following: I'll write a daemon which will listen for and process IGMP joins on the VPN tap interface. When it gets one, it will record the sender's MAC address, join the requested group on the upstream interface and listen for incoming multicast packets. It will then take the whole IP packet, prepend the recorded destination MAC and some source MAC, and send it to the tap interface. This should cause the packet to be sent to only one client while keeping the IP payload unchanged.
So I wrote a prototype and to my surprise it also WORKS! At least on Linux. You can find it here.
Feel free to test it. Now, to explain the title: this daemon can also be used on networks with dumb switches to avoid network flooding. The traffic effectively changes to unicast on the link layer and is delivered directly to subscribers.
I hope this helps someone with a similar problem.
Looking forward to your comments and suggestions.
Bye!
Hi Danman
I'm struggling with openvpn and multicast myself, but am fairly new to linux and openvpn.
I have my openvpn running on a virtual server, and devices can see each other by ping, but I need to allow multicast.
The devices are on mobile 3g/4g networks with the VPN to the openvpn server and, from what I understand, Android devices do not allow TAP, which in my case leaves me with a routed VPN.
Not sure if I understand your topology correctly, but any hints/guides would be gratefully accepted.
best regards
Kevin
Ok and what do you want to achieve?
Hi again.
I want the multicast sent from the tunnel returned to other devices in the tunnel, if this makes sense
Best regards
K
Awesome, thank you.
On my router I have the following interfaces:
vlan1 : Ethernet LAN
ath1 : Wifi LAN
vlan2 : WAN
tap0 : TAP for openvpn
br0 : bridge vlan1 / ath1 (whole LAN) and tap0
In vpnmcast.conf:
sourceif = "tap0"
destifs = ["br0"]
From a remote OpenVPN client, I'm able to see the DLNA server using:
$ gssdp-discover -i tap0 --timeout=3
But from VLC > UPnP Discover, the server is not listed… any idea why?
What UPnP client do you use?
Thanks !
Did you try to add a route for multicast over tap0?
hmmmm……
gssdp-discover shows the DLNA server even if vpnmcast is not running.
There is something I clearly don't understand :s
Thanks, you are right, adding this route client-side makes the server appear in VLC:
# route add -net 224.0.0.0 netmask 240.0.0.0 tap0
I cannot browse files from VLC; I don't understand why, because vpnmcast shows:
22: 2 -> XXX.XXX.168.192 250.255.255.239 -> INVERTED DLNA SERVER IP
adding sender 239.255.255.250
adding forward 239.255.255.250 5404a6cd2934
Answering myself: it seems that VLC indexes all media on the server (instead of indexing only the opened folder), so it can take very, very long…
so does it work?
Hi there,
We’re running a similar setup using OPNsense.
We have “bridge0” with two client interfaces:
ovpns1 – the tap interface used by the OpenVPN server
vtnet1 – the interface that plugs into the IPTV router
Since both interfaces are in the same bridge, all VPN clients will be connected to the IPTV router transparently when they bridge their tap interfaces with their IPTV STB network interfaces.
However, we have the same problem: when a single VPN client does a multicast join, all other VPN clients receive the traffic as well.
Will your python script also work in this scenario? e.g. when both interfaces are part of a network bridge?
Hi, no, it won’t work. You have to separate the networks, they cannot be bridged.