Forward all traffic from one NIC to another IP



  • I've got a setup of servers with a bunch of LXC containers on them (the hypervisor is called Proxmox). They're all connected in bridged mode to the internal network through the first NIC; from there it's through the router to the wider internet, i.e. NAT.

    Now, I've got those lovely measuring devices which you can connect to via a web interface. However, they're only doing HTTP (stands to reason: installing certificates on such devices can be a pain), which in turn prohibits me from embedding said web interface into my websites, because those all run on HTTPS (and thus browsers rightfully decline to include HTTP content). A reverse proxy does help in that case, but due to how the network works I'd need two reverse proxies chained behind each other, and websocket connections don't seem to like that. One is fine, two is: "Nope, you only get HTML!"

    But now I've got a couple of lovely public IP addresses and managed to assign one of them to the 2nd NIC of one of the servers. Due to budget constraints I cannot get another server dedicated solely to this, but I also don't really want to connect the server itself to the internet; I'd rather have one of the LXC containers fill that role.

    Now, Proxmox offers the following three network options out of the box: Internal (i.e. only talking to other containers), NAT and Bridge - the latter of which is almost what I want but would still leave the server itself exposed.
    I found some instructions on how to hand over PCIe devices to containers (i.e. the server saying: "This is yours now! I don't care what you do with it") but the instructions for that do not work for some reason (as usual: No error messages, simply a silent "Nope").

    So I had the idea of using iptables to reroute the packets, i.e. take everything coming in for IP 200.100.50.25 on device eno2, forward it to IP 10.10.10.10 on device vmbr0, and enable responses to go back, of course.

    However, all the instructions I'm able to find are meant for a NAT. Not what I need. And iptables is deep magic so...

    ... anybody have an idea on how to achieve this or an alternative or a better idea?


  • Grade A Premium Asshole

    @Rhywden said in Forward all traffic from one NIC to another IP:

    Now, Proxmox offers the following three network options out of the box: Internal (i.e. only talking to other containers), NAT and Bridge - the latter of which is almost what I want but would still leave the server itself exposed.

    Bridge would have the server itself connected directly to the internet, but if you VLAN'd that, wouldn't you be adequately covered? At that point, in theory, all traffic should be contained from the WAN to the virtual interface adapter.

    In theory.
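    For reference, a VLAN-aware bridge in /etc/network/interfaces would look roughly like this (a sketch only; the vmbr1/eno2 names are assumptions based on the setup described above):

    auto vmbr1
    iface vmbr1 inet manual
        bridge-ports eno2
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

    The container's virtual NIC then gets a VLAN tag of its own, and the host's management interface stays on a different VLAN.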

    @Rhywden said in Forward all traffic from one NIC to another IP:

    I found some instructions on how to hand over PCIe devices to containers (i.e. the server saying: "This is yours now! I don't care what you do with it") but the instructions for that do not work for some reason (as usual: No error messages, simply a silent "Nope").

    Yeah, PCI-E passthrough on Proxmox is not exactly plug-n-play. Lots of arcane incantations and hoping that things work. What hardware is Proxmox installed on?



  • @Polygeekery said in Forward all traffic from one NIC to another IP:

    but if you VLAN'd that wouldn't you be adequately covered?

    Well, I only have two public IP addresses and wouldn't that consume one of them?

    What hardware is Proxmox installed on?

    [screenshot: hardware information]

    That's all I can pull up right now. Intel VT-d is enabled in the BIOS, though; already checked.


  • Grade A Premium Asshole

    @Rhywden VT-d is part of the equation, but your motherboard also needs to support IOMMU for PCI passthrough to work. I assume this is server-grade hardware, so in theory it should support it, but that's not guaranteed. Do you have SSH access to Proxmox?



  • @Polygeekery Yes, either through the console/WebGUI or a "proper" SSH client.

    Already tried adding the magic GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on" to the kernel command line, updated GRUB, and rebooted, but no dice.
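    In other words, roughly this:

    # /etc/default/grub
    GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

    update-grub
    reboot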

    edit: Oh, and in the BIOS menu I found the Intel VT-d settings enabled under the heading "Northbridge", so the motherboard should support it. Unless it's lying to me ;)


  • Grade A Premium Asshole

    @Rhywden what is the output of:

    dmesg | grep IOMMU  
    

    and

    dmesg | grep dmar
    


  • @Polygeekery

    [    0.197603] DMAR-IR: IOAPIC id 12 under DRHD base  0xfbffc000 IOMMU 2
    [    0.197607] DMAR-IR: IOAPIC id 11 under DRHD base  0xe0ffc000 IOMMU 1
    [    0.197610] DMAR-IR: IOAPIC id 10 under DRHD base  0xc5ffc000 IOMMU 0
    [    0.197613] DMAR-IR: IOAPIC id 8 under DRHD base  0xaaffc000 IOMMU 3
    [    0.197616] DMAR-IR: IOAPIC id 9 under DRHD base  0xaaffc000 IOMMU 3
    

    and

    [    0.197549] DMAR: dmar0: reg_base_addr c5ffc000 ver 1:0 cap 8d2078c106f0466 ecap f020df
    [    0.197561] DMAR: dmar1: reg_base_addr e0ffc000 ver 1:0 cap 8d2078c106f0466 ecap f020df
    [    0.197571] DMAR: dmar2: reg_base_addr fbffc000 ver 1:0 cap 8d2078c106f0466 ecap f020df
    [    0.197580] DMAR: dmar3: reg_base_addr aaffc000 ver 1:0 cap 8d2078c106f0466 ecap f020df
    

  • Grade A Premium Asshole

    @Rhywden VT-d is on in the BIOS and IOMMU appears to be available, so you should be good to go on PCI passthrough. I assume you followed the Proxmox wiki's PCI passthrough guide?



  • @Polygeekery Well, yes. It's working? Weird. Because they said this:

    [screenshot: excerpt from the guide's verification step, mentioning an "IOMMU enabled" dmesg message]

    Didn't see that "enabled" message so I assumed that something was wrong.
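    For reference, the check in question is something like

    dmesg | grep -e DMAR -e IOMMU

    with an "IOMMU enabled" line being the thing that never showed up here (going by the guide; treat the exact wording as an assumption).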



  • @Polygeekery But I guess I now need to check whether ACS (Access Control Services) is enabled, because

    find /sys/kernel/iommu_groups/ -type l
    

    yields an empty set. Thanks for the help!
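    (If it does turn out to be an ACS problem, the workaround that keeps coming up is the ACS override kernel parameter, which I believe the Proxmox kernel ships support for; the exact string here is an assumption on my part:

    pcie_acs_override=downstream,multifunction

    appended to the kernel command line just like the intel_iommu=on bit.)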


  • Grade A Premium Asshole

    @Rhywden I never said that it was working. I only went with the two steps that came to mind to check if IOMMU and VT-d were enabled. Also, I have never really worked with LXC containers. Let me look through the documentation on Proxmox PCI passthrough yet again, but I cannot do so until later.

    The way we use it is a bit... unconventional. We pass a GPU through to a Debian VM that is then running Docker. Because raisins.



  • @Polygeekery No rush. I now have to wait until Monday anyway.


  • Grade A Premium Asshole

    @Rhywden said in Forward all traffic from one NIC to another IP:

    @Polygeekery But I guess I now need to check whether ACS (Access Control Services) is enabled, because

    find /sys/kernel/iommu_groups/ -type l
    

    yields an empty set. Thanks for the help!

    Ohhhhhh, I have run into something similar to this before. The server that we deploy only supports... something (IOMMU, VT-d, ACS?) on certain PCI slots. IIRC we could only pass through devices on slots 1-3 or something like that. Start Googling your server model and PCI passthrough and chances are someone else has run into this.

    You can also try looking through the manufacturer's documentation, but good luck: they could refer to it by any number of terms.
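    Once devices do show up in IOMMU groups, a little loop like this maps each device to its group, which tells you what can be passed through independently (the usual snippet that floats around; it assumes lspci from pciutils is installed):

    for d in /sys/kernel/iommu_groups/*/devices/*; do
        g=${d#/sys/kernel/iommu_groups/}; g=${g%%/*}
        printf 'IOMMU group %s: ' "$g"
        lspci -nns "${d##*/}"
    done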


  • Trolleybus Mechanic

    @Rhywden said in Forward all traffic from one NIC to another IP:

    So I had the idea of using iptables to reroute the packets, i.e. take everything coming in for IP 200.100.50.25 on device eno2, forward it to IP 10.10.10.10 on device vmbr0, and enable responses to go back, of course.

    However, all the instructions I'm able to find are meant for a NAT. Not what I need. And iptables is deep magic so...

    ... anybody have an idea on how to achieve this or an alternative or a better idea?

    actually you do need NAT

    If I understand you right, you want to have it such that everything that comes in at 200.100.50.25:443 is handled by 10.10.10.10:443.

    To "forward" packages that arrive at 200.100.50.25 to 10.10.10.10 you need to replace the destination address in the packages -> that's called destination nat

    having that in place, the answers back from 10.10.10.10 would carry source address 10.10.10.10, which is not what the external PC expects (200.100.50.25). The kernel's connection tracking rewrites the reply source back automatically, but only if the replies actually pass back through the same box; if 10.10.10.10's default route points elsewhere, you additionally need source NAT (SNAT) to force them through.
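    A sketch of the whole thing, using the addresses from your post (MASQUERADE stands in for an explicit SNAT so the host's own address doesn't need hard-coding; adjust interfaces and ports to taste):

    # allow the kernel to forward packets at all
    sysctl -w net.ipv4.ip_forward=1

    # DNAT: packets arriving on eno2 for 200.100.50.25:443 get sent to 10.10.10.10
    iptables -t nat -A PREROUTING -i eno2 -d 200.100.50.25 -p tcp --dport 443 \
        -j DNAT --to-destination 10.10.10.10

    # source NAT on the way out so the replies come back through this box
    iptables -t nat -A POSTROUTING -o vmbr0 -d 10.10.10.10 -p tcp --dport 443 \
        -j MASQUERADE

    # and let the redirected traffic through the FORWARD chain
    iptables -A FORWARD -d 10.10.10.10 -p tcp --dport 443 -j ACCEPT
    iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT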

    If you don't want to look into the specifics of getting that running and can live with not doing the networking in the kernel, socat is an option:

    socat TCP4-LISTEN:443,fork,bind=200.100.50.25 TCP4:10.10.10.10:443


  • Grade A Premium Asshole

    @OloEopia said in Forward all traffic from one NIC to another IP:

    actually you do need NAT

    Not necessarily. Later he mentioned assigning one of his public IP addresses to the container. If he did that he would not need to NAT it.



  • @OloEopia Thanks, that'll work for some of it. Though I actually prefer the virtualization method, because that way the host is not involved at all beyond the initial setup and the guest completely owns the network connection for that NIC. I'm also thinking of using the 2nd public IP for a TURN/STUN server, and I'm not sure how well that plays with NAT (after all, the installation instructions state: you don't need beefy hardware, but connect it directly to the internet and do not run anything else on that box).

    @Polygeekery Well, I guess I forgot to mention that I was using the onboard NICs. Seems that you can't forward those. Ordered a dedicated PCIe network card (an Intel I210-T1); let's see if that does anything. And even if it doesn't work, the 40€ for that card don't really hurt.


  • Grade A Premium Asshole

    @Rhywden said in Forward all traffic from one NIC to another IP:

    Well, I guess I forgot to mention that I was using the onboard NICs. Seems that you can't forward those.

    If you mean PCI passthrough, then no. AFAIK it works on a per-slot basis and those are not in a slot. Or at least it will be a bit more difficult to do. I most certainly could be wrong about that, though.

    Of course, now that I think about it, why not bridge the NIC port and assign it to the appropriate container or VM? I know you said:

    @Rhywden said in Forward all traffic from one NIC to another IP:

    Now, Proxmox offers the following three network options out of the box: Internal (i.e. only talking to other containers), NAT and Bridge - the latter of which is almost what I want but would still leave the server itself exposed.

    Bridging the physical interface to a container or VM would be no more of a security concern than the PCI passthrough you are setting up to do. Or am I missing something here?
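    Concretely, what I mean is a second bridge with no IP on the host side at all, something like this in /etc/network/interfaces (a sketch; the vmbr1/eno2 names are assumptions based on your earlier posts):

    auto vmbr1
    iface vmbr1 inet manual
        bridge-ports eno2
        bridge-stp off
        bridge-fd 0

    The container's virtual NIC then attaches to vmbr1 and owns the public address; the host just shuffles frames.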



  • @Polygeekery The way I've seen Proxmox do bridging necessitates assigning an IP to the bridge on both sides.



  • @Polygeekery Okay, so, the new network card arrived today and it seems I overlooked something...

    See, I followed the instructions to enable IOMMU, but seem to have found an older version of them somewhere else on Proxmox's site, because that one only mentioned editing the GRUB config file.

    Turns out that for my setup I need to edit the systemd-boot config file instead.

    Because after doing so (and rebooting), everything now shows up. Might even be that I can pass the onboard NIC through after all! But I have the additional NIC working now regardless, so there's that. So I could assign the two external IPs I have to two separate VMs.
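    For reference, the systemd-boot variant goes roughly like this; the root= part is whatever is already in the file (the one shown here is just an example), only intel_iommu=on gets appended:

    # /etc/kernel/cmdline -- everything on one single line:
    root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet intel_iommu=on

    proxmox-boot-tool refresh
    reboot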


  • Grade A Premium Asshole

    @Rhywden said in Forward all traffic from one NIC to another IP:

    seem to have found an older version of them somewhere else on Proxmox's site.

    Oh, yeah, that is an issue you have to watch out for. Usually when I am looking for a solution to some problem I will Google some search term. You know, like how we all do our jobs. It is 99% Googling and/or copying and pasting from StackOverflow.

    For some reason the Proxmox wiki ranks higher for older versions of the software, so whenever you pull up anything you have to be careful that you are looking at current information for the version you are on, not partial instructions from 1-3 versions back. That has also bitten me once or twice.

