Multi-NIC

From The Linux Source

Revision as of 12:30, 9 May 2017

Multi-NIC Routing (ent 7)

The multi-NIC routing scenario has not yet been tried/tested on Enterprise 7. Things may work correctly based on (possibly) proper gateway settings per NIC, if that works correctly under ent 7. If not, we know how to add static routes on ent 7, and can replicate the pre-ent 7 configuration via NetworkManager (nmcli).

There was some testing done here, we ended up doing the Source-based Routing (below).

Multi-NIC Routing (before ent 7)

Before Enterprise 7 we could not have a working gateway per interface (even though every interface config file lets you set a gateway, that value just overwrites the default gateway). So the default gateway has to point at the outside or customer-facing network (since we cannot possibly know all the IPs/networks those connections would be coming from), and static routes have to be set to every possible network and host it needs access to on our inside network. Here is an example for /etc/sysconfig/network-scripts/route-eth1, where eth0 (the default) is the primary/outside/customer network and eth1 is the secondary/internal/private network.

Static list for NOTEL (example, the NOTEL data center no longer exists)

# default network (set this for your specific env/stack)
ADDRESS0=172.200.200.0
NETMASK0=255.255.255.0
GATEWAY0=172.200.200.1
# VPN network
ADDRESS1=10.100.100.0
NETMASK1=255.255.255.0
GATEWAY1=172.200.200.1
# DNS host 1
ADDRESS2=210.210.90.80
NETMASK2=255.255.255.255
GATEWAY2=172.200.200.1
# DNS host 2
ADDRESS3=210.210.120.140
NETMASK3=255.255.255.255
GATEWAY3=172.200.200.1
# spacewalk host
ADDRESS4=172.200.90.60
NETMASK4=255.255.255.255
GATEWAY4=172.200.200.1
# trusted host
ADDRESS5=172.200.90.50
NETMASK5=255.255.255.255
GATEWAY5=172.200.200.1
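The NETMASKn values above map to CIDR prefix lengths (255.255.255.0 = /24, 255.255.255.255 = /32, a single host). A hypothetical bash helper, for illustration only, that does the conversion when translating these entries into ip route syntax:

```shell
# mask2prefix: hypothetical helper that converts a dotted-quad netmask to a
# CIDR prefix length by counting the set bits in each octet
mask2prefix() {
    local IFS=. octet bits=0
    for octet in $1; do
        while [ "$octet" -gt 0 ]; do
            bits=$((bits + (octet & 1)))
            octet=$((octet >> 1))
        done
    done
    echo "$bits"
}

mask2prefix 255.255.255.0    # -> 24
mask2prefix 255.255.255.255  # -> 32
```

For example, the ADDRESS2/NETMASK2/GATEWAY2 entry is equivalent at runtime to `ip route add 210.210.90.80/32 via 172.200.200.1 dev eth1`.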

Teaming (ent 7)

1. add the teaming interface

# nmcli con add type team con-name team0 ifname team0 config '{"runner": {"name": "loadbalance"}}'

2. set IP address info

# nmcli con mod team0 ipv4.method manual ipv4.addresses 172.100.200.140/24

3. add the first NIC

# nmcli con add type team-slave con-name team0-slave1 ifname em1 master team0

4. add the second NIC

# nmcli con add type team-slave con-name team0-slave2 ifname em2 master team0
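The config string passed in step 1 is teamd JSON ("loadbalance" spreads traffic across the ports; "activebackup" is the failover alternative). Since a typo in that string is easy to make, a quick sanity check before feeding it to nmcli:

```shell
# teamd runner config from step 1; pipe it through a JSON parser to catch typos
TEAM_CFG='{"runner": {"name": "loadbalance"}}'
echo "$TEAM_CFG" | python3 -m json.tool
```

Once the slaves are added, bring the team up with `nmcli con up team0` and check its state with `teamdctl team0 state`.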

Bonding (before ent 7)

Before Enterprise 7, interface bonding was configured via various config files in /etc/sysconfig/network-scripts/ (this has been rewritten in ent 7 and is now called Teaming). Example setup:

eth0 config (ifcfg-eth0)

# Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
HWADDR=D4:BE:D9:AA:D7:16
MASTER=bond0
SLAVE=yes

eth1 config (ifcfg-eth1)

# Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet
DEVICE=eth1
BOOTPROTO=none
ONBOOT=yes                                                                                 
HWADDR=D4:BE:D9:AA:D7:18
MASTER=bond0
SLAVE=yes

bond0 config (ifcfg-bond0)

DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
BONDING_OPTS="miimon=100 mode=1"
IPADDR=172.200.110.140
NETMASK=255.255.255.0

Additional bond IPs: bond0:0 config (ifcfg-bond0:0)

DEVICE=bond0:0
BOOTPROTO=none
ONBOOT=yes
IPADDR=172.200.110.200
NETMASK=255.255.255.0
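On ent 5/6 the bonding kernel module also needs to be aliased to the device name or bond0 won't come up. Note that mode=1 in the BONDING_OPTS above is active-backup (one slave carries traffic, the other is standby):

```shell
# /etc/modprobe.d/bonding.conf (ent 6; ent 5 used /etc/modprobe.conf instead)
alias bond0 bonding
```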

ifconfig output

bond0    Link encap:Ethernet  HWaddr D4:BE:D9:AA:D7:16
         inet addr:172.200.110.140  Bcast:172.200.110.255  Mask:255.255.255.0
         inet6 addr: fe80::d6be:d9ff:feaa:d716/64 Scope:Link
         UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
         RX packets:951518061 errors:0 dropped:244110 overruns:0 frame:0
         TX packets:377721364 errors:0 dropped:0 overruns:0 carrier:0
         collisions:0 txqueuelen:0
         RX bytes:868579848472 (808.9 GiB)  TX bytes:88332253777 (82.2 GiB)

bond0:0  Link encap:Ethernet  HWaddr D4:BE:D9:AA:D7:16
         inet addr:172.200.110.200  Bcast:172.200.110.255  Mask:255.255.255.0
         UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1

eth0     Link encap:Ethernet  HWaddr D4:BE:D9:AA:D7:16
         UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
         RX packets:244110 errors:0 dropped:244110 overruns:0 frame:0
         TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
         collisions:0 txqueuelen:1000
         RX bytes:15623040 (14.8 MiB)  TX bytes:0 (0.0 b)

eth1     Link encap:Ethernet  HWaddr D4:BE:D9:AA:D7:18
         UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
         RX packets:3095102322 errors:0 dropped:0 overruns:0 frame:0
         TX packets:2613440853 errors:0 dropped:0 overruns:0 carrier:0
         collisions:0 txqueuelen:1000
         RX bytes:2651544232860 (2.4 TiB)  TX bytes:1948544659918 (1.7 TiB)
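ifconfig does not show which slave is currently active or the per-slave link status; while the bond is up that lives in /proc:

```shell
# active slave, MII status, and link failure counts for each slave
cat /proc/net/bonding/bond0
```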

Renumbering Ports (ent 6)

This example is from an R630 system used as an appliance, with 4 ports on the motherboard: 2 copper & 2 fiber. For this appliance they wanted the 2 copper ports to be eth0/1 and the fiber ports to be eth2/3, but a recently built system had them designated in reverse. The renaming/mapping went as follows;

eth0 (fiber)  -> eth2
eth1 (fiber)  -> eth3
eth2 (copper) -> eth0
eth3 (copper) -> eth1

Relabel the ports by changing the udev net rules file (eth0 to eth2, etc); change only the NAME= lines, as mentioned in the comment at the top of the file

# vi /etc/udev/rules.d/70-persistent-net.rules
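The lines in question look like this (the MAC address here is a placeholder); only the NAME= value at the end changes, while the MAC match stays with the physical port:

```shell
# /etc/udev/rules.d/70-persistent-net.rules — fiber port that was eth0, now eth2
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="d4:be:d9:aa:d7:16", ATTR{type}=="1", KERNEL=="eth*", NAME="eth2"
```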

Rename all the network config files

# cd /etc/sysconfig/network-scripts/
# cp ifcfg-eth* /tmp/
# cp /tmp/ifcfg-eth0 ifcfg-eth2
etc

Fix the device names in each file: the new ifcfg-eth0 has DEVICE=eth2, change this to say eth0, etc

# vi ifcfg-eth?
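The copy/rename/fix steps above can be sketched as a loop. This version runs against a scratch directory so it is safe to try anywhere (the real files live in /etc/sysconfig/network-scripts/, and the HWADDR values here are placeholders):

```shell
# Build scratch copies of ifcfg-eth0..3 with placeholder MACs
workdir=$(mktemp -d)
for i in 0 1 2 3; do
    printf 'DEVICE=eth%s\nHWADDR=AA:BB:CC:DD:EE:0%s\nONBOOT=yes\n' "$i" "$i" \
        > "$workdir/ifcfg-eth$i"
done

# swap OLD NEW: copy OLD's config under NEW's name and fix its DEVICE= line
swap() {
    sed "s/^DEVICE=eth$1\$/DEVICE=eth$2/" "$workdir/ifcfg-eth$1" \
        > "$workdir/new-ifcfg-eth$2"
}
# mapping from the table above: fiber 0/1 -> 2/3, copper 2/3 -> 0/1
swap 0 2; swap 1 3; swap 2 0; swap 3 1
for i in 0 1 2 3; do mv "$workdir/new-ifcfg-eth$i" "$workdir/ifcfg-eth$i"; done

# each file keeps its port's MAC but now carries the new device name
grep -H '^HWADDR' "$workdir"/ifcfg-eth*
```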

Reboot when done to properly pick up all the udev/network config changes/etc

Source-based Routing (ent 7)

Note: using NetworkManager

In this scenario, the system is using the gateway on the primary NIC. Any incoming packets on the 2nd interface end up going out the primary interface, and packets are not returning to devices on the 2nd network.

Note: table '2' was chosen since this is the 2nd NIC. Names can be used if the proper mapping is set in /etc/iproute2/rt_tables
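To use a name instead of the number, append a mapping line to /etc/iproute2/rt_tables (the name 'internal' here is just an example) and then use that name in the rule/route files:

```shell
# /etc/iproute2/rt_tables — format: <number> <name>
2   internal
```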

1. Add policy routing to NetworkManager

# yum install NetworkManager-dispatcher-routing-rules
# systemctl enable NetworkManager-dispatcher.service
# systemctl start NetworkManager-dispatcher.service

2. Add policy rule (note: ens33 is the 2nd NIC, 10.60.130.250 is its IP)

# vi /etc/sysconfig/network-scripts/rule-ens33
iif ens33 table 2
from 10.60.130.250 table 2

3. Add static routes using policy rules (may be able to do this w/nmcli; note: 10.60.130.0/24 is the subnet/CIDR of the 2nd network, 10.60.130.1 is the gateway)

# vi /etc/sysconfig/network-scripts/route-ens33
10.60.130.0/24 dev ens33 table 2
default via 10.60.130.1 dev ens33 table 2

4. Load the new/changed config files

# nmcli connection reload
# nmcli connection down ens33 ; nmcli connection up ens33
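After the reload, confirm the rules and the table actually took effect (table 2 as chosen above):

```shell
# policy rules: the ens33/10.60.130.250 entries should appear alongside
# the default local/main/default rules
ip rule show
# routes installed in table 2 (should show the subnet and default routes)
ip route show table 2
```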