An example illustrates the algorithm.

Example: given the class C address 192.168.5.0, divide it into 20 subnets with 5 hosts each.

Solution: since 4 < 5 < 8, round the required addresses per subnet up to 8 and compute 256 - 8 = 248. That is the last octet of the subnet mask (255.255.255.248), and the number of subnets follows directly from it. This procedure applies to class C addresses.

For class B addresses: if each subnet needs 254 hosts or fewer, the calculation is the same as for class C. If more than 254 hosts are needed, say 700 hosts and 50 subnets (quite a large network), then 512 < 700 < 1024,
so 256 - (1024/256) = 256 - 4 = 252 is the third octet of the subnet mask (255.255.252.0), and the number of subnets again follows from it. The 4 in 256 - 4 (that is, 2 to the power 2) is the number of bits beyond 8 needed to write the host count in binary, here 2 extra bits; the remaining 6 bits of the third octet are subnet bits, giving 2^6 - 2 = 62 subnets.
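A minimal bash sketch of the calculation above (my own illustration, not part of the original text): round the required addresses per subnet up to the next power of two, then subtract from 256 to get the last mask octet.

# hosts needed per subnet, plus network and broadcast addresses
hosts=5
need=$((hosts + 2))
size=1
while [ $size -lt $need ]; do size=$((size * 2)); done
echo "block size : $size addresses per subnet"
echo "last octet : $((256 - size))"      # 248 -> mask 255.255.255.248
echo "subnets    : $((256 / size))"      # 32 blocks (30 if all-zeros/all-ones excluded)
echo "hosts each : $((size - 2))"        # 6 usable hosts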
Append: Host/Subnet Quantities Table
----------------------------------------------------------------------
Class A                     Effective   Effective
# bits  Mask                Subnets     Hosts
------- ---------------     ---------   ---------
   2    255.192.0.0               2       4194302
   3    255.224.0.0               6       2097150
   4    255.240.0.0              14       1048574
   5    255.248.0.0              30        524286
   6    255.252.0.0              62        262142
   7    255.254.0.0             126        131070
   8    255.255.0.0             254         65534
   9    255.255.128.0           510         32766
  10    255.255.192.0          1022         16382
  11    255.255.224.0          2046          8190
  12    255.255.240.0          4094          4094
  13    255.255.248.0          8190          2046
  14    255.255.252.0         16382          1022
  15    255.255.254.0         32766           510
  16    255.255.255.0         65534           254
  17    255.255.255.128      131070           126
  18    255.255.255.192      262142            62
  19    255.255.255.224      524286            30
  20    255.255.255.240     1048574            14
  21    255.255.255.248     2097150             6
  22    255.255.255.252     4194302             2

Class B                     Effective   Effective
# bits  Mask                Subnets     Hosts
------- ---------------     ---------   ---------
   2    255.255.192.0             2         16382
   3    255.255.224.0             6          8190
   4    255.255.240.0            14          4094
   5    255.255.248.0            30          2046
   6    255.255.252.0            62          1022
   7    255.255.254.0           126           510
   8    255.255.255.0           254           254
   9    255.255.255.128         510           126
  10    255.255.255.192        1022            62
  11    255.255.255.224        2046            30
  12    255.255.255.240        4094            14
  13    255.255.255.248        8190             6
  14    255.255.255.252       16382             2

Class C                     Effective   Effective
# bits  Mask                Subnets     Hosts
------- ---------------     ---------   ---------
   2    255.255.255.192           2            62
   3    255.255.255.224           6            30
   4    255.255.255.240          14            14
   5    255.255.255.248          30             6
   6    255.255.255.252          62             2

*Subnet all zeroes and all ones excluded.
*Host all zeroes and all ones excluded.
-f: flood ping. A dot is printed for every request sent and a dot is erased for every reply received, so if the network is dropping packets you will see an ever-growing string of dots.
-n: numeric output only; prevents ping from performing reverse DNS lookups. After each reply, ping normally does a reverse DNS lookup to obtain the hostname shown after "64 bytes from". If that lookup is slow, ping appears to be slow even though the reported round-trip times are perfectly normal; this is why ping can feel sluggish while the response times themselves are fine.
$ ping -c 1 -s $((1500-28)) -M do www.debian.org
PING www.debian.org (140.112.8.139) 1472(1500) bytes of data.
1480 bytes from linux3.cc.ntu.edu.tw (140.112.8.139): icmp_seq=1 ttl=47 time=52.7 ms

--- www.debian.org ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 52.778/52.778/52.778/0.000 ms
If 1500 does not go through, try 1454 instead of 1500.
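A small sketch (my addition, not from the original text) that probes the path MTU by repeating the ping above with decreasing payload sizes until a packet fits with the DF bit set; the 28 bytes account for the IP and ICMP headers, and www.debian.org is just the example target reused from above.

#!/bin/bash
# walk the candidate MTU sizes down until 'ping -M do' succeeds
host=www.debian.org
for mtu in 1500 1492 1480 1454 1400; do
    if ping -c 1 -s $((mtu - 28)) -M do "$host" > /dev/null 2>&1; then
        echo "path MTU is at least $mtu"
        break
    fi
    echo "$mtu too large, trying smaller"
done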
ss is short for Socket Statistics. The ss command collects socket statistics and can display much the same information as netstat, but it shows more detail about TCP and connection state, and it is faster and more efficient than netstat.

When a server has a very large number of socket connections, both netstat and reading /proc/net/tcp directly become very slow. ss is fast because it uses the tcp_diag module of the TCP stack. tcp_diag is a diagnostics and statistics module that retrieves first-hand information from the Linux kernel over netlink, which is what makes ss quick and efficient; if your system has no tcp_diag, ss still works, just somewhat more slowly.

netstat is part of the net-tools package, while ss belongs to the iproute package.

yum install iproute iproute-doc

#### ss filters

ss supports two kinds of filters:

state: connection state, one of established, syn-sent, syn-recv, fin-wait-1, fin-wait-2, time-wait, closed, close-wait, last-ack, listen, closing. Besides these individual states, several aggregate states are available:
all – for all the states
bucket – states maintained as minisockets, i.e. time-wait and syn-recv
big – the opposite of bucket
connected – all states except listen and closed
synchronized – all connected states except syn-sent

addr+port: address and port expressions, similar to tcpdump syntax. Keywords:
dst ADDRESS_PATTERN – matches remote address and port
src ADDRESS_PATTERN – matches local address and port
dport RELOP PORT – compares remote port to a number
sport RELOP PORT – compares local port to a number
autobound – checks that socket is bound to an ephemeral port

#### ss usage

ss [ OPTIONS ] [ FILTER ]

OPTIONS:
-p  show the process name and pid for each socket
-s  print a summary of socket statistics
-n  do not resolve service names
-r  resolve host names
-a  display all sockets
-o  show timer information
-l  display listening sockets
-e  show detailed socket information
-m  show socket memory usage
-i  show internal TCP information
-4  display only IPv4 sockets
-6  display only IPv6 sockets
-0  display PACKET sockets
-t  display only TCP sockets
-u  display only UDP sockets
-d  display only DCCP sockets
-w  display only RAW sockets
-x  display only Unix domain sockets
-f, --family=FAMILY  display sockets of type FAMILY; supported values: unix, inet, inet6, link, netlink
-D, --diag=FILE      dump raw TCP socket information to FILE
-F, --filter=FILE    read filter information from FILE

FILTER := [ state TCP-STATE ] [ EXPRESSION ]

#### Recv And Send

[root@netkiller ~]# ss -anp | column -c1
State      Recv-Q Send-Q  Local Address:Port   Peer Address:Port
LISTEN     0      128         127.0.0.1:9000              *:*      users:(("php-fpm",1481,9),("php-fpm",1482,0),("php-fpm",1483,0),("php-fpm",1484,0),("php-fpm",1485,0),("php-fpm",1486,0),("php-fpm",1487,0),("php-fpm",1488,0),("php-fpm",1489,0),("php-fpm",1490,0),("php-fpm",1491,0))
LISTEN     0      50                  *:3306              *:*      users:(("mysqld",2680,11))
LISTEN     0      128                 *:443               *:*      users:(("nginx",1743,8),("nginx",1744,8),("nginx",1745,8))
LISTEN     0      128        10.1.17.17:2812              *:*      users:(("monit",2030,6))
TIME-WAIT  0      0           127.0.0.1:43251     127.0.0.1:80
TIME-WAIT  0      0           127.0.0.1:43248     127.0.0.1:80
ESTAB      0      0          10.1.17.17:22       10.1.17.18:51752  users:(("sshd",3122,3))
ESTAB      0      0          10.1.17.17:22       10.1.20.70:51531  users:(("sshd",19093,3))

For a socket in the LISTEN state:
Recv-Q is the current number of entries in the listen backlog, i.e. connections that have completed the three-way handshake and are waiting for the application to call accept.
Send-Q is the maximum backlog the listening socket can hold; it is set in the listen() call and is capped by the kernel limit (net.core.somaxconn).

For a non-LISTEN socket:
Recv-Q is the number of bytes in the receive queue (data received by the kernel but not yet copied to user space).
Send-Q is the number of bytes in the send queue (data queued for transmission that has not yet been acknowledged).

#### Sockets State

>1 Listen

[root@netkiller ~]# ss -lnp | column -c1
State   Recv-Q Send-Q  Local Address:Port   Peer Address:Port
LISTEN  0      128         127.0.0.1:9000              *:*      users:(("php-fpm",1481,9),("php-fpm",1482,0),("php-fpm",1483,0),("php-fpm",1484,0),("php-fpm",1485,0),("php-fpm",1486,0),("php-fpm",1487,0),("php-fpm",1488,0),("php-fpm",1489,0),("php-fpm",1490,0),("php-fpm",1491,0))
LISTEN  0      50                  *:3306              *:*      users:(("mysqld",2680,11))
LISTEN  0      50                  *:3307              *:*      users:(("mysqld",2564,11))

>2 Established

[root@netkiller ~]# ss -onp state established | column -c1
Recv-Q Send-Q  Local Address:Port   Peer Address:Port
0      0          10.1.17.17:22       10.1.17.18:51752  timer:(keepalive,70min,0) users:(("sshd",3122,3))
0      0          10.1.17.17:22       10.1.20.70:51531  timer:(keepalive,69min,0) users:(("sshd",19093,3))

>3 Sockets Summary
[root@netkiller ~]# ss -s
Total: 93 (kernel 150)
TCP:   106 (estab 10, closed 88, orphaned 0, synrecv 0, timewait 88/0), ports 41

Transport Total     IP        IPv6
*         150       -         -
RAW       0         0         0
UDP       1         1         0
TCP       18        18        0
INET      19        19        0
FRAG      0         0         0

>4 Expand

1. Show all ssh connections in the established state

[root@netkiller ~]# ss -o state established '( dport = :ssh or sport = :ssh )'
Recv-Q Send-Q  Local Address:Port   Peer Address:Port
0      0          10.1.17.17:ssh      10.1.17.18:51752  timer:(keepalive,109min,0)
0      0          10.1.17.17:ssh      10.1.20.70:51531  timer:(keepalive,103min,0)

#### timer user mem rto

------ in another terminal run: ssh 10.1.2.103 ------
Then run the following in this terminal:

[root@netkiller ~]# ss -eimpn '( dport = :22 )' -o
State  Recv-Q Send-Q  Local Address:Port   Peer Address:Port
ESTAB  0      0          10.1.2.23:44107     10.1.2.103:22  timer:(keepalive,28min,0) users:(("ssh",9545,4)) ino:21970248 sk:ffff88013c2e5900
         mem:(r0,w0,f4096,t0) sack cubic wscale:7,8 rto:203 rtt:3.25/1.75 ato:40 cwnd:10 send 35.9Mbps rcv_rtt:33427 rcv_space:113592

------ in another terminal run: telnet 27.111.200.86 15672 ------
Then run the following in this terminal:

[root@netkiller ~]# ss -eimpn '( dport = :15672 )' -o
State  Recv-Q Send-Q  Local Address:Port   Peer Address:Port
ESTAB  0      2          10.1.2.23:57531  27.111.200.86:15672  timer:(on,614ms,0) users:(("telnet",10163,4)) ino:21983807 sk:ffff8800378ba040
         mem:(r0,w554,f3542,t0) sack cubic wscale:7,8 cwnd:10 rcv_space:14600

> timer

-o shows timer information. Linux maintains seven timers per TCP socket, implemented through four kernel timers:
the retransmission timer and the zero-window probe timer, via icsk_retransmit_timer;
the connection-establishment timer, the keepalive timer and the FIN_WAIT_2 timer, via sk_timer;
the delayed-ACK timer and the TIME_WAIT timer, via icsk_delack_timer.

The timer field describes the timer currently armed on the TCP socket; its format is (type, expiration time, retry count):
off: no timer is armed on this socket
on: retransmission timer
keepalive: connection-establishment timer, FIN_WAIT_2 timer or keepalive timer; which one it is depends on the connection state
timewait: TIME_WAIT timer
persist: zero-window probe timer

> user

With ss -p, the users field contains three values:
the first is the process name,
the second is the pid,
the third is the file descriptor of the socket within that process.

> mem

mem:(r0,w554,f3542,t0)
r  the read (inbound) buffer
w  the write (outbound) buffer
f  the "forward allocated memory" (memory available to the socket)
t  the transmit queue (stuff waiting to be sent or waiting on an ACK)

> socket information

sack cubic wscale rto rtt cwnd send rcv_space

#### Notice

>1 ss process name and pid

only name
ss -tp | grep -v Recv-Q | sed -e 's/.*users:(("//' -e 's/".*$//' | sort | uniq

only pid
[root@netkiller ~]# ss -tp | grep -v Recv-Q | sed -e 's/.*users:((.*",//' -e 's/,.*$//' | sort | uniq

name and pid
# ss -tp | grep -v Recv-Q | sed -e 's/.*users:(("\(.*\)",\(.*\),.*$/\1:\2/' | sort | uniq
f_e_related_dat:4695
mysqld:4289
salt-minion:4001
sshd:25161
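As an additional illustration (my own addition, not part of the original text), the state filter combines nicely with ordinary shell tools, for example to count sockets per state or to pick out TIME-WAIT connections on a given port:

# count TCP sockets by state (first column of 'ss -tan' is the state)
ss -tan | awk 'NR > 1 {count[$1]++} END {for (s in count) print s, count[s]}'

# list only sockets in TIME-WAIT to or from port 80
ss -tn state time-wait '( sport = :80 or dport = :80 )'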
Quick subnet-mask calculation

Everyone knows the powers of 2; the values of 2^0 through 2^10 are: 1 2 4 8 16 32 64 128 256 512 1024.

Example: if you want only 5 usable IP addresses per subnet, you need to reserve at least 7 addresses per subnet, because the network and broadcast addresses at either end cannot be used. So you round up to the nearest power of two above 7, which is 8: each subnet gets 8 addresses. From there the mask follows directly.

The rule: the last octet of the mask is 256 minus the number of addresses per subnet, so in this example 256 - 8 = 248. Once you have that, you also know which addresses are unusable. The blocks run 0-7, 8-15, 16-23, 24-31, and so on; the boundary values 0, 7, 8, 15, 16, 23, 24, 31, ... are the network and broadcast addresses and cannot be used, while the addresses between each pair are the usable IPs of that subnet.

Let's try it once more, with 200 machines split into 4 subnets. 200 machines across 4 subnets means 50 machines per subnet. Take 192.168.10.0, a class C network whose default mask is 255.255.255.0. To subnet it, 32 addresses per subnet is not enough, so each subnet needs 64 addresses (62 of them usable, which is plenty). Applying the rule, the last octet of the mask is 256 - 64 = 192, so the full subnet mask is 255.255.255.192. Don't believe it? Check: the ranges are 0-63, 64-127, 128-191 and 192-255, and each of the four ranges can be assigned to one of the four subnets.
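A small bash sketch (my own illustration, not in the original text) that reproduces the four-subnet example above by listing each /26 block of 192.168.10.0 with its network, broadcast and usable host range:

# 64 addresses per block -> mask 255.255.255.192 (/26)
block=64
for net in $(seq 0 $block 255); do
    bcast=$((net + block - 1))
    echo "192.168.10.$net/26  network=192.168.10.$net  broadcast=192.168.10.$bcast  hosts=192.168.10.$((net+1))-192.168.10.$((bcast-1))"
done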
# iptab

+----------------------------------------------+
| addrs   bits   pref   class  mask            |
+----------------------------------------------+
|     1     0    /32           255.255.255.255 |
|     2     1    /31           255.255.255.254 |
|     4     2    /30           255.255.255.252 |
|     8     3    /29           255.255.255.248 |
|    16     4    /28           255.255.255.240 |
|    32     5    /27           255.255.255.224 |
|    64     6    /26           255.255.255.192 |
|   128     7    /25           255.255.255.128 |
|   256     8    /24     1C    255.255.255.0   |
|   512     9    /23     2C    255.255.254.0   |
|    1K    10    /22     4C    255.255.252.0   |
|    2K    11    /21     8C    255.255.248.0   |
|    4K    12    /20    16C    255.255.240.0   |
|    8K    13    /19    32C    255.255.224.0   |
|   16K    14    /18    64C    255.255.192.0   |
|   32K    15    /17   128C    255.255.128.0   |
|   64K    16    /16     1B    255.255.0.0     |
|  128K    17    /15     2B    255.254.0.0     |
|  256K    18    /14     4B    255.252.0.0     |
|  512K    19    /13     8B    255.248.0.0     |
|    1M    20    /12    16B    255.240.0.0     |
|    2M    21    /11    32B    255.224.0.0     |
|    4M    22    /10    64B    255.192.0.0     |
|    8M    23    /9    128B    255.128.0.0     |
|   16M    24    /8      1A    255.0.0.0       |
|   32M    25    /7      2A    254.0.0.0       |
|   64M    26    /6      4A    252.0.0.0       |
|  128M    27    /5      8A    248.0.0.0       |
|  256M    28    /4     16A    240.0.0.0       |
|  512M    29    /3     32A    224.0.0.0       |
| 1024M    30    /2     64A    192.0.0.0       |
| 2048M    31    /1    128A    128.0.0.0       |
| 4096M    32    /0    256A    0.0.0.0         |
+----------------------------------------------+
$ sudo apt-get install netmask
-s, --standard Output address/netmask pairs
$ netmask -s 192.168.1.0/28
192.168.1.0/255.255.255.240

$ netmask -s 192.168.1.0/24
192.168.1.0/255.255.255.0

$ netmask -s 192.168.1.0/26
192.168.1.0/255.255.255.192

[root@netkiller src]# netmask -s 11.111.195.211/27
11.111.195.192/255.255.255.224
-c, --cidr Output CIDR format address lists
$ netmask -c 192.168.1.0/255.255.255.252
192.168.1.0/30

$ netmask -c 192.168.1.0/255.255.255.192
192.168.1.0/26

$ netmask -c 192.168.1.0/255.255.255.240
192.168.1.0/28
-i, --cisco Output Cisco style address lists (Cisco-style wildcard / inverse mask calculation)
$ netmask -i 192.168.1.0/255.255.255.0
192.168.1.0 0.0.0.255

$ netmask -i 192.168.1.0/255.255.255.252
192.168.1.0 0.0.0.3

$ netmask -i 192.168.1.0/24
192.168.1.0 0.0.0.255

$ netmask -i 192.168.1.0/28
192.168.1.0 0.0.0.15
-r, --range Output IP address ranges
Calculate the prefix length (number of subnet mask bits)
[root@netkiller src]# netmask 11.111.195.211/255.255.255.224
11.111.195.192/27
$ netmask -r 192.168.1.0/255.255.255.0
192.168.1.0-192.168.1.255 (256)

$ netmask -r 192.168.1.0/255.255.255.192
192.168.1.0-192.168.1.63 (64)

$ netmask -r 192.168.1.0/255.255.255.252
192.168.1.0-192.168.1.3 (4)

$ netmask -r 192.168.1.0/28
192.168.1.0-192.168.1.15 (16)

$ netmask -r 192.168.1.0/24
192.168.1.0-192.168.1.255 (256)
$ netmask -r 192.168.1.0/255.255.255.252
192.168.1.0-192.168.1.3 (4)

$ netmask -r 192.168.1.2/255.255.255.252
192.168.1.0-192.168.1.3 (4)

$ netmask -r 192.168.1.6/255.255.255.252
192.168.1.4-192.168.1.7 (4)

$ netmask -r 192.168.1.12/255.255.255.252
192.168.1.12-192.168.1.15 (4)

$ netmask -r 192.168.1.13/255.255.255.252
192.168.1.12-192.168.1.15 (4)

$ netmask -r 192.168.1.100/255.255.255.252
192.168.1.100-192.168.1.103 (4)

$ netmask -r 192.168.1.100/255.255.255.240
192.168.1.96-192.168.1.111 (16)

$ netmask -r 192.168.1.50/255.255.255.240
192.168.1.48-192.168.1.63 (16)
-b, --binary Output address/netmask pairs in binary
$ netmask -b 192.168.1.0/255.255.255.240
11000000 10101000 00000001 00000000 / 11111111 11111111 11111111 11110000

$ netmask -b 172.16.0.0/255.255.252.0
10101100 00010000 00000000 00000000 / 11111111 11111111 11111100 00000000
display (all) hosts in alternative (BSD) style
[root@dev2 ~]# arp -a
? (192.168.3.253) at 00:1D:0F:82:05:DC [ether] on eth0
? (192.168.3.48) at 00:25:64:9A:D7:CC [ether] on eth0
? (192.168.3.101) at 00:25:64:A3:65:93 [ether] on eth0
nis.example.com (192.168.3.5) at 00:25:64:9A:D7:E0 [ether] on eth0
? (192.168.3.1) at 00:0F:E2:71:8E:FB [ether] on eth0
? (192.168.3.153) at B8:AC:6F:25:D2:2E [ether] on eth0
display (all) hosts in default (Linux) style
[root@dev2 ~]# arp -e
Address                  HWtype  HWaddress           Flags Mask            Iface
192.168.3.48             ether   00:25:64:9A:D7:CC   C                     eth0
192.168.3.101            ether   00:25:64:A3:65:93   C                     eth0
nis.example.com          ether   00:25:64:9A:D7:E0   C                     eth0
192.168.3.1              ether   00:0F:E2:71:8E:FB   C                     eth0
10.0.0.1                 ether   00:1F:12:55:A9:02   C                     eth0
192.168.3.153            ether   B8:AC:6F:25:D2:2E   C                     eth0
don't resolve names
[root@dev2 ~]# arp -a -n
? (192.168.3.253) at 00:1D:0F:82:05:DC [ether] on eth0
? (192.168.3.48) at 00:25:64:9A:D7:CC [ether] on eth0
? (192.168.3.101) at 00:25:64:A3:65:93 [ether] on eth0
? (192.168.3.5) at 00:25:64:9A:D7:E0 [ether] on eth0
? (192.168.3.1) at 00:0F:E2:71:8E:FB [ether] on eth0
? (192.168.3.153) at B8:AC:6F:25:D2:2E [ether] on eth0
[root@dev2 ~]# arp -d 192.168.3.101
[root@dev2 ~]# arp -i eth1 -d 10.0.0.1
[root@dev2 ~]# cat /proc/net/arp
IP address       HW type     Flags       HW address            Mask     Device
192.168.3.48     0x1         0x2         00:25:64:9A:D7:CC     *        eth0
192.168.3.101    0x1         0x2         00:1E:7A:E0:47:40     *        eth0
192.168.3.5      0x1         0x2         00:25:64:9A:D7:E0     *        eth0
192.168.3.1      0x1         0x2         00:0F:E2:71:8E:FB     *        eth0
192.168.3.153    0x1         0x2         B8:AC:6F:25:D2:2E     *        eth0
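The same neighbour (ARP) table can also be inspected with the iproute2 tooling used elsewhere in this chapter; this is my own addition for comparison, the interface name eth0 simply matches the examples above.

# modern iproute2 equivalent of 'arp -a' / /proc/net/arp
ip neigh show            # list the neighbour (ARP) table
ip neigh show dev eth0   # limit the listing to one interface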
add    add a route
del    delete a route
via    next-hop (gateway) IP address
dev    outgoing physical interface
[root@gitlab ~]# ip route replace help
Usage: ip route { list | flush } SELECTOR
       ip route save SELECTOR
       ip route restore
       ip route showdump
       ip route get [ ROUTE_GET_FLAGS ] ADDRESS
                            [ from ADDRESS iif STRING ]
                            [ oif STRING ] [ tos TOS ]
                            [ mark NUMBER ] [ vrf NAME ]
                            [ uid NUMBER ] [ ipproto PROTOCOL ]
                            [ sport NUMBER ] [ dport NUMBER ]
       ip route { add | del | change | append | replace } ROUTE
SELECTOR := [ root PREFIX ] [ match PREFIX ] [ exact PREFIX ]
            [ table TABLE_ID ] [ vrf NAME ] [ proto RTPROTO ]
            [ type TYPE ] [ scope SCOPE ]
ROUTE := NODE_SPEC [ INFO_SPEC ]
NODE_SPEC := [ TYPE ] PREFIX [ tos TOS ]
             [ table TABLE_ID ] [ proto RTPROTO ]
             [ scope SCOPE ] [ metric METRIC ]
             [ ttl-propagate { enabled | disabled } ]
INFO_SPEC := { NH | nhid ID } OPTIONS FLAGS [ nexthop NH ]...
NH := [ encap ENCAPTYPE ENCAPHDR ] [ via [ FAMILY ] ADDRESS ]
      [ dev STRING ] [ weight NUMBER ] NHFLAGS
FAMILY := [ inet | inet6 | mpls | bridge | link ]
OPTIONS := FLAGS [ mtu NUMBER ] [ advmss NUMBER ] [ as [ to ] ADDRESS ]
           [ rtt TIME ] [ rttvar TIME ] [ reordering NUMBER ]
           [ window NUMBER ] [ cwnd NUMBER ] [ initcwnd NUMBER ]
           [ ssthresh NUMBER ] [ realms REALM ] [ src ADDRESS ]
           [ rto_min TIME ] [ hoplimit NUMBER ] [ initrwnd NUMBER ]
           [ features FEATURES ] [ quickack BOOL ] [ congctl NAME ]
           [ pref PREF ] [ expires TIME ] [ fastopen_no_cookie BOOL ]
TYPE := { unicast | local | broadcast | multicast | throw |
          unreachable | prohibit | blackhole | nat }
TABLE_ID := [ local | main | default | all | NUMBER ]
SCOPE := [ host | link | global | NUMBER ]
NHFLAGS := [ onlink | pervasive ]
RTPROTO := [ kernel | boot | static | NUMBER ]
PREF := [ low | medium | high ]
TIME := NUMBER[s|ms]
BOOL := [1|0]
FEATURES := ecn
ENCAPTYPE := [ mpls | ip | ip6 | seg6 | seg6local | rpl ]
ENCAPHDR := [ MPLSLABEL | SEG6HDR ]
SEG6HDR := [ mode SEGMODE ] segs ADDR1,ADDRi,ADDRn [hmac HMACKEYID] [cleanup]
SEGMODE := [ encap | inline ]
ROUTE_GET_FLAGS := [ fibmatch ]
[root@localhost ~]# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq state UP mode DEFAULT group default qlen 1000
    link/ether 00:e0:70:81:a0:f5 brd ff:ff:ff:ff:ff:ff
3: wlp1s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 40:9f:38:b6:e0:31 brd ff:ff:ff:ff:ff:ff
4: br-0e0f0a52c09e: <BROADCAST,MULTICAST> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
    link/ether 02:42:c4:61:cb:51 brd ff:ff:ff:ff:ff:ff
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
    link/ether 02:42:8b:b0:1d:c1 brd ff:ff:ff:ff:ff:ff
16578: br-ad3d9e94154d: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
    link/ether 02:42:5a:e6:15:f8 brd ff:ff:ff:ff:ff:ff
16582: vethb1a595b@if16581: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ad3d9e94154d state UP mode DEFAULT group default
    link/ether 2a:cb:a5:0e:ff:58 brd ff:ff:ff:ff:ff:ff link-netnsid 0
-s, -stats, -statistics Output more information. If the option appears twice or more, the amount of information increases. As a rule, the information is statistics or some time values.
[root@localhost ~]# ip -s link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    RX: bytes  packets  errors  dropped  missed  mcast
    524494906  58478    0       0        0       0
    TX: bytes  packets  errors  dropped  carrier  collsns
    524494906  58478    0       0        0        0
2: enp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq state UP mode DEFAULT group default qlen 1000
    link/ether 00:e0:70:81:a0:f5 brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped  missed  mcast
    1650138393 3456155  0       1419     0       369
    TX: bytes  packets  errors  dropped  carrier  collsns
    631678091  1615937  0       0        0        0
3: wlp1s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 40:9f:38:b6:e0:31 brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped  missed  mcast
    0          0        0       0        0       0
    TX: bytes  packets  errors  dropped  carrier  collsns
    0          0        0       0        0        0
4: br-0e0f0a52c09e: <BROADCAST,MULTICAST> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
    link/ether 02:42:c4:61:cb:51 brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped  missed  mcast
    0          0        0       0        0       0
    TX: bytes  packets  errors  dropped  carrier  collsns
    10148      114      0       0        0        0
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
    link/ether 02:42:8b:b0:1d:c1 brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped  missed  mcast
    560        20       0       0        0       0
    TX: bytes  packets  errors  dropped  carrier  collsns
    0          0        0       0        0        0
16578: br-ad3d9e94154d: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
    link/ether 02:42:5a:e6:15:f8 brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped  missed  mcast
    4026856    31020    0       0        0       0
    TX: bytes  packets  errors  dropped  carrier  collsns
    534479810  41161    0       0        0        0
16582: vethb1a595b@if16581: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ad3d9e94154d state UP mode DEFAULT group default
    link/ether 2a:cb:a5:0e:ff:58 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    RX: bytes  packets  errors  dropped  missed  mcast
    4461136    31020    0       0        0       0
    TX: bytes  packets  errors  dropped  carrier  collsns
    534480956  41176    0       0        0        0
View all IP addresses
[root@localhost ~]# ip addr show enp2s0
2: enp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq state UP group default qlen 1000
    link/ether 00:e0:70:81:a0:f5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.30.13/24 brd 192.168.30.255 scope global noprefixroute enp2s0
       valid_lft forever preferred_lft forever

[root@localhost ~]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq state UP group default qlen 1000
    link/ether 00:e0:70:81:a0:f5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.30.13/24 brd 192.168.30.255 scope global noprefixroute enp2s0
       valid_lft forever preferred_lft forever
3: wlp1s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 40:9f:38:b6:e0:31 brd ff:ff:ff:ff:ff:ff
4: br-0e0f0a52c09e: <BROADCAST,MULTICAST> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:c4:61:cb:51 brd ff:ff:ff:ff:ff:ff
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:8b:b0:1d:c1 brd ff:ff:ff:ff:ff:ff
16578: br-ad3d9e94154d: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:5a:e6:15:f8 brd ff:ff:ff:ff:ff:ff
    inet 192.168.49.1/24 brd 192.168.49.255 scope global br-ad3d9e94154d
       valid_lft forever preferred_lft forever
    inet6 fe80::42:5aff:fee6:15f8/64 scope link
       valid_lft forever preferred_lft forever
16582: vethb1a595b@if16581: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ad3d9e94154d state UP group default
    link/ether 2a:cb:a5:0e:ff:58 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::28cb:a5ff:fe0e:ff58/64 scope link
       valid_lft forever preferred_lft forever
Show the IP addresses of interfaces that are up
[root@localhost ~]# ip addr show up
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq state UP group default qlen 1000
    link/ether 00:e0:70:81:a0:f5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.30.13/24 brd 192.168.30.255 scope global noprefixroute enp2s0
       valid_lft forever preferred_lft forever
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:8b:b0:1d:c1 brd ff:ff:ff:ff:ff:ff
16578: br-ad3d9e94154d: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:5a:e6:15:f8 brd ff:ff:ff:ff:ff:ff
    inet 192.168.49.1/24 brd 192.168.49.255 scope global br-ad3d9e94154d
       valid_lft forever preferred_lft forever
    inet6 fe80::42:5aff:fee6:15f8/64 scope link
       valid_lft forever preferred_lft forever
View the IP address of a specific interface
[root@localhost ~]# ip addr show enp2s0
2: enp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq state UP group default qlen 1000
    link/ether 00:e0:70:81:a0:f5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.30.13/24 brd 192.168.30.255 scope global noprefixroute enp2s0
       valid_lft forever preferred_lft forever
[root@localhost ~]# ip route list
default via 192.168.30.1 dev enp2s0 proto static metric 100
192.168.30.0/24 dev enp2s0 proto kernel scope link src 192.168.30.13 metric 100
192.168.49.0/24 dev br-ad3d9e94154d proto kernel scope link src 192.168.49.1
192.168.49.2 via 192.168.49.1 dev br-ad3d9e94154d
[root@localhost ~]# ip route get default
local 0.0.0.0 dev lo src 127.0.0.1 uid 0
    cache <local>

[root@localhost ~]# ip route get 192.168.49.2
192.168.49.2 dev br-ad3d9e94154d src 192.168.49.1 uid 0
    cache
Host route
[root@gitlab ~]# ip route add 192.168.49.1 via 192.168.30.13 dev enp2s0
Network route, specifying the next-hop IP address
[root@gitlab ~]# ip route add 192.168.0.0/24 via 192.168.0.1
Specify the outgoing interface
[root@gitlab ~]# ip route add 192.168.49.0/24 dev enp2s0
[root@gitlab ~]# ip route add 192.168.49.0/24 via 192.168.30.13 dev enp2s0
ip route del 192.168.0.0/24 via 192.168.0.1
ip route del 192.168.49.0/24 via 192.168.30.5 dev enp2s0
[root@router ~]# ip route
192.168.5.0/24 dev eth0 proto kernel scope link src 192.168.5.47
192.168.3.0/24 dev eth0 proto kernel scope link src 192.168.3.47
default via 192.168.3.1 dev eth0

[root@router ~]# ip route change default via 192.168.5.1 dev eth0

[root@router ~]# ip route list
192.168.5.0/24 dev eth0 proto kernel scope link src 192.168.5.47
192.168.3.0/24 dev eth0 proto kernel scope link src 192.168.3.47
default via 192.168.5.1 dev eth0
[root@development ~]# ip -4 -o addr
1: lo    inet 127.0.0.1/8 scope host lo\       valid_lft forever preferred_lft forever
2: enp2s0    inet 192.168.30.11/24 brd 192.168.30.255 scope global enp2s0\       valid_lft forever preferred_lft forever
2: enp2s0    inet 192.168.30.13/24 brd 192.168.30.255 scope global secondary noprefixroute enp2s0\       valid_lft forever preferred_lft forever
4: docker0    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0\       valid_lft forever preferred_lft forever
45: br-a32fa1ca1437    inet 172.18.0.1/16 brd 172.18.255.255 scope global br-a32fa1ca1437\       valid_lft forever preferred_lft forever
71: br-2bb2f800fb7a    inet 172.20.0.1/16 brd 172.20.255.255 scope global br-2bb2f800fb7a\       valid_lft forever preferred_lft forever
120: br-fc7ddec9d410    inet 172.21.0.1/16 brd 172.21.255.255 scope global br-fc7ddec9d410\       valid_lft forever preferred_lft forever
399: br-a82ea0e05c7b    inet 172.26.0.1/16 brd 172.26.255.255 scope global br-a82ea0e05c7b\       valid_lft forever preferred_lft forever
149: br-6d50d8b97aac    inet 172.22.0.1/16 brd 172.22.255.255 scope global br-6d50d8b97aac\       valid_lft forever preferred_lft forever
1209: br-2eeefaf97995    inet 172.28.0.1/16 brd 172.28.255.255 scope global br-2eeefaf97995\       valid_lft forever preferred_lft forever
185: br-3a54bbf16bd3    inet 172.24.0.1/16 brd 172.24.255.255 scope global br-3a54bbf16bd3\       valid_lft forever preferred_lft forever
717: br-f5d2855f7db6    inet 172.19.0.1/16 brd 172.19.255.255 scope global br-f5d2855f7db6\       valid_lft forever preferred_lft forever
206: br-33100abbf284    inet 172.25.0.1/16 brd 172.25.255.255 scope global br-33100abbf284\       valid_lft forever preferred_lft forever
734: br-92f61288b627    inet 172.23.0.1/16 brd 172.23.255.255 scope global br-92f61288b627\       valid_lft forever preferred_lft forever
482: br-469d326ed73c    inet 172.27.0.1/16 brd 172.27.255.255 scope global br-469d326ed73c\       valid_lft forever preferred_lft forever
For example, suppose our Linux box has three NICs:
eth0: 192.168.1.1 (LAN)
eth1: 172.17.1.2 (default gw = 172.17.1.1, can reach the Internet)
eth2: 192.168.10.2 (connected to a second router at 192.168.10.1, which can also reach the Internet)

We want two things:
1. 192.168.1.66 goes out through the second router, while everyone else uses the default route.
2. Anyone accessing the FTP service on 192.168.1.1 is redirected to 192.168.10.96.

Configuration:

vi /etc/iproute2/rt_tables
#
# reserved values
#
255     local
254     main
253     default
100     ROUTE2

# ip route add default via 172.17.1.1 dev eth1
# ip route add default via 192.168.10.1 dev eth2 table ROUTE2
# ip rule add from 192.168.1.66 pref 1001 table ROUTE2
# ip rule add to 192.168.10.96 pref 1002 table ROUTE2
# echo 1 > /proc/sys/net/ipv4/ip_forward
# iptables -t nat -A POSTROUTING -j MASQUERADE
# iptables -t nat -A PREROUTING -d 192.168.1.1 -p tcp --dport 21 -j DNAT --to 192.168.10.96
# ip route flush cache
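As a quick check (my addition, not in the original), the policy rules and the alternate table can be inspected afterwards:

# list the routing policy rules (the two 'ip rule add' entries should appear)
ip rule show
# show the routes installed in the ROUTE2 table
ip route show table ROUTE2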
http://phorum.study-area.org/viewtopic.php?t=10085

Quote:
# external interface
EXT_IF="eth0"

# HiNet IP
EXT_IP1="111.111.111.111"
EXT_MASK1="24"
GW1="111.111.111.1"

# SeedNet IP
EXT_IP2="222.222.222.222"
EXT_MASK2="24"
GW2="222.222.222.1"

# assign the IP addresses
ip addr add $EXT_IP1/$EXT_MASK1 dev $EXT_IF
ip addr add $EXT_IP2/$EXT_MASK2 dev $EXT_IF

# set up HiNet routing
ip rule add to $EXT_IP1/$EXT_MASK1 lookup 201
ip route add default via $GW1 dev $EXT_IF table 201

# set up SeedNet routing
ip rule add to $EXT_IP2/$EXT_MASK2 lookup 202
ip route add default via $GW2 dev $EXT_IF table 202

# set the default route
ip route replace default equalize \
    nexthop via $GW1 dev $EXT_IF \
    nexthop via $GW2 dev $EXT_IF

# flush the route cache
ip route flush cache

The ip rule usage here is the same as above.
ip route add default scope global nexthop dev ppp0 nexthop dev ppp1
neo@debian:~$ sudo ip route add default scope global nexthop via 192.168.3.1 dev eth0 weight 1 \
    nexthop via 192.168.5.1 dev eth1 weight 1

neo@debian:~$ sudo ip route
192.168.5.0/24 dev eth1  proto kernel  scope link  src 192.168.5.9
192.168.4.0/24 dev eth0  proto kernel  scope link  src 192.168.4.9
192.168.3.0/24 dev eth0  proto kernel  scope link  src 192.168.3.9
172.16.0.0/24 dev eth2  proto kernel  scope link  src 172.16.0.254
default
        nexthop via 192.168.3.1  dev eth0 weight 1
        nexthop via 192.168.5.1  dev eth1 weight 1
ip route add default scope global nexthop via $P1 dev $IF1 weight 1 \
    nexthop via $P2 dev $IF2 weight 1
iptables -t nat -A POSTROUTING -d 192.168.1.0/24 -s 0/0 -o ppp0 -j MASQUERADE
iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -j SNAT --to 202.103.224.58
iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -j MASQUERADE
# ip route add via ppp0 dev eth0
# ip route add via 202.103.224.58 dev eth0
ipip is the kernel module for IP-in-IP tunnels.
Procedure 11.1. IP tunnel configuration steps with ip tunnel
server 1
modprobe ipip
ip tunnel add mytun mode ipip remote 220.201.35.11 local 211.100.37.167 ttl 255
ifconfig mytun 10.42.1.1
route add -net 10.42.1.0/24 dev mytun
server 2
modprobe ipip
ip tunnel add mytun mode ipip remote 211.100.37.167 local 220.201.35.11 ttl 255
ifconfig mytun 10.42.1.2
route add -net 10.42.1.0/24 dev mytun
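A quick sanity check once both endpoints are configured (my addition, not part of the original procedure; the addresses are the ones used above):

# on server 1: show the tunnel parameters and ping the remote tunnel address
ip tunnel show mytun
ping -c 3 10.42.1.2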
NAT
/sbin/iptables -t nat -A POSTROUTING -s 10.42.1.0/24 -j MASQUERADE
/sbin/iptables -t nat -A POSTROUTING -s 211.100.37.0/24 -j MASQUERADE
Delete the route
route del -net 10.42.1.0/24 dev mytun
Change the IP address on the tunnel interface
ifconfig mytun 10.10.10.220
route add -net 10.10.10.0/24 dev mytun
IP masquerading
/sbin/iptables -t nat -A POSTROUTING -s 10.10.10.0/24 -j MASQUERADE
First make sure the 802.1q kernel module is loaded.
[root@development ~]# lsmod | grep 8021q
[root@development ~]# modprobe 8021q
Once the module is loaded, the directory /proc/net/vlan is created.
[root@development ~]# cat /proc/net/vlan/config
VLAN Dev name    | VLAN ID
Name-Type: VLAN_NAME_TYPE_RAW_PLUS_VID_NO_PAD
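With 8021q loaded, a tagged VLAN sub-interface can be created with iproute2. This sketch is my own addition for illustration; the interface name eth0 and VLAN ID 2 are assumptions that match the analogy used below.

# create VLAN 2 on top of eth0, assign an address and bring it up
ip link add link eth0 name eth0.2 type vlan id 2
ip addr add 192.168.0.1/24 dev eth0.2
ip link set eth0.2 up
# the new VLAN device now also appears in /proc/net/vlan/config
cat /proc/net/vlan/config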
On a Linux system the four physical NICs are named eth0, eth1, eth2 and eth3. We bridge all four ports onto br0.
You can think of it like a switch VLAN: VLAN 2 with a VLAN IP of 192.168.0.1, with the four ports assigned to VLAN 2 so that hosts on those ports can reach each other through it. This is only an analogy to help you understand what the bridge does.
[root@localhost ~]# dnf -y install bridge-utils
# brctl addbr br0
# brctl addif br0 eth0
# brctl addif br0 eth1
# brctl addif br0 eth2
# brctl addif br0 eth3
# ifconfig eth0 0.0.0.0
# ifconfig eth1 0.0.0.0
# ifconfig eth2 0.0.0.0
# ifconfig eth3 0.0.0.0
# ifconfig br0 192.168.0.1
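To verify the result (my addition), brctl can list the bridge with its member ports and the MAC addresses it has learned:

# brctl show
# brctl showmacs br0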
[root@localhost ~]# bridge
Usage: bridge [ OPTIONS ] OBJECT { COMMAND | help }
       bridge [ -force ] -batch filename
where  OBJECT := { link | fdb | mdb | vlan | monitor }
       OPTIONS := { -V[ersion] | -s[tatistics] | -d[etails] |
                    -o[neline] | -t[imestamp] | -n[etns] name |
                    -c[ompressvlans] -color -p[retty] -j[son] }

[root@localhost ~]# bridge link
16582: vethb1a595b@if16581: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master br-ad3d9e94154d state forwarding priority 32 cost 2
16586: veth1@veth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master br0 state forwarding priority 32 cost 2
16587: veth0@veth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master br0 state forwarding priority 32 cost 2
ip link add name br0 type bridge
ip addr add 192.168.3.1/24 dev br0
ip link set br0 up
[root@localhost ~]# ip link add name br0 type bridge
[root@localhost ~]# ip addr add 192.168.3.1/24 dev br0
[root@localhost ~]# ip link set br0 up
[root@localhost ~]# ifconfig br0
br0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.3.1  netmask 255.255.255.0  broadcast 0.0.0.0
        inet6 fe80::444c:55ff:fe96:d7dd  prefixlen 64  scopeid 0x20<link>
        ether 46:4c:55:96:d7:dd  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 6  bytes 516 (516.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

[root@localhost ~]# ip route
default via 192.168.30.1 dev enp2s0 proto static metric 100
192.168.3.0/24 dev br0 proto kernel scope link src 192.168.3.1

[root@localhost ~]# ping -c 1 -I br0 192.168.3.1
PING 192.168.3.1 (192.168.3.1) from 192.168.3.1 br0: 56(84) bytes of data.
64 bytes from 192.168.3.1: icmp_seq=1 ttl=64 time=0.052 ms

--- 192.168.3.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms
ip link add veth0 type veth peer name veth1
ip addr add 192.168.3.11/24 dev veth0
ip addr add 192.168.3.12/24 dev veth1
ip link set veth0 up
ip link set veth1 up
Create a veth pair and configure IP addresses
[root@localhost ~]# ip link add veth0 type veth peer name veth1
[root@localhost ~]# ip addr add 192.168.3.11/24 dev veth0
[root@localhost ~]# ip addr add 192.168.3.12/24 dev veth1
[root@localhost ~]# ip link set veth0 up
[root@localhost ~]# ip link set veth1 up
[root@localhost ~]# ifconfig veth0
veth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.3.11  netmask 255.255.255.0  broadcast 0.0.0.0
        inet6 fe80::849:3eff:fe7f:646f  prefixlen 64  scopeid 0x20<link>
        ether 0a:49:3e:7f:64:6f  txqueuelen 1000  (Ethernet)
        RX packets 7  bytes 586 (586.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7  bytes 586 (586.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

[root@localhost ~]# ifconfig veth1
veth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.3.12  netmask 255.255.255.0  broadcast 0.0.0.0
        inet6 fe80::a0ec:9fff:feb2:d8ff  prefixlen 64  scopeid 0x20<link>
        ether a2:ec:9f:b2:d8:ff  txqueuelen 1000  (Ethernet)
        RX packets 7  bytes 586 (586.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7  bytes 586 (586.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

[root@localhost ~]# ip route
default via 192.168.30.1 dev enp2s0 proto static metric 100
192.168.3.0/24 dev br0 proto kernel scope link src 192.168.3.1
192.168.3.0/24 dev veth0 proto kernel scope link src 192.168.3.11
192.168.3.0/24 dev veth1 proto kernel scope link src 192.168.3.12
[root@localhost ~]# ip link set dev veth0 master br0
[root@localhost ~]# ip link set dev veth1 master br0
[root@localhost ~]# bridge link
16582: vethb1a595b@if16581: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master br-ad3d9e94154d state forwarding priority 32 cost 2
16586: veth1@veth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master br0 state forwarding priority 32 cost 2
16587: veth0@veth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master br0 state forwarding priority 32 cost 2
# create the namespaces
ip netns a ns1
ip netns a ns2

# create a veth pair: veth0 and veth1
ip link add veth0 type veth peer name veth1

# move veth0 and veth1 into the two namespaces
ip link set veth0 netns ns1
ip link set veth1 netns ns2

# give veth0 and veth1 addresses and bring them up
ip netns exec ns1 ip addr add 192.168.3.11/24 dev veth0
ip netns exec ns1 ip link set veth0 up
ip netns exec ns2 ip addr add 192.168.3.12/24 dev veth1
ip netns exec ns2 ip link set veth1 up

# ping veth1 from veth0
[root@localhost ~]# ip netns exec ns1 ping -c 3 192.168.3.12
PING 192.168.3.12 (192.168.3.12) 56(84) bytes of data.
64 bytes from 192.168.3.12: icmp_seq=1 ttl=64 time=0.025 ms
64 bytes from 192.168.3.12: icmp_seq=2 ttl=64 time=0.019 ms
64 bytes from 192.168.3.12: icmp_seq=3 ttl=64 time=0.022 ms

--- 192.168.3.12 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2025ms
rtt min/avg/max/mdev = 0.019/0.022/0.025/0.002 ms
[root@localhost ~]# ip netns a ns1
[root@localhost ~]# ip netns a ns2
[root@localhost ~]# ip netns
ns2 (id: 2)
ns1 (id: 1)
# create bridge br0
ip link add name br0 type bridge
ip addr add 192.168.3.1/24 dev br0
ip link set br0 up

# create two veth pairs
ip l a veth0 type veth peer name br-veth0
ip l a veth1 type veth peer name br-veth1

# put one end of each pair into a namespace and the other end into br0
ip l s veth0 netns ns1
ip l s br-veth0 master br0
ip addr add 192.168.3.10/24 dev br-veth0
ip l s br-veth0 up

ip l s veth1 netns ns2
ip l s br-veth1 master br0
ip l s br-veth1 up

# configure IPs on the veth interfaces inside the namespaces and bring them up
ip netns exec ns1 ip a a 192.168.3.11/24 dev veth0
ip netns exec ns1 ip l s veth0 up
ip netns exec ns2 ip a a 192.168.3.12/24 dev veth1
ip netns exec ns2 ip l s veth1 up

# ping veth1 from veth0
[root@localhost ~]# ip netns exec ns1 ping -c 3 192.168.3.12
PING 192.168.3.12 (192.168.3.12) 56(84) bytes of data.
64 bytes from 192.168.3.12: icmp_seq=1 ttl=64 time=0.024 ms
64 bytes from 192.168.3.12: icmp_seq=2 ttl=64 time=0.017 ms
64 bytes from 192.168.3.12: icmp_seq=3 ttl=64 time=0.014 ms

--- 192.168.3.12 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2065ms
rtt min/avg/max/mdev = 0.014/0.018/0.024/0.005 ms
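As a further check (my own addition, assuming the setup above and no firewall rules in the way), the namespaces should also be able to reach the bridge address itself, and both veth peers should show up as forwarding ports of br0:

# ping the bridge address from inside ns1
ip netns exec ns1 ping -c 1 192.168.3.1
# list the ports attached to br0
bridge link | grep br0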
Add a physical device to the bridge
ip link set dev eth0 master br0
Add a virtual device to the bridge
ip link set dev veth0 master br0
[root@localhost ~]# ip link set dev veth0 master br0
[root@localhost ~]# ip link set dev veth1 master br0
[root@localhost ~]# ip route
default via 192.168.30.1 dev enp2s0 proto static metric 100
192.168.3.0/24 dev br0 proto kernel scope link src 192.168.3.1
192.168.3.0/24 dev veth0 proto kernel scope link src 192.168.3.11
192.168.3.0/24 dev veth1 proto kernel scope link src 192.168.3.12
[root@localhost ~]# bridge link
16586: veth1@veth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master br0 state forwarding priority 32 cost 2
16587: veth0@veth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master br0 state forwarding priority 32 cost 2