nftables Logging Solution in Practice
2021-07-14 18:18
Log the important nftables rules, set up log rotation, and persist the nftables ruleset to a file so it survives reboots.
Last updated: 2021/7/13
According to the article Logging traffic[1] on the official wiki: full logging support is available since Linux kernel 3.17; on older kernels you must modprobe ipt_LOG to enable logging, and the log flag is supported since nftables v0.7.
Our kernel is 5.13.0-1.el7.elrepo.x86_64 and our nftables version is nftables v0.8 (Joe Btfsplk), so no manual module loading should be needed.
Prerequisites:
OpenVPN interface tun0; OpenVPN client virtual IP subnet 10.121.0.0/16;
WireGuard interface wg0; WireGuard virtual IP subnet 10.122.0.0/16;
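For the hub to forward between tun0/wg0 and eth0 at all, kernel IP forwarding must be enabled; a minimal sketch (the drop-in file name is my assumption, not from the original setup):

```shell
# Enable IPv4 forwarding, which the forward/masquerade rules below rely on.
sysctl net.ipv4.ip_forward                      # check the current value
sysctl -w net.ipv4.ip_forward=1                 # enable for the running kernel
# Persist across reboots (file name is an assumption):
echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/99-vpn-forward.conf
```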
1. Creating and Testing nftables Rules
This VPN hub server needs to forward and control routed traffic, and I need logs showing which VPN user accessed which services.
1.1. Simple Test: Granularity from Source IP to Destination IP/Port
First, test the forwarding-control rules from a specific client virtual IP to specific servers:
## Add the rules one at a time:
nft add rule ip nat POSTROUTING oifname eth0 ip saddr 10.121.6.6 ip daddr 192.168.5.71 counter log masquerade
nft add rule ip nat POSTROUTING oifname eth0 ip saddr 10.121.6.6 ip daddr 10.10.210.9 counter log masquerade
## The resulting ruleset:
table ip filter {
chain INPUT {
type filter hook input priority 0; policy accept;
counter packets 248456 bytes 62354809 ## handle 4
}
chain FORWARD {
type filter hook forward priority 0; policy accept;
iifname "wg0" counter packets 97 bytes 6924 accept ## handle 5
oifname "wg0" counter packets 121 bytes 7776 accept ## handle 6
}
chain OUTPUT {
type filter hook output priority 0; policy accept;
}
}
table ip nat {
chain PREROUTING {
type nat hook prerouting priority 0; policy accept;
}
chain INPUT {
type nat hook input priority 0; policy accept;
}
chain OUTPUT {
type nat hook output priority 0; policy accept;
}
chain POSTROUTING {
type nat hook postrouting priority 0; policy accept;
counter packets 76584 bytes 5488498 ## handle 7
oifname "eth0" ip saddr 10.121.6.6 ip daddr 192.168.5.71 counter packets 1 bytes 60 log masquerade ## handle 27
oifname "eth0" ip saddr 10.121.6.6 ip daddr 10.10.210.9 counter packets 0 bytes 0 log masquerade ## handle 28
}
}
Ping 10.10.210.9 and 192.168.5.71 from the OpenVPN client, then watch the log with tail -100f /var/log/nftables.log (the log setup from step 2 must be done first).
Note that pinging the same IP again within a short time does not produce a new log entry, and conntrack -E shows the same behavior. Some suggested the log might be sampled, but production experience later showed the entries were all there; tail simply had not displayed them all. A likely additional explanation: nat chains such as POSTROUTING only evaluate the first packet of each connection, so repeated pings that reuse an existing conntrack entry are not logged again until the entry expires.
Jul 9 17:32:12 test-openvpn kernel: IN=tun0 OUT=eth0 MAC= SRC=10.121.6.6 DST=10.10.210.9 LEN=60 TOS=0x00 PREC=0x00 TTL=127 ID=59753 PROTO=ICMP TYPE=8 CODE=0 ID=1 SEQ=716
Jul 9 17:32:17 test-openvpn kernel: IN=tun0 OUT=eth0 MAC= SRC=10.121.6.6 DST=192.168.5.71 LEN=60 TOS=0x00 PREC=0x00 TTL=127 ID=36231 PROTO=ICMP TYPE=8 CODE=0 ID=1 SEQ=720
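The kernel lines above are easy to post-process for auditing. A small sketch, with the field layout taken from the sample lines above (not from any nftables output guarantee):

```shell
# Pull SRC/DST/PROTO out of an nftables kernel log line to answer
# "which VPN user reached which host".
line='Jul 9 17:32:12 test-openvpn kernel: IN=tun0 OUT=eth0 MAC= SRC=10.121.6.6 DST=10.10.210.9 LEN=60 TOS=0x00 PREC=0x00 TTL=127 ID=59753 PROTO=ICMP TYPE=8 CODE=0 ID=1 SEQ=716'
summary=$(echo "$line" | awk '{
  for (i = 1; i <= NF; i++) {
    if ($i ~ /^SRC=/)   src   = substr($i, 5)   # strip "SRC=" prefix
    if ($i ~ /^DST=/)   dst   = substr($i, 5)   # strip "DST=" prefix
    if ($i ~ /^PROTO=/) proto = substr($i, 7)   # strip "PROTO=" prefix
  }
  print src, "->", dst, proto
}')
echo "$summary"
# → 10.121.6.6 -> 10.10.210.9 ICMP
```

Run over the whole log file (awk instead of echo piping), this gives a quick per-user access summary.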
1.2. Controlling Permissions at Subnet Granularity
To configure users and destinations in bulk, and to control which destinations each user may reach, we need three new sets: the VPN users' virtual IPs, the destination services (down to port granularity), and the destination servers (host or subnet granularity).
Three rules then implement this flow. Below is the process of creating the sets and adding the rules [only the POSTROUTING chain of the nat table matters here]:
Delete the earlier tables with nft delete table filter and nft delete table nat, then create broader, general-purpose rules from scratch:
## The -i flag enters interactive mode; create the filter and nat tables, their chains, and rules from scratch
nft -i
add table ip filter
add chain ip filter INPUT { type filter hook input priority 0; policy accept; }
add chain ip filter FORWARD { type filter hook forward priority 0; policy accept; }
add chain ip filter OUTPUT { type filter hook output priority 0; policy accept; }
add table ip nat
add chain ip nat PREROUTING { type nat hook prerouting priority 0; policy accept; }
add chain ip nat INPUT { type nat hook input priority 0; policy accept; }
add chain ip nat OUTPUT { type nat hook output priority 0; policy accept; }
add chain ip nat POSTROUTING { type nat hook postrouting priority 0; policy accept; }
## Count traffic
add rule filter INPUT counter
add rule nat POSTROUTING counter
## Test rules, used to verify the approach before committing to it
add rule ip nat POSTROUTING oifname eth0 ip saddr 10.11.6.6 ip daddr 192.168.5.71 counter masquerade
add rule ip nat POSTROUTING oifname eth0 ip saddr 10.11.6.6 ip daddr 10.10.210.18 counter masquerade
## Allow forwarding on the wg0 interface
add rule filter FORWARD iifname wg0 counter accept
add rule filter FORWARD oifname wg0 counter accept
## user_vips: users' virtual IPs
add set nat user_vips { type ipv4_addr; flags interval; }
## dest_addrs: destination addresses/subnets
add set nat dest_addrs { type ipv4_addr; flags interval; }
## dest_svcs: destination services, matching address . protocol . port (used by the rules below)
add set nat dest_svcs { type ipv4_addr . inet_proto . inet_service; }
## View the result, then exit interactive mode
nft list ruleset
## [Core addresses]
## Add user virtual IPs. To grant access to all virtual users at once, delete the individual IPs and add 10.121.0.0/16 instead
nft add element nat user_vips { 10.121.6.6, 10.121.6.7 }
## View the result
nft list set nat user_vips
## Add destination services [granularity: protocol + port] [not tested at this stage!]
nft add element nat dest_svcs { 192.168.5.15 . tcp . 80, 192.168.5.15 . tcp . 22 }
## Add destination hosts/subnets [coarser granularity] [tested at this stage]
nft add element nat dest_addrs { 10.10.210.99, 192.168.5.77 }
## View the result
nft list set nat dest_addrs
## [Core rules]
## tun0 in, eth0 out; matching down to port, TCP
nft insert rule ip nat POSTROUTING oifname eth0 counter log ip saddr @user_vips ip daddr . meta l4proto . tcp dport @dest_svcs masquerade
## tun0 in, eth0 out; matching down to port, UDP
nft insert rule ip nat POSTROUTING oifname eth0 counter log ip saddr @user_vips ip daddr . meta l4proto . udp dport @dest_svcs masquerade
## tun0 in, eth0 out
nft insert rule ip nat POSTROUTING oifname eth0 counter log ip saddr @user_vips ip daddr @dest_addrs masquerade
## tun0 in, wg0 out
nft insert rule ip nat POSTROUTING oifname wg0 counter log ip saddr @user_vips ip daddr @dest_addrs masquerade
## View the result
nft list ruleset -a -nn
table ip filter {
chain INPUT {
type filter hook input priority 0; policy accept;
counter packets 1245244 bytes 478473640 ## handle 4
}
chain FORWARD {
type filter hook forward priority 0; policy accept;
iifname "wg0" counter packets 429596 bytes 254444866 accept ## handle 5
oifname "wg0" counter packets 566722 bytes 80498891 accept ## handle 6
}
chain OUTPUT {
type filter hook output priority 0; policy accept;
}
}
table ip nat {
set user_vips {
type ipv4_addr
flags interval
elements = { 10.121.6.6, 10.121.6.7 }
}
set dest_svcs {
type ipv4_addr . inet_proto . inet_service
}
set dest_addrs {
type ipv4_addr
flags interval
elements = { 10.10.210.99, 192.168.5.77 }
}
chain PREROUTING {
type nat hook prerouting priority 0; policy accept;
}
chain INPUT {
type nat hook input priority 0; policy accept;
}
chain OUTPUT {
type nat hook output priority 0; policy accept;
}
chain POSTROUTING {
type nat hook postrouting priority 0; policy accept;
oifname "eth0" counter packets 2010 bytes 128703 log ip saddr @user_vips ip daddr @dest_addrs masquerade ## handle 14
oifname "wg0" counter packets 24631 bytes 1400846 log ip saddr @user_vips ip daddr @dest_addrs masquerade ## handle 13
counter packets 26604 bytes 1536440 ## handle 5
}
}
## If the nat POSTROUTING rules are later refined to finer granularity, the rule count grows to (number of users × number of service ports); the listing can then be filtered like this:
nft list table ip nat -a | grep "10.121.0.3"
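Instead of building all of this up interactively, the same setup can be kept in one file and loaded with nft -f, which applies atomically. A trimmed-down sketch; the file path and contents are assumptions, not the original setup:

```shell
# Keep the nat table definition in a file and load it in one atomic step.
# Note: without flushing first, re-running the load appends duplicate rules;
# "flush ruleset" would also wipe the filter table, so flush with care.
mkdir -p /etc/nftables
cat > /etc/nftables/vpn-nat.nft <<'EOF'
table ip nat {
    set user_vips  { type ipv4_addr; flags interval; elements = { 10.121.6.6, 10.121.6.7 } }
    set dest_addrs { type ipv4_addr; flags interval; elements = { 10.10.210.99, 192.168.5.77 } }
    chain POSTROUTING {
        type nat hook postrouting priority 0; policy accept;
        oifname "eth0" counter log ip saddr @user_vips ip daddr @dest_addrs masquerade
    }
}
EOF
nft -f /etc/nftables/vpn-nat.nft
```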
2. Log Configuration
With a simple configuration and a restart of the rsyslog service, events from nftables rules carrying the log flag that match the regex below will land in the log file.
vim /etc/rsyslog.d/nftables.conf
## Add the following; rsyslog routes kernel messages matching this regex (the nftables log lines) into a dedicated file
:msg,regex,"IN=.*OUT=.*SRC=.*DST=.*" -/var/log/nftables.log
## Restart the logging service
systemctl restart rsyslog
## Watch the log
tail -100f /var/log/nftables.log
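The rsyslog regex compare operation uses POSIX basic regular expressions, the same dialect as plain grep, so the pattern can be sanity-checked against a captured line before restarting rsyslog:

```shell
# Try the filter's regex against a sample kernel message; a count of 1
# means the line would be routed to /var/log/nftables.log.
msg='IN=tun0 OUT=eth0 MAC= SRC=10.121.6.6 DST=10.10.210.9 LEN=60 PROTO=ICMP'
matches=$(printf '%s\n' "$msg" | grep -c 'IN=.*OUT=.*SRC=.*DST=.*')
echo "$matches"
# → 1
```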
How do we manage the log file itself? That is what log rotation is for. Reference: https://zhuanlan.zhihu.com/p/90507023
## Add rotation rules for nftables and openvpn; no restart of the logrotate service is needed. Because test machines in production hammer the database through this gateway, the nftables log is kept for only 3 months
vim /etc/logrotate.d/nftables
/var/log/nftables.log {
monthly
rotate 3
dateext
compress
delaycompress
missingok
notifempty
create 644 root root
postrotate
/usr/bin/killall -HUP rsyslogd
endscript
}
## The openvpn log is handled similarly
vim /etc/logrotate.d/openvpn
/var/log/openvpn.log {
monthly
rotate 5
dateext
compress
delaycompress
missingok
notifempty
create 644 root root
postrotate
/usr/bin/killall -HUP rsyslogd
endscript
}
## Manual test
logrotate -vf /etc/logrotate.d/nftables
logrotate -vf /etc/logrotate.d/openvpn
## Test result
-rw-r--r-- 1 root root 69619 Jul 12 15:06 nftables.log
-rw------- 1 root root 5959277 Jul 12 14:55 nftables.log-20210712
-rw-r--r-- 1 root root 0 Jul 12 14:57 openvpn.log
-rw------- 1 root root 118663677 Jul 12 15:10 openvpn.log-20210712
## Directive notes
rotate 5 keeps five rotated backups; dateext appends the date to each rotated file name
If nftables traffic grows large, consider keeping only three months of logs, or switching to weekly rotation
3. Backing Up and Restoring nft Rules
To back up the rules ad hoc:
## Back up (use > rather than >>, otherwise repeated backups stack up and a restore duplicates every rule)
nft list ruleset > backup.nft
## Restore (atomic)
nft -f backup.nft
To have the nft rules restored automatically at boot, save the rules from the file above into /etc/sysconfig/nftables.conf; after a reboot nothing needs to be done by hand (wireguard, openvpn, and nftables all recover automatically).
Drawback: the latest rules should be synced into this file before every server reboot, and I have not yet settled on a scheme for automating that.
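One way to close that gap, as a sketch rather than something from the original setup (the unit name and ordering are my assumptions): a systemd unit whose stop action dumps the live ruleset into the boot config, so every clean shutdown or reboot persists the latest rules.

```shell
cat > /etc/systemd/system/nft-save.service <<'EOF'
[Unit]
Description=Dump the live nftables ruleset to the boot config on shutdown
DefaultDependencies=no
Before=shutdown.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/true
# ExecStop runs at stop time, i.e. during shutdown/reboot
ExecStop=/bin/sh -c '/usr/sbin/nft list ruleset > /etc/sysconfig/nftables.conf'

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable --now nft-save.service
```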
The full current ruleset is below. ICMP and port 22 are restricted by the cloud provider's security groups, so they are not reachable from outside!
table ip filter {
chain INPUT {
type filter hook input priority 0; policy accept;
counter packets 22510792 bytes 7510722663 ## handle 4
}
chain FORWARD {
type filter hook forward priority 0; policy accept;
iifname "wg0" counter packets 10352763 bytes 4340019433 accept ## handle 5
oifname "wg0" counter packets 10642518 bytes 1781990328 accept ## handle 6
oifname "eth0" log counter packets 26348 bytes 1381110 accept ## handle 8
iifname "eth0" log counter packets 23762 bytes 57192818 accept ## handle 9
}
chain OUTPUT {
type filter hook output priority 0; policy accept;
}
}
table ip nat {
set user_vips {
type ipv4_addr
flags interval
elements = { 10.111.0.0/16, 10.122.0.0/16 }
}
set dest_svcs {
type ipv4_addr . inet_proto . inet_service
}
set dest_addrs {
type ipv4_addr
flags interval
elements = { 10.1.0.0-10.3.255.255, 10.10.0.0/16,
192.168.0.0/16 }
}
chain PREROUTING {
type nat hook prerouting priority 0; policy accept;
}
chain INPUT {
type nat hook input priority 0; policy accept;
}
chain OUTPUT {
type nat hook output priority 0; policy accept;
}
chain POSTROUTING {
type nat hook postrouting priority 0; policy accept;
oifname "eth0" counter packets 10674 bytes 669286 log ip saddr @user_vips ip daddr @dest_addrs masquerade ## handle 14
oifname "wg0" counter packets 272649 bytes 14720638 log ip saddr @user_vips ip daddr @dest_addrs masquerade ## handle 13
counter packets 35014 bytes 2061579 ## handle 5
}
}
4. An Unsuccessful Attempt (ulogd & ulogd2)
Because the system is CentOS 7 rather than Debian, the yum repositories carry no ulogd package, so I downloaded the tarball from the netfilter website and built it from source:
## Download and build ulogd
cd /opt
curl -O https://www.netfilter.org/projects/ulogd/files/ulogd-2.0.7.tar.bz2
tar xvf ulogd-2.0.7.tar.bz2
cd ulogd-2.0.7
./configure
## configure fails: the libnetfilter_acct package is missing
configure: error: Package requirements (libnetfilter_acct >= 1.0.1) were not met:
No package 'libnetfilter_acct' found
## So back to the website for libnetfilter_acct
https://www.netfilter.org/projects/libnetfilter_acct/downloads.html
## Download the source tarball; it builds and installs without errors
curl -O https://www.netfilter.org/projects/libnetfilter_acct/files/libnetfilter_acct-1.0.3.tar.bz2
tar xvf libnetfilter_acct-1.0.3.tar.bz2
cd libnetfilter_acct-1.0.3
./configure
make && make install
## But going back to build ulogd2 still fails with the same error: libnetfilter_acct not found
## I am unsure whether a source-built library needs an extra step for the system to register it; I assumed make && make install would be enough.
If any experienced reader has thoughts, please let me know; much appreciated!
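A likely cause (my assumption, not verified on the machine above): ulogd's ./configure locates libnetfilter_acct through pkg-config, and a default source install on CentOS 7 places libnetfilter_acct.pc under /usr/local/lib/pkgconfig, a directory pkg-config does not search out of the box. Pointing PKG_CONFIG_PATH there and refreshing the linker cache usually fixes it:

```shell
# Tell pkg-config where the source-installed .pc file lives, then retry.
export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:$PKG_CONFIG_PATH
pkg-config --modversion libnetfilter_acct   # should now print 1.0.3
# Make the runtime linker aware of /usr/local/lib as well
echo /usr/local/lib > /etc/ld.so.conf.d/usr-local.conf
ldconfig
# Back in the ulogd-2.0.7 directory:
./configure && make && make install
```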
5. nf_log Kernel Parameters [Not Needed in the End]
On a machine whose logging kernel modules were never touched:
sudo sysctl -a | grep nf_log
sysctl: reading key "net.ipv6.conf.all.stable_secret"
sysctl: reading key "net.ipv6.conf.default.stable_secret"
sysctl: reading key "net.ipv6.conf.eth0.stable_secret"
sysctl: reading key "net.ipv6.conf.lo.stable_secret"
sysctl: reading key "net.ipv6.conf.wg0.stable_secret"
net.netfilter.nf_log.0 = NONE
net.netfilter.nf_log.1 = NONE
net.netfilter.nf_log.10 = NONE
net.netfilter.nf_log.11 = NONE
net.netfilter.nf_log.12 = NONE
net.netfilter.nf_log.2 = NONE
net.netfilter.nf_log.3 = NONE
net.netfilter.nf_log.4 = NONE
net.netfilter.nf_log.5 = NONE
net.netfilter.nf_log.6 = NONE
net.netfilter.nf_log.7 = NONE
net.netfilter.nf_log.8 = NONE
net.netfilter.nf_log.9 = NONE
net.netfilter.nf_log_all_netns = 0
On the test machine (where I had tinkered with the logging kernel settings and found it unnecessary):
sysctl -a | grep nf_log
sysctl: reading key "net.ipv6.conf.all.stable_secret"
sysctl: reading key "net.ipv6.conf.default.stable_secret"
sysctl: reading key "net.ipv6.conf.eth0.stable_secret"
sysctl: reading key "net.ipv6.conf.lo.stable_secret"
sysctl: reading key "net.ipv6.conf.wg0.stable_secret"
net.netfilter.nf_log.0 = NONE
net.netfilter.nf_log.1 = NONE
net.netfilter.nf_log.10 = nf_log_ipv6
net.netfilter.nf_log.11 = NONE
net.netfilter.nf_log.12 = NONE
net.netfilter.nf_log.2 = nf_log_ipv4
net.netfilter.nf_log.3 = nf_log_arp
net.netfilter.nf_log.4 = NONE
net.netfilter.nf_log.5 = nf_log_netdev
net.netfilter.nf_log.6 = NONE
net.netfilter.nf_log.7 = nf_log_bridge
net.netfilter.nf_log.8 = NONE
net.netfilter.nf_log.9 = NONE
net.netfilter.nf_log_all_netns = 0
6. References
Steve Suehring, Linux Firewalls: Enhancing Security with nftables and Beyond (Addison-Wesley Professional): a thorough introduction to firewalls with much worth borrowing
Setting nf_log kernel parameters[2] (not used in the end)
Connection tracking (conntrack): principles, applications, and the Linux kernel implementation[3]
Footnotes
[1] Logging traffic: https://wiki.nftables.org/wiki-nftables/index.php/Logging_traffic
[2] Setting nf_log kernel parameters: https://forums.centos.org/viewtopic.php?t=54411#p230026
[3] Connection tracking (conntrack): principles, applications, and the Linux kernel implementation: https://cloud.tencent.com/developer/article/1761367