
Deploying the SELKS Open-Source IDS

by zinan

    Note: I wrote this article when I first encountered SELKS, and many of the configurations in it are not the most sensible ones, so this is not a best-practices guide. If you need to deploy this system, rely primarily on the official documentation.

    Readers keep emailing me about performance, so here is a summary of my test results on real production traffic at 20 Gbps: Suricata 4.0.4 + PF_RING 7.0.0 (ZC mode) + CentOS 7.2, 40 CPU cores, 64 GB RAM. With no rules loaded, running only the stream-reassembly engine, Suricata drops no packets at all; with 50,000 Snort rules loaded there is occasional slight packet loss. So far the rule-detection engine consumes far more performance than the stream-reassembly engine, which points at where to focus further tuning.
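Packet-drop checks like the ones behind these numbers can be automated by parsing the periodic stats events Suricata writes to eve.json. A minimal sketch (counter names assume a recent Suricata with default stats output; verify against your own logs):

```python
import json

# One periodic 'stats' event as Suricata writes it to eve.json
# (trimmed sample; a real file carries many more counters).
sample = ('{"event_type": "stats", "stats": {"capture": '
          '{"kernel_packets": 1000000, "kernel_drops": 1500}}}')

def drop_rate(line):
    """Return kernel_drops / kernel_packets for one stats event."""
    cap = json.loads(line)["stats"]["capture"]
    return cap["kernel_drops"] / float(cap["kernel_packets"])

print("drop rate: %.4f" % drop_rate(sample))  # drop rate: 0.0015
```

Tailing these events over time shows whether rule load, not capture, is the bottleneck.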

 

I. Suricata Distributed IDS Project: monitor office-network traffic and alert on intrusions and policy violations

Suricata is developed by the OISF (Open Information Security Foundation), captures via the standard libpcap or PF_RING interfaces, and supports Snort rules. OISF's development is funded by the DHS (United States Department of Homeland Security), Breach Security, and other organizations.

1. Highly Scalable

Suricata is multi threaded. This means you can run one instance and it will balance the load of processing across every processor on a sensor Suricata is configured to use. This allows commodity hardware to achieve 10 gigabit speeds on real life traffic without sacrificing ruleset coverage.

2. Protocol Identification

The most common protocols are automatically recognized by Suricata as the stream starts, thus allowing rule writers to write a rule to the protocol, not to the port expected. This makes Suricata a Malware Command and Control Channel hunter like no other. Off-port HTTP CnC channels, which normally slide right by most IDS systems, are child's play for Suricata! Furthermore, thanks to dedicated keywords, you can match on protocol fields ranging from the HTTP URI to an SSL certificate identifier.

3. File Identification, MD5 Checksums, and File Extraction

Suricata can identify thousands of file types while crossing your network! Not only can you identify it, but should you decide you want to look at it further you can tag it for extraction and the file will be written to disk with a meta data file describing the capture situation and flow. The file’s MD5 checksum is calculated on the fly, so if you have a list of md5 hashes you want to keep in your network, or want to keep out, Suricata can find it.
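The on-the-fly MD5 matching described above can be reproduced offline: hash the file and look it up in a hash list. A small illustration (the blocklist here is hypothetical; the entry is the well-known MD5 of the 68-byte EICAR antivirus test file):

```python
import hashlib

# Hypothetical hash list; this entry is the standard MD5 of the
# EICAR antivirus test file.
md5_blocklist = {"44d88612fea8a8f36de82e1278abb02f"}

def is_listed(data):
    """Check a file's MD5 against the list, as Suricata does on the fly."""
    return hashlib.md5(data).hexdigest() in md5_blocklist

eicar = b"X5O!P%@AP[4\\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"
print(is_listed(eicar))  # True
```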

http://suricata-ids.org/features/

http://www.aldeid.com/wiki/Suricata-vs-snort (comparison)

Deployment references:

https://redmine.openinfosecfoundation.org/projects/suricata/wiki/suricata_snorby_and_barnyard2_set_up_guide

https://redmine.openinfosecfoundation.org/projects/suricata/wiki/_Logstash_Kibana_and_Suricata_JSON_output

http://shaurong.blogspot.com/2016/02/suricata-30-centos-72-x64_22.html

https://github.com/StamusNetworks/scirius-docker/blob/master/django/scirius.sh#L21

S –      Suricata IDPS – http://suricata-ids.org/

E –      Elasticsearch – http://www.elasticsearch.org/overview/

L –      Logstash – http://www.elasticsearch.org/overview/

K –      Kibana – http://www.elasticsearch.org/overview/

S –      Scirius – https://github.com/StamusNetworks/scirius

https://github.com/StamusNetworks/scirius  IDS Rule and Signature management

Web UI, written in Python with Django

II. Deployment Steps

1. Install PF_RING

Reference: http://www.ntop.org/pf_ring/installation-guide-for-pf_ring/

Load the pf_ring kernel module:

modprobe pf_ring transparent_mode=2 min_num_slots=16384

ixgbe installation references: http://techedemic.com/2015/08/04/installing-ixgbe-driver-on-ubuntu-server-14-04-lts/

https://linux.cn/article-5149-1.html

Load the PF_RING-aware ixgbe NIC driver:

modprobe ixgbe RSS=1

(the stock driver lives at ./ixgbe-4.1.2-2.6.32/src/ixgbe.ko)

Reduce the NIC's RSS queue count to 1; see:

http://suricata.readthedocs.io/en/latest/performance/packet-capture.html

Bring up the eth4 interface:

sudo ifconfig eth4 up

PF_RING receive test program:

sudo ./PF_RING/userland/examples/pfcount -i eth4

sar -n EDEV 2 10000 | grep eth4

/usr/local/sbin/tcpdump (a tcpdump built against the PF_RING library)

Watch the softirq load on each CPU core; interrupt tuning may be required. See:

"Binding hardware interrupts to different CPUs on multi-core Linux (IRQ affinity)"
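The IRQ-affinity tuning referenced above boils down to writing a hex CPU bitmask into /proc/irq/&lt;N&gt;/smp_affinity for each NIC queue interrupt. A sketch of the mask arithmetic (the IRQ numbers below are made up; find eth4's real ones in /proc/interrupts):

```python
def cpu_mask(cpu):
    # /proc/irq/<N>/smp_affinity takes a hex bitmask of allowed CPUs:
    # CPU0 -> 1, CPU1 -> 2, CPU2 -> 4, CPU3 -> 8, ...
    return "%x" % (1 << cpu)

# Pin four queue IRQs to CPUs 0-3 round-robin (hypothetical IRQ numbers)
for irq, cpu in zip([40, 41, 42, 43], range(4)):
    print("echo %s > /proc/irq/%d/smp_affinity" % (cpu_mask(cpu), irq))
```

Remember to stop irqbalance first, or it will overwrite these masks.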

2. Install Redis

(steps omitted)

Logs are staged in Redis and shipped straight into ES, never touching disk.

3. Install Suricata

Install the build dependencies:

yum install wget libpcap-devel libnet-devel pcre-devel gcc-c++ automake autoconf libtool make libyaml-devel zlib-devel file-devel jansson-devel nss-devel

Install hiredis: https://github.com/redis/hiredis

It is the minimalistic C client library for Redis.

git clone https://github.com/redis/hiredis.git  
cd hiredis/  
make  
sudo make install

Install Hyperscan support:

http://suricata.readthedocs.io/en/latest/performance/hyperscan.html

Install tcmalloc:

http://suricata.readthedocs.io/en/latest/performance/tcmalloc.html

Suricata configure options:

./configure --enable-lua --enable-pfring --enable-old-barnyard2 --enable-hiredis --enable-unix-socket --enable-profiling --enable-geoip --with-libnss-libraries=/usr/lib64 --with-libnss-includes=/usr/include/nss3 --with-libnspr-libraries=/usr/lib64 --with-libnspr-includes=/usr/include/nspr4 --with-libpfring-includes=/usr/local/include --with-libpfring-libraries=/usr/local/lib --with-libhs-includes=/usr/local/include/hs/ --with-libhs-libraries=/usr/local/lib/
make
make install
ldconfig

4. Install Logstash

Here you need to modify the implicit-field template that Logstash applies when writing to ES. By default, Logstash only adds its implicit default fields to documents in indices named "logstash-*", and Scirius relies on those default fields when reading data from ES. So if you change the index name Logstash writes to, you must also change the template that creates the default fields. In Logstash v2.3.4, edit the file: ./vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/elasticsearch-template.json

Change the "template" field value to your custom index name.
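A sketch of what that edit amounts to, applied to a minimal stub of the template file (the real file contains the full mapping; the stub keys here are illustrative):

```python
import json

# Minimal stand-in for elasticsearch-template.json
template = {"template": "logstash-*",
            "settings": {"index.refresh_interval": "5s"}}

# Point the template at the custom index name instead of logstash-*
template["template"] = "ids_log_*"

print(json.dumps(template, indent=2))
```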

5. Install Scirius

See the GitHub repository: https://github.com/StamusNetworks/scirius

6. Install Elasticsearch

(steps omitted)

7. Install Kibana

(steps omitted)

III. Configuration

Suricata documentation: https://redmine.openinfosecfoundation.org/projects/suricata/wiki/Suricata_User_Guide

Suricata configuration references and explanations:

http://www.ntop.org/pf_ring/accelerating-suricata-with-pf_ring-dna/

https://home.regit.org/2012/07/suricata-to-10gbps-and-beyond/

https://redmine.openinfosecfoundation.org/projects/suricata/wiki/Suricatayaml

http://blog.csdn.net/wuyangbotianshi/article/

1. Patch the Scirius code:

Note: the latest version of Scirius no longer requires any of these code changes!

This section adds HTTP Basic authentication to Scirius's connections to ES.

Scirius version: 1.2.2

(1) Edit scirius/settings.py

Add the following code:

#########################################################
# HTTP AUTH
ELASTICSEARCH_HTTP_AUTH = True
ELASTICSEARCH_HTTP_AUTH_USER = "username"
ELASTICSEARCH_HTTP_AUTH_PASS = "password"


#########################################################

(2) Edit rules/es_graphs.py

Add the following two helper functions near the top of the file (make sure base64 is imported there):

#########################################################
def gen_http_auth_field():
    # Build an HTTP Basic Authorization header from the credentials
    # defined in scirius/settings.py
    base64string = base64.encodestring('%s:%s' % (settings.ELASTICSEARCH_HTTP_AUTH_USER,
        settings.ELASTICSEARCH_HTTP_AUTH_PASS)).replace('\n', '')
    return ("Authorization", "Basic %s" % base64string)

def add_http_auth_field(req):
    # Attach the auth header to a urllib2.Request when auth is enabled
    if settings.ELASTICSEARCH_HTTP_AUTH is False:
        return req
    auth_field = gen_http_auth_field()
    req.add_header(auth_field[0], auth_field[1])
    return req


#########################################################
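For reference, the Authorization header these helpers build is standard HTTP Basic auth, i.e. base64 of "user:password". Shown with placeholder credentials using Python 3's b64encode (the Scirius 1.2.2 code above is Python 2):

```python
import base64

user, password = "username", "password"  # placeholders
token = base64.b64encode(("%s:%s" % (user, password)).encode()).decode()
header = ("Authorization", "Basic %s" % token)
print(header)  # ('Authorization', 'Basic dXNlcm5hbWU6cGFzc3dvcmQ=')
```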

Then search the whole file and pass every urllib2.Request() object through add_http_auth_field() before it is sent.
Also change the request-submission code in es_delete_alerts_by_sid_v2() to:

#########################################################
if settings.ELASTICSEARCH_HTTP_AUTH is True:
    auth_field = gen_http_auth_field()
    r = requests.delete(delete_url, headers={auth_field[0]:auth_field[1]})
else:
    r = requests.delete(delete_url)


#########################################################

Change the request-submission code in es_delete_alerts_by_sid_v5() to:

#########################################################
if settings.ELASTICSEARCH_HTTP_AUTH is True:
    auth_field = gen_http_auth_field()
    r = requests.post(delete_url, headers={auth_field[0]:auth_field[1]}, data = json.dumps(data))
else:
    r = requests.post(delete_url, data = json.dumps(data))


#########################################################

(3) Edit rules/es_data.py

Change the __init__() method of the ESData class as follows:

#########################################################
es_addr = 'http://%s/' % settings.ELASTICSEARCH_ADDRESS
if settings.ELASTICSEARCH_HTTP_AUTH is True:
    self.client = Elasticsearch([es_addr], http_auth=(settings.ELASTICSEARCH_HTTP_AUTH_USER, settings.ELASTICSEARCH_HTTP_AUTH_PASS))
else:
    self.client = Elasticsearch([es_addr])


#########################################################

Then search the whole file and replace every hard-coded index name index='.kibana' with index=settings.KIBANA_INDEX.

2. Edit the Logstash configuration

The complete logstash.conf is as follows:

input {
  redis {
    data_type => "list"
    key => "suricata"
    host => "127.0.0.1"
    port => 6379
    db => 0
    threads => 5
    codec => json
    type => "SELKS"
  }
}

filter {
  if [type] == "SELKS" {
    date {
      match => [ "timestamp", "ISO8601" ]
    }
    ruby {
      code => "if event['event_type'] == 'fileinfo'; event['fileinfo']['type']=event['fileinfo']['magic'].to_s.split(',')[0]; end;"
    }
  }

  if ([src_ip] =~ /^10\.(10[1-9]{1}|1[1-9]{1}[0-9]{1}|2[0-9]{1}[0-9]{1})\.[0-9]{1,3}\.[0-9]{1,3}/) {  #IDC IP
    if([dest_ip] =~ /(^10\.([0-9]{1,2}|100)\.[0-9]{1,3}\.[0-9]{1,3})|(^192\.168\.[0-9]{1,3}\.[0-9]{1,3})|(^172\.(1[6-9]|2[0-9]|3[01])\.[0-9]{1,3}\.[0-9]{1,3})/) {  #Home IP (RFC1918 172.16-31)
      mutate {
        add_field => [ "direction", "idc_to_home" ]
      }
    }
  }

  if ([direction] != "idc_to_home") {
    if ([src_ip] =~ /(^192\.168\.[0-9]{1,3}\.[0-9]{1,3})|(^10\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3})|(^172\.(1[6-9]|2[0-9]|3[01])\.[0-9]{1,3}\.[0-9]{1,3})/) {
      if ([dest_ip] =~ /(^192\.168\.[0-9]{1,3}\.[0-9]{1,3})|(^10\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3})|(^172\.(1[6-9]|2[0-9]|3[01])\.[0-9]{1,3}\.[0-9]{1,3})/) {
        mutate {
          add_field => [ "direction", "intranet" ]
        }
      }
      else {
        mutate {
          add_field => [ "direction", "outbound" ]
        }
      }
    }
    else {
      if ([dest_ip] =~ /(^192\.168\.[0-9]{1,3}\.[0-9]{1,3})|(^10\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3})|(^172\.(1[6-9]|2[0-9]|3[01])\.[0-9]{1,3}\.[0-9]{1,3})/) {
        mutate {
          add_field => [ "direction", "inbound" ]
        }
      }
      else {
        mutate {
          add_field => [ "direction", "internet" ]
        }
      }
    }
  }

  if ([direction] == "inbound" or [direction] == "internet") {
    if [src_ip]  {
      geoip {
        source => "src_ip" 
        target => "geoip" 
        #database => "/opt/logstash/vendor/geoip/GeoLiteCity.dat" 
        add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
        add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}"  ]
      }
      mutate {
        convert => [ "[geoip][coordinates]", "float" ]
      }
    }
  }

  if ([direction] == "outbound" or [direction] == "internet") {
    if [dest_ip]  {
      geoip {
        source => "dest_ip"
        target => "geoip"
        #database => "/opt/logstash/vendor/geoip/GeoLiteCity.dat"
        add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
        add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}"  ]
      }
      mutate {
        convert => [ "[geoip][coordinates]", "float" ]
      }
    }
  }
}

output {
  elasticsearch {
    hosts => ["http://ip_address:9200/"]
    manage_template => true
    template => "/ids/logstash-2.3.4/selks_template.json"
    template_name => "ids_log_*"
    user => "username"
    password => "password"
    index => "ids_log_%{+YYYY.MM.dd}"
  }
}
#########################################################
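Setting the site-specific idc_to_home rule aside, the direction logic above reduces to RFC1918 membership tests on the two addresses. A sketch using Python's ipaddress module:

```python
import ipaddress

RFC1918 = [ipaddress.ip_network(n)
           for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_private(ip):
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in RFC1918)

def direction(src_ip, dest_ip):
    # Same four buckets the mutate filters add
    if is_private(src_ip):
        return "intranet" if is_private(dest_ip) else "outbound"
    return "inbound" if is_private(dest_ip) else "internet"

print(direction("192.168.1.5", "8.8.8.8"))  # outbound
print(direction("8.8.8.8", "10.0.0.1"))     # inbound
```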

3. Add the ES data template for Logstash:

(ES 5.x only)

As described in the Logstash installation step, by default Logstash only adds its implicit default fields to indices named "logstash-*", and Scirius relies on those fields when reading from ES, so a custom index name requires a matching template. In Logstash v2.3.4, edit the file: ./vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/elasticsearch-template.json
Change the "template" field value to the custom index name and add a string field named direction inside the geoip object, then save the result as selks_template.json, the template file referenced by logstash.conf. The complete content:
#########################################################
{
  "template" : "ids_log_*",
  "settings" : {
    "index.refresh_interval" : "5s"
  },
  "mappings" : {
    "_default_" : {
      "_all" : {"enabled" : true, "omit_norms" : true},
      "dynamic_templates" : [ {
        "message_field" : {
          "match" : "message",
          "match_mapping_type" : "string",
          "mapping" : {
            "type" : "string", "index" : "analyzed", "omit_norms" : true,
            "fielddata" : { "format" : "disabled" }
          }
        }
      }, {
        "string_fields" : {
          "match" : "*",
          "match_mapping_type" : "string",
          "mapping" : {
            "type" : "string", "index" : "analyzed", "omit_norms" : true,
            "fielddata" : { "format" : "disabled" },
            "fields" : {
              "raw" : {"type": "string", "index" : "not_analyzed", "ignore_above" : 256}
            }
          }
        }
      } ],
      "properties" : {
        "@timestamp": { "type": "date" },
        "@version": { "type": "string", "index": "not_analyzed" },
        "geoip"  : {
          "dynamic": true,
          "properties" : {
            "ip"        : { "type" : "ip" },
            "location"  : { "type" : "geo_point" },
            "latitude"  : { "type" : "float" },
            "longitude" : { "type" : "float" },
            "direction" : { "type" : "string", "index" : "not_analyzed" }
          }
        }
      }
    }
  }
}
#########################################################
4. Add rule sources to Scirius

Run the following commands in the Scirius root directory:

python manage.py addsource "ETOpen Ruleset" https://rules.emergingthreats.net/open/suricata-3.0/emerging.rules.tar.gz http sigs
python manage.py addsource "SSLBL abuse.ch" https://sslbl.abuse.ch/blacklist/sslblacklist.rules http sig
python manage.py addsource "PT Research Ruleset" https://github.com/ptresearch/AttackDetection/raw/master/pt.rules.tar.gz http sigs

5. Tweak the alert-display code

./scirius/rules/es_graphs.py

Raise the result counts to 100 at these locations:

Line 560: def es_get_rules_stats(request, hostname, count=100, from_date=0, qfilter=None)

Line 604: tables.RequestConfig(request, paginate={'per_page': 100}).configure(rules)

Line 607: tables.RequestConfig(request, paginate={'per_page': 100}).configure(rules)

6. Configure Kibana dashboards

Public dashboard templates are available here: https://github.com/StamusNetworks/KTS

Before using these templates, change the ES index name inside them:

$ find ./ -name '*.json' -type f -exec sed -i 's/logstash.*-\*/ids_log_*/g' {} \;

Then update the Kibana index name and the ES address in load.sh.
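That sed expression simply rewrites the index pattern inside each JSON file. The equivalent substitution in Python, handy for sanity-checking the regex before running it over the templates:

```python
import re

# A line as it might appear in a KTS dashboard JSON file
line = '"index": "logstash-alert-*"'
print(re.sub(r'logstash.*-\*', 'ids_log_*', line))  # "index": "ids_log_*"
```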

Alternatively, import the dashboard data into ES with the elasticdump tool: load the index mapping first, then the data. Files:

http://weizn.net/file/kibana_mapping.json

http://weizn.net/file/kibana_data.json

Or import directly through Kibana:

http://weizn.net/file/Dashboards.zip

IV. Startup

1. Start Redis

Start command:

$ redis-server /ids/redis-3.2.0/redis.conf

Monitoring command:

$ redis-cli  MONITOR

2. Start Elasticsearch

(omitted)

3. Start Logstash

$ bin/logstash -f /ids/logstash-2.3.4/logstash.conf

4. Start Suricata

$ suricata --pfring -c /ids/suricata/suricata_SELKS_redis.yaml -v

5. Start Scirius

Initialize the database:

$ python manage.py syncdb

Start the web service:

$ python manage.py runserver 0.0.0.0:80

6. Start Kibana

(omitted)

Screenshots:

scirius.jpg

alert.jpg

http.jpg

status.jpg
