Monday, May 5, 2014

Elastic Security: Deploying Logstash, ElasticSearch, Kibana "securely" on the Internet

Hello folks!
Continuing with the tradition of at least one post per year, I wanted to write about a pilot I built and keep refining, based on ElasticSearch (1.1.1), Logstash (1.4.0) and Kibana (3.0.1). I wanted to get my hands dirty with these, as I have increasingly seen traditional SQL-based security applications/tools fail when attempting to scale.

NoSQL databases and big data technologies are becoming a must if you want to properly take care of enterprise security, where you can receive large quantities of log data per day. Since all three are open source and well supported, I decided to give them a try and put something together.

The pilot I will present here is a sort of "SSH scanner statistics" dashboard that will display several charts and a geo-location map based on the source IP addresses of systems scanning for open SSH ports on the Internet.


I have used only one ElasticSearch node, as I am just collecting traffic hitting port 22/TCP (SSH) on a VPS running Ubuntu 13.10 on top of OpenVZ. This meant I had to implement a bit of hardening so that it is bulletproof... well, never say never ;). The steps I took to put all this together are as follows:
  1. Probably the first step would be enabling SSH on a non-standard port so that your own SSH connections don't clutter the SSH logs. To do so, simply edit your /etc/ssh/sshd_config and set the port parameter to a number other than 22. Be mindful that your SSH connection will break when you do this and you'll have to reconnect. After acknowledging this, execute service ssh restart to restart SSH on the specified port, for example:
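    A minimal sketch of the change (2222 is just an example port; pick any unused one):
    # /etc/ssh/sshd_config -- move SSH off the default port
    Port 2222
    # then restart so sshd listens on the new port (existing sessions stay up)
    service ssh restart
    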
  2. Enable iptables rules to log incoming traffic to the SSH port with this command on a root shell:
  3. iptables -A INPUT -p tcp -m tcp --dport 22 -j LOG --log-level 4 
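    If a noisy scanner floods your logs, an optional variant is to rate-limit the LOG rule with the standard limit match (the 5/min and burst values below are arbitrary choices of mine):
    iptables -A INPUT -p tcp -m tcp --dport 22 -m limit --limit 5/min --limit-burst 10 -j LOG --log-level 4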
    
  4. You may want to save this rule to make it persistent across reboots. To do so you can use the iptables-save and iptables-restore commands. A useful package is iptables-persistent; install it by running apt-get install iptables-persistent. It will generate the file /etc/iptables/rules.v4 on your file system, to which you can save your iptables changes so they are restored after reboots:
  5. iptables-save > /etc/iptables/rules.v4
    
  6. Now, in order to store the logs generated by iptables into a specific file, you need to edit your syslog server configuration (a sketch for rsyslog follows below; see the troubleshooting section for syslog-ng). In your /etc/syslog.conf file add the line:
    kern.warning /var/log/iptables.log
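    If your system runs rsyslog instead (the Ubuntu default), a drop-in file using the same classic selector syntax does the job; the file name below is my own choice:
    # /etc/rsyslog.d/10-iptables.conf -- send kernel warnings (our iptables LOG hits) to their own file
    kern.warning /var/log/iptables.log
    # reload rsyslog so the rule takes effect
    service rsyslog restart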
    
    In order for that file not to grow without control, create the following logrotate policy in /etc/logrotate.d/iptables.
  7. /var/log/iptables.log {
            daily
            rotate 7
            copytruncate
            compress
            missingok
            notifempty
    }
    
  8. Check that the rule is working by attempting to connect to your box on port 22. A simple way to do this would be using telnet:
    telnet <server_ip> 22
    
    Your /var/log/iptables.log file should contain the corresponding log entries, resembling something like the example below.
  9.  
    May  3 17:20:08 host kernel: [742317.869535] IN=venet0 OUT= MAC= SRC=80.100.155.250 DST=A.B.C.D LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51695 DF PROTO=TCP SPT=36333 DPT=22 WINDOW=14600 RES=0x00 SYN URGP=0
    
  10. Let's now get the tools we need. I recommend grabbing the deb files from the official elasticsearch site, as they include the init.d start/stop scripts. Note that these packages require a Java runtime, so it is worth installing openjdk-7-jdk and openjdk-7-jre-headless. All the described activities are performed as follows:
  11. apt-get install openjdk-7-jdk openjdk-7-jre-headless
    # example of grabbing the deb file for elasticsearch; note you will also need the logstash deb file and Kibana (Kibana comes as a zipped or tar file)
    wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.1.1.deb
    # the command below installs a deb file; you will need to do the same for the logstash deb file
    dpkg -i elasticsearch-1.1.1.deb
    
  12. To run, stop or restart logstash and elasticsearch once they are installed, you can use /etc/init.d/logstash [start|stop|restart|status] or the elasticsearch equivalent. Note that to have them start automatically after any system reboot you will need to enable them via the update-rc.d command:
  13. update-rc.d elasticsearch defaults 95 10
    update-rc.d logstash defaults
    
  14. Note that if your machine runs on top of an OpenVZ node, the Elasticsearch start script may fail stating sysctl: permission denied on key 'vm.max_map_count'. My quick and dirty solution for this was commenting out the offending lines in the /etc/init.d/elasticsearch init script:
  15. #       if [ -n "$MAX_MAP_COUNT" ]; then
    #               sysctl -q -w vm.max_map_count=$MAX_MAP_COUNT
    #       fi
    
  16. Once the services are running, if you issue a netstat command you will notice elasticsearch listening for incoming connections from any address (0.0.0.0) on two different ports:
  17. netstat -putan | grep LISTEN
    tcp        0      0 0.0.0.0:9200          0.0.0.0:*               LISTEN      5420/java
    tcp        0      0 0.0.0.0:9300          0.0.0.0:*               LISTEN      5420/java
    
  18. We obviously don't want this behaviour, so we will configure elasticsearch to listen only on the loopback interface, a.k.a. localhost. In the file /etc/elasticsearch/elasticsearch.yml, uncomment and set the following line:
  19. network.host: 127.0.0.1
    
  20. Now, after restarting the service, if we issue netstat again we will see that elasticsearch is listening only on the loopback interface, 127.0.0.1:
  21. netstat -putan | grep LISTEN
    tcp        0      0 127.0.0.1:9200          0.0.0.0:*               LISTEN      5420/java
    tcp        0      0 127.0.0.1:9300          0.0.0.0:*               LISTEN      5420/java
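    You can also confirm that elasticsearch still answers locally; its root endpoint returns a small JSON banner including the version:
    curl http://127.0.0.1:9200/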
    
  22. Let's take care of the logstash piece before jumping into the presentation layer (Apache & Kibana). Logstash is the agent that feeds log information into elasticsearch. Let's see how we can tell logstash to store our iptables logs in elasticsearch. Create the file /etc/logstash/conf.d/geoiptables.conf:
  23. input {
    #this file contains the iptables logs as defined in our syslog.conf file.        
    file {
                    path => [ "/var/log/iptables.log" ]
                    type => "iptables"
            }
    }
    
    
    filter {
            if [type] == "iptables" {
    
                    grok {
    #See next bullet below for the contents of /usr/share/grok/patterns
                            match => { "message" => "%{IPTABLES}"}
                            patterns_dir => ["/usr/share/grok/patterns"]
                    }
            }
    
            if [src_ip]  {
                    geoip {
                            source => "src_ip"
                            target => "geoip"
    #This should be shipped in the deb file we downloaded.
                            database => "/opt/logstash/vendor/geoip/GeoLiteCity.dat"
    #note that the below field additions and mutations are required for Kibana to properly plot the information.
                            add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
                            add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}"  ]
                    }
                    mutate {
                            convert => [ "[geoip][coordinates]", "float" ]
                    }
            }
    }
    
    output {
    #we leave this so that when we debug running logstash from the command line we can see the output that will be stored in elasticsearch
            stdout {
                    codec => rubydebug
            }
#again, for security purposes our elasticsearch installation only listens on localhost.
            elasticsearch {
                    protocol => "http"
                    host => "127.0.0.1"
            }
    }
    
    
  24. Note that for my purposes I used the pre-built iptables grok snippet available at the address referenced in the snippet below. However, it did not work out of the box, as my iptables logs contain neither the MAC addresses nor the outbound interface (note the empty MAC= and OUT= fields in the sample log above). Therefore, I created the file /usr/share/grok/patterns/iptables and modified the patterns as follows, basically stating that those fields are optional with the "?" symbol:
  25. # Source : http://cookbook.logstash.net/recipes/config-snippets/
    NETFILTERMAC %{COMMONMAC:dst_mac}:%{COMMONMAC:src_mac}:%{ETHTYPE:ethtype}
    ETHTYPE (?:(?:[A-Fa-f0-9]{2}):(?:[A-Fa-f0-9]{2}))
    IPTABLES1 (?:IN=%{WORD:in_device} OUT=(%{WORD:out_device})? MAC=(%{NETFILTERMAC})? SRC=%{IP:src_ip} DST=%{IP:dst_ip}.*(TTL=%{INT:ttl})?.*PROTO=%{WORD:proto}?.*SPT=%{INT:src_port}?.*DPT=%{INT:dst_port}?.*)
    IPTABLES2 (?:IN=%{WORD:in_device} OUT=(%{WORD:out_device})? MAC=(%{NETFILTERMAC})? SRC=%{IP:src_ip} DST=%{IP:dst_ip}.*(TTL=%{INT:ttl})?.*PROTO=%{INT:proto}?.*)
    IPTABLES (?:%{IPTABLES1}|%{IPTABLES2})
    
  26. Let's start logstash from the command line to verify it all works without errors. To do so, issue /opt/logstash/bin/logstash -f /etc/logstash/conf.d/geoiptables.conf. You should see output like this:
  27. Using milestone 2 input plugin 'file'. This plugin should be stable, but if you see strange behaviour, please let us know! For more information on plugin milestones, see http://logstash.net/docs/1.4.0/plugin-milestones {:level=>:warn}
    {
           "message" => "May  5 09:55:37 host kernel: [888446.478915] IN=venet0 OUT= MAC= SRC=69.254.180.138 DST=69.99.151.69 LEN=60 TOS=0x00 PREC=0x00 TTL=50 ID=29355 DF PROTO=TCP SPT=51955 DPT=22 WINDOW=14600 RES=0x00 SYN URGP=0 ",
          "@version" => "1",
        "@timestamp" => "2014-05-05T13:55:37.658Z",
              "type" => "iptables",
              "host" => "host",
              "path" => "/var/log/iptables.log",
         "in_device" => "venet0",
            "src_ip" => "69.254.180.138",
            "dst_ip" => "69.99.151.69",
             "proto" => "TCP",
          "src_port" => "51955",
          "dst_port" => "22",
             "geoip" => {
                        "ip" => "69.254.180.138",
             "country_code2" => "GB",
             "country_code3" => "GBR",
              "country_name" => "United Kingdom",
            "continent_code" => "EU",
                  "latitude" => 50.5,
                 "longitude" => -0.14999999999999545,
                  "timezone" => "Europe/London",
                  "location" => [
                [0] -0.12999999999999545,
                [1] 51.5
            ],
               "coordinates" => [
                [0] -0.12999999999999545,
                [1] 51.5
            ]
        }
    }
    
  28. This means everything is working fine: the information is properly parsed by logstash and stored in elasticsearch. Now you can stop the command-line logstash with Ctrl+C and start the daemon with /etc/init.d/logstash start. See the troubleshooting section below if you run into issues.
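    To double-check that events are actually reaching elasticsearch, you can also query it locally; a quick sanity check, assuming the output plugin's default daily logstash-* indices:
    curl 'http://127.0.0.1:9200/logstash-*/_search?q=type:iptables&size=1&pretty'
    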
  29. If you haven't done so yet, install apache2 with apt-get install apache2. Let's apply a bit of hardening to it by modifying the file /etc/apache2/conf-enabled/security.conf:
  30. #Edit the file: /etc/apache2/conf-enabled/security.conf
    ServerTokens Prod
    ServerSignature Off
    TraceEnable Off
    
  31. We won't be exposing the Kibana front-end to everyone on the Internet: we will enable basic authentication and SSL to keep attackers from easily sniffing the credentials/session information. To take care of the basic auth piece we just need to generate the htpasswd file with:
  32. htpasswd -c /etc/htpasswd <username>
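    Since this file holds password hashes, you may also want to tighten its permissions so that only root and the Apache user can read it:
    chown root:www-data /etc/htpasswd
    chmod 640 /etc/htpasswd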
    
  33. For the SSL piece we will self-generate a certificate (ideally you would buy one, or generate one signed by your CA of choice and trusted by your browser).
  34. mkdir /etc/apache2/ssl
    openssl req -x509 -nodes -days 99999 -newkey rsa:2048 -keyout /etc/apache2/ssl/apache.key -out /etc/apache2/ssl/apache.crt
    # respond to the questions as requested
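    The private key should not be world-readable either; locking it down is a one-liner:
    chmod 600 /etc/apache2/ssl/apache.key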
    
  35. We will need to enable several Apache modules as follows
  36. a2enmod ssl
    a2enmod proxy
    a2enmod proxy_http
    a2enmod auth_basic
    a2enmod rewrite
    
  37. Finally, we should create an Apache site configuration to tie all of this together. Create it at /etc/apache2/sites-available/kibana.conf and enable it afterwards using the a2ensite command, as shown after the configuration below.
  38. #file /etc/apache2/sites-available/kibana.conf
    <IfModule mod_ssl.c>
    <VirtualHost *:443>

            SSLEngine on
            SSLCertificateFile      /etc/apache2/ssl/apache.crt
            SSLCertificateKeyFile /etc/apache2/ssl/apache.key

            ServerAdmin webmaster@localhost
            DocumentRoot /var/www
    #this allows end user -> reverse proxy -> query local listening elasticsearch
            ProxyPreserveHost On
            ProxyPass /kibana2/ http://127.0.0.1:9200/
            ProxyPassReverse /kibana2/ http://127.0.0.1:9200/

            <Directory /var/www>
                    AllowOverride None
                    AuthType basic
                    AuthName "private"
                    AuthUserFile /etc/htpasswd
                    Require valid-user
            </Directory>

            ErrorLog ${APACHE_LOG_DIR}/error.log
            CustomLog ${APACHE_LOG_DIR}/access.log combined

    </VirtualHost>

    </IfModule>
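    Enabling the site and reloading Apache is then just:
    a2ensite kibana
    service apache2 reload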
    
  39. You can also think about either disabling port 80/tcp or creating a redirect to port 443/tcp as follows. Create the file /etc/apache2/sites-available/redirect2ssl.conf and later enable it with a2ensite.
  40. <VirtualHost *:80>

            RewriteEngine on
            RewriteCond %{SERVER_PORT} !^443$
            RewriteRule ^/(.*) https://%{HTTP_HOST}/$1 [NC,R,L]

            ErrorLog ${APACHE_LOG_DIR}/error.log
            CustomLog ${APACHE_LOG_DIR}/access.log combined

    </VirtualHost>
    
  41. Let's not forget about downloading Kibana and setting it up for our needs.
  42. # Kibana comes as a tar file, so untar it and move its contents to /var/www/kibana
    wget https://download.elasticsearch.org/kibana/kibana/kibana-3.0.1.tar.gz
    tar -xzvf kibana-3.0.1.tar.gz
    mkdir -p /var/www/kibana/
    mv kibana-3.0.1/* /var/www/kibana/
    # allow apache to serve it
    chown -R www-data: /var/www/kibana
    # now edit the file /var/www/kibana/config.js and make this change (note the https:// and the kibana2 path, which will be handled by the Apache proxy module)
    elasticsearch: "https://"+window.location.hostname+"/kibana2",
    
  43. After Kibana has been set up, let's connect to it by going to https://www.yourserver.com/kibana/. You should be prompted for the username and password we set up with the htpasswd command, and then you should see the Kibana welcome page.
  44. Now click on the link that says You can access it here to load the default logstash dashboard. The default view will show a nice time picker with your log lines below, similar to an expensive Splunk interface, but free in this case :D
  45. If you scroll down to the bottom you will see a button that says Add a row; a row is the container in which we will place our panels (the maps and charts) for the iptables log information. Go to the Rows tab, give the new row a Title and a Height (650px will do), and then click on Create Row.
  46. At the bottom you should now see the new row, to which we will add the map panel. Click on the green Add panel button and select bettermap. Give it a Title, for instance SSH Scanners. For the Coordinate Field write geoip.coordinates; for the Span (horizontal width), 9 should be fine. Finally, click on Save.
  47. The charts are also pretty straightforward. Similarly to how we added the map, this time choose a terms panel, give it a Title such as Scanning Countries, for the Field select geoip.country_name, set the Style to pie, and then click on Save.

That's all folks!

Troubleshooting

  • I have noticed logstash's error output is not that helpful when there are errors parsing the log files. Some helpful tips:
  • Run logstash from the command line (as explained in this post) with the stdout output plugin enabled.
  • Check the logs in /var/log/logstash/logstash.log
  • Log lines that fail parsing are still stored in elasticsearch, but tagged with _grokparsefailure. When you observe this tag, don't think it is an error on the elasticsearch side; revise your logstash configuration file for glitches.
  • If you are using syslog-ng, make sure you configure the owner and group of the log files so that logstash can read them. Otherwise you will get a failed to open /var/log/iptables.log: Permission denied error in /var/log/logstash/logstash.log.
  • #file /etc/syslog-ng/syslog-ng.conf
    destination d_kern { file("/var/log/iptables.log" owner("logstash") group("adm") perm(0660)); };
    
  • If you modify your logstash configuration, be careful to respect where the syntax requires double quotes, curly brackets, and square brackets; the syntax check sketched in the next bullet can catch such glitches early.
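  • You can also ask logstash to validate the configuration before restarting the daemon; a quick check, assuming your 1.4.x build supports the --configtest (-t) flag:
    /opt/logstash/bin/logstash -f /etc/logstash/conf.d/geoiptables.conf --configtest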
