Category Archives: Splunk

Using systemd to squash THP and start Splunk Enterprise

The concept presented in this post, as well as the original inspiration, carries some risk. Using alternatives to the vendor-provided init scripts has support risks, including loss of the configuration during future upgrades. Each operating system vendor has its own specific guidance on how to do this, and each automation vendor has example automation scripts as well. Picking an approach that is appropriate for your environment is up to you.

THP, the bane of performance for so many things in big data, is often left on by default and is surprisingly difficult to disable. As a popular Splunk Answers post and Splunk consultants including Marquis have found, the best way to ensure ulimit and THP settings are properly configured is to modify the init scripts. This is a crafty and reliable way to ensure THP is disabled for Splunk, and it works on all Linux operating systems regardless of how services are started.

I’m doing some work with newer operating systems and wanted to explore how systemd works and what it changes about managing a server. Let’s face it, systemd has not gotten the best of receptions in the community; after all, it moved our cheese, toys, and ball all at once. But it seems to be here to stay, so what if we could use its powers for good in relation to Splunk? Let’s put an end to THP and start Splunk the systemd-native way.

Create the file /etc/systemd/system/disable-transparent-huge-pages.service

[Unit]
Description=Disable Transparent Huge Pages

[Service]
Type=oneshot
ExecStart=/bin/sh -c "echo never >/sys/kernel/mm/transparent_hugepage/enabled"
ExecStart=/bin/sh -c "echo never >/sys/kernel/mm/transparent_hugepage/defrag"
RemainAfterExit=true
[Install]
WantedBy=multi-user.target

Verify THP and defrag are presently enabled, to avoid a false sense of success:

# cat /sys/kernel/mm/transparent_hugepage/enabled

[always] madvise never

# cat /sys/kernel/mm/transparent_hugepage/defrag

[always] madvise never

Enable and start the unit to disable THP

# systemctl enable disable-transparent-huge-pages.service

# systemctl start disable-transparent-huge-pages.service

# cat /sys/kernel/mm/transparent_hugepage/enabled

always madvise [never]

# cat /sys/kernel/mm/transparent_hugepage/defrag

always madvise [never]

Reboot and repeat the verification to ensure the process is enforced
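
To spot-check both settings with one command (a convenience one-liner, not part of the original procedure):

# grep -H . /sys/kernel/mm/transparent_hugepage/enabled /sys/kernel/mm/transparent_hugepage/defrag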

Create the unit file /etc/systemd/system/splunk.service

[Unit]
Description=Splunk Enterprise

[Service]
Type=simple
ExecStart=/opt/splunk/bin/splunk start --no-prompt --answer-yes --accept-license
ExecStop=/opt/splunk/bin/splunk stop
User=splunk
PIDFile=/opt/splunk/var/run/splunk/splunkd.pid

Restart=on-failure

#ulimit -Sn 65535
#ulimit -Hn 65535
LimitNOFILE=65535
#ulimit -Su 20480
#ulimit -Hu 20480
LimitNPROC=20480
#ulimit -Hf unlimited
#ulimit -Sf unlimited
LimitFSIZE=infinity
LimitCORE=infinity
[Install]
WantedBy=multi-user.target
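
Note: if you edit a unit file after systemd has already loaded it, have systemd re-read the unit definitions before enabling:

# systemctl daemon-reload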

# systemctl enable splunk.service

# systemctl start splunk.service

Verify the ulimits have been applied via the Splunk logs:

# grep ulimit /opt/splunk/var/log/splunk/splunkd.log

Reboot and repeat all verifications.

Bonus material: kill Splunk (lab environments only) and watch systemd bring it back

# killall splunk

# ps aux | grep splunk
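
You can also confirm the recovery from systemd’s side; the status output should show a fresh start time triggered by Restart=on-failure:

# systemctl status splunk.service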

Splunk the server and the endpoint, aka “All the Things”

I’m sharing something today that has been available thanks to many white papers and presentations dealing with identification of malicious code and activities in your Windows event data. Shout out to everyone from our “friends” at the NSA, to Splunk .conf presenters, to malwarearcheology.com, just to name a few.

The PDF attached is a portion of the next evolution of the Use Case Repository I maintain at Splunk. Along with the reference TAs and inputs, it will allow you to quickly and consistently collect very valuable data supporting security use cases at multiple levels of maturity. If it seems like too much, don’t worry: Splunk Professional Services and partners can help you get this visibility; just contact your account team.

Standard disclaimer: this is a blog post. I built the content from public, non-warrantied information, and it remains public, non-warrantied information; your situation might not match the advice given.

PT005-Microsoft-Windows

Building a more perfect Syslog Collection Infrastructure

A little while back I created a bit of code to help get data from Linux systems in real time where the Splunk Universal Forwarder could not be installed. At the time we had a few limitations, the biggest being that timestamps were never parsed; only the “current” time on the indexer could be used. Want to try out version 2? Let’s get started! First, let me explain what we are doing.

If you manage a Splunk environment with high-rate sources such as a Palo Alto firewall or a web proxy, you will notice that events are not evenly distributed over the indexers, because the data is not evenly balanced across your aggregation tier. The reason boils down to “time based load balancing”: in larger environments the universal forwarder may not be able to split a high-volume stream by time to distribute the load. So what is an admin to do? Let’s look for a connection load balancing solution. We need a way to switch from syslog to HTTP(S) so we can utilize a proper load balancer. How will we do this?

  1. Using containers, we will dedicate one or more instances of RSYSLOG to each “type” of data
  2. Use a custom plugin to package and forward batches of events over HTTP(S)
  3. Use a load balancer configured for least-connected round robin to balance the batches of events


What you need

  • At least two indexers with HTTP Event Collector enabled; more is better. The “benefits” of this solution require collection on the indexers; dedicated collectors will not be an adequate substitute
  • One load balancer; I use HAProxy
  • One or more syslog collection servers with rsyslog 8.24+; I use LXC instances hosted on Proxmox. An optimal deployment will utilize one collector per source technology, for example one instance collecting for Cisco IOS and another for Palo Alto firewalls. Using advanced configuration and filters you can combine several low-volume sources.
  • A GUID; if you need one generated there are many ways, but this one is quick and easy: https://www.guidgenerator.com/online-guid-generator.aspx

Basic Setup

  1. Follow the docs to set up HTTP Event Collector on your indexers. Note: if your indexers are clustered the docs do not cover this; you must create the configuration manually, and be sure to generate a unique GUID. Clustered environments can use the sample configuration below.
  2. Follow the documentation for your load balancer of choice to create an HTTP VIP with HTTPS back-end servers. HEC listens on 8088 by default. (A quick smoke test follows this list.)
  3. Grab the code and configuration examples from Bitbucket
    1. Deploy the script omsplunkhec.py to /opt/rsyslog/ and ensure the script is executable
    2. Review rsyslogd.d.conf.example and create your configuration in /etc/rsyslog.d/00-splunkhec.conf, replacing the GUID and IP with your correct values
    3. Restart rsyslog
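
Before wiring rsyslog to the VIP, a quick HEC smoke test can save debugging time. This is a sketch assuming the VIP address (192.168.100.70) and the token GUID used in the examples below; a successful call returns {"text":"Success","code":0}:

curl http://192.168.100.70:8088/services/collector/event -H "Authorization: Splunk DAA61EE1-F8B2-4DB1-9159-6D7AA5220B21" -d '{"event": "hec smoke test", "sourcetype": "syslog", "index": "main"}'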

What to expect? My hope: data-balance Zen.


HTTP Event Collector inputs.conf example deployed via master-apps

[http] 
disabled=0
port=8088
#
[http://SM_rsyslog_routerboard]
disabled=0
index=main
token=DAA61EE1-F8B2-4DB1-9159-6D7AA5220B21
indexes=main,summary

Example /etc/rsyslog.d/00-splunkhec.conf

This example will listen on 514 TCP and UDP, sending events via HTTP. Be sure to replace the GUID and IP address with your own values.

module(load="imudp")
input(type="imudp" port="514" ruleset="default_file")
module(load="imptcp")
input(type="imptcp" port="514" ruleset="default_file")
module(load="omprog")

ruleset(name="default_file"){
    $RulesetCreateMainQueue    
    action(type="omprog"
       binary="/opt/rsyslog/omsplunkhec.py DAA61EE1-F8B2-4DB1-9159-6D7AA5220B21 192.168.100.70 --sourcetype=syslog --index=main" 
       template="RSYSLOG_TraditionalFileFormat")
    stop
}
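
Before restarting rsyslog you can ask it to validate the configuration; it prints the offending line and exits non-zero if the syntax is bad:

rsyslogd -N1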

Example HAProxy (1.7) configuration /etc/haproxy/haproxy.cfg


global
        log /dev/log    local0
        log /dev/log    local1 notice
        chroot /var/lib/haproxy
        stats socket /run/haproxy/admin.sock mode 660 level admin
        stats timeout 30s
        user haproxy
        group haproxy
        daemon
        # Default SSL material locations
        ca-base /etc/ssl/certs
        crt-base /etc/ssl/private
        # Default ciphers to use on SSL-enabled listening sockets.
        # For more information, see ciphers(1SSL).
        ssl-default-bind-ciphers kEECDH+aRSA+AES:kRSA+AES:+AES256:RC4-SHA:!kEDH:!LOW:!EXP:!MD5:!aNULL:!eNULL
defaults
        log     global
        mode    http
        option  httplog
        option  dontlognull
        timeout connect 5000
        timeout client  50000
        timeout server  50000
        errorfile 400 /etc/haproxy/errors/400.http
        errorfile 403 /etc/haproxy/errors/403.http
        errorfile 408 /etc/haproxy/errors/408.http
        errorfile 500 /etc/haproxy/errors/500.http
        errorfile 502 /etc/haproxy/errors/502.http
        errorfile 503 /etc/haproxy/errors/503.http
        errorfile 504 /etc/haproxy/errors/504.http
listen  stats   
        bind            *:1936
        mode            http
        log             global
        maxconn 10
        clitimeout      100s
        srvtimeout      100s
        contimeout      100s
        timeout queue   100s
        stats enable
        stats hide-version
        stats refresh 30s
        stats show-node
        stats auth admin:password
        stats uri  /haproxy?stats
frontend localnodes
    bind *:8088
    mode http
    default_backend nodes
backend nodes
    mode http
    balance leastconn
    option forwardfor
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }
    option httpchk
    server idx2 192.168.100.52:8088 ssl verify none check 
    server idx1 192.168.100.51:8088 ssl verify none check 
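
HAProxy can check the configuration syntax before a restart, which avoids dropping the VIP on a typo:

haproxy -c -f /etc/haproxy/haproxy.cfg
systemctl restart haproxy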

Making Splunk Certified Apps

As a developer of “apps” for the Splunk platform, I have been very eager to automate the more tedious tasks, including build and static code analysis. Today our very awesome development community has access to a new tool: AppInspect. This new Python-based extensible framework allows your automated build process to validate key issues and prepare for formal certification of public apps on Splunkbase, or to assure quality for internally developed apps. The example process can easily be ported to the tools of your choice, allowing for effective version control and testing of applications built on the Splunk platform.

To help you get started I’ve developed an example using our partner’s tools at Atlassian.

  • Bitbucket repository containing the source
  • CMake build script for packaging and versioning
  • Bitbucket pipelines integration using docker to ensure a clean package and execute validation
  • Publish to AWS S3 as a package repository before manually publishing to Splunkbase

To get started, review https://bitbucket.org/Splunk-SecPS/seckit_sa_geolocation; this is my first and most complete example.

  • CMakeLists.txt controls the build process
  • src/ contains the applications source
  • src/default/app.conf.in is the template for app.conf; the build will update this file with the correct version tag supplied by git
  • bitbucket-pipelines.yml controls the pipelines automated integration process
    • Retrieve and deploy the latest Docker image with build tools and AppInspect
    • Package the app
    • Push to S3
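
If you want to run the same validation locally before pushing, the AppInspect CLI can be pointed at the built package. A sketch, assuming a pip installation and that the build produced seckit_sa_geolocation.tgz (your package name will differ):

pip install splunk-appinspect
splunk-appinspect inspect seckit_sa_geolocation.tgz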

Try it yourself!

Ghost Detector (CVE-2015-7547)


Just in case you need yet another reason to utilize passive DNS analytics, a significant new vulnerability is out for glibc. Have Stream? You can monitor your queries for this IOC:

https://sourceware.org/ml/libc-alpha/2016-02/msg00416.html

Update: the attack requires both A and AAAA records, so the search below only shows possible attacks with both record types involved. This should return zero results. If results are returned, there “may” be something of interest; drill into the answers involved to determine whether they are malicious based on the CVE above.

index=streams sourcetype=stream:dns (query_type=A OR query_type=AAAA)
[
search index=streams sourcetype=stream:dns (query_type=A OR query_type=AAAA)
| rare limit=20 dest
| fields + dest | format
]
| stats max(bytes_in) max(bytes_out) max(bytes) values(query_type) as qt by src,dest,query
| where mvcount(qt)>=2
| sort - max*
| lookup domain_segments_lookup domain as query OUTPUT privatesuffix as domain
| lookup alexa_lookup_by_str domain OUTPUT rank
| where isnull(rank)

Don’t have Stream yet? Deploy in under 20 minutes.
http://www.rfaircloth.com/2015/11/06/get-started-with-splunk-app-stream-6-4-dns/

Dealing with bad threat data

Every now and then a threat data provider will include invalid entries in their threat list, creating loads of false positives in Enterprise Security. For “reasons”, namely performance, ES will append new entries to the internal threat system but does not remove entries no longer present in a source. You can easily clear an entire threat collection, which will allow your system to reload from the current sources.

splunk stop
splunk clean inputdata threatlist
splunk clean inputdata threat_intelligence_manager
splunk start
splunk clean kvstore -app DA-ESS-ThreatIntelligence -collection <collection_name>

Common values for <collection_name> are http_intel and domain_intel.
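
For example, to clear just the HTTP intel collection (a sketch using one of the common collection names; repeat per collection as needed):

splunk clean kvstore -app DA-ESS-ThreatIntelligence -collection http_intel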

Building a Reliable Syslog Infrastructure on CentOS 7 for Splunk


Overview

This post covers preparation of a base infrastructure for high-availability ingestion of syslog data, with a default virtual server and configuration for test data onboarding. Refer to technology-specific onboarding procedures for individual sources.

Requirement

Multiple critical log sources require a reliable syslog infrastructure. The following attributes must be present in the solution:

  • Enterprise-supported Linux such as RHEL or CentOS
  • Syslog configuration which will not impact the logging of the host on which syslog is configured
  • External load balancing utilizing DNAT; lacking available enterprise shared-services NLB devices, KEMP offers a free-to-use version of their product, good for up to 20 Mbps and suitable for many cases

Technical Environment

The following systems will be created utilizing physical or virtual systems. System specifications will vary based on estimated load.

  • CentOS 7.x (current) servers in an n+1 configuration
    • Minimum 2 GB memory
    • Minimum 2 x 2.3 GHz cores
    • Mounts configured per enterprise standard with the following additions
      • /opt/splunk 40 GB XFS
      • /var/splunk-syslog 40 GB XFS
  • Dual-interfaced load balancer configured for DNAT support
  • Subnet with, at minimum, one address per unique syslog source technology; additional space for growth is strongly advised
  • Subnet allocated for the syslog servers

Solution: Prepare the syslog-ng servers

The following procedure will be utilized to prepare the syslog-ng servers:

  1. Install the base operating system and harden according to enterprise standards
  2. Provision and mount the application partitions /opt/splunk and /var/splunk-syslog according to the estimates required for your environment.
    1. Note 1: typical configurations utilize noatime on both mounts
    2. Note 2: typical configurations utilize noexec on the syslog mount
  3. Enable the EPEL repository for RHEL/CentOS as the source for the syslog-ng installation
    yum -y install epel-release
    yum -y repolist
    yum -y update
    reboot
  4. Install the syslog-ng software
    yum -y install syslog-ng
  5. Replace /etc/syslog-ng/syslog-ng.conf
    @version:3.5
    @include "scl.conf"
    
    # syslog-ng configuration file.
    #
    # SecKit template 
    # We utilize syslog-ng on Centos to allow syslog ingestion without 
    # interaction with the OS
    
    # Note: it also sources additional configuration files (*.conf)
    #    located in /etc/syslog-ng/conf.d/
    
    options {
        flush_lines (0);
        time_reopen (10);
        log_fifo_size (1000);
        chain_hostnames (off);
        use_dns (no);
        use_fqdn (no);
        create_dirs (no);
        keep_hostname (yes);
    };
    
    # Source additional configuration files (.conf extension only)
    @include "/etc/syslog-ng/conf.d/*.conf"
  6. Create the following directories for modular configuration of syslog-ng
    mkdir -p /etc/syslog-ng/conf.d/splunk-0-source
    mkdir -p /etc/syslog-ng/conf.d/splunk-1-dest  
    mkdir -p /etc/syslog-ng/conf.d/splunk-2-filter  
    mkdir -p /etc/syslog-ng/conf.d/splunk-3-log  
    mkdir -p /etc/syslog-ng/conf.d/splunk-4-simple
  7. Create the Splunk master syslog-configuration /etc/syslog-ng/conf.d/splunk.conf
    ################################################################################
    # SecKit syslog template based on the work of Vladimir
    # Template from https://github.com/hire-vladimir/SA-syslog_collection/
    ################################################################################
    
    ################################################################################
    #### Global config ####
    options {
      create-dirs(yes);
    
      # Specific file/directory permissions can be set
      # this is particularly needed, if Splunk UF is running as non-root
      owner("splunk");
      group("splunk");
      dir-owner("splunk");
      dir-group("splunk");
      dir-perm(0755);
      perm(0755);
    
      time-reopen(10);
      keep-hostname(yes);
      log-msg-size(65536);
    };
    
    @include "/etc/syslog-ng/conf.d/splunk-0-source/*.conf"
    @include "/etc/syslog-ng/conf.d/splunk-1-dest/*.conf"
    @include "/etc/syslog-ng/conf.d/splunk-2-filter/*.conf"
    @include "/etc/syslog-ng/conf.d/splunk-3-log/*.conf"
    @include "/etc/syslog-ng/conf.d/splunk-4-simple/*.conf"
  8. Create the catch all syslog collection source. /etc/syslog-ng/conf.d/splunk-4-simple/8100-default.conf
    ################################################################################
    #### Enable listeners ####
    source remote8100_default
    {
        udp(port(8100));
        tcp(port(8100));
    };
    
    #### Log remote sources classification ####
    destination d_default_syslog {
            file("/var/splunk-syslog/default/$HOST.log");
    };
    
    # catch all, all data that did not meet above criteria will end up here
    log {
            source(remote8100_default);
            destination(d_default_syslog);
            flags(fallback);
    };
  9. Ensure Splunk can read from the syslog folders. The paths should exist at this point due to the dedicated mount
    chown -R splunk:splunk /var/splunk-syslog
    chmod -R 0755 /var/splunk-syslog
  10. Verify the syslog-ng configuration; no errors should be reported (no output)
    syslog-ng -s
  11. Update the systemd service configuration to correctly support both rsyslog and syslog-ng; edit /lib/systemd/system/syslog-ng.service
    find:
    ExecStart=/usr/sbin/syslog-ng -F -p /var/run/syslogd.pid
    replace:
    ExecStart=/usr/sbin/syslog-ng -F -p /var/run/syslogd-ng.pid
  12. Create log rotation configuration /etc/logrotate.d/splunk-syslog
    /var/splunk-syslog/*/*.log
    {
        daily
        compress
        delaycompress
        rotate 4
        ifempty
        maxage 7
        nocreate
        missingok
        sharedscripts
        postrotate
        /bin/kill -HUP `cat /var/run/syslogd-ng.pid 2> /dev/null` 2> /dev/null || true
        endscript
    }
  13. Resolve SELinux blocked actions
    semanage port -a -t syslogd_port_t -p tcp 8100
    semanage port -a -t syslogd_port_t -p udp 8100
    semanage fcontext -a -t var_log_t /var/splunk-syslog
    restorecon -v '/var/splunk-syslog'
    logger -d -P 8100 -n 127.0.0.1 -p 1 "test2"
    cd /root
    mkdir selinux
    cd selinux
    audit2allow -M syslog-ng-modified -l -i /var/log/audit/audit.log
    # verify the file does not contain anything not related to syslog
    vim syslog-ng-modified.te
    semodule -i syslog-ng-modified.pp
  14. Allow firewall access to the new ports
    firewall-cmd --permanent --zone=public --add-port=8100/tcp 
    firewall-cmd --permanent --zone=public --add-port=8100/udp
    firewall-cmd --reload
  15. Enable and start syslog-ng
    systemctl enable syslog-ng
    systemctl start syslog-ng
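
Once started, a quick health check confirms the daemon is running and listening on the new ports (a convenience check, not part of the formal procedure):

systemctl status syslog-ng
ss -tulnp | grep 8100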


Solution: Prepare the KEMP load balancer

  • Deploy virtual load balancer to hypervisor with two virtual interfaces
    • #1 Enterprise LAN
    • #2 Private network for front end of syslog servers
  • Login to the load balancer web UI
  • Apply free or purchased license
  • Navigate to network setup
    • Set eth0 external ip
    • Set eth1 internal ip
  • Add the first virtual server (UDP)
    • Navigate to Virtual Services –> Add New
    • set the virtual address
    • set port 514
    • set port name syslog-default-8100-udp
    • set protocol udp
    • Click Add this virtual service
    • Adjust virtual service settings
      • Force Layer 7
      • Transparency
      • set persistence mode source ip
      • set persistence time 6 min
      • set scheduling method to least connection
      • Use Server Address for NAT
      • Click Add new real server
        • Enter IP of syslog server 1
        • Enter port 8100
  • Add the second virtual server (TCP)
    • Navigate to Virtual Services –> Add New
    • set the virtual address
    • set port 514
    • set port name syslog-default-8100-tcp
    • set protocol tcp
    • Click Add this virtual service
    • Adjust virtual service settings
      • Service type Log Insight
      • Transparency
      • set scheduling method to least connection
      • TCP Connection only check port 8100
      • Click Add new real server
        • Enter IP of syslog server 1
        • Enter port 8100
  • Repeat the add virtual server process for additional resource servers


Update syslog server routing configuration

Update the default gateway of the syslog servers to utilize the NLB internal interface

Validation procedure

From a Linux host, utilize the following commands to validate that the NLB and log servers are working together:

logger -P 514 -T -n <vip_ip> "test TCP"
logger -P 514 -d -n <vip_ip> "test UDP"

Verify the messages are logged in /var/splunk-syslog/default.

Prepare Splunk Infrastructure for syslog

  • Follow the procedure for deployment of the Universal Forwarder with a deployment client; ensure the client has valid outputs and base configuration
  • Create the indexes syslog and syslog_unclassified
  • Deploy input configuration for the default input
[monitor:///var/splunk-syslog/default/*.log]
host_regex = .*\/(.*)\.log
sourcetype = syslog
source = syslog_enterprise_default
index = syslog_unclassified
disabled = false


  • Validate the index contains data
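
A minimal sanity-check search for that last step (assuming events have arrived since the input was deployed):

index=syslog_unclassified sourcetype=syslog | stats count by host, source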