Category Archives: Security

What’s in a URL? Now you can Splunk that

While hunting we find interesting URLs in logs, both email and proxy, all the time. What will that URL return? If it redirects, where is it going, and what kind of content is served? These are questions you might be asking, and if you are not asking them, now is the time to start. I’ve released a new add-on to Splunkbase, a little adaptive response action that can be used with just Splunk Enterprise or with Splunk Enterprise Security to collect and index information about those URLs.

https://splunkbase.splunk.com/app/3630/

Ghost Detector (CVE-2015-7547)


Just in case you need yet another reason to utilize passive DNS analytics, a significant new vulnerability is out for glibc. Have Splunk Stream? You can monitor your DNS queries for this IOC:

https://sourceware.org/ml/libc-alpha/2016-02/msg00416.html

Update: the attack requires both A and AAAA records, so the search below only shows possible attacks with both query types involved. It should return zero results. If results are returned, there “may” be something of interest; drill into the answers involved to determine if they are malicious based on the CVE above.

index=streams sourcetype=stream:dns (query_type=A OR query_type=AAAA)
[
search index=streams sourcetype=stream:dns (query_type=A OR query_type=AAAA)
| rare limit=20 dest
| fields + dest | format
]
| stats max(bytes_in) max(bytes_out) max(bytes) values(query_type) as qt by src,dest,query
| where mvcount(qt)>=2
| sort - max*
| lookup domain_segments_lookup domain as query OUTPUT privatesuffix as domain
| lookup alexa_lookup_by_str domain OUTPUT rank
| where isnull(rank)

Don’t have Stream yet? Deploy it in under 20 minutes:
http://www.rfaircloth.com/2015/11/06/get-started-with-splunk-app-stream-6-4-dns/

Dealing with bad threat data

Every now and then a threat data provider will include invalid entries in their threat list, creating loads of false positives in Enterprise Security. For “reasons” (namely performance), ES appends new entries to the internal threat system but does not remove entries no longer present in a source. You can easily clear an entire threat collection, which will allow your system to reload from the current sources.

splunk stop
splunk clean inputdata threatlist
splunk clean inputdata threat_intelligence_manager
splunk clean kvstore -app DA-ESS-ThreatIntelligence -collection <collection>
splunk start

Common values for <collection> are http_intel and domain_intel.
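For example, to clear the URL-based threat collection while Splunk is stopped:

splunk clean kvstore -app DA-ESS-ThreatIntelligence -collection http_intel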

Building Reliable Syslog infrastructure on Centos 7 for Splunk

 

Overview

Prepare a base infrastructure for high-availability ingestion of syslog data, with a default virtual server and configuration for onboarding test data. Refer to technology-specific onboarding procedures for individual sources.

Requirement

Multiple critical log sources require a reliable syslog infrastructure. The following attributes must be present in the solution:

  • Enterprise-supported Linux such as RHEL or CentOS
  • Syslog configuration that will not impact the logging of the host on which it is configured
  • External load balancing utilizing DNAT. If enterprise shared-services NLB devices are not available, KEMP offers a free-to-use version of their product (up to 20 Mbps) suitable for many cases

Technical Environment

The following systems will be created utilizing physical or virtual machines. System specifications will vary based on estimated load.

  • CentOS 7.x (current) servers in an n+1 configuration
    • Minimum 2 GB memory
    • Minimum 2 x 2.3 GHz cores
    • Mounts configured per enterprise standard with the following additions
      • /opt/splunk 40 GB XFS
      • /var/splunk-syslog 40 GB XFS
  • Dual interfaced load balancer configured for DNAT support.
  • Subnet with, at minimum, one address per unique syslog source (technology); additional space for growth is strongly advised
  • Subnet allocated for syslog servers

Solution: Prepare the syslog-ng servers

The following procedure will be utilized to prepare the syslog-ng servers

  1. Install the base operating system and harden according to enterprise standards
  2. Provision and mount the application partitions /opt/splunk and /var/splunk-syslog according to the estimates required for your environment.
    1. Note 1: typical configurations utilize noatime on both mounts
    2. Note 2: typical configurations utilize noexec on the syslog mount
  3. Enable the EPEL repository for RHEL/CentOS as the source for the syslog-ng installation
    yum -y install epel-release
    yum -y repolist
    yum -y update
    reboot
  4. Install the syslog-ng software
    yum -y install syslog-ng
  5. Replace /etc/syslog-ng/syslog-ng.conf
    @version:3.5
    @include "scl.conf"
    
    # syslog-ng configuration file.
    #
    # SecKit template 
    # We utilize syslog-ng on Centos to allow syslog ingestion without 
    # interaction with the OS
    
    # Note: it also sources additional configuration files (*.conf)
    #    located in /etc/syslog-ng/conf.d/
    
    options {
        flush_lines (0);
        time_reopen (10);
        log_fifo_size (1000);
        chain_hostnames (off);
        use_dns (no);
        use_fqdn (no);
        create_dirs (no);
        keep_hostname (yes);
    };
    
    # Source additional configuration files (.conf extension only)
    @include "/etc/syslog-ng/conf.d/*.conf"
  6. Create the following directories for modular configuration of syslog-ng
    mkdir -p /etc/syslog-ng/conf.d/splunk-0-source
    mkdir -p /etc/syslog-ng/conf.d/splunk-1-dest  
    mkdir -p /etc/syslog-ng/conf.d/splunk-2-filter  
    mkdir -p /etc/syslog-ng/conf.d/splunk-3-log  
    mkdir -p /etc/syslog-ng/conf.d/splunk-4-simple
  7. Create the Splunk master syslog-ng configuration /etc/syslog-ng/conf.d/splunk.conf
    ################################################################################
    # SecKit syslog template based on the work of Vladimir
    # Template from https://github.com/hire-vladimir/SA-syslog_collection/
    ################################################################################
    
    ################################################################################
    #### Global config ####
    options {
      create-dirs(yes);
    
      # Specific file/directory permissions can be set
      # this is particularly needed, if Splunk UF is running as non-root
      owner("splunk");
      group("splunk");
      dir-owner("splunk");
      dir-group("splunk");
      dir-perm(0755);
      perm(0755);
    
      time-reopen(10);
      keep-hostname(yes);
      log-msg-size(65536);
    };
    
    @include "/etc/syslog-ng/conf.d/splunk-0-source/*.conf"
    @include "/etc/syslog-ng/conf.d/splunk-1-dest/*.conf"
    @include "/etc/syslog-ng/conf.d/splunk-2-filter/*.conf"
    @include "/etc/syslog-ng/conf.d/splunk-3-log/*.conf"
    @include "/etc/syslog-ng/conf.d/splunk-4-simple/*.conf"
  8. Create the catch-all syslog collection source /etc/syslog-ng/conf.d/splunk-4-simple/8100-default.conf
    ################################################################################
    #### Enable listeners ####
    source remote8100_default
    {
        udp(port(8100));
        tcp(port(8100));
    };
    
    #### Log remote sources classification ####
    destination d_default_syslog {
            file("/var/splunk-syslog/default/$HOST.log");
    };
    
    # catch all, all data that did not meet above criteria will end up here
    log {
            source(remote8100_default);
            destination(d_default_syslog);
            flags(fallback);
    };
  9. Ensure the splunk user can read from the syslog folders. The paths should exist at this point due to the dedicated mount.
    chown -R splunk:splunk /var/splunk-syslog
    chmod -R 0755 /var/splunk-syslog
  10. Verify the syslog-ng configuration; no errors should be reported (the command produces no output on success)
    syslog-ng -s
  11. Update the systemd service configuration to correctly support both rsyslog and syslog-ng by editing /lib/systemd/system/syslog-ng.service
    find:
    ExecStart=/usr/sbin/syslog-ng -F -p /var/run/syslogd.pid
    replace:
    ExecStart=/usr/sbin/syslog-ng -F -p /var/run/syslogd-ng.pid
  12. Create log rotation configuration /etc/logrotate.d/splunk-syslog
    /var/splunk-syslog/*/*.log
    {
        daily
        compress
        delaycompress
        rotate 4
        ifempty
        maxage 7
        nocreate
        missingok
        sharedscripts
        postrotate
        /bin/kill -HUP `cat /var/run/syslogd-ng.pid 2> /dev/null` 2> /dev/null || true
        endscript
    }
  13. Resolve SELinux blocked actions
    semanage port -a -t syslogd_port_t -p tcp 8100
    semanage port -a -t syslogd_port_t -p udp 8100
    semanage fcontext -a -t var_log_t /var/splunk-syslog
    restorecon -v '/var/splunk-syslog'
    logger -d -P 8100 -n 127.0.0.1 -p 1 "test2"
    cd /root
    mkdir selinux
    cd selinux
    audit2allow -M syslog-ng-modified -l -i /var/log/audit/audit.log
    # verify the file does not contain anything not related to syslog
    vim syslog-ng-modified.te
    semodule -i syslog-ng-modified.pp
  14. Allow firewall access to the new ports
    firewall-cmd --permanent --zone=public --add-port=8100/tcp 
    firewall-cmd --permanent --zone=public --add-port=8100/udp
    firewall-cmd --reload
  15. Enable and start syslog-ng
    systemctl enable syslog-ng
    systemctl start syslog-ng

 

Solution: Prepare the KEMP load balancer

  • Deploy virtual load balancer to hypervisor with two virtual interfaces
    • #1 Enterprise LAN
    • #2 Private network for front end of syslog servers
  • Login to the load balancer web UI
  • Apply free or purchased license
  • Navigate to network setup
    • Set eth0 external ip
    • Set eth1 internal ip
  • Add the first virtual server (udp)
    • Navigate to Virtual Services –> Add New
    • set the virtual address
    • set port 514
    • set port name syslog-default-8100-udp
    • set protocol udp
    • Click Add this virtual service
    • Adjust virtual service settings
      • Force Layer 7
      • Transparency
      • set persistence mode source ip
      • set persistence time 6 min
      • set scheduling method least connection
      • Use Server Address for NAT
      • Click Add new real server
        • Enter IP of syslog server 1
        • Enter port 8100
  • Add the second virtual server (tcp)
    • Navigate to Virtual Services –> Add New
    • set the virtual address
    • set port 514
    • set port name syslog-default-8100-tcp
    • set protocol tcp
    • Click Add this virtual service
    • Adjust virtual service settings
      • Service type Log Insight
      • Transparency
      • set scheduling method least connection
      • TCP Connection only check port 8100
      • Click Add new real server
        • Enter IP of syslog server 1
        • Enter port 8100
  • Repeat the add virtual server process for additional resource servers

 

Update syslog server routing configuration

Update the default gateway of each syslog server to use the NLB internal interface address so return traffic flows through the load balancer (required for DNAT transparency).
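A minimal sketch on CentOS 7, assuming the syslog-facing connection is named eth1 and the NLB internal address is 192.168.50.1 (substitute your own connection name and address):

nmcli connection modify eth1 ipv4.gateway 192.168.50.1
nmcli connection up eth1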

Validation procedure

From a Linux host, utilize the following commands to validate that the NLB and log servers are working together:
logger -P 514 -T -n <vip_ip> "test TCP"
logger -P 514 -d -n <vip_ip> "test UDP"
Verify the messages are logged in /var/splunk-syslog/default.

Prepare Splunk Infrastructure for syslog

  • Follow your procedure for deployment of the Universal Forwarder with a deployment client; ensure the client has valid outputs and base configuration
  • Create the indexes syslog and syslog_unclassified (a minimal indexes.conf sketch follows the input stanza below)
  • Deploy input configuration for the default input
[monitor:///var/splunk-syslog/default/*.log]
host_regex = .*\/(.*)\.log
sourcetype = syslog
source = syslog_enterprise_default
index = syslog_unclassified
disabled = false
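A minimal indexes.conf sketch for the two indexes, deployed to the indexing tier; the paths follow Splunk defaults, and sizing/retention settings are left to your environment:

[syslog]
homePath   = $SPLUNK_DB/syslog/db
coldPath   = $SPLUNK_DB/syslog/colddb
thawedPath = $SPLUNK_DB/syslog/thaweddb

[syslog_unclassified]
homePath   = $SPLUNK_DB/syslog_unclassified/db
coldPath   = $SPLUNK_DB/syslog_unclassified/colddb
thawedPath = $SPLUNK_DB/syslog_unclassified/thaweddb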

 

  • Validate the index contains data

 

When you have 100 problems, more logs are not the answer

Often SIEM projects begin where log aggregation projects end: so many logs cut into organized stacks of wood, ready to burn for value. I can be quoted on this: “All logs can be presumed to have security value.” However, one project to build the world’s largest bonfire is seldom the correct answer. What value, you may ask? Value will be gained in one or more of these categories:

Continue reading When you have 100 problems, more logs are not the answer

Making Asset data useful with Splunk Enterprise Security CSC 1 Part 1


Updated broken link 2017-10-04

Friend, we need to talk; there is something important that you have been overlooking for a long time. Two years ago, when you implemented your first SIEM, you gave your consultant an Excel file listing all of your servers on the corporate network. You promised you would spend time on it after the consultant left, but then you got the new FireEye. You didn’t forget, but then you got a new next-gen firewall, and after that there was the new red team initiative.

It is time to make a difference in the security posture of your organization. It is time to take a bite out of CSC #1a. That’s not a typo: we need to work on #1a; #2 can wait, and so can #1b. It is time to work SANS Critical Control #1. I know the CMDB is out of date and doesn’t reflect today’s architecture, but we can do a lot with a small amount of work. Today I will share how to lay a foundation to address CSC 1: Inventory of Authorized (a) and Unauthorized (b) Devices.

Objective

Objective 1: Identify the location of each asset using latitude, longitude, city, state, and zip
Objective 2: Identify the compliance zone for each network segment
Objective 3: Identify categories that can assist the analyst in review of events related to the network containing the source or destination
Objective 4: Identify the minimum priority of devices in a given network segment.

Code is provided via Security Kit. Install the app “SecKit_SA_idm_common” on your ES search head.

Don’t forget to update the app imports to include “SecKit_SA_.*”

Walkthrough

  1. Update seckit_idm_pre_cidr_location.csv to define the location for each subnet in CIDR notation (example rows are shown after this walkthrough). On a very large campus it may be desirable to pinpoint a specific building; however, in most cases it will be adequate to have a single lat/long pair for all subnets on a campus. Include all private and public address space owned or managed by your organization; do not include external spaces such as hosting providers and cloud services.
  2. Update seckit_idm_pre_cidr_category.csv; note the subnets in this case may be larger or smaller than those used in the location lookup. The most precise definition will be utilized by Splunk Identity Management within Enterprise Security. This file may contain cloud address space if the IP space is not continually re-purposed.
    1. Populate pci_cidr_domain; we will overload this field for non-PCI environments.
      1. PCI usage: wireless OR trust|cardholder OR trust|dmz OR empty (empty or default represents untrust)
      2. Non-PCI usage: substitute another compliance domain in place of cardholder, such as pii, sox, hipaa, or cip
    2. Populate cidr_priority
      1. low – the most often used value; should represent the majority of your devices
      2. medium – common servers
      3. high – devices of significant importance
      4. critical – devices requiring immediate response, such as:
        1. A server whose demise would cause you to work on Christmas
        2. A server whose demise could cause the closure of the company even if you work on Christmas
    3. Populate cidr_category; values provided here will apply to all devices in this network. I will list some very common categories I apply. Note each category needs to be pipe (“|”) separated and may not contain a space:
      1. net_internal – internal IP space
      2. net_external – external IP space
      3. netid_ddd_ddd_ddd_ddd_bits – applied to each allocated subnet (the smallest assigned unit)
      4. zone_string – where string is one of dmz, server, endpoint, storage management, wan, vip, nat
      5. facility_string – where string is the internal facility identification code
      6. facility_type_string – where string is a common identifier such as datacenter, store, office, warehouse, port, mine, airport, moonbase, cloud, dr, ship
      7. net_assignment_string – where string is one of static, dyndhcp, dynvirt
    4. Run the saved search “seckit_idm_common_assets_networks_lookup_gen” and review the results in seckit_idm_common_assets_networks.csv. You may run this report on demand as the lookup files above are changed, or on a schedule of your choice.
    5. Enable the asset file in Enterprise Security by navigating to Configuration –> Enrichment –> Assets and Identities, then clicking Enable on “seckit_idm_common_assets_networks”.
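To illustrate steps 1 and 2, the rows below are hypothetical examples; the column names are assumptions inferred from the field names referenced above, so confirm them against the headers shipped with SecKit_SA_idm_common before editing.

seckit_idm_pre_cidr_location.csv (columns assumed):
cidr,lat,long,city,state,zip,country
10.20.0.0/16,33.749,-84.388,Atlanta,GA,30303,US

seckit_idm_pre_cidr_category.csv (columns assumed):
cidr,pci_cidr_domain,cidr_priority,cidr_category
10.20.30.0/24,trust|cardholder,high,net_internal|zone_server|facility_atl01|facility_type_datacenter|net_assignment_static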

Bonus Objective

Enhance your existing server and network device assets list by integrating the following lookups and merging the OUTPUT fields with the device specific asset data.

  1. | lookup seckit_idm_pre_cidr_category_by_cidr_lookup cidr as lip OUTPUT cidr_pci_domain as pci_domain cidr_category as category
  2. | lookup idm_shared_cidr_location_lookup cidr as ip OUTPUT lat long city country

 

 

 

Share that search! Building a content pack for Splunk Enterprise Security 4.0+

Splunk has initial support for export of “content”, which can be dashboards and correlation searches created by the user to share with another team. What if you need to be a little more complex, for example including a lookup-generating search? This gets a little more complicated but is very doable by the average admin. Our mission here is to implement UC0029. What is UC0029? Glad you asked: each new malware signature detected should be reviewed by a security analyst to determine if proactive steps can be taken to prevent infection. We will create this as a notable event so that we can provide evidence to audit that the process exists and was followed.

Source code will be provided, so I will not detail step by step how the objects are created and defined in this post.

UC0029 Endpoint new malware detected by signature

 

My “brand” is SecKit, so you will see this identifier in content I have created alone or with my team here at Splunk. As per our best practice, adopt your own brand and use it appropriately for your content. There is no technical reason to replace the “brand” on third-party content you elect to utilize.

Note: ensure all knowledge objects are exported to all apps (shared globally) and owned by admin as you go.

      • Create the app DA-ESS-SecKit-EndpointProtection
        • This will contain ES-specific content such as menus, dashboards, and correlation searches
      • Create the working app SA-SecKit-EndpointProtection
        • This will contain props, transforms, lookups, and scheduled searches created outside of ES
      • Create the lookup seckit_endpoint_malware_tracker. This lookup will contain each signature as it is detected in the environment and some handy information such as the endpoint where it was first detected, the user involved, and the most recent detection.
      • Create empty lookup CSV files
        • seckit_endpoint_malware_tracker.csv (note you will not ship this file in your content pack)
        • seckit_endpoint_malware_tracker.csv.default

Build and test the saved search SecKit Malware Tracker – Lookup Gen. This search will use tstats to find the first and last instance of all signatures in a time window and update the lookup if an earlier or later instance is found.
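The actual search ships with the source code linked below; as a rough sketch of the approach (assuming the CIM Malware data model and the lookup columns named here), it looks something like this:

| tstats min(_time) as firsttime max(_time) as lasttime from datamodel=Malware.Malware_Attacks by Malware_Attacks.signature Malware_Attacks.dest Malware_Attacks.user
| rename Malware_Attacks.* as *
| inputlookup append=t seckit_endpoint_malware_tracker
| stats min(firsttime) as firsttime max(lasttime) as lasttime values(dest) as dest values(user) as user by signature
| outputlookup seckit_endpoint_malware_tracker

Merging the appended lookup rows in the stats keeps the earliest and latest timestamps whether they come from new events or from prior runs, so the tracker only grows as signatures appear.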

 

      • Build and test the correlation search UC0029-S01-V001 New malware signature detected. This search will find “new” signatures from the lookup we have created and create a notable event.
      • Make it default: in both apps move content from local/ to default/; this will allow your users to customize the content without replacing the existing searches.
      • Turn it off by default: it is best practice to ensure any load-generating searches are disabled by default.

        Add disabled = 1 to each savedsearches.conf stanza that does not end in “- Rule”, and add disabled = 1 to each correlationsearches.conf stanza.
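For example, the lookup gen stanza in SA-SecKit-EndpointProtection/default/savedsearches.conf might carry (stanza name illustrative):

[SecKit Malware Tracker - Lookup Gen]
disabled = 1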

Create a .spl (tar.gz) containing both apps created. Write a blog post explaining what you did and how the searches work, and share the code! Gain fame and respect, maybe a fez or a cape.
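A minimal packaging sketch, with an arbitrary archive name:

cd $SPLUNK_HOME/etc/apps
tar -czvf seckit-uc0029-contentpack.spl DA-ESS-SecKit-EndpointProtection SA-SecKit-EndpointProtection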

The source code

https://bitbucket.org/rfaircloth-splunk/securitykit/src/1ea60c46b685622116e28e8f1660a6c63e7d9e96/base/ess/?at=master

Bonus: Delegate administration of content app

  1. Using your favorite editor edit app/metadata/local.meta
  2. Update the following permissions, adding the “ess_admin” role

## access = read : [ * ], write : [ admin,role2,role3 ]
[savedsearches]
access = read : [ * ], write : [ admin,ess_admin ]

[correlationsearches]
access = read : [ * ], write : [ admin,ess_admin ]