The site's been down for a few days; BlueHost has been suffering from a DDoS against at least one of the sites they host, and my site shared that infrastructure. For $3.95 a month I don't expect too much, but having some ability to move sites to new hosts would be nice. Anyway, I'm up on Azure now until I decide if I want to be my own webmaster or revert to paying someone else to pretend to worry about things like that. On the plus side, the outage forced me to update the site infrastructure. I'm now using certificates from Let's Encrypt. If you have CLI access to your Apache-hosted site, it is super easy and free to enable good encryption:
sudo certbot --apache -d www.rfaircloth.com -d rfaircloth.com -d rfaircloth.westus.cloudapp.azure.com --must-staple --redirect --hsts --uir --rsa-key-size 4096
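Let's Encrypt certificates expire every 90 days, so schedule renewal as well. A minimal sketch, assuming certbot lives at /usr/bin/certbot and your distribution does not already ship a renewal timer; add a cron entry such as:
0 3 * * * /usr/bin/certbot renew --quiet
certbot renew only replaces certificates that are close to expiry, so running it daily is safe.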
Hunting, we find interesting URLs in logs, both email and proxy, all the time. What will that URL return? If it redirects, where is it going, and what kind of content will it serve? These are questions you might be asking; if you are not asking them, now is the time to start. I've released a new add-on to Splunkbase, a little adaptive response action that can be used with just Splunk Enterprise OR Splunk Enterprise Security to collect and index information about those URLs.
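As a starting point for finding candidate URLs to feed such an action, here is a minimal sketch of a hunting search, assuming your proxy data is mapped to the CIM Web data model; the head count is an arbitrary choice:
| tstats count from datamodel=Web by Web.url
| sort + count
| head 20
The rarest URLs in your environment are usually a better place to start than the most popular ones.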
This post is short and sweet: in ES 4.7 the Alexa download is not enabled by default. Enabling and using this list, which can be very valuable in domain/FQDN-based analysis, is a simple two-part process.
- Navigate to Enterprise Security -> Configure -> Threat Intelligence Downloads
- Find Alexa
- Click enable
- Navigate to Splunk Settings -> Searches, Reports, and Alerts
- Select “All” from the app drop down
- Search for "Threat - Alexa Top Sites - Lookup Gen"
- Click Edit under actions and then enable
- Optional: click Edit under Actions again and edit the cron schedule. Set the task to daily execution at 03:00 with an auto window. This reduces the chance the list will not be updated if a run is skipped due to search head maintenance.
- Optional: the out-of-the-box gen search creates a large dispatch directory entry, which is not desirable on search head clusters or where disk space is at a premium, such as in public clouds. Update the search as follows (appending the stats count) to prevent creation of a result set on the search head: | inputthreatlist alexa_top_one_million_sites fieldnames="rank,domain" | outputlookup alexa_lookup_by_str | stats count
- Click "Run" to build the list so you can have it right now
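Once the lookup is populated you can use it for enrichment. A minimal sketch, assuming DNS queries in an index named dns with the queried name in a field called query; both names are placeholders for your environment:
index=dns
| lookup alexa_lookup_by_str domain AS query OUTPUT rank
| where isnull(rank)
| stats count by query
Domains that fall outside the Alexa top one million are often a better hunting ground than the popular ones.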
I've had this in the bucket for a while waiting for the right time to share. There is a growing demand to develop "real time" analytic capability using machine data. Some great things are being created in labs; their problem coming out of the lab is generally the inability to get events from the source systems, followed immediately by difficulty normalizing events. If you've been working with these systems for very long and have also worked with Splunk, you may share my opinion that the Universal Forwarder and the schema-on-read power of Splunk are simply unmatched. How can we leverage the power of Splunk without reinventing the wheel, the axle, and the engine?
- Liu-yuan Lai, Engineer, Splunk https://conf.splunk.com/session/2015/conf2015_LYuan_Splunk_BigData_DistributedProcessingwithSpark.pdf
- Splunk App for CEF https://splunkbase.splunk.com/app/1847/
Back in 2015 I attended a short .conf presentation that introduced me to the concepts and the value of Spark-like engines. Last year our new CEF app introduced the idea that message distribution can be executed on the indexers, allowing very large-scale processing with Splunk.
Introducing Integration Kit (IntKit)
- Message Preparation Tools https://bitbucket.org/SPLServices/intkit_sa_msgtools
- Kafka Producer https://bitbucket.org/SPLServices/intkit_sa_kafkaproducer
The solution adds three interesting abilities to Splunk, using "summarizing searches" to distribute events via a durable message bus:
- Send raw events using durable message queue
- Send reformatted events using an arbitrary schema
- Send events in the "Data Model" schema, eliminating the need to build parsing logic for each type of source on the receiving side.
But what about other solutions?
- Syslog Output using the heavy forwarder
- Syslog is not a reliable delivery protocol; it is unable to resend lost events, and blocking can cause a backup on the UF.
- CEF 2.0
- A great tool, but it is limited to single-line events or reformatting, and it also allows for data loss.
The tools consist of a message formatter, currently preparing a _json field (other formats such as XML or CSV could be implemented), and a producer that will place the message into the Kafka queue (other queues could also be implemented):
| datamodel Network_Traffic All_Traffic search
| fields + _raw,All_Traffic.*
| generatejsonmsg suppress_empty=true suppress_unknown=true suppress_stringnull=true output_field=_json
include_metadata=true include_fields=true include_raw=false sort_fields=true sort_mv=true
| ProduceKafkamsgCommand bootstrap_servers="localhost:9092" topic="topicname" msgfield="_json"
| stats count
What does this do:
- Using the datamodel command, gather all Network_Traffic events
- Keep only _raw and the data model fields
- Generate a _json field containing the fields in JSON format, omitting empty strings and "null" strings, and sorting the values of multivalue fields
- Send the message to Kafka using the bootstrap server (localhost:9092) and the topic "topicname"
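To act as a "summarizing search," the pipeline needs to run on a schedule and pick up only new events each run. A minimal sketch of a savedsearches.conf entry, assuming a five-minute cadence; the stanza name and schedule are placeholders, not part of the add-on:
[IntKit - Send Network Traffic to Kafka]
enableSched = 1
cron_schedule = */5 * * * *
dispatch.earliest_time = -10m@m
dispatch.latest_time = -5m@m
search = | datamodel Network_Traffic All_Traffic search | fields + _raw,All_Traffic.* | generatejsonmsg suppress_empty=true suppress_unknown=true suppress_stringnull=true output_field=_json include_metadata=true include_fields=true include_raw=false sort_fields=true sort_mv=true | ProduceKafkamsgCommand bootstrap_servers="localhost:9092" topic="topicname" msgfield="_json" | stats count
The lagged time window leaves room for late-arriving events.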
This project is slightly above a science project; that is, it is poorly documented and mostly functional. I expect it will fit in well with the ecosystem it's helping. If you use it, please submit enhancements to make it better, including documentation.
The concepts presented in this post, as well as the original inspiration, have some risks. Using alternatives to the vendor-provided init scripts has support risks, including loss of the configuration during future upgrades. Each operating system vendor has its own specific guidance on how to do this, and each automation vendor has example automation scripts as well. Picking an approach that is appropriate for your environment is up to you.
THP (Transparent Huge Pages), the bane of performance for so many things in big data, is often left on by default and is slightly difficult to disable. As a popular Splunk Answers post and Splunk consultants including Marquis have found, the best way to ensure ulimit and THP settings are properly configured is to modify the init scripts. This is a crafty and reliable way to ensure THP is disabled for Splunk; it works on all Linux operating systems regardless of how services are started.
I'm doing some work with newer operating systems and wanted to explore how systemd really works and how it changes what is possible in managing a server. Let's face it, systemd has not gotten the best of receptions in the community; after all, it moved our cheese, toys, and ball all at once. But it seems to be here to stay, so what if we could use its powers for good in relation to Splunk? Let's put an end to THP and start Splunk the systemd-native way.
Create the file /etc/systemd/system/disable-transparent-huge-pages.service:
[Unit]
Description=Disable Transparent Huge Pages
[Service]
# Type=oneshot permits the multiple ExecStart lines below
Type=oneshot
ExecStart=/bin/sh -c "echo never >/sys/kernel/mm/transparent_hugepage/enabled"
ExecStart=/bin/sh -c "echo never >/sys/kernel/mm/transparent_hugepage/defrag"
[Install]
WantedBy=multi-user.target
Verify THP and defrag are presently enabled to avoid a false sense of success:
# cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never
# cat /sys/kernel/mm/transparent_hugepage/defrag
[always] madvise never
Run systemctl daemon-reload so systemd picks up the new unit file, then enable and start the unit to disable THP:
# systemctl enable disable-transparent-huge-pages.service
# systemctl start disable-transparent-huge-pages.service
# cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]
# cat /sys/kernel/mm/transparent_hugepage/defrag
always madvise [never]
Reboot and repeat the verification to ensure the setting persists.
Create the unit file /etc/systemd/system/splunk.service:
[Unit]
Description=Splunk Enterprise
[Service]
Type=forking
Restart=always
ExecStart=/opt/splunk/bin/splunk start --no-prompt --answer-yes --accept-license
# Limit* directives replace the classic init-script ulimit -n/-u/-f settings
LimitNOFILE=65535
LimitNPROC=20480
LimitFSIZE=infinity
[Install]
WantedBy=multi-user.target
# systemctl enable splunk.service
# systemctl start splunk.service
Verify the ulimits have been applied via the Splunk logs:
# grep ulimit /opt/splunk/var/log/splunk/splunkd.log
Reboot and repeat all verifications.
Bonus material: kill Splunk (lab environments only) and watch systemd bring it back.
# killall splunk
# ps aux | grep splunk
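To confirm the restart, check the unit itself; the exact PID and timestamps will of course differ in your environment:
# systemctl status splunk.service
The status output should show the service as active with a recent start time, and the ps listing should show fresh splunkd processes.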
I've updated my best practices a bit and moved the implementation guides from Confluence out to the Bitbucket repositories in Markdown so they can be more easily referenced on any platform, including secured environments where PDFs might be discouraged.
Each repo will contain a README.md and one or more INSTALL.md files with the implementation guides. If you find an issue, have a better practice, or have another enhancement, please open an issue in the repository's tracker.
I really do "get" it: logging and monitoring can be very costly, though we all agree not nearly as costly as a breach. Each organization is struggling to ensure they log enough to gain detection and value while being good stewards of their company's budget. It has been a day of reading the Vault 7 leaks, and honestly I see not much that surprises me. I do see something worth a strong restatement: an encouragement to rethink what you log and how you log it.
The CIA has a very cool (sorry, hacker at heart) tool we have known about for some time but have not been able to talk about. Their tool "Drillbit" allows the creation of a covert tunnel using common Cisco gear in such a way that typical monitoring and logging using IDS and firewalls will not identify it. American companies should note that criminal gangs and foreign governments certainly have similar capabilities. The danger these leaks present is an increased awareness of the effectiveness of these techniques, encouraging the advancement of commodity cybercrime toolkits with ever more difficult-to-detect features. Don't use Cisco? Sorry, the bad news is that almost every major gear vendor has been exploited with similar approaches.
Splunk has your back if you are willing to let us. Using Splunk Stream and proper sensor placement, we can collect data from the inside and outside of your firewall that can be used to identify covert tunnels. Detection should be performed using three approaches:
- Static Rules such as
- "This, not that": Stream-identified traffic not present in firewall logs
- New patterns in DNS, NTP, GRE flows
- Change/login to firewall or switch not associated with a change record
- Threat List Enrichment and detection
- Source and destination traffic matching quality threat lists. Traffic for protocols other than HTTP(S) and DNS should be treated as high or critical priority.
- Machine Learning
- Anomalous egress traffic by source from network devices
- Anomalous admin connections by source to network devices
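As an example of the first approach, here is a minimal sketch of a "this, not that" search, assuming Stream data in an index named stream and firewall data in an index named firewall with src_ip/dest_ip fields; all of these names are placeholders for your environment:
index=stream sourcetype=stream:ip
| stats count by src_ip dest_ip
| search NOT [ search index=firewall | stats count by src_ip dest_ip | fields src_ip dest_ip ]
Flows the sensor saw but the firewall never logged deserve a closer look.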
I pulled this out of the archives on request. Note this was originally developed for Splunk 6.2.x and RHEL 7.0; please review the details, make sure it is suitable for you, and TEST. If I can talk you out of doing things this way, I would: Salt is a great way to manage app config, it's free, and it's just awesome.
Title: Splunk Universal Forwarder Version 6.2.3+ on Red Hat Enterprise Linux 7
Author: Ryan Faircloth
Summary: Using repositories for version management of the Splunk Universal Forwarder assists in ensuring managed Red Hat and compatible Linux systems are using the approved version of the software at all times.

[TOC]

## Setup the repository server ##

1. Install createrepo and httpd:

    ```
    yum install createrepo httpd
    ```

2. Create a user to work with the repository:

    ```
    sudo adduser repouser
    ```

3. Change user to repouser; all commands for the repository should be executed using this ID:

    ```
    sudo su - repouser
    ```

## Generate GPG Keys ##

1. Change user to repouser; all commands for the repository should be executed using this ID:

    ```
    sudo su - repouser
    ```

2. Create the default configuration for gpg by running the command:

    ```
    gpg --list-keys
    ```

3. Edit ~/.gnupg/gpg.conf:
    * Uncomment the line `no-greeting`
    * Add the following content to the end of the file:

    ```
    # Prioritize stronger algorithms for new keys.
    default-preference-list SHA512 SHA384 SHA256 SHA224 AES256 AES192 AES CAST5 BZIP2 ZLIB ZIP UNCOMPRESSED
    # Use a stronger digest than the default SHA1 for certifications.
    cert-digest-algo SHA512
    ```

4. Generate a new key with the command:

    ```
    gpg --gen-key
    ```

5. Select the following options. On CentOS/RHEL this procedure must be executed on the console, or over SSH having logged in as repouser:
    1. Type of key: "(1) RSA and RSA (default)"
    2. Key size: "4096"
    3. Expires: "10y"
    4. Confirm: "Y"
    5. Real Name: "Splunk local repository"
    6. Email address: the repository contact; this generally should be an alias or distribution list
    7. Leave the comment blank
    8. Confirm and "O" to Okay
    9. Leave the passphrase blank and confirm. A key will be generated; note the sub KEY ID, "E507D48E" in the following example:

    ```
    gpg: checking the trustdb
    gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
    gpg: depth: 0 valid: 1 signed: 0 trust: 0-, 0q, 0n, 0m, 0f, 1u
    gpg: next trustdb check due at 2025-05-24
    pub 4096R/410E1699 2015-05-27 [expires: 2025-05-24]
    Key fingerprint = 7CB8 81A9 E07F DA7B 83FF 2E1B 8B31 DA83 410E 1699
    uid Splunk local repository <email@example.com>
    sub 4096R/E507D48E 2015-05-27 [expires: 2025-05-24]
    ```

6. Export the signing key's public component; save this content for use later:

    ```
    gpg --export --armor KEY_ID >~/repo.pub
    ```

7. Install the new key into the RPM database:

    ```
    sudo cp ~/repo.pub /etc/pki/rpm-gpg/RPM-GPG-KEY-splunkrepo
    sudo rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-splunkrepo
    ```

8. Configure RPM signing with the new key:

    ```
    echo "%_signature gpg" > ~/.rpmmacros
    echo "%_gpg_name splunkrepo" >> ~/.rpmmacros
    ```

9. Create a repository:

    ```
    mkdir /opt/splunkrepo
    cp splunkforwarder*.rpm /opt/splunkrepo
    createrepo /opt/splunkrepo
    ```

10. Configure the local repository by creating the following configuration in /etc/yum.repos.d/splunk.repo:

    ```
    [splunkrepo]
    name=splunk repository
    baseurl=file:///opt/splunkrepo/
    gpgcheck=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-splunkrepo
    enabled=1
    ```

11. Test the local repository by installing splunkforwarder:

    ```
    sudo yum update
    sudo yum install splunkforwarder
    ```

## Create a configuration RPM ##

Note: Refer to https://fedoraproject.org/wiki/How_to_create_an_RPM_package and https://www.redhat.com/promo/summit/2010/presentations/summit/opensource-for-it-leaders/thurs/pwaterma-2-rpm/RPM-ifying-System-Configurations.pdf for more information. Do not run as root; sudo to repouser.

1. Prepare the rpm tree:

    ```
    rpmdev-setuptree
    ```
2. Create a spec file with the following content in ~/splunkforwarder-baseconfig.spec:

    ```
    #--------------------------------------------------------------------------
    # This spec file is Copyright 2010, My Company, Inc.
    #--------------------------------------------------------------------------
    Summary: My Company general configuration RPM
    Name: splunkforwarder-baseconfig
    Version: 1
    Release: 3
    License: Copyright 2010, My Company, Inc.
    Group: MyCompany/Configs
    Packager: Packager Name <firstname.lastname@example.org>
    Requires: splunkforwarder
    BuildArch: noarch

    %description
    This RPM provides general services and security configuration for My Company.

    %triggerin -- splunkforwarder
    /opt/splunkforwarder/bin/splunk enable boot-start --accept-license --answer-yes
    service splunk stop
    if [ -d "/opt/splunkforwarder/etc/apps/org_all_deploymentclient/local" ]
    then
        echo "Directory /opt/splunkforwarder/etc/apps/org_all_deploymentclient/local exists."
    else
        mkdir -p /opt/splunkforwarder/etc/apps/org_all_deploymentclient/local
    fi
    echo "#Base deployment configuration" >/opt/splunkforwarder/etc/apps/org_all_deploymentclient/local/deploymentclient.conf
    echo "[deployment-client]" >>/opt/splunkforwarder/etc/apps/org_all_deploymentclient/local/deploymentclient.conf
    #echo clientName >>/opt/splunkforwarder/etc/apps/org_all_deploymentclient/local/deploymentclient.conf
    echo "[target-broker:deploymentServer]" >>/opt/splunkforwarder/etc/apps/org_all_deploymentclient/local/deploymentclient.conf
    echo "targetUri = ds.example.com:8089" >>/opt/splunkforwarder/etc/apps/org_all_deploymentclient/local/deploymentclient.conf
    service splunk start

    %triggerun -- splunkforwarder
    if [ $1 -eq 0 -a $2 -gt 0 ] ; then
        /opt/splunkforwarder/bin/splunk stop
        /opt/splunkforwarder/bin/splunk disable boot-start
        rm -Rf /opt/splunkforwarder/etc/apps/org_all_deploymentclient
    fi

    %files
    ```

3. Build the RPM:

    ```
    rpmbuild --sign -ba splunkforwarder-baseconfig.spec
    ```

4. Copy the RPM to the repository:

    ```
    cp ~/rpmbuild/RPMS/noarch/splunkforwarder-baseconfig-1-3.noarch.rpm /opt/splunkrepo
    ```

5. Update the repository DB:

    ```
    createrepo /opt/splunkrepo
    ```

6. Test the RPMs:

    ```
    yum update
    yum install splunkforwarder-baseconfig
    ```

## Configure a web server (Apache) for use as a repository server ##

1. Set permissions on the repository folder:

    ```
    chmod -R 755 /opt/splunkrepo
    ```

2. Create the web server configuration file /etc/httpd/conf.d/splunkrepo.conf with the following contents:

    ```
    Alias /splunkrepo/ "/opt/splunkrepo/"
    <Directory "/opt/splunkrepo">
        Options Indexes FollowSymLinks MultiViews
        AllowOverride All
        Order allow,deny
        Allow from all
    </Directory>
    ```

3. Reload (or restart) the web server:

    ```
    service httpd reload
    ```

4. Test:

    ```
    lynx http://localhost/splunkrepo/repodata/repomd.xml
    ```

5. Enable the new repository on the first test client:

    ```
    sudo yum-config-manager --add-repo http://localhost/splunkrepo
    sudo yum update
    sudo yum install splunkforwarder-baseconfig
    ```
I'm sharing something today that has been made available thanks to many white papers and presentations dealing with the identification of malicious code and activities in your Windows event data. Shout out to everyone from our "friends" at the NSA, to Splunk .conf presenters, and malwarearcheology.com, just to name a few.
The PDF attached is a portion of the next evolution of the Use Case Repository I maintain at Splunk. Along with the reference TAs and inputs, this will allow you to quickly and consistently collect very valuable data supporting security use cases at multiple levels of maturity. If it seems like too much, don't worry: Splunk Pro Services and partners are able to help you get this visibility; just contact your account team.
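As a flavor of what the reference inputs look like, here is a minimal sketch of an inputs.conf stanza collecting the Windows Security event log; treat it as illustrative rather than as the repository's exact configuration:
[WinEventLog://Security]
disabled = 0
renderXml = false
The reference TAs in the repository cover channel selection, filtering, and sizing in much more depth.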
Standard disclaimer: this is a blog post. I built the content from public, non-warrantied information, and this is still public, non-warrantied information; your situation might not match the advice given.
Having great and informative data will make for some hefty lookups. I've heard from a few customers that ran into this rather than planning for it, so let's talk about the levers we need to pull.
- Don't wait around: upgrade to Splunk Enterprise 6.5.2+. Now is the time.
- Don't wait any longer: upgrade to Splunk Enterprise Security 4.5.1. The dev team invested in improvements to the assets and identities lookups that also help by decreasing the size of the merged lookups.
- Update server.conf on the indexers and search head cluster peers:
[httpServer]
# 1.5 GB
max_content_length = 1610612736
- Update distsearch.conf to improve bundle replication on the SH/SHC:
[replicationSettings]
# 1.5 GB with encoding room; this will increase memory utilization while decreasing CPU utilization
maxMemoryBundleSize = 1700
# 1.5 GB to match server.conf on the other side
maxBundleSize = 1536
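To size these limits against reality, check how large your current knowledge bundles actually are; a quick sketch, assuming Splunk is installed in /opt/splunk:
ls -lh /opt/splunk/var/run/*.bundle
The largest bundle plus some growth headroom tells you whether these 1.5 GB limits are generous or already tight.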