What exactly is that talker's name? This is one of the most frustrating problems in syslog eventing, and the most frustrating in analytics. For far too long the choices have been to use the device's self-reported name OR to use reverse DNS, but never both. Today SC4S 1.20.0 solves this problem by doing what you would do:
If the device has a host name in the event, use that.
Else, if our management/CMDB solution knows the right name, use that instead (a sketch of this logic follows).
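A minimal Python sketch of that fallback logic, assuming a hypothetical CMDB lookup table keyed by source IP; SC4S implements this inside syslog-ng, not Python:

def resolve_host(event_host, source_ip, cmdb):
    # Prefer the name the device put in the event itself
    if event_host:
        return event_host
    # Otherwise fall back to the management/CMDB record for this source IP
    cmdb_name = cmdb.get(source_ip)
    if cmdb_name:
        return cmdb_name
    # Last resort: keep the raw source IP
    return source_ip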
The faces I've seen made at this statement say a lot. I hope you read past the statement for my reasons, and for when other requirements may prompt another choice.
Wait, you say, TCP uses ACKs so data won't be lost. Yes, that's true, but there are buts:
But when the TCP session is closed, events published while the system is creating a new session will be lost. (Closed Window Case)
But when the remote side is busy and cannot ACK fast enough, events are lost because the local buffer fills.
But when a single ACK is lost by the network and the client closes the connection. (local and remote buffer lost)
But when the remote server restarts for any reason. (local buffer lost)
But when the remote server restarts without closing the connection. (local buffer plus the timeout period lost)
But when the client side restarts without closing the connection.
That's a lot of buts, and it's why TCP is not my first choice when my requirement is mostly-available syslog (there is no such thing as HA syslog) with minimized data loss.
Wait, you say, when should I use TCP syslog? To be honest, there is only one case: when the syslog event is larger than the maximum size of a UDP packet on your network. That is typically limited to web proxy, DLP, and IDS type sources, that is, messages that are very large but not very fast compared to, for example, firewalls. So we jump to TCP when the network can't handle the length of our events.
There is a third option: TLS. A subset of devices can forward logs using TLS over TCP, which provides some advantages with a proper implementation (a minimal receiving configuration is sketched after this list):
TLS can resume a session over a broken TCP connection, reducing buffer-loss conditions.
TLS will fill packets for more efficient use of the wire.
TLS will compress in most cases.
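For example, a minimal syslog-ng TLS source might look like the following; the port and certificate paths are illustrative, and the options should be verified against the syslog-ng version you deploy:

source s_tls {
    network(
        transport("tls")
        port(6514)
        tls(
            key-file("/etc/syslog-ng/tls/server.key")
            cert-file("/etc/syslog-ng/tls/server.crt")
            peer-verify(optional-untrusted)
        )
    );
};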
While I am here, I want to say a word about load balancers as a means of high availability: this is snake oil.
TCP over an NLB doubles the opportunity for a network error to cause data loss, and almost always increases the size of the buffer lost. I have seen over 25% loss on multiple occasions.
TCP over an NLB can lead to imbalanced resource use due to long-lived sessions. The NLB is not designed to balance by per-connection KB/s; it is designed to balance connections, and in TCP all connections are not equal, leading to out-of-disk-space conditions.
UDP cannot be health-probed, so UDP over an NLB can lead to sending logs to long-dead servers.
Load balancers break message reassembly. Common examples of "1 of 3" type messages, such as those from Cisco ACS, Cisco ISE, and Symantec Mail Gateway, cannot be properly processed when sprayed across multiple servers.
Wait, you ask, how do I mitigate downtime for syslog?
Use VMware or Hyper-V with a cluster of hosts, which will reduce your outage to only host reboots, which in this day and time are rare.
Use a clustered IP solution (e.g., Keepalived) so you can drain the server to a partner before a restart (a minimal sketch follows this list).
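A minimal keepalived sketch of such a floating service IP; the interface, router ID, and address are illustrative and must be adapted to your environment:

vrrp_instance SYSLOG_VIP {
    state BACKUP                # start as BACKUP on both nodes; priority elects the master
    interface eth0
    virtual_router_id 51
    priority 100                # set the partner node lower, e.g. 90
    advert_int 1
    virtual_ipaddress {
        192.0.2.10/24           # the address devices send syslog to
    }
}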
A few other ideas you may have to bring "HA" to syslog that will be counterproductive:
DNS round robin: most known syslog sources will use only one IP, typically the first or a random one from a list of A records, for a very long period of time, ignoring the TTL. Using DNS to change the target is unlikely to take effect quickly enough, in some cases taking hours.
DNS global load balancers: similar to the above, clients often hold cached results for far longer than the TTL. In addition, the actual device configuration often does not use the correct DNS servers for the GLB to properly detect distance, and it will route incorrectly.
UDP anycast can work in the exceptional condition where a single clustered pair of syslog servers cannot provide the capacity (greater than 10 TB per day). However, because of the probing issues described with NLBs above, my experience with anycast has been high data loss and project failure, across over a dozen projects with well-known logos over the last 10 years, names you would know.
TLS/TCP anycast: this is an oxymoron, don't try it.
Sending the message multiple times to multiple servers so it can be "de-duplicated" by "someone's software": deduplication requires globally unique keys, which don't exist here, so this isn't possible. More than once is worse than sometimes never, because if we are counting errors or attacks we see more than is real, resulting in false positives and a lack of operational trust in the data, making your project effectively useless. A missed event will more likely than not occur again and be captured in short order.
Splunk released version 1.2.0 of Splunk Connect for Syslog today. This release focused on timezone management. We all wish time were standardized on UTC; many of us have managed to get that written into approved standards but did not live to see it implemented. SC4S 1.2.0 enables the syslog-ng feature "guess-timezone", allowing dynamic resolution of the time zone of poorly behaving devices relative to UTC. As a fallback, or to deal with devices that batch or stream with high latency, the device TZ can be managed at the host/IP/subnet level. Ready to upgrade? If you are running the container version, just restart SC4S; this feature is auto-magic.
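Under the hood this maps to a syslog-ng source flag; a minimal sketch of the raw syslog-ng equivalent, where the source name and port are illustrative rather than SC4S's actual internals:

source s_udp {
    network(
        transport("udp")
        port(514)
        flags(guess-timezone)   # infer the sender's UTC offset from message timestamps
    );
};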
Want to know more about SC4S? Check out these blog posts.
I've had quite a bit to say about syslog as a component of a streaming data architecture, primarily feeding Splunk Enterprise (or Enterprise Cloud). In seven days I will be presenting the culmination of small developments that have taken shape into the brand new Splunk Connect for Syslog (SC4S), which will:
Do the heavy lifting of deploying a functioning, current build of the awesome syslog-ng OSE (3.24.1 as of this posting).
Support many popular syslog vendor products OOB, with zero configuration or as little configuration as a host glob or IP address.
Scale your Splunk vertically by distributing events very evenly across indexers, second by second.
Scale your syslog-ng servers by reducing constraints on CPU and disk.
Reduce your exposure to data loss by minimizing the amount of data at rest on the syslog-ng instance.
Promote great practices and collaboration. SC4S is a liberally licensed open source solution; we will be able to collaborate directly with end users on filters and usage to promote great big data deployments.
Personal thanks to many, but especially Mark Bonsack and Balazs Scheidler (the creator of syslog-ng).
It shouldn't be a news flash that biased people "train" bias into computers, just like we train bias into our children. We will one day realize we have no choice but hard, continuous work to eliminate bias.
This is a theoretical attack abusing a compromised kubectl certificate pair and an exposed K8s API to deploy a phishing site transparently on your target's infrastructure. This is a difficult attack to pull off and requires existing compromised administrative access to the K8s cluster: a privileged insider, or a compromised certificate-based authentication credential.
Target: www.spl.guru, which is one of my test domains.
Desired outcome: detect an attempt to intercept the admin login for a WordPress site. We will utilize a fake email alert informing the administrator that a critical update must be applied.
We will deploy the site hidden behind the target's existing ingress controller. This allows us to utilize the customer's own domain and certificates, eliminating detection by domain name (typo squatting, etc.) and by certificate transparency monitoring.
Phase one: Recon
Using kubectl, identify namespaces and find the ingress controller used for the site you intend to compromise. For the purposes of my PoC, my target used a very obvious "wordpress" namespace.
kubectl -n wordpress get ing
NAME   HOSTS          ADDRESS                                                                PORTS   AGE
site   www.spl.guru   133a0685-wordpress-site-62d9-1332557661.us-east-1.elb.amazonaws.com   80      20h
I'm not going to go into the details of deploying gophish or setting up and sending the phishing emails. That's beyond the scope of this blog post; I'm here to help the blue team, so let's get on to detection.
The following manifest hides the gophish instance on a path under the main site URL. Of note, in this case /wplogin.cgi is the real site while /wplogin is where we are credential harvesting.
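A representative sketch of such a manifest; the service name and port are illustrative, and the exact form depends on the target's ingress controller:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: site
  namespace: wordpress
spec:
  rules:
  - host: www.spl.guru
    http:
      paths:
      - path: /wplogin          # harvesting path served by gophish
        pathType: Prefix
        backend:
          service:
            name: gophish       # attacker-deployed service, name illustrative
            port:
              number: 80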
Using the K8s events and metadata onboarded with Splunk Connect for Kubernetes, we have some solid places we can monitor for abuse. Side note: don't get hung up on "gophish"; this is a hammer-type tool, and your opponent may be much more subtle.
Modification to an ingress (in my case an AWS ALB). Ingress records for most sites will not change often; when they are changed, an associated approved deployment should also exist (a starting-point search is sketched after this list).
New image, new image repository, new registry: maintain a list of approved images and registries and alert when an image not on the pre-defined list is used. This may be noisy in dev clusters; for non-prod clusters, reporting may be better than alerting.
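For the ingress-modification case, a starting point in SPL; the index, sourcetype, and field names are assumptions that vary with your Splunk Connect for Kubernetes version and configuration:

index=k8s sourcetype="kube:objects:events" object.involvedObject.kind=Ingress
| table _time object.involvedObject.namespace object.involvedObject.name object.reason object.message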
Splunk has a great token-based auth solution for its S2S protocol; it was added several versions back. Inputs have both "just worked" and remained unchanged for so long that many administrators have not noticed the feature. It allows you to safely expose indexers or heavy forwarders so that UFs on the internet can forward data back in without a VPN. This is super critical for Splunk endpoints that aren't constantly connected to the corporate network via VPN.
When a Splunk input is exposed to the internet, there is a risk of a resource-exhaustion DoS: a simple type of attack where junk data, or "well formed but misleading" data, is fed to Splunk until all CPU, memory, and disk are consumed.
Once this feature is enabled, all UF/HF clients must supply a token to be allowed to connect. If you're adding this feature to a running Splunk deployment, be ready to push the outputs.conf and inputs.conf updates in close succession to prevent a break in data flow.
Update your inputs.conf as follows. Note that you can use multiple tokens, just like HEC, so you can limit the number of tokens that must be replaced if a stolen token is used in a DoS attack.
# Access control settings.
[splunktcptoken://<token name>]
* Use this stanza to specify forwarders from which to accept data.
* You must configure a token on the receiver, then configure the same
  token on forwarders.
* The receiver discards data from forwarders that do not have the
  token configured.
* This setting is enabled for all receiving ports.
* This setting is optional.
token = <string>
* The token should match the regex [A-Za-z0-9\-]+ with a minimum length of 12.
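A concrete example; the port and token value are illustrative (generate your own random value of at least 12 characters):

[splunktcp://9997]
disabled = 0

[splunktcptoken://shared_token_1]
token = f3A9-EXAMPLE-ONLY-7Qb2-TOKEN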
Update outputs.conf, using the token value of your choice from inputs.conf.
token = <string>
* The access token for receiving data.
* If you configured an access token for receiving data from a forwarder,
  Splunk software populates that token here.
* If you configured a receiver with an access token and that token is not
  specified here, the receiver rejects all data sent to it.
* This setting is optional.
* No default.
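And the matching forwarder side; the output group name, server address, and token are illustrative:

[tcpout:primary_indexers]
server = idx1.example.com:9997
token = f3A9-EXAMPLE-ONLY-7Qb2-TOKEN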
This is a rather long time coming: today version 4.0.0 of SecKit for Geolocation with MaxMind is available for Splunk via Splunkbase and Splunk Cloud. This version includes a built-in solution for keeping the database files up to date for both free and paid subscribers. Free subscribers are of course limited to the basic information, equivalent to Splunk's built-in iplocation command. Commercial subscribers can use all of the available fields from their licensed files. Jump over to Splunkbase to check it out: https://splunkbase.splunk.com/app/3022/
A few years ago, flying across the Atlantic and unable to sleep, I had an idea to integrate common syslog aggregation servers with Splunk using the then-new HTTP Event Collector rather than files and the tried and true Universal Forwarder. This little idea, implemented in Python, started solving real problems of throughput and latency while reducing the complexity of configuring syslog aggregation servers. I'm very pleased to say the Python script I created is now obsolete: the leading syslog server products, syslog-ng and rsyslog, have implemented maintained modules upstream. Even more exciting, Mark Bonsack has invested considerable time to further develop modular configurations for both, making getting data through syslog and into Splunk even easier!
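To give a flavor of the approach, a minimal syslog-ng http() destination posting to HEC; the URL and token are placeholders, and the maintained upstream modules provide a far more complete implementation:

destination d_hec {
    http(
        url("https://splunk.example.com:8088/services/collector/event")
        method("POST")
        headers("Authorization: Splunk 00000000-0000-0000-0000-000000000000")
        body("$(format-json --pair event=$MESSAGE --pair host=$HOST --pair sourcetype=syslog)")
    );
};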
While all Linux distributions include a syslog server, this should NOT be used as the production syslog aggregation solution. Linux distros often ship versions many point releases behind or, worse, selectively back-port patches based only on their own customers' reported issues. Before attempting to build a syslog aggregation solution for production, it is critical to source current upstream binaries or build your own.
I've had this crop up enough times that I think it's worth a short post. Most Splunk deployments use local and/or LDAP authentication. LDAP configuration is something of a black art, and often the minimal configuration that works is the first and last time it is considered. It is worth your time as an administrator to optimize your LDAP configuration, or better yet, move to the more secure and reliable SAML standard.
Things to consider
Stop using LDAP and use SAML.
LDAP authentication should never be enabled on indexers; if you have enabled it there, remove it. Indexers rarely require interactive authentication; where it is required, it applies only to admins, and then under very strict conditions.
Ensure the Group BIND and Group Filters are both in use and limit the groups to only those required for Splunk access management.
Ensure the User BIND and User Filters are appropriate, limiting the potential users to only those who may log in to Splunk.
Validate that the number of users returned by the LDAP query is under 1000, or increase the number of precached users via limits.conf to an appropriate number (example settings are sketched after this list).
Ensure DNSMASQ or an alternative caching DNS client is installed on Linux search heads and indexers.
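A sketch of the relevant settings; the stanza name, DNs, and filters are illustrative, so verify them against the authentication.conf and limits.conf specs for your Splunk version.

In authentication.conf:

[corp_ldap]
groupBaseDN = OU=SplunkGroups,DC=example,DC=com
groupBaseFilter = (objectClass=group)
userBaseDN = OU=Users,DC=example,DC=com
userBaseFilter = (&(objectClass=user)(memberOf=CN=splunk-users,OU=SplunkGroups,DC=example,DC=com))

In limits.conf:

[ldap]
max_users_to_precache = 5000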
Stages of LDAP Auth
Interactive user login (ad-hoc search) or scheduled report execution triggers authentication.
Check the user cache; if the user is not cached or the cache has expired, continue.
DNS lookup to resolve the LDAP host to an IP. (This is why a DNS cache on Linux is important.)
TCP connection to the LDAP server.
Post the query for the specific user to LDAP.
Wait for the response.
Process referrals, if applicable, by repeating the above sequence.
The time taken for each DNS query and LDAP query is added to the time it takes to log in to the Splunk UI, execute an ad-hoc search, OR run a scheduled report. It's important to ensure the DNS and LDAP infrastructure is highly available and able to service potentially thousands of requests per second. Proper use of caching ensures resources on the Splunk server, including TCP client sessions, are not exhausted, which would cause users to wait in line for their turn at authentication or, in the worst case, hit timeouts and authorization failures.