Oh sh**t, we didn’t think you would check our work.

Do you have a workflow to check your work, or are you trusting the system because you think it works? One of the most frequent conversations I have goes something like this. Ryan: “The best way to accomplish this task is …… Some common alternatives you might think work are A, B, and C, but they often fail in silent ways, and this is how you know: by checking D, E, and F.” Frequently I am challenged on my experience with a reply to the effect of “We’ve been doing B for years and never had a problem.” I say, “Great, I’m always eager to learn. How do you validate that it works?” If you are a betting person, what do you think the odds of an answer are here? Pretty low. Computers and humans are both very reliable: one does exactly what it is told, the other does exactly what it knows it has to.

Enter Stanford: in order to be fair in vaccine distribution, they created a data-driven algorithm to “do fair,” or really to delegate the determination of fairness to someone else who didn’t know how to check their work. Please, when writing software that lives, careers, and economies depend on, test your code. Don’t be these guys.

https://www.theverge.com/2020/12/20/22191749/stanford-medicine-covid-19-vaccine-distribution-list-algorithm-medical-residents

Commitment to diversity in tech

I’m very pleased with the progress tech has made this year, and I say progress, not arrival, because change is hard for humans. As a segment of society, I think tech is willfully changing. Every now and then I have something to say on this topic. If it is not personal to me I honestly don’t say much, because so much is already said and virtue signaling is a bad look. I commented recently on laziness in ML leading to reinforced bias. Gatekeeping in tech has been an issue that has personally impacted me. My path to tech was a unique one: I wasn’t a barista, I was a bored C student in an underperforming rural Alabama high school who learned how to “tech” the business way. I liked solving problems, and people would pay me well to solve computer problems, so that’s what I learned. I started really small: networking Mac computers over AppleTalk using PhoneNET connectors to solve a problem. The poor rural kids I went to school with in Skipperville, Alabama needed more opportunity to learn to read than their parents could provide them (thank you, Pizza Hut). Also, thank you, one-stoplight little town, for giving me a chance to get started. My path to IT started because no one in “tech” was willing to solve the problem. I’ve built a worldwide reputation not on my formal education but on my customer focus. I don’t like the gatekeeping in tech, but I also highly value education and the well-educated, knowledgeable teams I work with. I often find that west coast tech community members especially are focused on tech for tech’s sake, and that’s great for R&D, but it rarely brings solutions to real problems, which is where “non-traditional” people like me come in. While I might not be the guy that invents a new way of storing machine data, I am the guy you call to build the largest application of that software in the world. We can’t do this without diversity.

Let’s talk about that phrase “non-traditional.” Think about the first 10 names in tech you can recall quickly, and go find their resumes online. I promise you most of them don’t have a formal tech education. Like me, they came to tech not because tech attracted them but because the problem attracted them. Yes, more people are in tech today because of STEM education, but I would argue I am “classically trained” :). Let’s work together to solve the world’s problems. I say for 2021, let’s turn off the Zoom camera and change how we evaluate resumes to focus more on passion and outcomes and less on certifications and degrees.

MaxMind Databases and Splunk Enterprise

I’ve finally been able to take a couple of days to update and refresh my MaxMind Add-on for Splunk Enterprise and Enterprise Cloud. The latest version of the add-on updates the GeoIP2 library, allowing additional fields from the licensed Anonymous IP database. It is also built and tested using the new Addonfactory CI/CD infrastructure at Splunk (see my .conf talk). This is a major version as it introduces a requirement for Python 3, and thus Splunk Enterprise 8.0 or later, because GeoIP2 is now Python 3 only. Older versions should still work for now if you cannot upgrade. Head over to Splunkbase to get it now: https://splunkbase.splunk.com/app/3022/
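
For context, here is a minimal sketch of the kind of lookup the underlying GeoIP2 Python library performs against the licensed Anonymous IP database; the database path and IP address below are placeholders, and the add-on handles all of this for you inside Splunk:

    import geoip2.database

    # Open the licensed MaxMind Anonymous IP database (path is an example).
    with geoip2.database.Reader("/opt/maxmind/GeoIP2-Anonymous-IP.mmdb") as reader:
        # Look up a single address; the add-on performs this per event.
        result = reader.anonymous_ip("203.0.113.5")
        print(result.is_anonymous)         # any anonymity network or service
        print(result.is_anonymous_vpn)     # commercial VPN exit
        print(result.is_tor_exit_node)     # Tor exit node
        print(result.is_hosting_provider)  # hosting / data-center range
        print(result.is_public_proxy)      # public proxy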

Your cloud vendor wants to send syslog cloud to cloud

I get asked about this from time to time: what’s wrong with sending syslog over the internet, it’s a standard, right?

IETF syslog, meaning RFC 5424 over TLS (RFC 5425), seems like a good idea until you think about the consequences. Just what might those consequences be?

How do you plan to authenticate that?

Certificates? Well, maybe, but this opens your SIEM up to a nasty low-cost denial-of-service problem. Client certificate auth is trivial to abuse as a DoS: any invalid cert forces the server through expensive validation. And if this were happening, how would you know? Neither syslog-ng nor rsyslog will log it in an obvious way.

A secret in SDATA? Now we allow any client to connect and send data; we must accept and parse the message just to find out whether it is allowed. Surely that can’t be abused.
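
To make the cost concrete, here is a rough sketch of what such a check has to do; the receive handler and the "token" SD-PARAM convention are both made up for illustration, not taken from any particular product. Note that the TLS handshake is finished and the sender’s bytes are already buffered on your box before the decision to reject can even be made:

    import ssl

    def handle(conn: ssl.SSLSocket, shared_secret: str) -> None:
        # The connection is already accepted; the payload is already ours to hold.
        data = conn.recv(65536)
        message = data.decode("utf-8", errors="replace")
        # Only after scanning the attacker-controlled message can we discover
        # that the sender was never authorized at all.
        if f'token="{shared_secret}"' not in message:
            conn.close()
            return
        print("accepted:", message)  # stand-in for forwarding to the SIEM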

IP restrictions? I have some beachfront property to sell you.

All of the above?

How will you scale that? Please see my prior posts on load balancing syslog.

Next time you hear the suggestion of “RFC 5424 syslog,” just laugh at the joke and ask what options are really being proposed.

When I say syslog, what I really mean is

Syslog is an ambiguous term, so I thought I would clarify what I am talking about.

syslog is the daemon where Linux/UNIX systems sent logs back in the day. In most cases this results in an entry in a file under /var/log that may or may not have any particular structure. This is normally not what I am talking about.

Syslog was not a standard in the beginning. RFC 3164 is not a standards document; it’s a memorialization of some common practices. Do you want a 1988 Honda Civic? If your vendor’s syslog looks like this, you should look at it like a used car:

<111> July 01 12:13:11 My old car's logs

Syslog is not just text over TCP/UDP. A syslog message must have a PRI, such as <111>, and it must have a structure something like this:

<34>1 2003-10-11T22:14:15.003Z mymachine myapplication 1234 ID47 [example@0 class="high"] BOMmyapplication is started
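
If it helps to see the pieces, here is a rough sketch that splits the example above into its named parts; it is not a real RFC 5424 parser (in particular, the structured-data split only works for this simple example):

    # The example message from above, taken apart field by field.
    sample = ('<34>1 2003-10-11T22:14:15.003Z mymachine myapplication '
              '1234 ID47 [example@0 class="high"] BOMmyapplication is started')

    # Header: PRI+VERSION, TIMESTAMP, HOSTNAME, APP-NAME, PROCID, MSGID, then the rest.
    pri_version, timestamp, hostname, app_name, procid, msgid, rest = sample.split(" ", 6)

    # Naive split of STRUCTURED-DATA from MSG; real parsers must honor escaping rules.
    structured_data, msg = rest.split("] ", 1)
    structured_data += "]"

    print(pri_version)      # <34>1  -> PRI 34 (facility/severity), protocol version 1
    print(timestamp)        # 2003-10-11T22:14:15.003Z
    print(hostname)         # mymachine
    print(app_name)         # myapplication
    print(procid, msgid)    # 1234 ID47
    print(structured_data)  # [example@0 class="high"]
    print(msg)              # BOMmyapplication is started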

Syslog is now a set of standards:

  • RFC 5424 is the transport-neutral message format. https://tools.ietf.org/html/rfc5424
  • RFC 5425 describes how to use TLS as the transport; best practice if network security matters, worst practice when performance matters. https://tools.ietf.org/html/rfc5425
  • RFC 5426 describes how to use UDP as the transport; best practice for performance. https://tools.ietf.org/html/rfc5426
  • RFC 6587 describes how to use TCP as the transport; worst practice for performance, best practice for large messages over unreliable networks. https://tools.ietf.org/html/rfc6587

A message should not be considered “standard syslog” if it is not in the RFC 5424 format using RFC 5425, 5426, or 6587 as the transport. Standards compliance matters; let’s start making vendors feel bad. They have had 12 years to get it right.

Devices that think you know their name

What exactly is that talker’s name? This is one of the most frustrating problems in syslog eventing and the most frustrating in analytics. For far too long the choices have been to use the device’s name OR use reverse DNS, but never both. Today SC4S 1.20.0 solves this problem by doing what you would do!

  1. If the device has a host name in the event, use that.
  2. Else, if our management/CMDB solution knows the right name, use that instead.
  3. Else, maybe someone updated DNS; try a reverse lookup instead.

Simple, logical, easy to understand, and available now in Splunk Connect for Syslog. No more of this:

[Screenshot: event with an IP address as the host, and plenty more like it]

Instead:

[Screenshot: the IP translated to a host name using a CMDB-sourced lookup]
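
Here is a rough sketch of that fallback order in plain Python; the CMDB dictionary and names are placeholders standing in for whatever your management system exports, and SC4S itself implements this logic in syslog-ng configuration rather than Python:

    import socket
    from typing import Optional

    # Hypothetical CMDB export: source IP -> the name your asset system says is correct.
    CMDB_NAMES = {"192.0.2.15": "fw-edge-01.example.com"}

    def resolve_host(event_hostname: Optional[str], source_ip: str) -> str:
        # 1. If the device put a usable host name in the event, trust the device.
        if event_hostname and event_hostname != source_ip:
            return event_hostname
        # 2. Else, if the management/CMDB solution knows the right name, use it.
        if source_ip in CMDB_NAMES:
            return CMDB_NAMES[source_ip]
        # 3. Else, maybe someone updated DNS; try a reverse lookup.
        try:
            return socket.gethostbyaddr(source_ip)[0]
        except OSError:
            return source_ip  # last resort: keep the IP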

Performant AND Reliable Syslog: UDP is best

The faces I’ve seen made at this statement say a lot. I hope you read past the statement for my reasons, and for when other requirements may prompt another choice.

Wait, you say, TCP uses ACKs so data won’t be lost. Yes, that’s true, but there are buts:

  • But when the TCP session is closed, events published while the system is creating a new session will be lost (closed window case).
  • But when the remote side is busy and cannot ACK fast enough, events are lost because the local buffer fills.
  • But when a single ACK is lost by the network and the client closes the connection (local and remote buffers lost).
  • But when the remote server restarts for any reason (local buffer lost).
  • But when the remote server restarts without closing the connection (local buffer plus the timeout period lost).
  • But when the client side restarts without closing the connection.

That’s a lot of buts, and it’s why TCP is not my first choice when my requirement is mostly-available syslog (there is no such thing as HA) with minimized data loss.

Wait, you say, when should I use TCP syslog? To be honest there is only one case: when the syslog event is larger than the maximum size of a UDP packet on your network. This is typically limited to web proxy, DLP, and IDS type sources, that is, messages that are very large but not very fast compared to, for example, firewalls. So we jump to TCP when the network can’t handle the length of our events.
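
As a back-of-the-envelope illustration, the decision amounts to the check below; 1472 bytes is an assumption based on a 1500-byte Ethernet MTU minus IPv4 and UDP headers, and your own network (VPN overlays, jumbo frames) may differ:

    # 1500 (Ethernet MTU) - 20 (IPv4 header) - 8 (UDP header) = 1472 bytes of payload
    # before fragmentation. Treat this as an assumption to verify, not a rule.
    MAX_UDP_PAYLOAD = 1472

    def pick_transport(message: bytes) -> str:
        # Firewalls and most network gear stay well under this; web proxy, DLP, and
        # IDS events are the usual offenders that exceed it.
        return "udp" if len(message) <= MAX_UDP_PAYLOAD else "tcp"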

There is a third option: TLS. A subset of devices can forward logs using TLS over TCP, and this provides some advantages with a proper implementation.

  • TLS can resume a session over a broken TCP connection, reducing buffer-loss conditions.
  • TLS will fill packets, making more efficient use of the wire.
  • TLS will compress in most cases

While I am here, I want to say a word about load balancers as a means of high availability. This is snake oil.

  • TCP over an NLB doubles the opportunity for a network error to cause data loss and almost always increases the size of the buffer lost; I have seen over 25% loss on multiple occasions.
  • TCP over an NLB can lead to imbalanced resource use due to long-lived sessions. The NLB is not designed to balance by connection throughput (KB/s); it is designed to balance connections, and in TCP all connections are not equal, leading to out-of-disk-space conditions.
  • UDP cannot be probed, so UDP over an NLB can lead to sending logs to long-dead servers.
  • Load balancers break message reassembly. Common examples of “1 of 3” type messages, such as Cisco ACS, Cisco ISE, and Symantec Mail Gateway, cannot be properly processed when sprayed across multiple servers.

Wait, you ask, how do I mitigate downtime for syslog?

  • Use VMware or Hyper-V with a cluster of hosts, which will reduce your outage to only host reboots, which in this day and time are rare.
  • Use a clustered IP solution (i.e., Keepalived) so you can drain the server to a partner before a restart; a minimal sketch follows.
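
For example, a minimal keepalived fragment for a two-node floating IP in front of a syslog pair might look like the sketch below; the interface name, router ID, priority, and address are all placeholders to adapt, and this is not an SC4S-provided configuration:

    vrrp_instance SYSLOG_VIP {
        state MASTER                 # set to BACKUP on the partner node
        interface eth0               # adjust to your NIC name
        virtual_router_id 51         # must match on both nodes
        priority 150                 # use a lower value on the partner node
        advert_int 1
        virtual_ipaddress {
            192.0.2.10/24            # the address your devices send syslog to
        }
    }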

A few other ideas you may have to bring “HA” to syslog that will be counterproductive:

  • DNS
    • Most known syslog sources will use only one IP, typically the first or a random entry from the list of A records, for a very long time, ignoring the TTL. Using DNS to change the target is unlikely to take effect quickly enough; in some cases it takes hours.
    • DNS global load balancers: similar to the above, clients often hold cached results far longer than the TTL. In addition, the actual device configuration often does not use the correct DNS servers for the GSLB to properly detect distance, so it will route incorrectly.
  • AnyCast
    • UDP anycast can work in the exceptional case where a single clustered pair of syslog servers cannot provide the capacity (greater than 10 TB per day). However, because of the probing issues described with NLBs above, my experience with anycast has been high data loss and project failure: over a dozen projects with well-known logos over the last 10 years, names you would know.
    • TLS/TCP anycast: this is an oxymoron, don’t try it.
  • Sending the message multiple times to multiple servers so it can be “de-duplicated” by “someone’s software”: deduplication requires globally unique keys, which don’t exist here, so this isn’t possible. More than once is worse than sometimes never, because if we are counting errors or attacks we see more than is real, resulting in false positives and a lack of operational trust in the data, making your project effectively useless. A missed event will more likely than not occur again and be captured in short order.

A syslog time zone is a terrible thing to get wrong

Splunk released 1.2.0 of Splunk Connect for Syslog today. This release focused on time zone management. We all wish time was standardized on UTC; many of us have managed to get that written into approved standards but did not live to see it implemented. SC4S 1.2.0 enables the syslog-ng feature “guess-timezone,” allowing dynamic resolution of the time zone of poorly behaving devices relative to UTC. As a fallback, or to deal with devices that batch or stream with high latency, the device TZ can be managed at the host/IP/subnet level. Ready to upgrade? If you are running the container version, just restart SC4S; this feature is auto-magic.

Want to know more about SC4S? Check out these blog posts.

Syslog server, you say

I’ve had quite a bit to say about syslog as a component of a streaming data architecture primarily feeding Splunk Enterprise (or Enterprise Cloud). In seven days I will be presenting the culmination of small developments that have taken shape into the brand new Splunk Connect for Syslog (SC4S).

You don’t have to wait; swing over to Splunkbase: https://splunkbase.splunk.com/app/4740/#/details

SC4S is designed to:

  • Do the heavy lifting of deploying a functioning, current build of the awesome syslog-ng OSE (3.24.1 as of this posting).
  • Support many popular syslog vendor products out of the box with zero configuration, or with as little configuration as a host glob or IP address.
  • Scale your Splunk vertically by distributing events very evenly across indexers, second by second.
  • Scale your syslog-ng servers by reducing constraints on CPU and disk.
  • Reduce your exposure to data loss by minimizing the amount of data at rest on the syslog-ng instance.
  • Promote great practices and collaboration. SC4S is a liberally licensed open-source solution, so we will be able to collaborate directly with end users on filters and usage to promote great big data deployments.

Personal thanks to many, but especially Mark Bonsack and Balazs Scheidler (the creator of syslog-ng).

Bias in ML

One day perhaps we can teach machines to avoid bias, but maybe, just maybe, we need to understand how to teach humans the same thing first.

https://tech.slashdot.org/story/19/08/16/1916202/the-algorithms-that-detect-hate-speech-online-are-biased-against-black-people

It shouldn’t be a news flash that biased people “train” bias into computers, just like we train bias into our children. We will one day realize we have no choice but hard, continuous work to eliminate bias.