When you have 100 problems, more logs are not the answer

Often SIEM projects begin where log aggregation projects end: so many logs, cut into organized stacks of wood, ready to burn for value. I can be quoted on this: “All logs can be presumed to have security value.” One project to build the world's largest bonfire, however, is seldom the correct answer. What value, you may ask? Value will be gained in one or more of a few key categories.


Making Asset data useful with Splunk Enterprise Security CSC 1 Part 1


Update broken link 2017-10-04

Friend, we need to talk; there is something important that you have been overlooking for a long time. Two years ago, when you implemented your first SIEM, you gave your consultant an Excel file listing all of the servers on the corporate network. You promised you would spend time on it after the consultant left, but then you got the new FireEye. You didn't forget, but then you got a new next-gen firewall, and after that there was the new red team initiative.

It is time to make a difference in the security posture of your organization. It is time to take a bite out of CSC #1a. That's not a typo: we need to work on #1a; #2 can wait, and so can #1b. It is time to work SANS Critical Security Control #1. I know the CMDB is out of date and doesn't reflect today's architecture. We can do a lot with a small amount of work; today I will share how to lay a foundation to address CSC 1: Inventory of Authorized (a) and Unauthorized (b) Devices.

Objectives

Objective 1: Identify the location of each asset using latitude, longitude, city, state, and ZIP
Objective 2: Identify the compliance zone for each network segment
Objective 3: Identify categories that can assist the analyst in review of events related to the network containing the source or destination
Objective 4: Identify the minimum priority of devices in a given network segment.

Code is provided via Security Kit. Install the app “SecKit_SA_idm_common” on your ES search head.

Don’t forget to update the app imports to include “SecKit_SA_.*”

Walkthrough

  1. Update seckit_idm_pre_cidr_location.csv so that for each subnet in CIDR notation you define the location (sample rows are sketched after this walkthrough). On a very large campus it may be desirable to place a point on a specific building; in most cases, however, a single lat/long pair for all subnets on a campus is adequate. Include all private and public address space owned or managed by your organization; do not include external spaces such as hosting providers and cloud services.
  2. Update seckit_idm_pre_cidr_category.csv. Note that a subnet here may be larger or smaller than the ones used for locations; the most precise definition will be utilized by Splunk Identity Management within Enterprise Security. This may contain cloud address space if the IP space is not continually re-purposed.
    1. Populate pci_cidr_domain; we will overload this field for non-PCI environments.
      1. PCI usage: “wireless”, “trust|cardholder”, “trust|dmz”, or empty (empty or default represents untrust).
      2. Non-PCI usage: substitute another compliance domain in place of cardholder, such as pii, sox, hipaa, or cip.
    2. Populate cidr_priority:
      1. low – the most often used value; should represent the majority of your devices
      2. medium – common servers
      3. high – devices of significant importance
      4. critical – devices requiring immediate response, such as:
        1. A server whose demise would cause you to work on Christmas
        2. A server whose demise could cause the closure of the company even if you work on Christmas
    3. Populate cidr_category. Values provided here apply to all devices in this network. I will list some very common categories I apply; note each category must be pipe (“|”) separated and may not contain a space:
      1. net_internal – internal IP space
      2. net_external – external IP space
      3. netid_ddd_ddd_ddd_ddd_bits – applied to each allocated subnet (the smallest assigned unit)
      4. zone_string – where string is one of dmz, server, endpoint, storage, management, wan, vip, nat
      5. facility_string – where string is the internal facility identification code
      6. facility_type_string – where string is a common identifier such as datacenter, store, office, warehouse, port, mine, airport, moonbase, cloud, dr, ship
      7. net_assignment_string – where string is one of static, dyndhcp, dynvirt
    4. Run the saved search “seckit_idm_common_assets_networks_lookup_gen” and review the results in seckit_idm_common_assets_networks.csv. You may run this report on demand as the lookup files above change, or on a schedule of your choice.
    5. Enable the asset file in Enterprise Security by navigating to Configuration –> Enrichment –> Assets and Identities, then clicking Enable on “seckit_idm_common_assets_networks”.
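To make the file layouts concrete, here is a minimal sketch of rows for both files. The header names and values below are illustrative assumptions; check the CSVs shipped with SecKit_SA_idm_common for the authoritative column names.

seckit_idm_pre_cidr_location.csv (hypothetical rows):

cidr,lat,long,city,state,zip
10.1.0.0/16,33.7490,-84.3880,Atlanta,GA,30301
192.0.2.0/24,41.8781,-87.6298,Chicago,IL,60601

seckit_idm_pre_cidr_category.csv (hypothetical rows):

cidr,pci_cidr_domain,cidr_priority,cidr_category
10.1.10.0/24,trust|cardholder,high,net_internal|zone_server|facility_atl01|facility_type_datacenter|net_assignment_static
10.1.20.0/24,,low,net_internal|zone_endpoint|facility_atl01|facility_type_office|net_assignment_dyndhcp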

Bonus Objective

Enhance your existing server and network device asset list by integrating the following lookups and merging the OUTPUT fields with the device-specific asset data (a combined sketch follows the list).

  1. | lookup seckit_idm_pre_cidr_category_by_cidr_lookup cidr as ip OUTPUT cidr_pci_domain as pci_domain cidr_category as category
  2. | lookup idm_shared_cidr_location_lookup cidr as ip OUTPUT lat long city country
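As a rough sketch of the merge, assuming your existing device assets live in a lookup named my_device_assets.csv (a hypothetical name; substitute your own), the combined enrichment could look something like this:

| inputlookup my_device_assets.csv
| lookup seckit_idm_pre_cidr_category_by_cidr_lookup cidr as ip OUTPUT cidr_pci_domain as pci_domain cidr_category as category
| lookup idm_shared_cidr_location_lookup cidr as ip OUTPUT lat long city country
| outputlookup my_device_assets_enriched.csv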


Share that search! Building a content pack for Splunk Enterprise Security 4.0+

Splunk has initial support for exporting “content”, which can be dashboards and correlation searches created by the user to share with another team. What if you need to be a little more complex, for example including a lookup-generating search? This gets a little more complicated, but it is very doable by the average admin. Our mission here is to implement UC0029. What is UC0029? Glad you asked: each new malware signature detected should be reviewed by a security analyst to determine if proactive steps can be taken to prevent infection. We will create this as a notable event so that we can provide evidence to audit that the process exists and was followed.

Source code will be provided, so I will not detail step by step how the objects are created and defined in this post.

UC0029 Endpoint new malware detected by signature


My “brand” is SecKit, so you will see this identifier in content I have created alone or with my team here at Splunk. As per our best practice, adopt your own brand and use it appropriately for your content. There is no technical reason to replace the “brand” on third-party content you elect to utilize.

Note: ensure all knowledge objects are exported (shared at the app level) and owned by admin as you go.

      • Create the app DA-ESS-SecKit-EndpointProtection
        • This will contain ES-specific content such as menus, dashboards, and correlation searches
      • Create the working app SA-SecKit-EndpointProtection
        • This will contain props, transforms, lookups, and scheduled searches created outside of ES
      • Create the lookup seckit_endpoint_malware_tracker. This lookup will contain each signature as it is detected in the environment, plus some handy information such as the endpoint where it was first detected, the user involved, and the most recent detection.
      • Create empty lookup CSV files
        • seckit_endpoint_malware_tracker.csv (note you will not ship this file in your content pack)
        • seckit_endpoint_malware_tracker.csv.default

Build and test the saved search SecKit Malware Tracker – Lookup Gen. This search will use tstats to find the first and last instance of each signature in a time window and update the lookup if an earlier or later instance is found.

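A minimal sketch of what such a generator can look like, assuming the CIM Malware data model is accelerated and the seckit_endpoint_malware_tracker lookup definition exists; the field names here are illustrative, and the shipped source code is authoritative:

| tstats summariesonly=true earliest(_time) as first_seen latest(_time) as last_seen values(Malware_Attacks.dest) as dest values(Malware_Attacks.user) as user from datamodel=Malware.Malware_Attacks by Malware_Attacks.signature
| rename Malware_Attacks.signature as signature
| inputlookup append=true seckit_endpoint_malware_tracker
| stats min(first_seen) as first_seen max(last_seen) as last_seen values(dest) as dest values(user) as user by signature
| outputlookup seckit_endpoint_malware_tracker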

      Build and test the correlation search UC0029-S01-V001 New malware signature detected. This search will find “new” signatures from the lookup we have created and create a notable event (a minimal sketch of this search follows below).

      “Make it default”: In both apps, move content from local/ to default/. This will allow your users to customize the content without replacing the existing searches.

      “Turn it off by default”: It is best practice to ensure any load-generating searches are disabled by default.
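As a sketch, the “new signature” test can be as simple as reading the tracker and keeping only signatures first seen inside the detection window (the 24-hour window here is an assumption; tune it to your schedule):

| inputlookup seckit_endpoint_malware_tracker
| where first_seen >= relative_time(now(), "-24h")
| table signature dest user first_seen last_seen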

        Add disabled = 1 to each savedsearches.conf stanza whose name does not end in “- Rule”, and add disabled = 1 to each correlationsearches.conf stanza.
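For example, in SA-SecKit-EndpointProtection/default/savedsearches.conf, using the lookup gen search created earlier:

[SecKit Malware Tracker - Lookup Gen]
disabled = 1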

Create a .spl (tar.gz) containing both apps created. Write a blog post explaining what you did and how the searches work, and share the code! Gain fame and respect, maybe a fez or a cape.
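An .spl file is just a gzipped tarball with a different extension; from $SPLUNK_HOME/etc/apps something like this will do (the archive name is your choice):

tar -czf seckit_endpointprotection.spl DA-ESS-SecKit-EndpointProtection SA-SecKit-EndpointProtection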

The source code

https://bitbucket.org/rfaircloth-splunk/securitykit/src/1ea60c46b685622116e28e8f1660a6c63e7d9e96/base/ess/?at=master

Bonus: Delegate administration of content app

  1. Using your favorite editor, edit app/metadata/local.meta
  2. Update the following permissions, adding the “ess_admin” role:

## access = read : [ * ], write : [ admin,role2,role3 ]
[savedsearches]
access = read : [ * ], write : [ admin,ess_admin ]

[correlationsearches]
access = read : [ * ], write : [ admin,ess_admin ]

Advancing security through the use of security assessments

Long ago, in the distant past that is the late 1970s, individuals were alone and unconnected. Visionaries of the future began to connect the individuals into communities. These communities were open and without borders; individuals could enter and use all dwellings with ease. The community thrived, with each individual adding unique value.


As the community grew, the individuals began to notice unwelcome occurrences. Dwellings changed without the approval of their owners; items moved from dwelling to dwelling without kind notes left. Most disturbing of all, some smaller individuals would simply disappear. Each community started to address the concerns of its individuals on its own. Some communities fared better than others; elders would meet together and discuss the successes in their communities (just the successes). In order to determine which elder's community had fared the best, consultants were hired to assess the communities' security. The following levels are commonly used to rank community security.

Level 0 Awareness of roaming beasts in the village, as identified by missing young children and food supplies and occasional sightings of red-eyed monsters in the night.

Level 1 Young boys with clubs seek to prove the existence of such beasts despite denial by the elders.

Level 2 A small dog has fallen dead of age in the town square; signs are placed elsewhere, presuming the animals will read and obey the signs.

Level 3 As children continue to disappear in the night, demands that more must be done continue. Young men are given small stones and placed at the community gates. Additional signs are added to ensure beasts will only use the main gate.

Level 4 Losses continue; recent reports of missing valuables such as silver and gold alarm the elders. Each community member is interviewed and background checks are completed. Community leaders, elders, and guards are excluded from the process.

Level 5 Media reports of losses become public. The embarrassed elders demand more action from the guards. New guards are posted around certain well-lit intersections. Guards dance every 30 minutes between 9 AM and 10 AM around the intersection, ensuring the requirement of more activity is satisfied.

Level 6 Additional losses occur, and new elders are brought in by the community to solve the problem. Immediately all guards are replaced with new guards from neighboring communities that suffer more public and higher losses. The new elders carry forth a plan to double their efforts. The following plan is put in place:
A single new guard is set to walk along the perimeter of the community during business hours, Monday through Thursday.
The number of intersections guarded is doubled. The dance is performed every 15 minutes, and the intersection guards are equipped with monoculars.

Outside actors are hired to impersonate monsters of the night by entering the community at night and taking small tokens such as napkins. The actors are immediately fired for not playing fairly, for reasons not disclosed to the elders.

Level 7 The new guard leadership brings additional guards from a neighboring community to patrol the perimeter outside of business hours. The outsourced guards are instructed to wake a day guard should anything severe or important be observed.

Outside actors are again hired and directed to attempt to take a small flyer from a sign post at a single intersection. After repeated success, all guards are placed at the same intersection and a successful test is reported to the elders.

Level 8 The senior elders' European beach vacation photos are placed around the community near the fading signs installed when the community reached Level 1. Senior guards are replaced. The new senior guards offer to hire the “best” of the outsourced guards for the new perimeter security program. The terms of the offer were not disclosed; 2% of the staff take the offer, and the outsource firm does not counter to retain the guards. The new firm observes that the photo liberators have opposable thumbs. Security processes for small animals are reduced, increasing the rate of loss of small animals and children. Elders are not allowed contact with life forms with opposable thumbs. The prohibition is rescinded after 1 hour.

Level 9 The senior guards request more outside assistance; new consultants recommend a new monitoring system built of mirrors, allowing the guards to view the intersections from a central location on a single glass wall. Perimeter guards and intersection guards are immediately discontinued. Days later all small farm animals disappear without notice. On the one-year anniversary, a senior investigator comes to the elders with a fantastic story of finding a single chicken, which must have been taken from this community, at a black market in a faraway land. The senior guards are initially certain this must be an isolated incident; however, a manual inspection of the community finds all small animals are indeed missing.

Level 10 Senior guards are once again replaced. The single-wall-of-glass vendor is brought in to explain why their solution has failed. The vendor quickly finds the system was implemented in the very same way as the neighboring community's system. The vendor points out that the shape of their neighbor's community differs greatly: the mirrors as installed have excellent visibility of the latrine and the community dump but very limited visibility of the perimeter. The vendor recommends a larger glass system to provide visibility of the perimeter in addition to the current solution. Construction begins on a larger hut with bigger glass walls.

Level 11 Following delays in the construction of the new hut, additional senior guards are engaged from far away with experience in guarding large animals. Additional guards are hired with differing skills. Each guard begins to adjust the mirrors to their personal liking, often complaining they spend too much time in the hut. Senior guards begin to require each guard to roam the community during the day looking for signs of wild beasts.

Finding signal in the noise of DNS data using Splunk

DNS is a fundamental component of our computing infrastructure. Before we can identify bad actions easily, we should remove what we can easily identify to be good. For all of our queries we will rely on Common Information Model fields and extractions. For most customers I will assist in deploying the Splunk App for Stream to collect query information from their DNS servers in a reliable way, regardless of the logging capabilities of their chosen server product.

Note: Be sure to install Cedric's URLToolbox add-on; we will make use of its power here.

Let's start by looking at the data everyone is spending the most time talking about: queries for A (IPv4) and AAAA (IPv6) records. Let's search no more than the last 60 minutes while we are working, to be kind to our indexers. For real analysis you will use bigger windows.

tag=dns tag=resolution index=* NOT source="stream:Splunk_*" (query_type=A OR query_type=AAAA)

My sample environment is small, very small: 5 users, 10 Windows servers. In the last 24 hours this query gave me 24,000+ results, way more than I can examine; let's start to cut that down. We also need to remember what we will probably be learning from our data: which domains require investigation for suspicion of involvement in malicious activity.

Reduction #1: Let's remove all domains owned by our organization for email or web hosting.

  • Update the following files to include the domains used for email or web hosting.
    1. Splunk_SA_CIM/lookups/cim_corporate_email_domains.csv
    2. Splunk_SA_CIM/lookups/cim_corporate_web_domains.csv
  • Update our search to extract the domain and TLD for later use. This is more complicated than it looks, so we will make up a URI and let URLToolbox do the work for us.
  • The new base search will look like this:
    tag=dns tag=resolution NOT source="stream:Splunk_*" index=* (query_type=A OR query_type=AAAA)
    | eval uri="dnsquery://"+query
    | `ut_parse(uri)`
    | fields - ut_fragment ut_netloc ut_params ut_path ut_port ut_query ut_scheme
  • Now we can use our email and web domain lookups to reduce the data set we are working with. This took out about 13% of my results. Notice I use fields - to get rid of fields I don't need moved from my indexers back to my search head.
  • tag=dns tag=resolution NOT source="stream:Splunk_*" index=* (query_type=A OR query_type=AAAA)
    | eval uri="dnsquery://"+query
    | `ut_parse(uri)`
    | fields - ut_fragment ut_netloc ut_params ut_path ut_port ut_query ut_scheme
    | lookup cim_corporate_email_domain_lookup domain as ut_domain OUTPUT domain as cim_email_domain
    | lookup cim_corporate_web_domain_lookup domain as ut_domain OUTPUT domain as cim_web_domain
    | where isnull(cim_email_domain) AND isnull(cim_web_domain)
    | fields - cim_email_domain cim_web_domain
  • The next easy win is to remove all queries for one of our assets, either by DNS name or where the resulting IP is one of our assets.
  • tag=dns tag=resolution NOT source="stream:Splunk_*" index=* (query_type=A OR query_type=AAAA)
    | eval uri="dnsquery://"+query
    | `ut_parse(uri)`
    | fields - ut_fragment ut_netloc ut_params ut_path ut_port ut_query ut_scheme
    | lookup cim_corporate_email_domain_lookup domain as ut_domain OUTPUT domain as cim_email_domain
    | lookup cim_corporate_web_domain_lookup domain as ut_domain OUTPUT domain as cim_web_domain
    | where isnull(cim_email_domain) AND isnull(cim_web_domain)
    | fields - cim_email_domain cim_web_domain
    | lookup asset_lookup_by_str dns as query OUTPUTNEW asset_id as query_asset_id
    | lookup asset_lookup_by_cidr ip as host_addr OUTPUTNEW asset_id as host_addr_asset_id
    | where isnull(query_asset_id) AND isnull(host_addr_asset_id)
    | fields - query_asset_id host_addr_asset_id
  • Next up is to remove all queries for Alexa Top 1M domains. Why? Well, in the Top 1M we will probably not find any new domains, or any domains being used for C2 over a DNS channel. That's not to say an XML file on Dropbox or Feedburner can't be used, but we won't find that threat here. This further reduced my data set by 92%.
  • tag=dns tag=resolution NOT source="stream:Splunk_*" index=* (query_type=A OR query_type=AAAA)
    | eval uri="dnsquery://"+query
    | `ut_parse(uri)`
    | fields - ut_fragment ut_netloc ut_params ut_path ut_port ut_query ut_scheme
    | lookup cim_corporate_email_domain_lookup domain as ut_domain OUTPUT domain as cim_email_domain
    | lookup cim_corporate_web_domain_lookup domain as ut_domain OUTPUT domain as cim_web_domain
    | where isnull(cim_email_domain) AND isnull(cim_web_domain)
    | fields - cim_email_domain cim_web_domain
    | lookup asset_lookup_by_str dns as query OUTPUTNEW asset_id as query_asset_id
    | lookup asset_lookup_by_cidr ip as host_addr OUTPUTNEW asset_id as host_addr_asset_id
    | where isnull(query_asset_id) AND isnull(host_addr_asset_id)
    | fields - query_asset_id host_addr_asset_id
    | lookup alexa_lookup_by_str domain as ut_domain OUTPUTNEW rank as alexa_rank
    | where isnull(alexa_rank)
  • Down from 24K to under 1,700, but that's still a lot. At this point I noticed a couple of things: I have queries for .local domains I can't explain but know are not malicious, bare host names (no period), and a couple of devices servicing DNS for guest wifi. Identify those points and update the search to remove them. This leaves me with 216 domains to investigate. But we can tune this even further; let's keep going.
  • CDN networks can host malicious content; however, DNS analysis is again not the way to find such threats. This takes me down to 173 domains.
    • Create a new lookup Splunk_SA_CIM/lookups/custom_cim_cdn_domains.csv (a few starter rows are sketched after the transforms.conf stanza below); you may find new domains and need to update this list over time.
    • Upload the file as custom_cim_cdn_domains.csv
    • Add a new lookup via Splunk_SA_CIM/local/transforms.conf:

      [custom_cim_cdn_domain_lookup]
      filename    = custom_cim_cdn_domains.csv
      match_type  = WILDCARD(domain)
      max_matches = 1
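A few starter rows for the CSV, as a sketch; the wildcard patterns below are common CDN domains I would expect to see, but verify against your own traffic before excluding anything:

      domain,is_cdn
      *.akamaiedge.net,true
      *.cloudfront.net,true
      *.edgesuite.net,true
      *.fastly.net,true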
    • Update the search to exclude known CDN domains:
    • tag=dns tag=resolution NOT source="stream:Splunk_*" index=* (query_type=A OR query_type=AAAA)
      query="*.*" NOT query="*.local"
      | eval uri="dnsquery://"+query
      | `ut_parse(uri)`
      | fields - ut_fragment ut_netloc ut_params ut_path ut_port ut_query ut_scheme
      | lookup cim_corporate_email_domain_lookup domain as ut_domain OUTPUT domain as cim_email_domain
      | lookup cim_corporate_web_domain_lookup domain as ut_domain OUTPUT domain as cim_web_domain
      | where isnull(cim_email_domain) AND isnull(cim_web_domain)
      | fields - cim_email_domain cim_web_domain
      | lookup asset_lookup_by_str dns as query OUTPUTNEW asset_id as query_asset_id
      | lookup asset_lookup_by_cidr ip as host_addr OUTPUTNEW asset_id as host_addr_asset_id
      | where isnull(query_asset_id) AND isnull(host_addr_asset_id)
      | fields - query_asset_id host_addr_asset_id
      | lookup alexa_lookup_by_str domain as ut_domain OUTPUTNEW rank as alexa_rank
      | where isnull(alexa_rank)
      | lookup custom_cim_cdn_domain_lookup domain as query OUTPUTNEW is_cdn
      | where isnull(is_cdn)


  • Optional step: if you have DomainTools integration enabled (whois), the following lines added to your search will show when the domain was first seen by you and when it was registered.
  • | rename ut_domain as domain
    | `get_whois`
    | eval "Age (days)"=ceil((now()-newly_seen)/86400)

  • Many people have written on what to do with this data. Now, go hunting!


Get started with Splunk App for Stream 6.4 for DNS Analysis

Passive DNS analysis is all the rage right now; the detection opportunities it presents have been well discussed for some time. If your organization is like most, now is the time you are being asked how you can implement these detection strategies. Leveraging your existing Splunk investment, you can get started very quickly, with less change to your organization than one might think. Here is what we will use; older versions will work fine, however the screenshots will be a bit off:

  • Splunk Enterprise 6.3.1
  • Splunk App for Stream 6.4

We will assume Splunk Enterprise 6.3.1 has already been installed.

Decide where to install your Stream app. Typically this will be the Enterprise Security search head. However, if your ES search head is part of a search head cluster, you will need to use an ad-hoc search head, a dedicated search head, or a deployment server. Note that current versions of Stream fully support installation on a search head cluster.

Note: If using the deployment server (DS) you must configure the server to search the indexer or index cluster containing your stream data.

  1. Install Splunk App for Stream using the standard procedures located here.
  2. If you installed on a search head, copy the deployment TA to your deployment server: /opt/splunk/etc/deployment-apps/Splunk_TA_stream
  3. On your deployment server create a new folder to contain configuration for your stream dns server group.
    • mkdir -p Splunk_TA_stream_infra_dns/local
  4. Copy the inputs.conf from the default TA to the new TA for group management
    • cp Splunk_TA_stream/local/inputs.conf Splunk_TA_stream_infra_dns/local/
  5. Update inputs.conf to include your forwarder group id:
    • vi Splunk_TA_stream_infra_dns/local/inputs.conf
    • Alter "stream_forwarder_id =" to "stream_forwarder_id = infra_dns"
  6. Create a new server class “infra_stream_dns”, include both of the following apps, and deploy to all DNS servers (Windows DNS or BIND); a sketch of the related configuration follows this list:
    • Splunk_TA_stream
    • Splunk_TA_stream_infra_dns
  7. Reload your deployment server
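Here is a sketch of the two deployment server pieces involved; the whitelist pattern and host naming are assumptions for illustration, so adjust them to your environment:

# Splunk_TA_stream_infra_dns/local/inputs.conf
[streamfwd://streamfwd]
stream_forwarder_id = infra_dns

# serverclass.conf on the deployment server
[serverClass:infra_stream_dns]
whitelist.0 = dns*

[serverClass:infra_stream_dns:app:Splunk_TA_stream]
restartSplunkd = true

[serverClass:infra_stream_dns:app:Splunk_TA_stream_infra_dns]
restartSplunkd = true

The reload itself is just: splunk reload deploy-server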

Excellent! At this point the Splunk Stream app will be deployed to all of your DNS servers and sit idle. The next few steps will prepare the environment to start collection.

  • Create a new index. I typically create stream_dns and set retention to 30 days; a minimal sketch follows.
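A minimal indexes.conf sketch for the indexers, assuming the default volume layout (2,592,000 seconds is 30 days):

[stream_dns]
homePath   = $SPLUNK_DB/stream_dns/db
coldPath   = $SPLUNK_DB/stream_dns/colddb
thawedPath = $SPLUNK_DB/stream_dns/thaweddb
frozenTimePeriodInSecs = 2592000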

Configure your deployment group

  1. Log in to the search head with the Splunk App for Stream
  2. Navigate to Splunk App for Stream
  3. If this is your first time, you may find you need to complete the welcome wizard.
  4. Click on Configure, then “Distributed Forwarder Management”.
  5. Click Create New Group, complete it as follows, then click Next:
    1. Name: Infra_DNS
    2. Description: Applied to all DNS servers
    3. Include Ephemeral Streams? No
  6. Enter “infra_dns”; this will ensure all clients deployed above pick up this configuration from the Stream app.
  7. Search for “Splunk_DNS” and select each match, then click Finish.
  8. Click on Configuration, then “Configure Streams”.
  1. Click on New Stream
  2. Set up basic info as follows, then click Next:
    1. Protocol: DNS
    2. Name: “Infra_DNS”
    3. Description: “Capture DNS on internal DNS servers”
  3. We will not use Aggregation, so leave this as “No” and click Next.
  4. The default fields will meet our needs so go ahead and click Next
  5. Optional step: Create filters. In most cases requests from the DNS server to the outside are not interesting, as they are generated from client requests that cannot be answered from the cache. Creating filters will reduce the total volume of data by approximately 50%.
    1. Click Create Filter
    2. Select src_ip as the field
    3. Select “Not Regular Expression” as the type
    4. Provide a regex capture that will match all DNS server IPs; for example, “(172\.16\.0\.(19|20|21))” will match in my lab network.
    5. Click next
    6. Select only the Infra_DNS group and click Create Stream

At this point Stream will deploy and begin collection; however, index selection is not permitted in this workflow, so we need to go back and set it up now.

  1. Find Infra_DNS and click Edit
  2. Select the index appropriate for your environment
  3. Click save

Ready to check your work? Run this search, replacing index=* with your index:

index=* sourcetype=stream:dns | stats count by query | sort - count
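If you created the optional filter, you can also confirm that no captured queries originate from the DNS servers themselves. A quick check, reusing the example lab IPs from the filter step:

index=* sourcetype=stream:dns (src_ip=172.16.0.19 OR src_ip=172.16.0.20 OR src_ip=172.16.0.21) | stats count

A count of zero means the filter is doing its job.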


Getting all the logs – Avoiding the WEC

I get asked about this one often, and I happen to have a bit of experience with it, which is very rare; there is scant documentation on the technology from Microsoft or anyone else. I do know of some success being had with very specific low-volume use cases, but that's not what I do. I'm a specialist of sorts: I walk off a Delta plane, drop my bag at a Marriott, then walk in to change someone's world with data. Actual facts about their environment, from their environment, and I need and use data my customers don't know they have. Which brings me to Windows Event Collection (WEC).

Customers ask me about it; it seems so easy. Let's talk about the parts:

  • Group Policy, used to make changes to all systems in an environment
  • Remote PowerShell
  • COM/DCOM/COM+ and all of the RPC that goes with it
  • Kerberos authentication

How does it work?

  1. Group Policy instructs the computer to connect to a collector and gather a policy.
  2. Reading the policy causes a COM+ server to read the event log (yes, this is code you have not been running; it can and will impact your endpoints).
  3. A local filter determines what to do with each event (XML parsing with XPath and XSLT).
  4. An RPC call is made to the collector using the computer account.
  5. Denial (auth required).
  6. Authentication (event log write on the DC and on the collector).
  7. Serial write, with sync and block, to a round-robin database on the server. So if 300 events come in, they have to queue to go to disk.
  8. Close connection.
  9. At the next poll period, go back to step 3.

Lots of steps? Let's ask about failure modes.

  • What happens if my collector is down?
    • Answer: the client goes to sleep and retries; hope your logs don't wrap.
  • What happens if my collector won't come back up?
    • Answer: build a new one, open a change record, wait for approval, and explain to audit why you don't have logs.
  • What happens to the format of the logs?
    • Answer: good question. I can't explain what MS is doing to these logs; if you know, please share.
  • What about log rotation and archival?
    • Answer: not possible; you need another tool to read them back and store them some place (Splunk).
  • My collector isn't keeping up; what do I do now?
    • Answer: well, hopefully the OU structure of your domain will support creating an assignment policy at the OU level. You might be able to use the same policy/collector pair at multiple OU points, but you might also need to break up the OUs to manage the policy.
  • Cross domain?
    • Answer: 1 or more collectors per domain.
  • Wait, I only want events XX and ZZYY from certain servers for compliance.
    • Answer: you get another collection policy.
  • I can't make this work on server2134.
    • Answer: call support at MS, explain what event collection is, and hopefully convince that person it is supported.
  • My sensitive “application/service log” doesn't use the event log.
    • Answer: a log file? This is Windows; who would do that?

Let's compare to universal forwarders with Splunk:

  • What happens if my “indexer” is down?
    • Answer: the client connects to another indexer; in a production system the data itself is replicated and you retain access to all of it.
  • What happens if my indexer won't come back up?
    • Answer: data is replicated and still available.
  • What happens to the format of the logs?
    • Answer: we capture the original text of all logs.
  • What about log rotation and archival?
    • Answer: built in.
  • My indexer isn't keeping up; what do I do now?
    • Answer: horizontal scaling. Splunk will help you plan for this with experience and performance data from real-world implementations.
  • Cross domain?
    • Answer: certainly. WAN, no issue; cloud, not a problem; VPN, sure, why not.
  • Wait, I only want events XX and ZZYY from certain servers for compliance.
    • Answer: the deployment server will push a configuration based on the server names you select.
  • I can't make this work on server2134.
    • Answer: call support (paid) at Splunk; we have real people with real knowledge and a great community that has probably solved that problem before.
  • My sensitive system doesn't use the event log; it writes to a file.
    • Answer: probably not a problem. Files, databases, and network captures can all be data sources; we do this all the time.