Your searches are queued, but you have cores, memory, and IO to spare? Tuning your limits can allow Splunk to utilize “more” of your hardware when scaled-up instances are in use.
Note: This approach does not help when searches run LONG; it only helps when searches complete quickly enough but there are no available search slots. Be careful to apply the right solution to your problem; the problem this addresses may not be the one you have.
First, on all versions of Splunk up to and including 7.0.1, apply the following setting to disable a feature that can slow search initialization.
[search]
# SPL-136845 - review future release notes to determine if this can be reverted to auto
max_searches_per_process = 1
On the search head only, where data model acceleration (DMA) is utilized (e.g. Enterprise Security), update the following:
# This is useful when you have ad-hoc capacity to spare but are skipping scheduled
# searches (ES, I'm looking at you) or other home-grown scheduled content
[scheduler]
max_searches_perc = 75
auto_summary_perc = 100
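To confirm that the scheduler is actually skipping work (rather than searches simply running long), a search against the scheduler log along these lines can help; the sourcetype and field names are the usual scheduler.log ones but may vary by version, so treat this as a sketch:

index=_internal sourcetype=scheduler status=skipped
| stats count AS skipped_count BY savedsearch_name, reason
| sort - skipped_count

The reason field will usually tell you whether the skips are due to hitting the maximum concurrent scheduled search limit, which is the case this tuning addresses.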
Evaluate the load on the search heads and indexers, including CPU and memory utilization (see the example search after this list). We can increase this value to allow more concurrent searches per SH until one of the following occurs:
- CPU or memory utilization reaches 60% on the IDX or SH
- IOPS or storage throughput hits a ceiling and no longer increases (if so, decrease by one increment of 10)
- Skipping/queuing no longer occurs (increase by 1-3 additional units from this point)
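One way to watch the 60% threshold while stepping the limit up is a search over introspection data; this sketch assumes resource usage introspection is enabled and that _introspection data from the indexers is available to the search head (field names are from resource_usage.log and may differ by version):

index=_introspection sourcetype=splunk_resource_usage component=Hostwide
| eval cpu_pct = 'data.cpu_system_pct' + 'data.cpu_user_pct'
| eval mem_pct = round(('data.mem_used' / 'data.mem') * 100, 1)
| stats avg(cpu_pct) AS avg_cpu_pct, max(cpu_pct) AS peak_cpu_pct, avg(mem_pct) AS avg_mem_pct BY host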
# limits.conf - set on the SH only
[search]
# Base value is 6; start at 20 and increase by 10 until utilization on IDX or SH is at 60% CPU/memory
#base_max_searches = TBD
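After each increase, you can check whether the additional slots are actually being consumed by tracking concurrency from metrics.log; this is a sketch that assumes the default search_concurrency metrics group is being logged on the search head:

index=_internal sourcetype=splunkd source=*metrics.log group=search_concurrency "system total"
| timechart span=5m max(active_hist_searches) AS peak_historical_searches, max(active_realtime_searches) AS peak_realtime_searches

If peak concurrency stays well below the new ceiling while skipping continues, the bottleneck is likely elsewhere (e.g. scheduler limits or indexer capacity) rather than base_max_searches.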