Unbelievably simple (ipfix|(net|j|s)flow) collection

Do blog posts come in threes? Keep watching to find out. Yesterday I gave you the rundown on a new way to collect syslog. Today I’m going to spend some time on a simple, low-cost, and performant way to collect flow data.

  • At least two indexers with HTTP Event Collector enabled; more is better. For this use case it is not appropriate to use dedicated HEC servers.
  • One HTTP load balancer; I use HAProxy. You can certainly reuse the one from our rsyslog configuration.
  • Optionally, a UDP load balancer such as NGINX. I am not documenting that setup at this time.
  • One Ubuntu 16.04 VM

Basic Setup

  1. Follow the docs to set up HTTP Event Collector on your indexers. Note that the docs do not cover clustered indexers; in that case you must create the configuration manually and generate a unique GUID yourself. Clustered environments can use the sample configuration below. IMPORTANT: ensure your data indexes AND _internal are allowed for the token.
  2. [http] 
    disabled=0
    port=8088
    #
    [http://streamfwd]
    disabled=0
    index=main
    token=DAA61EE1-F8B2-4DB1-9159-6D7AA5220B21
    indexes=_internal,main
  3. Follow the documentation for your load balancer of choice to create an HTTP VIP with HTTPS back-end servers. HEC listens on 8088 by default.
  4. Install the independent Stream forwarder (streamfwd) per the docs.
  5. Kill stream if it is running: “killall -9 streamfwd”
  6. Remove the init script
    1. update-rc.d -f streamfwd remove
    2. rm /etc/init.d/streamfwd
  7. Create a new service unit file for systemd /etc/systemd/system/streamfwd.service (the [Install] section is required for systemctl enable to work)
    [Unit]
    Description=Splunk Stream Dedicated Service
    After=syslog.target network.target
    [Service]
    Type=simple
    ExecStart=/opt/streamfwd/bin/streamfwd -D
    [Install]
    WantedBy=multi-user.target
  8. Enable the new service “systemctl enable streamfwd”
  9. Create/update streamfwd.conf, replacing the GUID, VIP, and INTERFACE values
    1. [streamfwd]
      
      httpEventCollectorToken = <GUID>
      
      indexer.0.uri= <HEC VIP>
      netflowReceiver.0.ip = <INTERFACE TO BIND>
      netflowReceiver.0.port = 9995
      netflowReceiver.0.decoder = netflow
  10. Create/update inputs.conf, ensuring the URL is correct for the location of your Stream app
  11. [streamfwd://streamfwd]
    
    splunk_stream_app_location = https://192.168.100.62:8000/en-us/custom/splunk_app_stream/
    
    stream_forwarder_id=infra_netflow
  12. Start the streamfwd “systemctl start streamfwd”
  13. Login to the search head where Splunk App for Stream is Installed
  14. Navigate to Splunk App for Stream –> Configuration –> Distributed Forwarder Management
  15. Click Create New Group
  16. Enter Name as “INFRA_NETFLOW”
  17. Enter a Description
  18. Click Next
  19. Enter “INFRA_NETFLOW” as the rule and click next
  20. Click Finish without selecting options
  21. Navigate to Splunk App for Stream –> Configuration –> Configure Streams
  22. Click New Stream and select netflow as the protocol (this is correct for netflow/sflow/jflow/ipfix)
  23. Enter Name as “INFRA_NETFLOW”
  24. Enter a Description and click next
  25. No Aggregation and click next
  26. Deselect any fields NOT interesting for your use case and click next
  27. Optional develop filters to reduce noise from high traffic devices and click next
  28. Select the index for this collection and click enable then click next
  29. Select only the INFRA_NETFLOW group and click Create Stream
  30. Configure your NETFLOW generator to send records to the new streamfwd

Validation! Search the index you selected when configuring the stream; a quick smoke test follows.
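If nothing shows up, confirm the HEC VIP accepts events before troubleshooting the forwarder. A quick smoke test, reusing the sample token from the configuration above (adjust the URL scheme, index, and sourcetype to your environment):

# post a test event through the load balancer to HEC
# (use https:// and add -k if your VIP terminates TLS)
curl http://<HEC VIP>:8088/services/collector/event \
  -H "Authorization: Splunk DAA61EE1-F8B2-4DB1-9159-6D7AA5220B21" \
  -d '{"event": "hec vip smoke test", "index": "main", "sourcetype": "syslog"}'
# expected response: {"text":"Success","code":0}

Once your exporters are sending, flow records normally arrive with sourcetype stream:netflow (this can vary by Stream version), so a search like index=main sourcetype=stream:netflow | head 10 is a reasonable first check.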

Building a more perfect Syslog Collection Infrastructure

A little while back I created a bit of code to help get data from Linux systems in real time where the Splunk Universal Forwarder could not be installed. At the time we had a few limitations, the biggest being that timestamps were never parsed; only the “current” time on the indexer could be used. Want to try out version 2? Let's get started! First let me explain what we are doing.

If you manage a Splunk environment with high-rate sources such as a Palo Alto firewall or a web proxy, you will notice that events are not evenly distributed over the indexers because the data is not evenly balanced across your aggregation tier. The reason boils down to “time-based load balancing”: in larger environments the universal forwarder may not be able to split a single high-volume stream by time to distribute the load. So what is an admin to do? Let's look for a connection-based load-balancing solution. We need a way to switch from syslog to HTTP(S) so we can use a proper load balancer. How will we do this?

  1. Using containers, we will dedicate one or more instances of rsyslog to each “type” of data
  2. Use a custom plugin to package and forward batches of events over HTTP(S)
  3. Use a load balancer configured for least-connection balancing to spread the batches of events

What you need

  • At least two indexers with HTTP Event Collector enabled; more is better. The “benefits” of this solution require collection on the indexers; dedicated collectors will not be an adequate substitute.
  • One load balancer; I use HAProxy
  • One syslog collection server with rsyslog 8.24+; I use LXC instances hosted on Proxmox. An optimal deployment uses one collector per source technology, for example one instance collecting Cisco IOS and another collecting Palo Alto firewalls. Using advanced configuration and filters you can combine several low-volume sources.
  • A GUID; if you need one generated there are many ways, this one is quick and easy: https://www.guidgenerator.com/online-guid-generator.aspx

Basic Setup

  1. Follow the docs to set up HTTP Event Collector on your indexers. Note that the docs do not cover clustered indexers; in that case you must create the configuration manually and generate a unique GUID yourself. Clustered environments can use the sample configuration below:
  2. Follow the documentation for your load balancer of choice to create an HTTP VIP with HTTPS back-end servers. HEC listens on 8088 by default.
  3. Grab the code and configuration examples from bitbucket
    1. Deploy the script omsplunkhec.py to /opt/rsyslog/ and ensure the script is executable
    2. Review rsyslogd.d.conf.example, create your configuration in /etc/rsyslog.d/00-splunkhec.conf, and replace the GUID and IP with your correct values
    3. Restart rsyslog (a quick end-to-end test follows below)
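With rsyslog restarted, a quick end-to-end check, assuming the 514/tcp listener, token, and index used in the examples later in this post:

# send a test message into the local rsyslog TCP listener
logger -n 127.0.0.1 -P 514 -T "hec pipeline test"
# then in Splunk, within a few seconds:
#   index=main sourcetype=syslog "hec pipeline test"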

What to expect? My hope: data balance zen.

HTTP Event Collector inputs.conf example deployed via master-apps

[http] 
disabled=0
port=8088
#
[http://SM_rsyslog_routerboard]
disabled=0
index=main
token=DAA61EE1-F8B2-4DB1-9159-6D7AA5220B21
indexes=main,summary

Example /etc/rsyslog.d/00-splunk.conf

This example will listen on 514 TCP and UDP and send events via HTTP; be sure to replace the GUID and IP address. A sketch of how the forwarding script works follows the example.

module(load="imudp")
input(type="imudp" port="514" ruleset="default_file")
module(load="imptcp")
input(type="imptcp" port="514" ruleset="default_file")
module(load="omprog")

ruleset(name="default_file"){
    $RulesetCreateMainQueue    
    action(type="omprog"
       binary="/opt/rsyslog/omsplunkhec.py DAA61EE1-F8B2-4DB1-9159-6D7AA5220B21 192.168.100.70 --sourcetype=syslog --index=main" 
       template="RSYSLOG_TraditionalFileFormat")
    stop
}
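For the curious: omprog starts the script once and writes each formatted event to its stdin. The real omsplunkhec.py from the repo adds batching, retries, and option handling; a stripped-down sketch of the mechanism (illustrative only, reusing the address and token from the example above) looks roughly like this:

#!/usr/bin/env python
# Illustrative sketch only -- the real omsplunkhec.py adds batching, retries and CLI options.
import json
import sys
import requests

HEC_URL = "http://192.168.100.70:8088/services/collector/event"  # HEC VIP from the example
HEC_TOKEN = "DAA61EE1-F8B2-4DB1-9159-6D7AA5220B21"               # HEC token from the example

def main():
    headers = {"Authorization": "Splunk " + HEC_TOKEN}
    for line in sys.stdin:  # omprog delivers one formatted event per line
        event = line.rstrip("\n")
        if not event:
            continue
        payload = {"event": event, "sourcetype": "syslog", "index": "main"}
        # switch to https:// and handle certificate verification if your VIP terminates TLS
        requests.post(HEC_URL, headers=headers, data=json.dumps(payload))

if __name__ == "__main__":
    main()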

Example HAProxy Configuration 1.7 /etc/haproxy/haproxy.cfg

 

global
        log /dev/log    local0
        log /dev/log    local1 notice
        chroot /var/lib/haproxy
        stats socket /run/haproxy/admin.sock mode 660 level admin
        stats timeout 30s
        user haproxy
        group haproxy
        daemon
        # Default SSL material locations
        ca-base /etc/ssl/certs
        crt-base /etc/ssl/private
        # Default ciphers to use on SSL-enabled listening sockets.
        # For more information, see ciphers(1SSL).
        ssl-default-bind-ciphers kEECDH+aRSA+AES:kRSA+AES:+AES256:RC4-SHA:!kEDH:!LOW:!EXP:!MD5:!aNULL:!eNULL
defaults
        log     global
        mode    http
        option  httplog
        option  dontlognull
        timeout connect 5000
        timeout client  50000
        timeout server  50000
        errorfile 400 /etc/haproxy/errors/400.http
        errorfile 403 /etc/haproxy/errors/403.http
        errorfile 408 /etc/haproxy/errors/408.http
        errorfile 500 /etc/haproxy/errors/500.http
        errorfile 502 /etc/haproxy/errors/502.http
        errorfile 503 /etc/haproxy/errors/503.http
        errorfile 504 /etc/haproxy/errors/504.http
listen  stats   
        bind            *:1936
        mode            http
        log             global
        maxconn 10
        clitimeout      100s
        srvtimeout      100s
        contimeout      100s
        timeout queue   100s
        stats enable
        stats hide-version
        stats refresh 30s
        stats show-node
        stats auth admin:password
        stats uri  /haproxy?stats
frontend localnodes
    bind *:8088
    mode http
    default_backend nodes
backend nodes
    mode http
    balance leastconn
    option forwardfor
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }
    option httpchk
    server idx2 192.168.100.52:8088 ssl verify none check 
    server idx1 192.168.100.51:8088 ssl verify none check 
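Once HAProxy has loaded this configuration, either of the following confirms both indexer back ends are passing their health checks (the stats credentials, port, and admin socket come from the config above; socat may need to be installed):

# stats page from the "listen stats" section, CSV output; columns 1,2,18 are proxy, server, status
curl -su admin:password "http://127.0.0.1:1936/haproxy?stats;csv" | cut -d, -f1,2,18
# or via the admin socket
echo "show stat" | socat stdio /run/haproxy/admin.sock | cut -d, -f1,2,18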

Syncing up shclusterapps

This one is short and sweet. When building a Splunk search head cluster we will often create a search head unattached to indexers to “stage” .spl deployments: install and configure there, THEN update shcluster/apps and push. The following rsync command does this for you and obeys the golden rule of avoiding the default core apps. The exclude list is correct as of 6.4.1; update it as needed for new versions, and be sure to exclude anything like an “app” containing a deployment client configuration.

rsync --verbose --progress --stats --recursive --times --perms \
--exclude alert_logevent \
--exclude launcher \
--exclude SplunkForwarder \
--exclude alert_webhook \
--exclude learned \
--exclude splunk_httpinput \
--exclude appsbrowser \
--exclude legacy \
--exclude SplunkLightForwarder \
--exclude framework \
--exclude sample_app \
--exclude splunk_management_console \
--exclude gettingstarted \
--exclude search \
--exclude "*_deploymentclient*" \
--exclude introspection_generator_addon \
--exclude splunk_archiver \
--exclude user-prefs \
/opt/splunk/etc/apps/* /opt/splunk/etc/shcluster-test/apps
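A cheap safety net: run it with --dry-run first so rsync lists what it would copy without touching the destination, letting you confirm the exclude list is doing its job.

rsync --dry-run --verbose --stats --recursive --times --perms \
<same --exclude list as above> \
/opt/splunk/etc/apps/* /opt/splunk/etc/shcluster-test/apps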

Building High Performance low latency rsyslog for Splunk

This is a brief follow-up on my earlier post. In a very large scale environment, the write -> monitor -> read cycle between a log-appending source such as rsyslogd and Splunk can add latency to log data entering the destination environment. Last week I stumbled onto a feature of rsyslog developed a couple of major versions ago that has been very under-appreciated: omprog allows a developer to receive events from rsyslog in any program without first waiting for a disk write. I’ve developed a little bit of code allowing direct transfer of events to Splunk using the HTTP Event Collector; download it and try it out.

The output module allows direct, scalable transfer between rsyslog and Splunk in native protocols. Ideal use cases include dynamically scaling cloud environments and embedded devices where agents are not acceptable.

Credits

  • Rsyslog dev team for making this possible and Rainer for this presentation that inspired me
  • Splunk dev team for the really awesome http event collector and George who developed the python class interface
  • Splunk Stream team who added direct event collector usage in stream 6.5 proving significant scale.

Setup

  • Setup http event collector behind a load balancer
  • Note your token
  • Install requests using apt, yum, or pip http://docs.python-requests.org/en/master/user/install/
  • If using certificate verification setup what is required for requests
  • “git” the code https://bitbucket.org/rfaircloth-splunk/rsyslog-omsplunk/src
  • place omsplunkhec.py and splunk_http_event_collector.py in a location executable by rsyslog
  • Set up an rsyslog ruleset with an action similar to the following (a manual test of the script follows this list)
    module(load="omprog")
    action(type="omprog"
           binary="/opt/rsyslog/hecout.py --source=rsyslog:hec --sourcetype=syslog --index=main" 
           template="RSYSLOG_TraditionalFileFormat")

Building reliable rsyslogd infrastructure for Splunk

 

Overview

Prepare a base infrastructure for high-availability ingestion of syslog data, with a default virtual server and configuration for test data onboarding. Technology-specific onboarding procedures are referenced separately.

Requirement

Multiple critical log sources require a reliable syslog infrastructure. The following attributes must be present in the solution:

  • Enterprise-supported Linux such as RHEL, CentOS, or a recent Ubuntu LTS
  • A syslog configuration that will not impact the logging of the host on which it runs
  • External load balancing utilizing DNAT; lacking an available enterprise shared-services NLB device, KEMP offers a free-to-use version of their product (up to 20 Mbps) suitable for many cases

Technical Environment

The following systems will be created utilizing physical or virtual systems. System specifications will vary with estimated load.

  • servers in n+1 configuration
    • Minimum 2 GB memory
    • Minimum 2 x 2.3 GHZ core
    • Mounts configured per enterprise standard with the following additions
      • /opt/splunk 40 GB XFS
      • /var/splunk-syslog 40 GB XFS
  • Dual interfaced load balancer configured for DNAT support.
  • Subnet with, at minimum, one address per unique syslog source technology; additional space for growth is strongly advised
  • Subnet allocated for syslog servers

Solution: Prepare the rsyslogd servers

The following procedure will be utilized to prepare the rsyslogd servers

  1. Install the base operating system and harden according to enterprise standards
  2. Provision and mount the application partitions /opt/splunk and /var/splunk-syslog according to the estimates required for your environment.
    1. Note 1: typical configurations use noatime on both mounts
    2. Note 2: typical configurations use noexec on the syslog mount
  3. Create the following directories for modular configuration of rsyslogd
    mkdir -p /etc/rsyslog.d/splunk-0-rules
    mkdir -p /etc/rsyslog.d/splunk-1-inputs
  4. Create the Splunk master syslog-configuration /etc/rsyslog.d/splunk.conf
    #
    # Include all config files for splunk /etc/rsyslog.d/
    #
    
    $IncludeConfig /etc/rsyslog.d/splunk-0-rules/*.conf
    $IncludeConfig /etc/rsyslog.d/splunk-1-inputs/*.conf
  5. Create the catch-all syslog collection source: /etc/rsyslog.d/splunk-1-inputs/default.conf
    # define syslog inputs for the default ruleset
    module(load="imptcp")
    module(load="imudp")
    input(type="imptcp" port="8100" ruleset="default_file")
    input(type="imudp" port="8100" ruleset="default_file")
  6. Define a rule for all incoming data on the default port /etc/rsyslog.d/splunk-0-rules/default.conf
    ruleset(name="default_file"){
        $RulesetCreateMainQueue    
        $template DynaFile,"/var/splunk-syslog/default/%HOSTNAME%.log"
        *.* -?DynaFile
        stop
    }
  7. Ensure splunk can read from the syslog folders. The paths should exist at this point due to the dedicated mount
    chown -R splunk:splunk /var/splunk-syslog
    chmod -R 0755 /var/splunk-syslog
  8. Restart rsyslogd (a HUP alone will not load the new configuration)
    systemctl restart rsyslog
  9. Create log rotation configuration /etc/logrotate.d/splunk-syslog
    /var/splunk-syslog/*/*.log
    {
        daily
        compress
        delaycompress
        rotate 4
        ifempty
        maxage 7
        nocreate
        missingok
        sharedscripts
        postrotate
        systemctl kill -s HUP rsyslog.service 2> /dev/null || true
        endscript
    }
  10. Allow firewall access to the new ports (RHEL based)
    firewall-cmd --permanent --zone=public --add-port=8100/tcp 
    firewall-cmd --permanent --zone=public --add-port=8100/udp
    firewall-cmd --reload
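At this point the listener can be validated locally, before the load balancer is in the picture (the same logger flags are used in the validation procedure later in this post):

    # send test messages to the local 8100 listener over TCP and UDP
    logger -n 127.0.0.1 -P 8100 -T "rsyslog default ruleset TCP test"
    logger -n 127.0.0.1 -P 8100 -d "rsyslog default ruleset UDP test"
    # confirm the dynafile was written
    ls -l /var/splunk-syslog/default/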

 

Solution: Prepare the KEMP load balancer

  • Deploy virtual load balancer to hypervisor with two virtual interfaces
    • #1 Enterprise LAN
    • #2 Private network for front end of syslog servers
  • Login to the load balancer web UI
  • Apply free or purchased license
  • Navigate to network setup
    • Set eth0 external ip
    • Set eth1 internal ip
  • Add the first virtual server (udp)
    • Navigate to Virtual Services –> Add New
    • set the virtual address
    • set port 514
    • set port name syslog-default-8100-udp
    • set protocol udp
    • Click Add this virtual service
    • Adjust virtual service settings
      • Force Layer 7
      • Transparency
      • set persistence mode source ip
      • set persistence time 6 min
      • set scheduling method least connection
      • Use Server Address for NAT
      • Click Add new real server
        • Enter IP of syslog server 1
        • Enter port 8100
  • Add the first virtual server (tcp)
    • Navigate to Virtual Services –> Add New
    • set the virtual address
    • set port 514
    • set port name syslog-default-8100-tcp
    • set protocol tcp
    • Click Add this virtual service
    • Adjust virtual service settings
      • Service type Log Insight
      • Transparency
      • set scheduling method least connection
      • TCP Connection only check port 8100
      • Click Add new real server
        • Enter IP of syslog server 1
        • Enter port 8100
  • Repeat the add-virtual-server process for additional real servers

 

Update syslog server routing configuration

Update the default gateway of the syslog servers to utilize the NLB internal interface

Validation procedure

From a Linux host, use the following commands to validate that the NLB and log servers are working together:
logger -P 514 -T -n <vip_ip> "test TCP"
logger -P 514 -d -n <vip_ip> "test UDP"
Verify the messages are logged in /var/splunk-syslog/default.

Prepare Splunk Infrastructure for syslog

  • Follow the procedure for deployment of the Universal Forwarder with a deployment client; ensure the client has valid outputs and base configuration
  • Create the indexes syslog and syslog_unclassified
  • Deploy input configuration for the default input
[monitor:///var/splunk-syslog/default/*.log]
host_regex = .*\/(.*)\.log
sourcetype = syslog
source = syslog_enterprise_default
index = syslog_unclassified
disabled = false

 

  • Validate the index contains data
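A search along these lines confirms events are arriving and that the host field is being derived from the file name (index, source, and sourcetype as configured above):

index=syslog_unclassified sourcetype=syslog source=syslog_enterprise_default
| stats count latest(_time) as last_event by host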

 

Ghost Detector (CVE-2015-7547)


Just in case you need yet another reason to utilize passive DNS analytics, a significant new vulnerability is out for glibc. Have Stream? You can monitor your queries for this IOC:

https://sourceware.org/ml/libc-alpha/2016-02/msg00416.html

Update: the attack requires both A and AAAA records, so we only show possible attacks with both involved. This search should return zero results. If results are returned, there “may” be something of interest; drill into the answers involved to determine whether they are malicious based on the CVE above.

index=streams sourcetype=stream:dns (query_type=A OR query_type=AAAA)
[
search index=streams sourcetype=stream:dns (query_type=A OR query_type=AAAA)
| rare limit=20 dest
| fields + dest | format
]
| stats max(bytes_in) max(bytes_out) max(bytes) values(query_type) as qt by src,dest,query
| where mvcount(qt)>=2
| sort - max*
| lookup domain_segments_lookup domain as query OUTPUT privatesuffix as domain
| lookup alexa_lookup_by_str domain OUTPUT rank
| where isnull(rank)

Don’t have stream yet? Deploy in under 20 minutes.
http://www.rfaircloth.com/2015/11/06/get-started-with-splunk-app-stream-6-4-dns/

When you have 100 problems, more logs are not the answer

Often SIEM projects begin where log aggregation projects end. So many logs, cut into organized stacks of wood, ready to burn for value. I can be quoted on this: “All logs can be presumed to have security value.” One project to build the world's largest bonfire, however, is seldom the correct answer. What value, you may ask? Value will be gained in one or more of these categories:

Continue reading “When you have 100 problems, more logs are not the answer”

Share that search! Building a content pack for Splunk Enterprise Security 4.0+

Splunk has initial support for export of “content”, which can be dashboards and correlation searches created by the user to share with another team. What if you need to be a little more complex, for example including a lookup-generating search? This gets a little more complicated but is very doable by the average admin. Our mission here is to implement UC0029. What is UC0029? Glad you asked: each new malware signature detected should be reviewed by a security analyst to determine whether proactive steps can be taken to prevent infection. We will create this as a notable event so that we can provide evidence to audit that the process exists and was followed.

Source code will be provided, so I will not detail step by step how the objects are created and defined in this post.

UC0029 Endpoint new malware detected by signature

 

My “brand” is SecKit, so you will see this identifier in content I have created alone or with my team here at Splunk. As per our best practice, adopt your own brand and use it appropriately for your content. There is no technical reason to replace the “brand” on third-party content you elect to utilize.

Note: ensure all knowledge objects are exported to all apps and owned by admin as you go.

      • Create the app DA-ESS-SecKit-EndpointProtection
        • This will contain ES-specific content such as menus, dashboards, and correlation searches
      • Create the working app SA-SecKit-EndpointProtection
        • This will contain props, transforms, lookups, and scheduled searches created outside of ES
      • Create the lookup seckit_endpoint_malware_tracker; this lookup will contain each signature as it is detected in the environment plus some handy information such as the endpoint where it was first detected, the user involved, and the most recent detection.
      • Create empty lookup CSV files
        • seckit_endpoint_malware_tracker.csv (note you will not ship this file in your content pack)
        • seckit_endpoint_malware_tracker.csv.default

Build and test the saved search SecKit Malware Tracker – Lookup Gen. This search will use tstats to find the first and last instance of all signatures in a time window and update the lookup if an earlier or later instance is found; a sketch of this kind of search follows.
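Roughly, a lookup-gen search of that shape looks like the following. This is a sketch only, not the shipped SecKit search; it assumes the CIM Malware data model is populated and accelerated, and it reuses the lookup name created above:

| tstats min(_time) as first_seen max(_time) as last_seen
    from datamodel=Malware.Malware_Attacks
    by Malware_Attacks.signature Malware_Attacks.dest Malware_Attacks.user
| rename Malware_Attacks.* as *
| inputlookup append=t seckit_endpoint_malware_tracker
| stats min(first_seen) as first_seen max(last_seen) as last_seen by signature dest user
| outputlookup seckit_endpoint_malware_tracker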

 

      • Build and test the correlation search UC0029-S01-V001 New malware signature detected. This search will find “new” signatures from the lookup we have created and create a notable event.
      • “Make it default”: in both apps move content from local/ to default/; this will allow your users to customize the content without replacing the existing searches.
      • “Turn it off by default”: it is best practice to ensure any load-generating searches are disabled by default.

        • Add disabled=1 to each savedsearches.conf stanza that does not end in “- Rule”.
        • Add disabled=1 to each correlationsearches.conf stanza.

      • Create a .spl (tar.gz) containing both apps created.
      • Write a blog post explaining what you did and how the searches work, and share the code!
      • Gain fame and respect, maybe a fez or a cape.

The source code

https://bitbucket.org/rfaircloth-splunk/securitykit/src/1ea60c46b685622116e28e8f1660a6c63e7d9e96/base/ess/?at=master

Bonus: Delegate administration of content app

  1. Using your favorite editor edit app/metadata/local.meta
  2. Update the following permissions, adding the “ess_admin” role

## access = read : [ * ], write : [ admin,role2,role3 ]
[savedsearches]
access = read : [ * ], write : [ admin,ess_admin ]

[correlationsearches]
access = read : [ * ], write : [ admin,ess_admin ]

Advancing security through the use of security assessments

Long ago, in the distant past that is the late 1970s, individuals were alone and unconnected. Visionaries of the future began to connect the individuals into communities. These communities were open and without borders; individuals could enter and use all dwellings with ease. The community thrived, with each individual adding unique value.


As the community grew, the individuals began to notice unwelcome occurrences. Dwellings changed without the approval of their owners; items moved from dwelling to dwelling without kind notes left. Most disturbing of all, some smaller individuals would simply disappear. Each community started to address the concerns of its individuals on its own. Some communities fared better than others; elders would meet together and discuss the successes in their communities (just the successes). In order to determine which elder's community had fared the best, consultants were hired to assess the communities' security. The following levels are commonly used to rank community security.

Level 0 Awareness of roaming beasts in the village, as identified by missing young children, missing food supplies, and occasional sightings of red-eyed monsters in the night.

Level 1 Young boys with clubs seek to prove the existence of such beasts despite denial by the elders.

Level 2 A small dog has died of old age in the town square; signs are placed elsewhere, presuming the animals will read and obey the signs.

Level 3 As children continue to disappear in the night, demands that more must be done continue. Young men are given small stones and placed at the community gates. Additional signs are added to ensure beasts will only use the main gate.

Level 4 Losses continue; recent reports of missing valuables such as silver and gold alarm the elders. Each community member is interviewed and background checks are completed. Community leaders, elders, and guards are excluded from the process.

Level 5 Media reports of the losses become public. Embarrassed, the elders demand more action from the guards. New guards are posted around certain well-lighted intersections. Guards dance every 30 minutes between 9 AM and 10 AM around each intersection, ensuring the requirement for more activity is satisfied.

Level 6 Additional losses occur and new elders are brought in by the community to solve the problem. Immediately all guards are replaced with new guards from neighboring communities that have suffered more public and higher losses. The new elders carry forth a plan to double their efforts. The following plan is put in place:
A single new guard is set to walk along the perimeter of the community during business hours, Monday through Thursday.
The number of intersections guarded is doubled. The dance is performed every 15 minutes and the intersection guards are equipped with monoculars.

Outside actors are hired to impersonate monsters of the night by entering the community after dark and taking small tokens such as napkins. The actors are immediately fired for not playing fairly, for reasons not disclosed to the elders.

Level 7 The new guard leadership brings additional guards from a neighboring community to patrol the perimeter outside of business hours. The outsourced guards are instructed to wake a day guard should anything severe or important be observed.

Outside actors are again hired and directed to attempt to take a small flyer from a sign post at a single intersection. After repeated successes, all guards are placed at that same intersection and a successful test is reported to the elders.

Level 8 The senior elders' European beach vacation photos are placed around the community near the fading signs installed when the community reached level 1. Senior guards are replaced. The new senior guards offer to hire the “best” of the outsourced guards for the new perimeter security program. The terms of the offer are not disclosed; 2% of the staff take the offer. The outsource firm does not counter to retain the guards. The new firm observes that the photo liberators have opposable thumbs. Security processes for small animals are reduced, increasing the rate of loss for small animals and children. Elders are not allowed contact with life forms with opposable thumbs. The prohibition is rescinded after 1 hour.

Level 9 The senior guards request more outside assistance; new consultants recommend a new monitoring system built of mirrors, allowing the guards to view the intersections from a central location on a single glass wall. Perimeter guards and intersection guards are immediately discontinued. Days later all small farm animals disappear without notice. On the one-year anniversary a senior investigator comes to the elders with a fantastic story of finding a single chicken, which must have been taken from this community, at a black market in a faraway land. The senior guards are initially certain this must be an isolated incident; however, a manual inspection of the community finds all small animals are indeed missing.

Level 10 Senior guards are once again replaced. The single-wall-of-glass vendor is brought in to explain why their solution has failed. The vendor quickly finds the system was implemented in the very same way as the neighboring community's system. The vendor points out that the shape of their neighbor's community differs greatly: the mirrors as installed have excellent visibility of the latrine and the community dump but very limited visibility of the perimeter. The vendor recommends a larger glass system to provide visibility of the perimeter in addition to the current solution. Construction begins on a larger hut with bigger glass walls.

Level 11 Following delays in construction of the new hut, additional senior guards are engaged from far away with experience in guarding large animals. Additional guards are hired with differing skills. Each guard begins to adjust the mirrors to their personal liking, often complaining they spend too much time in the hut. Senior guards begin to require each guard to roam the community during the day looking for signs of wild beasts.