Every day new vulnerabilities are discovered in our systems: cracks in the fence that adversaries exploit to get into your organization and wreak havoc. Understanding what you have in your environment (types of devices, systems, equipment, and so on) is essential for verifying that the controls in place are working and, more importantly, for keeping up with the threat landscape.
With system vulnerabilities outpacing manual verification, the safest approach is to have an automated way to verify and alert when vulnerabilities are discovered, or worse, when attackers are actively targeting your systems.
So how do you gain this insight and bring it to the attention of the end-user/analyst?
THE FOUNDATION
The first step in the process is to collect your log data in a centralized location where you can run searches and aggregations. The ability to ingest many different types of log data is key, because data comes from all kinds of tools and usually requires several different collection methods. Once collected, the data should ideally be retained for at least 90 days to support trend analysis and to confirm that vulnerability patches or other remediation actually took effect.
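To make the "many types of data, one place" idea concrete, here is a minimal Python sketch with invented field names and stub records; a real deployment would collect via syslog, agents, or APIs and store the results in a search platform, but the normalization step looks roughly like this:

```python
# Two hypothetical raw records from different tools, mapped onto one
# common schema so a single search covers every source.

def normalize_firewall(raw: dict) -> dict:
    """Map a made-up firewall record onto the shared schema."""
    return {
        "timestamp": raw["ts"],
        "host": raw["src_ip"],
        "source": "firewall",
        "message": f'{raw["action"]} {raw["dst_ip"]}:{raw["dst_port"]}',
    }

def normalize_ids(raw: dict) -> dict:
    """Map a made-up IDS record onto the same schema."""
    return {
        "timestamp": raw["time"],
        "host": raw["target"],
        "source": "ids",
        "message": raw["signature"],
    }

events = [
    normalize_firewall({"ts": "2024-05-01T12:00:00Z", "src_ip": "10.0.0.5",
                        "action": "DENY", "dst_ip": "10.0.0.9", "dst_port": 445}),
    normalize_ids({"time": "2024-05-01T12:00:02Z", "target": "10.0.0.9",
                   "signature": "SMB exploit attempt"}),
]

# With everything in one shape, searching and aggregating is trivial.
for e in sorted(events, key=lambda e: e["timestamp"]):
    print(e["timestamp"], e["source"], e["host"], e["message"])
```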
Additionally, you will want the centralized log management platform to support real-time alerts, so you receive notifications as soon as issues are discovered. A reporting service also helps keep management and auditors up to date.
WHAT SHOULD YOU COLLECT?
A good centralized log management platform will collect all of your log data. You have data from the obvious devices everyone thinks of when considering where vulnerabilities lie (firewalls, domain controllers, servers, etc.) as well as data from the not-so-obvious ones. The trick is figuring out how to make sense of it all.
Vulnerability scan data can give authenticated results on an endpoint, showing exact patch levels, software versions, and configuration settings. This data is valuable on its own but often hard to tie into other data; used as a reference point alongside supplemental sources, it provides the bigger picture and eliminates many false positives.
IDS/IPS technologies can inspect live traffic and show what is actually happening on the network. IPS functionality is now built into many leading firewalls, so using it can save an additional purchase. Looking into the traffic reveals what each endpoint is doing: for example, an endpoint hammering a domain controller with many different authentication methods may indicate an attacker trying to exploit a new Kerberos/ticketing issue in the wild. Similarly, if the attacker is using plain-text, unencrypted communications to pull data from a server, you can see exactly how they are accessing it for comparison.
If you correlate the IPS signals with the vulnerability scan data, you can confirm activity beyond what shows up on the scan alone.
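To make that correlation concrete, here is a minimal Python sketch with invented data shapes (hosts keyed to CVEs; not any particular product's API). It simply flags IPS alerts whose targeted CVE already appears in the host's scan findings:

```python
# Hypothetical scan findings: host -> set of CVEs the scanner confirmed.
scan_findings = {
    "web01": {"CVE-2023-1234", "CVE-2022-9999"},
    "db01": {"CVE-2021-4444"},
}

# Hypothetical IPS alerts, each tagged with the CVE its signature targets.
ips_alerts = [
    {"host": "web01", "cve": "CVE-2023-1234", "sig": "Apache auth bypass attempt"},
    {"host": "web01", "cve": "CVE-2020-0001", "sig": "Old scanner probe"},
]

# An alert that lines up with a scan finding deserves far more attention:
# the attacker is probing a hole we already know exists on that host.
for alert in ips_alerts:
    confirmed = alert["cve"] in scan_findings.get(alert["host"], set())
    priority = "CONFIRMED-VULNERABLE" if confirmed else "informational"
    print(priority, alert["host"], alert["sig"])
```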
Note that not all issues found in a scan are even part of the available “attack surface” (all of the ways a host can actually be attacked). For example, an authenticated scan might flag a Linux server for running an outdated version of Apache vulnerable to an authentication attack, but if the web server is not running, or the firewall does not allow any connections to it, the issue is not really exploitable; that part of the attack surface is effectively empty.
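One way to model that filtering (a sketch with assumed inputs for listening ports and firewall rules; the names are illustrative, not from any product):

```python
# Findings from an authenticated scan: (host, service, port, cve).
findings = [
    ("web01", "apache", 80, "CVE-2023-1234"),
    ("web01", "apache", 8080, "CVE-2023-1234"),   # this instance is not running
]

# Hypothetical ground truth gathered from the hosts and the firewall.
listening = {("web01", 80)}          # ports actually bound on each host
firewall_allows = {("web01", 80)}    # ports reachable through the firewall

def on_attack_surface(host: str, port: int) -> bool:
    """A finding is exploitable only if the service is up AND reachable."""
    return (host, port) in listening and (host, port) in firewall_allows

exposed = [f for f in findings if on_attack_surface(f[0], f[2])]
print(exposed)   # only the port-80 finding survives
```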
Other good sources of data include the endpoint itself and its endpoint security software, which have visibility into traffic after decryption (or before encryption). Don’t forget any tool that monitors lateral movement: many attacks land on one host but quickly spread as broadly as they can to establish a foothold.
NOW WHAT?
With data flowing into the centralized log management tool, alerting gives you a real-time view of it. An aggregation of many types of IPS alerts against one host could signify a scanning attempt looking for an exploitable hole. Most of these attempts will fail, but once five or more different IPS alerts fire against a single host, it is safe to say the host is under attack. A real-time alert on this condition lets you perform live forensics or quickly block the source of the attacks.
Taking alerts to the next stage, suppose you have five or more attack alerts against a host, and some of those attacks target vulnerabilities that show up in its scan results. That is a critical alert: it tells you to check whether the attack got through or was blocked by the endpoint security software.
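A minimal Python sketch of both stages, with invented alert and scan structures (the five-signature threshold and the CVE tags are assumptions for illustration; a real platform would express this as a correlation rule):

```python
from collections import defaultdict

THRESHOLD = 5   # distinct IPS signatures against one host, per the rule above

# Hypothetical stream of IPS alerts already collected centrally.
ips_alerts = [
    {"host": "web01", "sig": f"probe-{n}", "cve": f"CVE-2024-000{n}"}
    for n in range(1, 7)
]
scan_findings = {"web01": {"CVE-2024-0003"}}   # from the vulnerability scanner

sigs_by_host = defaultdict(set)
cves_by_host = defaultdict(set)
for a in ips_alerts:
    sigs_by_host[a["host"]].add(a["sig"])
    cves_by_host[a["host"]].add(a["cve"])

for host, sigs in sigs_by_host.items():
    if len(sigs) >= THRESHOLD:
        # The host is clearly under attack; check whether any attack maps
        # onto a known vulnerability to decide how loud the alarm should be.
        overlap = cves_by_host[host] & scan_findings.get(host, set())
        level = "CRITICAL" if overlap else "HIGH"
        print(f"{level}: {host} hit by {len(sigs)} distinct signatures; "
              f"matching scan findings: {sorted(overlap) or 'none'}")
```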
Having the host logs can help with this, as well as provide a correlation point for the alert generated by the log management tool. If a host has been compromised, its logs can give insight into the attacker’s methods and yield Indicators of Compromise to feed into other alerts, so the same activity can be spotted on any other host that reports it.
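As a rough illustration (made-up IoC values and event shapes), matching extracted IoCs against other hosts' events could look like this:

```python
# IoCs pulled from the compromised host's logs (placeholder values).
iocs = {
    "bad_ips": {"198.51.100.23"},
    "bad_hashes": {"d41d8cd98f00b204e9800998ecf8427e"},
}

# Events reported by *other* hosts in the environment.
other_host_events = [
    {"host": "db01", "dst_ip": "198.51.100.23", "file_hash": None},
    {"host": "app02", "dst_ip": "10.0.0.7", "file_hash": None},
]

for e in other_host_events:
    if e["dst_ip"] in iocs["bad_ips"] or e["file_hash"] in iocs["bad_hashes"]:
        print(f"ALERT: {e['host']} matches an IoC from the compromised host")
```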
Another way to look for potential compromises is to create aggregations and thresholds on the data collected. If a host starts listening on a new TCP port, it could indicate a new process waiting for a connection from installed malware. If the host’s connections start going to an unknown location, or a country outside normal operations, a C2 channel may have been opened. Maintaining a lookup table of approved ports and alerting when a new listening port appears can give an early heads-up about an exploited host.
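The approved-port lookup could be sketched like this (hypothetical table and snapshot; real input would be parsed from periodic netstat/ss inventories or host logs):

```python
# Approved listening ports per host -- the lookup table the text describes.
approved_ports = {
    "web01": {22, 80, 443},
    "db01": {22, 5432},
}

def check_new_listeners(host: str, observed_ports: set[int]) -> set[int]:
    """Return any listening port not on the approved list for this host."""
    return observed_ports - approved_ports.get(host, set())

# Hypothetical snapshot from the host's latest port inventory.
unexpected = check_new_listeners("web01", {22, 80, 443, 4444})
if unexpected:
    print(f"ALERT: web01 has unapproved listening ports: {sorted(unexpected)}")
```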
ALERTING CONSIDERATIONS
Real-time alerts can give you the advantage you need to stop an attack before it spreads too far, but there are things to watch for and consider when setting up alerts.
- Can you tune the alert rule? – Does the rule allow filtering or exclusions for known issues, or while remediation is taking place?
- Apply only to the right hosts and data sets – Not all alert rules are relevant to all data; Linux alerts are only really valid for Linux hosts.
- Multiple notification methods? – The ability to send an email is table stakes nowadays, but many users are on Slack more than email. Can the system tie into the messaging client?
- Can the alerts carry the right data for immediate action? – If an alert fires but another trip to the console is needed to understand what it means, was it really a valuable alert? Make sure enough data is in the alert itself to act on right away.
- Do the alerts have a disabled state? – Alerts can get noisy if improperly tuned; can they be disabled until they are reconfigured?
- Can alert rules use enrichment sources to make them dynamic? – Can a rule consult a lookup of known-bad IP addresses, or query LDAP/AD for all the “Domain Admin” users, so the rules need less care and feeding? (See the sketch after this list.)
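As a sketch of that enrichment pattern (the feeds are stubbed; a real rule would pull from a threat-intel source and an LDAP/AD query, which are assumed here):

```python
# Enrichment feeds, refreshed outside the rule itself -- stubbed here with
# static values standing in for a threat-intel pull and an LDAP/AD query.
def load_bad_ips() -> set[str]:
    return {"203.0.113.7", "198.51.100.23"}

def load_domain_admins() -> set[str]:
    return {"alice", "bob"}

def evaluate(event: dict) -> str | None:
    """A single rule that stays current as the lookup feeds change."""
    bad_ips, admins = load_bad_ips(), load_domain_admins()
    if event["src_ip"] in bad_ips:
        return f"known-bad source {event['src_ip']} touched {event['host']}"
    if event["user"] in admins and event["type"] == "failed_login":
        return f"failed login for Domain Admin '{event['user']}' on {event['host']}"
    return None

hit = evaluate({"src_ip": "203.0.113.7", "host": "web01",
                "user": "carol", "type": "failed_login"})
print(hit or "no match")
```

Because the rule reads the lookups at evaluation time, updating the bad-IP list or group membership changes its behavior without anyone editing the rule.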
IN SUM
Alerts from centralized log management can give advance notice of an attack or tell you when one is underway. The ability to detect these conditions automatically, with alert rules flexible enough for every situation, is essential.