User Behavior Analytics, or UBA, is the term for searching for patterns of usage that indicate unusual computing activity, regardless of whether it comes from a hacker, an employee, or even malware or another process. While UBA won’t prevent hackers or insiders from accessing critical systems, it can quickly spot their work and minimize damage and overall risk.
What’s the difference between UBA and the trendier User Entity Behavior Analytics or UEBA?
Why Look at Entities?
The letter E, of course! There’s a lot packed into the addition of the word “entity”. The key idea is that UEBA extends the reach of its analytics to cover non-human processes and machine entities. Gartner analyst Anton Chuvakin has a good breakdown of UEBA: in short, it’s still UBA, but enhanced with more context from entities and better analytics.
Why go beyond the user?
Many times it makes sense not to look at individual user accounts to spot unusual behaviors. For example, hackers who have landed on a victim’s computer may be leveraging multiple user accounts to carry out their post-exploitation activities, say, lateral movement to other machines.
The larger entity to focus on is then not the account but the machine, which can be identified by an IP address. In other words, look for unusual activities where the common element is the IP address of a workstation.
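To make that concrete, here’s a minimal sketch of IP-centric analysis: group events by source machine and flag any workstation where an unusually large number of distinct accounts are active. The event tuples, field names, and threshold are all hypothetical stand-ins for whatever your log pipeline actually produces.

```python
from collections import defaultdict

# Hypothetical event records: (source_ip, user_account, action).
# In practice these would come from your event log pipeline.
events = [
    ("10.0.0.5", "alice", "file_read"),
    ("10.0.0.5", "bob", "file_read"),
    ("10.0.0.5", "svc_backup", "file_read"),
    ("10.0.0.9", "carol", "file_read"),
]

def accounts_per_ip(events):
    """Group events by source IP and count distinct accounts per machine."""
    ips = defaultdict(set)
    for ip, user, _action in events:
        ips[ip].add(user)
    return {ip: len(users) for ip, users in ips.items()}

def flag_suspicious(events, threshold=2):
    """Flag machines where more accounts are active than we'd expect."""
    return [ip for ip, n in accounts_per_ip(events).items() if n > threshold]

print(flag_suspicious(events))  # only 10.0.0.5 exceeds the threshold
```

A real system would of course baseline each machine over time rather than use a fixed threshold, but the entity-centric grouping is the point.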
Machine Learning and User Entity Behavior Analytics
Gartner’s guide to UEBA (downloadable here) has some insights into UEBA and its appropriate use cases. As they point out, there’s more emphasis in UEBA than in UBA on using data science and machine learning to separate the normal activities of users and entities from abnormal ones.
Gartner sees UEBA being applied to use cases where finer-tuned analytics and gathering more context is essential, including:
- Malicious Insiders
- APT groups leveraging zero-day vulnerabilities
- Data exfiltration involving novel channels
- User account access monitoring
Since these use cases involve a shifting attack surface, Gartner notes that machine learning, or ML, is essential to establish a baseline derived from “interactions between all users, systems, and data”. But as ML researchers have pointed out, there’s no single approach to working out these baselines.
K-means clustering. Classification. Regressions. Component Analysis. All can be used in UEBA algorithms. If the nerd is strong in you, you can learn more about these topics here.
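As an illustration of the clustering flavor, here’s a toy one-dimensional k-means sketch that scores a new observation by its distance to the nearest learned cluster center. The single “files touched per hour” feature and the sample baseline are invented for the example; production UEBA systems work over many features and far more data.

```python
# Toy clustering-based anomaly scoring. Assumed (hypothetical) feature:
# files touched per hour by a user. Real UEBA uses many more signals.

def kmeans_1d(points, k=2, iters=20):
    """Minimal 1-D k-means: returns the final cluster centroids."""
    pts = sorted(points)
    centroids = pts[:: max(1, len(pts) // k)][:k]  # crude spread-out init
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

def anomaly_score(p, centroids):
    """Distance to the nearest cluster center; larger means more unusual."""
    return min(abs(p - c) for c in centroids)

baseline = [4, 5, 6, 5, 4, 100, 110]  # two natural clusters of activity
centroids = kmeans_1d(baseline, k=2)
# A point near either cluster scores low; a wild outlier scores high.
print(anomaly_score(5, centroids), anomaly_score(500, centroids))
```

The tuning pain mentioned below falls out of exactly these choices: k, the features, and where you draw the “abnormal” score cutoff.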
But as even the biggest boosters of ML-based analytics will tell you, there are limits. The algorithms are notoriously hard to tune and can lead to the curse of all UEBA systems: too many false positives. In other words, the algorithms are so sensitive that they alert on conditions that may be unusual but are not actually abnormal or indicative of an attacker or insider.
Perhaps a system architect had to work over the weekend to meet a deadline and was copying hundreds of files. A UEBA clustering algorithm, say, flagged this employee’s activity as abnormal and locked his account, thereby causing a critical project to be delayed. No one wants that!
UEBA, Clean Data, and Threat Models
The bigger question for UEBA, as it was for security information and event management, or SIEM, systems, is the data source.
As we’ve pointed out in the IOS blog before, it’s very difficult to base security analytics on the raw Windows events log. It’s a complicated (and potentially error-prone) process to correlate related events from the system log. On top of that, it’s resource intensive. There are, ahem, better solutions that can produce cleaner file-related event histories.
Another problem posed by UEBA algorithms is that they’re in a sense starting from scratch: they have to be trained, either through formal supervised training or on the fly in a semi-supervised fashion. There is nothing inherently wrong with this idea, since it’s simply the way ML works.
But in the data security space, we have a big advantage: we know how most of the more critical incidents occur. Thankfully, the folks at MITRE have done the heavy lifting and organized a lot of attacker techniques and tactics into various models.
This is a good thing!
For UEBA, we don’t need to rely solely on ML techniques to “learn” what the key factors are in determining abnormal behaviors. MITRE and others tell us that, for example, lateral movement, credential access, and privilege escalation are some of the common known methods of attackers.
Starting with these well-understood patterns gives you a big head start in organizing event data. This naturally leads to the topic of threat modeling.
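As a sketch of that head start, the snippet below tags raw events with ATT&CK-style tactic labels from a hand-written rule table, so known-bad patterns are surfaced without any training at all. The event names and the mapping are hypothetical simplifications of MITRE’s much richer taxonomy.

```python
# Hypothetical mapping from low-level event types to MITRE ATT&CK-style
# tactic labels; the real taxonomy (attack.mitre.org) is far richer.
TACTIC_RULES = {
    "remote_service_login": "Lateral Movement",
    "lsass_memory_read": "Credential Access",
    "new_admin_group_member": "Privilege Escalation",
}

def tag_events(event_stream):
    """Attach a tactic label to each event we have a rule for."""
    return [(e, TACTIC_RULES[e]) for e in event_stream if e in TACTIC_RULES]

stream = ["file_read", "remote_service_login", "new_admin_group_member"]
print(tag_events(stream))
```

In practice you would combine these rule-based tags with learned baselines: the rules name the known attack patterns, and the ML handles the long tail of everything else.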
Varonis and Threat Modeling
Threat models are the key characteristics of real-world attacks organized into larger, more meaningful categories. And this is where Varonis can help.
Without any configuration, Varonis threat models are ready to go. Varonis uses predictive threat models to automatically analyze behaviors across multiple platforms and alert you to a potential attacker. From CryptoLocker infections to compromised service accounts to disgruntled employees, we’ll detect and alert you on all types of abnormal user behavior.
Want to learn more? Request a demo today!