Introduction: SOC Processes Are Flawed
Desensitization, in its psychological definition, is a process that diminishes responsiveness to adverse events after repeated exposure.
As I write this near midnight, I can’t help but think of the hundreds of security analysts that work within the confines of security operations centers (SOCs) worldwide, perhaps on a second or third shift, eyeing alerts from log aggregators, security information and event management (SIEM) solutions, or even event correlation engines.
That was me in 2005. Even now I remember the stamina needed to stay focused and engaged with the alerts that came into what seemed like a never-ending queue, many of them already labeled as false positives in prior shift notes or the event tracker itself. I wonder if the analysts today feel desensitized over their 8, 10, or even 12-hour shifts, as they receive security alerts from multiple infrastructures whose contextual purposes they will never even know.
Of course, those analysts will become desensitized. Hackers count on it every day and have for years.
I make this point facetiously to shed some much-needed light on the status quo methods of running and subscribing to SOC services in 2020. As many continue to search for the silver bullet of products to correctly identify an indicator of compromise based upon 20, 30, or even 50 distinct events for in-scope infrastructures, systems, and applications, I would like to point out several key observations that indicate today’s SOC processes are fundamentally flawed.
The observations are as follows:
- The SIEM Alone SOC. Most of us in the field know of countless failed or incomplete SIEM implementations. If a SIEM alone is the primary tool shaping the security analysis of your SOC personnel, you may be missing a lot of valuable information. Regardless of the number of flows going into your SIEM, poor implementations and configurations may undermine the expected results of a solution that is traversing all log events as advertised.
- Denial of Threat Information. Tools are ingesting massive amounts of information, and often the way that information is ingested, parsed, and reflected back to analysts is not effective for triage.
- Many SOCs Use the Wrong Metrics. Many MSSPs are driven by metrics that make them appear focused on service. Events mitigated, closed incidents, total alerts processed, and high risks addressed are just some examples of the “wrong” metrics that fuel SOCs today. These metrics are focused on demonstrating numbers of work items that may not be useful or relevant for threat mitigation. Many of the events or incidents reflected as “closed” may not have been true positives and (most importantly) may not really pertain to an organization’s threat model.
Introducing Organizational Threat Models
Threat models serve as a pattern that allows organizations to more easily identify a list of threats against a target. In an organizational threat model, the organization itself serves as that target.
The model itself is intended to mesh together a custom threat library for a target, along with associated threat motives, probable attack patterns, vulnerabilities that facilitate threat objectives, associated targets, and a list of countermeasures that help resist the effectiveness of the attack patterns supporting those threats and their objectives. Notice that each of these components represents a factor that contributes to the calculation of risk.
This is an important distinction because understanding what is attacking a target matters, particularly for those on the front lines monitoring adverse alerts on a target’s infrastructure.
The organizational threat model essentially provides a way for context to find its way into the role of security operations. Beyond simply relying on tools to govern decisions, an organizational threat model provides the context that security analysts need to think about what is important based upon the likelihood, severity, and accuracy of both threat data and threat intel.
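The components above can be sketched as a simple data structure. This is an illustrative Python sketch, not a VerSprite schema; the class and field names are assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Threat:
    """One entry in a custom threat library (illustrative structure only)."""
    name: str
    motive: str                                           # why a threat actor pursues this objective
    attack_patterns: list = field(default_factory=list)   # probable attack patterns
    vulnerabilities: list = field(default_factory=list)   # vulns that facilitate the objective
    targets: list = field(default_factory=list)           # assets the threat is aimed at
    countermeasures: list = field(default_factory=list)   # controls resisting the attack patterns

@dataclass
class OrganizationalThreatModel:
    """The organization itself is the target of the model."""
    organization: str
    threat_library: list = field(default_factory=list)    # living list of Threat entries
```

Each field corresponds to one of the risk factors named above, which is what lets the model feed a risk calculation later on.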
Organizational Threat Models as a Blueprint for Threat Intelligence
At this point, organizational threat models are not something that snaps on as a plugin to a SIEM or threat intel subscription feed, but something that can instead be used to train SOC analysts on how to think in the trenches when triaging security events and incidents.
The organizational model will allow a team of SOC analysts to ask the following important questions:
1. Who is my enemy?
- Understanding motives and likely threat actors is important because the majority of cybercrime is a copy, or rehash, of prior successful attacks. The gap today is that most analysts do not actually know or understand who their attacker is and are forced to play detective on purely aggregated log information and threat intel, which brings little context. It’s important that a threat actor profile be developed so that, with an understanding of a company’s exposure, analysts can correlate whether abuse patterns against company infrastructure, assets, and applications match the threat motives in that profile.
- The answer to this question is determined largely from threat intelligence feeds, advisories, industry CERT/ISAC reports, and more. This allows the analyst to match a developed threat actor profile to threat events worldwide that may be targeting the industry of a specific target.
2. What are they after?
- Profiling an enemy naturally leads to answering the question of what this enemy wants. Many inexperienced security analysts may just be focused on data-based attack patterns, meaning attack patterns that are simply looking to exploit and pilfer data sources. Not all threat actors are looking for data; some may be looking for persistence and others may simply be looking to burn the whole place down.
- Profiling naturally leads to understanding what targets are in scope based on trends, prior incidents, and other threat advisories and intel. This is how threat information can go from mass consumption that looks like noise to more selective filtering that leads to improved analysis. Knowing what to protect is a fundamental part of defense, and it’s not fulfilled with a simple point-in-time discovery. Discovery of what is in scope for monitoring and defense is an ongoing process that can be automated for virtual, physical, data, and application assets.
3. What trends have developed or are forming?
- Trends have lifecycles. Many companies react to trends at different stages, and today virtually no SOC analysts are thinking about trends. In the “trenches,” trend discussion is important because it keeps the subject fluid in the minds of the analysts. Some companies may argue that trends are tracked via feeds or at higher management levels; unfortunately, at that level conversations never become operationalized. That is why it’s important not only to have news running within operational centers but also to let analysts converse about what trends may be forming in real time, or may already have formed, and discuss how this affects their fluid threat model.
- It’s also important to note where trends come from, as there are many different types. In threat analysis there are economic trends just as there are trends in social engineering attack vectors. Often, the latter are obvious and are presented to analysts past the point at which a proactive stance is possible. Security trends are largely based upon inquiries with analysts, surveys, annual reports, or other sources that assess market conditions. This means the activities that shape these trends have been occurring or are currently occurring, forcing company operations centers to react rather than foresee possible threats. Improvements in correlation engines that collate similar events on a corporate network do allow companies to start thinking more proactively, and this should be greatly encouraged within the trenches.
Building and Operationalizing a Threat Model for Defense
Organizational threat models can leverage a framework like the Process for Attack Simulation and Threat Analysis, or PASTA, to build a threat model from both an offensive and a risk-based perspective. The idea of a risk-centric approach like PASTA is that it compels a threat modeling security champion to focus on the things that matter, where what “matters” is whatever supports the business in a way that is of critical or high importance. It leverages elements of a business impact analysis to help qualify the criticality of the components that support an organization.
The simple steps to building a threat model using PASTA are to leverage the activities depicted in each stage of the threat modeling methodology. These stages are simplified below as a lite version with some exemplary artifacts, questions to ask, and objectives to achieve. It is by no means comprehensive, but it does help to convey the idea of each stage. It also correlates with sample tech genres to help depict how this can all come together for a threat-focused security operations team.
Threat Modeling Methodology Stages
Stage 1: Know Your Business. Know What Supports Your Organization
- How does your company make money?
- What are the online components that support revenue?
- What are the physical components that support revenue?
- What does downtime cost over a unit of time? After how many units of time do things get bad?
- How is continuity ensured for the components of your business?
- How important is information/data to the business model?
- Is it confidential?
- Is it regulated?
- How is it protected?
- Regulatory Risks
- What are they, and what impacts do they bring in terms of go-to-market, customer adoption, and avoiding fees/penalties?
Stage 2: People – Process – Product. What, Where, and Who are they in the support of the Organization?
- Which roles are essential?
- Who has access to the keys to the kingdom?
- What external human resources play a critical role?
- What operations are core to revenue generation and growth?
- What information is leveraged by these operations?
- How is this information safeguarded?
- Is this information regulated?
- What third party operations support the business?
- Consider Shared Services, Offshore Development, Business Process Outsourcing, and Foreign Manufacturing as some examples.
- What proprietary products support current revenue cycles or growth?
- What infrastructure (e.g., CoLo, Managed Services, or Cloud) supports these products?
- What third party vendors contribute to the product success?
- What information is managed by these products?
- What actions are taken by the organization to manage such information?
- What third parties or sub-processors are in play to support the information managed by the product/services of the Organization?
Stage 3: Process Mapping to Business Objectives.
- Enumerate mission-critical processes, products, and services, and map these to the People, Process, and Product components that play a role.
- Consider the following as processes are mapped to objectives:
- Information flow
- Information ownership
- Regulatory laws (private/public) that are in scope
- Business use cases supported by all People, Process and Product components
- Inherent security controls that are in place
- Technology footprint that is being leveraged (e.g., MS, Linux, Oracle, Apache, Zigbee, iOS, React.js, AS/400, HID, etc.) by critical or high impact business processes
Stage 4: Threat Analysis.
- Build a threat library for your organization. An immutable mnemonic like STRIDE will not do, since threats are dynamic and vary greatly by industry and organization. Threat libraries are living lists, so they should be updated on a regular basis. An example threat library for a consumer electronics manufacturing company may look like:
- Information Compromise
- Account Compromise
- Introduce malicious SDK or alter existing SDK
- Compromise device via Supply Chain
- Note that the above are not attack patterns but threats. Stage 6 will begin to create attack libraries from frameworks like ATT&CK or CAPEC to map which types of attack patterns could realize the threats depicted in this stage.
- Use the information and context from Stages 1-3 to shape how threat intel is meaningful. Context is everything and can help funnel threat intel in the right way for an organization. This is a many-to-limited mapping: threat intel and data are plentiful, but custom-developed threat libraries and the attack surface defined by Stages 1-3 can help funnel them to more meaningful results.
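That many-to-limited funneling can be sketched in a few lines. This is a hypothetical example: the library entries, feed items, and tag field are all assumptions made for illustration, not a real intel feed format.

```python
# Hypothetical sketch: funnel a raw intel feed through the custom threat library
# built in Stage 4, so only context-relevant items reach analysts.
threat_library = {
    "information compromise",
    "account compromise",
    "malicious sdk",
    "supply chain compromise",
}

def relevant_intel(feed_items, library):
    """Keep only feed items whose tags intersect the organization's threat library."""
    return [item for item in feed_items
            if library & {tag.lower() for tag in item["tags"]}]

feed = [
    {"id": "intel-1", "tags": ["Account Compromise", "Phishing"]},
    {"id": "intel-2", "tags": ["Cryptomining"]},
]
filtered = relevant_intel(feed, threat_library)  # only intel-1 matches the library
```

The point of the sketch is the direction of the filter: plentiful intel is reduced by the organization’s own library, rather than the library being stretched to fit the feed.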
Stage 5: Vulnerability Analysis.
- What active weaknesses or vulnerabilities (vulns) do we have?
- How do they help support the threat library that was created in Stage 4?
- Build your threat patterns based on abuse cases that can alter your product/service use cases.
- Focus on the vulns that could facilitate threat objectives. Build a vulnerability list and use enumeration frameworks like CVE to better map to exploits in the next stage.
- Vulns don’t only come from vuln scanners. Consider human and physical weaknesses as well.
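A Stage 5 vulnerability list might be kept as simple records that tie each finding back to the threat it supports, including non-technical weaknesses. The entries below are illustrative assumptions (CVE-2021-44228 is the real Log4Shell identifier, used here only as an example of a scanner-reported vuln):

```python
# Hypothetical Stage 5 vulnerability list: each finding, whether from a scanner
# or from a human/physical assessment, points back to the Stage 4 threat it supports.
vulns = [
    {"id": "CVE-2021-44228", "asset": "log pipeline",
     "supports_threat": "Information Compromise"},
    {"id": "weak-badge-access", "asset": "assembly floor",
     "supports_threat": "Compromise device via Supply Chain"},
]

def vulns_supporting(threat):
    """Return the vuln IDs that could facilitate a given threat objective."""
    return [v["id"] for v in vulns if v["supports_threat"] == threat]
```

Keeping the `supports_threat` link explicit is what makes the list useful in Stage 6, where exploits are mapped against these entries.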
Stage 6: Attack Modeling – What attacks are going to realize the goals of the threats depicted in your threat library?
- Build a custom attack library. A sample that correlates to the threat library above follows (these entries don’t include CAPEC IDs, but they certainly can, and mapping them is suggested):
- Device NFC Man-in-the-Middle (for Information Compromise)
- Credential Stuffing Attack for Management Account Page
- DNS Spoofing Attack to Fake SDK Site for Users
- Hijacked embedded library in mirror sites for package inclusion in device
- It will be important to test attack viability, as this factors into threat likelihood for the overall risk analysis.
- Attack libraries are also fluid lists but should always be supporting the threats and threat objectives that were previously defined. This is a major difference as many security professionals use threats and attacks as interchangeable words, even though their meanings are different.
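Since attacks and threats are distinct but linked, the sample attack library above can be represented as an explicit mapping back to the Stage 4 threats each attack realizes. A minimal sketch, using the sample entries from this article (CAPEC IDs could be attached to each key):

```python
# Hypothetical mapping of the sample attack library to the threat each attack realizes.
attack_to_threat = {
    "Device NFC Man-in-the-Middle": "Information Compromise",
    "Credential Stuffing Attack for Management Account Page": "Account Compromise",
    "DNS Spoofing Attack to Fake SDK Site for Users": "Introduce malicious SDK or alter existing SDK",
    "Hijacked embedded library in mirror sites": "Compromise device via Supply Chain",
}

def attacks_for(threat):
    """List the attack patterns that could realize a given threat."""
    return [attack for attack, t in attack_to_threat.items() if t == threat]
```

Making the mapping explicit enforces the distinction the article draws: an attack entry that maps to no threat either belongs to a missing threat or doesn’t belong in the library.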
Stage 7: Residual Risk Analysis – What’s the net-net of where we should be concerned as an organization?
- With a net of identified vulnerabilities and simulated attack patterns, all supported by a customized threat library, an organization is able to see the residual effects in a controlled environment. This allows a security operations team to discover which detective and reactive technologies are most critical to triage in the event an incident occurs.
- This stage allows for a more threat supportive and risk-focused alignment that allows threat data and threat intel sources to operate in a more concerted and strategic effort, as compared to simply leveraging tool-based alerts that are devoid of so much context.
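One way to picture the residual analysis is a toy scoring function: inherent risk from likelihood and impact, reduced by how well countermeasures held up in attack simulation. The formula and 0–1 scales are my own illustration, not something PASTA mandates:

```python
def residual_risk(likelihood, impact, control_effectiveness):
    """Toy residual-risk score: inherent risk (likelihood x impact) reduced by
    how effective countermeasures proved during attack simulation.
    All inputs are on a 0-1 scale; the formula is illustrative, not PASTA-mandated.
    """
    inherent = likelihood * impact
    return round(inherent * (1 - control_effectiveness), 3)

# Attack viability testing (Stage 6) feeds the likelihood term; the business
# impact analysis (Stage 1) feeds the impact term.
score = residual_risk(likelihood=0.8, impact=0.9, control_effectiveness=0.5)
```

Even a crude score like this gives a SOC a ranking of where residual exposure concentrates, which is more actionable than an undifferentiated alert queue.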
Over the years, VerSprite has found threat models to provide an excellent lens through which our threat intelligence group supports various clients. With a unique client profile and threat model in place, alerts become far more contextualized, allowing analysts to truly analyze rather than serve as tool administrators or ticket managers for security tickets in a queue. I hope this write-up encourages others to consider threat models in their own respective SOCs for improved analysis and response by their blue teams.
VerSprite leverages our PASTA (Process for Attack Simulation and Threat Analysis) methodology to apply a risk-based approach to threat modeling. This methodology integrates business impact, inherent application risk, trust boundaries among application components, correlated threats, and attack patterns that exploit identified weaknesses from the threat modeling exercises.