Introduction: Understanding the importance of Adoption Engineering for SIEM
Okay, so you have finally deployed your shiny new SIEM – what now? How can you make sure it actually adds value to your organisation? Does it fulfil your security and business requirements? Are you making the most of it? How can you make sure your organisation has embraced the tool and is ready to use it efficiently? These and many more questions reasonably arise once the initial configuration and deployment of your SIEM tool have taken place. Even if the deployment was a massive success, there is still a lot to do before your SIEM reaches its full potential.
The area covering post-deployment activities whose goal is to enable SOC teams and guide them throughout the lifetime of your SIEM tool is typically referred to as Adoption Engineering (AE). It is widely considered a crucial step towards the success of not only the SIEM deployment but, in many cases, an organisation's security program as a whole.
We have identified four main post-deployment areas that we believe organisations should focus and spend time on as part of the adoption exercise: Planning Ahead, Use Case Validation, New Use Case Development, and Training.
Planning Ahead: Aligning SIEM with organisational security goals
The importance of planning is frequently undervalued when it comes to deploying and adopting a SIEM. Organisations typically form a long-term security strategy which, in short, outlines company-wide security concerns and ways to mitigate them. Including the SIEM in this equation and aligning it with the overall security strategy will help organisations adjust the adoption plan accordingly and better understand the best path forward. This planning exercise should answer, or at least provide insights into, crucial questions such as which use cases need to be implemented, which data feeds should be onboarded and what resources will be required. Among other things, it establishes the success criteria for the SIEM deployment itself.
Use Case Validation: Testing and optimising detection mechanisms
Validating the implemented use cases is one of the most (if not the most) significant post-deployment tasks. The main objective of this AE area is to make sure organisations feel comfortable with the currently deployed SIEM detection mechanisms and trust their platform can be used to achieve their security goals.
There are three key activities that need to take place as part of the use case validation – testing of current mechanisms, analysis of the testing results and remediation of the findings.
During the testing phase, the security teams try to emulate attacks and nefarious activities in order to check how accurate and efficient the configured detection and alerting mechanisms are. There are many creative ways to accomplish this, but certainly a penetration test exercise is one of the most effective ways to test your SIEM’s capabilities. Of course, penetration tests are not a new thing and are widely used by organisations to evaluate their security posture. However, they are rarely thought of as tools to assess a SIEM and its implemented use cases. Penetration tests can be performed either manually or using an automated testing platform. Each solution has its pros and cons and there is really no straightforward answer as to which is better than the other, since it depends on a number of factors (it is definitely an interesting topic that probably deserves its very own tech blog!).
Another way to test your SIEM’s coverage and alerting mechanisms is for security analysts to deliberately perform ad-hoc malicious activities (e.g. a brute force attack against an AD account). This approach has certain limitations though, the most prominent being that it doesn’t scale well – it might work for one or two use cases, but you simply can’t validate tens of use cases this way. Moreover, unlike these ad-hoc exercises, penetration tests are performed by highly trained professionals specialised in emulating real-life attacks.
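As a minimal illustration of the kind of detection logic such an ad-hoc brute force test would exercise, the sketch below flags accounts that accumulate a burst of failed logons within a time window. The event records, field names, threshold and window are all illustrative assumptions, not tied to any particular SIEM's schema or rule syntax.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical failed-logon events (think Windows EventID 4625-style records);
# the "account"/"time" fields are assumptions for this sketch.
events = [
    {"account": "svc_backup", "time": datetime(2024, 1, 10, 9, 0, s)}
    for s in range(0, 60, 5)  # 12 failures within one minute
] + [
    {"account": "jdoe", "time": datetime(2024, 1, 10, 9, 0, 0)},  # one-off failure
]

def detect_brute_force(events, threshold=10, window=timedelta(minutes=5)):
    """Flag accounts with at least `threshold` failed logons inside `window`."""
    by_account = defaultdict(list)
    for e in events:
        by_account[e["account"]].append(e["time"])
    flagged = set()
    for account, times in by_account.items():
        times.sort()
        start = 0
        # Slide a time window over the sorted timestamps for this account.
        for end in range(len(times)):
            while times[end] - times[start] > window:
                start += 1
            if end - start + 1 >= threshold:
                flagged.add(account)
                break
    return flagged

print(detect_brute_force(events))  # {'svc_backup'} – jdoe stays below threshold
```

An ad-hoc analyst test of this use case would simply generate a similar burst of failed logons and confirm the alert fires – which works fine once, but not across dozens of use cases.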
Once the testing phase is done, the next step is to assess the testing report and analyse the outcome. The security team will need to identify which use cases need better coverage, understand why the existing detection mechanisms failed, and come up with a remediation plan.
There are a number of ways to deal with issues identified during the results analysis, depending on the nature of the issues themselves. The most common approach is to tune the existing rules: refining the rules’ logic and triggering conditions so that they effectively fulfil the use case requirements. Another reason your current use case implementation might be failing or not working as expected is that critical information is missing from the data feeding your use case. Ideally, this issue would have been prevented during the use case design and development stage, where security teams normally outline the requirements of a use case, such as the data elements that must feed it in order for it to offer as much value and coverage as possible. Lastly, another technique that can help minimise a coverage gap is simply to add a new rule or alert, enhancing your existing detection capabilities.
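To make the rule-tuning idea concrete, here is a minimal sketch of refining a rule's triggering conditions: the original rule fires on any burst of failures, while the tuned version adds an allow-list and a higher threshold for known-noisy service accounts. The account names, thresholds and the allow-list approach are all illustrative assumptions, not a prescription for any specific SIEM.

```python
# Hypothetical allow-list of service accounts known to retry credentials
# legitimately (e.g. vulnerability scanners, monitoring agents).
NOISY_SERVICE_ACCOUNTS = {"svc_scanner", "svc_monitor"}

def original_rule(account, failure_count):
    # Naive condition: fires on any account with 5+ failures.
    return failure_count >= 5

def tuned_rule(account, failure_count):
    # Tuned condition: known-noisy accounts need a much larger burst
    # before alerting; everyone else keeps the original threshold.
    if account in NOISY_SERVICE_ACCOUNTS:
        return failure_count >= 50
    return failure_count >= 5

print(original_rule("svc_scanner", 8))  # True  – noisy false positive
print(tuned_rule("svc_scanner", 8))     # False – suppressed by tuning
print(tuned_rule("jdoe", 8))            # True  – real coverage retained
```

The point of tuning is exactly this trade-off: cut the noise the test results exposed without silently losing coverage for the accounts that matter.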
New use case development: Expanding SIEM capabilities
As businesses grow and more software tools and complexity are being added to an organisation’s environment, the need for additional security coverage grows as well. The requirement for new use cases arises inevitably and naturally, even if organisations do not introduce any new technology in their environment at all. New threats emerge every day so it’s crucial that (apart from all the patching and prevention controls) SIEM tools and detection mechanisms are up to speed with the ever-changing threat landscape.
Adopting a widely accepted and respected framework such as MITRE ATT&CK will provide valuable input and guidance at the early design stages of the use case building lifecycle, and potentially help you identify even more coverage gaps along the way. It is also good practice for organisations to utilise a use case development methodology (either developed internally or publicly available) comprising models and processes that help expedite and standardise the implementation of use cases to a great extent.
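A simple way to get that guidance out of ATT&CK is to map each deployed use case to the technique IDs it covers and diff that against the techniques your threat model prioritises. The sketch below shows the idea; the use case names and the priority list are illustrative assumptions (the technique IDs themselves are real ATT&CK identifiers).

```python
# Map each deployed use case to the ATT&CK technique IDs it covers.
deployed_use_cases = {
    "Brute force detection": {"T1110"},        # T1110: Brute Force
    "Suspicious PowerShell": {"T1059.001"},    # T1059.001: PowerShell
}

# Hypothetical priority techniques from the organisation's threat model.
priority_techniques = {
    "T1110",      # Brute Force
    "T1059.001",  # PowerShell
    "T1021.001",  # Remote Services: Remote Desktop Protocol
    "T1003",      # OS Credential Dumping
}

# Union of everything the deployed use cases cover, then diff for gaps.
covered = set().union(*deployed_use_cases.values())
gaps = priority_techniques - covered

print(sorted(gaps))  # ['T1003', 'T1021.001'] – candidates for new use cases
```

The gap list then feeds straight into the design stage of the next use cases to build, keeping development driven by the threat model rather than by ad-hoc requests.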
Training: Ensuring teams are prepared to use SIEM efficiently
Training is fundamental for both the SIEM deployment and adoption phases. In most cases, the training and knowledge transfer performed as part of the initial deployment of a SIEM tool aims to offer basic familiarisation with the tool, just enough to get the SOC team started. It’s really during the post-deployment/adoption phase that teams get to know the tool in more depth and understand its full strength and capabilities. This is when the teams actually learn how to utilise the tool in an efficient way, both from a user/analyst and an admin point of view, e.g. onboarding and validating new sources, developing new use cases and threat hunting.
Conclusion: Achieving success with SIEM adoption through Adoption Engineering
To conclude, deploying a new, cutting-edge SIEM in your environment might nowadays be a standardised process; embracing it, however, can pose a significant challenge and should by no means be overlooked. Do not underestimate this process: expect it to take quite a bit of time and effort before an acceptable adoption and maturity level is reached, depending on factors such as the complexity of the organisation’s requirements and the size of the security team. It will be worth it eventually, though, as it leads to a state where security teams (along with the business side of the organisation) trust the tool, making it an integral part of the SOC processes, turning it into the heart of security investigations, and consequently getting as much value as possible from your investment.