Building a World Class Application Security Program
In this article, I will walk through setting up a modern appsec program from scratch. The goal is to help security professionals who are building a new application security program, but there should be plenty of ideas that can be adapted to existing programs as well.
I'll start by looking at the organization through a high-level risk lens to understand what we should focus on in our program. Then we will use a secure development framework to draft a few principles and policies. Finally, we'll choose some metrics to monitor depending on where we are in our lifecycle.
The rest of the series will focus on deep-dive discussions of each phase of an application security rollout, in the order I recommend doing them. Overall, the series will consist of the following articles:
- An application security program strategic overview (this article)
- Documenting the landscape: how I built an application inventory for security assessments and secure development policy adherence
- Assessing threats: prioritization, data flows, and threat modeling
- Creating policies: processes and architectures that support your business goals
- Secure development: automating security tooling throughout the development lifecycle
- Data transparency: making the security state of applications visible with a Kanban-style board
Strategizing An Application Security Program
I believe it's important to tailor any security program to the organization and groups it is being integrated with. Frequently, programs are developed without the input of the application teams who will be affected by the policies and tools selected by security teams. Taking a partnership mindset leads to greater security adoption and better outcomes for everyone.
But before we start working with various partners across the business, let's start by building a foundational strategy that we can present to them for feedback and discussion.
In this section, we will spend some time understanding why the organization cares about security in the first place, and start building our intuition around where we should be focusing our energy.
Identify likely adversaries & primary risks
Similar to threat modeling a specific application or environment, we want to begin our appsec program by modeling our organization - what risks are we concerned with and what kinds of attacks are likely against our applications?
I usually break these down into the following categories, and use the questions below to start thinking about the overall organization.
Regulatory & Legal Risk
- Do we have regulations that our applications have to meet?
- What ranges of penalties might non-compliance bring? (I recommend actually writing this out - for instance, if you handle personal data and have European customers, GDPR fines can run up to 4% of global annual turnover or €20 million, whichever is higher; see the sketch after this list for a rough example.)
- Do we have security requirements written into customer contracts? (This might be something like maintaining third party audits, certification, or openness to third party pen tests)
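To make the penalty question concrete, here is a minimal sketch of writing out a worst-case figure. It assumes the commonly cited higher-tier GDPR cap (the greater of EUR 20 million or 4% of global annual turnover); the revenue number is purely hypothetical.

```python
def gdpr_max_fine(annual_turnover_eur: float) -> float:
    """Higher-tier GDPR cap: the greater of EUR 20M or 4% of global annual turnover."""
    return max(20_000_000, 0.04 * annual_turnover_eur)

# Hypothetical company with EUR 1B in global annual turnover
print(f"Worst-case GDPR exposure: EUR {gdpr_max_fine(1_000_000_000):,.0f}")  # EUR 40,000,000
```

Even a rough number like this helps frame conversations with leadership about how much to invest in the program.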
Likely Adversary Risk
- What kinds of adversaries are likely to target our applications?
- What potential consequences would attacks of opportunity have? I usually define these as the class of non-targeted attacks such as malware, worms, crypto-miners, ransomware, etc. (Although non-targeted, these attacks have had some of the most disastrous effects of any attack, as seen in Atlanta and the UK.)
- Are we likely to be the target of motivated criminals? (common attacks include stealing customer payment information or tricking employees into wiring money)
- Do we warrant nation state interest? (anything related to state interests or critical infrastructure, or suppliers to those industries)
- How likely is industrial espionage? (in some companies, employees stealing data for a competitor is a very real concern)
Company Reputation Risk
- What impact would a data breach have on our short term and long term sales?
- Could a breach lead to further uncomfortable exposure? (As was the case when Equifax was called to testify before Congress)
- Is privacy or security one of our core features?
Spending this time researching the industry, regulations, and contracts will usually give you a better feel for the overall risks the organization faces. This will come in handy when we start looking more closely at individual applications, prioritizing them, and threat modeling them.
Next up, let's think about what policies might make sense to start with.
Secure Development Policies
We'll want to draft some initial policies around secure development and secure deployment. I don't recommend spending too much time on this step immediately - the idea is to have something to start with that can be refined and worked over time, not to walk away with a complete set of legal books.
I also suggest doing this for yourself even if your organization already has a set of policies defined. Going through this yourself and then comparing your findings against the organization's policies can help find shortcomings in both - and give you a better understanding of where you should head with your partners.
To get started, I like to use the Microsoft Security Development Lifecycle (SDL) as a guide, writing out a brief checklist for each box on that site. This is the start of your own tailored SDL. It's perfectly fine if some areas are blank or left until later. We don't need to be perfect right away - the goal is instead to have some direction.
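As an illustration, here is a rough sketch of what that tailored checklist could look like as a simple data structure. The phase names follow the classic Microsoft SDL phases; the individual items are hypothetical placeholders you would refine with your teams, and leaving a phase empty for now is fine.

```python
# A hypothetical starting point for a tailored SDL checklist; items are placeholders
# to be refined with your development partners, and empty phases are fine for now.
SDL_CHECKLIST = {
    "Training": ["All engineers complete annual secure coding course"],
    "Requirements": ["Identify regulatory/contractual requirements for new features"],
    "Design": ["Threat model features that change trust boundaries or data flows"],
    "Implementation": ["Static analysis runs in CI on every merge"],
    "Verification": ["Dynamic scan against staging before major releases"],
    "Release": ["Security review/sign-off recorded for production deploys"],
    "Response": ["Documented process for triaging reported vulnerabilities"],
}

for phase, items in SDL_CHECKLIST.items():
    print(f"{phase}: {items or 'TBD'}")
```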
At a minimum, I recommend you consider drafting brief policies in the following areas:
- General security training & secure development training - who has to be trained, what courses are required, and how will completion be tracked? I'll cover training in more depth when we discuss security cultural awareness in the enterprise.
- Security tool requirements - what tool usage is required, and how should findings be followed up? If you don't have tools in place at this stage, or have not yet determined how to integrate them with development teams, I'll discuss this in some depth in a future article.
- What, if any, deployment gates will exist? I recommend that security not be a deployment gatekeeper, but some organizations need a formal sign off before anything is put in production.
It is often better to start with a small handful of high impact policies (3-5) and focus on implementing them well than to create a large number of policies that everyone will struggle to implement simultaneously.
One framework I have used is an agile SDL, with steps aligned to development phases. The cycle is meant to be repeated every two-week sprint, though each step is conditional on what is in the sprint.
For instance, if you are adding a new feature in the current sprint that has a regulatory impact (such as a change to how credit card data is handled), then some time should be spent in the first step to determine what new controls are needed as a result.
Other steps are only used in some cases - a pen test is not needed every two weeks, but you might have a policy that one should be run annually, in which case the sprint task is simply to check when the next one is due and to schedule it if needed.
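Here is a minimal sketch of how those conditional steps might be expressed, assuming two hypothetical sprint attributes: whether the sprint touches regulated data, and when the last pen test ran.

```python
from datetime import date, timedelta

def sprint_security_tasks(touches_regulated_data: bool,
                          last_pen_test: date,
                          today: date | None = None) -> list[str]:
    """Build the security checklist for a two-week sprint.

    Each step is conditional: regulatory review only when the sprint changes
    how regulated data (e.g. credit card data) is handled, and the annual
    pen test is only scheduled when it is coming due.
    """
    today = today or date.today()
    tasks = ["Run automated security scans on changed code"]  # every sprint

    if touches_regulated_data:
        tasks.append("Review new/changed controls required by regulation")

    if today - last_pen_test > timedelta(days=365):
        tasks.append("Pen test overdue: schedule one this sprint")

    return tasks

# Example: a sprint that changes credit card handling, last pen test 14 months ago
print(sprint_security_tasks(True, date.today() - timedelta(days=430)))
```

The point is not the code itself but the shape of the policy: a small set of always-on steps plus clearly stated triggers for the occasional ones.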
Application Security Metrics
Finally, let's think about how we will measure security, both as we start our security program and as our teams mature. Metrics should support the phase you are in - first in getting individuals to buy into the mission, and later in identifying gaps in your processes or tools. Here are a few metric profiles I have used successfully:
Metrics to use during initial appsec program rollouts (a sketch of how these might be computed follows the list):
- Number of people trained divided by the trainable population
- Percentage of applications using at least one automated security tool. For large applications or technically advanced teams you could instead consider percentage of people integrating static tools into their IDE, percentage of applications with tools integrated into CI/CD, etc.
- Percentage of teams / people that have adopted the SDL policies and are in the process of integrating them into their processes.
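Here is a minimal sketch of how these coverage-style metrics might be computed, assuming you already have counts from a training system and an application inventory; all numbers below are hypothetical.

```python
def coverage(numerator: int, denominator: int) -> float:
    """Return a percentage, guarding against an empty population."""
    return 100.0 * numerator / denominator if denominator else 0.0

# Hypothetical counts pulled from a training system and application inventory
people_trained, trainable_population = 84, 120
apps_with_security_tool, total_apps = 17, 40
teams_adopting_sdl, total_teams = 6, 11

print(f"Training coverage:   {coverage(people_trained, trainable_population):.1f}%")
print(f"Tooling coverage:    {coverage(apps_with_security_tool, total_apps):.1f}%")
print(f"SDL policy adoption: {coverage(teams_adopting_sdl, total_teams):.1f}%")
```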
Metrics to use during intermediate phases, when tools are in place and teams are using them, but not everything is automated and some teams may be ahead of others (again, a short computation sketch follows):
- Number of open findings by severity (medium/high/critical findings from SAST, DAST, pen tests, etc.)
- Frequency of automated scanning relative to code changes (how quickly are scans run? Ideally we should strive to make scanning as close to continuous as possible)
- New vulnerability trend lines - are teams generally creating fewer new security bugs as they get trained and start learning to fix and spot them?
- Tool false positive rate - don't let it get very high or teams will stop using the tools, with good reason! This is how you determine whether your tools are serving you or you need to recalibrate and invest in a new set of tools or processes.
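Here is a short sketch of two of these metrics, assuming findings can be exported from your scanners as simple records with a severity and a triage outcome (the field names and values are hypothetical).

```python
from collections import Counter

# Hypothetical export of findings from SAST/DAST/pen test tooling
findings = [
    {"severity": "high", "triage": "true_positive"},
    {"severity": "medium", "triage": "false_positive"},
    {"severity": "critical", "triage": "true_positive"},
    {"severity": "medium", "triage": "true_positive"},
]

open_by_severity = Counter(f["severity"] for f in findings)
triaged = [f for f in findings if f["triage"] in ("true_positive", "false_positive")]
false_positive_rate = sum(f["triage"] == "false_positive" for f in triaged) / len(triaged)

print(dict(open_by_severity))                             # {'high': 1, 'medium': 2, 'critical': 1}
print(f"False positive rate: {false_positive_rate:.0%}")  # 25%
```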
Metrics to target as you approach maturity, once security tooling is well integrated and the focus shifts to optimization and continual improvement (one last sketch follows the list):
- Mean time to remediation - how long it takes, on average, to fix a security bug, measured as the total time from the finding (from a scan or manual test) to a patch being deployed in the vulnerable environment. You will likely want this metric grouped by severity.
- Defect density - the number of defects per thousand lines of code. This could be security bugs or all findings, depending on the team. The industry average is around 15 defects per thousand lines of code; quality software generally targets <1 defect per 1,000 LOC, and to be in the 90th percentile the defect rate would have to be 0.1 per 1,000. (I took these numbers from the 2017 Coverity Scan report.) One important thing to note with this metric is that the trend is more important than the absolute number.
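A final sketch of how these two could be computed, assuming you can export finding and fix timestamps plus line counts from your trackers; all figures are made up.

```python
from datetime import datetime

# Hypothetical (found, fixed-in-production) timestamps for resolved findings
remediations = [
    (datetime(2024, 1, 3), datetime(2024, 1, 18)),
    (datetime(2024, 2, 1), datetime(2024, 2, 10)),
    (datetime(2024, 2, 20), datetime(2024, 3, 25)),
]
mttr_days = sum((fixed - found).days for found, fixed in remediations) / len(remediations)
print(f"Mean time to remediation: {mttr_days:.1f} days")

# Defect density: defects per thousand lines of code (KLOC); watch the trend over time
open_defects, lines_of_code = 42, 310_000
print(f"Defect density: {open_defects / (lines_of_code / 1000):.2f} per KLOC")
```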
Pick metrics that make sense for you and support your goals. In large organizations it is entirely reasonable to have some teams just starting out and others that are mature, so be sure to tailor the metrics to the stage a given team is in, or you risk security burnout.
Putting it all together
Once you have spent a little time thinking about the above, you are ready to take your strategy out to other infosec professionals in your group for feedback to see what might have been overlooked.
Although there are a lot of things to think about, I don't believe too much time should be spent in this initial process. The most successful plans are forged during execution, which means it is imperative to keep your strategies and policies flexible as you get buy-in from your peers and partners and then roll out the changes and tooling over time.
In the next few posts I will be focusing on building that coalition with key partners (the development and application teams) and prioritizing simplicity and automation to maximize usage and effectiveness of your security program. As part of that, we'll dive into specific tools and processes to consider for each stage.