When I first formed the Secure DevOps team, one of the first things I set out to do was to understand our software landscape. Our company has traditionally done a pretty good job of maintaining a list of all running IT applications (we use ServiceNow and their CMDB module to maintain this), along with a yearly survey our compliance team runs asking application owners about their app. This survey includes questions to assess risk, to determine what regulations they must adhere to, and whether the application is purchased or developed in house.
When I pulled up the data, I found we had over 500 internally built applications (meaning either wholly programmed by us, or customized vendor software that we write some code for via plugins or customizations) and more than 2000 total applications deployed. The risk information showed more than half of them to be "critical" - so I needed a better way to understand and prioritize the inventory.
This article is part of the building out a secure development and application security program series.
Finding the right apps to focus on
Our company has both internal systems that house critical business data, and software products that interact with or are built directly into critical infrastructure. The inventory I started with, however, had no way to distinguish between the criticality of an application that rolled up financial numbers and that of a piece of software sitting on a controller in a power plant, aside from regulatory flags or hints in the application name.
I wanted to slice the data along several dimensions to get a better feel for what was important. The first division I thought was important was whether a given piece of software was a sold product or not. I ended up with the following slices, many of which were missing from my data source and would need to be filled in later.
- Sold product vs. Internal use only
- Exposed on the internet vs. Used only on network
- Subject to regulation vs. unregulated
- Flagged as "critical" in our system (meaning subject to IT controls like high availability or disaster recovery) vs. not flagged
- Under active development vs. not currently developed
I started by talking to a couple of our product managers, and found that at least some of our products were missing from my inventory. I would also need a way to find applications that I had no visibility into.
Using the slices I defined, I decided that I would focus my secure development program efforts on applications that met the following formula:
(Sold Product OR Exposed on Internet OR Regulated) AND Actively Developed
Some of this was in the inventory - but much was missing.
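As a sketch, the attention formula can be expressed as a simple filter over the inventory. The field names here are illustrative, not the actual ServiceNow CMDB schema:

```python
# Hypothetical inventory records; field names are illustrative only.
apps = [
    {"name": "BillingPortal", "sold": False, "internet": True,
     "regulated": True, "active_dev": True},
    {"name": "LegacyReports", "sold": False, "internet": False,
     "regulated": False, "active_dev": False},
    {"name": "ControllerSuite", "sold": True, "internet": False,
     "regulated": False, "active_dev": True},
]

def needs_focus(app):
    """(Sold Product OR Exposed on Internet OR Regulated) AND Actively Developed."""
    return ((app["sold"] or app["internet"] or app["regulated"])
            and app["active_dev"])

focus_list = [a["name"] for a in apps if needs_focus(a)]
# LegacyReports drops out: not actively developed.
```

In practice each of these flags came from a different source (the CMDB, the compliance survey, or interviews), so the hard part is populating the fields, not evaluating the formula.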
Interviewing stakeholders to find missing applications
I spent some time talking to key IT leaders in important areas such as finance, the aforementioned product managers, engineering, etc. I asked each of them the following list of questions:
- What would you say are the most important applications in your space? Why?
- Are there any applications that have you worried from a security perspective?
- Over the next year, which applications are you investing in most heavily from a development perspective?
- What were the most heavily invested in applications last year?
If I already had a list of applications in their space, I would show it to them and ask whether I was missing any important applications. If the list was short enough, I would also quickly ask about any information missing from my data slices. My primary goal at this point was making sure I had a list of all the important applications and wasn't missing any, not necessarily getting details about each application's specific uses.
From here, it was not hard to determine the most important applications to focus on and to set up additional discussions with the stakeholders of those applications.
With all the data I had now, I was able to create a risk scoring matrix that would let me prioritize which applications to focus on first for our secure development program. This was based purely on technical and business risk, and did not take into account any security practices that applications were already employing.
For each application, I created a set of columns, and scored them in the following way. It's mostly a back-of-the-napkin approach, but it works well to determine what to focus on:
| Question | Points |
|---|---|
| Is the app exposed on the internet? | 10 |
| Is the app sold to customers? | 10 |
| Is the app used as part of critical infrastructure (responsible in any way for power plant operations, connected to networks that contain such devices, etc.)? | 100 |
| Does the app suggest real-world changes to operators who might take action? | 100 |
| Does the app contain any financial data that might be worth stealing? | 20 |
| Does the app contain proprietary engineering drawings or similar data? | 20 |
| Could a security flaw lead to information disclosure that would impact commercial contracts (for example, showing customers margin information or information about a competitor)? | 20 |
Then I summed each application's columns to get an overall risk score. I set the point values so that real-world physical impact would immediately float to the top, while other applications would rise through the combination of risk factors.
These questions are unique to my business, and are easy to generate after going through the list of business risk identifiers I outlined in my previous article, building an application security program.
These risk scores, combined with my attention formula, left me with about 50 applications that needed focus - a manageable list!
It's tempting to look for automated inventory tools. I have yet to find any tool suite that would truly help with application risk landscaping. Tools like nmap can help to find assets on the network that might be part of an application, but often require a lot of time investigating assets that may or may not be related to a given application.
It is not unheard of for modern applications to have dozens or even hundreds of components on the network. A microservice architecture may create dozens of mini-applications that should be evaluated in the context of the overall application and not independently. Only humans have this knowledge today; working with those who understand the connections and data borders is critical to proper modeling and understanding.
This is also a space where many vendors offer solutions. I find it works best when the security team is not the team responsible for maintaining and building a CMDB and application inventory, though certainly we are stakeholders, and this was my starting point. If your company does not have a formal ITIL organization, you may have to start with leadership interviews and build your inventory on the fly.
What other inventory challenges and successes have you seen?