Software developers favor agility over sticking to a fixed roadmap because the technological landscape never stops changing. It’s time security learned the same lesson.
The first principle for security (and more specifically attack surface management) should be this: You don’t know (and probably never will know) your whole inventory. And if your business is critically dependent upon knowing everything, your business is probably going to get hurt.
Forward progress in cybersecurity requires that we question assumptions that we view as fundamental. First among these: the idea that we can know everything (about anything) and that we’re building security programs on solid foundations. The reality is that we don’t know everything, and that we can’t know everything.
We don’t know all of our assets, and we don’t know how every asset works. We don’t know where all of our data resides. We can’t see our entire attack surface, and we always have gaps. And that’s okay. We need to let go of the concept of security as a destination. Instead, we should focus on keeping things operating and on trying to make continual incremental improvements.
Resiliency is achievable, even though security is not.
Cybersecurity can learn a thing or two from agile development
This unsteady footing isn’t a new concept, and it’s certainly not unique to security. Those of us who have been working with software for more than 20 years saw the transition from waterfall to agile to chaos engineering, and we see the same familiar trend in security. In the old model, we executed grand designs in service of objectives laid out at the beginning of a project. But hubris and incomplete information led to a lot of wasted work and time. We saw how development projects continually pivoted midway, and in response to this churn we created new ways of building that adapt to the dynamic nature of business. We chose to value agility: the ability to redirect quickly to account for missing information, new learnings and changes in requirements. We relinquished the belief that we fully understood the various tools upon which we were building. We came up with new ways to test, and we began to assume that failures were inevitable. Instead of trying to defeat every possible ill, we focused on delivering continuity of service. We often even chose to favor fast recovery over understanding each failure.
This pattern should sound familiar to those of us who “grew up” in a security world of find-and-fix: the legacy model in which we designed secure systems, patched all the vulnerabilities, and aimed for a known state. But now that we don’t know everything, we can’t find-and-fix our way to security. Even “simple” operating systems in our environment are not free of bugs, and we can’t expect a complex system to be free of bugs either.
Cybersecurity needs to go agile. It needs to manage risks instead of trying to eliminate them. Practitioners in cyber and the businesses around them need to favor resiliency, and “give up” on security. “Security” is the wrong goal because we don’t live in a world where security is even possible. A better idea is to be “secure enough” — that is, to make conscious tradeoffs, trying to achieve the balance that’s right for your organization.
You can’t know everything, so don’t depend on it
The idea of “knowing or seeing everything” as a necessary precondition to success is utterly absurd. You’ve never known everything, and you never will. This is as true in life as it is in cybersecurity. Knowing more is often better than knowing less, but “more” can be actively harmful if the information isn’t actionable, enabling, or constructive. More data isn’t intrinsically good, and most data has gravity: it will bog you down if you get lost in a pile of it.
If you’re constantly surprised by things you find on your attack surface, you need structural changes. If you find that one CVSS 8 bug can take your business down, you need to stop playing whack-a-mole. You’re not going to firefight your way out of disaster. Fixing the instance may alleviate the acute pain, but enhancing the program builds resilience and can repair chronic issues. You have to do the work that matters: Understand how and why changes are happening. Get the right buy-in from the organization to mitigate categories of issues.
When I think about my own attack surface and broader security program, I think in terms of defense-in-depth. How many failures does it take for a hacker to get to the goods? It should take more than one mishap for critical data to end up accidentally online. Each individual failure, each mishap, is a single instance within a broader set. When you focus on those broader sets, you can layer technology and process together to get a “good enough” outcome. Keeping your software up to date is one category, but if every internet-facing asset sits behind the WAF and your DMZ always has a cluster of alerting tools, an attacker might need three or four individual failures to line up before a mishap turns into a larger compromise. When you focus on sets of issues, the total number of problems you have to address shrinks to the number of categories, and you can concentrate on the categories that will actually move the needle instead of drowning in a cesspool of alerts.
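The layered-failure reasoning above can be sketched as a back-of-the-envelope calculation. The layer names and per-layer failure probabilities below are illustrative assumptions, not measurements from any real environment:

```python
# Back-of-the-envelope defense-in-depth model: a full compromise requires
# every independent layer to fail at once. The layer names and per-incident
# failure probabilities here are made-up illustrative numbers.
from math import prod

layers = {
    "unpatched software": 0.10,   # an asset missed a patch cycle
    "WAF bypass": 0.05,           # a request slips past the WAF
    "alerting miss": 0.20,        # DMZ monitoring fails to flag it
}

# Assuming (naively) that the layers fail independently, the chance that
# a single attempt defeats all of them is the product of the per-layer odds.
p_compromise = prod(layers.values())
print(f"P(full compromise) = {p_compromise:.4f}")  # prints 0.0010
```

Even with generous per-layer failure rates, stacking a few independent controls drives the combined probability down by orders of magnitude, which is the arithmetic behind fixing categories of failure rather than chasing individual alerts.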
Be okay with “good enough”
Think in terms of probability and likelihood, and put controls in place to prevent accidental changes to baseline security. Least privilege should be the default, and segmentation limits rapid lateral movement from any one spot. Monitoring and alerting should trigger responses. At any given point, I’ve got a handful of meaningful bugs being resolved and a non-specific number of services in use. But if any wild deviations occur, there are plenty of fail-safes in place to trigger, and we’re trying like hell to make sure that no single failure can cripple the organization. Are we certain? No. But we’re dedicated to continually doing better.
On its face, “see everything” sounds great, and “you can’t defend what you can’t see” is certainly a theme I’ve heard repeated in security circles. Unfortunately, it’s always repeated alongside a series of stories about things we didn’t see, and that we were expected to defend anyway. And therein lies the rub: We must defend ourselves, our businesses, and our institutions from enemies we cannot always see, without being able to see every aspect of ourselves. Our attack surface is dynamic, like any living thing. It’s built of thousands of discrete, ephemeral components, often flickering in and out of existence without our notice.
It is possible to build a resilient institution, even though you can’t prevent every compromise. You don’t have to be afraid of an ephemeral attack surface, because the processes and the structure can be good enough to be resilient, even though you’ll never know your whole inventory.