How Hackers Hunt Bugs, and Why You Should Care

David Wolpoff (moose)
6 min read · Jan 20, 2021



This is part three of a series on hacker logic — weaponizing a vulnerability. Check out parts one (doing reconnaissance) and two (figuring out what asset to exploit) to catch up.

Just as water flows to the lowest point, hackers usually look for the path of least resistance. They want to break into a system as quietly as possible, with as little effort and as few exploits as possible. Once an attacker finds a tempting asset on your attack surface, they'll typically run through a few different tricks and techniques to find a vulnerability within it; some techniques get a win faster, others take more time. Finding a bug and exploiting it can take anywhere from a couple of hours to several months or more. Often hackers will go after assets that are out-of-the-box Linux machines, or other open source products where they can easily access the code and find bugs that aren't reported. Other times, if easy targets aren't found on a perimeter, an attacker will go digging in support forums for code or firmware updates, and use those to hunt a bug. If all else fails, they'll resort to fuzzing.

Using a publicly known bug as a clue to find its doppelganger

Attackers encounter a great deal of noise, and they need ways to cut through it and decide what matters. Cross-checking tempting assets against known vulnerabilities is an easy place to start. As we discussed earlier, high-severity CVEs aren't necessarily an attacker's top target, since they're publicly known and likely well-monitored. But those CVEs can be clues to undiscovered bugs hiding elsewhere in the code. The original bug will often have been patched already, but the same mistake tends to recur elsewhere in the codebase, and those variants are far less likely to be patched, which makes them a more covert way in. With open source code, this kind of audit is easy: take the patched flaw as a template and search the rest of the code for places where it repeats.
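As a rough sketch of what that looks like in practice (the unsafe pattern and source layout here are hypothetical), variant hunting can be as simple as turning the patched bug into a search pattern and sweeping the tree:

```python
import re
from pathlib import Path

# Hypothetical scenario: a CVE was fixed at one call site that copied
# attacker-controlled data with strcpy(). Turn that patched flaw into
# a search pattern and sweep the codebase for unpatched twins.
UNSAFE_PATTERN = re.compile(r"\bstrcpy\s*\(\s*\w+\s*,\s*user_input")

def find_variants(source_root):
    """Yield (path, line number, line) for each match of the unsafe pattern."""
    for path in Path(source_root).rglob("*.c"):
        if not path.is_file():
            continue
        for number, line in enumerate(
            path.read_text(errors="ignore").splitlines(), start=1
        ):
            if UNSAFE_PATTERN.search(line):
                yield path, number, line.strip()

if __name__ == "__main__":
    for path, number, line in find_variants("./target-source"):
        print(f"{path}:{number}: {line}")
```

Real variant hunting uses smarter pattern matching than a regex, but the workflow is exactly this: one known bug becomes a query, and every hit is a candidate doppelganger.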

Capitalizing on developer “notes”

Reading source code reveals all kinds of goodies. One place I love to check is the notes developers leave for each other. Coders mark areas of the code that need attention, but like in any workplace, tasks fall through the cracks. It doesn't take too keen an eye to spot the frequently-leftover tags developers leave in their code: "FIXME" or "RBF" (remove before flight). Tags like these put a bullseye on potential vulnerabilities that are actually exploitable and not patched. I once found a bug in a function labeled "FIXME: buffer overflow possible here. DO NOT SHIP AS IS."
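Harvesting these notes from a source tree or an unpacked firmware image is trivial. A minimal sketch (the directory name and tag list are illustrative):

```python
import re
from pathlib import Path

# Tags that commonly mark known-broken or unfinished code. "RBF" is
# "remove before flight"; the rest are everyday developer conventions.
TAGS = re.compile(r"\b(FIXME|TODO|XXX|HACK|RBF)\b")

def find_dev_notes(source_root, extensions=(".c", ".h", ".py", ".js")):
    """Walk a source tree and print every leftover developer tag."""
    for path in Path(source_root).rglob("*"):
        if not path.is_file() or path.suffix not in extensions:
            continue
        for number, line in enumerate(
            path.read_text(errors="ignore").splitlines(), start=1
        ):
            if TAGS.search(line):
                print(f"{path}:{number}: {line.strip()}")

if __name__ == "__main__":
    find_dev_notes("./unpacked-firmware")
```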

Companies invest a lot of money in security programs for endpoints and servers, but leave seemingly innocuous devices like VoIP systems, phones, and printers unsecured. These are computers too: out-of-the-box, typically unaltered open source Linux machines connected to the internet. Attackers can hide out here as long as they need, waiting for the right opportunity to move laterally within the network.

VoIP phones often run on Linux and aren’t well secured.

Support forums for the win

A while back, looking for something juicy to exploit on a target's perimeter, my team and I saw a new appliance pop online that our target was testing. It appeared to be an easy asset to break: not a lot of CSS, and a dated-looking web portal. Not sure exactly what it was, we jumped on Google and determined it was an absurdly expensive product from a well-known manufacturer of telephony equipment (think VoIP phones). At that point everybody still had desk phones, and we determined that the company's senior leadership wanted to be able to check their voicemail from their cell phones. We went digging around support forums, found part of a firmware update posted online, and pulled three bugs out of it.

The first was a bug in URL path parsing that let us bypass authentication. Another let us reach code paths without being a system administrator, giving us the ability to upload and download files. The third was an arbitrary file leak that let us read every file in the application's file system. Each of these steps was its own exploit, and by chaining them together we were able to do what we came for. But at every step, it was publicly available information that held the key to the next.
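I can't share the target's actual code, but the path-parsing class of bug is common enough to sketch. A hypothetical example of the failure mode: an auth check that tests the raw URL prefix, while the router decodes and normalizes the path before dispatching.

```python
from urllib.parse import unquote

PROTECTED_PREFIX = "/admin"

def is_protected_naive(raw_path):
    # Flawed check: looks at the raw path before decoding or normalizing.
    return raw_path.startswith(PROTECTED_PREFIX)

def route(raw_path):
    # The router, by contrast, decodes and normalizes before dispatching,
    # so requests the naive check waved through still land on admin code.
    decoded = unquote(raw_path)
    parts = []
    for segment in decoded.split("/"):
        if segment == "..":
            if parts:
                parts.pop()
        elif segment:
            parts.append(segment)
    return "/" + "/".join(parts)

for attempt in ("/admin/users", "/%61dmin/users", "/public/../admin/users"):
    print(f"{attempt!r:28} auth checked: {is_protected_naive(attempt)!s:5}"
          f" routes to: {route(attempt)}")
```

The auth check and the router disagree about what the path "is," and that disagreement is the bypass.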

Fuzzing

Should the first run-through fail to yield results, I'll begrudgingly move on to a more time-consuming and (to me) less satisfying tactic: fuzzing. If a code audit is using a map to search for buried treasure, fuzzing is grabbing a shovel and simply digging everywhere. Or, when you're working from the outside, it's banging on a wall until something gives. When an input makes the system react strangely, you know you're onto something.

A trivial but completely true example: I was once tasked with breaking into a company, so I hopped onto their employee login page and began blindly prodding at it. I started with 'a' as the username and hit Enter: access denied. I typed two a's: access denied again. Then I tried typing 1,000 a's, and it didn't say access denied; it just stopped talking to me. Sixty-two seconds later it came back on the internet, and I gave it 1,000 a's again. When it went offline again, I knew I had found a bug. Most fuzzing is wildly more sophisticated in how it's set up, but the core idea is simply to shake the damn thing and see if it breaks.
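A minimal sketch of that probe, assuming a hypothetical login endpoint that accepts form-encoded credentials:

```python
import requests  # third-party HTTP library: pip install requests

# Ramp up the input length and watch for any response that differs
# from a clean "access denied". A timeout or a dropped connection
# often means you've tripped a parsing or memory bug.
TARGET = "https://example.test/login"  # invented endpoint

def probe(length):
    payload = {"username": "a" * length, "password": "x"}
    try:
        response = requests.post(TARGET, data=payload, timeout=10)
        return f"HTTP {response.status_code}"
    except requests.exceptions.Timeout:
        return "timeout: server stopped talking"
    except requests.exceptions.ConnectionError:
        return "connection dropped: possible crash"

for length in (1, 2, 10, 100, 1000, 10000):
    print(f"{length:>6} chars: {probe(length)}")
```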

Everyone and their mother attempts to fuzz their way into exploit-nirvana, but I’ve found it’s a tactic that rarely works on its own. And, of course, if you have to fuzz against a live system, you will almost definitely tip off a system admin to your presence. I prefer what I call spear-fuzzing: supplementing the process with a human research element. Using real-world knowledge to narrow the attack surface and identify where to dig saves a good deal of time. If you haven’t already, check out my section on social engineering in part 1 of this series.
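To make the spear-fuzzing idea concrete, here's a toy sketch that seeds mutations from parameters you already know the target accepts; the endpoint, parameter names, and seed values are all invented for illustration.

```python
import random
import requests  # third-party HTTP library: pip install requests

# Rather than blind random input, mutate values you already know the
# target accepts: parameter names and formats pulled from docs, support
# forums, or the firmware itself.
TARGET = "https://example.test/api/config"
KNOWN_PARAMS = {"device_id": "SEP001122334455", "firmware": "9.4.2"}

MUTATIONS = (
    lambda v: v * 50,           # length extension
    lambda v: v + "'\"<>`;",    # trailing metacharacters
    lambda v: "%00" + v,        # null-byte prefix
)

def spear_fuzz():
    # Mutate one known-good parameter at a time, keeping the rest valid
    # so the request still reaches deep code paths instead of being
    # rejected at the front door.
    for name, seed in KNOWN_PARAMS.items():
        for mutate in MUTATIONS:
            fuzzed = dict(KNOWN_PARAMS, **{name: mutate(seed)})
            try:
                r = requests.get(TARGET, params=fuzzed, timeout=5)
                print(f"{name}: HTTP {r.status_code}")
            except requests.exceptions.RequestException as exc:
                print(f"{name}: {type(exc).__name__}")

if __name__ == "__main__":
    spear_fuzz()
```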

Impact x Likelihood = A More Resilient Program

Security teams are constantly focused on building a bigger castle to keep out intruders, but hackers don't see walls. All a hacker sees is a personal cost in time and effort. And a determined hacker will find something to exploit. Their job is to find a way in and hide in the less-secured parts of your system until you make a mistake.

Weighing the impact of an asset getting hit against the likelihood of an attacker successfully hacking it will get you closer to building a more resilient program. You'll start to think through what's possible, and choose a path that reduces your attack surface. You'll ask what your fail-safes are, and you'll probably focus on the CVEs that actually matter. The best you can do is prioritize securing the most valuable resources by keeping them behind layered defenses, so that it takes multiple individual failures to really do damage. Good ol' fashioned network segmentation and defense in depth will get better results than what you're getting today.
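A toy example of what that scoring looks like; the assets and 1-5 ratings below are made up, and the point is the ordering, not the numbers:

```python
# Toy impact-times-likelihood scoring. Note how an unglamorous
# appliance can outrank the well-defended crown-jewel server.
assets = [
    # (name, impact if compromised 1-5, likelihood of compromise 1-5)
    ("customer database server", 5, 2),  # valuable but well-defended
    ("build/CI system",          4, 4),
    ("employee VoIP appliance",  3, 5),  # low value, trivially hackable
    ("marketing web server",     2, 3),
]

for name, impact, likelihood in sorted(
    assets, key=lambda a: a[1] * a[2], reverse=True
):
    print(f"{impact * likelihood:>2}  {name}")
```

Run that kind of exercise across a real inventory, and the quiet assets you'd never think to watch float toward the top of the list.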
