Sometimes the biggest security threats don't come from hackers halfway across the world. They come from people who already have the keys to your kingdom.
This is the story of how we helped a client discover a sophisticated data exfiltration scheme that had been running right under their noses, on a business-critical application that was costing them thousands in infrastructure alone.
Meet Roger Fedris (Not His Real Name)
Roger was everything you'd want in a senior developer. Experienced, skilled, reliable. He'd been with the company for years, working across different domains and consistently delivering quality work.
Over time, Roger earned something invaluable: trust. The kind of trust that gets you access to production servers, even after the DevOps team had pushed back against granting it. The kind that makes management comfortable giving you the keys to their most critical systems.
And this wasn't just any application. This was a business-critical system running on dedicated bare-metal servers in AWS, the kind of setup that costs serious money to maintain and operate. It was the backbone of their customer communication infrastructure.
When Roger eventually resigned, everything seemed fine. The exit was smooth, professional, no drama. He handed over his work, said his goodbyes, and left. Life went on.
Or so they thought.
When Things Started Getting Weird
A few months after Roger left, something strange started happening. The server began behaving abnormally. Nothing catastrophic at first, just odd.
Then came the real problem: tons of emails were being sent to end customers through random SMTP servers. This wasn't supposed to be happening. The application had specific, controlled email pathways. This was something else entirely.
Their security team jumped on it. DevOps got involved. Everyone was scrambling to figure out what was wrong. They had security tools in place, firewalls, monitoring systems, the works. Everything was configured to filter incoming traffic and watch for suspicious activity.
But the strange behavior continued.
After countless hours of investigation and too many dead ends to count, they started reaching out to external vendors. They needed fresh eyes on the problem, and they needed them fast.
That's when they contacted us at Bithost.
The Investigation Begins
When we first got on the call, they were clear about one thing: "We're talking to multiple vendors. Whoever solves this first wins."
Fair enough. We asked for access to investigate.
"No way," was their initial response. And honestly, we understood. This was a critical production system. You don't just hand over the keys to a stranger, even if you're desperate for help.
After some discussion, they agreed to give us something: a short terminal session. Limited access, time-boxed, supervised. It wasn't much, but it was enough to start.
Finding What Everyone Else Missed
Our team started with the basics, scanning all the server ports. Nothing unusual showed up. Everything looked clean from the outside.
But we had a hunch. We decided to test something different: instead of looking at what was coming in, we'd look at what was going out.
We set up a test server on our end and started sending data from their server to ours on various ports. And that's when we found it.
Incoming traffic on ports 80 and 443 was filtered perfectly. But outgoing traffic? Completely unrestricted.
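The egress test itself is simple to reproduce. Here's a minimal sketch of the idea: from the server under investigation, try to open TCP connections *out* to a listener you control and see which ports escape. The hostname and port list below are placeholders, not the exact setup from this engagement.

```python
import socket

def egress_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if an outbound TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def probe_egress(host: str, ports) -> dict:
    """Map each port to whether outbound traffic escapes on it."""
    return {port: egress_open(host, port) for port in ports}

# Run this FROM the suspect server, pointing at a test box you control:
#   probe_egress("test.example.com", [25, 80, 443, 8080, 9999])
# If arbitrary high ports come back True, egress is effectively unrestricted.
```

If connections succeed on ports the application has no business using, you've confirmed the firewall only looks one way.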
Now, here's something most people don't realize: this is incredibly common in AWS security groups. We've audited more than 100 AWS accounts, and almost all of them have the same configuration. Everyone focuses on inbound rules, blocking bad guys from getting in. But outbound rules? They're wide open.
Even big tech companies make this mistake. The assumption is: "If someone's already inside, they're supposed to be there." But that's exactly the vulnerability that someone like Roger could exploit.
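Checking for this misconfiguration doesn't require anything fancy. Here's a sketch of the kind of audit we mean: a pure function that flags security groups whose egress allows everything. The dicts follow the shape of the real `describe_security_groups` response from AWS's boto3 SDK (AWS's default egress rule is exactly protocol `-1` to `0.0.0.0/0`), but the sample data is invented and the function can run against exported JSON without any AWS credentials.

```python
def egress_wide_open(sg: dict) -> bool:
    """True if any egress rule allows all protocols to 0.0.0.0/0 or ::/0."""
    for rule in sg.get("IpPermissionsEgress", []):
        all_protocols = rule.get("IpProtocol") == "-1"  # "-1" means every protocol
        open_v4 = any(r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", []))
        open_v6 = any(r.get("CidrIpv6") == "::/0" for r in rule.get("Ipv6Ranges", []))
        if all_protocols and (open_v4 or open_v6):
            return True
    return False

def audit(groups: list) -> list:
    """Return the IDs of security groups that let everything out."""
    return [sg["GroupId"] for sg in groups if egress_wide_open(sg)]

# In practice you would feed this:
#   boto3.client("ec2").describe_security_groups()["SecurityGroups"]
```

Run that across your accounts and count how many group IDs come back. In our experience, it's most of them.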
Connecting the Dots
We asked their DevOps team for a remote session with screen sharing. We needed to map out exactly what was communicating where.
Together, we started documenting the legitimate traffic patterns:
- Database connections on specific ports to fixed subnets
- Identity server traffic on defined ports and subnets
- API calls to known endpoints
Once we had the normal traffic mapped out, we could see what didn't belong.
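Once you have that baseline, spotting the outlier can be as simple as diffing live connections against an allowlist. A sketch of the idea, assuming outbound connections are summarized as (destination IP, port) pairs; the subnets and ports below are placeholders standing in for the real patterns we documented.

```python
import ipaddress

def build_baseline(rules):
    """rules: iterable of (cidr, port) pairs describing legitimate traffic."""
    return [(ipaddress.ip_network(cidr), port) for cidr, port in rules]

def unexpected(connections, baseline):
    """Return outbound (ip, port) pairs that match no baseline rule."""
    flagged = []
    for ip, port in connections:
        addr = ipaddress.ip_address(ip)
        if not any(addr in net and port == p for net, p in baseline):
            flagged.append((ip, port))
    return flagged

# Baseline mirroring the mapping above (all values are placeholders):
baseline = build_baseline([
    ("10.0.1.0/24", 5432),    # database connections to a fixed subnet
    ("10.0.2.0/24", 8443),    # identity server traffic
    ("203.0.113.0/24", 443),  # known API endpoints
])
```

Feed it the output of something like `ss -tn` and anything flagged deserves a hard look.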
And there it was: mysterious outbound traffic that had no business being there.
We dug deeper into the application code, tracing back where this traffic was originating. That's when we found Roger's handiwork.
The Clever Disguise
What Roger had done was actually quite smart, and that's what made it so dangerous.
He'd embedded code that appeared to be doing something completely legitimate: validating email addresses to check if they existed before sending messages. On the surface, this looked like good practice. You don't want to send emails to invalid addresses, right?
But buried in that "validation" code was something else entirely. While it was checking if emails were valid, it was also quietly sending data out to external servers. Customer information, business data, all flowing out through what looked like routine email validation.
The beauty of it (from an attacker's perspective) was that it blended in perfectly with normal application behavior. Email validation is expected. Outbound connections for verification are normal. No alarm bells rang.
But once we isolated that code block and traced where the data was actually going, the picture became crystal clear. This wasn't validation. This was data theft.
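We can't publish Roger's actual code, but the pattern looks roughly like this: a function whose visible job is checking an address, with a data side channel bolted on. In this inert sketch the transport is stubbed as a callable; in the real code it was an outbound request to a server the attacker controlled.

```python
import re

# Deliberately simple syntax check -- the "legitimate" half of the function.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_email(address: str, customer_record: dict, _send=lambda payload: None):
    """Looks like routine email validation -- but note the extra call."""
    is_valid = bool(EMAIL_RE.match(address))
    # The malicious part: piggybacking customer data onto the validation
    # step, where an outbound connection looks expected. Here _send is a
    # harmless stub; in the incident it was a network call to an external host.
    _send({"email": address, "record": customer_record})
    return is_valid
```

Notice that the function returns the right answer either way, valid or not, the caller sees exactly what a real validator would return, which is why nothing ever looked wrong from the application's side.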
The Wake-Up Call
When we showed the DevOps team what we'd found, you could see it hit them. This wasn't just a bug or a configuration mistake. This was a deliberate, sophisticated attack from someone who knew exactly what he was doing, because he'd built the system in the first place.
The security gap they'd missed wasn't in their fancy security tools or monitoring systems. It was in the fundamental assumption that outbound traffic from trusted applications didn't need to be restricted.
Roger knew this. He'd counted on it.
The Bigger Picture
Outbound traffic matters just as much as inbound. Your firewall isn't protecting you if it only looks one way. Data exfiltration happens through outbound connections, and if you're not monitoring and restricting those, you're leaving the back door wide open.
Security tools are only as good as their configuration. Having AWS security groups doesn't mean you're secure. Having monitoring tools doesn't mean you're protected. You need to actually configure them properly—including those outbound rules everyone ignores.
Code reviews need a security lens. That "email validation" function looked innocent enough. But if someone had reviewed it with security in mind—asking "where is this data actually going?"—they might have caught it earlier.
Exit processes matter. When someone with deep system access leaves, you need more than a polite handshake and a laptop return. You need code audits, access reviews, and traffic analysis.
The Cost of Waiting
By the time they found us, this had been running for months. Months of customer data flowing out. Months of infrastructure costs supporting malicious activity. Months of reputation risk they didn't even know they had.
The financial cost was significant. But the reputational cost, if customers had found out their data was being exfiltrated, could have been devastating.
What You Can Do
If you're reading this and thinking about your own infrastructure, here are some practical steps:
- Audit your AWS security groups today. Look at those outbound rules. Are they actually restricting anything, or is everything allowed out?
- Review code from recently departed employees, especially those with production access. Look for anything that makes external connections.
- Monitor outbound traffic patterns. Know what normal looks like, so you can spot abnormal.
- Implement least-privilege access, even for trusted employees. Does your developer really need access to production? Or can they work in a controlled environment?
- Don't assume your existing security tools have you covered. They're configured for the threats you thought about, not necessarily the ones you're actually facing.
Need Help?
At Bithost, we've seen this pattern more times than we'd like to admit. Trusted insiders. Misconfigured security groups. Data walking out the door while everyone's watching the front entrance.
If you're concerned about your infrastructure security, or if you're experiencing something similar and can't figure out what's wrong, we can help.
Whether you need a security audit, incident response, or just want someone to take a fresh look at your setup with experienced eyes—we've been there, and we know what to look for.
Reach out to our team at sales@bithost.in
We'll help you find what others might miss—before it becomes a problem you're reading about in the news.
Bithost is a division of Zhost Consulting Private Limited, specializing in cloud infrastructure security, incident response, and security audits for businesses of all sizes.