Friday, April 25, 2014

"Amazon Cloud IaaS Service servers riddled with vulnerabilities" - really???

So this article came across my Twitter feed and I had to check it out.  I've long wanted to probe at Amazon's IaaS service and see if something shakes loose, but then I remember the CFAA.  I generally dislike prison, so I've avoided probing the service up to this point.  But I figured "hey, if someone else did it, I'll learn from their research."

My problem with this "research" (which is detailed here) is that there isn't much actual research in it.  The entirety of it can be summarized by Amazon's own documentation:
We recommend that you run the Windows Update service as a first step after every Windows instance that you launch.
That sounds like good advice to me.  Earlier in the same documentation, we find these nuggets:
AWS updates the AWS Windows AMIs several times a year. 
In addition to the public AMIs provided by AWS, AMIs published by the AWS developer community are available for your use. We highly recommend that you use only those Windows AMIs that AWS or other reputable sources provide.
So I don't know what your definition of "several times a year" is, but let's just say it isn't every Patch Tuesday.  Further, the article doesn't mention whether the official Amazon AMIs were years out of date or whether it was a community-contributed AMI.  Community-contributed AMIs might never be updated after release, and we shouldn't expect Amazon to have any control or influence over that.

So what's the answer here?  Well, I certainly side with Amazon in advocating that users enable Windows updates and patch their systems before entering production.  However, there's a good reason why updates are disabled by default.  EC2 instances can be configured to terminate on shutdown, which means the EBS volumes backing them cease to exist.  If you depend on that behavior, automatic updates won't mix well with it: a shutdown triggered during updating takes the instance, and its volumes, with it.
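As a minimal sketch of the check I'd do before enabling updates (the dict shape assumes the response of EC2's DescribeInstanceAttribute API, e.g. boto3's `describe_instance_attribute` with `Attribute="instanceInitiatedShutdownBehavior"`; the helper name is mine):

```python
# Decide whether automatic updates are safe to enable, given an
# instance's configured shutdown behavior.
def updates_safe(attr_response):
    behavior = attr_response["InstanceInitiatedShutdownBehavior"]["Value"]
    # "terminate" means a shutdown destroys the instance and its
    # backing EBS volumes -- an update-triggered shutdown included.
    return behavior != "terminate"
```

If this returns False, either switch the instance to stop-on-shutdown before patching or accept that an update cycle could take the instance with it.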

The bottom line is that you have to be smart.  Moving some processing to the cloud solves a great many problems.  Patching is unfortunately not one of them...  Just like you wouldn't install a Windows server from DVD and deploy it without patching, you shouldn't start up an AMI and deploy it without patching either.

As an aside, Help-Net Security should be ashamed of themselves for the tag line they used in the article.  It's just sensationalism, plain and simple.

Tuesday, April 22, 2014


I'm working with Paul Henry (@phenrycisssp) on the new Cloud Forensics course, FOR559.  So far it's been a blast.  On the exciting news front, I've built a SIFT workstation AMI in EC2.  The SIFT workstation can be found by searching for 'sift' in community AMIs, or you can reference it directly as ami-25879c4c if using the command line to launch instances.  While this was primarily for course development, I've made it public so everyone can benefit.
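For the command-line launch, here's a minimal sketch of the equivalent boto3 call; only the AMI ID comes from this post, while the instance type and key name are hypothetical placeholders you'd replace:

```python
# Parameters for launching the public SIFT AMI.
launch_params = {
    "ImageId": "ami-25879c4c",      # the public SIFT AMI
    "InstanceType": "m3.large",     # placeholder; size to the case
    "KeyName": "my-forensics-key",  # placeholder; your own key pair
    "MinCount": 1,
    "MaxCount": 1,
}
# With boto3 (not run here):
# boto3.client("ec2", region_name="us-east-1").run_instances(**launch_params)
```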

Why do you need SIFT in EC2?  Well, first and foremost, I prefer to do forensic investigations of IaaS cloud assets in the cloud itself.  There's usually no data transfer cost for moving data within the same region of a given service.  I can do my analysis and then copy only the data I need out of the cloud service, saving big money on bandwidth costs.  The other issue is time.  It takes time to move whole images out of the cloud, and that's time I could better spend answering the client's questions.  Maybe we can answer the "were we hacked" or "what's the damage" questions without moving data out at all.

We'll cover the steps to performing forensics in the cloud, as well as moving data out of the cloud, in the upcoming Cloud Forensics course.

Some notes on the SIFT:
  • The username is ubuntu.  This is the standard for Ubuntu based AMIs and I decided not to change it.  You need to know this username to SSH into the machine after you launch it.
  • The VNC password is password.  Since your firewall rules shouldn't allow you to directly VNC to the machine, I figured that's not a big deal.  You should be using SSH forwarding to get there.
  • The desktop is installed, but not tested.  I built this primarily for command line use and the tools I needed work.  
  • If you find issues, please let me know and I'll work to correct them.
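The SSH forwarding mentioned above might look like this; the host name and key file are hypothetical placeholders, and 5901 is the usual port for VNC display :1 (confirm which display your VNC server uses):

```python
import shlex

# Build the ssh invocation that tunnels local port 5901 to the VNC
# server on the instance; point your VNC client at localhost:5901.
host = "ec2-198-51-100-7.compute-1.amazonaws.com"  # placeholder public DNS
cmd = f"ssh -i sift-key.pem -L 5901:localhost:5901 ubuntu@{host}"
args = shlex.split(cmd)
# subprocess.run(args) would open the tunnel (not run here)
```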
Look for similar base images to pop up in Azure and Rackspace in the coming months.

Thursday, April 10, 2014

Heartbleed and banks - WTF?!

We've all heard about the HeartBleed bug.  More than anything, I think the lack of very public coordination by vendors has been troubling.  I should not have to dig through a vendor's site to find out whether they had vulnerable products.  It should be front-page news, even if the answer is "we still don't know."

And banks... wow. I have accounts at several major banks.  I checked their websites this afternoon and not a single one has a notice up about whether they were ever vulnerable.  If they were, customers should know, so they can judge what information may have been compromised.  As it stands, the customer is at a significant disadvantage.  Why is this?  There's no shame in being vulnerable to a zero-day bug; admitting you were vulnerable while following best practices is a good thing.  There is shame in hiding that fact from customers who might want to change passwords (not a bad idea anyway) or do something more drastic, like change account numbers.

So, does anyone know of any banks out there that have publicly disclosed whether they were vulnerable?  Has anyone given the "all clear"?  What's the status of the security of my accounts, and why am I having to guess?

If you check a bank site, leave a note here (or DM me on Twitter) with the name of the bank so we can keep a running list of who is and isn't notifying customers about HeartBleed status.

HeartBleed links

Wanted to post a few links I used in the HeartBleed simulcast last night (and a few others emailed to me after the fact):
  • ISC list of potentially vulnerable vendors is here.
  • Snort signatures from FoxIT are located here.
  • MetaSploit vulnerability test module is located here.
  • Server to test for HeartBleed is located here.
Check out the full post over at the SANS DFIR blog.

Tuesday, April 8, 2014

HeartBleed slides

UPDATE: Better (more complete) slides and other material available here.

Since I am at SANS Orlando, but not teaching a long course this week, I was asked by the SANS staff to put together a presentation on the heartbleed vuln.  This was short notice.  I had spent most of the day working with various clients on addressing their own specific issues.  I'll likely update this presentation tomorrow night when we know more about who has updated, who has not, and what client software is vulnerable.  Until then, consider these slides beta.

Here are the (decidedly non-technical) slides

HeartBleed POC script

HeartBleed server test script

Please understand that these are extremely beta as I was working with clients all day.  If you are looking for super technical explanations, look elsewhere. Otherwise, feel free to use this to communicate to your management/IT departments.
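For the curious, the core of PoC scripts like these is just a malformed heartbeat message.  Here's a minimal sketch of the record they send (field layout per RFC 6520; this only builds the bytes, it doesn't talk to any server):

```python
import struct

def heartbeat_record(claimed_len=0x4000):
    # Heartbeat message: type 1 (request) plus a payload length we lie
    # about -- we claim claimed_len bytes but send none. A vulnerable
    # server echoes back that many bytes of whatever is in its memory.
    hb = struct.pack(">BH", 1, claimed_len)
    # TLS record header: content type 24 (heartbeat), version TLS 1.1,
    # then the true length of the record body.
    return struct.pack(">BHH", 0x18, 0x0302, len(hb)) + hb
```

A patched server either ignores the request or replies with only the bytes actually sent, which is how the test scripts tell vulnerable from fixed.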

Monday, April 7, 2014

OPSEC vs. virustotal

So you just found malware running on one of the systems in your enterprise.  Do you upload it to VirusTotal to see how many antivirus vendors detect it?  The answer to this question is an overwhelming NO!  Don't do it. The temptation is understandable, but just what are you giving away to your attacker?

Anything you upload to VirusTotal (or any other public malware analysis site, for that matter) will eventually be shared with other antivirus vendors.  What happens if the attacker created a sample specifically for your environment?  Suppose you work at SuperMegaBank and have been targeted by Eastern European bad guys.  Because they don't like being caught, they create a new piece of malware, with a unique hash, just for you.  When you send this to any public site, it will eventually get back to antivirus vendors (in fact, eventually all of them).  Any decent attacker has a lab with a number of AV products.  When the antivirus vendors incorporate your sample into their updated definitions, your attacker will know they've been found out.  Unfortunately for them, by this time you have hopefully removed the malware from your environment and they are back to performing initial access operations.

The more immediate concern is that most malware analysis sites have a search feature.  Using it, anyone with API access (or in some cases just a web browser) can determine whether a given sample has ever been submitted or analyzed.  The problem is that as soon as you upload the sample, anyone searching by its hash can determine that the file has been analyzed.  That, of course, includes your attacker.  Obviously, if your attackers know you are onto them, that's not a good thing.  They may change out their tools and/or go to ground.  That leaves you holding the bag, trying to find attackers who are now doing their best to hide from you.
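One safe alternative: compute the sample's hashes locally and search by hash only, so the binary itself never leaves your environment.  A quick sketch (the helper name is mine):

```python
import hashlib

def sample_hashes(path):
    # Hash the file in chunks so large samples don't land in memory
    # all at once; search public services with these strings instead
    # of uploading the binary itself.
    md5, sha256 = hashlib.md5(), hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            md5.update(chunk)
            sha256.update(chunk)
    return {"md5": md5.hexdigest(), "sha256": sha256.hexdigest()}
```

A hash search is a lookup, not a submission, and a miss on a search is itself informative: it suggests the sample was custom-built for you.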

The only question now is "do attackers really do this?"  I can't say for sure, but I've observed what I consider to be some pretty strong circumstantial evidence.  I've worked several incident response cases where the first responders uploaded samples to VirusTotal.  In every one of those cases, the attacker went to ground.  That left us with a much harder detection and remediation task than we would have faced if we'd kept the sample to ourselves.  Is VirusTotal a useful tool?  Absolutely, I love it.  But is it appropriate for uploading samples during an incident response?  Absolutely not.  Period.  Practice good OPSEC.