Tuesday, December 30, 2014

2014 - the infosec year in review - part 5

This is part 5 of an n-part blog series, discussing the things I found to be game changers in Infosec in 2014.

Item:  Heartbleed (yes, I know this has been overdone, but don't stop reading yet)

What is (or was) it?  A vulnerability in OpenSSL's TLS heartbeat extension that let attackers read chunks of process memory from servers (and clients), exposing data such as session cookies, passwords, and even private keys.

Why is it significant? The Heartbleed bug was huge - I know because it passed the "Jake's mom test."  What's that, you ask?  I can always tell when the media hype surrounding a security vulnerability is truly huge, because my mom calls.  She's a nurse and doesn't work in infosec, so the only reason she calls is that she's heard about it in the media and is alarmed enough to ask for more information.  This happened three times in 2014, more than in any previous year.  That tells me national coverage of infosec happenings is improving.

CloudFlare issued a challenge under the notion that private keys could not be reliably recovered through the bug.  They were wrong - the challenge was quickly won, and servers can and did lose private keys to theft.  At Rendition Infosec, we found that most of our clients were not familiar with the purpose of key reissue.  Some issued new keys but kept the original information in the certificate intact (so customers could not tell the keys had been replaced).  Some patched the vulnerable libraries but did not restart the software using those libraries, leaving the old code running in memory.  Most businesses have never reissued their keys en masse before.  Additionally, CRLs were never really designed to handle revoking everyone's keys at once.
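
Want to verify a certificate was actually reissued, rather than re-cut with the old information?  Comparing the serial number and notBefore date against the old certificate is a quick check.  Below is a minimal sketch using the OpenSSL library itself - the error handling is bare-bones and the program is illustrative, not production code:

    /* cert_check.c - minimal sketch: print a certificate's serial number
     * and notBefore date.  If these match the pre-Heartbleed certificate,
     * the "new" cert was cut with the old information intact.
     * Build (assumes OpenSSL dev headers): gcc cert_check.c -lcrypto */
    #include <stdio.h>
    #include <openssl/bn.h>
    #include <openssl/pem.h>
    #include <openssl/x509.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s cert.pem\n", argv[0]);
            return 1;
        }

        FILE *fp = fopen(argv[1], "r");
        if (!fp) { perror("fopen"); return 1; }

        X509 *cert = PEM_read_X509(fp, NULL, NULL, NULL);
        fclose(fp);
        if (!cert) { fprintf(stderr, "not a PEM certificate\n"); return 1; }

        /* Serial number - a properly reissued cert should have a new one */
        BIGNUM *bn = ASN1_INTEGER_to_BN(X509_get_serialNumber(cert), NULL);
        char *hex = BN_bn2hex(bn);
        printf("serial:    %s\n", hex);

        /* notBefore - a reissued cert should be dated after the patch */
        BIO *out = BIO_new_fp(stdout, BIO_NOCLOSE);
        printf("notBefore: ");
        ASN1_TIME_print(out, X509_get_notBefore(cert));
        printf("\n");

        BIO_free(out);
        OPENSSL_free(hex);
        BN_free(bn);
        X509_free(cert);
        return 0;
    }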

Could it have been prevented? Yes, Ray Charles could have seen this coming.  Maybe not the specific vulnerability, but the OpenSSL code was a train wreck.  Yet there we all were using the code (myself included), assuming it must be relatively secure because surely someone else was looking at it.

Two specific preventive measures come to mind.  The first is avoiding custom memory allocators.  Developers - stop writing these unless you really need them.  OpenSSL did not need a custom memory allocator.  One comment in the code suggests that malloc is slow on some platforms and that this is the reason OpenSSL manages its own free lists.  OpenSSL even acknowledges the criticism on its own wiki.  If malloc and free had been used, my mom would never have heard about Heartbleed.  The headline would have been "almost certain DoS with extremely unlikely data disclosure found in OpenSSL."  In the SANS Advanced Exploit Development course (where I'm a contributing author), we cover mitigations for heap overflows.  There are many, and they make exploitation much harder.  But by using a custom allocator, OpenSSL removed the possibility of benefiting from any of these mitigations.
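
To see why, consider a simplified sketch of the free-list pattern at issue.  This is hypothetical code, not the actual OpenSSL allocator, and it assumes fixed-size buffers for brevity - but it shows the core problem: freed buffers are cached and handed back uncleared, so stale data survives in memory where a real free()/malloc() pair would give the platform heap's hardening a chance to intervene.

    /* freelist_sketch.c - hypothetical free-list allocator, NOT the real
     * OpenSSL code.  Because buffers are recycled internally and never
     * cleared, an over-read past a small buffer (a la Heartbleed) can
     * return another request's leftover secrets instead of crashing. */
    #include <stdlib.h>

    #define BUF_SIZE 4096

    struct freelist_item {
        struct freelist_item *next;
    };

    static struct freelist_item *freelist = NULL;

    void *buf_alloc(void)
    {
        if (freelist) {                      /* reuse a cached buffer... */
            void *buf = freelist;
            freelist = freelist->next;
            return buf;                      /* ...without zeroing it */
        }
        return malloc(BUF_SIZE);
    }

    void buf_free(void *buf)
    {
        struct freelist_item *item = buf;    /* memory never goes back to  */
        item->next = freelist;               /* the system allocator, so   */
        freelist = item;                     /* heap canaries, guard pages */
    }                                        /* and UAF checks never fire  */

Had OpenSSL used the system allocator instead, the oversized heartbeat read would most likely have hit unallocated memory and crashed the process - the "almost certain DoS" headline above.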

The second preventive measure is code review of critical open source software.  Again, this code was a train wreck.  @ErrataRob has already done a great job reviewing some of the critical errors in the code.  One of the most glaring is actually an old programming joke.
    {
        s2n(strlen(s->ctx->psk_identity_hint), p);
        strncpy((char *)p, s->ctx->psk_identity_hint,
                strlen(s->ctx->psk_identity_hint));
        p += strlen(s->ctx->psk_identity_hint);
    }
The joke is:
    strncpy(dst,src,strlen(src));
This just gets rid of the compiler warning without actually providing any additional security.  The length passed to strncpy should be bounded by the destination buffer, never blindly set to the length of the source string - bounding by the source makes the "bounds check" a no-op and, as a bonus, skips copying the NUL terminator.  Anyone with a college background in programming knows this is insecure, yet it was in the OpenSSL code (and, interestingly, did not cause Heartbleed).  We need better security audits around our open source software; just trusting that someone else is doing it won't work.
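
For comparison, here's a minimal sketch of the correct pattern: bound the copy by the destination and terminate explicitly.  The helper's name and signature are hypothetical, purely for illustration:

    #include <string.h>

    /* Hypothetical helper: copy src into dst (dst_size bytes total),
     * bounding by the DESTINATION.  Returns 0 on success, -1 if src
     * (plus its NUL terminator) would not fit. */
    static int safe_copy(char *dst, size_t dst_size, const char *src)
    {
        size_t n = strlen(src);
        if (n >= dst_size)
            return -1;               /* caller must handle the error */
        memcpy(dst, src, n + 1);     /* n + 1 copies the terminating NUL */
        return 0;
    }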

Stay tuned for more installments in the Infosec year in review.

Monday, December 29, 2014

2014 - the infosec year in review - part 4

This is part 4 of an n-part blog series, discussing the things I found to be game changers in Infosec in 2014.

Note: for those who have contacted me asking where Heartbleed and Shellshock are, don't worry - they are coming. I'll concede they are obvious choices for a series like this and that (as much as anything else) is why we didn't start with them.

Item:  The Truecrypt scandal

What is (or was) it?  Truecrypt project goes dark after being subjected to an independent code review.

Why is it significant? The Truecrypt project has always been shrouded in secrecy.  The developers have never been publicly identified, and despite the project being open source, getting the code to build on Windows was exceedingly difficult.  As with many open source projects, there was a problem confirming that the published binary builds were actually representative of the source code.  In any case, people began to fear that a government may have planted a backdoor in the code.  A crowdfunded project was launched to audit the code.  The Truecrypt bootloader (the most obvious place to hide a backdoor) passed the audit.  The remaining funds were then allocated to auditing the crypto itself.  That's when the developers deprecated the project.  Coincidence?  I've said publicly that I think the project was sponsored by a nation-state.  I also think that the project closing up shop and the code audit were connected.  Others disagree.

Regardless of what you believe, there are lessons here for companies relying on open source software.  How much of that software has been audited?  A code audit of OpenSSL by a college intern would have found Heartbleed.  How dependent are you on a project that may close up shop at any time?  I know several SMBs that went scrambling after Truecrypt ceased support.  Internally at Rendition Infosec, we used it to encrypt our external USB drives.  Just because software is popular doesn't mean it is secure.  Popularity also offers no guarantee that anyone will continue to support it.

Could it have been prevented? That question doesn't really apply to this item very well.  Our reliance on encryption will only increase as more nation-state hacking is exposed.  To the extent possible, though, we should confirm that our security products are not introducing vulnerabilities themselves.

Stay tuned for more installments in the Infosec year in review.

Sunday, December 28, 2014

2014 - the infosec year in review - part 3

This is part 3 of an n-part blog series, discussing the things I found to be game changers in Infosec in 2014.

Item:  Target CEO resigns

What is (or was) it?  Target's CEO resigned in the wake of the 2013 data breach

Why is it significant? Gregg Steinhafel certainly isn't the first exec to lose a job after a breach, but he is one of the most prominent.  He was a 35-year veteran at Target - more than a figurehead, he was a real company man.  We expected heads to roll at Target after the breach, but usually we expect to see the CIO or CISO (or both) being made available to industry.  The replacement of the CEO at such a large institution was genuinely surprising.  We knew the breach would be a black mark on his record, but again, we were surprised to see him leave outright.

Could it have been prevented? Yes - of course it could have been (I see a theme emerging here).  I won't do another postmortem on the Target breach - plenty has already been written on that.  But the short version is that Target had all the logging data needed to detect the attack; they had to have it to be PCI compliant.  And of course, per the PCI standards (and other industry best practices), they were performing regular log reviews (wink, wink).

Here at Rendition Infosec, we believe Target's SOC and systems administrators were understaffed and simply lacked the time and/or expertise to identify the evidence of compromise present in the logs.  Had staffing been appropriate (both in numbers and expertise), we believe Target would have detected the breach early.  We'll never close every hole in the network, so forget about preventing every compromise.  Focus on detecting a compromise before data is exfiltrated and it becomes a breach.

Stay tuned for more installments in the Infosec year in review.

Saturday, December 27, 2014

2014 - the infosec year in review - part 2

This is part 2 of an n-part blog series, discussing the things I found to be game changers in Infosec in 2014.

Item:  Home Depot lawsuits

What is (or was) it?  Breach lawsuits against Home Depot allege negligence in their handling of payment card data.

Why is it significant? Home Depot claims to have been compliant with the PCI standards in 2013 but was undergoing its 2014 PCI compliance evaluation when the breach was discovered.  In a statement to investors, Home Depot concluded that its position was tenuous and that it would likely have to settle claims.  At Rendition Infosec, we think the most significant aspect is the PCI compliance angle.  Although Home Depot was compliant in 2013, it clearly was not at the time of the breach.  If you work breach investigations, you know that what the last auditors saw is rarely what you are seeing now.  Announced inspections are like that; companies get a chance to clean up before the auditors arrive.  Attackers don't announce that they are coming.  It will be interesting to see how these cases shake out and how they impact later breach investigations and lawsuits.

Could it have been prevented? Yes - of course it could have been.  The fact that the company was found non-compliant with PCI standards in 2014 (and remember, compliant != secure) means Home Depot clearly could have done better.  A company that can't even remain PCI compliant is like an office building manager who decides that railings are no longer needed in the stairwell - both represent an abject failure to meet basic safety standards.  It's always tempting to play Monday morning quarterback on these things, but this was a pretty egregious case that impacted a lot of people.  Home Depot had the means to protect its network; it just failed to execute.

Stay tuned for more installments in the Infosec year in review.

Friday, December 26, 2014

2014 - the infosec year in review - part 1

This is part 1 of an n-part blog series, discussing the things I found to be game changers in Infosec in 2014.

The first significant item of note in this series is Synolocker.

Item: Synolocker

What is (or was) it?  Ransomware targeting Synology NAS devices.

Why is it significant?  Ransomware attacks are not new, but this is the first ransomware we're aware of that targeted a NAS device.  Most ransomware attacks up to this point have targeted Windows machines.  But malware targeting storage appliances, particularly ones not running Windows?  That's new.  To add to the oddity, the attackers closed up shop quickly, then posted this message on their website offering the remaining keys for bulk sale:

Synolocker criminals closing up shop, originally found on The Guardian

F-Secure also wrote a tool for unlocking Synolocker-encrypted files, which speaks to the number of users impacted by this issue.

Could it have been prevented? Yes - as it turns out, Synolocker exploited a known vulnerability that had been patched in late 2013.  Synolocker first appeared in the middle of 2014.  Should users have patched in the intervening six months? Sure.  But to be fair, some early adopters of the update reported that it bricked their systems.

Stay tuned for more installments in the Infosec year in review.