Thursday, December 31, 2015

Significance of the compile time stamp in the Ukrainian power malware

Earlier, I did some analysis on the Ukrainian malware for a post on the SANS ICS blog with Robert M. Lee.  I noted that my analysis was hasty, and sure enough I screwed something up.  I failed to notice that my tool was reporting the compile time stamp in local time rather than UTC.  Classic forensics fail...

When the compile timestamp is converted to UTC, we see that the time is 2202 on January 6, 1999.  Since Ukraine is UTC+2, the local Ukraine time is 0002 on January 7, 1999.  What significance might this date hold?
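For anyone who wants to double-check the math, here is a minimal Python sketch of the conversion.  The raw TimeDateStamp value below is an illustrative assumption corresponding to 1999-01-06 22:02 UTC, not a value copied from the sample, and the UTC+2 offset is hardcoded since that's all we need here.

# Minimal sketch: convert a PE TimeDateStamp (seconds since the Unix epoch) to
# UTC and to Ukraine local time (UTC+2).  The timestamp value is an assumed
# example for illustration, not taken from the actual sample.
from datetime import datetime, timedelta, timezone

timestamp = 915660120    # assumed raw TimeDateStamp (1999-01-06 22:02 UTC)
utc_time = datetime.fromtimestamp(timestamp, tz=timezone.utc)
ukraine_time = utc_time.astimezone(timezone(timedelta(hours=2)))
print("Compile time (UTC):    ", utc_time)      # 1999-01-06 22:02:00+00:00
print("Compile time (Ukraine):", ukraine_time)  # 1999-01-07 00:02:00+02:00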

Many Orthodox Ukrainians celebrate Christmas starting on January 7th.  This might be significant, though I'm no history scholar so I'm not sure if something special happened that year.  Google didn't turn up anything interesting there.

January 7, 1999 is also the date that Clinton's impeachment trial began in the Senate.  Is that significant for the malware authors?  Hard to say.  Doesn't seem likely, but maybe they were big fans of Lewinsky's blue dress.

One possibly interesting event on January 6, 1999 involves the establishment of a solid waste disposal site, as noted here.

If the date has something to do with Crimea, it looks like the Constitution of the Autonomous Republic of Crimea came into effect Jan 12, 1999. Yeah, it's not Jan 7, but maybe it means something to someone.

A final possibility has to do with the fact that Prince Rostislav Romanov of the Russian Imperial Family died on January 7, 1999.

Or it could all be coincidence.  With things like this, we may never know.

More info on Ukrainian power problems

I wanted to update the available information on the Ukrainian utility hacks to round out my blogging year.  Nothing like ending the year on a high note :)


Based on available reporting, it appears that attackers combined both cyber and kinetic sabotage for maximum effect.  This article indicates that explosions were heard and damage consistent with an explosion was found near a pylon supporting power lines.
The text below comes from the official Ukrainian Government website (translated with Google Translate).  It indicates that malware was used in the attacks.
Security Service of Ukraine has warned Russian special services try to hit the computer network of the energy complex of Ukraine.
Employees of the Security Service of Ukraine found malicious software in the networks of individual regional power companies. Virus attack was accompanied by continuous calls (telephone "flood") technical support room supply.
Malware localized. Continued urgent operational-investigative actions.
Press Center Security Service of Ukraine
Another interesting note is the use of  telephone DoS attacks during the suspected malware attack.  Any organization conducting sand table exercises should consider how this combination of attacks would impact an incident response.

Reuters also published information on the suspected attack in this article, though technical details on the malware and/or tactics used are largely lacking.

A direct attack on one nation's power grid by another nation would potentially open the door for retaliation in the form of disruptive cyber attacks.  The laws of war for cyber attacks that cause kinetic effects are not universally accepted and are poorly understood by political leaders who might order or authorize such attacks.  Citizens in a nation who believe that the state is not acting in their best interests might move to strike without the authorization of the state.  While the danger of asymmetric cyber warfare has been discussed fairly extensively, we lack real experience to know how nations will react to this sort of attack.

As I was finishing this post, I noted that my friend and SANS colleague Robert M. Lee is working on a post of his own.  Stay tuned to his updates as he apparently has acquired a sample of suspected malware used in the attack.

Happy New Year and let's have a great 2016!

Update: Twitter follower "Lin S" noted that the telephone DoS may have simply been an overload of phone capacity as technicians and customers called for help.  I meant to note this as a possibility originally, but got busy when someone forwarded me a sample of the malware (I'm a malware geek at heart).  If the numbers flooded were publicly available, then this is probably due to customers without power.  If the number is not publicly available (e.g. an internal help desk), it would more likely indicate malicious attacker behavior.  The reason I gravitated to the latter was the mention of the "telephone flood" in the official state press release.

Ukrainian power grid hacked, details sketchy

Last week, the Ukrainian power grid was hacked.  I found out through the wonderful SCADASEC mailing list.  Although the details are a little sketchy, this online news article has the details.  Since it wasn't in English, I put it through Google translate to get something resembling English back out.
On Wednesday 23 December due to technical failures in the "Prykarpattyaoblenergo" half remained without electricity Ivano-Frankivsk region and part of the regional center. The company for no apparent reason began to disconnect electricity. 
The reason was hacking, virus launched from outside brought down management system remote control. Restore the electricity supply specialists managed only six hours.
As a result of the attacks have been partially or completely de-energized Ivano-Frankivsk, Gorodenka, Dolyna, Kolomiysky Nadvirnyans'kyi, Kosovo, Kalush, Tysmenytsya of Yaremche districts and zones. 
Energy did not immediately understand the causes of the accident. Central dispatching suddenly "blind". "We can say that the system actually" haknuly. " This is our first time working ", - reported in" Prykarpattyaoblenergo. " 
Restored work station manually, as infected system had to turn off. 
At the time of inclusion network "Prykarpattyaoblenergo" was still infected with the virus, experts have worked to defuse it. Did this PreCarpathian experts still unknown. 
It is Ukraine's first successful cyberattack on the public enterprise supply of energy.
This is more than a little frightening.  At Rendition Infosec, we've evaluated security in numerous ICS and SCADA environments and the results are usually terrifying to say the least.  The reality is that much of the public sector utility equipment was never engineered with network security in mind. We all know that when security is an afterthought, the outcome is predictably horrible.  


We predict that in the future, we'll see more disruptive attacks against SCADA systems controlling our utilities.  Some of these attacks will be from relative script kiddies just flexing their muscles against poorly secured systems.  Other attacks are likely to come from state actors using utility disruptions for economic or political gain.  

If you are in control of SCADA systems, make sure that they are as secure as possible.  Network segmentation goes a long way towards security.  ICS systems have no place operating on the same network segments as user systems.  Though many SCADA controllers have VNC built in for remote management, VNC should never be exposed to the open Internet.  Telnet should never be exposed either.  But despite common-sense rules such as these, we regularly see them violated in assessments.  Secure your ICS systems so some Ukrainian doesn't use Google Translate to read about your security failures like we just did for them.
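If you want a quick sanity check on your own externally facing ranges, a rough Python sketch like the one below will flag hosts answering on the default Telnet and VNC ports.  The address list is a hypothetical placeholder, and obviously you should only scan ranges you are authorized to test; this is no substitute for a proper assessment.

# Rough sketch: flag hosts answering on the default Telnet (23) and VNC (5900) ports.
# The address list is a placeholder - scan only ranges you are authorized to test.
import socket

hosts = ["192.0.2.10", "192.0.2.11"]   # replace with your externally facing addresses
ports = {23: "telnet", 5900: "vnc"}

for host in hosts:
    for port, name in ports.items():
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(2)
        if s.connect_ex((host, port)) == 0:   # 0 means the TCP connection succeeded
            print("%s:%d (%s) is reachable" % (host, port, name))
        s.close()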

Update: I actually wrote this late last night and then overnight, Chris Sistrunk posted this Reuters article with additional details.  Chris apparently also broke the original story to the SCADASEC mailing list. Definitely worth looking at.

Wednesday, December 30, 2015

"Chief Internet of Things Officer" or "dumbest idea ever"

This little gem was brought to my attention by Andrew Hay.  If you aren't following him, you should start.  You'll be smarter for it.

This morning, he posted a link about a new position that exists (maybe?!) somewhere in the industry. The position is called the Chief IoT (Internet of Things) Officer.  Yes, you read that correctly.  While IoT is a big concern and getting it wrong can really jam a lot of things up, it's not like IoT demands its own board level position.

Just like with IT, there are two major components to IoT administration.  The first is implementation/operation and the second is security.  Implementation and operation should be handled by the CIO.  Security should be handled by the CISO.  Some will argue that IoT is so fundamentally different from regular devices that a new C level position is required.  I think this is hogwash.

At Rendition Infosec, we usually can tell a lot about the security posture at an organization just by looking at the org chart.  When the CISO reports to the CIO, you can usually anticipate problems in security.  News about the security posture of information systems is simply not accurately reported when the CISO doesn't report to the board with an independent voice.

Creating another position in the C-suite for IoT will simply confuse non-technical board members about an already confusing topic.  Will the CIoTO (??) be in charge of security or just implementation for IoT?  What about the obvious places where regular IT and IoT merge in operation?  We already see friction between IT and information security on issues of who is responsible for administering security tools like the SIEM.  Obviously this will just create more issues with no real benefit.

That being said, organizations should not neglect IoT.  IoT devices come with their own unique challenges.  The CIO and the CISO should both have advisors who can help them identify upcoming trends in IoT adoption and take appropriate action to maximize value to the organization.  Ignoring IoT is almost as stupid as having a Chief IoT Officer.

Is your organization adopting IoT devices?  Do you have specific policies governing IoT?  Is your organization considering a C-level IoT officer?  Leave a comment and let me know what you see coming for IoT.

Saturday, December 26, 2015

Cyber Santa results call out vendors for shenanigans

Thanks to @da_667 for publishing the Cyber Santa list this year.  While @krypt3ia's Cyber Krampus list celebrates "full spectrum cyber douchery," the Cyber Santa list was supposed to celebrate those in infosec who are being nice.


I really liked the way @da_667 put the list together.  I especially like that he called out Fortinet for their apparent misbehavior.  Vendors should have seen this wasn't an appropriate marketing tool.  They can't really expect that people are going to vote for a company or product in one of these lists.  Well, I guess it happens in Lee Whitfield's Forensic 4Cast, but it's sort of accepted there (why, I'm not entirely sure).

In any case, I loved the way @da_667 called out Fortinet on their apparent behavior:
Fortinet. Cyber Santa isn’t blind. He saw that you favorited da_667’s tweet linking the google form, then immediately filled out two tweets that look like they were fresh out of a fucking marketing pamphlet. Shame on you.
Vendor shenanigans like these are never welcome and tarnish our good profession.  Thanks to @da_667 for putting the list together and thanks to those vendors who kept it classy by not nominating their marketing brochures.

Friday, December 25, 2015

Merry Christmas, you've been phished

One of my clients was hit with an interesting phish last week.  It wasn't a threat to the organization, but it definitely was to the user who fell for it.  Before we blame the user for being dumb, note that the email came to the organization claiming to be from the user's bank.  The address was spoofed correctly, so the user only saw USAA in the from address.

Eek - it's a phish

The user said he examined the link he clicked on, but felt safe because it was an https link.  But this is where the user education fell short.  The user had been told to inspect the links he was clicking.  The training modules used by this organization did not specifically reinforce that users needed to ensure the links they clicked actually took them to the correct page.

The actual domain the user was redirected to had a lookalike of the USAA login page, complete with the PIN verification required by USAA.  The site itself was a compromised website where the attacker had taken up residence.  I won't victim shame here by sharing the actual domain used, but the user should have EASILY noticed this wasn't really USAA.
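For the curious, "inspecting the link" boils down to checking the host name, not the scheme.  The sketch below uses made-up URLs (not the actual phishing domain) to show the kind of check users should be doing in their heads, and that mail gateways can do automatically.

# Sketch: an https link proves nothing - check whose domain the link actually goes to.
# The URLs below are made-up examples, not the real phishing site.
from urllib.parse import urlparse

expected = "usaa.com"
links = [
    "https://www.usaa.com/",                           # legitimate
    "https://usaa-secure-login.example.net/verify",    # lookalike on someone else's domain
]

for link in links:
    host = urlparse(link).hostname or ""
    legitimate = host == expected or host.endswith("." + expected)
    print(link, "->", "OK" if legitimate else "SUSPICIOUS (host is %s)" % host)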

Results of a quick Google search show that McAlum is real

The victim reported that they felt a need to follow up on the phish since the message mentioned security problems on the user's account.  He said he actually Googled the sender, and when he found the sender really was the USAA Chief Security Officer, he felt the message was legitimate.  Good on the user for thinking, but the logic here is all wrong.  If he can Google to find the name of the CSO, so can any attacker.  A little critical thinking should have led the user to wonder why the CSO would be contacting him directly.  The CSO at a major bank probably doesn't work issues (even security issues) with individual checking accounts as the phish suggests.

At Rendition Infosec, we always recommend that clients educate their users on realistic phishing scenarios.  As infosec professionals, we owe it to our users to ensure that our scenarios are realistic and cover issues like this.  Otherwise, our users may get a nasty Christmas surprise in their stockings far worse than a lump of coal.

Thursday, December 24, 2015

Faulty jail software lets thousands of inmates out early

A critical flaw in software used to calculate release dates for inmates in the Washington State Department of Corrections released thousands of inmates early.  The software flaw was introduced 13 years ago.  Yes, you read that correctly - 13 years ago.  According to the DOC's own timeline of events, nobody there noticed the flaw at all for 10 years.  Incredible, just incredible.  The DOC was first made aware of the problem by a victim's family in 2012 and then failed to fix it... repeatedly.  This is a ridiculous state of affairs considering that it resulted in prisoners not serving their correct sentences.

Reading into the situation a little further, different sources report the average early release time as between 49 and 55 days.  Reportedly, only 3% of inmates were released early because of the flaw.  So the problem could definitely be worse, but the point to remember here is that it never should have happened in the first place.


There's another angle here that nobody has talked about yet.  Did any inmates serve too much time because of the error?  If so, they are entitled to compensation, even if they only served a day longer than they should have.  Furthermore, the governor has ordered that all releases under the program are suspended until the dates can be tallied by hand.  Under the circumstances, this seems reasonable.  That is, of course, unless you are supposed to be released and are now sitting in prison because the DOC spent three years failing to patch faulty software and is now relying on hand calculations (which are more likely to result in error than properly written computer code).  This is unreasonable and likely to result in civil lawsuits against the state and those specifically responsible for delaying the fixes.

Side note: this is an interesting sand table exercise for your organization.  Who, if anyone, might be named in a civil suit if delaying implementation of certain controls results in a lawsuit?  Attitudes change very quickly when individuals are named in lawsuits along with the organization they represent.

At Rendition Infosec, we always tell clients that when we smell smoke, it almost always means fire.  This software flaw - and the cavalier attitude towards security that left it unpatched for three years - definitely constitutes smoke.  I'll bet the money in my pocket that there are numerous other logic and security flaws in the software.  Logic flaws are always the hardest to detect since they require in-depth knowledge of the application.  But any organization that lacks the resources and leadership to implement a fix impacting public safety probably doesn't have a very good grasp on computer security either.

This is a great case where heads should roll.  I'm not generally for firing people in response to infosec fails.  Playing the blame game rarely sets the correct tone for a security program.  But in this case, I think we can make an exception.  If you know about a security fail, particularly one that impacts public safety, and fail to act, you should be out of a job.  Period.  End of story.  The failures here are ridiculous.  The WA DOC needs a top-to-bottom review of their information security programs to find out:
  1. What other vulnerabilities exist that were previously documented but not acted on
  2. What currently unknown vulnerabilities exist

DOC can't do this internally either.  It's clear that management there is incapable of prioritizing and acting on even the most grievous issues.  I hope that the state inspector general (or equivalent) mandates independent assessments across the DOC's information security programs to determine just how bad things are there.  Experienced auditors know that what we have seen here is likely just the tip of the iceberg.

I first became aware of this problem yesterday evening via a tweet from Greg Linares.  I immediately responded that Rendition Infosec would help with some free code reviews for the WA DOC if that's what they needed.  Yes, I think the overall state of affairs is that serious.  And I stand by that offer.  If you are with the WA DOC and want some help from true professionals, not a Mickey Mouse assessment, contact me.  I'm a pretty easy guy to find.

Wednesday, December 23, 2015

The lottery wasn't nearly as random as we thought

There's news that a former lottery security director, Eddie Tipton, used his trusted position to install rootkits and scam millions from a multi-state lottery.  Of course this is discouraging to hear since it tarnishes our infosec profession.  But it brings up a good point about defense in depth.  As NSA will more than happily tell you, insider attacks are a real thing.

At Rendition Infosec, we regularly see insiders contributing to security incidents, whether they are directly the cause (e.g. IP theft) or whether they are simply a means to an end (e.g. scammed into giving the attackers access).



It seems a little naive that the organization with the computers determining the numbers for a multi-state lottery didn't have better fraud detection in place.  We know that their fraud detection schemes were poor because it took the perpetrator getting greedy and buying a ticket himself to get the attention of the authorities.  And even then, it was only after he walked away from a $16.5 million jackpot that they began investigating.

There are obvious infosec angles here.  Good defense in depth, separation of privilege, and periodic external security audits would have likely stopped this long before Tipton was caught through his own hubris.  We've seen time and again that absolute power corrupts absolutely.  Internal security audits are good, but periodically you need an external set of eyes to validate that things aren't being missed.  The US Government, for all of its ineffectiveness, knows this.  It's why the Office of the Inspector General exists.

Additionally, the public purchased lottery tickets believing that they had a particular chance to win (based on published odds).  It's obvious that those odds were not correct based on the manipulation of the numbers by Tipton's rootkit.  I would be surprised if a class action lawsuit from those who purchased lottery tickets doesn't emerge here.  It's obvious they were playing a game with odds very different from those published.  A mathematician will tell me that even when the numbers were rigged, the odds didn't really change.  But the payouts did change when Tipton and accomplices fraudulently received payouts on their "winnings."

Finally, as someone who has built rootkits in the past, I can tell you it's not the easiest thing in the world to pull off.  Few have the capability to do it.  Another interesting thing to know would be whether Tipton had help in constructing the rootkit and if so, who helped him.  If anyone has access to a copy of this rootkit, I'd love to analyze it and will not attribute it back to you (unless you want me to).

Tuesday, December 22, 2015

Oracle settles with FTC over deceptive patching practices

Ever felt like Oracle may not have had your best interests in mind with Java?  Turns out you aren't the only one.  The FTC alleged that Oracle engaged in deceptive business practices by offering patching programs that installed new versions of Java, but failed to remove old versions.  Of course this left millions of machines vulnerable to attack even though consumers thought they were doing the right thing.  The FTC also alleges that Oracle knew this was a problem and did nothing.


Yesterday, the FTC reached a proposed settlement with Oracle.  But depending on who you ask, this doesn't go nearly far enough.  
Under the terms of the proposed consent order, Oracle will be required to notify consumers during the Java SE update process if they have outdated versions of the software on their computer, notify them of the risk of having the older software, and give them the option to uninstall it. In addition, the company will be required to provide broad notice to consumers via social media and their website about the settlement and how consumers can remove older versions of the software.
But how many consumers will truly understand what is being asked of them?  Do you really want to uninstall earlier versions?  In the SOHO market, uninstalling old versions may break software that consumers depend on, so it will be interesting to see how instructions to consumers are worded.  

One thing is for sure though: with Oracle put on notice by the FTC, we can expect other software vendors to take a careful look at their own software patching practices.  Failure to do so will clearly draw ire from the FTC.

Monday, December 21, 2015

Juniper Follow Up

There's more news in the Juniper compromise case, again giving indications about attacker motivation and possibly providing clues about attribution.  Note that although it seems likely that the two backdoors are the work of the same malicious actor, there is currently no conclusive evidence that this is the case.  CERT-Bund provided an analysis of the vulnerabilities and discovered that the vulnerability to leak VPN keys was present as early as 2013, while the backdoor password vulnerability didn't appear until 2014.

Juniper CVE Matrix

The timing of the vulnerabilities seems to imply that the attackers discovered that there were objectives that could not be accomplished through passive monitoring.  From an auditing perspective, the inclusion of the backdoor was quite a bit more risky than the VPN key disclosure.  Crypto is really hard to audit - authentication routines a little less so.  Of course, this indicates an active network exploitation program, but it seems like that would be nearly required for the type of access you would need to insert the backdoor in the first place.

Yesterday, I wrote about how this whole incident largely ends NSA's NOBUS doctrine.  Supposing that NSA was responsible for the VPN key leakage (and let me be the first to say I do not think they are), they might assume that nobody else would be able to intercept the data and make use of the keys.  They might feel justified in introducing this vulnerability, even though it makes them vulnerable too, based on the idea that exploitation would require the ability to intercept traffic - something far from trivial.

However, the backdoor is something else entirely.  VPN products, by their very nature, most often sit on the open Internet.  Successful compromise would put the attacker in a privileged position inside the corporate network.  Since anyone who can audit the firmware can discover the backdoor password and use it to gain access to the internal network, this represents a clear violation of the NOBUS policy.  If this source code change (specifically the backdoor) was performed by US Intelligence, it seems clear that they violated the law.  If I were Juniper, I'd be livid.  It's one thing to suck at programming and have vulnerabilities in your code.  Having what is very clearly a nation state insert them for you is so far over the line, I almost can't comprehend how mad I'd be.

The password itself is "<<< %s(un='%s') = %u" which was likely chosen because it appears to be a format string.  Format strings are regularly used in debugging routines to help developers figure out what happened at a particular location in the code.  In this case, the attackers appear to have been banking on a future code audit and their need to have the code blend in with other debug statements.  Of course, this string is being passed as an argument to the strcmp function instead of the sprintf or fprintf functions.  This should trigger some suspicion, but it's much harder to spot than something like 's00pers3kr3t' or some such.

It's worth mentioning that Fox-IT has released Snort rules to detect the backdoor password in use.  But honestly, you should probably not have SSH (and especially not Telnet) exposed to the Internet on your Juniper VPN/firewall.  If you do have these open, you can count on constant scans and connection attempts.  These best practices haven't stopped at least 26,000 people from exposing them, according to Shodan queries.  There's even a Python script for locating vulnerable NetScreen devices on the Internet.  However, I should note that you are probably exceeding your legal authority by using it, since it tries to log in with the backdoor password using the system user.

Any use of the password would indicate possible exploitation attempts.  At Rendition Infosec, we recommend that our clients add the telnet password rule to their IDS products regardless of whether they have Juniper devices in the network.  Any hits on the password would indicate an exploitation attempt and would provide valuable threat intelligence to the organization.
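If you want to sweep existing captures rather than wait for new IDS hits, a quick scapy sketch along these lines (the capture filename is hypothetical) will find the backdoor string in clear-text Telnet sessions.  SSH sessions are encrypted on the wire, so for those you're better off with the published IDS rules and device-side log review.

# Sketch: search a packet capture for the ScreenOS backdoor string in clear-text
# traffic (Telnet).  Requires scapy; the capture filename is hypothetical.
from scapy.all import rdpcap, IP, TCP, Raw

BACKDOOR = b"<<< %s(un='%s') = %u"

for pkt in rdpcap("perimeter.pcap"):
    if pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt.haslayer(Raw):
        if BACKDOOR in bytes(pkt[Raw].load):
            print("Possible backdoor login attempt: %s -> %s" % (pkt[IP].src, pkt[IP].dst))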

A close inspection of the password appears to reveal the string "Sun Tzu" - the author of the famous Art of War.  That doesn't do anything for attribution, but it does show that the attackers have a sense of humor.  Speaking of humor, this linked video wins the Internet today.

Sunday, December 20, 2015

New Juniper hack should end NOBUS argument forever

Former NSA Director General Michael Hayden outlined the concept of "NOBUS," which he said is "useful in making macro-judgments about vulnerabilities."  NOBUS is short for "nobody but us."  It means that a particular capability, exploit, or technique is so complicated that nobody but NSA could have pulled it off.  Maybe it requires some sort of special access to accomplish.  It may be that the technique required some sort of advanced system to develop.  Or maybe the discovery of the vulnerability was pure chance and NSA feels the odds that someone else would discover it are astronomically small.  They use the term NOBUS to explain that although the particular technique/exploit/capability would be devastating if used against US interests, there's nothing to worry about because nobody else could pull it off.


The recent Juniper vulnerabilities are especially concerning though when it comes to the NOBUS argument.  In fact, they completely destroy the argument.  The US has already confirmed they are not behind the attack, and I believe them.  But if not the US, then you have to ask who would be behind such an attack?

We can gain some insight by first considering that the hack was very likely performed by a government-sponsored intelligence service.  Criminals are generally playing a short game when it comes to cyber crime: they compromise a network, exfiltrate data, and make use of that data.  Planting backdoors in Juniper source code in the hope that they would eventually make their way into the product base may take years to come to fruition.  And that's without counting the effort required to penetrate Juniper's code base and remain hidden.  And did they ever remain hidden.  Based on a review of the affected platforms, the attackers have had the backdoor in place since 2012.  More than three years of internal code reviews failed to discover the backdoor code.

But the vulnerabilities themselves are also telling.  In particular, CVE-2015-7756 could only really be exploited by an adversary with the ability to intercept VPN traffic.  If you don't have the VPN traffic, then the keys to decrypt it are meaningless.  The fact that this change was made to the Juniper code base implies that whoever did it has the ability to intercept VPN traffic en masse and the ability to process the decrypted traffic.

In the infosec community, we often talk about how any country with good talent can start a network exploitation program.  But the resources required to capitalize on the Juniper hack really limit the number of possible suspects.  I won't speculate on who I think is responsible for the compromise, but since it wasn't the US, I think we can officially put the NOBUS argument to bed... forever.

Saturday, December 19, 2015

Hacking Linux with the backspace key?

Yesterday, I heard news that "anyone can hack into any Linux system just by pressing the backspace key!!!"  Of course I was skeptical, but weirder bugs certainly have happened before.  I was intrigued to check this one out since I figured it could be useful in a future penetration test.


At Rendition Infosec, we like to advise clients about realistic risk.  Is CVE-2015-8370 something you should drop everything to fix?  The bug is interesting and probably breaks security models for many high security environments.  But the first important thing to note is that it only matters if the attacker has physical access to your Linux machine of interest.

In order for an attacker to exploit CVE-2015-8370, they must not only have physical access, but they must also reboot the machine.  Will you notice your machine being rebooted?  You should.  Will there be logs of the user exploiting the vulnerability?  Unfortunately, no.  The vulnerability occurs before the system logging function (syslog) is available during boot.  But the attacker doesn't just need to reboot - they have to take the system into a recovery mode and install malware.

All of this takes time.  The researchers who found the vulnerability show a proof of concept that demonstrates this attack, but I take serious issue with the suggestion that APT attackers will somehow gain physical access to exploit this vulnerability (or that they even need to).  So it's not just a reboot you have to miss, but an installation that will likely add at least a minute (possibly several) to your boot time.

But before you run for the hills and drop everything to patch this, note that this vulnerability only helps the attacker bypass the boot loader password.  If you don't have a BIOS/EFI password protecting your boot sequence, this doesn't matter.  If you don't have a password on the boot loader (GRUB), this doesn't matter.  Most organizations we work with at Rendition Infosec don't have BIOS passwords or boot loader passwords on their Linux machines (most of which are servers in restricted access areas).  This isn't to say that we don't recommend it.  But these are required before you even need to think about this vulnerability.  In other words, the attacker can ALREADY do everything in the POC if you don't implement BIOS/EFI and boot loader passwords.

But let's call a spade a spade - if your server reboots and the SOC and ops teams don't notice it, someone probably needs a resume update, don't they?

For those looking for more information on MS15-130, I'll post it tomorrow or early next week.  I was getting too many calls about this bug and wanted to set the record straight today.

Friday, December 18, 2015

MS15-130 exploit likely in coming days

TL;DR - Rendition Infosec recommends that if you haven't patched yet (and many of our clients have not) make sure you do it.  At least patch the MS15-130 vulnerability if you are running Windows 7 or Server 2k8R2 (no other versions are impacted).  Do it now.  Font parsing vulnerabilities are huge.  An attacker could send you a PDF, an office document, a web page, or an HTML enabled email.  Basically anything that can trick your system into loading a custom font.  This has the potential to result in remote code execution.

A trigger for the MS15-130 vulnerability was made publicly available as early as yesterday.  I'm again teaching this week, so I can't devote the time to it I'd like to.  But unlike MS15-127 that I wrote about last week, this particular vulnerability now has a trigger available, simplifying any exploit that would be created.

The vulnerability is reported to be in usp10.dll.  This DLL handles some unicode font parsing.  I'll post patch differentials and maybe work on this a little bit tomorrow.  I'm wrapping up the new SANS CTI class (FOR578) today and have to host DFIR Netwars tonight at SANS CDI.

Tuesday, December 15, 2015

Holy HIPAA settlement Batman!!

At Rendition Infosec, incidents involving phishing are some of the most common that we work.  Let's face it, it's a good way to gain initial access to an organization - and your attackers know it.  The most recent OCR HIPAA settlement shows the monetary damage that phishing can cause, particularly in healthcare organizations.  Of course phishing damages aren't limited to healthcare.  But one thing I particularly like about this OCR settlement is that it assigns a concrete cost to damages that can be caused by phishing.

In its press release on the settlement, OCR details that the source of the compromise was a malicious attachment downloaded through email - i.e., phishing.
...the electronic protected health information (e-PHI) of approximately 90,000 individuals was accessed after an employee downloaded an email attachment that contained malicious malware. The malware compromised the organization’s IT system...
But the release goes on to highlight the need for comprehensive risk assessments, noting that all parts of the organization and partners must be addressed in the risk assessment.
OCR’s investigation indicated UWM’s security policies required its affiliated entities to have up-to-date, documented system-level risk assessments and to implement safeguards in compliance with the Security Rule.  However, UWM did not ensure that all of its affiliated entities were properly conducting risk assessments and appropriately responding to the potential risks and vulnerabilities in their respective environments.
OCR had some additional guidance in the release concerning risk assessments, specifically that they extend past the EHR to all areas of the organization.

“All too often we see covered entities with a limited risk analysis that focuses on a specific system such as the electronic medical record or that fails to provide appropriate oversight and accountability for all parts of the enterprise,” said OCR Director Jocelyn Samuels.  “An effective risk analysis is one that is comprehensive in scope and is conducted across the organization to sufficiently address the risks and vulnerabilities to patient data.”
Whether you are in healthcare or another vertical, you have to get risk assessments right.  When you document the requirement, you document the recognition that getting risk assessments correct is a necessity.  But you actually have to do what you document.  Any failure to do so is a liability.  If you need help with risk assessments, find individuals that specialize in infosec to help you.  The cost of failure, as demonstrated by the latest OCR settlement, is too high for you to get it wrong.

Monday, December 14, 2015

Another hospital hit with malware

At Rendition Infosec, we've been following the recent rash of hospital disclosures surrounding PHI.  Only a few years ago, it seems that most legal teams felt the disclosure standard was confirmed PII or PHI disclosure to third parties.  Recently, we've seen that malware installed on machines used to enter or process PII or PHI seems to trigger a notification.  This is true even when there is no confirmed loss of PHI or PII - the simple possibility of illicit disclosure seems to trigger the notification.

A notification by an Ohio hospital chain in November seems to highlight this case perfectly.  They identified malware on multiple machines.  They were originally notified by the FBI that they had a problem, indicating that they lacked the internal monitoring capability to detect the breach themselves.  In this case, the legal decision may have been made to disclose the breach based on the lack of information available about what data the malware may have exfiltrated.  In any case, it's yet another disclosure from a major health care organization (UCLA led the way with notifications earlier this year) where there was no evidence of the threat actor accessing or exfiltrating data.

To make matters worse, according to the notification the hospital was already infected with the malware when it was acquired in July 2015.  This points to another common problem we see at Rendition - inadequate due diligence performed during mergers and acquisitions (M&A).  During M&A, getting valuation correct is critical.  But after a breach, the valuation of a company can change drastically.  Further, the acquiring company is stuck footing the bill for any incident response, credit monitoring, and other costs associated with the breach.

Many healthcare providers have chosen to disclose without direct evidence of theft or misuse.  What does this mean for others in a similar situation?  I am not a lawyer, but I know that legal precedent matters.  Of course you should consult with your legal team when deciding whether or not to disclose.  But as the infosec professional in the organization you should also be assisting them in understanding whether there are similar cases where organizations have already made decisions.  These may be viewed as industry standard and influence the decisions of the legal team.

Friday, December 11, 2015

MS15-127 - notes from limited examination

I'm on the road this week with a Rendition Infosec client so I haven't really had time to get too deep into MS15-127.  But I have a few notes I think can be moderately useful in understanding the scope of the vulnerability.

This post explains how to disable recursion entirely, but not explicitly for DNAME records.  The relevant information from the post is pasted below. If you want to disable recursion on the server, use the '1' value.
dnscmd <ServerName> /Config /NoRecursion {1|0}
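To confirm from the network side that recursion really is off, you can check whether the server still sets the RA (recursion available) flag in its responses.  A quick dnspython sketch (the server address is a placeholder) looks something like this:

# Sketch: check whether a DNS server advertises recursion (the RA flag).
# Requires dnspython; the server address is a placeholder.
import dns.flags
import dns.message
import dns.query

server = "192.0.2.53"
query = dns.message.make_query("www.example.com", "A")
response = dns.query.udp(query, server, timeout=3)

if response.flags & dns.flags.RA:
    print("%s advertises recursion - double check your configuration" % server)
else:
    print("%s does not advertise recursion" % server)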
I was originally fearful that Windows Server 2003 would be vulnerable with no patch available, but it does not appear that 2003 supports DNAME records.  In fact, this KB corrects errors with DNAME processing without really adding support.  I haven't looked at the code for dns.exe, but I think we're safe there.

So back to the vulnerable servers.  Where do the code changes lie?  Well, running the patched and unpatched binaries through BinDiff, we see very few changed functions.

The changed functions certainly seem to imply that DNAME records are to blame.

Answer_PerformDNAMESubstitution function changes
Above are some example changes in Answer_PerformDNAMESubstitution - notice the calls to lock and unlock the database that have been added.  This seems to be a common theme with the patch. 

processCacheUpdateQueryResponse changes
The changes to processCacheUpdateQueryResponse look very similar. The database is locked, then _Recurse_CacheMessageResourceRecords is called.  Note that sometimes Microsoft adds extra changes to a patch just to throw off reverse engineers.  But this looks pretty solid.  My point here is that while it appears that both DNAME caching and recursion are required to exploit this, it's possible that one of these is a decoy.

processRootNsQueryResponse
Here are more changes, these from processRootNsQueryResponse.  Again, we see the use of _Dbase_LockEx and _Dbase_UnlockEx in the code.
Gnz_ProcessCacheUpdateQueryResponse
Above is a portion of the changes in Gnz_ProcessCacheUpdateQueryResponse - this function again adds the lock and unlock code.

I know there are a lot of people working feverishly to get a consistent exploit trigger in DNS.exe.  This of course is the first step in getting a working exploit.  I know that Greg Linares has a crash in dns.exe as well, though I don't know that it is reliable or the conditions around it.

I usually have some recommendations in my posts, so I'll try to part with a few here. If you aren't using recursion on your DNS servers, turn it off.  Of course, patch.  For future architecture decisions, I think this shows why we should not put DNS and AD on the same server.  A bug in DNS can lead to complete compromise of your DC.  As for detection, I need to look in some of my network logs to understand how common DNAME record responses are.  I don't think they are common, and if so, they may offer a detection method.  I would also suggest limiting outbound communications from your DNS/AD/*any* server - that way even if you are exploited you make the bad guy's job that much harder.

Wednesday, December 9, 2015

Use prepared statements or... use prepared statements

While performing security assessments at Rendition Infosec, one of our most common findings is SQL injection.  One great way to cut down substantially on SQL injection is to use prepared statements.  Note: I didn't say that prepared statements will eliminate all SQL injection, but it's a critical first step.  This is particularly true for complex injection types, such as second-order SQL injection.

But refactoring code is hard and many customers will try to eliminate only those bugs that were identified in a security audit.  That of course is a good start, but stopping there is a mistake.  Code that does not use prepared statements will likely cause problems eventually.  Trying to put patches on your code will cause problems too.  It's like propping up a rotting floor joist instead of actually replacing it.  Bottom line: even if the program isn't currently vulnerable, it is too easy for a future code change to create a vulnerable condition.  Prepared statements make this substantially less likely.

So to companies unwilling to use prepared statements and refactor their code, I say this: use other prepared statements.  Coordinate with your PR team and have them prepare statements (see what I did there?) explaining why you were compromised when you could have fixed the issues found in the code audit earlier.  Basically, if you don't use prepared statements in your SQL queries, expect to be exploited.  Don't get caught unprepared to handle the media backlash when you lose customer information or regulated data.
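For anyone who wants a concrete picture of the difference, here's a minimal sketch using Python's built-in sqlite3 module.  The same idea applies to whatever driver your application actually uses - the point is that a bound parameter is treated strictly as data.

# Minimal sketch: string concatenation vs. a parameterized (prepared) statement.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"    # classic injection payload

# Vulnerable: attacker-controlled input becomes part of the query text.
rows = conn.execute("SELECT role FROM users WHERE name = '" + user_input + "'").fetchall()
print("concatenated query returned:", rows)     # leaks the admin row

# Safer: the placeholder binds the input strictly as data.
rows = conn.execute("SELECT role FROM users WHERE name = ?", (user_input,)).fetchall()
print("parameterized query returned:", rows)    # returns nothing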

Monday, December 7, 2015

Junk code makes reversing a pain

Warning: I know some who follow this blog are looking for deep tech, others aren't.  If you are not a techie, this post isn't for you (sorry).

I was looking at some malware the other day and found an interesting use of junk code.  I've seen this a lot over the years, but this particular sample is just filled with garbage code that does nothing.  Why add junk code?  It makes the file bigger, and usually bigger files mean less stealth.  But in the case of junk code, we see superfluous instructions that the processor just executes to get to the good stuff.  The original program instructions are carefully interlaced with the junk code, making it harder for the reverse engineer to understand what is going on.

In other cases, junk code never executes but it may not be obvious from static analysis that the code will not be used.  The whole point of the junk code is to complicate analysis in the hopes that you will give up before understanding the true meaning of the code.
Junk code
Now this is far from the most exciting example of junk code insertion I've ever seen - this one doesn't even try to break the disassembler.  But the sheer quantity of junk code inserted is interesting.  The basic block we examine above has more junk instructions than real ones.

The first clue that we are looking at junk code is the combination of additions and subtractions to the ESP register.  Normally, we subtract from ESP to create space on the stack for local variables.  Adding to ESP is used to clear space for local variables before the function returns or to clear function arguments after the return.  But in this case, we see an add followed immediately by a subtraction - no net change to ESP at all.
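If you run into a sample like this, it's easy enough to flag the pattern programmatically.  Here's a rough capstone-based sketch; the code bytes are a made-up stand-in, not taken from the actual malware.

# Rough sketch: flag back-to-back add/sub adjustments to ESP that cancel out.
# Requires capstone; the code bytes are a made-up stand-in, not the real sample.
from capstone import Cs, CS_ARCH_X86, CS_MODE_32

code = bytes.fromhex("83c408"   # add esp, 8   (junk)
                     "83ec08"   # sub esp, 8   (junk)
                     "31c0"     # xor eax, eax
                     "c3")      # ret

md = Cs(CS_ARCH_X86, CS_MODE_32)
insns = list(md.disasm(code, 0x401000))

for prev, cur in zip(insns, insns[1:]):
    if ({prev.mnemonic, cur.mnemonic} == {"add", "sub"}
            and prev.op_str.split(",")[0] == "esp"
            and prev.op_str == cur.op_str):
        print("Likely junk at 0x%x: %s %s / %s %s" %
              (prev.address, prev.mnemonic, prev.op_str, cur.mnemonic, cur.op_str))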

The additions to EBX, on the other hand, do change the value of EBX.  These would be very interesting indeed, except that the EBX register is not used anywhere in the function.  There are many other junk code obfuscations in the program.  I'll detail some more in future posts.

Friday, December 4, 2015

What does phishing look like?

At Rendition Infosec, we regularly get asked by clients what a good phish looks like.  I wanted to share an example for reference.  For most infosec professionals, this won't be very exciting but it is worth having a concrete reference available in case your executives ask to see a phish.

This PDF was delivered to one of our clients at Rendition Infosec.  The client said we were free to publish some details for educational purposes since it was not specifically targeting them.  The PDF file itself does not contain any malicious content and does not contain any JavaScript, meaning that it will bypass antivirus scanners with ease (since it is technically not infected).  However, the PDF does contain a link and a vaguely threatening message telling the employee to click and submit information or risk their account being suspended.  The PDF was named "Notice from IT department" and sent in an email with the subject "Important notice from IT."

Text of the phishing PDF
A couple of things to note here.  First, there is no personalization, contact information or company logo.  This indicates a lack of targeting and means the file was probably sent to multiple organizations.  The second thing to note is that the document uses multiple fonts.  This probably indicates that it was edited from some template (and poorly I might add).
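If you want to do a similar first-pass triage on a suspicious PDF yourself, counting a handful of telltale name objects goes a long way (this is essentially what Didier Stevens' pdfid does, and his tool handles far more edge cases).  A bare-bones sketch, with a hypothetical filename:

# Bare-bones PDF triage: count name objects tied to JavaScript, automatic actions,
# and links.  The filename is hypothetical; this ignores compressed object streams,
# so a zero count is not proof of a clean file.
markers = [b"/JavaScript", b"/JS", b"/OpenAction", b"/AA", b"/Launch", b"/URI"]

with open("notice_from_it_department.pdf", "rb") as f:
    data = f.read()

for marker in markers:
    print("%-12s %d" % (marker.decode(), data.count(marker)))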

The link takes victims to a page on a shared hosting site at jimdo.com where the attackers have set up a fake web site.  Users who click on the link will be shown this site.

"OWA" login form
Now of course this is unlike any real Outlook Web Access site I've ever seen.  But hey, who am I...  Seriously, you should be educating your employees that if the login site for a service ever changes to something they don't recognize, they should expect to hear from IT before the change.

Now, if your users actually try to type something in, they should note that in no way is the password obfuscated (or blanked out).  This is really lazy on the part of the attackers, but not everyone can be a winner.  Also, employees should note that this is not an HTTPS site and become suspicious there.

Should the password be visible here?
Upon submitting the form, users are presented with the following response, letting them know that their response was received.  This again should be a big red flag.  Wasn't this supposed to have been an Outlook Web Access login?!

Wasn't this supposed to be a login form? Where's my email?
Nothing in this blog should be earth shattering to any infosec professional, but if you want to pass this along to your less technical friends to show them a concrete example of a common phishing attempt involving credential harvesting, feel free.

Wednesday, November 25, 2015

Doing Infosec research? The FBI will take that, thanks... (well, maybe)

You've probably heard about the TOR hidden service unmasking that was used by the FBI to bring down Silk Road and other illegal services hiding on TOR.  There was later some speculation that FBI paid Carnegie Mellon University (CMU), a Federally Funded Research and Development Center (FFRDC) more than $1 million for research into unmasking TOR hidden services.  Needless to say, folks over at the TOR project are really mad about this.  The assertion also made CMU look bad.  Both the FBI and CMU have denied that CMU was paid specifically to decloak TOR users.

You may have also heard that CMU released a statement sort of hinting that they weren't specifically paid for the TOR research.  They don't really come out and say this, only suggest that's the case.  Whether you believe this or not doesn't really matter.  The implication is chilling.

It's probably also worth noting that CMU researchers submitted to speak at Black Hat, but withdrew their presentation.  This was probably not the result of a subpoena.  FFRDCs really do depend on their federal funding, and giving away secrets the FBI is currently using is probably a bad way to secure that funding on a continued basis.

If CMU is truly alleging that they turned over the research based on a subpoena and CMU has no special relationship with the FBI/DoJ, then the effects for infosec researchers are potentially chilling. Not everyone wants to turn over their research to the FBI.  For some, the reasons are financial - they may plan to turn their research into a product for sale.  Others may simply not wish to assist law enforcement with a particular case (or any case).

As a security researcher, I for one would feel more comfortable if the FBI disclosed what help they received from CMU and how they received it.  I actually don't care if the FBI paid CMU for their help in de-cloaking TOR users (as some have suggested).  I just want to know what my exposure as a security researcher is if the FBI wants my research to help with a case.  Is this something they can just take from me if it will help them solve a case?  If a subpoena truly was required for CMU, it is somewhat unlikely that CMU fought it very hard given their status as an FFRDC.  Many companies who don't have to worry about receiving FFRDC funding may have a different view on fighting a subpoena.

In the end, I don't care what the arrangement between CMU and the FBI was.  But as a security researcher, I would like to know what my liability and exposure are if the FBI wants my research.  Because quite frankly, I'd be comforted to know that at least CMU was paid.  The idea that the FBI can just subpoena my research because it's of use in a criminal investigation is a little terrifying.

Friday, November 20, 2015

Encryption backdoors are a fool's errand

It's no secret that the government wants encryption backdoors. But will they really help in the fight against terror?  A convincing argument can be made that implementing encryption backdoors will actually hurt intelligence operations.

Today it seems reasonable to assume that any well-funded intelligence organization has some practical attacks against modern mainstream cryptography.  What they likely don't have is unlimited time and resources to reverse engineer thousands of new custom cryptographic implementations.  But if we backdoor cryptography, that's exactly what we'll get.

More custom crypto - is that a good thing?
The very people who lawmakers want to spy on with encryption backdoors will turn to custom solutions. Cryptographers may initially say this is good. The road is littered with the bodies of the many who have tried and failed to implement secure custom encryption systems. In tech we even have the phrase "don't roll your own crypto."

At Rendition Infosec, custom crypto implementations (and poor configurations of existing implementations) are always findings in any software assessment. In the current landscape, where good crypto is unbreakable by anyone other than dedicated intelligence organizations (and even then, only with significant effort), rolling your own crypto is almost always a mistake.

Almost always a mistake... That is unless you KNOW that your adversary can circumvent existing solutions.  In that case, rolling your own crypto is the only sane thing to do.  If we change the playing field, we must expect the players to adapt.

But will we get better intelligence?
Placing backdoors in encryption will lead to some intelligence gains. But they will be among low-level assets of little intel value who practice bad OPSEC.  Those with good OPSEC (the really dangerous guys) will turn to custom solutions developed in house.  Terrorists will employ mathematicians and computer scientists who share their views to help develop custom crypto solutions.  There's some evidence that Al Qaeda is already doing this.

While custom implementations may have structural weaknesses, they won't be standards based.  Each solution will require dedicated reverse engineering, and this is hard work.  Remember, we aren't talking about standards based crypto here.  EVERYTHING has to be reverse engineered from the ground up, totally black box. If they roll out new algorithms periodically, all the worse for intelligence agencies trying to monitor the terrorist communication.

What then? Just exploit the endpoint so we don't need to reverse engineer the custom crypto?  Yes, that sounds like a good plan... or does it? If we can reliably attack terrorist endpoints today, then why worry about backdooring encryption?

Do something, do anything
In the wake of any terrorist attack, it's easy to feel like something must be done.  We run on emotion, and doing anything feels better than doing nothing. But remember, that's how we got warrantless bulk collection programs in the first place. Perhaps "something" should be better sharing of intelligence data. According to open source reporting, several of the Paris attackers were on US intelligence watch lists.

Closing Thoughts
I'm not a terrorism expert. But I am a computer security expert.  I know that putting backdoors in encryption won't serve the original stated goal of monitoring only terrorist organizations. And that's because terrorist organizations won't use the backdoored cryptography.  They'll know better. But after the expense of the backdoor projects, intelligence agencies will be forced to show results.  They'll use the capability for parallel construction, much like the DEA has been revealed to have done with other data.  In the end, the crypto backdoors will only make all of our communications weaker.

Wednesday, November 18, 2015

Kerberos silver tickets - unique attacker persistence

Last year there was a lot of talk about Golden Tickets in Kerberos allowing attackers to regain domain administrator if the KRBTGT account password in the domain wasn't changed.  There was less fanfare about silver tickets since they don't offer unrestricted domain access, only unrestricted access to particular service accounts on a specific machine.

However, at Rendition Infosec we have recently observed an attacker using Kerberos silver tickets for long term access.  The silver ticket relies on compromising credentials for a particular service account (or more correctly, the hash for that service account).  In the case we observed, the attackers used the computer account password hash to create tickets with admin rights on the local machine.  This means that even when local admin passwords are changed, the attacker can still access the machine as admin by using the machine password hash.

Doesn't the machine password change?
By default, the machine password should change every 30 days.  However, machine password changes are recommendations and are not enforced by the domain.  In other words, even if the domain policy says to change the password every 30 days, a machine can go for years without its password changing, with no impact on its operation.  Also, it's up to the machine to change its own password.

Gimme an IOC
On the machines where we observed this behavior, we saw the attackers updating a registry value to ensure that the machine password would never be updated.  At this time, we believe this to be a reliable indicator of compromise and have not observed it on machines that were not under active attack.  If you have seen this set elsewhere, please comment on the post or touch base out of band (jake at Rendition Infosec).
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters
DisablePasswordChange = 1
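At scale you'll want to sweep for this with your EDR or a registry query pushed through GPO, but for a one-off check on a suspect host, a small Python sketch works fine:

# Sketch: check the Netlogon DisablePasswordChange value on a Windows host.
# A missing value (or 0) is the default; 1 means machine password changes are disabled.
import winreg

KEY = r"SYSTEM\CurrentControlSet\Services\Netlogon\Parameters"

try:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
        value, _ = winreg.QueryValueEx(key, "DisablePasswordChange")
except FileNotFoundError:
    value = 0    # value not present, so the default applies

print("DisablePasswordChange =", value)
if value == 1:
    print("WARNING: machine password changes are disabled - investigate how this was set")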
Hopefully you find this useful in hunting your favorite neighborhood APT.

Tuesday, November 17, 2015

Auditing your Linux libraries

TL;DR just because you patched libpng doesn't mean that your server isn't still vulnerable

Last week I posted about the need to audit the libraries that are used by your software.  In that post, we talked about the fact that Dropbox was shipping a version of OpenSSL with numerous public vulnerabilities.

The post could not have been much more timely.  It turns out that vulnerabilities were recently discovered in libpng.  In my minimal searching, I’ve been unable to locate any Windows software using libpng, but I’m betting it’s out there somewhere.  However, libpng is used in numerous open source projects on the Linux platform.  That’s right, it’s time again to check your libraries. 

But how do you know on Linux whether your application uses libpng?  That’s where the ldd tool can come in tremendously handy.
root@kali:~# ldd -r /usr/bin/yersinia 2>/dev/null |grep libpng
    libpng12.so.0 => /lib/i386-linux-gnu/libpng12.so.0 (0xb6875000)
Rendition Infosec has put together a simple bash script to help you check your system for binaries that dynamically load libpng.  Certainly you could put together a MUCH more robust script, but this will suffice for most users.
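If you don't have that script handy, a rough Python equivalent looks something like the sketch below.  The directory list is just an example (not the Rendition script); broaden it for your environment.
import os
import subprocess

SEARCH_DIRS = ["/bin", "/sbin", "/usr/bin", "/usr/sbin"]  # example paths - adjust as needed

def links_libpng(path):
    # ldd reports the shared libraries a binary loads; look for libpng in the output.
    try:
        result = subprocess.run(["ldd", "-r", path], capture_output=True, text=True, timeout=10)
    except (OSError, subprocess.TimeoutExpired):
        return False
    return "libpng" in result.stdout

for directory in SEARCH_DIRS:
    if not os.path.isdir(directory):
        continue
    for name in sorted(os.listdir(directory)):
        path = os.path.join(directory, name)
        if os.path.isfile(path) and os.access(path, os.X_OK) and links_libpng(path):
            print(path)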

You need not patch all of these applications; simply updating the library will suffice in most cases.  But is the system secure then?  Probably not…

There are two additional cases you should test for.  Unlike Windows, Linux supports deleting library files that are currently in use.  Also unlike Windows, your Linux server has probably been up for months or years.  This means that you might have a library that has been updated on the system (it might even be deleted) but it continues to be loaded into a long running application (like a system service).

To check for this case, examine /proc/<pid>/map_files.  It is important to note that even if you remove the old library from the system, the library isn’t really gone until there are no dangling references to it.  This is true even if you can’t see it on the filesystem.  Examining /proc/ will help you determine which processes must be restarted to take advantage of the new library.  Rendition Infosec has put together a quick bash script to help you identify processes currently running that need to be restarted to take advantage of the library upgrade.
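As a rough sketch of the same idea (again, not the Rendition script), the check can be expressed in a few lines of Python.  This version greps /proc/<pid>/maps rather than map_files, since the kernel flags mappings whose backing file was removed with "(deleted)" there, which is easy to match on.
import os

for pid in filter(str.isdigit, os.listdir("/proc")):
    try:
        with open(f"/proc/{pid}/maps") as f:
            maps = f.read()
    except OSError:
        continue  # process exited or we lack privileges
    for line in maps.splitlines():
        # The kernel marks mappings whose backing file was removed with "(deleted)".
        if "libpng" in line and "(deleted)" in line:
            print(f"PID {pid} still maps a deleted libpng - restart this process")
            break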

The other case to consider is that libpng might be statically linked into an executable.  This is unfortunately more difficult to discover.  Running strings on the binary and checking for libpng (and libpng related strings) is one option for checking.  Another is checking configure scripts for your software (if you build from source).  Given the flexibility of dynamic shared libraries, it is increasingly unusual to see statically compiled programs.
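As a quick first pass at the static linking case, you can search a binary's bytes for the version banner libpng normally embeds.  This is only a heuristic sketch; a statically linked copy with its strings stripped would evade it.
import sys

def maybe_static_libpng(path):
    # libpng builds typically embed strings like "libpng version 1.2.50".
    with open(path, "rb") as f:
        data = f.read()
    return b"libpng version" in data

for path in sys.argv[1:]:
    if maybe_static_libpng(path):
        print(f"{path}: possible statically linked libpng")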

Hopefully this post helps you keep your job (or look like a rock star) when the next Linux library vulnerability is released.

Sunday, November 15, 2015

How securely will presidential candidates handle your data?

If you haven't been living under a rock recently, you know that the presidential candidates can't talk enough about cyber security and how committed they are to it.  But mostly they talk about making big corporations accountable for breaches, securing health care data, and making China/Iran/Whoever pay for stealing US intellectual property.  Then they usually say a word or two about the proliferation of cyber weapons and how we'll have to "do something" about that.

All of that is great - but how secure are the candidates themselves today?  Johnathan Lampe at Infosec Institute has already covered how secure the candidates' websites are, and the results are pretty abysmal.  Most candidates can't even lead their own campaigns, free of US government bureaucracy, to properly secure their own websites.

But when it comes to insecurely handling the data of the people that work for you (or want to), that's where I draw the line.  If a candidate can't get it right inside their own campaign, I seriously doubt their ability to secure our data once they are elected.  You can argue that it's not the candidate's job to secure the data.  But that's a hollow argument.  The judgment they exercise in selecting staff today is the same judgment they'll exercise in selecting appointees after they are elected.

To that end, I note that Hillary Clinton's campaign has done a terrible job in their intern application process by using an HTTP only page to have intern applicants upload potentially sensitive data.


I'm sure that in response to this blog post, changes will be made to the site, so I've chosen to document it with a screenshot here.

I do a lot of interviews for tech companies looking to recruit top tier cyber talent.  I can assert that candidates, particularly those right out of college, put an amazing amount of private information in their resumes.  Sometimes this includes date of birth, social security number, home address, etc.  In essence, more than enough data to steal the identity of an applicant.  Unfortunately, the people most likely (in my experience) to upload their sensitive data are the very people Clinton is trying to attract - college students.  They are also the most likely to upload their data over public, unencrypted wifi, since many lack dedicated internet access otherwise.

I know there are those who will feel like I have an axe to grind with this post - I do not.  When shenanigans like these are observed, it is our duty to call them out regardless of political leanings.  Furthermore, in Clinton's specific case, this is another example of poor judgment surrounding IT security (her private email server with RDP exposed to the Internet is the prime example).

I hope Clinton's campaign issues apologies to all those who applied for internships and takes steps to resolve this mis-handling of personal data.


Tuesday, November 10, 2015

The libraries you use can make you vulnerable too

TL;DR - I'm not saying Dropbox is vulnerable to denial of service based on its use of out of date OpenSSL libraries.  We didn't verify that and have no plans to do so.  This is more a cautionary tale for businesses who haven't thought of auditing their apps (or the libraries they depend on).

Audit your apps
So you've audited your application and found no vulnerabilities.  And that's good - you're in the top n% (where n < 10) of security programs out there if you are doing real security auditing on your applications.  But it turns out that it's about more than just the code your developers write.  It's the libraries your code uses too.

Vulnerable History
There's a rich history of applications being vulnerable through the use of vulnerable libraries.  For example, with CVE-2014-8485 all the reporting seemed to suggest that the strings program was vulnerable.  But that wasn't really the case.  Rather, it was libbfd that actually had the vulnerability - strings was simply using libbfd.

Another obvious example is Heartbleed.  Although the vulnerability was in OpenSSL, applications that used a vulnerable version of OpenSSL (compiled with DTLS heartbeats enabled) were also vulnerable.

Heartbleed has been patched - no more OpenSSL patches?
Unfortunately, Heartbleed wasn't the last vulnerability associated with OpenSSL.  There have been a number of OpenSSL vulnerabilities in 2015 alone.  Of those, CVE-2015-0209 is probably the most concerning since it is a use after free.  So far, there is no publicly working code execution exploit for it (and a working exploit might not be possible).  But that's not to say that the recent OpenSSL vulnerabilities don't matter.  Several are Denial of Service (DoS) vulnerabilities that abuse a null pointer dereference.  This means that software using a vulnerable version of OpenSSL would crash if one of these vulnerabilities were exploited.

Poor Dropbox...
Why am I talking about libraries now?  I came across this last week looking at an implementation of Dropbox for a client of Rendition Infosec.  As you may know, Dropbox and I have a rich, sordid history with Dropsmack.

While investigating the current version of Dropbox (3.10.11), we noticed that one of the support libraries (Qt5Network.dll, version 5.4.2.0) is linked with OpenSSL 0.9.8ze.  This version of OpenSSL was released in January 2015 and is two versions out of date.  We didn't try to exploit any of the null pointer dereference vulnerabilities, but feel free to jump on that if you're interested.
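If you want to reproduce this sort of check yourself, one low-tech approach is to pull the OpenSSL version banner out of the binary's bytes.  This is just a sketch: it assumes the banner (something like "OpenSSL 0.9.8ze 15 Jan 2015") hasn't been stripped, and the file you point it at is up to you.
import re
import sys

def embedded_openssl_versions(path):
    # OpenSSL builds normally embed a version banner string in the compiled library.
    with open(path, "rb") as f:
        data = f.read()
    return sorted(set(re.findall(rb"OpenSSL \d+\.\d+\.\d+[a-z]*", data)))

for path in sys.argv[1:]:
    for version in embedded_openssl_versions(path):
        print(f"{path}: {version.decode()}")
Running it against a copy of Qt5Network.dll should print the linked OpenSSL version string, which is how a finding like the one above can be confirmed quickly.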

OpenSSL 0.9.8ze vulnerabilities
So in this case, it looks like Dropbox developers rely on Qt5Network.dll, which in turn was linked with a vulnerable version of OpenSSL.  It's a difficult environment indeed for developers who desire to code securely.  Not only do you have to ensure that the libraries you use are up to date, but you also have to ensure that all the libraries they use are also up to date (recursively).  This is why having a strong application security program in house (or via trusted partners) is so critical.  I would dare say that Dropbox has more application security resources and wider distribution than most organizations who publish their own software for internal or external use.  If it can happen to a major software vendor, it can happen to you too.

Remember to audit your apps (and the libraries they use) regularly.

Friday, November 6, 2015

Outsource your classified programming projects to... Russia.

The federal contractor CSC took a contract from the DoD Defense Information Systems Agency (DISA) to develop code for classified systems.  CSC then subcontracted portions of the coding to NetCracker Technology.  Subcontracting, especially in federal contracts, is completely legitimate - nothing wrong with that.  But in 2011, a NetCracker employee named John Kingsley filed a whistleblower lawsuit against NetCracker for using Russian (yes you read that right, Russian) programmers to write code for the classified DISA project.  Kingsley correctly noted that this was contrary to the contract guidelines which required CSC to staff the contract with properly cleared US employees.

Although it took four years, the lawsuit was eventually settled last month for a total of $12.75 million.  CSC agreed to pay $1.35 million and NetCracker will pay $11.4 million.  The settlement does not prohibit the Justice Department from filing criminal charges in the matter.  There are two potential criminal charges, one for contract fraud and one for allowing foreign entities access to classified data.  Kingsley reports that the programs written by the Russians contained "numerous viruses" though DISA did not confirm this, citing national security concerns.

Perhaps more concerning than viruses or other malware in the Russian code is that intentional weaknesses could have been baked into the code.  Subtle vulnerabilities would be very difficult to detect and would require millions of dollars in auditing to discover.  It is unclear at this time how much damage DISA and other DoD agencies suffered from the malware running on their systems.  It is also not clear how much of the Russian code ever made it into production and how much is still there.

For reporting the contracting violations, Kingsley reportedly received $2.4 million.  It's not often that doing the right thing pays such huge dividends, but in this case it certainly did.

Was this the Russian programmer who inserted "numerous viruses" into DISA code?

The infosec hooks here are clear.  When outsourcing your programming, do you really know who you are outsourcing to?  What is their security posture?  What are your subcontracting agreements?  How are they enforced?  Do you perform regular code audits to ensure that a contractor is providing secure code that is free of malware and vulnerabilities?  Remember that a contractor may be providing malware without willful participation.  In this case, CSC claims to be as much a victim as DISA due to the actions taken by NetCracker.

The bottom line is that if Russians can get viruses into DISA's code (even while getting paid to do it), you had better believe that your adversary can too.  That makes the overall situation look pretty bleak.  What can you do then to ensure your organization is not a victim? At Rendition Infosec, we recommend the following to clients:
  1. Work with your legal team to get appropriate language in your contracts to govern what can and can't be subcontracted.  
  2. Maintain your right to audit subcontracting arrangements (if allowed).  
  3. Audit portions of the code (at a minimum) produced under any external contract.  Better yet, write in contract provisions that a third party will audit code and that vulnerabilities discovered will be fixed at the contractor's cost.  If the contractor is forced to pay for security re-tests of the code, they have a built in motivation to get it right the first time.
  4. Any auditing you perform on code contributed by a third party (and even that developed in house) should be examined using more than automated scanners.  Automated scanners are a good start, but they are just that - a start.  Manual testing often unearths subtle vulnerabilities (especially elevation of privilege) not found with any automated scanner.

Thursday, November 5, 2015

Rogue Access Points - are they a priority in your environment?

Last year, I was working with a customer who had architected their network environment without considering security.  They suffered a breach, lost a ton of intellectual property (thankfully none of it was regulated data), and were committed to moving forward with a more secure architecture. This customer has a relatively large number of wireless access points.  One of their managers suggested that a quick win would be to locate and disable rogue wireless access points, citing a Security+ class he had taken years prior.  But other managers wondered if this was a good use of limited resources, considering the abysmal state of their overall security.

My take on this is that rogue access point detection is something that should only be done after other, higher priority items are taken care of.  Are rogue access points a big deal?  Sure.  But are they a bigger deal than 7 character passwords, no account lockout, or access control policies that allow anyone to install any software not explicitly denied by antivirus?  I think not.

As you probably would guess, I'm a big believer in the SANS top 20 critical security controls.  The best thing about the SANS 20 CSCs is that they are actually ordered to help you figure out what to do first.  What controls give you the best bang for your buck?  No need to guess when you are using the SANS Top 20.

So where in the 20 CSCs do rogue access points fall?  They fall well behind CSCs #1 and #2 (hardware and software inventory, respectively).  If you don't have basic blocking and tackling down, do you really need to consider rogue access points?  Wireless controls are #15 on the list of CSCs. Open wireless or WEP (gasp!) is a big deal.  But if you've implemented WPA-PSK, how much should you worry about rogue APs?

Yes, rogue APs are a threat.  But every one of us is carrying a rogue AP in our pocket these days (your smart phone).  It's hard to track all of the access points that pop up.  This particular client has office space in many different multi-tenant areas, so we have to deal with access points in the area that the client doesn't control.  This makes it really hard to detect the rogues.  Not impossible, mind you.  Just difficult.

If you are in an area where multi-tenancy isn't an issue, you'll still need good policies to prevent the use of the wifi hotspots that all users have on their phones.  With those polluting the spectrum, separating "authorized" hotspots from rogue access points can be a real challenge.  At Rendition Infosec, we advise clients that hunting rogue access points belongs very low on the list of security priorities.  We advise that clients only undertake such an endeavor after they have achieved at least baseline success with more important security controls.

One important distinction that I'll make is that evil twin access points will always be a threat.  These are access points with the same names (or in some cases, very similar names) as the legitimate access points in the environment.  Periodic surveys of the wireless environment will help detect these, and a good WIDS can help as well.  When discovered, evil twins should be physically located and investigated.