Thursday, December 31, 2015

Significance of the compile time stamp in the Ukrainian power malware

Earlier, I did some analysis on the Ukrainian malware for a post on the SANS ICS blog with Robert M. Lee.  I noted that my analysis was hasty, and sure enough I screwed something up.  I failed to notice that my tool was reporting the compile time stamp in local time rather than UTC.  Classic forensics fail...

When the compile timestamp is converted to UTC, we see that the time is 2202 on January 6, 1999.  Since Ukraine is UTC+2, the local Ukraine time is 0002 on January 7, 1999.  What significance might this date hold?
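
For anyone who wants to double-check the math, here's a quick sketch of the conversion.  The exact TimeDateStamp value isn't reproduced in this post, so the value below is only an illustration consistent with the decoded time above; any PE parser will give you the real one.

from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+; may need "pip install tzdata" on Windows

# Illustrative value only: a PE TimeDateStamp is seconds since the Unix epoch,
# and 915660120 decodes to the 1999-01-06 22:02 UTC time discussed above.
compile_ts = 915660120

utc = datetime.fromtimestamp(compile_ts, tz=timezone.utc)
kyiv = utc.astimezone(ZoneInfo("Europe/Kiev"))  # UTC+2 in January

print(utc.isoformat())   # 1999-01-06T22:02:00+00:00
print(kyiv.isoformat())  # 1999-01-07T00:02:00+02:00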

Many Orthodox Ukrainians celebrate Christmas starting on January 7th.  This might be significant, though I'm no history scholar so I'm not sure if something special happened that year.  Google didn't turn up anything interesting there.

January 7, 1999 is also the date that Clinton's impeachment trial began in the Senate.  Is that significant for the malware authors?  Hard to say.  Doesn't seem likely, but maybe they were big fans of Lewinsky's blue dress.

One possibly interesting event on January 6, 1999 has to do with the possible establishment of a solid waste disposal site, as noted here.

If the date has something to do with Crimea, it looks like the Constitution of the Autonomous Republic of Crimea came into effect Jan 12, 1999. Yeah, it's not Jan 7, but maybe it means something to someone.

A final possibility: Prince Rostislav Romanov of the Russian Imperial Family died on January 7, 1999.

Or it could all be coincidence.  With things like this, we may never know.

More info on Ukrainian power problems

I wanted to update the available information on the Ukrainian utility hacks to round out my blogging year.  Nothing like ending the year on a high note :)


Based on available reporting, it appears that attackers combined both cyber and kinetic sabotage for maximum effect.  This article indicates that explosions were heard and damage consistent with an explosion was found near a pylon supporting power lines.
The text below comes from the official Ukrainian Government website (translated with Google Translate).  It indicates that malware was used in the attacks.
The Security Service of Ukraine has warned of attempts by Russian special services to strike the computer networks of Ukraine's energy complex.
Employees of the Security Service of Ukraine found malicious software in the networks of individual regional power companies. The virus attack was accompanied by continuous telephone calls (a telephone "flood") to the utilities' technical support lines.
The malware has been contained. Urgent operational and investigative actions are continuing.
Press Center, Security Service of Ukraine
Another interesting note is the use of  telephone DoS attacks during the suspected malware attack.  Any organization conducting sand table exercises should consider how this combination of attacks would impact an incident response.

Reuters also published information on the suspected attack in this article, though technical details on the malware and/or tactics used are largely lacking.

A direct attack on one nation's power grid by another nation would potentially open the door for retaliation in the form of disruptive cyber attacks.  The laws of war for cyber attacks that cause kinetic effects are not universally accepted and are poorly understood by political leaders who might order or authorize such attacks.  Citizens of a nation who believe that the state is not acting in their best interests might move to strike without the state's authorization.  While the danger of asymmetric cyber warfare has been discussed fairly extensively, we lack the real-world experience to know how nations will react to this sort of attack.

As I was finishing this post, I noted that my friend and SANS colleague Robert M. Lee is working on a post of his own.  Stay tuned to his updates as he apparently has acquired a sample of suspected malware used in the attack.

Happy New Year and let's have a great 2016!

Update: Twitter follower "Lin S" noted that the telephone DoS may have simply been an overload of phone capacity as technicians and customers called for help.  I meant to note this as a possibility originally, but got busy when someone forwarded me a sample of the malware (I'm a malware geek at heart).  If the numbers flooded were publicly available, then the flood was probably just customers without power calling in.  If the numbers were not publicly available (e.g. an internal help desk), it would more likely indicate malicious attacker behavior.  The reason I gravitated toward the latter was the mention of the "telephone flood" in the official state press release.

Ukrainian power grid hacked, details sketchy

Last week, the Ukrainian power grid was hacked.  I found out through the wonderful SCADASEC mailing list.  The details are a little sketchy, but this online news article has them.  Since it wasn't in English, I put it through Google Translate to get something resembling English back out.
On Wednesday, December 23, due to technical failures at "Prykarpattyaoblenergo," half of the Ivano-Frankivsk region and part of the regional center were left without electricity. The company began disconnecting electricity for no apparent reason.
The cause was a hack: a virus launched from outside brought down the remote-control management system. Specialists managed to restore the electricity supply only after six hours.
As a result of the attacks, the Ivano-Frankivsk, Gorodenka, Dolyna, Kolomyia, Nadvirna, Kosiv, Kalush, Tysmenytsia and Yaremche districts and zones were partially or completely de-energized.
The power company did not immediately understand the cause of the accident. Central dispatch suddenly went "blind." "We can say that the system was actually 'hacked.' This is the first time we've dealt with this," "Prykarpattyaoblenergo" reported.
Operations were restored manually, as the infected systems had to be turned off.
At the time the network was brought back online, "Prykarpattyaoblenergo" was still infected with the virus, and experts were working to neutralize it. Whether the Prykarpattia experts succeeded is still unknown.
It is Ukraine's first successful cyberattack on a public energy supplier.
This is more than a little frightening.  At Rendition Infosec, we've evaluated security in numerous ICS and SCADA environments and the results are usually terrifying to say the least.  The reality is that much of the public sector utility equipment was never engineered with network security in mind. We all know that when security is an afterthought, the outcome is predictably horrible.  


We predict that in the future, we'll see more disruptive attacks against SCADA systems controlling our utilities.  Some of these attacks will be from relative script kiddies just flexing their muscles against poorly secured systems.  Other attacks are likely to come from state actors using utility disruptions for economic or political gain.  

If you are in control of SCADA systems, make sure that they are as secure as possible.  Network segmentation goes a long way toward security.  ICS systems have no place operating on the same network segments as user systems.  Though many SCADA controllers have VNC built in for remote management, VNC should never be exposed to the open Internet.  Telnet should never be exposed either.  But despite common-sense rules such as these, we regularly see violations in assessments.  Secure your ICS systems so some Ukrainian doesn't use Google Translate to read about your security failures like we just did for them.
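
If you want a fast sanity check from an outside vantage point, something as simple as the sketch below will tell you whether those services are answering where they shouldn't be.  The addresses are hypothetical, and this is a starting point, not a substitute for a proper assessment.

import socket

# Minimal sketch: probe a list of (hypothetical) externally visible addresses
# for telnet and VNC.  Anything that answers here is exposed to the Internet.
HOSTS = ["203.0.113.10", "203.0.113.11"]
PORTS = {23: "telnet", 5900: "vnc"}

for host in HOSTS:
    for port, name in PORTS.items():
        try:
            with socket.create_connection((host, port), timeout=3):
                print("EXPOSED: %s open on %s:%d" % (name, host, port))
        except OSError:
            pass  # closed or filtered is the result we want to see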

Update: I actually wrote this late last night and then overnight, Chris Sistrunk posted this Reuters article with additional details.  Chris apparently also broke the original story to the SCADASEC mailing list. Definitely worth looking at.

Wednesday, December 30, 2015

"Chief Internet of Things Officer" or "dumbest idea ever"

This little gem was brought to my attention by Andrew Hay.  If you aren't following him, you should start.  You'll be smarter for it.

This morning, he posted a link about a new position that exists (maybe?!) somewhere in the industry. The position is called the Chief IoT (Internet of Things) Officer.  Yes, you read that correctly.  While IoT is a big concern and getting it wrong can really jam a lot of things up, it's not like IoT demands its own board level position.

Just like with IT, there are two major components to IoT administration.  The first is implementation/operation and the second is security.  Implementation and operation should be handled by the CIO.  Security should be handled by the CISO.  Some will argue that IoT is so fundamentally different from regular devices that a new C level position is required.  I think this is hogwash.

At Rendition Infosec, we can usually tell a lot about an organization's security posture just by looking at the org chart.  When the CISO reports to the CIO, you can usually anticipate problems in security.  News about the security posture of information systems is simply not accurately reported when the CISO doesn't report to the board with an independent voice.

Creating another position in the C-suite for IoT will simply confuse non-technical board members about an already confusing topic.  Will the CIoTO (??) be in charge of security or just implementation for IoT?  What about the obvious places where regular IT and IoT merge in operation?  We already see friction between IT and information security on issues of who is responsible for administering security tools like the SIEM.  Obviously this will just create more issues with no real benefit.

That being said, organizations should not neglect IoT.  IoT devices come with their own unique challenges.  The CIO and the CISO should both have advisors who can help them identify upcoming trends in IoT adoption and take appropriate action to maximize value to the organization.  Ignoring IoT is almost as stupid as having a Chief IoT Officer.

Is your organization adopting IoT devices?  Do you have specific policies governing IoT?  Is your organization considering a C-level IoT officer?  Leave a comment and let me know what you see coming for IoT.

Saturday, December 26, 2015

Cyber Santa results call out vendors for shenanigans

Thanks to @da_667 for publishing the Cyber Santa list this year.  While @krypt3ia's Cyber Krampus list celebrates "full spectrum cyber douchery," the Cyber Santa list was supposed to celebrate those in infosec who are being nice.


I really liked the way @da_667 put the list together.  I especially like that he called out Fortinet for their apparent misbehavior.  Vendors should have seen this wasn't an appropriate marketing tool.  They can't really expect people to vote for a company or product in one of these lists.  Well, I guess it happens in Lee Whitfield's Forensic 4Cast, but it's sort of accepted there (why, I'm not entirely sure).

In any case, I loved the way @da_667 called out Fortinet on their apparent behavior:
Fortinet. Cyber Santa isn’t blind. He saw that you favorited da_667’s tweet linking the google form, then immediately filled out two tweets that look like they were fresh out of a fucking marketing pamphlet. Shame on you.
Vendor shenanigans like these are never welcome and tarnish our good profession.  Thanks to @da_667 for putting the list together and thanks to those vendors who kept it classy by not nominating their marketing brochures.

Friday, December 25, 2015

Merry Christmas, you've been phished

One of my clients was hit with an interesting phish last week.  It wasn't a threat to the organization, but it definitely was to the user who fell for it.  Before we blame the user for being dumb, note that the email came to the organization claiming to be from the user's bank.  The address was spoofed correctly, so the user only saw USAA in the from address.

Eek - it's a phish

The user said he examined the link he clicked on, but felt safe because it was an https link.  But this is where the user education fell short.  The user had been told to inspect the links he was clicking.  The training modules used by this organization did not specifically reinforce that users needed to ensure the links they clicked actually took them to the correct page.
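
As a rough illustration of what that training should drive home (and what a mail gateway could check automatically), the hostname in the link is what matters, not the padlock.  This is only a sketch with a hypothetical allowlisted domain, not a complete anti-phishing control.

from urllib.parse import urlparse

EXPECTED = "usaa.com"  # the domain the user believed they were visiting

def link_looks_right(url, expected=EXPECTED):
    host = (urlparse(url).hostname or "").lower()
    # HTTPS proves nothing by itself; the host must actually be the expected domain
    return host == expected or host.endswith("." + expected)

print(link_looks_right("https://www.usaa.com/"))                          # True
print(link_looks_right("https://usaa.com.account-verify.example/login"))  # False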

The actual domain the user was redirected to had a lookalike of the USAA login page, complete with the PIN verification required by USAA.  The site itself was a compromised website where the attacker had taken up residence.  I won't victim shame here by sharing the actual domain used, but the user should have EASILY noticed this wasn't really USAA.

Results of a quick Google search show that McAlum is real

The victim reported that he felt a need to follow up on the phish since the message mentioned security problems on his account.  He said he actually Googled the sender, and when he found the sender really was the USAA Chief Security Officer, he felt the message was legitimate.  Good on the user for thinking, but the logic here is all wrong.  If he can Google to find the name of the CSO, so can any attacker.  A little critical thinking should have led the user to wonder why the CSO would be contacting him directly.  The CSO at a major bank probably doesn't work issues (even security issues) with individual checking accounts as the phish suggests.

At Rendition Infosec, we always recommend that clients educate their users on realistic phishing scenarios.  As infosec professionals, we owe it to our users to ensure that our scenarios are realistic and cover issues like this.  Otherwise, our users may get a nasty Christmas surprise in their stockings far worse than a lump of coal.

Thursday, December 24, 2015

Faulty jail software lets thousands of inmates out early

A critical flaw in software used to calculate release dates for inmates in the Washington state department of corrections released thousands of inmates early.  The software flaw was introduced 13 years ago.  Yes, you read that correctly - 13 years ago.  According to the DOC's own timeline of events, nobody there noticed the flaw at all for 10 years.  Incredible, just incredible.  The DOC was first made aware of the problem by a victim's family in 2012 and then failed to fix it... repeatedly.  This is a ridiculous state of affairs considering that it was resulting in prisoners not serving their correct sentences.

Reading into the situation a little further, different sources report the average time for early release between 49 and 55 days.  Reportedly, only 3% of inmates were released early because of the flaw.  So the problem could definitely be worse, but the point to remember here is that it never should have happened in the first place.


There's another angle here that nobody has talked about yet.  Did any inmates serve too much time because of the error?  If so, they are entitled to compensation, even if they only served a day longer than they should have.  Furthermore, the governor has ordered that all releases under the program be suspended until the dates can be tallied by hand.  Under the circumstances, this seems reasonable.  That is, of course, unless you are supposed to be released and are now sitting in prison because for three years the DOC failed to patch faulty software and is now relying on hand calculations (which are more likely to result in error than properly written computer code).  This is unreasonable and likely to result in civil lawsuits against the state and those specifically responsible for delaying the fixes.

Side note: this is an interesting sand table exercise for your organization.  Who, if anyone, might be named in a civil suit if delaying implementation of certain controls results in a lawsuit?  Attitudes change very quickly when individuals are named in lawsuits along with the organization they represent.

At Rendition Infosec, we always tell clients that when we smell smoke, it almost always means fire.  This software flaw - and the cavalier attitude towards security that left it unpatched for three years - definitely constitutes smoke.  I'll bet the money in my pocket that there are numerous other logic and security flaws in the software.  Logic flaws are always the hardest to detect since they require in-depth knowledge of the application.  But any program that lacks the resources and leadership to implement a fix impacting public safety probably doesn't have a very good grasp on computer security either.

This is a great case where heads should roll.  I'm not generally for firing people in response to infosec fails.  Playing the blame game rarely sets the correct tone for a security program.  But in this case, I think we can make an exception.  If you know about a security fail, particularly one that impacts public safety, and fail to act, you should be out of a job.  Period.  End of story.  The failures here are ridiculous.  The WA DOC needs a top-to-bottom review of their information security programs to find out:
  1. What other vulnerabilities exist that were previously documented but not acted on
  2. What currently unknown vulnerabilities exist

DOC can't do this internally either.  It's clear that management there is incapable of prioritizing and acting on even the most grievous issues.  I hope that the state inspector general (or equivalent) mandates independent assessments across the DOC's information security programs to determine just how bad things are there.  Experienced auditors know that what we have seen here is likely just the tip of the iceberg.

I first became aware of this problem yesterday evening via a tweet from Greg Linares.  I immediately responded that Rendition Infosec would help with some free code reviews for the WA DOC if that's what they needed.  Yes, I think the overall state of affairs is that serious.  And I stand by that offer.  If you are with the WA DOC and want some help from true professionals, not a Mickey Mouse assessment, contact me.  I'm a pretty easy guy to find.

Wednesday, December 23, 2015

The lottery wasn't nearly as random as we thought

There's news that former lottery security director Eddie Tipton used his trusted position to install rootkits and scam millions from a multi-state lottery.  Of course this is discouraging to hear since it tarnishes our infosec profession.  But it brings up a good point about defense in depth.  As NSA will more than happily tell you, insider attacks are a real thing.

At Rendition Infosec, we regularly see insiders contributing to security incidents, whether they are directly the cause (e.g. IP theft) or whether they are simply a means to an end (e.g. scammed into giving the attackers access).



It seems a little naive that the organization with the computers determining the numbers for a multi-state lottery didn't have better fraud detection in place.  We know that their fraud detection schemes were poor because it took the perpetrator getting greedy and buying a ticket himself to get the attention of the authorities.  And even then, it was only after he walked away from a $16.5 million jackpot that they began investigating.

There are obvious infosec angles here.  Good defense in depth, separation of privilege, and periodic external security audits would likely have stopped this long before Tipton was caught through his own hubris.  We've seen time and again that absolute power corrupts absolutely.  Internal security audits are good, but periodically you need an external set of eyes to validate that things aren't being missed.  The US Government, for all of its ineffectiveness, knows this.  It's why the Office of the Inspector General exists.

Additionally, the public purchased lottery tickets believing that they had a particular chance to win (based on published odds).  It's obvious that those odds were not correct given the manipulation of the numbers by Tipton's rootkit.  I would be surprised if a class action lawsuit from those who purchased lottery tickets doesn't emerge here.  It's obvious they were playing a game with odds very different from those published.  A mathematician will tell me that even with the numbers rigged, the odds didn't really change.  But the payouts did change when Tipton and accomplices fraudulently received payouts on their "winnings."

Finally, as someone who has built rootkits in the past, I can tell you it's not the easiest thing in the world to pull off.  Few have the capability to do it.  Another interesting thing to know would be whether Tipton had help in constructing the rootkit and if so, who helped him.  If anyone has access to a copy of this rootkit, I'd love to analyze it and will not attribute it back to you (unless you want me to).

Tuesday, December 22, 2015

Oracle settles with FTC over deceptive patching practices

Ever felt like Oracle may not have had your best interests in mind with Java?  Turns out you aren't the only one.  The FTC alleged that Oracle engaged in deceptive business practices by offering patching programs that installed new versions of Java, but failed to remove old versions.  Of course this left millions of machines vulnerable to attack even though consumers thought they were doing the right thing.  The FTC also alleges that Oracle knew this was a problem and did nothing.


Yesterday, the FTC reached a proposed settlement with Oracle.  But depending on who you ask, this doesn't go nearly far enough.  
Under the terms of the proposed consent order, Oracle will be required to notify consumers during the Java SE update process if they have outdated versions of the software on their computer, notify them of the risk of having the older software, and give them the option to uninstall it. In addition, the company will be required to provide broad notice to consumers via social media and their website about the settlement and how consumers can remove older versions of the software.
But how many consumers will truly understand what is being asked of them?  Do you really want to uninstall earlier versions?  In the SOHO market, uninstalling old versions may break software that consumers depend on, so it will be interesting to see how instructions to consumers are worded.  
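
For the technically inclined, finding the leftovers isn't hard.  On Windows, Oracle's JRE installs typically register themselves under a JavaSoft registry key, so a hedged sketch like the one below (key paths and 32/64-bit redirection vary) will at least show how many old versions are still hanging around.

import winreg

# Sketch: list JRE versions recorded under the JavaSoft key (typical for Oracle
# installs).  On 64-bit Windows, a 32-bit JRE may land under Wow6432Node instead.
KEY = r"SOFTWARE\JavaSoft\Java Runtime Environment"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
    i = 0
    while True:
        try:
            print(winreg.EnumKey(key, i))  # e.g. "1.7", "1.8", "1.8.0_66"
            i += 1
        except OSError:
            break  # no more subkeys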

One thing is for sure though: with Oracle put on notice by the FTC, we can expect other software vendors to take a careful look at their software patching policies.  Failure to do so will clearly invoke ire from the FTC.

Monday, December 21, 2015

Juniper Follow Up

There's more news in the Juniper compromise case, again giving indications about attacker motivation and possibly providing clues about attribution.  Note that although it seems likely that the two backdoors are the work of the same malicious actor, there is currently no conclusive evidence that this is the case.  CERT-Bund provided an analysis of the vulnerabilities and discovered that the vulnerability to leak VPN keys was present as early as 2013, while the backdoor password vulnerability didn't appear until 2014.

Juniper CVE Matrix

The timing of the vulnerabilities seems to imply that the attackers discovered that there were objectives that could not be accomplished through passive monitoring.  From an auditing perspective, the inclusion of the backdoor was quite a bit more risky than the VPN key disclosure.  Crypto is really hard to audit - authentication routines a little less so.  Of course, this indicates an active network exploitation program, but it seems like that would be nearly required for the type of access you would need to insert the backdoor in the first place.

Yesterday, I wrote about how this whole incident largely ends NSA's NOBUS doctrine.  Supposing that NSA was responsible for the VPN key leakage (and let me be the first to say I do not think they are), they might assume that nobody else would be able to intercept the data and make use of the keys.  They might feel justified in introducing this vulnerability, even though it makes them vulnerable too, based on the idea that exploitation would require the ability to intercept traffic - something far from trivial.

However, the backdoor is something else entirely.  VPN products, by their very nature, sit on the open Internet.  Successful compromise would put the attacker in a privileged position inside the corporate network.  Since anyone who can audit the firmware can discover the backdoor password and use it to gain access to the internal network, this represents a clear violation of the NOBUS policy.  If this source code change (specifically the backdoor) was performed by US Intelligence, it seems clear that they violated the law.  If I were Juniper, I'd be livid.  It's one thing to suck at programming and have vulnerabilities in your code.  Having what is very clearly a nation state insert them for you is so far over the line, I almost can't comprehend how mad I'd be.

The password itself is "<<< %s(un='%s') = %u", which was likely chosen because it appears to be a format string.  Format strings are regularly used in debugging routines to help developers figure out what happened at a particular location in the code.  In this case, the attackers appear to have been banking on a future code audit and needed the string to blend in with other debug statements.  Of course, this string is being passed as an argument to the strcmp function rather than to sprintf or fprintf.  That should trigger some suspicion, but it's much harder to spot than something like 's00pers3kr3t'.
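
To make the point concrete, here's a toy illustration (not ScreenOS source, obviously) of why the string reads as innocuous in a debug context but should scream at you when it shows up in an authentication comparison.

# Toy illustration only - not the actual ScreenOS code.
BACKDOOR = "<<< %s(un='%s') = %u"

# What an auditor expects a printf-style string like this to be doing:
#   fprintf(dbg, "<<< %s(un='%s') = %u\n", func, user, rc);
# What it was reportedly doing instead - the equivalent of
#   strcmp(password, BACKDOOR) == 0 in the authentication path:
def authenticate(password, real_password):
    return password == real_password or password == BACKDOOR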

It's worth mentioning that Fox-IT has released Snort rules to detect the backdoor password in use.  But honestly, you should probably not have SSH (and especially not Telnet) exposed to the Internet on your Juniper VPN/firewall.  If you do have these open, you can count on constant scans and connection attempts.  These best practices haven't stopped at least 26,000 people from exposing them, according to Shodan queries.  There's even a Python script for locating vulnerable NetScreen devices on the Internet.  However, I should note that you are probably exceeding your legal authority by using it, since it tries to log in with the backdoor password using the system user.

Any use of the password would indicate possible exploitation attempts.  At Rendition Infosec, we recommend that our clients add the telnet password rule to their IDS products regardless of whether they have Juniper devices in the network.  Any hits on the password would indicate an exploitation attempt and would provide valuable threat intelligence to the organization.
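
If you can't push an IDS rule right away, even a quick-and-dirty monitor will catch the string on cleartext telnet (SSH sessions are encrypted, so this only covers telnet).  A hedged sketch using scapy, not a replacement for the published Snort rules:

from scapy.all import sniff, IP, Raw  # pip install scapy

BACKDOOR = b"<<< %s(un='%s') = %u"

def check(pkt):
    # Flag any telnet payload carrying the backdoor string
    if pkt.haslayer(Raw) and BACKDOOR in pkt[Raw].load:
        src = pkt[IP].src if pkt.haslayer(IP) else "?"
        print("possible backdoor login attempt from", src)

sniff(filter="tcp port 23", prn=check, store=0)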

A close inspection of the password appears to reveal the string "Sun Tzu" - the author of the famous Art of War.  That doesn't do anything for attribution, but it does show that the attackers have a sense of humor.  Speaking of humor, this linked video wins the Internet today.

Sunday, December 20, 2015

New Juniper hack should end NOBUS argument forever

Former Director of the NSA General Hayden outlined the concept of "NOBUS" which he said is "useful in making macro-judgments about vulnerabilities."  NOBUS is short for "nobody but us."  It means that a particular capability, exploit, or technique is so complicated that nobody but NSA could have pulled it off.  Maybe it requires some sort of special access to accomplish.  It may be that the technique required some sort of advanced system to develop.   Or maybe the discovery of the vulnerability was pure chance and NSA feels like the odds that someone else would discover it are astronomically small.  They use the term NOBUS to explain that although the particular technique/exploit/capability would be devastating if used against US interests, there's nothing to worry about because nobody else could pull it off.


The recent Juniper vulnerabilities are especially concerning when it comes to the NOBUS argument.  In fact, they completely destroy it.  The US has already confirmed they are not behind the attack, and I believe them.  But if not the US, then you have to ask: who would be behind such an attack?

We can gain some insight by first considering that the hack was very likely performed by a government-sponsored intelligence service.  Criminals generally play a short game when it comes to cyber crime: compromise a network, exfiltrate data, and make use of that data.  Planting backdoors in Juniper source code in the hope that they would eventually make their way into the product base may take years to come to fruition.  And that's before counting the effort required to penetrate Juniper's code base and remain hidden.  And did they ever remain hidden.  Based on a review of the affected platforms, the attackers have had the backdoor in place since 2012.  More than three years of internal code reviews failed to discover the backdoor code.

But the vulnerabilities themselves are also telling.  In particular, CVE-2015-7756 could only really be exploited by an adversary with the ability to intercept VPN traffic.  If you don't have the VPN traffic, then the keys to decrypt it are meaningless.  The fact that this change was made to the Juniper code base implies that whoever did it has the ability to intercept VPN traffic en masse and the ability to process the decrypted traffic.

In the infosec community, we often talk about how any country with good talent can start a network exploitation program.  But the resources required to capitalize on the Juniper hack really limit the number of possible suspects.  I won't speculate on who I think is responsible for the compromise, but since it wasn't the US, I think we can officially put the NOBUS argument to bed... forever.

Saturday, December 19, 2015

Hacking Linux with the backspace key?

Yesterday, I heard news that "anyone can hack into any Linux system just by pressing the backspace key!!!"  Of course I was skeptical, but weirder bugs certainly have happened before.  I was intrigued to check this one out since I figured it could be useful in a future penetration test.


At Rendition Infosec, we like to advise clients about realistic risk.  Is CVE-2015-8370 something you should drop everything to fix?  The bug is interesting and probably breaks security models for many high security environments.  But the first important thing to note is that it only matters if the attacker has physical access to your Linux machine of interest.

In order for an attacker to exploit CVE-2015-8370, they must not only have physical access, but they must also reboot the machine.  Will you notice your machine being rebooted?  You should.  Will there be logs of the user exploiting the vulnerability?  Unfortunately, no.  The vulnerability occurs before the system logging function (syslog) is available during boot.  But the attacker doesn't just need to reboot - they have to take the system into a recovery mode and install malware.

All of this takes time.  The researchers who found the vulnerability show a proof of concept that demonstrates this attack, but I take serious issue with the suggestion that APT attackers will somehow gain physical access to exploit this vulnerability (or that they even need to).  So it's not just a reboot you have to miss, but an installation that will likely add at least a minute (possibly several) to your boot time.

But before you run for the hills and drop everything to patch this, note that this vulnerability only helps the attacker bypass the boot loader password.  If you don't have a BIOS/EFI password protecting your boot sequence, this doesn't matter.  If you don't have a password on the boot loader (GRUB), this doesn't matter.  Most organizations we work with at Rendition Infosec don't have BIOS passwords or boot loader passwords on their Linux machines (most of which are servers in restricted access areas).  This isn't to say that we don't recommend it.  But these are required before you even need to think about this vulnerability.  In other words, the attacker can ALREADY do everything in the POC if you don't implement BIOS/EFI and boot loader passwords.
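
If you want to know where you stand, checking for a boot loader password is a one-liner per host.  A hedged sketch (config paths and directives vary by distro):

from pathlib import Path

# Sketch: flag hosts with no GRUB2 password configured - hosts where this
# vulnerability is moot because the boot loader was never protected anyway.
CANDIDATES = ["/boot/grub2/grub.cfg", "/boot/grub/grub.cfg"]

for cfg in CANDIDATES:
    p = Path(cfg)
    if p.exists():
        text = p.read_text(errors="ignore")
        protected = "password_pbkdf2" in text  # the usual GRUB2 password directive
        print("%s: %s" % (cfg, "password set" if protected else "NO boot loader password"))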

But let's call a spade a spade - if your server reboots and the SOC and ops teams don't notice it, someone probably needs a resume update, don't they?

For those looking for more information on MS15-130, I'll post it tomorrow or early next week.  I was getting too many calls about this bug and wanted to set the record straight today.

Friday, December 18, 2015

MS15-130 exploit likely in coming days

TL;DR - Rendition Infosec recommends that if you haven't patched yet (and many of our clients have not), make sure you do it.  At least patch the MS15-130 vulnerability if you are running Windows 7 or Server 2k8R2 (no other versions are impacted).  Do it now.  Font parsing vulnerabilities are huge.  An attacker could send you a PDF, an office document, a web page, or an HTML-enabled email - basically anything that can trick your system into loading a custom font.  This has the potential to result in remote code execution.

A trigger for the MS15-130 vulnerability was made publicly available as early as yesterday.  I'm again teaching this week, so I can't devote the time to it I'd like to.  But unlike MS15-127 that I wrote about last week, this particular vulnerability now has a trigger available, simplifying any exploit that would be created.

The vulnerability is reported to be in usp10.dll.  This DLL handles some unicode font parsing.  I'll post patch differentials and maybe work on this a little bit tomorrow.  I'm wrapping up the new SANS CTI class (FOR578) today and have to host DFIR Netwars tonight at SANS CDI.

Tuesday, December 15, 2015

Holy HIPAA settlement Batman!!

At Rendition Infosec, incidents involving phishing are some of the most common that we work.  Let's face it, it's a good way to gain initial access to an organization - and your attackers know it.  The most recent OCR HIPAA settlement shows the monetary damage that phishing can have, particularly in healthcare organizations.  Of course phishing damages aren't limited to healthcare.  But one thing I particularly like about this OCR settlement is that it assigns a concrete cost to damages that can be caused by phishing.

In its press release on the settlement, OCR details that the source of the compromise was a malicious attachment downloaded through email - e.g. phishing.
...the electronic protected health information (e-PHI) of approximately 90,000 individuals was accessed after an employee downloaded an email attachment that contained malicious malware. The malware compromised the organization’s IT system...
But the release goes on to highlight the need for comprehensive risk assessments, noting that all parts of the organization and partners must be addressed in the risk assessment.
OCR’s investigation indicated UWM’s security policies required its affiliated entities to have up-to-date, documented system-level risk assessments and to implement safeguards in compliance with the Security Rule.  However, UWM did not ensure that all of its affiliated entities were properly conducting risk assessments and appropriately responding to the potential risks and vulnerabilities in their respective environments.
OCR had some additional guidance in the release concerning risk assessments, specifically that they extend past the EHR to all areas of the organization.

“All too often we see covered entities with a limited risk analysis that focuses on a specific system such as the electronic medical record or that fails to provide appropriate oversight and accountability for all parts of the enterprise,” said OCR Director Jocelyn Samuels.  “An effective risk analysis is one that is comprehensive in scope and is conducted across the organization to sufficiently address the risks and vulnerabilities to patient data.”
Whether you are in healthcare or another vertical, you have to get risk assessments right.  When you document the requirement, you document the recognition that getting risk assessments correct is a necessity.  But you actually have to do what you document.  Any failure to do so is a liability.  If you need help with risk assessments, find individuals who specialize in infosec to help you.  The cost of failure, as demonstrated by the latest OCR settlement, is too high for you to get it wrong.

Monday, December 14, 2015

Another hospital hit with malware

At Rendition Infosec, we've been following the recent rash of hospital disclosures surrounding PHI.  Only a few years ago, it seems that most legal teams felt the disclosure standard was confirmed PII or PHI disclosure to third parties.  Recently, we've seen that malware installed on machines used to enter or process PII or PHI seems to trigger a notification.  This is true even when there is no confirmed loss of PHI or PII - the simple possibility of illicit disclosure seems to trigger the notification.

A notification by an Ohio hospital chain in November seems to highlight this trend perfectly.  They identified malware on multiple machines.  They were originally notified by the FBI that they had a problem, indicating that they lacked the internal monitoring capability to detect the breach themselves.  In this case, the legal decision may have been made to disclose the breach based on the lack of information available about what data the malware may have exfiltrated.  In any case, it's yet another disclosure from a major health care organization (UCLA led the way with notifications earlier this year) where there was no evidence of the threat actor accessing or exfiltrating data.

To make matters worse, according to the notification the hospital was infected with the malware when it was purchased in July 2015.  This points to another common problem we see at Rendition - inadequate due diligence performed during mergers and acquisitions (M&A).  During M&A, getting valuation correct is critical.  But after a breach, the valuation of a company can change drastically.  Further, the acquiring company is stuck footing the bill for any incident response, credit monitoring, and other costs associated with the breach.

Many healthcare providers have chosen to disclose without direct evidence of theft or misuse.  What does this mean for others in a similar situation?  I am not a lawyer, but I know that legal precedent matters.  Of course you should consult with your legal team when deciding whether or not to disclose.  But as the infosec professional in the organization you should also be assisting them in understanding whether there are similar cases where organizations have already made decisions.  These may be viewed as industry standard and influence the decisions of the legal team.

Friday, December 11, 2015

MS15-127 - notes from limited examination

I'm on the road this week with a Rendition Infosec client so I haven't really had time to get too deep into MS15-127.  But I have a few notes I think can be moderately useful in understanding the scope of the vulnerability.

This post explains how to disable recursion entirely, though not specifically for DNAME records.  The relevant information from the post is pasted below.  If you want to disable recursion on the server, use the '1' value.
dnscmd <ServerName> /Config /NoRecursion {1|0}
I was originally fearful that Windows Server 2003 would be vulnerable with no patch available, but it does not appear that 2003 supports DNAME records.  In fact, this KB corrects errors with DNAME processing without really adding support.  I haven't looked at the code for dns.exe, but I think we're safe there.

So back to the vulnerable servers.  Where do the code changes lie?  Well, running the application through bindiff, we see very few changed functions.

The changed functions certainly seem to imply that DNAME records are to blame.

Answer_PerformDNAMESubstitution function changes
Above are some example changes in Answer_PerformDNAMESubstitution - notice the calls to lock and unlock the database that have been added.  This seems to be a common theme with the patch. 

processCacheUpdateQueryResponse changes
The changes to processCacheUpdateQueryResponse look very similar. The database is locked, then _Recurse_CacheMessageResourceRecords is called.  Note that sometimes Microsoft adds extra changes to a patch just to throw off reverse engineers.  But this looks pretty solid.  My point here is that while it appears that both DNAME caching and recursion are required to exploit this, it's possible that one of these is a decoy.

processRootNsQueryResponse
Here are more changes, these from processRootNsQueryResponse.  Again, we see the use of _Dbase_LockEx and _Dbase_UnlockEx in the code.
Gnz_ProcessCacheUpdateQueryResponse
Above is a portion of the changes in Gnz_ProcessCacheUpdateQueryResponse - this function again adds the lock and unlock code.

I know there are a lot of people working feverishly to get a consistent exploit trigger in dns.exe.  This of course is the first step toward a working exploit.  I know that Greg Linares has a crash in dns.exe as well, though I don't know whether it is reliable or the conditions around it.

I usually have some recommendations in my posts, so I'll try to part with a few here.  If you aren't using recursion on your DNS servers, turn it off.  And of course, patch.  For future architecture decisions, I think this shows why we should not put DNS and AD on the same server - a bug in DNS can lead to complete compromise of your DC.  As for detection, I need to look at some of my network logs to understand how common DNAME record responses are.  I don't think they are common, and if so, they may offer a detection method.  I would also suggest limiting outbound communications from your DNS/AD/*any* server - that way, even if you are exploited, you make the bad guy's job that much harder.
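
On that detection idea, a rough sketch like the one below will count DNAME answers in a capture so you can gauge how unusual they are on your network before writing signatures.  The capture file name is hypothetical, and scapy's DNS record handling varies a bit between versions, so treat this as a starting point.

from scapy.all import rdpcap, DNS, DNSRR  # pip install scapy

DNAME = 39  # DNAME rrtype
hits = 0

for pkt in rdpcap("dns_sample.pcap"):  # hypothetical capture file
    if not pkt.haslayer(DNSRR):
        continue
    rr = pkt[DNSRR]
    while isinstance(rr, DNSRR):  # walk the chained resource records
        if rr.type == DNAME:
            hits += 1
            print("DNAME answer:", rr.rrname, "->", rr.rdata)
        rr = rr.payload

print("total DNAME answers seen:", hits)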

Wednesday, December 9, 2015

Use prepared statements or... use prepared statements

While performing security assessments at Rendition Infosec, one of our most common findings is SQL injection.  One great way to cut down substantially on SQL injection is to use prepared statements.  Note: I didn't say that prepared statements will eliminate all SQL injection, but they're a critical first step.  This is particularly true for complex injection types, such as second-order SQL injection.
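
For reference, the difference is tiny in code but huge in consequence.  A minimal illustration using Python and SQLite - your language and driver will differ, but the principle is identical:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

user_input = "alice' OR '1'='1"  # classic injection attempt

# Vulnerable pattern: string concatenation lets the input rewrite the query
#   conn.execute("SELECT email FROM users WHERE name = '" + user_input + "'")

# Prepared/parameterized: the driver treats the input strictly as data
rows = conn.execute("SELECT email FROM users WHERE name = ?", (user_input,))
print(rows.fetchall())  # [] - the injection attempt matches nothing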

But refactoring code is hard, and many customers will try to eliminate only those bugs that were identified in a security audit.  That is a good start, but stopping there is a mistake.  Code that does not use prepared statements will likely cause problems eventually, and trying to put patches on your code will cause problems too.  It's like propping up a rotting floor joist instead of actually replacing it.  Bottom line: even if the program isn't currently vulnerable, it is too easy for a future code change to create a vulnerable condition.  Prepared statements make this substantially less likely.

So to companies unwilling to use prepared statements and refactor their code, I say this: use other prepared statements.  Coordinate with your PR team and have them prepare statements (see what I did there?) explaining why you were compromised when you could have fixed the issues found in the code audit earlier.  Basically, if you don't use prepared statements in your SQL queries, expect to be exploited.  Don't get caught unprepared to handle the media backlash when you lose customer information or regulated data.

Monday, December 7, 2015

Junk code makes reversing a pain

Warning: I know some who follow this blog are looking for deep tech, others aren't.  If you are not a techie, this post isn't for you (sorry).

I was looking at some malware the other day and found an interesting use of junk code.  I've seen this a lot over the years, but this particular sample is just filled with garbage instructions that do nothing.  Why add junk code?  It makes the file bigger, and usually bigger files mean less stealth.  But in the case of junk code, we see superfluous instructions that the processor executes just to get to the good stuff.  The original program instructions are carefully interlaced with the junk code, making it harder for the reverse engineer to understand what is going on.

In other cases, junk code never executes but it may not be obvious from static analysis that the code will not be used.  The whole point of the junk code is to complicate analysis in the hopes that you will give up before understanding the true meaning of the code.
Junk code
Now this is by far not the most exciting example of junk code insertion I've ever seen - this one doesn't even try to break the disassembler.  But the sheer quantity of junk code inserted is interesting.  The basic block we examine above has more junk instructions than real ones.

The first clue that we are looking at junk code is the combination of additions and subtractions to the ESP register.  Normally, we subtract from ESP to create space on the stack for local variables.  Adding to ESP is used to release space for local variables before the function returns or to clean up function arguments after a call returns.  But in this case, we see an add followed immediately by a subtraction - no net change to ESP at all.
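
You can hunt this pattern programmatically.  Here's a rough sketch using Capstone that flags back-to-back ESP adjustments that cancel out - the bytes and offsets are made up for illustration, not taken from this sample.

from capstone import Cs, CS_ARCH_X86, CS_MODE_32  # pip install capstone

code = bytes.fromhex("83c41083ec10")  # add esp, 0x10 ; sub esp, 0x10 (illustrative)
md = Cs(CS_ARCH_X86, CS_MODE_32)
insns = list(md.disasm(code, 0x401000))

for a, b in zip(insns, insns[1:]):
    # add/sub pair against esp with the same immediate -> no net effect
    if {a.mnemonic, b.mnemonic} == {"add", "sub"} \
            and a.op_str.startswith("esp") and b.op_str.startswith("esp") \
            and a.op_str.split(",")[1] == b.op_str.split(",")[1]:
        print("possible junk pair at 0x%x: %s %s / %s %s"
              % (a.address, a.mnemonic, a.op_str, b.mnemonic, b.op_str))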

The values added to EBX do change the value of EBX.  Those changes would be very interesting indeed, except that the EBX register is not used anywhere else in the function.  There are many other junk code obfuscations in the program.  I'll detail some more in future posts.

Friday, December 4, 2015

What does phishing look like?

At Rendition Infosec, we regularly get asked by clients what a good phish looks like.  I wanted to share an example for reference.  For most infosec professionals, this won't be very exciting but it is worth having a concrete reference available in case your executives ask to see a phish.

This PDF was delivered to one of our clients at Rendition Infosec.  The client said we were free to publish some details for educational purposes since it was not specifically targeting them.  The PDF file itself does not contain any malicious content and does not contain any JavaScript, meaning that it will bypass antivirus scanners with ease (since it is technically not infected).  However, the PDF does contain a link and a vaguely threatening message telling the employee to click and submit information or risk having their account suspended.  The PDF was named "Notice from IT department" and sent in an email with the subject "Important notice from IT."

Text of the phishing PDF
A couple of things to note here.  First, there is no personalization, contact information or company logo.  This indicates a lack of targeting and means the file was probably sent to multiple organizations.  The second thing to note is that the document uses multiple fonts.  This probably indicates that it was edited from some template (and poorly I might add).

The link takes victims to a page on a shared hosting site at jimdo.com where the attackers have set up a fake web site.  Users who click on the link will be shown this site.

"OWA" login form
Now of course this is unlike any real Outlook Web Access site I've ever seen.  But hey, who am I...  Seriously, you should be educating your employees that if the login site for a service ever changes to something they don't recognize, they should expect to hear from IT before the change.

Now, if your users actually try to type something in, they should note that in no way is the password obfuscated (or blanked out).  This is really lazy on the part of the attackers, but not everyone can be a winner.  Also, employees should note that this is not an HTTPS site and become suspicious there.

Should the password be visible here?
Upon submitting the form, users are presented with the following response, letting them know that their response was received.  This again should be a big red flag.  Wasn't this supposed to have been an Outlook Web Access login?!

Wasn't this supposed to be a login form? Where's my email?
Nothing in this blog should be earth shattering to any infosec professional, but if you want to pass this along to your less technical friends to show them a concrete example of a common phishing attempt involving credential harvesting, feel free.