Monday, February 29, 2016

10 Commandments of Exploit Development

I'm teaching SEC760 in London this week while Stephen Sims relaxes at RSA Conference and my guys at Rendition Infosec do some pretty cool work on a new product line (and a pretty kick-butt pentest). A student in another class asked me this morning what the most critical things are to know about exploit development. I took this and ran with it, producing my 10 commandments of exploit development. I'm going to put together 10 commandments of exploit mitigation later this week, as suggested by Ed McCabe.

Here are my 10 commandments (cross posted from Twitter) for your reading pleasure:
Anything you would change here? Let me know.  Until the next time, I'll be here rocking London and loving exploit development.

Friday, February 26, 2016

Repeat after me - port scanning is not hacking

I feel stupid for even having to say this (again), but port scanning is not hacking. I read this article yesterday and lost a few brain cells. Look - for the 100th time, just because you got port scanned doesn't mean you got hacked. And stats like "300 million attacks per day" don't help anybody. Even if I ate lead paint chips for breakfast and somehow believed that statistic, how would I operationalize it? What do I do with this "information?"

When we use misleading statistics, we make our leadership dumber.  There's no reason to mislead management - we need them making decisions based on facts, not port scans.   At Rendition Infosec, we go out of our way to make sure that any reporting we're involved with is factual and doesn't try to mislead with inaccurate statistics.

What then constitutes an attack?  
This will likely cause someone to tell me I'm wrong, but I'm not sure we'll ever reach a consensus on what constitutes an attack.  Take for instance some of these examples:

  • You find evidence in your Apache logs that someone scanned for IIS vulnerabilities
  • Port scans against your firewall
  • Unsuccessful sqlmap attack attempts against a web server
  • Attempts to log in to an FTP server using the anonymous account 
  • Attempts to log into telnet using the root/password combination
  • Thousands of login attempts over SSH from a single IP address over a 24-hour period

Which of these are attacks?  I don't profess to have all the answers.  What I know for sure is that each port scan isn't an attack.  When nikto is run, each attempt to reach a web page doesn't constitute an attack.  Each port scanned is not an attack.
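To make that concrete, here's a quick triage sketch. The log format and threshold are made up for illustration (there's no standard here), but it shows how you might separate one-off probe noise from the kind of sustained single-source brute forcing most of us would agree to call an attack:

```python
from collections import Counter

# Toy classifier: failed_logins is a list of (source_ip, username)
# tuples pulled from your SSH logs over some window. A handful of
# attempts from an IP is probe/scan noise; thousands from one IP
# looks like a genuine brute-force attack. The threshold is
# arbitrary - what matters is defining it and applying it the same
# way every time you report.
def classify_sources(failed_logins, brute_force_threshold=1000):
    counts = Counter(ip for ip, _user in failed_logins)
    return {ip: ("brute-force attack" if n >= brute_force_threshold
                 else "probe/scan noise")
            for ip, n in counts.items()}
```

Whatever threshold you pick, the number you report is now defined and repeatable instead of "we saw 300 million attacks."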

The bottom line is that we need to select intelligent metrics.  In the community, we may disagree on what those metrics are, but two things are critical.
  1. When reporting within the organization, be consistent with how you generate your numbers. Changes in methodology skew your stats.
  2. When reporting outside the organization, be transparent with how you generate your numbers.  If you have a new iPad and I offer you 1000 coins, that's a bad deal for you if the coins are pennies, but a good deal if the coins are silver dollars. Talking about "attacks" without context is like offering someone "coins" to buy goods.  You'll look like you're trying to hide something, because you probably are.
While this problem is widespread in our industry, I expect NSA to know better and stop spreading around useless hyperbole.  If NSA can't be bothered to get the facts right about cyber attacks, the rest of the industry has little hope.

Even if Apple decrypts the phone, what if local app storage uses encryption?

Even if Apple loses the legal challenge and is forced to decrypt the iPhone, the FBI still may not get all the data they are looking for. Best practices for mobile applications require that any sensitive data be stored encrypted in local storage on the phone. This means it is possible that data from some applications in use by the suspects is encrypted on the phone. If the keys required for decryption are also stored on the phone, perhaps the FBI can decrypt the data from these applications.

However, the possibility also exists that the application requires the user to enter a password to access the data (e.g. when opening the application).  If this is the case, the FBI may not be able to get at the data regardless of Apple's help.  Just food for thought, but it's a great reason to ask "if the FBI gets its way, where will it end?"
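For the curious, the password-to-key piece usually looks something like this. This is a hedged sketch using a generic PBKDF2 derivation - the parameters are illustrative, and no claim is made about what any real app on the phone actually does:

```python
import hashlib
import os

# An app that encrypts its local storage typically derives the
# encryption key from the user's password with a slow key derivation
# function. If the password only lives in the user's head, no amount
# of access to the phone's flash recovers the key without guessing
# the password itself.
def derive_storage_key(password: str, salt: bytes) -> bytes:
    # High iteration count makes offline password guessing expensive
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

salt = os.urandom(16)  # stored next to the ciphertext; not secret
key = derive_storage_key("user-chosen password", salt)  # 32-byte key
```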

Thursday, February 25, 2016

Obvious gov response to Apple claim "you can't force us to write code"

Apple has argued that the government can't force it to write vulnerable code for its phones. If you check my other posts, I think you'll see I side with Apple on this. Privacy and security concerns aside, the financial impact on Apple would be disastrous.

Even if Apple wins this challenge on the grounds that the government can't force it to write code because doing so would be "unreasonably burdensome," it leaves important legal doors open to the FBI for future requests.

The government could conceivably create its own iPhone backdoor. What prevents them from loading it is the secret signing key that Apple currently has in its possession. If the FBI used experts to write its own backdoor, it might later sue Apple for access to its private key and/or to force it to sign the software. This would be even more devastating for users than the original request, and it breaks the security update model for everyone. This is doubly so if the updates are signed under sealed court orders (i.e. in secret, such as with an NSL).

While I applaud Apple for fighting back using all possible legal challenges, I hope this particular legal theory isn't the only one that holds the government back. If it is, expect it to be only a temporary setback, with follow-on action from the FBI.

Potential alternatives for Apple

If the court order for Apple to build a backdoor for the San Bernardino shooter's phone is upheld, Apple has options.

Option 1:
Apple will be asked to build a backdoor to help the FBI compromise its own software.  Per the court order, this software image can be restricted to a single phone, meaning that the FBI can't test it on another phone first.  It's not inconceivable that the software image might have some unforeseen bugs.  Maybe one that does indeed erase the keying material?  Just saying.

Of course this "oops, we didn't mean to have a bug" will only work once.  After that, Apple might be forced to digitally sign FBI developed software under court order.  Who knows.  But it would certainly be interesting to see what the FBI does next if the software doesn't work as the FBI has planned.

Option 2:
Being a service connected disabled vet with a prior traumatic brain injury, I can be clumsy from time to time.  Students have all seen me trip over those damn projector screen feet.  It's like I can't help myself.  Having said that, I'd like to let Apple know that my friends call me Jake "Never Drop Anything" Williams.  In fact, I might even be better known by that name than MalwareJake.  I'd like to offer my services as a highly skilled mobile device forensics technician to Apple.  I like protecting our fundamental freedoms, they want to comply with the court order. Well, you get the picture.  Nothing could go wrong if you let me install the new software image on the suspect phone.

Even if Apple decides it wants to comply, a single rogue employee can fundamentally take the law into their own hands.  Of course, they'll likely be charged for this caper.  But I'm betting they'll have a great legal defense fund.

Blatant Disclaimer
Everything above is just an intellectual exercise.  I'm not seriously suggesting that Apple should intentionally create defective/damaging software - an action that would leave it in contempt of court.  I'm also not seriously saying I'd break a phone if it were in my possession.  And I'm not suggesting that anyone else do it.  I do however think it is an interesting thought exercise to consider some of the less than legal alternatives at Apple's disposal if they lose the court battle with the FBI.

Wednesday, February 24, 2016

The USG backdoor definition double standard - Juniper vs. Apple

I wanted to make another quick point in the FBI vs. Apple debate that I haven’t yet heard anyone else make.  Fancy that – I’ve had an original idea.  There are two key points to keep in mind as we begin this discussion:
  1. The FBI keeps saying that they aren’t looking for a backdoor.  The FBI just needs a capability that intentionally weakens security features in a product made by a US manufacturer.  
  2. The FBI wants Apple to weaken the security of the iPhone in question for the purposes of intelligence collection. Recall that this is not a law enforcement matter (the suspects are dead).
My memory is sometimes a little short, but I seem to remember a few months ago when it was discovered that an unknown actor placed a crypto backdoor in the Juniper code base.  I'm not talking about the SSH backdoor (CVE-2015-7755), I'd like to intentionally remove that from the debate.  But for the crypto backdoor (CVE-2015-7756) I see an obvious parallel.

Let's review the facts about the Juniper crypto backdoor:
  1. The software weakened security features designed to protect customers who used the device. But it definitely wasn't a backdoor because it didn't directly allow the unknown party access to the device.
  2. Though we don't know who planted the software, it is almost universally agreed upon that it was placed by a nation state for the purposes of intelligence collection.
When comparing these two points looking for similarities, the FBI was drawing a blank. So I brought in Ray Charles to take a look, and even he can see that there's a clear parallel here.

Okay, so the first point is ridiculous.  You can't seriously say that the Juniper software wasn't a backdoor.  It was an encryption backdoor.  It allowed an attacker to reveal data that customers wanted to keep secret.  In fact, they bought the Juniper devices specifically to keep their data protected from unauthorized viewing.  Then an unknown party put a backdoor in the software for the purposes of enabling intelligence collection.  

I seem to remember some people being really mad about this, including many of our elected representatives. In fact, congress wants answers about the Juniper backdoor. I'm really curious why so few elected officials are being vocal about the FBI order to Apple. If an unknown attacker had the same capability that the FBI is asking for, they almost certainly wouldn't be silent. If China drafted a court order to get an iPhone backdoor, um, I mean intentional software weakness, the US certainly wouldn't be silent. This is a double standard of language if I've ever seen one.

Tuesday, February 23, 2016

Effectively implementing the NYTT in IR

In an earlier post I mentioned the New York Times Test (NYTT). Put broadly, this means that when making decisions during an incident response, you should always consider how a particular action would look if the entire thing came to light and were reported on by the New York Times. Organizations that consider how their actions would look often make better decisions about incident response (IR). Too often during an IR, we are tempted to make decisions that assume "nobody will know" if we cut a corner somewhere. Even the feeling in our gut tells us that's probably a bad idea. Yet many still go ahead with cutting corners, burying evidence that makes us look bad, and other DFIR high crimes and misdemeanors.

Step 1: Assume it will be public
So the first step of effectively implementing the NYTT is to assume that everything you do is going to end up in the press. Your decisions about what you protect and what you don't will be there. Your decisions about implementing some controls over others will also be there. And to be fair, if you are in the middle of an IR, you probably didn't make the best decisions about implementing effective controls. Or you did make the right decisions, but your limited resources didn't allow you to implement them. Or... well, something happened. You're here now, and it could easily be made public.

Step 2: Don't give your audience too much credit
At Rendition Infosec, I work with a number of different clients who actually try to apply the NYTT.  But they give the audience too much credit.  They say something like:
Sure, on the surface that sounds bad. But when you consider ... our decisions make perfect sense.
First, you have to understand that most news articles have a maximum word count. Second, they are written to the lowest common denominator. If the "..." you want the audience to consider requires specialized knowledge or even a college degree, it's not going to make the article. Sure, you can publish your own information via press releases or a blog, but again, if it's complicated, few will read it and even fewer will understand it.

When applying the NYTT, you have to figure that your audience will not fundamentally understand all the mitigating factors the way you do.  After all, you're a security professional - most are not.  I find that trying to explain it to my mom helps.  For obvious NDA reasons, mom doesn't actually get to hear about my work, but I visualize how I would explain it.  This removes infosec lingo and acronym paralysis.

Step 3: Involve PR early, involve PR often
In infosec, we generally don't interface well with PR.  I find that this is usually because infosec professionals think that PR tries to oversimplify issues.  But in these cases, PR is usually right.  Also, understand that PR is likely to be the voice of the organization during an incident.  They will influence how your story will be told.  Talk to PR and get their take on some sample situations.  How would they present these to the press? This understanding is invaluable in your decision making process.  Bottom line, PR is critical to getting your message out there, and I'll cover more about working with PR in a future post.  But for the scope of this article, I suggest you talk to PR to use their expertise on how your situations will most likely be framed in the press.  Verdicts in the court of public opinion matter - a lot.

There are certainly more things that you can and should consider when dealing with the NYTT.  But these three simple steps will address the vast majority of your issues.

Monday, February 22, 2016

If the FBI is successful with Apple backdoor, should you ever update your computer again?

Lots of people have some really strong opinions about the Apple debate.  I'd like to suggest that like all arguments involving technology, if you don't understand the technology at a rudimentary level please stay out of the debate.  Or better yet, educate yourself so you truly do understand what's going on and can speak intelligently. What this debate doesn't need is another talking head on Fox News or CNN who doesn't know the first thing about the technology involved.

What the FBI is essentially asking for is a manufacturer produced backdoor for the iPhone.  I'll cover the technology issues with this specific request in a later post if I really think it is warranted.  In this post, I really want to examine the precedent that the request sets for the rest of the computing industry.

What really happens when you patch your computer?
When you patch your computer, an update process on your computer reaches out on the Internet to download an update. Ideally, this update is downloaded over HTTPS. With the HTTPS download, best practices dictate that the update process use certificate pinning to ensure that it is talking to the correct remote server. Without certificate pinning (or certificate validation), an attacker could use DNS spoofing and/or forged certificates to redirect the update process to an attacker-controlled site.
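Here's a minimal sketch of what pinning looks like in an update client. The fingerprint and any hostname you'd feed it are placeholders for illustration, not any vendor's real values:

```python
import hashlib
import hmac
import socket
import ssl

# The updater ships with the expected certificate fingerprint baked in
# at build time. A forged cert that a CA would happily "validate" still
# fails the pin check because its fingerprint differs.
PINNED_SHA256 = "0" * 64  # placeholder fingerprint for illustration

def cert_fingerprint(der_bytes: bytes) -> str:
    return hashlib.sha256(der_bytes).hexdigest()

def connection_is_pinned(host: str, port: int = 443) -> bool:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return hmac.compare_digest(cert_fingerprint(der), PINNED_SHA256)
```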

When the update is downloaded, it is subsequently executed on the machine, almost always with elevated privileges (SYSTEM or root level).  Ideally, this update package has been digitally signed with a signature that is known to the software package applying the update.  If the update is just signed with any "validated" certificate, an attacker could compromise a CA and issue a "valid" certificate.
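The shape of that check, boiled down to a pinned digest (real updaters verify an actual digital signature against a pinned public key; a hash pin stands in here just to keep the sketch self-contained):

```python
import hashlib
import hmac

# Before executing anything with SYSTEM/root privileges, verify that
# the downloaded blob matches what the vendor published. Refuse to
# apply the update otherwise.
def verify_update(update_blob: bytes, expected_sha256: str) -> bool:
    actual = hashlib.sha256(update_blob).hexdigest()
    # constant-time compare out of habit, even for public digests
    return hmac.compare_digest(actual, expected_sha256)
```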

Attack vectors against updates
The most obvious attack vector here is to find a portion of the update process that is not protected by proper security controls. One example would be compromising a software distribution site and replacing the original package with trojaned software. For insecure software upgrade processes, the ISR Evilgrade tool is already a useful proof of concept for the damage that can be done.

A more technologically advanced attacker could find a flaw in the way that updates are applied.  We saw this done with the Flame malware, which hijacked Windows updates.

Attackers could also steal valid signing certificates, as was seen in Stuxnet. In Stuxnet, the attackers signed device drivers, but the same concept could be applied to software updates. Likewise, we see some upgrades that aren't signed at all. Engineers usually claim these are still safe because the updates are distributed via HTTPS. This is a fallacy: attackers could much more easily steal an HTTPS certificate than a software signing cert (especially with the newer EV signing certs).

How does all of this apply to Apple/FBI?
If Apple builds a firmware backdoor for the iPhone and delivers it to the FBI, it will set a dangerous precedent.  And the precedent goes far beyond mobile phones.  Once the FBI understands that it can use the courts to force vendors to backdoor products, they will use this capability to impact the update process.  

Picture this, the FBI is worried about a terror suspect.  Through surveillance, they determine that the suspect uses a Windows laptop.  But they can't get access to the laptop.  In fact, they are having trouble identifying any specific selector for the laptop itself.  For anonymity reasons, the terrorist only connects to the Internet on this laptop at his local neighborhood coffee shop.  So there's that.

In the name of national security, the FBI goes to Microsoft with a court order. They need MS to engineer a backdoor that will allow them to read data on the terror suspect's computer. The FBI says that MS should build a malicious Windows update that installs a remote access trojan (RAT) on the suspect's machine. Because they don't know which machine is his, MS should apply this update to all machines that connect to the update service from the public IP address of the coffee shop.

This same game could play out with Ubuntu, Red Hat, OS X, or you name it - pretty much any operating system. If the development of that OS happens in the US (or a country that will honor a US court order), then the FBI can compel its developers to create malicious updates.

These update concerns don't just apply to operating systems, they apply to any software (including 3rd party software) that is running on the suspect's machine.

I'm not a criminal, I don't care
First off, are you sure?  If the answer is yes, there's a great Twitter account you should follow that tweets out obscure federal crimes.  Stuff you wouldn't believe.  After reading this account, I figure that we are all federal felons for some obscure reason we are probably unaware of.

In the scenario I provided earlier, the FBI will compel Microsoft to install the updates on any laptop calling out from the coffee shop since they can't find a selector for the specific user.  This is a serious issue since the FBI will now have backdoors on many machines rather than just one.  They might delete these other RATs or they might look for evidence of other crimes, you know like law enforcement does.  Of course then the scope of the original warrant might be an issue, but there's always parallel construction to the rescue.

Parting thoughts
Bottom line, this is a bad idea.  It sets a horrible precedent and changes the customer/vendor trust model forever.  Most of our enterprise trust models assume that vendors won't be backdoored, or at a very minimum that they will never exploit our machines directly.  If the vendor can exploit our machines at will to give a third party access (court order aside), do you really own the machine?  Do you really own the data?

If the FBI succeeds in this case, vendors are likely to flock away from the US to countries with less oppressive courts. Another possibility is an M-of-N signing strategy. In such a strategy, signing an update would require the cooperation of M of N parties, each holding a portion of the keying material, so a court in a single country could no longer compel a vendor to sign an update. Sure, this might be effective, but it will be costly, and the costs will be passed on to you, the consumer.
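To illustrate the M-of-N idea, here's a toy Shamir secret sharing implementation. It's a deliberately simplified sketch - a real deployment would use threshold signatures or HSM quorums rather than ever reconstructing the raw key in one place:

```python
import random

# Split a signing secret so that any k of n holders can reconstruct
# it, while k-1 holders (or one coerced vendor in one country) learn
# nothing about it.
PRIME = 2**127 - 1  # a Mersenne prime; the field our polynomial lives in

def split_secret(secret: int, n: int, k: int):
    """Return n shares; any k of them reconstruct the secret."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x=0 recovers the secret."""
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret
```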

There are other issues with these malicious updates falling into the wrong hands, targeting the wrong machines, etc.  If there's enough interest (hit me up in comments, Twitter, etc.) I might write another post that addresses these concerns.  Let me know if there's any interest in a follow up.

Sunday, February 21, 2016

Infosec professionals agree whistleblowers will be a problem - what are you doing about it?

I was reading an article from CSO Online about how we should expect a larger number of whistleblowers who sound the alarm over poor infosec practices.   I tend to agree.  At Rendition Infosec, we’ve seen an uptick in people willing to blow the whistle to regulators over perceived cyber security risks.  I think some of this is generational.  The younger generation (joining the workforce over the last 5-10 years) seems to be much less likely to stand by while anything is swept under the rug.  This, it turns out, also includes infosec issues.

The article points out that case law is relatively scant on protecting cybersecurity whistleblowers. However, it also points out that because cyber security isn't called out as an explicit exception to the law, whistleblowers are most likely protected. This doesn't mean that blowing the whistle is a good idea - you could be blacklisted from future employment. At a minimum, it's likely to mean changing jobs.

The article correctly notes that the FTC and SEC are both ramping up efforts against companies with lax cybersecurity. Generally, for publicly traded companies, just knowing that there's a security issue forces the organization to act or disclose the vulnerability to shareholders in their public filings. Since disclosing publicly is obviously less than ideal, I think we are far more likely to see organizations either fix the problems or just ignore them altogether.

No matter how you feel about whistleblowers, they will be a reality in your organization sooner rather than later. If you don't have a plan for dealing with a disgruntled employee blowing the whistle, you have a critical hole in your infosec playbook.

At Rendition Infosec, when we help organizations plan for a possible whistleblower disclosure, we generally tell them they have two critical areas to worry about.  First, make sure that decisions about infosec pass the “New York Times Test” (NYTT).  Second, work with PR before a disclosure and solidify your containment/press strategy.

What is the NYTT?
The New York Times Test is pretty simple. Simply look at your actions involving infosec and ask yourself “if this were published in the New York Times, would the average reader think we were handling things appropriately?”  If the answer is “of course not, they’d be outraged” then I submit you’re doing it wrong.  Unfortunately, the NYTT is actually a bit harder than it looks.  I’ll have a full post on effectively implementing the NYTT hopefully later this week.

PR engagement
Like it or not, you need PR.  Well, more accurately your organization needs PR.  When the press gets ahold of a lead from a whistleblower (and this will happen eventually), you need to be ready with a response. 

At Rendition, we worked with many organizations on the Ashley Madison disclosure.  We worked with these organizations to determine their exposure, including employees that registered using a corporate email and those who registered and/or paid for services from corporate IP ranges.  Let me take this opportunity to say that I don't really care what you do on your own time without using company assets. But that wasn't the case here, and that's sort of the point.  Of the 17 organizations Rendition did this for, only two were ever contacted by the press (that I know of).  But all 17 were ready with prepared statements – nobody was taken by surprise.  Again, I’ll do an upcoming full blog post about engaging your PR team effectively in the incident response process.

Parting thoughts
Overall, the consensus of the CSO Online article is clear: be ready for the cyber security whistleblower.  My experience tells me that they aren’t wrong.  If your policies and playbook don’t cover dealing with whistleblowers today, talk to a professional with experience in dealing with these issues before you are taken by surprise.  Above all, get your policies together and deal with issues as they arise. That way, potential whistleblowers have fewer opportunities to blow the whistle in the first place.

Saturday, February 20, 2016

A tale of idiots and red tape

Several people have emailed me privately to tell me how on target I was with the previous blog post. Many have also publicly (and not so publicly) told me how very wrong I am about how Cyber Command (and their best bed buddy NSA) operate. Most examples of idiocy there are classified and can't be blogged about, but to illustrate how truly bad the problems are, let me share an unclassified story of idiots and red tape.

Cyber Command has all the red tape any infosec professional could ever want
A senior network exploitation operator noticed one day that the organization had deployed a large number of devices on an unclassified network.  He said to himself:
Wow - I know our targets frequently misconfigure these devices and leave default services enabled.  I wonder if our contract administrators staffed by the lowest bidder have done this too?
The operator decided to check, but realized that pen testing a DoD network without authorization could be a criminal offense. He's a smart guy, so he didn't penetration test anything. He simply walked up to the device and started typing at the keypad. Just by looking at options on the on-screen display, he confirmed that default services (including an incredibly insecure embedded HTTP server) were enabled.

The operator then emailed IT to let them know.  IT first said that the entire system was configured securely and he was wrong.  HTTP services were in fact disabled they said.  So he opened up a web browser on his system and navigated to the web page (which did not require authentication).  The web server would have accepted default credentials that would have given him additional access.  The operator knew the default passwords since he used them to regularly hack others (with authorization).  But he stopped short of logging in, knowing that this would be a big deal.  IT summarily ignored him and simply stopped answering emails.

When IT ignored him, he emailed security. Rather than security contacting IT to address the vulnerabilities in Internet-connected DoD systems, they opened up an investigation into the operator's actions. Security noted in their report that connecting to a web server officially involves making a TCP connection, which is sort of technically a port scan. And port scanning sounds a lot like hacking. Oh yeah, you can see where this is going. This senior CNE operator who hacks other nation-states for a living found a glaring vulnerability in Cyber Command/NSA's own infrastructure. They should have given this guy a medal.

But yeah, he didn't get a medal.  Instead he got a reprimand.  A written f*cking reprimand.  And that was the beginning of the end for him.  He started looking for a new job and no longer works for them.  He was one of the best operators I've ever had the pleasure of working with.

So go ahead and tell me all about how Cyber Command rewards creativity, problem solving, and outside-the-box thinking. But meet me in a SCIF to do it. I've got a hundred more stories like this that I can't share in an open forum. This didn't happen a decade ago; it was less than two years ago. Are things getting better? Maybe. But according to the people I'm still talking to, things are changing at a glacial pace (if at all).

Unfortunately, our adversaries are adapting to changing realities faster than Cyber Command.  Reminds me of the Polish Cavalry brigades meeting Hitler's tanks on the WWII battlefields with horses.  Sure, the cavalry had a rich history of tradition and military discipline.  But none of that mattered, because tanks > horses.  So please Cyber Command peeps... keep telling me about how well your traditional kinetic warfare models map to cyber.  I'll just remind you to go feed your horses and keep prepping for that tank battle.
So I've been informed that my public school education has failed me (again) and the Polish Cavalry never charged German tanks, instead engaging infantry... To reinforce my point, I could list any number of other examples, like the Chauchat "machine gun" or the Sherman tank with its inferior gun and armor. Seriously, thanks for correcting the record on the analogy, but my point about sticking to tradition in spite of evidence that it's the wrong move stands.

Friday, February 19, 2016

Cyber Command HR having trouble attracting candidates

I read a story today about how HR at US Cyber Command is having trouble attracting and retaining candidates.  The story notes that HR can use some incentives to offer additional pay to recruit members.  But I think the story falls short of addressing the real problem.

I know what I'm talking about here.  I used to work for a group within US Cyber Command and I can testify that pay wasn't the only issue there (by far).  Sure, it's a component of what keeps people happy but it's not the only thing.  Since the article focuses on pay, let's examine that for a moment.

A GS-12 step 1 in the DC area makes $77,490. GS-12 is the grade I've had to fight to get qualified candidates hired in at. Which is ridiculous. Freaking ridiculous. What do I mean by qualified? At least a 4-year degree, preferably a master's degree in a technical topic. HR was usually more accommodating about bringing people in at GS-12 if they had at least one certification. Note, these are people I was hiring who had 3-5 years or more in information technology and/or information security. New college graduates were almost always given GS-9 or, in very rare cases, GS-11 assignments.

So what about pay incentives?  Under the current law, the maximum incentive pay for an individual is 25% of base pay.  For a particular group of highly specialized people you can offer 10% bonuses under the current law.  I understand that since I left, certain elements in the government are offering 10% bonuses for certain highly specialized cyber operators, but it's not an entry level thing so it won't work for recruiting.  And sadly, when your salary is only 65-70% of industry, a 10% bump on that doesn't really help much for retention.
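The arithmetic on that is worth spelling out. The industry figure below is a made-up round number just to show the gap:

```python
# If a government salary sits at ~70% of the industry rate, even a
# 10% specialized-operator bonus leaves a gap of roughly 23%.
# Numbers here are illustrative, not official pay tables.
industry = 150_000            # hypothetical senior infosec salary
gov_base = 0.70 * industry    # ~70% of industry, per the estimate above
with_bonus = gov_base * 1.10  # 10% incentive on the government base
gap = 1 - with_bonus / industry
print(round(gap, 2))  # -> 0.23
```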

I was a GS-14 when I left government service.  And I was a pretty rare animal when it came to network exploitation operators.  There are many smart people competing for a very few promotions to GS-13.  GS-14 was virtually unheard of within the technical ranks.  Even with a 25% bonus (the maximum allowed under current law) my salary would have been a pittance compared to private sector salaries in senior infosec (the job I was performing there).

It's not just the $$
But it's not all about money. The US Cyber Command structure doesn't reward innovation. At its heart, Cyber Command is a military organization. And the military loves structure. A privileged few set the operating procedures that all others will follow. Thinking and acting outside the box are not only not rewarded, they are actively discouraged. This is VERY unfortunate, because true infosec professionals want the challenges associated with solving complex problems, not just following a flow chart.

Of course I'm not saying all cyber operations can be reduced to a flow chart, but believe me, US Cyber Command is trying to get there.  And that's not necessarily a bad thing - it's the only way to scale out their CNE operations.  It's just that the methodology is bad for retaining top tier talent who want a challenge.

Many employees, and most of those in leadership positions, at Cyber Command are either current or former military.  That's not a bad thing - it's a military organization after all.  But it does cause problems when it comes to attracting personnel who often sport blue mohawks, tongue rings, and ear discs.  Of course drugs (including marijuana where it is legal) are also off the table.  And we've recently seen how much trouble the FBI is having recruiting due to its position on the wacky weed.  Bottom line: to really be accepted at Cyber Command you have to play the military game, something many in infosec simply aren't willing to do.

Sadly, this guy will never really be accepted at US Cyber Command
The bottom line is that while pay is an issue for attracting top talent for Cyber Command, it's not the only issue by far.  Pay will help get some people in the door (though even with incentives, Cyber Command pay doesn't come close to industry norms).  Unfortunately, pay is very unlikely to keep those people there.  The culture will have to change and adapt to those workers who dominate the information security field.

Does everyone in infosec have strangely colored hair and body piercings, and smoke pot?  Of course not.  But enough people fall into one of those categories that Cyber Command really needs to take notice if it wants to retain them. Somehow, those leading Cyber Command will have to figure out how to do this without simultaneously losing its own military roots. As it stands, Cyber Command only has its battle cry of "awesome missions" to fall back on.  It doesn't pay people commensurate with the rest of a very understaffed industry.  And when it comes to autonomy and problem solving, well, it simply doesn't offer much there either.

Wednesday, February 17, 2016

Effective disaster recovery planning would make Hollywood Presbyterian a non-story

As you've probably heard, the Hollywood Presbyterian hospital has been the victim of a cyber attack.  The cyber attack reportedly involved ransomware and has resulted in patients being diverted in some cases to other hospitals in the area.  The network at the hospital has reportedly been offline to some degree for almost a week now.  Although the hospital reports that patient care has been unaffected (for obvious liability reasons), it's hard to imagine how that isn't the case when lab results are being faxed rather than emailed and much historical patient record data is currently unavailable to care providers.  The attackers reportedly asked for $3.6 million, an amount relatively unheard of for a ransomware attack.  The hospital was reportedly impacted starting on February 5th and resumed normal operations 10 days later, ultimately paying $17,000 to the attackers in the interest of restoring its "administrative functions."

At Rendition Infosec, we’ve helped a number of companies deal with ransomware attacks over the last couple of years.  These attacks cause significant stress for victims who have to figure out how to deal with an overall lack of access to their data.  Ransom payments are almost always demanded in bitcoin and even when the procurement departments are ready to pay, accounts payable often simply doesn’t know how to pay in bitcoin.  Of course Rendition helps customers navigate the process of paying to get decryption keys for their files.  As much as I enjoy helping clients through the process and getting them back on their feet, we shouldn’t have to.

Clients with good backups never need to deal with any ransom demands - they simply restore from backup.  Sure, the ransomware attack is a pain, but it doesn't elevate to the level of a potentially career altering event.  Note however that I said good backups.  Almost every organization has backups of some of their data.  How much is backed up and how current the backups are is what separates organizations with a recoverable incident from those truly impacted by the event.

A good disaster recovery (DR) plan will ensure that an organization can weather the storm of a ransomware attack.  Those with good DR plans regularly test restoring their backups to ensure that these organizations really have what they think they have. 

Working with one organization a few years ago, Rendition found that the company only had partial backups for its critical email servers.  As the email server cluster had grown and accounts had been migrated from one server to another, backups were missed.  Backups were present for most users, except those with last names beginning with 'H' or 'T'.  Luckily we found this before there was an issue, but only because we were exercising the DR plan.  Simply pencil whipping a DR plan exercise won't be enough to save you in one of these cases.

If you don’t have a good DR plan (or you aren’t sure), work with an expert to construct one.  Failing at DR is truly a career limiting move, and one you definitely don’t want to make.  If you think your DR plan is good, but you aren’t currently exercising it, start today to make sure that you really have the protection you think you do.  You definitely don’t want to find out during an emergency that your backups are ineffective.

Back to Hollywood Presbyterian… I believe that their IT management thought they had good backups.  No business would knowingly use bad backups in this day and age.  But obviously there was an issue somewhere that prevented them from using their backups effectively.  If they could have just rebooted and restored from backup, they would have.  The fact that they didn't speaks volumes about their inability to do so.  It seems almost certain at this point that they lacked a well exercised DR plan.  If you are in any industry, but health care in particular, you should be asking how well your organization would respond to such a cyber attack.  If you can't say with certainty that you would weather the storm unscathed, it's probably time for professional help with your DR plan.

Glibc getaddrinfo vulnerability will disproportionately impact IoT

The glibc vulnerability announced yesterday is very interesting.  It requires a large response to a DNS request to trigger the vulnerability.  These are typically seen with EDNS and DNSSEC.  I published yesterday that disabling EDNS would partially mitigate the vulnerability on impacted hosts.  Several people have pointed out that this is only partially true (and I think they are right).

While disabling EDNS keeps your resolver from advertising support for large responses, an attacker with man in the middle (MITM) position could potentially send you a large reply anyway.   I haven't yet confirmed whether the resolver will process an oversized response when not configured for EDNS, but it might. Honestly, this might not require MITM position anyway: an attacker connecting to a vulnerable service can force a DNS lookup of their source IP, and that lookup would presumably be answered by a DNS server they control.  So disabling EDNS *might* mitigate the vulnerability.

But is it exploitable?  Google says they have a weaponized exploit for at least one service.  Reportedly, this has been shared with RedHat to demonstrate the vulnerability as well.  We know that the vulnerability is the result of a mismatch between the size of a buffer on the heap and a buffer on the stack.  The heap contents are copied onto the stack without checking or updating the size of the stack buffer.  So basically you have a run of the mill stack based overflow.
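To make the bug class concrete, here's a minimal sketch in C of the pattern described above - a buffer that migrates from the stack to the heap while one code path keeps writing to the original stack buffer. The function and variable names are mine for illustration; this is NOT the actual glibc code.

```c
#include <string.h>
#include <stdlib.h>

#define STACK_BUF_SIZE 2048   /* matches the reported stack buffer size */

/* Illustrative sketch only.  The buffer is grown onto the heap when
   the response is large, but a later copy still targets the original
   stack buffer using the (larger) response length. */
int process_response(const unsigned char *response, size_t resp_len) {
    unsigned char stack_buf[STACK_BUF_SIZE];
    unsigned char *buf = stack_buf;
    size_t buf_size = STACK_BUF_SIZE;

    if (resp_len > buf_size) {
        /* Grow into the heap... */
        if ((buf = malloc(resp_len)) == NULL)
            return -1;
        buf_size = resp_len;
    }

    /* BUG: this path forgot that buf/buf_size were updated and copies
       resp_len bytes into the 2048 byte stack buffer - a plain stack
       overflow whenever resp_len > STACK_BUF_SIZE. */
    memcpy(stack_buf, response, resp_len);

    if (buf != stack_buf)
        free(buf);
    return 0;
}
```

The fix, of course, is to copy into `buf` and honor `buf_size` - exactly the bookkeeping the vulnerable code missed.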

But stack based buffer overflows have been mitigated for years using exploit mitigations such as stack canaries, ASLR, and non-executable stacks.  So any stack based buffer overflow you want to exploit today requires some gymnastics to get working.  So will this work?  Probably not on your nice shiny Linux distro.

But what about IoT?  Stack canaries and ASLR take processing power, and many embedded devices have precious little to spare.  This means they often don't enable the exploit mitigations that keep our core Linux distributions safe.  Unfortunately, the embedded devices that lack these protections are also the most likely to suffer from patch rot - and in most cases they are the hardest to monitor.  It's not a good situation, to say the least.

Are IoT devices impacted?  Yeah. For sure.  This vulnerability has been around since 2009 and has been compiled into tons of embedded software.  If you are running an impacted IoT device, work with your manufacturer to demonstrate the vulnerability and get a patch.  

Tuesday, February 16, 2016

Problems with glibc getaddrinfo, patching is currently non-trivial

TL;DR - patches for this widely deployed vulnerability not generally available.  Recommend disabling EDNS by modifying your /etc/resolv.conf to ensure that the "options edns0" directive is not present.  Don't have that option present? Then EDNS probably isn't enabled on your machine and you probably* aren't exploitable.
*I still need more time with this to fully understand whether other exploitation paths might exist. Use this at your own risk.

What's going on?
Google has discovered issues with the getaddrinfo function in glibc.  They state that these issues may be remotely exploitable, though an attacker will have to bypass exploit mitigations like ASLR.  There is a patch available, but it's a source level patch, requiring users to recompile code - something most people simply can't do.

Google claims to have a weaponized proof of concept, but they are not releasing it.  They have however released a proof of concept to demonstrate the vulnerability.

Is this a problem you need to worry about?  
Probably.  It's too early to understand all the applications in which this vulnerability may be remotely exploited.  But suffice it to say that this is something we should patch sooner rather than later.

Why is this vulnerability more serious than others I hear about?
Because this vulnerability is present in a commonly used network function in glibc, the exposure is greater than if the vulnerability were in a single service.  Glibc is linked into many applications and many (if not most) network applications use the getaddrinfo function.
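For context, this is all it takes to land in the affected code path - a completely ordinary lookup. Any client linked against a vulnerable glibc doing something like the following (hostname and port here are arbitrary) exercises getaddrinfo:

```c
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>

/* Minimal example of the call nearly every network client makes. */
int lookup(const char *host) {
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_UNSPEC;      /* request both A and AAAA - the
                                         dual query path matters here */
    hints.ai_socktype = SOCK_STREAM;
    int rc = getaddrinfo(host, "80", &hints, &res);
    if (rc != 0) {
        fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rc));
        return -1;
    }
    freeaddrinfo(res);
    return 0;
}
```

That's why the exposure is so broad: ssh, curl, mail clients, and countless daemons all funnel name resolution through this one function.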

When was the vulnerability introduced?
It appears that the vulnerability was introduced in glibc 2.9.  Interestingly, the bug was noticed and added to the glibc bug tracker in July 2015.  If your system (or code) is using a much older version of glibc, you are not vulnerable to this bug (but you have a whole set of other problems).  Glibc 2.9 was released February 26, 2009, but the offending code commit occurred in 2008.

How easy will this be to patch?
Unfortunately, patching is non-trivial.  You'll have to compile software from source or wait for new binary distributions to come out.  At this point, you are in standby mode.  Consider this the warning order.

Are there any mitigations?
This writeup has some mitigations.  One detection strategy is to look for very large DNS responses, since only a large response can overflow the stack based buffer.  But large DNS responses (those over 512 bytes) are not entirely abnormal.  In fact, many DNS SRV and TXT records exceed this size.  However, DNS host records (A and AAAA records) typically do not reach such large sizes.
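As a sketch of that detection idea, here's how a monitor might flag oversized DNS-over-TCP messages using the protocol's two byte length prefix. The 2048 byte threshold matches the stack buffer size discussed above, but treat the threshold and function names as my own assumptions, not anything from an existing IDS:

```c
#include <stdint.h>

/* DNS over TCP prefixes each message with a 2 byte big-endian length.
   Anything larger than the 2048 byte buffer is worth flagging. */
#define SUSPICIOUS_DNS_LEN 2048

static unsigned dns_tcp_msg_len(const uint8_t *prefix) {
    return ((unsigned)prefix[0] << 8) | prefix[1];
}

static int is_suspicious_response(const uint8_t *prefix) {
    return dns_tcp_msg_len(prefix) > SUSPICIOUS_DNS_LEN;
}
```

You'd pair this with alerting on oversized UDP datagrams to port 53 responses as well; the point is simply that the trigger condition is visible on the wire.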

The easiest mitigation to use appears to involve disabling EDNS by removing the "options edns0" directive in /etc/resolv.conf.  Of course you shouldn't use this if EDNS is in use on your production machines, but honestly it probably isn't.  Your mileage will vary as to whether you can use this as an effective strategy.  This fix will need to be applied on all hosts that are running impacted software, including ssh, curl, and other network aware software.  I checked several servers in client environments and found that EDNS wasn't enabled on any of them.  But if you use DNSSEC, you have to enable EDNS.  So if you are using DNSSEC, bear in mind that this mitigation breaks it.  Given the choice between DNSSEC and remote code execution, I'll disable DNSSEC every. single. time.  Hands down.

Have fun and stay safe.  Also, get some rest tonight.  We're likely to be in a flurry of required patching later this week.

Saturday, February 13, 2016

Cell phone cameras create unexpected privacy problems

To regular readers, my apologies for a couple of light blogging weeks.  I've been busy and sick (a combination I don't recommend to anyone).  I'm just now getting back on my feet and will try to get back to a more regular tempo.

Cameras - they're everywhere
Everyone these days has a cell phone camera.  Everyone that is except for maybe my grandmother.  And it turns out, that might be a problem for her.

According to ProPublica, there have been at least 35 cases of nursing home workers sharing surreptitiously obtained photos of seniors under their care on social media platforms.  Apparently 16 of these involved Snapchat.

A knee jerk reaction to this news might be to restrict cell phones in the workplace.  I worked for years in environments where I couldn't ever take my cell phone, let alone any other photographic equipment. I know what it's like to live like that, and I don't think employers should impose it on their workers.  Your employees will probably quit if you try.

But it is a problem, and not just for nursing homes.  During OSINT audits of our clients at Rendition Infosec, we regularly find photos of sensitive data posted to social media.  These range from photos identifying equipment in the workplace to sensitive documents captured in the background of some photos.  Not something to be taken lightly.

So what's the right BYOD strategy?
My recommendation is that organizations embrace BYOD. Usually when we talk about BYOD in infosec, we're talking about keeping the devices from insecurely accessing the network.  But there's way more to BYOD.  Two issues we frequently advise Rendition clients on concerning BYOD are workplace photos and charging devices on work computers.

Rendition generally recommends making the workplace a "no photo zone."  This takes away any potential issues of an employee making the wrong judgment call when determining whether to post a photo.  HR and legal will probably like this too, since it nearly eliminates the possibility of harassment complaints where one party says "I didn't think I was doing anything wrong."  If the workplace is a no photo zone, it doesn't matter what's in the photo and employees are no longer left open to bad judgment calls.

When it comes to charging phones using the USB ports of work computers, this is just a full-on no go event.  Don't even think about it.  There are many reasons for this, but perhaps one of the biggest is that DLP systems are only recently starting to treat some phones as removable storage. Previously, many DLP systems were treating anything that registers as an MTP device as "not removable storage."  This often had disastrous consequences for DLP rules which treat copying sensitive files to removable storage differently than to an internal drive.  Obviously there are infection issues to deal with too when mobile devices are plugged into corporate machines.

Overall, a good BYOD strategy should help secure your overall working environment - not just the network.

Friday, February 5, 2016

Enigmasoft v. Bleeping Computer - an unbiased review

As you may have heard, Enigma Software (Enigmasoft) is suing computer help site Bleeping Computer for what is essentially a bad review.  The lawsuit alleges that many of the site's contributors knew Bleeping Computer received affiliate money for recommending MalwareBytes, a competing program.  It further alleges that reviewers on the site told people not to buy SpyHunter because it had been classified as "rogue software" (which was once true) and because Enigma had been accused of misleading business practices (also true).  Enigma claims these issues had been resolved by the time the review was published, and that the user who recommended MalwareBytes over SpyHunter based on this information he found on the Internet caused the company material harm in excess of $75,000.  I'm paraphrasing a little here - read the lawsuit to get all the details.

Enigma Software Website
I guess the only kind of bad publicity is no publicity at all.  I'll say that prior to this lawsuit, I'd never heard of Enigmasoft or their "flagship" security product SpyHunter.

HTTP login form? Seriously?!
One look at their website was more than enough to tell me that for a security company, they sure don't take security seriously.  The customer login form is served from an HTTP page.  When working with clients at Rendition Infosec, we sometimes see such forms submit to an HTTPS site - but even that isn't safe.  If the form itself is delivered over HTTP, an attacker in the middle can change the destination of the form or inject JavaScript to steal the credentials before sending them on to the original destination.

Even their dedicated login page at www.enigmasoftware/myaccount/ uses a POST to an HTTP site.  Any company should know better than this in 2016. But for a company that sells security software for a living, using an HTTP based login form is inexcusable.

Okay, HTTP logins... But how well does it detect malware?
Website aside, the SpyHunter software isn't particularly effective at detecting malware either.  I had several hundred malware samples on a virtual machine, ranging from the wiper malware that hit the Ukrainian power networks to some random stuff I downloaded months ago with maltrieve.  What were the detection results?  Not good.

SpyHunter can find cookies!
It turns out that SpyHunter can find cookies, particularly tracking cookies, like it's nobody's business.  It labels them as threats, which is arguably true.  Most people would probably prefer no tracking cookies be installed.  But would you pay to remove them when many free programs remove them too?  Of course not.  Shouldn't the SpyHunter program also find at least some of the well known malware on my machine too?  Of course it should.

No commission here
Let me be clear that I don't receive commission from any antivirus company for writing positive or negative reviews.  One of my Rendition employees took a look at the overall security of the SpyHunter program (stay tuned for more on that).

When it comes down to brass tacks, results are results.  My personal opinion is that if SpyHunter can't find well known malware, then it probably isn't worth paying for.

Is this a bad review?  Yep, damn skippy.  And well deserved too.  Is Enigma going to sue me for it?  I hope they know better.  This turtle says it better than I ever could...

Tuesday, February 2, 2016

What's in a prime? Ask socat...

Today it was announced that the tool socat, which is supposed to offer encrypted connections, has a glaring encryption flaw.  I teach this tool in the SEC660 course at SANS and mention it in a number of other courses.  I've used the tool operationally in penetration tests, trusting that the code was secure.

But I am not a cryptographer. Heck, I don't even rate as a mathematician.  So I certainly didn't notice that a 1024 bit number used as a prime in crypto calculations wasn't actually prime.  Much of our crypto today is built on hard number theory problems: factoring a very large number that is the product of two primes (a semiprime), or taking discrete logarithms modulo a large prime.  Diffie-Hellman falls in the second category, and its security argument assumes the modulus really is prime - if it secretly isn't, an attacker who knows its factors has a much easier job.  Good luck figuring that out by inspection.  Doubt me?  Tell me which of these looks like a prime number.  Don't rush, I'll wait.
Guess which number might be prime...
Of course it's not easy to figure out the answer. That's the point. Crypto assessments are very difficult to do (and more difficult to do right).  FYI, the one on the top is the non-prime number that was in use by socat.  I changed some digits around in the bottom number.  I have no idea if it is prime either.  Given the mathematical odds, I seriously doubt it is.

And that's what has happened with socat.  The tool uses a number that isn't prime where a prime is required.  This impacts the Diffie-Hellman key exchange, meaning an eavesdropping attacker could recover the secret key much more easily than originally intended.

In their advisory, socat lists impacted versions.  It looks like some older versions are not impacted.  But one question being asked is who generated the supposed prime number?  If this was committed to the code base by an attacker with eavesdropping capability (e.g. a nation state), the attacker could potentially decrypt traffic that the sender thought was secure.  Below, you can see the code changes, with the old 1024 bit value being replaced with a new 2048 bit value.

New 2048 bit prime
At Rendition Infosec, we regularly tell clients that they should always consider code reviews before relying on any software (even open source).  Unfortunately, it's really hard to find a crypto backdoor like this.  How does one really figure out whether a particular number like this is a prime?  Hire a mathematician?  Probably not.  I'll just say that it's a hard problem.
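That said, a probabilistic primality check is actually cheap to run - this is what libraries like OpenSSL do internally when generating DH parameters. Here's a sketch of a deterministic Miller-Rabin test. It only handles numbers that fit in 64 bits (and assumes a compiler with __uint128_t, such as gcc or clang on x86-64), so a real 1024 bit DH modulus would need a bignum library, but it shows that "is this prime?" is mechanically answerable:

```c
#include <stdint.h>
#include <stddef.h>

/* Modular multiplication without overflow (requires __uint128_t). */
static uint64_t mulmod(uint64_t a, uint64_t b, uint64_t m) {
    return (uint64_t)((__uint128_t)a * b % m);
}

static uint64_t powmod(uint64_t b, uint64_t e, uint64_t m) {
    uint64_t r = 1;
    b %= m;
    while (e) {
        if (e & 1) r = mulmod(r, b, m);
        b = mulmod(b, b, m);
        e >>= 1;
    }
    return r;
}

/* Deterministic Miller-Rabin for 64-bit integers: this fixed witness
   set is known to be sufficient for all n < 2^64. */
static int is_prime(uint64_t n) {
    static const uint64_t wit[] = {2,3,5,7,11,13,17,19,23,29,31,37};
    if (n < 2) return 0;
    uint64_t d = n - 1;
    int s = 0;
    while ((d & 1) == 0) { d >>= 1; s++; }
    for (size_t i = 0; i < sizeof(wit)/sizeof(*wit); i++) {
        uint64_t a = wit[i] % n;
        if (a == 0) continue;
        uint64_t x = powmod(a, d, n);
        if (x == 1 || x == n - 1) continue;
        int composite = 1;
        for (int r = 1; r < s; r++) {
            x = mulmod(x, x, n);
            if (x == n - 1) { composite = 0; break; }
        }
        if (composite) return 0;
    }
    return 1;
}
```

For 1024 or 2048 bit candidates you'd call something like OpenSSL's BN_is_prime_ex instead, which runs the same test over bignums - the point is that a reviewer could have caught this.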

But looking at overall code quality is relatively easy by comparison.  One thing that's pretty easy to identify is the use of vulnerable functions.  One such function is strcpy, which should generally be replaced with a bounded copy such as strncpy.  Unfortunately, even the current socat code has many calls to it.
jake@server$ grep strcpy *.c
hostan.c:      strcpy(ifr.ifr_name, ifp->ifr_name);
xio-exec.c:      strcpy(tmp+1, pargv[0]);
xio-exec.c:      strcpy(tmp, pargv[0]);

xio-ip4.c:      strcpy(namebuff, "ADDR");
xio-ip4.c:      strcpy(namebuff, "PORT");
xio-ip6.c:      strcpy(namebuff, "ADDR");
xio-ip6.c:      strcpy(namebuff, "PORT");
xiolockfile.c:   strcpy(s, lockfile);
xio-proxy.c:      strcpy(header, authhead);

xio-readline.c:   strcpy(cp, "using "); cp = strchr(cp, '\0');
xio-readline.c:      strcpy(cp, "readline on stdin for reading"); cp = strchr(cp, '\0');
xio-readline.c:      strcpy(cp, " and ");  cp = strchr(cp, '\0');
xio-readline.c:      strcpy(cp, "stdio for writing"); cp = strchr(cp, '\0');
xio-socket.c:   strcpy(val, ifr.ifr_name);
xio-socket.c:   strcpy(namebuff, lr);
xio-socket.c:      strcpy(namebuff, lr);

xiotransfer.c:      strcpy(timestamp, ctime(&nowt));
xiotransfer.c:      strcpy(timestamp, ctime(&now));
xio-unix.c:   strcpy(namebuff, "ADDR");
There are 19 calls to the vulnerable function strcpy.  Of the 19, only 7 (those in bold) appear to be potentially exploitable; additional work would be required to figure out whether they actually are.  It's worth noting that just because a programmer replaces strcpy with strncpy, the code is not necessarily secure - the programmer might, for instance, mistake the size of the destination buffer.  These calls are especially concerning when you discover that a separate vulnerability (also patched in the latest version) resulted in a stack based buffer overflow. 
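And the fix isn't just swapping function names. A bounded copy has to be sized from the destination buffer and explicitly terminated, because strncpy does not null-terminate when the source fills the buffer. A minimal sketch of the safe pattern (the helper name is mine, not from the socat code):

```c
#include <string.h>

/* Copy src into dst, truncating as needed, always null-terminating.
   The size comes from the DESTINATION - the mistake to avoid is
   passing strlen(src) here, which makes the bound meaningless. */
static void safe_copy(char *dst, size_t dst_size, const char *src) {
    if (dst_size == 0) return;
    strncpy(dst, src, dst_size - 1);
    dst[dst_size - 1] = '\0';
}
```

Modern codebases often reach for snprintf or the BSD strlcpy for the same reason; any of these beats a raw strcpy.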

After looking at the code in xio-exec.c, we find that it is not vulnerable due to the way memory is allocated to the tmp variable.
      if ((tmp = Malloc(strlen(pargv[0])+2)) == NULL) {
         return STAT_RETRYLATER;
      }
      if (dash) {
         tmp[0] = '-';
         strcpy(tmp+1, pargv[0]);
      } else {
         strcpy(tmp, pargv[0]);
      }

My recommendations? Update socat now if you use it. If an attacker with nation-state level access inserted the non-prime number into the code, they could easily be eavesdropping.  But also consider whether the security posture of the overall socat code base concerns you.  The use of strcpy is just bad programming hygiene.  I'm not saying there's a vulnerability here, but the use of vulnerable functions is an overall indicator of security posture.  There's rarely fire without smoke.