Monday, October 31, 2016

New Shadow Brokers dump - thoughts and implications

In case you missed it, the Shadow Brokers just released a list of keys that are reportedly used to connect to Equation Group compromised servers.  While this dump doesn't contain any exploits, the filenames do contain the IP addresses and hostnames of machines on the Internet.  Presumably these are all infected machines; some researchers report they have been used as staging servers for other attacks.

If your organization owns one of the servers in this dump, obviously you should be performing an incident response.  But the Shadow Brokers themselves recommend that you only perform dead box forensics, taking disk images in lieu of live response.  This quote was taken from the Shadow Brokers' latest post.
"To peoples is being owner of pitchimpair computers, don’t be looking for files, rootkit will self destruct. Be making cold forensic image. @GCHQ @Belgacom" 
If you're one of the organizations impacted, but you're not comfortable performing dead box forensics on Unix machines (most or all of these machines are Solaris, Linux, and HP-UX, according to those performing scans), talk to us at Rendition Infosec - we'd love to help.
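If you want to quickly triage whether any of your address space appears in the dump, a small script can help.  The sketch below assumes each filename embeds an IPv4 address; the exact naming scheme and the example paths are illustrative assumptions, not taken from the actual dump, so adjust the regex to match the real filenames:

```python
import ipaddress
import re

# Hypothetical filename format: the dump's filenames reportedly embed the IP
# address and hostname of each compromised host. Adjust to the real scheme.
IP_RE = re.compile(r"(\d{1,3}(?:\.\d{1,3}){3})")

def ips_in_dump(filenames):
    """Pull every valid IPv4 address out of the dump's filenames."""
    found = set()
    for name in filenames:
        for match in IP_RE.findall(name):
            try:
                found.add(ipaddress.ip_address(match))
            except ValueError:
                pass  # matched digits that aren't a valid address (e.g. 999.1.1.1)
    return found

def our_hits(dump_ips, our_networks):
    """Return the dump IPs that fall inside any of our netblocks."""
    nets = [ipaddress.ip_network(n) for n in our_networks]
    return sorted(ip for ip in dump_ips if any(ip in net for net in nets))
```

Feed `our_hits()` your organization's CIDR blocks; any hit is a machine that should go straight into your incident response process (cold imaging first, per the quote above).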


What's interesting is that we now have a list of victims of an apparent government organization (Equation Group).  To my knowledge, the NSA has never openly admitted these are their tools, but every major media outlet seems to be running with that narrative and we have no substantive evidence against it.

Cyber insurance coverage and nation state hacks
Let's assume that at least some of these organizations have cyber insurance.  There are some interesting questions here.  First, these hacks appear to be pretty old and many likely predate the purchase of cyber insurance.  How does cyber insurance handle pre-existing conditions?  Even if the policy covers a pre-existing hack, the bigger question I have involves the "Act of War" exclusion in many policies.

If we assume that Equation Group is a government organization (e.g. a state sponsored hacking group), does the compromise of the servers identified in the dump constitute an Act of War?  Since this is presumably only espionage and not attack, the answer is probably no.

But suppose an organization hacked by Equation Group via one of these compromised servers detects they are being hacked.  Suppose they hack back and cause damage to the organization who owns one of these redirection servers?  What then?  Does this constitute an Act of War?  And if the insurance company thinks a state sponsored hack is an Act of War, who has the burden of proof?

In short, I don't have the answers here.  But these are great questions to be asking your insurer.  I know I will be.

Sunday, October 30, 2016

How to throw an election - aka who has your data

I was thinking today about how much of my data is in the cloud with different service providers.  My email is hosted by Gmail.  My chats are on Gmail, Slack, Skype, etc.  So are my Twitter DMs.  All of this is "guaranteed" to be private by the service providers, but as we've seen with NSA's recent problems with Snowden and Martin, even the most secure environments have leaks.  I'm not that interesting, so it's unlikely any service provider insider would leak my personal data.  But what data might they be motivated to leak?

As I was considering the "service provider insider" idea, I thought about two distinct scenarios where an insider might be tempted to leak data.  I'm sure there are more, but the two I can think of off hand are someone shorting a stock and someone influencing an election.

Election tampering
There are radicals on both sides of the aisle who probably view job loss, financial penalties, and perhaps even jail time (let's be honest, it wouldn't be much) as a small price to pay for swinging a presidential election.  I'm sure that Trump and Clinton have both said things using service providers (whether Twitter DM, Gmail, Skype, etc.) that they'd like to forget.  I know I have.  If someone released non-public data from a candidate's communications, that could easily swing an election.  This is probably more damaging in smaller races, but depending on the data released, I could see a national election turning.

Stock shorting
If you mined non-public data (like Gmail does all the time), you might find information that leads you to believe that the price of a particular stock is going to fall.  In this case you can short the stock and reap the rewards.  But what if you short the stock and the stock price rises, possibly because the damaging information hasn't come to light?  Leak that data and cash in on that lower stock price!  Of course this is illegal, but that's not the point.

Where is your data?
If you've got the easy stuff checked off for infosec, step back for a moment and consider what damage your non-public data could do to your organization.  Insider threats are real.  Most mature infosec organizations understand insider threats and are looking for insiders in their organizations (with varying levels of success).  But are you considering the threat posed by an insider at a business partner, service provider, or other trusted party?

Closing thoughts
A good data inventory will help organizations prepare for insider threats, no matter where they occur.  Tabletop exercises are invaluable in evaluating your insider detection and containment strategies.  If you need help with a tabletop, please hit me up over at Rendition Infosec and we'll be happy to help.

Thursday, October 27, 2016

Playing with fire and bug disclosure

A teen in Arizona has been arrested for hacking iOS devices to dial 911 repeatedly.  In one case, a 911 call center was reportedly so overwhelmed with calls that it "almost shut down."  The original press release is here.

But is this arrest warranted?  The teen wanted to display his elite hacking skills on iOS and claims to have accidentally pushed "the wrong link" to Twitter, where more than 1800 people clicked on it, congesting 911 centers.  The hacker known as "Meet" said that he intended to deploy a "lesser annoying bug that only caused pop ups, dialing to make peoples devices freeze up and reboot."


I think this admission is where Meet is in trouble.  He admits he intended to commit a crime by causing denial of service to devices he does not own.  If his statements are taken at face value, he did not mean to disable the 911 system.  But the fact is that he disabled the 911 system in the commission of another crime, the attempted DoS.

The denial of service is obviously concerning, but it raises several important questions, such as:
  • If Apple's bug bounty were open and available to all researchers, would Meet have tried to market his exploit there instead of this "prank" gone bad?
  • Should Meet be punished for the damage caused or the intent?
  • In a case like this where a cyber attack has a potential impact on life safety, do special circumstances apply to sentencing since lives may have been endangered?
  • If legal frameworks don't exist to do this today, should they?
I'm intentionally ignoring the potential for cell phones to take down 911 call centers here.  Plenty of news outlets are already doing a good job of sensationalizing that aspect.  They don't need my help.  I'm much more interested in the difference between the suspect's impact and intent.  We talk about that in the SANS cyber threat intelligence (CTI) class.  As CTI analysts we have to focus on the adversary intent since that tells us much more than the impact observed, especially when those things don't cleanly match.

What about Meet's friend?
According to the press release Meet was notified of the bug by his friend.  Does this make his friend an accomplice?  It may depend on his friend's intent in sharing the bug with him.  I think the fact that the Apple bug bounty is a closed ecosystem is significant here.  It seems especially likely that the friend might reasonably expect Meet to cause some sort of mischief with the bug - it couldn't have been reported to Apple under the bug bounty.


Thought Exercise
Suppose you plan to rob a convenience store, and I agree to be your getaway driver.  If during the robbery, you kill the clerk, I can be charged with murder.  This is true even if:
  • I never fired a shot
  • I never held the gun
  • I didn't know you brought a gun to the robbery at all
Applying this same standard to the cyber domain, a question of liability looms large.  What liability and culpability does Meet's friend have in this case?  Smarter people than I am will eventually answer, but it's a question we should be thinking about now, before we have an issue.  I share threat and vulnerability data all the time.  What happens if someone does something malicious with my vulnerability data?  Do I share in the liability?

Monday, October 24, 2016

Vulnerabilities in St. Jude medical devices confirmed by independent 3rd party

An independent third party (Bishop Fox) has confirmed many of the claims made by MedSec and MuddyWaters about the vulnerabilities in St. Jude medical devices.  St. Jude filed a lawsuit after MuddyWaters released information about security issues in their devices and reportedly shorted St. Jude's stock.

The report (located here) details a number of inaccuracies in St. Jude's claims, which the company swore to the court (under penalty of perjury) were true to the best of its knowledge.  This is a bad place for St. Jude to be.  It appears that St. Jude is either:

  1. Incompetent at security, so much so they can't reproduce a problem even after being notified about it by a third party
  2. Lying to prop up its stock price

The latter is illegal, but the former is likely to be problematic in a civil case.  How will jurors trust any St. Jude security personnel who take the stand?  Their very credibility appears to be substantially compromised at this point.

There's a lesson here for organizations making "knee jerk" reactions to public statements about their security. When your security sucks and you're called on it, that's bad.  But when you have time to confirm reports of vulnerabilities and fail to do so, that makes you look REALLY bad.  In the interest of full disclosure, MedSec and MuddyWaters didn't provide proof of concept code to St. Jude, but did provide that (and additional details about their discoveries) to Bishop Fox, who confirmed their findings.  This is not illegal in any way, though some might find it unethical.

You should read the report, but I'll point out some of the more damning claims below.

Researchers used a bag of meat to simulate human tissue in their tests.  For obvious reasons, they didn't deliver shocks or test findings on real patients.  This was done to contradict St. Jude's claims that the attacks would not work in "real world" environments.


The problems with the excerpt above are obvious.  This cuts to the core of St. Jude's credibility in its ability to assess security concerns.  Obviously damning in any civil case.


This is another huge problem for St. Jude.  St. Jude says that access controls prevent anyone but a physician from accessing the device, but this statement is demonstrably false.


Again, more demonstrably false claims.  Bishop Fox researchers were able to replicate the attack described by MedSec.


By far, the most damning claim is that the key space used for "encryption" is only 24 bits long.  Earlier this year, the FTC settled with a dental software manufacturer for not using AES to protect patient data.  The dental software certainly isn't life saving. I'd say St. Jude has a problem on their hands.
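To put that 24-bit figure in perspective, here's a back-of-the-envelope sketch.  The trial-decryption rate is an assumption for illustration, not a measured figure:

```python
# A 24-bit key space contains only 2**24 possible keys -- small enough to
# enumerate exhaustively on any laptop.
KEY_BITS = 24
total_keys = 2 ** KEY_BITS          # 16,777,216 candidate keys

# Assuming a modest 1,000,000 trial decryptions per second (an illustrative
# assumption), a worst-case exhaustive search finishes in under 17 seconds.
worst_case_seconds = total_keys / 1_000_000   # ~16.8 seconds

# For contrast, AES-128 offers 2**128 keys: 2**104 (roughly 2e31) times
# more keys than the 24-bit scheme.
keyspace_ratio = 2 ** 128 // total_keys
```

In other words, this isn't a scheme an attacker has to weaken or cryptanalyze; they can simply try every key.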

The lessons about addressing cyber security problems in your products are obvious, particularly if you are a publicly traded company.  When confronted with a notification of a security issue, you should move to address it (and the public) quickly.  But don't let your desire for a quick release of information lead to releasing demonstrably flawed data to the public.  In my assessment, the Bishop Fox independent confirmation of MedSec's findings deals a lethal blow to St. Jude's civil case - and maybe even their future business.

Sunday, October 23, 2016

So you found a vuln in that botnet code. Now what?

The awesome @SteveD3 on Twitter (you should be following this guy) asked a great question recently.
Question: If a botnet has vulns in its code, such as overflows or path manipulation... what could the good guys do w/ these flaws?
This is really thought provoking, and although I can probably answer this in 140 characters, the answer really does deserve a little more attention than that.

The short answer is "not much, at least not legally."  If you work for a government agency with special authorities, you are special.  Stop reading here as this doesn't apply to you.

If you don't work for a government agency, you could disclose this vulnerability (e.g. sell it) to a government agency and profit from your hard work (assuming they are willing to pay or haven't already found it themselves).  This might be of some use for the government agency obtaining data.  After all, there's no reason to exploit targets yourself if someone else already did the hard part for you (and then installed vulnerable malware).

If the vulnerability is on the command and control server, that likely belongs to a third party.  If you exploit it (even in the name of doing good) you're breaking the law.  If you find a DoS vulnerability in the C2 software and exploit it, you're still breaking the law.  This is true whether the attacker directly leased a VPS or compromised some poor website running Drupal (yeah, I'm kicking the slow kid here, but I can't help it).  In any case, you are exceeding your authorized access on the C2 server and that's a CFAA violation.

A SANS student posed a similar question to me a while back.  The question assumed that he found a vulnerability in botnet client code that would allow him to take control of infected nodes.  He hypothesized that even if taking control of the compromised machines was de facto illegal, nobody would care if he did it just to patch the machine itself and remove the malware.  But this brings up several important questions of its own, such as:
  1. What if the system you patch is a honeypot that is being used to track commands sent from the C2 server? In this case, you've exceeded your authority and interfered with the collection operations of an organization, potentially causing material harm.
  2. What if the system should not be patched due to third party software running on the machine? What if this is an industrial systems controller and patching causes physical harm?
  3. What if everything should have worked, but something goes awry due to unforeseen circumstances (you know, because Windows).  What then?
There are just too many unknowns to really do anything meaningful with a vulnerability like this (at least while wearing a white hat and carrying liability insurance).  As we tell our clients at Rendition Infosec, the easiest way to avoid unforeseen consequences of hacking back is just not to engage in it in the first place.  

Friday, October 21, 2016

Martin classified data leaks - pretrial court documents

In response to a pre-trial detention hearing, United States attorneys filed a motion arguing that Harold Martin III should not be released from detention.  Based on this document, we know a lot more about the strength of the government's case.

Misrepresentations or misinterpretations?
I've seen some pundits with poor reading comprehension misinterpret two things in this section.  First, several pundits said that Martin had "dozens of computers."  That's not what the statement says.


The government says that they seized "dozens of computers and other digital storage devices" which is far different.  The wording may be intentionally designed to make the judge believe that Martin had dozens of computers.  But this isn't surprising to me.  Martin is a practicing infosec professional.  Take one trip to BlackHat or RSA and you can bring back a dozen or more USB devices from vendor booths.  Assuming that at some time in his career Martin went to a security conference (or many such conferences), he would likely have dozens of digital devices.

The other misinterpretation I'm seeing a lot in the media is that Martin stole 50TB of classified data.  But the government never makes this claim.  They only claim that they recovered 50TB of storage devices from his residence.  They never discuss (and honestly probably do not know) what percentage of the storage media contains classified data.

Handwritten notes
This next excerpt is particularly damning.  A document recovered from Martin's residence contains handwritten notes, seemingly intended to explain the document to those who lack the context Martin has.


If the government is to be taken at face value, it appears that Martin was planning to pass this document to a third party. Whether Martin intended to pass the printed document to a reporter or a foreign government, the allegation is highly disturbing.

Are we still doing this "Need to know" thing?
This excerpt suggests that Martin had documents in his possession for which he had no need to know.  In a post-Snowden NSA, this seems a little cavalier - how did Martin come into possession of this very sensitive, need-to-know document?


Documents stored openly in the back of Martin's car
This is huge - it's pretty amazing to think about classified documents stored openly in Martin's home and/or in the back of his car.


Later in the document, the government points out that Martin's residence lacks a garage.  This means his car was parked out in the open at night, probably with classified storage media inside.  The government states that's how they found it when they served the search warrant.

Classified theft may have begun in 1996, but the government doesn't claim that
The documents state that Martin had access to classified information starting in 1996.  However, they stop short of saying when he first started stealing data.  Many media outlets have talked about how he has been stealing data for 20 years.


Read the filing carefully however and you will see that there is no mention that Martin stole data for 20 years, only that he's had a security clearance that long.

Disgruntled? Yeah, I'd say so...
In 2007, Martin apparently drafted a letter to send to his coworkers.  It appears that he was a little vindictive and disgruntled.  Feeling marginalized (and wanting to feel important) is one of the reasons people commit treason.  His coworkers' failure to let Martin "help them" may have been a catalyst.


And of course "They" are inside the perimeter.  If the government's claims are to be believed, nobody knew this better than Martin himself.


That's all folks
I could keep writing, but this is probably a good place to stop.  If you're really interested in more, you should read the source document.

Prosecutorial deception in the Harold Martin case

The government has released its arguments for pretrial confinement for Harold Martin.  Most of the arguments in the document are sound, but one section is nothing short of deceitful.  Unfortunately, it makes me question other elements of the government's case.  Even if the rest of the case is solid (and it appears to be), the prosecutor here should be disbarred for attempting to deceive the judge by misrepresenting the facts.


Martin engaged in encrypted communications. So what? If you are reading this blog, chances are you are also "engaging in encrypted communications."  Martin had remote data storage accounts.  Again, so what?  This statement could be true of anyone with a Gmail address or an Office 365 account.  I'm not impressed.  Martin had encrypted communication and cloud storage apps installed on his mobile device.  Cloud storage apps?  If he's an iOS user, that's iCloud.  If he's an Android user, he has Google Drive installed by default.  Encrypted communication apps could refer to iMessage, Gmail chat, or even Skype.

Taken at face value, the government's case against Martin seems strong.  So why then does the government reduce its arguments to sweeping generalities?  I can see three probable explanations.

  1. The prosecutor doesn't know any better.
  2. The prosecutor thinks the judge doesn't know any better.
  3. The DoJ is setting precedent so these circumstances can be used later to get pretrial confinement for a suspect.

The pessimist in me thinks it's probably the last of these.  I hope the EFF and other civil liberties groups weigh in on this; the precedent is highly disturbing.  Trying to obtain pretrial confinement by arguing that the defaults on a user's phone are somehow malicious is a gross misrepresentation.  I hope the government amends its filing to more clearly represent the facts.

Saturday, October 15, 2016

CIA cyber-counterstrike probably not a leak after all

Recently, NBC News ran a story that CIA is planning a cyber counterstrike against Russia to retaliate for interfering with the US elections.  Initially, I saw people taking to Twitter talking about how "loose lips sink ships" and other such cliches.  But is this a leak at all?  I think it's at least worth considering the other possibilities here.

Theory #1 - this is total misinformation
While CIA is an intelligence organization, their recent history with leaks has been a little better than NSA's.  It therefore feels less likely that this was leaked by CIA sources directly.  Also, as WikiLeaks pointed out in a tweet, CIA is probably not the right organization to carry out such a mission.


Now I don't usually cite WikiLeaks as a reliable source, but I think they are probably right here.  If this isn't a job for US Cyber Command, what would be?

Theory #2 - This is an exquisitely planned information operation
If you're not familiar with military deception operations, now would be a great time to fix that.  We're very likely to see a larger number of these in the future as cyber conflicts between nation states become the norm.

This "leak" feels to me like a deception operation designed to undermine the Russian people's confidence in their government.  That's the only reason I can think of to mention the CIA in the leak vs. the NSA or US Cyber Command.  The Russian people know of the CIA just like we know of the KGB in the US.  NSA and Cyber Command just aren't household names there.

How much will this "leak" impact Russian government information security operations?
While the leak may increase awareness of cyber attacks at the rank and file level, it isn't likely to change the Russian government's plans or information security posture in any way.  Whether or not the Russians are responsible for the DNC hack, now that they've been called out by US intelligence agencies, they are doubtless preparing to defend against a retaliatory cyber attack.  Saying "we're going to hack you" is completely unnecessary to prompt the Russian government to prepare for such an attack.

Increasing confidence of US citizens
If this is an information operation and not a leak, it does much to pacify the average US citizen who otherwise sees the Russian cyber attacks as being largely ignored.  At least now they can point to this operation and feel like Russia hasn't "gotten away with something."

Bolstering recruiting
Whether this is a true leak or an information operation, it almost certainly benefits the US intelligence community's ability to recruit future cyber operators.  "I can't tell you this was us or that you'll have the chance to stick it to Russia, but did you see that story about the US retaliating?"

What do you think?
I'd love to know what you think about this too.  Please feel free to continue the discussion on Twitter (I'm @MalwareJake) or post your thoughts in the comments section.