Tuesday, June 25, 2013
Scope, fracking scope...
A colleague of mine called me tonight with what sounded like a simple question about penetration testing scope. But he's a smart guy. If there were a clear-cut answer, he wouldn't have solicited advice.
See, the problem had to do with scope. I intend to cover in a later post why setting boxes/services out of scope is generally stupid. But that aside, determining what's in scope can be a hard topic. It's confusing enough that the SANS SEC560 penetration testing course pays it quite a bit of attention (and rightfully so). A bad scoping document can make or break a penetration test when expectations aren't met (on either side).
Here's the problem. Many clients don't want you to test their web applications. Of course, no self-respecting attacker would ever use one of those to gain access to your network. And oh yeah, about that social engineering stuff... that's unfair. Don't trick the user. Finally, we already know our Acrobat Reader is out of date. Please don't run any client-side attacks. Without explicitly saying so, they've more or less restricted themselves to network services only.
Where do services end and web apps begin?
So here's my colleague's situation: he has exactly that scope - no social engineering, no client-side attacks, and no web apps. While scanning, he found a publicly available web server running an outdated version of PHP. The question is whether it's in scope to exploit. The purist in me wants to say "of course it's in scope." However, I'd expect a fight from the client if he exploits it.
CVE-2012-1823 is a great example. It offers remote code execution, but only under fairly specific circumstances. I know some purists out there will argue that CVE-2012-1823 was HUGE (and if you were exploited by it, I'm sure it feels that way). However, not every server running an affected version of PHP is actually exploitable, because the conditions needed to exploit it may simply not exist on that server. More to the point, it's the server configuration (not the web application) that controls whether the vulnerability is exploitable. This one seems like an easy win for the services column and clearly not a web application.
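To make the "it's the configuration" point concrete, here's a minimal detection sketch. It's not from anyone's actual engagement: the target URL is hypothetical, it assumes the Python requests library, and it only means anything where PHP is really running via php-cgi.

```python
import requests  # third-party: pip install requests

# Hypothetical target; substitute whatever the scan actually turned up.
url = "http://target.example/index.php"

# CVE-2012-1823: when PHP runs as php-cgi, a query string containing no "="
# can be handed to the binary as command-line switches, so "?-s" asks PHP to
# print the script's source instead of executing it.
resp = requests.get(url + "?-s", timeout=10)

if "<?php" in resp.text:
    print("php-cgi parsed the query string as arguments; likely vulnerable")
else:
    print("no source disclosure; this server probably isn't set up the vulnerable way")
```

If the probe comes back negative, the same outdated PHP build just sits there unexploitable, which is exactly why this finding belongs in the services column.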
CVE-2011-1938 isn't so clear-cut. This vulnerability relies on an overly long Unix socket name being passed to PHP's socket_connect() function. Now I feel that I can successfully argue that user-controlled values should never determine the name of a Unix socket. But let's suppose that your web app developer was huffing glue the night he wrote that part of your web application. It turns out you can exploit this web application, but only because PHP itself is vulnerable. We can agree that the web app developer probably didn't follow awesome coding practices, but his code itself isn't vulnerable. So where does this one land? Again, I say this is in scope for network services, but I expect some disagreement here. The waters have definitely gotten murky. I expect an embarrassed CISO would argue that this was a web application problem and whine his way out of a finding.
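For readers who want the anti-pattern spelled out, here's a schematic sketch. It's written in Python rather than PHP, and the handler is entirely hypothetical; the point is only to show what "user-controlled values determine the name of a Unix socket" looks like. In the CVE-2011-1938 case the overflow itself lives in PHP's native socket_connect() code, not in application code like this.

```python
import socket

def connect_to_backend(user_supplied_name: str) -> socket.socket:
    """Hypothetical handler: the client picks which backend socket we dial.

    This is the anti-pattern: attacker-controlled input becomes the Unix
    socket path. The application code isn't where the overflow happens
    (that's inside the platform's socket implementation), but it hands the
    attacker the steering wheel.
    """
    path = "/var/run/app/" + user_supplied_name  # no validation, no allow-list
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect(path)
    return sock
```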
Thoughts, dissenting opinions?
I don't have all the answers (although my six-year-old seems to think I do). If you think I've missed the boat here, let me know. If you have any war stories to share, feel free to tell me about those too.
Saturday, June 15, 2013
Are the Germans really breaking PGP and SSH?
I saw this on Twitter today, suggesting that German intelligence agencies are able to break SSH and/or PGP. The question asked whether the intelligence services were in a position to "decrypt, at least partially or evaluate" encrypted traffic, such as SSH or PGP. The answer was a conditional yes.
What does this mean? Are the Germans that awesome at math? Do they have ten-pound brains capable of performing 'impossible' decryption in a single bound? I think not.
Malware anyone?
Here's what I think: malware is to blame.
The Germans have specific network traffic they'd like to decrypt. They have three options:
1. They could hire a team of crack cryptanalysts and lock them in a room until they maybe do the impossible and come up with a decryption solution.
2. Harness more computing power than anyone knows exists (maybe quantum computing?) and brute force a solution.
3. Compromise one of the endpoints in the communication and install malware to steal the keys.
I'm a strong believer in Occam's razor. That belief forces me to conclude that the third option is at play here. I don't know what the budgets would be for the first two operations. The budget for option 3 could be dirt cheap. You can hire college CS majors to write the malware and rent an exploit kit to deliver it to the intended victims to steal the keys. Total budget: < $5,000.
Okay, so it's possible. Does malware really do that?
In the HBGary malware reverse engineering course I used to teach, we examined a piece of malware that did this. It dates back to at least 2009. The malware injected into the user's Outlook process and then made a Windows API call to dump the certificate store. These are the private keys that you import into Outlook so you can read encrypted email. On disk, they are always encrypted with your (undoubtedly strong) password. What's an attacker to do? Just dump them from Outlook's certificate store to disk. Outlook happily complies since it thinks the user is making the request. The API requires the output file to be protected with a password, but that's no problem. The attacker supplies a known password and then exfiltrates the certificates to a waiting server. Given the keys, decryption is trivial.
The Windows API the attackers used was PFXExportCertStoreEx. If you see this in malware, know that it's looking for your private keys. Yeah, it can be used to just dump your trusted certificates (without private keys), but what's the fun in that?
The PFXExportCertStoreEx function takes five arguments. The fifth argument contains a set of flags used to control the export behavior. In order for private keys to be exported, the aptly named EXPORT_PRIVATE_KEYS flag must be set. Of course you won't see flag names when reversing the malware, so look for the constant value 0x0004 being set in the fifth argument to PFXExportCertStoreEx.
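For the curious, here's roughly what that call pattern looks like when you reproduce it in a lab. This is my own sketch using Python's ctypes, not the malware's actual code; the store name, password, and output filename are all placeholder values. It's useful for understanding what to look for when reversing, and for verifying in a lab what the countermeasures below actually change.

```python
import ctypes
from ctypes import wintypes

crypt32 = ctypes.WinDLL("crypt32", use_last_error=True)

EXPORT_PRIVATE_KEYS = 0x0004  # the constant to watch for in the fifth argument

class CRYPT_DATA_BLOB(ctypes.Structure):
    _fields_ = [("cbData", wintypes.DWORD),
                ("pbData", ctypes.POINTER(ctypes.c_ubyte))]

crypt32.CertOpenSystemStoreW.restype = wintypes.HANDLE
crypt32.CertOpenSystemStoreW.argtypes = [wintypes.LPVOID, wintypes.LPCWSTR]
crypt32.PFXExportCertStoreEx.restype = wintypes.BOOL
crypt32.PFXExportCertStoreEx.argtypes = [wintypes.HANDLE,
                                         ctypes.POINTER(CRYPT_DATA_BLOB),
                                         wintypes.LPCWSTR, wintypes.LPVOID,
                                         wintypes.DWORD]

# Open the current user's personal ("MY") store, where Outlook keeps
# S/MIME certificates and their private keys.
store = crypt32.CertOpenSystemStoreW(None, "MY")

password = "password-the-attacker-already-knows"  # the API demands one; any value works
blob = CRYPT_DATA_BLOB()

# First call sizes the buffer; second call fills it with PKCS#12 (PFX) data.
crypt32.PFXExportCertStoreEx(store, ctypes.byref(blob), password, None,
                             EXPORT_PRIVATE_KEYS)
buf = (ctypes.c_ubyte * blob.cbData)()
blob.pbData = buf
crypt32.PFXExportCertStoreEx(store, ctypes.byref(blob), password, None,
                             EXPORT_PRIVATE_KEYS)

with open("dumped_store.pfx", "wb") as f:
    f.write(bytes(buf))
```

If the private keys were imported as non-exportable, the export won't hand those keys back, which is the whole point of the countermeasure discussed below.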
If I've been teaching a fairly uncomplicated malware sample with this functionality for more than four years, you can bet German intelligence knows about it too. I think this answer makes the most sense because it explains the conditional yes answer that was given. In other words, they can break everything they have the keys for (so can you) and nothing they don't.
Countermeasures:
Every time I introduce someone to certificate-stealing malware, I get jaw drops, face palms, and other sorts of 'why can they even do that' comments. A starting countermeasure is simple. When you import your private keys, mark them as non-exportable (offered as a check-the-box option). This prevents the keys from being exported using the API, and it's enough to stop most attackers who can only use the API. Note, however, that your keys are still present and could be dumped directly from memory or using some other trick.
The real countermeasure is to stop attackers from running malware on your machine in the first place. Of course that's easier said than done.
If this is the first time you've heard of certificate stealing malware, I'm glad I could educate you today. If you think I'm wrong, feel free to post a comment and tell me why.
Note: While I was writing this, @WeldPond re-posted the article link and got a comment back I obviously agree with:
.@WeldPond Password bruteforcing, trojans, key exfiltration and the rest of the usual tricks. No math breakthrough in sight.
— Frank Rieger (@frank_rieger) June 15, 2013
Friday, June 14, 2013
CISO fodder: Two lessons from the Snowden case
The Edward Snowden leak is epic in proportion. It's been all over the news. If you are like my wife, you're sick of seeing the 24x7 news coverage. I've talked to several corporate security professionals who look at the Snowden case and count the ways the situation doesn't apply to them. That's a mistake. There are two quick lessons to be learned from this situation. Now (while it's all over the news) is the time to review your security controls.
Why should you care?
Even if you don't process classified information, your intellectual property is definitely worth protecting. You don't want it to be stolen by a disgruntled employee or financially motivated insider. By studying this leak, you can apply some lessons learned to your own networks.
Lesson One: Auditing
First, examine auditing in your environment. You have to trust your users, and you have to trust your systems administrators even more. After all, they have the keys to the kingdom. But who watches the systems administrators? Many organizations I've had the pleasure of working with audit administrator activity on a periodic basis (often monthly). In the Snowden case, monthly auditing may have failed to detect the copying of documents to removable media (remember, he only worked there a month). Frequent monitoring is critical to detecting insider threats moving data out of the network.
Lesson Two: Separation of Duties
But what happens when you have an employee who shows no warning signs until the one time he walks off with a USB drive full of documents? Is real-time monitoring the answer? Yes and no. Real-time monitoring is a goal we should all strive to achieve for the sake of security. However, a complementary solution involves separation of duties. Restrict the number of domain/enterprise administrators (the number should be approaching zero). Require two-person integrity for certain operations. It's much easier for one person to steal something than for two people to collude to steal the same data.
Despite this being an obvious best practice, I don't see it done often. I see it done right even less frequently. Why don't we see more separation of duties? It's inconvenient. It also appears to increase IT costs.
Example Separation
One easy separation is to ensure that desktop admins can't bypass the DLP software (by restricting access to the DLP server). If the help desk requires this permission during off hours, then create a cadre of trusted agents who will serve as the authorizing party for DLP bypass (and audit the trusted agents). Note to the reader: here's that frequent auditing theme again.
Conclusion
There are many more examples like this. Yes, they require policies to be written and adhered to. Yes, they increase IT burden. Yes, occasionally the policies themselves will lengthen problem resolution times. With the wrong IT team, it can lead to serious finger pointing during outage resolution. But on the whole, your security is worth it. Come up with sensible policies and get buy-in from IT team members. Team members are far less likely to implement policies when they don't understand the problem.
Wednesday, June 12, 2013
Navy messaging system vulnerable to SQL injection, command injection, etc?
One of the people I follow on Twitter posted this breaking story: the Navy is migrating to a new messaging system that, get this, allows commanders to send and receive operational messages containing characters OTHER THAN ALL CAPS. When the new system goes live, the Navy will finally be able to send messages in mixed case.
Believe it or not, the Navy has "folks that have been around for a long time and are used to uppercase and they just prefer that it stay there because of the standardized look of it." Yep. That's all sorts of ignorant. However, the Navy realizes that the move to mixed case words and special characters "is imminent." The Navy acknowledged that the move is necessary because mixed case content "makes the readability better for the folks that are actually monitoring in a chat room or reading messages off a portal site."
Now there's the rub. I'd like to believe that the Navy will properly sanitize data in the core message handling system itself. I sincerely hope that the core of the system will be secure('ish). The story changes dramatically, however, when it comes to the personnel monitoring operational messages in chat rooms or on portal sites. I'm willing to bet that many of these portals are ancient and predate concerns about input validation. Until now it didn't matter: because the system never delivered mixed-case content (or special characters), there was never any possibility of command or SQL injection. That accidental protection is about to disappear.
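To illustrate the worry, here's a minimal sketch of the difference between the legacy pattern and the safe one. The schema and sqlite are stand-ins; I obviously have no idea what these portals actually run.

```python
import sqlite3  # stand-in for whatever database the portals really use

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (subject TEXT, body TEXT)")

def store_message_legacy(subject: str, body: str) -> None:
    # String concatenation: harmless when every message was A-Z, digits,
    # and spaces. A single apostrophe in a mixed-case message breaks the
    # query, and a crafted one rewrites it.
    conn.execute(f"INSERT INTO messages (subject, body) VALUES ('{subject}', '{body}')")

def store_message_safe(subject: str, body: str) -> None:
    # Parameterized query: the driver keeps apostrophes, semicolons, and the
    # rest of the newly allowed characters as data, not as SQL.
    conn.execute("INSERT INTO messages (subject, body) VALUES (?, ?)",
                 (subject, body))

store_message_safe("Port visit", "Don't forget the CO's brief at 0900.")
```

The fix has been well understood for over a decade; the question is whether anyone goes back and applies it to portals written before mixed case was even possible.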
If you work for the Navy, bring this up to your IT department. Don't trust that they have the stick on this one (they probably don't). If you live near a Navy base and ships start launching cruise missiles at WalMart two years from now (or some other such catastrophe), don't say I didn't warn you.
Seriously though, if you work for the Navy and have no idea how to evaluate your applications for secure handling of data, you can contact me for help at malwarejake <at> g-m-a-i-l dot c-o-m.
Monday, June 10, 2013
Open call for comments - re: TSA
Okay, admittedly this isn't directly related to information security. But if you are familiar with the ISC2 CBOK, you know that physical security is one of the domains. Good, so we've established this is at least loosely related, so what is it?
Explosives where the sun don't shine
According to a NY Times article, an attacker tried to kill an Afghan dignitary by hiding explosives in his rectum. That's right, his rectum. And the plan worked (sort of). He detonated the explosive and inflicted minor injuries on his target. The attacker of course was killed, but I'm told that's a predictable side effect of detonating explosives from inside your body.
Detection
According to reports, he was strip searched, but still managed to hide the goods and detonate the bomb. So strip searching didn't work. The L3 millimeter wave imaging machine focuses all its energy on a suspect's skin, so that won't help either. Even a friendly TSA groping won't help. This is truly a problem, and I don't mean to make light of it.
Silver Linings
The only good thing I can say about this is that at least this guy didn't bomb an airplane. Every time the TSA has a near miss, they institute some half-baked scheme to prevent it from happening again. First we had a shoe bomber, now we can't wear our shoes through a checkpoint. After the underwear bomber, plans to deploy imaging machines were accelerated. Threats of liquid explosives? Now we have plastic baggies and no drinks through the checkpoint (Ziploc and airport shops are thriving though). I shudder to think what TSA will do to combat the yet-to-be-seen rectum bomber.
Every time TSA does something stupid, I get to enjoy a laugh on SNL. Hopefully the censors don't prevent SNL from doing an awesome skit on the future safeguards, whatever they may be.
Safeguards?
If this trend catches on, I can't imagine what the outcome will be, but I'm sure it won't be good. The only good thing to note is that there's a limited amount of explosive one can hide using this method. Note, I'm no expert in the field of inter-body concealment, but physics being what it is... I seriously have no idea how TSA (or any security agency) is going to effectively handle this. My first thought comes back to profiling, and while that has proven effective in some cases (outrage aside), critics rightly point out that terrorists actively recruit those who don't fit the profile. So that can't be 100% of the solution.
I'm not the smartest guy on the planet. Chime in and let's figure out options for detection. Better we figure it out here than leave it to the TSA. And the clock is ticking, folks: some Washington bureaucrats are meeting right now to approve random body cavity searches at checkpoints. Believe me, we want to get out in front of this.
Saturday, June 8, 2013
Malware Trends and Signed Malware
I took the time this week to catch up on some reading after the Techno Security/Forensics Conference in Myrtle Beach. Among the backlog was the McAfee Threats Report: First Quarter 2013. Here are some thoughts:
A couple of notes
New rootkit sample detections remain flat quarter over quarter, but are down substantially from last year. I don't know what, if anything, this says about security posture. However, once we break the numbers down a little more, we can see that new samples of TDSS are down substantially while new samples of ZeroAccess are up. I think in this case that TDSS was a victim of its own success. You can only own so much of the open Internet before the community figures you out. I think the growth in ZeroAccess is largely filling the void left by TDSS.
On another front, new Fake AV samples are still running strong. This one never ceases to amaze me. I mean, who the heck falls for this stuff? Apparently enough people do to make it at least appear profitable to scammers. McAfee noted nearly a million new Fake AV samples this quarter alone.
Signed Malware
Signed malware is on the rise, according to McAfee. However, the growth rate was slower than in the last quarter of 2012. McAfee reported almost 800,000 new signed malware samples in the first quarter of 2013. Signed malware presents an interesting problem for an attacker. The digital signature is a specific forensic artifact that aids in attribution. No more looking for .pdb filenames embedded in your malware. Once researchers identify one piece of malware signed with a specific certificate, they can instantly determine whether any suspect piece of software was signed by the same attacker. The logic follows that if one piece of software signed with a given certificate is malicious, others signed with the same certificate will be too. This is WAY better than a Yara signature.
On the other hand, some people trust signed software more, especially if it is signed with a 'trusted' certificate. But what does trusted really mean anyway? How much scrutiny is required to get a code signing certificate? None, really. It's more about clearing the financial hurdles than about proving you are a legitimate company. In any case, software with a valid digital signature, trusted or not, may escape extra scrutiny during a post-mortem forensic investigation. Users may also miss some pop-up warnings when installing (probably inadvertently) properly signed malware, making it more desirable for malware authors.
So, it's a mixed bag. A year ago, I wouldn't have predicted a mass migration to signed malware for the reasons I mentioned above. But I would have been wrong. It turns out that malware authors have a huge financial motive to get this right, and they are moving to signed malware. This is an endorsement for the technique's perceived efficacy, if nothing else.
If you need a tool to examine digital signatures, you can use disitool on Linux. If you enjoy working on the Windows platform, signtool is a Microsoft-provided tool for verifying digital signatures.
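If you're triaging a pile of samples and just want to know which ones even carry a signature before reaching for signtool or disitool, a few lines of Python with the pefile library will do. This is my own quick sketch (the filename is a placeholder), and the presence of a signature blob says nothing about whether the signature is valid or the certificate trusted.

```python
import pefile  # third-party: pip install pefile

def has_authenticode_blob(path: str) -> bool:
    """Return True if the PE carries an Authenticode (security directory) blob."""
    pe = pefile.PE(path, fast_load=True)
    sec = pe.OPTIONAL_HEADER.DATA_DIRECTORY[
        pefile.DIRECTORY_ENTRY["IMAGE_DIRECTORY_ENTRY_SECURITY"]]
    # A non-zero offset and size means a signature blob is appended to the file.
    return sec.VirtualAddress != 0 and sec.Size != 0

print(has_authenticode_blob("suspect_sample.exe"))
```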
Parting Thoughts
Stay tuned for tips on dealing with digitally signed malware and other thoughts on malware in general.
Tuesday, June 4, 2013
NZ Business Friendly Forensics Ruling
On Friday, a New Zealand judge ruled for Kim Dotcom in the MegaUpload case. The judge ruled that the seizure was overly broad and directed that drives and drive clones be returned to the plaintiffs. Yep, that was three days ago and it is old news. So why am I writing a blog post about this?
Here's what I haven't seen publicized. The plaintiffs sought an order that "the Police pay, on an indemnity basis, the plaintiffs' costs of and incidental to this proceeding and any order thereon." The judge didn't rule on this portion of the case. Rather, it was left to a future ruling. The fact that it wasn't ruled on isn't a loss for the plaintiffs. I view it as a loss for law enforcement. If the request had no merit, the judge would have dismissed it out of hand or ruled as much in the 31-page ruling. I expect the state to ultimately lose on this and be forced to pay at least some of Dotcom's legal expenses and possibly other damages.
Let that sink in for just a minute. If you work for DoJ or law enforcement, let it sink in for more than a minute. I have no idea what the legal costs were here, but the judge found this warrant so egregious that law enforcement will likely end up paying the legal costs of the defense. Wow. I can't say what Kim Dotcom's defense team costs, but I'm betting he isn't using a public defender.
The bigger question on this has to do with the breadth of data obtained (seized) by the government. In the past, many have questioned the breadth of digital evidence seized by the government in the course of an investigation. In particular, the issue has to do with the seizure of personal digital media that does not contain evidence. Of course defendants lose access to this media during the course of an investigation, which may span years. Critics have correctly stated that law enforcement can and should clone or image suspect media and return the original to the defendant.
Traditionally, if defendants want access to data via cloned media, they have been forced to pay for the media and the production costs. Given that the US Government famously pays $100 for a toilet seat, I can't imagine what data production costs would be for a cloned drive (even when you provide the drive). Here's what's significant in this case: the judge ordered that law enforcement incur the cost of sorting seized data to locate actual evidence. The judge also ordered law enforcement to pay for any costs associated with cloning media so the originals could be returned to the defendant.
Considering that the MegaUpload seizure was the largest in history, this ruling is particularly significant. Unfortunately, this ruling was made in NZ, not the US, so we don't yet know what implications it will have for the US case against MegaUpload and the other seizures associated with this case. In any case, I think the ruling is a game changer. Unless it is overturned, this makes NZ an extremely business-friendly environment for data storage (in the event of litigation).
Saturday, June 1, 2013
(In)Security at DHS
So I've taken an interest in social media recently. To that end, I am very interested in the recently released list of keywords that DHS looks for when monitoring the Internet for terrorist and other "National Security" events. I downloaded the Analyst Desktop Binder (redacted) and started reading through it. In addition to search terms, I'm always interested in what information can be pulled out of these sorts of things.
Single sign on (but not the way we like):
DHS uses a single shared login for its people to access one of their systems. Now, this appears to be a video system that does something with DirecTV, but still... really? Oh, by the way, it uses HTTP instead of HTTPS. Of course, I can't pick on HTTP vs. HTTPS when they use a single shared account.
Training DHS users to accept untrusted SSL certificates:
There's a reason for CA chains. I've heard it has something to do with certificate trust. The US Government has been horrible about this through the years. When I was in the Army, I always had to ignore certificate errors to get any work done. Apparently the trend continues at DHS. Basically, if you want to use the system, you're told to accept the invalid certificate for all sessions. Man-in-the-middle attack, anyone? Seriously, this is just bad systems administration. Distribute the trusted root cert to all of your clients so they trust this certificate and won't be vulnerable to MITM attacks. Better yet, understand that you are TRAINING YOUR USERS to be vulnerable to future MITM attacks.
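For the programmatic equivalent, here's a small sketch of the right habit versus the one DHS is teaching. The hostname and certificate file are placeholders; the point is that you pin trust to the organization's own root instead of turning validation off.

```python
import ssl
import urllib.request

# Right habit: explicitly trust the internal root that actually signed the
# server certificate ("internal-root.pem" is a placeholder path).
ctx = ssl.create_default_context(cafile="internal-root.pem")

# Validation still runs, so a man-in-the-middle presenting a different
# certificate gets an exception instead of a user trained to click "accept".
with urllib.request.urlopen("https://portal.internal.example", context=ctx) as resp:
    print(resp.status)

# The wrong habit, the code equivalent of "accept for all sessions":
# ssl._create_unverified_context(), where every MITM is silently accepted.
```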
Save your password anyone?
So, as a penetration tester, I can't tell you how discouraging it is when someone forgets to save their passwords. I mean, I have to actually work to pivot throughout the network. Pen testers: never fear. DHS's custom applications offer the option to save the user's password.
Now, in all fairness to DHS, their manual does inform users that saving the password is a bad idea since it poses a potential security risk. I have two questions then:
- Why is the option still there when they know it's a security risk?
- Why restrict the warning to the manual when you could also put it on the GUI?
In case you were curious how this situation could get any dumber, take a look at the other highlighted text in the binder. You can assign a profile, something like "Mobile". Whoa. WTF? There's a custom DHS application that serves some sort of sensitive information (sensitive enough to require a login) and you're using it on a mobile device? Oh, and by the way, it saves your passwords too. Awesome. I'm sure the security wizards at DHS have ensured full disk encryption on those mobile devices though, so there's nothing to worry about.
Parting thoughts:
If you work for a government agency, don't forget that anything you write down is subject to a FOIA request (that's how this was released). The government can't hold back stuff that makes you look dumb, just stuff that compromises national security. Remember: if you'd rather the public (me) didn't blog about your bad procedures, don't write them down. Or better yet, get some better procedures... for all of us.
Jake Williams
@MalwareJake
Join me at the SANS DFIR Summit July 9-10 in Austin, TX.
Awesome speakers, great community, and DFIR NetWars!
Use promo code CSRgroup10 for 10% off your registration.