Ramblings about security, rants about insecurity, occasional notes about reverse engineering, and of course, musings about malware. What more could you ask for?
Wednesday, November 25, 2015
You've probably heard about the TOR hidden service unmasking that the FBI used to bring down Silk Road and other illegal services hiding on TOR. There was later some speculation that the FBI paid Carnegie Mellon University (CMU), which operates a Federally Funded Research and Development Center (FFRDC), more than $1 million for research into unmasking TOR hidden services. Needless to say, folks over at the TOR project are really mad about this. The assertion also made CMU look bad. Both the FBI and CMU have denied that CMU was paid specifically to decloak TOR users.
You may have also heard that CMU released a statement sort of hinting that they weren't specifically paid for the TOR research. They never really come out and say it; they only suggest that's the case. Whether you believe this or not doesn't really matter. The implication is chilling.
It's probably also worth noting that CMU researchers submitted a talk to Black Hat but withdrew their presentation. That withdrawal was probably not the result of a subpoena. FFRDCs really do depend on their federal funding, and giving away secrets the FBI is currently using is probably a bad way to keep that funding coming.
If CMU is truly alleging that it turned over the research in response to a subpoena, and that it has no special relationship with the FBI/DoJ, then the effects for infosec researchers are potentially chilling. Not everyone wants to turn over their research to the FBI. For some, the reasons are financial - they may plan to turn their research into a product for sale. Others may simply not wish to assist law enforcement with a particular case (or any case).
As a security researcher, I for one would feel more comfortable if the FBI disclosed what help they received from CMU and how they received it. I actually don't care if the FBI paid CMU for help in de-cloaking TOR users (as some have suggested). I just want to know what my exposure as a security researcher is if the FBI wants my research to help with a case. Is this something they can just take from me if it will help them solve a case? If a subpoena truly was required for CMU, it is somewhat unlikely that CMU fought it very hard given its FFRDC ties. Many companies that don't have to worry about receiving FFRDC funding may have a different view on fighting a subpoena.
In the end, I don't care what the arrangement between CMU and the FBI was. But as a security researcher, I would like to know what my liability and exposure are if the FBI wants my research. Because quite frankly, I'd be comforted to know that at least CMU was paid. The idea that the FBI can just subpoena my research because it's of use in a criminal investigation is a little terrifying.
Friday, November 20, 2015
Encryption backdoors are a fool's errand
It's no secret that the government wants encryption backdoors. But will they really help in the fight against terror? A convincing argument can be made that implementing encryption backdoors will actually hurt intelligence operations.
Today it seems reasonable to assume that any well-funded intelligence organization has some practical attacks against modern mainstream cryptography. What they likely don't have is unlimited time and resources to reverse engineer thousands of new custom cryptographic implementations. But if we backdoor cryptography, that's exactly what we'll get.
More custom crypto - is that a good thing?
The very people who lawmakers want to spy on with encryption backdoors will turn to custom solutions. Cryptographers may initially say this is good. The road is littered with the bodies of the many who have tried and failed to implement secure custom encryption systems. In tech we even have the phrase "don't roll your own crypto."
At Rendition Infosec, custom crypto implementations (and poor configurations of existing implementations) are always flagged as findings in any software assessment. In the current landscape, where good crypto is unbreakable by anyone other than dedicated intelligence organizations (and even then, only with significant effort), rolling your own crypto is almost always a mistake.
Almost always a mistake... That is unless you KNOW that your adversary can circumvent existing solutions. In that case, rolling your own crypto is the only sane thing to do. If we change the playing field, we must expect the players to adapt.
But will we get better intelligence?
Placing backdoors in encryption will lead to some intelligence gains. But they will come from low-level assets of little intel value who practice bad OPSEC. Those with good OPSEC (the really dangerous guys) will turn to custom solutions developed in house. Terrorists will employ mathematicians and computer scientists who share their views to help develop custom crypto solutions. There's some evidence that Al Qaeda is already doing this.
While custom implementations may have structural weaknesses, they won't be standards based. Each solution will require dedicated reverse engineering, and that is hard work. Remember, we aren't talking about standards based crypto here. EVERYTHING has to be reverse engineered from the ground up, totally black box. If they roll out new algorithms periodically, all the worse for the intelligence agencies trying to monitor terrorist communications.
What then? Just exploit the endpoint so we don't need to reverse engineer the custom crypto? Yes, that sounds like a good plan... Or does it? If we can reliably attack terrorist endpoints today, then why worry about backdooring encryption at all?
Do something, do anything
In the wake of any terrorist attack, it's easy to feel like something must be done. We run on emotion, and doing anything feels better than doing nothing. But remember, that's how we got warrantless bulk collection programs in the first place. Perhaps "something" should be better sharing of intelligence data. According to open source reporting, several of the Paris attackers were on US intelligence watch lists.
Closing Thoughts
I'm not a terrorism expert. But I am a computer security expert. I know that putting backdoors in encryption won't serve the original stated goal of monitoring only terrorist organizations. That's because terrorist organizations won't use the backdoored cryptography. They'll know better. But after the expense of the backdoor projects, intelligence agencies will be forced to show results. They'll use the capability for parallel construction, much like the DEA has been revealed to have done with other data. In the end, the backdoored crypto will only make all of our communications weaker.
Wednesday, November 18, 2015
Kerberos silver tickets - unique attacker persistence
Last year there was a lot of talk about Golden Tickets in Kerberos allowing attackers to regain domain administrator access if the KRBTGT account password in the domain wasn't changed. There was less fanfare about silver tickets since they don't offer unrestricted domain access, only unrestricted access to particular service accounts on a specific machine.
However, at Rendition Infosec we have recently observed an attacker using Kerberos silver tickets for long-term access. The silver ticket relies on compromising the credentials for a particular service account (or more correctly, the hash for that service account). In the case we observed, the attackers used the computer account password hash to create tickets with admin rights on the local machine. This means that even when local admin passwords are changed, the attacker can still access the machine as admin by using the machine account password hash.
Doesn't the machine password change?
By default, the machine password should change every 30 days. However, machine password changes are recommendations and are not enforced by the domain. In other words, even if domain policy says to change the password every 30 days, a machine can go for years without its password changing, with no impact to the machine's operation. Also, it's up to the machine itself to change its own password.
Gimme an IOC
On the machines where we observed this behavior, we saw the attackers updating a registry value to ensure that the machine password would never be updated. At this time, we believe this to be a reliable indicator of compromise and have not observed it on machines that were not under active attack. If you have seen this set elsewhere, please comment on the post or touch base out of band (jake at Rendition Infosec).
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters
DisablePasswordChange = 1

Hopefully you find this useful in hunting your favorite neighborhood APT.
Tuesday, November 17, 2015
Auditing your Linux libraries
TL;DR - just because you patched libpng doesn't mean that your server isn't still vulnerable
Last week I posted about the need to audit the libraries that are used by your software. In the post last week, we talked about the fact that Dropbox was using a version of OpenSSL with numerous public vulnerabilities.
The post could not have been much more timely. It turns out that vulnerabilities were recently discovered in libpng. In my minimal searching, I’ve been unable to locate any Windows software using libpng, but I’m betting it’s out there somewhere. However, libpng is used in numerous open source projects on the Linux platform. That’s right, it’s time again to check your libraries.
But how do you know on Linux whether your application uses libpng? That's where the ldd tool can come in tremendously handy.
root@kali:~# ldd -r /usr/bin/yersinia 2>/dev/null | grep libpng
        libpng12.so.0 => /lib/i386-linux-gnu/libpng12.so.0 (0xb6875000)

Rendition Infosec has put together a simple bash script to help you check your system for binaries that dynamically load libpng. Certainly you could put together a MUCH more robust script, but this will suffice for most users.
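That script isn't reproduced in this post, but a minimal sketch of the same idea looks something like the following. The directory list and the 'libpng' pattern are assumptions - tune both for your environment.

#!/bin/bash
# Minimal sketch: list dynamically linked executables that pull in libpng.
for dir in /bin /sbin /usr/bin /usr/sbin /usr/local/bin; do
  find "$dir" -type f 2>/dev/null | while read -r bin; do
    # Skip anything that isn't a dynamically linked ELF binary
    file "$bin" | grep -q 'dynamically linked' || continue
    if ldd -r "$bin" 2>/dev/null | grep -q 'libpng'; then
      echo "$bin links against libpng"
    fi
  done
done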
You need not patch all of these applications, simply updating the library will suffice in most cases. But is the system secure then? Probably not…
There are two additional cases you should test for. Unlike Windows, Linux supports deleting library files that are currently in use. Also unlike Windows, your Linux server has probably been up for months or years. This means that you might have a library that has been updated on the system (it might even be deleted) but it continues to be loaded into a long running application (like a system service).
To check for this case, examine /proc/<pid>/map_files. It is important to note that even if you remove the old library from the system, the library isn’t really gone until there are no dangling references to it. This is true even if you can’t see it on the filesystem. Examining /proc/ will help you determine which processes must be restarted to take advantage of the new library. Rendition Infosec has put together a quick bash script to help you identify processes currently running that need to be restarted to take advantage of the library upgrade.
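Again, the Rendition script isn't included here, but a rough sketch of the idea is below. It reads /proc/<pid>/maps (which carries the same information as map_files and is a bit easier to parse); run it as root so every process is visible, and note that the 'libpng' pattern is an assumption.

#!/bin/bash
# Minimal sketch: find running processes that still map libpng (possibly an old or deleted copy).
# Run as root; mappings of deleted files show '(deleted)' in the pathname column.
for maps in /proc/[0-9]*/maps; do
  pid=${maps#/proc/}; pid=${pid%/maps}
  if grep -q 'libpng' "$maps" 2>/dev/null; then
    comm=$(cat "/proc/$pid/comm" 2>/dev/null)
    echo "PID $pid ($comm) still maps libpng - restart it to pick up the new library:"
    grep 'libpng' "$maps" | awk '{print "  " $6, $7}' | sort -u
  fi
done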
The other case to consider is that libpng might be statically linked into an executable. This is unfortunately more difficult to discover. Running strings on the binary and checking for libpng (and libpng related strings) is one option for checking. Another is checking configure scripts for your software (if you build from source). Given the flexibility of dynamic shared libraries, it is increasingly unusual to see statically compiled programs.
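For the statically linked case, a quick (and admittedly noisy) first pass is simply to grep the binary for libpng's version banner or well-known symbol names. The binary path below is illustrative.

# Crude check for statically linked libpng; false positives and negatives are possible (e.g. stripped binaries)
strings /usr/local/bin/somebinary | grep -iE 'libpng|png_create_read_struct'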
Hopefully this post helps you keep your job (or look like a rock star) when the next Linux library vulnerability is released.
Sunday, November 15, 2015
How securely will presidential candidates handle your data?
If you haven't been living under a rock recently, you know that the presidential candidates can't talk enough about cyber security and how committed they are to it. But mostly they talk about making big corporations accountable for breaches, securing health care data, and making China/Iran/Whoever pay for stealing US intellectual property. Then they usually get some word in about the proliferation of cyber weapons and how we'll have to "do something" about that.
All of that is great - but how secure are the candidates themselves today? Johnathan Lampe at the Infosec Institute has already covered how secure the candidates' websites are, and the results are pretty abysmal. Most candidates can't even get their own campaigns, free of US government bureaucracy, to properly secure their own websites.
But when it comes to insecurely handling the data of the people that work for you (or want to), that's where I draw the line. If a candidate can't get it right inside their own campaign, I seriously doubt their ability to secure our data once they are elected. You can argue that it's not the candidate's job to secure the data. But that's a hollow argument. The judgment they exercise in selecting staff today is the same judgment they'll exercise in selecting appointees after they are elected.
To that end, I note that Hillary Clinton's campaign has done a terrible job with its intern application process, using an HTTP-only page to have intern applicants upload potentially sensitive data.
I'm sure that in response to this blog post, site changes will be made so I've chosen to document a screenshot here.
I do a lot of interviews for tech companies looking to recruit top tier cyber talent. I can assert that candidates, particularly those right out of college, put an amazing amount of private information in their resumes. Sometimes this includes date of birth, social security number, home address, etc. In essence, more than enough data to steal the applicant's identity. Unfortunately, the people most likely (in my experience) to upload their sensitive data are the very people Clinton is trying to attract - college students. They are also the most likely to upload their data over public, unencrypted wifi, since many lack dedicated internet access otherwise.
I know there are those who will feel like I have an axe to grind with this post - I do not. When shenanigans like these are observed, it is our duty to call them out regardless of political leanings. Furthermore, in Clinton's specific case, this is another example of poor judgment surrounding IT security (her private email server with RDP exposed to the Internet being the prime example).
I hope Clinton's campaign issues apologies to all those who applied for internships and takes steps to resolve this mis-handling of personal data.
Tuesday, November 10, 2015
The libaries you use can make you vulnerable too
TL;DR - I'm not saying Dropbox is vulnerable to denial of service based on its use of out of date OpenSSL libraries. We didn't verify that and have no plans to do so. This is more a cautionary tale for businesses who haven't thought of auditing their apps (or the libraries they depend on).
Audit your apps
So you've audited your application and found no vulnerabilities. And that's good - you're in the top n% (where n < 10) of security programs out there if you are doing real security auditing on your applications. But it turns out that it's about more than just the code your developers write. It's the libraries your code uses too.
Vulnerable History
There's a rich history of applications being vulnerable through the use of vulnerable libraries. For example, with CVE-2014-8485, all the reporting seemed to say that the strings program was vulnerable. But that wasn't really the case. Rather, it was libbfd that actually had the vulnerability - strings was simply using libbfd.
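You can confirm that dependency for yourself on most Linux systems; the exact libbfd library name and version in the output will vary by distribution.

# strings (from binutils) is typically dynamically linked against libbfd
ldd $(which strings) | grep -i bfd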
Another obvious example is Heartbleed. Although the vulnerability was in OpenSSL, applications that used a vulnerable version of OpenSSL (compiled with DTLS heartbeats enabled) were also vulnerable.
Heartbleed has been patched - no more OpenSSL patches?
Unfortunately, Heartbleed wasn't the last vulnerability associated with OpenSSL. There have been a number of OpenSSL vulnerabilities in 2015 alone. Of those, CVE-2015-0209 is probably the most concerning since it is a use after free. So far, there is no public working code execution exploit for it (and a working exploit might not be possible). But that's not to say that the recent OpenSSL vulnerabilities don't matter. Several are Denial of Service (DoS) vulnerabilities that abuse a null pointer dereference. This means that software using a vulnerable version of OpenSSL could be crashed if one of these vulnerabilities were exploited.
Poor Dropbox...
Why am I talking about libraries now? I came across this last week looking at an implementation of Dropbox for a client of Rendition Infosec. As you may know, Dropbox and I have a rich, sordid history with Dropsmack.
While investigating the current version of Dropbox (3.10.11), we noticed that one of the support libraries (Qt5Network.dll, version 5.4.2.0) is linked with OpenSSL 0.9.8ze. This version of OpenSSL was released in January 2015 and is two versions out of date. We didn't try to exploit any of the null pointer dereference vulnerabilities, but feel free to jump on that if you're interested.
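If you want to check which OpenSSL version a library like this was built against, a crude but usually effective approach is to look for the embedded OpenSSL version banner (use Sysinternals strings on Windows or copy the DLL to a Linux analysis box; the file path here is illustrative).

# OpenSSL embeds a version banner such as 'OpenSSL 0.9.8ze 15 Jan 2015' in code built from it
strings Qt5Network.dll | grep -E 'OpenSSL [0-9]+\.[0-9]+\.[0-9]+[a-z]*'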
So in this case, it looks like Dropbox developers rely on Qt5Network.dll, which in turn was linked with a vulnerable version of OpenSSL. It's a difficult environment indeed for developers who desire to code securely. Not only do you have to ensure that the libraries you use are up to date, but you also have to ensure that all the libraries they use are also up to date (recursively). This is why having a strong application security program in house (or via trusted partners) is so critical. I would dare say that Dropbox has more application security resources and wider distribution than most organizations who publish their own software for internal or external use. If it can happen to a major software vendor, it can happen to you too.
Remember to audit your apps (and the libraries they use) regularly.
Friday, November 6, 2015
Outsource your classified programming projects to... Russia.
The federal contractor CSC took a contract from the DoD's Defense Information Systems Agency (DISA) to develop code for classified systems. CSC then subcontracted portions of the coding to NetCracker Technology. Subcontracting, especially in federal contracts, is completely legitimate - nothing wrong with that. But in 2011, a NetCracker employee named John Kingsley filed a whistleblower lawsuit against NetCracker for using Russian (yes, you read that right, Russian) programmers to write code for the classified DISA project. Kingsley correctly noted that this was contrary to the contract guidelines, which required CSC to staff the contract with properly cleared US employees.
Although it took four years, the lawsuit was eventually settled last month for a total of $12.75 million. CSC agreed to pay $1.35 million and NetCracker will pay $11.4 million. The settlement does not prohibit the Justice Department from filing criminal charges in the matter. There are two potential criminal charges, one for contract fraud and one for allowing foreign entities access to classified data. Kingsley reports that the programs written by the Russians contained "numerous viruses" though DISA did not confirm this, citing national security concerns.
Perhaps more concerning than viruses or other malware in the Russian code is that intentional weaknesses could have been baked into the code. Subtle vulnerabilities would be very difficult to detect and would require millions of dollars in auditing to discover. It is unclear at this time how much damage DISA and other DoD agencies suffered from the malware running on their systems. It is also not clear how much of the Russian code ever made it into production and how much is still there.
For reporting the contracting violations, Kingsley reportedly received $2.4 million. It's not often that doing the right thing pays such huge dividends, but in this case it paid off in a big way for Kingsley.
The infosec hooks here are clear. When outsourcing your programming, do you really know who you are outsourcing to? What is their security posture? What are your subcontracting agreements? How are they enforced? Do you perform regular code audits to ensure that a contractor is providing secure code that is free of malware and vulnerabilities? Remember that a contractor may be providing malware without willful participation. In this case, CSC claims to be as much a victim as DISA due to the actions taken by NetCracker.
The bottom line is that if Russians can get viruses into DISA's code (even while getting paid to do it), you had better believe that your adversary can too. That makes the overall situation look pretty bleak. What can you do then to ensure your organization is not a victim? At Rendition Infosec, we recommend the following to clients:
- Work with your legal team to get appropriate language in your contracts to govern what can and can't be subcontracted.
- Maintain your right to audit subcontracting arrangements (if allowed).
- Audit portions of the code (at a minimum) produced under any external contract. Better yet, write in contract provisions that a third party will audit the code and that vulnerabilities discovered will be fixed at the contractor's cost. If the contractor is forced to pay for security re-tests of the code, they have a built-in motivation to get it right the first time.
- Any audit of code contributed by a third party (and even code developed in house) should use more than automated scanners. Automated scanners are a good start, but they are just that - a start. Manual testing often unearths subtle vulnerabilities (especially elevation of privilege) not found with any automated scanner.
Thursday, November 5, 2015
Rogue Access Points - are they a priority in your environment?
Last year, I was working with a customer who had architected their network environment without considering security. They suffered a breach, lost a ton of intellectual property (thankfully none of it was regulated data), and were committed to moving forward with a more secure architecture. This customer has a relatively large number of wireless access points. One of their managers suggested that a quick win would be to locate and disable rogue wireless access points, citing a Security+ class he had taken years prior. But other managers wondered whether this was a good use of limited resources, considering the abysmal state of their overall security.
My take on this is that rogue access point detection is something that should only be done after other, higher priority items are taken care of. Are rogue access points a big deal? Sure. But are they a bigger deal than 7 character passwords, no account lockout, or access control policies that allow anyone to install any software not explicitly denied by antivirus? I think not.
As you probably would guess, I'm a big believer in the SANS top 20 critical security controls. The best thing about the SANS 20 CSCs is that they are actually ordered to help you figure out what to do first. What controls give you the best bang for your buck? No need to guess when you are using the SANS Top 20.
So where in the 20 CSCs do rogue access points fall? They fall well behind CSCs #1 and #2 (hardware and software inventory, respectively). If you don't have basic blocking and tackling down, do you really need to consider rogue access points? Wireless controls are #15 on the list of CSCs. Open wireless or WEP (gasp!) is a big deal. But if you've implemented WPA-PSK, how much should you worry about rogue APs?
Yes, rogue APs are a threat. But every one of us is carrying a rogue AP in our pocket these days (your smartphone). It's hard to track all of the access points that pop up. This particular client has office space in many multi-tenant areas, so we have to deal with access points in the area that the client doesn't control. This makes it really hard to detect the rogues. Not impossible, mind you. Just difficult.
If you are in an area where multi-tenancy isn't an issue, you'll still need good policies to prevent the use of the wifi hotspots that all users have on their phones. With those polluting the spectrum, separating "authorized" hotspots from rogue access points can be a real challenge. At Rendition Infosec, we advise clients that hunting rogue access points belongs very low on the list of security priorities. We advise that clients only undertake such an endeavor once they have achieved at least baseline success with more important security controls.
One important distinction that I'll make is that evil twin access points will always be a threat. These are access points with the same names as (or in some cases very similar names to) the legitimate access points in the environment. Periodic surveys of the wireless environment will help detect these. A good WIDS system can help with this as well. When discovered, evil twins should be physically located and investigated.
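A proper WIDS is the right long-term answer here, but even a periodic scripted survey can catch the obvious cases. Below is a minimal sketch of that idea; the interface name and the authorized SSID list are assumptions, and SSIDs containing spaces would need slightly more careful handling.

#!/bin/bash
# Minimal sketch: survey nearby SSIDs and flag any that match our authorized network names.
# Run as root; matches should then have their BSSIDs verified against known-good access points.
IFACE=wlan0                            # assumed interface name
AUTHORIZED="CorpWiFi CorpWiFi-Guest"   # assumed authorized SSIDs (no spaces in names)
iw dev "$IFACE" scan 2>/dev/null | awk -F': ' '/^\tSSID: /{print $2}' | sort -u | while read -r ssid; do
  for good in $AUTHORIZED; do
    if [ "$ssid" = "$good" ]; then
      echo "SSID '$ssid' observed in survey - verify its BSSID belongs to an authorized AP"
    fi
  done
done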
Monday, November 2, 2015
Where in the org chart does your Threat Intel team belong?
I didn't think this was really an issue, but two recent experiences have proven me wrong. Although organized infosec threat intelligence teams are relatively new, business intelligence teams have been around roughly as long as business. At Rendition Infosec, I always advise clients that any threat intelligence program needs to understand business context in order to succeed. This makes integration with the business intelligence teams a natural synergy, even if the data collection and processing methods are different for cyber threat intelligence (CTI).
The other place in the organizational structure where a threat intel team makes sense is working closely with the SOC and IR teams. The CTI team can provide indicators to search for, advise in tuning the SIEM and other devices to minimize false positives, and provide good overall value for the security dollars spent.
So where shouldn't you place your threat intel team? Two places I've seen them fail recently are under vulnerability management (VM) and under governance, risk, and compliance (GRC). The logic for attaching the CTI team to the VM team was that CTI often notifies the organization of new zero-day threats being used in the wild (e.g. Pawnstorm attacking Flash, again) and these should be better understood by the VM team. On the GRC side, the organization figured that if CTI identified a threat, GRC should create policies to prevent the threat actor from being effective.
Both VM and GRC sound almost sane, but neither is a logical home for a CTI team. This isn't to say that the CTI team can't provide value to both GRC and VM; it simply doesn't line up best there. And the org chart matters. Units that fall in the wrong place in the org chart consistently see their budgets slashed and their results marginalized. And that's not because they aren't doing great work, only because they had the misfortune of working in a sub-optimal reporting chain.
Sunday, November 1, 2015
Don't believe everything you read on the Internet...
This is pretty universally true. Take everything you read on the Internet with a grain of salt. This morning, the more than 741k Twitter followers of the "History in Pictures" account (@HistoryInPix) were treated to a completely fictional rendition of poor President Lincoln (who, along with Jefferson, is one of the most oft-misquoted historical figures ever).
The problem of people misquoting historical figures is so huge that John Oliver recently launched his own website where I "learned" that famous physicist Albert Einstein weighed in on genital herpes.
But the original offending image (which has almost 1400 retweets) is from the book "Abraham Lincoln, Vampire Hunter" and the picture is just as fictional as the book. What's the infosec hook here?
First off, a Twitter account with 741,000+ followers was distributing garbage. If an attacker took it over and posted a random link (say to an exploit kit), how many people would click on the link? My guess is the number would be at least as many people as retweeted the fake picture (so minimum 1400).
The second infosec hook has to do with half-truths. At Rendition Infosec, we regularly work with clients who have read some half-truth on the Internet and taken it as gospel. One of my personal favorites is "we use SSL, so our web applications are safe." Wow. SSL only protects your web traffic in transit; it does nothing for the application itself. SSL definitely doesn't protect you from XSS, SQLi, or CSRF, contrary to popular belief in some circles. Another favorite half-truth is that if you deploy a WAF, you don't need to remediate issues in your vulnerable web applications. This is a really bad idea for a number of reasons, but it's a half-truth that some clients have clung to with nearly religious zeal.
The moral of the story? I'll defer back to Lincoln for this - he sums it up as well as I ever could.