Thursday, April 30, 2015
As an update to yesterday's post, we've since learned that a map update for the DCA airport was to blame for the American Airlines (AA) iPad outage. AA explains that this is why the whole fleet wasn't affected: only the iPads that tried to access the DCA map had problems.
I think this is a great case study for some infosec discussion. It isn't immediately something you'd call an infosec problem, but it has lots of potential infosec angles. Yesterday, I talked about pre-deployment security testing, disaster recovery, and MDM as possible infosec angles to look at. Today, since we now know the cause of the failure, I'd like to add pre-deployment code testing and tighter integration between development and production. This tight integration is often referred to by the buzzword DevOps.
AA presumably tested the code before shipping any updates, but they clearly didn't catch the case where the update hit an iPad that was updating an existing copy of the DCA map rather than installing one fresh. Whether or not DevOps would have caught this, it's a neat edge case for a sand table exercise in your security department. It's new, and it brings in some different people you might not have in a more traditional sand table exercise.
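To make that concrete, here's a minimal, purely hypothetical sketch of the kind of pre-deployment test that covers both populations of devices. Nothing about it reflects AA's actual software; the install_chart / update_chart helpers and the chart-store layout are my own inventions for illustration. The point is simply that the test matrix has to include devices that already hold an older copy of the DCA chart, not just clean installs.

```python
# Hypothetical sketch: the chart-store API below is invented for illustration,
# not American Airlines' actual update mechanism.
import pytest

def install_chart(store: dict, airport: str, data: bytes) -> None:
    """Fresh install: no prior copy of the chart exists on the device."""
    store[airport] = data

def update_chart(store: dict, airport: str, data: bytes) -> None:
    """In-place update: a prior copy exists and is replaced."""
    if airport not in store:
        raise KeyError(f"no existing chart for {airport}")
    store[airport] = data

NEW_DCA = b"new DCA chart revision"

@pytest.mark.parametrize("preexisting", [None, b"old DCA chart revision"])
def test_dca_update_both_paths(preexisting):
    # The fleet splits into two populations: devices installing the chart
    # fresh, and devices updating an existing copy. Exercise both paths.
    store = {}
    if preexisting is not None:
        store["DCA"] = preexisting
        update_chart(store, "DCA", NEW_DCA)
    else:
        install_chart(store, "DCA", NEW_DCA)
    assert store["DCA"] == NEW_DCA
```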
Wednesday, April 29, 2015
What can we learn from the American Airlines iPad failure?
ICYMI, American Airlines had several planes delayed when something went wrong with their iPads on 737 aircraft. I'm pretty sure we'll never know the whole story on this, but are there any infosec takeaways? I think so.
- Mobile device management is complicated. It gets extra complicated when devices are randomly connected to and disconnected from the Internet. Whatever problem impacted the iPads was at least partly resolved by returning to the gate to get an active Internet connection. Whether these devices are 4G enabled or not is unclear. Even if they are, I don't know whether they could use 4G in the cockpit after the boarding door is closed. The next time someone talks about mobile device management like it's child's play, ask them how they would handle a partial app update in a situation like this (see the sketch after this list).
- What's your DR plan? Every new technology comes with liabilities. The iPad rollout eliminated 24 million pages of paper carried from gate to gate by American Airlines pilots. Paper never runs out of batteries, needs a software update, or suffers from a glitch. It's also difficult to update 8,000 copies of the same paper (forget about automatic updates). So the replacement sounded like a good idea. But what's your DR plan when the technology fails? Do you have one? Can you roll back to the old technology? If so, how quickly? What's the mitigation? Again, this is a great real-world scenario that speaks to DR.
- In my circles this morning, people openly asked whether some cyber security incident was to blame for the iPad failure. Then more questions came up about the security of the devices in general. Every technology should be evaluated for security before being rolled out to the enterprise. Twice this month, I've worked with clients who didn't do this and found out the hard way that they should have included security assessments in the project deployment plan. I trust that American Airlines did this, but if they didn't, they need to get some independent security assessors to look at the technology now. Why independent? Because in any project, internal security teams have politics to deal with and feelings they want to avoid hurting. For projects already deployed, it's way too easy to conveniently ignore or play down issues.
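On the MDM point above, one concrete control is to have the device verify its chart bundle after every sync and refuse to report itself flight-ready if anything is missing or half-written. Here's a minimal sketch of that idea, assuming a simple manifest of SHA-256 checksums; the manifest format and file layout are my assumptions, not how any real EFB or MDM product actually works.

```python
# Hypothetical sketch: a post-update integrity check an MDM agent might run
# before a device is declared flight-ready. The manifest format and file
# layout are assumptions for illustration, not any real EFB product's design.
import hashlib
import json
from pathlib import Path

def verify_bundle(bundle_dir: Path, manifest_path: Path) -> list[str]:
    """Return a list of problems found; an empty list means the bundle is intact."""
    manifest = json.loads(manifest_path.read_text())  # {"DCA.chart": "<sha256>", ...}
    problems = []
    for name, expected_sha256 in manifest.items():
        chart = bundle_dir / name
        if not chart.exists():
            problems.append(f"missing file: {name}")
            continue
        actual = hashlib.sha256(chart.read_bytes()).hexdigest()
        if actual != expected_sha256:
            problems.append(f"checksum mismatch (partial update?): {name}")
    return problems

if __name__ == "__main__":
    issues = verify_bundle(Path("charts"), Path("charts/manifest.json"))
    if issues:
        # This is where the MDM would flag the device, instead of letting the
        # crew discover the problem at the gate.
        print("\n".join(issues))
```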
Friday, April 24, 2015
Should we upgrade from Windows Server 2003?
There's plenty of hype about the need to upgrade from Windows Server 2003. But is that more hype than reality? I'm probably in the small minority of security professionals who thinks so. Sure, you won't be able to get patches for Server 2003 after July - but how big a deal is this? Does it signal death for companies that can't upgrade? I think not.
"You can't get patches anymore" isn't a business case to upgrade from Server 2003. If you want a business unit to upgrade, you have to make a business case. Do they have software that won't run on Server 2008 or later? What about some application that is really tied to the OS installation? If that application supports a critical business process, can you really afford the risk to upgrade? Certainly not hastily.
Note: If you have regulatory requirements for upgrading, then ignore all the "business" arguments I'm making here. You need to upgrade.
Sure, not patching presents a risk. But hastily migrating to a new platform also has risks. In infosec, we are best served by ignoring the hype and helping the business create realistic risk models. If you haven't started your migration plans yet, rushing to get the migration complete before July is probably going to introduce more risk than it mitigates.
Does that mean you should do nothing? No, not at all. If you have Windows Server 2003 servers that you won't upgrade and that won't be getting patches, you should consider additional risk mitigation strategies. What would these entail? I'd start by recommending enhanced logging. Of course, more logs mean more analysis work. If you don't have a SIEM, get one to help you now. I recommend an AlienVault deployment for a quick fix - if you don't have the resources to upgrade from Server 2003, you don't have the resources to deploy a complex SIEM product. You want something that's cost effective and that you can have up and running in no time. AlienVault fits the bill on both counts.
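What does "enhanced logging" look like in practice? At minimum, get the Security event log off the 2003 box and into whatever collector your SIEM reads. Here's a rough sketch of one way to do that with pywin32 and plain syslog; the hostnames and collector address are placeholders, and in a real deployment you'd more likely use your SIEM's own agent or native Windows log collection rather than a homegrown forwarder.

```python
# Hypothetical sketch: pull Security events from a Server 2003 box with pywin32
# and forward them via syslog to a collector your SIEM (e.g., AlienVault) reads.
# The server name and collector address are placeholders.
import logging
import logging.handlers

import win32evtlog  # pip install pywin32; run from an admin workstation

SERVER = "legacy-2003-srv"              # the Server 2003 machine (placeholder)
COLLECTOR = ("siem.example.com", 514)   # syslog collector address (placeholder)

logger = logging.getLogger("win2003-forwarder")
logger.setLevel(logging.INFO)
logger.addHandler(logging.handlers.SysLogHandler(address=COLLECTOR))

def forward_security_events() -> None:
    handle = win32evtlog.OpenEventLog(SERVER, "Security")
    flags = win32evtlog.EVENTLOG_BACKWARDS_READ | win32evtlog.EVENTLOG_SEQUENTIAL_READ
    events = win32evtlog.ReadEventLog(handle, flags, 0)
    while events:
        for event in events:
            logger.info(
                "host=%s source=%s event_id=%s time=%s",
                SERVER,
                event.SourceName,
                event.EventID & 0xFFFF,   # strip severity/facility bits
                event.TimeGenerated,
            )
        events = win32evtlog.ReadEventLog(handle, flags, 0)
    win32evtlog.CloseEventLog(handle)

if __name__ == "__main__":
    forward_security_events()
```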
Beyond logging, what else is there? You'll want to restrict the functions that the server performs. Maybe you can't move that database or web application that's heavily tied to your 2003 server, but there's no excuse for hosting SMB file shares from the same server. Those should be relatively trivial to move to a newer server. Just like anything else, you should seek to minimize your attack surface by minimizing the number of services offered. But when you can't patch the OS those services are running on, the case for minimizing your attack surface becomes much stronger.
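Before you can move the shares, you need an inventory of them. A quick sketch of that inventory pass, run from an admin workstation, might look like the following; the server name is a placeholder, and the crude parsing of `net view` output is just for illustration.

```python
# Hypothetical sketch: from an admin workstation, list the visible SMB shares a
# legacy Server 2003 box is exposing so you know what needs to move elsewhere.
# The hostname is a placeholder.
import subprocess

LEGACY_SERVER = r"\\legacy-2003-srv"   # placeholder name

def visible_shares(server: str) -> list[str]:
    """Parse `net view \\server` output and return the share names it exposes."""
    output = subprocess.run(
        ["net", "view", server], capture_output=True, text=True, check=True
    ).stdout
    shares, in_table = [], False
    for line in output.splitlines():
        if line.startswith("---"):
            in_table = True                 # the dashed line precedes the share rows
            continue
        if in_table:
            if not line.strip() or line.startswith("The command"):
                break                       # end of the share table
            shares.append(line.split()[0])  # first column is the share name
    return shares

if __name__ == "__main__":
    for share in visible_shares(LEGACY_SERVER):
        print(f"candidate to migrate off the 2003 box: {share}")
```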
Lest you think that staying on Server 2003 forever is an option, let me dissuade you further. There will be vulnerabilities in the apps that you run on the server, and those apps will be easier to exploit on Server 2003 than on a more modern version of Windows. Newer versions of Windows support kernel ASLR and other exploit mitigation technologies not present in Windows Server 2003. So before you think that Server 2003 might be a permanent solution, consider that, like a leech sucking blood from your arm, it makes everything running on it a bit weaker.
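If you want to see this for yourself, inspect the binaries you run. A sketch like the one below (using the pefile library; the application path is a placeholder) shows whether an app even opts in to ASLR and DEP. On a modern Windows version those opt-ins buy real mitigation; on Server 2003, which predates ASLR support entirely, the ASLR flag buys you nothing.

```python
# Hypothetical sketch: check whether an application binary requests ASLR and DEP.
# The path below is a placeholder for whatever app runs on the legacy server.
import pefile  # pip install pefile

DYNAMIC_BASE = 0x0040  # IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE (ASLR opt-in)
NX_COMPAT = 0x0100     # IMAGE_DLLCHARACTERISTICS_NX_COMPAT (DEP opt-in)

def mitigation_flags(path: str) -> dict:
    """Report whether the binary opts in to ASLR and DEP."""
    pe = pefile.PE(path, fast_load=True)
    chars = pe.OPTIONAL_HEADER.DllCharacteristics
    return {
        "aslr_opt_in": bool(chars & DYNAMIC_BASE),
        "dep_opt_in": bool(chars & NX_COMPAT),
    }

if __name__ == "__main__":
    print(mitigation_flags(r"C:\apps\legacy_app.exe"))
```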
So is the impending death of Server 2003 hype or reality? Actually, a little of both. You should be making plans to get off of Server 2003, but if you haven't made plans yet, your network won't fall over tomorrow if you don't start immediately.
"You can't get patches anymore" isn't a business case to upgrade from Server 2003. If you want a business unit to upgrade, you have to make a business case. Do they have software that won't run on Server 2008 or later? What about some application that is really tied to the OS installation? If that application supports a critical business process, can you really afford the risk to upgrade? Certainly not hastily.
Note: If you have regulatory requirements for upgrading, then ignore all the "business" arguments I'm making here. You need to upgrade.
Sure, not patching presents a risk. But hastily migrating to a new platform also has risks. In infosec, we are best served ignoring the hype and supporting the business in helping them create realistic risk models. If you haven't started your migration plans yet, rushing to get the migration complete before July is probably going to introduce more risk than it will mitigate.
Does that mean you should do nothing? No, not at all. If you have Windows 2003 servers that you won't upgrade and aren't getting patches, you should consider additional risk mitigation strategies. What would these entail? I'd start by recommending enhanced logging. Of course more logs mean more analysis work. If you don't have a SIEM, get one to help you now. I recommend an AlienVault deployment for a quick fix - if you don't have resources to upgrade from server 2003, you don't have resources to deploy a complex SIEM product. You would want something that you can have up and running in no time and something that's cost effective. AlienVault fits the bill for both.
Beyond logging, what else is there? You'll want to restrict the functions that the server performs. Maybe you can't move that database or web application that's heavily tied to your 2003 server, but there's no excuse to be hosting SMB file shares from the same server. Those should be relatively trivial to move to a newer server. Just like anything else, you should seek to minimize your attack surface by minimizing the number of services offered. But when you can't patch the OS the services are running on, that makes the case for minimizing your attack surface becomes much stronger.
Lest you think that staying on server 2003 forever is an option, let me dissuade you further. There will be vulnerabilities in the apps that you run on the server. And those apps will be easier to exploit on server 2003 than they will be running on a more modern version of Windows. Newer versions of Windows support kernel ASLR and other exploit mitigation technologies not present in Windows Server 2003. So before you think that server 2003 might be a permanent solution, consider that like a leech sucking blood from your arm, it makes everything running on it a bit weaker.
So is the impending death of server 2003 hype or reality? Actually a little of both. You should be making plans to get off of server 2003, but if you haven't made plans yet, your networks won't fall tomorrow if you don't do so immediately.