Ramblings about security, rants about insecurity, occasional notes about reverse engineering, and of course, musings about malware. What more could you ask for?
We've posted the 14th challenge in the "Infosec Advent" series. This one is a Linux server intrusion case. You get syslog and auth.log. Unfortunately that's all that was being forwarded.
We have a set of Linux syslog and authentication logs that you can download here. Download and analyze the logs for signs of intrusion. Based on the log data, let us know what you think has happened.
Specifically, we're looking to understand the following:
How many attackers compromised the server?
What did the attackers do once on the server?
What steps should be taken to recover from the incident?
What, in your opinion, is the likely root cause of the incident?
In all cases, please show your work (e.g. back your analysis with facts, where available). In cases where data is not available to back your hypothesis, let us know what data you would need and where you would look to collect it.
Please limit your submissions to 1500 words. The best characterization of this web server intrusion will receive a $25 Amazon gift card (subject to contest rules). The winner will be announced 21DEC17.
If you were looking for another Digital Forensics and Incident Response (DFIR) related challenge, here you go. Have fun!
We've posted the 13th challenge in the "Infosec Advent" series. This one is a web server intrusion case where we will ask you to analyze the logs and let us know what you find.
We have a set of web server logs that you can download here. Download and analyze the logs for signs of intrusion. Based on only the web log data (yes, we know that makes it harder) write a narrative that explains what happened.
Is it realistic to have only logs and no image of the web server filesystem? Unfortunately, the answer is yes. Rendition Infosec worked a case this year where logs were available but the server image was not. We would prefer more data to work with, but in infosec as in life, you have to play what you've got.
Please limit your submissions to 1500 words. The best characterization of this web server intrusion will receive a $25 Amazon gift card (subject to contest rules). The winner will be announced 20DEC17.
If you were looking for some Digital Forensics and Incident Response (DFIR) related challenges, here you go. Have fun!
Rendition Infosec is sponsoring a new contest this holiday season to up your infosec skills and make you think (at least a little) about infosec each day. We're calling the challenge "Infosec Advent" and have set aside $1,000 in prizes to sweeten the pot for those who wish to participate.
In all honesty, it would have been way cooler if we could have launched this on December 1st like we planned. But unfortunately, attackers don't schedule their attacks on our clients around our plans, so this project got put on hold while we did some end-of-year incident response. We almost decided not to run it this year, but then realized that was dumb. When it comes to this sort of thing, does late really matter? After all, we're talking about free infosec education and free money...
We're releasing a series of hard and soft skill challenges between now and December 24th (the first 12 are posted now). While we'll admit that the initial set of challenges are relatively soft skill focused, we don't think that's a bad thing. Soft skill challenges are accessible to anyone, while hard skill challenges require more specific skills to perform. That said, you can expect some PCAP, memory dumps, and a few other surprises before Christmas.
We have no idea how popular this will be (or if anyone will care) but we wanted to give back to the broader community and this seemed like a great way to do it. We will be posting the entries of the winners (and probably honorable mentions) so that everyone can learn throughout the holiday season.
If I'd written this last week, the post would have been very different. I would have pondered whether cybersecurity awareness month should even be a thing. Granted I live in the infosec echo chamber, but I often wonder how many out there aren't already inundated with information about staying safe online. Does one more phishing assessment or security reminder poster really matter? Sure, I regularly perform incident response and forensics, so I know attacks happen. But the extent to which we can stop them with additional training is questionable.
One idiot, two keyboards
But that was last week... This week a good friend of mine who is a high profile APT target hit me up for some cybersecurity advice. Now before I tell the rest of this story, it's important to me that you know that he's been educated in cybersecurity hygiene and receives regular briefings on security from his organization. His organization uses regular phishing tests. He's a smart guy. I'm not mentioning names, but I bet if I did most of you would know who he is and would understand why he's a no joke nation state (dare I say APT?) target.
Should Antivirus (AV) software be part of your threat model? Strictly speaking, yes it probably should be. AV is potentially dangerous to an organization and should be tested thoroughly before being deployed. As argued in the recent WSJ article about Kaspersky (note that the article is behind a pay wall), AV software could threaten the confidentiality of a protected system.
But as any infosec professional can tell you, information security is about more than just confidentiality. The security triad is referred to by the acronym CIA, which most reading this post will know stands for Confidentiality, Integrity, and Availability. In every security program, one of these items takes precedence over the other two.
In the case of the NSA contractor who placed classified material on their home computer, confidentiality was clearly the most important of the three. However, there are few organizations for whom a breach of confidentiality is really the most damaging impact. In the vast majority of organizations, devastating compromises to integrity and availability would have a far greater impact to organizational health.
Read the full post (including scenarios for compromising integrity and availability) on the Rendition Infosec blog.
In this post, we’ll discuss a few early lessons learned from the Equifax breach announced yesterday. We’ll also recommend a six point plan to avoid becoming “the next Equifax” based on what we know today about the breach. Rendition is in no way involved with the breach assessment for Equifax and we have no inside knowledge. However, we will discuss the publicly available information so organizations can take action to avoid a similar breach.
Note: In the coming days and weeks, you’ll likely be inundated with vendor pitches claiming they can stop you from becoming “the next Equifax.” Be wary, be very wary. If it sounds too good to be true, it probably is. In information security, there are no silver bullets. But that’s okay – werewolves probably aren’t part of your threat model anyway…
At Rendition Infosec, we endorse the SANS Institute six step Incident Response (IR) process. For those not familiar with the process, the steps are:
Preparation
Identification
Containment
Eradication
Recovery
Lessons Learned
This conveniently spells PICERL. A handy mnemonic to remember this is “Patched Infrastructure Could’ve Easily Reduced Losses.” This is great because it’s simple to remember AND true.
For this post, we’re going to focus on the preparation and identification phases since those are what we know the most about so far.
Like many information security firms, Rendition Infosec has worked many ransomware attacks over the last several years. If you’re reading this post, you probably know about the obvious things you can do to prepare for a ransomware event. We often talk about having good backups (and testing them). We also know that most ransomware is distributed through phishing, so having good phishing defenses helps too.
When it comes to ransomware, an ounce of prevention is worth a pound of cure…
But let’s assume that you’ve checked those two boxes (as well as anyone can). Let’s face it, sooner or later you are likely to have to deal with a ransomware threat in your environment. So what else can you do to prepare for the inevitable ransomware compromise? In this post, we’ll detail a few things that can be done to contain the damage and recover quickly in the event of a ransomware attack.
The five preparation steps are:
Enable Volume Shadow Copies and increase allocated space
Remove users from the local administrators group
Limit the number of shares that a user has write access to
Use hidden file shares
Only map file shares while in use
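As a hedged sketch of the first step, on Windows the space reserved for Volume Shadow Copies can be inspected and increased with the built-in vssadmin utility (the drive letter and size below are placeholders; pick values that fit your change rate and retention needs):

```bat
REM List current shadow copy storage associations and how much space is in use
vssadmin list shadowstorage

REM Increase the space reserved for shadow copies of C: (25GB is only an example)
vssadmin resize shadowstorage /For=C: /On=C: /MaxSize=25GB
```

Keep in mind that many ransomware families delete shadow copies (often using this same utility) before encrypting, so shadow copies complement offline backups rather than replace them.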
In the rest of the post on the Rendition Infosec blog, we’ll discuss the rationale for each of these recommendations.
An interesting article came through our feed today mentioning the need for cyber security in law firms. As an information security company that works with law firms, we couldn't agree more. The article makes a number of points, but leaves a couple of critical things out, and we'd like to cover those here. It's worth noting that the advice here applies to practically any organization (not just law firms).
The article suggests the following items for all law firms to increase their security:
Use password managers
Update computer software
Use encryption software
Use multi-factor authentication (MFA/2FA)
Risk landscape for law firms
At Rendition Infosec, we don't fundamentally argue with any of these. We do however think that this falls well short of information security best practices for most law firms. The reality is that lawyers deal with sensitive data every day and that makes them a target for attackers. Sensitive data might include mergers and acquisitions information from clients. This data, if compromised, can have major economic impacts to both the law firm and the client.
Attackers may also target law firms for more than just the data they have. Many users, even those at the most secure organizations, expect email communication from external counsel. Attackers may target law firms as a way to get into other networks more easily. Law firms should think of this as extending their cyber risk to their clients.
Over the last year, there have been numerous dumps of stolen classified data posted on the Internet for all to see. The damage from these dumps has obviously been huge to the US intelligence community. In this post, we won’t discuss the actual damage of the dumps to the intelligence community (many others have already pontificated on that). Instead, this post will focus on the need for CTI analysts to perform analysis of the dumps.
For the first time, CTI analysts have a view of what appears to be a relatively complete nation state toolset in the Shadow Brokers dumps and insight into tool development and computer network exploitation (CNE) tool requirements in the Vault 7 dumps. These are game changers for CTI analysts. We define threat as the intersection between intent, opportunity, and capability. These tools and documents highlight the capabilities of an APT adversary. Whether you believe the US intelligence services have the intent to attack your network, it is likely (almost certain) that other nation state attackers have developed similar capabilities. Analyzing the data you have available (Shadow Brokers and Vault 7) can help shed light on what you don’t have available (every other nation state attacker’s toolset in a single dump).
Note: We understand that this is a sensitive topic. When classified data is released, it is still considered classified until declassified by a classification authority. There is no evidence that any classification authorities have declassified the data in the Shadow Brokers or Vault 7 dumps. It is likely that they remain classified to this day. The advice in this article may put those with security clearances at odds with the advice of their security officers. Please proceed with care.
Over the last few months we’ve seen multiple cases of warnings about plugins and extensions for various software packages threatening the security of users. We’ve recently seen the Copyfish and Web Developer Chrome plugins compromised and used to push malware to users.
While Chrome is likely safe and should probably not be considered a threat, perhaps your plugins should be. Plugins are developed by potentially malicious third parties. Even if your plugin developers are not themselves malicious, they have security concerns just like everyone else. And make no mistake about it: when understanding software supply chain issues, their security is your security.
The US DoJ recently released guidance on running vulnerability disclosure programs (aka bug bounties). The document is nothing earth shattering, but does provide some free advice to organizations considering such programs.
Rendition’s advice to organizations considering a bug bounty program? Think VERY carefully about how it will impact your monitoring and detection strategies. People looking for bugs will create noise in your network – a lot of it. And the noise will look like attacks, because technically they ARE attacks. How will you separate this non-malicious attack traffic from real attack traffic you should be concerned about?
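One way to make that separation tractable is to have bounty participants register their source addresses up front, then tag matching events as probable researcher noise while everything else stays in the analyst queue. The sketch below is a minimal illustration of this idea in Python; the event format and researcher IP ranges are hypothetical assumptions, not part of any standard.

```python
import ipaddress

# Hypothetical allowlist of source ranges registered by bug bounty researchers.
RESEARCHER_RANGES = [ipaddress.ip_network("203.0.113.0/24")]

def triage(events):
    """Split attack-looking events into bounty noise vs. unattributed attacks.

    Each event is a dict with at least a 'src_ip' key (format is an assumption).
    """
    noise, suspicious = [], []
    for event in events:
        ip = ipaddress.ip_address(event["src_ip"])
        if any(ip in net for net in RESEARCHER_RANGES):
            noise.append(event)       # registered researcher traffic
        else:
            suspicious.append(event)  # still needs analyst attention
    return noise, suspicious

events = [
    {"src_ip": "203.0.113.7", "sig": "sqli-attempt"},
    {"src_ip": "198.51.100.9", "sig": "sqli-attempt"},
]
noise, suspicious = triage(events)
print(len(noise), len(suspicious))  # -> 1 1
```

The point is not the code itself but the process it implies: if you can't attribute attack traffic to a registered researcher, you still have to treat it as hostile.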
So far, Rendition has posted on the Kaspersky debate twice. In the first post, Rendition educated the public on why a software audit would not address the fears raised by the Senate. The second post explained the damage that any antivirus software could do in a network if its operation were taken over by a foreign government. The second post is about more than just Kaspersky - as Rendition made clear in the post, it could apply to any antivirus software.
Bloomberg reports previously unknown Kaspersky involvement with Russian government
Yesterday, Bloomberg wrote an article claiming that Kaspersky is far deeper involved with Russian intelligence than was publicly known. At Rendition, we think parts of that reporting were careless, especially the interpretation of the words "active countermeasures." "Active countermeasures" is not an industry standard term, a pet peeve of Rendition's founder Jake Williams, who has spoken on the topic at various industry events. Bloomberg took the phrase "active countermeasures" to mean the following.
"Active countermeasures is a term of art among security professionals, often referring to hacking the hackers, or shutting down their computers with malware or other tricks."
We know of no such standard definition for "active countermeasures." Even if Bloomberg got this definition from an infosec expert, any expert worth quoting would have told Bloomberg that their definition was one of many and not "generally accepted" by the community. That this wasn't reported makes the whole article reek of bias - where there's smoke, there's usually fire.

Kaspersky responds to Bloomberg
Eugene Kaspersky posted a retort that addresses the Bloomberg article point by point. Kaspersky calls out some of the obvious problems with the article, including talking around the point made above. But in his response, Kaspersky says something that is misleading if not outright false, and we think that needs to be addressed as well.
Recently we learned that the US Senate was pushing to add language to the National Defense Authorization Act (NDAA) that would prohibit the purchase and use of Kaspersky software anywhere in the DoD. This is nearly certainly a political move and CyberScoop’s Patrick Howell O’Neill did a great job of covering this story already from a political angle. It is entirely possible that the Senate’s statements about the NDAA are just political messages meant to rattle the sabers.
But should antivirus be part of your threat model? Perhaps it should. As Tavis Ormandy has shown over the last year, antivirus software is often full of security vulnerabilities. This is especially concerning because antivirus runs with elevated privileges - and those elevated privileges are exactly what make antivirus software so dangerous.
In considering this debate, it is important to consider the types of threats that antivirus software could pose if the vendor were subject to “influence” from a government. Obviously we are talking about this because of Kaspersky and the NDAA, but it is important to note that any antivirus company could be subject to the same attacks. The risk is not limited to antivirus companies that could be influenced – any software manufacturer with automatic updates could be used as an attack platform by a government. If one were hacked by an APT group (most likely a nation state), their customers would also be vulnerable (whether the software in question is antivirus or something else).
After reviewing the awesome Dragos Inc report on CRASHOVERRIDE, Rendition analysts received a similar alert from US-CERT and NCCIC. After reviewing the guidance from NCCIC, we were less than thrilled. The second recommendation from NCCIC (take measures to avoid watering hole attacks) is impossible by its very definition. A watering hole attack first compromises a remote site that you already visit in order to compromise your network; the victim is never tricked into visiting a rogue site, as is the case in phishing. There is frankly no way for an organization to avoid one outright. Unfortunately, the fact that this "mission impossible" is set as recommendation #2 means that many will stop trying to implement anything further down the list, assuming that the rest may also be impossible by definition.
Last week Rendition Infosec founder Jake Williams contributed an article for next month’s issue of Power Grid International magazine. The article highlights the need for utilities to monitor their IT networks in order to protect their OT networks from compromise. Today’s release of the excellent CRASHOVERRIDE report by Dragos Inc only reinforces the points Williams made in his article.
While a simple Shodan search will show many ICS devices directly connected to the Internet, these organizations obviously aren’t following best practices in the first place. Monitoring would certainly help these organizations to detect threats as well, but they honestly have bigger problems that start with segmenting their networks.
For those utilities that have already segmented IT from OT (operational technology), monitoring the IT network is absolutely critical. Most attackers enter the OT network from the IT side of the network through phishing emails or other commodity exploits. They then noisily stumble through the network looking for the bridge between IT and OT. Even if the networks are completely airgapped (few truly are in our experience), attackers will eventually find a way to get malware to the OT side. But along the way, attackers usually make a ridiculous amount of noise trying to find the places where the IT and OT networks are joined.
Recently we had a client call us about a problem on their network. Rendition Infosec runs a 24×7 security monitoring service and had a client call about an antivirus alert for PUA (potentially unwanted application). This class of alert is often difficult to tune out since attackers and administrators often use the same software tools.
Frequent examples of this are netcat (nc.exe) and psexec from SysInternals. These tools are like the infamous “dual use technology” we hear so much about when sanctioning oppressive regimes. When we receive an alert like this, we most frequently find that the alert can be attributed to the activity of a systems administrator. However, there is a possibility that the alert represents the activities of an attacker.
Rendition Infosec is sponsoring a petition asking Microsoft to disclose telemetry data around MS17-010. We've highlighted a number of reasons why we feel this is important for the security community as a whole.
It is almost certain that Microsoft has data around how these vulnerabilities were exploited by attackers. Revealing this data will help us better understand decisions made in the vulnerability equities process. It will also enhance understanding about how likely it is that vulnerabilities discovered by APT attackers are independently rediscovered by other attack groups. Finally, it will help policy makers assess whether the exploits reportedly stolen (and subsequently released) by Shadow Brokers were likely used to exploit other targets before being released to the general public. If you work in infosec, think computer security is a good thing to have, and/or believe in transparency, please consider signing our petition, linked below:
I've posted some notes from the latest Rendition Infosec Internet wide scans for DOUBLEPULSAR. Despite some reports to the contrary, it's not getting any better. In fact, it's a bit worse than earlier this week despite the uninstallation scripts moving around the Internet (note that Rendition Infosec does NOT recommend using these tools).
After reading some early articles mentioning that DOUBLEPULSAR (reportedly NSA malware) infections were widespread on the Internet, my folks at Rendition Infosec thought the numbers might be inflated due to poorly implemented scans. After performing some of our own scans, we are confident that these numbers are not inflated and at least 3% of the machines with TCP port 445 exposed to the Internet are infected with DOUBLEPULSAR.
Microsoft released their idea of a “Digital Geneva Convention” to help normalize behavior on the cyber battlefield. The document, linked here, is generally well written and makes a solid case for why an agreement of its type is needed.
While the idea of regulating the cyber domain is not a bad one, the proposal depends on attribution, a field that is sorely lacking in reliability and repeatability. I've outlined some of those problems here.
The Shadow Brokers have dumped their cache of exploits for Windows systems (supposedly stolen from NSA). Although some were originally reported as zero-day exploits, this has since been proven incorrect by recent Microsoft patches. However, there's still plenty of business impact. In what I'm sure will be the first of many posts on this topic, I'm focusing on the problem of Windows Server 2003, which continues to be widely deployed.
Read the full post, complete with recommendations for businesses here.
Russia is likely using the latest Shadow Brokers release to attempt to control the news cycle and take coverage away from the Syria conflict. Yesterday, in a political rant using broken English, the Shadow Brokers released the password for the encrypted zip file they seeded last year (link).
This release gives threat intelligence teams unprecedented insight into the capabilities of the Equation Group hackers. The dump appears to contain only Linux and Unix tools and exploits, so organizations running only Windows don’t need to react to tools in this release (though they should check their available netflow and firewall logs for evidence they have communicated with redirection hosts posted here). For organizations running Linux and/or Unix, it should be noted that most of the exploits target older software versions. However, the dump is still significant for threat intelligence professionals. Because Equation Group is likely typical of other nation state hacking groups, the dump offers unprecedented insight into the capabilities and targets of an Advanced Persistent Threat (APT) actor.
I was recently interviewed for an article explaining what CTI is. The article is published, but I think my full answers provide a little bit better context. You can read the article here. If you want the full, unpublished answers I gave, check it out on my corporate blog.
You may have heard about the TicketBleed vulnerability that plagued some F5 devices. Though it's been a while, I figured it would be worth the time to put out this case study on why it makes the case for packet capture at strategic locations.
I personally don't think a bug impacting so few devices (it requires a non-default configuration) is worthy of naming, but hey, to each their own. If you are an F5 customer, you certainly care about the bug and I don't blame you a bit. Otherwise, meh.
But I think the #TicketBleed vulnerability is a good example of why you need full network traffic capture. I covered the HeartBleed vulnerability pretty extensively for SANS and dealt with the fallout in customer environments. One of the questions people asked me consistently was "how can I tell if I've been exploited?" Unfortunately, successful exploitation would not create any logs. Netflow is also useless for detecting HeartBleed exploitation. The same is likely true for TicketBleed.
But with packet capture, HeartBleed (and now TicketBleed) can be detected. But where should your taps for packet capture be located? Most would say after traffic is decrypted (e.g. after a VPN concentrator). I would normally agree with this advice, but vulnerabilities like TicketBleed and HeartBleed show that this isn't always the right answer. There are exceptions to every rule. In order to execute the discovery Course of Action (CoA), you need some amount of packet capture.
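As a rough illustration of why capture matters here: TLS heartbeat messages carry their own record-layer content type (24, per RFC 6520), so with raw packets in hand you can at least walk the record headers and flag heartbeat traffic for closer inspection - something no log or netflow record will ever show you. The Python sketch below is a simplification (real detection would reassemble TCP streams from the capture first):

```python
import struct

TLS_HEARTBEAT = 24  # TLS record content type for Heartbeat (RFC 6520)

def heartbeat_records(stream: bytes):
    """Walk TLS records in a captured stream and collect heartbeat records.

    'stream' is assumed to be raw TLS record bytes; a real deployment would
    reassemble TCP streams from full packet capture before calling this.
    """
    offset, found = 0, []
    while offset + 5 <= len(stream):
        # Record header: content type (1 byte), version (2), length (2)
        ctype, version, length = struct.unpack_from("!BHH", stream, offset)
        body = stream[offset + 5 : offset + 5 + length]
        if ctype == TLS_HEARTBEAT:
            found.append((offset, length, body))
        offset += 5 + length
    return found

# Synthetic example: an application-data record followed by a heartbeat record.
capture = (
    b"\x17\x03\x03\x00\x02ok"            # content type 23 (application data)
    b"\x18\x03\x03\x00\x03\x01\x40\x00"  # content type 24 (heartbeat)
)
hits = heartbeat_records(capture)
print(len(hits))  # -> 1
```

Any heartbeat record found this way isn't proof of exploitation, but it gives an investigator something concrete to pull on - which is far more than zero logs.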
Any time a serious vulnerability is discovered, there is always a question about whether you may have been exploited before you were patched. This is especially concerning for those who have nation states as part of their threat model. If you haven't considered a packet capture or other network security monitoring, please talk to us at Rendition Infosec and we'll be happy to help you plan for success in your next investigation.
I saw a wild story in the NYT about the use of restricted sale spyware to spy on proponents of the soda tax in Mexico. I regularly talk to clients at Rendition Infosec who say "We wouldn't be targeted, who would hack us?" I always respond that if you have something that lets you do business better than someone else, that data is valuable. And someone might target you to get it.
But it's not just intellectual property that directly supports business. If you are influencing public policy, you might also be targeted. There's obviously a lot of money involved in public policy. That's what seems to be happening in Mexico. Proponents of the soda tax have been exploited using malware that is supposedly only sold to governments.
There seem to be three possibilities here, all clearly disturbing. The first is that proponents of the soda tax are being hacked by a government using the government-only tools. The second possibility is that the government-only tools have been sold to a non-government buyer. The third possibility is that the spyware really has only been sold to governments, but the tools were leaked or stolen by an outsider.
Can you really keep hacking tools private?
I'll leave the first two possibilities for your imagination. I'd like to look at the third possibility. If the tools were stolen or leaked, that would be extremely disturbing, but probably not unprecedented. There are certainly suspicions that Harold Martin, an NSA insider, was the source of the Shadow Brokers tool leaks. In the case of Shadow Brokers, there is believed to be only one source for the hacking tools (supposedly this is NSA). In the Mexico case, it isn't clear how many different governments have access to the commercial spyware. But understand that each legitimate customer of those tools is likely a nation state hacking target. Think about that the next time you hear your government talking about public policy for its hacking tools, exploits, and backdoors. Or how about the FBI wanting backdoors to get into your iPhone? Given the Shadow Brokers leaks, losing any backdoor isn't outside the realm of possibilities. Given time and daylight, we may find that the source of the cyber attacks targeting the Mexican soda tax proponents was a government that legitimately purchased the tools, but fell victim to hacking itself.
I'll start this post by saying I half expect to get a cease and desist over it. If that happens, know that HP doesn't stand by their marketing and is using lawyers to silence those who would challenge them.
HP kicked off RSA with a big push on printer security. I stopped by the booth, where they had a few people standing around (customers and employees). I asked (seriously, not trolling) what the sales pitch is and said I'd like to hear the technical side: "I'm an infosec consultant, what real world examples of printer exploitation can HP share with me that will get my clients serious about printer security?" The obvious sales lady passes me off to one of the two engineers at the booth. The other engineer is already busy with another visitor, and I overhear him say to the guy he's talking to:
You have to take print security seriously. Did you know that Stuxnet attacked the print spooler?
Now this is some USDA Grade A FUD. First, Stuxnet didn't attack printers at all. The print spooler? Sure. On Windows. If a tag line for your printer security campaign is a vulnerability patched in 2010 (MS10-061) on something that isn't even your platform, well, bravo. I'd like to induct this salesperson/engineer into the snake oil hall of fame (or ninth circle of hell, I'm actually not picky here).
Before I can interrupt "Mr Stuxnet" my engineer starts talking to me. I ask him the same question I asked the "sales only" person at the booth. He says:
Did you hear about the tens of thousands of printers hacked in the last few weeks? It's been all over the news.
Um yeah (he's talking about this), but those were printers directly connected to the Internet. I'm not concerned about that, or Weev, spewing garbage to my printers. I ask about the narrative they seem to be suggesting where the printer is used to compromise the rest of the network. Can your printer really be used to compromise the rest of the network? Have you ever seen it? Do you have a single case, even without naming the victim, that you can point to?
*Note: I don't need anyone to cite a talk at *CON where someone did a PoC. I'm talking real world.
It turns out the answer is no. The HP engineer says that they have responded to dozens of networks (later throws out the number 60) and says that in two of them they've found printer firmware that "had a different checksum from ours." He won't commit to saying it was modified, just repeats that "well, the checksum was different." I'm intrigued, tell me more. He says that they suspect it would have been nation state since these networks were known to have been hacked by nation state actors in the past. Okay, cool. Since you found this "modified" firmware, you certainly reverse engineered it, right? Well, yes they did, but "unfortunately HP Labs is very secretive" and is "unlikely to share anything they find." And this was like four years ago and I haven't heard anything back so let's move on.
Wait, WUT?! You found firmware that wasn't yours loaded onto printers four years ago, sent it off for reversing, and you got nothing back? This is the sort of thing that literally screams "take my money" and you have no information on it. Okay, I think I see what you found: nothing.
When pressed for more examples of printer exploits, we got stuff like "Do you know what can be done with PCL and PostScript?" Yes actually, which is why you're really not scaring me. Again, show me examples of this happening in the wild. Do you have document hashes in VirusTotal that show demonstrations of attackers doing this stuff? My clients want to focus on real threats, not hypothetical ones.
Luckily I didn't pay attention to the video playing in the HP booth while I was there. When I came back to my hotel room I actually watched this vile piece of filth. WARNING: you'll never get this six minutes of your life back.
This video features tag lines like "nothing is safe" and "There are hundreds of millions of business printers in the world. Only 2% of them are safe." This is FUD to the nth degree. The fact that Christian Slater, who stars in Mr. Robot, is hacking the printers is designed to give it a more realistic bent. But it's not realistic. How did Slater get on the wireless network to pwn these printers? Were they configured with default credentials? How did he just happen to have firmware for these specific models? And why, FFS, were the printers not checking a digital signature on the firmware? So many questions...
And this all made me mad until Twitter user Jason Testart told me that HP was sending electronic marketing material to executives (likely CIOs) to get them to invest in printer security. Then I was just furious.
According to Jason, that thing in the middle is a cell phone screen that plays marketing material (presumably the Christian Slater video?). In today's digital age, it is environmentally irresponsible to create such waste in advertising materials. Cell phone screens? Really? I'm sure this is flashy and I'm sure people look at it before throwing it away. But I'm a little outraged at the waste - especially when used to spew such deceptive FUD.
Real Security
Securing your printing environments isn't hard. If you need help, contact me at Rendition Infosec and I'll be happy to help you. But I'll save you all sorts of money up front and tell you that it involves basics like not connecting your printer directly to the Internet, not exposing unnecessary services (turn off LDAP on your multi-function device if you don't use it), and changing default credentials on your print devices that have a web interface. Turn off wifi if you don't use it (you shouldn't be using wifi for your printer, honestly). Then back this all up with network monitoring. Find your printer beaconing to China? Isolate it from the rest of the network, then call someone who can reverse engineer the printer implant. While you're at it, hunt the rest of your network. The attackers didn't just compromise the printer.
Closing Thoughts
In closing, things like this make me sad to be in infosec. I know that I'm not making much of a dent here raging on how dumb this printer security push is (in the overall scheme of things). But please don't make my efforts in vain. Your CIO probably got a cardboard encrusted cell phone screen from HP. Talk to them. Tell your CIO this is FUD. Tell them you probably need to invest money in basic infosec blocking and tackling. Then feel good about changing the world for the better.
I am not a lawyer. I don't even play one on TV. But a group of lawyers did get together to try to decode how the laws of many nations apply to cyber warfare. The Tallinn 2.0 manual is the output of this effort.
Some of the lawyers in this group got together yesterday to have a panel and answer questions. They tackle questions such as "is this an act of war?" and how the law applies to peacetime cyber operations. The text is dense and probably requires a law degree to completely comprehend, but the Q&A session linked here was pretty good for background noise while driving.
If you work at DHS (and honestly, probably anywhere in .gov) you should brace yourself for more mandatory training. H.R. 666 (yes, I immediately noticed the irony of that number) has passed the House and is heading to the Senate. It would require that Homeland Security develop awareness programs (read: more mandatory annual training) to help users spot insiders. The bill of course does much more than that.
I'm particularly impressed by these two items. If the legislation passes the Senate, (G) will ensure that by law the Insider Threat Program is informed about current technology and trends. This is much better than relying on your adviser's BS in IT from ten years ago to steer decisions (sadly, this isn't a hypothetical).
I also like section (H) where metrics are required. As we regularly tell Rendition Infosec customers, metrics are critically important to ensuring program success. Of course some customers hate metrics, and we get it. They aren't sexy (downright boring in many cases), but they are critical. Effectiveness for an insider program will be difficult to measure, so it will be really interesting to see what metrics DHS selects for the program.
I've seen a few people talking on social media about how CVE-2017-0016 is just not a big deal. They correctly point out that it can't trigger Remote Code Execution (RCE) and can only be used for Denial of Service (DoS). Both of these are correct. More than a few people have made the mistake of saying something to the effect of "if you have SMB listening on the Internet, you have all sorts of other problems."
But those making the latter statement don't understand the vulnerability. Unlike many previous SMB vulnerabilities such as MS08-067 (used by Conficker) that required a host to be listening on SMB ports (TCP 139 and TCP 445) to be exploited, this vulnerability requires only that the vulnerable host be able to talk to an attacker on these ports. So SMB need not be exposed to the Internet in the traditional sense for a host to be exploited. If an attacker can get a user (or an automated process) to visit a malicious link over SMB, the exploit will be successful and the machine will crash.
What do you need to do?
Microsoft has not yet made a patch available. However, there is a publicly available PoC script so attackers can cause mischief today with no work on their part. You need to ensure that your networks don't allow TCP 139 or TCP 445 outbound. Due to the SMB worms of the past, most residential ISPs block TCP 139 and TCP 445. Most business ISP connections do not.
There are tons of reasons not to allow TCP 139 and TCP 445 outbound - too numerous to repeat here. If you want to test your network, I set up a test before Badlock last year. The instructions for running it are here. You really should make sure you block SMB outbound from your network; anything less is a ticking time bomb. When we audit small business networks at Rendition Infosec, we see SMB allowed outbound with surprising regularity.
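If you want a quick sanity check before running a full test, a few lines of Python can tell you whether your network allows outbound connections on the SMB ports. This is just a minimal sketch of the idea - "smbtest.example.com" is a placeholder for a host you control outside your network with listeners on these ports, not a real service:

```python
import socket

def outbound_port_open(host, port, timeout=3.0):
    """Return True if an outbound TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # DNS failure, timeout, refused, or filtered - all mean "no connection"
        return False

# Check the classic SMB ports against a host you control outside your network.
for port in (139, 445):
    if outbound_port_open("smbtest.example.com", port):
        print(f"TCP {port} outbound: ALLOWED - fix your egress filtering!")
    else:
        print(f"TCP {port} outbound: blocked (or the test host is down)")
```

Run it from inside the network you're auditing; if either port reports allowed, your egress filtering needs work.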
Update: According to @cybergibbons, he checked with the hotel in the story and found out the story was fake. That's disturbing. So we won't be using this as a cautionary tale. Locking people in their rooms seemed far-fetched, but locking people out of their rooms is still a huge (and believable) risk. I'm going to leave this post up for the time being. We've seen backdoors left behind by cyber extortionists before. We think it's wise to segment networks. For those reasons alone, we think the post (though originally based on fiction) is worth keeping up.
Original post
I read a story today about a cyber attack causing safety issues, or more specifically a threat to human life. The attackers took over the key management system at a hotel with 180 guests and locked guests out of their rooms. Supposedly, even the guests in their rooms couldn't get out. This is an obvious safety issue for guests.
The hotel paid a ransom in Bitcoin to restore service. The attackers only asked for 1,500 EUR, but honestly could have probably gotten far more given the seriousness of the mayhem they were causing the hotel.
A more important note is that the attackers left a backdoor in the hotel's system and tried to come back. This isn't the first time the hotel has been attacked - it has been hit at least twice before, though the article offers no details about the previous attacks. The hotel management also noted that they've been in contact with other hotels that have had similar ransom situations.
Takeaways
There are some interesting takeaways here. First, if you need an example of a cyber ransom attack causing a possible threat to human life, here you have it. I'll certainly be holding this one in my back pocket for future discussions with Rendition Infosec clients. The possible liability here should be obvious (it's enormous).
Perhaps a more important takeaway is that the attackers planted a backdoor in the hotel's systems. I don't disagree with the idea of paying a ransom. Do what you have to do to ensure safety. People locked in their rooms are a fire hazard. People who can't get into their rooms to get life saving medication are also obviously at risk. So paying the ransom is the right thing to do. But after you pay the ransom, organizations should aggressively hunt for attackers on the network. Machines that have been compromised cannot be reliably cleaned of all backdoors and malware. Best practices require that the systems be rebuilt (not restored from backup).
After rebuilding the computer systems the hotel decoupled some of its systems from its core network. This is very similar to best practices in ICS (industrial control systems) networks where IT (information technology) networks are separated from OT (operational technology) networks. There's really no reason for the hotel's key control system to be on the corporate network in the first place. Only bad things can happen from this extra connectivity. It's worth noting that "decoupled" could mean any number of things. We can only hope that the system is truly separated from the corporate network.
Finally, the article says the hotel will be replacing electronic keys with regular keys. This creates a whole new threat model, but the good news is that real keys never get demagnetized (as a traveler, I hate this). The hotel will have to evaluate whether replacing digital keys with physical keys is best for the safety of its guests. But this is a good note of how maybe we shouldn't connect everything to the Internet (yes, I'm looking at you IoT).
Conclusion
This is a great educational story that could have ended very poorly. Instead, the hotel responded quickly and took steps to keep it from happening again. They pulled victory from the jaws of defeat. I'm sure there are some who will say the hotel was wrong for paying the ransom, that paying encourages the attackers to target other victims. But it's unlikely that those making these claims have ever faced such a situation.
I read this hilarious Motherboard article the other night about a witch who claims she can use magic to drive out computer viruses. Sure, this article is humorous (at least to infosec people reading this blog). But I got to thinking that there are many products and services being seriously marketed in infosec that are no more effective than magic. In fact, on more than one occasion, a vendor has described a process to me as "nearly magic." Um, no. You've lost my attention. Go sell your magic beans to someone else. I don't need your beanstalk screwing up my security architecture.
I'm not sure which security firm (see what I did there?) will be the first to offer WaaS, but whoever it is would be wise not to take the advice of the real witch.
The witch says she "called in earth, air, fire, and water" to aid in clearing a virus. But every A+ technician working the bench at Geek Squad knows that at least three of those things are bad for computers. Something is already a little fishy about this technique.
Looking for the root cause? Feel "a snag"
However, she claims to be able to find where a virus got in. It's where she feels "a snag." I doubt I'll be using witchcraft in any Rendition Infosec incident response in the future. But the next time a lawyer asks me if I've explored every option, I'll point them to this article (all the while hoping they don't call in a witch or a psychic to uncover the logs that rolled over 6 months ago).
A little further into the interview, the witch is asked about demons infecting computers. I'm pretty sure the interviewer meant to say "daemons" instead of "demons" and everyone just got confused...
Let's get serious for a minute
This has been fun (for me at least), but let's seriously talk about the logical fallacy that she uses to deal with those who discount her work. She essentially says "before you can challenge me, you must first read The Spiral Dance" (paraphrasing here). She discounts the naysayers, saying "they say incredibly stupid things" and appears to presume it is their lack of knowledge that causes them to question her magic. This is a form of the ad hominem fallacy (sometimes called the "courtier's reply"). Rather than addressing the merits of the argument, you attack the arguer's credentials, say "your argument is ridiculous," and dismiss it.
Unfortunately, I see this approach used in infosec far too often. Infosec professionals may assume that those they are arguing with lack knowledge to "see the light" as it were. Sometimes this is true, but I caution you to use this (hopefully) humorous example to learn about logical fallacies so you can avoid them in your own work.
There's some shocking news out of Russia this morning that the head of computer incident investigations at Kaspersky Labs was arrested for treason. According to this article, he was arrested in December and his arrest may be linked to another arrest in the FSB around the same time.
Update: Forbes is reporting that the charges stem back to an investigation into the deputy head of the FSB's information security center (CDC) Sergei Mikhailov. Moscow Times reports that the arrest was related to taking money from foreign sources and also draws a connection to Mikhailov and the CDC.
According to this source there were changes to Russia's treason laws in 2012 that include the following definition for treason:
providing financial, technical, advisory or other assistance to a foreign state or international organization . . . directed against Russia's security, including its constitutional order, sovereignty, and territorial integrity
It's worth noting that the definition above is very broad and could meet the definition of publishing (or attempting to publish) information about a Russian state sponsored hacking group. This would definitely be "technical assistance" and would definitely aid "a foreign state." Also, it's pretty easy to see how that would hurt "Russia's security." All the definitions are met here for a hypothetical scenario where a security researcher in Russia could be charged with treason for "outing" a Russian state sponsored hacking group.
Where is Russia on this?
The GRIZZLY STEPPE report made it clear that Russian state actors were involved in attempting to manipulate the US elections. The arrests happened in December shortly after the elections, but it would be a logical leap to assume that the timing of the arrest is connected to the elections. However, this is an obvious connection that many will jump to (including several members of the press who have called me today). The FSB is smart enough to know this connection will be assumed. If they wanted to get out in front of it, they could. But they haven't. I assess that whether or not the timing is connected, the FSB is comfortable with people assuming that it is (or at least raising the question).
Keep up the good fight
In any case, today I thank my lucky stars that I perform incident response in the United States where the government doesn't overtly try to suppress my freedoms. That's not to say I don't have a healthy fear of our government when it comes to publishing security information (more on that in a later post). But I seriously doubt that the US government would charge treason for investigating an incident involving our own network exploitation assets. On the contrary, I feel pretty confident Russia could.
For those living and working under oppressive regimes, keep up the good fight. But also remember that no incident response report or conference talk is worth jail time (or worse).
Also, to the GREAT researchers at Kaspersky Lab (I love your work), I hope this incident doesn't in any way tarnish your reputation. The actions of one individual should not be a measure of the group.
You may have heard that the sale of Yahoo to Verizon is being delayed. This is obviously bad news for Yahoo. But honestly, it's probably great news for infosec.
At Rendition Infosec, we've worked a fair number of breaches over the years involving newly acquired organizations. In every case, the acquiring organization failed to perform good due diligence on the purchased organization. They certainly did a financial audit, but failed to perform a security audit. The price they paid to acquire the organizations was in every case too high, since it was calculated without knowing about an ongoing breach.
So is the case with Yahoo, only the deal isn't complete yet. You can bet that Verizon will pay less for Yahoo than originally planned if the deal goes through at all.
However, this isn't the case with most acquisitions. In most cases, the purchase is complete before the breach is discovered. And unfortunately, the purchasing organization is left holding the bag in these cases. They paid more for an organization than it was worth and likely have buyer's remorse. For smaller acquisitions they might also spend more on the incident response, breach notification, and reputation damage than they paid to acquire in the first place.
Then there's the very real concern that the smaller organization is being used as a compromise vector for the acquiring organization (which likely has better security). We've seen evidence strongly suggesting this has happened in at least one case (and circumstantial evidence for other cases).
Given the importance of cyber security in today's marketplace, M&A teams would be wise to use threat hunting from external teams as a resource. The cost of threat hunting, while not cheap, is far cheaper than making a bad purchase. We anticipate that contracts can be structured such that if compromise is found, the acquired organization pays the bills for threat hunting. Even if this isn't the case, the cost is cheap insurance for the acquiring organization.
If I'm reading the tea leaves correctly, this means more threat hunting jobs by external teams. It should go without saying, but internal teams are certainly not what you want doing threat hunting for this purpose. All in all, this is great for infosec, particularly firms that specialize in DFIR. I'd be remiss not to mention to you that Rendition Infosec provides these services using our own internally developed proprietary hunting software. But whether you use us or someone else, don't acquire another organization without the due diligence of threat hunting.
Over the last several years, I've noticed the role of CIO transitioning increasingly to business experience rather than IT. Likewise, the CISO position is often given to those lacking any hard skills in security.
I'm not trying to reignite the hard vs soft skills debate, but I do think it's worth discussing how important security understanding is to being a CISO. One organization I ultimately opted not to work with hired their "CISO" from within their app development ranks. You can't convince me you're serious about security if this is your plan. If you want to put someone on a career development track to eventually be CISO, fine. But an application development manager is no more qualified to be a CISO on day one than I am qualified to take over control of a 747 mid-flight.
At Rendition Infosec we regularly meet with infosec leaders - some have a wealth of domain experience and others unfortunately do not. We're happy to help organizations of all shapes and sizes with directors from all backgrounds. However, we know that we have better outcomes when dealing with leaders who really understand the infosec problem space.
Some will argue that a great leader can be a great CISO by surrounding themselves with good people. I think this is fundamentally wrong. Honestly, I doubt this works really well in any field where you lack general domain knowledge. You could surround yourself with great electricians, but you'd still be a bad chief electrician if you didn't understand panel load or Ohm's law. And sorry if I offend any electricians in saying this, but information security is way more complicated. In other words, more domain knowledge is needed.
Armchair Experts
But infosec lends itself to armchair experts. It's in our very nature to overestimate our capabilities, especially on topics we find partly familiar. For better or worse, we all use smart phones and computers these days. So perhaps we feel like we already have more domain knowledge here than say, pipe fitting. But this is deceptive since the problems of infosec are hugely complex and require massive amounts of domain knowledge. My ex-father in law was a doctor - a legitimately smart man. He had no problem paying a plumber to fix a leaky faucet or even a painter to paint a wall. But when faced with a computer problem, he couldn't get over the idea that he was paying someone to do something he could fix himself. He couldn't understand the value because he overestimated his domain knowledge for cyber security.
Is this really a problem?
You bet it is. A great example of this is Giuliani. Say what you will about the man, he has a lot of leadership experience. He's also reasonably adept at talking to the press. So I was a little shocked to read this Market Watch article where he was totally confused about cause and effect on a topic as simple as Y2K. As far as Giuliani was concerned (as of January 2016), we digitized systems to deal with Y2K.
Obviously this is wrong, and hideously so. But it highlights the type of messaging that can occur when leaders overestimate their domain knowledge in infosec. So what, we're never going to have another Y2K you say? True. But we will continue to face complex security challenges and others will assume you know what you're talking about. Think I'm wrong? Market Watch didn't question Giuliani's response at all and continued giving him airtime. * The Giuliani reference isn't meant to be political in any way. It just happens to be the best high profile example I can muster of leaders thinking they know more than they actually do about infosec.
Just get the checklist done
I regularly meet with infosec "leaders" (and I use that term lightly) who proudly present me with their list of things they will do to "finish securing our networks for good." Whoa - no professional believes they will simply check a few boxes and be done. That's insane. 100% delusional. Unfortunately, directors and executives eat this stuff up and are convinced that soon they will be "done solving the information security challenge."
Infosec is a domain that requires professionals at the helm. I don't want someone with a MBA doing hip surgery because he "understands business and has a hip he uses every day, so you know, he's qualified." We need to stress the importance of understanding the security domain to our leaders. If you already have a leader who doesn't understand the domain, you have two fundamental choices - either educate them or get a new job. Sadly, usually the latter works best.
Who are the Shadow Brokers? Are they nation state? If so, are they Russian government or Russian government sponsored?
The timing of the latest releases certainly makes that seem likely. Along with the release of the GRIZZLY STEPPE report detailing Russian hacking, a number of Russian "diplomats" (probably spies) were kicked out of the US. Apparently we also took their summer vacation home.
One week later, the Shadow Brokers released a dump including a file listing of Windows tools supposedly stolen from US intelligence agencies. They also posted screenshots detailed here and here. It's hard not to see this as a retaliation for the US expelling the Russian diplomats. If it's not a retaliation, make no mistake about it: the Shadow Brokers knew that analysts would likely come to this conclusion.
But then in the early morning of January 12th, 2017 the Shadow Brokers dumped 61 Windows binaries (.dll, .exe, and .sys files). They claim they only dumped the 58 tools that were detected by Kaspersky AV, but the dump contained 61 files. A little anonymous birdie told me that Kaspersky only detects 43 of these files as of mid-day on the 12th. I don't like Russian software on my machines so I can't confirm whether or not that's true.
Shadow Brokers "final message"
So why dump the actual files themselves? I think that since the dump of the filenames on Sunday there's been a lot of behind the scenes diplomatic talks and Russia decided the US wasn't taking them seriously. In this case, releasing 61 files is a good way to be taken seriously, while holding back a huge cache of files. "Feel some pain, but know we can hurt you again and again and again."
Of course, I could totally be wrong about this, but it sure is fun to watch what appear to be two countries' intelligence agencies battle it out in public.
This message I got on LinkedIn today is a great example of how NOT to recruit infosec candidates. Nothing about this message says "I actually looked at your profile." Everything about this says "I never looked at your profile, but some word on there matches a search term."
Want to get serious about recruiting? Talk salary. Talk benefits. Tell me something that says "I read your profile." Be careful about targeting business owners. Think before you just send. Maybe ask yourself:
What am I offering that would make this job attractive to a business owner? So attractive they'd leave the business and come work for me?
If you can't answer those questions, think again about even sending your offer. When I make successful sales at Rendition Infosec, I research the organization I'm selling to. Honestly, infosec/cyber recruiters need to start doing the same.
I was working on a piece of malware for a Rendition Infosec client recently and noticed a novel malware sandbox evasion. Malware often tries to determine if it's in a sandbox and if so, performs different functions than when it is on an endpoint system.
This particular malware enters a loop and tries to connect to www.google.com. If the malware connects successfully, it goes on and does bad things. If not, it sleeps and does it again. And again. And again. Good news for sandbox evasion: until the malware successfully connects to Google, there's no way that you'll see anything bad. For this (and other) reasons, this malware had really low detection and had no trouble bypassing antivirus on the client's system.
The attacker knows however that tools like FakeDNS and a simple HTTP server could easily trick the malware into thinking it was on the Internet. But here the attacker reads the data returned and checks the first four bytes of the return to find "<!do". This string is likely the "<!doctype html>" tag that is found at the start of the Google website (and others). I checked a few sandbox programs that try to mimic the Internet and most of them just serve up an HTML page without the "<!doctype html>" tag. I'd recommend adding this to your sandbox program if your sandbox is configurable.
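If you want your sandbox to survive this kind of check, the fake responder's reply has to begin with those same bytes. Below is a minimal sketch (my own illustration, not the malware author's code or any particular sandbox's implementation) of a fake web server whose response body starts with the doctype tag the malware looks for:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# The malware only inspects the first four bytes of the returned body,
# looking for b"<!do" - so the fake page must begin with a doctype.
FAKE_PAGE = b"<!doctype html><html><head><title>Google</title></head><body></body></html>"

class FakeGoogle(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(FAKE_PAGE)))
        self.end_headers()
        self.wfile.write(FAKE_PAGE)

    def log_message(self, *args):
        pass  # keep the sandbox console quiet

def start_fake_server(port=0):
    """Start the fake responder on a background thread; returns (server, port)."""
    server = HTTPServer(("127.0.0.1", port), FakeGoogle)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]

if __name__ == "__main__":
    server, port = start_fake_server()
    print(f"Fake responder listening on 127.0.0.1:{port}")
    server.shutdown()
```

Pair this with a FakeDNS tool that resolves www.google.com to the responder's address and the malware's connectivity check should pass.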
This is a great time to remind everyone that sandboxes are useful tools but are no replacement for a good reverse engineer. If you don't have a dedicated reverse engineering staff but would like to have the capability at your disposal, talk to us at Rendition Infosec and we can get you up and running on a retainer quickly. Once you have a reverse engineering capability at your disposal, it's pretty amazing how much you'll actually use it.
Yesterday, I blogged about the Shadow Brokers dump and some take aways. I wanted to introduce another potential takeaway. One of the lines in this screenshot published by Shadow Brokers says psp_avoidance. What is Psp_Avoidance? Is someone looking to avoid the PlayStation Portable? Paint Shop Pro? Doubtful...
I downloaded the screenshots published by the Shadow Brokers (which oddly don't include this one). However, the release does include the output of the find command across the dump. After searching through the directory listing output for the string "psp" we find a number of different XML files (along with Python files and others). Note the output below.
We have no idea what a pspFPs is, but what we see here seems to indicate that psp is a security product. We also get some idea of what antivirus products are of interest to the group Shadow Brokers stole the tools from.
This additional find command output seems to support the idea that psp is nomenclature for a security product.
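For reference, the triage here is nothing fancy - just a case-insensitive search of the directory listing for the string of interest. A quick sketch in Python (the sample lines below are made up purely for illustration and are not actual dump contents):

```python
import re

def find_psp_entries(listing_lines):
    """Return lines from a directory listing that mention 'psp', case-insensitively."""
    return [line for line in listing_lines if re.search(r"psp", line, re.IGNORECASE)]

# Hypothetical listing lines for illustration only - not actual dump paths.
sample = [
    "./windows/Resources/docs/readme.txt",
    "./windows/Resources/Psp_Avoidance/notes.xml",
    "./windows/Resources/pspFPs/config.xml",
]
for hit in find_psp_entries(sample):
    print(hit)
```

The same approach works against the real find output: read the file, split on newlines, and feed the lines through the filter.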
A few Google searches later, this one with the obvious terms "psp computer network operations", we get back as the fifth result this wonderful page from ManTech. It details the ACTP CNO Programmer Course. The course documentation indicates that PSP is an acronym for "Personal Security Product."
Thanks ManTech!
So, circling back around, what is Psp_Avoidance? Obviously, we don't know - but if the acronym is correct, it would seem to be software built to evade personal security products, which directory listings suggest (as does ManTech) are antivirus programs.
Should you run antivirus products? Sure. At Rendition Infosec we tell customers that operating without AV is like driving a car with no airbags. But this dump suggests that advanced attackers have mitigations for antivirus products - a sobering reality for organizations without defense in depth. Bottom line: AV is valuable, but the new dump casts a shadow on the effectiveness of antivirus against APT attackers.
Shadow Brokers are at it again, this time offering apparent Windows exploits and toolkits. The timing of this does not seem coincidental. If Shadow Brokers are to be believed, they've been holding the tools for some time and just now releasing the Windows toolkits. Previously, they have released other tool sets, but nothing that operated against or exploited Windows.
The Tools
What of the tools? There's little specific information about them, but I've included some images here from Twitter. In the past I've embedded tweets, but when accounts get suspended, the tweets are no longer available. It's at least plausible that this account will be suspended...
This screenshot shows the price for individual components. Most interesting perhaps is the fact that the exploits contain a possible SMB zero day exploit. For the price requested, one would hope it is a zero day. The price is far too high for an exploit for a known vulnerability.
This screenshot shows a number of names of apparent tools in the dump. Of particular interest are the version numbers. Note that most of the tools have apparently been through multiple revisions, adding apparent legitimacy to the claim that these exploits are real. Though another screenshot hints at a possible zero day SMB exploit, there's no indication of which exploit names involve SMB (or any other target service).
The exploits named "touch" in the screenshot do however seem to offer some ideas of services that might be interesting. Of particular interest is WorldClientTouch - suggesting that perhaps one of the code-named exploits works against MDaemon's web-based email client?
Finally, this screenshot seems to show some information about the tools available. Some capabilities like "GetAdmin" and "PasswordDump" seem rather obviously needed capabilities.
However, the listed plugin "EventLogEdit" is significant for digital forensics and incident response (DFIR) professionals investigating APT cases. While we understand that event logs can be cleared and event logging stopped, surgically editing event logs is usually considered to be a very advanced capability (if possible at all). We've seen rootkit code over the years (some was published on the now defunct rootkit.com) that supported this feature, but often made the system unstable in the process.
Knowing that some attackers apparently have the ability to edit event logs can be a game changer for an investigation. If Shadow Brokers release this code to the world (as they've done previously), it will undermine the reliability of event logs in forensic investigations. CyberArk recently claimed that event logs might be subject to tampering, though it doesn't appear that they were discussing the Shadow Brokers capability specifically.
The Timing
So what do we make of the timing? It's hard to believe that the timing is purely coincidental and has nothing to do with the release by US intelligence about the Russian hacking of the DNC.
The theory that immediately comes to mind is that Shadow Brokers are Russian or Russian operatives and the release of the Windows toolkit is retaliation for the report. Unlike previous dumps, this dump goes a bit further, showing screenshots of the GUI tools and execution of some scripts. However, it is important to note that no tools are offered for proof of the dump this time. Only screenshots and descriptions of the tools are offered.
An alternative theory is that Shadow Brokers are not Russian and are timing this release to shift the blame to Russia. There's unfortunately no way to test this theory.
Finally, there's the theory that the timing of this latest Shadow Brokers release has nothing to do with the intelligence community report. This seems the least likely. Shadow Brokers must have known that people would make this analytic leap, so even if they scheduled this release some time ago, the decision to go ahead given the release of the report on Russian hacking was made with the understanding that connections would be drawn.
Conclusion
Regardless of your feelings on timing or what we know of the tools themselves, this is certainly an interesting development.
At Rendition Infosec, we're trying to put some science into Cyber Threat Intelligence (CTI). We're really interested in how customers are using the GRIZZLY STEPPE (Russian APT) report. We'll publish a report on the responses (in aggregate) in the coming week. Please make your voice heard and contribute to the community's understanding of how this data was used in the real world.
The Joint Activity Report (JAR) on GRIZZLY STEPPE did far more harm than good. I've had numerous clients of Rendition Infosec question me on what the indicators mean and whether they should be concerned.
Concerned about Russian hackers in your network? Not based on those indicators (most of them).
Concerned about the competence of government cyber analysts (or lack thereof)? Yeah, definitely.
There are 876 IP addresses in the GRIZZLY STEPPE IOCs. Several belong to Amazon EC2, and absent dates indicating when Russian hackers actively used those IPs, they are useless. Less than useless.
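One way to see how much of the IOC list sits on shared cloud infrastructure is to check the IPs against AWS's published address ranges (AWS serves these at a well-known URL, ip-ranges.amazonaws.com/ip-ranges.json). This is a sketch, not the report's methodology; the function names are my own:

```python
import ipaddress
import json
import urllib.request

# AWS publishes its current address space at this well-known URL.
AWS_RANGES_URL = "https://ip-ranges.amazonaws.com/ip-ranges.json"

def load_ec2_networks(url=AWS_RANGES_URL):
    """Fetch AWS's published ranges and keep only the EC2 prefixes."""
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    return [ipaddress.ip_network(p["ip_prefix"])
            for p in data["prefixes"] if p["service"] == "EC2"]

def in_ec2(ip, networks):
    """True if an indicator IP falls inside any EC2 prefix."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in networks)
```

Any indicator that lands in EC2 space was, at best, a transient rented host; without a timestamp it tells you nothing about who controls it today.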
My favorite IP address in the report though has to be 65.55.252.43. This resolves to watson.telemetry.microsoft.com. This makes it clear that nobody competent vetted the report. Either that or someone at NCCIC has it out for Dr. Watson.
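The Dr. Watson example above is exactly the kind of thing a few lines of reverse DNS would have caught before publication. A minimal sketch of that vetting pass (the allowlist of benign suffixes is my own illustrative assumption, not an official list):

```python
import socket

# Hypothetical allowlist of suffixes for known shared/benign infrastructure.
BENIGN_SUFFIXES = (".microsoft.com", ".amazonaws.com", ".googleusercontent.com")

def is_likely_benign(hostname):
    """True if a reverse-resolved hostname ends in a known benign suffix."""
    return hostname.endswith(BENIGN_SUFFIXES)

def vet_indicator(ip):
    """Reverse-resolve an indicator IP and flag obvious shared infrastructure."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)
    except (socket.herror, socket.gaierror):
        return ip, None, False  # no PTR record; can't judge from DNS alone
    return ip, hostname, is_likely_benign(hostname)
```

Running something like `vet_indicator("65.55.252.43")` would have surfaced the watson.telemetry.microsoft.com hostname and prompted a second look before the IP ever made the report.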
What's an indicator anyway?
These indicators aren't indicators. To be more than data, an indicator has to indicate something. These fall well short of that. The report thankfully doesn't recommend blocking the IPs in the report, but also fails to say how hopelessly under-vetted they are.
My recommended action for these indicators is to ignore them until they have been better vetted. NCCIC honestly owes a large number of network operators and incident response teams a formal apology for the time they wasted responding to this farce of a report. The IOCs have triggered countless false positives in my customers' networks. Even Rob Graham noted that he had two of the IPs in his browser DNS cache. But worse than that, the report teaches corporate leadership to ignore future reports from NCCIC. One day they'll have something useful to share, and based on this clown show, nobody will be left paying attention.