Lenny Zeltser sent me an article last week about a new keylogger (MD5 21f8b9d9a6fa3a0cd3a3f0644636bf09). The article mentioned that the keylogger uses Tor to make attribution more difficult. That's interesting, sure. But once I got the sample, I noticed that it wasn't compiled with the Visual Studio or Delphi I'm used to seeing malware use. Then I re-read the article and found a mention of it being compiled in Free Pascal. Well, that changes the calling conventions a little, but nothing we can't handle. However, it is worth noting that IDA doesn't seem to have FLIRT signatures for this compiler, so library identification is out (sad panda time).
EDIT: FLIRT is used by IDA to identify runtime libraries that are compiled into a program. Reverse engineers can save time by not analyzing code that is linked into the binary but not written by the malware author. It is not unusual for a binary to get 30-50% (or more) of its functions from libraries. In this case, IDA identifies 3446 functions, but none of them are identified as library functions. To find probable user code, we'll anchor on cross-references from interesting APIs unlikely to be called from library code.
The first thing I need here is an IOC to track this thing. After all, as I tell all my classes, IOCs pay the bills when it comes to malware analysis. Sure, I could fire it up in a sandbox, but I'd so much rather do some reversing (okay, if not for this blog post, I'd use a sandbox, especially given the lack of FLIRT signatures for this sample).
For IOCs, I always like to find a filename, something that is written to the system. I searched the import table for APIs that would return a known directory path. In this case, I found GetTempPath, a favorite of malware authors. Guess where system.log can be found? You guessed it: in the %TEMP% directory of the user who executed the code.
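I haven't reproduced the sample's decompiled code here, but the API pattern is the standard one. Below is a minimal C sketch of how a program typically builds that path; the filename system.log is the one observed in this sample, and everything else is illustrative.

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    char path[MAX_PATH];

    /* GetTempPathA returns the current user's %TEMP% directory,
       including a trailing backslash. */
    DWORD len = GetTempPathA(MAX_PATH, path);
    if (len == 0 || len > MAX_PATH - sizeof("system.log"))
        return 1;

    /* Append the filename observed in the sample. */
    lstrcatA(path, "system.log");

    printf("%s\n", path);   /* e.g. C:\Users\<user>\AppData\Local\Temp\system.log */
    return 0;
}
```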
However, it is worth noting that later in the function, there is a call to DeleteFile. I haven't fully reversed the function yet, so I don't know whether this will always be called (a quick look suggests it will be). But we're after some quick wins here (and this isn't a paid gig), so we're moving on. This means that our %TEMP%\system.log file may not be there after all. Bollocks... Well, you win a few, you lose a few.
Well, now that's interesting... A call to GetTickCount where the return value is placed in a global variable. This might be some sort of application coded timer. Or, it could be a timing defense. Sometimes malware will check the time throughout program execution. If too much time has passed between checks, the malware author can infer that they are being run under a debugger (a bad thing for a malware author). Note that GetTickCount returns the number of milliseconds since the machine booted. Millisecond precision may not be sufficient for some manufacturing processes, but for detecting debuggers it will do just fine.
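For reference, a generic debugger-timing defense looks something like the following minimal C sketch. This is purely illustrative and is not code from the sample; thresholds and reactions vary from author to author.

```c
#include <windows.h>

/* Generic illustration only (not code from this sample): a debugger-timing
   defense measures how long a short stretch of code takes to execute. If an
   analyst is single-stepping through it, the elapsed time balloons. */
int probably_debugged(void)
{
    DWORD start = GetTickCount();

    /* ...a few instructions of real work would sit here... */

    /* A few hundred milliseconds is already far longer than straight-line
       code should take; the exact threshold is the author's choice. */
    return (GetTickCount() - start) > 500;
}
```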
Let's see if we can find the timing check and see what's actually being done here. To do this, cross reference from dword_462D30.
Good: there are only two other references that IDA found. This might be easy after all. Also, to keep myself sane, I'm going to rename this global variable to time_check.
So is this a debugger timing check? If it is, the malware author is doing it wrong (really, really wrong). Nope: in this case, the malware author is checking whether more than a full day has passed since the original call to GetTickCount. The old value is moved into ecx. The return value of GetTickCount is placed in eax (like the return value of all Windows APIs, per the stdcall convention). Then the old value is subtracted from the new value, and a check is performed to determine whether more than 86,400,000 milliseconds have passed since the original GetTickCount call. That value should look familiar to programmers: it's the number of milliseconds in a 24-hour period. Okay, so this means that the malware is going to do something once per day while the machine is booted...
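In C terms, the comparison described by the disassembly boils down to something like the sketch below. The names are mine, and the binary is Free Pascal rather than C, but the arithmetic is the same.

```c
#include <windows.h>

#define MS_PER_DAY 86400000UL   /* 1000 * 60 * 60 * 24 */

/* time_check is my rename of the global dword_462D30 from the disassembly. */
static DWORD time_check;

void check_elapsed(void)
{
    DWORD now = GetTickCount();   /* milliseconds since boot, returned in EAX */

    /* Unsigned subtraction also survives the ~49.7-day tick-count wraparound. */
    if (now - time_check > MS_PER_DAY) {
        /* ...the once-per-day work happens down this branch... */
    }
}
```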
Examining the code further, we note that the only difference in execution at this location is a possible call to sub_42BBB0. Wow. Glad I wasn't debugging this! I might never have seen whatever was in that subroutine (my debugging sessions tend to last far less than 24 hours).
After jumping to sub_42BBB0, I found that it was the subroutine that brought us here in the first place. This makes sense. To prevent this code from executing over and over, the value in the time_check global variable would have to be updated. So maybe the %TEMP%\system.log IOC is a winner after all... Maybe it is purged and recreated once every 24 hours? I don't know yet, but I've started to unravel some functionality that my sandbox wouldn't have (and that's what real reversing is all about).
I'll continue later this week with a further look at the malware. I know we didn't hit the keylogger portions at all. However, in all fairness I was writing this as I was reversing. I still have holiday shopping (and courseware updates) to do today, so this will have to suffice for now. Hopefully this is of some value to those who are interested in reversing.
I fully expect this sample to show up in one of my SANS courses (FOR526 and/or FOR610). It has some neat properties and is a real treat to dive into. If you'd like to get a copy of this (and other samples), join me at the SANS CTI Summit in DC this February, where I'll be teaching malware reverse engineering. This year, I added a sixth day to the course: a pure malware NetWars capture-the-flag challenge. This means you get a chance to put your new reversing skills to the test in the class! I look forward to seeing you in a future course.
Saturday, December 21, 2013
Thursday, December 12, 2013
The courts STILL don't get it
You've probably seen the story of Eric Rosol, the man who was just ordered to pay $183,000 to Koch Industries for participating in a DDoS attack against their website.
According to publicly releasable information, the site only went offline for 15 minutes as a result of the attack. The attack itself reportedly lasted less than 5 minutes, and Mr. Rosol only participated in the attack for 1 minute. As far as we know, Mr. Rosol did not initiate the attack, which was accomplished using Low Orbit Ion Cannon (LOIC). LOIC is a DDoS attack tool that supports crowd-sourced attacks over IRC. Mr. Rosol might have connected his LOIC instance to IRC or manually started and stopped the attack (I don't know which for sure, and it isn't relevant for this case). He does, however, admit that he participated in the attack.
So what were the damages?
The actual damages for a DDoS attack on a website are hard to quantify. If you took down amazon.com for instance, it would be easier to quantify the losses by examining a comparable sales period. But in the case of amazon.com, the website directly drives revenue. What happens when the site doesn't generate revenue directly? What if it's a site that only serves as a "front door" or advertisement for the company? Certainly a loss is still incurred when the site goes offline. Investors get scared about the company's security and real system admin time is used to monitor and respond to the incident. But these costs get pretty murky to quantify. In this case, Koch determined that the cost of the outage was $5,000.
Should Mr. Rosol be responsible for damages?
Personally, I think it's a big stretch to say that Mr. Rosol should even be responsible for the entire $5k cost (if that really is the cost). He may be the only person who was arrested in this specific case, but the first 'D' in DDoS stands for Distributed. There were lots of people involved. Now, please understand that I am not a lawyer, so I could be really wrong here. But when multiple people are captured on surveillance video performing acts of vandalism and only one is caught, is that one person fined for the entire damages? What if additional suspects are caught later? Will they also be fined for the entire damages? That sounds dumb to me, since it appears that victims could obtain multiples of the actual damages.
Wait, was it $5,000 or $183,000?
So this is where the case gets strange, and quite honestly, infuriating. When Koch Industries suffered downtime due to the DDoS that Mr. Rosol participated in, they decided to bolster their defenses against future attacks. To that end, they hired outside security contractors. It isn't known what the expenses entail, but they reportedly spent $183,000 with the contractor. This value was used by the judge to order a fine for Mr. Rosol.
Mr. Rosol did the crime, he should pay.... right?
The $183,000 fine represents a significant misunderstanding on the part of the justice system about computer crime. If you disagree, work through this intellectual exercise with me. Suppose that Mr. Rosol committed a physical crime, such as forcibly blocking the entrance to a convenience store. He was only able to block access to the store for a short time before the police forcibly removed him from the premises. During the "blockade" the convenience store estimates that they lost $5,000 worth of business (a hard number to quantify). The convenience store does not want this type of attack to ever happen again. The store hires a contractor to study the event. The contractor realizes that Mr. Rosol exploited a design flaw in the store entrance layout that allowed him to block access in the first place. The contractor recommends changes to the store entrance, some of which are implemented. The total cost for the contractor and store renovations is $183,000. In this physical crime analogy, would Mr. Rosol be on the hook for the $183,000 spent studying the event and making store renovations? Of course not. I can't think of any examples where this might be true.
Great analogy, why did he get fined $183,000 then?
I have no idea why Mr. Rosol got fined so much. I don't have the transcript of the sentencing proceedings, but I'd love to know what Mr. Rosol's lawyer argued to the court. Did he or she use a similar analogy? If so, did the court fail to understand the argument, or did it just not care? I predict that Mr. Rosol's fine will be challenged in the legal system, though I don't know what form a challenge could take since Mr. Rosol pleaded guilty to the offense. In any case, I think this is a wake-up call for everyone in the computer security field that the justice system still doesn't "get it." We need reform of the CFAA (the law under which Mr. Rosol was charged) and we need it now. We need better sentencing guidelines. But what we really need are courts that understand how technology and computer crime actually work.
Friday, December 6, 2013
Memory image file formats
The bulk of this blog post came from the answer I gave to a question that one of my SANS FOR526 (Memory Forensics) students sent me about file formats and extension names. Specifically, he wanted to get some information on the difference between files with a .vmem extension and the .raw files output by DumpIt, a great, free memory dumping utility. I told him:
The .vmem extension is used by VMware to indicate that a file represents the contents of physical memory on a guest virtual machine. You would get a .vmem file if you used the snapshot method of obtaining a memory capture from a VM. Alternatively, you can capture the .vmem by pausing the VM, but this is less ideal since network connections are broken and VMware Tools notifies software running inside the guest.
In the case of a physical machine, DumpIt will produce a file with a .raw extension. Presumably this is used to differentiate it from memory captures that include capture-specific metadata in the file format (HBGary's .hpak format is one such example). Another example of a memory capture with metadata is an .E01 captured with winen.exe (provided by EnCase). Your tools will work identically on a .raw and a .vmem file.
Of course there are many other file formats where physical memory may be found. One such format is the hibernation file. I love using hibernation files in cases, especially when volume shadow copy is enabled on the machine. Sometimes I have several historical memory images that I can perform differential analysis on. This may help determine when a compromise occurred, particularly if anti-forensics techniques were employed to destroy timestamps on the disk.
A final memory image format that comes to mind is the crash dump. While this requires that a machine be appropriately configured to create a dump, many are (especially servers). The crash dump is particularly relevant to rootkit detection as the fateful BSOD is most common when loading new kernel mode software (many rootkits are implemented as device drivers). There are several tools that can convert kernel memory dumps (.DMP files) into physical memory dumps (to be consumed by memory analysis tools). But they aren't needed if all we want to do is run Bulk Extractor (BE). Because the memory contents in .DMP files are not compressed, the data can still be accessed. The additional metadata added to a .DMP file (debugging related) isn't a concern for a tool such as BE that ignores internal file structure.
My student went on to ask whether he could use Bulk Extractor on a .raw file acquired by DumpIt. In FOR526, one of the things we teach is using Bulk Extractor to parse memory for artifacts such as email addresses, URLs, and Facebook IDs (among others). If you aren't using BE in your cases, you owe it to yourself to give it a try. At the bargain price of free, it's something we can all afford. I told my student:
In a larger sense, though, Bulk Extractor can be used on any image file of any format that doesn't use compression (it won't natively handle EnCase .E01 compression, for instance). Otherwise, just point Bulk Extractor at the image file and go to town. That's one of the things that makes BE so magical. If you have an SD card or USB drive from a device that uses some unknown filesystem, BE can still do its magic because it doesn't try to understand the filesystem at all. The same goes for memory: it's just doing pattern matching, so the underlying container structure doesn't matter (the sketch at the end of this post illustrates the point).

If this sort of thing is up your alley and you want more information, come take FOR526 at an upcoming event. We introduce Windows memory forensics and cover it in sufficient depth for you to immediately apply memory analysis skills in your investigations. Rather than focus purely on theory, we ensure that you walk away with skills to hit the ground running.
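To make that "flat byte stream" point concrete, here is a toy C sketch of my own (it has nothing to do with bulk_extractor's internals) that scans any image file, whether a raw memory dump, a .vmem, a .DMP, or an SD card with an unknown filesystem, for printable runs containing an '@' character: a crude stand-in for BE's email scanner.

```c
#include <stdio.h>
#include <ctype.h>
#include <string.h>

/* Toy illustration only: treat the image as a flat byte stream and print any
   printable run of 6+ characters containing an '@'. No filesystem or file
   format parsing is needed, which is why the container type never matters. */
int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <image-file>\n", argv[0]);
        return 1;
    }

    FILE *f = fopen(argv[1], "rb");
    if (!f) {
        perror("fopen");
        return 1;
    }

    char run[4096];
    size_t len = 0;
    int c;

    while ((c = fgetc(f)) != EOF) {
        if (isprint(c) && !isspace(c) && len < sizeof(run) - 1) {
            run[len++] = (char)c;       /* extend the current printable run */
        } else {
            run[len] = '\0';
            if (len >= 6 && strchr(run, '@'))
                puts(run);              /* crude "email-like" hit */
            len = 0;
        }
    }

    /* Flush any run still in the buffer at end of file. */
    run[len] = '\0';
    if (len >= 6 && strchr(run, '@'))
        puts(run);

    fclose(f);
    return 0;
}
```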