After the Shmoocon talk last weekend, I was accused of aiding and abetting the enemy.
First, let me thank those who gave positive feedback on the talk with
Alissa Torres (@sibertor). The talk was
on forging artifacts in memory forensics that confuse our current tools. Those who know me know that I am not a fan of
PBF (push button forensics). If you are doing
forensic investigations, you are doing important work. I’m a huge advocate of you actually knowing how to do it. Crazy, I know, but the attitude among many
pseudo-professionals today is that they should be able to just push the button
and get the evidence. Think I’m
exaggerating? Check out the Forensicator
Pro April Fool’s Day gag.
The tool I introduced this weekend clearly has
offensive purposes. When I announced it,
I couldn’t imagine a single defensive use for it. Some have told me that they might use the
tool to create training scenarios. I think
that sounds pretty unlikely, but I’ll wait to see how that plays out. In general though, it’s something that will
likely only ever be used by attackers. Period. End of story.
So why did I present on and release the tool? First, I feel sure that someone else is
already attacking memory forensics processes.
I AM NOT THAT SMART. Someone else is likely already doing
this. If I am the first, someone else
was going to think of it later. In any
case, my concern was that without a demonstration that our tools can be fooled,
investigators won’t include forgery in the “realm of possibilities.” I’m sick of hearing people tell me that
what’s in RAM is ground truth. That can’t
always be true. ADD shows pretty
conclusively that many OS objects can be forged. Pushing the envelope is better for everyone in the long term, even if it causes some short term pain.
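To make the forgery claim concrete, here is a minimal, hypothetical sketch of the core idea: pool scanners carve kernel objects out of raw memory by hunting for pool-tag signatures and a few sanity-checked fields, so an attacker who can write kernel memory can plant a byte blob that satisfies those checks. The tag, offsets, and sizes below are illustrative placeholders, not the real Win7 SP1 x86 structure layout that ADD targets.

```python
import struct

# Illustrative constants -- NOT the real Windows structure layout.
POOL_TAG = b"Proc"      # tag a scanner hunts for when carving process objects
PID_OFFSET = 0xB4       # hypothetical UniqueProcessId offset in the object
NAME_OFFSET = 0x16C     # hypothetical ImageFileName offset in the object
OBJECT_SIZE = 0x2C0     # hypothetical object size

def forge_process_object(name: bytes, pid: int) -> bytes:
    """Build a blob that a naive pool scanner would report as a process."""
    obj = bytearray(OBJECT_SIZE)
    # Fake pool header: block-size and type words followed by the pool tag.
    header = struct.pack("<HH4s", OBJECT_SIZE // 8, 0x0200, POOL_TAG)
    # Plant the fields a scanner's sanity checks typically read.
    struct.pack_into("<I", obj, PID_OFFSET, pid)
    obj[NAME_OFFSET:NAME_OFFSET + len(name)] = name[:15]  # 15-char name limit
    return bytes(header) + bytes(obj)

blob = forge_process_object(b"svchost.exe", 4242)
```

In the real attack this blob would be written into nonpaged pool from kernel mode; the point of the sketch is only that signature-driven carving, on its own, has nothing to distinguish a planted pattern from a genuine object.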
What’s next?
Well, I’m planning to expand the tool. After the talk, several people approached me
and explained how the tool would be useful for building training scenarios, if
only it implemented X or Y. I’ll be
adding some features as time permits.
Multiple OS support is also on the radar (as noted in the slides, the
reference implementation only works on Win7x86SP1).
Shoring up testing so ADD also fools other
forensics tools is also on the horizon. I
ran some tests against Mandiant’s Redline and it doesn’t detect the forged
processes (this is a good thing). Does this mean that it has
better sanity checks, or does it just not scan for the process structures? Because it isn’t open source, I don’t
know. I haven’t checked the EULA, but I’m
guessing that there’s an anti-reversing provision in there. If that’s the case, I’ll be making lots of “educated
guesses.”
Memory images and tool release?
I expect to release source code on the Google Code page by the end of the week. I also expect to get some memory images out shortly as well. After those are out, I plan to step through them and show how to detect the current versions of ADD in Volatility (TM). However, for legal reasons I should note that I am not in any way associated with the Volatility team or Verizon (which holds the Volatility trademark).
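One detection approach I expect the walkthrough to lean on is cross-view analysis: a forged object planted for scanners shows up in a pool-scan view (psscan-style) but not in the OS's own linked process list (pslist-style), so diffing the two views surfaces it. Here is a rough sketch of that comparison with a made-up record format; it is an assumption about the analysis, not ADD's actual detection logic.

```python
# Hypothetical cross-view check: compare a linked-list walk (pslist-style)
# against a pool-tag scan (psscan-style). Entries are (pid, name) tuples.
def cross_view_anomalies(pslist, psscan):
    """Return scan hits with no counterpart in the active process list."""
    active = set(pslist)
    # Anything the scanner carved that the kernel's own list doesn't
    # know about is either a terminated process or a planted fake.
    return sorted(p for p in set(psscan) if p not in active)

pslist = [(4, "System"), (368, "smss.exe"), (512, "csrss.exe")]
psscan = pslist + [(4242, "svchost.exe")]  # one scan-only entry
print(cross_view_anomalies(pslist, psscan))
```

Of course a scan-only hit can also be a legitimately terminated process, so the diff is a lead for the analyst to run down, not a verdict on its own.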
Can't wait, Jake, to load it in my sandbox and play with it
...I was accused of aiding and abetting the enemy.
By whom? Publicly, or privately? By one person, or multiple? If more than one, how many?
...not a fan of PBF...
You're not a fan of Nintendo forensics, great. But there's a bit of a difference between "knowing how to do it", and understanding the underlying data structures, how the operating system works and "does its thing", and what artifacts should be there.
...I couldn’t imagine a single defensive use for it.
Forget "training scenarios"...those will be too contrived and seem too fake compared to actual real world work. They always do.
How about testing? Someone could use this tool, and then test their analysis process against the result.
Two other statements I found interesting in your post were:
...it’s something that will likely only ever be used by attackers. Period. End of story.
This may be where the "accusation" came from...
...and...
Someone else is likely already doing this.
I'm not disagreeing with you, but I am curious as to what data or findings you have to base this on...or is it just a "feeling"? Again, I'm not disagreeing, just asking if there's something that you've seen...that's all.
So, I see your point, but I honestly don't think that this is a matter of fooling tools, per se, because tools are just that...tools. Really, it's much more about subverting the perceptions, training, and methodology of the analyst.
This is nothing new. Anyone remember this presentation from 2005? Analysts and their process/methodology have been targeted for a long while now.
Something else to consider: this presentation from 2007. In section 7, the authors address what you're referring to in your presentation.