Monday, August 11, 2014

AFoD Blog Interview With Andrew Case

I can’t remember how long I’ve known Andrew. It seems like we’ve been corresponding on social media and email for ages now and I’ve been a fan of his work for a very long time. He’s been a heroic contributor to the open source digital forensics community and I figured it was long past time for me to do a proper AFoD Blog interview with him.

Andrew Case’s Professional Biography

Andrew is a digital forensics researcher, developer, and trainer. He has conducted numerous large scale digital investigations and has performed incident response and malware analysis across enterprises and industries. Prior to focusing on forensics and incident response, Andrew's previous experience included penetration tests, source code audits, and binary analysis. He is a co-author of “The Art of Memory Forensics”, a book being published in summer 2014 that covers memory forensics across Windows, Mac, and Linux. Andrew is the co-developer of Registry Decoder, a National Institute of Justice funded forensics application, as well as a core developer of The Volatility Framework. He has delivered trainings to a number of private and public organizations as well as at industry conferences. Andrew's primary research focus is physical memory analysis, and he has presented his research at conferences including Blackhat, RSA, SOURCE, BSides, OMFW, GFirst, and DFRWS. In 2013, Andrew was voted “Digital Forensics Examiner of the Year” by his peers within the forensics community.

AFoD Blog: What was your path into the digital forensics field?

Andrew Case: My path into the digital forensics field started from a deep interest in computer security and operating system design. While in high school I realized that I should be doing more with computers than just playing video games, chatting, and doing homework. This led me to start teaching myself to program, and eventually to taking Visual Basic 6 and C++ elective classes in my junior and senior years. Towards the end of high school I had become pretty obsessed with programming and computer security, and I knew that I wanted to study computer science in college.

I applied to and was accepted into the computer science program at Tulane University in New Orleans (my hometown), but Hurricane Katrina had other plans for me. On the way to new student orientation in August 2005, I received a call that it was cancelled due to Katrina coming. That would be the closest I ever came to being an actual Tulane student, which obviously put a dent in my college plans.

The one bright side to Katrina was that from August 2005 to finally starting college in January 2006, I had almost six months of free time to really indulge my computer obsession. I spent nearly every day and night learning to program better, reading security texts (Phrack, Uninformed, Blackhat/Defcon archives, 29A, old mailing list posts, related books, etc.), and deep diving into operating system design and development. My copy of Volume 3 of the Intel Architecture manuals that I read cover-to-cover during this time is my favorite keepsake of Katrina.

This six-month binge led to two areas of interest for me: exploit development and operating system design. Due to these interests, I spent a lot of time learning reverse engineering and performing deep analysis of the runtime behavior of programs. I also developed a 32-bit Intel hobby OS along the way that, while not overly impressive compared to others, routed interrupts through the APIC, supported a notion of userland processes, and had very basic PCI support.

By the time January 2006 arrived, I was expecting to finally start college at Tulane. These plans came to a quick halt, though: in the first week of January, two weeks before classes were to start, Tulane dropped all of its science and engineering programs. Given the short notice, I had to pick a new school quickly or risk falling another semester behind. The idea of finding a school outside of the city, moving to it, and starting classes all in about ten days seemed pretty daunting, so I decided to take a semester at the University of New Orleans. This turned out to be a very smart decision, as UNO's computer science program was not only very highly rated, it also had a security and forensics track within CS.

For my first two years of college I was solely focused on security. At the time forensics seemed a bit… dull. My security interest led to interesting opportunities, such as working at Neohapsis in Chicago for two summers doing penetration tests and source code audits, but I eventually caught the forensics bug. This was greatly influenced by the fact that professors at UNO, such as Dr. Golden Richard and Dr. Vassil Roussev, and students, such as Vico Marziale, were doing very interesting work and publishing research in the digital forensics space. My final push into forensics came when I took Dr. Golden's undergraduate computer forensics course. This class throws you deep into the weeds and forces you to analyze raw evidence starting with a hex editor; you eventually move on to more user-friendly tools and automated processing. Once I saw the power of forensic analysis I was hooked, and my previous security and operating systems knowledge certainly helped ease the learning curve.

Memory forensics was also becoming a hot research area at the time, and when the 2008 DFRWS challenge came out it seemed time for me to fully switch my focus to forensics. The 2008 challenge was a Linux memory sample that needed to be analyzed in order to answer questions about an incident. At the time, existing tools only supported Windows, so new research had to be performed. Due to my previous interest in operating system internals, I had already studied most of the Linux kernel, so it seemed pretty straightforward to extract data structures I already understood from a memory sample. My work on this, in conjunction with the other previously mentioned UNO people, led to the creation of a memory forensics tool, ramparser, and the publication of our FACE paper at DFRWS 2008. This also led to my interest in Volatility and my eventual contributions of Linux support to the project.

AFoD: What was the memory forensics tool you created?

Case: The tool was called ramparser, and it was designed to analyze memory samples from Linux systems. It was created as a result of the previously mentioned DFRWS 2008 challenge. A detailed description of the tool and our combined FACE research can be found here: http://dfrws.org/2008/proceedings/p65-case.pdf. This project was my first real research into memory forensics, and I initially had much loftier goals than would ever be realized. Some of these goals would later be implemented inside Volatility, while others, such as kernel-version-generic support, still haven't been achieved by me or anyone else in the research community.

Soon after the DFRWS 2008 paper was published, the original ramparser was scrapped due to severe design limitations. First, it was written in C, which made it nearly impossible to implement the generic, runtime data structures that are required to support a wide range of kernel versions. Also, ramparser had no notion of what Volatility calls profiles. Profiles in Volatility allow for plugins to be written generically while the backend code handles all the changes between different versions of an operating system (e.g. Windows XP, Vista, 7, and 8). Since ramparser didn’t have profiles, the plugins had to perform conditional checks for each kernel version. This made development quite painful.
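To make the profile idea concrete, below is a minimal sketch of the two approaches. Everything in it (the structure offsets, profile names, and the memory-reading helper) is hypothetical and purely illustrative; it is not actual ramparser or Volatility code.

    # Illustrative only: hypothetical offsets, profile names, and helpers.

    # Without profiles, every plugin ends up repeating version checks:
    def process_name_conditional(mem, task_offset, kernel_version):
        if kernel_version.startswith("2.6.18"):
            comm_offset = 0x1F0   # hypothetical offset of task_struct.comm
        elif kernel_version.startswith("2.6.32"):
            comm_offset = 0x2D8   # hypothetical offset for a later kernel
        else:
            raise ValueError("unsupported kernel version")
        return mem.read(task_offset + comm_offset, 16).rstrip(b"\x00")

    # With a profile, the layout knowledge lives in one place and plugins
    # simply ask for the member they need:
    PROFILES = {
        "LinuxDebian2618x86": {"task_struct": {"comm": 0x1F0}},
        "LinuxUbuntu2632x64": {"task_struct": {"comm": 0x2D8}},
    }

    def process_name_profile(mem, task_offset, profile_name):
        comm_offset = PROFILES[profile_name]["task_struct"]["comm"]
        return mem.read(task_offset + comm_offset, 16).rstrip(b"\x00")

The point is that once structure layouts are centralized, supporting a new kernel version means adding one profile rather than touching every plugin.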

ramparser2 (I am quite creative with names) was a rewrite of the original ramparser in Python. The switch to a higher-level interpreted language meant that much of the misery of C immediately went away. Most importantly, dynamic data structures could be used that would adapt at runtime to the kernel version of the memory sample being analyzed. I ported all of the original ramparser plugins into the Python version and added several new ones.

After this work was complete, I realized that, while my project was interesting, I had no real way of getting other people to use or contribute to it. I also knew that Windows systems were of much higher interest to forensics practitioners than Linux systems and that Volatility, which only supported Windows at the time, was beginning to see widespread use in research projects, incident handling, and malware analysis. I then decided that integrating my work into Volatility would be the best way for my research to actually be used and improved upon by other people. Looking back on that decision now, I can definitely say that I made the right choice.

AFoD: For those readers who are not familiar with digital forensics or at least not familiar with memory forensics, can you explain what the Volatility Project is and how you became involved with it?

Case: The Volatility Project was started in the mid-2000s by AAron Walters and Nick Petroni. Volatility emerged from two earlier projects by Nick and AAron, Volatools and FATKit, which were some of the first public projects to integrate memory forensics into the digital investigation process. Volatility was created as the open source version of these research efforts and was initially worked on by AAron and Brendan Dolan-Gavitt. Since then, Volatility has received contributions from a number of people and has become one of the most popular and widely used tools within the digital forensics, incident response, and malware analysis communities.

Volatility was designed to allow researchers to easily integrate their work into a standard framework and to feed off each other's progress. All analysis is done through plugins, and the core of the framework was designed to support a wide variety of capture formats and hardware architectures. As of the 2.4 release (summer 2014), Volatility supports analyzing memory captures from 32- and 64-bit Windows XP through 8 (including the server versions), Linux 2.6.11 (circa 2005) through 3.16, all Android versions, and Mac OS X Leopard through Mavericks.
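As a rough illustration of that plugin model, a bare-bones process-listing plugin for Volatility 2.x looks something like the sketch below. The class name is made up, and real plugins declare configuration options and handle many more edge cases, so treat this as an outline of the pattern rather than production code.

    # Simplified sketch of a Volatility 2.x-style plugin. The class name is
    # invented; the base class and helpers follow the 2.x conventions.
    import volatility.plugins.common as common
    import volatility.utils as utils
    import volatility.win32.tasks as tasks

    class PsBasic(common.AbstractWindowsCommand):
        """List process names and PIDs from a Windows memory sample."""

        def calculate(self):
            # Build an address space for the sample using the selected profile.
            addr_space = utils.load_as(self._config)
            # Walk the active process list and yield each _EPROCESS object.
            for proc in tasks.pslist(addr_space):
                yield proc

        def render_text(self, outfd, data):
            for proc in data:
                outfd.write("{0:6} {1}\n".format(
                    int(proc.UniqueProcessId), str(proc.ImageFileName)))

Dropped into the plugins directory, a plugin like this is invoked by name just like the built-in commands, with the framework handling capture format and profile selection.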

The ability to easily plug my existing Linux memory forensics research into Volatility was one of the main things that led me to explore the project more deeply. After speaking with Brendan about some of my research and the apparent dead end that was my own project, he suggested I join the Volatility IRC channel and get to know the other developers. Through the IRC channel I met Jamie, Michael, AAron, and other people that I now work with on a daily basis. It also put me in touch with Michael Auty, the Volatility maintainer, who worked with me for hours a day over several weeks to get the base Linux support into the framework. Once this base support was added, I could trivially port my existing research into Volatility Linux plugins.

AFoD: I know we have people who read the blog who aren't day-to-day digital forensics people so can you tell us what memory forensics is and why it's become such a hot topic in the digital forensics field?

Case: Memory forensics is the examination of physical memory (RAM) to support digital forensics, incident response, and malware analysis. It has the advantage over other types of forensics, such as network and disk, in that much of the system state relevant to investigations only appears in memory. This can include artifacts such as running processes, active network connections, and loaded kernel drivers. There are also artifacts related to the use of applications (chat, email, browsers, command shells, etc.) that only appear in memory and are lost when the system is powered down. Furthermore, attackers are well aware that many investigators still do not perform memory forensics and that most AV/HIPS systems don't thoroughly look in memory (if at all). This has led to the development of malware, exploits, and attack toolkits that operate solely in memory. Obviously these will be completely missed if memory is not examined. Memory forensics is also being heavily pushed due to its resilience against malware that can easily fool live tools on the system but has a much harder time hiding within all of RAM.
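For example, against a Windows memory image those volatile artifacts can be pulled out directly with standard Volatility plugins (the image name and profile below are just placeholders):

    python vol.py -f memory.raw --profile=Win7SP1x64 pslist    # running processes
    python vol.py -f memory.raw --profile=Win7SP1x64 netscan   # network connections and sockets
    python vol.py -f memory.raw --profile=Win7SP1x64 modules   # loaded kernel drivers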

Besides the aforementioned items, memory forensics is also becoming heavily used due to its ability to support efficient triage at scale and the short time in which analysis can begin once indicators have been found. Traditional triage required reading potentially hundreds of MBs of data across disk, looking for indicators in event logs, the registry, program files, LNK files, etc. This could become too time consuming with even a handful of machines, much less hundreds or thousands across an enterprise. On the other hand, memory-based indicators, such as the names of processes, DLLs, services, and kernel drivers, can be checked by querying only a few MBs of memory. Tools such as F-Response make this fairly trivial to accomplish across huge enterprise environments and also allow for full acquisition of memory if indicators are found on a particular system.
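As a toy example of that kind of memory-based indicator check, the short script below compares process names from a saved pslist text output against a list of indicators. The file name, indicator values, and column position are hypothetical and simplified for illustration.

    # Toy triage check: flag processes from a saved Volatility pslist text
    # output whose names match a list of known-bad indicators.
    # The indicator names and file path are hypothetical.
    KNOWN_BAD = {"evil.exe", "dropper32.exe", "svch0st.exe"}

    def check_pslist(path):
        hits = []
        with open(path) as f:
            for line in f:
                fields = line.split()
                # In the default pslist text output the process name is the
                # second column; adjust if your output format differs.
                if len(fields) > 1 and fields[1].lower() in KNOWN_BAD:
                    hits.append(line.rstrip())
        return hits

    if __name__ == "__main__":
        for hit in check_pslist("pslist.txt"):
            print(hit)

Scaled across an enterprise, the same idea applies: collect a few small memory-resident indicators per host, flag anything that hits, and only then acquire full memory from the suspect systems.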

The last reason I will discuss for the explosive growth in the use of memory forensics is its ability to recover encryption keys and plaintext versions of encrypted files. Whenever software encryption is used, the keying material must be stored in volatile memory in order to support decryption and encryption operations. Through recovery of the encryption key and/or password, the entire store (disk, container, etc.) can be opened. This has been used successfully many times against products such as TrueCrypt, Apple's Keychain, and other popular encryption products. Furthermore, as files and data from those stores are read into memory, they are decrypted so that the requesting application (Word, Adobe, Notepad) can allow viewing and editing by the end user. Through recovery of these file caches, the decrypted versions of files can be reconstructed directly from memory.

AFoD: The rumor going around town is that you're involved with some sort of memory forensics book? Is there any truth to that?

Case: That rumor is true! Along with the other core Volatility developers (Michael Ligh, Jamie Levy, and AAron Walters), we have recently written a book: The Art of Memory Forensics: Detecting Malware and Threats in Windows, Linux, and Mac Memory. The book is over 900 pages and provides extensive coverage of memory forensics and malware analysis across Windows, Linux, and Mac. While it may seem like it covers a lot of material, we originally had 1100 pages of content before the editor asked us to reduce the page count. The full table of contents for the book can be found here.

The purpose of this book was to document our collective memory forensics knowledge and experience, including the relevant internals of each operating system. It also demonstrates how memory forensics can be applied to investigations ranging from analyzing end-user activity (insider threat, corporate investigations) to uncovering the workings of the most advanced threat groups. The book also spends a good bit of time introducing the concepts needed to fully understand memory forensics. There is an entire chapter dedicated to memory acquisition, a deeply misunderstood topic that can have drastic effects on an investigator's subsequent ability to perform proper memory analysis. An added bonus of this chapter is that we worked with the authors of the leading acquisition tools to ensure that our representations of the tools were correct and that we accurately described the range of issues investigators need to be aware of when performing acquisitions.

The book is structured so that we introduce a topic (processes, kernel drivers, memory allocations, etc.) for a specific operating system, explain the relevant data structures, and then show how Volatility can be used to recover the information in an automated fashion. Volatility was chosen due to our obvious familiarity with it, but also due to the fact that it is the only tool capable of going so deeply and broadly into memory. The open source nature of Volatility means that readers of the book can read the source code of any plugins of interest, modify them to meet the needs of their specific environment, and add on to existing capabilities in order to expand the field of memory forensics. With that said, the knowledge gained from the book is applicable to people using any memory forensics tool and/or those who wish to develop capabilities outside of Volatility.

Along with the book, we will also be releasing the latest version of Volatility. The 2.4 release includes full support for Windows 8, 8.1, Server 2012, and Server 2012 R2; TrueCrypt key and password recovery modules; and over 30 new Mac and Linux plugins for investigating malicious code, rootkits, and suspicious user activity. In total, Volatility 2.4 has over 200 analysis plugins across the supported operating systems.
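For instance, the TrueCrypt modules in 2.4 can be run against a Windows memory image like this (image name and profile are placeholders):

    python vol.py -f memory.raw --profile=Win7SP1x64 truecryptsummary     # overview of TrueCrypt artifacts in the sample
    python vol.py -f memory.raw --profile=Win7SP1x64 truecryptpassphrase  # recover cached passphrases
    python vol.py -f memory.raw --profile=Win7SP1x64 truecryptmaster      # extract master keys for decrypting volumes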

AFoD: What do you recommend to people who are looking to break into the digital forensics field? What would you tell someone who is in high school compared to someone who is already in the middle of a career and looking to make the switch?

Case: To start, there are a few things I would tell both sets of people. First, I consider learning to program the most important and essential skill. The ability to program removes you from the set of investigators who are limited by what their tools can do for them, and the skill also makes you highly attractive to potential employers. As you know, when dealing with advanced malware and attackers, existing tools are only the starting point, and many customizations and "deep dives" into data outside tools' existing functionality are needed to fully understand what occurred. To learn programming, I would recommend starting with a scripting language (Python, Ruby, or similar). These are the easiest to learn and program in, and the languages are pretty forgiving. There are freely accessible guides online as well as great books on all of these languages.

The other skill I would consider essential is at least a moderate understanding of networking. I don't believe that people can be fully effective host analysts without some understanding of networking and how data flows through the environment they are trying to protect or investigate. If the person wants to become a network forensic analyst, then they obviously need that base set of skills to even be considered. To learn the basics of networking, I would recommend starting by reading a well-rated Network+ study book. This will teach you about routing, switching, physical interfaces, subnetting, VLANs, etc. After understanding the hardware devices and how they interact, you should then read Volume 1 of TCP/IP Illustrated. If you can read C, I would recommend reading Volume 2 as well, but know that the book can be brutal. It is over 1000 pages and walks you literally line by line through the BSD networking stack. You will be a master if you can finish and understand it all. It took me a month of reading every day after work to get through it during a summer in college. If someone hasn't read TCP/IP Illustrated then I seriously question his or her networking background. To quote a document that I find very inspirational related to security and forensics: "have you read all 3 volumes of the glorious TCP/IP Illustrated, or can you just mumble some useless crap about a 3-way handshake".

As far as specific advice for the different audiences, I would strongly recommend that high school students learn at least some electronics and hardware skills. If you are going to do computer science, make sure to take some electrical engineering courses as electives in order to get hands-on experience with electronics. I plan on expanding on this more in the near future, but I truly think that in the next few years not being able to work with hardware will limit your career choices and will certainly affect your ability to do research. In short, current investigators can get away with only interacting with hardware when performing tasks like removing a hard drive or disassembling components by hand. As phones, tablets, and other devices without traditional hard drives or memory become standard (see "Internet of Things"), the ability to perform actions such as removing flash chips, inspecting hardware configurations, and interacting with systems through hardware interfaces will become commonplace. Without these skills you won't even be able to image a drive. For example, if I gave an investigator with today's most useful skills a "smart TV" and told him or her to remove the storage in a forensically sound manner, do you think it would happen? Would the person grab an electronics kit and start pulling components off the board? Most people in forensics would have no idea how to do that, myself included.

For people already in the field, I would say play to your strengths. If you have a background in programming, then use that to your advantage: explain to your future employer how your programming background will allow you to automate tasks and help out in cases where source code review is needed. Being able to automate tasks is a huge plus and greatly increases efficiency while removing the chance for human error. If a person's background is networking, then there are many ways he or she could transition into network forensics roles, whether as part of a SOC or as a consultant. When transitioning roles, I would make sure to ask any prospective employers about training opportunities at the company. If a person with an IT background can really get into the forensics/IR trenches while also getting quality training once or twice a year, then he or she will quickly catch up to their peers.

AFoD: So where can people find you this year? Will you be doing any presentations or attending any conferences?

Case: The remainder of the year will actually be quite busy with speaking engagements. Black Hat just wrapped up, and while there we did a book signing, released Volatility 2.4 at Black Hat Arsenal, and threw a party with the Hacker Academy to celebrate the book's release. In September I will be speaking at Archc0n in St. Louis, and in October I will be taking my first trip to Canada to speak at SecTor. I may also be speaking at Hacker Halted in October. In November I will be speaking at the Open Memory Forensics Workshop (OMFW) and the Open Source Digital Forensics Conference (OSDFC) along with the rest of the Volatility team. I also have pending CFP submissions to BSides Dallas and the API Cyber Security Summit, both in November. I am currently eyeing some conferences for early next year, including Shmoocon and SOURCE Boston, neither of which I have spoken at previously. Finally, if any forensics/security people are ever coming through New Orleans, they should definitely reach out. I, along with several other local DFIR people, can definitely show out-of-towners a good time in the city, and we have done so many times.

Wednesday, April 2, 2014

The State Of The Blog

I get enough people asking me about the fate of the blog that I thought it would make more sense to just crank out a blog post. I'm still here, but my time continues to be so limited that I've had to keep the blog on hold. A couple of years back I started a great new job building out a world-class cyber investigations team, and that continues to take up the majority of my time. I'm planning a few blog posts about what I have learned as a hiring manager to help those who are looking to break into the field and how best to approach things like resumes, cover letters, and interviews.

What really killed my ability to stay on top of the blog was starting an MBA program last fall. It turns out a full-time job coupled with being a full-time MBA student doesn't leave much free time. Pretty much anything that doesn't involve my family, job, or school will be on hold until I graduate in the spring of 2015 or the University of Florida punts me out of their MBA program. I've managed to survive one term so far, and I'm cramming for finals for my second term this week. Three more terms to go after that.

I'm hoping to carve out some time to crank out a couple of blog posts now and again before graduation, and then get back to my usual blogging schedule of a post every two to four weeks. I have a couple of blog posts that I really want to get out this year, including some interviews.