Thanks a Million

Tuesday, May 24, 2016 Posted by Corey Harrell 5 comments
Last week a new member on my $DayJob’s team reached the point in his in-house training where he started reading articles on jIIr. After I cracked a joke about the blog’s author he mentioned how my blog had over one million page views. To be honest, I hadn’t looked at jIIr’s statistics for months and I didn’t even know about the page views. The milestone really made me reflect on my journey and how it wouldn’t have been possible without others, so I wanted to take the time to say thank you.

Thanks to everyone who has stopped by jIIr to read my content. Thanks to all the other bloggers who have linked back to my site or posted links directing their readers to my site. Thanks to everyone who posted links to my content on websites, social media, forums, and DFIR email lists to direct people to my posts. I especially wanted to thank those who took the time to leave a comment or contact me by email about something I wrote, whether it was positive or critical. I wanted to give a shout out to Harlan for the advice he provided me. I was just a random person who reached out to him looking for advice on starting a blog. Not only did he provide me with great advice (which showed me I was really overthinking things) but he also mentioned jIIr on his own blog, which helped my content gain more exposure. Lastly, I wanted to thank the Christian men’s group I was in all those years ago who walked with me on how we could use the passions God blessed us with to serve others.

In addition to saying thanks I also wanted to apologize. I wanted to apologize to those who left comments on my blog over the past few months and never received a response. To those who contacted me by email and I either took an extremely long time to respond or never responded at all. To those who may have visited my blog only to be disappointed by the lack of new content posted on jIIr since last September. This was not the way I would have preferred to hit this milestone compared to hitting it on the strength of a great article that pushed me over a million page views. Sitting where I am today, though, I wouldn’t have done it any other way. I needed some time to focus on my walk with Christ and spend more time in God’s word. In essence, I realigned priorities in my life and how I was spending my time. Outside of my commitments (family, $DayJob, $AcademiaJob, and church) I pretty much disconnected from everything else to focus on my faith. The DFIR community and jIIr were part of this everything else category that I temporarily put on hold while I spent time refocusing. Stay tuned as I start working my way through the blog idea hopper that has built up over the months.

It’s been a long journey to reach this milestone. I started out as a digital forensic analyst/vulnerability assessor looking to get into the incident response field and became a security analyst who built and manages a Computer Security Incident Response Team (CSIRT) performing security monitoring and incident response. jIIr has been a place where I have shared my thoughts during this journey in hopes that someone somewhere would find the content useful and helpful. God willing, I’ll continue publishing content and my research for another six years to help those on their own journeys.


~ Matthew 4:4

Breaking Out of Routines

Thursday, May 19, 2016 Posted by Corey Harrell 0 comments
I was digging a hole to plant my blackberry plants when I kept hearing the noise of something moving around the corner of my house. I stopped digging and walked around the house to see what was making the noise. I didn’t see anything, so I shrugged it off and went back to digging the hole. Shortly thereafter I heard the noise again so I went back to look around the corner. Again, I didn’t see anything so I went back to work thinking maybe it was the wind. After a few minutes I heard the noise for a third time and this time I was determined to figure out what was making it. I went around the corner of my house but I still didn’t see anything. Then I looked down to my right at the basement window well that sits below ground and saw what was making the noise. Sitting next to my window inside the window well was a squirrel, which wasn’t moving since it saw me standing right above it.

I walked a few feet away so the squirrel couldn’t see me but I could still see it. I stood on top of my air conditioning unit to see what the squirrel was doing. After a minute, the squirrel started to move around. Not just in any manner; it started to walk the boundary of the window well, making a circle. As I stood there watching the squirrel I realized what had occurred. I built up the soil on that side of my house to prepare for our garden but this caused the soil to be close to the top of my window well. The squirrel must have been walking by and fallen into the window well before I was able to buy window well covers. The trapped squirrel’s search for a way out had turned into a routine: walking in circles trying to find a way to escape but not finding one. The squirrel keeps walking, searching for a way out. In the end, the squirrel is just walking in a small circle. As I was watching the squirrel I could see it had been trapped for some time; maybe for hours or maybe the entire day.

I thought about how I could help the squirrel escape without it biting me. My first attempt was to put a branch into the window well. This way the squirrel could climb up the branch to escape. I dropped the branch down into the window well and went back to my spot to watch what happened. The squirrel started to walk the circle and approached the branch. Then the squirrel walked over the branch and continued looking for a way out. My first thought was maybe the branch was too small so I replaced it with a piece of lumber. The same thing occurred with the squirrel walking right over the lumber and not seeing that the wood was its way out from being trapped. I stood there watching the squirrel and thought to myself that the squirrel was trapped in its own routine. For hours the branch and lumber were not there, so the squirrel walked right past them since it was not expecting them. My neighbor came over to help me get the squirrel out. It took a few minutes but he managed to lift the now freaked out squirrel out of the window well with the shovel. The squirrel panicked and jumped right back down into the window well. However, this time the squirrel was no longer trapped in its routine since the experience with the shovel was a jolt to its senses. My neighbor now struggled to get the squirrel on the shovel so he decided to set a brick on the bottom of the window well. The squirrel immediately saw the brick and used it to jump out of the window well to free itself.

At times we can find ourselves trapped in our routines and this is especially true when performing analysis for security monitoring, digital forensics, or incident response. Routines make our job easier because we can perform certain actions without having to think really hard about how to do it. The downside of routines is they tend to put us on auto-pilot, which blinds us to seeing something new that is right in front of us, similar to the squirrel’s routine blinding it to its way out. Every now and then when you are performing routine analysis tasks, take the time to stop and think about what you are doing, what you are trying to accomplish, and what you are seeing. If you don’t, then you may never see what you are missing because we don’t have the luxury of someone giving us a jolt to break us out of our routines.

Triage Practical Solution – Malware Event – Proxy Logs Prefetch $MFT IDS

Tuesday, April 5, 2016 Posted by Corey Harrell 6 comments
Staring at your Mountain Dew you think to yourself how well your malware triage process worked on triaging the IDS alert. It’s not perfect and needs improvement to make it faster but overall the process worked. In minutes you went from IDS alerts to malware on the system. That’s a lot better than what it used to be; where IDS alerts went into the black hole of logs never to be looked at again. Taking a sip of your Mountain Dew you are ready to provide your ISO with an update.

Triage Scenario

To fill in those readers who may not know what is going on, the following is the abbreviated practical scenario (for the full scenario refer to the post Triage Practical – Malware Event – Proxy Logs Prefetch $MFT IDS):

The junior security guy noticed another malware infection showing up in the IDS alerts. He grabbed a screenshot of the alerts and sent it to you by email. As soon as you received the email containing the screenshot shown below, you started putting your malware triage process to the test.

Below are some of the initial questions you had to answer and report back to the ISO.

        * Is this a confirmed malware security event or was the junior analyst mistaken?
        * What do you think occurred on the system to cause the malware event in the first place?
        * What type of malware is involved and what capabilities does it have?
        * What potential risk does the malware pose to your organization?
        * What recommendation(s) do you make to the security team to strengthen its security program to reduce similar incidents occurring in the future?

Information Available

Despite the wealth of information available to you within an enterprise, only a subset of data was provided for you to use while triaging this malware event. The following artifacts were made available:

        * IDS alerts for the timeframe in question (you need to replay the provided pcap to generate the IDS alerts; the pcap itself is not meant to be used during triage and was only made available so you can generate the IDS alerts in question)

        * Parsed index.dat files to simulate proxy web logs (the parsed index.dat information was modified to remove items not typically found in a web server’s proxy logs)

        * Prefetch files from the system in question (inside the Prefetch.ad1 file)

        * Filesystem metadata from the system in question (the Master File Table is provided for this practical)

Information Storage Location within an Enterprise

Each enterprise’s network is different and each one offers different information for triaging. As such, it is not possible to outline all the possible locations where this information could be stored in every enterprise. However, it is possible to highlight common areas where this information can be found. If your environment does not reflect the locations I mention, you can either evaluate your network for a similar system containing similar information or better prepare your environment by making sure this information starts being collected.

        * Proxy web logs within an enterprise can be stored on the proxy server itself and/or in a central logging system. In addition to proxy web logs, the potentially infected system will have web usage history for each web browser on the system

        * IDS alerts within an enterprise can be stored on the IDS/IPS sensors themselves or centrally located through a management console and/or central logging system (i.e. SIEM)

        * Prefetch files within an enterprise can be located on the potentially infected system

        * File system metadata within an enterprise can be located on the potentially infected system

Collecting the Information from the Storage Locations

Knowing where information is available within an enterprise is only part of the equation. It is necessary to collect the information so it can be used for triaging. Similar to all the differences between enterprises’ networks, how information is collected varies from one organization to the next. Below are a few suggestions for how the information outlined above can be collected.

        * Proxy web logs will either be located on the proxy server itself or a centralized logging solution. The collection of the logs can be as simple as running a filter in a web graphical user interface to export the logs, copying an entire text file containing log data from the server, or viewing the logs using the interface to the central logging system.

        * IDS alerts don’t have to be collected. They only need to be made available so they can be reviewed. Typically this is accomplished through a management console, security monitoring dashboard, or a centralized logging solution.

        * Prefetch files are stored on the potentially infected system. This artifact can be collected either remotely or locally. Remote options include enterprise forensic tools such as F-Response, EnCase Enterprise, or GRR Rapid Response, triage scripts such as the Tr3Secure collection script, or copying the files over the admin share since Prefetch files are not locked. The same tools work locally.

        * File system metadata is very similar to Prefetch files because the same collection methods work for collecting it. The one exception is the $MFT can’t be pulled off using the admin share since the operating system keeps it locked.
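For anyone scripting the pull themselves, the admin-share copy step can be sketched in a few lines of Python. This is a minimal sketch, not any of the tools named above; the share path in the comment is a placeholder, and the same function works locally by pointing it at C:\Windows\Prefetch.

```python
import glob
import os
import shutil

def collect_prefetch(source_dir, dest_dir):
    """Copy all Prefetch (*.pf) files from source_dir into dest_dir.

    For a remote pull, source_dir would be the admin share path
    (something like two backslashes, HOST, C$, Windows, Prefetch,
    where HOST is a hypothetical machine name); locally it would be
    the system's own Prefetch directory.
    """
    os.makedirs(dest_dir, exist_ok=True)
    collected = []
    for pf in sorted(glob.glob(os.path.join(source_dir, "*.pf"))):
        shutil.copy2(pf, dest_dir)  # copy2 preserves file timestamps
        collected.append(os.path.basename(pf))
    return collected
```

Since the $MFT is locked, this approach only works for the Prefetch files; the file system metadata still needs a raw-access tool.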

Potential DFIR Tools to Use

The last part of the equation is what tools one should use to examine the information that is collected. The tools I’m outlining below are the ones I used to complete the practical.

        * Excel to view the text based proxy logs
        * Winprefetchview to parse and examine the prefetch files
        * MFT2CSV to parse and examine the $MFT file

Others’ Approaches to Triaging the Malware Event

Placeholder since none were known at the time of this post

Partial Malware Event Triage Workflow

The diagram below outlines the jIIr workflow for confirming malicious code events. The workflow is a modified version of the Securosis Malware Analysis Quant. I modified the Securosis process to make it easier to use for security event analysis.

Detection: the malicious code event is detected. Detection can be a result of technologies or a person reporting it. The workflow starts in response to a potential event being detected and reported.

Triage: the detected malicious code event is triaged to determine if it is a false positive or a real security event.

Compromised: after the event is triaged the first decision point is to decide if the machine could potentially be compromised. If the event is a false positive or one showing the machine couldn’t be infected then the workflow is exited and returns back to monitoring the network. If the event is confirmed or there is a strong indication it is real then the workflow continues to identifying the malware.

Malware Identified: the malware is identified two ways. The first way is identifying what the malware is including its purpose and characteristics. The second way is identifying and obtaining the malware from the actual system.

Root Cause Analysis: a quick root cause analysis is performed to determine how the machine was compromised and to identify indicators to use for scoping the incident. This root cause analysis does not call for a deep dive analysis taking hours and/or days but one only taking minutes.

Quarantine: the machine is finally quarantined from the network it is connected to. This workflow takes into account performing analysis remotely so disconnecting the machine from the network is done at a later point in the workflow. If the machine is initially disconnected after detection then analysis cannot be performed until someone either physically visits the machine or ships the machine to you. If an organization’s security monitoring and incident response capability is not mature enough to perform root cause analysis in minutes and analysis live over the wire then the Quarantine activity should occur once the decision is made about the machine being compromised.  

Triage Analysis Solution

I opted to triage this practical similar to a real security event. As a result, the post doesn’t use all of the supplied information and the approach is more focused on speed. The triage process started with the IDS alert screenshot the junior security analyst saw then proceeded to the proxy logs before zeroing in on the system in question.

IDS alerts

The screenshot below is the one supplied by the junior security analyst. In this practical it was not necessary to replay the packet capture to generate these IDS alerts since the screenshot supplied enough information.

Typically you can gain a lot of context about a security event by first exploring the IDS signatures themselves. Gaining context around a security event solely using the IDS signature names becomes second nature by doing event triage analysis on a daily basis. Analysts tend to see similar attacks triggering similar IDS alerts over time, making it easier to remember what the attacks are and the traces they leave in networks. For other analysts this is where Google becomes their best friend. The screenshot shows three distinct events related to the malware in this security incident.

1. ET CURRENT_EVENTS Possible Dyre SSL Cert: these signatures indicate a possible SSL certificate for the Dyre malware. Dyre is a banking Trojan and it uses SSL to encrypt its communication. The practical did not include this, but another way to detect this activity is by consuming the SSL Blacklist and comparing it against an organization’s firewall logs or netflow data to see if any internal hosts are communicating with known IP addresses associated with Dyre.

2. ET POLICY Internal Host Retrieving External IP via icanhazip[.]com: this signature flags an internal host contacting a known website associated with identifying the public IP address. Depending on the organization this may or may not be normal behavior for web browsing and/or end users. However, some malware tries to identify the public facing IP address, which will trigger this IDS signature.

3. ET TROJAN Common Upatre Header Structure: this signature flags traffic associated with the Upatre Trojan. Upatre is a downloader that installs other programs.

One of the IDS alerts could have been a false positive, but it is unlikely that this sequence of alerts is all false positives. This confirms what the junior analyst believed about the machine being compromised. Specifically, the machine was infected with the Upatre downloader, which then proceeded to install the Dyre banking Trojan.

Web Proxy Logs

IDS alerts provide additional information that can be used in other data sources. The practical doesn’t provide the date and time when the IDS signatures fired, but it does provide the destination IP addresses and domain name the machine communicated with. These IP addresses were used to correlate the IDS alerts to activity recorded in the web proxy logs. The web_logs.csv file was imported into Excel and the data was sorted by date. This puts the log entries in chronological order, making it easier to perform analysis.
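The import-and-sort step can also be scripted instead of done in Excel. The sketch below is a rough equivalent; the column names ("date", "url", "user") and the timestamp format are assumptions about the web_logs.csv layout and would need to be adjusted to match the actual file.

```python
import csv
from datetime import datetime

def load_sorted_logs(path):
    """Read a proxy log CSV and return its rows sorted chronologically.

    Assumes columns named "date", "url", and "user" and timestamps
    like 08/15/2015 05:49:51 - adjust both for the real log format.
    """
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    rows.sort(key=lambda r: datetime.strptime(r["date"], "%m/%d/%Y %H:%M:%S"))
    return rows

def search_logs(rows, indicator):
    """Return rows whose URL contains the indicator (an IP or domain)."""
    return [r for r in rows if indicator in r["url"]]
```

Scripting the search also makes it trivial to run every IDS indicator against the logs in one pass instead of filtering in a spreadsheet.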

The web logs provided with the practical were very basic. The only information recorded was the date/time, URL, and username. Unfortunately, the destination IP address was not recorded, so the logs did not contain any entries for the IP addresses from the IDS alerts.

The search on the domain icanhazip[.]com also came up empty. At this point the web proxy logs provided no additional information and no date and time to go on. This analysis did reveal these web proxy logs suck and the organization needs to make it a priority to record more information to make analysis easier. The organization also needs to ensure the network routes all HTTP/HTTPS web traffic through the proxy so it gets recorded, which prevents users and programs from bypassing it.

Prefetch files

At this point in the analysis the machine in question needs to be triaged. Reviewing the programs that executed on a system is a quick technique to identify malicious programs. The high level indicators I tend to look for are below:

        * Programs executing from temporary or cache folders
        * Programs executing from user profiles (AppData, Roaming, Local, etc)
        * Programs executing from C:\ProgramData or All Users profile
        * Programs executing from C:\RECYCLER
        * Programs stored as Alternate Data Streams (i.e. C:\Windows\System32:svchost.exe)
        * Programs with random and unusual file names
        * Windows programs located in wrong folders (i.e. C:\Windows\svchost.exe)
        * Other activity on the system around suspicious files

The collected prefetch files were parsed with Winprefetchview and I initially sorted by process path. I reviewed the parsed prefetch files using the general indicators I mentioned previously and I found the suspicious program highlighted in red.
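Those indicators can also be rough-coded as a first-pass filter over the parsed prefetch output. The patterns below are a sketch covering only a few of the indicators listed above, not an exhaustive rule set, and would need tuning for a given environment.

```python
import re

# Path patterns based on the indicators listed above; a starting
# point for a first pass, not a complete detection rule set.
SUSPICIOUS = [
    re.compile(r"\\temp\\", re.I),                        # temp folders
    re.compile(r"\\temporary internet files\\", re.I),    # browser cache
    re.compile(r"\\(appdata|programdata|recycler)\\", re.I),
    re.compile(r"system32:[^\\]+$", re.I),                # alternate data stream
]

def flag_suspicious(process_paths):
    """Return the subset of process paths matching a suspicious pattern."""
    return [p for p in process_paths
            if any(rx.search(p) for rx in SUSPICIOUS)]
```

Sorting by process path, as done here with Winprefetchview, accomplishes the same thing visually; the script form just scales better when reviewing prefetch from many systems.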

The first suspicious program was SCAN_001_140815_881[1].SCR executing from the Temporary Internet Files directory. The program was suspicious because it was executing from the lab user profile and its name resembles a document name instead of a screensaver name. To gain more context around the suspicious program I then sorted by the Last Run time to see what else was executing around this time.

The SCAN_001_140815_881[1].SCR program executed at 8/15/2015 5:49:51 AM UTC. Shortly thereafter another program named EJZZTA8.EXE executed from the user’s Temp directory at 8/15/2015 5:51:03 AM UTC. Neither prefetch file referenced any other suspicious executables in its file handles. At this point not only do I have two suspicious files of interest but I also identified the exact date and time when the security event occurred.

Web Proxy Logs Redux

The date and time of the incident obtained from the Prefetch files can now be used to correlate the IDS alerts and suspicious programs to the activity in the web proxy logs. The picture below shows that, leading up to when the SCAN_001_140815_881[1].SCR program executed, the user was accessing Yahoo email.

The rest of the web logs continued to show the user interacting with Yahoo email around the time the infection occurred. However, the web logs don’t record the entry showing where the SCAN_001_140815_881[1].SCR program came from. This occurred either because the web proxy didn’t record it or because the web proxy sucks by not recording it. I’m going with the latter since the web proxy logs are missing a lot of information.

File system metadata

At this point the IDS alerts revealed the system in question had network activity related to the Upatre downloader and Dyre banking Trojans. The prefetch files revealed suspicious programs named SCAN_001_140815_881[1].SCR that executed at 8/15/2015 5:49:51 AM UTC and EJZZTA8.EXE that executed at 8/15/2015 5:51:03 AM UTC. The web proxy logs showed the user was accessing Yahoo email around the time the programs executed. The next step in the triage process is to examine the file system metadata to identify any other malicious software on the system and to try to confirm the initial infection vector. I reviewed the metadata in a timeline to make it easier to see the activity for the time of interest.

For this practical I leveraged the MFT2CSV program in the configuration below to generate a timeline. However, an effective and faster technique - but not free - is using the home plate feature in Encase Enterprise against a remote system live. This enables you to triage the system live instead of trying to collect files from the system for offline analysis.

In the timeline I went to the time of interest, which was 8/15/2015 5:49:51 AM UTC. I then proceeded forward in time to identify any other suspicious files. The first portion of the timeline didn’t show any new activity of interest around the SCAN_001_140815_881[1].SCR file.

Continuing forward through the timeline led me to the next file, EJZZTA8.EXE. The activity between these two files only showed files being created in the Temporary Internet Files and Cookies directories, indicating the user was surfing the Internet.
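Pivoting a timeline around a known time like this can be sketched as a simple window filter over parsed timeline rows. The tuple layout below is an illustration, not MFT2CSV's actual output format; in practice you would map the tool's columns into this shape first.

```python
from datetime import datetime, timedelta

def timeline_window(events, pivot, minutes=5):
    """Return timeline events within +/- `minutes` of the pivot time.

    `events` is a list of (timestamp, description) tuples; building
    that list from a parsed $MFT timeline is left to the reader since
    each parser emits different columns.
    """
    delta = timedelta(minutes=minutes)
    return [(t, d) for t, d in events if abs(t - pivot) <= delta]
```

Starting with a tight window around the pivot and widening it only when nothing turns up keeps the review focused, which matters when the goal is triage in minutes rather than a deep-dive exam.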

At this point the timeline did not provide any new information and the last analysis step is to triage the suspicious programs found.

Researching Suspicious Files

The first program, SCAN_001_140815_881[1].SCR, was no longer on the system but the second program (EJZZTA8.EXE) was. The practical’s file_hash_list.csv showed EJZZTA8.EXE’s MD5 hash was f26fd37d263289cb7ad002fec50922c7. The first search was to determine if anyone had uploaded the file to VirusTotal, and a VirusTotal report was available. Numerous antivirus detections confirmed the program was Upatre, which matches one of the triggered IDS signatures.
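When a pre-built hash list isn't available, hashing the file yourself before searching VirusTotal takes nothing but the standard library. The chunked read below is a general-purpose sketch; querying VirusTotal's API (rather than its web UI) requires your own API key and is noted only in the comment.

```python
import hashlib

def md5_of_file(path, chunk_size=1 << 20):
    """Compute a file's MD5 in chunks so large files don't exhaust memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# The resulting hash can then be searched on VirusTotal, either through
# the web UI or the v3 API (GET /api/v3/files/<hash> with an x-apikey
# header), without ever uploading or detonating the sample itself.
```

Searching by hash first, as done here, avoids tipping off an attacker by uploading a possibly targeted sample.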

A Google search of the hash located a Hybrid Analysis report and Malware report. The Hybrid Analysis report confirmed the sample sends network traffic. This information could then be used to scope the incident and identify other potentially infected machines.

The resources section in the program contains an icon confirming the file tried to mimic a document. This makes me conclude it was a social engineering attack against the user.

The reports contained other information that one could use to identify other infected systems in the enterprise. However, as it relates to the practical there wasn’t much additional information I needed to complete my triage analysis. The next step would be to contact the end user to get the phishing email and then request for the email to be purged from all users’ inboxes.

Triage Analysis Wrap-up

The triage process did confirm the system was infected with malicious code. The evidence present on the system and the lack of other attack vector artifacts (i.e. exploits, vulnerable programs executing, etc.) leads me to believe the system was infected due to a phishing email. The email contained some mechanism to get the user to initiate the infection. The risk to the organization is twofold: one of the malicious programs downloads and installs other malware, and the other tries to capture and exfiltrate credentials from the system. The next step would be to escalate the malware event to the incident response process so the system can be quarantined and cleaned, the end user can be contacted to get the phishing email, and it can be determined why end users have access to web email instead of using the organization's email system.

Blaming Others

Monday, February 8, 2016 Posted by Corey Harrell 0 comments
As we marched across the parade deck from the side we looked as one. The sound of about 70 Marines' heels hitting the pavement sounded as one. The sound of the hoarse drill instructor's voice echoed throughout the 3rd Battalion. The sight from the side must have been one to see: 70 Marines appearing as only a few walking in a single line. In one instant, in one brief moment, the few became many. The drill instructor echoed one command followed by quickly correcting himself with a different command. The 70 Marines who were marching as one became many as they tried to adjust. The stress of making a mistake on his first platoon must have added to the pressure. As the Marines marched across the parade deck the drill instructor kept echoing the wrong commands, forcing the Marines to adjust. The stress of the Marines striving to take first place must have added to the pressure. They lost their focus and were no longer in sync with the Marine standing next to them. It must have been a sorry sight from the side: close to 70 arms and legs marching with the sound of 70 heels hitting the parade deck at different times. Cluster is the most G-rated description one can give of the Marines marching across the parade deck that afternoon.

The evaluation was over and the 70 Marines filed back into their barracks. The brief moment of reflection in their minds was broken as the sound of a footlocker being kicked broke the silence. The roar of the two other drill instructors’ hoarse voices followed the loud bang of more footlockers being kicked. The blame for the cluster on the parade deck was placed squarely on the recruits. That afternoon the Marines spent quality time doing sandpit hopping across 3rd Battalion in Parris Island. For those not acquainted with this tradition, the following is what occurs. Recruits are forced to exercise in what seems like a giant sandbox by following the orders barked by their drill instructor. Jumping jacks, mountain climbers, jumping jacks, push ups, mountain climbers, etc. This goes on for a period of time before the recruits then run to the next sandbox to be smoked again in the same manner before running to the next sandbox. This continues until the drill instructors get bored or the recruits need to be somewhere. Words don’t do justice describing getting smoked so please take a few minutes to see a Pit Stop in action. In the sweltering heat of South Carolina, the recruits had sweat pouring down their faces as they were covered in sand with sandfleas biting them. As much as they tried to ignore it they could only focus on the feeling of bugs feasting on them and not being able to do anything about it (one scratch typically ends with a lot longer time being smoked). That afternoon the recruits (me being one of them) thought to ourselves: why are we being punished when our drill instructor messed up?

It was easier to blame even though it was hard to tell what even happened. It was easier to blame than it was to take responsibility so it wouldn't happen again. It was easier to blame than it was to admit we messed up; despite the circumstances we lost focus and resembled nasty civilians instead of Marines marching in sync. It was easier to blame to distract us from our current reality of shit.

Moral of the Story

It is wise to direct your anger towards problems - not people; to focus your energies on answers - not excuses.

- William Arthur Ward


Triage Practical – Malware Event – Proxy Logs Prefetch $MFT IDS

Wednesday, January 6, 2016 Posted by Corey Harrell 0 comments
The ISO was thrilled and excited about the possibilities after you successfully triaged the previous suspicious network activity. They got a glimpse of the visibility one attains through security monitoring and the information one can get leveraging incident response. As you sit at your desk drinking a Mountain Dew you don’t have time to reflect on the days when your security team was like an ostrich with its head buried in the sand. You are slowly working on improving and formalizing your organization’s security monitoring and detection capabilities as you detect and respond to security events. In the background you hear the junior security guy say “we got another one.” You already know he is referring to a malware infection so you say to him “Grab a screenshot of the alerts and send it to me in an email.” As you wait for the email to arrive you start to wonder if it is wrong to get excited and look forward to an alert that means your organization may have a problem. You brush the thought aside as the email arrives and you see the screenshot below (dates and times have been censored). You put down the Mountain Dew and put your hands to the keyboard as you start putting your malware triage process to the test.

Triage Scenario

The above scenario outlines the activity leading up to the current malware security event. Below are some of the initial questions you need to answer and report back to the ISO.

        - Is this a confirmed malware security event or was the junior analyst mistaken?
        - What do you think occurred on the system to cause the malware event in the first place?
        - What type of malware is involved and what capabilities does it have?
        - What potential risk does the malware pose to your organization?
        - What recommendation(s) do you make to the security team to strengthen its security program to reduce similar incidents occurring in the future?

Information Available

In an organization’s network you have a wealth of information available to use while triaging a security incident. Despite this, only a subset of the data is needed to successfully triage an incident. In this instance, you are provided with the following artifacts to use during your triage. Please keep in mind, you may not even need all of these.

        - IDS alerts for the timeframe in question (you need to replay the provided pcap to generate the IDS alerts; the pcap itself is not meant to be used during triage and was only made available so you can generate the IDS alerts in question)
        - Parsed index.dat files to simulate proxy web logs (the parsed index.dat information was modified to remove items not typically found in a web server’s proxy logs)
        - Prefetch files from the system in question (inside the Prefetch.ad1 file)
        - Filesystem metadata from the system in question (the Master File Table is provided for this practical)

Supporting References

The below items have also been provided to assist you in working through the triage process.

        - The jIIr-Practical-Tips.pdf document shows how to: update the IDS signatures in Security Onion, replay the packet capture in Security Onion, and mount the ad1 file with FTK Imager.

        - The file hash list from the system in question. This is being provided since you do not have access to the system or a forensic image. It can help you confirm the security event and any suspicious files you may find.

        - The file hashes of the practical files for verification purposes

The 2016-01-06_Malware-Event Web Logs Prefetch MFT IDS practical files can be downloaded here

The 2016-01-06_Malware-Event Web Logs Prefetch MFT IDS triage write-up is outlined in the post Triage Practical Solution – Malware Event – Proxy Logs Prefetch $MFT IDS 

For background information about the jIIr practicals please refer to the Adding an Event Triage Drop to the Community Bucket article