WEBVTT

0:00:03.660000 --> 0:00:06.880000
 Detection Rules for Incident Responders.

0:00:06.880000 --> 0:00:11.380000
 So welcome to the next section of this
 course, or I should say subsection,

0:00:11.380000 --> 0:00:17.920000
 where we'll be covering detection engineering
 very briefly and more specifically

0:00:17.920000 --> 0:00:22.720000
 to begin with in this video I'm going
 to be explaining detection rules

0:00:22.720000 --> 0:00:31.680000
 because I mentioned them in the previous
 video more specifically when

0:00:31.680000 --> 0:00:40.420000
 we went through the lab effectively
 using Splunk, where

0:00:40.420000 --> 0:00:44.920000
 we went through the process of creating
 searches to find specific activity

0:00:44.920000 --> 0:00:50.340000
 or logs as it were and I mentioned
 towards the tail end of that video

0:00:50.340000 --> 0:00:56.980000
 that, you know, I sort of explained
 the importance of, you know,

0:00:56.980000 --> 0:00:59.740000
 knowing how to write searches or, you
 know, finding what you're looking

0:00:59.740000 --> 0:01:05.260000
 for with regards to malicious or suspicious
 activity and I pointed out

0:01:05.260000 --> 0:01:10.360000
 the importance of that with regards
 to, you know, being able to write

0:01:10.360000 --> 0:01:16.380000
 rules, detection rules that can then
 be used to automatically trigger

0:01:16.380000 --> 0:01:22.440000
 alerts for that malicious activity that
 you are creating the search for.

0:01:22.440000 --> 0:01:26.860000
 So this video is aptly titled Detection
 Rules for Incident Responders, but

0:01:26.860000 --> 0:01:30.980000
 to begin with, you know, if you're not
 familiar with detection engineering

0:01:30.980000 --> 0:01:36.480000
 as a process or as a practice, I think
 it's only right that we go through

0:01:36.480000 --> 0:01:45.420000
 it because the process of writing detection
 rules is a subset that falls

0:01:45.420000 --> 0:01:47.120000
 under detection engineering.

0:01:47.120000 --> 0:01:50.240000
 So what is detection engineering?

0:01:50.240000 --> 0:01:54.580000
 Detection engineering is the discipline
 of designing, developing, testing

0:01:54.580000 --> 0:01:59.280000
 and refining detection logic, which is
 something we did to a certain extent

0:01:59.280000 --> 0:02:05.600000
 in order to identify
 suspicious or malicious behavior

0:02:05.600000 --> 0:02:07.660000
 in an environment, right?

0:02:07.660000 --> 0:02:12.020000
 And in the context of incident response,
 it is the strategic and tactical

0:02:12.020000 --> 0:02:16.040000
 layer that bridges the gap between threat
 intelligence, real world attacks

0:02:16.040000 --> 0:02:18.680000
 and actionable security alerts.

0:02:18.680000 --> 0:02:22.380000
 So detection engineering ensures that
 incident responders are equipped

0:02:22.380000 --> 0:02:27.100000
 with timely, relevant, and high-fidelity
 alerts, which enable them or you

0:02:27.100000 --> 0:02:31.800000
 to detect adversary behavior early in
 the attack chain, minimize the dwell

0:02:31.800000 --> 0:02:35.880000
 time and accelerate triage
 and containment efforts.

0:02:35.880000 --> 0:02:41.120000
 So this is a very important thing or
 process for you to understand, not

0:02:41.120000 --> 0:02:45.300000
 because, you know, as an incident responder,
 you'll be involved too much.

0:02:45.300000 --> 0:02:50.960000
 You're not going to be involved too
 much in detection engineering, but

0:02:50.960000 --> 0:02:54.500000
 understanding what it's all
 about is very important.

0:02:54.500000 --> 0:02:59.960000
 The extent to which you will be involved,
 I will get to shortly, especially

0:02:59.960000 --> 0:03:03.040000
 in the context of writing
 detection rules.

0:03:03.040000 --> 0:03:07.260000
 So before we get to detection rules,
 again, let's build on what I just

0:03:07.260000 --> 0:03:10.840000
 said by taking a look at the role of
 detection engineering in incident

0:03:10.840000 --> 0:03:16.620000
 response. And I'm referring to incident
 response here as the whole, you

0:03:16.620000 --> 0:03:18.380000
 know, incident response process.

0:03:18.380000 --> 0:03:22.940000
 So firstly, pre-incident or preparation,
 which we covered in the previous

0:03:22.940000 --> 0:03:26.640000
 course. So what's the role of
 detection engineering here?

0:03:26.640000 --> 0:03:30.620000
 Well, to build and deploy detection
 rules based on known threat actor

0:03:30.620000 --> 0:03:34.940000
 TTPs, so think of, you know, the MITRE
 ATT&CK framework as an example, or

0:03:34.940000 --> 0:03:40.260000
 using that knowledge base as an example.

0:03:40.260000 --> 0:03:44.820000
 Secondly, simulate attacks and refine
 rules through threat emulation or

0:03:44.820000 --> 0:03:49.060000
 purple teaming, so you can utilize detection
 engineering or that's where

0:03:49.060000 --> 0:03:51.460000
 this comes into play.

0:03:51.460000 --> 0:03:57.300000
 And you know, as I said, that typically
 comes under the preparation phase.

0:03:57.300000 --> 0:04:01.660000
 But of course, you know, you may be
 asking yourself, what role does it

0:04:01.660000 --> 0:04:03.500000
 play during an incident?

0:04:03.500000 --> 0:04:07.860000
 So what we're covering in this course,
 which is detection and analysis,

0:04:07.860000 --> 0:04:14.380000
 right? So alert logic triggers based
 on specific attacker behaviors and

0:04:14.380000 --> 0:04:19.360000
 incident responders rely on these detections
 to guide investigations,

0:04:19.360000 --> 0:04:22.140000
 validate alerts and scope compromise.

0:04:22.140000 --> 0:04:24.380000
 So that's the role it plays.

0:04:24.380000 --> 0:04:28.740000
 Yeah. So it's very important because
 without good detection engineering,

0:04:28.740000 --> 0:04:34.420000
 which you typically would facilitate
 or perform in the preparation phase,

0:04:34.420000 --> 0:04:38.300000
 detection and analysis will
 be affected severely.

0:04:38.300000 --> 0:04:42.860000
 You then have, you know, post-incident,
 also known as the lessons learned

0:04:42.860000 --> 0:04:44.340000
 phase of incident response.

0:04:44.340000 --> 0:04:47.000000
 So how does detection engineering
 apply here?

0:04:47.000000 --> 0:04:50.220000
 This is also quite important to
 you as an incident responder.

0:04:50.220000 --> 0:04:56.400000
 Well, responders essentially analyze
 gaps in detection and feed insights

0:04:56.400000 --> 0:04:59.340000
 back into improving rule
 logic and alert coverage.

0:04:59.340000 --> 0:05:03.960000
 Now, generally speaking, this would
 mean you as the incident responder

0:05:03.960000 --> 0:05:08.360000
 after or post the incident sending
 back, you know, your documentation

0:05:08.360000 --> 0:05:12.340000
 or what you've documented and your intelligence
 to the detection engineering

0:05:12.340000 --> 0:05:17.320000
 team or to the SOC team and essentially
 telling them, hey, you know, you

0:05:17.320000 --> 0:05:23.100000
 guys should probably build or, you know,
 improve our detection capabilities

0:05:23.100000 --> 0:05:27.240000
 by integrating the following, the following
 could be intelligence, et

0:05:27.240000 --> 0:05:38.500000
 cetera. Not as common as the first
 scenario I just mentioned, but what you

0:05:38.500000 --> 0:05:44.420000
 also see are instances where you as
 the incident responder

0:05:44.420000 --> 0:05:46.860000
 will probably have to write.

0:05:46.860000 --> 0:05:52.200000
 Let's say, for example, detection rules
 based on your findings, right?

0:05:52.200000 --> 0:05:58.420000
 So post-incident, sort of hand them
 over to the SOC team or,

0:05:58.420000 --> 0:06:02.120000
 you know, the team responsible or the
 individual responsible for detection

0:06:02.120000 --> 0:06:05.660000
 engineering and telling them to integrate
 it into the SIEM to improve

0:06:05.660000 --> 0:06:07.360000
 detection. Right.

0:06:07.360000 --> 0:06:09.140000
 So very, very important.

0:06:09.140000 --> 0:06:12.580000
 That's the role detection engineering
 plays in incident response.

0:06:12.580000 --> 0:06:16.600000
 So another question that you might
 have or you may have had since the

0:06:16.600000 --> 0:06:20.320000
 beginning of this video is why is this
 important or why does detection

0:06:20.320000 --> 0:06:26.520000
 engineering matter to you as an incident
 responder or an aspiring incident

0:06:26.520000 --> 0:06:31.180000
 responder? Well, detection engineering
 is typically seen, if you're familiar

0:06:31.180000 --> 0:06:35.120000
 with it, as a back-end task,
 but it really isn't.

0:06:35.120000 --> 0:06:37.060000
 It's not just a back-end task.

0:06:37.060000 --> 0:06:41.500000
 It's a collaborative function that
 enables proactive, responsive, and

0:06:41.500000 --> 0:06:47.400000
 accurate incident handling. And to the
 extent that we're covering it, which

0:06:47.400000 --> 0:06:52.540000
 is, you know, one of the points that
 you may have had in your mind

0:06:52.540000 --> 0:06:57.320000
 with regards to why this matters:
 because it's a collaborative function,

0:06:57.320000 --> 0:07:03.340000
 it's very important that at this
 point you understand

0:07:03.340000 --> 0:07:08.020000
 what it's all about and how
 detection engineering relates to

0:07:08.020000 --> 0:07:09.820000
 you as an incident responder.

0:07:09.820000 --> 0:07:14.400000
 So by understanding detection engineering,
 incident responders can, A,

0:07:14.400000 --> 0:07:18.380000
 familiarize themselves with how alerts
 are built, which is very important

0:07:18.380000 --> 0:07:23.220000
 and what they truly represent because
 as an incident responder, your primary

0:07:23.220000 --> 0:07:29.180000
 input in terms of when your
 job begins is alerts, right?

0:07:29.180000 --> 0:07:30.600000
 That have been escalated to you.

0:07:30.600000 --> 0:07:35.520000
 And if detection engineering is weak
 or is not done correctly, you're

0:07:35.520000 --> 0:07:39.160000
 going to deal, you know, as part of the
 triage with a lot of false positives,

0:07:39.160000 --> 0:07:40.460000
 so on and so forth.

0:07:40.460000 --> 0:07:43.960000
 And what that means is it makes your
 job a lot more cumbersome than it

0:07:43.960000 --> 0:07:50.120000
 should be. B, it can contribute to
 detection tuning and validation.

0:07:50.120000 --> 0:07:55.560000
 And C, it helps prioritize and evolve the
 organization's detection strategies

0:07:55.560000 --> 0:07:57.660000
 based on real attack scenarios, right?

0:07:57.660000 --> 0:08:01.940000
 So essentially using intelligence from
 real incidents that you are dealing

0:08:01.940000 --> 0:08:08.860000
 with and sort of feeding that intelligence
 back

0:08:08.860000 --> 0:08:15.540000
 into the process to, in this particular
 case, improve detection or detection

0:08:15.540000 --> 0:08:20.020000
 capabilities. So that brings us to another
 key point because you may be

0:08:20.020000 --> 0:08:23.740000
 asking yourself or you may be saying,
 okay, that makes sense, Alexis,

0:08:23.740000 --> 0:08:29.160000
 but with regards to me as an incident
 responder and detection engineering

0:08:29.160000 --> 0:08:34.560000
 being something that I should be familiar
 with, what are the, you know,

0:08:34.560000 --> 0:08:37.780000
 core detection engineering skills
 that I need to possess?

0:08:37.780000 --> 0:08:42.800000
 Well, firstly, this is again based on
 my experience as well as industry

0:08:42.800000 --> 0:08:49.560000
 requirements for incident responders,
 and this is not

0:08:49.560000 --> 0:08:53.760000
 an exhaustive or complete list that essentially
 says this is all that you

0:08:53.760000 --> 0:08:55.700000
 need to know and nothing else.

0:08:55.700000 --> 0:08:59.880000
 But this is generally speaking what
 I found and, you know, based on my

0:08:59.880000 --> 0:09:04.440000
 experience working in a SOC being an
 incident responder, et cetera, you

0:09:04.440000 --> 0:09:06.860000
 have writing and testing detection rules.


0:09:06.860000 --> 0:09:11.420000
 So you should have, you know, very
 basic ability, I wouldn't even say,

0:09:11.420000 --> 0:09:16.440000
 you know, expert ability, but basic ability
 to craft detection logic using,

0:09:16.440000 --> 0:09:22.940000
 you know, the languages specific
 to the SIEMs that you're using.

0:09:22.940000 --> 0:09:29.780000
 So, you know, we have SPL, which is
 specific to Splunk, KQL, you know,

0:09:29.780000 --> 0:09:35.180000
 the Kibana Query Language for the ELK stack
 or SIEMs that are built on top of it,

0:09:35.180000 --> 0:09:41.620000
 so think of Sentinel and its Kusto Query
 Language as another example, YARA, or vendor-specific DSLs,

0:09:41.620000 --> 0:09:49.320000
 right? What this also means is the ability
 to validate and test detection

0:09:49.320000 --> 0:09:52.160000
 rules using real or emulated data.

0:09:52.160000 --> 0:09:55.400000
 So what this means is during the preparation
 phase or when you're not

0:09:55.400000 --> 0:09:59.280000
 dealing with an incident and you're
 working on, you know, you're working

0:09:59.280000 --> 0:10:03.920000
 with the detection engineering team or
 the SOC team to, you know, improve

0:10:03.920000 --> 0:10:08.820000
 detection capabilities, you should be
 able to, you know, validate or test

0:10:08.820000 --> 0:10:14.360000
 detection rules, I should say newly
 created detection rules to see if

0:10:14.360000 --> 0:10:18.520000
 the consequent
 alerts are being triggered.
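
To make that concrete, here's a minimal Python sketch of that validation loop, assuming event fields and rule logic of my own invention (the Mimikatz module name is real, but none of this is taken from a specific SIEM or EDR):

```python
# Hypothetical sketch of validating a detection rule against emulated data:
# generate synthetic events the way a threat-emulation tool would, run the
# rule over them, and check that the expected alert fires. The rule logic
# and field names here are illustrative, not from any specific product.

def credential_dump_rule(event):
    """Alert on command lines containing a well-known Mimikatz module call."""
    cmdline = event.get("command_line", "").lower()
    return "sekurlsa::logonpasswords" in cmdline

# Emulated events: one simulated attack, one benign process launch.
emulated_events = [
    {"command_line": 'mimikatz.exe "sekurlsa::logonpasswords"'},
    {"command_line": "ping 8.8.8.8"},
]

alerts = [e for e in emulated_events if credential_dump_rule(e)]
print(len(alerts))  # 1: only the emulated attack triggers the rule
```

The point is the loop itself: emulate, run the rule, confirm the alert fires, then tune.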

0:10:18.520000 --> 0:10:21.520000
 And the way to do that is to emulate
 the attacks that they're supposed

0:10:21.520000 --> 0:10:26.560000
 to detect, that those rules are supposed
 to detect, and alert SOC analysts

0:10:26.560000 --> 0:10:32.480000
 or responders to. An example of how you can
 do that is through attack simulation,

0:10:32.480000 --> 0:10:36.200000
 threat emulation, and that can be achieved
 using tools like Atomic Red

0:10:36.200000 --> 0:10:42.200000
 Team, right? You then have the other
 core skill with regards to detection

0:10:42.200000 --> 0:10:45.580000
 engineering. And that's all to do with
 alert triage and false positive

0:10:45.580000 --> 0:10:50.780000
 reduction. So you should be able
 to analyze the quality of alerts.

0:10:50.780000 --> 0:10:55.780000
 So think of relevance, context and noise,
 and then tune rules by adjusting

0:10:55.780000 --> 0:10:59.400000
 filters, thresholds or
 enrichment sources.

0:10:59.400000 --> 0:11:03.860000
 Another skill that, you know, is quite
 useful to have is incident-to-detection

0:11:03.860000 --> 0:11:08.740000
 feedback. So you should be able to translate
 incident learnings or what

0:11:08.740000 --> 0:11:14.300000
 you've learned into new detection content
 or, you know, you should essentially

0:11:14.300000 --> 0:11:17.780000
 take what you've learned and use that
 to make the detection capabilities

0:11:17.780000 --> 0:11:19.900000
 better next time, right?

0:11:19.900000 --> 0:11:24.740000
 And you should be able to create retrospective
 queries to detect undetected

0:11:24.740000 --> 0:11:30.140000
 past activity. And then collaboration
 with threat hunters and engineers.

0:11:30.140000 --> 0:11:34.840000
 So you need to be able to communicate
 clearly with detection

0:11:34.840000 --> 0:11:39.160000
 engineers to propose rule improvements
 or request new logic.

0:11:39.160000 --> 0:11:43.640000
 And you should also, you know, be sharing
 TTPs and IR findings in actionable

0:11:43.640000 --> 0:11:48.300000
 formats or formats that are, you know,
 fairly easy for the detection

0:11:48.300000 --> 0:11:53.040000
 engineers to incorporate, which is
 why I said being familiar with how

0:11:53.040000 --> 0:11:57.700000
 to, you know, let's say, write rules
 can be very important because if

0:11:57.700000 --> 0:12:01.820000
 you can give the detection engineering
 team or individual responsible

0:12:01.820000 --> 0:12:05.880000
 for that, you know, if you can give
 them the rule, you know, ready to

0:12:05.880000 --> 0:12:09.800000
 go, then they can immediately
 incorporate it, right?

0:12:09.800000 --> 0:12:13.700000
 Which is, you know, quite useful.

0:12:13.700000 --> 0:12:18.520000
 So now that you have an understanding
 of what detection engineering is

0:12:18.520000 --> 0:12:22.720000
 all about, let's take a look at one aspect
 of detection engineering, which

0:12:22.720000 --> 0:12:28.920000
 in my opinion is quite important for
 you to understand as a, I wouldn't

0:12:28.920000 --> 0:12:30.240000
 even say it's quite important.

0:12:30.240000 --> 0:12:33.940000
 It's very important for you to understand
 as an incident responder or an

0:12:33.940000 --> 0:12:35.680000
 aspiring incident responder.

0:12:35.680000 --> 0:12:41.560000
 And that is the process or the ability
 to write detection rules.

0:12:41.560000 --> 0:12:45.080000
 But before we get into actually writing
 detection rules, you need to be

0:12:45.080000 --> 0:12:48.120000
 familiar with the various
 types of detection rules.

0:12:48.120000 --> 0:12:55.400000
 So to begin with, what
 are detection rules?

0:12:55.400000 --> 0:12:59.800000
 Rules are logic-based instructions that
 are used by security platforms.

0:12:59.800000 --> 0:13:04.900000
 So think of your SIEM, endpoint detection
 and response (EDR) systems, or NDRs,

0:13:04.900000 --> 0:13:10.480000
 to identify suspicious or malicious activity
 across an environment, right?

0:13:10.480000 --> 0:13:14.280000
 And these rules help in automating the
 monitoring, alerting, and sometimes

0:13:14.280000 --> 0:13:17.960000
 even initial triage of security events.

0:13:17.960000 --> 0:13:22.700000
 So these rules are predefined logic-based
 expressions used to identify,

0:13:22.700000 --> 0:13:28.360000
 as I said, suspicious behavior, malicious
 activities, or even deviations

0:13:28.360000 --> 0:13:32.400000
 from expected baselines within
 an IT environment.

0:13:32.400000 --> 0:13:36.720000
 So detection rules enable automated
 monitoring and real time alerting,

0:13:36.720000 --> 0:13:41.820000
 allowing security teams to swiftly identify
 and triage potential threats,

0:13:41.820000 --> 0:13:45.600000
 whether they're deployed in a SIEM
 platform like Splunk, an EDR system,

0:13:45.600000 --> 0:13:49.520000
 or a network detection and
 response system, NDR.

0:13:49.520000 --> 0:13:54.060000
 Detection rules act as the first line
 of defense in threat detection.

0:13:54.060000 --> 0:13:58.980000
 So if you remember when we were going
 through the lab effectively using

0:13:58.980000 --> 0:14:05.380000
 ELK, because there weren't

0:14:05.380000 --> 0:14:09.500000
 any predefined searches available for
 us on that particular deployment

0:14:09.500000 --> 0:14:13.920000
 of ELK and, you know, there
 weren't any alerts created,

0:14:13.920000 --> 0:14:20.440000
 we needed to create searches
 to look for activity that,

0:14:20.440000 --> 0:14:22.980000
 you know, is malicious.

0:14:22.980000 --> 0:14:27.260000
 And that is not how a SIEM should work.

0:14:27.260000 --> 0:14:32.840000
 A SIEM should be automatically telling us
 what is wrong or should automatically

0:14:32.840000 --> 0:14:38.100000
 be notifying us of, you know, what suspicious
 activity is being recorded.

0:14:38.100000 --> 0:14:43.980000
 And the way to do that is to, again,
 write detection rules that take the

0:14:43.980000 --> 0:14:49.580000
 logic of the searches that we created
 and automatically, you know, keep

0:14:49.580000 --> 0:14:55.240000
 an eye out for logs that meet the criteria
 specified within the searches.

0:14:55.240000 --> 0:15:01.560000
 And when a log meets that criteria,
 you can configure a rule to

0:15:01.560000 --> 0:15:05.720000
 trigger an alert that then notifies
 the relevant parties that should be

0:15:05.720000 --> 0:15:09.060000
 notified to begin with.

0:15:09.060000 --> 0:15:12.320000
 So for incident responders,
 these rules are critical.

0:15:12.320000 --> 0:15:17.020000
 They provide visibility into the what,
 when and where of an incident,

0:15:17.020000 --> 0:15:22.280000
 helping analysts prioritize and investigate
 alerts with context-rich data.

0:15:22.280000 --> 0:15:26.220000
 The key point here is that well-crafted
 detection rules can mean the difference

0:15:26.220000 --> 0:15:31.540000
 between detecting an adversary early or
 only realizing a breach has occurred

0:15:31.540000 --> 0:15:35.860000
 after the fact. So again, if you go
 back to that lab demo or the video

0:15:35.860000 --> 0:15:41.020000
 where we used the lab effectively
 using ELK, if you

0:15:41.020000 --> 0:15:44.660000
 remember, the data set
 was from 2018, right?

0:15:44.660000 --> 0:15:49.220000
 And we were pretty much detecting malicious
 activity within 2018 or that

0:15:49.220000 --> 0:15:52.420000
 occurred within that particular
 period in time. Now granted,

0:15:52.420000 --> 0:15:57.300000
 that data set was ingested manually
 for the purposes of learning.

0:15:57.300000 --> 0:16:02.000000
 But can you imagine if we were using
 that ELK deployment just, you know,

0:16:02.000000 --> 0:16:07.620000
 as it was without creating any alerts
 or writing any detection rules,

0:16:07.620000 --> 0:16:12.300000
 we would have only found this malicious
 activity had we known what to

0:16:12.300000 --> 0:16:17.300000
 look for, which is something that's
 even more scary, right?

0:16:17.300000 --> 0:16:21.200000
 So anyway, just wanted to use that example
 to contextualize what I'm saying

0:16:21.200000 --> 0:16:24.260000
 here to show you the importance
 of detection rules.

0:16:24.260000 --> 0:16:26.920000
 So finally, the types of detection rules.


0:16:26.920000 --> 0:16:30.000000
 Now, this is very important
 to understand.

0:16:30.000000 --> 0:16:34.860000
 And as I said, it's fairly easy
 to understand, but you need to see

0:16:34.860000 --> 0:16:38.040000
 it in practice. That's why
 I've provided examples.

0:16:38.040000 --> 0:16:40.320000
 So the first is
 signature-based detection.

0:16:40.320000 --> 0:16:43.920000
 That's arguably the most basic or, you
 know, the earliest type of detection

0:16:43.920000 --> 0:16:48.560000
 out there, which works by
 matching known indicators.

0:16:48.560000 --> 0:16:51.360000
 So think of file hashes, IPs, or strings

0:16:51.360000 --> 0:16:53.700000
 against observed data.

0:16:53.700000 --> 0:16:57.780000
 So you match known indicators of malicious
 activity or activity that you

0:16:57.780000 --> 0:17:01.140000
 want to detect against observed data.

0:17:01.140000 --> 0:17:05.520000
 So for example, if you classify a specific
 Windows event ID as malicious

0:17:05.520000 --> 0:17:12.460000
 or interesting, let's say, then you would
 essentially be creating a, you know,

0:17:12.460000 --> 0:17:15.560000
 signature-based rule, or you would essentially
 be dealing with signature-based

0:17:15.560000 --> 0:17:19.220000
 detection. Now, of course, I know in
 this particular case, it's referring

0:17:19.220000 --> 0:17:23.360000
 specifically to indicators of compromise.


0:17:23.360000 --> 0:17:27.920000
 So that's why the examples provided
 are file hashes, IPs, or strings.

0:17:27.920000 --> 0:17:33.620000
 So essentially using what is known to be
 malicious or indicators of compromise

0:17:33.620000 --> 0:17:37.180000
 against observed data or the logs.

0:17:37.180000 --> 0:17:40.260000
 And this is typically, signature-based
 detection is typically used to

0:17:40.260000 --> 0:17:44.620000
 detect known malware, exploits,
 or specific threat actor TTPs.

0:17:44.620000 --> 0:17:46.880000
 And here's an example of a YARA rule.

0:17:46.880000 --> 0:17:49.640000
 I'll not get into what that is right now.


0:17:49.640000 --> 0:17:52.800000
 But the rule syntax is fairly
 easy to understand.

0:17:52.800000 --> 0:17:54.820000
 The rule is called suspicious_mimikatz.

0:17:54.820000 --> 0:17:59.120000
 So judging by the description of
 the rule, you can probably tell what

0:17:59.120000 --> 0:18:01.980000
 it's supposed to detect.

0:18:01.980000 --> 0:18:04.420000
 It's supposed to detect Mimikatz, right?

0:18:04.420000 --> 0:18:07.000000
 So this is a very basic rule.

0:18:07.000000 --> 0:18:11.520000
 In fact, you can actually use this,
 but I'm trying to explain the logic,

0:18:11.520000 --> 0:18:16.040000
 right? So what you're looking for,
 because it's signature-based, it's

0:18:16.040000 --> 0:18:18.780000
 obviously going to rely on signatures.

0:18:18.780000 --> 0:18:21.880000
 In this case, the signature
 would be a string.

0:18:21.880000 --> 0:18:25.800000
 So in this case, a variable is created
 called $a that holds the actual

0:18:25.800000 --> 0:18:28.440000
 string of the signature
 that you want to detect.

0:18:28.440000 --> 0:18:33.520000
 In this case, what we're trying
 to look for is Mimikatz.

0:18:33.520000 --> 0:18:35.760000
 And you then have your condition.

0:18:35.760000 --> 0:18:40.700000
 And in this case, we specify the condition:
 $a, whatever the case may be.

0:18:40.700000 --> 0:18:42.300000
 So it's not really a variable.

0:18:42.300000 --> 0:18:47.020000
 In this case, in the context of
 a SIEM, it would be the field.

0:18:47.020000 --> 0:18:54.660000
 So you say match or create an alert when
 the following rule is triggered.

0:18:54.660000 --> 0:18:58.380000
 The rule is specifying that if a particular
 field, let's say if it was

0:18:58.380000 --> 0:19:01.700000
 event ID, so the field
 name would be event ID.

0:19:01.700000 --> 0:19:09.960000
 When event ID is equal to 4569,
 for example, then do this or trigger

0:19:09.960000 --> 0:19:11.860000
 this particular alert.

0:19:11.860000 --> 0:19:12.820000
 So that's pretty much it.

0:19:12.820000 --> 0:19:17.820000
 As I said, signature-based detection
 is very specific to malware, exploits,

0:19:17.820000 --> 0:19:20.900000
 or specific threat actor TTPs.
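
To illustrate the logic such a signature-based rule encodes, here's a rough Python sketch of my own (not the actual YARA engine): a known-bad string checked against observed data.

```python
# Rough sketch of signature-based matching: a known-bad string (the kind of
# signature a YARA rule's $a variable would hold) checked against observed
# data. Purely illustrative; a real YARA engine does far more than this.

SIGNATURE = "mimikatz"  # the string signature we want to detect

def signature_match(observed: str) -> bool:
    """Return True when the known signature appears in the observed data."""
    return SIGNATURE in observed.lower()

print(signature_match("Invoke-Mimikatz -DumpCreds"))  # True: signature found
print(signature_match("notepad.exe started"))         # False: no match
```

The strength and the weakness are the same thing: it only fires on what it already knows.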

0:19:20.900000 --> 0:19:23.880000
 You then have behavior-based detection.

0:19:23.880000 --> 0:19:28.400000
 Behavior-based detection works by monitoring
 sequences of actions or unusual

0:19:28.400000 --> 0:19:33.160000
 behavior patterns that deviate
 from predefined baselines.

0:19:33.160000 --> 0:19:37.060000
 And it's typically used to detect activity
 like credential dumping, privilege

0:19:37.060000 --> 0:19:38.820000
 escalation, or lateral movement.

0:19:38.820000 --> 0:19:43.460000
 Things that can be quite difficult
 to detect in the entirety and would

0:19:43.460000 --> 0:19:44.820000
 require some correlation.

0:19:44.820000 --> 0:19:50.340000
 And that's why the technology, the
 detection technology that leverages

0:19:50.340000 --> 0:19:53.960000
 behavior-based detection is
 obviously going to be EDRs.

0:19:53.960000 --> 0:19:56.620000
 So here's an example of an EDR rule.

0:19:56.620000 --> 0:19:58.980000
 And the logic in this
 case is fairly simple.

0:19:58.980000 --> 0:20:03.740000
 So if the process name, that's your
 field there, but if process name is

0:20:03.740000 --> 0:20:10.040000
 equal to lsass.exe and parent name is not
 equal to winlogon.exe, then alert.

0:20:10.040000 --> 0:20:13.160000
 So what this means in the context of
 Windows, which is what this rule

0:20:13.160000 --> 0:20:19.660000
 has been created for, is if you see
 a process called lsass.exe and the

0:20:19.660000 --> 0:20:26.520000
 originator is not winlogon.exe,
 then trigger an alert.

0:20:26.520000 --> 0:20:32.240000
 And why is this representative
 of malicious activity?

0:20:32.240000 --> 0:20:37.780000
 Because of LSASS, or if we speak specifically
 to the second logical operation

0:20:37.780000 --> 0:20:45.220000
 here, you're looking for any invocations
 of LSASS, and a normal user

0:20:45.220000 --> 0:20:49.040000
 would not be launching lsass.exe.

0:20:49.040000 --> 0:20:53.900000
 We'll get into that when we'll be getting

0:20:53.900000 --> 0:20:57.380000
 into the analysis section of this course,
 where these processes will become

0:20:57.380000 --> 0:21:05.500000
 relevant. A normal user would not be
 instantiating lsass.exe manually,

0:21:05.500000 --> 0:21:08.800000
 so it's suspicious behavior.

0:21:08.800000 --> 0:21:19.340000
 And this is quantified by specifying that
 if there's another

0:21:19.340000 --> 0:21:25.060000
 lsass process and it's not been created
 or triggered by winlogon.exe, then

0:21:25.060000 --> 0:21:26.120000
 create the alert.

0:21:26.120000 --> 0:21:29.860000
 And I'll not get into what that means
 here, but it means that if LSASS

0:21:29.860000 --> 0:21:35.060000
 is executed under any other context,
 other than the one it should be when

0:21:35.060000 --> 0:21:38.500000
 a user logs onto their system (LSASS
 is already created because of the

0:21:38.500000 --> 0:21:44.740000
 purpose it serves), then that should
 be something, or that's something

0:21:44.740000 --> 0:21:49.540000
 that is an indicator or something
 that needs to be analyzed.
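
Putting that rule's logic into a small Python sketch of my own (field names are hypothetical; real EDRs use their own query DSLs and schemas):

```python
# Sketch of the behavior-based rule described above: alert when lsass.exe
# is spawned by anything other than the parent the rule expects.
# Field names are hypothetical; real EDRs use their own schemas.

def lsass_rule(process_name: str, parent_name: str) -> bool:
    """Return True (alert) when lsass.exe has an unexpected parent."""
    return (process_name.lower() == "lsass.exe"
            and parent_name.lower() != "winlogon.exe")

print(lsass_rule("lsass.exe", "cmd.exe"))       # True: unexpected parent
print(lsass_rule("lsass.exe", "winlogon.exe"))  # False: expected parent
print(lsass_rule("notepad.exe", "cmd.exe"))     # False: not lsass at all
```

Note that this matches behavior (a parent-child relationship), not a fixed indicator, which is the whole difference from the signature-based example.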

0:21:49.540000 --> 0:21:56.900000
 You then have the other type of detection
 rules, which is heuristic or

0:21:56.900000 --> 0:21:58.420000
 anomaly-based detection.

0:21:58.420000 --> 0:22:03.340000
 So as the name suggests, this uses statistical
 methods or machine learning

0:22:03.340000 --> 0:22:06.080000
 to detect anomalies from normal behavior.


0:22:06.080000 --> 0:22:10.460000
 So it is used to detect outliers in
 user behavior, network traffic, or

0:22:10.460000 --> 0:22:11.800000
 file access patterns.

0:22:11.800000 --> 0:22:18.780000
 And an example here of what this detection
 rule would look like

0:22:18.780000 --> 0:22:21.220000
 is the following.

0:22:21.220000 --> 0:22:24.040000
 So you can see it's very heuristic-based.


0:22:24.040000 --> 0:22:29.760000
 So create an alert or trigger an alert
 if data transfer exceeds 10 gigabytes

0:22:29.760000 --> 0:22:33.160000
 an hour, and user is not in IT group.

0:22:33.160000 --> 0:22:37.260000
 So you can see this is based on user
 activity, user behavior, network

0:22:37.260000 --> 0:22:41.640000
 traffic, stuff like this, which
 again is very, very useful.
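
That heuristic could be sketched in Python like this (the 10 GB threshold comes from the example; the group membership and field names are my own stand-ins):

```python
# Sketch of the anomaly rule above: alert when data transfer exceeds
# 10 GB per hour and the user is not in the IT group. The IT group
# membership here is a hypothetical stand-in for a directory lookup.

GB = 1024 ** 3
IT_GROUP = {"alice", "bob"}

def exfil_rule(user: str, bytes_per_hour: int) -> bool:
    """Return True (alert) on unusually large transfers by non-IT users."""
    return bytes_per_hour > 10 * GB and user not in IT_GROUP

print(exfil_rule("mallory", 12 * GB))  # True: large transfer, non-IT user
print(exfil_rule("alice", 12 * GB))    # False: IT users are excluded
print(exfil_rule("mallory", 1 * GB))   # False: under the threshold
```

In practice the baseline would be learned statistically rather than hard-coded, but the shape of the logic is the same.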

0:22:41.640000 --> 0:22:45.160000
 You then have threshold-based detection,
 which you probably are familiar

0:22:45.160000 --> 0:22:48.900000
 with. So we have a lot of different
 types of detection rules.

0:22:48.900000 --> 0:22:51.220000
 This works by triggering alerts when
 a predefined threshold is exceeded.

0:22:51.220000 --> 0:22:55.240000
 What that threshold is is entirely
 up to you to define.

0:22:55.240000 --> 0:22:59.840000
 And the use case for this type of detection
 rule, as I said, you're probably

0:22:59.840000 --> 0:23:03.180000
 familiar with it, even though you may not
 have been able to

0:23:03.180000 --> 0:23:09.180000
 describe it logically or technically,
 is you know, to detect DDoS attacks,

0:23:09.180000 --> 0:23:11.640000
 brute force attempts,
 or mass file deletion.

0:23:11.640000 --> 0:23:16.400000
 So here's an example of a SIEM rule
 that's written in the Kibana Query

0:23:16.400000 --> 0:23:21.800000
 Language. So the name here is called
 failed logons, self-explanatory.

0:23:21.800000 --> 0:23:27.100000
 So you start off by saying where the
 result type is equal to, we have a

0:23:27.100000 --> 0:23:30.800000
 hex value there, it's not really important
 what that means, then summarize the

0:23:30.800000 --> 0:23:37.860000
 count by account name.

0:23:37.860000 --> 0:23:41.700000
 And then where the count
 is greater than 20.

0:23:41.700000 --> 0:23:51.360000
 Okay. So the result type refers
 to a specific field within

0:23:51.360000 --> 0:23:57.880000
 a Windows event log, or is specific
 to a Windows event ID.

0:23:57.880000 --> 0:24:01.220000
 In any case, we don't need
 to get into that right now.

0:24:01.220000 --> 0:24:04.940000
 But that's what threshold-based
 detection is all about.

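Stripped of any query language, that threshold logic amounts to counting events per account and comparing against the threshold. Here's a minimal Python sketch; the event dictionaries and the use of Windows event ID 4625 (failed logon) are illustrative assumptions, not taken from the rule above:

```python
from collections import Counter

FAILED_LOGON_EVENT_ID = 4625  # Windows "An account failed to log on"
THRESHOLD = 20

def accounts_over_threshold(events: list) -> list:
    """Count failed logons per account; return accounts above the threshold."""
    counts = Counter(
        e["account_name"] for e in events if e["event_id"] == FAILED_LOGON_EVENT_ID
    )
    return [name for name, count in counts.items() if count > THRESHOLD]

# 25 failures for "alice" exceed the threshold of 20; 5 for "bob" do not.
events = (
    [{"event_id": 4625, "account_name": "alice"}] * 25
    + [{"event_id": 4625, "account_name": "bob"}] * 5
)
print(accounts_over_threshold(events))  # ['alice']
```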
0:24:04.940000 --> 0:24:13.920000
 You then have, I wouldn't call this a
 detection rule per se, but you know,

0:24:13.920000 --> 0:24:17.060000
 I think it's quite important
 for you to factor this in.

0:24:17.060000 --> 0:24:18.940000
 So that is TTP-based detection.

0:24:18.940000 --> 0:24:22.720000
 So this is MITRE ATT&CK-aligned.

0:24:22.720000 --> 0:24:26.640000
 So the way this works is by matching
 tactics, techniques, and procedures,

0:24:26.640000 --> 0:24:33.000000
 also known as TTPs, which are native
 or specific to the MITRE ATT&CK framework,

0:24:33.000000 --> 0:24:36.620000
 used by or attributed to adversaries
 based on the framework.

0:24:36.620000 --> 0:24:40.560000
 So the MITRE ATT&CK framework, which I've
 not yet introduced, will be covering

0:24:40.560000 --> 0:24:44.960000
 it in the next course, where we'll be taking
 a look at threat intelligence and

0:24:44.960000 --> 0:24:50.980000
 threat hunting. The MITRE ATT&CK framework
 is a knowledge base, a tool

0:24:50.980000 --> 0:25:07.660000
 used to accurately map out or identify
 adversary TTPs, so tradecraft.

0:25:07.660000 --> 0:25:09.540000
 So you know, what adversaries do.

0:25:09.540000 --> 0:25:15.640000
 And the way the MITRE ATT&CK framework works
 is by breaking down the

0:25:15.640000 --> 0:25:18.940000
 attack lifecycle into tactics,
 techniques, and procedures.

0:25:18.940000 --> 0:25:25.060000
 A tactic represents a particular phase
 within the attack lifecycle.

0:25:25.060000 --> 0:25:31.940000
 So initial access, privilege escalation,
 stuff like this. Techniques

0:25:31.940000 --> 0:25:37.880000
 represent, as the name suggests, specific
 techniques or objectives within

0:25:37.880000 --> 0:25:41.500000
 a particular attack, and procedures represent
 different ways of implementing

0:25:41.500000 --> 0:25:43.180000
 a technique. The details don't really matter here.

0:25:43.180000 --> 0:25:48.580000
 The bottom line is, you can leverage
 tradecraft that has already been

0:25:48.580000 --> 0:25:56.720000
 observed and attributed to adversaries on

0:25:56.720000 --> 0:26:00.120000
 the MITRE ATT&CK framework website.

0:26:00.120000 --> 0:26:04.960000
 You know, tradecraft that's already been
 attributed to known threat actors

0:26:04.960000 --> 0:26:12.260000
 as a way of taking a look at their tradecraft,
 or what tactics, techniques,

0:26:12.260000 --> 0:26:13.920000
 and procedures they've been known to use.


0:26:13.920000 --> 0:26:19.420000
 And then using that information to
 build detection rules for that, to

0:26:19.420000 --> 0:26:24.020000
 essentially detect that specific activity
 or adversarial tradecraft.

0:26:24.020000 --> 0:26:28.780000
 So the use case, as I've already explained
 in quite a lengthy way,

0:26:28.780000 --> 0:26:31.240000
 is coverage mapping for
 attack techniques.

0:26:31.240000 --> 0:26:35.000000
 So basically what this means is,

0:26:35.000000 --> 0:26:39.980000
 you're essentially using the

0:26:39.980000 --> 0:26:44.400000
 ATT&CK framework and its definitions,
 the technique IDs, and so forth,

0:26:44.400000 --> 0:26:46.780000
 to build detection rules.

0:26:46.780000 --> 0:26:51.320000
 So if you want to build a detection
 rule for, you know, let's

0:26:51.320000 --> 0:26:59.680000
 say, privilege escalation via a specific
 technique, you can do so.

0:26:59.680000 --> 0:27:04.060000
 And this is an example of what this
 type of detection rule would look

0:27:04.060000 --> 0:27:06.860000
 like in the Splunk SPL language.

0:27:06.860000 --> 0:27:10.060000
 So index is equal to win event log.

0:27:10.060000 --> 0:27:11.640000
 We're already familiar with Splunk.

0:27:11.640000 --> 0:27:13.000000
 So you know what that means.

0:27:13.000000 --> 0:27:18.080000
 So we're looking for, we're limiting
 the search to a specific index.

0:27:18.080000 --> 0:27:20.940000
 Event code is 4688.

0:27:20.940000 --> 0:27:24.860000
 And then you say where the new process
 name is PowerShell.exe and command

0:27:24.860000 --> 0:27:28.640000
 line matches the wildcarded
 *EncodedCommand* pattern.

0:27:28.640000 --> 0:27:32.880000
 So you're essentially leveraging, you
 know, adversarial activity

0:27:32.880000 --> 0:27:38.700000
 used by or attributed to adversaries,
 you know, based on frameworks

0:27:38.700000 --> 0:27:42.840000
 like MITRE ATT&CK to, you know,
 write your detection rules.

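To show just the matching logic of that SPL search outside of Splunk, here's a small Python sketch; the field names (`EventCode`, `NewProcessName`, `CommandLine`) mirror the 4688 process-creation fields used in the search, and the sample event is invented:

```python
import fnmatch

def matches_encoded_powershell(event: dict) -> bool:
    """Mimic the SPL filter: EventCode 4688, a PowerShell process,
    and an EncodedCommand argument anywhere on the command line."""
    return (
        event.get("EventCode") == 4688
        and event.get("NewProcessName", "").lower().endswith("powershell.exe")
        and fnmatch.fnmatch(event.get("CommandLine", ""), "*EncodedCommand*")
    )

# Invented sample event resembling a Windows 4688 process-creation log.
event = {
    "EventCode": 4688,
    "NewProcessName": r"C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe",
    "CommandLine": "powershell.exe -EncodedCommand SQBFAFgA...",
}
print(matches_encoded_powershell(event))  # True
```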
0:27:42.840000 --> 0:27:46.440000
 So coverage mapping for
 attack techniques.

0:27:46.440000 --> 0:27:50.400000
 So you're using it in many ways, one
 of which is a framework for coverage

0:27:50.400000 --> 0:27:52.920000
 mapping with regards to detection.

0:27:52.920000 --> 0:27:57.700000
 The other is to, you know, build detection
 rules specific to the tradecraft

0:27:57.700000 --> 0:28:00.520000
 attributed to a particular threat actor.

0:28:00.520000 --> 0:28:04.780000
 So, for example, you can create detection
 rules that will detect activity

0:28:04.780000 --> 0:28:11.100000
 that has been attributed to a known
 threat actor like APT40.

0:28:11.100000 --> 0:28:15.440000
 And the framework through which
 this is facilitated

0:28:15.440000 --> 0:28:17.480000
 is the MITRE ATT&CK framework.

0:28:17.480000 --> 0:28:20.540000
 So I know I've sort of taken a long
 time to explain that, but it's very

0:28:20.540000 --> 0:28:24.700000
 important. You then have correlation
 rules, which I should have put at

0:28:24.700000 --> 0:28:29.880000
 the beginning, but they will
 make much more sense now.

0:28:29.880000 --> 0:28:34.320000
 So correlation rules work by combining
 multiple detections or events over

0:28:34.320000 --> 0:28:36.180000
 time to detect complex attacks.

0:28:36.180000 --> 0:28:40.760000
 So the use case here is to detect
 multi-step attacks like phishing,

0:28:40.760000 --> 0:28:44.380000
 which leads to credential theft
 and then lateral movement.

0:28:44.380000 --> 0:28:48.680000
 So an example of what a correlation
 rule would look like here, which can

0:28:48.680000 --> 0:28:53.260000
 be very advanced to write out.

0:28:53.260000 --> 0:28:58.980000
 So trigger an alert
 based on the following.

0:28:58.980000 --> 0:29:03.060000
 So there's the rule number one,
 detect phishing email delivery.

0:29:03.060000 --> 0:29:06.920000
 Okay, rule number two, detect
 unusual VPN login after email.

0:29:06.920000 --> 0:29:08.560000
 Now those are standard rules.

0:29:08.560000 --> 0:29:11.360000
 It doesn't matter what type
 of detection rule they are.

0:29:11.360000 --> 0:29:15.180000
 Rule one and two can be signature
 based or whatever.

0:29:15.180000 --> 0:29:20.240000
 But the correlation rule is triggered
 if both rule one and two occur within

0:29:20.240000 --> 0:29:25.100000
 30 minutes. Okay, which makes sense,
 because a phishing attack

0:29:25.100000 --> 0:29:32.560000
 would constitute a lot of related activity
 occurring within a short window,

0:29:32.560000 --> 0:29:42.860000
 not activity spread out
 over days. In this particular

0:29:42.860000 --> 0:29:48.200000
 case, looking at this example, you
 would not configure the correlation

0:29:48.200000 --> 0:29:52.920000
 rule to be triggered over

0:29:52.920000 --> 0:29:55.380000
 too long a window.

0:29:55.380000 --> 0:30:00.800000
 If there's a detection
 of a phishing email, you would

0:30:00.800000 --> 0:30:05.800000
 not say, okay, trigger if both occur
 within two days, because, you know,

0:30:05.800000 --> 0:30:10.060000
 the unusual VPN login may be something
 completely different.

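The windowing logic behind that correlation can be sketched in Python as follows; the rule names, the per-user matching, and the detection records are all illustrative assumptions:

```python
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=30)  # rule 2 must follow rule 1 within this window

def correlate(detections: list) -> bool:
    """Trigger when a phishing detection is followed by an unusual VPN
    login for the same user within the 30-minute window."""
    phishing = [d for d in detections if d["rule"] == "phishing_email"]
    vpn = [d for d in detections if d["rule"] == "unusual_vpn_login"]
    return any(
        p["user"] == v["user"] and timedelta(0) <= v["time"] - p["time"] <= WINDOW
        for p in phishing
        for v in vpn
    )

# Both rules fire for the same user 20 minutes apart, so the
# correlation rule triggers.
detections = [
    {"rule": "phishing_email", "user": "alice", "time": datetime(2024, 1, 1, 9, 0)},
    {"rule": "unusual_vpn_login", "user": "alice", "time": datetime(2024, 1, 1, 9, 20)},
]
print(correlate(detections))  # True
```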
0:30:10.060000 --> 0:30:14.360000
 So correlation is very, very important,
 or correlation rules are very,

0:30:14.360000 --> 0:30:18.660000
 very important in terms of, you know,
 what they are firstly, but what

0:30:18.660000 --> 0:30:23.060000
 they mean and how they should be written,
 because you're trying to, as

0:30:23.060000 --> 0:30:29.420000
 the name suggests, correlate
 two or more different types

0:30:29.420000 --> 0:30:35.720000
 of activity to represent a larger,
 let's say incident or, you know, a

0:30:35.720000 --> 0:30:44.680000
 larger attack and correlation rules
 typically rely on the triggering of

0:30:44.680000 --> 0:30:50.360000
 other rules, which is, as this
 example shows, how they work.

0:30:50.360000 --> 0:30:55.000000
 So, with that being said, I know that
 this was quite a lengthy introduction

0:30:55.000000 --> 0:30:58.920000
 to detection engineering, but the focus
 here was detection rules.

0:30:58.920000 --> 0:31:04.420000
 Hopefully you have a better understanding
 of what detection engineering

0:31:04.420000 --> 0:31:08.260000
 is all about. And I know that this
 was specific to incident response,

0:31:08.260000 --> 0:31:14.020000
 but now that we have that out of the
 way we can proceed with our journey,

0:31:14.020000 --> 0:31:18.300000
 and hopefully you're getting the complete
 picture of, you know, where

0:31:18.300000 --> 0:31:22.780000
 the logs come from, how they're shipped,
 where they end up, how you search

0:31:22.780000 --> 0:31:28.360000
 for activity, you know, in a SIEM
 where the logs have been collected,

0:31:28.360000 --> 0:31:30.840000
 aggregated, parsed, etc.

0:31:30.840000 --> 0:31:35.740000
 What to do when you've found, you know,
 malicious activity, or you've

0:31:35.740000 --> 0:31:39.360000
 come up with a search to detect
 malicious activity.

0:31:39.360000 --> 0:31:40.480000
 Yeah, we're now at that point.

0:31:40.480000 --> 0:31:44.940000
 So, you know, that point being creating
 detection rules that then trigger

0:31:44.940000 --> 0:31:49.060000
 alerts at which point the
 exciting stuff begins.

0:31:49.060000 --> 0:31:53.760000
 So once an incident responder has been
 alerted to an incident or has been

0:31:53.760000 --> 0:31:57.640000
 told to investigate something, that's
 where we have the analysis phase

0:31:57.640000 --> 0:32:03.460000
 of the detection and analysis phase of
 the larger incident response process.

0:32:03.460000 --> 0:32:05.320000
 So hopefully this is starting
 to make sense.

0:32:05.320000 --> 0:32:08.220000
 With that being said, that's going
 to be it for this video.

0:32:08.220000 --> 0:32:10.200000
 And I will be seeing you
 in the next video.

