Beyond the Firewall: Converging Cyber and Physical Defense
Sep 26, 2025
Modern organizations face hybrid threats that exploit the gaps between information systems and physical facilities, making security convergence a daily operational necessity. We detail the foundational risk assessment framework, which combines threat, vulnerability, and consequence, to ensure both physical access points and digital assets are protected holistically. The episode explores advanced strategies like adversarial red teaming to test processes and human behavior, alongside randomization practices that deter sophisticated insider and external attacks. Sponsor: www.cisomarketplace.services | https://ssaephysicalsecurity.com
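That threat, vulnerability, and consequence framing is commonly scored multiplicatively. A minimal sketch, assuming illustrative 0-to-1 scales; the scales and the multiplicative model are simplifying assumptions, not a prescribed methodology:

# Illustrative composite risk score from threat, vulnerability, consequence.
# The 0-1 scales and the multiplicative model are simplifying assumptions.
def risk_score(threat: float, vulnerability: float, consequence: float) -> float:
    """Return a composite 0-1 risk score; higher means riskier."""
    for name, value in (("threat", threat),
                        ("vulnerability", vulnerability),
                        ("consequence", consequence)):
        if not 0.0 <= value <= 1.0:
            raise ValueError(f"{name} must be in [0, 1], got {value}")
    return threat * vulnerability * consequence

# Example: a likely threat against a poorly protected, high-impact asset.
print(risk_score(threat=0.7, vulnerability=0.9, consequence=0.8))  # ~0.504

The composite makes the convergence point concrete: lowering any one factor, say by hardening a vulnerability, lowers the overall score.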
Video Transcript
0:00
When you think about
0:02
securing um a really major event like a
0:06
championship game, I mean these venues
0:08
can be huge, right? Anywhere from maybe
0:09
15,000 seats up to well over 100,000
0:12
for a big football game.
0:14
Absolutely. Massive scale.
0:15
Yeah. And your mind immediately goes to
0:16
the obvious stuff, the fences, the
0:18
vehicle checks, you know, traffic plans,
0:20
VIPs, all that outside stuff, the lines
0:22
at the gates.
0:23
Mhm. The visible security layer.
0:24
Exactly. But what if the biggest risk
0:27
isn't someone trying to break in, but
0:29
someone who's uh already inside, someone
0:32
with a badge?
0:33
That's precisely the issue we're digging
0:35
into today. We've been looking at NCAA
0:37
best practices alongside some pretty
0:39
advanced uh non-traditional red teaming
0:41
methods.
0:42
Okay.
0:42
And what comes through is that the
0:44
strong outer shell, it often protects a
0:46
much kind of softer interior. So, our
0:48
mission for this deep dive is really to
0:50
expose those internal vulnerabilities,
0:52
the ones people don't think about as
0:54
much,
0:54
Right? Looking behind the curtain,
0:55
seeing how weaknesses get found by
0:57
simulating different internal roles,
0:59
almost like an undercover boss approach,
1:01
but uh for security testing,
1:03
an undercover boss for security. I like
1:05
that. So, testing resilience not against
1:08
some hacker miles away, but maybe
1:10
against the person serving food or
1:12
cleaning the floors.
1:14
Exactly. People in uniforms, people with
1:16
credentials, the ones you're meant to
1:17
trust. It moves way beyond just like
1:20
basic pen testing. It sounds fascinating
1:23
and maybe a bit unnerving.
1:24
It evaluates the whole system, the
1:26
people, the processes, the tech against
1:29
someone who knows how things work from
1:30
the inside.
1:31
Okay, before we unpack all that, just
1:32
want to give a quick thank you to our
1:34
sponsor. This kind of deep dive into
1:36
comprehensive security, especially that
1:38
internal risk side. It's supported by
1:40
www.cisomarketplace.services.
1:43
Great folks over there.
1:44
So, let's start with the basics, the
1:45
blue team side. Just the sheer scale of
1:48
the normal defense needed to even open
1:51
the gates. There's this thing called the
1:53
event action plan, the EAP.
1:55
The EAP, it's the Bible for the event.
1:58
And it covers way more than just the
1:59
game itself, right? Like pregame stuff,
2:01
fan zones, tailgating.
2:03
Yeah. Pregame operations, postgame
2:05
operations, even other related events
2:07
happening nearby, maybe blocks away.
2:10
It's a huge logistical puzzle,
2:11
which is why that outer defense has to
2:14
be so, so solid.
2:15
It really does. The sources we looked at
2:17
strongly recommend setting up a secure
2:20
hardened vehicle perimeter. And not just
2:22
close, but like at least 100 ft away
2:25
from the actual venue walls.
2:27
100 ft. Wow. So, what are we talking
2:29
like concrete barriers?
2:30
Yeah. Jersey barriers, maybe big
2:32
reinforced planters, sometimes even
2:34
large trucks parked strategically.
2:36
Anything to stop a vehicle being used to
2:38
force entry.
2:39
Makes sense. Keep vehicle threats far
2:41
away. But pushing it out 100 feet,
2:43
doesn't that just create a bigger zone
2:44
outside the barriers that needs
2:46
controlling?
2:47
It does create a large buffer zone.
2:48
Yeah.
2:49
And that zone needs really strict access
2:50
control, too. Operationally, managing
2:53
all the different groups involved, venue
2:55
security, local police, maybe federal
2:58
agencies.
3:00
It gets complicated fast.
3:01
I can imagine. How do they coordinate
3:03
that?
3:03
That's where this thing called NIMS
3:04
comes in, the National Incident
3:05
Management System. It's actually
3:07
mandatory in these situations.
3:09
Mandatory, not just a suggestion.
3:11
Nope. Required. NIMS provides a standard
3:14
structure, a common language.
3:15
Ah, so everyone knows who's in charge,
3:18
what the terms mean.
3:19
Exactly. It mandates setting up a
3:21
unified command post or UCP with a
3:24
really clear chain of command,
3:26
and they have to test it beforehand, run
3:28
emergency drills before the championship
3:30
period even starts.
3:31
Drills are key. Practice makes perfect,
3:33
or at least less chaotic. If police,
3:36
fire, and venue security aren't speaking
3:38
the same language when something
3:40
happens, the response just it falls
3:43
apart.
3:43
Okay. So, NIMS forces that common
3:45
operational language. Now, about access
3:47
inside that perimeter, the sources
3:49
mention very specific rules for
3:50
credentials, right? Only ticketed guests
3:53
or credentialed people allowed inside
3:55
the venue walls.
3:56
Correct. Everyone goes through security
3:57
inspection. But there's a really
3:59
interesting point about credentials for
4:00
law enforcement or security personnel
4:02
themselves.
4:03
Oh, yeah. What's that? The guidance is
4:05
to keep those extremely limited.
4:07
Really? Fewer security credentials. Why?
4:09
It's a bit counterintuitive maybe, but
4:11
the thinking is if you have too many
4:13
people with security level access, it
4:16
actually dilutes the control at
4:17
checkpoints. And crucially, it increases
4:20
the potential attack surface if one of
4:22
those insiders goes bad.
4:24
Ah, okay. So, limiting those high-level
4:27
credentials makes each one more
4:29
scrutinized and reduces the insider risk
4:32
slightly. Makes sense. Which means the
4:34
credential itself becomes a very
4:35
valuable target.
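A quick worked example of why limiting credentials shrinks the attack surface: if each credential holder independently carries some small probability p of being compromised or going rogue (the independence assumption and the value of p are both illustrative), the chance that at least one goes bad grows quickly with headcount:

# Sketch: insider-risk exposure grows with the number of privileged badges.
# Independence and p = 0.01 are simplifying assumptions for illustration.
def p_at_least_one(n_credentials: int, p: float) -> float:
    """P(at least one of n independent credential holders goes bad)."""
    return 1 - (1 - p) ** n_credentials

for n in (10, 50, 200):
    print(n, round(p_at_least_one(n, p=0.01), 3))
# 10 -> 0.096, 50 -> 0.395, 200 -> 0.866

At p = 0.01, going from 10 to 200 privileged badges pushes the chance of at least one bad insider from under 10% to roughly 87%, which is the dilution-of-control argument in numbers.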
4:36
And the planning isn't just about bad
4:38
actors, is it? They have to plan for
4:39
things like, well, severe weather.
4:41
Definitely. Big crowds, open venues,
4:45
weather is a huge factor. You need clear
4:48
definitions and plans for a weather
4:49
watch versus a weather advisory.
4:51
What's the difference again?
4:52
A watch basically means bad weather is
4:54
possible, conditions are right. So you
4:56
need your shelter in place plans ready
4:58
to go. An advisory means it's happening
5:00
now or it's definitely about to. That
5:02
requires immediate action.
5:03
Gotcha. Watches, keep an eye out.
5:05
Advisory is take cover
5:06
pretty much. And another big one now is
5:08
uh, drones, unauthorized unmanned aerial
5:12
systems.
5:12
Oh yeah. You hear about those near
5:13
airports and stuff, venues, too.
5:16
Absolutely. The plans have to cover how
5:18
to respond based on federal, state,
5:21
local rules,
5:22
right?
5:23
But an unauthorized drone over a packed
5:26
stadium, that immediately becomes a high
5:28
priority security threat,
5:30
right? Could be surveillance, could be
5:32
worse. Okay, so that's the defensive
5:34
structure layers, protocols, drills.
5:36
Now, let's pivot to the weak spot. The
5:39
thing red teaming is designed to find:
5:42
the insider threat.
5:44
Yeah, and this is where it gets really
5:45
interesting. The sources are clear.
5:47
Attacks by insiders, whether they're
5:49
current staff or maybe disgruntled
5:51
former employees, they are just
5:53
notoriously hard to detect and stop.
5:56
Why is that? Because they already know
5:57
the layout.
5:58
That's part of it. They have knowledge
6:00
of the systems. They often have
6:01
legitimate credentials, at least
6:02
initially. And crucially, they have
6:04
physical access. They can often just
6:06
walk past primary security checks that
6:08
an outsider couldn't beat.
6:09
Okay. So, this brings us back to that
6:10
undercover boss red team idea. You're
6:12
not just hacking from outside. You're
6:14
putting your testers inside the
6:15
operation.
6:16
Exactly. Posing as staff, maybe
6:18
security, maybe food service,
6:19
concessions. Yeah.
6:20
Even surprisingly, janitorial staff.
6:23
Janitorial.
6:24
Absolutely. And this isn't just a red
6:26
team theory. It's backed up by the
6:27
official guidance. The actual risk
6:29
assessment team for the venue is
6:32
required to include non-security people
6:35
Like who?
6:36
Like maintenance staff, IT folks,
6:37
communications, concessions managers,
6:39
and yes, custodial services.
6:41
They're seen as vital parts of the
6:43
security assessment process.
6:45
Why? Why is the person cleaning the
6:47
floors or serving drinks considered such
6:50
a key vector in a place with maybe
6:51
100,000 people? What access do they have
6:54
that's so special? It often comes down
6:56
to perception and uh just their normal
6:59
pattern of movement. Think about it.
7:00
A police officer or an external
7:03
contractor
7:04
might get stopped and questioned at
7:06
multiple internal checkpoints,
7:07
right?
7:08
But a custodial worker pushing a cart,
7:10
they're expected to be in service
7:11
corridors, loading docks, maybe near
7:13
administrative offices, sometimes after
7:15
hours when regular security patrols
7:16
might be less frequent or focused
7:18
elsewhere.
7:18
Ah, they blend in. They have a reason to
7:20
be almost anywhere.
7:21
Exactly. And the vulnerability analysis,
7:24
the official planning, it has to account
7:26
for attack paths involving things like
7:28
food contamination, suspicious mail or
7:31
deliveries, even service repairs.
7:34
Imagine a red teamer posing as a
7:37
facilities tech. Okay,
7:38
they could walk into a server room or
7:40
even an executive's office saying they
7:42
need to check the AC unit or fix a
7:44
network port. Who's going to question
7:46
that?
7:46
Probably no one. It's unchallenged
7:48
access potentially near very sensitive
7:51
areas or critical systems. Security
7:53
might just not see the cleaner as a
7:55
threat.
7:55
And that's precisely the kind of
7:56
assumption the red team exploits. They
7:58
use that cover to test several things.
8:01
Physically, can they get into off-limits
8:03
areas? Can they use an old or maybe a
8:05
terminated ID badge to get through
8:07
secondary screening?
8:08
So testing if the system actually
8:10
catches deactivated credentials quickly,
8:13
right? How fast does that process work?
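A minimal sketch of the check being probed here: every badge swipe validated against a deactivation list fed by HR offboarding events. The in-memory table and field names are assumptions; a real physical access control system would query its credential database.

# Sketch: validate a badge swipe against HR-fed deactivation records.
# The dict and field names are illustrative assumptions only.
from datetime import datetime, timezone

# badge_id -> deactivation time (None means the badge is still active)
deactivated_at = {
    "B-1001": None,                                               # active staff
    "B-2002": datetime(2025, 9, 20, 17, 0, tzinfo=timezone.utc),  # terminated
}

def badge_is_valid(badge_id: str, swipe_time: datetime) -> bool:
    if badge_id not in deactivated_at:
        return False  # unknown badge: deny by default
    cutoff = deactivated_at[badge_id]
    return cutoff is None or swipe_time < cutoff

now = datetime.now(timezone.utc)
print(badge_is_valid("B-1001", now))  # True: active credential
print(badge_is_valid("B-2002", now))  # False: terminated badge is refused

The red-team question is really about the latency of that cutoff value: how long after termination does HR's event actually land in the access system?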
8:14
And then there's the cyber side. What
8:16
can an undercover red teamer do on the
8:18
cyber front? Maybe pretending to be IT
8:19
support or something.
8:20
They can try much more targeted attacks
8:23
because they're inside. One classic
8:25
method is uh precision phishing. Not
8:28
blasting emails everywhere, but sending
8:31
very carefully crafted fake emails to
8:34
just a few specific internal departments
8:36
like finance or HR.
8:38
Exactly. Groups that handle sensitive
8:40
data or might have higher system
8:41
privileges. They leverage the trust
8:44
associated with an internal email
8:46
address.
8:46
Turning a trusted source into an attack
8:49
vector. That's clever
8:50
and effective. We've seen case studies
8:52
where just one successful targeted
8:54
phishing email yielded access to hundreds
8:57
of valid employee credentials.
8:58
Wow.
8:59
It shows flaws not just in email filters
9:01
maybe, but also in user awareness
9:02
training. And it gives the insider a way
9:04
to escalate privileges without ever
9:06
touching the external firewall.
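For context, a red team would typically report a campaign like that as per-department credential-submission rates; a hedged sketch with invented departments and numbers:

# Sketch: summarize simulated-phishing results by department.
# Departments and outcomes below are invented for illustration.
from collections import Counter

# (department, credentials_submitted) per targeted employee
results = [
    ("finance", True), ("finance", True), ("finance", False),
    ("hr", True), ("hr", False),
    ("it", False),
]

targeted = Counter(dept for dept, _ in results)
compromised = Counter(dept for dept, hit in results if hit)

for dept in targeted:
    rate = compromised[dept] / targeted[dept]
    print(f"{dept}: {compromised[dept]}/{targeted[dept]} submitted ({rate:.0%})")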
9:08
So, a lot of the biggest failures might
9:09
actually be in the processes, not just
9:11
the tech. How does the red team test
9:14
those processes?
9:15
They look at metrics like uh the
9:17
repossession rate for things like
9:18
badges, keys, maybe even uniforms when
9:21
an employee leaves or is terminated.
9:23
Ah, okay. If someone gets fired, how
9:25
quickly do you get their access stuff
9:26
back?
9:27
Exactly. The best security protocols in
9:30
the world don't mean much if a
9:32
disgruntled former employee still has
9:34
their key card and uniform in their car.
9:36
A red team might find, say, only 60% of
9:39
keys are recovered within 48 hours.
9:42
Leaving a 40% window of potential risk
9:44
still active. That's significant.
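That recovery-rate metric falls straight out of offboarding records; a minimal sketch, with the record format assumed for illustration:

# Sketch: 48-hour recovery rate for badges/keys/uniforms after separation.
# The record format is an assumption; real data would come from HR/PACS.
from datetime import datetime, timedelta

WINDOW = timedelta(hours=48)

# (separation_time, return_time or None if the item is still outstanding)
records = [
    (datetime(2025, 9, 1, 9, 0), datetime(2025, 9, 1, 17, 0)),  # same day
    (datetime(2025, 9, 2, 9, 0), datetime(2025, 9, 5, 9, 0)),   # too late
    (datetime(2025, 9, 3, 9, 0), None),                         # never returned
]

recovered = sum(1 for sep, ret in records
                if ret is not None and ret - sep <= WINDOW)
rate = recovered / len(records)
print(f"Recovered within 48h: {rate:.0%}; exposure still open: {1 - rate:.0%}")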
9:46
It really is.
9:47
Managing all this, the physical, the cyber,
9:49
the processes, especially with
9:50
potentially 100,000 people milling
9:52
around. It demands constant, really
9:55
specialized internal risk management, which again
9:58
is exactly the kind of strategic
10:00
thinking our sponsor supports. Thanks
10:02
again to www.cisomarketplace.services.
10:06
Okay, so we've got the what of red
10:08
teaming, finding these physical and
10:10
digital holes using this undercover
10:12
approach. Now, let's talk about the why.
10:14
Why is this structured approach needed?
10:15
Why can't organizations just, you know,
10:18
assess their own risks? Why does human
10:20
nature kind of get in the way?
10:21
This is really the core of why red
10:23
teaming exists as a formal discipline.
10:25
It's structured specifically to fight
10:27
against our own brains, essentially our
10:29
systematic deviations from rational
10:30
thought, what we call cognitive biases.
10:32
Cognitive biases. Okay. Security
10:34
planners, event managers, they're human.
10:36
We all use mental shortcuts, heuristics,
10:39
to make sense of the world quickly. But
10:41
those shortcuts can lead to predictable
10:43
and sometimes really dangerous blind
10:45
spots in security planning.
10:47
So the red team is like an antidote to
10:49
the organization's own subconscious
10:52
biases. A reality check,
10:54
a structured reality check. Yeah. The
10:56
sources we looked at highlight three
10:58
really key biases that red teaming is
11:00
designed to combat head-on.
11:02
All right, let's hear them. First up is
11:04
status quo bias. This is just our
11:06
natural preference for things to stay
11:08
the same. Planners tend to stick with
11:10
last year's plan. You know, they defend
11:12
the current setup unless there's really
11:14
overwhelming evidence to change.
11:16
If it ain't broke, don't fix it. Even if
11:18
it's maybe kind of broken or vulnerable
11:20
to new things.
11:21
Exactly. They become resistant to
11:23
adopting new, maybe complex or expensive,
11:26
countermeasures, even if the threat
11:28
landscape has changed.
11:29
Okay. Status quo bias. What's next? That
11:30
leads naturally to the second one, which
11:32
is probably familiar: normalcy bias.
11:34
Uh, the "it can't happen here" syndrome.
11:36
Precisely. The tendency to believe a
11:39
disaster won't happen simply because you
11:41
haven't personally experienced it
11:42
before. If your venue has never had, say,
11:45
a sophisticated combined physical and
11:47
cyber attack,
11:48
it's hard to justify spending money and
11:50
effort to prevent one.
11:52
Right? And this really hurts the ability
11:54
to prepare for genuinely new threats,
11:56
things like uh domestic violent
11:58
extremists with complex, maybe hybrid
12:01
ideologies that don't fit old profiles.
12:03
Normalcy bias makes you dismiss those
12:05
novel threats.
12:06
Okay. So, status quo, normalcy bias. What's
12:08
the third one?
12:09
The third is outcome bias. This one's a
12:11
bit subtle. It's judging a past decision
12:14
based only on how things turned out, not
12:16
on the quality of the information you
12:18
had when you made the decision.
12:19
Can you give an example?
12:20
Sure. Let's say you decided not to
12:22
implement a new screening procedure
12:23
based on shaky intel, but you got lucky
12:26
and nothing bad happened at the event.
12:28
Outcome bias tells you, "See, that was
12:30
the right decision."
12:31
Even though the decision itself was
12:33
based on poor information or flawed
12:35
logic.
12:36
Exactly. So, you keep the flawed process
12:38
in place for next time because the
12:39
outcome was good last time. You learned
12:41
the wrong lesson.
12:42
Okay. Those three biases, status quo,
12:45
normalcy, outcome, they paint a pretty
12:48
clear picture of how well-intentioned
12:50
plans can still be weak. How does the
12:52
red team methodology actively push back
12:54
against these?
12:55
The main tool is using structured what-if
12:57
scenarios. Instead of asking, could our
13:00
HVAC system be compromised, which
13:02
invites normalcy bias.
13:04
Probably not. It hasn't happened before,
13:06
right? Instead, the red team starts by
13:08
assuming the negative event has already
13:09
happened. They ask, "Okay, the HVAC
13:12
system was compromised by someone posing
13:13
as a maintenance worker. How did they do
13:16
it? What were the exact steps, the TTPs,
13:18
involved?"
13:19
Ah, TTPs: tactics, techniques, and
13:21
procedures.
13:22
Yes, it forces you to think backwards
13:24
from the undesired outcome, mapping out
13:26
the realistic steps an adversary would
13:28
take. It completely flips the
13:30
perspective from if to how.
13:31
Takes away the possibility of just
13:32
dismissing it. You have to engage with
13:34
the mechanics of the potential failure.
13:36
That's the cognitive shift. And the red
13:38
team doesn't just make stuff up. They
13:40
simulate attacks using established TTPs
13:43
that real adversaries, criminals,
13:45
terrorists, state actors actually use.
13:47
So they're using the real playbook
13:49
against you.
13:49
They are. TTPs aren't just about finding
13:52
one vulnerability. They represent the
13:54
entire attack chain, the whole recipe
13:56
for how an adversary achieves their
13:58
goal.
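Working backwards from the assumed outcome naturally yields an ordered attack chain. A sketch of how such a chain might be recorded; the scenario steps, field names, and expected detections are illustrative assumptions, not a real plan:

# Sketch: a what-if scenario recorded as an ordered chain of TTPs,
# working backwards from the assumed outcome. All entries are invented.
from dataclasses import dataclass

@dataclass
class TTP:
    tactic: str     # adversary goal at this step
    technique: str  # how the step is carried out
    detection: str  # blue-team control expected to catch it

# Hypothetical chain for "the HVAC system was compromised"
hvac_compromise = [
    TTP("initial-access", "pose as an HVAC maintenance contractor",
        "work-order verification at the service entrance"),
    TTP("lateral-movement", "reach the mechanical room via service corridors",
        "badge-reader logs on interior doors"),
    TTP("impact", "alter controller settings on the HVAC system",
        "change alerts from out-of-band controller monitoring"),
]

for i, step in enumerate(hvac_compromise, 1):
    print(f"{i}. [{step.tactic}] {step.technique} -> detect via: {step.detection}")

Mapping each step to an expected detection is what turns the what-if exercise into testable blue-team requirements.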
13:59
Gotcha. And how does this play out in
14:01
practice? Is it just people writing
14:03
reports or do the red team and blue team
14:06
interact? Oh, it's very interactive.
14:08
They often use combined field exercises,
14:10
actual physical tests along with
14:12
discussion based exercises. This gives
14:14
immediate feedback.
14:15
So, the blue team sees what the red team
14:18
is doing or trying to do.
14:20
Sometimes yes, sometimes no, depending
14:22
on the test objectives, but the whole
14:24
process is highly structured. It always
14:26
culminates in what's called a replay
14:27
workshop.
14:28
A replay workshop.
14:29
Yeah. And this workshop isn't just a
14:31
debrief. It doesn't end until the blue
14:33
team, the defenders, can demonstrate
14:35
they have either detected the red team's
14:37
specific TTPs or they've implemented
14:40
changes that render those TTPs
14:42
ineffective.
14:42
Wow. So, it forces adaptation. You can't
14:44
just hear the results and file them
14:46
away. You have to show you've fixed it.
14:48
Exactly. It directly breaks that status
14:50
quo bias cycle by demanding demonstrable
14:53
change based on the simulated attacks.
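The exit criterion described there is easy to state precisely; a minimal sketch, with the TTP names invented for illustration:

# Sketch of the replay-workshop exit criterion: the workshop is not over
# until every red-team TTP is either detected or rendered ineffective.
def workshop_complete(ttps: set, detected: set, mitigated: set) -> bool:
    return all(t in detected or t in mitigated for t in ttps)

ttps = {"tailgate-service-door", "terminated-badge-reuse", "internal-phish"}
print(workshop_complete(ttps,
                        detected={"internal-phish"},
                        mitigated={"terminated-badge-reuse"}))
# False: "tailgate-service-door" is still neither detected nor mitigated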
14:55
This whole deep dive, it really shows
14:57
that securing these huge events, it's so
14:59
much more than just guards and gates.
15:01
It's this constant layering, this
15:03
constant testing, and crucially thinking
15:06
like the bad guy, especially the bad guy
15:07
who might already be inside wearing your
15:10
uniform.
15:11
True security isn't just about buying
15:13
the latest gadget. It's about that um
15:17
that synergy between the people, the
15:18
processes you have in place, and the
15:20
technology, and constantly, rigorously
15:23
testing how well they actually work
15:24
together. It's a continuous cycle, not a
15:26
one-off fix.
15:27
And you know, the lessons here, they go
15:29
way beyond just sports stadiums. Think
15:31
about hospitals, large office buildings,
15:34
university campuses,
15:36
any large complex organization.
15:38
Sure, the principles apply broadly.
15:40
So, here's a thought to leave everyone
15:41
with. Remember, we said the official
15:43
guidance mandates that the risk
15:44
assessment team includes custodial
15:46
staff, food service vendors, IT support.
15:49
Yeah, those often overlooked internal
15:51
roles. Knowing what we now know, that
15:53
insider threats are often the most
15:55
effective precisely because they are
15:58
overlooked and trusted, which of those
16:00
critical internal functions do you think
16:02
presents the biggest unaccounted for
16:04
risk in the places you go every day?
16:07
Your workplace maybe. That's the
16:10
question that should probably keep
16:11
security planners up at night.
16:13
That is a very challenging thought.
16:15
Makes you look at familiar places
16:16
differently. Thanks for walking us
16:18
through this uh this really hidden world
16:19
of security assessment. My pleasure.
16:21
It's critical stuff.
16:22
And thanks to all of you for diving deep
16:24
with us today. Just a final reminder
16:25
that this episode was supported by
16:27
www.cisomarketplace.services.