APIs are the "nervous system" of modern applications, making them the number one attack vector, with flaws like Broken Object Level Authorization (BOLA), Broken Object Property Level Authorization (BOPLA), and Broken Function Level Authorization (BFLA) accounting for a high percentage of breaches. This episode delves into the multi-layered "defense-in-depth" strategies required to mitigate these threats, focusing on input validation, rate limiting, and centralized enforcement via API gateways. We explore how integrating security testing into the CI/CD pipeline and maintaining a proper API inventory helps organizations eliminate "shadow" or "zombie" APIs and build a true culture of digital resilience.
Sponsors:
https://cloudassess.vibehack.dev
https://vibehack.dev
https://airiskassess.com
https://compliance.airiskassess.com
https://devsecops.vibehack.dev
0:00
Welcome back to the deep dive. Our job here is uh taking the critical research,
0:05
the data, the insights you need, and really distilling it down, making it actionable.
0:10
That's right. And today we're tackling something huge, securing the entire digital ecosystem. It's uh it's a
0:17
massive undertaking. It really is. And the scale. Yeah. You mentioned what 83% of internet traffic is APIs now.
0:23
Yeah, something like that. APIs are basically the nervous system. They connect everything: mobile apps, microservices, SaaS, it all hinges on
0:31
APIs. They're the new perimeter really. So if you're building mobile apps, which are essentially just windows into those
0:38
backend services, then securing that API connection, that's everything. It's the highest stakes game there is right now.
0:44
Absolutely. So today, we're going deep. We're looking at the essential mobile app and API security practices, focusing
0:51
on well, what you need to know heading into 2025. Yep. We want to uncover those layered
0:56
defense strategies. You know, the things that separate the really robust systems from the ones that are frankly
1:01
vulnerable. And we're not just talking definitions. We want to get into the implementation. Policy engines, crypto on the client
1:08
side, the practical stuff. Exactly. Think digital castles, not just walls. We're talking layers. Security
1:16
that spans the cloud, the network, the code, even the device itself. So if you're managing products, architecting
1:21
systems, or just need to get up to speed fast on digital defense, yeah, this is for you. We want to give you
1:28
that knowledge quickly but thoroughly without the information overload hopefully. Right. Uh before we dive into the
1:33
threats though, a quick thank you to our sponsor. Ah yes, building these modern API heavy apps is
1:40
tough. VibeHack provides tools and insights to help make sure your code and cloud are secure from the get-go.
1:46
Integrating security right into the pipeline, which is exactly what you need these days. Definitely check them out at
1:51
vibehack.dev. Okay, let's ground this. You call the API the nervous system. The mobile app
1:57
is like the sensor what the user sees, right? The tip of the iceberg. But the back end, that's the brain, the data, the logic, all the sensitive
2:04
stuff. Compromise the API and the brain's exposed. Simple as that. So step one in protecting that system,
2:12
getting authentication and authorization straight seems basic, but you said it still trips people up. Oh, constantly, especially with legacy
2:19
systems, but even new developers sometimes mix them up. They sound similar, but they're fundamentally different jobs.
2:24
Okay, let's nail it down for everyone listening. Authentication is purely about who you
2:30
are. Verifying identity. Did you type the right password? Did your face scan match? Is that JWT valid? It's
2:38
proving you are who you claim to be. Got it. Identity verified. Then authorization comes in.
2:43
Exactly. Authorization is about what you're allowed to do after you've proven who you are. Can you read this specific
2:49
file? Can you hit the delete customer button? Can you access the admin panel? It's about permissions for resources.
2:55
Oh, precisely. And every single API call needs a rigorous authorization check for
3:00
the specific resource being accessed. Not just once at login, every time. And
3:06
when that authorization check fails or isn't done right, that's where we get the big one from OWASP, right? Broken
3:12
access control or BAC. Yep. Year after year, it's right up there. It's a massive problem.
3:17
Why? Especially with modern microservices. You'd think we'd have solved this. Well, it's because of microservices
3:23
partly. Things are so distributed. You might have dozens, hundreds of little services, each needing its own
3:28
authorization rules, maybe talking to different databases. Keeping those rules consistent and correct across all of
3:34
them. It's incredibly complex and really easy for a human to make a mistake.
3:40
Forget one check in one tiny service and you've potentially opened a door to the whole system. Exactly. And attackers are uh very good
3:48
at finding those overlooked doors. You broke BAC down into three main types. Let's start with the first one.
3:55
Horizontal escalation. Sounds like moving sideways. That's a good way to put it. Horizontal is accessing someone else's data but
4:01
staying at the same permission level. Think uh two bank customers, Alice and Bob. Both are just standard users.
4:08
Okay. Alice is poking around. Maybe she sees an API endpoint like /api/accounts/12345. That 12345 is her account ID. She
4:17
tries changing it to 67890 which happens to be Bob's ID. And if the API isn't checking if Alice
4:22
is allowed to see account 67890, boom, she gets Bob's data. That's
4:28
classic IDOR, insecure direct object reference. It's often simple URL manipulation, but devastating if the
4:34
backend doesn't tie the resource ID directly to the authenticated user's permissions. Right. That seems frighteningly common.
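The Alice-and-Bob walkthrough above comes down to a single missing ownership check. A minimal sketch, with an invented in-memory `ACCOUNTS` store and hypothetical handler names:

```python
# Toy account store; in reality this would be a database behind the API.
ACCOUNTS = {
    "12345": {"owner": "alice", "balance": 100},
    "67890": {"owner": "bob", "balance": 250},
}

def get_account_vulnerable(caller: str, account_id: str) -> dict:
    # BROKEN: the caller is authenticated, but ownership is never checked,
    # so Alice can fetch Bob's account just by changing the ID in the URL.
    return ACCOUNTS[account_id]

def get_account_fixed(caller: str, account_id: str) -> dict:
    # FIXED: an authorization check on every request, for the specific
    # resource, tying the ID back to the authenticated user.
    account = ACCOUNTS[account_id]
    if account["owner"] != caller:
        raise PermissionError("403: not your account")
    return account
```

The fix is boring on purpose: one comparison per request, but it has to exist in every handler that takes an object ID.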
4:41
Now, contrast that with vertical escalation. That sounds more serious. Oh, it is. That's the jackpot for an
4:46
attacker. Vertical is moving up the privilege ladder. Going from a regular user to say a content moderator or
4:54
worse, a full system admin. How does that usually happen? Is it harder? Generally, yes. It often requires
5:01
chaining vulnerabilities or finding more subtle flaws, maybe exploiting a misconfiguration that leaks policy
5:06
details, or finding an insecure admin function that wasn't properly protected. Once they get admin rights though,
5:12
game over. They can move anywhere. Steal data, plant malware, pretty much full control. Okay. And the third type, context-based
5:19
escalation. This one sounds tricky. It's about business logic. Yeah, this is where it gets interesting.
5:24
It's not just about roles or simple permissions. It's exploiting flaws in the process or the state of the
5:30
application. The how and when of a request matter. Can you give an example? Sure. Imagine an e-commerce checkout.
5:37
You add three items to your cart. The API correctly authorizes the initial cart total. But then maybe there's
5:44
another less obvious API call perhaps for applying a discount code that lets you sneakily change the quantity of an
5:50
item after that initial price check. Ah, so you authorized for three items but managed to manipulate the state later to
5:57
get four while still paying the price for three. Exactly. The context changed mid-flow,
6:02
but the authorization logic didn't revalidate or wasn't designed to catch that specific sequence as invalid.
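One way to picture that checkout flaw: snapshot what was authorized, then re-validate it at the final step. Everything here (the cart shape, function names) is invented for illustration:

```python
def authorize_cart(cart: dict) -> dict:
    # Initial price check: snapshot exactly what was approved.
    # cart["items"] maps sku -> (quantity, unit_price).
    return {
        "items": dict(cart["items"]),
        "total": sum(qty * price for qty, price in cart["items"].values()),
    }

def charge(cart: dict, approval: dict) -> float:
    # FIXED: re-check that the cart still matches what was authorized,
    # instead of blindly trusting the earlier approval.
    if cart["items"] != approval["items"]:
        raise ValueError("cart changed after authorization; re-run checkout")
    return approval["total"]
```

Without the re-check in `charge`, the quantity bump slipped in via the side API call would sail through at the old price.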
6:08
Attackers use this to bypass payment steps, mess with workflows. It's about gaming the application's expected
6:14
sequence of operations. Wow. Okay. That means securing the state machine itself, not just individual API
6:20
calls. Very complex. It is. All right. Moving slightly but related. Broken function level authorization.
6:27
BFLA. How's this different from general BAC? BFLA is a specific type of BAC where the
6:32
application simply fails to check permissions for accessing certain functions or administrative endpoints at
6:38
all. So the function exists like maybe AP admin delete user, right? And maybe the front-end UI hides
6:45
the button for regular users, but the API endpoint itself, it might have no authorization check or a very weak one.
6:52
So an attacker just guesses the URL, maybe finds it in JavaScript files or network traffic.
6:57
Yep. They craft the request directly, hit the endpoint, and if there's no proper server-side check for that
7:02
specific function call tied to their role, they might just succeed. They bypass the UI entirely. Okay, so the
7:08
defense here has to be really strict server-side checks on every single function. Absolutely. Two key principles are vital
7:15
here. First, deny by default. No access is granted unless a policy explicitly
7:20
allows it for that user and that function. No implicit permissions. Makes sense. Second, isolate and protect admin APIs.
7:27
Treat them differently. Maybe put them on a different network segment. Require separate authentication and apply the
7:33
strictest possible authorization checks. Every single request, any function needs
7:38
that server-side validation. Got it. Okay. Third major threat, and it feels like we'll be talking about this
7:44
forever. Injection flaws, especially SQL injection. Mhm. Why is this still such a problem in, you
7:50
know, 2025? GDPR fines are massive. Up to 20 million euros or 4% of global turnover.
7:56
Yeah. It persists because fundamentally it's about failing to handle user input correctly. Treating untrusted data as
8:02
potential code instead of just data. Developers under pressure, maybe skipping sanitization sometimes. Yeah. Or using outdated
8:09
libraries or constructing SQL queries by just smashing strings together with user input. If malicious SQL gets injected
8:15
into an input field, the database interpreter just runs it leading to data theft, database deletion, full compromise often. And the cleanup,
8:23
audits, reputational damage, the fines are just the beginning. It can cripple a business. And you mentioned a scary evolution
8:29
here. AI powered injection attacks. Yeah, that's the cutting edge now. Attackers aren't just trying simple or
8:35
our 101 tricks anymore. They're using AI and machine learning to probe an application, learn its defenses, and
8:42
then autogenerate really sophisticated customized payloads. So, the AI figures out how to bypass the
8:49
specific filters or WAF rules. Exactly. It can rapidly iterate through thousands of complex variations, trying
8:55
different encodings, syntax variations until it finds something that slips through. This makes manual detection
9:02
much harder and shrinks the window to patch vulnerabilities. It forces a move towards inherently safer architectural
9:08
patterns, not just reactive fixes. Okay, so that need for safer architecture brings us to the next point. We know putting complex
9:15
authorization logic directly in the application code is messy, errorprone, hard to audit
9:20
especially with microservices. Yeah, it becomes unmanageable. So the modern approach is to abstract
9:25
it, pull it out, right? Which leads us to the concept of a centralized policy engine. And this usually involves two key pieces. The
9:32
policy enforcement point or PEP and the policy decision point or PDP. PEP and PDP. Okay. How do they work
9:38
together to decouple the logic? So the PEP, the policy enforcement point, usually sits close to the application.
9:45
It could be middleware in your API gateway or maybe a sidecar proxy alongside your microservice. Its only job is
9:52
to intercept incoming requests that need authorization. Okay, it catches the request. Then what?
9:58
Instead of the app figuring out access, the PEP bundles up the relevant details of the request: who is making it, the
10:05
principal, what resource were they trying to access, the resource, and what are they trying to do, the action, read,
10:10
write, delete, etc. It packages that info up. Yep. Usually as a JSON payload, and it
10:16
sends that package over to the PDP, the policy decision point. The PDP is the brain of the operation. And it holds all
10:21
the authorization rules, all the policies, and the PDP just looks at the request details and the rules and makes a simple, clear decision: allow
10:28
or deny. It sends that decision back to the PEP, which then enforces it, letting the
10:33
request through or blocking it. Exactly. The application code itself becomes much simpler. It just relies on
10:39
the PEP's enforcement and the policies. They're centralized, easier to audit, easier to update consistently across all
10:46
your services. It's a huge win for manageability and security. Okay, that makes sense. But to make that PDP work,
10:53
you need to define the rules. You need an authorization model. We talked about RBAC, ABAC, and a hybrid approach. Let's
11:01
start with RBAC. Role-based access control. RBAC is the classic. It's pretty straightforward.
11:06
You define roles like admin, editor, viewer, and you attach permissions to those roles. If you have the editor
11:12
role, you can perform editor actions. Simple to understand, simple to manage, presumably? Generally, yes. But it can be quite
11:17
blunt. What if you need more nuance like an editor can only edit documents they created or only during business hours?
11:24
RBAC struggles with that kind of fine grain context dependent rule. It's often too coarse, right? Which leads us to attribute-based
11:30
access control, ABAC. This sounds more flexible. It is, much more so. ABAC makes decisions
11:37
based on attributes: characteristics of the user, the resource, the action, and even the environment, like time of day,
11:44
location, device security status. So you can write a rule like: allow principal Alice to perform
11:49
action write on resource docs if Alice's department attribute is marketing and
11:55
the resource's sensitivity attribute is public and the current time attribute is between 9 a.m. and 5 p.m.
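That spoken rule translates almost directly into code. A toy sketch, with made-up attribute names:

```python
def abac_allow(principal: dict, action: str, resource: dict, env: dict) -> bool:
    # Attributes of the principal, resource, action, and environment all
    # feed the decision, mirroring the spoken rule above.
    return (
        action == "write"
        and principal.get("department") == "marketing"
        and resource.get("sensitivity") == "public"
        and 9 <= env.get("hour", -1) < 17  # business hours, 9 to 5
    )
```

Each extra `and` clause is another attribute the PDP has to know about, which is exactly where the management complexity discussed next comes from.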
12:01
Wow. Okay. That's incredibly granular. Maximum control. It sounds like it offers potentially very precise
12:06
control. Yes. But there's always a but. What's the trade-off? Complexity. Bingo. Managing a complex ABAC system can be a
12:14
nightmare. You end up with potentially hundreds of attributes and intricate rules interacting in ways that are hard
12:20
to predict or debug. Auditing becomes really challenging. A small change to one attribute definition could have
12:26
unintended ripple effects across the whole system. So it's powerful but potentially brittle. It can be, which is why many
12:32
organizations land on a hybrid approach. They use RBAC for the broad strokes, defining the basic job functions, and
12:39
then layer ABAC rules on top for the specific context-aware exceptions or data
12:46
filtering where that extra granularity is really needed. Best of both worlds potentially. That's the goal.
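A minimal sketch of that layering, with invented role and attribute names: RBAC answers "can editors edit at all?", and an ABAC condition narrows it to documents the editor created:

```python
# Layer 1, RBAC: coarse role-to-permission mapping.
ROLE_PERMS = {"editor": {"edit_doc"}, "viewer": {"read_doc"}}

def can_edit(user: dict, doc: dict) -> bool:
    # RBAC check: does the user's role carry the broad permission?
    if "edit_doc" not in ROLE_PERMS.get(user["role"], set()):
        return False
    # Layer 2, ABAC: contextual exception layered on top,
    # e.g. editors may only edit documents they created.
    return doc["created_by"] == user["name"]
```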
12:51
Okay. So you've chosen your model. You need tools to actually build and manage these policies in your PDP.
12:57
You mentioned two main players. Open policy agent OPA and Amazon verified
13:02
permissions AVP. Start with OPA. Open Policy Agent or OPA is probably the big one in the open source world. It's
13:09
designed to be a general purpose policy engine. You can use it for Kubernetes admission control, API authorization,
13:15
infrastructure policies, pretty much anything. How does it work? What language does it use? It uses a declarative language called
13:21
Rego. Declarative means you write policies that state what outcome is desired under certain conditions rather
13:27
than spelling out the step-by-step logic. So a Rego policy might say allow is true if the input user's group is
13:33
admin. And applications interact with OPA how? They typically query OPA over a simple
13:39
REST API. They send the JSON context: principal, action, resource attributes.
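Sketching that exchange: a real PEP would POST this input document to OPA's REST data API (e.g. a path like `/v1/data/authz/allow`) and read back a JSON decision; here a local stand-in mirrors the tiny "group is admin" Rego rule mentioned above so no OPA server is needed. The endpoint path and field names are illustrative:

```python
import json

def build_opa_input(principal: dict, action: str, resource: str) -> str:
    # OPA expects the request context wrapped under a top-level "input" key.
    return json.dumps({"input": {
        "user": principal, "action": action, "resource": resource}})

def fake_opa_decision(payload: str) -> dict:
    # Stand-in for OPA's response shape, {"result": true/false},
    # evaluating the same condition the Rego rule would.
    doc = json.loads(payload)["input"]
    return {"result": doc["user"].get("group") == "admin"}
```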
13:44
OPA evaluates that against the loaded Rego policies and sends back a JSON decision, usually just true or false or
13:50
maybe some data. Okay. Open source general purpose. Yeah. How does Amazon verified permissions AVP
13:57
compare? It's a managed service, right? AVP is an AWS managed service, which is a big difference operationally.
14:04
And it's more specialized than OPA. It uses a different policy language called Cedar, which Amazon designed specifically for application-level
14:10
authorization, defining permissions on your application's resources like documents, folders, etc.
14:15
So, not for controlling AWS resources themselves. That's still IAM. Exactly. AVP is for your application's
14:21
authorization logic, not AWS infrastructure. Cedar is built around principles like least privilege and
14:26
analyzing policies for safety. Being a managed service also means it integrates nicely with other AWS services like
14:32
CloudTrail for logging and monitoring, which is a plus. Cedar, Rego, different languages, different
14:37
focus. Choice depends on your ecosystem and needs, I guess? Pretty much. OPA offers flexibility and
14:43
vendor neutrality. AVP offers specialization for application authorization within the AWS ecosystem
14:50
and managed operations. Now a critical challenge for any SaaS platform using these: multi-tenant isolation. How do OPA
14:58
or AVP ensure tenant A can't possibly access tenant B's data? This is crucial. You absolutely cannot
15:05
have policy mistakes allowing cross tenant access. There are two main approaches to structuring your policy
15:11
store. The ideal, often considered best practice, is the per-tenant policy store.
15:16
Meaning each tenant gets their own separate set of rules. Exactly. Like a silo model. Tenant A has its own policy store. Yeah, tenant B has its own policy
15:23
store. When a request comes from tenant A, the PEP knows to query only tenant A's store. If tenant B has some custom rule,
15:30
it only exists in their store. That sounds safest. Maximum isolation. Errors in one tenants's policies can't
15:36
affect others. Precisely. It simplifies auditing for specific tenants, too. The downside is potentially more operational overhead
15:42
managing all those separate stores. What's the alternative? The one shared multi-tenant policy store. Here, everyone's policies live in
15:49
the same place. It's often simpler to manage from an infrastructure standpoint. That sounds riskier. How do you
15:55
guarantee isolation? You have to be extremely careful with your policy writing. The non-negotiable
16:01
first step is a foundational deny or forbid policy that blocks all
16:06
cross-tenant access by default. How does that work? It checks an attribute, let's say tenant ID, on both the user making the
16:12
request, the principal, and the resource they're trying to access. The default policy says deny unless principal.tenant_id
16:19
equals resource.tenant_id. Only then do you evaluate other more
16:24
specific allow rules. So access is only even considered if the tenant IDs match. It relies heavily on
16:30
that one rule being perfect. Absolutely. It requires rigorous testing and careful policy design. But it can
16:36
work. It centralizes management but shifts the burden to meticulous policy crafting.
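The foundational forbid rule for a shared store can be sketched in a few lines; the `tenant_id` field and the rule shape are illustrative:

```python
def authorize(principal: dict, resource: dict, allow_rules) -> bool:
    # Foundational forbid: tenant IDs must match, no exceptions.
    # Cross-tenant access is denied before any allow rule is consulted.
    if principal["tenant_id"] != resource["tenant_id"]:
        return False
    # Only then evaluate the more specific allow rules.
    return any(rule(principal, resource) for rule in allow_rules)
```

Everything hinges on that first comparison running on every request, which is why the transcript stresses rigorous testing of this one rule.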
16:41
Okay. Managing these policies is key. But like you said, even the best policy engine is useless if the underlying
16:47
infrastructure isn't secure. These PEPs and PDPs, they run somewhere. Yep. They run on servers, in containers,
16:55
on Kubernetes in the cloud. If your cloud configuration is weak, say the S3
17:01
bucket with tenant data is accidentally public or the IAM role running the policy
17:06
engine has way too many permissions, then the fancy authorization logic doesn't matter much.
17:12
Exactly. The whole stack needs to be secure. Which is why understanding your cloud security posture is so vital. You need
17:19
to know if things are configured correctly, if you have excessive permissions, if data stores are exposed.
17:24
Continuous assessment is key. And that brings us to our sponsor again. If you need that kind of continuous,
17:30
unbiased view of your cloud environment to ensure everything supporting these secure systems is locked down, check out
17:35
Cloud Assess from VibeHack. It helps find those misconfigurations. Find it at cloudassess.vibehack.dev.
17:41
All right, let's shift gears. We've focused a lot on the back end, the brain. Now, let's talk about the client,
17:47
the mobile app itself running out there on devices you don't control. The public client. Yes. And this brings us to what you call the
17:53
client credential trap. You sounded quite passionate about this one needing to die. Huh, maybe a little, but it's a
17:59
fundamental misunderstanding that causes so many problems. It comes from treating a mobile app or a browser-based single
18:06
page app, an SPA, like it's a confidential client, like a secure back-end server, when they can't keep secrets.
18:11
Exactly. The client credentials grant type in OAuth was designed for server-to-server communication where
18:17
one server can securely store a client secret. Mobile apps run in untrusted environments. They cannot securely store
18:24
a secret. So embedding that cliented and client secret pair directly in the mobile apps code or config files. Bad idea.
18:32
Terrible idea because it's trivial for an attacker to get them out. Download the app package, APK for
18:38
Android, IPA for iOS, decompile it. The secrets are often just sitting there in plain text in a config file like app
18:45
settings.json or hard-coded in the code. Even if they try to hide or encrypt it, a determined attacker can still find it.
18:51
They can use runtime analysis tools, hook into the application's memory on a rooted or jailbroken device, and just
18:57
observe the secret when the app actually uses it to make an API call. And once the attacker has the app's
19:02
client ID and secret, they can impersonate the application itself. They can make API calls directly, potentially bypassing user
19:09
authentication entirely depending on how the API is set up. It's a huge security hole. Okay, so client credentials grant for
19:16
public clients equals a hard no. So what's the right way, the modern gold standard? The standard, the required way now is
19:22
the authorization code flow with proof key for code exchange. PKCE pronounced
19:28
Pixie. PKCE. Okay. How does PKCE work without a secret? How
19:33
does it secure the flow? It cleverly replaces the static long-lived client secret with a dynamic
19:39
one-time use secret generated on the client. Here's the flow. Lay it out for us. Okay. First, the mobile app before
19:45
starting the authorization process generates a random, high-entropy string called the code verifier. Think of this
19:51
as a temporary single-use secret. It keeps this verifier private. Stays on the device. Got it. Then the app takes that code verifier
19:57
and transforms it using a standard hashing function, usually SHA-256, with Base64 URL encoding. The result
20:04
of this transformation is called the code challenge. So the challenge is derived from the code verifier. Exactly. Now when the app redirects the
20:11
user to the authorization server to log in, it includes this code challenge and
20:16
the hashing method used in the request. The authorization server stores this challenge associating it with this
20:23
specific login attempt. Okay, server knows the challenge. User logs in, server sends back the temporary
20:29
authorization code. Right now the app needs to exchange that authorization code for the actual access
20:35
token. When it makes that request to the token endpoint, it includes the authorization code and the original
20:41
secret code verifier it generated back at the start. Uhhuh. It reveals the secret verifier
20:46
only at the very last step. Precisely. The authorization server receives the code and the verifier. It
20:52
then performs the same transformation, the SHA256 hash on the submitted code verifier. It compares the result to the
20:59
code challenge it stored earlier. And if they match? If they match, the server knows this token request is coming from the exact
21:06
same client instance that started the process. It issues the access token. If they don't match, it means someone intercepted the
21:12
authorization code but didn't have the original secret verifier. Request denied. Exactly. It binds the authorization code
21:19
to the specific client session that initiated it, defeating interception attacks, like if malware on the phone
21:25
tried to steal the code via a malicious redirect. PKCE is non-negotiable for
21:31
public clients. Crystal clear. Okay. So, we've securely obtained our tokens, the short-lived
21:36
access token and the longer-lived refresh token. Now, we need good token management and session hygiene on the
21:42
device itself. Where do we store these things? Absolutely critical point. You must use the secure encrypted storage provided by
21:49
the operating system. On Android, that's encrypted shared preferences. On iOS, it's the keychain.
21:55
Why those specifically? Because they leverage hardware-backed encryption where possible and provide process isolation.
22:01
The stored tokens are protected from other apps snooping around. What you absolutely never do is store tokens in
22:06
insecure places like the browser's local storage if it's a web context within an app or standard shared preferences on
22:12
Android or just plain files, because those are vulnerable to, what, XSS, file system access on rooted devices?
22:19
Both. XSS can steal from local storage. Root access makes standard file storage trivial to read. Use the OS-provided
22:26
secure elements. Okay. Secure storage lock down. What about token lifetime? We want to minimize the damage if a token does get
22:33
compromised. Right. Yes. Short lifespans are key. Access tokens like JWTs should expire quickly,
22:40
maybe 15 minutes, maybe even 5 minutes depending on your risk profile. You also need strict session timeouts based on
22:46
user inactivity. And the refresh token handles getting new access tokens without forcing the
22:52
user to log in constantly. Right. Refresh tokens have longer lifetimes, hours, days, maybe weeks, but
22:58
even they need careful handling. And this brings us to an important technique, token rotation.
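Before rotation, a quick aside: the PKCE dance described a few minutes ago is compact enough to sketch end to end. Per RFC 7636's S256 method, the challenge is the unpadded base64url of the SHA-256 of the verifier, which the server recomputes at the token endpoint:

```python
import base64
import hashlib
import secrets

def make_verifier() -> str:
    # Step 1: random, high-entropy, single-use secret kept on the device.
    return secrets.token_urlsafe(32)

def make_challenge(verifier: str) -> str:
    # Step 2: SHA-256, then base64url without padding (the "S256" method).
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

def server_verify(stored_challenge: str, submitted_verifier: str) -> bool:
    # Final step: the token endpoint recomputes the transformation on the
    # submitted verifier and compares it to the challenge stored at the
    # start of the flow. A stolen auth code without the verifier fails here.
    return make_challenge(submitted_verifier) == stored_challenge
```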
23:03
Rotation. How's that different from just using the refresh token until it expires? With basic refresh, you use the same
23:08
refresh token over and over. With rotation, every time the client successfully uses the refresh token to
23:14
get a new access token, the authorization server issues a brand new refresh token alongside the new access
23:19
token and immediately invalidates the refresh token that was just used. Ah, so each refresh token is essentially
23:26
single use. Pretty much. This provides a powerful defense against refresh token theft. If
23:32
an attacker steals a refresh token and tries to use it after the legitimate user has already used it and thus
23:37
rotated it, the server will detect that the stolen token has already been consumed and invalidated.
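That reuse-detection behavior can be sketched with two sets, active and consumed; the storage and token formats here are simplified stand-ins, not a production design:

```python
import secrets

class TokenService:
    def __init__(self):
        self.active = set()     # currently valid refresh tokens
        self.consumed = set()   # rotated-out tokens; any reuse means theft

    def login(self) -> str:
        rt = secrets.token_hex(16)
        self.active.add(rt)
        return rt

    def refresh(self, rt: str) -> str:
        if rt in self.consumed:
            # Replay of an already-rotated token: flag compromise.
            raise PermissionError("refresh token reuse detected")
        if rt not in self.active:
            raise PermissionError("unknown refresh token")
        self.active.remove(rt)
        self.consumed.add(rt)   # invalidate the token that was just used
        new_rt = secrets.token_hex(16)
        self.active.add(new_rt)
        return new_rt
```

In a real system the `consumed` check would also trigger invalidating the whole session, which is the coordination-across-distributed-systems cost mentioned next.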
23:43
It acts like a replay detection mechanism. Exactly. It can immediately flag the compromised account and potentially
23:49
invalidate the whole session. It's more complex to implement, especially coordinating invalidation across
23:55
distributed systems, but it significantly boosts security. Makes sense. We also touched on adaptive
24:00
authentication tying security posture to risk. Yeah, this is about dynamic risk
24:06
assessment during the session. Is the user suddenly coming from a different country, a brand new device? Are their
24:12
interactions looking suspiciously automated? Based on these risk signals, the system can adapt. It might require
24:17
the user to reauthenticate, maybe step up to multifactor authentication, MFA, even if they passed it initially, or it
24:24
might just shorten the session lifetime. It's about reacting to changes in perceived risk. And you mentioned privilege changes,
24:30
too. Critically important. If a user's permissions change midsession, maybe they're granted temporary admin rights.
24:37
You must invalidate their existing tokens immediately and force them to get new ones that reflect the new correct
24:44
permission set. Don't let old tokens linger with outdated privileges. Okay, good hygiene. Now, network level,
24:51
network resilience, and encryption. HTTPS, TLS, is the baseline. Obviously mandatory, non-negotiable for anything
24:58
transmitting sensitive data. TLS uses asymmetric crypto for the initial handshake, verifying the server and
25:04
agreeing on keys, then switches to faster symmetric crypto for the actual data flow. But sometimes we need more, like for
25:11
messages or financial data. Yeah. For highly sensitive payloads within the TLS tunnel, you should consider end-to-end encryption, E2EE, using
25:20
strong, well-vetted crypto libraries, think Signal Protocol as the gold standard, to encrypt the data before it
25:25
even hits the TLS layer. This means only the true sender and final recipient can decrypt it, offering protection even if
25:31
the intermediary servers are compromised. Okay. Now, the advanced technique for mobile specifically: certificate pinning.
25:37
This is about stopping man-in-the-middle attacks. Right. Exactly. It's a powerful defense against sophisticated MITM scenarios like
25:44
compromised certificate authorities or rogue Wi-Fi networks trying to intercept TLS traffic using fake but trusted
25:52
certificates. How does it work? You hardcode something in the app. You essentially embed the expected public key or a hash of it or the entire
26:00
certificate of your backend server directly into the mobile app's code during the build process. So when the app connects,
26:06
it doesn't just rely on the device's standard trust store of CAs. It explicitly checks: does the public key of
26:12
the certificate presented by the server match the exact key I have pinned. If there's any mismatch even if the
26:17
certificate chains up to a trusted CA the app refuses the connection instantly
26:23
blocks the MITM attempt. Correct. It creates a very rigid trust anchor, but, and this is vital,
26:28
the operational caveat. Yes, you must implement pinning with a backup pin. Pin the public key of your
26:34
current server certificate, but also pin the public key of a spare certificate you hold in reserve or maybe the key of
26:40
an intermediate CA you control. Why the backup? Because if you only pin the primary key and that certificate expires or gets
26:46
compromised and you need to revoke and replace it quickly, your app will suddenly stop working for all users. It
26:52
can't connect to the new valid certificate because its key doesn't match the old pinned key. The app is
26:58
effectively bricked until you force everyone to update. Ouch. So the backup pin provides a
27:04
fallback path for rotation without breaking the app. Exactly. It's crucial for operational
27:09
sanity with pinning. Got it. Lastly for the client, self-defense,
27:15
RASP and obfuscation, protecting the app when it's out there in the wild. Right. RASP stands for runtime application
27:22
self-protection. Think of it as embedding a security agent inside the application itself.
27:27
What does it do? It actively monitors the environment the app is running in. Is someone trying to attach a debugger? Is it running on a
27:33
rooted jailbroken device? Is it inside an emulator known for malware analysis? Has the app's own code been tampered
27:39
with since it was installed? And if it detects something suspicious, it can take immediate defensive action
27:44
based on preconfigured policies. It might shut itself down cleanly. It might selectively wipe sensitive data stored
27:51
locally or it might just send an alert back to your servers. It's about making dynamic analysis and tampering much
27:58
harder. And you said RASP works best with code obfuscation. Yes, they're synergistic. Obfuscation
28:04
happens at build time. It scrambles the application's code, renames classes and methods to meaningless strings, removes
28:10
debug symbols, maybe adds confusing control flows. Doesn't stop a dedicated attacker, but makes their life harder,
28:16
much harder. It significantly increases the time and effort needed for reverse engineering just figuring out what the
28:22
code does or where the interesting bits are. So obfuscation acts as a passive deterrent, slowing the attacker down.
28:28
RASP is the active defense that detects them when they finally try to hook into or modify that obfuscated code at
28:34
runtime. They work together to protect the app's integrity. Okay, we've hit architecture, policy, client security.
28:42
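As a rough illustration of the runtime checks just discussed: a real RASP product hooks much deeper into the mobile runtime, but the shape of a check-and-respond agent looks something like this conceptual sketch (the function names and policy are invented for illustration).

```python
import hashlib
import sys

def debugger_attached() -> bool:
    """An installed tracing hook is one crude debugger signal."""
    return sys.gettrace() is not None

def code_tampered(code_bytes: bytes, expected_sha256: str) -> bool:
    """Compare the shipped code against a digest baked in at build time."""
    return hashlib.sha256(code_bytes).hexdigest() != expected_sha256

def defend(code_bytes: bytes, expected_sha256: str) -> str:
    # Policy-driven response: here we just report a verdict; a real
    # agent might wipe local data, alert the server, or shut down.
    if debugger_attached() or code_tampered(code_bytes, expected_sha256):
        return "shutdown"
    return "ok"
```

The integrity check is exactly what obfuscation complements: the attacker first has to find this logic in scrambled code before they can disable it.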
Now we need to weave this all into how we actually build software, operationalizing it. This is DevSecOps
28:48
territory and the core principle there is shift left, right? Moving security earlier in
28:53
the process. Exactly. Don't wait until the final QA phase or worse production to find security flaws. Embed security thinking,
29:01
security tools, and security checks right from the design and coding phases. Make it part of the developer daily
29:07
workflow. It becomes everyone's responsibility, not just a separate security team's problem at the end.
29:13
Precisely. Empower developers with the right tools and training. Integrate automated security scanning into their
29:18
IDEs and into the CI/CD pipeline for every commit, every pull request. Finding and
29:24
fixing flaws earlier is exponentially cheaper and faster. And that integrated testing shouldn't be
29:29
just one thing. You advocate for a comprehensive testing trio. SAS, DAST, and manual pin testing. Let's break
29:36
those down. SAS first. SAS static application security testing. Think of it as automated code review for
29:42
security bugs. Tools like Snick, Sonar Cube, check marks. They scan the source code, the bite code, the dependencies
29:49
without running the application. What kind of things does SAST find? It's good at finding known bad patterns.
29:55
Things like hard-coded passwords or API keys, use of known insecure functions,
30:01
potential SQL injection points based on code structure, outdated libraries with known vulnerabilities. That's software
30:07
composition analysis (SCA), often bundled with SAST. It checks the code structure. Okay, static analysis. Then DAST.
30:14
DAST, dynamic application security testing. This tests the application while it's running. Tools like OWASP
30:21
ZAP or Burp Suite Pro act like an attacker from the outside. They send crafted, often malicious requests to the
30:27
application's interfaces, APIs, web pages, and analyze the responses. What does DAST catch that SAST might miss?
30:34
Runtime issues: things related to configuration, how different components interact, session management flaws,
30:39
issues with HTTP headers, authentication and authorization problems that only manifest when the system is live. It
30:45
tests the application's behavior in a running state. And the third leg of the stool, manual penetration testing. Why do we still
30:52
need humans if we have these automated tools? Because the tools are great at finding known patterns and common
30:58
vulnerabilities, but they're generally terrible at understanding business logic or finding complex multi-step attack
31:04
chains that exploit subtle design flaws. Remember that context-based escalation
31:10
example? Yeah, the e-commerce cart manipulation. A human tester, an ethical hacker thinks
31:15
creatively like an attacker. They'll try to understand the application's purpose and workflows and find ways to abuse
31:21
them that automated scanners just wouldn't conceive of. They provide that crucial outside in adversarial
31:27
perspective. You need all three, SAST, DAST, and manual testing, for comprehensive coverage.
31:32
Makes sense. Okay. Another huge operational challenge, especially with CI/CD: managing secrets, API keys,
31:39
database passwords, certificates. Oh yeah, this is a constant source of breaches. The cardinal sin is hard-
31:45
coding secrets directly in source code, config files, environment variables checked into Git, or build scripts. Just
31:51
don't. So what's the right way? Use a vault. Yes, a dedicated secrets management
31:56
system is essential. Things like HashiCorp Vault, Azure Key Vault, AWS Secrets Manager. These are hardened systems
32:03
designed specifically to store secrets securely with encryption, access control, and audit trails.
32:09
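The retrieval pattern can be sketched like this. `SecretsClient` below is a hypothetical in-memory stand-in for a real SDK (hvac for HashiCorp Vault, boto3 for AWS Secrets Manager, and so on); the point is the shape of the pattern, not any particular API: authenticate, pull the secret into memory at runtime, never write it into source or config.

```python
class SecretsClient:
    """In-memory stand-in for a vault; real clients talk to a hardened
    server with encryption, access control, and audit trails."""

    def __init__(self, store: dict[str, str]):
        self._store = store  # server-side, encrypted at rest in reality

    def read(self, path: str) -> str:
        if path not in self._store:
            raise KeyError(f"no secret at {path}")
        return self._store[path]

def connect_to_db(vault: SecretsClient) -> dict:
    # The password is retrieved at runtime and held only in memory --
    # it never appears in the codebase or on disk.
    password = vault.read("db/app/password")
    return {"user": "app", "password": password}
```

Swapping the stand-in for a real client changes only the `read` call, not the pattern.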
And the application retrieves the secret when it needs it. Exactly. At runtime, the application
32:14
authenticates to the vault using a secure identity mechanism like an IAM role or Kubernetes service account and
32:21
pulls the secret it needs, often just into memory. The secret itself never lives on disk in the codebase or config
32:26
files. And these vaults also help with proactive key rotation. They make it feasible. Rotation should be automated and policy-driven. Configure
32:34
the vault to automatically rotate keys, database passwords, API keys on a regular schedule. Maybe every 90 days,
32:41
maybe every 30 days, maybe even daily for highly sensitive keys. Reduces the window of opportunity if a
32:46
key is compromised massively. If a key is only valid for 24 hours, the potential damage from its
32:51
compromise is limited. Rotation should also be triggered automatically by your CI/CD pipeline whenever infrastructure
32:58
changes or manually immediately if a user's role changes or they leave the organization. Old tokens and keys
33:05
associated with changed privileges need to be invalidated. Good discipline. Okay, let's circle back
33:10
to the API gateway. We mentioned it as part of the PEP/PDP architecture, but it plays a broader defensive role too,
33:16
right? It's the front door. Absolutely. Your API gateway, like AWS API Gateway, Azure API Management, Kong, or
33:23
Apigee, is a critical enforcement point. It handles several security tasks before traffic even reaches your backend
33:29
services. Like what? Foundational stuff. Rate limiting and throttling are huge, protecting your backend from denial of service attacks
33:35
or simple resource exhaustion caused by buggy clients or brute force attempts. Okay. Traffic shaping. What else?
33:40
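Gateways implement rate limiting for you, but the mechanism behind most of them, a token bucket per client or per API key, is simple enough to sketch (this is an illustration of the algorithm, not any gateway's actual code):

```python
import time

class TokenBucket:
    """Allow bursts up to `capacity`, refilled at `rate` tokens/sec."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # the gateway would answer HTTP 429 Too Many Requests
```

A buggy client or brute-force script burns through its burst allowance and then gets throttled, so the backend never sees the flood.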
Often it handles the initial authentication check, validating JWTs and API keys, offloading that work from every
33:46
single microservice. It can terminate TLS/SSL, ensuring encryption from the client.
33:52
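That JWT check can be made concrete with a stdlib-only sketch of HS256 signature verification, the core of what a gateway does before forwarding a request. This is illustrative: a real deployment should use a maintained library (e.g. PyJWT), which also validates expiry, audience, issuer, and rejects unexpected `alg` values.

```python
import base64
import hashlib
import hmac
import json

def b64url_decode(part: str) -> bytes:
    # JWTs use unpadded base64url; restore padding before decoding.
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

def verify_hs256(token: str, secret: bytes) -> dict:
    """Return the payload claims if the HMAC-SHA256 signature is valid."""
    header_b64, payload_b64, sig_b64 = token.split(".")
    signing_input = f"{header_b64}.{payload_b64}".encode()
    expected = hmac.new(secret, signing_input, hashlib.sha256).digest()
    # Constant-time comparison to avoid timing side channels.
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        raise ValueError("invalid signature")  # gateway answers 401
    return json.loads(b64url_decode(payload_b64))
```

Any tampering with the payload changes the HMAC and the check fails, which is exactly why the gateway can trust the claims it forwards downstream.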
And critically, it's a prime location for input validation. Ah, checking the incoming requests for
33:57
bad data. Yes, remember the mantra, never trust data from the client. Never. The gateway
34:04
can enforce schema validation on request bodies, check for expected headers, validate parameter formats and ranges,
34:11
and reject malformed or suspicious input before it gets anywhere near your application logic or database. And for
34:17
SQL injection specifically, you mentioned the ultimate defense earlier. Yes. While input validation at the
34:24
gateway helps, the definitive defense against SQLi happens at the application layer where the database query is
34:29
actually constructed. Right. Use parameterized queries. Right. Also called prepared statements.
34:34
Remind us why they're so effective. Because they fundamentally separate the SQL command logic from the data being
34:39
inserted or queried. You first send the structure of the query to the database with placeholders for the variables.
34:45
Then you send the actual user input separately. So the database knows the input is just data not part of the command.
34:51
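That separation is visible in a few lines with Python's built-in sqlite3 module. The `?` placeholder means the hostile string is bound as data and never parsed as SQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

# A classic injection payload, arriving as user input.
hostile = "x'; DROP TABLE users; --"

# Parameterized: the payload is stored as a literal value.
conn.execute("INSERT INTO users (name) VALUES (?)", (hostile,))

# The table still exists, and we can query the payload back as data.
row = conn.execute(
    "SELECT name FROM users WHERE name = ?", (hostile,)
).fetchone()
```

Had the input been concatenated into the SQL string instead, the embedded `DROP TABLE` could have executed as a command; with binding it is just a stored string.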
Exactly. It treats the user input string even if it contains SQL commands like drop table as literal data to be
34:58
inserted or searched for. It never gets interpreted or executed as a command. It breaks the mechanism SQL injection relies on. Use
35:05
them always. Full stop. Got it. Parameterized queries are non-negotiable. Pretty much. Okay, this whole shift
35:12
towards automation, CI/CD integration, embedding security testing, it really
35:17
defines modern high velocity development, doesn't it? It has to. You can't bolt security on at
35:22
the end anymore. If you want to move fast, it needs to be built in, automated, part of the culture. And achieving that level of integration,
35:28
making sure your pipelines support SAST, DAST, secrets management, policy checks without slowing developers down. That's
35:35
the goal. It's the core of practical DevSecOps, which leads us to our sponsor once more.
35:41
If you're looking to operationalize this, embedding security smoothly into your development workflows, VibeHack's
35:46
DevSecOps solutions focus on automating these security practices within CI/CD.
35:51
You can learn more at devsecops.vibehack.dev. One last operational piece, monitoring
35:57
and observability. Don't forget logging, right? Log everything. Well, log strategically but
36:03
comprehensively. Every API access attempt successful or failed, especially authentication and authorization
36:10
decisions, any significant errors, it all needs to go to a centralized logging
36:15
system. CloudWatch, Splunk, an ELK stack, whatever. Why centralized? For incident response and auditing. When
36:21
something goes wrong, you need one place to quickly search and correlate events across all your services to understand
36:26
what happened. Scattered logs are useless in a crisis. And when things do go wrong, how should
36:32
the API respond to the client? Error messages? Keep them generic. Never, ever return
36:38
detailed internal error information like stack traces, database error codes, or internal file paths back to the client
36:44
because attackers can use that information. Absolutely. It gives them clues about your system architecture, technologies
36:49
used, potential vulnerabilities. Just return a standard generic error message like an internal server error occurred
36:56
and log the detailed error internally for your own team to debug. Don't leak internal detail.
37:02
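The "log the detail internally, return a generic error" rule fits in a few lines. This is a framework-agnostic sketch (the handler shape is invented for illustration; in Flask, Express, etc. the same logic lives in an error-handler hook): the stack trace goes to your centralized logging system, while the client sees only an opaque message.

```python
import logging
import traceback

logger = logging.getLogger("api")

def handle(request_handler) -> tuple[int, dict]:
    """Run a request handler; on failure, log detail and return a
    generic response that leaks nothing about internals."""
    try:
        return 200, request_handler()
    except Exception:
        # Full stack trace stays server-side for your team to debug.
        logger.error("unhandled error\n%s", traceback.format_exc())
        # No stack traces, DB error codes, or file paths reach the client.
        return 500, {"error": "An internal server error occurred."}
```

The attacker probing for information gets the same bland 500 no matter what actually broke.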
Outro. Wow. Okay. Hey, we have covered a lot of ground today from high level policy down to very specific crypto details for
37:09
mobile apps. It's a deep topic. It really is. But the core message seems clear. Security has to be layered.
37:15
There's no single silver bullet. Exactly. Think of that castle analogy. You need the strong outer wall of the
37:22
API gateway with rate limiting and input validation. You need the internal
37:27
controls, the decoupled policy engine like OPA or a dedicated PDP handling fine-grained
37:32
authorization with proper multi-tenant isolation. And for the client side, the mobile app itself, don't embed client credentials, use PKCE
37:41
for the auth flow, store tokens securely in the keychain or encrypted shared preferences. Use token rotation,
37:47
consider RASP and obfuscation. And operationally, shift left. Use the testing trio, SAST, DAST, manual pen
37:55
tests, manage secrets properly with a vault and automated rotation, and please use parameterized queries against your
38:01
database. That's a solid checklist. Okay, time for our final provocative thought. We spent this whole time talking about users
38:07
Alice, Bob, the admin, as the principal being authenticated and authorized. That human element,
38:12
right? But you briefly mentioned things like the model context protocol, AI agents acting on behalf of users using
38:18
their own restricted tokens via OAuth. Yes, the rise of machine principals, autonomous agents performing actions.
38:25
So, here's the question. How does authorization policy need to change fundamentally when the principal, the
38:31
actor, isn't a human with human behavioral patterns, but an autonomous piece of code, a machine identity.
38:38
Ooh, that's a big one. Because so many of our advanced techniques, especially in ABAC and adaptive authentication,
38:45
rely on observing behavioral anomalies. Humans have patterns. Machines acting at scale. Their normal behavior is
38:51
different. Their attack behavior might also look different. So, how do you write policies for them? How do you detect a rogue AI agent
38:57
versus a legitimate one if they both just make API calls incredibly fast? Exactly. Our current anomaly detection
39:03
might fail. We probably need to shift towards policies based more on the declared intent or the scope granted to
39:10
the agent via its token and rigorously verifying that scope against the requested action and resource. Can we
39:16
cryptographically bind an agent's identity to its approved operational parameters? How do we handle credential
39:23
rotation for potentially millions of autonomous agents? What new attack vectors emerge when the attacker can operate
39:29
at machine speed and potentially learn and adapt its attack strategy? Defining and enforcing policies for
39:35
machine intent, not just human roles or attributes. Yeah, that's a whole new frontier for authorization.
39:40
It really is something for everyone to start thinking about as these agents become more common. Definitely food for thought. Well, thank
39:46
you for guiding us through this incredibly complex landscape today. My pleasure. It was a great discussion. And thanks to all of you for joining us
39:52
on the deep dive. A final thank you to our sponsors, vibehack.dev, cloudassess.vibehack.dev,
39:59
and devsecops.vibehack.dev. We'll see you next time.

