AI can crank out code fast now. The part that's not speeding up is our ability to ship it safely. And if you don't have guardrails, you're basically just moving failures faster.

Hey, I'm Brian.
I work in DevOps and SRE, and I run Tellers Tech. Ship It Weekly is where I filter the noise and focus on what actually matters when you're the one running infrastructure and owning reliability. Most weeks it's a quick news recap. In between those, I do interview episodes with folks who are actually building and operating real systems.

Today is one of those interviews. I'm joined by Mike Lady, a senior DevOps engineer and the creator behind Enterprise Vibe Code on YouTube. Mike's deep in distributed systems, and we're talking about day-two readiness, guardrails, and why that stuff matters even more now that AI can generate code fast. We get into what day-two readiness means in plain English, the symptoms you see when teams ship without guardrails, what a real day-two audit looks like, and how Mike thinks about "AI can write code, but it can't ship safely without gates." We also talk about AI agents watching builds, branch protection, quality gates, and where the line is between AI-assisted development and AI taking actions in production. All right, let's jump in.

Today,
I'm joined by Mike Lady, a senior DevOps engineer from Enterprise Vibe Code on YouTube. He's focused on distributed systems, and we're talking about day-two readiness, guardrails, and why the ability to ship safely matters even more now that AI can generate code fast. Mike, thank you for joining me.

Thanks. Thank you for having me.

So I'm curious,
give me your thesis: why do guardrails matter more than good code?

I feel like it's a process. We're constraining people and AI to create code that meets a bar, that meets a certain standard. And it's basically the same thing: human-generated code and AI-generated code all go into the same pipeline, the same build, test, and deploy phases, right? So I feel like people are freaking out, making a big deal out of this, when we have the same system for people as we do for AI code. And ultimately it's people reviewing and approving this code anyway, right? So we're still doing the same thing, and we can do it even better now in the age of AI-aided development, or vibe coding, whatever you want to call it. People like to call it different things.

So for sure,
yeah. So what does day-two readiness mean in plain English, for people out there who may not know the term or understand the meaning behind it?

At least for how I'm talking about it: day zero is you don't have an app, you just have an idea. Day one is you launch the app; you go from zero to one. Hooray, you have something on the internet that people can use. And day two is the part that comes after that. How do we maintain this long term? How do we add new features? How do we fix bugs? How do we add security updates? How do we change the app over time and maintain it? Because it's one thing to go from zero to one and make it; that's just the first part of the work. There's a much longer tail of work after that: day two, basically.

Yeah,
for sure. So what symptoms do you see when teams don't have guardrails, typically?

They just pray. They put code into production and just hope that it works. There's a famous quote: hope is not a strategy, right? You want to be able to prove that the code you make is as correct as possible. So if you don't have guardrails, you're going to have weird bugs, you're going to have features that just completely get deleted or just don't work. Your users are going to be pissed. It's just not a good thing to merge directly to main, ship that directly to prod, and blindly do that.

Yeah. So do you think that guardrails end
up slowing teams down or speeding them up?

Ultimately, speeding them up. People who are really early-startup, run-and-gun, are going to feel like it's slow: it's more process, we can't get things done, we have to fix the tests. But now, in the age of AI-driven development and vibe coding, the AI can do all the hard stuff for you. It can write the tests, it can fix the tests, it can do the stuff you don't want to do. And then you can focus on the stuff you do want to focus on: features, bug fixes, product development. We get to operate at a higher level now rather than dealing with the build breaking. My favorite thing to do with AI agents is to have one watch the PR build. It monitors it, sees whatever the build fails on, and then goes and tries to fix it. It goes off in its own loop and just works to make the build pass. Of course, I need to check the code to make sure it doesn't just delete the test to make the build pass, right? But there's a certain abstraction there: I'm giving the agent a little more control, more autonomy. I'm delegating the boring developer task of, all right, fix the build, type of thing.
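The watch-the-build loop Mike describes can be sketched roughly like this. Note that `get_status` and `attempt_fix` are stand-ins for whatever CI API and agent CLI you actually wire in (polling GitHub checks, invoking Claude, and so on), so treat this as an illustration of the control flow, not any specific tool's interface.

```python
# Sketch of an "agent watches the PR build" loop. The callbacks are
# injected so the loop itself stays independent of any particular CI or agent.
import time

def watch_build(get_status, attempt_fix, max_attempts=3, poll_seconds=0):
    """Poll a PR build; on failure, hand the logs to an agent and retry.

    get_status() -> "pending" | "success" | ("failure", logs)
    attempt_fix(logs)  # agent reads the failure and pushes a fix commit
    Returns "success" or "gave-up".
    """
    for _ in range(max_attempts):
        status = get_status()
        while status == "pending":          # wait out an in-flight build
            time.sleep(poll_seconds)
            status = get_status()
        if status == "success":
            return "success"
        _, logs = status                    # ("failure", logs)
        attempt_fix(logs)                   # the agent's own fix-it loop
    return "gave-up"
```

The human stays in the loop at review time: the agent only iterates on the failing build, and a person still approves the final diff before merge.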
So I think I got a little off track of where you were going.

No, no: speed up versus slow down, yeah.

So ultimately it speeds you up, because you're not dealing with all the failures, all the bugs that crop up if you don't have guardrails.

Yeah. I think it goes back to the whole philosophy behind IaC: slowing down to speed up.

Yep.

To a certain degree. And even that barrier is gone now, to a certain degree, with AI. But I've also run into AI agents saying, oh, you need this test to pass no matter what? Okay, I'll just remove the test, or I'll just have it return that it passed. See, aren't you happy?

Right, exactly. Yep. For sure. Yeah, AI agents will lie, cheat, and steal to make their reward function happy. That's a Steve Yegge quote; I'm a Yegge fanboy. He has a book, Vibe Coding, written with Gene Kim, the DevOps god, right? Highly recommend it. That's why I'm a big fan of Steve's Beads, and his newest thing, Gas Town. I'm a contributor on it now, and I'm super hyped on it. Cool. Anyways.

Awesome.
So walk me through what you check in a day-two readiness audit. What would be your checklist, your top, I don't know, three things that you would check?

Right. So: are you properly using source control? Okay, you're probably using Git, because everything's on GitHub. Hopefully.

Hopefully, yeah, right?

Hopefully. Okay, are you using branches, or are you just pushing to main? Are you using PRs on GitHub? And are you blocking those PRs with quality gates? That's kind of the next thing. Is main protected?

Branch protection.

Yeah, branch protection. Make sure the agent doesn't cheat and push straight to main, right? You actually have to go through all the quality gates on the PR to make sure it passes before you can merge into main. So that's source control.

And then quality gates. Okay: does it build? Does it pass different levels of tests, your unit, integration, functional? Again, all these tests are essentially free now; they just cost tokens, they don't cost developer time. That's why I'm so excited about this idea. When I first started doing DevOps, I got into it through testing. I was kind of a UI automator first, and then it was, oh, how are we going to orchestrate these UI tests? And I grew into a DevOps role for Mac; it was actually iOS DevOps, so I was mobile DevOps first. We were racking Mac minis and running these UI tests, making sure the app doesn't break. But it was a pain in the ass to maintain those tests, right? Everyone knows UI tests are the most brittle. The things on the screen change; you click the button, and maybe you have to click it again because the click didn't send through, or whatever. There are all kinds of ways UI tests are flaky in general. But now you can just tell the AI to update a test and it'll do it, and it'll probably fix it in a way that's satisfactory.
So that's why I'm so hyped on this testing and quality gate type of thing. Plus, you can run test-driven development: you can have the AI write the test first, and then it writes the implementation to make the test pass. So I think it's a huge boon to all these methodologies that we all know we should have been doing this whole time, like TDD. For, whatever, 20 years? I don't know how long TDD has been around, but I first learned about it in college and I'd never done it until now.

Test-driven development? Yeah.
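As a toy illustration of that test-first loop (the function and its test here are invented for the example): the test exists before any implementation, and the implementation is then written, by you or by an agent, just to make it pass.

```python
# Step 1: the test, written before any implementation exists.
def test_slugify():
    assert slugify("Ship It Weekly!") == "ship-it-weekly"
    assert slugify("  Day 2  ") == "day-2"

# Step 2: the minimal implementation that satisfies the test.
def slugify(title: str) -> str:
    # Lowercase, replace every non-alphanumeric run with a single hyphen.
    words = "".join(c if c.isalnum() else " " for c in title.lower()).split()
    return "-".join(words)

test_slugify()  # a CI quality gate would run this automatically on the PR
```

In the agent workflow, step 2 is exactly the part you can delegate: the test pins down the behavior, so "make the test pass" is a well-defined task.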
Yeah. Test-driven development, yes. So yeah, that's the quality gates, the second thing.

Third: config and secrets management. For me, I have a demo app that lives in GitHub, and I store all the secrets, not in the Git repo, but in the configuration for the repo; it has a little secrets section. But AWS and all these other platforms have secrets management, very specific ways to handle secrets, because you're going to have API keys, you're going to have things you need to connect to and deploy to, so you want to handle those properly.

So those are the three. And I'll throw in a bonus fourth
thing: deployment model. How do you deploy? Do you deploy to a staging environment first, and then test against that staging environment? And once that test is good, do you deploy to prod after that? Having some sense of test-deploying the thing, whether that's on a PR or in a staging environment: deploy it somewhere first and ensure everything comes up, before just deploying straight to prod and breaking prod.

Yeah. I think there's certainly a case to be made for testing in prod with the right feature flags and the right segmentation, where you could do 1% or 0.01% of traffic. But certainly not day one, day two, day three. You need to have a mature practice before you're able to sign on to that.

Right. Yep. Exactly.
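The percentage-based rollout Brian mentions is usually implemented with deterministic bucketing, so a given user consistently lands on the same side of a flag across requests. A minimal sketch, where the flag name and the exact hashing scheme are illustrative assumptions rather than any particular feature-flag product:

```python
# Deterministic percentage rollout: hash (flag, user) into a stable bucket.
import hashlib

def in_rollout(user_id: str, flag: str, percent: float) -> bool:
    """True for roughly `percent`% of users, stable per (user, flag) pair."""
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") % 10_000  # 0.01% granularity
    return bucket < percent * 100

# Usage: gate the risky code path, e.g. in_rollout(user, "new-checkout", 1.0)
```

Because the bucket is derived from the user ID rather than a random draw, ramping from 0.01% to 1% to 100% only ever adds users, which is what makes testing in prod tolerable.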
jumping in: we've talked a little bit about AI already, so let's get into AI coding guardrails. You have a couple of posts on LinkedIn where you talk about how AI can write code, but it can't ship it safely without guardrails. What does that actually look like in practice? We touched on this a little already, but I'm curious about your philosophy behind it. Do you treat AI like a junior developer or junior engineer that you build gates around? What gates do you insist on with AI agents?
Yeah. So it's a process. We kind of have to force it to go down a path, and you follow a process. We have the same gates at the PR level for everyone, right? For human-generated code or AI-generated code. But even before that, I have it plan: have it write a plan itself, then have other agents look at the plan and comment on it. I use Steve's Beads framework for that, but you could probably do it with other tools. And then it thinks about the plan given those other comments. These agents aren't even necessarily the same model. It's not just Claude; I use Claude, Gemini, Codex, whatever else is out there, Cursor with Grok or something like that. I think all these different models are different perspectives. They're all trained a little differently. Yeah, they may all be trained on the internet in general, but they all have slightly different perspectives. So I treat them as a team that reviews the plan. And with Beads, at least, an agent can comment, and it's non-destructively adding to the plan, adding different perspectives. Beads is issue tracking, but for agents. It's kind of like Jira: you get a ticket, you can comment on it, you can help people work through making it better, right? And in a non-destructive way, without just hitting edit and completely changing the source material, without changing the description box, say. Adding comments is a way to give your perspective, give your take, without completely changing the original material.

So there's the plan phase.
This is kind of the process guardrail: plan phase, then implementation phase. I have a belief that you should probably use one model, one agent, as a daily driver, so you know that agent in and out. Mine is Claude, but it could be anything, if you're comfortable with Cursor and you know how Cursor responds, or whatever.

You learn its quirks.

Yeah. You learn what it tends to do when it tries to lie, cheat, and steal. You learn when it tries to take shortcuts, and you can recognize that right away. So you implement with your daily driver, and then you can review: make a PR, and have all those same agents that did the planning with you do the PR review. And I forgot a step in the planning: you incorporate those comments from the other agents into the plan, right? Then in the PR review you can say, all right, take a look at the PR, take a look at the Beads issues, and see how well it implements the plan. Is anything missing? Are there any issues, security issues, performance issues? You can have it come at it from different angles. So that's all before we even enter our main pipeline. It's a pipeline within itself, a pipeline within the implementation; I have implementation as a stage name in the AI-agent-driven development process, I guess. So I would say those are the guardrails, plus the human shaping we do when we're interacting with the AI.

Yeah, that makes
sense. Are you setting specific pre-prompt personalities, personas, or are you just relying on Opus and Sonnet and Gemini having their different personality traits because of their LLM training?

Yeah, I'm just going with the base model. A lot of people like to play house. This is not my quote; I'm taking it from, I think, a guy from HumanLayer, his name might be Dax or something. He was giving a talk; if you search for it on YouTube you'll probably find the video. And he says people like to play house and have their teams of agents, with their dev and their QA and their manager and their product manager. It feels like theorycrafting, like you're playing D&D: you're the expert with 20 years of experience. Do you really need to hype the agent up that much? Or is the stuff already in the model?

Yeah, exactly. By asking it, can you review the security, you don't need to prompt it with, you're a security researcher, you're a security engineer.

Yeah. You're kind of obfuscating what it can do at that point, too, because you're narrowing what it's pulling from.

Yeah, for sure. So yeah, I just use the models as vanilla as possible.
So, okay, just curious then: you mentioned that as we use different models, we notice traits in when they lie, cheat, and steal. Obviously the landscape's always changing, but what's your take on models right now? Is there a specific model that's well and above great for DevOps, versus one that isn't great for DevOps or CI/CD?

Yeah, I use Claude mostly; it's a good generalist. I mean, all of them are coding models, but it seems pretty good. I do like Codex, though: Codex seems very thorough when it does its reviews. It always takes the longest. Gemini seems really quick and gives some good feedback, but Codex takes its time and really analyzes what's going on. It really goes through the issue and usually has some pretty good feedback. So yeah, I think they're all pretty valuable. So
big tech, big AI, doesn't want you to know that you can use all of them, that you can use all of them together. All of them just want all of your tokens, right? Claude wants all of your enterprise's tokens, and so do Gemini and OpenAI, whoever. They want your entire token budget. But if you can, I don't know, spare the $20 a month for multiple providers... I know it's hard out there, tough economy and whatnot, but I view spending on AI as learning. You're investing in yourself. As a developer, this is where the industry is going. And I'm hopefully never going to write another line of code. I'll say that every day and I'll still end up writing a couple of lines here and there, but the probability is becoming less and less every day, right?
Yeah, you're the manager, or I guess the senior working with the junior that's actually doing the work. Cursor even has the ability now to run multiple models at the same time, right? And I think you can even do multiple iterations or runs of the same prompt and verify the output.

To be fair, it's in my Cursor and I haven't used that yet. But I have used the multiple models to see which is best. And it depends. Everyone always wants to use the max model, because they always think their problem is the most important deep-thinking problem. That's generally not the case. Generally you can do quite well just using auto, and their switcher is actually really good. Being able to switch models and let them decide which model to use actually works out really well. I mean, they're seeing, whatever, millions of requests a day, so they probably know what they're talking about, right?

You mentioned things like
CI/CD, branch protection, and agents.md. How do these fit together?

How do they fit together? Okay. So agents.md is kind of your implementation side: how is the agent on your computer going to act? It's the base instructions for how you operate with it on your machine, right? So agents.md is the guardrails on your laptop, what you're trying to steer your agent to do, for as long as it can remember. When you run out of context, it's bound to forget stuff, right? But yeah, agents.md is guardrails on your computer: you steer it toward the problem you're trying to solve.
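What goes in that file varies by team, but as a purely illustrative sketch of the kind of laptop-level guardrails Mike is describing, an agents.md might contain something like:

```markdown
## Ground rules for agents working in this repo (illustrative)

- Never commit directly to main; open a PR for every change.
- Never delete, skip, or weaken a failing test to make the build pass; fix the cause.
- Run the full test suite locally before pushing.
- Secrets come from the environment or the secrets manager; never hardcode keys.
- Plan first, collect review comments, then implement.
```

As Mike notes, this only holds while it fits in context; the CI-side gates are what enforce the same rules when the agent forgets.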
There are new things out there now for agent orchestration, where you have multiple agents running in parallel, all trying to solve different problems, potentially on your laptop. You open up all these different PRs, and then the CI/CD pipeline guardrails take over from there. You try to make your builds pass, and hopefully you have good enough tests and good enough code coverage. That's another big one: there's no excuse not to have 80% code coverage now. Tests are free; they just cost tokens. So I feel like the CI/CD pipeline basically blocks code that doesn't work with your existing tests, and you can gate on the code coverage part and say it has to be above 80%. If you're adding a bunch of code, you gate on 80%, and if it goes below 80%, it can require that you add tests. A hard requirement: you're not allowed to merge until you have a certain level of testing.
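A coverage gate like that is typically a small CI step that reads the coverage report and fails the build below the threshold. A minimal sketch; the numbers and report handling are illustrative, since real pipelines parse coverage.py, JaCoCo, or similar output:

```python
# Minimal coverage gate: CI fails the build when coverage drops below threshold.

def check_coverage(covered_lines: int, total_lines: int, threshold: float = 80.0) -> bool:
    """Return True when line coverage meets the threshold percentage."""
    pct = 100.0 * covered_lines / total_lines if total_lines else 100.0
    print(f"coverage: {pct:.1f}% (threshold {threshold:.0f}%)")
    return pct >= threshold

# In a pipeline, the process exit code is what blocks the merge, e.g.:
#   sys.exit(0 if check_coverage(covered, total) else 1)
```

Paired with branch protection and a required status check, a nonzero exit here makes "add tests before you merge" a hard rule rather than a convention.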
And then, yeah, branch protection, so you don't commit directly to main. That's the worst offense: from your developer laptop, you merge directly into main and, potentially, it goes out to production. Though it probably won't, because if you don't have CI/CD pipelines, you probably wouldn't have branch protection to begin with. I mean, I guess you could; if you know CI/CD pipelines, I don't know why you wouldn't have branch protection. But yeah, these are all kinds of multi-dimensional guardrails to steer, to put your agent on rails and send it in the direction that you want it to go.

Absolutely. What do you
think the line is between AI-assisted development and AI taking action in prod?

Ooh, yeah. So I've done it myself for my little test app, and I don't care: I've got no users right now. I can let it do stuff in my AWS account, and it solves things pretty quickly. It's pretty nice to give it access to my AWS account. It looks at all the logs, and honestly, I'm not great at AWS; I don't even know all the terminology. But I'm deploying this app with Terraform into multiple AWS accounts, and I have basically a dev AWS account and a prod AWS account, and I'm segregating things that way. And if I have some issue... this was an issue I was actually having: it wasn't cleaning up. It deploys as a Lambda, and on teardown it wasn't tearing everything down; the ENIs were sticking around for some reason. So I said, all right, just look at the AWS account and see why they were sticking around. And it made a GitHub Actions cleanup job that ran afterwards to make sure everything was cleaned up.
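The cleanup job in that story boils down to finding detached ENIs that belong to the app and deleting them. The record shapes below mirror the EC2 DescribeNetworkInterfaces response, but the tag key, app name, and the filter itself are illustrative; a real scheduled job would wrap this in boto3 calls:

```python
# Find leftover ENIs: detached ("available") interfaces tagged as ours.
# A scheduled job would pass in ec2.describe_network_interfaces() results
# and delete whatever this returns.

def leftover_enis(enis, app_tag="my-lambda-app"):
    """Return IDs of detached ENIs belonging to this app."""
    return [
        eni["NetworkInterfaceId"]
        for eni in enis
        if eni.get("Status") == "available"           # not attached to anything
        and any(t.get("Key") == "app" and t.get("Value") == app_tag
                for t in eni.get("TagSet", []))
    ]
```

Filtering on both the detached status and an ownership tag is the safety margin here: it keeps an automated cleanup from touching interfaces that are in use or belong to something else.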
So, again, some people might scoff at that. People may say, oh, you don't actually know the real cause. Well, the agent will know the real cause, and I'm kind of delegating that. You wouldn't know the real cause either if, say, you're a senior developer and a junior developer on your team does something and it works: yeah, you can kind of see how it works, but you weren't the one writing the code, right? Ultimately we're delegating some amount of responsibility to other developers, and this is just one more step: we're delegating to a developer on our computer. I feel like people have this sense that they're losing control, or almost losing their identity, their sense of importance as a developer, of just knowing things. If they delegate it out, they go, what do I do? You just got up-leveled, right? You're now a manager. You are still responsible for the quality and what comes out; you're still the one approving the PR and merging it. So of course you have to kind of know what it looks like, you have to kind of know what's going on. But, I don't know, for my small little test app, yeah, I don't really know what's going on. So for assisted development: totally, everyone should be using it. For touching prod?
Yeah, it depends. Maybe read-only, potentially. If it can give you the logs, say you give it just a read-only IAM role, and it can read all the logs, read the events, read through the traces, then it can come up with the answer probably much quicker than you can.
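The read-only role idea can be made concrete with a policy that only allows read and describe actions, plus a cheap sanity check. The action names are real AWS actions, but which ones you grant, and the naive prefix check below, are illustrative assumptions:

```python
# Sketch of a read-only IAM policy to hand an agent: logs and describes,
# nothing mutating. The exact action set you grant is up to you.
read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "logs:GetLogEvents",
            "logs:FilterLogEvents",
            "logs:DescribeLogGroups",
            "ec2:DescribeNetworkInterfaces",
            "cloudwatch:GetMetricData",
        ],
        "Resource": "*",
    }],
}

def is_read_only(policy):
    """Cheap guardrail check: every allowed action is Get/List/Describe/Filter."""
    ok_prefixes = ("Get", "List", "Describe", "Filter")
    return all(
        action.split(":")[1].startswith(ok_prefixes)
        for stmt in policy["Statement"]
        for action in stmt["Action"]
    )
```

A prefix check like this is not a substitute for IAM itself, but it is a handy lint in CI on the Terraform or policy files you let an agent touch.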
As I was doing this the other day, not an exercise exactly, but I caught myself reading through a Jenkins log and trying to trace through the team's GitHub and the shared library, like, where is this coming from? And I thought, oh wait, I can just throw it to Claude, and Claude can figure it out for me much quicker, whereas that would have taken me half a day, potentially, before. So I feel like the habit to build is: all right, how do I use AI first with this thing? How do I throw Claude at it? And then if it's having issues, or you can't figure it out, all right, maybe I need to step in and do it the old way. But that's always the fallback option now, right?

Yeah. I've worked at companies where
we've set up MCP servers with read-only access to Argo CD and kubectl, just to be able to pull in, you know, maybe Datadog logging, if there are logs there that need looking at, or Sumo Logic or whatever else. And then it just compiles all that information, looks through it, and can give you a quicker answer than you can get by SSH-ing into a box and jumping around looking at things.

Yeah, poking around.

So yeah, I think there's 100% merit to that. Using AI to help figure out root cause is important, for sure. And to your earlier point, back in the day, if you didn't know why a network interface was still existing in AWS, you would probably try to read through the docs or reach out to AWS support. All you're doing is streamlining that process; it's still the same net result. And if you do go to AWS support now, you'll probably get an AI-assisted response anyway, right?

Yeah, right. That's
funny. But I think the interesting thing is that people are afraid of responsibility.

That's fair.

Again, Steve and Gene bring this up: their analogy in the book is that you're now a head chef in a kitchen full of agents, agents as chefs. You're not making every single ingredient of the dish, but you're ultimately responsible. Those are your Michelin stars on the line if something bad goes out to a customer, right? And even in regular software development organizations, entire teams are built on this delegation of responsibilities. The manager managing a team isn't coding the code, right? They may be reading, they may be in the PRs, whatever. I would say an ideal manager is technical, but they're not a super engineer; they're not the best engineer on the team. They're delegating that responsibility to their team. So I would say managers have been vibe coding this whole time, and ICs are just now having to deal with that kind of delegation: delegation of implementation, but still having the responsibility of making sure it's good. So, yeah. Does that make sense?

No, it's
a hundred percent. Yeah. And I love that analogy of being a restaurant, being chefs, with your sous chefs, your assistants. I think that makes a lot of sense. So embracing AI is important, a hundred percent. Now more than ever, too, because if you're not, then you're behind; you're going to get outpaced by someone else that is.

Yeah, potentially outperformed. That's where, I mean, we haven't seen it yet, but it may be coming, maybe in performance reviews, if not this year then next year. To get really granular here: if someone on your team is using AI, they may be putting out two, five, ten times more work than you do; those are round numbers, orders of magnitude, whatever. And then come performance review time, maybe not this year, it's kind of early still, but next year, maybe you don't get a promotion, even though you may think you deserve it, because someone on your team is using AI, embracing it. They're a year ahead of you in learning how to use it. It's going to be tough to catch up.
29:30
Yeah. Anyways. So wrapping up, given all that
29:34
talk of AI, is there any hype take about AI and
29:38
engineering that you want to kill? Killing a
29:41
hype take. Interesting. I mean, what haven't
29:44
I, what with raining on people's parade about
29:46
playing house and whatever. But I don't know.
29:49
Let's see. Okay. People like to
29:53
be smug and say, I'm gonna end up cleaning up
29:57
your slop code in six months to a year
29:59
or whatever. I'm like, bro, nobody's gonna hire
30:02
you if you're gonna insult what they did.
30:04
If they vibe coded an entire
30:08
product that was successful enough to
30:11
make money, to then have to hire somebody to help
30:14
clean up the code, they're not gonna hire you.
30:16
If you're posting openly on LinkedIn
30:19
with a whole X feed of AI vibe coding hate,
30:23
they're not going to hire you. And I think, again,
30:25
the identity thing is
30:28
so wrapped up in this, and I think
30:31
that's what's underneath all of this: people
30:33
want to feel smart. And they are
30:36
smart, for sure. All
30:38
these people are probably way more qualified
30:40
at coding than I am. It's just, I'm willing to
30:42
give that
30:46
up, maybe because I'm not great. I've
30:49
never been the best
30:51
developer or whatever, but I see how
30:55
these systems connect together. And with
30:58
being DevOps, we glue all these things together.
31:01
We're at the crux of all these systems
31:04
interacting, testing, building, and
31:07
deploying. I'm willing to say, all right,
31:10
I don't really enjoy that whole coding process
31:12
anyway. Let me just take a step up and
31:15
connect all these systems together at a, whatever,
31:18
higher level. I know I hate that term,
31:20
it makes me feel high and mighty
31:22
or whatever, but it's a level up the stack.
31:25
People aren't manually coding in assembly
31:28
anymore. People moved to the high-level language of
31:31
C, and people were like, oh, you're never gonna
31:33
know what the actual bits are. We
31:36
had this whole thing. The
31:39
same conversation happened then, right? So I feel
31:42
like people being smug online and having
31:45
these AI wars or whatever, I take advantage of
31:48
it. I like to be a little edgelord online, being
31:50
pro vibe coding or whatever. Then people
31:54
come by and be negative, and I'm like,
31:56
thank you for the engagement. But yeah, the people
31:58
who have successfully vibe coded applications
32:01
aren't going to hire the smug anti-AI
32:04
people online. So, okay. So wrapping up, you had
32:07
mentioned the vibe coding books, Gene Kim, Steve
32:09
Yegge. Is there anything else you'd like
32:12
to leave our listeners with? I have a YouTube
32:15
channel where I live stream. It's called Enterprise
32:17
Vibe Code. I live stream myself, vibe coding,
32:20
just kind of building in public type of thing.
32:22
I do it most mornings. I'm a morning person.
32:25
I get up early, weirdly enough, and people from
32:28
around the world join the stream and jump in
32:30
the chat and talk. So yeah, I end up talking
32:36
more than building, which is fine by me.
32:39
It's just interesting. I
32:42
recently started that. And I think people
32:42
are enjoying talking with other
32:45
people about this thing and trying to get
32:47
a grasp of what is possible, because there
32:49
is no manual, right? This is
32:52
the closest thing we have to a manual,
32:53
and it's not even a manual. There's
32:56
no specific way to do it or specific
33:00
way to teach it other than to just do it. Right?
33:02
So I do it live, in front
33:06
of other people, and hopefully people get something
33:09
out of that. Yeah, it seems like it's been resonating
33:11
too. I looked at your channel. We spoke a little
33:14
bit before we started recording, which is awesome.
33:15
So check out Enterprise Vibe Code on YouTube,
33:18
Mike Lady on LinkedIn, and I'll have all the
33:20
information and everything we talked about in
33:23
the show notes. Thanks for coming on, Mike. Really
33:24
appreciate it. Awesome. Thanks for having me,
33:26
Brian. Appreciate it. All right, that's my conversation
33:28
with Mike Lady from Enterprise Vibe Code. My
33:31
biggest takeaway is his framing that guardrails
33:34
aren't process for process's sake. They are what
33:36
makes speed real. Without them, you are just
33:39
shipping faster failures. And with AI in the
33:42
mix, that matters even more. AI can write code
33:46
and even fix builds, but it will absolutely take
33:49
shortcuts if you let it. So you need branch protection,
33:52
CI quality gates, and sane deployment paths.
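The gates idea from that recap can be sketched in a few lines. This is not from the episode, just an illustrative Python sketch of a CI quality gate: the check names and the coverage threshold are my own assumptions, but the shape is the point — the build ships only if every gate passes, regardless of who (or what) wrote the code.

```python
# Hedged sketch of a CI quality gate (illustrative names and thresholds,
# not a specific tool's API). A real pipeline would feed in coverage,
# lint, and test results from earlier stages and fail the job on False.

def quality_gate(coverage: float, lint_errors: int, tests_passed: bool,
                 min_coverage: float = 80.0) -> bool:
    """Return True only if every check passes; CI should fail otherwise."""
    checks = {
        "tests": tests_passed,
        "coverage": coverage >= min_coverage,
        "lint": lint_errors == 0,
    }
    failed = [name for name, ok in checks.items() if not ok]
    if failed:
        print(f"Gate failed: {', '.join(failed)}")
        return False
    return True

# A change that "works" but drops coverage still gets blocked:
print(quality_gate(coverage=92.5, lint_errors=0, tests_passed=True))  # True
print(quality_gate(coverage=61.0, lint_errors=0, tests_passed=True))  # False
```

Pair a gate like this with branch protection (no direct pushes, required status checks) and the gate becomes enforceable rather than advisory.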
34:19
Thanks for listening, and I'll see you later
34:21
this week.