When to Choose Serverless Versus Fixed Size Resources
Oct 30, 2023
Designing infrastructure for new applications can be challenging: you don't know what the workload is going to be, over-provisioning can be costly, and under-provisioning can lead to poor user experiences. The cloud offers a solution to these problems: serverless resources, which autoscale to meet the performance needs of your application while keeping costs reasonable. In this session you will learn about utilization patterns that are a good fit for serverless workloads, and when to choose serverless versus fixed size resources. Software Architecture Conference 2023 (https://softwarearchitecture.live/) #SoftwareArchitectureConf23 #CSharpTV #csharpcorner
0:00
Hi, everybody. Thank you again for hosting me. It's always a pleasure to work with C Sharp Corner
0:04
The environment and the way that they share all of this knowledge is phenomenal
0:10
They have a reach beyond any other platform that I know of, and I'm happy to be able to share my knowledge with you
0:17
This is a brand new session. It's when to choose serverless over a fixed size resource
0:22
I haven't done this one before, so I'm looking for great feedback if you guys have it for me
0:26
If there's things that I don't cover and you wish that I did, let me know
0:30
I would absolutely love to get that feedback from you. So hopefully you can see my screen
0:35
You can hear me okay. Those that know me as SQL Espresso
0:41
I am SQLEspresso on Twitter. I'm on the new Bluesky
0:45
I am on Mastodon. I am on all of those wonderful social platforms that are trying to get some legs and grow
0:52
I've got my blog, SQLespresso.com. I'm known as SQL espresso because I am always high on espresso
1:00
I talk a mile a minute. So those that do not speak English as their first language, I apologize
1:06
I do my best to slow down on my tempo, but I don't always reach that goal that I want
1:13
So just bear with me if I go a little too fast. This is recorded. You can watch it on a slower speed and hopefully pick up all of the concepts that I'm going to give you today
1:23
I was laughing when I saw your intro, Simon, because you had Katie in the picture
1:31
I tend to give this slide all the time when I'm doing virtual presentations because my cats notoriously stay out of my office most of the day until I go online and do a virtual presentation
1:43
And during those presentations, you're going to see Katie, Theo, Cora, and this disastrous Luna, she's my newest kitten
1:50
and she goes on my desk and tears it apart. So don't be surprised if you see a tail coming back and forth as I talk
1:56
I try really hard to ignore them, do my best, and just carry on
2:05
So hopefully you can too, maybe get a little chuckle as they stop by and visit us
2:09
as I do assume that they will. So anyways, just a little bit of side note that I was laughing, Simon
2:15
because you noted my special guest and that cracked me up. Okay, what are we going to talk about
2:20
I love to break this down so you guys kind of get an idea of what to expect in the session
2:25
We're going to talk about what serverless is, the type of serverless offerings that are out there
2:32
not only in Azure, but maybe AWS or Google Cloud, depending on what you're using
2:37
It is a concept that some people grasp really, really easily, and some others just kind of struggle with what exactly it is and how the cost works
2:46
There's a different cost between having a serverless option versus having a fixed option
2:52
And we're going to go into those. And then we're going to talk about scaling, dynamic scaling
2:57
It can actually scale your resources based on your usage, which could really be a cost savings and we'll tie that in
3:04
But it can also cause other heartaches. And we'll talk about that when it is best to use these kind of things
3:11
in your environment to get the most bang for your buck and not end up with a huge Azure bill
3:17
like what was talked about earlier, you know, when they mentioned spending that million dollars in seven months rather than
3:24
you know, having that year-long million dollar cost that they budgeted for. So we'll tie that in a
3:30
little bit. I was glad that was mentioned earlier. We'll talk about the workloads. We'll talk
3:34
about what type of workloads you want to have and what you need to consider before determining
3:39
if serverless is the best option for you. And then number five is where I really want to stick
3:44
I want to actually go and look into what is the gotchas and what are you going to expect when you go
3:49
to a serverless offer. What kind of things that you're going to have to change in your code
3:54
What kind of things you're going to have to really account for when you decide that you're
3:58
going to get this? These are gotchas that I've run into with my clients or that my clients have run into
4:03
or other people we know have run into, that really put a halt to using serverless
4:08
and had them switch back to a fixed option. So we're going to talk about that
4:12
And then we'll see what kind of questions you might have at the end and see if we can
4:17
address this. But we only got 35 minutes. So actually a little less
4:21
So let's go ahead and get into it. We're going to talk about, again, serverless, what is it
4:26
Now, a little story, my dad, when I started talking about, hey, I'm starting to work with the cloud
4:31
and I work in databases. I've been doing databases for 25 years. He was having a really hard time conceptualizing what the cloud is
4:38
He imagined our data flowing through the air, and we just kind of grab the data out of the air
4:45
And if it rained, our data was in trouble. No lie. That was an actual conversation we had
4:49
So when we talk about serverless, I try to explain it as, okay, where did the server go
4:55
If you're looking in this room, serverless is like we took all of the servers out of the room, but where are you actually doing the data load
5:02
So serverless is kind of a hard concept for a lot of people to get. But what I like to talk about, just to kind of get your toes into it, is that it's commonly used as function as a service
5:13
So maybe you're using Azure Functions or AWS Lambda, and you're paying for functions to be executed
5:20
So you're not deploying to an infrastructure, you do not need a server, so you're serverless
5:27
you just need to be able to have some code executed. And you want to be able to pay as you go with that code
5:33
So maybe you pay for how many executions of that function that happens in your environment
5:39
Maybe it's a thousand and you're paying for 1,000 executions of a function
5:43
Or maybe you're paying for a function that runs just a piece of an application for a little bit of time, and you need to throw it out somewhere and have it execute in the cloud
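To put rough numbers on that pay-per-execution idea, here is a minimal sketch. The rates, run counts, and memory size are hypothetical placeholders (consumption plans are typically billed per execution plus per GB-second), not published prices; check your provider's pricing page.

```python
# Back-of-the-envelope consumption-plan estimate. The rates below are
# hypothetical placeholders, not real published prices.
PRICE_PER_MILLION_EXECUTIONS = 0.20   # assumed $/1M executions
PRICE_PER_GB_SECOND = 0.000016        # assumed $/GB-second

def monthly_function_cost(executions, avg_duration_s, memory_gb):
    """Estimate monthly cost for a pay-per-execution function."""
    execution_cost = executions / 1_000_000 * PRICE_PER_MILLION_EXECUTIONS
    compute_cost = executions * avg_duration_s * memory_gb * PRICE_PER_GB_SECOND
    return execution_cost + compute_cost

# A nightly report job: ~60 runs a month, 30 seconds each, 0.5 GB memory.
print(f"Nightly job:  ${monthly_function_cost(60, 30, 0.5):.2f}/month")
# The same code stuck in a loop: 10 million calls a month.
print(f"Runaway loop: ${monthly_function_cost(10_000_000, 30, 0.5):,.2f}/month")
```

The same function is pennies a month when it runs a couple of times a night and thousands of dollars when a loop calls it millions of times, which is the point made above about testing your code first.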
5:54
Azure Logic apps, that's another version of this where you have a program that exists doing something
6:00
without the notion of the computer and the compute it's executing on
6:05
So there's truly hardware behind it, but you're not owning the hardware
6:09
Now, I'm a database person, so of course we're going to be talking about Azure SQL database
6:13
Now, that's one thing where it's a logical server, yes, but you're on hardware, but we're
6:21
only paying for the little piece of that hardware that we're using, the compute side
6:27
We want to pay for the resources we're using as we're using them, and we don't want to pay
6:31
for it when we're not using them. So we get this concept of auto scaling that we're going to talk about
6:36
But the key to this is your data still lives somewhere. So you will always be paying the storage portion of this
6:43
So that's something to consider. And then you will be paying the compute resources separately
6:49
So those are the two pieces, and that's kind of important to know. Serverless doesn't mean it doesn't land on a server
6:54
It is on a server, but you're only paying to use that server when you need it
6:59
So hopefully that's a little clearer for people when it comes to the concept of serverless
7:04
There's a lot of options out there for you to go ahead and read on to get a little bit more information on that
7:09
So let's talk about some of the options that are available for you out there. such as, again, I said the Azure functions, right
7:17
Azure Functions; AWS calls it Lambda. Again, these are things where you pay per usage
7:24
So functions are paid for at runtime. So here's the thing you've got to think about
7:30
If you have long running code, if you have a function that somehow ends up in a loop
7:34
and you end up calling that function too many times, you're going to talk about big dollars
7:40
So when you start to think about using these things that are available, in the cloud, you're going to be paying per usage, per execution
7:48
So you have to be really careful in evaluating your code whether you want to actually have
7:54
something in a serverless option, the pay as you go, pay as execution
7:58
Now they do have an option where you can actually reserve capacity for these functions
8:04
out there. So you have two different ways you can use the cloud when it comes to calling a function. Your apps can just go and grab the function within the cloud, and you get the fixed option as well as the usage-based one
8:16
Now I had a client that actually used this really, really well. They had a function that was called when they were doing their nighttime
8:22
workloads or certain reports that they would run. And so they only paid for it to run, you know, one or two times a night
8:29
And it was very advantageous for them within their app to just pay that little tiny bit
8:34
for those run times. Again, you have to be careful with your code and you're testing your code before you do these things
8:40
so you don't end up with thousands or millions of executions and a really large bill
8:45
So we have Azure SQL database. Highly recommend Azure SQL database. It is again a pay-as-you-go option
8:52
with serverless, and I have a screenshot that we can take a look at of what's available
8:59
for us on Azure SQL Database. Really good tool out there. You've got Azure Database, again, for MySQL and PostgreSQL
9:05
AWS Aurora. So it's not something that's just within Azure. We have lots of different options, right
9:13
We have lots of different ways we can use this. Postgres is a great example of an
9:18
option that you can use inside this. Azure containers, somebody talked about containers earlier
9:23
You can do a serverless option for containers as well. Cosmos DB and Redshift
9:29
You can do your analytic workloads on a pay-as-you-go basis. And we'll talk about when that's good to do and when that's not good to do
9:35
especially if you have a lot of processing that happens that you need to pay for those seconds of CPU cycle
9:43
We'll talk about how the pay structure is done, which can bite you, but can also save you
9:48
So let's talk a little bit about the, I'm going to kind of go through these slides a little fast so we can get to the good gotchas at the end
9:54
So serverless constructs. So these are kind of things you need to know for the basis of serverless and why it can actually save you money
10:02
So auto-scaling and auto-pausing are really, really great. We're going to talk about that a little more in depth
10:09
It can scale, meaning scale out, give you more CPUs and more power as your workload increases
10:18
And then I can actually scale down when your workload is smaller
10:22
So you can go all the way up to, like, 16 vCPUs during the height of the day, and then
10:28
you scale down to maybe two vCPUs in the evening. So there's lots of things that you can do to save costs by scaling up when you maybe have an on-sale
10:36
Say you are a ticketing client, like Ticketmaster, where you've got a big sale
10:41
You've got to sell a bunch of tickets during peak times. And then lots of times where you have downtime where there's no sales going on
10:48
So you can scale up really quickly and have it max out on its own, without you intervening, to the upper limit you've set while you do your ticket on-sales
10:58
And then when the on-sale is done, you can scale back down and save all of those resources. And you can do that automatically, which is awesome. You can also pause
11:06
turn off the server compute resources when you're not using it. So the cost savings could be
11:12
huge, but I want you to look at the second one. It's paid per second. Yes, per second for those
11:21
CPU cycles that are going to happen. So you have to really pay attention to see if a fixed
11:26
cost makes more sense, where you're actually reserving, say, eight
11:31
vCPUs for an Azure SQL database or whatever you're using, and you're paying for that eight all of the time
11:39
Or you're just going to actually pay for the time you have it live and doing the cycles per second
11:44
That's the big keyword on there per second. And we'll show you that cost here in a minute
11:49
But it can really reduce the cost when you use it for things like dev test. So not everybody's testing 24 hours a day
11:56
So maybe you've got some code that's going to go in and you have your testers testing for a few hours
12:00
you could pay for those resources and scale up while they're testing certain loads to match your production system
12:07
That's a great way. And you can actually save money by only using that server and having those resources available as you need them
12:14
So that's a great way to save cost there. Understanding where your databases have their workload, right
12:22
Are you working mostly during the day and no workload at night? Or are you doing a lot of cycles at night and not a lot of activity during the day
12:30
Again, you've got that peak on sale thing I was talking about. You really have to know your workload and when your users are using that database
12:38
Maybe you are a nine to five shop and you only have a workload when the users are connected from nine to five
12:44
Well, that means at night, it can auto scale and go really low and shut down and pause during the nighttime
12:51
So you're not paying for that server at nighttime because you're not using anything
12:55
So again, functions I talked about a little bit earlier, you're calling it as your function
13:00
You can save time on that call time or any resources that you might have reserved for those functions
13:06
But you got to make sure all that code is tuned before you decide to do something like that
13:11
So this is a quick thing on cost. So here's the cost. Let's say I go ahead and I'm going to do a fixed price model, a pay as you go
13:19
and I'm going to reserve that 600 gigs or 600 CPU capacity here
13:25
It's a fixed cost. My cost is going to be the same every month
13:34
It's going to be a provisioned resource. I'm going to say I need eight VCPUs
13:38
I need this much memory. And I know every month my bill should be the same when it comes to my compute cost
13:44
I've reserved a server and that is what it's going to be. Now, when I start to go through consumption-based costs, I'm paying what I'm using
13:54
I'm actually paying different peaks. I don't have a monthly bill that I can expect every month
13:59
It's hard for accounting and budgeting, but I'm able to possibly save the company millions and millions of dollars
14:05
by not paying that standardized bill and reserving those particular instances when they're not being used
14:12
I mean, if you have dev test that only gets used once a month, why are you paying that standard cost for once a month
14:18
When you could use something like serverless that allows it to pause throughout the month
14:22
and then it can beef up and work and take that load as it needs it by those CPU cycles
14:29
So that's a really good way to look at it. You're going to be either paying that same budgeted bill every month that you guys have
14:36
or you can get that dynamic bill based on your usage. So those are the two things, the fixed price or the consumption based or that pay-as-you-go type thing
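As a sketch of that difference, here is a quick comparison with assumed prices. The per-vCore-hour and per-vCore-second rates below are made up for illustration only; pull the real numbers from the portal's cost summary for your region and tier.

```python
# Hypothetical rates for illustration only -- substitute the numbers from the
# portal's cost summary for your region and tier.
PROVISIONED_PER_VCORE_HOUR = 0.50       # assumed fixed-tier price
SERVERLESS_PER_VCORE_SECOND = 0.000145  # assumed serverless price
HOURS_PER_MONTH = 730

def provisioned_monthly(vcores):
    """Reserved compute: you pay for every hour whether it's used or not."""
    return vcores * HOURS_PER_MONTH * PROVISIONED_PER_VCORE_HOUR

def serverless_monthly(active_hours, avg_vcores):
    """Consumption-based compute: you pay per vCore-second while active."""
    return active_hours * 3600 * avg_vcores * SERVERLESS_PER_VCORE_SECOND

# 8 vCores reserved all month vs. a dev/test database busy ~4 hours a day
# at an average of 2 vCores, paused the rest of the time.
print(f"Provisioned, 8 vCores:     ${provisioned_monthly(8):,.0f}/month")
print(f"Serverless, 4 h/day usage: ${serverless_monthly(4 * 30, 2):,.0f}/month")
```

Storage is billed separately in both cases, as noted earlier; this only compares the compute portion of the bill.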
14:46
When we talk about performance efficiency, that's when you're developing your infrastructure
14:51
That's when you're trying to decide what exactly I need in the cloud that's going to cover my workload
14:58
and what are my peaks and valleys of performance. And so you can kind of decide what kind of hardware you're going to need
15:06
Are you going to need to scale? Is there something where you're starting off really small and you're going to scale over time
15:12
You've got to figure out when you're choosing your hardware and you're choosing which type of service that you want
15:18
and what type of servers you're looking for, what architectures you're going to have, you're going to have to figure out what your sizes are and what your usage patterns are
15:27
So there's a lot of analysis that happens, but it is your job, like they said earlier
15:33
as an infrastructure architect, to figure out what that efficiency is. And that can be really hard to do. Going to an option like this can actually help you when
15:44
you're starting out. You can start to look at what your peaks and valleys are and what your usage patterns are using a pay-as-
15:51
you-go type service like serverless. And then you can start to see what the scalability needs to be
15:58
How many times did it scale up? What did we need to do? When did it actually pause
16:03
And then you kind of get an idea of, yes, we probably need to go to a fixed environment
16:11
because we're scaling too often and we're always at this peak or what have you
16:15
Or are we able to scale down to two vCPUs for 75% of our load and then scale up to maybe 16 at our peaks? And then you get that money savings during the time where you can actually stay at that two vCPUs
16:29
for example. But it gives you this auto scaling ability to where it can be hands off from the
16:35
infrastructure side. You're able to set, and I'll show you in the next slide, but you're able to
16:40
determine what exactly I need to have for allocating resources based on my performance
16:46
it starts to watch your workload. If your workload starts to get really heavy and robust
16:52
it will auto-scale the amount of CPU and memory, for example, for you without you having to touch it
17:00
It makes it really easy when you're in a big, large environment where you have thousands of servers
17:05
and you have to watch and figure out, oh, no, we've got to scale up tonight because we have a big on sale that's about to happen
17:11
Or we're going into tax season, I need to go ahead and scale up. or we're going into whatever it is, your big reporting season or whatever happens
17:20
inside your work environment where you need to manage the scale up and scale down or
17:25
fail over to get new hardware, all of that great stuff that we have to manage inside the cloud
17:30
Serverless options do that for you. You can set on the screen a min and a max of where you can go
17:37
So you get a little bit of control over the money aspect of it, not a lot, but you get a little
17:42
bit of control. Here's an example for Azure SQL Database. You can see here as I'm picking my tiers
17:48
I have this serverless option and forgive me for looking sideways because that's where my thing is
17:52
here. I have this serverless option where I can actually say, give me serverless and you say
17:58
what kind of hardware you want, and then you set a min and a max. So let's say I want to have a
18:04
minimum of 2 vCores for my workload, but I want to max it out at 16. I don't want to pay for more
18:10
than 16 cores. It will go through that range as it needs, sizing and auto-scaling for you to get
18:17
your workload through at the maximum efficiency. And you can keep hands off. Once I set this
18:23
then I never have to touch it again. And it'll manage the auto scaling for me. And it has a little bit
18:27
more information. It's going to give you a minimum of six gigs in memory, 48 gigs max. So you can
18:33
control all of that. Again, now look at the pricing. See that cost summary? That's the big thing here
18:38
and hopefully you guys can see my mouse here. But you'll see it's going to give you the storage cost, right
18:44
Because you're always paying for storage, and it puts it in cost per gigabyte. And then it gives you the max storage that it's going to allow you to have
18:51
So you can control that cost a little bit. And remember, that cost is not going away
18:55
That's going to be a steady cost based on the amount of storage being consumed
18:59
But here's the thing here. It'll give you the estimate of storage cost, which is great
19:03
But look at your compute cost. The vCore-second cost. So you're being charged that amount per second of usage of those CPUs
19:13
So if you're a smaller scale and you're saving money because you can actually be scaled down for a lot of your workload, that's fantastic
19:19
But it gets really, really scary when we run into situations which we'll talk about in a little bit where you're getting charged per second and you don't have connections that release
19:29
So it's still alive and not paused and we're going to start getting billed per second of all those connections still being attached to the server
19:36
So that's the scary part. The auto pausing, it will actually shut down your server, pause it during specific windows
19:45
or when you don't have any activity. So again, at night, we don't have an activity
19:51
It starts to see I don't have any active connections. It's not being used. It will pause, turn off the server during that window until somebody wakes it up
20:00
So you're not paying for weekend usage or end of the month usage or whatever it is or night
20:06
time usage or a couple hours during the day when that server isn't being touched or when
20:11
the devs and test people aren't using it, it's paused. You're not being billed for it, minus the storage, and it's wonderful
20:18
You don't have to control it. It actually watches. In this case, we enabled auto-pause, and if there's an hour lag, meaning no activity in an hour
20:26
it's going to bring the server down. And then when we start to get activity, it'll bring the server back up
20:31
So that's a really, really great way to save money, whereas
20:36
if I had done the fixed cost and chosen that model, I'm paying for it all month long
20:41
whether I'm using it or not. The auto pause feature allows us to get away from that
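For reference, here is roughly what that portal configuration (min/max vCores plus an auto-pause delay) looks like if you script it instead. The resource names are placeholders, and the model and property names are from the azure-mgmt-sql Python package as I recall them, so treat this as a sketch to verify against the current SDK documentation.

```python
# Sketch: create a serverless Azure SQL database with a 2-16 vCore range and a
# one-hour auto-pause delay. Resource names are placeholders; verify the model
# and property names against the current azure-mgmt-sql documentation.
from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient
from azure.mgmt.sql.models import Database, Sku

client = SqlManagementClient(DefaultAzureCredential(), "<subscription-id>")

serverless_db = Database(
    location="eastus",
    sku=Sku(name="GP_S_Gen5", tier="GeneralPurpose", family="Gen5", capacity=16),  # max vCores
    min_capacity=2,        # don't scale below 2 vCores
    auto_pause_delay=60,   # pause after 60 minutes with no activity
)

client.databases.begin_create_or_update(
    "my-resource-group", "my-sql-server", "my-database", serverless_db
).result()
```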
20:45
Okay, so here's the big question, right? This is the question of when do I use fixed or
20:50
when does serverless actually benefit me? And this, again, I'm going to say one more time
20:56
dev test. If you build it, they will come, right? If you have dev test environment and it's not
21:01
used all of the time, we tend to test in cycles, right? We're not necessarily running a test load
21:06
24-7 unless you're a big giant enterprise. So why are we paying for a fixed resource, reserved
21:12
resource for something that only gets used a few hours a week or a few hours of day? And it can be
21:18
small environments. So maybe we have it used at two cores when we want to test something, but we also
21:25
want to actually test it at a higher core count. We can push a workload and see how many cores
21:30
that workload actually requires. It will scale up if it needs more. So by the time
21:36
you're done with your testing, you may be at two cores when you started, but after you run your
21:40
test for a while and it notices the workload, it may max out at 32 cores and then you know your
21:46
peak time needs 32 cores. That's a great way to do that. And remember, we're only paying for that
21:51
usage during the testing time. And then it'll actually pause and we can let that go. Really
21:56
really, really great way to use serverless. Now, here's another one, early product development
22:02
Maybe you have a small amount of data and you're launching a brand new app
22:07
Maybe you have those Azure functions that you're using. And we only have to execute it a few times a day or a few times an hour
22:14
Or we only need the server lightly during the time. We could pay for those CPU cycles on a serverless one and see as we grow
22:23
Maybe we start to outgrow the serverless. We've saved a bunch of money. We start to outgrow and then we go over to a fixed cost
22:29
Kind of getting your workload environment stable and then moving to what you think you truly need at the end
22:36
And you can actually use serverless to determine that with its auto scaling and its pausing
22:42
And you'll know what your patterns are for that data usage and the proper sizing
22:46
Big way to save money in those environments. Cyclical workloads. So the standard ups and downs in your workload: at the beginning of the month, you have a lot of usage
22:57
end of the month you have a lot of usage, but during the middle of the month, you have barely any usage at all
23:03
Great use of serverless. There is no reason to waste resources on a fixed server and a fixed tier
23:10
when you don't actually need it at that scale for the mid-month work. So really good on that
23:17
We have a client that at the beginning of the month from like the first through the fourth of the month
23:22
they only have two vCores. It's the max they ever hit. But at the end of the month
23:27
they're doing their month-end reporting and their month-end processing and billing that happens
23:31
they need nine cores. So it automatically scales up for them. They're very, very happy. We don't
23:37
get calls and say, hey, we need you to do a scale up for us. We're going to have to take an outage
23:42
or we're going to have to fail over to scale up. That might take an outage, different kind of
23:46
things that we need to do to scale or they need to have somebody actually create something
23:52
that would help us programmatically scale, or what have you, or a person to intervene on scale-up
23:59
This will do it for us and we don't have to manage it. They're very, very happy. It's hands off
24:03
They don't have to call us. It works wonderful. So here's the thing about gotchas
24:10
This is when we go to serverless and we figure out, hey, this is not working exactly how we planned
24:17
So we have to mitigate what I'm about to show you. So I'm really lazy
24:25
The server is really lazy. It goes really, really slow. When the server auto-pauses, that means it stops, right? It's going to say, hey, you're not using the server. You haven't used it for an hour
24:39
I'm going to pause. I'm going to bring all your resources down. You're not going to get charged during that window
24:44
I have paused. Well, let's say you need to get a connection and you need to start using this server
24:49
You've got to start hitting that database. It does it automatically. You do a query and all of a sudden it's back up
24:55
It needs time to warm back up and allow those user connections to go. So you need to build in something inside your app or maybe an ETL
25:04
process or whatever you're doing to allow it to wake up. Maybe it takes 30 seconds to wake up
25:10
Maybe it takes a minute to wake up. Maybe it takes 20 seconds to wake up. Either way, just know
25:16
when that server starts to be used again after it's auto paused. You have to give it time to
25:21
wake up. You may get some timeouts. You may get some connections denied. You may get some things
25:26
that you're going to have to put in some retry logic to actually have it wake up for you
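Here is one way that retry logic might look, as a minimal sketch with pyodbc. The server name, credentials, attempt count, and delay are placeholders, and the exact errors you see while the database resumes can vary, so log them and tune the retry behavior accordingly.

```python
# Minimal retry sketch for connecting to a serverless database that may be
# resuming from auto-pause. Connection string values are placeholders.
import time
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=my-sql-server.database.windows.net;DATABASE=my-database;"
    "UID=<user>;PWD=<password>;Encrypt=yes;"
)

def connect_with_retry(attempts=6, delay_s=10):
    """Keep retrying while the paused database wakes back up."""
    for attempt in range(1, attempts + 1):
        try:
            return pyodbc.connect(CONN_STR, timeout=30)
        except pyodbc.Error as exc:
            if attempt == attempts:
                raise
            print(f"Attempt {attempt} failed ({exc}); retrying in {delay_s}s")
            time.sleep(delay_s)

conn = connect_with_retry()
```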
25:32
Things to keep in mind. Here's my grumpy one, and here's where it costs you a lot of money that sneaks up on you
25:38
This is when you actually think the server is supposed to go to sleep
25:46
So you kind of know what your workload is. You know it actually goes to sleep
25:50
It auto pauses. That's what I mean. It auto pauses every night
25:54
So you expect your bill to be a few thousand dollars. It's kind of been pretty stable. You've got a stable workload
26:01
Serverless is perfect for you, but a user doesn't close their connection. An application
26:08
doesn't end its transaction. There is something that somebody has developed that keeps a connection
26:14
open. Now, if I have any activity whatsoever on that serverless database and I'm expecting it to pause
26:22
and go to sleep. Somebody keeps an active transaction. I am paying per second for that active
26:29
transaction to keep existing. Maybe you have something that pings that server over and over again
26:35
to kind of make sure it's awake or somebody's built in something into an app that you don't know
26:40
Every single time it hits that server, the server thinks it needs to be awake, so you never get
26:46
the sleep time. You never get the auto pause and you might end up with these big giant bills
26:50
So there's things that have open connections and user connections and things that unintentionally keep the server awake, which can mean big bucks
26:59
So that's something you have to keep in mind and you have to watch. Maybe you have to build in logic that automatically kills connections a certain time of day or what have you, whatever your app does, when you're expecting that time to pause and you actually get that cost savings that you're expecting
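One way to hunt for those culprits is to look at which sessions are still connected and when they last did any work. This is just a starting-point query against sys.dm_exec_sessions; the ODBC DSN is a placeholder, and you would adapt the filter to whatever your app looks like.

```python
# Sketch: list user sessions that may be keeping a serverless database from
# auto-pausing. "serverless-db" is a placeholder ODBC DSN.
import pyodbc

QUERY = """
SELECT session_id, login_name, host_name, program_name,
       status, last_request_end_time
FROM   sys.dm_exec_sessions
WHERE  is_user_process = 1
ORDER  BY last_request_end_time DESC;
"""

with pyodbc.connect("DSN=serverless-db") as conn:
    for row in conn.execute(QUERY):
        print(row.session_id, row.login_name, row.program_name,
              row.status, row.last_request_end_time)
```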
27:14
The minute somebody keeps something awake and open, big money per second
27:19
So that's a big gotcha that can do it. So the next one, poof
27:25
Poof is the big word. I know, very scientific word. This is when the server goes to sleep
27:32
It's kind of like a restart, but it doesn't come back up. So when it comes to SQL server, I'm talking SQL because that's where I live
27:41
we get a lot of really good information that the optimizer uses or we use to make performance decisions
27:48
inside the database. So we have statistics and we have different things that are housed inside the DMVs
27:55
the dynamic management views that give DBAs information on how the server is doing things
28:01
The statistics of what kind of waits we're having, wait stats, what kind of waits we're expecting or what kind of waits are going on
28:08
so we can troubleshoot high CPU or something like that. Statistics for the optimizer to know our data
28:15
what our data distribution is inside the database so it knows what to expect and what
28:21
kind of decisions to make as we run queries. Well, when you bring a server down and restart a normal
28:27
on-prem server or VM or whatever, all of those things get wiped out. We know that every
28:34
reboot actually wipes those out and we have to have new metrics or the server has to build new
28:39
statistics or whatever we need to do inside a SQL Server. So
28:45
think of a pause when it comes to serverless as a restart because we lose everything
28:52
Poof, goodbye. All of that great stuff that we have is no longer kept for us to reuse over and over again
28:59
and make decisions. I was very, very sad when the first time this happened, I actually reached out to the
29:05
product group of Microsoft and said, hey, all of this stuff is gone
29:09
I really need this for performance because performance guru, that's what I do
29:13
and they said, oh, sorry, it's gone. We don't actually keep that
29:19
We will spin up a new database for you or whatever option that I've chosen as far as serverless
29:26
and it doesn't propagate and keep all that information for you. So keep that in mind, all the good stuff
29:31
It does take a performance hit, in my opinion, when it comes to queries
29:36
especially with that and anything else that I rely on as a DBA
29:40
Remember, it acts like a reboot when you pause. So we lose a lot of good stuff
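If you rely on that history, one mitigation is to snapshot it into a regular table on a schedule so it survives the pause. Here is a rough sketch using sys.dm_db_wait_stats; the table name, schedule, and connection DSN are all placeholders to adapt.

```python
# Sketch: persist wait stats into a user table so the history survives an
# auto-pause, which clears the DMVs like a restart would. Names are placeholders.
import pyodbc

SNAPSHOT_SQL = """
IF OBJECT_ID('dbo.wait_stats_history') IS NULL
    CREATE TABLE dbo.wait_stats_history (
        captured_at         datetime2    NOT NULL DEFAULT sysutcdatetime(),
        wait_type           nvarchar(60) NOT NULL,
        waiting_tasks_count bigint       NOT NULL,
        wait_time_ms        bigint       NOT NULL
    );

INSERT INTO dbo.wait_stats_history (wait_type, waiting_tasks_count, wait_time_ms)
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM   sys.dm_db_wait_stats
WHERE  wait_time_ms > 0;
"""

with pyodbc.connect("DSN=serverless-db") as conn:
    conn.execute(SNAPSHOT_SQL)
    conn.commit()
```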
29:46
So the last thing that I have is auto scaling. I told you it's going to auto scale based on our workload, right
29:52
Which takes that pressure off of us. It's going to say, I'm at two VCPUs
29:58
Now we've got a bigger workload coming in. I'm going to go to four. Wait a second
30:02
We got a big giant workload. I'm going to go to 16. And it's going to intervene and watch those workloads for us and scale up as needed
30:08
Now it does it at a snail's pace. Your data is coming in like a puma
30:13
It's going really, really fast, like a cougar. It's coming through so fast that you actually want the scale-up to happen faster for you
30:20
If it starts to get that big on sale amount of volume that comes in for ticket sales
30:25
you want it to be able to match and scale up. It doesn't do that. It's going to watch your workload
30:30
It's going to make some determination on that workload. It's going to make sure that workload is steady at the higher level
30:36
Then it's going to decide, hey, I'm going to scale up those resources for you
30:40
To me, it's not fast enough in reaction to my workload, because sometimes I want to be able to jump
30:48
I want to go from a snail's pace all the way up to that puma as fast as my data changes and that workload is changing in my environment
30:55
And it does not do that. It's going to be a gradual scale up for you, not total snail pace, but pretty slow as it adapts to your workload
31:05
So just keep that in mind. There will be sometimes where you're still seeing the high CPU or high memory usage or paging out or whatever it's going to do on that smaller scale until it makes the determination to scale up and you get right sized automatically
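A simple way to see how quickly it is actually reacting to your workload is to watch the recent resource usage samples. This sketch reads sys.dm_db_resource_stats, which keeps roughly an hour of 15-second samples in Azure SQL Database; the ODBC DSN is a placeholder.

```python
# Sketch: check recent CPU and memory usage to see how the serverless tier is
# scaling with the workload. "serverless-db" is a placeholder ODBC DSN.
import pyodbc

USAGE_SQL = """
SELECT TOP (20) end_time, avg_cpu_percent, avg_memory_usage_percent
FROM   sys.dm_db_resource_stats
ORDER  BY end_time DESC;
"""

with pyodbc.connect("DSN=serverless-db") as conn:
    for row in conn.execute(USAGE_SQL):
        print(row.end_time, row.avg_cpu_percent, row.avg_memory_usage_percent)
```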
31:20
Really, really, really great features. Serverless is a fantastic option to save money
31:26
It allows us, again, to scale down, to pause and only pay for what we use
31:31
But these gotchas are what I really wanted to focus on and make sure that you are aware that this stuff
31:36
happens. You have to plan for it. Don't expect this miracle product when it comes to the serverless options that it's going to solve everything for you. And it can get you in the wallet. It can add big dollars to your bill, but it can also reduce your bill by millions depending on what you're doing and what your use case is. So it's something to definitely look at and consider. All right, Simon, I know I went over because you said we had a little bit of extra time. Oh, wow, look at that. I'm almost right on time. So thank you, everybody
32:06
for allowing me to talk with you today. Here's all my contact information
32:11
You can email me, catch me on Twitter. You can catch me on Blue Sky, all the great stuff
32:15
My blog is out there, and you can get me on LinkedIn as well. The best bet is hit me up with a message on LinkedIn
32:21
I usually am pretty good at replying to all of that. I work for Denny Cherry and Associates
32:26
We're a consulting firm where I do a lot of performance tuning. So you can always find me there as well
32:31
But that's all I've got. Thank you, Simon. I truly appreciate it