Cut your Kubernetes cloud bill in half with CAST AI - Product Showcase
Nov 9, 2023
Join us on June 15 with Leon Kuperman & Annie Talvasto on the Product Showcase live show.

ABOUT CAST AI: CAST AI is an all-in-one cloud optimization platform for Kubernetes. We use AI to significantly reduce cloud costs, reduce the number and complexity of tasks that DevOps teams have to do, and prevent downtime. All that on a single platform which is both simple to use and allows for control when you need it.

C# Corner - Community of Software and Data Developers https://www.c-sharpcorner.com

#castai #liveshow #csharplive #csharpcorner
0:30
Hi everyone. Welcome back to the C# Corner live show. I'm your host, Stephen Simon, and we are back with another episode of the Product Showcase. And in fact, this episode is going to be very different and very exciting, as this time I'm not your only host. I have a co-host with me, Annie, who is joining. She is from the CAST AI team. Without any further ado, let's invite Annie. Hi, welcome to the live show
0:56
Hi, thank you so much for having me. So Annie, I have been following you for a while now and you have been doing amazing stuff for the community
1:05
So thanks for the time today. Would you quickly like to go ahead and introduce yourself, who you are and what you do
1:12
Yeah, that would be interesting. Yeah, of course, super happy to. And I'm so excited to be talking with C# Corner again
1:19
Very excited to chat once again here. But yeah, hi to everyone. I'm Annie. I am the product marketing manager at Cast AI
1:28
joined rather recently. It's a very nice company in general. And as far as my background goes
1:35
I have been, as Simon said, really highly involved with community activities for a long time now
1:41
I run my own podcast called Cloud Gossip. I've been organizing the CNCF and Kubernetes meetup for
1:46
over three years now. And I also speak a lot at international tech events. But yes
1:52
nowadays I'm the product marketing manager at CAST AI. Absolutely amazing. So, Annie, you did say that you recently moved to CAST AI
2:03
Could you give a brief introduction about what CAST AI is and what the product is all about
2:10
Yes, very much so. So Cast AI combines instant AI driven cloud optimization with Kubernetes
2:17
It's one platform that cuts your cloud bill in half, boosts the power of DevOps 5 to 10x
2:23
and guarantees business continuity by preventing downtime. And that's kind of the gist of it
2:28
but what it actually means is that we are building the industry-leading cost optimization platform for Kubernetes
2:35
And at the moment, we have the best offering essentially for AWS EKS
2:40
And we will come up with the GKE and the AKS version very soon
2:47
So stay tuned for updates on that front. Wow, that's great. Now, for everyone watching, if you look at the title of today's show
2:56
it clearly says Cut Your Kubernetes Cloud Bill in Half with CAST AI, right? That's a pretty bold statement to make, right, to cut it in
3:04
half. And I've been visiting your website over the past couple of weeks. First, absolutely amazing website
3:10
I highly encourage everyone to visit the CAST AI website. They have built it very nicely. And you say
3:17
that you're going to cut the bill in half. How does this actually work? Can you give a brief idea about that
3:24
Yeah, of course. Well, essentially, for a quick brief, it is the wonderful AI we have created within the company
3:33
The company itself is a bit over a year old, about one and a half years
3:37
We launched the product during KubeCon Europe a month ago. But I think the best way to really experience the product
3:44
to get to know the value, to get to know what happens and how it happens, will be within the demo
3:49
that our amazing CEO has agreed to do for us quite soon
3:55
Yeah, exactly. So everyone who's watching, stay tuned, as you're going to have a hands-on demo by Leon
4:02
He's going to do a demo on CAST AI. Before we move to that in just a couple of minutes
4:07
it does seem that you have a community background, you have been speaking at events, and now that you have moved to CAST AI
4:14
definitely the community part is very, very important too. So how would someone get connected with the community
4:20
If you have any channels or platforms, people can go and connect with other people
4:25
who work in the same space. Yeah, of course. We are really loving the community that we're building and that we already have up and running
4:32
and very much welcome everyone to join. We're super happy to help you get started, to help you get involved with CAST AI as well
4:41
We have a Discord and a Slack channel that everyone can join. If you need help joining or you need help using the product, always happy
4:49
I'm very happy to receive your questions. You can reach out to me on Twitter or obviously to the CAST AI official accounts as well
4:56
We have a really wonderful customer success team that is very happy and ready to help you out as well
5:02
So reach out either to me or the CAST AI official accounts if you have any problems
5:06
And I think the Slack community is also a really great place to start to get involved as well
5:13
Absolutely amazing. So, on what you just said: I did drop all the links in the comments on all platforms
5:20
So people can just go ahead. Don't click on this right now because then you're going to leave the live show
5:24
click on these links after we end the live show and it will always be there so go ahead and be a part of that
5:31
community and, yeah, connect with other members too. Having said that, Annie, I think it's time to go
5:37
ahead and invite Leon, who has been very kind waiting behind the scenes. So, uh, yeah, let me add
5:44
Hi Leon, and welcome to the live show. Hey guys, what's going on? How's everyone doing
5:50
We're doing great. Annie and I have been talking a bit about CAST AI, an overview
5:55
It looks pretty promising. And yeah, so it looks like the next 15, 20 minutes will be hands-on
6:01
on Cast AI. It should be absolutely amazing. And I'm going to leave you
6:05
both, and I'm going to go back and grab a tea here in India. Then I'll watch
6:10
your demo. Fantastic. So we're doing a live demo. Excellent. Good. Good. Why don't we get
6:17
started? Annie, can you see my screen okay? Is the screen shared? Perfect. Okay, so welcome everyone. I'm
6:29
gonna take you through the story kind of from the very genesis of a Kubernetes
6:32
cluster so we'll just very quickly walk through I've pre-created a cluster because
6:38
it takes about 25 minutes to create on AWS, and then we'll walk through
6:42
like deploying an application and like kind of sizing your cluster and kind of
6:47
the estimates that DevOps engineers make and then we'll see why the vast majority of clusters are actually way over provisioned and poorly configured
6:57
right? Because we're all using our past experience and best efforts estimates to decide on
7:05
what our initial scaling should look like and what our overall auto scaling should look like
7:09
And this is where we see a ton of room for optimization and improvement. So I'm going to get into
7:14
the details in just a second. So let's go back and I'll show you what we've done
7:19
I created an EKS cluster using AWS's eksctl tool. So if I bring up just my console here
7:30
hopefully you guys can see this, yeah. So this is kind of the completion of the eksctl run
7:38
I used the command to create a very vanilla default cluster with six nodes
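As a reference for anyone recreating this step, here is a minimal sketch of the kind of eksctl command that produces such a cluster; the cluster name and region are illustrative assumptions, while m5.large matches the six M5 large nodes that show up in the savings report later:

    # Creates a plain EKS cluster with six m5.large worker nodes (takes ~20-25 minutes)
    eksctl create cluster \
      --name csharp-corner-demo \
      --region us-east-1 \
      --nodes 6 \
      --node-type m5.large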
7:43
I want some availability and some capacity. And then now we're kind of ready to
7:49
to use this cluster to deploy an application. So just a couple of setup notes
7:58
One thing I've done is I've run this really cool tool called Ops View, and I've connected Ops View to the cluster
8:08
So what does Ops View look like? It's this image right here
8:13
So I think that's pretty visible for everyone. And if you haven't used the tool before
8:19
it's basically a kind of a graphical representation of all of the nodes
8:23
There are no master nodes in this visualization because the master nodes actually sit in the AWS control plane. They don't sit in your tenancy as an AWS customer. So all we see here are the data plane nodes, or the worker nodes, right? So we've got about six nodes here, no application deployed
8:42
So our next step will be, let's go ahead and deploy the application
8:49
So I'm going to use the command line tool to do that
8:53
I've got my little C sharp corner demo here. And let's do this
8:58
Let's apply. And I think it, oh no, let's see, the demo is Boutique
9:09
And so this is an e-shop demo that we're going to use
9:13
It kind of comes from Google Open Source. I'll show you the app in just a sec
9:17
Very, very simple e-commerce app. But let's go ahead and apply it
9:22
So I'm applying all of the objects that are about eight or nine microservices that are being deployed
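The app being applied here is Google's open-source Online Boutique sample (the microservices-demo project). A rough sketch of the step, assuming the upstream release manifest rather than the speaker's local copy of it:

    # Deploys the ~10 small microservices (cart, payment, recommendation, etc.)
    kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/microservices-demo/main/release/kubernetes-manifests.yaml

    # Watch the pods spread out across the nodes
    kubectl get pods -o wide --watch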
9:28
Some of them are well replicated. And if we actually look at the OpsView view right now
9:36
you'll see that all of those pods or deployments are being deployed
9:42
So in just a second, this whole app will get deployed. If you wanted to take a look at what the actual app is
9:48
there's a demo of it hosted online. And it's just a very simple app that basically runs a shopping cart, a payment microservice
10:00
a recommendation microservice, and so forth. Cool. So if we go back to the operational view, we should see the app kind of fully deployed here
10:09
But there's a problem. What's the problem? You may say, well, Leon, okay, I deployed my app
10:13
I've got six nodes. Like, what better can we do? Well, the problem is that Kubernetes by default
10:22
with the default scheduler, attempts to utilize all the resources at its disposal
10:29
So it just deploys all of the apps across all of the nodes
10:34
So if I had 10 nodes, I would be using 10, the app would be distributed on 10 nodes
10:39
Well, what happens there? Well, everything becomes severely underutilized. So in terms of requests for CPU and memory
10:47
we're really just scratching the surface, we're hardly using any of the cluster
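You can eyeball this underutilization on any cluster with standard tooling; a hedged sketch (the top command assumes metrics-server is installed):

    # Actual CPU/memory usage per node
    kubectl top nodes

    # Requested vs. allocatable resources per node
    kubectl describe nodes | grep -A 8 "Allocated resources"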
10:53
So this is where CAST AI comes in. So now we've got a vanilla EKS cluster, we've got our application deployed
10:59
Let's now take the next step, which is to really connect the cluster and get some insight
11:04
into what we could be doing better. Cool. So to do that, you register on CAST AI
11:11
You'll get a... my view has kind of lots of clusters that have already been used
11:16
So you're going to get a walkthrough view as you do this
11:20
And what we're going to do is we're going to add a cluster to our configuration, but instead of
11:25
creating a cluster, I'm just going to connect an existing one. So this is like an existing cluster you've had
11:30
CAST had nothing to do with its creation. So we're going to go ahead and connect cluster
11:35
Excellent. So now what we have is we have a very simple set of instructions
11:40
And by the way, you'll see there are different tabs for GKE, for kops
11:44
which is the Kubernetes Operations tool, and AKS is coming soon. The GKE and kops pieces were literally just released a couple of days
11:55
ago, maybe this morning. So I personally haven't tried GKE yet. I've heard the results are pretty cool compared to AWS as well
12:04
Cool. So let's go ahead and grab this curl command. And then we'll go back to my console
12:10
And we're just going to apply it. And what is this going to do
12:15
It's going to make an API call to cast, and it's going to deploy an agent
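The exact command comes from the connect screen in the console, so the sketch below only shows its general shape: a generated manifest is fetched and piped into kubectl, which deploys the read-only agent. The URL, token, and namespace here are placeholders and assumptions, not real endpoints:

    # CONNECT_URL is whatever the "Connect cluster" screen generates for your cluster
    CONNECT_URL="https://console.example.com/agent.yaml?token=YOUR_TOKEN"   # placeholder
    curl -fsSL "$CONNECT_URL" | kubectl apply -f -

    # Confirm the read-only agent is running (namespace name assumed)
    kubectl get pods -n castai-agent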
12:20
So if we go back to our ops view, we should see, oh, yeah, there's an agent
12:25
a little daemon that's deployed. And what's the response? And this is read-only
12:30
Notice how we didn't ask you for access keys, we didn't ask you for any permissions
12:34
This is a pure read only agent. And it's open source. So you can see exactly what we're doing in your environment
12:41
One of our principles is if we install software in your Kubernetes cluster
12:46
you should be able to see exactly what that software is doing
12:50
So right now, what the agent is doing, it's collecting all your pod information
12:55
All your node information. It's using AWS's instance metadata service to go and pull all of the details around the financials, the instance type, is it burstable
13:09
What's the architecture? Is it Arm? Is it Intel? And so forth. And if we go back to the console, we'll see now that our C# demo savings report is available to see. Excellent. So it's the first time
13:25
I'm running this today, so let's give it a whirl and make sure we've done the appropriate
13:30
sacrifices to the demo gods. Awesome. So what is this savings screen saying? It's saying that you should be paying
13:40
less, you should be paying 66.5% less than you're paying now for this cluster. So let's take a look
13:49
Right now at the bottom, there's a bottom table here that says, look, you're currently paying
13:55
for six m5.large nodes. The M5 stands for memory optimized, right
14:01
So it's eight gigs of RAM relative to two CPUs. You're paying about 10 cents an hour, which sounds pretty good
14:08
It's all reasonable, but your cluster is ending up costing you $414.72 a month, right
14:15
So that's not great, especially since CAST AI is saying, look, we can get you down to $138 a month
14:22
and your monthly savings should be around $275. If you add that up for the year
14:30
you're saving yourself $3,300. That's a pretty good MacBook with an M1 chip
14:37
if you can save that money. So I would save these dollars anywhere I could
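For reference, the quoted figures are consistent with each other, assuming m5.large on-demand pricing of roughly $0.096/hour (the "about 10 cents an hour" mentioned) and a 720-hour month:

    6 nodes x $0.096/hour x 720 hours ≈ $414.72/month (current cost)
    $414.72 - $138 ≈ $277/month saved (the roughly $275 quoted)
    $277 x 12 ≈ $3,300/year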
14:42
So now you may be saying, okay, Leon, great. So what are you suggesting here
14:47
Well, the first thing we're suggesting is these M5 instances aren't right for you
14:52
after the analysis of your workloads, and we've done all of the kind of precalculations
14:56
looked at all the permutations of the nodes that we can possibly use in AWS
15:01
We're saying we believe that you should be running on two c5a.large nodes
15:07
and then you should be using a spot instance. So very quickly, for those of you who don't know
15:11
what a spot instance is, only about 7% of AWS customers use spot instances
15:18
And here's why. A spot instance is a computer that you get from AWS
15:22
at a severe discount. You can get, depending on the popularity of the computer
15:28
it could be 50% off, 60% off, 70% off, and there are some obscure instance types
15:34
that are like 80% off. So why doesn't every customer use spot instances you may be asking
15:40
Well, the reason why nobody, or very few people use spot instances
15:44
is because it creates chaos. When a real customer, someone who hasn't bought that instance type as a spot
15:51
comes to you and comes to AWS and says, I need a computer urgently. I'm willing to pay on-demand prices
15:57
AWS gives you two minutes or will remove that instance from your environment within two minutes
16:03
and you have to terminate everything and wrap up all your work within
16:07
those two minutes and give the computer back or it'll be forcefully taken from you
16:11
With Google, it's even worse. You only have one minute to relinquish control of the computer
16:19
So one of the key kind of findings that we've had, and the key operational eurekas we have, is we can fully automate the interruption process
16:29
So we, and I'm going to show you this guys as part of the demo, we basically intercept the lifecycle of these spot instances
16:37
And we replace that instance with either another spot instance, maybe a different instance type, maybe an on-demand instance, depending on market pressures
16:47
So if there are no spot instances available at the moment, not to worry
16:51
We just go back to inventory. We pull the next most appropriate on-demand instance
16:56
And when the market opens up again, we'll just replace it again. So it a very hard problem to solve if you automating it yourself This platform makes it super simple Like you just turn on a policy which I show you in a second and all of the spot instances all of that management will get automated for you
17:13
So why are we recommending three nodes? That's the last point to state on this report
17:16
We're recommending three nodes because it's kind of the minimal node count you need for a reasonably highly available setup, right
17:23
So you don't want to have a single point of failure. You don't want to have one node
17:26
So we kind of recommend an odd number of nodes, you have three or five. And in this case, we're recommending three
17:31
because of the size of the workload. So enough yammering, let's get started and actually do the savings
17:38
So what I'm going to do is I'm going to click on the start savings button
17:42
And sure enough, the CAST AI platform does need to have some greater level of access to your cluster
17:52
So what we've done is we've pre-created a script that you can run in Shell that will create a role
17:59
and let me describe it while it's running so that we're not wasting time here
18:04
So what will this do? It's going to create a role. It's going to make sure that my current access
18:11
key has all of the valid access to touch the EKS cluster
18:15
It's going to then create very specific permissions for our, and I'm going to kind of flash this and then I'll delete it very quickly afterwards
18:25
but it'll create a very specific user with very specific policies that will only
18:29
be scoped to the VPC and the cluster that we're considering
18:34
So in other words, it's not a, it's not got access. We don't have access to your whole account
18:39
with this set of credentials. We just have access to manage this cluster
18:44
And very specifically, it's managing the node groups. And it's also able to create new EC2 instances
18:52
and add them to the cluster. Cool. So while that's creating, let's go to our EKS cluster
18:59
I just want to make sure that we look at the auto scaling group for a second
19:06
Cool. And then the auto scaling group that we have is this one. This auto scaling group has a desired capacity of six and the minimum capacity of six
19:17
So we're going to just change the minimum capacity to be zero so that
19:24
we don't let the auto scaling group kind of get in the way of this particular
19:29
script. Cool. So now we've got our access key and we've got our secret. I'm going to go ahead and
19:37
plug that into the cast console. So we've got our access key. We've got our secret. And then the only
19:47
thing that we have to do is give access. This script is going to go, this service is going to go ahead and
19:53
validate. But yeah, oh, we've got a bad request. So let me just make sure
19:59
that I've copied this correctly and I've copied this correctly. One thing I cannot control is the AWS control link
20:18
So let's see if this works and then we'll be able to kind of take you to the
20:25
Yeah, it's not very happy right now, unfortunately. No worries. There were a few questions from the audience as well, if we want to take them while we see. Oh, did it
20:39
Yeah. Well, do you want to take a couple questions or do you want to kind of continue the floor anyway? What do you think
20:46
Yeah, maybe we could have a few questions and then we can continue just because there was quite a lot already
20:51
Awesome. I think we covered this a bit already, but I think it's good to repeat: which clouds are supported by CAST AI
20:59
Cool. So AWS EKS is supported right now. Within a couple of weeks, you're going to be able to do the exact same process on Google, so on GKE. These are the managed Kubernetes services on those clouds. And then, also in a couple of weeks, if you've created your own clusters using kops, which is the Kubernetes Operations tool, you'll be able to use CAST AI for kops as well. And then within a couple of months, we're also
21:29
going to be supporting Azure's managed Kubernetes platform, which is called AKS, or Azure Kubernetes Service. Perfect
21:36
Then the other one is, will the Azure and CAST AI bills be separate or together, or I guess
21:43
any cloud bill and the CAST AI bill. Great question. So we don't buy the compute on your behalf, right
21:50
These are your accounts. So in the Azure context, it's called a resource group and your bill remains a, it remains
21:59
inside of Azure, and then CAST AI just charges you a very small management fee
22:03
So it's not nearly in the scope of the cloud bill. So you'll get a bill from CAST AI, and you'll get a bunch of billing reports from CAST AI
22:10
showing how your clusters are trending in terms of their spend, but the actual bill you pay
22:16
is for Azure or AWS or Google, any cloud that you work with today
22:20
Perfect. And the last one for this middle Q&A session, am I giving my data to a third party
22:26
So you are, so it depends. So are you giving user data to a third party
22:34
No. Your data stays in your data planes. If you've got a database with user data in it that needs to be protected from a PII perspective
22:44
or from a data residency perspective, nothing changes. Are you giving any data to cast
22:50
Yes, you are. You're giving some data to cast. What are you giving us? You're giving us access to your
22:56
your pod configurations, right? You're giving us access to your node configurations
23:01
but you're not giving us access to any of your secrets in Kubernetes. In fact, we make sure that we scrape all of those out
23:06
before we send any data. So there is a filtering process to make sure that no sensitive data makes it to cast
23:13
So we're really looking at your workload characteristics. We don't care about anything else
23:17
And if you have secret data that is actually exposed unknowingly, let's say in a config map
23:24
or something like that, our plan is to warn you about those things. So on top of the savings report you saw, we're going
23:30
to actually have a conformance and security report that will tell you, hey, you've got some weak
23:35
links here, you've got a config map with a database password that shouldn't be there, take care of it
23:40
and remove it. So yes, you are revealing some workload-related data to cast, but you're not
23:47
revealing any user data to cast, if that makes sense. Perfect. I think then we can continue with
23:54
the demo. Awesome. So let's see, start optimizing. So we've got our cluster connected now and we really want to see some of these
24:03
savings, right? So I'm going to walk through some of these steps a little faster now. There are four
24:08
basic policies that you have to set. The first one is just the limits policy. I don't know
24:13
Let's just update this to 40, so we don't have a cluster that grows too crazy based on
24:20
workload. We want to enable the node autoscaler. So when unschedulable pods occur in your environment
24:28
we are going to then create infrastructure to manage those unschedulable pods
24:33
We also want to enable preemptible instances, or spot instances in AWS's case
24:40
so that we can take advantage of the cheapest possible inventory. And then finally, it's no good to scale up a cluster
24:46
if you can't remove it. So this is kind of where some of the magic comes in
24:50
So what you want to do is enable the node deletion policy
24:56
Before we do that, we're going to install this little add-on called the Evictor, right
24:59
So what I'm going to do is I'm going to open our documentation
25:03
and I'm going to go into this little command. And I'm going to, I think the helm repo is already added
25:14
but I'm going to install a helm chart in this cluster that's going to install our evictor component
25:20
and we're going to watch it do its magic in just a second. So actually, maybe let's do this
25:26
Let's enable the node deletion policy here, real quick. And now let's go ahead and install the Evictor
25:34
This goes back to... I'm going to clear the screen so my access keys aren't on the screen, and I'm going to add the Helm repo. And then I'm going to install the Evictor
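The exact repo and chart names come from the documentation page open in the demo; the sketch below assumes the publicly documented CAST AI Helm repository and a chart called castai-evictor, so treat the names as assumptions:

    # Add the CAST AI Helm repo and install the evictor component (names assumed)
    helm repo add castai-helm https://castai.github.io/helm-charts
    helm repo update
    helm install castai-evictor castai-helm/castai-evictor \
      --namespace castai-agent --create-namespace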
25:45
So you see the Evictor does not exist, installing now. Awesome. So it's up and running
25:51
And in fact, remember if we go back to our Kubernetes ops view. Oh, cool
25:55
Now we see the Evictor is installed on one of the nodes
26:00
And actually, if you look into the Evictor logs, you'll see it's doing leader election
26:04
it's highly available, and after leader election, it's going to start working on the following problem
26:11
You see how we have this nice distribution of pods all over the place
26:14
Well, this is just too much, too much distribution. So what the evictor does is it works with our SaaS data
26:23
collection platform and it says, which one of these nodes are the worst from a cost effectiveness perspective
26:30
We're just going to pick one victim now. And that victim is going to get drained
26:36
And the evictor is going to do its magic in terms of moving the pods to another
26:40
schedulable node. And one of the things that we do is a pre-flight check to say, look, if we bring down
26:46
this node or if we drain this particular set of pods out of the node, are we going to
26:53
bring down a service? If the answer to that is yes, the evictor will reject the victim and we'll move on to find
27:00
another victim. So you guys see the red here that's shown up. Basically what it's done is it's
27:07
targeted that node and it said, look, we'll distribute the work on other nodes. This node is gone
27:13
And it does it by tainting the node. So no other scheduled work can occur on that node
27:21
while we're doing this kind of bin packing process. So really this is a bin packing problem
27:26
How do we get the best and most efficient use of a cluster by packing all of the workloads into the fewest possible nodes, without sacrificing uptime and without sacrificing availability
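What the evictor automates is essentially the workflow an operator would run by hand when bin packing: mark a poorly utilized node unschedulable, evict its pods so they reschedule onto the remaining nodes, then remove the empty node. A manual, hedged equivalent (node name is a placeholder); note that kubectl drain respects PodDisruptionBudgets, which is similar in spirit to the pre-flight check described here:

    NODE=ip-10-0-1-23.ec2.internal   # placeholder victim node
    kubectl cordon "$NODE"
    kubectl drain "$NODE" --ignore-daemonsets --delete-emptydir-data
    # Once the node is empty, it can be removed from the cluster and the auto scaling group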
27:39
So if we wait another couple seconds, what we'll see probably, let me make sure that I did
27:44
Now I'm like, yeah, I did turn on the node deletion policy. If we wait another couple seconds, what we'll see is the node deletion policy will kick in
27:52
And there is a TTL, because you don't want to take too drastic an action on this cluster
27:56
Within about 10 minutes, you're going to see that that node will disappear and that it'll probably choose at least two more victims
28:06
And our whole goal is to get some, let me open the screen kind of in split mode here
28:14
And our whole goal is to get the savings report as close to optimized as possible
28:24
So what we see here is the savings report has already said
28:28
cool, you've already enabled all four of your policies. You started the journey and we're looking for this green bar kind of to flip into the optimal zone
28:37
I don't know if we're going to get there because of the time we have, but I think we're certainly going to see some marked improvement as this node goes away
28:44
So what do we see here? Oh, now we have one, two, three, four, five nodes
28:51
These two little pods are daemon sets, they're going to go away. And then within a few seconds, the savings report progresses
28:57
And what we call this the slow and low migration strategy, because our whole goal here is to do the migration and the node reduction and node creation very slowly
29:09
We don't want to do anything that will jar your production or your pre-production environments
29:14
So our whole goal here is to go slow and low. And then within a few hours, you have a really nicely optimized cluster
29:21
So let's take a look at the savings report here, and see what we got, because I think we should have a little bit of savings coming up now
29:32
Cool. So you guys see that the cluster cost dropped from $414 to $345
29:38
and now you see that now there are five nodes, and eventually that will kind of trickle down to three
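The interim figure is consistent with the same per-node rate used earlier: 5 nodes x $0.096/hour x 720 hours ≈ $345.60/month, which matches the $345 shown, and moving to the recommended three-node mix is what takes it toward $138.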
29:44
And by the time we are finished with the process and go through the lifecycle
29:48
of adding nodes and deleting nodes, adding pods and deleting pods, your normal lifecycle, we're going to get to this $138 kind of optimized environment
29:59
So Annie, I don't think we have time for another cycle, but hopefully this gives folks a great
30:04
idea of how the product works under the covers. And we'll be able to take folks through this
30:10
in more detail; come onto our Slack or Discord, and we'll walk you guys through it
30:17
Perfect. I think that's a great next step. Before the final kind of
30:22
comments. There was another question on, are there any white papers available? And I can at least say that there will actually be a white paper published in a few days. So no worries, keep your eye out for that. But obviously, Leon, if you have anything to add, you can do so. Well, you are much more versed on that side than I am. So, yeah, I think we have a whole bunch of stuff in the pipeline, case studies and white papers. It's going to be really exciting when it all comes out
30:52
So stay tuned for those. But that's starting to be it, but I think I really want to emphasize to everyone that you can go to CAST AI and
31:02
get your free trial and the always-free analyzer to see and analyze your cluster cost and
31:09
after that optimize it. Is there anything more? Yes, I think the link will be up on all of our feeds so you can click through that and get started
31:20
on using and optimizing your clusters. Yeah, and just as we were talking, another node got signaled as a victim
31:30
It's getting cleaned up right now, and then we'll start to see that cluster shrink
31:35
to its fully optimized capacity. Perfect. Okay. Absolutely amazing. Well, a lot of it goes over my head, but it looks like people really enjoyed your session
31:47
People did ask. So, Leon and Annie, it looks like this is an amazing and promising product, right
31:53
I have already seen your website. Apart from all this community and all, do you have docs or something, or a place
32:00
Any other call to action resources if someone wants to go and learn more about it
32:05
Would you like to go and talk about that and then we'll wrap it up
32:09
Yeah, of course. I think the best place to find things... we do have extensive documentation on the site as well, where you can see
32:16
All the things. You can reach out to any of us, and we will help you get started, join the Slack group
32:22
And I think one of the best links is the EKS Optimizer link that you can use to jump straight into the EKS landing page
32:30
and find out more about that product there that we showcased today
32:35
All right, everyone. So I think that's the wrap. Thank you everyone for joining
32:39
I see people really love this session. Lauren says thank you. Asipa says, a great session
32:44
Thanks for joining in, Asipo, who just says interesting session. Static World says
32:49
interesting session. So it looks like people are looking at a lot of your demo and the product
32:54
Leon, thank you so much for your time. Once again, I really appreciate it. Any final thing you want to plug before we close the show
33:02
Yeah, I mean, guys, from my perspective, this is all about saving your time
33:08
Our mission here is to get you out of the plumbing and into the higher order thinking
33:13
there's no reason for us in 2021 to be doing these low-level operations manually
33:19
So I really encourage you to try the automation. And I'm really excited about kind of the next steps that we're going to be taking very soon
33:29
I believe they will be industry changing even more so than what we've shown you today
33:34
So really looking forward to showing that. And Simon, maybe when we have some of those interesting demos that come up
33:39
we'll revisit and have a chat with the community again. Yeah, we have an Azure Summit coming up this September
33:46
That's going to be... that's with 120 speakers. If time permits, we would love to host you back there too
33:51
It's an amazing event that we are doing. So thank you once again, Leon and Annie. And thank you very much for joining
33:56
That was it, this episode of Product Showcase. My name is Stephen Simon, this time co-hosting with Annie and an amazing guest, Leon
34:02
We'll see you in the next episode of Product Showcase. Until then, take good care of yourself and get vaccinated
#Business Operations
#Distributed & Cloud Computing
#Web Services