Azure SQL Database – The Database of Choice by Sanjay Mishra || Lightup Conference
Nov 16, 2023
This session covers the foundation of the Azure SQL Database service (the most mature PaaS offering on Azure) and showcases, through multiple demos, why it is a database of choice for a variety of applications. It covers various modern application use cases and customer examples.
Conference Website: https://www.2020twenty.net/lightup
#lightup #2020twenty
0:00
Thank you, and welcome to the
0:02
session we have today. Thank you for supporting such a great and noble cause;
0:09
we are here for the Lightup virtual conference in support of UNICEF.
0:14
The topic we're going to talk about today is Azure SQL Database,
0:19
the database of choice. I'm Sanjay Mishra. I run the product management and
0:24
program management team at Microsoft for Azure SQL Database, and I'm
0:28
very excited to be here with you today, talking about what this product is all about and
0:32
why it would be your database of choice for any kind of application you are building today
0:38
or will build in the future. Moving on: please donate to UNICEF.
0:45
This is a noble cause, and a call-out to our sponsors who have been helping with this event.
0:53
To start with, let's talk about soccer, or as some people call it, football,
0:58
whichever you prefer. Seeing kids playing soccer on a green field is awesome.
1:06
It's so refreshing to see. We don't see it that often these days because of the ongoing pandemic.
1:11
But let's talk a little bit about serious professional soccer, when the stakes are high.
1:19
I'm going to show you a video of the 1999 FIFA Women's World Cup final.
1:27
The stakes are very high, and sorry, the video is a bit blurred;
1:31
it is a 20-plus-year-old video, so not the highest quality,
1:36
but I want you to see a key moment here. This is the final between the USA and China.
1:41
The final was decided on penalty kicks, because during regular play
1:45
they could not achieve a result. What you will see is that a player is about
1:51
to take a penalty kick, and there is a goalkeeper. As you are probably familiar with the game,
1:56
you'll know that the goalkeeper will make a call whether to dive left or dive right.
2:01
The ball could go to her left or right; she could save it, or it could go in.
2:10
I'll play just a few seconds of video, and you will see what happens.
2:22
That's it; that's all the video is about. The goalkeeper dove to the right, the ball
2:27
went to the left, and by the time the player who kicked the ball is running back,
2:33
celebrating with her teammates, there are also a bunch of other people celebrating, and those are
2:39
not just the supporters, but people who have placed bets on it. People bet on sports:
2:43
active, live, real-time bets. They will be betting on real game situations.
2:48
There are all kinds of complex bets: who the ball is going to be passed to, and so on.
2:52
A penalty kick is very simple: one person kicking, one person trying to defend the goal.
2:57
Simple, but there are still many options. The ball could go to the left of the goalkeeper,
3:01
to the right of the goalkeeper; it could be saved, it could go high, it could go along the ground;
3:05
and people will be betting online in real time. And in real time, their bets will be paid out:
3:12
by the time the person who kicked the ball is running back celebrating,
3:16
the people who had a successful bet are also celebrating, because they have got their money back.
3:21
In a matter of a few seconds, all these things happen, and in an event like this
3:25
there are not 100, not 1,000, but millions of people betting on such a popular game.
3:29
Millions of people all over the world placing bets on it.
3:33
So you need a fantastic database to be able to support this kind of application.
3:38
Or let's talk about an assisted care facility, a bit less entertainment,
3:45
a bit more life-centric. In an assisted care facility there are people who are in a physical state where
3:51
they cannot take care of themselves. Nurses take care of them, and the nurses have to administer
3:58
special medication to these patients. What this means is that once a medication
4:04
has been administered, a note has to be taken of exactly when it was administered,
4:09
because there are specific time intervals between doses. These medications are so powerful
4:14
that an overdose or underdose is life-threatening. That means if you have
4:21
given the medication to the person but, by mistake, a note was not taken, or the data is lost,
4:27
and within a certain period of time
4:33
the next nurse comes in and says, oh, this medication has not been given, let me give it to the person again,
4:40
that could be life-threatening. So in this situation, you really want a database that has
4:44
very, very strong transactional consistency. You cannot lose any data. You cannot make any error
4:51
in terms of data integrity here. So what kind of database characteristics do you need to serve
4:56
such an application? You need something that is super fast; as you saw, within a few seconds
5:00
all the money has to be paid out to the people who placed bets. You need something that is highly
5:06
transactionally consistent and highly concurrent, not so much for the assisted care facility,
5:10
but for the game scenario it's a lot of people doing
5:14
database operations at the same time. Highly reliable. You need something that is secure, because
5:20
on the healthcare side there is PII information, and on the gaming
5:23
side there is money changing hands. Data could be large or small. You need great price-performance
5:30
from a database, and you want something developer-friendly so people can build applications easily on top of it,
5:37
easy to deploy, easy to manage, and more. If these are the things you're looking for in a database, then the
5:43
choice is Azure SQL Database. It's the best database for building modern cloud applications.
5:50
Many of you are already running databases, some in the cloud,
5:56
some not in the cloud, some on-premises, and you'll have
6:00
experienced that managing database systems on-premises involves certain tasks: you go buy the hardware,
6:06
install and provision the hardware, keep taking backups of your system, keep doing health checks,
6:12
design high availability and disaster recovery, make sure it is compliant and secured, and every few weeks or months
6:19
do upgrades, patches, and all that. What we have done for you in Azure SQL is
6:26
take care of all of this through built-in capabilities. We take care of provisioning.
6:32
We provide built-in high availability, giving you a 99.995% SLA. We provide an easy way to do geo-redundancy.
6:41
We take care of updates and everything else. Basically, we take care of all the database
6:46
chores so that you can focus on your application, whether it is sports betting or gaming or a
6:53
bank or an assisted care facility or a kids' school education system.
6:58
Whatever your application, whatever your business, you focus on that. Leave the database chores to us.
7:04
And so this is what we call platform as a service, or database as a service,
7:09
depending on what you like to call it. Azure SQL Database, the subject of today's discussion,
7:18
is the most mature PaaS service on Azure. It has been around for 10 years,
7:24
and it has grown in leaps and bounds. It has also become the most mature database service across any cloud; go to any cloud and you won't find a more mature cloud database service than Azure SQL Database.
7:37
It's also a leader in price-performance. We have published price-performance benchmarks comparing it with other cloud providers,
7:45
and we are clearly the leader for this database service on all types of database workloads.
8:17
We have Hyperscale and many more. No wonder this database is very popular,
8:22
and we have more than 6 million databases running on this service today.
8:29
A colleague of mine presented this through a video and a blog,
8:34
and he calls it out here: the 10 reasons why you
8:38
should use Azure SQL in your next project. Here's a link to that article;
8:44
you should check it out. I'll call out a few here, and my goal is to demo a few things so that you
8:50
can feel excited about this database. It is not just a
8:54
relational database; it can store and serve data in many different models. You can do JSON here, you can do graph here,
9:05
you can do spatial here. You can do columnstore, rowstore, in-memory, on-disk; any kind of storage or any
9:12
kind of representation you want for your data, you can do
9:16
on Azure SQL Database. So we will see some demos, because
9:20
a demo speaks a thousand words, and they will show you how powerful this database is
9:26
and what you can do with it. To start with, let's look at
9:34
how to create a database. Let's go to the portal; it's fairly easy.
9:39
If you have not created a database before, I will show you a quick way;
9:44
if you have done this before, this may be a small repetition for you,
9:49
but just bear with me. This is the Azure portal. You can
9:55
either browse to Azure SQL Database or just type "Azure SQL Database",
9:58
and this will pop up, and then you can pick it here. You can pick a resource group,
10:05
put in a database name, and pick a server; that server determines which location
10:10
it will be in, and then you click create. That's all it takes: three or
10:15
four clicks and you're done with the database. You can also configure the database along the way.
10:20
You can pick a General Purpose database, a Hyperscale database,
10:23
or a Business Critical database, and no matter which type you pick,
10:28
you can pick the number of vCores you want and the maximum data size you expect to keep in the database,
10:34
then apply, come back, click create, and you are done.
10:39
It's so easy to create a database. That's the benefit of a PaaS service:
10:45
we don't have to provision VMs for you, we don't have to provision disks for you;
10:49
we don't provision any of this per request. It is already done by the service.
10:53
Depending on the size and shape you choose for your database,
10:57
we're going to put that database into one of those already-provisioned clusters.
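As a minimal sketch, the same creation can also be scripted in T-SQL against the logical server's master database; the database name, edition, and service objective below are illustrative assumptions, not values from the demo:

```sql
-- Run while connected to the master database of the logical server.
-- Name, edition, service objective, and max size here are illustrative.
CREATE DATABASE LightupDemoDb
(
    EDITION = 'GeneralPurpose',
    SERVICE_OBJECTIVE = 'GP_Gen5_2',
    MAXSIZE = 32 GB
);
```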
11:04
You can create a database using the Azure CLI if that's your way of doing it.
11:07
If PowerShell is your favorite tool, you can use PowerShell to create a database as well. Moving on to a demo where I'm going to
11:18
show you an IoT workload that can scale to tons of rows and
11:26
records. Let's see; I will log into a VM
11:33
where I have this workload set up to run, and here it is.
11:37
What I have here is an application that simulates, going back to this picture,
11:49
what you see here: the red dots are smart-grid power meters.
11:53
These are the places where the power meters have been installed, and this electric power grid is collecting meter data from all of them and sending it to a central database.
12:02
There are hundreds of these meters all over the place. I'm showing it for the region where I live,
12:08
and this is showing it on a map. Going to
12:13
the actual demo: I have a VM on which I
12:18
have simulated those hundreds of power meters; they will be sending records to the database, and I
12:23
have a database behind the scenes which will be capturing all this data. Let's see how many
12:30
rows I can scale this up to. My application is the client, and when I run it
12:39
you'll see a UI pop up, and it will start inserting data.
12:44
And this data is real; this is actual data going into the database.
12:49
You can see over 2 million, then over 3 million rows being inserted per second into this database,
12:56
and this database can keep up. We can run some queries and see what kind of data is
13:04
coming in. This is a DMV that can show you the resource stats on this database, and you can see that I'm pushing it hard, because my average CPU percentage is pretty high.
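The DMV being queried here appears to be sys.dm_db_resource_stats; a minimal sketch of that kind of check:

```sql
-- Recent resource utilization for this database, one row per ~15-second window.
SELECT TOP (10)
       end_time,
       avg_cpu_percent,
       avg_data_io_percent,
       avg_log_write_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;
```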
13:15
What this is doing is collecting all this data into in-memory OLTP tables, which are super fast for high-concurrency inserts.
13:28
But memory is limited, so we need to put that data somewhere,
13:34
and behind the scenes it is also running a stored procedure to take data out of
13:39
that in-memory table and put it into a columnstore-compressed table, so
13:48
we can collect lots of data over there. What it also does
13:53
is keep clearing the in-memory table. It looks something like this:
13:57
it deletes the top 25,000 rows from the in-memory table and
14:03
inserts them into a columnstore table, with a variable to keep track.
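A hypothetical sketch of that drain pattern; the table and column names are assumptions, not the demo's actual schema:

```sql
-- Move batches from a memory-optimized staging table into a
-- clustered-columnstore history table (the "shock absorber" pattern).
CREATE PROCEDURE dbo.FlushMeterReadings
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRANSACTION;

    -- Pick a timestamp cutoff covering roughly the oldest 25,000 staged rows.
    DECLARE @cutoff DATETIME2 =
        (SELECT MAX(ReadingTime)
         FROM (SELECT TOP (25000) ReadingTime
               FROM dbo.MeterReadingsStaging WITH (SNAPSHOT)
               ORDER BY ReadingTime) AS oldest);

    -- Append that batch to the columnstore history table.
    INSERT INTO dbo.MeterReadingsHistory (MeterId, ReadingTime, ReadingValue)
    SELECT MeterId, ReadingTime, ReadingValue
    FROM dbo.MeterReadingsStaging WITH (SNAPSHOT)
    WHERE ReadingTime <= @cutoff;

    -- Then clear those rows from the memory-optimized staging table.
    DELETE FROM dbo.MeterReadingsStaging WITH (SNAPSHOT)
    WHERE ReadingTime <= @cutoff;

    COMMIT TRANSACTION;
END;
```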
14:07
You can see over 4 million now. By the way, while this is going on,
14:12
you may want to see how much data has actually
14:18
come in, and I would like to play
14:21
with some of that data. But guess what? My CPU is pretty high;
14:26
I don't want to run my queries on this same database,
14:31
because I may slow down my rate of inserts. Don't worry, we have this covered,
14:38
because this is the Business Critical tier. I showed you the three tiers on the portal,
14:46
and the Business Critical tier has multiple high-availability replicas behind the scenes.
14:52
What we also do is give you one of those replicas and say: you can run your read workloads on top of it, and you can do that continuously while your write workload is going on.
15:07
So in this one I'm connected to the read replica, and the way I do that is
15:15
by putting ApplicationIntent=ReadOnly in my connection string. When I do that, my connection lands
15:24
on the read replica, and to prove that I'm on the replica you can
15:29
run this query; sorry, it's a read-only workload. What I can do
15:34
is, see, there is not much action happening here yet, so what I'm
15:39
going to do is run a similar workload on my read-only replica, and then you'll
15:46
see how many rows I'm reading here. I'm reading hundreds of thousands
15:52
of rows at the same time while my write workload is going on at 4 million inserts per second, and the readers are
15:59
not affecting anybody else, because they are running on a separate replica, and the writes are
16:04
going on fantastically on this one. So this is the power of Azure SQL Database.
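A sketch of that routing: the intent travels in the client connection string, and a property check confirms where the connection landed (the server and database names are placeholders):

```sql
-- Client connection string (not T-SQL), routed to a readable secondary:
--   Server=tcp:myserver.database.windows.net,1433;Database=MyDb;
--   ApplicationIntent=ReadOnly;Encrypt=True;...
-- Once connected, this confirms whether you are on the read-only replica:
SELECT DATABASEPROPERTYEX(DB_NAME(), 'Updateability') AS updateability;
-- Returns READ_ONLY on the replica, READ_WRITE on the primary.
```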
16:12
Just sit back for a moment and grasp that number: over 4 million inserts per second on a single
16:20
database. And at the same time, you can query that database from your
16:27
read replica as well. So that's the kind of high-throughput IoT
16:32
workload that you can run on this one. Not all
16:35
workloads are IoT workloads, so you may ask: what if my
16:39
database has lots of JSON data and I want to deal with JSON in my workload? Or what if my database needs to deal with
16:46
graph data? Say I get some social networking data,
16:54
I want to build a graph from it and try to find out who likes what,
16:58
who goes to which restaurant, and who is friends with whom. How about that?
17:03
Or you could say, you know, I want to deal with spatial data:
17:09
geography information, distances between places, and so on. You can do all of this.
17:15
So to show all the power here, what I have done is put together a demo.
17:44
Yelp is a site where people can go and post reviews;
17:49
they can put comments about a business they have used, and what Yelp does is
17:57
provide the dataset. It's open to anybody for academic, research,
18:01
or education purposes, and you can download it:
18:06
8 million reviews and 209,000 businesses across 10 metropolitan areas.
18:12
To download this dataset, you just have to sign an agreement;
18:17
then you can download it and play with it. You can build graphs from it,
18:22
use the geographic information within it, all that stuff.
18:25
Yelp provides all this data in JSON format. So when you download the data, it looks like this:
18:32
you can see they are JSON files. What we have done here is put together
18:40
a GitHub project for you where we have put the data and
18:46
lots of scenarios that you can play with, and
18:49
I'm going to show some of those scenarios. Once you get the JSON data, your objective is
18:56
to load the data into the database and
19:00
then play with it. You can download that GitHub project and do your own experimentation with it.
19:07
I'll show you some typical exercises here today. First things first: this is the same thing
19:18
that you saw in GitHub. I've downloaded all those SQL scripts here. I'll skip a few, because I have already created
19:23
the databases and so on. Here it is: how to import the data.
19:27
Azure SQL, as well as SQL Server,
19:31
provides a way for you to open JSON files, read them, and load them
19:36
into a SQL table. As simple as that. By doing that,
19:40
your data gets converted from JSON to a SQL table, and then you can run your familiar T-SQL queries on top of it,
19:46
and you'll be good on that one. That's the simple part.
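As a hedged sketch, that load path can look like the following on SQL Server (on Azure SQL Database, OPENROWSET(BULK ...) reads from Azure Blob Storage via an external data source rather than a local path); the file path and columns are assumptions, and the file is assumed to hold a JSON array:

```sql
-- Read the raw JSON document, shred it with OPENJSON, and land it in a table.
SELECT b.business_id, b.name, b.city, b.latitude, b.longitude
INTO dbo.Businesses
FROM OPENROWSET(BULK 'C:\yelp\business.json', SINGLE_CLOB) AS raw
CROSS APPLY OPENJSON(raw.BulkColumn)
WITH (
    business_id NVARCHAR(40),
    name        NVARCHAR(200),
    city        NVARCHAR(100),
    latitude    FLOAT,
    longitude   FLOAT
) AS b;
```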
19:50
Then what we're going to do is some graph exercises.
19:54
Our goal is to convert that data into
19:59
graph tables, and through these graph tables we'll be able to show you
20:03
how to do that. Then we're going to play with some of the graph algorithms.
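As a minimal sketch (not the project's actual scripts), that conversion can use SQL graph tables; the names are assumptions modeled on the Yelp data:

```sql
-- Node table for users and an edge table for friendships.
CREATE TABLE dbo.YelpUser
(
    user_id NVARCHAR(40) PRIMARY KEY,
    name    NVARCHAR(100)
) AS NODE;

CREATE TABLE dbo.FriendOf AS EDGE;

-- MATCH then walks the graph, e.g. direct friends of one user:
SELECT u2.name
FROM dbo.YelpUser AS u1, dbo.FriendOf AS f, dbo.YelpUser AS u2
WHERE MATCH(u1-(f)->u2)
  AND u1.user_id = N'some-user-guid';
```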
20:16
One of the popular graph problems is finding the nearest neighbor between two
20:25
people, or finding the shortest path between users. As you saw, Yelp has so many businesses,
20:34
so many people's reviews. What if you want to find out:
20:39
are two people connected in this dataset? So we have picked two people.
20:45
All this data is obfuscated, so there is no personally identifiable information in the database,
20:50
and you can play with it. People and businesses are replaced by GUIDs.
20:55
So we're going to find out: one person is this, another person is this,
20:58
and how are they connected? What is the shortest path of connection between them?
21:05
This is a query that you can run, and you'll see
21:09
that it shows you how many different ways these two people are connected.
21:15
This is a pretty intensive query, because it is doing a UNION ALL of multiple queries.
21:21
It goes against the same table and does a self-join of that table to find the origin user
21:26
as well as the destination user. It takes a bit of time, because it is intensive,
21:31
but this is also a very inefficient way to find the connection between two people, because it
21:39
first asks: is this person connected by one hop? If not, by two hops? By three hops? And so on.
21:46
You can see the query is still running, and it eventually produces
21:50
some results for us. What we introduced in SQL Server 2019,
21:55
and before that in Azure SQL Database, is a function called SHORTEST_PATH.
22:01
People who deal with graph data may be familiar with the shortest path algorithm,
22:06
which is very common in graph problems. What we have done is build
22:11
an efficient implementation of that shortest path algorithm within the SQL engine. Thereby, you'll be able to find
22:19
the shortest path between these same two people using this SHORTEST_PATH function, which you can see here. And this query is not only elegant; it is shorter, easier to maintain, and super efficient as well.
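A hedged sketch of the SHORTEST_PATH pattern, reusing the hypothetical graph tables from the earlier sketch; the GUIDs are placeholders:

```sql
-- Shortest friendship path between two users; '+' means any number of hops.
SELECT q.path, q.hops
FROM (
    SELECT
        STRING_AGG(u2.user_id, '->') WITHIN GROUP (GRAPH PATH) AS path,
        COUNT(u2.user_id)            WITHIN GROUP (GRAPH PATH) AS hops,
        LAST_VALUE(u2.user_id)       WITHIN GROUP (GRAPH PATH) AS destination
    FROM dbo.YelpUser AS u1,
         dbo.FriendOf FOR PATH AS f,
         dbo.YelpUser FOR PATH AS u2
    WHERE MATCH(SHORTEST_PATH(u1(-(f)->u2)+))
      AND u1.user_id = N'origin-guid'
) AS q
WHERE q.destination = N'destination-guid';
```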
22:36
It should come back within a few seconds to tell us that these two people are connected
22:41
by two hops. So this is the origin person, this is the destination person,
22:48
and there is one person between them, so that gives you the shortest path between these two people.
22:53
The same query would work if these people were connected not by two hops
22:57
but by a longer path. So that was one example of using this data
23:03
and the power of SQL Database: you don't have to leave
23:08
your database to do graph operations. How about if I want to find out
23:13
something even more interesting? We talked about sports a bit before. How about if I want to go see a sporting event?
23:20
A few years back there was a Super Bowl in Phoenix, at the Phoenix stadium,
23:28
also known as State Farm Stadium. What I'm trying to find out is this:
23:32
I want to go watch that Super Bowl, and either before or after the game
23:38
I want to find some food so that I don't go hungry. I have some specific food interests,
23:44
so I'm going to find the food places close to this stadium.
23:50
I can define a geography within the database; the coordinates define a geography point, and I can search within
23:58
five miles of it, converting that into
24:06
meters so that I can calculate distances, because the data I have is in
24:10
metric units. I am going to find the top 10
24:15
food places next to the stadium, because I know the stadium's coordinates.
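A sketch of that kind of spatial query, with an approximate stadium location and assumed table and column names:

```sql
-- Approximate stadium coordinates (assumption) as a geography point.
DECLARE @stadium GEOGRAPHY = geography::Point(33.5276, -112.2626, 4326);
DECLARE @radiusMeters FLOAT = 5 * 1609.34;  -- 5 miles converted to meters

SELECT TOP (10)
       b.name,
       b.city,
       @stadium.STDistance(geography::Point(b.latitude, b.longitude, 4326))
           AS distance_meters
FROM dbo.Businesses AS b
WHERE b.categories LIKE N'%Food%'   -- hypothetical category column
  AND @stadium.STDistance(geography::Point(b.latitude, b.longitude, 4326))
      <= @radiusMeters
ORDER BY distance_meters;
```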
24:20
And this same data, which is in relational storage, is going
24:24
to give me: here are the top 10 food places. Awesome.
24:28
There are some nice bakeries, taco places, and so on. So I can
24:35
satisfy my culinary desires there. But how far are they from the stadium? There is a graphical
24:45
way to see this as well; this also outputs a map view. You can see
24:50
there are a few points here: some here, one here, and one here.
25:01
So there are a few. And I said, OK, good, I will not go hungry; I can find a good place
25:07
to eat. But I do not live in Phoenix;
25:12
I live close to Seattle. So if I go there, I have to stay overnight in a hotel,
25:16
and I would like to stay in a hotel which is close to the stadium.
25:22
How about finding a hotel near the stadium?
25:28
Let's find all the hotels that are close to the stadium. This is doing the same thing:
25:33
same 5-mile distance, same geography for the stadium, but finding
25:40
hotels instead of food. Same query as before, just a different filter, and I find
25:48
there are a lot of good ones. I have a Marriott membership,
25:59
so I would like to stay in a Marriott property, and I love the Residence Inn. Looks like there's a Residence Inn
26:04
very close to it. However, I do not know how good this
26:08
Residence Inn is, because different locations have different management; some locations are good,
26:12
some may not be as good. How do I know if it is good or not?
26:17
The criterion I'm using today is this: if anybody in my friend
26:22
list, anybody I'm connected to in my graph, has ever
26:28
stayed here and left a review, that's good enough for me to think that, yeah, the hotel is
26:36
good enough, because people in my network stayed there. So think about
26:39
this: what I'm doing here is joining graph data and bringing in geography,
26:45
combining them in one query to find out whether anybody in my network
26:52
has stayed and left a review here. So I'm using a very complex structure.
26:57
You can see that we are printing that I am this user, my user ID
27:01
identified by this GUID. It happens to start with "SQL"; I don't know how that happened,
27:06
but it happens sometimes. Has anybody stayed at this business? We'll find people who are on my shortest path and
27:16
check whether they have left a review, because we are also looking
27:20
at the review table. So let's run this query and see what happens.
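The demo's actual query isn't shown verbatim, but a hedged sketch combining the graph reachability with the review lookup might look like this (names again are placeholders from the earlier sketches):

```sql
-- One review from anyone reachable from me is enough, hence TOP (1).
WITH reachable AS
(
    SELECT LAST_VALUE(u2.user_id) WITHIN GROUP (GRAPH PATH) AS friend_id
    FROM dbo.YelpUser AS me,
         dbo.FriendOf FOR PATH AS f,
         dbo.YelpUser FOR PATH AS u2
    WHERE MATCH(SHORTEST_PATH(me(-(f)->u2)+))
      AND me.user_id = N'my-user-guid'
)
SELECT TOP (1) r.user_id, r.stars
FROM dbo.Reviews AS r
JOIN reachable AS q ON q.friend_id = r.user_id
WHERE r.business_id = N'residence-inn-guid';
```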
27:25
It's a pretty complex query, as you can see, and it could take
27:30
a few minutes to run to show me a result that,
27:35
yes, somebody has stayed there and put a review in the Yelp database.
27:40
After it ran for a few seconds, I realized that the complexity of the query
27:45
is pretty heavy. Let me see if I have enough resources for this query,
27:50
because this database may not have enough horsepower. So this is a query to find out
27:57
what service objective I have here, and I can see that it's only two cores.
28:03
The "2" here indicates that it's two cores. Everything we were doing, all this graph
28:08
work, was on just a two-core database. So you can imagine how powerful this database service is.
28:14
How would I scale to 16 or 32 cores? This is a simple way to change it:
28:18
I'm in the mood to go to 32 cores today. The command comes back quickly.
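The check and the change shown here map to standard T-SQL; a sketch with an illustrative database name and service objective:

```sql
-- What am I running on right now?
SELECT DATABASEPROPERTYEX(DB_NAME(), 'ServiceObjective') AS current_objective;

-- Request the move from 2 to 32 vCores (name and objective are illustrative).
ALTER DATABASE [YelpDemoDb] MODIFY (SERVICE_OBJECTIVE = 'BC_Gen5_32');
```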
28:24
The actual resize from 2 cores to 32 cores is not instantaneous, but it doesn't take as long as it would,
28:35
say, in an on-prem environment, where you would have to move the database to another server which has that many
28:44
cores available. It's pretty fast, although it could still take a few minutes;
28:51
it usually takes 40-45 seconds to complete. We can keep running this query to figure out whether it is done.
28:58
Looks like it is almost there; it will take maybe a few more seconds. In the meantime, we can take a quick look at the query.
29:07
As you can see, it uses TOP 1: if a single
29:13
person has left a review, that's good enough for me. More people might have left reviews;
29:18
I don't care. I just care about at least one review.
29:22
Then I'll be good. And I'm looking for anybody on my shortest path,
29:31
anybody who is connected to me through a shortest path and has left
29:37
a review for this business, the same business that
29:42
we saw in the previous query: the Residence Inn close to the stadium.
29:48
And I'm providing a query hint here to make the query faster.
29:52
Let's see if it is done. It is completing now. Now it is done.
29:58
You see, it is so fast; it took maybe 40 seconds this time to go from two
30:03
cores to 32 cores. Now my database is more powerful, and this query should
30:09
come back much faster. With two cores, my previous runs usually came back in 2 or 3 minutes;
30:19
with 16 or 32 cores, it usually comes back in 30 seconds or less.
30:24
So this is the holistic power of Azure SQL Database. You can see that we could do JSON on the same database,
30:32
graph on the same database, and spatial queries on the same database.
30:36
We could combine all of this, which means you can write very intelligent applications.
30:42
You can write very modern applications. For example, with the geospatial output that you saw,
30:48
you could run this query, and if the mobile app has a map on it,
30:53
you can show the businesses there. So you could build those kinds of modern applications.
30:59
So it came back; this time it looks like it took 41 seconds.
31:03
Now I know somebody has actually left a review there. And these are my friends:
31:08
this QAH person left a review, and this QAH is connected to me
31:12
through this path. And since they are connected to me and left a review,
31:18
I'm satisfied. I'm booking my tickets to go watch that Super Bowl
31:23
at the Phoenix stadium. So that's the story here.
31:28
I ask you to take a look at the GitHub project and try more;
31:33
you can do more. You can do a word cloud with this, you can do text classification,
31:36
you can play with this data in very, very interesting
31:40
experimental ways, so I encourage you to do that. Now that we have seen many different ways
31:47
you can use data, you want all this data to be secure, don't you?
31:51
Because once you put your business data in, you don't want it to be
32:00
unprotected. So Azure SQL provides you ways to do
32:06
authentication and access management, data protection through encryption, network security through virtual networks and private links,
32:14
monitoring, logging, and auditing facilities, and ways to manage your security infrastructure. We have tons of features
32:21
that will help you with this. Today we don't have time to go
32:28
through all of it; covering this page alone would take an hour,
32:33
but this is to show you the intelligence and security that go on behind the scenes
32:39
to make this database ready for any type of application, any type of business
32:45
whether you are a hospital or a bank or a credit card company or a small
32:51
shop; irrespective of what you are, you have this database, and it has all the
32:56
knobs, bells, and whistles ready for you. The default choices are easy to make;
33:01
the defaults give you the most secure configuration, and then you can
33:06
adjust based on your application needs. And while all this is secured and
33:13
this database is very powerful, you would also want your database to
33:18
be completely resilient to any kind of failure: it could be hardware failures,
33:24
it could be disasters, it could be all kinds of things. So what we have done is build in redundancy for this database. We have zonal redundancy: within a given region we have multiple availability zones, and we keep copies of the database in multiple zones to give you
33:42
protection, and you can choose one of those configurations to get higher availability.
33:48
We also provide geo-redundancy. You can place geo-copies of your database
33:53
in multiple locations to protect you against any kind of regional disaster.
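As a sketch, active geo-replication can be configured in T-SQL from the primary logical server's master database (the database and partner server names are illustrative):

```sql
-- Creates a readable secondary copy of the database on a partner server.
ALTER DATABASE [LightupDemoDb]
    ADD SECONDARY ON SERVER [myserver-secondary]
    WITH (ALLOW_CONNECTIONS = ALL);
```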
33:59
And to top it all, we are so confident in our high availability and disaster recovery
34:09
design that we provide a financially backed SLA of up to 99.995% uptime.
34:17
All these details are in this document here, which also details the high availability
34:24
and disaster recovery architecture and practices. It can take an hour to go through; I'm just leaving you with some nuggets here
34:30
so that you can do research on your own and be confident that your database is
34:37
fully protected in all situations and scenarios. Not just that; with this database
34:48
you also get continuous innovation. Modern database applications need serverless models;
34:53
modern database applications need limitless database capacity. So recently we introduced
34:58
something called serverless and something called Hyperscale. I'll quickly walk through those.
35:03
We talked about the IoT scenario with the electric power meters. Just change those power meters to,
35:09
say, weather-sensing devices, and you can think of the same type of application sending
35:15
data to the central database. But during a storm,
35:18
a lot more people use the weather app than on normal days, so suddenly your database workload could spike;
35:25
but you don't want wasted resources when there is no storm and
35:28
the app is not much used. How do you handle that? Through something called
35:33
autoscale, and auto-pause and resume, for your database, with
35:39
billing commensurate with usage. Thereby you can save tons of money
35:45
without having to worry about managing when to scale up
35:49
or scale down. You saw how I scaled up the database; I could scale it down the same way,
35:54
but that is a manual intervention. In serverless mode, without any manual intervention, the database
35:59
can scale itself up and down within boundaries you define, and it can pause if it is not used for
36:05
a predefined amount of time. Here I'm showing you a screenshot of the Azure portal.
36:11
You can see the database being used here. The left side shows the usage of the database;
36:16
you can see that my database usage is spotty: it runs for some time, then doesn't run
36:20
for a while, over the last 24 hours. What I have on the right side is
36:25
the billing for the database. If I click here, I can see the database is used and
36:30
the database is billed. If I click on a time when the database is not being used,
36:35
you see there is zero bill for the database. This is how serverless works behind the scenes for you.
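As a hedged sketch, a serverless database can be created by naming a serverless service objective, assuming the 'GP_S_*' objectives are accepted in T-SQL as with the other tiers; the auto-pause delay and vCore bounds are configured through the portal, CLI, or ARM rather than here:

```sql
-- 'GP_S_*' objectives denote the serverless compute tier; name is illustrative.
CREATE DATABASE WeatherTelemetryDb
(
    EDITION = 'GeneralPurpose',
    SERVICE_OBJECTIVE = 'GP_S_Gen5_2'
);
```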
36:41
With the new application models,
36:48
data is limitless. People want very large databases in the cloud.
36:55
However, challenges come with that: how do I do my size-of-data operations?
37:01
How do I back up my database? HA/DR would be so hard for a very large database. I want to scale the compute independently of my database size. To do that, we introduced something called SQL Hyperscale. Here is a simplified picture:
37:17
we took the same SQL Server engine that you have been
37:21
familiar with for decades, the same Azure SQL Database that you have been familiar with for the last 10 years,
37:27
but we built a new cloud-born architecture where we separate the database
37:32
layers into three or four components. Compute is where your queries run;
37:40
it is what users connect to for reads, writes, and everything else. Behind the scenes we have
37:45
separated the database into page servers and a log service, and the actual durable
37:50
data is stored in Azure Blob storage. This gives us independent scaling of compute and
37:55
independent scaling of page servers. If you have a bigger database,
38:00
all we do is give you more page servers; that is how you get a bigger database size. Or if
38:07
you suddenly need more compute, we give you more compute, without you having to worry
38:12
about the physical storage of the database. We also give you readable replicas
38:17
where you can run your read queries. Putting all this together, this is a very compelling service.
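As a sketch, creating a Hyperscale database follows the same pattern as the earlier tiers (the name and vCore count are illustrative):

```sql
-- A Hyperscale database with 4 vCores.
CREATE DATABASE HyperscaleDemoDb
(
    EDITION = 'Hyperscale',
    SERVICE_OBJECTIVE = 'HS_Gen5_4'
);
```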
38:23
But the proof is in the pudding: unless I show you something, you won't believe how we
38:29
deal with big databases. Big databases come with big problems, which means that- [Moderator:] Sanjay,
38:34
I do want to interrupt you and let you know that you're at 5:47, so maybe this is a very fast demo.
38:42
[Sanjay:] This is the fastest demo, and it is the last demo. OK, so what I will show you here is
38:51
two copies of the same database. And in this database, what we have is:
39:33
let me run this to give you a dramatic effect, and I'll explain what I'm running.
39:39
I'm doing the same thing on this side. Both queries are executing now;
39:47
I'll explain what they are. What we have here is
39:55
a copy of the same data in two separate databases; they are
40:00
the same size and have a table of 20 billion rows.
40:04
We sometimes make unintentional mistakes, and here I have done exactly that:
40:11
I started a transaction to update the data, but while doing some testing I had my
40:19
WHERE clause commented out, and when I executed it on the big database,
40:24
on the big table, I forgot to uncomment my WHERE clause. What that did was run
40:30
a huge update on a 20-billion-row table. In the past, making these mistakes could be
40:39
very costly, because undoing these changes, rolling back such a huge transaction, can take a really, really long time. As you will see here, I realized after maybe a minute
40:55
that, oh, I made a mistake, I should cancel the query. Now we can see that this has
41:01
gone into cancel-query mode, and it is taking
41:06
time to cancel the query. Now, on the right side, I have the same database,
41:13
same size, a different copy, where I have accelerated database recovery enabled; the value 1 means enabled.
41:26
On the first one, I had it purposely disabled to show you the dramatic effect.
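The enabled/disabled state being checked here corresponds to a column in sys.databases; a minimal sketch:

```sql
-- sys.databases exposes the ADR state per database.
SELECT name, is_accelerated_database_recovery_on
FROM sys.databases;

-- On SQL Server 2019+ the setting can be toggled explicitly
-- (in Azure SQL Database it is on by default):
-- ALTER DATABASE [MyDb] SET ACCELERATED_DATABASE_RECOVERY = ON;
```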
41:31
That rollback will continue for some time. On this one, I took a bit longer to realize I made a mistake, and when I
41:39
canceled the query, it was instantaneous. We introduced this capability for very large databases, for Hyperscale-
41:47
size databases, but it is enabled by default for all Azure SQL databases,
41:52
so I welcome you to go make mistakes to your heart's content; only then will you
41:57
realize how powerful these databases are. As human beings, we have
42:01
the right to make mistakes. Failure is our birthright, so fail fast;
42:07
if you fail fast, then you learn fast too, instead of waiting for a rollback
42:12
that takes too long. Then you can go about saying,
42:16
you know, I will uncomment the WHERE clause, go about my business, and do the right thing.
42:21
So these are the powerful capabilities of this database that I wanted to show you today.
42:26
And there is much more to talk about with this database. We have an awesome story for DevOps;
42:32
that itself is a full one-hour session, which we don't have time for,
42:36
but I'm pointing you to an awesome blog by one of my colleagues.
42:40
Please go through it. I saw part of the previous session; that was a very good DevOps presentation too,
42:47
so you can look at the previous recordings from this conference as well. To close, I'll say that what
42:55
we talked about today is Azure SQL Database. It's part of a family of products called the Azure SQL family.
43:01
What we talked about is the database. The same capabilities are available in a managed instance;
43:05
if you love an instance and want to do the same things on an instance, awesome, you can do that too.
43:11
If you want a VM, we offer a VM in the Azure SQL family;
43:16
you have more control on the VM, and you can do that too.
43:20
Same capabilities, same power, same kind of price-performance. So no matter what you choose,
43:25
you are the winner. I will end my session now. Please send feedback about the speaker
43:34
and about the event through these links. And to find me,
43:39
I'm putting my email here, along with my Twitter handle and my team's Twitter handle.
43:43
You can reach out to us if you have any questions, thoughts, or suggestions.
43:48
Thank you so much. Let me go see if there are any comments or questions
43:53
that I can take. [Moderator:] No questions, but I do encourage everyone to reach out to you,
43:58
as you've left your email address and feedback links. We appreciate your time and your contribution
44:06
to this great cause. Thank you. [Sanjay:] Thank you. Awesome. I enjoyed the session;
44:14
I hope everyone did as well. Thank you. Bye now.
44:18
Bye bye


