0:00
I'm really excited to be presenting at this virtual conference, and this is the first time I'm presenting at the C# Corner virtual conference.
0:11
Thanks, Simon, and your team for doing a great job. It's not easy to organize a conference, pulling in a lot of presenters from across the world,
0:20
and you are doing a good job, so thanks to you and your team. Let us get started with our final session of the day, if I understand correctly, which is
0:30
developing AI-enabled applications in Power Apps using AI Builder. I'm Anupama Natarajan.
0:35
I think there was a great introduction given about me. So I'm not going to take a lot of time reiterating that
0:42
So I'm a data and AI consultant. I live in Wellington, New Zealand. And today I'm going to talk to you about the AI Builder product that we have got with Power Apps
0:53
Just a little bit about myself: I have worked in the IT industry for the past 20-plus years,
0:58
and I have performed a wide range of roles.
1:02
I started as a developer back in India. I was born and brought up in India, then moved to New Zealand, and have been working as a
1:11
developer and data warehouse specialist. I really have a passion for data, because that is what was driving me to change
1:18
roles and look for what would really make me happy. Data is what I was passionate about,
1:24
and also how we get value out of the data and expose that data to
1:30
our business users so that they can also derive value out of it.
1:34
So I always look for opportunities, in terms of tools and techniques, that will help us get
1:38
value out of data. Whether I'm exposing data in the form of visualizations using Power
1:43
BI, or exposing the data in the form of APIs, or, finally, like now, I have a better
1:49
opportunity of automating as well as providing value to organizations through artificial
1:55
intelligence. That's why AI is now becoming more or less my passion project,
2:02
because it's another way of exposing the value of the data that organizations have got to your business
2:08
users. If you want to reach out to me after this session, I have got my social media channels on
2:14
this slide, as well as my email address and my blogs; feel free to reach out to me with any questions,
2:20
and I'm always happy to get back to you. I'm also a Microsoft Certified
2:25
Trainer and speaker, and I organize conferences and events based in Wellington, New Zealand.
2:32
So, the agenda for today's session: I will briefly introduce AI Builder, and we will look
2:38
at what pre-built models and custom models mean in the context of AI Builder, and how we
2:44
add intelligence to applications. Because what does this AI Builder do? What am I going to do with
2:50
models? Is it going to be really hectic for me to do this activity? Do
2:55
I need to create anything from scratch? Do I need to have programming language experience?
3:00
All these questions will be going on in your mind, and that's what I will try to answer here.
3:04
Finally, I will showcase a demo of how easy and simple it is for you to use
3:10
AI Builder in Power Apps. In terms of context, you can use AI Builder in Power Apps as well as in Power Automate.
3:17
I will be showcasing it mostly in Power Apps, but you always have good opportunities
3:23
to embed it in Power Automate as well. In the previous session that was presented,
3:27
I was also able to see how they leveraged Power Automate and AI Builder
3:35
Before we get into AI Builder, here is a quick overview of the Power Platform products:
3:41
you have Power BI for business analytics, Power Apps for application development,
3:46
which is more or less, I would say, rapid application development, and Power Automate, of course, for workflow automation.
3:53
You can easily connect to any datasets you want to connect to, and you can get business insights out of them.
4:00
The main idea here is all about how quickly Microsoft can enable people to develop things.
4:08
If you want to build reports, it should be easy. If you want to develop applications, it should be really straightforward, and you shouldn't
4:15
need to have any programming language experience. And of course, if you want to do workflow automation, it should be a piece of cake.
4:22
I have loved Power Automate since the days it was still called Flow, and similarly Power Apps, because it's always handy to have a tool in your tool belt that can help you with application development in a much quicker time frame.
4:40
I still remember the days when I was working as a consultant, meeting my clients and trying to explain to them how much value Power BI was adding,
4:48
and one of the frequent questions I would get was: okay, Anupama, I'm happy to see these visualizations,
4:54
but I want an application where I can update my target data or sales data,
4:59
and I want the visualizations to immediately reflect that. So I used to think at that time:
5:06
okay, do we have an option like this in Power BI? No. So, do I need to build a custom application
5:13
and then serve the Power BI report side by side? Now, with Power Apps, it has totally blown my clients away,
5:21
because you can do this quite easily by embedding your Power Apps inside Power BI.
5:26
That's the power I think the Power Platform tools are actually bringing to our customers.
5:32
And why not? In the Gartner charts, Microsoft is always at the top when it comes to Power BI and Power Apps.
5:38
They are investing a lot of time and effort, and you can see the difference, because year after year I think they are
5:46
just becoming the leaders in the analytics as well as the application development platform community.
5:55
The topic for the day is all about AI Builder, which provides a low-code AI capability
6:02
across the entire Power Platform product family. Microsoft thought about artificial intelligence:
6:11
it is a buzzword, and every organization wants to jump in and start using AI.
6:16
But where they were struggling is the knowledge and skills required to build these models and embed them in applications.
6:24
Microsoft was able to quickly understand that this is the fundamental issue a lot of organizations are facing,
6:31
and they started looking at: okay, how can we make life easy for these organizations?
6:37
Let us give them a low-code AI capability where these models can be easily created and embedded in tools like Power BI,
6:46
Power Apps, Power Automate, and of course now Power Virtual Agents, which has joined the Power Platform
6:52
product stack and provides the capability to chat
7:00
with virtual agents. So, long story short, AI Builder is mainly there to help organizations
7:07
quickly embed AI capability inside the Power Platform products. How you go about doing that is what we'll be covering
7:15
in this session, mainly around integrating with Power Apps. The data connectors, AI Builder, and Dataverse,
7:23
which is the Common Data Service that recently got a new name,
7:27
are all going to be leveraged by these Power Platform tools.
7:33
AI Builder is now not only embedded in Power Apps, but you can also leverage it in Power Automate, Power BI,
7:39
and the Power Virtual Agents products in the family. So what do we get out of AI Builder?
7:49
Microsoft has provided some pre-built models that are available already, and you can leverage them in your applications.
7:56
Here I'm talking in the context of the Power Apps and Power Automate family.
8:02
You have a business card reader model, which automatically pulls the contact information
8:07
from business cards just by scanning the image. You can do sentiment analysis,
8:14
so you can detect positive, negative, or neutral sentiment based on the comments that people put on social media.
8:21
Really handy, right? Think of an organization that immediately wants to know what
8:26
its customers are thinking about it:
8:32
it can immediately pull the comments from its social media channels and do sentiment analysis.
8:38
Then you have key phrase extraction. If you have a lot of text documents that need to be
8:43
looked into, and you want to pull the main points or key phrases out of those documents,
8:48
you can now easily extract that information with the pre-built AI models. There is language detection, if you have a huge number of documents and you want to see what the predominant language used across those documents is. And finally, text recognition. I don't know how
9:04
many of you have experienced text recognition as a pain point. I personally went through
9:12
that a lot in the early stages of my career. I would go and work for organizations that do a lot
9:19
of scanning of documents. There is a lot of digital presence in front of the customers, but what I have seen in the back office
9:25
is that they scan images on a daily basis, then do the keying, and then do
9:32
the verification. The reason for that is that the OCR, or machine-reading, capabilities are
9:39
not strong enough, especially when it comes to handwritten text. A lot of the forms in the organizations I have worked with are filled in by people
9:47
using handwriting, and even though we had OCR,
9:52
it wasn't powerful enough at recognizing that text. I think the models Microsoft is creating with AI Builder
10:01
are much better. If I remember correctly, Microsoft released a version 1.0 of the models,
10:09
and they have now created a 2.0 version of the OCR capability.
10:16
Which means they are actually fine-tuning and refining it to make sure the OCR text recognition capability is really powerful when it comes to using AI Builder.
10:26
This gives you some idea of the pre-built models and their availability.
10:33
Mostly everything is generally available, except for a few new ones that have come out in preview,
10:39
like receipt processing, category classification, and invoice processing. These are in preview mode and may turn into GA quite soon.
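To give a flavour of how little code is involved once a pre-built model is available, here is a minimal sketch of calling the pre-built sentiment analysis model from a canvas app formula. It assumes the model has been added to the app as a data source and that it exposes a Predict function returning the detected sentiment; the exact data source name, call signature, and result shape can vary by environment, so treat these identifiers as illustrative rather than exact.

// OnSelect of a button: store the sentiment predicted for some feedback text
// ('Sentiment analysis' and txtFeedback are assumed names, not exact ones)
Set(
    varSentiment,
    'Sentiment analysis'.Predict(txtFeedback.Text)
);

A label can then display varSentiment (for example positive, negative, or neutral) without any custom model training at all.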
10:48
So we saw some pre-built models. You can also build custom models within AI Builder.
10:58
You can create your own models, train them, and then start using them in your applications.
11:03
You can do prediction, and that is the kind of model that was used in the session before mine.
11:11
It was nice to see how the dating application was using the prediction capabilities for the
11:18
application itself. Similarly, you have form processing, where you can read, extract, and process data out of emails, PDFs, or even images.
11:28
And you have object detection, which is what the demo in today's session covers: you can easily build an AI model,
11:36
you tag the images, labelling each object with the right object name,
11:41
and finally you can detect objects in images based on the model that you have trained.
11:48
You also have category classification as well as entity extraction. If you want to extract specific information about your business from data,
11:57
then entity extraction as well as category classification will help you.
12:02
So it's not only the predefined models you can use; you can also create custom models with AI Builder and leverage them within your applications.
12:15
So how do I add intelligence to applications? The first thing is that you choose the AI model type, from the model types we have seen in the previous slides:
12:26
whether you go for a completely pre-built model or you are going to create a custom model.
12:32
Then what you do is connect to the data source you will pull from,
12:36
whether we are talking about images, or text for doing sentiment analysis.
12:40
You work out where your data source is and connect to it.
12:45
Then you tailor your AI model. Here: am I going to do form processing, or am I going to do object detection?
12:53
Whatever model I'm going to create, what I need to do is tailor whatever is required
12:59
in terms of giving inputs, like tagging, to my AI model. Then comes training your AI model: once you finish your tagging
13:08
process and have identified your source, the next thing you will basically do is start training
13:14
the AI model. And finally, consuming it. Once you have trained your AI model, of course you will be testing it,
13:21
and once the testing process is successful, it's all about how you consume that AI model inside
13:27
your application to get better insights. This is pretty much, I would say, a simple workflow
13:32
that you will follow if you want to add intelligence to your applications. So, enough
13:40
talking about AI and the product stack; let us take a quick
13:45
look at a simple demo of how we do object detection using AI Builder in Power Apps.
13:55
What I have got here is that I've logged on to make.powerapps.com, and this is the home page of
14:03
Power Apps. Here you can see the option where I have got AI Builder. Maybe I'll zoom in a bit;
14:09
hopefully it is clear on screen. This is my AI Builder. I have got two options in here:
14:15
I can build my custom models, or I can look at the list of models that I have already generated and use them.
14:23
Here I will hit Build. I built a sample model this morning, just in case, to save time.
14:33
Here is the list of models that you can build: category classification, entity extraction, form processing, and object detection, which is basically recognizing objects and
14:45
counting the number of objects that you have in images. And finally, prediction, which uses historical data
14:52
to predict the future.
14:57
These are what I would call the custom models, as we discussed,
15:01
and here are the ones that we talked about as pre-built models.
15:05
I can just use a business card reader to automatically detect contact details:
15:09
I give it a business card image, and it will automatically detect the information.
15:14
Similarly, invoice processing, key phrase extraction, language detection, text translation, text recognition, sentiment analysis.
15:24
These are out-of-the-box AI models that you can use,
15:28
pre-built models that you can use then and there, so you don't need to worry about training or anything.
15:33
The pre-built models are AI models that have already been pre-trained and are ready to be used.
15:40
Microsoft has done all that work behind the scenes in terms of training
15:44
these models and making them ready. There is a common question I get asked when I do presentations about AI:
15:52
okay, we have got the REST APIs, like the Form Recognizer API,
15:59
the Computer Vision API, the Custom Vision API, and so on. What is the difference between those and
16:08
using these capabilities inside the Power Platform? When Microsoft started building these pre-built AI capabilities,
16:15
of course they started exposing them as APIs that can be integrated with applications.
16:21
Then they realized, as I said, that not a lot of organizations will have
16:26
enough developers to start integrating them into each and every app.
16:31
It was also about how to make things really easy for their customers.
16:37
So what they decided was that it would be better to take these pre-built AI models and integrate them into the right Microsoft tool stack as well.
16:46
They have taken the APIs that they developed, and they have now
16:51
integrated them into tools like Power Apps, Power BI, as well as Power Automate.
16:56
If you are wondering whether they are different: no, underneath they are the same APIs.
17:01
One option gives you the ability to integrate those APIs into your custom applications,
17:06
and in the other, Microsoft themselves took those APIs and integrated them with their product stack
17:11
within the Power Platform family. That's pretty much it behind the scenes.
17:15
If you have learned about those APIs, those are the same APIs that
17:19
got integrated in here. So that's how things work behind the scenes.
17:25
Okay, enough chatting about that. Let us get into our demo. Here is our object detection.
17:31
The first thing with object detection is that I want to give a name to my AI model.
17:37
So what I'll do is name it something like object detection model demo. It is already going to tell you some of the prerequisites. In terms of what you will need, it is asking for,
18:01
okay, at least 15 or more images for each object you would like to detect.
18:06
So I know what I need to provide for that. And if you need some examples or best practices,
18:12
you can go here and start working through them. The example I'm going to do is one
18:18
you will get to see when you go back and start practicing with AI Builder within Power Apps.
18:24
So hit Create. Now this is going to create a model, and it will give me the steps that I have to carry out
18:30
in order to successfully build my object detection model. This is going to take a little bit of time,
18:37
so patience is important. Next is the model domain: I can choose common objects, or I can choose objects on a retail
18:46
shelf that I want to model, or I can use brand logos.
18:50
So if I just want to scan through some images and highlight brands,
18:54
I can go ahead and do that. If you look at this, it's similar to what you can see
18:58
in the Computer Vision part of the AI APIs that you may have looked at
19:05
in terms of Cognitive Services. Okay, here I'm going to use common objects.
19:10
The next step is that I need to add some objects:
19:13
what are the objects that I'm going to detect here?
19:19
For the objects that I'm going to detect, I have got a whole series of images,
19:24
which is going to give me something to work with. Sorry, let me move this over.
19:30
This is going to give me the ability to add images
19:36
and also to detect the objects in those images. When I talk about adding images and detecting objects in them,
19:45
what I'm going to do is use some images that show the different
19:49
flavors of tea. When I say different flavors of tea, I have got green tea,
19:55
mint tea, as well as cinnamon tea. So if I want to go and create the objects, I can just
20:02
add green tea, mint tea, and cinnamon tea.
20:17
So those are the three objects I'm going to create here, and basically these objects are the labels I'll be using on my images in order to tag them.
20:26
Now that I have given the object names, it's time to add some images and start tagging them, adding objects to those images so that I can train my model.
20:36
I'll click on Add images. The source I'm going to take is my list of sample images.
20:44
Let me choose some images from my samples. Cool, so I have got a series of images in here.
20:54
Normally they say 15-plus is really good, but 50-plus is much better.
21:00
You can also tag multiple objects in here. Some quick tips I can
21:06
showcase to you while it is uploading: you can start with 15-plus images per object,
21:11
but it's always good to have 50-plus. There are image requirements, covering which image formats are supported.
21:16
And it's okay to tag multiple objects in the same image; the model is good enough to handle that.
21:23
Now the images are getting uploaded. The next process is all about tagging the images with the objects I have created.
21:33
Cool, I have got around 30 images. Now the tagging process starts, and this is going to be an interesting process,
21:42
but it will be good, because you can just start with the process and then refine it as you go.
21:47
So I can go here and say, okay, this is my green tea, and I can go here and,
21:54
here, be clever enough to reduce the size of the bounding box.
22:02
Sometimes it pre-selects a region for you, and sometimes, when you have multiple objects,
22:06
it's nice to resize the region and then start doing the tagging.
22:13
So I can say, okay, it's a green tea with cinnamon. Similarly,
22:19
you can go ahead and keep tagging like this, whether it's a mint or a cinnamon.
22:25
Okay, I'll just put this one as mint. I'll remove this one
22:35
and retag it. You can just say, okay,
22:41
whether it's a green tea or a mint tea or a cinnamon tea,
22:45
and then you can work through them. This is a green tea with mint flavor, this is a green tea with cinnamon flavor,
22:51
and there is almost a rose-flavored one as well. I'm not going to keep tagging here,
22:58
but it gives you the idea: you keep tagging images, and once you are done with the tagging,
23:04
you can move on. I'm just going to continue as if I have done the tagging,
23:08
but it is clever enough to understand that I haven't tagged enough images.
23:14
So rather than carrying on here, what I'll do is reopen
23:18
the model that I have already created. The idea behind this is that you have to go through this process and tag each and every
23:30
image. I don't want to tag just 15 images right now and waste everyone's time,
23:36
so what I can do here is show you the model that has already been trained. This is my
23:42
model. As you could see on the previous screen, I had two models; the other model was the work-in-
23:48
progress model. This is the model that I previously created and published. If I want to
23:55
show you, I can click Edit model and say, okay, start from the previously published version,
24:03
just to showcase that it's the same process I have gone through. I have tagged the images, I have
24:10
trained the model, and I have published that particular model. Here I can quickly showcase,
24:16
once it loads, the images that I have used and the tags that I have done.
24:22
Same process. Click on Next. There are the three objects that have been created,
24:33
and then I have got all the images that have been tagged successfully. And finally, I have got a model summary.
24:40
So this is how I was doing the tagging. Let me go back in here,
24:46
back to Build and Models. Cool. Here is the model that I have pre-trained and made available.
25:01
One thing you can see with this is that I can discard the draft;
25:05
I don't need the draft model anymore. So once I have tagged my images, I can just say, okay, create the model.
25:16
Once the model is created, you have to train it. The training process simply takes all the images, along with the objects or labels that you have tagged those images with,
25:28
and it creates the model, which you can then use for testing.
25:32
So here I have trained my model, and what I can do is a quick test of it.
25:38
If I want to upload a sample image from my machine here, I can just go ahead and choose
25:46
an image, and it will automatically detect what type of tea I have got.
25:52
Here I have got a green tea rose, and based on the tagging, it is able to detect that.
25:58
What I get is a 92% confidence score, and based on that you can make decisions around what you really want to do with this particular model.
26:13
To give you a real-world example,
26:18
sometimes you use this detection process to find objects in images.
26:25
If the score is more than a particular threshold, like 85% or 90%, you can confidently route it and say,
26:30
yes, it has detected it correctly. But if it is reporting something
26:37
less than 80%, then you may need to redirect those images to a different workflow process in your organization.
26:42
Think of it that way: that's where the confidence percentage will help you decide how to tailor your workflow process.
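As a rough illustration of that routing idea, here is a minimal Power Fx sketch. It assumes the detection results have been captured into a collection named colDetections with a Confidence column scored between 0 and 1; the real property and column names exposed by the object detector control may differ, so these identifiers are placeholders rather than the actual API.

// Route based on how confident the model is about what it found
If(
    CountRows(Filter(colDetections, Confidence >= 0.85)) > 0,
    Notify("High confidence: process the image automatically", NotificationType.Success),
    Notify("Low confidence: send the image for manual review", NotificationType.Warning)
)

In a real solution the two branches would typically write to different queues or trigger different Power Automate flows rather than just showing a notification.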
26:42
So I have created the model, I have published the model, and got it ready. Now, what am I going to do with this? Am I just going to keep doing quick tests with this model? That's not the case. What you can do now is take this model
27:01
and quite quickly build an application that utilizes it.
27:07
The Use model button here will ask whether you want to create a new application in Power Apps,
27:14
which can be a canvas app, or whether you want to create a new
27:18
flow using Power Automate. These are the two options supported.
27:24
If you think about Power BI, it already has some AI capabilities embedded in it,
27:30
so you can easily use these models in Power BI if you need to.
27:34
Currently, with Power Apps and Power Automate, you can leverage the model straight from here.
27:40
I want to create a new app using this model, so this is going to open
27:48
Power Apps Studio, which will allow me to build things with the model.
27:54
Conscious of the time, I have got an application already created,
27:59
but if you want to create an application, that's what you would be doing: use the model, decide whether you want a Power Apps application or
28:06
a Power Automate flow, and you will be taken to an application editing screen.
28:12
Here I'm going to load the application that I have already created,
28:16
where I have named the application Object Detector, and I have got two controls here.
28:22
One is an object detector control, and then I have got a data table.
28:27
Mainly what it does is, when I upload an image, it goes ahead and detects the objects,
28:35
and this particular data table is tied to the output that the object detector gives me.
28:41
It is going to give me columns in terms of the tagging: what tags it
28:46
can find in the image, and what the total object count is.
28:52
When it comes to this particular object detector, of course, you can add these things right here.
28:59
The way I added this object detector is: under Insert, I went into AI Builder, and there I can see the object detector control. I just put that component directly onto my canvas here.
29:13
The next thing I need to do is tie it up with a model. Here, my AI model was already listed,
29:21
and I just chose the model that I created; that's the first thing you need to do.
29:25
The next thing is that I inserted a data table, and in the advanced properties,
29:33
I have added the tag name as well as the object count as the columns for my data table.
29:38
Now, in order to tie this data table to my
29:46
object detector here, I also set a few things so that the tag name is actually coming from this particular object detector.
29:56
Cool. So when you choose the data table, as you can see in the formula bar, I have specified that the object detector's results are what I'm going to add as columns into this particular data table.
30:09
This is where I have tied the link between the data table and my object detector control.
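Just to make that link concrete, here is roughly what the formula bar entry looks like for the data table. It assumes the control is named ObjectDetector1 and that it exposes a grouped results table with TagName and ObjectCount columns; the control and property names in your own app may differ slightly, so take this as a sketch rather than the exact formula from the demo.

// Items property of the data table control:
ObjectDetector1.GroupedResults
// each row then supplies the TagName and ObjectCount columns shown in the table

With that in place, every time the detector runs on a new image, the data table refreshes with one row per detected tag.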
30:16
So we have got this configured. It is as simple as that: you created a model by training it on some images.
30:25
Of course, you did the tagging process. Then, straight away, once the model is trained,
30:29
you did a quick test. Once the test is successful and you're happy with the model,
30:34
you can go ahead and publish it. As part of publishing the model,
30:39
you can click on Use model to create your canvas app or Power Automate flow,
30:45
and then you can come here and start designing how you want to utilize the model.
30:51
Once you have created the Power App, you have the ability to preview it, right?
30:55
So I clicked on the play button, which allowed me to open the app and demo it.
31:03
I'll click on Detect. Here I'm going to give it the same image that we used,
31:07
and what it is going to do is give me,
31:11
okay, how many tags it can detect and what the total object count of those
31:15
tags is. There you go: it was able to detect the green tea rose, and that is the only object it was able to detect; it can't see any of the mint or the cinnamon in here. But when we tagged, there was a whole heap of tags that we provided across the different images. The idea here is that you can now actually use this particular model in your applications quite easily, whether you develop the application in Power Apps or in Power Automate.
31:45
Cool. Then you can save this and start sharing the application with whoever you want.
31:55
The main thing I wanted to show is how easy it is to create an AI model using AI Builder,
32:04
and that you can start embedding it in the applications that you really want to use.
32:09
You can think about where you can leverage these types of applications. For example,
32:15
I had a client who had to collect meter readings from their residential customers.
32:23
This type of app can be provided to the end users, your customers,
32:29
who can take a photo of the meter reading, because sometimes, when the meter readers go out,
32:34
people won't be at home. So the customers can just take a photo, the app will automatically extract the text out of the meter reading image,
32:42
and then, when they click a button in the app, it will automatically send the meter reading back to your organization.
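To sketch how a button like that might be wired up in Power Fx, here is a rough example. It assumes a text recognizer control called TextRecognizer1, a MeterReadings data source with Reading and SubmittedOn columns, and that the first recognized line is the reading itself; all of these names and simplifications are illustrative assumptions, not the client's actual implementation.

// OnSelect of a hypothetical "Submit reading" button
Patch(
    MeterReadings,
    Defaults(MeterReadings),
    {
        Reading: First(TextRecognizer1.Results).Text,  // first piece of text found in the photo
        SubmittedOn: Now()
    }
)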
32:49
It's a classic case where you have a business problem, and you can think about how to leverage these kinds of AI Builder models or capabilities
32:57
in order to address that type of business problem. So let us come back to our slides here.
33:07
To recap adding intelligence to your application:
33:12
what we saw was a classic example where we decided on the AI model type.
33:18
In this case, I wanted to do object detection. We connected to the data;
33:23
what we had was a series of images that we wanted to put into the model for tagging purposes.
33:30
We tailored our AI model: tagging played an important role for the object detection model, where we took a series of images,
33:38
started tagging them, and then decided, okay, it's time to train our AI model.
33:44
Two things we did with the object detection model were: we created those objects,
33:50
then we used those objects to identify them in the images and tag them appropriately.
33:56
One of the key things to remember here is the number of images you choose.
34:01
The more images you tag, the more accuracy you can get. I used around 30 images in the example, but I would say, if you are using this in real-world scenarios, use at least 50
34:11
images as a minimum to start with. The more images you have, the better the accuracy.
34:17
Then we created the model and trained it. Once the model is trained, you have the ability to create either a Power Apps application
34:25
or a Power Automate flow straight from the model. What we did was create a canvas app.
34:32
We just had a couple of controls: one is the object detector and the other is the data table,
34:36
and we linked these two things together. In the data table, we made sure
34:41
we showed the object name and the total count of objects that the model detects in the image we upload.
34:50
Finally, we did a preview, uploading an image and then allowing the model to extract the information out of it to populate the data table.
34:59
Quite a simple example, but quite powerful in terms of how easily you can leverage the AI capabilities.
35:07
And here we created a custom model; that's the point I want to highlight. This is not a pre-built model.
35:13
You created a model based on the dataset that you had in your organization,
35:18
you did the tagging, and now you are utilizing it. So this is not a pre-built model like a business card reader.
35:26
That's something to be proud of: you have the ability to create custom models and leverage them within your applications.
35:34
So that's the demo we quickly saw. Thanks for giving me the opportunity to present in today's session.