For many scenarios, the cloud is the place to process data and apply business logic. But processing data in the cloud is not always the way to go, because of connectivity, legal constraints, or because you need to respond in near-real time.
In this session, we dive into how Azure Machine Learning and Azure IoT Edge can help in this scenario.
Using the Azure Machine Learning service, a cloud service for tracking your models as you build, train, deploy, and manage them, we train a custom model.
When the model is ready, we use Azure IoT Edge to deploy the model to an edge device and find out how it can operate on its own.
At the end of the session, you will have learned how to train a model using Azure Machine Learning and how to use IoT Edge to deploy this model to an IoT Edge device.
About Speaker:
Henk Boelman is a Cloud Advocate specializing in artificial intelligence and Azure, with a background in application development. He is currently part of the regional cloud advocate team in the Netherlands. Before joining Microsoft, he was a Microsoft AI MVP and worked as a software developer and architect, building many AI-powered platforms on Azure.
He loves to share his knowledge about topics such as DevOps, Azure, and artificial intelligence by providing training courses, and he is a regular speaker at user groups and international conferences.
Conference Website: https://www.2020twenty.net/iot-virtual-conference/
Video Transcript:
0:02
In this session we're going to cover what machine learning is and the ways we can implement this in IoT scenarios. We're going to create a classifier using the Custom Vision service and run that classifier in an IoT Edge solution. We will begin with an introduction scenario, which we'll use as a reference throughout this presentation.
0:26
In our world today there are many different creatures that are on a mission to make our lives miserable. I believe every country has its own little enemies, but in America there is this big problem with raccoons messing up the trash in the streets. I think we can all sympathize with the American nation that this is a real problem. One day Karissa thought: enough is enough, raccoons, no more trashing my garbage. And she started building a trash panda defense system so she could sleep again at night.
1:02
She started drawing and came up with the following design: when the camera detects a raccoon on the trash, the alarm should go off, scaring away the raccoon. So she went to the store and bought a Raspberry Pi, a camera, and a lamp (because she didn't want to make noise for the neighbors), and she got an Azure subscription.
1:28
Now that she had bought all the components, she had to start thinking about how to build this solution. To solve this problem with the camera, she needs to build something that can look at an image and detect whether there is a raccoon in the image or not. This is a problem that is hard to solve with traditional programming, but fairly easy with machine learning.
1:52
First it is important to understand what machine learning actually is. The most common definition is that it is giving your computer the ability to learn without explicitly programming it. When I'm programming, I would write an algorithm, run data through that algorithm, and get an answer. But with machine learning this is switched around: instead of creating the algorithm, I let the computer create the algorithm for me, by providing data with answers as input. In the machine learning world this algorithm is called a model, and what we can do now is run new data through this model and get a prediction about the input.
2:43
For our trash panda defense system we would have to create a classification model. You can see this as a function that looks at an incoming video frame and gives back a prediction of whether there is a raccoon in the image. To create this model we need three things: a lot of images with raccoons on them, a training algorithm, and an environment where we can run that training algorithm. These three things will deliver us a model that we can later use in our trash panda defense system.
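To make the "model as a function" idea concrete, here is a minimal sketch of how the defense system could consume such a model. predict() is a hypothetical stand-in for the trained classifier, not code from the session:

    # Hypothetical stand-in for the trained classifier: in the real
    # system this would run the exported model on the frame.
    def predict(frame):
        """Return a (tag, probability) prediction for a video frame."""
        return "raccoon", 0.97

    def on_new_frame(frame):
        tag, probability = predict(frame)
        # Only trigger the alarm for confident raccoon detections.
        if tag == "raccoon" and probability > 0.9:
            print("Raccoon detected - scare it away!")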
3:22
But what if suddenly unicorns become a problem? If that were the case, she would need to retrain the model so it can also recognize unicorns. This is done by adding images of unicorns to the dataset and training the model again. Now that we understand what we have to create, let's have a look at which tools are available on Azure.
3:47
First there are the domain-specific pre-trained models. These models are created, run, and maintained by Microsoft and exposed to you through an API in Azure. The only thing you have to do to access these models is go into the Azure portal and create the one you like. There are around 40 different models available, divided into four main areas: Vision, to make your application see the world; Speech, to make your application talk and listen to you; Language, to understand what is being spoken; and Knowledge, to give your application a brain.
4:35
But if you want to dive deeper and create your own models, you can bring your own tools and frameworks and use services like Azure Machine Learning, or create a machine learning virtual machine to boost your productivity. And last but not least, there is a lot of powerful compute available that makes the training of your model fast and reliable.
4:56
To create our classification model there are a few options available. We could use the Computer Vision API. This is an API that is pre-trained on a large dataset and can classify most common objects. A really nice bonus is that it can generate a description of what it sees in the image and tell you where the found objects are. Besides detecting objects it also has an OCR function, can read handwriting, and can detect celebrities in an image. This is a very easy way to get started with vision.
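As a rough sketch of what calling this API looks like: the Computer Vision REST endpoint accepts raw image bytes and returns JSON. The resource name, key, file name, and API version below are placeholders, so check your own resource's details before using them.

    import requests

    # Placeholder endpoint and key from your own Computer Vision resource.
    endpoint = "https://<your-resource>.cognitiveservices.azure.com"
    key = "<your-subscription-key>"

    with open("trash_can.jpg", "rb") as f:
        image_data = f.read()

    # Ask for a description and the detected objects in one call.
    response = requests.post(
        f"{endpoint}/vision/v3.2/analyze",
        params={"visualFeatures": "Description,Objects"},
        headers={
            "Ocp-Apim-Subscription-Key": key,
            "Content-Type": "application/octet-stream",
        },
        data=image_data,
    )
    analysis = response.json()

    # Print the generated captions with their confidence scores.
    for caption in analysis["description"]["captions"]:
        print(caption["text"], caption["confidence"])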
5:35
But not all objects are detected by the Computer Vision API, and sometimes you want to build something that detects one specific object. Like in our case: we want to be able to detect raccoons and unicorns. For this scenario you can use the Custom Vision service. This is a service that helps you easily build models that can perceive particular objects.
6:01
This service comes with a user-friendly interface that walks you through developing and deploying custom computer vision models. You can then either use the API to quickly predict images, or export the model to a device to run real-time image understanding.

6:20
If you want to control the complete life cycle of your model, you can use Azure Machine Learning. This service helps you accelerate the end-to-end machine learning life cycle. It empowers developers and data scientists with a wide range of productive experiences for building, training, and deploying machine learning models faster, accelerates your time to market, and enables team collaboration with industry-leading MLOps: DevOps for machine learning. The platform is secure and designed for responsible ML.
6:54
To use the Custom Vision service you will need to create a Custom Vision training and prediction resource in Azure. Create a new resource, select AI and Machine Learning, and select Custom Vision. Click create. We want to create both the training resource (this is the resource that will train the model) and the prediction endpoint. We don't need a prediction endpoint for the trash panda defense system, but in this demo I want to show you how to use it. Select a resource group, give your resource a name, and select for both endpoints the location closest to you. We are choosing the S0 tier here, but you can try it out for free using the free tier. Click review and click create. You can also create these resources through the Azure CLI or by using ARM templates.
8:01
Let's open the Custom Vision portal. Here we can create a model through a visual interface. Everything you see in this demo can also be done by using the API or with one of our SDKs. To create your project, select New Project and enter a name and a description for the project, then select a resource group. If your signed-in account is associated with an Azure account, the resource group drop-down will display all of your Azure resource groups that include a Custom Vision service resource.
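For example, the Python SDK (the azure-cognitiveservices-vision-customvision package) exposes the same operations as the portal. A minimal sketch of creating a training client, assuming a placeholder endpoint and key from your training resource:

    from azure.cognitiveservices.vision.customvision.training import (
        CustomVisionTrainingClient,
    )
    from msrest.authentication import ApiKeyCredentials

    # Placeholder values; copy the real ones from your training resource.
    training_endpoint = "https://<your-resource>.cognitiveservices.azure.com"
    training_key = "<your-training-key>"

    credentials = ApiKeyCredentials(in_headers={"Training-key": training_key})
    trainer = CustomVisionTrainingClient(training_endpoint, credentials)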
8:36
Select Classification under project types. Then, under classification types, choose multilabel or multiclass: multilabel classification applies any number of your tags to an image (zero or more), while multiclass classification sorts images into single categories, so every image you submit will be sorted into the most likely tag. You will be able to change the classification type later if you want to.
9:03
Next, select one of the available domains. Each domain optimizes the classifier for a specific type of image, and you can change the domain later if you wish.

9:16
General is optimized for a broad range of image classification tasks; if none of the other domains are appropriate, or if you're unsure which domain to choose, select this generic domain. Food is optimized for photographs of dishes as you would see them on a restaurant menu; if you want to classify photographs of individual fruits or vegetables, use this domain. Landmarks is optimized for recognizable landmarks, both natural and artificial; this domain works best when the landmark is clearly visible in the photo, and it works even if the landmark is slightly obstructed by people in front of it. Retail is optimized for images found in a shopping catalog or on a shopping website; if you want high-precision classification between dresses, pants, and shirts, use this domain. And finally there are the compact domains: these are optimized for real-time classification on mobile devices, and the model generated by a compact domain can be exported to run locally. Because we are going to run this model on a Raspberry Pi, we are going to use General (compact). Click Create project.
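Continuing the SDK sketch from above, project creation with a specific domain might look like this; the project name and description are made up for illustration:

    # Look up the "General (compact)" classification domain so the
    # trained model can later be exported for the Raspberry Pi.
    domains = trainer.get_domains()
    compact_domain = next(
        d for d in domains
        if d.type == "Classification" and d.name == "General (compact)"
    )

    project = trainer.create_project(
        "Trash Panda Defense",
        description="Classifies images as Bit or Not Bit",
        domain_id=compact_domain.id,
    )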
10:36
Now we have to choose our training images. As a minimum, it is recommended that you use at least 30 images per tag in the initial training set. You also want to collect a few extra images to test your model once it is trained. In order to train your model effectively, use images with visual variety: select images that vary by camera angle, lighting, background, and visual style.
11:05
I took Bit, our trash panda, out for a photoshoot and got photos from different angles with different backgrounds. Click the Add images button and then Browse local files, and select Open to move to tagging. Your tag selection will be applied to the entire group of images you have selected to upload, so it is easier to upload images in separate groups according to their desired tags. You can change the tags for individual images after they have been uploaded. Our classifier is going to have two tags: Bit or Not Bit. So I've created a group of images that do not contain Bit but other objects, like a can or a robot. I also add these images and tag them with the negative tag.
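Through the SDK, the same tagging and uploading could be sketched like this (the folder names are hypothetical, and note that an upload batch is limited to 64 images):

    import os
    from azure.cognitiveservices.vision.customvision.training.models import (
        ImageFileCreateBatch,
        ImageFileCreateEntry,
    )

    # Create the two tags used in the demo.
    bit_tag = trainer.create_tag(project.id, "Bit")
    not_bit_tag = trainer.create_tag(project.id, "Not Bit")

    def upload_folder(folder, tag):
        """Upload every image in a folder with a single tag applied."""
        entries = []
        for name in os.listdir(folder):
            with open(os.path.join(folder, name), "rb") as f:
                entries.append(ImageFileCreateEntry(
                    name=name, contents=f.read(), tag_ids=[tag.id]))
        trainer.create_images_from_files(
            project.id, ImageFileCreateBatch(images=entries))

    upload_folder("images/bit", bit_tag)
    upload_folder("images/not_bit", not_bit_tag)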
11:59
To train the classifier, select the Train button. The classifier uses all the current images to create a model that identifies the visual qualities of each tag. After the training is completed, the model's performance is estimated and displayed. The Custom Vision service uses the images that you submitted for training to calculate precision and recall, using a process called k-fold cross-validation.
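In the SDK sketch, kicking off that training run and waiting for it to finish could look like this:

    import time

    # Start a training run and poll until the iteration completes.
    iteration = trainer.train_project(project.id)
    while iteration.status != "Completed":
        time.sleep(5)
        iteration = trainer.get_iteration(project.id, iteration.id)

    print("Training finished:", iteration.status)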
12:27
Precision and recall are two different measurements of the effectiveness of your classifier. Precision indicates the fraction of identified classifications that were correct: for example, if the model identified 100 images as Bit and 99 of them were actually Bit, then the precision would be 99 percent. Recall indicates the fraction of actual classifications that were correctly identified: for example, if there were actually 100 images of Bit and the model identified 80 images as Bit, the recall would be 80 percent. The probability slider on the left pane of the Performance tab sets the level of confidence that a prediction needs to have in order to be considered correct for the purpose of calculating precision and recall.
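Written out as code, the two metrics and the transcript's worked examples look like this (tp, fp, and fn are the usual true-positive, false-positive, and false-negative counts):

    def precision(tp, fp):
        # Fraction of identified classifications that were correct.
        return tp / (tp + fp)

    def recall(tp, fn):
        # Fraction of actual positives that were correctly identified.
        return tp / (tp + fn)

    # 100 images identified as Bit, 99 actually Bit -> precision 0.99
    print(precision(tp=99, fp=1))
    # 100 actual Bit images, 80 of them identified -> recall 0.80
    print(recall(tp=80, fn=20))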
13:23
Now that we have finished training the model, we can test whether it can recognize Bit, our trash panda, in images that the model has never seen. Custom Vision offers an easy interface to do this: in the top right you'll find a Quick Test button, and when you click on this you can test the model. You have two options here: provide an image URL in the URL field, or, if you want to use a locally stored image instead, click the Browse local files button and select an image file.

14:00
The image you select appears in the middle of the page, and the result appears below the image in the form of a table with two columns, labeled Tags and Confidence.

14:12
The images you send to your model can be used to retrain it. You can find those images under the Predictions tab; here you see the same images as used for the test. To add an image to your training data, select the image, select a tag, and then select Save and close. The image is removed from Predictions and added to the training images; you can view it by selecting the Training Images tab. You can click Train again to retrain your model with the new images.
14:46
Before we continue with the demo, let's talk a little bit about how we can improve a classifier. The quality of our classifier depends on the amount, quality, and variety of the labeled data we provide, and on how balanced the overall dataset is. A good classifier has a balanced training dataset that is representative of what will be submitted to the classifier. The process of building such a classifier is iterative, and it is common to take a few training rounds to reach the expected results. A general pattern that will help you build a more accurate classifier: first, a general training round; second, add more images and balance the data, then retrain; third, add some more images with varying backgrounds, lighting, object size, camera angle, and style, and retrain again; fourth, use new images to test predictions; and fifth, modify the existing training data according to those prediction results.
15:59
One thing we want to prevent is overfitting. Sometimes a classifier will learn to make predictions based on things the images have in common. For example, if you are creating a classifier for apples versus citrus and you have used images of apples in hands and citrus fruits on white plates, the classifier may give undue importance to hands and plates rather than to apples versus citrus. To prevent this from happening, use the following guidance.
16:35
Data quantity: the number of training images is the most important factor. We recommend using at least 30 images per label as a starting point. With fewer images there is a higher risk of overfitting, and while your performance numbers may suggest good quality, your model may struggle with real-world data.
16:59
Also important to consider is the data balance. For instance, using 500 images for one label and 50 for the other makes an imbalanced training dataset, which will cause the model to be more accurate in predicting one label than the other. You're likely to see better results if you maintain at least a 1:2 ratio between the label with the fewest images and the label with the most images. For example, if the label with the most images has 500 images, the label with the least images should have at least 250 images for training.
17:46
Let's talk about data variety. Be sure to use images that are representative of what will be submitted to the classifier during normal use, and include a variety of images to ensure that your classifier can generalize well.

18:02
Let's look at a few things that can make your dataset more diverse. Backgrounds: provide images of your objects in front of different backgrounds; photos with natural backgrounds are better than photos in front of neutral backgrounds, as they provide more information for the classifier. Lighting: provide images with varied lighting, especially if the images used for prediction have different lighting; it is also helpful to use images with a variety of saturation, hue, and brightness. Object size: provide images in which the objects vary in size and number, for example a photo of a bunch of bananas and a close-up of a single banana; different sizing helps the classifier generalize better. Camera angle: provide images taken from different camera angles. Style: provide images of different styles of the same class, for example different varieties of the same fruit; however, if you have objects of drastically different styles, we recommend you label them as separate classes to better represent their distinct features.
19:22
Back to the demo. Let's add some more diverse images of our trash panda Bit to our training dataset. We go to Training Images and click Add images, select 60 more images with Bit in them, and select 60 more with Not Bit. You might have noticed that I didn't tag my new images, so these images can now be found under Untagged Images.
19:54
We're going to use the model we have just trained to tag these images for us. Click on Suggested tags and click Get started; it can take a few minutes before it has tagged all the images. When the tagging is done, click on the tag, take a look to check that the tags are correct, and click Confirm tags. Now these images are added to our training dataset, and we can click Train to retrain our model.
20:35
For our trash panda defense system we're going to export the model. You can export every iteration of your model. To export your model, go to the Performance tab and click on the iteration you want to export. At the top you can click Export. Here you can choose a TensorFlow model, which will run on Android; a Core ML model, for iOS 11; or ONNX, for Windows ML. You can also get a Docker container for the Windows, Linux, or ARM architecture; the container includes a TensorFlow model and code to self-host a Custom Vision API.
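Once an exported model is on the device, running it is plain local inference. Below is a sketch using the ONNX export with onnxruntime; the input size, channel order, and any normalization are assumptions, so check the metadata and labels file that ship with your export:

    import numpy as np
    import onnxruntime as ort
    from PIL import Image

    # Load the exported model file.
    session = ort.InferenceSession("model.onnx")
    input_meta = session.get_inputs()[0]

    # Resize a captured frame to the assumed network input size and
    # build a float32 NCHW batch of one.
    image = Image.open("frame.jpg").convert("RGB").resize((224, 224))
    blob = np.array(image, dtype=np.float32).transpose(2, 0, 1)[np.newaxis, ...]

    outputs = session.run(None, {input_meta.name: blob})
    print(outputs[0])  # per-tag scores, in the order of the labels file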
21:17
Another way of using your model is to submit images to the prediction API. You will first need to publish your iteration for prediction, which can be done by selecting Publish and specifying a name for the published iteration. This will make your model accessible to the prediction API of your Custom Vision Azure resource. When you have published your iteration, you can look up the details and start submitting images to the endpoint.
21:48
submitting images to the endpoint let's start postman and submit an image
21:50
let's start postman and submit an image
21:50
let's start postman and submit an image to the endpoint and take a look at the
21:52
to the endpoint and take a look at the
21:52
to the endpoint and take a look at the response that the api sends back
21:59
response that the api sends back
21:59
response that the api sends back copy the url and the prediction key
22:11
and select the image we want to classify
22:16
when we look at the json we see that a
22:18
when we look at the json we see that a
22:18
when we look at the json we see that a model running in our prediction endpoint
22:20
model running in our prediction endpoint
22:20
model running in our prediction endpoint has classified the image as an image
22:23
has classified the image as an image
22:23
has classified the image as an image containing bit
22:26
containing bit
22:26
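The same Postman request can also be scripted. A minimal sketch, assuming an image classification project; the URL and key are the values copied from the published iteration's details page, and the placeholders are illustrative:

import requests

# Values copied from the Prediction API details of the published iteration
PREDICTION_URL = ("https://<resource>.cognitiveservices.azure.com/customvision/v3.0/"
                  "Prediction/<project-id>/classify/iterations/<published-name>/image")
PREDICTION_KEY = "<prediction-key>"

with open("raccoon.jpg", "rb") as f:
    resp = requests.post(
        PREDICTION_URL,
        headers={"Prediction-Key": PREDICTION_KEY,
                 "Content-Type": "application/octet-stream"},
        data=f.read(),
    )

# Each prediction in the JSON carries a tagName and a probability
for p in resp.json()["predictions"]:
    print(p["tagName"], round(p["probability"], 3))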
containing bit in this demo we have seen how to create
22:30
in this demo we have seen how to create
22:30
in this demo we have seen how to create a custom vision training and prediction
22:32
a custom vision training and prediction
22:32
a custom vision training and prediction endpoint in the azure portal
22:35
endpoint in the azure portal
22:35
endpoint in the azure portal seen how we can upload images tag them
22:39
seen how we can upload images tag them
22:39
seen how we can upload images tag them and train a model
22:42
we learned what the different metrics
22:44
we learned what the different metrics
22:44
we learned what the different metrics mean like precision
22:45
mean like precision
22:45
mean like precision and recall we talked about
22:49
and recall we talked about
22:49
and recall we talked about how you can improve your model and
22:50
how you can improve your model and
22:50
how you can improve your model and prevent overfitting
22:53
prevent overfitting
22:53
prevent overfitting retrained our model by adding more
22:55
retrained our model by adding more
22:55
retrained our model by adding more diverse images
22:57
diverse images
22:57
diverse images and used the first iteration of our
22:59
and used the first iteration of our
22:59
and used the first iteration of our model to auto attack the images
23:03
and took a look at how we can export the
23:05
and took a look at how we can export the
23:06
and took a look at how we can export the model so we can use it in a mobile app
23:09
model so we can use it in a mobile app
23:09
model so we can use it in a mobile app and in our trash banner defense system
23:12
and in our trash banner defense system
23:12
and in our trash banner defense system we also learned how we can publish the
23:14
we also learned how we can publish the
23:14
we also learned how we can publish the model to our prediction endpoint
23:17
model to our prediction endpoint
23:17
model to our prediction endpoint and use it as an api now that we have
23:19
and use it as an api now that we have
23:19
and use it as an api now that we have created our recruit classification model
23:22
created our recruit classification model
23:22
created our recruit classification model it is time to find out how we can
23:24
it is time to find out how we can
23:24
it is time to find out how we can implement this model in our trash banner
23:26
implement this model in our trash banner
23:26
implement this model in our trash banner defense system
23:28
defense system
23:28
defense system first let's zoom out and look at what we
23:31
first let's zoom out and look at what we
23:31
first let's zoom out and look at what we have to create
23:33
have to create
23:34
have to create we would have this thing that generates
23:37
we would have this thing that generates
23:37
we would have this thing that generates data
23:38
data
23:38
data that would be the camera that generates
23:40
that would be the camera that generates
23:40
that would be the camera that generates frames
23:43
frames
23:43
frames then we need to analyze the data that
23:46
then we need to analyze the data that
23:46
then we need to analyze the data that would be running the frames through the
23:48
would be running the frames through the
23:48
would be running the frames through the model
23:49
model
23:49
model and finally we have to take an action in
23:52
and finally we have to take an action in
23:52
and finally we have to take an action in our example
23:53
our example
23:53
our example it would be turning the lamp on or off
23:57
it would be turning the lamp on or off
23:57
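As a quick sketch of that first step, grabbing frames from a USB camera takes only a few lines with OpenCV; the device index and the one-frame-per-second throttle below are assumptions:

import time
import cv2  # opencv-python

cap = cv2.VideoCapture(0)       # first USB camera on the device

while True:
    ok, frame = cap.read()      # one BGR frame from the camera feed
    if not ok:
        break
    # step two would run this frame through the model here
    time.sleep(1)               # throttle to roughly one frame per second

cap.release()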
it would be turning the lamp on or off and there are multiple ways of doing
23:58
and there are multiple ways of doing
23:58
and there are multiple ways of doing this and we're going to zoom into
24:01
this and we're going to zoom into
24:01
this and we're going to zoom into two of these scenarios the first
24:04
two of these scenarios the first
24:04
two of these scenarios the first scenario
24:05
scenario
24:05
scenario would be generating the data on the iot
24:08
would be generating the data on the iot
24:08
would be generating the data on the iot device
24:09
device
24:09
device analyze the data in the cloud and take
24:12
analyze the data in the cloud and take
24:12
analyze the data in the cloud and take the action
24:13
the action
24:13
the action on the device in our case
24:17
on the device in our case
24:17
on the device in our case the trespander defense system would send
24:19
the trespander defense system would send
24:19
the trespander defense system would send every frame to the cloud
24:22
every frame to the cloud
24:22
every frame to the cloud run the frame through the model
24:26
run the frame through the model
24:26
run the frame through the model and send the prediction back and the
24:28
and send the prediction back and the
24:28
and send the prediction back and the device would turn the lamp on or off
24:32
device would turn the lamp on or off
24:32
device would turn the lamp on or off this this scenario enables us to quickly
24:36
this this scenario enables us to quickly
24:36
this this scenario enables us to quickly tune the model in the cloud
24:38
tune the model in the cloud
24:38
tune the model in the cloud but it takes a long time before the lamp
24:40
but it takes a long time before the lamp
24:40
but it takes a long time before the lamp is turned on or off
24:42
is turned on or off
24:42
is turned on or off and sending all the frames to the cloud
24:44
and sending all the frames to the cloud
24:44
and sending all the frames to the cloud requires a fast internet connection
24:49
24:49
Another approach would be running the model on the device itself. A common misconception is that you need a very powerful computer to run a model, but simple models like this can run on a Raspberry Pi.
25:03
However, training a model is very compute-heavy; it cannot be done on a Raspberry Pi and is highly recommended to do in the cloud.
25:12
In this scenario the video frames never leave the device and are processed locally, meaning the device could even function without a connection to the internet and act without any delay on the outcome of the model.
25:26
We could even add something that sends only the output of the model to the cloud, so we can create a trash panda defense system control center.
system control center both of these approaches have their own
25:38
both of these approaches have their own
25:38
both of these approaches have their own pros and cons
25:39
pros and cons
25:39
pros and cons let's have a look at a few of them if
25:41
let's have a look at a few of them if
25:41
let's have a look at a few of them if you look at iot in the cloud
25:43
you look at iot in the cloud
25:43
you look at iot in the cloud it is really good if you don't need a
25:45
it is really good if you don't need a
25:45
it is really good if you don't need a real-time action being performed on the
25:47
real-time action being performed on the
25:47
real-time action being performed on the device itself
25:49
device itself
25:49
device itself like for instance remote monitoring and
25:51
like for instance remote monitoring and
25:51
like for instance remote monitoring and management
25:53
management
25:53
management also in the cloud you have access to
25:55
also in the cloud you have access to
25:55
also in the cloud you have access to infinite compute
25:56
infinite compute
25:56
infinite compute and storage to train compute intensive
25:59
and storage to train compute intensive
25:59
and storage to train compute intensive ai models
26:01
ai models
26:01
ai models the biggest advantage of running iot on
26:03
the biggest advantage of running iot on
26:03
the biggest advantage of running iot on the edge
26:04
the edge
26:04
the edge is next to the low latency for real-time
26:06
is next to the low latency for real-time
26:06
is next to the low latency for real-time response
26:07
response
26:07
response is that you can pre-process the data on
26:09
is that you can pre-process the data on
26:09
is that you can pre-process the data on the device itself
26:11
the device itself
26:11
the device itself meaning that a video feed from your
26:13
meaning that a video feed from your
26:13
meaning that a video feed from your camera or
26:14
camera or
26:14
camera or any other data generated by the device
26:17
any other data generated by the device
26:17
any other data generated by the device never have to leave the device
26:19
never have to leave the device
26:19
never have to leave the device and this is a good thing if you have
26:21
and this is a good thing if you have
26:21
and this is a good thing if you have heavy security and privacy requirements
26:25
heavy security and privacy requirements
26:25
heavy security and privacy requirements the best scenario for our trespander
26:27
the best scenario for our trespander
26:27
the best scenario for our trespander defense system is
26:28
defense system is
26:28
defense system is that we are going to build test and
26:30
that we are going to build test and
26:30
that we are going to build test and manage our solution in the cloud
26:33
manage our solution in the cloud
26:33
manage our solution in the cloud and deploy the solution to our raspberry
26:35
and deploy the solution to our raspberry
26:35
and deploy the solution to our raspberry pi device to run locally
26:37
pi device to run locally
26:37
pi device to run locally this will give us the flexibility and
26:39
this will give us the flexibility and
26:39
this will give us the flexibility and power of the cloud to train the model
26:42
power of the cloud to train the model
26:42
power of the cloud to train the model the low latency of running the model on
26:44
the low latency of running the model on
26:44
the low latency of running the model on the device to scare the raccoon away
26:46
the device to scare the raccoon away
26:46
the device to scare the raccoon away fast
26:48
fast
26:48
fast because everything is processed locally
26:49
because everything is processed locally
26:49
because everything is processed locally on the device it is not taking up
26:52
on the device it is not taking up
26:52
on the device it is not taking up any bandwidth and the device could even
26:55
any bandwidth and the device could even
26:55
any bandwidth and the device could even be placed in a location where there is
26:56
be placed in a location where there is
26:56
be placed in a location where there is no internet available
27:01
Now that we have defined our application strategy, we can start building and deploying it. To build and deploy it we're going to take a look at Azure IoT Edge.
27:11
Azure IoT Edge is a fully managed service built on Azure IoT Hub. This service enables you to deploy your cloud workloads to run on an Internet of Things edge device via standard containers.
27:23
It works with Linux and Windows devices that support container engines, and the runtime is free and open source under the MIT license.
27:34
IoT Edge runs Docker-compatible containers. In these containers you can run your own business logic, and through the cloud interface you can manage and deploy the workloads to the device.
device iot adds uses modules for our trespasser
27:50
iot adds uses modules for our trespasser
27:50
iot adds uses modules for our trespasser defense system
27:52
defense system
27:52
defense system we would need three modules a camera
27:55
we would need three modules a camera
27:55
we would need three modules a camera module
27:55
module
27:55
module that is taking care of the connection
27:57
that is taking care of the connection
27:57
that is taking care of the connection with the camera and extracts the frames
27:59
with the camera and extracts the frames
27:59
with the camera and extracts the frames from the camera feed
28:01
from the camera feed
28:01
from the camera feed an ai module in this module we would run
28:03
an ai module in this module we would run
28:03
an ai module in this module we would run our machine learning model
28:05
our machine learning model
28:05
our machine learning model that has been trained in the cloud and
28:08
that has been trained in the cloud and
28:08
that has been trained in the cloud and last
28:08
last
28:08
last a module that will handle the alarm
28:11
a module that will handle the alarm
28:11
a module that will handle the alarm every module
28:12
every module
28:12
every module is a separate docker container that is
28:15
is a separate docker container that is
28:15
is a separate docker container that is stored in an azure container registry
28:18
stored in an azure container registry
28:18
stored in an azure container registry to deploy this module to the edge device
28:20
to deploy this module to the edge device
28:20
to deploy this module to the edge device we need to create a deployment manifest
28:23
we need to create a deployment manifest
28:23
we need to create a deployment manifest in this file we specify where the
28:25
in this file we specify where the
28:25
in this file we specify where the modules are located
28:27
modules are located
28:27
modules are located and how they should communicate with
28:28
and how they should communicate with
28:28
and how they should communicate with each other
28:31
each other
28:31
each other via iot hub we can deploy this manifest
28:34
via iot hub we can deploy this manifest
28:34
via iot hub we can deploy this manifest to our connected devices when the
28:36
to our connected devices when the
28:36
to our connected devices when the deployment is done
28:38
deployment is done
28:38
deployment is done the modules run and communicate locally
28:41
the modules run and communicate locally
28:41
the modules run and communicate locally on the iot edge device
28:43
on the iot edge device
28:44
on the iot edge device let's zoom in a little bit deeper and
28:45
let's zoom in a little bit deeper and
28:45
let's zoom in a little bit deeper and take a look at what is running on the
28:47
take a look at what is running on the
28:47
take a look at what is running on the iot edge device
28:49
iot edge device
28:49
iot edge device on the device the iot edge runtime is
28:52
on the device the iot edge runtime is
28:52
on the device the iot edge runtime is installed
28:53
installed
28:53
installed by default this comes with two modules
28:56
by default this comes with two modules
28:56
by default this comes with two modules the edge agent that takes care of the
28:58
the edge agent that takes care of the
28:58
the edge agent that takes care of the modules and deployment
29:00
modules and deployment
29:00
modules and deployment and at hub a local iot hub
29:03
and at hub a local iot hub
29:03
and at hub a local iot hub that enables our modules to communicate
29:05
that enables our modules to communicate
29:05
that enables our modules to communicate with each other
29:07
with each other
29:07
with each other we have our three modules that were
29:09
we have our three modules that were
29:09
we have our three modules that were installed
29:10
installed
29:10
installed through the iot edge deployment manifest
29:13
through the iot edge deployment manifest
29:13
through the iot edge deployment manifest the camera module connects the camera
29:15
the camera module connects the camera
29:15
the camera module connects the camera and sends the frames
29:17
and sends the frames
29:17
and sends the frames to the custom ai module the custom ai
29:20
to the custom ai module the custom ai
29:20
to the custom ai module the custom ai module
29:21
module
29:21
module puts a model score on the local iot edge
29:23
puts a model score on the local iot edge
29:23
puts a model score on the local iot edge hub
29:24
hub
29:24
hub the alarm module receives the module
29:27
the alarm module receives the module
29:27
the alarm module receives the module score from the edge hub
29:29
score from the edge hub
29:29
score from the edge hub in the lr module if the model score
29:31
in the lr module if the model score
29:31
in the lr module if the model score reaches a certain threshold
29:33
reaches a certain threshold
29:33
reaches a certain threshold the lamp is put on and a notification is
29:36
the lamp is put on and a notification is
29:36
the lamp is put on and a notification is put back on the edge hub
29:37
put back on the edge hub
29:37
put back on the edge hub which is sent to the iot app in azure
29:40
which is sent to the iot app in azure
29:40
which is sent to the iot app in azure for monitoring purposes
29:42
for monitoring purposes
29:42
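To make that last hop concrete, here is a minimal sketch of what such an alarm module could look like with the azure-iot-device SDK and RPi.GPIO. The input and output names, the message shape, the GPIO pin, and the threshold are all assumptions for illustration; the actual module lives in the repository shown later.

import json
from azure.iot.device import IoTHubModuleClient, Message
import RPi.GPIO as GPIO

LAMP_PIN = 18      # assumed GPIO pin wired to the lamp
THRESHOLD = 0.7    # assumed probability threshold

GPIO.setmode(GPIO.BCM)
GPIO.setup(LAMP_PIN, GPIO.OUT)

# Picks up the connection details the IoT Edge runtime injects into the container
client = IoTHubModuleClient.create_from_edge_environment()
client.connect()

while True:
    msg = client.receive_message_on_input("scores")   # blocks until a score arrives
    score = json.loads(msg.data)                      # e.g. {"tagName": "raccoon", "probability": 0.93}
    raccoon_seen = score["tagName"] == "raccoon" and score["probability"] >= THRESHOLD
    GPIO.output(LAMP_PIN, GPIO.HIGH if raccoon_seen else GPIO.LOW)
    if raccoon_seen:
        # the notification goes back onto the edge hub and from there up to Azure
        client.send_message_to_output(Message(json.dumps(score)), "alerts")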
for monitoring purposes now that we know what to build let's
29:45
now that we know what to build let's
29:45
now that we know what to build let's actually implement a model we have
29:46
actually implement a model we have
29:46
actually implement a model we have created using custom vision
29:49
created using custom vision
29:49
created using custom vision in an iot edge solution and deploy it
29:52
in an iot edge solution and deploy it
29:52
in an iot edge solution and deploy it to this raspberry pi device here you see
29:55
to this raspberry pi device here you see
29:55
to this raspberry pi device here you see a raspberry pi 4
29:56
a raspberry pi 4
29:56
a raspberry pi 4 with the latest raspbian installed
29:59
with the latest raspbian installed
29:59
with the latest raspbian installed it is connected to the power and i've
30:03
it is connected to the power and i've
30:03
it is connected to the power and i've connected it to the wi-fi
30:04
connected it to the wi-fi
30:04
connected it to the wi-fi and enabled ssh so i can connect to it
30:07
and enabled ssh so i can connect to it
30:07
and enabled ssh so i can connect to it from my computer
30:09
from my computer
30:09
from my computer i also connected a usb camera and
30:12
i also connected a usb camera and
30:12
i also connected a usb camera and connected some leds
30:14
connected some leds
30:14
connected some leds to the gpio pins on the raspberry board
30:20
To get started we first have to set up two things in Azure: we need to create an IoT Hub and an Azure Container Registry.
30:29
To create the IoT Hub, go to the Azure portal and create a new resource. Go to Internet of Things and select IoT Hub. Select your subscription, create a new resource group, select the region closest to you, and give your hub a name.
30:57
Click Review + create.
31:08
Next we need to create an Azure Container Registry; in this registry we're going to store our IoT Edge modules.
31:16
To create the Azure Container Registry, go to Containers and click on Container Registry. You can deploy it in the same resource group as the IoT Hub. Give it a name, select the right region, and the Standard SKU is enough for what we are going to do. Click Review + create.
31:45
When the registry is created we have to enable login with a username and password. Open the resource, click Access keys, and enable the Admin user.
enable here admin user the first step is now completed we have
32:03
the first step is now completed we have
32:03
the first step is now completed we have created an
32:03
created an
32:04
created an iot hub and an azure container registry
32:07
iot hub and an azure container registry
32:07
iot hub and an azure container registry we have done this now by using the azure
32:10
we have done this now by using the azure
32:10
we have done this now by using the azure portal
32:11
portal
32:11
portal but you can also create these resources
32:13
but you can also create these resources
32:13
but you can also create these resources using the hdcli
32:15
using the hdcli
32:15
using the hdcli or by creating an arm template which you
32:18
or by creating an arm template which you
32:18
or by creating an arm template which you can run from azure devops
32:20
can run from azure devops
32:20
can run from azure devops now that we have our resources set up in
32:22
now that we have our resources set up in
32:22
now that we have our resources set up in azure it is time to connect the
32:24
azure it is time to connect the
32:24
azure it is time to connect the raspberry pi
32:25
raspberry pi
32:26
raspberry pi to the iot hub to do this
32:29
to the iot hub to do this
32:29
to the iot hub to do this connect to your device over ssh
32:33
connect to your device over ssh
32:33
connect to your device over ssh the azure iot edge runtime is what turns
32:36
the azure iot edge runtime is what turns
32:36
the azure iot edge runtime is what turns the device
32:37
the device
32:37
the device into an iot edge device first
32:40
into an iot edge device first
32:40
into an iot edge device first we need to register a microsoft key and
32:42
we need to register a microsoft key and
32:42
we need to register a microsoft key and a software repository feed
32:46
Copy the generated list and install the public key.
32:53
Next we need to install a container runtime. Azure IoT Edge relies on an OCI-compatible container runtime. For production scenarios we recommend that you use the Moby-based engine; the Moby engine is the only container engine officially supported with Azure IoT Edge. Docker container images are compatible with the Moby runtime.
33:18
First we update our package list, then install the Moby engine and the Moby command line interface. The CLI is useful for development but optional for production deployments.
33:37
Now we can install the Azure IoT Edge security daemon. The IoT Edge security daemon provides and maintains security standards on the IoT Edge device. The daemon starts on every boot and bootstraps the device by starting the rest of the IoT Edge runtime.
33:57
We first update the package list on our device, check which versions of IoT Edge are available, and install the most recent version of the security daemon.
34:11
The final thing we need to do on the device is to add the connection string to the configuration file. To get the connection string we have to go back to the Azure portal and open our IoT Hub.
34:24
In the left menu we navigate to the section Automatic Device Management and open IoT Edge. Here we click on "Add an IoT Edge device", give it a name, and click Save.
34:47
Click on the device and copy the primary or secondary connection string.
to the device to configure the security daemon the
34:59
to configure the security daemon the
34:59
to configure the security daemon the daemon can be configured using the
35:01
daemon can be configured using the
35:01
daemon can be configured using the configuration file at
35:02
configuration file at
35:02
configuration file at slash atc iot edge config.jamo
35:07
slash atc iot edge config.jamo
35:07
slash atc iot edge config.jamo the file is right protected by default
35:09
the file is right protected by default
35:09
the file is right protected by default you might need elevated permissions to
35:11
you might need elevated permissions to
35:11
you might need elevated permissions to edit it
35:13
edit it
35:13
edit it open the configuration file find the
35:16
open the configuration file find the
35:16
open the configuration file find the provisioning configurations of the file
35:18
provisioning configurations of the file
35:18
provisioning configurations of the file and uncomment the manual provisioning
35:20
and uncomment the manual provisioning
35:20
and uncomment the manual provisioning configurations section
35:22
configurations section
35:22
configurations section update the value of device connection
35:24
update the value of device connection
35:24
update the value of device connection string with the connection string from
35:26
string with the connection string from
35:26
string with the connection string from the iot edge device
35:29
the iot edge device
35:29
the iot edge device make sure any other provisioning
35:30
make sure any other provisioning
35:30
make sure any other provisioning sections are commented out
35:35
After entering the provisioning information in the configuration file, restart the daemon. All the infrastructure is now ready to run our trash panda defense system modules.
35:46
I've already created all the modules, so let's clone the repository and take a look at what is inside.
35:55
In this repository there are a few important files. First there is deployment.template.json: a deployment manifest is a JSON document that describes which modules to deploy and how the data flows between the modules.
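For illustration, the routes section of such a manifest wires the module endpoints together with FROM ... INTO statements. A trimmed sketch, with module and endpoint names assumed to match the three modules described below:

"routes": {
  "cameraToClassifier": "FROM /messages/modules/cameraCapture/outputs/* INTO BrokeredEndpoint(\"/modules/imageClassifier/inputs/frames\")",
  "classifierToAlarm": "FROM /messages/modules/imageClassifier/outputs/* INTO BrokeredEndpoint(\"/modules/simpleLed/inputs/scores\")",
  "alarmToCloud": "FROM /messages/modules/simpleLed/outputs/* INTO $upstream"
}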
modules there is a folder called modules in this
36:15
there is a folder called modules in this
36:15
there is a folder called modules in this folder
36:16
folder
36:16
folder our three modules are located
36:19
our three modules are located
36:19
our three modules are located the camera capture module takes care of
36:21
the camera capture module takes care of
36:21
the camera capture module takes care of the camera
36:23
the camera
36:23
the camera image classifier service is running the
36:26
image classifier service is running the
36:26
image classifier service is running the machine learning model
36:29
machine learning model
36:29
machine learning model and simple led turns the led on
36:32
and simple led turns the led on
36:32
and simple led turns the led on and off let's take a look at the simple
36:35
and off let's take a look at the simple
36:35
and off let's take a look at the simple led module
36:37
led module
36:37
led module every module has a docker file
36:40
every module has a docker file
36:40
every module has a docker file this file contains the configuration of
36:42
this file contains the configuration of
36:42
this file contains the configuration of the container
36:44
the container
36:44
the container like which base image to use and which
36:46
like which base image to use and which
36:46
like which base image to use and which packages to install
36:49
packages to install
36:49
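A Dockerfile for a Python module targeting the Raspberry Pi could look roughly like this; the base image and file names are illustrative, not the exact contents of this repository:

FROM arm32v7/python:3.7-slim
WORKDIR /app
COPY requirements.txt ./
RUN pip install -r requirements.txt    # packages the module needs
COPY . .
CMD [ "python", "-u", "./main.py" ]    # -u: unbuffered output, so logs show up immediately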
packages to install then there is the module json
36:53
then there is the module json
36:53
then there is the module json this file holds the iot edge module
36:55
this file holds the iot edge module
36:55
this file holds the iot edge module configuration
36:57
configuration
36:57
configuration in this file we see variables like the
37:00
in this file we see variables like the
37:00
in this file we see variables like the container registry address
37:03
container registry address
37:03
container registry address if you build the iot edge solution this
37:05
if you build the iot edge solution this
37:05
if you build the iot edge solution this is the registry
37:06
is the registry
37:06
is the registry where it restore your containers
37:09
where it restore your containers
37:10
where it restore your containers these variables we can set in the dot m
37:12
these variables we can set in the dot m
37:12
these variables we can set in the dot m file
37:13
file
37:13
file and we can retrieve these details from
37:15
and we can retrieve these details from
37:15
and we can retrieve these details from the container registry
37:16
the container registry
37:16
the container registry to the portal let's open the azure
37:19
to the portal let's open the azure
37:19
to the portal let's open the azure portal
37:20
portal
37:20
portal and navigate to the container registry
37:23
and navigate to the container registry
37:23
and navigate to the container registry click on access keys
37:27
and copy the login server
37:35
the username
37:40
and the password
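The .env file then holds those three values. A sketch with placeholder variable names (check the names your solution template actually expects):

CONTAINER_REGISTRY_ADDRESS=myregistry.azurecr.io
CONTAINER_REGISTRY_USERNAME=myregistry
CONTAINER_REGISTRY_PASSWORD=<password from the Access keys blade>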
37:45
We are almost ready to build the solution, but first we need to get the model we have trained from the Custom Vision service.
37:54
Let's open our project and go to the Performance tab. There we click on Export and choose the Dockerfile option, then choose the ARM (Raspberry Pi) variant. Click Download, open the zip file, and copy the labels and model file to the image classifier app directory.
38:37
We also need to install the IoT Edge extension in Visual Studio Code. You can find this extension under the menu item Extensions; search for "IoT Edge".
38:52
When you have installed this extension, you should see "Azure IoT Hub" at the bottom of the Explorer. Here you can connect Visual Studio Code to your IoT Hub and view your devices: click on the three dots and choose "Select IoT Hub" to connect.
and choose select iot hub to connect because we have the iot edge extension
39:16
because we have the iot edge extension
39:16
because we have the iot edge extension installed
39:17
installed
39:17
installed we can do a right click on the
39:18
we can do a right click on the
39:18
we can do a right click on the deployment template and click build and
39:21
deployment template and click build and
39:21
deployment template and click build and push
39:21
push
39:21
push iot edge solution
39:24
iot edge solution
39:24
iot edge solution this command will build the docker
39:26
this command will build the docker
39:26
this command will build the docker containers for the modules
39:28
containers for the modules
39:28
containers for the modules and will push them to our repository
39:31
and will push them to our repository
39:31
and will push them to our repository this can take a while
39:34
this can take a while
39:34
this can take a while when all the containers are built and
39:36
when all the containers are built and
39:36
when all the containers are built and pushed we can go to the acr
39:38
pushed we can go to the acr
39:38
pushed we can go to the acr and view the repositories there
39:41
and view the repositories there
39:41
and view the repositories there to deploy these modules to our iot edge
39:44
to deploy these modules to our iot edge
39:44
to deploy these modules to our iot edge device
39:45
device
39:45
device we have to generate an iot edge
39:47
we have to generate an iot edge
39:47
we have to generate an iot edge deployment manifest
39:50
deployment manifest
39:50
deployment manifest this can be done by right-clicking on
39:51
this can be done by right-clicking on
39:52
this can be done by right-clicking on the deployment template
39:53
the deployment template
39:53
the deployment template and select generate deployment manifest
39:58
and select generate deployment manifest
39:58
and select generate deployment manifest the deployment manifest will appear in
40:00
the deployment manifest will appear in
40:00
the deployment manifest will appear in the config folder
40:03
the config folder
40:03
the config folder to deploy this manifest you can right
40:06
to deploy this manifest you can right
40:06
to deploy this manifest you can right click on the manifest
40:08
click on the manifest
40:08
click on the manifest and select create deployment for single
40:10
and select create deployment for single
40:10
and select create deployment for single device
40:12
device
40:12
device in the top official studio code you can
40:15
in the top official studio code you can
40:15
in the top official studio code you can select your iot edge device now
40:19
select your iot edge device now
40:19
select your iot edge device now select the device and wait for the
40:21
select the device and wait for the
40:21
select the device and wait for the deployment
40:22
deployment
40:22
deployment to finish
40:25
to finish
40:25
to finish there are a few ways how you can view if
40:28
there are a few ways how you can view if
40:28
there are a few ways how you can view if the deployment is successful
40:30
the deployment is successful
40:30
the deployment is successful in visual studio code we can use the
40:32
in visual studio code we can use the
40:32
in visual studio code we can use the extension to see which modules are
40:34
extension to see which modules are
40:34
extension to see which modules are running on the device
40:40
or we can see it on the device itself by
40:42
or we can see it on the device itself by
40:42
or we can see it on the device itself by using the iot edge command line command
40:45
using the iot edge command line command
40:45
using the iot edge command line command i have to add list
40:50
here you can see all our modules are
40:53
here you can see all our modules are
40:53
here you can see all our modules are up and running now it is time to see it
40:57
up and running now it is time to see it
40:57
up and running now it is time to see it in action
40:58
in action
40:58
in action let's release trash pen a bit to the
41:01
let's release trash pen a bit to the
41:01
let's release trash pen a bit to the stage
41:02
stage
41:02
stage and see if we can scare him away with
41:04
and see if we can scare him away with
41:04
and see if we can scare him away with our led lights
41:07
our led lights
41:07
our led lights that worked no more trust pandas messing
41:10
that worked no more trust pandas messing
41:10
that worked no more trust pandas messing up my trash
41:12
up my trash
41:12
up my trash this is the end of the iot edge demo in
41:15
this is the end of the iot edge demo in
41:15
this is the end of the iot edge demo in this demo we have seen
41:17
this demo we have seen
41:17
this demo we have seen how to create an azure iot hub and azure
41:20
how to create an azure iot hub and azure
41:20
how to create an azure iot hub and azure container registry
41:21
container registry
41:21
container registry in azure how to install iot edge
41:25
in azure how to install iot edge
41:25
in azure how to install iot edge on an edge device learned about the
41:28
on an edge device learned about the
41:28
on an edge device learned about the project structure
41:29
project structure
41:29
project structure of an iot edge solution build the iot
41:33
of an iot edge solution build the iot
41:33
of an iot edge solution build the iot edge solution
41:34
edge solution
41:34
edge solution and created a deployment manifest
41:37
and created a deployment manifest
41:37
and created a deployment manifest deployed the solution to a single iot
41:40
deployed the solution to a single iot
41:40
deployed the solution to a single iot edge device
41:42
edge device
41:42
edge device and saw that the lights scared away our
41:45
and saw that the lights scared away our
41:45
and saw that the lights scared away our trash band a bit
41:46
trash band a bit
41:46
trash band a bit in this session you got peak in how you
41:48
in this session you got peak in how you
41:48
in this session you got peak in how you can use azure iot edge
41:51
can use azure iot edge
41:51
can use azure iot edge and azure ai to create a trashpanda
41:53
and azure ai to create a trashpanda
41:53
and azure ai to create a trashpanda defense system
41:55
defense system
41:55
defense system we covered what machine learning is and
41:57
we covered what machine learning is and
41:57
we covered what machine learning is and which tools there are available in azure
42:00
which tools there are available in azure
42:00
which tools there are available in azure we talked about the difference between
42:02
we talked about the difference between
42:02
we talked about the difference between iot and the cloud
42:03
iot and the cloud
42:04
iot and the cloud and iot on the edge and showed you how
42:07
and iot on the edge and showed you how
42:07
and iot on the edge and showed you how to use azure iot edge to manage your
42:10
to use azure iot edge to manage your
42:10
to use azure iot edge to manage your solution
42:11
solution
42:12
solution for links to the relevant documentation
42:14
for links to the relevant documentation
42:14
for links to the relevant documentation resources
42:15
resources
42:15
resources demos used in this presentation and a
42:18
demos used in this presentation and a
42:18
demos used in this presentation and a follow-up presentation
42:19
follow-up presentation
42:19
follow-up presentation check out ak.oms slash iot 30
42:23
check out ak.oms slash iot 30
42:23
check out ak.oms slash iot 30 slash resources if you are interested in
42:26
slash resources if you are interested in
42:26
slash resources if you are interested in using this presentation
42:27
using this presentation
42:27
using this presentation and or this video recording for an event
42:30
and or this video recording for an event
42:30
and or this video recording for an event of your own
42:31
of your own
42:31
of your own the materials can be found on github at
42:33
the materials can be found on github at
42:33
the materials can be found on github at akms
42:35
akms
42:35
akms slash iot 30. if you enjoyed this
42:38
slash iot 30. if you enjoyed this
42:38
slash iot 30. if you enjoyed this presentation and are interested in other
42:40
presentation and are interested in other
42:40
presentation and are interested in other topics covered in the iot learning pad
42:43
topics covered in the iot learning pad
42:43
topics covered in the iot learning pad you can find them all at ak dot mess
42:46
you can find them all at ak dot mess
42:46
you can find them all at ak dot mess says iot lp we have covered quite a few
42:50
says iot lp we have covered quite a few
42:50
says iot lp we have covered quite a few topics in this session
42:51
topics in this session
42:52
topics in this session and would like to share that we have
42:53
and would like to share that we have
42:53
and would like to share that we have curated the collection of modules on a
42:55
curated the collection of modules on a
42:55
curated the collection of modules on a microsoft learn platform
42:57
microsoft learn platform
42:57
microsoft learn platform which pertain to the topics in this
42:59
which pertain to the topics in this
42:59
which pertain to the topics in this session
43:01
session
43:01
session this will allow you to interactively
43:03
this will allow you to interactively
43:03
this will allow you to interactively learn how to build intelligent edge
43:05
learn how to build intelligent edge
43:05
learn how to build intelligent edge applications
43:06
applications
43:06
applications and how to manage them using azure iot
43:09
and how to manage them using azure iot
43:09
and how to manage them using azure iot edge
43:10
edge
43:10
edge there are also many modules on how to
43:12
there are also many modules on how to
43:12
there are also many modules on how to build ai models
43:14
build ai models
43:14
build ai models using tools like azure custom vision and
43:16
using tools like azure custom vision and
43:16
using tools like azure custom vision and azure machine learning
43:18
azure machine learning
43:18
azure machine learning this presentation and the associated
43:21
this presentation and the associated
43:21
this presentation and the associated learn modules
43:22
learn modules
43:22
learn modules can help guide you on a path to official
43:24
can help guide you on a path to official
43:24
can help guide you on a path to official certification
43:25
certification
43:25
certification if you are interested in obtaining
43:27
if you are interested in obtaining
43:27
if you are interested in obtaining accreditation that can help you stand as
43:30
accreditation that can help you stand as
43:30
accreditation that can help you stand as a certified microsoft azure iot
43:32
a certified microsoft azure iot
43:32
a certified microsoft azure iot developer
43:33
developer
43:33
developer we recommend checking out the az 220
43:36
we recommend checking out the az 220
43:36
we recommend checking out the az 220 certification
43:38
certification
43:38
certification you can find details on the topics
43:40
you can find details on the topics
43:40
you can find details on the topics covered and schedule an exam today
43:42
covered and schedule an exam today
43:42
covered and schedule an exam today at ak.mess iot 30
43:46
at ak.mess iot 30
43:46
at ak.mess iot 30 certification for more interactive
43:49
certification for more interactive
43:49
certification for more interactive learning content
43:50
learning content
43:50
learning content check out microsoft learn at
43:51
check out microsoft learn at
43:51
check out microsoft learn at microsoft.com learn
43:53
microsoft.com learn
43:53
microsoft.com learn to begin your own custom learning pad
43:56
to begin your own custom learning pad
43:56
to begin your own custom learning pad with resources on the latest topics and
43:58
with resources on the latest topics and
43:58
with resources on the latest topics and trends in technology
43:59
trends in technology
43:59
trends in technology thank you again for attending this
44:02
thank you again for attending this
44:02
thank you again for attending this [Music]
44:06
[Music]
44:06
[Music] session
44:09
session
44:09
session [Music]
44:11
[Music]
44:12
[Music] you


