Cloud Microservices to Serve the Next Billion || Code Quality & Performance Virtual Conference
Nov 9, 2023
Cloud Microservices to Serve the Next Billion: The food wastage in India is 70 tons per year, and there is mismanagement at several layers. Approximately 20-30% of the wastage happens in the last mile, between wholesale traders and retail mom-and-pop stores. Is there something we can do about food wastage?
This was the problem statement I attempted to solve as the first engineering hire at a startup. Our customers were 12.8 million retail owners who deal in FMCG (fast-moving consumer goods, such as food grains, toothpaste, etc.). The goal was to develop a platform for retail traders (mom-and-pop shop owners / small and medium business owners) to buy FMCG products from wholesale traders using an Android app.
We were attacking a deeply entrenched business practice to help solve a societal goal. For a section of the population that is not very well versed with smartphones and technology, the user experience had to be designed from the ground up to be multilingual, fungible, unstructured, and relevant. In this talk, I cover how we went about iterating the solution from a simple SMS-based system to a full-fledged app backed by microservices. Having a microservice architecture gave us the agility to experiment and iterate quickly, and we were able to push out changes much faster and help solve wastage problems even sooner.
I will discuss the several problems we faced in this segment with regard to unstructured data, and how our data models had to adapt. We used a number of cloud-native services such as DynamoDB, Redshift, Kinesis, Lambdas, etc. to develop this marketplace, and I will discuss how these services came together in a cogent form.
After having worked in bigger companies on software projects that scale to millions of devices, this was a unique challenge for me, and something I am very proud of. I would like to share my experience in building empathetic software for the masses.
Conference Website: https://globaltechconferences.com/event/code-quality-performance-virtual-conference-2021/
C# Corner - Community of Software and Data Developers: https://www.c-sharpcorner.com
C# Live - Dev Streaming Destination: https://csharp.live
#Cloud #Microservices #Azure #Codequality #virtualconference
0:00
Hi, everyone. Today, I'm going to talk about how you would build cloud microservices for the next billion users
0:10
And the agenda of my talk would be as follows. I'll first start off with an introduction. Who am I? What am I doing? How did I reach here
0:20
Problem space definition. What exactly are we trying to solve here by building microservices
0:24
requirements of the problem space and ideating a solution. How do you get towards iteratively
0:32
designing using agile techniques to reach a solution? And finally, some results and takeaways
0:40
as an engineer. And here is a picture that I have of a typical retail store in India
0:47
As you can see here, there are a bunch of food items, perishable items and non-perishable items
0:54
that are present here. And the cities and towns of India are dotted by
1:01
millions of these stores. And these are small and medium businesses, mom and pop shops that are
1:07
run by common people for common people. So we found an opportunity to disrupt the space. And we
1:13
felt that this would introduce technology to a billion people. So
1:20
that's why the name of the talk. Let me first begin with an introduction to myself. My name is
1:26
Tejas Chopra. I am a senior software engineer at Netflix and I'll talk about my journey. So I
1:32
started off with a master's degree at Carnegie Mellon University with a specialization in computer
1:37
systems. From the year 2012 to 2016, I worked at Apple and Samsung where I was working on
1:43
distributed systems. In the year 2016, I started working at a startup where I was the first hire
1:51
And what this startup was doing was it was building a marketplace for FMCG. And I'll get
1:56
into what that is. You can think of it as just like Amazon, but Amazon is B2C. This one is B2B
2:03
where the buyers are retail shop owners and the sellers are wholesale traders
2:07
And my goal was to build software that can enable them to do their business in a better way, thereby enriching their lives and the lives of the customers that they serve
2:22
From 2017 onwards, I started working at a startup called Datrium and then at Box, where my job was distributed file systems, storage, replication, cloud storage
2:33
And 2020 onwards, I joined Netflix. So I'm a part of the data storage platform team here at Netflix
2:39
My job is to build and architect storage infrastructure for Netflix studios and the Netflix streaming platform to store and efficiently retrieve petabyte- to exabyte-scale data that is produced at Netflix
2:54
Coming to the problem space definition. Like I said, there's a category of goods called fast moving consumer goods
3:03
And this is an image that captures it quite well. These are packaged goods, beverages, toiletries, cosmetics, food, other perishables
3:12
And they are typically found in the U.S. in big stores such as Macy's and Target
3:17
But in India, they are found also in a bunch of retail stores that are run by small and medium businesses
3:25
The market for these is around $50 billion in India. And the business model, like I said, is there are wholesale traders on the one end
3:34
and the retail shop owners on the other end. And in India, there are 0.33 million of these wholesale traders
3:40
and 12 million of these retail shop owners. So that's the business model that they have
3:49
The problems in this domain are that there is a lot of wastage of food
3:53
and the wastage is actually 70 tons per year for perishable items
3:59
And there are several reasons why there is this wastage. Firstly, the last mile delivery is very untimely
4:05
So the last mile is between the wholesale traders and the retail shop owners. And the delivery, the shipment, is generally handled by the wholesale traders
4:12
And that is very untimely, very fragmented, which results in these consumable items perishing
4:18
So that's caused a lot of wastage. There is no feedback mechanism
4:22
So if a food item goes bad or is close to getting bad, there is no way for the customer to inform the store from which
4:31
he or she has bought this, that, you know, this food is bad. And there's no way to relay that
4:35
information upstream. There is no catalog maintained. So generally, the way that this
4:44
domain works is they use pen and paper to just keep a track of, you know, what is the inventory
4:49
in their store. So there is no technical way of managing the catalog. And finally
4:54
There are deeply entrenched relationships. So typically, a retail store owner has relationships
5:02
with one or two, just a couple of wholesale traders, and they only deal with them. So that
5:08
way, they do not have choice to explore other wholesale traders. And these are some of the
5:13
reasons why there has been a lot of wastage. And we thought, let us try to use technology to
5:18
disrupt this space. This is a typical wholesale trader's store and, as you can see, it is pretty
5:26
fragmented. Very little technology has touched this sector. So we thought, what if we could
5:33
prevent this food loss or make a dent into it? Create a marketplace for FMCG, which does not
5:39
exist today for small and medium enterprises in India. The buyers would be the retail shop owners
5:46
and the sellers would be wholesale traders. What it does is it gives buyers a choice and sellers a platform
5:52
Choice is good. You can explore offers from different sellers. You will also have the ability
6:00
to provide feedback back to the wholesale traders from whom you're buying this
6:07
stuff. It also gives you the ability to search and seek offers. So typically, when you open up Amazon
6:15
or any other app, you can search for products, look at different offers, create a cart with a
6:22
bunch of stuff, and then get the amortized cost of shipment. So that's what we wanted to bring
6:28
to this sector. And finally, we wanted to build empathy for customers. And I'll
6:34
expand this point a bit in the next slide. But what we wanted to do is we did not want to just
6:39
take the model that worked in different sectors and just apply it blindly here
6:43
One of our learnings was that the customer had to be at the forefront. So you need to
6:50
recognize what are some of the challenges that they face and try to solve them. And that is a
6:57
good segue into what were some of the challenges that we saw initially. First and foremost, in this
7:04
sector, the people that we were trying to solve the problem for had never used a smartphone
7:08
So to explain to them what an application is, what a smartphone is, how do they interact
7:14
with an app was a challenge and something that we actually did on our own
7:18
So we provided them smartphones and we taught them how to use a smartphone, which was a very
7:23
enriching experience as an engineer. They care about battery life. Generally, the retail stores work on extremely thin margins. So they want to save as much electricity and other resources as they can
7:38
So for them, any product that consumes less battery is a win. And so they really focused
7:44
on having a good battery life for whatever app they wanted to use. Thirdly, generally
7:52
you see a product like an iPhone or a Samsung phone, they have data written in English about
7:58
the features of the product, the attributes. But a lot of our customers were not very well-versed
8:03
with English language. So we had to localize the product images for consumption by these people
8:10
And that was another challenge to do. The concept of a cart, the concept of order history or tracking
8:17
of orders was new to them. So we had to teach them what these things are and how they work
8:23
And finally, product images. So typically, when we look at a product page in
8:32
these different apps, we read about what the product is. And generally, we do not focus too
8:38
much on the images beyond a point. But our customer base actually just focused on the image
8:43
For them, it's easier to reorder the same stuff that looked similar to what they had, and they
8:49
don't go about reading what exactly are the features of the product. So product images had
8:55
to be much bigger than the text that they had. So these were some of our learnings from talking to a
9:00
bunch of customers, and whatever solution we built had to take into account these learnings and try
9:06
to solve them for our base. That actually led to defining requirements: what exactly should
9:12
our app or platform do? The table stakes for a marketplace are product search. You should have
9:18
the ability to search and browse products that have different images and different variants of
9:25
the products. Have an order management system that gives you an ability to add items to a cart
9:32
track your orders, and look at your history. Authentication when you log into the app
9:37
whether it is OTP or some other way of authenticating a user. Offers, which are associated
9:43
with bulk orders, and they are provided by sellers. So that is, again, a feature that you
9:48
will generally see in Amazons of the world. And finally, a way for the sellers, which is the other
9:54
side of the spectrum, to ingest their products and offers into our platform. So these are some
9:59
things that we had to build for it to even take off. And some of the initial design choices that
10:06
we took were: we started with cloud, because cloud was the easiest way to bootstrap any idea. It allows
10:13
you to have elasticity, scalability, and security, and manageability is handled by them. So it's
10:20
a very low barrier to entry if you want to build systems. We relied on DynamoDB. DynamoDB
10:27
is a NoSQL database, and in the problem domain that we were trying to
10:33
solve, I did not know the structure of the data there, because products can have so many attributes
10:39
and the list is endless for different types of products in this domain. So I started by
10:45
creating tables in DynamoDB. I used Kinesis, which is an event-based producer-consumer model
10:53
where on the one end are producers that produce events and on the other end are consumers that
11:00
consume those events. It's like Kafka, but it is managed by AWS. And it is leveraged for
11:06
asynchronous updates and other async workflows. We used Redshift. Redshift is a data warehouse
11:13
where you can track different user clicks or product searches or browsing. And that data can
11:22
form insights that you can build businesses on top of. And you can provide a view into
11:30
let's say a consumer is about to buy more chocolates because it's Christmas time. So
11:35
that is the kind of insights that you can get from a data warehouse, and that will help our
11:39
sellers to get inventory beforehand. Given that we've chosen this system and a set of requirements
11:48
we have to now start working towards a solution. The simplest way is to take a long time
11:54
and to build the perfect thing, but there is no such thing as perfect. And perfect is the enemy of good, as they say sometimes
12:01
So the right way to do it is to use agile techniques and design iteratively and seek feedback
12:07
That was my motivation because I was so new to this field. So I wanted to see what works for customers and what doesn't
12:15
So I started off with a pilot. And the pilot was humorously simple. It was an SMS-based solution. I would have a Lambda that sends an SMS every day with a list of all the products
12:28
and it would be a long SMS. And we would send it to the buyers
12:38
which are retail shop owners. And they would respond by scrolling on the SMS
12:44
and then saying that they wanted a particular product and how much quantity of the product they wanted
12:48
So the advantages of this approach were it got us to bootstrap in AWS
12:53
by using Lambda and DynamoDB. It gave our buyers the ability to work on feature phones without having to buy smartphones right away
13:04
And it allowed us to scale up to 10 stores, but not beyond that
13:10
So it was a solution that was a pilot just to see, you know, if such a thing can work in this market
13:16
The backend would have sellers uploading product information in DynamoDB. And on the other hand, the Lambda would send SMSs
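To make this concrete, here is a minimal sketch (not the actual pilot code) of what that daily Lambda could look like in Java, assuming the AWS SDK, a hypothetical "products" DynamoDB table, and SNS as the SMS channel; the attribute names and phone numbers are placeholders for illustration:

// A minimal sketch of the pilot's daily Lambda: scan a hypothetical "products"
// DynamoDB table and send the catalog as one long SMS per buyer.
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.ScanRequest;
import software.amazon.awssdk.services.sns.SnsClient;
import software.amazon.awssdk.services.sns.model.PublishRequest;
import java.util.List;
import java.util.stream.Collectors;

public class DailyCatalogSms implements RequestHandler<Object, String> {
    private final DynamoDbClient dynamo = DynamoDbClient.create();
    private final SnsClient sns = SnsClient.create();

    @Override
    public String handleRequest(Object event, Context context) {
        // Read every product (the pilot catalog was small enough to scan).
        String catalog = dynamo.scan(ScanRequest.builder().tableName("products").build())
                .items().stream()
                .map(item -> item.get("name").s() + " - Rs." + item.get("price").n())
                .collect(Collectors.joining("\n"));

        // Send one long SMS per buyer; buyer numbers are hard-coded here purely for illustration.
        List<String> buyers = List.of("+91XXXXXXXXXX");
        for (String phone : buyers) {
            sns.publish(PublishRequest.builder()
                    .phoneNumber(phone)
                    .message("Today's products:\n" + catalog)
                    .build());
        }
        return "sent";
    }
}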
13:25
What was my conclusion from this? First of all, Lambda works great for timed delivery of messages
13:32
Human vetting is painful, and it has to be replaced by programs
13:36
And this approach is very rigid. It has no offers, no real-time quantity updates, no searchability, browsability
13:44
and it's just scroll-based. The person has to scroll to find which product they want to buy
13:49
And even with this broken sort of a solution, we could see a surge in orders
13:55
and we were doing about $500 revenue per day. So once we were fine with the pilot
14:02
we thought let's start developing services. And the first stage was to split the seller portal APIs
14:11
from the users, the app APIs. And we did that by having a monolithic service
14:17
for the seller portal and a Lambda for the app. So whenever an order was placed by the user
14:23
a Lambda would get invoked, which would actually deal with the order. And the sellers would ingest the product information
14:32
through this service into DynamoDB. And finally, the Lambda that is invoked on an order placement
14:41
would interact with DynamoDB and do real-time quantity updates. So this worked very well beyond 10 customers as well
14:50
And there was no human vetting required for this. It was automated, the flow
14:55
But there were challenges even with this approach. For example, it works with single orders. There was very minimal state management and there was no concept of a cart here
15:05
So people just had one item that they would order. Also, let's say that the seller changed the image of a product
15:14
In that case, it was very difficult because we did not have any async workflows in place
15:20
It was difficult to reflect it on the consumer's app. And one learning we had was AWS Lambda is extremely slow to warm up
15:30
So the orders would take multiple seconds to even be placed. It did work, but it was not the greatest of user experiences
15:38
So we went to the second part. We replaced Lambda by a monolithic service
15:43
This service had multiple database tables that it was interacting with, such as products, order management system, notification, authentication
15:54
And thus there was a lot of business logic inside this monolithic service
16:00
The framework that I used to build this is Dropwizard. And I will get into more details
16:07
It did not have any browse or search capability. It was still scrolling down to find the product
16:13
And the one thing that I did introduce was a swipe-based order. So the user would look at the
16:20
products, enter the quantity that they wanted, and just swipe, and it would place an order on the
16:25
back end. And it did result in a lot of false positives, but it was a simpler experience for our
16:31
customers. Also, we introduced localization, which means that we now have product images that had
16:38
content, not just in English, but in other languages as well. One thing to note is that
16:44
this does not have any relationship models. Let me get into a bit more details on what I mean
16:49
When you look at a Samsung phone today, you know that the brand name is Samsung or Galaxy
16:56
The manufacturer is Samsung. The product line and product verticals are very well defined
17:01
So product vertical is that it's a smartphone and the product line, it's like it's Galaxy
17:06
series five or something. So these are different attributes of a product. And you can search on
17:12
your app today by saying, give me all products that are created by Samsung or give me all products
17:20
in the Galaxy brand. So that means that your searchability and browsability is based on these
17:25
attributes and you want a collection of products that contain these attributes. So that means that
17:29
there is a need to map these relationships between products, manufacturers, brands, product vertical
17:36
et cetera. And that did not exist. So that is something that we had to build to enable
17:43
browse and searchability. And once we realized the pitfalls of monolithic services, we went
17:50
the microservice way. The problems with monolithic services are that
17:56
they're difficult to scale, difficult to modify and test and release. One single service is dealing
18:02
with multiple database tables, so there's a lot of mixing of business logic, there is no clear
18:07
separation of concerns. And monolithic services generally will only scale up to a point. There
18:14
will always be a need to split it into multiple microservices. And microservices, again, there is
18:21
something called the curse of too many microservices. So you must be very careful that you do not have
18:26
too few or too many, because I've seen people sometimes create tiny microservices, each
18:33
one talking to a separate table, and what that results in is more latency when it comes to
18:40
cross-service interactions, and you'll have to make a network hop which you can avoid if multiple
18:46
similar functionalities can coexist in a single microservice. Generally, the rule of thumb
18:53
is if there are two or more functionalities, such as cart and order management, that live and die together and scale together
18:59
then I try to put them in a single microservice. So we introduced microservices
19:06
That was the first step that we did. The second thing that we did is we did not end the flow at DynamoDB
19:13
We wanted to have asynchronous workflows that can update either our insights or some other metrics. So we introduced Amazon Kinesis
19:26
Amazon Kinesis is like Kafka that is managed by AWS. Any update to a DynamoDB table can be
19:33
captured by Kinesis and you can write connectors and indexers that can consume these updates
19:39
So as you can see here, we have three different indexers here. And the catalog indexer is when
19:45
the seller updates a quantity or updates an offer to a product, the catalog indexer will read that
19:51
data and then it will finally update either the S3 bucket or the DynamoDB table again
19:56
to reflect that quantity on the app. A Redshift indexer is used to track all the metrics that
20:03
we have captured from the user and put them into Redshift for building insights into the data. And the
20:09
Elasticsearch indexer, so we used Elasticsearch for searching and this indexer's job was to
20:15
whenever a new product was ingested, it'll figure out the different attributes
20:20
such as manufacturer, brand, product vertical, product line, and then index it appropriately
20:26
in Elasticsearch clusters so that the user can get browse and searchability
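For illustration, here is a simplified sketch of such an indexer in Java, assuming the AWS SDK v2 and a hypothetical "product-updates" stream; a real deployment would use the Kinesis Client Library for checkpointing and shard management, and the indexProduct step stands in for the actual Elasticsearch write:

// A simplified sketch of a Kinesis-backed indexer: poll one shard and hand each
// record to an indexing step that extracts attributes and writes to Elasticsearch.
import software.amazon.awssdk.services.kinesis.KinesisClient;
import software.amazon.awssdk.services.kinesis.model.*;

public class ElasticsearchIndexer {
    public static void main(String[] args) throws InterruptedException {
        KinesisClient kinesis = KinesisClient.create();
        String stream = "product-updates";                        // assumed stream name
        String shardId = kinesis.describeStream(
                DescribeStreamRequest.builder().streamName(stream).build())
                .streamDescription().shards().get(0).shardId();    // single shard for the sketch

        String iterator = kinesis.getShardIterator(GetShardIteratorRequest.builder()
                .streamName(stream).shardId(shardId)
                .shardIteratorType(ShardIteratorType.LATEST).build()).shardIterator();

        while (true) {
            GetRecordsResponse batch = kinesis.getRecords(
                    GetRecordsRequest.builder().shardIterator(iterator).limit(100).build());
            for (Record record : batch.records()) {
                String json = record.data().asUtf8String();
                indexProduct(json);                                // hypothetical: parse attributes,
            }                                                      // write to Elasticsearch
            iterator = batch.nextShardIterator();
            Thread.sleep(1000);                                    // simple poll interval
        }
    }

    static void indexProduct(String productJson) { /* extract brand, vertical, etc. */ }
}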
20:34
I was talking about Dropwizard. So, what is Dropwizard?
20:40
It's a framework, but it's mostly a collection of libraries like Jackson, Jetty, Jersey, and Metrics
20:47
Jetty enables an HTTP server to be embedded inside the main method
20:53
Jersey maps the REST requests to Java objects and Jackson allows JSON serialization, deserialization
21:00
And it's very simple to use Dropwizard. It's similar to Spring Boot
21:03
and I could have used Spring Boot as well, but a quick Google search
21:07
showed Dropwizard above Spring Boot. So I just started with that. And there is really no reason
21:11
why we couldn't have gone that way. As you can see here, it's very simple to define the resource path, what is produced, give query parameters, and have a GET method that is timed and implemented
21:25
This is a very simple example of a hello world resource that we've created
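The resource on the slide is roughly the standard Dropwizard hello-world example; a sketch along those lines, with the resource path, produced media type, query parameter, and timed GET method mentioned above:

// Roughly the kind of resource shown on the slide: a Dropwizard "hello world"
// resource. Metrics' @Timed annotation gives request latency metrics for free.
import com.codahale.metrics.annotation.Timed;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.MediaType;
import java.util.Optional;

@Path("/hello-world")
@Produces(MediaType.APPLICATION_JSON)
public class HelloWorldResource {
    private final String template = "Hello, %s!";

    @GET
    @Timed
    public String sayHello(@QueryParam("name") Optional<String> name) {
        return String.format(template, name.orElse("stranger"));
    }
}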
21:30
And with Spring Boot, if we would have gone that path, we would have got dependency injection that came with it
21:36
So that's just a little nugget on how we built our microservices
21:40
And I was talking about the different attributes, so I thought I'll just share a good diagram or a
21:46
good picture of what these different attributes are. This captures all the products, or most of
21:52
the products, by Procter and Gamble, which is a manufacturer. As you can see here, these are the
21:58
separate product verticals, like home, hair, and grooming. As for product lines,
22:06
Pampers diapers would be a product line, and you can see that there are two
22:09
products that are sitting in that product line. So that's an example of how
22:14
there are product verticals, product lines, and manufacturers associated with a product. And the problem in this particular domain is that some products
22:23
may just have a manufacturer, some of them may have a brand, some of them may have a product vertical and a brand and a product line. So it's really not a tree anymore. It's a graph. So we wanted to model relationships that are dynamic in this world on our DynamoDB tables. And for a lot of them, it is very difficult to get this information: which is the brand, which is the product vertical. So we had to rely on a lot of customer inputs to fill those gaps there
22:53
What we wanted is the ability to search products by one of these attributes
22:57
So if you wanted to search a product by manufacturer, you would have products by manufacturer ID and brand ID or products by product vertical
23:06
To allow building such a system, we had to have a lot of secondary indexes in these DynamoDB tables so that you can search a subset of products from this humongous list of products
23:16
And so we had to place the product vertical ID, product line ID, brand ID, all of that in the product table
23:24
Similarly, in the brand table, I had to put the manufacturer ID. And it was the job of the catalog and the Elasticsearch indexer to, whenever a product is ingested
23:33
to update these different tables and put their secondary indexes in place
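As an illustration of what such a secondary-index lookup looks like, here is a hedged sketch in Java; the table name "products", the index name "brandId-index", and the attribute names are assumptions:

// A sketch of "search a subset of products" against a DynamoDB global secondary index,
// instead of scanning the full product table.
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.QueryRequest;
import software.amazon.awssdk.services.dynamodb.model.QueryResponse;
import java.util.Map;

public class ProductQueries {
    private final DynamoDbClient dynamo = DynamoDbClient.create();

    // Fetch all products belonging to one brand via a secondary index.
    public QueryResponse productsByBrand(String brandId) {
        return dynamo.query(QueryRequest.builder()
                .tableName("products")
                .indexName("brandId-index")
                .keyConditionExpression("brandId = :b")
                .expressionAttributeValues(Map.of(":b", AttributeValue.builder().s(brandId).build()))
                .build());
    }
}

The same pattern repeats for the product vertical ID and product line ID, which is exactly the secondary-index bookkeeping the indexers had to maintain.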
23:38
So these were some of the search and browse conundrums that we had to deal with. At this point, after we dealt with all of this, the state of the app was that the buyer could do product search and browse, and place items in a cart and order
23:53
And there was an order management system as well in place where he could track the shipments
23:57
He or she could track the shipments and inventory and look at their order history
24:03
Sellers were able to ingest new products, update quantities, offers and get invoices for all the orders that have been placed
24:11
At this point, I think I'll get into some of the key takeaways that I had from my experience building these microservices
24:22
And I'll divide them into technical learnings and product learnings. Technical learnings. DynamoDB is not the best database for relationships
24:33
So like I said, there are different relationships between products, manufacturers, and brands
24:38
They cannot really be modeled very well with DynamoDB. Instead, I should have used a graph
24:44
database. A graph database allows you to model these relationships, and it is not a secondary
24:49
index lookup; the lookups are really O(1) because the relationships are a first-class
24:56
citizen in a graph database. So in hindsight, I could have used Amazon Neptune, which back in
25:02
2016 did not exist, I think. But I could have still used Neo4j, which is a graph DB, backed by
25:08
DynamoDB. And I relied on DynamoDB to generate these event streams for asynchronous workflows
25:15
By having Neo4j backed by DynamoDB, I can still leverage these event streams. But the advantage
25:21
is it makes those indexers very simple. I do not have to now manage these secondary indexes
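As a sketch of that hindsight design, here is how the product/brand/manufacturer relationships could be modeled with the Neo4j Java driver; the connection details, labels, and relationship names are illustrative assumptions, and the point is that brand or manufacturer lookups become graph traversals rather than secondary-index bookkeeping:

// A minimal sketch of the catalog modeled as a graph with the Neo4j Java driver.
import org.neo4j.driver.AuthTokens;
import org.neo4j.driver.Driver;
import org.neo4j.driver.GraphDatabase;
import org.neo4j.driver.Session;

public class CatalogGraph {
    public static void main(String[] args) {
        try (Driver driver = GraphDatabase.driver("bolt://localhost:7687",
                AuthTokens.basic("neo4j", "password"));
             Session session = driver.session()) {

            // A product may or may not have a brand or vertical, so edges are optional.
            session.run("MERGE (m:Manufacturer {name: 'Procter & Gamble'}) " +
                        "MERGE (b:Brand {name: 'Pampers'})-[:MADE_BY]->(m) " +
                        "MERGE (p:Product {sku: 'pampers-newborn-20'})-[:OF_BRAND]->(b)");

            // Browse by manufacturer: one traversal instead of a secondary-index lookup.
            session.run("MATCH (p:Product)-[:OF_BRAND]->(:Brand)-[:MADE_BY]->(m:Manufacturer {name: $m}) " +
                        "RETURN p.sku", java.util.Map.of("m", "Procter & Gamble"))
                   .forEachRemaining(r -> System.out.println(r.get("p.sku").asString()));
        }
    }
}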
25:28
Secondly, I leveraged a lot of Kinesis, which is an Amazon-provided Kafka alternative
25:34
But Kafka is much faster than Kinesis, and it allows for faster retrieval and stores the data for longer periods of time
25:40
I think Kinesis by default is seven days. Kafka is much more than that, and it can be configured
25:46
Back then, Kafka did not have a lot of connectors for DynamoDB or Redshift or Elasticsearch
25:52
But today, Kafka has connectors for all of them as source and destination
25:57
So it would be much easier to design things with Kafka than Kinesis
26:01
The only challenge would be to manage it, because we'll have to manage ZooKeeper or the Kafka clusters
26:09
Containerization is another thing. So the way we deployed it was as several EC2 instances of these services
26:18
But if one of them went down, we had to manually check for that
26:22
And we had alerts for it. But we had to manually bring up another one or even set up some scaling
26:27
But I think making these apps containerized and using Docker and Kubernetes would have made our lives much simpler
26:37
Because I think the manageability aspect and the automation frameworks that they provide for microservices is really useful
26:46
CI/CD pipelines, again, are the backbone of agile development, and we could have done a better job at introducing them earlier in the development cycle
26:57
Spinnaker is one of the open source alternatives that is actually developed by Netflix
27:03
Had I known that back then, I would have used that for automated deployment on cloud, different types of cloud, and not just stick to AWS
27:10
Today, something that's picking up is the communication between microservices. So there are several ways in which you can have one microservice talk to the other microservice
27:21
It could be REST calls, it could be HTTP calls or GraphQL, or whatever sort of interfaces you provide
27:29
But I think that things such as a service mesh, like Envoy or HAProxy, are picking up because they allow
27:37
you to build a framework for microservice-to-microservice interaction, embed metrics in it
27:44
and get more insights into the kind of data that flows between microservices. It also allows you
27:49
to have authentication and authorization between microservices. So that would be a very nice thing
27:55
to include in our architecture. So these were all the technical learnings. Then come the product
28:02
learnings. So first and foremost, what we did was we gave customers this
28:11
notion of a cart, but we understood that customers were not able to grasp this concept of a cart
28:17
For them, placing an order by doing a swipe was much easier and a better user experience than
28:23
saving it in a cart and at the end of the day, trying to place an order on the cart
28:28
So the only challenge is when you place individual orders, you will incur a shipping cost for all of these
28:36
Whereas if you place a bulk order, the shipping cost gets amortized. So what we did on our end, which Amazon has been doing forever, is we would bulk up these different orders and create a virtual cart on the back end
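A minimal sketch of that virtual-cart idea, with an assumed flat shipping fee and a simple group-by-buyer rule standing in for whatever windowing the real backend used:

// Collect individual swipe orders placed by the same buyer within a window and
// ship them as one batch so the delivery fee is amortized.
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class VirtualCart {
    record Order(String buyerId, String productId, int quantity) {}

    static final double FLAT_SHIPPING_FEE = 50.0;   // assumed per-shipment fee

    // Group orders by buyer so each buyer pays one shipping fee instead of one per order.
    static Map<String, List<Order>> bundleByBuyer(List<Order> ordersInWindow) {
        return ordersInWindow.stream().collect(Collectors.groupingBy(Order::buyerId));
    }

    public static void main(String[] args) {
        List<Order> window = List.of(
                new Order("store-42", "rice-5kg", 2),
                new Order("store-42", "sugar-1kg", 5),
                new Order("store-42", "soap-bar", 10));

        double individually = window.size() * FLAT_SHIPPING_FEE;           // 3 shipments
        double bundled = bundleByBuyer(window).size() * FLAT_SHIPPING_FEE; // 1 shipment
        System.out.printf("Shipping: %.0f individually vs %.0f bundled%n", individually, bundled);
    }
}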
28:49
And that resulted in around 27% savings for customers. Like I said, searchability and browsability, to give a better user experience, was one of our
29:01
top priorities, and we did a lot of customer research to fill the gaps between these attributes
29:07
that a product has. And per product, there were different attributes by which the customers
29:13
wanted to search. For example, customers wanted to search chocolates by their brand, but they wanted
29:19
to search for some other commodities such as toiletries by the manufacturer. So having a curated
29:26
list of what these different API calls should be was very critical in giving a good experience
29:32
And that increased the orders by 24% by just introducing these different product-based
29:40
searching. I mentioned that we have Redshift. That is our data warehouse for capturing insights
29:48
What we saw is that by just introducing insights into the data
29:53
we increased the revenue by 34%. And that told us a lot about customer behavior
29:59
For example, during the summer season, there was an increase in demand for ice creams and
30:06
chocolates as well. And we saw that customers were trying to browse them beforehand. So that
30:13
gave us insights that we could forward to our sellers and they could have the inventory ready
30:18
in case customers order that. Also, customers tend to order products in clusters. So for example
30:24
rice and wheat were ordered together generally. Sugar was never ordered in monsoon season
30:30
So all of these learnings and insights were things that we learned from the data that we collected, and it was very useful in driving our revenues
30:39
And finally, coming to the results of all of this work that we did, within a span of six months, the number of wholesale traders on our platform, that is the sellers, increased from one to seven
30:52
Our retail shop owners increased from 10 to 400, and they were touching a population of around 4 million people
30:59
The insights that we captured increased from 7,000 data points per day in Redshift to 780,000
31:07
data points per day in Redshift, and our revenue increased from $200 per day to around $8,000 per
31:14
day. And finally, the reason why I'm giving this talk is the efficiency improvement
31:21
So we calculated that, because of our efforts, we were able to save 27 percent of the food wastage
31:27
that would have otherwise occurred if we were not in the play. So I think that was really the biggest takeaway for me
31:35
And I've worked in bigger companies such as Apple and Samsung. And the way things work there, the technology there is very different
31:47
And sometimes, when you are on the infrastructure side, you do not see the customer directly, and you are building something to enable workflows for someone that may not be the direct customer. But this experience in the startup made me wear
32:05
several hats and gave me exposure to product, gave me exposure to customers. And that's why it was a
32:11
really fruitful experience, I would say. And here are some of the takeaways that I take from this
32:18
experience that I have. Firstly, you must always build software for the customer and show empathy
32:25
So in my case, I realized that the customers are not very tech savvy. So you cannot directly apply
32:31
things that work for a tech savvy audience to these customers. And you need to enable them
32:37
to take baby steps towards the technology and build a system and a user experience that they
32:43
are comfortable using. One of the very good examples of this was the localization part of it
32:48
where I realized that they were not very well-versed with the languages that we had. So
32:53
we had to go back and click pictures and curate the content that we show them. And that
33:00
improved their experience. More code is more bugs. That's a common adage that
33:07
there is a bug every 100 lines of code. So if you want to have less bugs, write less code
33:11
The goal is to write simple code, not too complicated code. And pick and choose your battles
33:18
So if you have a small array of five or 10 elements, you do not need to write an extremely complicated search algorithm
33:26
You can just simply iterate over the array. Even if it's very irregular, you can iterate over the array and find an element
33:33
So that's an example of simple lucid code that can save you a long time in debugging
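For example, something as plain as this is usually all that a five-to-ten element array needs:

// The kind of "simple lucid code" argued for here: with 5-10 elements, a plain
// linear scan is easier to read and debug than a clever search structure.
public class SimpleSearch {
    static int indexOf(String[] items, String wanted) {
        for (int i = 0; i < items.length; i++) {
            if (items[i].equals(wanted)) return i;
        }
        return -1;   // not found
    }

    public static void main(String[] args) {
        String[] skus = {"rice-5kg", "sugar-1kg", "soap-bar", "tea-250g"};
        System.out.println(indexOf(skus, "soap-bar"));   // prints 2
    }
}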
33:39
Pains of scale. Things work when you scale to like 10 people
33:44
but as soon as it becomes 1,000 people, things fail. And there may be so many reasons that they fail. It could be that AWS places limits on read and write throughput of the database, so things are slower because they are throttled. It could be just that your network cannot take up that much load
34:01
What's important to know is to design systems in which you fail safe, which means that you can fail
34:07
when the customer is about to place an order. And you can say that, oh, I'm unable to place an order
34:12
right now for you and we'll get back to you. But you can't fail when the customer has already
34:16
placed an order and not send them that order. So how do you design systems where it's
34:24
okay to fail, but that fail in a good manner? Performance on smartphones. So a lot of
34:31
the places where these customers are do not have proper networking infrastructure
34:38
So what that means is a lot of calls to the cloud may not work there
34:44
So how do you employ caching techniques on the phone to allow them to at least browse products
34:49
And then how do you also batch a bunch of updates together and send one network call instead of multiple heartbeats to the server
34:58
Those are some of the things that you should always be cognizant of when designing systems
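A small sketch of that batching idea on the client, where the flush threshold and the sendBatch call are illustrative assumptions rather than a specific API:

// Buffer updates locally on the device and flush them in one request instead of
// many small calls over a patchy network.
import java.util.ArrayList;
import java.util.List;

public class UpdateBatcher {
    private final List<String> pending = new ArrayList<>();
    private final int flushThreshold;

    public UpdateBatcher(int flushThreshold) {
        this.flushThreshold = flushThreshold;
    }

    // Queue an update (e.g. a cart change or a click event) instead of sending it immediately.
    public synchronized void record(String updateJson) {
        pending.add(updateJson);
        if (pending.size() >= flushThreshold) {
            flush();
        }
    }

    // One network call carries the whole batch; also called when the app regains connectivity.
    public synchronized void flush() {
        if (pending.isEmpty()) return;
        sendBatch(new ArrayList<>(pending));   // hypothetical single HTTP POST to the backend
        pending.clear();
    }

    private void sendBatch(List<String> batch) { /* POST the batch body to the server */ }
}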
35:03
Invest in learning about the business and the pain points and see if tech can solve some of them
35:09
As software engineers, we often tend to just go into our own rabbit hole and try to just
35:16
focus on the problem at hand, and sometimes we do not care too much about the business, and we
35:21
assume that there'll be a layer above us that will take care of translating the needs of the
35:26
business to tech and tell us about it. But I felt that it is also very important as an engineer to
35:32
learn about the business and to see what are some of the places where technology can improve their
35:37
life. One good example was that, like I said, there was no good way to track inventory for these customers. And I thought that our order history can be a very good way for them to know what they ordered, and maybe the first
35:52
steps towards tracking inventory. So I just built something on top of the order history to tell them
35:58
how many things they have in their store. And that was something that was orthogonal to what we were
36:04
actually trying to give them, but it really enabled them and empowered them. So it was something that
36:11
was just because I wanted to learn about the pain points. Data is gold. Having insights and metrics
36:21
will always keep you a step ahead of what the customer is wanting to tell you so you can always
36:29
infer their behavior from the data. And that's why investing in data during bootstrapping is
36:36
often the right thing to do. Cloud costs spiral out of hand if not designed correctly. That's
36:42
something that I learned through Redshift, my experience with Redshift back in 2016
36:47
when we went from just $500 per month to $4,500 per month just because we had so many more insights
36:54
to look at. So it's very important to design intelligently for the cloud and see
36:59
what is the footprint of the data and the costs that you have in the cloud
37:05
These are some of the takeaways. I hope that I have been able to communicate my learnings. And
37:11
I hope that as an engineer, I have been able to put a seed in your mind as to how to design
37:18
systems and software that is empathetic towards the user. With that, I end my session
37:24
And you can reach me on LinkedIn, email, or Twitter. Thank you
#E-Commerce Services
#Retail Trade
#Retail Equipment & Technology
#Grocery Delivery Services
#Food & Grocery Delivery


