How Grab configured their data layer to handle multi-million database transactions a day!

21,145 views

Arpit Bhayani

1 year ago

System Design for SDE-2 and above: arpitbhayani.me/masterclass
System Design for Beginners: arpitbhayani.me/sys-design
Redis Internals: arpitbhayani.me/redis
Build Your Own Redis / DNS / BitTorrent / SQLite - with CodeCrafters.
Sign up and get 40% off - app.codecrafters.io/join?via=...
In the video, I discussed how Grab manages millions of food and Mart orders daily, focusing on the critical database infrastructure. I explored the high-level architecture of Grab's order platform, emphasizing high availability, stability, and performance at scale. Additionally, I introduced a system design course with a practical approach for engineers to learn real-world system building. The key points covered Grab's design goals of stability, cost-effectiveness, and consistency, along with the architecture of transactional and analytical databases using DynamoDB and MySQL.
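For readers who want a concrete picture of the transactional side, here is a minimal, hypothetical sketch (table, index, and attribute names are my own assumptions, not Grab's actual schema) of an orders table in DynamoDB with a lean, sparse GSI that only indexes ongoing orders:

```python
# Hypothetical sketch: DynamoDB orders table with a sparse GSI for ongoing orders.
# All names (orders, user_id_gsi, ongoing_orders_by_user) are illustrative.
import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="orders",
    AttributeDefinitions=[
        {"AttributeName": "order_id", "AttributeType": "S"},
        {"AttributeName": "user_id_gsi", "AttributeType": "S"},
    ],
    KeySchema=[{"AttributeName": "order_id", "KeyType": "HASH"}],
    GlobalSecondaryIndexes=[
        {
            # Sparse index: only items that carry the user_id_gsi attribute
            # (i.e. ongoing orders) are copied into it, which keeps it lean.
            "IndexName": "ongoing_orders_by_user",
            "KeySchema": [{"AttributeName": "user_id_gsi", "KeyType": "HASH"}],
            "Projection": {"ProjectionType": "KEYS_ONLY"},
        }
    ],
    BillingMode="PAY_PER_REQUEST",
)
```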
Recommended videos and playlists
If you liked this video, you will find the following videos and playlists helpful
System Design: • PostgreSQL connection ...
Designing Microservices: • Advantages of adopting...
Database Engineering: • How nested loop, hash,...
Concurrency In-depth: • How to write efficient...
Research paper dissections: • The Google File System...
Outage Dissections: • Dissecting GitHub Outa...
Hash Table Internals: • Internal Structure of ...
Bittorrent Internals: • Introduction to BitTor...
Things you will find amusing
Knowledge Base: arpitbhayani.me/knowledge-base
Bookshelf: arpitbhayani.me/bookshelf
Papershelf: arpitbhayani.me/papershelf
Other socials
I keep writing and sharing my practical experience and learnings every day, so if you resonate then follow along. I keep it no fluff.
LinkedIn: / arpitbhayani
Twitter: / arpit_bhayani
Weekly Newsletter: arpit.substack.com
Thank you for watching and supporting! It means a ton.
I am on a mission to bring out the best engineering stories from around the world and make you all fall in
love with engineering. If you resonate with this then follow along, I always keep it no-fluff.

Comments: 80
@ravi77003 • 17 days ago
Implementing GSI and updating the data in OLAP was amazing.
@BiranchiPadhi-id1le • 1 year ago
Falling in love with your content, Arpit - breaking complex topics into smaller chunks and explaining them like you would to a beginner is your strength. Keep bringing such content!
@nuralikhoja8773 • 1 year ago
Feeling proud of myself after watching this - I implemented a similar thing a few days back on DynamoDB. I found the DynamoDB docs helpful, where they suggested using a global secondary index for filtering.
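To make that filtering concrete, a small boto3 sketch (index and attribute names are assumptions, matching the sketch in the description above): query the sparse GSI for a user's ongoing orders, then drop the GSI attribute once the order completes so the item falls out of the index.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Query the sparse GSI for all ongoing orders of a user (names are illustrative).
resp = dynamodb.query(
    TableName="orders",
    IndexName="ongoing_orders_by_user",
    KeyConditionExpression="user_id_gsi = :uid",
    ExpressionAttributeValues={":uid": {"S": "user-123"}},
)
ongoing_order_ids = [item["order_id"]["S"] for item in resp["Items"]]

# When the order reaches a terminal state, remove the GSI attribute so the
# item drops out of the sparse index and the index stays lean.
dynamodb.update_item(
    TableName="orders",
    Key={"order_id": {"S": "order-789"}},
    UpdateExpression="SET order_status = :s REMOVE user_id_gsi",
    ExpressionAttributeValues={":s": {"S": "COMPLETED"}},
)
```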
@safwanmansuri3203 • 1 year ago
Seriously, your videos are so informative for a software engineer - a real gold mine for us. Please continue making such amazing videos.
@architshukla8076 • 1 year ago
Thanks Arpit for explaining such a brilliant architecture!!
@ADITYAKUMARking • 1 year ago
Brilliant architecture. Thanks for explaining.
@_SoundByte_ • 7 months ago
Amazing Arpit. Easy and powerful! Thank you!
@karanhotwani5179 • 1 year ago
Liking this video halfway through. Just a brilliant explanation. Thanks
@Hercules159 • 1 year ago
Awesome from Grab. Now I can use the same concept in an interview and explain why it works.
@kuldeepsharma7499 • 8 months ago
Simple and great explanation! Thanks
@NithinKumarKv • 1 year ago
One question, Arpit: since the GSI is eventually consistent, would we get a consistent view of ongoing orders at any point in time?
@Bluesky-rn1mc • 1 year ago
Amazing analysis... thoroughly enjoyed it 😃
@anupamayadav3800 • 5 months ago
Fantastic design by Grab.. really loved it.. and most importantly thank you for presenting it in such a simplified way. Love your content
@AsliEngineering • 5 months ago
Glad you found it interesting and helpful 🙌
@loveleshsharma5663 • 1 month ago
Awesome explanation of the concept.
@aashishgoyal1436 • 1 year ago
Seems exciting
@sounishnath513 • 1 year ago
Thank you so much for this Banger on 1st, 2023.❤
@rushikesh_chaudhari • 11 months ago
Great explanation Arpit💯
@SwikarP • 6 months ago
Great video. What if the DLQ on AWS is down?
@nguyennguyenpham2289 • 8 months ago
Thanks very much for the video. This is really helpful in understanding how Grab can handle the large and spiky request volumes that come in during rush hour. I just wonder about our company's case: we also need to use historical data to validate customer promotions based on their order history. For example, one promotion is only applicable to first-time customers (the simplest case). In that case, do we need to use the analytical data to calculate this?
@GaneshSrivatsavaGottipati • 2 months ago
Nice explanation! But how does the order svc write to the database and to Kafka - is it async for both, or sync?
@architbhatiacodes • 1 year ago
AFAIK, if we are not able to process an SQS message, it goes to the DLQ. If SQS is down, the DLQ will also be down and we will not be able to publish the message there. Great video btw!
@architbhatiacodes • 1 year ago
Read the blog and they mention the same: "When the producer fails, we will store the message in an Amazon Simple Queue Service (SQS) and retry. If the retry also fails, it will be moved to the SQS dead letter queue (DLQ), to be consumed at a later time." So I think we will not be able to do anything if both Kafka and SQS are down (it might be a very rare event though).
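For illustration, a minimal sketch of that produce-with-fallback path (topic name, queue URL, and client libraries are assumptions, not taken from Grab's code):

```python
import json
import boto3
from kafka import KafkaProducer  # kafka-python; the client choice is illustrative

producer = KafkaProducer(
    bootstrap_servers=["kafka:9092"],
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
sqs = boto3.client("sqs")
RETRY_QUEUE_URL = "https://sqs.ap-southeast-1.amazonaws.com/123456789012/order-events-retry"  # placeholder

def publish_order_event(event: dict) -> None:
    """Try Kafka first; if the produce fails, park the event in SQS for retry.
    Messages that keep failing are moved to the DLQ by the queue's redrive policy."""
    try:
        producer.send("order-events", event).get(timeout=5)
    except Exception:
        sqs.send_message(
            QueueUrl=RETRY_QUEUE_URL,
            MessageBody=json.dumps(event),
        )
```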
@Arunkumar-eb5ce • 1 year ago
Amazing content. Quick question: updating the table based on timestamps will not be reliable, right? In a distributed system we cannot rely on system clocks for ordering messages.
@dhruvagarwal7854 • 1 year ago
Can you give an example where this scenario could occur? Unable to understand this.
@yashswarnkar1702 • 1 year ago
Thanks for the content Arpit. Have a small doubt: how are we handling the huge spikes? Does Dynamo hot-key partitioning + the lean GSI do the job? I am assuming the peak will last for some time since order delivery isn't done in just a 10-minute window, so at peak time even the index would start piling up. Would you say using Dynamo is the cost-effective solution here? [I am assuming the team wanted a cloud-native solution, and cost-effectiveness involved weighing the maintenance cost of an in-house solution]
@AsliEngineering • 1 year ago
DDB is cost-effective as well as great at balancing the load. It can easily handle a huge load given how it limits queries to a near-KV use case. Also, indexing will not be hefty because the index is lean.
@hardikmenger4275 • 1 year ago
DynamoDB is eventually consistent and not strongly consistent, right? If they needed strong consistency, they would want to shift to Postgres or something, right?
@sagarnikam8001 • 1 year ago
One question: if they are using SQS with a DLQ (100% SLA guaranteed by AWS) for data ingestion, what could be the reason for using Kafka in the first place? Why can't they just use SQS (with a DLQ) only?
@Bluesky-rn1mc • 1 year ago
Could be due to cost. Kafka is open source.
@imdsk28 • 1 year ago
I think Kafka maintains ordering compared to the queue, and only when Kafka is down will we use SQS… SQS doesn't maintain order, which is why we have the two edge cases to handle: upserts and updates with fresher timestamps.
@AsliEngineering • 1 year ago
SQS maintains order, but Kafka provides higher write and read throughput.
@imdsk28 • 1 year ago
@AsliEngineering Thanks for correcting… the architecture explanation is great…
@GauravSharma-pj3dj • 1 year ago
Please make a video on what kind of database to use in what situation - that would be very helpful. Thanks!
@AsliEngineering • 1 year ago
I cover that in my course, hence cannot put out a video on it. I hope you understand the conflict, but thanks for suggesting.
@krgowtham • 1 year ago
Thanks for the informative video Arpit. I have a doubt about handling out-of-order messages at the end of your video. While deciding that update #1 should be processed before update #2 using timestamps, how does the consumer know which message is the older one, since consumer #1 may have update #1 and consumer #2 may have update #2? I would think of versioning the data and making sure the data we update is the next available version of the one present in the analytical database. Is this approach correct?
@srinish1993 • 1 year ago
I got a similar question in a recent interview: consider an order management system with multiple instances of the same order service responsible for handling updates of orders (on a MySQL DB). Now three updates arrive in sequence at timestamps t1 < t2 < t3, at three different order service instances. How do we ensure the updates u1, u2 and u3 are applied in that same sequential order? Any thoughts?
@javeedbasha6088 • 1 year ago
@srinish1993 Maybe we can send the timestamp along with the data? So when it is processed by the consumer, it can build a SQL query to update the data WHERE updated_at < data.timestamp AND id = data.order_id.
@prajwalsingh7712 • 1 year ago
@javeedbasha6088 Yeah, correct - usually it's good practice to send an event_timestamp field in the Kafka messages to decide the order of messages.
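A minimal sketch of that timestamp-guarded update against a MySQL analytical table (library and column names are illustrative assumptions):

```python
import pymysql  # any MySQL client works; pymysql is just for illustration

conn = pymysql.connect(host="olap-mysql", user="app", password="***", database="orders")

def apply_order_update(order: dict) -> None:
    """Write the event only if it is newer than what the analytical DB already stores."""
    with conn.cursor() as cur:
        cur.execute(
            """
            UPDATE orders
               SET order_status = %s,
                   updated_at   = %s
             WHERE order_id   = %s
               AND updated_at < %s   -- stale (older-timestamp) events become no-ops
            """,
            (order["status"], order["event_ts"], order["order_id"], order["event_ts"]),
        )
    conn.commit()
```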
@javeedbasha6088 • 1 year ago
After some reading, I discovered that updating data based on timestamps is not a reliable method, since there are inconsistencies in system clocks. A more effective approach is to use end-to-end partitioning. In this method, all messages related to a specific partition key are written to the same Kafka partition. As long as the producer sends these messages in order to Kafka, Kafka will maintain their order within the partition, although ordering is not maintained across different partitions. That partition is then consumed by only a single consumer instance, ensuring that related events are processed by the same consumer. For example, suppose we have two messages: t1 (create profile 'A') and t2 (update profile 'A'). The same consumer will receive and process t1 and t2 sequentially. This approach ensures that order is maintained in an event-driven architecture, and it can also handle concurrency.
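A small sketch of that keyed-producer approach (topic and field names are assumptions): keying every event by order_id makes Kafka hash all events of an order to the same partition, where ordering is preserved.

```python
import json
from kafka import KafkaProducer  # illustrative client choice

producer = KafkaProducer(
    bootstrap_servers=["kafka:9092"],
    key_serializer=lambda k: k.encode("utf-8"),
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Keying by order_id routes every event of this order to the same partition,
# so a single consumer sees "CREATED" before "PICKED_UP" for that order.
producer.send("order-events", key="order-789", value={"order_id": "order-789", "status": "CREATED"})
producer.send("order-events", key="order-789", value={"order_id": "order-789", "status": "PICKED_UP"})
producer.flush()
```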
@ankuragarwal9712 • 8 months ago
@javeedbasha6088 What about comparing the version of the order document? Because if we keep a version, we can avoid the need for Kafka - simple SQS will work.
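A hypothetical sketch of that version-based guard, assuming the order service attaches a monotonically increasing version number to every event (the 'version' column is an assumption, not part of the original design):

```python
def apply_versioned_update(cur, order: dict) -> None:
    """Apply the event only if its version is newer than the stored row's.
    Out-of-order or duplicate events simply update zero rows."""
    cur.execute(
        """
        UPDATE orders
           SET order_status = %s, version = %s
         WHERE order_id = %s
           AND version  < %s
        """,
        (order["status"], order["version"], order["order_id"], order["version"]),
    )
```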
@gauravraj2604 • 1 year ago
Hi Arpit, one quick question about upserts: 1. Is the operation type (insert vs. update) always decided based on whether the primary key already exists in the database? 2. When we do an upsert, do we always need to provide all mandatory parameters in the SQL query?
@dhruvagarwal7854 • 1 year ago
1. You can define the field(s) the upsert matches on in your query or ORM layer; it doesn't need to be the primary key. 2. While doing an upsert, always provide ALL the parameters, even those that are not mandatory but have a non-null value, because every field will be overwritten.
@gauravraj2604 • 1 year ago
@dhruvagarwal7854 Hi Dhruv, thank you for clarifying. So if I understood correctly, the upsert can be matched on any field, but that field should be unique in nature - is this correct? On the 2nd point I am clear now; it makes sense to provide all parameters, as every parameter is going to be overwritten and we don't want any parameter to be lost.
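For reference, a minimal MySQL-style upsert sketch (column names are illustrative) that matches on the unique order_id and overwrites every column, as discussed above:

```python
def upsert_order(cur, order: dict) -> None:
    """MySQL-style upsert: insert, or overwrite every column if a row with the
    same unique key (order_id here) already exists."""
    cur.execute(
        """
        INSERT INTO orders (order_id, user_id, order_status, updated_at)
        VALUES (%s, %s, %s, %s)
        ON DUPLICATE KEY UPDATE
            user_id      = VALUES(user_id),
            order_status = VALUES(order_status),
            updated_at   = VALUES(updated_at)
        """,
        (order["order_id"], order["user_id"], order["status"], order["event_ts"]),
    )
```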
@628sonu • 1 year ago
A small doubt: user_id_gsi is stored with the user_id when placing an order. What if there are 2 orders from the same user (maybe from different devices, or even the same one) - won't the GSI have 2 duplicate entries even though the orders are different?
@imdsk28 • 1 year ago
We can have multiple orders under the same index value - it's not a primary key, it just indexes the data. When you fetch the ongoing orders of that user you need both orders, and with this index it can fetch those 2 quickly.
@syedaqib2912 • 1 year ago
Order IDs are always unique for each user.
@rahulprasad2318 • 1 year ago
The analytics DB can be write-only, right?
@AsliEngineering • 1 year ago
What's the use of write if we never read?
@yuvarajyuvi9691 • 1 year ago
Just a small doubt. As transactional queries are handled synchronously, won't there be an issue handling a huge number (millions) of synchronous writes to the DB during peak traffic hours? There is a possibility that the DB servers can choke while handling them, right? BTW, love watching your videos - great content!
@imdsk28 • 1 year ago
I believe this will be handled by DynamoDB itself by creating partitions internally when they run hot.
@AsliEngineering • 1 year ago
Yes, they would slow down. Hence consumers will slow their consumption.
@yuvarajyuvi9691 • 1 year ago
@AsliEngineering For transactional queries we will be hitting the DB server directly, right, since they need to be synchronous? You said there won't be any messaging queue involved in such cases. Correct me if I am wrong.
@yuvarajyuvi9691 • 1 year ago
@imdsk28 Is it true for RDS as well? I don't think so, but not sure.
@imdsk28 • 1 year ago
@yuvarajyuvi9691 Not completely sure… need to deep dive.
@shantanutripathi • 1 year ago
How would they know, just by looking at the timestamp, that it's not the latest one? (1. What if a newer update hasn't come yet? 2. Would they query the transactional DB just for that?)
@AsliEngineering • 1 year ago
No. You just discard the updates you receive with an older timestamp. No need to query anything.
@blunderfoxbeta • 1 year ago
One pain point of DynamoDB is handling pagination during filter and search. It skips records that don't match the query criteria, and we have to run a recursive loop to fill the page limit. For example: you run a query over 1000 records with a page limit of 10, filtering on status for inactive users, and there are only 2 inactive records - the first in the first row and the second at the 1000th row. Now you have to run ~100 query iterations just to get those 2 records. As per my understanding this is the biggest disaster in DynamoDB. Does anyone have any solutions here? Hi Arpit, have you come across this limitation in DynamoDB?
@AsliEngineering • 1 year ago
Yes. DDB is not meant for such queries, hence should not be used for such cases, unless you can manipulate indexes (LSI and GSI).
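To illustrate the pain point described above, here is a rough sketch of the pagination loop one ends up writing (table and attribute names are assumptions). FilterExpression is applied after each page is read, so sparse matches force many round trips:

```python
import boto3

dynamodb = boto3.client("dynamodb")

def filtered_page(table: str, page_limit: int = 10):
    """Keep scanning pages until enough filtered items are collected.
    The filter runs AFTER each page is fetched, which is why a sparse
    result can take many round trips."""
    items, start_key = [], None
    while len(items) < page_limit:
        kwargs = {
            "TableName": table,
            "FilterExpression": "user_state = :s",
            "ExpressionAttributeValues": {":s": {"S": "INACTIVE"}},
            "Limit": 100,  # items examined per page, not items returned
        }
        if start_key:
            kwargs["ExclusiveStartKey"] = start_key
        resp = dynamodb.scan(**kwargs)
        items.extend(resp["Items"])
        start_key = resp.get("LastEvaluatedKey")
        if not start_key:  # end of table reached
            break
    return items[:page_limit]
```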
@ekusimmu • 4 months ago
Why both SQS and Kafka? Why not just SQS with a DLQ for high availability?
@AsliEngineering • 4 months ago
Because the use case was a message stream and not a message queue, hence Kafka was preferred. SQS is just a fallback to ensure no loss of events.
@HellGuyRj • 1 year ago
Yo. What if, instead of the timestamp comparison, we just use versioning + counters?
@AsliEngineering • 1 year ago
Maintaining versions / vector clocks is a pain and in most cases overkill. Timestamps work well for the majority of workloads.
@ankuragarwal9712 • 8 months ago
What if we use CDC (Dynamo Streams) to achieve the same?
@ankuragarwal9712 • 8 months ago
Instead of the order service pushing the data to Kafka?
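A hypothetical sketch of that CDC path: a Lambda function subscribed to the table's DynamoDB Stream forwards change records downstream, so the order service would not have to dual-write (queue URL and names are illustrative):

```python
import json
import boto3

sqs = boto3.client("sqs")
OLAP_INGEST_QUEUE = "https://sqs.ap-southeast-1.amazonaws.com/123456789012/olap-ingest"  # placeholder

def handler(event, context):
    """AWS Lambda handler wired to the orders table's DynamoDB Stream.
    Each record is a change event (INSERT/MODIFY/REMOVE) emitted by the
    table itself, so no explicit publish from the order service is needed."""
    for record in event["Records"]:
        if record["eventName"] in ("INSERT", "MODIFY"):
            new_image = record["dynamodb"]["NewImage"]  # DynamoDB-typed attributes
            sqs.send_message(
                QueueUrl=OLAP_INGEST_QUEUE,
                MessageBody=json.dumps(new_image),
            )
```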
@SaketAnandPage • 1 month ago
There are standard and FIFO queues in SQS; standard queues have high throughput. Also, a DLQ is not meant as a fallback for the primary queue - it is for the consumer side: if a message fails to be successfully consumed X number of times, it is moved to the DLQ. I think the explanation of what happens if SQS is down is not correct.
@swaroopas5207 • 1 month ago
Yes, you're correct - if the consumer is unable to consume even after retries, the message is moved to the DLQ.
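That matches how a DLQ is actually wired up: via a redrive policy on the primary queue, driven by maxReceiveCount on the consumer side. A minimal boto3 sketch (queue names are illustrative):

```python
import json
import boto3

sqs = boto3.client("sqs")

dlq_url = sqs.create_queue(QueueName="order-events-dlq")["QueueUrl"]
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# The DLQ is attached to the primary queue via a redrive policy: a message is
# moved there only after consumers fail to process it maxReceiveCount times.
sqs.create_queue(
    QueueName="order-events",
    Attributes={
        "RedrivePolicy": json.dumps({
            "deadLetterTargetArn": dlq_arn,
            "maxReceiveCount": "5",
        })
    },
)
```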
@itsrahulraj • 1 year ago
One question on the last part: if two updates arrive out of order, you mentioned that we can use "updatedTimestamp" to discard the older updates. But what should we do in the following scenario: Update 1 (new timestamp) changes field1 and field2 to some values; Update 2 (old timestamp) changes field1 and field3 to some values. In this scenario, discarding "Update 2" would be the wrong thing to do, right? Because then we lose the data update made to "field3".
@AsliEngineering • 1 year ago
The replication setup is row-based replication and not statement-based.
@itsrahulraj • 1 year ago
What I meant to say is: Update 1 (new timestamp) updates the fields "field1" & "field2" of "row1"; Update 2 (old timestamp) updates the fields "field1" & "field3" of "row1". So both updates try to update the same row, "row1". Since these updates touch multiple fields, how should we handle it? We don't want to miss the "field3" update. Am I missing something here? Thanks for your time clarifying my doubts :)
@AsliEngineering • 1 year ago
@itsrahulraj DB transactions solve exactly this.
@itsrahulraj • 1 year ago
@Asli Engineering by Arpit Bhayani Thanks, it makes sense.
@varshakancham5944 • 1 year ago
Can you please elaborate, @AsliEngineering? I didn't get how DB transactions will handle this.
@gurumahendrakar65 • 1 year ago
a = "Guru" b = "Guru" c = input() -> Guru c ki id different Q🤔 (a aur b Ki Id Same hai) guru.capitalize() -> Guru # is also different id jab functions use kar rahe hai to different Ids q bana rahe ....... Already allocated Huve adress pe point q nahi kara raha hai🙄
@abhishekvishwakarma9045 • 1 year ago
Generally, I spend my weekends learning something new, and your content has helped me a lot. Thanks Arpit sir 🫡 🔥 I was totally amazed by the transactional DB part 😎