How Grab Processes Millions of Orders Per Day | PZN Reaction

65,697 views

Programmer Zaman Now

A day ago

Comments: 105
@MrKeliv · a year ago
My takeaway: Grab uses 2 databases. 1 DynamoDB for high traffic + transactions -> temporary data / transaction log (auto-delete of the oldest records / manual delete). 1 MySQL RDS for records -> historical/analytical. My question: why doesn't the OLTP use MongoDB instead? MongoDB is actually more powerful for records, without data dependencies, and lightweight.
@ProgrammerZamanNow · a year ago
Probably cost. They're on the cloud, and if they had to pay for Mongo Atlas, the cost might get even higher.
@MrKeliv · a year ago
@@ProgrammerZamanNow On AWS it really is expensive. Everything gets charged. AWS Load Balancer + Auto Scaling is powerful, though. But that's the thing: you get what you pay for. Thanks for the extra insight, mas...
@scalarcoding · a year ago
The way I see it, Grab's line of business is B2C, business to consumer, so a huge transaction volume is a given. What they need is ELB from Amazon. And to keep the critical OLTP primary DB running well, they went with an opinionated, single-vendor setup. Using MongoDB would mean yet another separate server. So the conclusion is that they need the ELB more than the NoSQL DB itself. If MongoDB could run on AWS's ELB cloud, they might well consider it. Firebase is also an attractive competitor; unfortunately it's just as vendor-locked.
@rifkiaz · a year ago
The cost of Mongo Atlas on AWS is a bit... you know.
@MrKeliv · a year ago
@@rifkiaz Oh, could be, they wanted something cheaper.
@deafebrianoyuvica311 · a year ago
00:08 Grab processes millions of orders per day with a distributed system.
02:25 Grab uses different databases to serve transactional and analytical queries.
04:42 Transactional and analytical databases are used differently in Grab's order processing system.
07:03 Grab processes millions of orders per day by sending the data to OLTP and the data ingestion pipeline for processing.
09:19 DynamoDB is used by Grab for its transaction database because it is scalable and highly available.
11:30 DynamoDB uses adaptive capacity to handle hot-key traffic.
13:43 DynamoDB uses a Global Secondary Index (GSI) for batch queries.
15:51 Grab processes millions of orders daily using DynamoDB and MySQL.
18:00 Grab uses RDS and Kafka to process millions of orders per day.
20:12 Grab uses two databases for transactions and analytics with asynchronous synchronization using Kafka.
22:18 Grab processes millions of orders by using two different databases for transactional and analytical purposes.
Crafted by Merlin AI.
@suryaelz · a year ago
Awesome, om, now we can read along. When I read these on my own I don't always get it, haha, and here it's even explained. Great stuff.
@ProgrammerZamanNow · a year ago
Hope it's useful 👍
@emailberjo4579 · 8 months ago
Yeah, that's true, haha. Sometimes I even misunderstand them.
@rachadiannovansyah9926 · a year ago
Great, content like this is the best, PZN 👍
@sadaharu2149 · a year ago
Great knowledge here, keep it going, pak Eko.
@irfandyjip3246 · a year ago
🎯 Key Takeaways for quick navigation:

00:01 📊 Overview of Grab's Order Processing
- Grab processes millions of orders daily, with the potential for tens of millions or more.
- The video introduces the backend system handling GrabFood and GrabMart orders.
- Focus on understanding the real-world handling of orders after a user places a GrabFood order.

01:11 🎯 Design Goals for Database Solution
- Design goals include stability, scalability, cost-effectiveness, and consistency.
- Importance of distinguishing between transactional and analytical queries.
- Examples of transactional queries critical for online order creation and completion.

03:40 🔄 Distinction Between Strong and Eventual Consistency
- Grab distinguishes strong consistency for transactional queries and eventual consistency for analytical queries.
- Strong consistency ensures real-time processing for critical transactional queries.
- Eventual consistency is acceptable for less critical analytical queries.

04:24 🗃️ Separate Databases for Transactional and Analytical Queries
- The first design principle involves using different databases for transactional and analytical queries.
- Transactional databases are critical for real-time online order processing.
- Analytical databases store historical and statistical data and keep data for a more extended period.

05:33 ⚖️ Benefits of Different Databases for Query Patterns
- Different databases fulfill various query patterns and requirements.
- Enables better stability by selecting databases tailored to specific query types.
- Addresses the challenge of balancing real-time processing and historical/statistical data storage.

06:33 🔄 Data Ingestion Pipeline for Consistency
- Introduction of the second design principle: the data ingestion pipeline.
- Explains how the pipeline ensures consistency between transactional and analytical databases.
- Orders are initially stored in the OLTP database and asynchronously pushed into the data ingestion pipeline.

07:40 🌐 Architecture Details of the OLTP Database
- The OLTP database serves two categories of queries: key-value queries and batch queries.
- DynamoDB is used for transactional queries due to its scalability and high availability.
- Challenges and solutions for handling high-traffic queries and maintaining full capacity.

09:31 💡 DynamoDB Features and Adaptive Capacity
- DynamoDB's adaptive capacity handles hot-key traffic by distributing higher capacity to high-traffic partitions.
- Explanation of the adaptive capacity mechanism to optimize usage based on traffic.
- Overview of DynamoDB's three-way replication for stability and availability.

11:52 🔄 Global Secondary Index for Batch Queries
- Introduction of the DynamoDB Global Secondary Index (GSI) for supporting batch queries.
- A GSI acts like a normal Dynamo table and enables querying by attributes other than the primary key.
- Use of a GSI to facilitate batch queries like "get ongoing orders by Passenger ID."

14:54 🔍 Details on the DynamoDB Global Secondary Index
- Explanation of a GSI as a table with its own partition key.
- A GSI allows querying based on attributes other than the primary key.
- Comparison to materialized views, where data is duplicated for optimized querying.

15:49 🔄 Data Retrieval and Table Structure in DynamoDB
- DynamoDB usage for order and passenger retrieval.
- Explanation of key-value and batch queries in DynamoDB.
- Introduction to the DynamoDB Global Secondary Index (GSI) for batch queries.

16:30 🕒 Data Retention Challenges in DynamoDB
- DynamoDB's time-to-live feature and its impact on data retention.
- Challenges with adding time-to-live (TTL) to large tables.
- Strategy to manually delete items without a TTL attribute.

17:10 🔄 Choice of Analytical Database (MySQL) and Data Retention
- Adoption of MySQL for analytical purposes.
- Decision to use RDS (Relational Database Service) due to maturity.
- Limiting data retention in DynamoDB to three months.

18:19 📊 Data Ingestion Pipeline and Message Handling
- Usage of Amazon Kinesis Data Streams for the data ingestion pipeline.
- Handling failures through Amazon Simple Queue Service (SQS).
- Implementation of back-off retry for stream events.

19:32 🔄 Ensuring Consistency in the Data Ingestion Pipeline
- Back-off retry strategy for stream events and database-level consistency.
- Utilization of a Dead Letter Queue for unsuccessful retries.
- Possibility to rewind stream events from Kafka in worst-case scenarios.

21:05 🔄 Conclusion: Stability, Scalability, and Cost Efficiency
- DynamoDB for high availability in online order processing.
- RDS for scalability in supporting business requirements.
- Cost efficiency achieved through data retention strategies.

21:49 🛠️ Areas of Improvement and Future Plans
- Exploration of NoSQL databases like Elastic for more complex queries.
- Acknowledgment of potential improvements in the existing database solution.
- Continuous evaluation and refinement of the current system.

22:43 🚀 Key Takeaways and Insights for Application Design
- Understanding the strategy of handling millions of orders.
- Distinction between transactional and analytical processes.
- Adoption of a dual-database approach for optimized performance.

Made with HARPA AI
@ibnusina1373 · a year ago
Great, pak. Please cover use cases like this more often.
@dimasputraari · a year ago
This is really interesting, mas. If there's more, please share it. Maybe next time you could explain what orchestration and choreography are in architecture design, since so many teams now use choreography for their transaction flows, like Kafka or Kafka Connect for database replication.
@WilliamSuryaDarma · a year ago
An orchestra is the music thing, boss.
@alamgresik · 5 months ago
Imagine if public-service apps were like this too. Just imagine it for now, hehehe.
@dhanarputra555 · a year ago
When you discuss practical solutions like this, I'm really interested. All the development knowledge comes out.
@sanovalaw · a year ago
A GSI is like creating a new key/index on a column in the table, so that later you can sort/group by that new index.
@ProgrammerZamanNow · a year ago
Nice.
@edricgalentino · a year ago
@@ProgrammerZamanNow But in terms of the DB's resources and performance, is there any particular impact? It's basically like duplicating the table, so doesn't that double the resources? And how about the performance cost of the DB duplicating the data?
@sanovalaw · a year ago
@@edricgalentino Right, it definitely duplicates, as NoSQL tends to. But there's a single-table design concept for DynamoDB. I struggled at first too; a design mistake made my bill expensive.
@wahyono1739 · a year ago
I use DynamoDB; the id can't be set manually, it has to come from them. After each save completes, can it return the id that was created?
@sanovalaw · a year ago
@@wahyono1739 You mean when adding an item to the table, mas?
@yogamahendra4967 · 2 months ago
Great explanation, mas Eko.
@skzulka · a year ago
More like this please, pak. Interesting.
@NandaWidyatama · a year ago
Nice, getting to learn the architecture of a system with pretty heavy traffic...
@maiing1144 · a year ago
PZN's programming content is great and unique... awesome, bang.
@muhammadanwar-oh8cp · a year ago
Finally! Let's go, pak.
@asheaven1st · a year ago
So cool.. Crazy 👏 It's really cool that Grab is willing to share real-world cases like this.. 🙇‍♂
@MuhammadRizki-cl3ru · a year ago
It's Grab, mas.
@asheaven1st · a year ago
@@MuhammadRizki-cl3ru Oh, it's different then.. Okay.. I'll correct it..
@fxpianochannel · a year ago
Thank you, gan!
@hidayahapriliansyah · a year ago
Pak, I have a request. Please cover how to handle stock that can change from anywhere, like the case you once posted on IG.
@berthojoris · a year ago
Cool, bang... Cover stuff like this more often, let's go.
@RedianFikri · a year ago
Mas, may I request a tutorial for a simple version of a use case like this? Maybe in Java/Go/Node.
@saldisaid6209 · a year ago
Wah, let's go, pak Eko.
@RifkiArri · a year ago
up
@orek7327 · a year ago
A Dynamo GSI is maybe more like using insert/update/delete triggers in MySQL, right mas? So data is automatically stored/updated/deleted when the main table runs its operation.
@ProgrammerZamanNow · a year ago
Different. With that, we'd have to create the triggers manually.
@orek7327 · a year ago
@@ProgrammerZamanNow But does it work the same way? I've never used it.
@wisnuwardoyo8311 · a year ago
Actually the end goal is more about cost saving; the performance is probably not much different, since they'd been using DynamoDB and Aurora from the start anyway.
@RianY2K · a year ago
Great explanation.
@oshirogery · a year ago
Could you build the live project for it, mas? So we can picture it 😊
@alcodee · a year ago
Cool discussion, pak.
@mukhlisaryanto1454 · a year ago
Cover more, pak: material about the technologies companies like this use.
@maduresenerd5716 · a year ago
If DynamoDB were replaced with Cloud Firestore, would that be feasible? What would the pros and cons be?
@vaporizel · a year ago
The hard part of applying ideal systems like this is when the project depends on a client who doesn't want to know: everything must be a yes, or they walk immediately. For example, a system has been running for 5 years, then mid-way the client requests that all transaction details be stored for 5 years and exportable to a single Excel file (avg. 35 million transaction records per month). If you tell them it's impossible and offer a different solution, the client threatens to shut down the project and won't hesitate to move elsewhere.
@RobbyDianputra · a year ago
Just give them examples, mas. Even bank history is limited.. Tokopedia purchase history is limited too.. If the client still insists after that, that's something.
@muhawi9 · a year ago
If you have time, please build the case study for it, mas Eko.
@khairuddin7339 · a year ago
What specific skill would be good to learn right now in IT?
@naninfinitybigintnumbertrue · a year ago
hash map
@imamariefrahman5038 · a year ago
You can run a DynamoDB emulator locally; you can use Docker for it.
@yohanessuryoprabowo8760 · a year ago
Driver account tiers are gacor (hot), normal, and gagu (dead), where gacor = 30 orders, normal = 18 orders, gagu = 3 orders.
@faqihfahmi1342 · a year ago
Pertalite, pak.
@RIKOARIshowreel · a year ago
Oh, there's AWS too.. I thought Grab was exclusively on Azure..
@rifkiadnan743 · a year ago
I never expected a company at Grab's level to still use MySQL.
@AhmadsutonoSutono-bc6ok · a year ago
A suggestion, bang. I'm a Grab driver; how can I make my account gacor (get more orders)?
@PamudasanTutorial · a year ago
Pak, what programming language did you learn first?
@ProgrammerZamanNow · a year ago
php
@utuhlahur9817 · a year ago
pertamax
@scalarcoding · a year ago
If the pattern they use is OLTP with key-value pairs, then Firebase could be used for OLTP too, right bang? It can also auto-generate an ID for the document number. And the data is unstructured, which means a single document can have a huge number of columns.
@ProgrammerZamanNow · a year ago
It could, sure. It's just that Grab probably wants to stay on AWS; Firebase belongs to Google.
@brainplusplus1 · a year ago
DynamoDB can be installed on a laptop, but it's the dynamodb-local version.
@podebiz · a year ago
2 DBs: 1 DynamoDB, 1 MySQL for history, I think.
@thearka443 · a year ago
Hmm, so the data users see in the app only lasts 3 months, and beyond that it's kept for analysis. But why? As a user I no longer know what I've eaten :D (I can't repeat an order older than 3 months because the history is gone.)
@skzulka · a year ago
Usually past that point, with that much data, it becomes an issue; the deeper you go, the bigger the load. So in my opinion 3 months is more than enough.
@thearka443 · a year ago
@@skzulka But once a transaction is finished, isn't that data "at rest", only viewable (get) as history? Is it still that heavy?
@didi_abdillah · a year ago
@@thearka443 Even at rest it still consumes storage and memory, which means more cost.
@skzulka · a year ago
@@thearka443 As far as I know, the deeper the data sits, even fetching a single record tests the DB's performance, gan. With around a million records you'd definitely feel the impact, even with indexing. So that kind of lookup is only done when needed, usually internally for data analysis. If, say, 1000 users fetched old data, the server would choke too. That's why it's limited to at most the last 1 year (depending on how much the server can handle on the application side). As far as I know Tokopedia and the others also have limits: some only the last 3 months, some 6 months, max 1 year. CMIIW 🙏
@ProgrammerZamanNow · a year ago
The 3 months is in the transaction database. To view history, you still go through the analytical database, so the data can still be seen.
@afifahmad1292 · a year ago
Wow, millions of orders. My office can only manage around 50 thousand 😢😢
@IbnuSjahid · a year ago
Why does the ingested data have to go through Kafka first? Why not straight into MySQL?
@ProgrammerZamanNow · a year ago
So there's a buffer in case MySQL's speed can't keep up with the speed of DynamoDB.
@IbnuSjahid · a year ago
@@ProgrammerZamanNow Thanks, pak Eko.
@Kanookoochn · a year ago
Why not just use Vitess?
@ProgrammerZamanNow · a year ago
They probably wanted an AWS cloud solution.
@Cebong-qj1sv · a year ago
Pak, a question: if I want to go into AI/Data Science, do I have to learn backend stuff too, or how does it work? I'm still confused about the difference between backend and the AI/Data Science side.
@rafi4637 · a year ago
If you take AI/data in college, you usually pick a data science major, so you study things like machine learning, AI, data visualization, deep learning, and so on.
@Cebong-qj1sv · a year ago
@@rafi4637 Yes, I understand that part. What I'm still confused about is whether you learn it the same way as backend.
@rafi4637 · a year ago
@@Cebong-qj1sv You have to understand statistics and the like. Coding for data usually uses Python; it's generally not as complex as web dev, but you really have to understand the theory.
@ajrulrn · a year ago
What might the reason be for keeping the data for up to 3 months?
@ProgrammerZamanNow · a year ago
There's no ongoing order that lasts 3 months, right? It's done within hours. Honestly, even 3 months seems too long, because the data is already in MySQL; to view the history you can just look in MySQL.
@rikid972 · a year ago
Any longer and it just burns resources 😊
@baekbaek-aja · a year ago
Pertadex
@indahshe3160 · a year ago
I want you to just explain it directly, not read and puzzle out the English word by word. It's obvious you didn't prepare.
@ProgrammerZamanNow · a year ago
Well, sorry, bang. Just go read the link in the description directly; no need to get angry.
@agungindrawan3867 · a year ago
Lol, who are you to boss him around :v
@professorbrainstorm7765 · a year ago
Women ☕🗿
@kukuhaditya9228 · a year ago
Come on, bro, if you don't get what Kang Eko means, it just means your skills aren't there yet, haha. Don't post weird comments when your skills aren't even beyond Kang Eko's. Through the PZN channel, Kang Eko has already produced lots of quality programmers with the knowledge he teaches, while you just snipe from the sidelines, lol.
@agungindrawan3867 · a year ago
@@kukuhaditya9228 Haha, right, bang. Only the PZN channel puts in the effort to make fundamentals playlists from beginner all the way to advanced.