What if GC got AI-powered in the future... could it change the game for Java?
@TheGeekNarrator 3 days ago
GC has already gotten significantly better. Look at the new GCs that are also optimised for massive heaps. About AI, I am not sure how that would work and what the overhead of it would be. Only time will tell. Currently there are a few applications, like stream processing, that can have problems even with minimal GC pauses; for those I think a non-GC language might be a better choice. But again, it's all about trade-offs and there is no single answer.
@thatguyadarsh 22 days ago
Very detailed and to the point discussion. I am in awe as this is just what I needed. Very grateful for this discussion. Thanks for sharing!!
@TheGeekNarrator 22 days ago
Thanks for watching
@VahidOnTheMove 23 days ago
How about editing/deleting a record?
@arun553 25 days ago
absolute heat, thanks! I'm a big fan of the highlighted subtitles fwiw
@RandhirKrSingh-x9l 26 days ago
Thank you for creating this video Kaivalya! I learned so much about DynamoDB in just 90 minutes from you and Alex. Such valuable insights!
@arun553 29 days ago
solid, ty! The subtitles came in handy, I was pretty much relying on those when playing at faster playback speeds!
@hariharapamarjane2215 1 month ago
How many parts will there be?
@TheGeekNarrator 1 month ago
At least 5, but I will keep adding to this series. There is just so much to talk about 😀
@Alpheus2 1 month ago
Cool format. Pleasantly surprising how much info you can convey in a fun way in under 20 mins.
@PraveenKumarBN 1 month ago
This is amazing... Keep coming up with more content, especially on Graph Databases
@hanamufidah5809 1 month ago
Thank you both for the vid. It's a great discussion. 30:02 — Instagram allows users to edit a post's dynamic content (description, caption, location, tags, etc.)
@VaibhavPatil-rx7pc 1 month ago
Awesome explanation 🎉, thanks
@abhishekshrivastav7941 1 month ago
He has not spent much time in industry, where extremely busy ICs schedule open office hours so that their work doesn't get impacted by everyone disturbing them.
@souravdhar47 1 month ago
❤ this content is just awesome
@mkres 1 month ago
Am I the only one questioning how packing 10,000 business transactions into a single DB transaction works consistency-wise? :)
@DurgeshGautam-d2b 1 month ago
Bb
@harshavardhanreddykhvr 2 months ago
How is HDFS different from the above-mentioned approach?
@TheGeekNarrator 2 months ago
HDFS is a file system, while we talked about object storage. A file system is hierarchical, while object storage is flat, which allows for virtually unlimited storage and scalability; a file system has limitations in that regard. A file system might be better in terms of speed and performance, but again it depends on the total size etc. So yes, the approach we discussed sits purely on top of cloud object storage.
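To make the flat-namespace point concrete, here is a rough sketch using the AWS SDK for Java v2 (bucket and key names are made up for the example): the key below looks like a nested path, but to object storage it is a single opaque string in a flat keyspace, so there is no real directory tree to traverse or rebalance.

import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

public class FlatKeyExample {
    public static void main(String[] args) {
        try (S3Client s3 = S3Client.create()) {
            // The "/" characters are only a naming convention: the key below is one
            // flat string, not a nested directory structure (hypothetical names).
            s3.putObject(PutObjectRequest.builder()
                            .bucket("example-bucket")
                            .key("topic-a/partition-0/segment-00001.log")
                            .build(),
                    RequestBody.fromString("segment bytes go here"));
        }
    }
}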
@KavirajKanagaraj 2 months ago
What is the Rust library Simon talked about for building flexible storage formats? I heard something like "archive", but I'm failing to find the right one.
@TheGeekNarrator 2 months ago
Good question: github.com/rkyv/rkyv
@simon_eskildsen 1 month ago
rkyv
@manish-mk 2 months ago
These podcasts are a goldmine! Also, I watched your only livestream today while I was on the metro. Got many insights. You should do more livestreams.
@TheGeekNarrator 2 months ago
Thanks 🙏🏻 I will make a note for livestreams.
@Md_sadiq_Md 2 months ago
Pushing the algorithm ❤
@himanshupandey9303 2 months ago
Thanks for updating the video. Great content. Keep it up.
@TheGeekNarrator 2 months ago
Your feedback helped me. Thanks :)
@AmeliaArdath 2 months ago
Hi! So, I just integrated DuckDB into my little open source distributed trace viewer and came here to learn more about its internals, but I have to stop the talk for a second and thank you profusely for having clear, amazing subtitles right in the video. As a programmer with profound hearing loss, it's usually a struggle for me to get through technical videos with the jumpy auto-generated captions YouTube provides, and it is so rare that I get to just sit back and enjoy a talk like a hearing person would. Thank you for providing this experience!
@TheGeekNarrator 2 months ago
This comment has made my day. Thanks a lot for sharing that. Unfortunately I got a lot of comments asking me to remove those subtitles as they were distracting so I had to remove them from other videos. Now I am motivated to find a way to make it work for both use cases.
@HimanshuKanodia 2 months ago
👍
@HimanshuKanodia 2 months ago
I don't build product directly, I do more of integration work, but honestly, even if I try to think from a user's perspective, I don't see people around me, or in teams, willing to work like that or let others do so. 🤣🤣
@HimanshuKanodia 2 months ago
Nice podcast. I only started listening to your podcast today.
@VipulVaibhaw 2 months ago
This was awesome!
@TheGeekNarrator 2 months ago
Thanks a lot Vipul 🙏🏻
@bbowjazz 2 months ago
Heikki rocks!
@aneksingh4496 2 months ago
Your podcasts and the way you dig down to a detailed level are revolutionary... please keep making such insightful videos... this has enhanced my knowledge many fold... let's catch up if you want
@anthonya880 2 months ago
Please remove members-only videos. It doesn't make sense yet for your channel.
@TheGeekNarrator 2 months ago
Thanks for the feedback. I am happy you added "yet", which tells me you believe 😀🙏🏻
@anthonya880 1 month ago
@TheGeekNarrator Yes, your channel is different. It has a lot of potential.
@RokijulIslam-sk4dt 2 months ago
😅o
@D9ID9I 3 months ago
The proper way is to rewrite it using native languages like C++ or Rust. Using Java again is a plain joke.
@kaimingwan526 2 months ago
Language has never been an issue with Kafka. Embracing the Kafka ecosystem and being fully compatible with Kafka is an important trade-off.
@D9ID9I 2 months ago
@kaimingwan526 Efficiency is the issue. It translates to the number of resources you have to pay for monthly. That's why Redpanda exists.
@costathoughts 3 months ago
Thank you Kaivalya for the video, it was really well explained and the animations are amazing. I would like to point out just one thing: the title "Reinventing Kafka the right way" is, I think, not precise, since the part that AutoMQ is improving is the storage engine and not the message middleware broker (if I understood correctly based on this video). It was more an opportunity to enhance an isolated component of Kafka for cloud storage instead of a full rewrite. An example of "reinventing the right way", in my point of view, would be ScyllaDB, which has backwards compatibility with Cassandra although it was rewritten in C++. I wish you the best bro, keep going!!!! EDIT 1: And they created a platform over an existing Kafka concept
@TheGeekNarrator 3 months ago
Thanks for watching and for the feedback, I appreciate it. I think you made a fair point; I guess "re-engineered" would have been a more precise word. I will change that.
@ShortGiant1 3 months ago
Great video, learnt a lot!
@nosh3019 3 months ago
Thanks for this interview. Great as always! One question: if the WAL writes are batched and periodically flushed to storage, is there a window in which, if the process crashes, the writes will be lost? I.e. the write that the client was told is complete is not actually durable?
@TheGeekNarrator 3 months ago
Thanks for watching. Yes, there are ways to configure the behaviour. This might help you: slatedb.io/docs/faq/#what-happens-if-the-process-goes-down-before-slatedb-flushes-data-to-object-storage
@ShortGiant1 3 months ago
Yeah that’s the “linger” stuff they spoke about. The client won’t get an ack about the write being successful until the flush happens. Therefore it would be as if the write request never happened (in practice the client would retry)
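Roughly this, as a toy Java sketch (not SlateDB's actual API, just to illustrate the ack-after-flush semantics): put() hands back a future that only completes once the buffered batch has been flushed, so an acked write is durable by definition.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Toy write buffer: a caller's future completes only after flush,
// so "acked" always means "already flushed to object storage".
class BufferedWal {
    private final List<CompletableFuture<Void>> pending = new ArrayList<>();

    synchronized CompletableFuture<Void> put(byte[] payload) {
        // The payload is buffered in memory here; it is NOT yet durable.
        CompletableFuture<Void> ack = new CompletableFuture<>();
        pending.add(ack);
        return ack; // caller awaits this before treating the write as complete
    }

    synchronized void flush() {
        // ... write the buffered batch to object storage here ...
        pending.forEach(f -> f.complete(null)); // only now do callers see success
        pending.clear();
    }
}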
@harshrai456 3 months ago
The storage architecture reminds me of the AWS RDS (Aurora) architecture, with the control plane managing a multi-tenant storage fleet and a write quorum for strong consistency. Great discussion, loved it.
@Jasin-p6w 3 months ago
Wonderful video
@jmitesh01 3 months ago
Information-dense conversation! And isn't it expected that when the data size is sufficiently large, it would increase the number of PUT calls to object storage? Why don't you do buffering based on time plus size criteria?
@TheGeekNarrator 3 months ago
Good question. I think there are plans, but nothing has really been finalised yet. Maybe in the future a hybrid setting will be available. IIRC I asked this question to Chris.
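If a hybrid setting does land, the core of it would be a flush condition roughly like this sketch (illustrative only, not the project's actual code): flush whenever the buffered bytes cross a size threshold or the oldest buffered write has waited longer than the linger interval, whichever comes first.

// Illustrative hybrid flush policy: flush on size OR time, whichever triggers first.
class HybridFlushPolicy {
    private final long maxBufferedBytes;  // size threshold, e.g. 8 MiB
    private final long maxLingerMillis;   // time threshold, e.g. 100 ms

    HybridFlushPolicy(long maxBufferedBytes, long maxLingerMillis) {
        this.maxBufferedBytes = maxBufferedBytes;
        this.maxLingerMillis = maxLingerMillis;
    }

    boolean shouldFlush(long bufferedBytes, long oldestWriteAgeMillis) {
        return bufferedBytes >= maxBufferedBytes            // enough data accumulated
                || oldestWriteAgeMillis >= maxLingerMillis; // or waited long enough
    }
}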
@thrawn01 3 months ago
Thank you for this, looking forward to assisting with the golang port! This was a great overview of the project, and a great way to get acquainted with some of the design decisions.
@TheGeekNarrator 3 months ago
Thanks for watching 🙏🏻
@plargyle 3 months ago
I love how you simplify complex topics in the crypto space.
@MarkHarrison-g7r 3 months ago
Like all DuckDB users, I really enjoy hearing Mark Raasveldt talk. You did a great job of guiding the conversation into interesting and informative areas!
@TheGeekNarrator 3 months ago
Thanks a lot for watching and sharing your thoughts 🙏🏻😀
@arkadutta744 4 months ago
This is super cool. Probably the only YouTube video [> 1 hr] which I watched in whole. I took a whole week to watch it... made sure I understood every idea presented and discussed. Thank you so much. I have a question => partition placement: is DDB still using consistent hashing for deciding which partition goes to which node [physical server], or is there some other algorithm used now? The original Dynamo paper mentioned consistent hashing [I read half the paper]. Precisely => 1. How is it decided which key goes to which partition? 2. How is it decided which partition goes to which node [physical server]? Consistent hashing works in a bit different way... all of the people watching this video know how. For example, in normal consistent hashing I guess it is hard to keep a limit on partition size... I have read some part of the original consistent hashing paper... in my quest to understand that paper... somehow I reached your video... and I thank myself for that.
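For reference, the classic scheme I have in mind is roughly the sketch below (consistent hashing with virtual nodes, purely illustrative and not DynamoDB's actual placement code): keys and nodes are hashed onto the same ring, and a key is owned by the first node clockwise from its hash.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.SortedMap;
import java.util.TreeMap;

// Classic consistent hashing: each node gets many virtual points on the ring to
// smooth the load; a key belongs to the first node at or after its hash position.
class ConsistentHashRing {
    private final TreeMap<Long, String> ring = new TreeMap<>();

    void addNode(String node, int virtualNodes) {
        for (int i = 0; i < virtualNodes; i++) {
            ring.put(hash(node + "#" + i), node);
        }
    }

    String nodeFor(String key) { // assumes at least one node has been added
        SortedMap<Long, String> tail = ring.tailMap(hash(key));
        return tail.isEmpty() ? ring.firstEntry().getValue() : tail.get(tail.firstKey());
    }

    private long hash(String s) {
        try {
            byte[] d = MessageDigest.getInstance("MD5").digest(s.getBytes(StandardCharsets.UTF_8));
            long h = 0;
            for (int i = 0; i < 8; i++) h = (h << 8) | (d[i] & 0xFF); // first 8 digest bytes as ring position
            return h;
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }
}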
@imranfp 4 months ago
Well presented and explained. The best part was when Matthias used the pointer to show exactly which part of the presentation was under discussion. Thanks 👏