pavlo possibly the most dangerous man in database systems
@hasan0770816268 3 years ago
7:37 agenda (sorting and aggregation)
8:17 why we need sorting?
11:26 external merge sort
13:01 2-way external merge sort
23:52 double buffering optimization
24:27 general k-way merge sort
29:05 B-tree for sorting
37:35 aggregation
@firstlast7086 3 years ago
38:32 the best definition for aggregation I've ever heard
@pb25193 4 years ago
The Indian dude asks really good questions. Good job Andy handling them masterfully.
@patrykuchman5388 3 years ago
25:02 - The general external merge sort. Andy mentions that, in general, if we have B buffer pool pages, we can use B-1 pages to store the sorted runs and 1 as the output. But wouldn't it be better to dedicate more than 1 page to the output? If B is high and we only have 1 output buffer page, then every time it fills up we have to write that page to disk and wait until the write finishes before we can proceed with merging.
@galaxy_mosaic3587 2 years ago
My only constructive feedback would be that it would be nice for the content to be formatted so that the full slide is visible, and for the proportions to be adjusted so that the recorded video replay doesn't always overlap the lower-right corner of the slide, because sometimes important content gets hidden.
@th3sly 5 years ago
was that guy wearing a MongoDB t-shirt? :)))
@abdelrhmanahmed1378 2 years ago
1:04:00 In the rehash phase, do we do a sequential scan over the buckets? If yes, wouldn't it be better to just do a sequential scan over the original data?
@420_gunna 4 years ago
dj drop tables last heard muttering I aint goin back as he left
@akshpatel7935 3 years ago
21:31 How does pass #3 fit entirely in main memory? Wasn't the precondition that the whole dataset can't fit in main memory?
@andypavlo 3 years ago
It doesn't fit entirely in memory. You fetch one page at a time per sorted run and then fill up the output buffer. When the output is full, you write it out to disk.
@akshpatel7935 3 years ago
@@andypavlo Thanks a lot Prof. Pavlo, that clears things up. I was not expecting a response so soon on a 2 year old lecture. And thank you for making the lectures, assignments and problem sets public.
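Prof. Pavlo's answer above can be sketched in a few lines of Python. This is a toy model, not actual DBMS code: `PAGE_SIZE`, the run values, and the function names are all made up for illustration. Each sorted run is consumed one page at a time (so the full dataset never needs to be in memory), and the single output buffer page is written out whenever it fills.

```python
import heapq
import itertools

PAGE_SIZE = 2  # tuples per page (made-up illustrative value)

def pages(run):
    """Yield a sorted run one page at a time (stands in for disk reads)."""
    for i in range(0, len(run), PAGE_SIZE):
        yield run[i:i + PAGE_SIZE]

def kway_merge(runs, write_page):
    # One page-at-a-time stream per sorted run: only one page per run
    # is in memory at any moment.
    streams = [itertools.chain.from_iterable(pages(r)) for r in runs]
    out = []  # the single output buffer page
    for v in heapq.merge(*streams):
        out.append(v)
        if len(out) == PAGE_SIZE:
            write_page(out)  # flush the full output page to "disk"
            out = []
    if out:
        write_page(out)  # flush the final partial page

disk = []
kway_merge([[1, 4, 7], [2, 5, 8], [3, 6, 9]], disk.append)
print(disk)  # [[1, 2], [3, 4], [5, 6], [7, 8], [9]]
```

The blocking `write_page` call here is also exactly the stall that the double-buffering optimization (23:52) hides, by flushing one output page in the background while the merge fills another.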
@陈洋-r1s 4 years ago
In the hashing approach, why do we need to rehash rather than just operate on the buckets from phase 1?
@AshishNegi1618 4 years ago
If the first page has 1k key-value pairs (some with duplicate keys), you need some data structure to group the values of the same key together, hence the 2nd hash table. Now, it looks like we could just use the first hash table and the second is redundant, but from this lecture the idea seems to be: first collect all values of a key together using the first hash, then calculate the aggregation using the second hash.
@huangwentao 4 years ago
@@AshishNegi1618 In my opinion, if we just use the 1st hash function, we get different partitions, and the subsequent aggregation can be performed by scanning the key-value pairs inside each partition. But that may require a large number of scans, which is not very efficient. If we use the 2nd hash function, we can avoid those scans: we can perform the aggregation with a single scan over the hash keys, with the help of the 2nd hash tables.
@zhchen6644 3 years ago
Because you can't: the data is larger than your memory.
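The two-phase scheme discussed in this thread can be sketched as a toy Python model (all names and values here are made up for illustration; a real DBMS would spill each phase-1 partition to disk). Phase 1 uses hash function h1 to partition tuples so that equal keys land in the same partition; phase 2 rehashes each partition, which now fits in memory, into a second in-memory table (here just a dict) to compute the aggregate in a single scan.

```python
NUM_PARTITIONS = 4  # made-up value; really bounded by buffer pool size

def h1(key):
    """Phase-1 partitioning hash."""
    return hash(key) % NUM_PARTITIONS

def hash_aggregate(tuples):
    """Toy COUNT(*) GROUP BY key via partition + rehash."""
    # Phase 1: partition. Equal keys always go to the same partition,
    # so each key's group is confined to one partition.
    partitions = [[] for _ in range(NUM_PARTITIONS)]
    for key in tuples:
        partitions[h1(key)].append(key)

    # Phase 2: rehash. Each partition fits in memory, so one scan per
    # partition into a fresh in-memory hash table finishes the aggregate.
    result = {}
    for part in partitions:
        table = {}  # the "second" hash table
        for key in part:
            table[key] = table.get(key, 0) + 1
        result.update(table)
    return result

counts = hash_aggregate(["a", "b", "a", "c", "b", "a"])
print(counts)  # {'a': 3, 'b': 2, 'c': 1} (key order may vary)
```

This also answers the question at the top of the thread: you cannot simply aggregate over the phase-1 buckets directly, because a bucket on disk still has its duplicates scattered; the rehash is what brings each key's values together in memory in a single pass.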
@alexalexfff 3 years ago
What textbook is mentioned in the development hints slide?