I liked the video, but I'm missing a link to a paper or formal description of the algorithm. It reminded me of historic versioning in backup systems. The problem there is similar, except the "network disruption" is effectively constant and the buffer limit may grow slowly. What you want is for the retained set to grow sublinearly with time while still sampling roughly uniformly back to epoch: more elapsed time means more history to cover, and you don't need tight uniform coverage of short time spans filling up your system.
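To make that retention idea concrete, here's a minimal sketch (my own construction, not from the video): keep a buffer of snapshots whose capacity grows like sqrt(t), and whenever it overflows, drop every other interior snapshot so the survivors stay roughly evenly spaced from epoch to now. The class name, the sqrt growth rate, and the decimation rule are all illustrative choices.

```python
import math

class SublinearBuffer:
    """Retain timestamped snapshots with a capacity that grows roughly
    like sqrt(elapsed time), keeping spacing approximately uniform
    back to epoch. Hypothetical sketch, not a production policy."""

    def __init__(self):
        self.kept = []  # list of (timestamp, snapshot), oldest first

    def _capacity(self, t):
        # Sublinear growth: allow roughly sqrt(t) snapshots at time t.
        return max(2, math.isqrt(t) + 1)

    def add(self, t, snapshot):
        self.kept.append((t, snapshot))
        cap = self._capacity(t)
        while len(self.kept) > cap:
            # Decimate: keep every other snapshot, but always keep the
            # oldest (epoch) and the newest, so coverage stays roughly
            # uniform while the buffer shrinks by about half.
            thinned = self.kept[0:-1:2]
            thinned.append(self.kept[-1])
            self.kept = thinned
```

For example, after feeding in 100 snapshots at t = 0..99 the buffer holds at most isqrt(99) + 1 = 10 of them, always including the epoch snapshot and the most recent one. A real system would probably tune the growth function (log, sqrt, or something tiered like grandfather-father-son rotation) to its storage budget.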