Your comment on Apple silicon is literally why I bought my first Max after years of fighting it. It's performant efficiency: the ability to be off the charger for way longer under load. I got to test drive a Lenovo X1 and it lasted 1.75 hours under decently heavy load, while my M1 Pro lasted about 7 hours. I've been annoyed at Intel for a while for just pushing inefficient power to chase ever-diminishing clock gains. Performant battery life is one of the best innovations of the last couple of decades imo, along with SSD storage.
@mitchierichie · 8 months ago
The battery life on the M1 is insane. The bootup time is practically instant, too.
@pieterrossouw8596 · 8 months ago
I've been a Windows/Linux guy, had plenty of fast ASUS gaming laptops and Dell Precision workstations. Switched to an M2 Pro 32GB and not sure what could convince me to switch back at this point.
@theairaccumulator7144 · 3 months ago
That's because MacBook CPUs are upscaled iPhone CPUs. Not a real processor.
@sc.frederick · 3 months ago
@pieterrossouw8596 I was a Linux guy too for years, Windows before that. Took a risk and bought an M1 Pro 32GB the day they launched; the full performance with ample time away from the wall was too hard to pass up. I want to switch back to Linux, but the battery life, quiet operation, trackpad, etc. are just too hard to leave. Other manufacturers need to step up their game...
@maimee1 · 2 months ago
@pieterrossouw8596 Price, a desk job, a CPU-intensive/GPU-intensive workload? The moment you want to start moving without a charger, though, it's probably game over for Intel laptops. Maybe AMD has a chance? Never tried that.
@jaryd_yarid · 6 months ago
Too complicated. Just use Python for loops.
@asdanjer · 5 months ago
I mean... it is faster than 14 seconds XD
@vncstudio · A month ago
kzbin.info/www/bejne/q6W3koONaJeagbs Very interesting optimizations in Python.
@themuslimview · A month ago
Hahaha. Wonder how fast Python can get for this.
@ratsock · 23 days ago
A few people got Python to around 9-10s.
@jperkinsdev · 8 months ago
Hey Prime, this was one of the best programming videos I've ever seen. I really want to see you tackle some of these problems and venture into that "unsafe" zone of Go that I'm so scared of.
@dejangegic · 3 months ago
What's the "unsafe" part of Go?
@jerofff · 3 months ago
@dejangegic Ditching Go and using JS.
@nightshade427 · 8 months ago
My first stab would be to memory-map the file. Split the data into the number of hardware threads available, have each thread process its chunk and build its own map of sum, min, max, count, etc. Then join the threads and tabulate the final results. This way each thread is independent and there is no locking on anything. Not sure how it would perform in Go or any other language, but that's the basic approach I would start with.
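The approach described above (chunk per worker, private map each, merge at the end) can be sketched in Go. This is only a minimal illustration on an in-memory byte slice, not a tuned 1BRC solution; the names `processChunk` and `splitAtNewlines` are my own, and the input is assumed well-formed (`Name;12.3` lines).

```go
package main

import (
	"bytes"
	"fmt"
	"runtime"
	"strconv"
	"sync"
)

// Per-station aggregate kept by each worker; no locks are needed
// because every worker owns a private map until the final merge.
type stats struct {
	min, max, sum float64
	count         int
}

// processChunk parses one newline-aligned slice of the input.
func processChunk(chunk []byte) map[string]*stats {
	m := make(map[string]*stats)
	for _, line := range bytes.Split(chunk, []byte{'\n'}) {
		if len(line) == 0 {
			continue
		}
		sep := bytes.IndexByte(line, ';')
		name := string(line[:sep])
		temp, _ := strconv.ParseFloat(string(line[sep+1:]), 64)
		s, ok := m[name]
		if !ok {
			m[name] = &stats{min: temp, max: temp, sum: temp, count: 1}
			continue
		}
		if temp < s.min {
			s.min = temp
		}
		if temp > s.max {
			s.max = temp
		}
		s.sum += temp
		s.count++
	}
	return m
}

// splitAtNewlines cuts data into up to n chunks, each ending on a line
// boundary, so no record straddles two workers.
func splitAtNewlines(data []byte, n int) [][]byte {
	var chunks [][]byte
	for i := 0; i < n && len(data) > 0; i++ {
		end := len(data) / (n - i)
		if nl := bytes.IndexByte(data[end:], '\n'); end < len(data) && nl >= 0 {
			end += nl + 1
		} else {
			end = len(data)
		}
		chunks = append(chunks, data[:end])
		data = data[end:]
	}
	return chunks
}

func main() {
	data := []byte("Hamburg;12.0\nOslo;-3.4\nHamburg;8.6\nOslo;1.2\n")
	chunks := splitAtNewlines(data, runtime.NumCPU())
	results := make(chan map[string]*stats, len(chunks))
	var wg sync.WaitGroup
	for _, c := range chunks {
		wg.Add(1)
		go func(c []byte) {
			defer wg.Done()
			results <- processChunk(c)
		}(c)
	}
	wg.Wait()
	close(results)
	// Single-threaded merge of the independent per-worker maps.
	total := map[string]*stats{}
	for m := range results {
		for name, s := range m {
			t, ok := total[name]
			if !ok {
				total[name] = s
				continue
			}
			if s.min < t.min {
				t.min = s.min
			}
			if s.max > t.max {
				t.max = s.max
			}
			t.sum += s.sum
			t.count += s.count
		}
	}
	h := total["Hamburg"]
	fmt.Printf("Hamburg %.1f/%.1f/%.1f\n", h.min, h.sum/float64(h.count), h.max)
}
```

For the real challenge the `data` slice would come from reading (or memory-mapping) the file, but the shape of the fan-out and merge stays the same.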
@daasdingo · 8 months ago
Exactly! I was so frustrated that they have such complex structures, when there is a much simpler approach that is very likely much faster!
@daasdingo · 8 months ago
The only obvious issue I can see with this is the overlap of IO with compute, so it might be necessary to tune the thread count significantly higher than the hardware thread count.
@nightshade427 · 8 months ago
Looks like someone implemented a C version of what I mentioned above and it runs in about 1-1.5s.
@egorandreevich7830 · 8 months ago
One of the main principles of Go is "share memory by communicating", which is why this route was chosen, I think.
@kippers12isOG · 7 months ago
This is what the fastest Java impls all did. 🏆
@ErikBackman242 · 8 months ago
Kudos to the Java community - all jokes about the language and the JVM aside - for launching a challenge that inspired extensive collaboration and interesting articles even outside their own "domain". Credit where credit is due. Applause.
@KangoV · 8 months ago
It was awesome to see the process the developer went through. Good job. But it amazes me that Java's JVM is so powerful that it can do this in 1.8 seconds without a native binary, and 1.5 seconds with GraalVM.
@doublekamui · 8 months ago
The JVM monitors frequently executed code and compiles it into native binary code on the fly. This means the next time the code is called, it can run much faster than before, because it's binary code tailored to the specific processor.
@javierflores09 · 8 months ago
@doublekamui JIT doesn't really come into the equation in this scenario, considering this challenge is a short-lived program rather than a long-lived service, hence why there are performance benefits from using Graal native binaries. The reason this is very fast is that they're making use of recent additions to the Java API to do manual memory handling in order to load the file faster, aside from being smart with the data structure, of course.
@neonmidnight6264 · 8 months ago
C# does it in 1.2s, because Java does not have the SIMD primitives that let you express the necessary code (Panama vectors have worse codegen, are higher level, and currently have cross-platform issues).
@spicynoodle7419 · 8 months ago
I would read 1k lines per thread and base the hash on the bucket index (bucket-1000-0, bucket-1000-1..., bucket-2000-0, bucket-2000-1...)
@watchbro3319 · 8 months ago
nah man u kiddin its 2024 i used PrimaGENAI 6x9B STFP model it finsihed a billion rows like in 0.00000000069s kk😎
@JacobyB · 8 months ago
what @watchbro3319
@CurtisCanby · 8 months ago
Do it. See if you get better or worse results… FOR SCIENCE
@spicynoodle7419 · 8 months ago
@CurtisCanby Imma do it in Zig cuz I'm learning it now
@NuncNuncNuncNunc · 8 months ago
"You need to use monoids" -- Haskell Fanboy
I'd think IO would be the first thing to tackle: find the fastest way to read the huge file without doing anything, then find the fastest way to process each chunk, then the fastest way to reduce the chunks.
@locusteamfortress3868 · 8 months ago
Good old map-reduce problem.
@rj7250a · 8 months ago
First thing is to copy each row 1 million times, because side effects are LE EVIL!
@NuncNuncNuncNunc · 8 months ago
@rj7250a Uh, yeah. If your scheme involves mutating the original data, you've already failed.
@rj7250a · 8 months ago
@NuncNuncNuncNunc Well, you need to do one copy from disk to memory anyway, because computers can only copy data, not move it. So either way you are keeping the original file intact. But side effects are essential for most fast algorithms. That is why sorting algorithms modify the array in place; imagine copying a 100 MB array every time a swap happens, 90% of the sorting time would be garbage collector work. If you want to keep the original array, you just copy it before sorting and sort the copy: one copy vs. millions of copies.
@NuncNuncNuncNunc · 8 months ago
@rj7250a Moving and mutating are not equivalent. It's splitting hairs to say a copy from disk is not moving data, but that's still beside the point that a read is non-mutating. Taking up the Haskell Fanboy mantle: all copies are equivalent, so n copies are accomplished in constant time and memory.
@aseeralfaisalsaad · 8 months ago
I recently adopted Go and am absolutely loving it as my personal primary backend language, and got away from JS/TS for everything. I still use NodeJS for my current job, though, but Go gives me the minimal-syntax vibe of Python and modern C++-like characteristics: speed and concurrency.
@StrengthOfADragon13 · 8 months ago
I love both this challenge and this article. It's an excellent example of the way I would expect optimization to work in industry, albeit in a more approachable format. Step 1 is make it work, then analyze and optimize approachable bottlenecks iteratively until you hit your goal. What the article skips over briefly is the extra reading and deep diving you would need to do to figure out which things are ACTUAL potential time saves if you aren't familiar with optimizations like this from the outset. Would LOVE to see something like this done for a website or other UI. (Make it work, then get the response time under 1/4 of a second, or whatever other goal you have. That case is more about finding unexpected bottlenecks and cutting them out.)
@jrgf6778 · 8 months ago
The record in Java is 1.4 seconds, dispensing with the garbage collector, using multithreading with 8 cores, and using the Vector API.
@Gusto20000 · 8 months ago
It's 02.957 on a 32-core server with 10k stations, which is the only fair way to evaluate, because the rules say 10k stations. The 1.535 result was achieved on, quote, "The 1BRC challenge data set contains 413 distinct weather stations, whereas the rules allow for 10,000 different station names".
@jrgf6778 · 8 months ago
Check again my friend: github.com/gunnarmorling/1brc
@jrgf6778 · 8 months ago
@Gusto20000 Check the Gunnar Morling repo again: the 1st place is 1.536 sec and uses all of the above, the 32 cores that you mentioned, a treemap to keep the info organized, and bit operations to calculate the values per station.
@jrgf6778 · 8 months ago
Also there is a discussion of whether the Linux kernel can be tuned to increase performance, because it affects the performance of compiling and executing the bytecode.
@nafakirabratmu · 8 months ago
The data is also preloaded on a ramdisk, so it's pretty much unbound by IO.
@josegabrielgruber · 8 months ago
Great article, loved the scientific approach to handling the challenge!
@hirenpatel6118 · 8 months ago
I'd probably do a boss-worker model on chunks of the lines: for each thread, return a map of the thread-local minimum/maximum/avg/count. On return you can then combine them. There's no need for input state to be synced. Also, as I'm watching more of this, IDK if an array vs. a struct would be that much different. A struct should be packed similarly to the array in this use case. You could probably even get away with using a tuple. An easy way to validate this would be to look at the Go IR for this program.
@Nenad_bZmaj · 8 months ago
Perhaps, instead of using a map, we could construct an array. I don't know how fast the hash function for maps is in Go, but perhaps it's worth trying the following. Convert three bytes from each name into 15 bits. A three-byte combination is unique across 10,000 names. We can, for example, take the first three bytes (if the characters in the name are not all ASCII, we should take the 4th, 8th and 12th bytes, with a length check first). Then, using fast bit operations, extract the bits at 5 characteristic positions of each byte into three contiguous bit chunks. This gives an unsigned number < 32,768, which is the index of an array element: a subarray consisting of min, max and avg. Sort using radix sort with a radix of 5 bits (the order should remain alphabetical, if I'm not wrong). The array will be two-thirds empty, but it's a rather small array, so it doesn't matter. Actually, I'm not sure about preserving alphabetical order if the characters are not all ASCII, but in that case we can still use the same procedure, just also store the name as a string in the array element and then sort by runes, also using radix sort. The first and last of the three stages of the radix sort can be done in parallel.
@TurtleKwitty · 8 months ago
Array vs. struct in that usage is the same, but you'd probably want to use a struct so you can try padding it out to cache-line size and aligning the entire thing on a cache line.
@IshaanBahal · 8 months ago
9:07 I'd personally have all routines process chunky reads from the file and return maps to merge in the coordinator routine (main); that way I won't have to use any mutexes. Might cause a bottleneck on the merge part, but the row processing would be fast enough in threads. Might end up similar to mutex waits, though. Actually, I kinda wanna write both and find out now.
@ark_knight · 8 months ago
DO IT!
@IshaanBahal · 8 months ago
@ark_knight Alright. Shall update here.
@anonymousanon4822 · 8 months ago
@IshaanBahal Comment so I get a notification
@machinima1402 · 8 months ago
Would love to see you solving the challenge iteratively
@ashersamuel958 · 8 months ago
yeah
@nyahhbinghi · 8 months ago
How would an iterative solution be different?
@shadowpenguin3482 · 8 months ago
I think OP meant iterative as in iterative software development: starting off with a first working solution and gradually improving it.
@ashersamuel958 · 8 months ago
@nyahhbinghi more pain more good
@owlmostdead9492 · 8 months ago
4:10 The Apple encoders are very efficient BUT they're much, much worse in quality at low (streaming) bitrates. Currently the best-quality h.264, h.265 and AV1 goes to Intel, then Nvidia, then AMD, and dead last Apple. To give you a reference, Apple's h.265 is about as good as Intel's h.264 at the same bitrate, so ~2x worse than Intel.
@ark_knight · 8 months ago
And AMD has the smallest file size.
@owlmostdead9492 · 8 months ago
@ark_knight It's also worse in quality; size/quality efficiency is pretty much still CPU software > .. > Intel > Nvidia > AMD > .. > Apple
@ark_knight · 8 months ago
@owlmostdead9492 Their H.265 encoders are pretty close to Nvidia's and the sizes are smaller. And then they have Xilinx (I think Xilinx?) FPGA-powered encoders which are superior even to Intel's, though they're not for consumers yet.
@bits360wastaken · 8 months ago
Source?
@owlmostdead9492 · 8 months ago
@bits360wastaken Numerous AV forums, and I tested it myself. I used Netflix's quality evaluation algorithm VMAF on the test footage. I tested the encoding of a 2020 Intel MacBook, a 2021 Apple silicon M1 Max MacBook, a 12th-gen Intel iGPU (ThinkPad X13) and NVENC on a 2080 Ti. The VMAF score was largely also representative of the visual difference in quality I could perceive with my own eyes, so I'm not just quoting numbers here.
@constantinegeist1854 · 8 months ago
Channels IIRC have mutexes internally. What I'd do: take a hash of the name, take a modulo, and use it to locate the target goroutine in a finite list. Each goroutine is associated with a preallocated slice + start/end indexes (implement the queue manually, a lock-free circular buffer). That would avoid synchronization costs. Enqueue while you read. Reading can be parallelized too (if it's not an HDD). Calculate on the fly without storing intermediate data structures.
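The sharding idea above can be sketched: route each record to a fixed worker by hash(name) % N, so every station is owned by exactly one goroutine and no map ever needs locking. For simplicity this sketch uses buffered channels rather than the hand-rolled lock-free ring buffer the comment describes (so it does pay channel synchronization costs); the ownership argument is the same, and all names here are illustrative.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sync"
)

const workers = 4

type record struct {
	name string
	temp int // tenths of a degree
}

// shard deterministically maps a station name to one worker index,
// so the same station always lands on the same goroutine.
func shard(name string) int {
	h := fnv.New32a()
	h.Write([]byte(name))
	return int(h.Sum32()) % workers
}

func main() {
	chans := make([]chan record, workers)
	counts := make([]map[string]int, workers)
	var wg sync.WaitGroup
	for i := range chans {
		chans[i] = make(chan record, 128)
		counts[i] = make(map[string]int)
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			for r := range chans[i] {
				counts[i][r.name]++ // map owned by this goroutine only
			}
		}(i)
	}
	for _, r := range []record{{"Oslo", -34}, {"Hamburg", 120}, {"Oslo", 12}} {
		chans[shard(r.name)] <- r
	}
	for _, c := range chans {
		close(c)
	}
	wg.Wait()
	total := 0
	for _, m := range counts {
		total += m["Oslo"]
	}
	fmt.Println(total) // 2
}
```

A real version would count min/max/sum per station instead of a plain counter, and could replace the channels with the per-goroutine ring buffers the comment proposes.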
@ycombinator765 · 8 months ago
7:43: "I always reduce my loads when I can ... damn!" This is what I am here for, every single fkin day!
@vallariag · 8 months ago
Really great article! Followed her on Twitter.
@chriss3404 · 8 months ago
Gonna be honest, it hurt to immediately realize that map access was going to be a problem late-game, and then learn at the end that they weren't going to do anything about it. This 1BRC thing is a really fun challenge!
@Exilum · 8 months ago
People probably pointed it out, but the rounding is indeed ceiling, as you figured out. Negative numbers, however, need to be rounded in the same direction, so there's some level of complexity to that. You can't just do /10*10 on the integer like some people suggested, unless Go has a different opinion than most other programming languages. I couldn't find the specifics for Go, but integer division should truncate toward 0, so negative numbers wouldn't come out right.
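The pitfall the comment describes can be shown in a few lines. Go's integer division does truncate toward zero, so a `/10*10` trick behaves differently for negatives; a minimal sketch of "round half up toward +infinity" to one decimal (assuming that is the intended rounding rule, as discussed in the video) looks like this. `roundTenth` is my own name.

```go
package main

import (
	"fmt"
	"math"
)

// roundTenth rounds to one decimal place with ties going toward
// +infinity ("round half up"), so -1.45 becomes -1.4, not -1.5.
// Truncating integer division (v / 10 * 10) would instead round
// negatives toward zero, giving a different result.
func roundTenth(x float64) float64 {
	return math.Floor(x*10+0.5) / 10
}

func main() {
	fmt.Println(roundTenth(-1.45), roundTenth(1.45)) // -1.4 1.5
	fmt.Println(-17/10*10, 17/10*10)                 // -10 10 (truncation toward zero)
}
```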
@_fakedub · 8 months ago
Always bamboozled by all the technical shit that goes on in this channel as I'm super new to coding, but this one I swear I understood. Great article.
@earthling_parth · 8 months ago
Same here 😂
@u9vata · 8 months ago
Relevant info for those who say "Java can do it in 1.5 seconds": beware that the Java tests ran on a RAMDISK on this machine: "dedicated server (32 core AMD EPYC™ 7502P (Zen2), 128 GB RAM). Programs are run from a RAM disk (i.e. the IO overhead for loading the file from disk is not relevant), using 8 cores of the machine." It is not a bad article, but as a high-perf nerd I also find the guy's thinking process fascinating: calculating sum/min/max on the fly instead of storing the file in memory first is not something he does on the first try, even without thinking (I saw this surprise Prime too). Also, mmap-ing the whole thing should work better (even with a ramdisk, I feel), plus doing threads the way the guy does here seems very wasteful. I literally think SIMD-style parallelism beats threads here very bigly. But if one really wants to use threads, I imagine it should be more like doing big chunks of the file I/O separately, like starting from line X (literally doing A, B, C, D threads, or even 8 threads that each read only part of the file, but HUGE parts). This means having 4-8 sets of min/max/count/sum values, one per thread per processed block (which can still happen with SIMD), and then after all of them finish (for which you don't need any inter-thread communication and can go fully lock-free) you just mix the results together. I think on that heavy EPYC server with a ramdisk, with well-set-up I/O in C, this should honestly be done in milliseconds, or 10-100 milliseconds at most. On "real" machines it should be bottlenecked by the I/O performance of your hardware.
@dand4485 · 8 months ago
I wonder about one more optimization... have a lookup table/hashmap for all temps? If you have a hash map of [string][float], for example allTemps["-99.9"] = -99.9, there's no parsing, just a straight hash lookup... Should be much faster than parsing anything...? Assuming the data is valid, for the temp values this should be perfect to get some perf :)
@sjoervanderploeg4340 · 2 months ago
It depends on your storage medium: 512 bytes is the "legacy" standard and 4096 bytes is the "new" standard sector size! You want to read an entire block of data at once, so read in multiples of the sector size!
@Laminad18 · 2 months ago
Imagine trying to solve this recursively. The poor call stack would be weeping.
@earthling_parth · 8 months ago
I would love it if this became a mini-series on YT from your streams, on your main channel or one of your secondary channels. This would be immensely helpful to noobs like me, in Go and just programming in general ❤
@goaserer · 8 months ago
A fun exercise, even though I'd probably not go as far as to write my own string-to-int parser
@AtariWow · 8 months ago
Now I want to see people do this in all sorts of languages and see just what comes of it. I also want to try this myself in Go, and if I was still well versed in Python, which I've given up for Go, maybe I'd try that and suffer.
@JArielALamus · 8 months ago
17:46 Wait a second, the rules state the rounding should be done on the output, not the input. Converting to int64 will cause a loss in precision and increase the rounding error of the final mean value a lot. Since all the values have only one fractional digit, why did the author not store this additional decimal digit and do some fixed-point arithmetic to reduce the rounding error?
@SmartassEyebrows · 8 months ago
Even easier than that: just take the numbers as whole numbers by ignoring the decimal (so 99.8 -> 998), do the sum, then divide by 10 at the end to return to a float with a single decimal, divide by count to get the average, then round. Very straightforward, with only two divisions and one round per station. You might even be able to avoid the divide by 10 with more clever manipulation, getting down to just the divide by count (average) and the round.
@SimGunther · 8 months ago
@SmartassEyebrows How can you tell the difference between 9.98 and 99.8 with your scheme? Checkmate! Edit: I totally forgot about the rule being "1 exact fractional digit" for each value at 10:28 🤦‍♀️
@JohanIdstam · 8 months ago
@SimGunther The rules say there is always one decimal digit in the dataset.
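The "ignore the decimal" trick from this thread is easy to sketch: since the rules guarantee exactly one fractional digit, a temperature like "-99.8" can be parsed directly as tenths of a degree (-998) with no float parsing at all. `parseTemp` is an illustrative name, and the input is assumed well-formed per the challenge rules.

```go
package main

import "fmt"

// parseTemp parses a temperature such as "-99.8" as an integer count
// of tenths (-998). It assumes exactly one fractional digit and no
// other characters, as the 1BRC rules guarantee.
func parseTemp(b []byte) int64 {
	neg := false
	i := 0
	if b[0] == '-' {
		neg = true
		i = 1
	}
	var v int64
	for ; i < len(b); i++ {
		if b[i] == '.' {
			continue // skip the single decimal point
		}
		v = v*10 + int64(b[i]-'0')
	}
	if neg {
		return -v
	}
	return v
}

func main() {
	fmt.Println(parseTemp([]byte("99.8")), parseTemp([]byte("-3.4"))) // 998 -34
}
```

Sums then stay in int64 (no float error accumulates), and a single divide by 10.0 and by the count at output time recovers the mean.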
@ScarabaeusSacer435 · 8 months ago
This is another case where Go isn't really any faster than Java (although it does have better memory consumption), and you give up so much power from being able to use other JVM languages like Scala and Clojure, not to mention all the third-party library support.
@tr7zw · 8 months ago
Makes it even more insane that the fastest Java solution is 00:01.535 seconds with GraalVM (compiling to native), and 00:01.880 seconds on a standard JVM.
@andrebrait · 8 months ago
Yes, but I've read it and it does a lot of low-level stuff you'd normally not do in Java. It literally reads memory directly with Unsafe and then uses the fact that the underlying architecture is little-endian to parse integers without branching.
@Gusto20000 · 8 months ago
Yeah, on a 32-core (64-thread) AMD EPYC, AND the "data set contains 413 distinct weather stations", not 10K stations. My Go solution is 27 sec on a laptop.
@FilipCordas · 8 months ago
.NET AOT at 1.197s and JIT at 1.296s
@deado7282 · 8 months ago
You would be able to do that in C# as well, but at this point you abandon everything that makes these languages what they are. Just use C/C++ before you end up doing C/C++ within another language.
@nafakirabratmu · 8 months ago
The data is also preloaded on a ramdisk
@BAXEN · 8 months ago
In Java, I never used Scanner. I only used a file reader and processed the data manually. I always found it to be quicker, and as a sanity check I had to know what the data being read was and how to handle reading it.
@rich1051414 · 8 months ago
Goroutines are coroutines, only in Go. Coroutines are a fancy way of saying threads, except they are managed.
@macaroni_italic · 8 months ago
Apple Silicon is significantly more energy efficient for basically every purpose. People can hate on Apple all they want, but the fact is that they are doing excellent work with their new ARM chips.
@LtdJorge · 8 months ago
Not to detract from them, but part of their efficiency advantage is that they are always at least one lithography node ahead.
@devcoffee · 8 months ago
The Sam Bankman-Fried prison photo is adorable
@spottedmahn · 8 months ago
5:00 Challenge accepted 😜! After many humans solve the 1BRC, wouldn't one be able to train a model that would then output decent code? Possibly something the humans didn't think of? 🤔
@juanitoMint · 7 months ago
I used to process credit card operations dumped by banks and used some of the optimizations shown here, but learned a couple more from this vid 💪
@AlexPaluzzi · 8 months ago
I am loving the Go content so much.
@nyahhbinghi · 8 months ago
Prime wants to do Go next, but that's a mistake. Golang itself was a mistake.
@AlexPaluzzi · 8 months ago
@nyahhbinghi you're a mistake
@CatFace8885 · 6 months ago
@nyahhbinghi Why so? Go seems like a beautifully simple language.
@garagedoorvideos · 8 months ago
Made great sense in the process of solving the 1BRC. Bravo.
@Amy-601 · 8 months ago
Exactly my first thought (for a first take/baseline): min and max! Storing a map/array takes space, one billion rows?? Hmm 🤔! Nope! Also split the rows and read, then aggregate all max/min from the split data. And then we improve... but for a first-take baseline brute force, that's what I'd do! 7:47 Number 3, classic producer/consumer with the wait() and notify()!! Who'd have "thunk" it? Lol 😂 12:57 Nice 👍 Also wondering, after processing, if he did a flush() of the buffered channel/stream. Yeah, I'd increase it to 1000 from 100... 21:15 Now he's talking my language: chunked reads (above) and min/max... lol 😂 yep 👍 agree 👍. Simple incremental optimization, nothing broken. That's how you'd want to introduce change in production too, or you'd have 1 billion customers clamoring at your door 🚪!! 24:16 Except I'd have shot for min/max in step 1. But that's just me! Lol 😂 Go is a lot like Java with the ReadBytes lol 😂. Good 😊 article/video! Yay 😀! - Amy (aside: TreeMap in Java is sorted by keys 🔑. If the temp was the key per city... hmm 🤔 29:21)
@Kane0123 · 8 months ago
I'd have it output random values and just claim it was a hallucination if wrong.
@cfhay · 8 months ago
I never tried to solve this myself, but my first thought was to mmap the file and access the bytes like it was memory. Then I don't need to think hard about how to optimize reading file chunks into a buffer. I don't know where this would end up performance-wise, though.
@__gadonk__ · 6 months ago
2:31 Hamburg mentioned! RAAAAAH 🔥🔥🔥🔥🔥🔥 There is no bad weather, only the wrong clothes!!!
@ame7165 · 7 months ago
I got Python down to 3.3 seconds on a 16GB M2 MacBook Pro. I don't think I can get below 1 second like the Java bros without resorting to rewriting it for compiling. Vanilla single-threaded Python using the csv module to iterate a row at a time took a little over ten minutes, so there's a lot of improvement to be had from that starting point 😆
@ClaytonTownley · 2 months ago
Code or it didn't happen.
@v2ike6udik · 8 months ago
Seal of approval
@f.f.s.d.o.a.7294 · A month ago
Arf arf arf
@v2ike6udik · A month ago
@f.f.s.d.o.a.7294 slaps flippers fast against the swollen tummy that is full of yak's blood, wooly bully yak's blood
@gregoryolenovich6440 · 2 months ago
Thank you twitch chat. I was actually triggered when he made a seal sound instead of saying "kissed by a rose".
@JensRoland · 8 months ago
"Kiss From A Billion Rows" - Ceil
@chezzy6366 · 12 days ago
15:39 How do you make those performance graphs?
@TheApeMachine · 8 months ago
"Creating a new array might seem easier" - true, but then you need to understand when and why sync.Pool is so much faster. Not at the end of the video yet, but my guess is a sync.Pool of pre-warmed goroutines is going to play a big part in a truly performant solution.
@aidanbrumsickle · 8 months ago
My current computer only has 8 gigs of RAM, so I guess I'd have to use some cloud VM or something to really test this, but it does sound fun. I'd love to try this in Go and Zig. I wonder how much the differences in SIMD instructions between Intel and ARM would impact the results.
@G.Aaron.Fisher · 8 months ago
I swear that no matter how long I'm in this profession, I'm always going to read Go as the board game rather than the language. Somehow I thought this might be about a computer Go tournament on a 1-billion-by-19 board.
@microcolonel · 8 months ago
The real way to solve this problem is to fix the stupid file format. Register each station to an integer identifier and each temperature to an integer value, and you can mmap the whole file and hand slices to any number of threads with no communication. You could even put the whole record in 4 bytes easily, making it trivial to extract the temperature and accumulate the stats with SIMD or vector machines. There is a way to do the mmap split without fixing the format, but it's a bit messy.
@microcolonel · 8 months ago
Wonder why they specify float rounding for something that fits losslessly in 11 bits of signed integer.
@7th_CAV_Trooper · 8 months ago
Prime just invented the DynamoDB sharding model. It's not a surprise that switching to int was faster. I'm just not sure how you get there quickly. Read left of the dot, multiply by 10, add right of the dot? 1 billion times? I don't know.
@TrebleWing · 4 months ago
On what planet does rounding toward positive yield -1.5 == 1? The closest integer value toward +infinity is -1.
@PristinePerceptions · 8 months ago
5:26 You're not a General AI. You're Captain AI. 😎
@alkamino · 6 months ago
WELCOME TO COSTCO, I LOVE YOU
@88Nieznany88 · 8 months ago
I think the trie was only for the identifiers, and inside we would've kept the same 4 values as in the map.
@uuu12343 · 8 months ago
Oh hey, it's the picture of SBF in prison
@wannabelikegzus · 8 months ago
Haha, so this is why this project was trending when I was looking around at Go projects.
@abhishek_k7 · 8 months ago
Noice way to learn optimization
@Soromeister · 8 months ago
Just use AWK for this.
@ersetzbar. · 8 months ago
I thought I was the only one doing the seal thing
@nexovec · 8 months ago
Is this some insanely fast lockless tree implementation competition disguised as a file-parsing exercise?
@digibrett · 8 months ago
I also thought of Batman Forever.
@stefdevs · 8 months ago
I think she resorted to threading way too early.
@johnathanrhoades7751 · 4 months ago
For those who come later: a trie is a different data structure than a tree.
@spottedmahn · 8 months ago
"Seal" 😂 2:10
@eric-seastrand · 8 months ago
I want to see somebody do this in browser JavaScript. Spawn background processes with service workers and such.
@ycombinator765 · 8 months ago
you naughty naughty
@test-rj2vl · 8 months ago
Would have wanted to see you code this challenge in this video.
@ArturdeSousaRocha · 4 months ago
24:13 Yeah, I love that this optimization is not rocket science.
@richcole157 · 8 months ago
All these conditions, and no attempt to account for the file cache on an IO-bound problem.
@marceloguzman646 · 8 months ago
5:14 DALL-E lol
@tobiasnickel3750 · 8 months ago
One question: isn't reading all the data from disk, or even SSD, already taking much more time than these little computations for each line? I mean, everything has to go through the SATA cable.
@nikolaoslibero · 8 months ago
Could just put it all into RAM first.
@tobiasnickel3750 · 8 months ago
@nikolaoslibero All right, if that is within the rules...
@Zekian · 8 months ago
Should probably memory-map the file and delegate file reading to the OS.
@JoshPeterson · 8 months ago
Lovitz is spot on
@complexity5545 · 8 months ago
Start bringing back the obfuscation challenges... lol...
@heartthymes · 8 months ago
I guess he could also pass pointers through the channel instead of objects.
@Nocare89 · 8 months ago
I had to pre-calculate distances between the top 1000-ish cities on earth. I calculated this as 2-4 billion rows to be saved in SQL. At first it looked like it was gonna take weeks running on a server. Then I put the SQL table into memory instead of a regular one. Then it ran in less than about 2 hours with PHP.
@nyahhbinghi · 8 months ago
1000 + 999 + 998 ... + 2 + 1 = 500,500.
@Nocare89 · 8 months ago
@nyahhbinghi I likely had both a-to-b and b-to-a rows. I know I had at least 2 billion rows. This was over 10 years ago; I don't have all the details squared away anymore lol. We also used some math shortcut to avoid comparing locations too close to each other. So maybe I remember the count of cities way wrong. Mostly I just remember manually fixing a ton of Hebrew/Arabic names that the DB didn't play nice with.
@nyahhbinghi · 8 months ago
@Nocare89 nice, it's all good
@kouoshi · 8 months ago
00:02 Kek "First photo of Sam Bankman-Fried in jail"
@bary450 · 8 months ago
Vedal indeed mentioned
@zuowang5185 · 8 months ago
It's M3 now for work laptops
@huvineshrajendran6809 · 8 months ago
Yes, Mistral is a thing.
@wolfgangsanyer3544 · 8 months ago
@theprimeagen if you go full-time content creation, your first project HAS to be automating turning off alerts while streaming.
@mad_t · 8 months ago
Java made it in 1.53 sec using a 32-core EPYC monster. An M1 Pro 10C/16G is nowhere near that beast :) So we can't even compare the results.
@MartinCharles · 4 months ago
This approach is scientific, but the writer made the biggest gaffe of all (and Prime too): optimizing something where there is no bottleneck. I'm guessing the parsing is the bottleneck, not the aggregate computation, so putting a queue or any sort of fanout in front of the aggregation probably just introduces overhead. I really doubt the overhead of a goroutine is less than adding a few numbers together.
@MasamuneX · 8 months ago
I ran into a problem just like this reading a financial dataset that was 9 gigs of data; tldr, I just chunked it.
@architbhonsle7356 · 8 months ago
You can do a map-reduce on this, right?
@HjalmarEkengren · 8 months ago
Yes, all fast implementations do a variation of that. One of the hard(ish) things is to quickly divide the file into roughly same-sized chunks so that you can do the map operation in parallel.
@architbhonsle7356 · 8 months ago
@HjalmarEkengren Hmm. Could you open the file, seek ahead an estimated number of bytes (total bytes / n), then go forward till a newline, and then jump again? Seems like a fun challenge.
@HjalmarEkengren · 8 months ago
@architbhonsle7356 That's the trick I think most use. They fast-forward total_bytes/no_threads, scan till the first newline and store a pointer, then repeat. The challenge truly is fun: none of the tricks used are hard, and they feel obvious in hindsight, but putting it all together gets some pretty interesting results. As well as demonstrating how far from a computer's potential the code we usually write is. And that something being IO-bound is a myth/BS :D
@alankewem · 8 months ago
Search for "Rinha de Backend", a more interesting challenge
@Hamsters_Rage · 7 months ago
TLDR - he is doing nothing, just reading somebody else's readme file
@alexandrustefanmiron7723 · 8 months ago
Trie != Tree
@edumorango · 8 months ago
me: .ceil 🧠: 🎶 But did you know that when it snows 🎶 🧑🏿‍🦲
@ThameemAbbas · 8 months ago
Unless I am trying to show a poorer baseline, my thought process was to aggregate them per key.
@KillianTwew · 3 months ago
14 seconds to release a billion swimmers is not great.