R Data Structures
23:56
1 month ago
Prep18 Advanced speech apps
25:07
7 months ago
Text-to-Speech, lecture 1 (Prep15)
31:42
Sound waves and sampling rate
15:36
Prep 10 ASR lecture2
39:09
8 months ago
MFCCs in Praat
7:44
8 months ago
Comments
@paulmairo
@paulmairo 1 day ago
Nice video; this comes in handy, as I was indeed asking myself what use cases warrant reaching for PyO3. I am wondering, though: if we convert the call `out_dict.get(w, 0)` to a "dummy" `if w in out_dict` check, won't it be faster than actually trying the lookup? Something I also find missing in the video is memory and CPU (core) usage. Not that I think Python would do better there, but it would be interesting to check.
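A minimal sketch of the comparison this comment suggests, assuming a word-counting loop; the names out_dict/words below are illustrative and not taken from the video's code:

import timeit

words = ("the quick brown fox jumps over the lazy dog " * 10_000).split()

def count_with_get(tokens):
    # dict.get with a default: one expression per token
    counts = {}
    for w in tokens:
        counts[w] = counts.get(w, 0) + 1
    return counts

def count_with_membership(tokens):
    # explicit membership test before reading/writing the key
    counts = {}
    for w in tokens:
        if w in counts:
            counts[w] += 1
        else:
            counts[w] = 1
    return counts

print("get():      ", timeit.timeit(lambda: count_with_get(words), number=20))
print("membership: ", timeit.timeit(lambda: count_with_membership(words), number=20))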
@sampathnkn1418
@sampathnkn1418 1 day ago
Great job, keep it up!
@ekbphd3200
@ekbphd3200 1 day ago
Thank you much!
@dustinhess6798
@dustinhess6798 8 days ago
Well, I played around a bit, and what I found was that if you just use mean() from the Statistics package instead of the home-grown, straightforward for-loop implementation, you get an improvement even over the bytes method. Below is how I modified the function; it makes it a bit simpler and more readable, as well as giving a performance boost, without the fancy byte stuff. (Nothing wrong with fancy byte stuff; that was a good catch.)

using Statistics  # for mean()

function get_mattr(word_list::Vector{String}, window_span::Int = 50)
    n_words = length(word_list)
    effective_window_span = min(window_span, n_words)
    n_windows = n_words - effective_window_span + 1
    if n_windows <= 0
        return get_ttr(word_list)
    end
    mean_ttr = mean(get_ttr(word_list[i:(i + effective_window_span - 1)]) for i in 1:n_windows)
    return mean_ttr
end

Here is a link to the data and the output graph I generated: drive.google.com/drive/folders/1-AelwjZZtAPGKf_bLkhkLOC0ZZUWBuTf?usp=sharing
@dustinhess6798
@dustinhess6798 8 days ago
Hey, nice vid. I am a physicist; I work for a photonics quantum computing company and use Julia for modeling in my work. One thing you may consider is using the BenchmarkTools package for Julia. I am not sure, but the tail at the beginning of your graph might be due to the JIT compiler optimizing. If this is something you do often, you could precompile the Julia code once it's optimized, and that would negate the JIT start-up time. I will play around a little bit and get back to you. I like the attitude of always being willing to learn something from someone else. There is so much out there to learn if we just listen and don't jump to conclusions.
@StupidInternetPeople1
@StupidInternetPeople1 20 days ago
Amazing doucheFace thumbnail! Congrats you look like every unimaginative, lazy creator on YT. Clearly intelligent people choose stupid face thumbnails because looking like an idiot is a huge indicator that your content must be amazing! 😂
@iraqi2015
@iraqi2015 24 days ago
When I run it on Linux and click on Start, it crashes and closes. I don't know why!
@ekbphd3200
@ekbphd3200 24 days ago
Darn. Double check that you have the latest version and perhaps ask for help on their discussion board: www.laurenceanthony.net/software/antconc/
@ahmedal-attar3478
@ahmedal-attar3478 26 days ago
Probably worth noting: Polars is quicker because it's multi-threaded and uses all the cores on the machine, whereas Pandas is single-threaded.
@ekbphd3200
@ekbphd3200 25 days ago
Thank you for pointing that out! I appreciate it.
@paulselormey7862
@paulselormey7862 25 days ago
Nice take; benchmarks must go beyond speed. How many resources (CPU, memory) are used to achieve the apparently faster speed?
@gardnmi
@gardnmi 27 days ago
pandas has a join() method. It's supposedly faster. You just have to set the join columns as the index before calling it.
@ekbphd3200
@ekbphd3200 26 days ago
Thanks for the comment. However, I can't get join() to be faster than merge(); in fact, join() is 4x slower than merge() in my code. In the pandas section of my code here (github.com/ekbrown/scripting_for_linguists/blob/main/Script_polars_pandas_left_join.py), when I comment out my merge() line and uncomment the two set_index() lines and the join() line, it is 4x slower. If you can get set_index() + join() to be quicker than merge(), please leave a reply with how. Thanks!
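For reference, a minimal sketch of the two approaches being compared; the frame and column names are illustrative and not taken from the linked script:

import pandas as pd

left = pd.DataFrame({"word": ["a", "b", "c"], "freq": [10, 20, 30]})
right = pd.DataFrame({"word": ["a", "b", "d"], "pos": ["DET", "NOUN", "VERB"]})

# merge(): joins on the named column directly
merged = left.merge(right, on="word", how="left")

# set_index() + join(): joins on the index, so both frames are re-indexed first
joined = left.set_index("word").join(right.set_index("word"), how="left").reset_index()

print(merged.equals(joined))  # the two results should be identical frames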
@xoruporu310
@xoruporu310 29 days ago
What is the difference between the original and the X version?
@ekbphd3200
@ekbphd3200 26 days ago
As I understand it, the X version works better than the original version with bigger corpora and with XML, and it has other features. Take a watch of the webinar by Vaclav (the lead on LancsBox and LancsBox X) here, if you'd like: kzbin.info/www/bejne/oJqYhJKuop2ShJIsi=luso4bFa4gl7UP79
@xoruporu310
@xoruporu310 26 days ago
@@ekbphd3200 thank you!!
@TheBIMCoordinator
@TheBIMCoordinator 1 month ago
I have been learning rust so looking forward to watching this video!
@ekbphd3200
@ekbphd3200 1 month ago
Awesome!
@TheBIMCoordinator
@TheBIMCoordinator 1 month ago
Great vid!
@ekbphd3200
@ekbphd3200 1 month ago
Thanks!
@TheBIMCoordinator
@TheBIMCoordinator 1 month ago
I really enjoy this channel! I have been picking up Rust, coming from Python, trying to solve speed bottlenecks.
@ekbphd3200
@ekbphd3200 1 month ago
I’m so glad!
@iamwhoiam798
@iamwhoiam798 1 month ago
The blue & pink lines are roughly linear. At roughly 70k, there could be some memory allocation or something else that dropped the performance; it looks more like a one-time thing within each test above 70k. With this, I tend to think that it's linear for normal hash reading (blue & pink).
@ekbphd3200
@ekbphd3200 1 month ago
Good points!
@gardnmi
@gardnmi 1 month ago
I would assume the fastest way to access values is using dict.values() :)
@ekbphd3200
@ekbphd3200 1 month ago
Right!
@iamwhoiam798
@iamwhoiam798 1 month ago
You didn't access them the way they're meant to be accessed. You need to iterate over the keys and access each value using its key; otherwise, hashing is not required to get the map elements, and a simple array could do that.
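A minimal sketch of the two access patterns being discussed; the dictionary here is illustrative:

import timeit

d = {f"word{i}": i for i in range(100_000)}

def via_values():
    # iterate the stored values directly; no hashing involved
    total = 0
    for v in d.values():
        total += v
    return total

def via_keys():
    # look up each value by its key, exercising the hash lookup
    total = 0
    for k in d:
        total += d[k]
    return total

print("values():   ", timeit.timeit(via_values, number=100))
print("key lookup: ", timeit.timeit(via_keys, number=100))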
@ekbphd3200
@ekbphd3200 1 month ago
Thanks for the comment. I've created a video testing your idea (if I understand your idea correctly). Take a watch: kzbin.info/www/bejne/pZyzoJmPh7GeiM0
@reconciliation86
@reconciliation86 2 months ago
This is just what I was looking for. I am working on business logic, and there are a lot of SQL statements. Python takes about 9 seconds to get all 22k entries into a dict; a C++ program I had ChatGPT write up based on the Python code (it took about 20 iterations of me telling ChatGPT, "well, now I'm getting THIS error") was 10x faster. I won't learn C/C++; I'll do some Rust!
@josecantu8195
@josecantu8195 2 months ago
Thanks, Professor! I'm learning on my own, on the job, how to use Python & Rust together, given my interests in software development, data science & biomedical science, so this is an interesting series you've made!
@ekbphd3200
@ekbphd3200 2 months ago
Great to hear it!
@AndyQuinteroM
@AndyQuinteroM 2 months ago
Great video, and the result is interesting. Mind if I get the full main.rs file and dataset? I would love to run the tests myself and perhaps improve upon it.
@ekbphd3200
@ekbphd3200 2 months ago
Sure thing! Any feedback that you have is welcome! I'm trying to improve my ability in Rust, so anything you see that could be done better, please let me know. Here's the main file: github.com/ekbrown/scripting_for_linguists/blob/main/main_mattr_native_rust.rs And here's the text file from the Spotify Podcast dataset that I used: github.com/ekbrown/scripting_for_linguists/blob/main/0a0HuaT4Vm7FoYvccyRRQj.txt
@pyajudeme9245
@pyajudeme9245 2 months ago
Awesome, I was waiting for that video! I thought that Python's GIL blocking in your last video had a much stronger effect. I guess strings are pretty horrible in all programming languages, because UTF-8 doesn't have a fixed byte size, so all languages have to use the slow techniques that Python uses for all data types. Python is pretty good compared to other languages when talking about dicts and strings. The rest is very slow, but thank God it is very simple to speed it up if you need to.
@ekbphd3200
@ekbphd3200 2 months ago
Yeah, I guess so. Python continues to impress.
@andrebieler7906
@andrebieler7906 2 months ago
FWIW, I got about a 30% speed increase for Julia when working on bytes directly (vs. strings) and passing a @view of the byte vectors:

wds = split(txt)
bwds = [Vector{UInt8}(word) for word in wds]

and then passing the @view of bwds instead of wds into the individual functions. Note: I also dropped all the println() and I/O operations in my code, as I was mostly curious about the speed of Julia and not I/O or printing. (But fair play if it is included.)
@ekbphd3200
@ekbphd3200 2 months ago
Thanks for this! I'll give it a try.
@andrebieler7906
@andrebieler7906 2 months ago
@@ekbphd3200 Very interesting benchmarks and results. I personally have never done anything involving heavy string manipulation and hence am by no means an expert in that area. For all my use cases, Julia is always orders of magnitude faster than Python. Had fun digging around in your examples. <3
@ekbphd3200
@ekbphd3200 2 months ago
Thanks for your comments! I tried the advice in your previous comment and it works for me too! Thanks for pointing this out. Sounds like another video! I'll be sure to acknowledge my source (you).
@andrebieler7906
@andrebieler7906 1 month ago
@@ekbphd3200 Oh, very cool; I definitely did not expect this to trigger a new video :) I'm sorry, I could have been a bit more helpful with my comment about the @view macro. For it to show an effect, one also needs to add @view in the `get_mattr` function, like so:

numerator += get_ttr(@view in_list[i : (i+window_span-1)])

Apologies for not pointing that out in my comment. Anyway, the vast majority of the speed gain is from the bytes vector, but maybe something to consider if you want to give @view another shot in the future. Thanks a lot for the mention in the video <3
@pyajudeme9245
@pyajudeme9245 2 months ago
Great video; I like your benchmark videos, but from Rust's perspective it's a little unfair. It's not representative when Rust is a slave to Python's GIL. However, I think the speed difference between the languages is not that big, because UTF-8 chars don't have a fixed byte size. It would be nice to see a comparison between UTF-8 and byte strings (fixed size). You could also add Zig to the benchmarks; I used it a week ago for the first time (RGB search in a picture). It was between 2 and 10(!) times faster than C, but I haven't tested it with strings yet.
@ekbphd3200
@ekbphd3200 2 months ago
Yeah. Good point. Perhaps I'll time just the inside of my Rust function, after the Python list is converted into a Rust vector. Yeah, Zig looks interesting.
@Noodlezoup
@Noodlezoup 2 months ago
Thank you for sharing this!
@ekbphd3200
@ekbphd3200 2 months ago
My pleasure!
@RealLexable
@RealLexable 2 months ago
But only as long as Mojo isn't out there pushing Python to its coming new standard limits, even faster than C++. The future is going to be fast as hell, bro 🎉
@ekbphd3200
@ekbphd3200 2 months ago
Awesome!
@nandoflorestan
@nandoflorestan 2 months ago
Bah, Mojo is not even open source. That's repugnant.
@ekbphd3200
@ekbphd3200 2 months ago
Maybe I'm misunderstanding, but I thought this means it's open source: github.com/modularml/mojo/tree/main/stdlib. For example, I can see the source code of the List object here: github.com/modularml/mojo/blob/main/stdlib/src/collections/list.mojo. Or perhaps I'm just not sure what you're saying.
@Navhkrin
@Navhkrin 2 months ago
It will be made open source, though. Chris clearly mentioned that making a language open source before it has reached v1.0 significantly slows down progress, because open-source projects that are led by committees move slowly. They want to finalize the Mojo spec and features before making it open source. That being said, they have already started making it open source: the std lib and documentation are currently open source. Mojo is designed around pushing as many features as possible into libraries, so making the std lib open source is already huge.
@marvinakuffo4096
@marvinakuffo4096 2 months ago
How about using glob as a generator? Would that reduce the gap between os.walk and glob? Because in your code, glob() loads everything into memory, and that may adversely impact the runtime.
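A minimal sketch of the lazy vs. eager difference being suggested, assuming the task is gathering .txt files under a directory; the path is illustrative:

import glob
import os

root = "corpus/"  # illustrative path

# glob.glob() builds the whole list in memory before the first path can be used
paths_list = glob.glob(os.path.join(root, "**", "*.txt"), recursive=True)
print(len(paths_list), "paths collected eagerly")

# glob.iglob() returns a lazy iterator, yielding paths one at a time
for path in glob.iglob(os.path.join(root, "**", "*.txt"), recursive=True):
    pass  # process each file as it is yielded

# os.walk() is also lazy, yielding one directory at a time
for dirpath, dirnames, filenames in os.walk(root):
    for name in filenames:
        if name.endswith(".txt"):
            pass  # process os.path.join(dirpath, name)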
@ekbphd3200
@ekbphd3200 2 months ago
I'll have to try that at some point in the future.
@SBrown-ou1xl
@SBrown-ou1xl 2 months ago
I thought about this a bit more, and I think the MTLD_wrap algorithm has a time complexity of O(n^2). It might be interesting to try to fit a quadratic to the scatter plot instead of a line!
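A minimal sketch of that idea, fitting both a line and a quadratic to the timing scatter with NumPy; the arrays below are placeholders for the measured sizes and runtimes:

import numpy as np

# placeholder data: input sizes (words) and measured runtimes (seconds)
sizes = np.array([10_000, 20_000, 40_000, 80_000, 160_000], dtype=float)
times = np.array([0.05, 0.11, 0.26, 0.61, 1.40])

linear = np.polyfit(sizes, times, deg=1)     # coefficients of b*n + a
quadratic = np.polyfit(sizes, times, deg=2)  # coefficients of c*n^2 + b*n + a

# compare residuals to see which shape tracks the scatter better
for name, coeffs in [("linear", linear), ("quadratic", quadratic)]:
    predicted = np.polyval(coeffs, sizes)
    print(name, "sum of squared residuals:", np.sum((times - predicted) ** 2))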
@ekbphd3200
@ekbphd3200 2 months ago
Good idea. Is that different from the LOESS line?
@j-p-d-e-v
@j-p-d-e-v 2 months ago
I tried PyO3, and it's actually a really good library. BTW, great content.
@ekbphd3200
@ekbphd3200 2 months ago
Yeah, it seems to be well written and well documented. Thanks! I'm glad you enjoy my videos!
@playea123
@playea123 2 months ago
This is fantastic! Thank you for sharing!!
@ekbphd3200
@ekbphd3200 2 months ago
You're very welcome!
@NoX-512
@NoX-512 2 months ago
If you convert the text into an array of integers, where each integer is an index into an array (or tree) of unique words from the text, you could possibly speed up things by a lot, depending on how long it takes to set up the arrays/tree.
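A minimal sketch of that encoding idea; the helper name and sample text are illustrative. Each unique word gets an integer ID, so later work compares small integers instead of strings:

def encode(words):
    # map each unique word to a small integer ID
    word_to_id = {}
    unique_words = []  # ID -> word, for decoding
    encoded = []
    for w in words:
        idx = word_to_id.get(w)
        if idx is None:
            idx = len(unique_words)
            word_to_id[w] = idx
            unique_words.append(w)
        encoded.append(idx)
    return encoded, unique_words

words = "the cat sat on the mat the cat".split()
encoded, vocab = encode(words)
print(encoded)                      # [0, 1, 2, 3, 0, 4, 0, 1]
print([vocab[i] for i in encoded])  # round-trips back to the original words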
@ekbphd3200
@ekbphd3200 2 months ago
Very good idea! I'll have to try this.
@pyajudeme9245
@pyajudeme9245 3 months ago
Nice! I would love to see the same thing, but with Numpy arrays instead of dicts in Rust.
@ekbphd3200
@ekbphd3200 3 months ago
Good idea! I'll have to try that at some point in the future.
@pyajudeme9245
@pyajudeme9245 3 months ago
What I have learned over the last couple of years is that Python is the best language for working with strings. It might not be the fastest, but the difference from the compiled languages is not that big (sometimes it's even faster: Python regex > C++ regex), unlike with numeric data types. Practically all languages have bad performance when dealing with strings. However, working in Python's interactive mode, doing string operations is priceless. Guido van Rossum said in a recent interview with Lex Fridman that Perl is still the fastest when talking about strings (or regex; I don't remember exactly). It would be nice to see a comparison between Perl and other languages.
@ekbphd3200
@ekbphd3200 3 months ago
Yeah, good points!
@SBrown-ou1xl
@SBrown-ou1xl 3 months ago
That's a really cool project! Thanks for bringing it to our attention!
@ekbphd3200
@ekbphd3200 3 months ago
No, no, thank you!
@zhaoziyang-c5h
@zhaoziyang-c5h 3 months ago
This was great, thanks! Had no idea this was available. Going to implement it into my python ebook reader
@ekbphd3200
@ekbphd3200 3 months ago
You're welcome! Best of luck!
@j-p-d-e-v
@j-p-d-e-v 3 months ago
Nice video; the speed boost from Rust is almost 2x. Can you do a Polars vs. pandas performance comparison?
@ekbphd3200
@ekbphd3200 3 months ago
Thanks! Yeah, Rust wins again. Ah, interesting idea. I'll have to try that comparison at some point in the future.
@techinsider3611
@techinsider3611 3 months ago
Also try mojo.
@ekbphd3200
@ekbphd3200 3 months ago
Yeah, I need to try Mojo too. I'm finding that Mojo isn't yet good at text processing. I hope and assume that it will get better as it is developed more and more.
@AsgerJon
@AsgerJon 3 months ago
Instead of split(" "), I suggest split(). Omitting the argument splits on each run of consecutive whitespace.
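A quick illustration of the difference the comment describes:

text = "the   quick\tbrown\nfox"

print(text.split(" "))  # ['the', '', '', 'quick\tbrown\nfox'] (empty strings; tabs/newlines not split)
print(text.split())     # ['the', 'quick', 'brown', 'fox'] (splits on any run of whitespace)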
@ekbphd3200
@ekbphd3200 3 months ago
I'll try that!
@AsgerJon
@AsgerJon 3 months ago
@@ekbphd3200 I learned that earlier this year from a certain Mr. GPT, after years of stuff like:

while '  ' in someString:  # two spaces
    someString = someString.replace('  ', ' ')

I think it was actually laughing at me.
@wld-ph
@wld-ph 3 months ago
Have you tried different sizes of datasets, to see whether there is some underlying system cause? 230 million words is a lot, way more than I know... and it's all in one file... and not very parallel...
@ekbphd3200
@ekbphd3200 3 months ago
Yeah, I'm enjoying experimenting with Mojo after each release.
@abanoubha
@abanoubha 3 months ago
What about Go?
@ekbphd3200
@ekbphd3200 3 months ago
I haven't yet ventured into Go for text processing.
@kilianklaiber6367
@kilianklaiber6367 3 months ago
So Rust essentially takes half the time of Python... nice, but I thought Rust would be a lot faster.
@ekbphd3200
@ekbphd3200 3 months ago
Yeah. Nearly twice as fast.
@0xedb
@0xedb 3 months ago
It could be a lot faster; it all depends on what's being done and how efficient the code is. Not always, though.
@JavierHarford
@JavierHarford 2 months ago
I can imagine 2x is just a function of the complexity times the sample size, which makes me wonder about the curve at scale. There are also some unrelated but important measures, such as speed of development and the effect of higher-level abstractions vs. lower-level optimisation.
@oterotube13
@oterotube13 3 months ago
So in the end it's Julia vs. C.
@ekbphd3200
@ekbphd3200 3 months ago
I guess. I don't know how Python's for loops compare to C's, but I guess the dictionary itself is implemented in C.
@exxzxxe
@exxzxxe 3 months ago
This is the second time I have viewed this video. Thank you for performing the benchmark-testing work I would have had to do; it saved me quite a bit of time. Now a question: do you believe Mojo will progress to the point where its dictionary performance will equal or exceed Python's?
@ekbphd3200
@ekbphd3200 3 months ago
You're very welcome! I'm glad that you enjoyed it. I hope and assume Mojo's native dictionary will get faster with future releases. In the changelog for Mojo v24.4, the creators say: "Significant performance improvements when inserting into a Dict. Performance on this metric is still not where we'd like it to be, but it is much improved." (docs.modular.com/mojo/changelog#v244-2024-06-07) Given the "still not where we'd like it to be," I assume that they will continue to work on the native dictionary.
@murithiedwin2182
@murithiedwin2182 3 months ago
That's a significant speed improvement: 3x faster in the newer version. However, it still doesn't explain why Mojo code is still slower than identical Python code, given that Mojo was going for machine-code compilation (not bytecode) with Python's syntax and ease. From the little documentation I have read, the Mojo team explained that Mojo is not Python, but Python will be Mojo, in the sense that Python will instead be an interpreted subset of the compiled Mojo, and that Python features not yet implemented in Mojo will dynamically switch to run in an included actual Python runtime; in the future, though, Mojo will be self-contained and able to run all Python code on the Mojo runtime.
@ekbphd3200
@ekbphd3200 3 months ago
Cool! Thanks for looking that up.
@melodyogonna
@melodyogonna 3 months ago
How come you know that dunder methods provide high-level sugar for Python, but you call the methods directly in Mojo? You don't need to call object.__len__(), object.__setitem__(), etc. directly in Mojo; they work pretty much the same way they do in Python.
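A small Python illustration of the dunder sugar being described (per the comment, Mojo follows the same convention); the class here is purely illustrative:

class WordCounts:
    def __init__(self):
        self._counts = {}

    def __len__(self):
        return len(self._counts)

    def __setitem__(self, word, count):
        self._counts[word] = count

    def __getitem__(self, word):
        return self._counts[word]

wc = WordCounts()
wc["corpus"] = 3     # calls __setitem__ under the hood
print(wc["corpus"])  # calls __getitem__ -> 3
print(len(wc))       # calls __len__ -> 1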
@ekbphd3200
@ekbphd3200 3 months ago
I'll have to try the sugar way in the future. Thanks for pointing this out.
@alextantos658
@alextantos658 3 months ago
And one could also try out the Dictionaries.jl package in Julia, which is much more performant and efficient than the base Julia Dict type.
@ekbphd3200
@ekbphd3200 3 months ago
Thanks for the idea. I just tried Dictionaries.jl to get the frequencies of words across 40k files with 230m words, and it was only slightly faster than Base.Dict (47s vs. 51s). I'll have to implement Dictionaries.jl with a deeply nested dictionary and see how it does.
@alextantos658
@alextantos658 3 months ago
@@ekbphd3200 Thanks for the nice videos and the work! Besides Dictionaries.jl, Julia offers several other options from DataStructures.jl, such as SwissDict and other data structures that are claimed to be faster. What I appreciate about Julia is its diverse range of options, often re-implemented within the language itself without needing to track/tune C implementations of basic operations. Therefore, while comparing base types between languages provides valuable insights, it doesn't fully capture the extent of Julia's capabilities. PS: I am a Python user and fan too.
@ekbphd3200
@ekbphd3200 3 months ago
Here's a quick comparison with a simple frequency dictionary: kzbin.info/www/bejne/iIDKgnSJgrOSoqs
@francoisgrassard
@francoisgrassard 3 months ago
Thank you so much for this video (and the others). Really interesting.
@ekbphd3200
@ekbphd3200 3 months ago
Glad you enjoyed it!
@indibarsarkar3936
@indibarsarkar3936 4 months ago
Please try splitting the data in half and assigning each half to a dictionary. Then measure the time taken to copy or interchange elements from one dictionary to another. Maybe the problem is in the file management and not in the dictionary!!
@ekbphd3200
@ekbphd3200 3 months ago
Mojo's dictionary has increased in performance (when inserting items) with v24.4. I found a 4x increase in speed with a particular linguistic task. Take a watch: kzbin.info/www/bejne/kKGoi2iwgrtkjtU
@TheRealHassan789
@TheRealHassan789 4 months ago
I wonder if the PyPy version of Python is even faster, since it has a JIT compiler... Would love to see that result.
@ekbphd3200
@ekbphd3200 4 months ago
Good question/idea! I haven't yet tried PYPY. Sounds like a good research question to put to empirical testing!
@woolfel
@woolfel 4 months ago
The real benefit of Mojo is that it can easily target other hardware without having to write C code. Google's work with CUDA acceleration and pandas is a good example.
@davea136
@davea136 4 months ago
Hashmaps are made for O(1)-ish retrieval; insertion is far less important. So yeah, insertion gets slower, especially if you haven't chosen a hashing method specific to your data (this can be really important), but the true test of a hashmap/dictionary is how the size of the corpus affects retrieval. It would be interesting to run the experiment on the retrieval end, now that you know the performance of insertion, and see how they compare. (You may also want to see how pickling the dictionary and restoring it performs, since this is done a lot more than generating the initial map in research.) Also, for initial trials, maybe speed things up and get a rougher idea by increasing the quantity by orders of magnitude: 10, 100, 1000, etc. A sparser plot sometimes helps things jump out of the data more clearly. Thank you for the post, it was good fun!
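A minimal sketch of the retrieval-side test being suggested, including the pickle round-trip; the sizes and key names are illustrative:

import pickle
import random
import timeit

def build(n):
    # insertion: build a dict of n word-like keys
    return {f"word{i}": i for i in range(n)}

def retrieve_all(d, keys):
    # retrieval: look up every key once, in shuffled order
    total = 0
    for k in keys:
        total += d[k]
    return total

for n in (10, 100, 1_000, 10_000, 100_000):
    d = build(n)
    keys = list(d)
    random.shuffle(keys)
    t_insert = timeit.timeit(lambda: build(n), number=5)
    t_retrieve = timeit.timeit(lambda: retrieve_all(d, keys), number=5)
    blob = pickle.dumps(d)
    t_unpickle = timeit.timeit(lambda: pickle.loads(blob), number=5)
    print(f"{n:>7}  insert={t_insert:.4f}s  retrieve={t_retrieve:.4f}s  unpickle={t_unpickle:.4f}s")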
@ekbphd3200
@ekbphd3200 4 months ago
Great ideas! Sounds like another test I need to run! Thanks for the comment.
@alexeydmitrievich5970
@alexeydmitrievich5970 4 months ago
I think that "small" was about 10-50 keys, as most objects in everyday Python are actually these tiny dicts.
@ekbphd3200
@ekbphd3200 4 months ago
Okay. Yeah, that's small. Thanks for the clarification.