Zymposium: May 24th, 2024
55:14
6 months ago
Zymposium: May 17th, 2024
57:46
6 months ago
Zymposium: May 31st, 2024
56:44
6 months ago
FS 23 Zoom - Samson Kamau Muiruri
23:59
Comments
@FoldLeft · 28 days ago
Ever used dune with OCaml? You wanna try Lisp for your build files
@hubstrangers3450 · a month ago
Thank you, though I don't think this will be possible for life-science scenarios (nucleotide, genomic, tertiary protein structure, etc.) or physics (LHC)... I believe it's limited in its scope.
@Dr_Dude · a month ago
Nice! Thanks for sharing; definitely archiving this one for later reference.
@theoDSP · a month ago
I had to use a CountDownLatch in the FiberImpl to properly wait until the thread joins, because even with the Thread.sleep the main thread was still exiting early.
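A minimal sketch of the pattern this comment describes, assuming a hypothetical, simplified FiberImpl (not the code from the video): the forked thread counts down a latch when it finishes, so join can block deterministically instead of relying on a long-enough Thread.sleep.

```scala
import java.util.concurrent.CountDownLatch

// Hypothetical, simplified fiber: runs a task on its own thread and
// lets callers block until that thread has produced a result.
class FiberImpl[A](task: () => A) {
  private val latch = new CountDownLatch(1)
  @volatile private var result: Option[A] = None

  private val thread = new Thread(() => {
    result = Some(task())
    latch.countDown() // signal that the task is done
  })
  thread.start()

  def join(): A = {
    latch.await() // block the caller until countDown() has run
    result.get
  }
}

object LatchDemo {
  def main(args: Array[String]): Unit = {
    val fiber = new FiberImpl(() => { Thread.sleep(500); 42 })
    println(fiber.join()) // prints 42; the main thread no longer exits early
  }
}
```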
@thegeniusfool · 2 months ago
Please use high contrast next time.
@LA-fb9bf · 2 months ago
Scala needs one killer feature to survive. Maybe full Wasm and WASI support?
@isaacyuen5409 · 2 months ago
@kitlangton Regarding "// 4. Companion Object Idioms // - constructors / higher-arity combinators // - example using Coord": may I know where I can find the content for this topic? Any YouTube links or gists?
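For readers wondering what that outline item refers to, here is a small illustrative sketch of companion-object constructors and a higher-arity combinator; the Coord methods below are invented for illustration and are not the actual code from the talk.

```scala
final case class Coord(x: Int, y: Int)

object Coord {
  // Constructor idiom: a validating "smart constructor" in the companion.
  def of(x: Int, y: Int): Option[Coord] =
    if (x >= 0 && y >= 0) Some(Coord(x, y)) else None

  // Higher-arity combinator: build one Coord out of many.
  def sum(coords: Coord*): Coord =
    coords.foldLeft(Coord(0, 0))((acc, c) => Coord(acc.x + c.x, acc.y + c.y))
}

// Usage:
//   Coord.of(1, 2)                       // Some(Coord(1,2))
//   Coord.of(-1, 2)                      // None
//   Coord.sum(Coord(1, 1), Coord(2, 3))  // Coord(3,4)
```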
@mrmrigank7154 · 3 months ago
I watched it twice, but it seems to be more about ZIO than about the idea of functional effect systems, and less of an introduction for functional-programming beginners.
@hubstrangers3450 · 3 months ago
Thank you.....
@gregzoller9003 · 3 months ago
Great concept; it will take a couple of viewings to get it, but I like the idea. Some real innovation! Honestly, I'm in favor of anything that helps push forward the effectful functional model. Granted this is a first release, so I wouldn't expect everything to be optimized yet, but do you have any baseline benchmarks?
@hubstrangers3450 · 3 months ago
Thank you... there are a lot of errors in scientific papers! [Reproducibility! What happened to the 3Rs?]
@hubstrangers3450 · 3 months ago
Thank you. However, it sounds like a return to the "MapReduce" scenario in the big-data space, using concurrency/parallel-processing paradigms for bioinformatics, cheminformatics, and deeper scientific computing (with datasets from sites such as the LHC, the FCC, and similar facilities around the globe). There are still a few confusing statements, though, such as: is it on-premise or cloud computing? These are two different scenarios, and "regional" suggests cloud computing, while most scientists and research professionals prefer the former. I would also like to know the academic credentials of both parties, and I would still suggest that folks without interdisciplinary lower-level knowledge, skills, and talents are not the ideal people to be associated with this level of development (POC), even if they hold PhDs in their respective fields, since most platforms available out there (BLAST, NCBI, SNP, Galaxy, etc.) still do not fulfill the needs of deeper research professionals and scientists even in 2024. The big-data story dates from around 2008, functional programming from around the 1940s, and the rest of the thought process should be able to figure itself out.
@MrDejvidkit · 3 months ago
Nice, I love scala-cli.
@joebowbeer · 3 months ago
16:04 "The Rust language was designed for synchronous code" - really? I think Rust was designed for programming with threads rather than with async/await. Another thing that makes async/await complicated for Rust is that it does not have a standard async runtime, nor does it require an async runtime.
@masterchief1520 · a month ago
Memory safety and type safety allowed fearless concurrency, but we are used to the async pattern. I feel Go did great work balancing performance with how simple it is to do.
@MrDejvidkit · 3 months ago
Is it still possible to sign up?
@Ziverge · 3 months ago
Yes! Register here: share.hsforms.com/1VeS7vcAdRyqme1TLriu-YA444dk Watch the live here: kzbin.info/www/bejne/iIPIpWZuetyGpdE Join us on Discord to ask any questions (and download the guide): discord.com/invite/c3esdEfywj We'll expect you at 1pm EDT tomorrow 👀
@jayshah5695 · 3 months ago
Sum types and subtyping are similar, right?
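They are related: in Scala a sum type is usually encoded as a sealed trait with a fixed set of subtypes, so the compiler can check matches exhaustively, whereas subtyping alone leaves the hierarchy open. A short sketch of that distinction (illustrative only, not from the talk):

```scala
object SumVsSubtyping {
  // Sum type: a sealed, closed set of alternatives.
  sealed trait Shape
  final case class Circle(radius: Double) extends Shape
  final case class Square(side: Double)   extends Shape

  def area(s: Shape): Double = s match {
    case Circle(r)    => math.Pi * r * r
    case Square(side) => side * side // compiler warns if a case is missing
  }

  // Subtyping alone is open: new subtypes can be added anywhere,
  // so no exhaustiveness guarantee is possible.
  trait Animal { def sound: String }
  class Dog extends Animal { def sound = "woof" }
  class Cat extends Animal { def sound = "meow" }
}
```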
@Ziverge · 4 months ago
Make sure to subscribe to our dedicated Durable Computing channel, where we talk more about Golem 💪 www.youtube.com/@DurableComputing
@yfjolne · 4 months ago
Nice demo, congratulations on the 1.0 launch! During the demo Daniel says that it's impossible to update the worker under certain conditions, namely when the interface changes (though it seems like in the demo only the implementation was changed?). That brings up a more general question of how to evolve worker state: e.g. how to migrate the state from test2 to test22 atomically during an update? And is there a shortcut for common cases, e.g. adding a new field with a default value to the State class?
@DanielVigovszky · 4 months ago
Thanks for the question! I might have been wrong in the live demo about not being able to use worker update in that particular case; it was hard to think it through quickly without falling behind the planned schedule.

Golem supports two ways to update a worker to a new component version. Automatic update basically replays the whole history of the worker using the new codebase. This can potentially fail if there were "too big" changes in the codebase. If it fails, the worker is restored to its original version and works as if nothing happened (and the failed update attempt is available in the worker metadata). This means you can change anything (including the external interface) that has not run yet, but if something has run, the possibilities are limited to small fixes that do not change the (observable) outcome. For example, if you invoked function A and it returned a value, that value is persisted, and if the modified version would return a different value, that is a divergence and fails the update. But if the change only affects things that are not observable (persisted), then it's fine.

Of course there are cases when you have to make a change that cannot be applied to a worker automatically, but you still need to preserve your worker AND have the new version running. For this we support something we call manual update: in this case you have to implement a `save` function in the old version (if it was not there yet, it can be added and auto-updated, as it's something that has never run yet!) and a `load` function in the new version, and you have to serialize/restore your state manually from an arbitrary byte array. Once the functions are implemented, the process of running them as part of the update is orchestrated by Golem.
@yfjolne · 4 months ago
@DanielVigovszky Thanks for the great response! Having two cases makes sense, but doesn't it mean that Golem has to store the history of all arguments of each interface-function invocation in order to attempt an automatic update? That seems quite demanding on storage: does Golem support remote object stores for such metadata? And even then, if the first version of the worker was launched, say, a month ago and actively used, wouldn't automatic update be prohibitively time-consuming for any change to that worker? Meaning that save/load implementations would effectively be the default approach.
@DanielVigovszky · 4 months ago
@yfjolne You are right that for automatic update we always need to use the full history, even if snapshots would be used to optimise recovery (which is, by the way, on the roadmap but not enabled right now). So this is a tradeoff the user can make between having to implement custom code or using the built-in machinery. Both make sense depending on your workers and their lifetime.

So if we need to store the journal forever, how do we do it? It is stored in multiple layers. The primary layer is the one being written live - that's currently a Redis stream. There are a (configurable) number of secondary layers, either further Redis streams or S3 buckets, where old entries get moved in compressed chunks. The default configuration right now is one archive layer in Redis and one in S3. Eventually, if you are not accessing a worker, it gets completely moved down to the bottom layer. We only have a Redis + S3 implementation now, but it's designed in a way that should make it easy to implement connectors for other data storages (you just need to implement a keyvaluestore, an indexedstore and a blobstore interface). See more at learn.golem.cloud/docs/operate/persistence#oplog
@yfjolne · 4 months ago
@DanielVigovszky I see, thank you for the detailed reply.
@sadikmadanialaoui7690 · 4 months ago
Amazing interview indeed.
@kostian8354 · 4 months ago
Great trolling; none of these are truly problems with Rust, those are issues with Scala.
@mackler · 4 months ago
Too good to be true?
@ZelenoJabko · 4 months ago
John, here it goes.
@Rockyzach88 · 4 months ago
I'm learning Scala right now. Why does it need Wasm?
@alex-su81 · 4 months ago
Since Java got lambdas and streams, Scala provides very few advantages. And tons of complications.
@mrsoomo · 4 months ago
Scala is a great language. Thank you for supporting it.
@ramavishal8605 · 5 months ago
Very nice, detailed explanation for beginners.
@jay-hinddoston8364 · 5 months ago
It's a must-watch; thanks for this session.
@CarlosSaltos · 5 months ago
Why the fight, when you could use Loom inside ZIO, Kyo, or Cats Effect one of these days!! 👍😎 ... It would require a heavy rewrite of these tools, but it's possible, or at least more productive than fighting 😅😇
@calvinfernandes1054 · 5 months ago
Great video, Yisrael ❤
@MrDejvidkit · 5 months ago
Nice, I was waiting for the next video ;-)
@williamswaney2615 · 6 months ago
It's no Clojure... I have had no end of issues with Scala and backwards compatibility, even with simple version changes to libraries. Not with Clojure. I like Scala, but I love Clojure.
@JohanLiebhart · 6 months ago
If you have multiple instances, the STM cannot solve the transaction problem, right?
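For context, a minimal ZIO 2 sketch (assuming the STM discussed is zio.stm): the TRef lives in one process's memory, so atomicity holds across fibers within that JVM, not across separate service instances; cross-instance transactions need an external coordinator (a database, a distributed lock, etc.).

```scala
import zio._
import zio.stm._

object StmTransferExample extends ZIOAppDefault {
  // Atomically move funds between two in-memory TRefs. The transaction
  // retries if the balance is insufficient, but only fibers in this JVM
  // can participate: another instance of the service has its own TRefs.
  def transfer(from: TRef[Int], to: TRef[Int], amount: Int): UIO[Unit] =
    (for {
      balance <- from.get
      _       <- STM.check(balance >= amount) // suspend until enough funds
      _       <- from.update(_ - amount)
      _       <- to.update(_ + amount)
    } yield ()).commit

  val run =
    for {
      a <- TRef.make(100).commit
      b <- TRef.make(0).commit
      _ <- transfer(a, b, 40)
      r <- (a.get zip b.get).commit
      _ <- Console.printLine(s"balances after transfer: $r")
    } yield ()
}
```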
@Alex-xf8pl · 6 months ago
Hey John, really cool and comprehensive subject. Thanks for your contributions to the Scala community and for helping engineers improve. I have a suggestion for a topic for your open-source series: open-sourcing a commercial project.
@TJ-hs1qm · 6 months ago
She got a glimpse into the Matrix: squeeze the people, but not so much that they'll break.
@TJ-hs1qm · 6 months ago
Difficult to follow due to the ambient room reverb. It's generally better to use a mic.
@joan38 · 6 months ago
39:10 Isn't Scala.js very much integrated with Scala?
@joan38 · 6 months ago
34:10 scala-cli is probably much better than cargo
@budiardjo6610 · 7 months ago
I'm learning a lot from Scala that helps me understand Rust.
@mrdkyzmrdany8742 · 7 months ago
Dependency Injection Attacks [19:26] 😂🤣😂🤣
@MaxChistokletov · 8 months ago
Scala value prop (for me at least): write reliable concurrent code on the JVM, easily.
@MrDejvidkit · 8 months ago
This is great! I like seeing the process.
@Swoogles · 8 months ago
Excellent stuff, Nabil. I appreciate seeing the progression of the DSL and how it reached its awesome current state.
@carlosverdes · 8 months ago
What you describe with "you say what you want to do" is actually a command in CQRS. As I said, you can apply your Flows as an abstraction that behind the scenes uses CQRS + ES (each flow step is one command that generates 1 to n events).
@carlosverdes · 8 months ago
Some comments related to your comments on event sourcing:
- You never retry events; you retry a command (and if the command fails, zero events are generated). Events are always facts that happened in the past, so there is no need to retry anything (commands are actions that produce events).
- I agree that in most cases using ES is accidental complexity and that it's a new paradigm, but I don't agree that it makes you change ALL of your application. If you follow DDD, it will only affect part of your application, normally a bounded context where CQRS + ES is justified.
- You also don't mention that once you have the events stored, it's super easy to replay them and create new views or feed other aggregates, in a super natural way compared with a traditional "store snapshot" approach with CRUD.
- Kafka is not a good fit for event sourcing, as it doesn't have optimistic locks. I've seen super good implementations using Postgres. Event sourcing is NOT event streaming; it's a style of how you store your events.
- The shopping-cart example is actually a good fit for event sourcing: for example, when you need information for analytics, it's super easy to replay all events from your users and create different views, whereas if you read from transactional tables you miss a lot of information relevant for data-science models.
- I actually think you can apply durable computing on top of event sourcing; they are compatible things solving different concerns. Event sourcing is used to reflect the business (it's linked to the domain), whereas durable computing is about fixing non-functional issues like failures, hardware retries, etc. You can apply both together.
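A tiny sketch of the command/event split described above (the names and types are invented for illustration; no particular framework is implied): a command is validated and may be rejected, emitted events are immutable facts, and replaying events rebuilds state or any new view.

```scala
object CartEventSourcingSketch {
  sealed trait CartCommand
  final case class AddItem(sku: String, qty: Int) extends CartCommand

  sealed trait CartEvent
  final case class ItemAdded(sku: String, qty: Int) extends CartEvent

  final case class CartState(items: Map[String, Int] = Map.empty)

  // A command is validated and may be rejected; only success emits events.
  def decide(state: CartState, cmd: CartCommand): Either[String, List[CartEvent]] =
    cmd match {
      case AddItem(sku, qty) if qty > 0 => Right(List(ItemAdded(sku, qty)))
      case AddItem(_, qty)              => Left(s"invalid quantity: $qty")
    }

  // Events are facts: folding them rebuilds the state (or any new view).
  def evolve(state: CartState, event: CartEvent): CartState =
    event match {
      case ItemAdded(sku, qty) =>
        CartState(state.items.updated(sku, state.items.getOrElse(sku, 0) + qty))
    }

  def replay(events: List[CartEvent]): CartState =
    events.foldLeft(CartState())(evolve)
}
```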
@Datababble · 8 months ago
Really enjoying this series - thank you both, Nabil and John!
@alphabeta3029 · 8 months ago
Probably the most convoluted implementation of tic tac toe I've ever seen 😅
@michaelperucca4707 · 7 months ago
Oh yeah. It definitely balloons when trying to be precise, and we’ve probably surpassed the point of “is it worth it?”
@Datababble · 8 months ago
Thanks Daniel, great session!
@MrDejvidkit · 8 months ago
Here he goes! 💪💪
@Heater-v1.0.0 · 9 months ago
Pragmatist here. Correct me if I am wrong, but as far as I can tell Scala has the following to learn from Rust:
1) It has to learn to work without the JVM (or any kind of runtime system). Requiring the JVM implies bloat and poor performance. That excludes the use of Scala from much of the work I do.
2) It has to learn to work without a garbage collector. A garbage collector introduces unpredictability in performance. It also excludes the use of Scala from much of the work I do.
3) It has to learn to run in only kilobytes of code space on microcontrollers and the like.
4) It has to learn that maths is great and all, but maths is limited. Maths does not have a solution for the many-body problem, for example. As for monoids, I had them surgically removed as a child.