This is great. I know Julia has been working towards this for a long time, and it is so much easier for me to share code and functions as binaries with coworkers who never have to touch Julia. Most of them use R day-to-day, so if I suggest Julia they tend to dismiss it. But with 1.12 I hope to be able to build small binaries that can perform tasks for their pipelines in a simple way.
@pookiepats · 5 days ago
Huge D here, also this language is NEXT
@Tbone913 · 6 days ago
Can I get a link to this code, please?
@atibyte · 26 days ago
I started to learn D about a month ago. So far I like it.
@nuhuhbruhbruh · a month ago
YOOO interfaces and traits!! this will be huge holy shit
@romeovalentin5524 · a month ago
Possibly the best final slide of all time.
@praveennarayanan9451 · a month ago
I was reading your paper; it is very hard to understand without your video. Thank you!
@sathwikreddy5332 · 2 months ago
I had issues with fakeroot since it was not configured on my local HPC cluster; you might have to do this manually on the login node with "sudo singularity config fakeroot --add root".
@FoosterExtra · 3 months ago
Great Points! Thanks for help! 😊 😊
@kcvinu · 4 months ago
Huge D fan here! Thanks for the video.
@timcarpenter2441 · 6 months ago
I am really interested in this work to remove unnecessary language gaps that lay out bear traps for the unwary. The binary protocol handling reminds me of the work I did with VAX PASCAL handling the compressed binary exchange feeds and market data. One defines the structures and permutations thereof, and one can then just access the fields. No pointer arithmetic or bit shifting.
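The "define the structures and just access the fields" style the comment recalls can be sketched with Python's standard-library struct module. Everything here is hypothetical: the record layout, field names, and the parse_quote helper are invented for illustration, not the actual VAX feed format.

```python
import struct

# Hypothetical fixed-width quote record: symbol (8 bytes), price in
# cents (big-endian uint32), size (uint16), flags (uint8), 1 pad byte.
QUOTE = struct.Struct(">8sIHBx")

def parse_quote(buf: bytes) -> dict:
    """Decode one 16-byte record into named fields -- no bit shifting."""
    symbol, price_cents, size, flags = QUOTE.unpack(buf)
    return {
        "symbol": symbol.rstrip(b"\x00").decode("ascii"),
        "price": price_cents / 100,
        "size": size,
        "flags": flags,
    }

# Pack a record and read it back; field access replaces pointer arithmetic.
raw = QUOTE.pack(b"ACME\x00\x00\x00\x00", 12_345, 500, 0b0001)
quote = parse_quote(raw)
```

Once the layout is declared in one place, every consumer reads named fields instead of computing byte offsets by hand, which is the same ergonomic win the comment describes.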
@pookiepats · 5 days ago
😮 amazing bro
@kwabenakorantengasiedu5982 · 10 months ago
Thank you! Very simple and straightforward compared to building from source
@RajivGupta-t5c · 11 months ago
After a long search on RASER simulation, I found this video. The installation process is complex because of its dependencies, and I am stuck on the FEniCS module. Can you add me to your technical team?
@HEPSoftwareFoundation · 10 months ago
Hi - thanks for your interest in the topic. Please contact the RASER team directly for questions about the installation of the software.
@RobGardnerJr · a year ago
I was able to follow this on 11/15/2023: logged in (KeyCloak -> CI Logon), got the BinderHub landing page, pasted your matthewfeickert/pyhep-notebook-talk-example repository, the image was still in Harbor, it pulled, and the notebook launched within 2 minutes; talk.ipynb worked perfectly. Thanks Matthew!
@dodsjanne · a year ago
D is actually kinda cool
@toddstrain3629 · a year ago
Is this the same D language that Oracle used for DTrace?
@maxhaughton1964 · a year ago
No
@GeorgeNoiseless · 9 months ago
This is not even Oracle's fault, but Sun's. How naughty, D lang was there first!
@philippt4302 · a year ago
D is the best language!!
@abhayrawat4426 · a year ago
Where can I get the notebook that is being used in this video?
@bhaveshsirvi7712 · a year ago
@abdalrhmanhs8521 · a year ago
Appreciation is due.
@m30walkdrive29 · a year ago
In iminuit, how can I limit the fit range? E.g., I only want to fit data between x = 100 and x = 500. Is it possible to use m.limits for the data range?
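For anyone else with this question: m.limits constrains fit parameters, not the data range. The usual approach is to mask the data arrays before building the cost function; with iminuit you would then pass the masked arrays into something like cost.LeastSquares(x[mask], y[mask], yerr[mask], model). Below is a minimal numpy-only sketch, with toy noiseless linear data and np.polyfit standing in for the actual iminuit fit:

```python
import numpy as np

# Toy data covering x = 0..1000 (a noiseless straight line for illustration)
x = np.linspace(0, 1000, 201)
y = 2.0 * x + 5.0

# Restrict the fit range by masking the data, e.g. 100 <= x <= 500
mask = (x >= 100) & (x <= 500)
x_fit, y_fit = x[mask], y[mask]

# Fit a straight line to the restricted range only
slope, intercept = np.polyfit(x_fit, y_fit, 1)
```

The fit only ever sees the masked points, so everything outside [100, 500] is ignored regardless of which minimizer or cost function sits on top.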
@bambinodeskaralhes · a year ago
Very good presentation! I'll use the proposed method in my PhD work. I'm encountering problems training my models with negative weights. I'm only finding it difficult to understand in which sense the "likelihood ratio", the so-called observable, is unbinned. We can't cluster events expecting them to have exactly the same observable value. The proposed observable will be represented as a histogram, and we'll cluster the events which fall into chosen real-number intervals of the produced histogram. How can it be unbinned?
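On the binned vs. unbinned point: an unbinned fit evaluates the model density at each event's observable value and sums the log-densities directly, with no histogramming step; a histogram may be drawn afterwards for visualization, but it plays no role in the likelihood itself. A toy sketch of an unbinned negative log-likelihood (a Gaussian model with made-up data, not the method from the talk):

```python
import numpy as np

def unbinned_nll(events, mu, sigma):
    """Sum -log N(x | mu, sigma) over individual events -- no bins anywhere."""
    z = (events - mu) / sigma
    return np.sum(0.5 * z**2 + np.log(sigma) + 0.5 * np.log(2 * np.pi))

# Toy per-event observable values drawn from N(1.0, 0.5)
rng = np.random.default_rng(0)
events = rng.normal(1.0, 0.5, size=1000)

# The NLL is smaller near the true parameters than far from them
nll_true = unbinned_nll(events, 1.0, 0.5)
nll_off = unbinned_nll(events, 2.0, 0.5)
```

Note that events never need to share an observable value: each one contributes its own log-density term, which is exactly what "unbinned" means here.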
@CoolDude911 · a year ago
I got the gaussian one to run with CUDA. It is faster on my PC than the parallel loop version, without copying back to the host and with warming up the GPU a bit (i.e. run it a few times).

```python
@cuda.jit
def cuda_guass_2d(output, height, width, scale):
    i, j = cuda.grid(2)
    i_in_bounds = 0 < i < width
    j_in_bounds = 0 < j < height
    if i_in_bounds and j_in_bounds:
        x = (i - width / 2) * (10 / width)
        y = (j - height / 2) * (10 / height)
        norm = x**2 / 2 + y**2 / 2
        # Numba cuda supports math.exp
        taylor_series = 1 + norm + norm**2 / 2
        output[i, j] = scale / taylor_series


def execute_gauss_on_cuda(height, width):
    grid_cuda = cuda.device_array((height, width))
    block_dim = (16, 16)
    grid_dim = ((height // block_dim[0]) + 1, (width // block_dim[1]) + 1)
    scale = 1. / np.sqrt(2 * np.pi)
    cuda_guass_2d[grid_dim, block_dim](grid_cuda, height, width, scale)
    cuda.synchronize()
    return grid_cuda
```
@simplified-code · 2 years ago
Awesome explanation, never thought this concept could be this easy.
@R_Harish · 2 years ago
Can you share the link for the slides presented here?
@HEPSoftwareFoundation · 2 years ago
Of course: indico.cern.ch/event/1160438/
@yangliu5727 · 2 years ago
Hello, I have a 2x for loop over a (1360 x 1024) matrix (slicing out a vector of 128 in length), and it takes a hell of a long time. How can I speed it up with Numba and CUDA? Thanks very much. 😃
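A common pattern for this kind of problem: if the per-element work can be written as whole-array operations, a single vectorized numpy expression removes the Python-level double loop entirely; if it cannot, Numba's @numba.njit(parallel=True) can compile the loop roughly as written, and numba.cuda can move it to the GPU. A numpy-only sketch with a hypothetical per-element operation (the real computation from the question is not shown):

```python
import numpy as np

h, w = 1360, 1024
mat = np.arange(h * w, dtype=np.float64).reshape(h, w)

# Slow version: an explicit 2x for loop in Python, as the comment describes
def double_loop(m):
    out = np.empty_like(m)
    for i in range(m.shape[0]):
        for j in range(m.shape[1]):
            out[i, j] = m[i, j] * 2.0 + 1.0  # hypothetical per-element work
    return out

# Fast version: the same arithmetic as one whole-array numpy expression
def vectorized(m):
    return m * 2.0 + 1.0

# Both produce identical results; only the loop mechanics differ
assert np.allclose(double_loop(mat), vectorized(mat))
```

The vectorized form runs the loop in compiled C inside numpy, which is typically orders of magnitude faster than iterating 1360 x 1024 times in the Python interpreter.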
@adriano6665 · 4 years ago
The GitHub repo with the notebook: github.com/aoeftiger/pyhep2020 ; the slides: aoeftiger.github.io/pyhep2020/
@adriano6665 · 4 years ago
Go to 1:18 to skip the initial fixing of the screen sharing.
@denkrop8273 · 4 years ago
Nice, I don't understand a thing, but it's very interesting :-)
@MuhammadFarooq-rc8br · 4 years ago
That's really amazing. I also work on a di-Higgs analysis via ggF at an integrated luminosity of 137 fb-1, for the hh->bbtautau channel. If helpful, please share links to these descriptions.