Databricks Workflows
8:10
1 day ago
Using an AI/BI Genie Space
4:07
21 days ago
Creating an AI/BI Genie Space
13:13
21 days ago
Tech Innovators Meetup
1:28:53
21 days ago
Comments
@AdjeryUlate21 17 hours ago
He is the best!!❤
@BrixwellTeam 1 day ago
Thanks so much, well explained ❤🎉
@ankamv 1 day ago
While creating an Iceberg table in Snowflake, it needs METADATA_FILE_PATH, right? I tried the same command and got an error: "Iceberg table DEPT_ICEBERG creation lacks the required METADATA_FILE_PATH table option for the external catalog integration ICEBERGCATALOGINT." Can you explain how you created unity_catalog_dev? I'd appreciate more info on this. Thank you.
@wallco26 1 day ago
Do you need a profiles.yml file saved somewhere in Databricks when using Workflows, or does defining the cluster in the UI take the place of the yml file?
@laobaba 2 days ago
In the very last step of the last notebook when you go to run the demo, I get the following error: "py4j.security.Py4JSecurityException: Method public java.lang.String com.databricks.backend.common.rpc.CommandContext.toJson() is not whitelisted on class class com.databricks.backend.common.rpc.CommandContext" How do you fix this?
@dioppapis 2 days ago
Thank you
@hheungsu 2 days ago
Awesome!!
@laobaba 3 days ago
Please show a complete beginning-to-end demo of this, and quit skipping parts! I don't have a document chopped into chunks. Show me how to go from a folder of PDFs to being able to talk to an LLM with RAG, without skipping parts that make me have to search around for how to do part of what you're demoing!
@laobaba 3 days ago
I can't tell you how many of these presentations I've watched where I am 90% of the way through and I get angry, because I just want them to show me the damn steps to do it in the Databricks UI. I'm a noob here just begging you to walk me through the process like I'm 5, showing me how to do RAG against my PDFs. Instead you've wasted hours of my time, and now I'm trying to find a damn video that shows me how to do it instead of talking about the pros, cons, etc. This video even says "And how", but it doesn't show you how!
@LorenzSinger 3 days ago
That's really cool. Could you add a link to an example notebook? Thanks
@niteshgupta9697 4 days ago
Such engaging delivery!
@Shubham-y3k 5 days ago
Can I get the PPT?
@ajithvenugopal007 5 days ago
Interesting feature on the BYOL side. Where can I read more about it, and how do I enable the feature for my Databricks workspace?
@jasonkhaihoang781 5 days ago
Can we test individual DLT models in notebooks now? Or do we need to run the full pipeline to see and validate the results? Thanks.
@marcoferraro174 6 days ago
Pretty useful!
@ernestoflores3873 7 days ago
Hi, nice video! Is the PowerPoint available somewhere?
@thusharmohan96 8 days ago
I am being told that the freq argument needs to be passed because the current index has none, even though I had passed the frequency argument. How do I solve this?
@Inceptionxg 10 days ago
After Jensen, I wanted to see what 2.0 actually means. Muaadh Rilwan
@andydataguy 10 days ago
I almost didn't click on this video. If you had put the name of the speaker in the title, I would have clicked instantly and watched it months ago.
@sharathreddygogula9926 10 days ago
Great! That is just what I was looking for. Thanks :)
@narasimhanmb4703 11 days ago
Hey Databricks, your videos are very informative. But at your company's scale, do you really need to make money from YouTube ads? You're raking in money with tech, vision, talent, and customers. You could help those interested in learning your platform by disallowing YouTube ads on your channel.
@unstoppabforce 11 days ago
woow
@sammail96 11 days ago
Another very easy option is to use a Snowflake JDBC connection.
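For context, the JDBC route this comment suggests might look roughly like the sketch below on a Databricks cluster. Every angle-bracketed value is a placeholder, and the Snowflake JDBC driver jar is assumed to be installed on the cluster; this is an illustration of the approach, not a tested recipe from the video.

```python
# Connection options for Spark's generic JDBC reader; every value in
# angle brackets is a placeholder to fill in for your environment.
jdbc_options = {
    "url": "jdbc:snowflake://<account>.snowflakecomputing.com/",
    "driver": "net.snowflake.client.jdbc.SnowflakeDriver",
    "dbtable": "<database>.<schema>.<table>",
    "user": "<user>",
    "password": "<password>",
}

# On a cluster with a SparkSession available, this would run as:
# df = spark.read.format("jdbc").options(**jdbc_options).load()
```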
@pavankumar-ni3my 13 days ago
Not useful at all
@ignaciomorenobasanez3821 13 days ago
Dear Moez Ali, you blew my mind with this demo. Absolutely incredible. Thanks for sharing such great information with the Supply Chain Data Science community!
@nicky_rads 14 days ago
Interesting. Is it common for companies to have both Snowflake and Databricks?
@mikevladi 14 days ago
sha1 creates a hash (the "secure" in its name is a misnomer; it is just longer than an MD5 hash), which should not be considered a replacement for encryption. Python supports the HMAC algorithm through its hmac module, which lets you mix in a secret key for message authentication. Otherwise the presentation is thorough. Thank you!
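A minimal Python sketch of the distinction this comment draws, using only the standard library; the message and key are made-up placeholders, not anything from the presentation.

```python
import hashlib
import hmac

message = b"payload"

# A plain SHA-1 digest: anyone can recompute it from the message alone,
# so it proves nothing about who produced it, and it is not encryption.
sha1_digest = hashlib.sha1(message).hexdigest()

# An HMAC mixes in a secret key, so only holders of the key can produce
# or verify the tag (message authentication, still not encryption).
secret = b"example-secret-key"  # hypothetical key for illustration
tag = hmac.new(secret, message, hashlib.sha256).hexdigest()

# Verify with compare_digest to avoid timing side channels.
valid = hmac.compare_digest(
    tag, hmac.new(secret, message, hashlib.sha256).hexdigest()
)
```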
@carrieliu6969 14 days ago
This is very helpful, thank you!
@nagusameta366 14 days ago
How do I calculate the optimal numPartitions for repartition or coalesce on a DataFrame?
@sujaa1000 14 days ago
Thank you, it was great!
@vam8775 14 days ago
Now Amazon is moving to Ray... when I had just started with Spark 😢
@jasonkhaihoang781 15 days ago
Thanks for the useful demo. One question, please. I assume the DAB databricks.yml files are created in the same project as the notebooks, dbt models, and AI/ML models, correct? But the DAB code can be maintained separately from the data application code. Also, shouldn't the deployment (CD) happen after we hit the "Approve Pull Request" button? I see that the deployment happens right after the unit test/validation passes.
@jasonkhaihoang781 15 days ago
So my understanding is that we put the databricks.yml file and the resources folder for DAB inside the same code repo where we develop the dbt models, notebooks, and AI/ML models. Is that correct? We would not put the DAB project in a separate repo, since I see that notebooks need to be referenced using a relative path ("../"). Thanks :)
@GenBollywood-i9y 15 days ago
CTOs have doubts and find it difficult... freshers are confident and find it easy. :-)
@amit1agrawal 15 days ago
Very nice explanation. My hunt for the internals of Delta Lake ends here.
@jasonkhaihoang781 16 days ago
Can we use CDF (Change Data Feed) for Apply Changes Into in Streaming tables? :)
16 days ago
Anyone else getting the error "raise ValueError("Must specify a chain Type in config")"? It looks like the LangChain version I'm working with doesn't like "stuff".
@HaoyuWang-t1i 13 days ago
Got the same error when serving the model. It worked fine a month ago, but I'm not sure how to fix it now.
13 days ago
@@HaoyuWang-t1i I fixed it by installing the exact LangChain Python modules he did.
@ser1ification 16 days ago
Getting permission issues all the time (access denied)
@malaka123456 17 days ago
Great demo!
@magicgoku 18 days ago
Thanks! Looks awesome
@dahof2789 18 days ago
Also seeing more migrations from Unity to Infa CGDC. Phase 1 is Catalog of Catalogs. Phase 2 is EOL of Unity.
@dahof2789 18 days ago
I see a lot of Databricks in use. It's pretty much either DB or Snowflake as of Nov. '24. But I see more and more Databricks.
@Ahmdrahbi 18 days ago
love it
@raghav6516 19 days ago
How does this model work? How is the data stored, and is there any retention period for this Genie?
@golddata5151 19 days ago
When will I ever see a real demo on Databricks?? This is just a bunch of slides!!
@joanpareja3313 19 days ago
This was brilliant! Great presentation guys!!
@dpx89 19 days ago
"Delta is open," hahaha. People really believe that? An OSS project that dbx maintains and controls...
@dragondove6197 19 days ago
For the Java comparison part, I don't think it's fair, because Java's standard library doesn't provide a pre-configured HTTP client. It should be compared with a library like `hutool-http`, which provides something like `String resText = HttpUtil.get(url)`. Still, Scala has the scripting `.sc` format and a great tool in `scala-cli`, which makes scripting much easier. (And I like the Scala worksheet; it's like a simpler version of Jupyter.) For the Python comparison part, your libraries are more comfortable than their counterparts in Python's ecosystem. And Python's lambda is awful. (I do hate Python's variable scoping and global variables, which make things complex.)