No, there are a lot of new things with Lakeview that weren't possible in Redash; performance and sharing are the biggest changes.
@michelle4468 · 22 hours ago
21:45 How to get yourself in a good position to use the Databricks Lakehouse: 1) store data in a data lake, 2) pick a standardized open format (like Parquet or ORC), 3) adopt one of the technologies that are building blocks for lakehouses (like the open-source Delta Lake project).
@michelle4468 · 22 hours ago
20:33 Once you can enable people to do online transaction processing directly on the lakehouse, that's going to be a major technological breakthrough, and he'd be shocked if they weren't there in five years. ~Listening to this 3.5 years later: where is Databricks on this?
@michelle4468 · 22 hours ago
16:30 Next issue to solve: awareness of the lakehouse paradigm. There's a big shift of people moving from on-prem to the cloud. "And a lot of them are tempted to rebuild the same architectural pattern they had in the on-prem, into the cloud." Which doesn't really buy that much. People need to be re-educated on the lakehouse paradigm, and it needs to be spread wide.
@michelle4468 · 22 hours ago
12:45 "If you look at data science and machine-learning tools, they're not built on top of SQL. So enable that downstream use case." A data lakehouse = open data lake + downstream data science + Business Intelligence (BI).
@michelle4468 · 22 hours ago
11:37 "Many projects on top of data lakes were failing; they were pulling us in to do professional services to fix the problems they had with the data lakes. So we just wanted to sort of fix it, automate it with software once and for all."
@michelle4468 · 22 hours ago
41:51 "We need to change our attitude from being gatekeepers of systems to being shopkeepers of data... how do we actually provide data to people with sufficient governance, for them to be able to use it with flexibility, but with a light enough touch that we're not over-managing the system?"
@michelle4468 · 23 hours ago
44:16 Getting beyond techno-chauvinism and the importance of the personal aspects in data science.
@michelle4468 · 23 hours ago
27:19 Trying to drive people to a data-driven culture and to understand the language of data; it's getting better, but it's not perfect.
@michelle4468 · 23 hours ago
4:11 Someone in her network tested her analytical skills, and when she proved a question not to be accurate, she was hired; she saw the hole in the question.
@Prashanth-yj6qx · 1 day ago
Can anyone tell me why he reduced the target size from 200 MB to 100 MB?
@zombieeplays3146 · 1 day ago
So good! Still, I can't center the dashboard title 😥
@Databricks · 8 hours ago
Feature request raised 👍
@zombieeplays3146 · 8 hours ago
@@Databricks yeah let it be like markdown in Jupyter Notebooks 😅
@Databricks · 4 hours ago
Sorry for the confusion: the titles are already Markdown, and I show it at 0:51 for a brief few seconds, but it's basic Markdown, so you can't centre it within a text panel. What you can do is centre the text panel so it's at least semi-centred. Holly
@anandahs6078 · 1 day ago
Great feature, thanks for sharing
@lostfrequency89 · 1 day ago
For notebooks, should we integrate GitHub, or can we use DABs for that matter? I'm kinda confused.
@georges7298 · 2 days ago
Fantastic DLT and pipeline training, well done! Is there a GitHub project with a complete version of the example code shown in this video?
@gameversemaster · 3 days ago
cool
@SpartanPanda · 3 days ago
Great video, complete coverage of a real business use case, well explained.
@shaileshdhumma9096 · 3 days ago
Great job!
@gustyflores · 3 days ago
Great! Thank you.
@sheelstera · 4 days ago
I don't see the system.compute tables at all in my Azure Databricks workspace. What could be the reason?
@Databricks · 1 day ago
Two reasons I can think of: 1) you have to have Unity Catalog, it's where the compute comes from to deliver all the data; 2) you have to enable it with the API, and to do that you'll need to be an admin. Holly
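For anyone who wants a concrete picture of point 2, here's a minimal sketch of calling the Unity Catalog system-schema endpoint. Assumptions: the `PUT /api/2.0/unity-catalog/metastores/{metastore_id}/systemschemas/{schema}` route, an admin personal access token, and placeholder host/IDs; the code only builds the request rather than sending it:

```python
# Sketch of enabling a system schema (e.g. "compute") via the Unity Catalog API.
# The host, token, and metastore ID below are placeholders, not real values.
import urllib.request

def build_enable_request(host, token, metastore_id, schema):
    """Build (but don't send) the PUT request that enables one system schema."""
    url = f"{host}/api/2.0/unity-catalog/metastores/{metastore_id}/systemschemas/{schema}"
    return urllib.request.Request(
        url, method="PUT", headers={"Authorization": f"Bearer {token}"}
    )

req = build_enable_request(
    "https://adb-1234.5.azuredatabricks.net", "<token>", "<metastore-id>", "compute"
)
print(req.get_method(), req.full_url)
# To actually send it: urllib.request.urlopen(req)
```

Each system schema (compute, access, billing, ...) is enabled individually with its own PUT call.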
@aliaksandr2336 · 4 days ago
And why is it better than the usual %sql?
@Databricks · 1 day ago
Hi Ali, %sql makes the language for the cell SQL, which is useful if you're switching between languages. However, if you want to be SQL the whole way through and not use Python, then you can use EXECUTE IMMEDIATE to build your dynamic queries. Holly
@dusk4377 · 8 hours ago
@@Databricks why not just change the notebook to SQL?
@Databricks · 4 hours ago
@@dusk4377 This is a SQL-only feature and can be used to replace Python variables, so your code can be 100% SQL. Hope that's clearer, Holly
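A minimal sketch of the SQL-only dynamic-query pattern described in this thread, assuming the EXECUTE IMMEDIATE and session-variable syntax available in newer Databricks SQL runtimes (the `sales` table and the variable names are made up for illustration):

```sql
-- Hypothetical example: build and run a dynamic query with no Python at all.
DECLARE OR REPLACE VARIABLE tbl STRING DEFAULT 'sales';
DECLARE OR REPLACE VARIABLE stmt STRING;

-- IDENTIFIER(?) lets the table name be bound safely as a parameter.
SET VAR stmt = 'SELECT COUNT(*) FROM IDENTIFIER(?)';
EXECUTE IMMEDIATE stmt USING tbl;
```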
@ericsims3368 · 4 days ago
I love Databricks Assistant and use it pretty much every day at work. A few days ago I knocked something out in 15 minutes that would have taken me more than an hour if I had done it on my own.
@samirelzein1095 · 5 days ago
Great job!
@syndicatedmaps · 5 days ago
Do you have any map data examples?
@ravisaxena1599 · 5 days ago
You sound great at 0.75x playback speed 😊
@toniolora9226 · 5 days ago
Do you have an example on Azure?
@erukullasrikanth15 · 5 days ago
What additional advantage do we get by using Overwatch compared to using the UC system tables?
@elziolima6918 · 5 days ago
Is a data lake a component of a data pipeline?
@elziolima6918 · 5 days ago
First.
@stefanxhunga1681 · 6 days ago
✅ Interesting posts by Databricks
@tarshmidha5879 · 6 days ago
What's the link to the data preparation video that was mentioned and worked on by the data engineering person?
@LearnWithDummy · 6 days ago
Strange: 1/ Bronze: loading data from blob storage, but the path is from S3? Am I missing something here?
@zombieeplays3146 · 7 days ago
Much needed feature. I had to use PySpark just to achieve this earlier.
@erkoo2554 · 7 days ago
Jai Shree Ram
@goodstuff5666 · 7 days ago
Very nice tutorial! Could you guys share the slides? Thanks.
@shankhadeepghosal731 · 8 days ago
I want to build a workflow where the 2nd notebook runs only when a certain table count in notebook 1 is more than 0. How?
@dilipjha08 · 8 days ago
Thanks for sharing this knowledge with technology users. It went into great detail about DLT as well as streaming tables and the comparison between them, and the demo of the topic was perfect.
@vasanthbloginfo · 9 days ago
Great talk and very useful info
@pritamdodeja · 9 days ago
This is gold!
@stopthink9000 · 10 days ago
Very interesting! I wonder, in the AutoML example at 19:47 why would they have used a "double" data type for age? The robot overlords must already be planning ahead lol!
@harshtrivedi700 · 12 days ago
Didn't see the notebooks. Where are they usually deployed?
@elenavi2016 · 13 days ago
Outdated; it's useless and just a waste of time. Delete the video.
@NoahPitts713 · 13 days ago
Hoping to implement this on my current project soon!
@AmineHosni- · 13 days ago
Can we use Delta Sharing on Materialized Views that were defined in Delta Live Tables?
@booyaaaaaaa · 14 days ago
Can you add Vim support for notebooks?
@hasski · 14 days ago
Marred by the loud pointless music
@hkjpotato · 14 days ago
Why can't we just show a visualization in the UI for this result?
@Databricks · 14 days ago
Hi there! Our experience is that most people want to be able to highly customise their visualisations, and the tables are designed to allow for that flexibility. At 0:25, when you see the results, there's a + next to the table; that allows you to make a visualisation from this data. You can add it to a dashboard and then schedule and share it as frequently as you'd like. - Holly
@zombieeplays3146 · 15 days ago
Recently used these for lineage and audit; looks cool.