Azure Synapse Analytics - Lake Database Map Tool

6,319 views

Advancing Analytics


A day ago

Comments: 22
@MoinKhalid · 2 years ago
Finally, someone explained Lake Databases clearly compared to SQL databases.
@nagulmeerashaik5336 · 2 years ago
Nicely explained.
@culpritdesign · 2 years ago
Great video. Thank you for this content.
@7anishok390 · 2 years ago
Hi, please help me. I have created an external table in the Synapse lake database, and now I would like to load the records from the external table into a dedicated SQL pool table. Please advise on the procedure.
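Not covered in the video, but one common pattern for this (assuming the external table is defined in, or recreated in, the dedicated pool over the same files) is a CTAS statement that materialises the external table's rows into a distributed table. A minimal sketch that builds the T-SQL as a string — the table names and distribution choice are hypothetical placeholders:

```python
# Hypothetical helper: build a dedicated SQL pool CTAS statement that
# copies an external table's rows into an internal distributed table.
# "dbo.Sales" / "ext.Sales" and ROUND_ROBIN are illustrative assumptions.
def build_ctas(target: str, external_table: str,
               distribution: str = "ROUND_ROBIN") -> str:
    return (
        f"CREATE TABLE {target}\n"
        f"WITH (DISTRIBUTION = {distribution})\n"
        f"AS SELECT * FROM {external_table};"
    )

print(build_ctas("dbo.Sales", "ext.Sales"))
```

For repeated loads, an `INSERT INTO ... SELECT` against the external table (or a Copy activity in a pipeline) would be the alternative; CTAS is typically preferred for the initial one-off load.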
@RonaldPostelmans · 2 years ago
Nice video. I wanted to explore this new functionality; in my opinion it has (like you said) no better feel than the data flow workflow. Question: I don't know much about Delta Lake. Is it something you can also use as a business intelligence engineer/data engineer to create a star model, which you can then connect to with Power BI? Is that useful?
@hozefakanchwala8720 · 2 years ago
Yes, Delta Lake can be used for BI/data engineering jobs - you can run SQL queries using a Databricks SQL endpoint and build dashboards.
@AdvancingAnalytics · 2 years ago
Yep, absolutely. As a standard practice we now land our star schemas in the lake as Delta tables and serve them out to Power BI, Tableau etc. from there, either through Synapse Serverless or Databricks SQL!
@googlegoogle1812 · 2 years ago
Do you know the difference between lake databases and the Delta Lake project? Both seem to have roughly the same functionality - I can use Spark to do ETL tasks, and then use Spark pools as well as serverless SQL pools to query the data.
@hubert_dudek · 2 years ago
It looks like the Synapse engineers don't use their own product. Synapse includes Spark, but they want you to load data from a single file instead of a directory. Catastrophe :-(
@crouch.g · 2 years ago
Why not just have a view over the source files and then use a copy to create the new table files?
@AdvancingAnalytics · 2 years ago
You can absolutely do that, although then we're using the serverless engine + integration movement to do it, rather than the Spark engine. Both are valid, but each has its own pros/cons from a cost/performance perspective.
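To make the pattern being discussed concrete: a serverless SQL view over the raw files, followed by a CETAS (or Copy activity) to write the new table files. This is a sketch of the DDL, generated as strings — the view name, storage path, data source and file format names are all hypothetical:

```python
# Sketch of the "view over source files, then copy" pattern:
# 1) a serverless view over raw files via OPENROWSET,
# 2) a CETAS that materialises it back to the lake.
# Paths and the data source / file format names are made-up examples.
def build_source_view(view_name: str, path: str) -> str:
    return (
        f"CREATE VIEW {view_name} AS\n"
        f"SELECT * FROM OPENROWSET(\n"
        f"    BULK '{path}',\n"
        f"    FORMAT = 'PARQUET'\n"
        f") AS src;"
    )

def build_cetas(table_name: str, location: str, source: str) -> str:
    return (
        f"CREATE EXTERNAL TABLE {table_name}\n"
        f"WITH (LOCATION = '{location}',\n"
        f"      DATA_SOURCE = lake_ds, FILE_FORMAT = parquet_ff)\n"
        f"AS SELECT * FROM {source};"
    )

print(build_source_view("stg.v_sales", "raw/sales/*.parquet"))
print(build_cetas("cur.sales", "curated/sales/", "stg.v_sales"))
```

The trade-off mentioned in the reply applies: this route bills per data scanned on the serverless engine, whereas the Spark route bills for pool uptime.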
@culpritdesign · 2 years ago
@@AdvancingAnalytics I would pay to see a video explaining this
@culpritdesign · 2 years ago
If this expands to allow variables and selecting folders instead of files, then this could be pretty neat.
@Kinnoshachi · 2 years ago
When is GA on Unity and DLT?
@AdvancingAnalytics · 2 years ago
That's a question for Databricks!
@Kinnoshachi · 2 years ago
@@AdvancingAnalytics I know, I just can't wait. I hope for June.
@alexischicoine2072 · 2 years ago
I feel like asking: why not just write PySpark? You can remove tedium by programming repetitive tasks with functions, and do things like build a view from tables grabbing the comments - which I definitely wouldn't want to be doing manually over 200 columns and then maintaining. Using Parquet tables is also just asking for problems when you're rewriting and someone is querying at the same time.
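The "build a view from column comments" point above can be sketched in a few lines of plain Python: given column metadata (name + comment), generate the view DDL instead of typing out 200 columns by hand. The view/table names and columns here are made-up examples:

```python
# Illustrative generator: produce a view's DDL from column metadata
# (name, comment) pairs, carrying each comment along as an inline
# SQL comment. In real PySpark the pairs would come from the table's
# schema/metadata rather than a hard-coded list.
def build_commented_view(view, table, columns):
    col_lines = []
    for i, (name, comment) in enumerate(columns):
        sep = "," if i < len(columns) - 1 else ""
        col_lines.append(f"    {name}{sep}  -- {comment}")
    body = "\n".join(col_lines)
    return f"CREATE VIEW {view} AS\nSELECT\n{body}\nFROM {table};"

cols = [("customer_id", "surrogate key"),
        ("first_name", "given name")]
print(build_commented_view("dbo.v_customer", "dbo.customer", cols))
```

Scale the `cols` list to 200 entries pulled from the catalog and the maintenance burden the commenter describes disappears.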
@AdvancingAnalytics · 2 years ago
Usually because the team/company doesn't have PySpark skills - maybe they don't have /any/ internal programming skills - so diving straight in and writing some PySpark is a pretty steep learning curve. Completely agree that Parquet isn't fit for an enterprise lake these days, and I would 100% use PySpark for this instead. But it's good to have tools for other data roles, and if they get it to a point where it's automated & slick, there's a lot of good that can do!
@NeumsFor9 · 2 years ago
Microsoft recognises that not everyone has the same path into IT... especially those who become accidental data engineers, or power users thinking about making a jump... and perhaps it facilitates the POC-to-automation cycle. Want to be better? Auto-generate a mapping data flow from someone else's Spark code, so that the visual ETL champion can become more of a coder by osmosis. At the end of the day, it's all doing the same task and the same work. However, it can become a wasteland of different objects if not properly managed.
@tanimtanim · 1 year ago
You are extra dramatic; sometimes it is so irritating and tough to concentrate. Please consider this point.
@AdvancingAnalytics · 1 year ago
Other, more boring, channels are available :D