Could you kindly provide the notebook or code so that we can try these things out too? As I am a newbie at this, please help with the code or notebooks.
@slothc 2 days ago
Last update in 2022 and it still doesn't support incremental deployment (but the Publish button does). Even the most basic projects take 5-10 minutes to deploy no matter how large the changes are.
@curious-abc-xyz 9 days ago
Synapse is 100% pure junk; if you are considering implementing it, stay away. If you are already feeling the pain, then God bless you! :)
@vishnubasskarv 10 days ago
The stored procedure will not work when we have 10 tables
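Since the original procedure isn't shown, here is a hypothetical sketch of one way to scale a load routine to ten (or N) tables: generate the statement per table from a driver list instead of hard-coding each one. All table, schema, and column names here are made up for illustration.

```python
# Hypothetical sketch: generate one load statement per table from a driver
# list, rather than hard-coding a statement for each table in the procedure.
# Table names (dbo.Sales_N), staging schema (stg), and key column (Id) are
# invented for this example.
TABLES = [f"dbo.Sales_{i}" for i in range(1, 11)]  # 10 tables

def build_load_sql(table: str) -> str:
    staging = table.replace("dbo.", "stg.")
    return (
        f"INSERT INTO {table} "
        f"SELECT * FROM {staging} s "
        f"WHERE NOT EXISTS (SELECT 1 FROM {table} t WHERE t.Id = s.Id);"
    )

statements = [build_load_sql(t) for t in TABLES]
print(len(statements))   # 10
print(statements[0])
```

Each generated statement could then be executed in a loop (e.g. from a pipeline ForEach or a notebook), so adding an eleventh table only means extending the driver list.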
@melikagerami 13 days ago
What if I want to add an AAD group with multiple users and not a particular email address?
@melikagerami 13 days ago
what if I want to add an AAD group with different users and not just one user?
@martinsmith8670 15 days ago
Won't using MM-DD-YYYY format for partitioning make searching for a date range more problematic than YYYY-MM-DD would?
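The commenter's concern can be demonstrated in a few lines of plain Python: YYYY-MM-DD strings sort lexicographically in chronological order, so string range predicates over partition names behave correctly, while MM-DD-YYYY strings do not.

```python
# ISO (YYYY-MM-DD): lexicographic order == chronological order, so a
# string range scan over partition folders picks the right partitions.
iso_dates = ["2024-02-01", "2023-12-31", "2024-01-15"]
assert sorted(iso_dates) == ["2023-12-31", "2024-01-15", "2024-02-01"]

# MM-DD-YYYY: lexicographic order ignores the year, so a range predicate
# over partition names selects the wrong partitions.
us_dates = ["02-01-2024", "12-31-2023", "01-15-2024"]
print(sorted(us_dates))  # ['01-15-2024', '02-01-2024', '12-31-2023'] - not chronological
```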
@DataLuke 15 days ago
Shouldn't this channel be called Fabric, not Synapse?
@DataGuide 19 days ago
Is it possible to share one environment across workspaces?
@bhuvanpurohit9951 21 days ago
It's funny to see a Microsoft employee using a Mac instead of Windows. Haha, just a joke, but this video was very beneficial.
@olimjonn 23 days ago
Too much general, high-level talk.
@ozkary a month ago
Looks like we can create a logical data warehouse with delta tables by defining dimension and fact tables and then running incremental updates on those tables. Spark would need to handle the incremental updates, correct?
@sanjayj5107 a month ago
So we need a managed private endpoint to connect to Key Vault in a Fabric workspace?
@pawewrona9749 a month ago
That depends on your Azure setup. I have key vaults where it's fine for me to connect to them without a private endpoint.
@sanjayj5107 a month ago
How do we allow only certain users to access semantic models and not any other items in Microsoft Fabric?
@asedfsdfsdf0y74308gh a month ago
How about that network control for notebooks to stop people dumping the data out to wherever they want? How's that coming along?
@Kayalbabu-t8x a month ago
You can't start without this madness poison coffee 😂
@SurajKumar-vz9tj a month ago
I got an error while creating the connection. It automatically created a connection, but I am not able to see the tables inside with the same credentials I use for Snowflake. Any idea?
@naperushivasarangi4415 a month ago
Do we have Dataverse as a destination?
@keen8five a month ago
is it possible to emit the logs to the "files" section of a Lakehouse (OneLake)?
@jennyjiang6301 a month ago
It is not possible right now, but we can add the request to our backlog.
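While Spark's own logs can't be emitted there natively (per the reply above), one hedged workaround for application-level logging is a standard Python `logging.FileHandler` pointed at the lakehouse mount that Fabric notebooks expose (commonly `/lakehouse/default/Files/...` when a default lakehouse is attached; verify the mount path in your environment). This captures only your own log lines, not driver/executor logs. The demo below writes to a temp directory so it runs anywhere.

```python
# Workaround sketch, not a native feature: attach a plain Python file
# handler to a path under the lakehouse Files mount. In a Fabric notebook
# you would pass e.g. "/lakehouse/default/Files/logs/run.log" (an assumed
# mount path - check your environment); here we demo against a temp dir.
import logging
import os
import tempfile

def make_file_logger(path: str, name: str = "notebook") -> logging.Logger:
    os.makedirs(os.path.dirname(path), exist_ok=True)
    logger = logging.getLogger(name)
    logger.setLevel(logging.INFO)
    handler = logging.FileHandler(path)
    handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
    logger.addHandler(handler)
    return logger

log_path = os.path.join(tempfile.mkdtemp(), "logs", "run.log")
log = make_file_logger(log_path)
log.info("pipeline step finished")
print(open(log_path).read().strip().endswith("pipeline step finished"))  # True
```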
@tannguyen0175 a month ago
How do I get the list of schemas in a lakehouse?
@Chrisxantixemox a month ago
What is not clear at all is how Spark VCores relate to tenant capacity. The F64 SKU has 64 CU, but I might have other activities consuming some CU while my notebooks are running. Is my capacity still 128 VCores, or is it 128 - (2 * x)?
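As a back-of-envelope model of the arithmetic in this question, assuming the commonly documented ratio of 2 Spark VCores per capacity unit (CU): F64 maps to 128 VCores. Whether concurrent non-Spark workloads actually reduce the Spark pool, and how bursting and smoothing apply, depends on the capacity configuration, so treat this as the naive "128 - (2 * x)" reading only.

```python
# Naive arithmetic model, assuming 2 Spark VCores per CU (check the
# Fabric capacity docs for your SKU's actual bursting/smoothing behavior).
VCORES_PER_CU = 2

def spark_vcores(sku_cu: int, cu_used_by_other_workloads: int = 0) -> int:
    """Remaining CUs times the VCores-per-CU ratio."""
    return max(sku_cu - cu_used_by_other_workloads, 0) * VCORES_PER_CU

print(spark_vcores(64))      # 128 -> full F64 capacity
print(spark_vcores(64, 10))  # 108 -> the "128 - (2 * x)" reading, x = 10 CU
```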
@Myachnik a month ago
Thanks a lot, man!
@Chrisxantixemox a month ago
I'd like to see this compared to spark.read.csv. I've found the type casting used by COPY INTO is insufficient, but when I use an explicit schema with spark.read.csv I do not have those issues.
@debojyotimazumder6692 a month ago
Hi, I am working in DW prod support (Azure Synapse only). One query is long-running and getting stuck at a specific ShuffleMove operation. Any idea how we can resolve this, or a general procedure to tackle it?
@jenszemke3967 a month ago
Getting an error message that I don't have permission to create a notebook. 😮
@desaim94 a month ago
Hmm, do you have permission to create notebooks in your workspace? AutoML will create a notebook at the end, so you'll need this permission to finish the wizard.
@jenszemke3967 a month ago
Yes, I have created multiple notebooks which basically do something similar without the UI. Also checking with my admin, but I doubt it's an access issue.
@MarkBrady a month ago
I keep waiting for this to show up in my tenant.
@DrEhrfurchtgebietend 2 months ago
We need to denormalize the use of notebooks
@RainbowPigeon15 2 months ago
I'm so lost
@sutit 2 months ago
Hi, thanks for the video. I don't see the "Use a sample" tile in my Fabric Data Science workspace. Where is it?
@keen8five 2 months ago
When you talk about a "session" here, do you really mean the SparkSession object that is referenced by the "spark" variable? If so, that would mean the Spark cache is also shared between the notebooks, right?
@SanthoshKumarRavindran a month ago
As these notebooks share a single Spark session, yes, they would also share the cache.
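The point in this exchange can be illustrated with a toy model: two notebooks whose `spark` variables reference the same session object necessarily see the same cache, because the cache lives on the session, not on the notebook.

```python
# Toy illustration (not real Spark): the cache belongs to the session
# object, so any notebook holding a reference to the same session sees it.
class ToySparkSession:
    def __init__(self):
        self.cache = {}  # stands in for Spark's set of cached DataFrames

shared = ToySparkSession()
notebook_a_spark = shared  # both notebooks' `spark` variables point
notebook_b_spark = shared  # at the one shared session

notebook_a_spark.cache["df_sales"] = "cached data"
print("df_sales" in notebook_b_spark.cache)  # True - visible to the other notebook
```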
@maganurusunil7662 2 months ago
I have one doubt: if I am running a pipeline on a schedule which has notebooks, how will the session dynamically be selected as a high-concurrency session?
@SanthoshKumarRavindran a month ago
Once the high-concurrency mode option is enabled in the workspace settings, the notebooks get packed into the same sessions based on the session-sharing criteria: the notebooks should have the same compute configuration, the same environment, the same default lakehouse, and the same session tag string.
@maganurusunil7662 a month ago
@ I have a scheduled pipeline with notebooks; starting the session took 1 min 30 sec, and I also tested with high concurrency disabled and it still took around 1 min 30 sec. So how is this working then?
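The session-sharing criteria described in the reply above can be sketched as a matching key; this is an illustrative model only, not Fabric's actual implementation. Two notebook runs can share a high-concurrency session only when all four attributes match.

```python
# Illustrative model of the session-sharing criteria (same compute config,
# environment, default lakehouse, and session tag) - not Fabric internals.
from dataclasses import dataclass

@dataclass(frozen=True)
class NotebookRun:
    compute_config: str
    environment: str
    default_lakehouse: str
    session_tag: str

def session_key(run: NotebookRun) -> tuple:
    return (run.compute_config, run.environment,
            run.default_lakehouse, run.session_tag)

a = NotebookRun("medium", "env1", "lh_sales", "nightly")
b = NotebookRun("medium", "env1", "lh_sales", "nightly")
c = NotebookRun("medium", "env1", "lh_finance", "nightly")  # different lakehouse

print(session_key(a) == session_key(b))  # True  -> can share a session
print(session_key(a) == session_key(c))  # False -> separate sessions
```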
@kishanmatanhelia1221 2 months ago
Getting invalid credentials, yet I am able to do the same in Power BI Desktop.
@craykuo3002 2 months ago
In a Fabric warehouse, how can we see the execution plan as shown at 3:54 in the video?
@robertowen3322 2 months ago
A problem occurs if you stop and restart replication of a SQL replicated database. Tables get replicated, but a select on a table with security on it never returns a result.
@sanjayj5107 2 months ago
Can you please explain the use case for having multiple environments in the same workspace? Follow-up question: can we include the environment's Spark details in CI/CD?
@sanjayj5107 2 months ago
Why did my Fabric trial subscription (obtained through email) get revoked?
@fotsingtagne6770 2 months ago
Great demo! Are there any notebook files related to this?
@kaidobor1 2 months ago
This looks like a nice option for handling incremental loading when facts may change. However, I have run into a problem: the delete operation in Fabric Warehouse does not run reliably, and I often end up with duplicates (I need to run tests after loading) and then have to run the reload procedure again. Not being a SQL Server expert, it has really surprised me that the delete fails and the procedure does not report the failure back to the pipeline. Is there any alternative approach?
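One alternative pattern worth considering for the problem above is a key-based upsert instead of delete-then-insert: because each incoming row overwrites the row with the same key, rerunning the same load is idempotent and cannot produce duplicates, even if a previous run failed midway. The sketch below models this in plain Python over dicts; in Fabric Warehouse the equivalent would be a keyed MERGE, or DELETE + INSERT wrapped in one transaction.

```python
# Plain-Python model of an idempotent, key-based upsert: reruns never
# create duplicates, unlike a delete step that may silently fail before
# the insert. (In T-SQL: a keyed MERGE, or DELETE+INSERT in one transaction.)
def upsert(target: dict, incoming: list, key: str) -> dict:
    for row in incoming:
        target[row[key]] = row  # insert new key, or overwrite existing key
    return target

fact = {1: {"id": 1, "amount": 10}}
batch = [{"id": 1, "amount": 12}, {"id": 2, "amount": 7}]

upsert(fact, batch, "id")
upsert(fact, batch, "id")  # rerun of the same batch: same result, no duplicates

print(len(fact))           # 2
print(fact[1]["amount"])   # 12 - updated, not duplicated
```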
@mortentang5186 2 months ago
Very informative video, especially the deep dive into the pipeline settings and the use cases for each setting. Thank you 👏
@PawełRzeszut 2 months ago
Is it possible to use "Authentication=Active Directory Managed Identity" in /TargetConnectionString for the Publish action on a Synapse serverless SQL pool? I tried it and I am getting an error message like this: *** ManagedIdentityCredential authentication unavailable. The requested identity has not been assigned to this resource. Status: 400 (Bad Request) Content: {"error":"invalid_request","error_description":"Identity not found"}. I am not sure if something is missing or if it's just not supported for Synapse serverless SQL. The command I am executing: sqlpackage /a:"Publish" /sf:".\SQLFramework.dacpac" /tcs:"Data Source=MyBuiltinServerName.sql.azuresynapse.net;Initial Catalog=MyDbName;Connect Timeout=30;Encrypt=False;Trust Server Certificate=True;Command Timeout=30;Authentication=Active Directory Managed Identity"
@Eklandasy 2 months ago
Thank you for covering all relevant questions! I tried the Fabric IDE and was able to run Fabric activities via DAGs. However, with ADO storage, it seems the DAGs are not recognized in Airflow. I tried different connection options (service principal, PAT, etc.) and confirmed the folder structure that we have: dags->--.py. Not sure if I missed something.
@keen8five 2 months ago
Sounds awesome! Two questions: 1) Will this functionality also come to Azure Synapse? 2) Can I execute a CopyActivity from Airflow (without wrapping it in a Data Pipeline)?
The native execution engine supports the latest GA runtime version, which is Runtime 1.3 (Apache Spark 3.5, Delta Lake 3.2) - learn.microsoft.com/en-us/fabric/data-engineering/native-execution-engine-overview?tabs=sparksql
@kushihou6956 3 months ago
Can we deploy the script and run it automatically?
@emersonafonsopaludeti3877 3 months ago
You can't see anything, because the video is too small.
@matthiasraster2337 3 months ago
Thank you very, very much. Currently I have a customer facing lots of performance issues. All this information will help very much!
@sunilupadhyay8845 3 months ago
Good, detailed session, thank you! I have a question: how do I deploy the stored procedures of a database created in a dedicated pool using a CI/CD pipeline?
@rekha1611 3 months ago
Does this RLS apply to the semantic model created for the same?