Comments
@mohamedtalal4468
@mohamedtalal4468 2 days ago
Thank you.
@shivamchandan50
@shivamchandan50 4 days ago
Please share the dataset.
@Jaspbumrah
@Jaspbumrah 7 days ago
Is it true that a shortcut can only be created on a Delta file?
@jainam_soni55
@jainam_soni55 8 days ago
Hey, the Azure SQL in Docker website isn't available right now; it shows a 404 error.
@dfgdf434
@dfgdf434 a day ago
Yes, it's the same for me. Did you fix it?
@steveworley7053
@steveworley7053 9 days ago
Use this for the Price and eliminate the Code column: =GOOGLEFINANCE(REGEXREPLACE(C4," ",""),"Price")
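For anyone reproducing that cleanup step outside Sheets: the REGEXREPLACE call simply strips spaces from the ticker cell before handing it to GOOGLEFINANCE. A minimal Python sketch of the same transformation (the ticker strings below are hypothetical examples, not from the video):

```python
import re

# Hypothetical cell values with stray spaces, like "NASDAQ: AAPL" in C4.
tickers = ["NASDAQ: AAPL", "NYSE: IBM"]

# Equivalent of REGEXREPLACE(C4, " ", ""): delete every space character.
cleaned = [re.sub(r" ", "", t) for t in tickers]
print(cleaned)  # ['NASDAQ:AAPL', 'NYSE:IBM']
```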
@safaesulayman2497
@safaesulayman2497 14 days ago
I tried this on a MacBook Pro 15-inch (2017), but I hit an issue when I get to the terminal step.
@nsgamer...1635
@nsgamer...1635 15 days ago
Microsoft.Data.SqlClient.SqlException (0x80131904): A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 40 - Could not open a connection to SQL Server). Sir, this issue appears after I enter the username and password...
@rahmiozerdemir8151
@rahmiozerdemir8151 17 days ago
If the Restore button is not visible, go to the bottom left, click Settings > Command Palette, and type `restore` manually.
@pavanvenkatasai10
@pavanvenkatasai10 17 days ago
Hey brother, everything is good, but the volume is low and the explanation feels a bit slow; even after switching to 2x it still doesn't speed up. Also, please don't keep clearing the queries you write and explain; leave them on screen. All the best, keep doing more 👏
@duhaelbashier9303
@duhaelbashier9303 21 days ago
Not working. Another video that is a COMPLETE waste of time.
@VladislavaVucetic
@VladislavaVucetic 28 days ago
I also don't have the Restore option available. I've already enabled preview features and installed the SQL Server dacpac, so I'm really not sure how to get that button :(
@alio3876
@alio3876 29 days ago
Thanks for the effort
@lilasle
@lilasle 29 days ago
I use a MacBook Air 2019. I followed the instructions, but at the restore step I couldn't find the button in my Azure Data Studio, even though I pressed refresh many times.
@VladislavaVucetic
@VladislavaVucetic 28 days ago
Go to the general settings icon (for me it's at the bottom left), then hit Command Palette and put Restore in the search bar. It pulls up the same box as in the video, and it worked for me.
@matveykokin4879
@matveykokin4879 21 days ago
@@VladislavaVucetic Thanks boss!
@matveykokin4879
@matveykokin4879 a month ago
Thank you so much!!!
@ANJANAKS-nh9ht
@ANJANAKS-nh9ht a month ago
I can't choose the separator option while merging in Power BI Desktop. How do I solve this?
@bhavya4568
@bhavya4568 a month ago
Unable to connect to localhost. Getting this error: "A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 40 - Could not open a connection to SQL Server)"
@KichereTheDataScientist
@KichereTheDataScientist a month ago
Could you share the files you used so we can follow along?
@enagandulashirisha6064
@enagandulashirisha6064 a month ago
The volume is very low.
@user-dy8xu7uj8k
@user-dy8xu7uj8k a month ago
Hi Dominic, I have some complex scalar user-defined functions defined in MySQL, and I have to migrate them to Fabric, but as of now Fabric doesn't support creating scalar user-defined functions in a warehouse. Please let me know what alternative options I can use in this scenario. Thanks.
@TheCodeWhisperer0o0
@TheCodeWhisperer0o0 a month ago
You speak like you're a very sleepy person; 12 minutes could have been 5 minutes. It's just slow talking.
@zhenzhang6519
@zhenzhang6519 a month ago
I really need to know how to remove a specific row.
@raghavverma7174
@raghavverma7174 a month ago
Thanks for this!
@user-dy8xu7uj8k
@user-dy8xu7uj8k a month ago
Hi, good morning! I have to convert existing SQL Server stored procedures to the Fabric environment. My stored procedures contain CURSOR commands, but Fabric doesn't support CURSOR commands. How do I proceed in this case? Is there an alternative?
@aminesaib
@aminesaib a month ago
Where is the video where you show how to export your df to a single file?
@antonisxt0333
@antonisxt0333 a month ago
I'm trying to convert a CSV file to Parquet. With small CSV files the conversion succeeds, but with big files it fails. Can you help me? Good work!
@antonisxt0333
@antonisxt0333 a month ago
The code is:

from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .appName("CSV to Parquet") \
    .getOrCreate()

df = spark.read.csv("hdfs://master:9000/home/user/csv_files/warc.csv")
df.write.parquet("hdfs://master:9000/home/user/parquet_files/warc.parquet")

spark.stop()
@raniaaouida4183
@raniaaouida4183 a month ago
Thank you so much, you're a lifesaver!
@vishnukumar3936
@vishnukumar3936 a month ago
Hi Dominic, thank you for the video. I have an issue with my Azure Data Studio: your screen for managing the localhost server shows the 'Restore' button in the GUI as well as in the dropdown, whereas mine does not. How do I resolve this? Please help!
@ABDUllAH-of1gt
@ABDUllAH-of1gt a month ago
You're the best Dominic
@user-dy8xu7uj8k
@user-dy8xu7uj8k a month ago
I have a SQL Server stored procedure that updates, deletes, and merges data into a table. How do I convert the stored procedure to a PySpark job? Is it possible to update a table in Fabric using PySpark? Please make a video on this topic.
@user-dy8xu7uj8k
@user-dy8xu7uj8k a month ago
Nice video, thank you.
@brysondickerson675
@brysondickerson675 a month ago
bro just fucked my whole shit up
@lamarmohsen9582
@lamarmohsen9582 a month ago
When I try to run the line with the password in the terminal, an error appears: "Error response from daemon: Conflict. The container name "/sql" is already in use by container "0020551673ad3bbcb13eac8049832428465771979293ca52c035e8ce00e349d5". You have to remove (or rename) that container to be able to reuse that name." I am stuck!
@RS-nc5qx
@RS-nc5qx a month ago
There are a few missing parts here. How do you connect to localhost if it does not exist? Do you have to deploy a server first? Please respond to my question.
@pranavmanoj7746
@pranavmanoj7746 a month ago
thanks bro
@ketankedar7312
@ketankedar7312 2 months ago
Thanks mate.
@sathyaraj-hn7qk
@sathyaraj-hn7qk 2 months ago
Is it possible to move data from one workspace to another within Fabric?
@vijayarajan-bt5fk
@vijayarajan-bt5fk 2 months ago
Simple, sturdy, and clear explanation.
@michaelthornton6095
@michaelthornton6095 2 months ago
Thank you for this!
@mohammedzuber7554
@mohammedzuber7554 2 months ago
Can I copy only the selected columns?
@anonymous10765
@anonymous10765 2 months ago
A connection was successfully established with the server, but then an error occurred during the pre-login handshake. (Provider: TCP Provider, error: 35 - An internal exception was caught.) Encryption was enabled on this connection; review your SSL and certificate configuration for the target SQL Server, or enable 'Trust server certificate' in the connection dialog.
@kummarimallika7543
@kummarimallika7543 2 months ago
How do I practice KQL, and what do I need to download?
@alonaalona5967
@alonaalona5967 2 months ago
wow, thank you very much 😃
@bobvance9519
@bobvance9519 2 months ago
When I do df.write.csv("Export/exportcsv.csv", header=True), I get this long Py4JJavaError, and it creates a folder literally called exportcsv.csv inside the Export folder. What am I doing wrong? Root of the trace:

Py4JJavaError: An error occurred while calling o150.csv.
: java.lang.UnsatisfiedLinkError: 'boolean org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(java.lang.String, int)'
	at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
	at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:793)
	at org.apache.hadoop.fs.FileUtil.canRead(FileUtil.java:1249)
	...
	at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
	at java.base/java.lang.Thread.run(Thread.java:1623)
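Two notes on the trace above. The folder is expected: Spark's df.write.csv always writes a directory of part files, not a single file. The actual failure, the UnsatisfiedLinkError on NativeIO$Windows.access0, is the classic symptom of Spark on Windows not finding the Hadoop native binaries (winutils.exe and hadoop.dll). A minimal pre-flight check, sketched with a hypothetical C:\hadoop install path (the helper name is mine, not from the video):

```python
import os
from pathlib import Path

def hadoop_native_ok(home=None):
    """Return True if winutils.exe is present under HADOOP_HOME\\bin.

    Spark's local-file committer on Windows calls into these native
    binaries; if they are missing, writes fail with UnsatisfiedLinkError.
    """
    # Hypothetical default location; adjust to wherever winutils.exe lives.
    base = Path(home or os.environ.get("HADOOP_HOME", r"C:\hadoop"))
    return (base / "bin" / "winutils.exe").is_file()

# Run this before building the SparkSession and fix HADOOP_HOME if it fails.
```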
@jask4423
@jask4423 2 months ago
What is the difference between OneLake, a lakehouse, and a warehouse in Fabric?
@jaggyjut
@jaggyjut 2 months ago
One of the best channels for learning Fabric. Thank you for creating this awesome training content.
@4hmed-2jz
@4hmed-2jz 2 months ago
Please note for the future that the password must be at least 8 characters long and contain characters from three of the following four sets: uppercase letters, lowercase letters, base-10 digits, and symbols. Otherwise you will get an error when trying to connect.
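The rule above (at least 8 characters, drawing on three of the four character classes) can be checked before starting the container. A small sketch; the helper name is mine, not part of the video:

```python
import string

def meets_sa_password_policy(pw):
    """At least 8 chars and characters from >= 3 of the 4 sets:
    uppercase, lowercase, base-10 digits, symbols."""
    classes = (
        any(c.isupper() for c in pw),              # uppercase letters
        any(c.islower() for c in pw),              # lowercase letters
        any(c.isdigit() for c in pw),              # base-10 digits
        any(c in string.punctuation for c in pw),  # symbols
    )
    return len(pw) >= 8 and sum(classes) >= 3

print(meets_sa_password_policy("Str0ngPass!"))  # True
print(meets_sa_password_policy("weakpass"))     # False: only lowercase
```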
@charalamposkatsoukis8694
@charalamposkatsoukis8694 2 months ago
Really good video, and good comments too. I might add: if Azure Data Studio cannot connect and throws the error "A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 40 - Could not open a connection to SQL Server)", then you probably didn't start Docker. Go to Docker, open the Containers tab, and press Start on the proper container (for me it is called sql). Now try to connect again; everything should be fine! A lot of the problems someone could face during this procedure have already been solved down in the comments section!
@cathyordinola5816
@cathyordinola5816 2 months ago
I am currently having this problem, but I also don't know what to put as the username... I am a bit confused.
@charalamposkatsoukis8694
@charalamposkatsoukis8694 2 months ago
@@cathyordinola5816 You don't need to change the username; just leave it as SA. Just use a strong password, because otherwise it might not work (I didn't have that issue personally, but others have reported this problem).
@slayerop8759
@slayerop8759 2 months ago
Hey brother, what if we want to add or subtract two date columns (Order Date and Ship Date)? I set the data type to Date on both columns, but the Standard option goes grey as soon as I select either of them.
@hadiqabukhari6780
@hadiqabukhari6780 2 months ago
There is no "Restore" button showing in my Azure Data Studio when I go to "Manage".
@matveykokin4879
@matveykokin4879 21 days ago
Same... Were you able to solve the issue yet?
@schan263
@schan263 7 days ago
Select a database (create a dummy one if necessary) and then select "Manage"; you will see the Restore option. Don't worry if it shows a different database; it will be corrected once you select the .bak file. However, Azure SQL Edge doesn't seem to work for tables with more rows (e.g. the employee table); you can only query small tables, so I am switching to Azure SQL Server.