I used AWS Glue under the free tier and it cost some amount after a month, because I was unaware of the extra charges. So I request that you please add a comment while making the video about which services can cost money and which can be used under the free tier. That would be very helpful for newbies like me.
@hlulaniwinners7076 9 months ago
That was really good to follow... it 100% worked and I learned so much more in 40 minutes 😀😃
@adityatomar9820 8 months ago
I got a bill of 2.80 dollars just from running the Glue ETL once... I don't know how I'm going to create more projects if they keep billing like this. I can't afford the fees right now. What can I do?
@BOSS-AI-20 2 months ago
@adityatomar9820 Make a new free tier account.
@OlafKoch-j5y 8 months ago
That's what I was looking for. Thank you :)
@OlafKoch-j5y 8 months ago
Also, you should create a playlist with all the data engineering projects you've already done; it would make them easy to find :)
@ravi19900 8 months ago
Amazing content... This is the first practical AWS DE video I've watched, and I'm glad I found it. Thank you! Can you please share an automated way of doing the ingestion process into the S3 staging folder, plus a preprocessing demo followed by an SCD Type 2 implementation in Glue?
@TonyRydinger-bq9pk 7 months ago
Great video! Can you please explain the preprocessing part? What exactly did you use to preprocess the datasets; was it a Python script using pandas, or something else?
@dggh7879 9 months ago
Great project for beginners!!
@ganesh.majety5260 10 months ago
Just watched one video and you've gained a subscriber 🎉. Hoping for more from you 😊
@rampatil-t6u 6 days ago
Incredible work on this data engineering project! The ability to design, implement, and optimize the entire pipeline, from data ingestion to processing and visualization, is a true testament to your skill set. The attention to detail, efficiency of the workflow, and seamless integration of various tools and technologies are impressive. Your understanding of data architecture and best practices shines through in every step of the project. Keep up the fantastic work!
@datewithdata123 6 days ago
Thank you very much!
@adityatomar9820 8 months ago
OH GOD! The AWS UI always overwhelmed and scared me... But you just explained everything so beautifully... Thank you so much, man... I finally feel confident that I can learn AWS and build awesome projects. BTW, will AWS charge us for using Athena and Glue, since they don't come under the free tier?
@datewithdata123 8 months ago
Yes. For completing this project, the bill will be less than half a dollar (if you don't run the Glue job a lot).
@sanjeevpandey2753 10 months ago
Nice one, bro; a very precise and clear explanation.
@datewithdata123 10 months ago
Glad you like it!
@sanjeevpandey2753 10 months ago
May I have your email address, please?
@datewithdata123 10 months ago
datewithdata1@gmail.com
@avinash7003 10 months ago
Can you do one on S3, Glue, EMR, Lambda, Athena, and Redshift?
@datewithdata123 8 months ago
Ongoing. It will be released soon.
@swapnilgaikwad9738 10 months ago
Good! Please make another end-to-end AWS data project video.
@BommineaniSai 1 month ago
I'm facing an issue joining the tracks with the album & artist data, as it shows NO SOURCE KEY for the album & artist join condition. Can you help, please?
@bullshere1122 1 month ago
Same issue, please help, anybody!
@datewithdata123 1 month ago
Please check whether you have provided the correct join condition.
@binod8720 15 days ago
The crawler failed to automatically create the table from the S3 directory where the Parquet datasets are stored. I'm not sure what happened, as I followed the exact steps: Glue was given access to S3 by creating a role for that specific user. Any feedback on how to resolve this issue?
@manojk1494 9 days ago
Were you able to resolve this issue?
@binod8720 9 days ago
@manojk1494 Yep, resolved, thanks 😊
@ibrahimfadhili6621 2 months ago
I love it ❤ Thanks, man!
@datewithdata123 1 month ago
I'm glad you like it
@ajtam05 8 months ago
Would you know why the 'Data preview' on joins might not populate any data, aka 'No data to display'? I did a sanity check, and the albums and artists files (in Excel) do indeed have matching data from artist_id (album) to id (artist). But when I join on those conditions, as you did, it doesn't populate any data. Just to see, I tried right and left joins, and those actually populated data for each respective side (oddly enough). It seems like a glitch, since the script is simple and the join script looks correct. Do you know if the data types are converted, or if something else happens behind the scenes when you join in Visual ETL?
@ajtam05 8 months ago
I basically can't do the project, because the subsequent nodes require data fed from previous nodes, and there's no data at the first join (album/artist). Really odd.
@datewithdata123 8 months ago
Please check that you have your data in S3.
@himanshusaini011 7 months ago
Yes, we do have the data in S3, but the same issue pops up for me as well.
@danielpequeno33 3 months ago
I have the same problem. Could any of you solve it? @himanshusaini @ajtam05
@mushkarasaiprakash1915 5 months ago
How did you preprocess the data? What did you remove or change while preprocessing it?
@vishalkanvajiya-j8t 1 month ago
Bro, please explain the visualization part too.
@gnaneshwaripanthagani3515 7 months ago
In AWS Glue, when I am creating the pipeline, the transform join doesn't give me the option to select any source key. Can you please help?
@FredRohn 7 months ago
I used infer schema, and that seemed to fix the problem for me :)
@gnaneshwaripanthagani3515 7 months ago
@FredRohn Thank you so much, it works for me!
@kunalnkalore 4 months ago
In real-world work, do we have to perform these IAM tasks manually, or do we just run Terraform scripts (or something similar) so the architecture or cluster spins up? Can you clarify how this works in practice?
@djsamxgaming5732 5 months ago
I don't know why I'm not able to see the output in the data warehouse, even though I can see a 100% success rate in the job monitoring window. Could you tell me what the problem might be?
@TonyRydinger-bq9pk 7 months ago
Great video! Can you let us know what you used for preprocessing? Was it a Python script using pandas, or something else?
@tulasipanthagani6401 7 months ago
Can you please help with why the crawler is not running? It is asking for some permission. Which permission do we need to add?
@vidhyabharathi3947 1 month ago
I am unable to run the Athena query; it says it is unable to find the Parquet format.
@datewithdata123 1 month ago
Check whether you have provided the correct S3 path, with the right permissions.
@kukhwa 2 months ago
I've followed all the steps you shared, but I can't run the crawler. It seems like the error, AccessDeniedException, is related to CloudWatch Logs, so I've added CloudWatch Logs full access, but it's still not working. Do you have any insights?
@shaikgouse4u 1 month ago
Because I have fixed this one and created the table successfully.
@datewithdata123 1 month ago
Please give the user IAM permissions (administrator or the necessary permissions).
@vidhyabharathi3947 1 month ago
@datewithdata123 I faced the same issue; I have provided full access but am still unable to run the crawler.
@hemanthkumar7782 1 month ago
@vidhyabharathi3947 I added the Glue service role and then it worked.
@shaikgouse4u 1 month ago
@vidhyabharathi3947 Try following the steps below to fix the crawler error:
1. Using the root user, go into the AWS Glue console → Getting Started page.
2. Click the "Set up roles and users" option.
3. Choose your IAM user.
4. In the next stage, select "Grant full access to Amazon S3" → "Read and write".
5. Select the recommended AWSGlueServiceRole.
6. Review & apply the changes.
7. Go to the IAM console → Access management → Roles. There you'll see the role AWSGlueServiceRole created and assigned to the IAM user selected in step 3.
8. Re-run the crawler job and it'll complete successfully.
@ruben3815 10 months ago
Good job, dude!
@adityatomar9820 8 months ago
Please also explain how to push these kinds of projects to GitHub.
@nguyentien4711 3 months ago
This procedure shouldn't go on your GitHub; it's just a BI-tool workflow, while GitHub is the place to show your coding skills and projects built purely with code, from scratch.
@CricketLover-qy9nn 8 months ago
I'm unable to get the track_id from the album and artist join. What might be the reason?
@KomalChavan-ht7wm 7 months ago
Same.
@KomalChavan-ht7wm 7 months ago
Hey, how did you resolve this issue?
@FredRohn 7 months ago
@KomalChavan-ht7wm Use infer schema; that fixed the problem for me.
@FredRohn 7 months ago
Try infer schema; that made it work for me.
@danielpequeno33 3 months ago
Did you find a way to solve it?
@eugenia6490 10 months ago
A question, please: at the 26:38 timestamp you mentioned that the job created multiple blocks. Why are there multiple blocks? Thank you!
@datewithdata123 10 months ago
We created two worker nodes, and since we have very little data, we could see that there were exactly two files in our warehouse table.
@eugenia6490 9 months ago
@datewithdata123 Thank you!
@mkdTech369 1 month ago
More videos, please!
@Divya-gn5lh 3 months ago
Hey @datewithdata, firstly, I like your project playlist. If you shared the source code with us, it would be helpful. Thanks for the content!
@kumarsumit6117 8 months ago
Could you please help me? After successfully running the Glue pipeline, the data is not stored in the final S3 bucket.
@datewithdata123 8 months ago
Please share a screenshot of your error at datewithdata1@gmail.com.
@rahulcp7013 3 months ago
Were you able to resolve this issue? I am facing the same one.
@mwanthidaniel1254 9 months ago
Is S3 a data warehouse or a data lake?
@datewithdata123 9 months ago
S3 is neither a warehouse nor a data lake; it's an object storage service provided by AWS. It can be used as either, though, because it can hold large volumes of structured and unstructured data for analytics, processing, and other purposes.
@shivam87480 6 months ago
Can anyone tell me how to showcase this project on GitHub or put it on a resume?
@KomalChavan-ht7wm 7 months ago
At the transform step, when joining the tables on a condition, the data is not being fetched for the columns. Can anybody help me?
@himanshusaini011 7 months ago
Same issue for me.
@Gauravsingh-hx6lw 3 months ago
When I add the policy for Glue, it's not working. Can you help me?
@AshutoshParashar-u5l 3 months ago
Assign Glue access to the glue_s3_role you created, and it will work!
@udaykirankankanala3635 7 months ago
When I try to save the Visual ETL job, it shows the error "CreateJob: AccessDeniedException". What policy do we have to add in the root account?
@datewithdata123 7 months ago
iam:PassRole
@udaykirankankanala3635 7 months ago
I am unable to find that policy in the root account. Please help me.
@datewithdata123 7 months ago
Or provide IAM full access.
@FredRohn 7 months ago
@udaykirankankanala3635 Did you solve this issue? I am experiencing the same thing.
@FredRohn 7 months ago
@datewithdata123 How do I do this? I'm having a similar issue.
@badboy1585 8 months ago
Hello bro, the services you used in this project come under the free tier, right? Or do we have to pay?
@datewithdata123 8 months ago
Some of the services are not under the free tier. For completing this project, the bill will be less than half a dollar (if you don't run the Glue job a lot).
@adityatomar9820 8 months ago
@datewithdata123 I got a 2.80 dollar bill just from running the ETL once in Glue.
@SS-gv8kh 6 months ago
@datewithdata123 When I run the Glue job it succeeds, but the output files are not created in S3. Did you or anyone else face a similar issue?
@AshutoshParashar-u5l 3 months ago
In the Visual ETL, are you seeing a green tick on every node? If not, the ETL process has not completed as designed. Make sure all the nodes are green, then run it. I faced the same error, resolved it, and it's working as expected.
@rahulteja4849 10 months ago
While joining the tables in Visual ETL, I could not add the condition, because I could not look up the column names; it isn't showing me any columns.
@tokyochannel5684 9 months ago
Solved?
@vichitravirdwivedi 8 months ago
Refresh it multiple times. It happened to me too.
@datewithdata123 8 months ago
This can happen when you have a slow internet connection, because Glue reads the schema from the data present in S3, so the connection needs to be established first.
@himanshusaini011 7 months ago
@vichitravirdwivedi I already tried it multiple times, but no output.
@FredRohn 7 months ago
@himanshusaini011 Try using infer schema; all of the fields popped up for me after doing that.
@akshaypy4117 6 months ago
The crawler will not run with just S3 full access as shown here, right?
@datewithdata123 6 months ago
You may need to add IAMFullAccess if you are working as an IAM user.
@sidharthv1060 6 months ago
@datewithdata123 I have added IAMFullAccess within the role glue_access_s3 as well, but the crawler still failed to run.
@VivekYadav-og4lt 6 months ago
@sidharthv1060 I think you need to add the AWSGlueServiceRole.
@supriya9047 6 months ago
@sidharthv1060 I am also facing the same issue repeatedly, even after providing all the required access.
@kshitijjoshi2092 5 months ago
@supriya9047 Same.
@ajtam05 8 months ago
I get an iam:PassRole error when trying to attach the role to the project. iam:PassRole looks very confusing, and I'm not sure why no one else is encountering this issue.
@ajtam05 8 months ago
User: arn:aws:iam::905418287400:user/proj is not authorized to perform: iam:PassRole on resource: arn:aws:iam::905418287400:role/glue_access_s3 because no identity-based policy allows the iam:PassRole action
@datewithdata123 8 months ago
In the beginning, while creating the IAM user, please add IAMFullAccess. This happens because the iam:PassRole action is required when a role is passed to a service like AWS Glue.
@ajtam05 8 months ago
@datewithdata123 OK, I will try that. I tried multiple solutions around creating a new policy and attaching it to the user, but no luck. Hope that works 🙏
@ajtam05 8 months ago
@datewithdata123 Yep, that worked. Thanks for that.
@ajtam05 8 months ago
@datewithdata123 I believe that change has affected the way the joins occur. Before, I was able to join the album & artist join with the tracks, but now the album & artist join doesn't populate any data. It looks like other people have a similar issue when I google it, but no solutions are provided online. Are you aware of this?
@vivekpawar3069 8 months ago
Sir, please attach the CSV file preprocessing code.
@ishwariupadhyay8122 10 months ago
Can you provide your GitHub link for the data preprocessing?
@datewithdata123 8 months ago
Sorry, I didn't save the code. We used Visual ETL, so the code was auto-generated.
@backgrounding4821 6 months ago
Hello, can you please update the processed data link?