Hi Naresh. How can we improve read and write performance using Spoon? I have 8M records that I want to copy from Oracle on-prem to Oracle RDS. Currently it takes 8 hours and then fails with a "snapshot too old" error.
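For context, "snapshot too old" (ORA-01555) normally means one long-running extract SELECT has outlived the source database's undo retention. A fallback I'm considering outside Spoon is slicing the read into short key-range chunks with a commit per slice, roughly like this (a rough sketch only; the table, columns, and connection details are made up):

    import oracledb

    CHUNK = 100_000  # width of each id range; keeps every source SELECT short-lived

    # Connection details below are hypothetical.
    src = oracledb.connect(user="app", password="***", dsn="onprem-host/ORCL")
    dst = oracledb.connect(user="app", password="***", dsn="rds-host/ORCL")
    rc, wc = src.cursor(), dst.cursor()

    # Hypothetical table BIG_TABLE with a numeric primary key ID.
    rc.execute("SELECT MIN(id), MAX(id) FROM big_table")
    lo, hi = rc.fetchone()

    start = lo
    while start <= hi:
        end = start + CHUNK - 1
        # Each slice is a fresh, short query, so the read-consistent snapshot
        # is held only briefly instead of for the whole 8-hour extract.
        rc.execute(
            "SELECT id, col1, col2 FROM big_table WHERE id BETWEEN :lo AND :hi",
            lo=start, hi=end)
        rows = rc.fetchall()
        if rows:
            wc.executemany(
                "INSERT INTO big_table (id, col1, col2) VALUES (:1, :2, :3)", rows)
            dst.commit()  # commit per slice also bounds undo/redo on the RDS side
        start = end + 1

    src.close()
    dst.close()

Inside Spoon itself, I assume the equivalent knobs would be the Table Output commit size, batch inserts, and running multiple copies of the output step.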
@DeepakKumar-yb9mo · 2 years ago
Hi Naresh. Does Pentaho support continuous data capture? For example, if new data gets inserted into the source table, does it get synced to the destination table?
@nareshvadla434 · 2 years ago
Hi Deepak. There are Hadoop components with which we can implement Hadoop functionality, as well as Spark. Honestly, I haven't worked with them in PDI; refer to help.hitachivantara.com/Documentation/Pentaho/8.2/Data/Hadoop. For your use case, I have worked with another tool called Ascend.io, which is also based on Spark/Hadoop. Pipelines on Ascend work such that as soon as data is available at the source, it is transferred to the destination. I hope this is your use case. www.ascend.io/
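The pattern those pipelines follow is essentially a polling high-watermark sync. A rough sketch of the idea (the ORDERS table, its columns, and the connections are made up; real CDC products read the database redo logs instead of polling):

    import time
    import oracledb

    # Connections and the ORDERS table (id, amount, updated_at) are hypothetical.
    src = oracledb.connect(user="app", password="***", dsn="source-host/ORCL")
    dst = oracledb.connect(user="app", password="***", dsn="dest-host/ORCL")
    rc, wc = src.cursor(), dst.cursor()

    # Resume from whatever the destination already holds.
    wc.execute("SELECT NVL(MAX(updated_at), DATE '1970-01-01') FROM orders")
    watermark = wc.fetchone()[0]

    while True:
        rc.execute(
            "SELECT id, amount, updated_at FROM orders "
            "WHERE updated_at > :wm ORDER BY updated_at",
            wm=watermark)
        rows = rc.fetchall()
        if rows:
            wc.executemany(
                "INSERT INTO orders (id, amount, updated_at) VALUES (:1, :2, :3)",
                rows)
            dst.commit()
            watermark = rows[-1][2]   # newest timestamp we have transferred
        time.sleep(60)                # poll interval; log-based CDC avoids polling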
@DeepakKumar-yb9mo · 2 years ago
@@nareshvadla434 - Thank you, will check on this.
@ryanshannon6963 · 12 days ago
@@nareshvadla434 That's pretty cool; I may have to check that out later. Thank you, sir! Do you know if you can cycle through all the tables in a source database to dump their data into the destination database? The destination tables are exactly the same in names and data types, just empty of data. There are over 1,000 tables, and I'm really not thrilled about doing them one by one. Thanks again!
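To be concrete, this loop is roughly what I'm hoping to avoid hand-building 1,000 times. It leans on Oracle's real user_tables and user_tab_columns catalog views to drive both sides; the connection details are made up:

    import oracledb

    src = oracledb.connect(user="app", password="***", dsn="source-host/ORCL")
    dst = oracledb.connect(user="app", password="***", dsn="dest-host/ORCL")
    rc, wc = src.cursor(), dst.cursor()

    rc.execute("SELECT table_name FROM user_tables ORDER BY table_name")
    tables = [t for (t,) in rc.fetchall()]

    for table in tables:
        # One column list drives both the SELECT and the INSERT, so the
        # column order always matches between source and destination.
        rc.execute(
            "SELECT column_name FROM user_tab_columns "
            "WHERE table_name = :t ORDER BY column_id",
            t=table)
        cols = [c for (c,) in rc.fetchall()]
        col_list = ", ".join(cols)
        binds = ", ".join(f":{i+1}" for i in range(len(cols)))

        rc.execute(f"SELECT {col_list} FROM {table}")
        while True:
            rows = rc.fetchmany(10_000)  # stream in batches, not one giant fetch
            if not rows:
                break
            wc.executemany(
                f"INSERT INTO {table} ({col_list}) VALUES ({binds})", rows)
            dst.commit()

If PDI can drive something like this per table name, that's exactly what I'm after.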