You mention it is possible to use dbms_parallel_execute to do an ALTER INDEX REBUILD… can you present an example of that?
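A minimal sketch of one way this could be done, assuming a partitioned index: map each chunk to a partition position with CREATE_CHUNKS_BY_SQL, then have each job rebuild its own partition. The index name SALES_IDX, the task name, and the helper procedure below are hypothetical placeholders, not taken from the video.

-- Hypothetical helper: rebuild the partition at a given position.
-- With one partition per chunk, :start_id and :end_id carry the same value.
create or replace procedure rebuild_one_partition(p_start number, p_end number) as
  l_part varchar2(128);
begin
  select partition_name into l_part
  from   user_ind_partitions
  where  index_name = 'SALES_IDX'
  and    partition_position = p_start;
  execute immediate 'alter index SALES_IDX rebuild partition ' || l_part;
end;
/

begin
  dbms_parallel_execute.create_task('REBUILD_SALES_IDX');
  -- One chunk per partition position
  dbms_parallel_execute.create_chunks_by_sql(
    task_name => 'REBUILD_SALES_IDX',
    sql_stmt  => 'select partition_position, partition_position
                  from user_ind_partitions
                  where index_name = ''SALES_IDX''',
    by_rowid  => false);
  dbms_parallel_execute.run_task(
    task_name      => 'REBUILD_SALES_IDX',
    sql_stmt       => 'begin rebuild_one_partition(:start_id, :end_id); end;',
    language_flag  => dbms_sql.native,
    parallel_level => 4);
  -- dbms_parallel_execute.drop_task('REBUILD_SALES_IDX');  -- when finished
end;
/

For a non-partitioned index this approach does not apply, since a rebuild cannot be split by rowid ranges; a single ALTER INDEX ... REBUILD with a PARALLEL clause is the usual route there.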
@laurentiuoprea06 4 years ago
Will this apply if I have a bigfile tablespace?
@SheetalGuptas 4 years ago
Hi, thanks for this session. Is it possible for you to share the script used in this session?
@DatabaseDude 4 years ago
Yes - it's here: github.com/connormcd/misc-scripts/tree/master/office-hours
@lizreen9563 3 years ago
Great site and scripts! I just can't find the one for this video.
@kaleycrum6350 4 years ago
Hi Connor! I don't understand how breaking it down by file helps. We're still doing table access by rowid range, right? Is the objective to ensure that multi-block reads are not interrupted by file breaks?
@DatabaseDude 4 years ago
We guarantee that we won't ever have to scan a range of data that does not apply to this table. You only get multiblock read breaks for the first few smaller extents, but once they hit 1MB there will not be a break. And presumably you're only going to use this for tables of some significant size.
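For context, the extent-driven chunking being discussed can be sketched like this (the owner and table name are placeholders, and Connor's actual script will differ in detail): each extent becomes one rowid range, so a chunk can never span blocks that do not belong to the table.

-- Sketch: one rowid range per extent, built from dba_extents.
select dbms_rowid.rowid_create(1, o.data_object_id, e.relative_fno,
                               e.block_id, 0)                    as start_rowid,
       dbms_rowid.rowid_create(1, o.data_object_id, e.relative_fno,
                               e.block_id + e.blocks - 1, 32767) as end_rowid
from   dba_extents e,
       dba_objects o
where  e.owner        = 'SCOTT'
and    e.segment_name = 'MY_TAB'
and    o.owner        = e.owner
and    o.object_name  = e.segment_name
and    o.object_type  = 'TABLE';

Pairs like these can be fed back into DBMS_PARALLEL_EXECUTE via CREATE_CHUNKS_BY_SQL with by_rowid => true.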
@kaleycrum6350 4 years ago
@@DatabaseDude why would we be scanning data outside the current table?
@berndeckenfels 4 years ago
Your own list of chunks is not better than the parallel chunks; you still have multiple per file. It might only decrease the seeking for a given job, but then it has many more jobs with a less predictable overall size. So I am not sure it's worth it (but the queries are neat; do they translate well to ASM and Exadata?)
@DatabaseDude 4 years ago
The number of jobs is unrelated to the number of chunks; it is governed by the job queue parameters. It is not multiple chunks per file that we are trying to avoid; it is about guaranteeing that we won't ever have to scan a range of data that does not apply to this table.
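That separation is visible in the API (a sketch; the task and table names are placeholders): the chunking call decides how many chunks exist, while parallel_level on RUN_TASK, subject to the instance's job_queue_processes ceiling, decides how many scheduler jobs work through them.

-- Thousands of chunks, but only e.g. 8 concurrent jobs:
begin
  dbms_parallel_execute.run_task(
    task_name      => 'MY_TASK',
    sql_stmt       => 'update my_tab set flag = 1
                       where rowid between :start_id and :end_id',
    language_flag  => dbms_sql.native,
    parallel_level => 8);  -- job count, independent of chunk count
end;
/
-- Instance-wide ceiling on scheduler job processes:
-- alter system set job_queue_processes = 20;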
@berndeckenfels 4 years ago
@@DatabaseDude Ah, I see, you mean DBMS_PARALLEL_EXECUTE does not skip over file extents which are not part of the table. That does look like an important possible improvement.
@berndeckenfels 4 years ago
@@DatabaseDude But it produces multiple tasks per file if they have multiple non-consecutive extents (however, I guess it doesn't really matter if you access a single file in parallel or multiple files, but since you explicitly mentioned that this happens with the standard method, it also happens with yours).
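That per-file behaviour is easy to confirm (a sketch; the owner and segment names are placeholders, and it assumes each extent maps to one chunk): counting extents per data file shows how many separate rowid ranges each file contributes under the extent-based method.

-- How many chunks would each data file contribute for this segment?
select e.relative_fno,
       count(*) as chunks_in_file
from   dba_extents e
where  e.owner        = 'SCOTT'
and    e.segment_name = 'MY_TAB'
group  by e.relative_fno
order  by e.relative_fno;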