Even faster than DBMS_PARALLEL_EXECUTE

  4,196 views

SQL and Database explained!

A day ago

Comments: 13
@praveenkumar-fx5wx 4 years ago
Great lesson, thanks!
@bzezinahapolania9086 2 years ago
You mention it is possible to use dbms_parallel_execute to do an ALTER INDEX REBUILD… can you present an example of that?
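
For reference, a minimal sketch of that idea (not the video's script): dbms_parallel_execute works on numeric or rowid chunk ranges, so one approach is to map each index partition to its partition_position with create_chunks_by_sql and have each chunk issue the rebuild DDL. The task name, index name, and parallel_level below are assumptions for illustration only.

declare
  l_task varchar2(30) := 'rebuild_idx_parts';   -- hypothetical task name
begin
  dbms_parallel_execute.create_task(l_task);

  -- each chunk is one index partition, keyed by its partition_position
  dbms_parallel_execute.create_chunks_by_sql(
    task_name => l_task,
    sql_stmt  => 'select partition_position, partition_position
                    from user_ind_partitions
                   where index_name = ''MY_BIG_IDX''',   -- hypothetical index
    by_rowid  => false);

  -- each chunk looks up its partition and rebuilds it
  dbms_parallel_execute.run_task(
    task_name      => l_task,
    sql_stmt       => q'[declare
                           l_part varchar2(128);
                         begin
                           select partition_name into l_part
                             from user_ind_partitions
                            where index_name = 'MY_BIG_IDX'
                              and partition_position between :start_id and :end_id;
                           execute immediate
                             'alter index my_big_idx rebuild partition ' || l_part;
                         end;]',
    language_flag  => dbms_sql.native,
    parallel_level => 4);

  dbms_parallel_execute.drop_task(l_task);
end;
/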
@laurentiuoprea06 4 years ago
Will this apply if I have a bigfile tablespace?
@SheetalGuptas 4 years ago
Hi, thanks for this session. Is it possible for you to share the script used in this session?
@DatabaseDude 4 years ago
Yes - it's here: github.com/connormcd/misc-scripts/tree/master/office-hours
@lizreen9563 3 years ago
Great site and scripts! I just can't find the one for this video.
@kaleycrum6350 4 years ago
Hi Connor! I don't understand how breaking it down by file helps. We're still doing table access by rowid range, right? Is the objective to ensure that multi-block reads are not interrupted by file breaks?
@DatabaseDude 4 years ago
We guarantee that we won't ever have to scan a range of data that does not belong to this table. You only get multiblock read breaks for the first, smaller extents; once they hit 1MB there will not be a break. And presumably you're only going to use this for tables of some significant size.
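
For context, a rough sketch of the kind of chunk list being discussed (simplified; the actual script is linked earlier in the comments, and the owner/table names here are assumptions): build one rowid range per extent straight from dba_extents, so no chunk can ever cover blocks belonging to another segment. Adjacent extents in the same file can then be merged into larger per-file chunks.

-- one rowid range per extent of the table, so a range never
-- spans blocks that belong to some other segment
select dbms_rowid.rowid_create(1, o.data_object_id, e.relative_fno,
                               e.block_id, 0)                        as lo_rowid,
       dbms_rowid.rowid_create(1, o.data_object_id, e.relative_fno,
                               e.block_id + e.blocks - 1, 32767)     as hi_rowid
from   dba_extents e,
       dba_objects o
where  e.owner        = 'SCOTT'        -- hypothetical owner
and    e.segment_name = 'BIG_TABLE'    -- hypothetical table
and    o.owner        = e.owner
and    o.object_name  = e.segment_name
and    o.object_type  = 'TABLE'
order  by e.relative_fno, e.block_id;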
@kaleycrum6350 4 years ago
@@DatabaseDude why would we be scanning data outside the current table?
@berndeckenfels 4 years ago
Your own list of chunks is not better than the parallel chunks; you still have multiple per file. It might only decrease the seeking for a given job, but then it has many more jobs with less predictable overall size. So I am not sure it's worth it (but the queries are neat - do they translate well to ASM and Exadata?)
@DatabaseDude 4 years ago
The number of jobs is unrelated to the number of chunks - it is governed by the job queue parameters. It is not multiple chunks per file that we are trying to avoid; it is about guaranteeing that we won't ever have to scan a range of data that does not belong to this table.
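
As a side note, a minimal sketch of that point (assuming the custom chunk query above has been stored in a hypothetical my_chunk_list): the chunk list can contain hundreds of ranges, but only parallel_level jobs (further capped by job_queue_processes) run at any one time.

declare
  l_task varchar2(30) := 'copy_big_table';   -- hypothetical task name
begin
  dbms_parallel_execute.create_task(l_task);

  -- feed our own (start_rowid, end_rowid) pairs in as the chunks
  dbms_parallel_execute.create_chunks_by_sql(
    task_name => l_task,
    sql_stmt  => 'select lo_rowid, hi_rowid from my_chunk_list',
    by_rowid  => true);

  -- at most 8 concurrent jobs, no matter how many chunks exist
  dbms_parallel_execute.run_task(
    task_name      => l_task,
    sql_stmt       => 'update big_table set amount = amount * 1.1
                        where rowid between :start_id and :end_id',
    language_flag  => dbms_sql.native,
    parallel_level => 8);

  dbms_parallel_execute.drop_task(l_task);
end;
/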
@berndeckenfels 4 years ago
@@DatabaseDude Ah, I see - you mean DBMS_PARALLEL_EXECUTE does not skip over file extents which are not part of the table. That does look like an important possible improvement.
@berndeckenfels 4 years ago
@@DatabaseDude But it produces multiple chunks per file if the table has multiple non-consecutive extents (however, I guess it doesn't really matter whether you access a single file in parallel or multiple files; since you explicitly mentioned that this happens with the standard method, it also happens with yours).