Asked: November 10, 2002 - pm UTC
Answered by: Tom Kyte - Last updated: October 10, 2016 - am UTC
Category: Developer - Version: 8.1.7
Viewed 100K times!

One of our apps updates a table of several hundred million records. A FOR LOOP approach for the update was calculated to take 53.7 years to complete! I don't want to do it in one stroke, as I may end up with rollback segment issue(s).

This is absolutely a viable approach, and one we have used repeatedly.

We instituted the insert into a dummy table with APPEND and NOLOGGING, and were able to complete the "update" in under 30 minutes. With NOLOGGING, if the system aborts, you simply re-run the 'update' again, as you still have the original data in the main table.

Yeah, of course it'll recompile itself when it is called the next time.

There is no logical column to partition on, and I guess the insert into a new table will take considerable time with 27 million records.

November 12, 2002 - am UTC

Wait 10 days, so that you are deleting 30 million records from a 60 million record table -- then this will be much more efficient. Deleting 3 million records from an indexed table will take considerable time.
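The "insert into a dummy table with APPEND and NOLOGGING" technique mentioned above might be sketched roughly as follows; all table and column names here (big_table, big_table_new, id, some_col) are hypothetical, and the UPPER() call stands in for whatever transformation the "update" applies:

```sql
-- empty copy of the structure only
create table big_table_new nologging
as
select * from big_table where 1 = 0;

-- direct-path, minimally logged insert; the "update" happens on the way in
insert /*+ append */ into big_table_new
select id,
       upper(some_col),      -- the transformed ("updated") column
       other_col
from   big_table;

commit;

-- then drop/rename and rebuild indexes; if it aborts, the original
-- big_table is untouched and the whole thing can simply be re-run
```

Because the rows are written with direct-path inserts above the high-water mark, almost no undo is generated for the table data, which is what makes this so much faster than a conventional mass UPDATE.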
At first, it did not work (job_queue_processes was 0), but after I set it to 12 it started working. Every day we delete around 3 million records from a 30 million row table and rebuild 5 non-partitioned indexes with the 'compute statistics parallel nologging' clause.
In a package I am writing, I do a massive delete operation, then rebuild the indexes, then start the next routine.

November 19, 2002 - pm UTC

You can use user_jobs or dba_jobs, but -- you might just want to put some "logging" into your jobs themselves, so you can monitor their progress and record their times.
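That "logging inside the job" suggestion might look something like the following sketch; the job_log table and the rebuild_with_logging procedure are hypothetical names, not part of the original thread:

```sql
create table job_log
( index_name varchar2(30),
  phase      varchar2(10),
  ts         date );

create or replace procedure rebuild_with_logging( p_index in varchar2 )
as
begin
    insert into job_log values ( p_index, 'START', sysdate );
    commit;
    execute immediate 'alter index ' || p_index || ' rebuild nologging';
    insert into job_log values ( p_index, 'END', sysdate );
    commit;
end;
/
```

The next routine can then poll job_log (or user_jobs) and proceed once every submitted index has an 'END' row.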
What would be the best way to detect the end of the rebuilding, in order to proceed with the next call?

Thanks Tom. In our environment, we have partitioned tables and we use:

ALTER TABLE table_name MODIFY PARTITION part_name REBUILD UNUSABLE LOCAL INDEXES

and this rebuilds all the indexes in the partition in one shot.
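A sketch of that partition-level pattern, with hypothetical table and partition names (sales, p_2002_q4); the load step in the middle is whatever bulk operation you are doing:

```sql
-- mark the partition's local indexes unusable before the big load,
-- so the load does not maintain them row by row ...
alter table sales modify partition p_2002_q4 unusable local indexes;

-- ... direct-path load into the partition here ...

-- ... then rebuild them all in one statement, as described above
alter table sales modify partition p_2002_q4 rebuild unusable local indexes;
```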
If so -- sure, because then we could just drop partitions (fast) instead of deleting the data. We also have an absolutely incredible stored procedure that rebuilds all of our indexes concurrently after the load, using the Oracle job scheduler as the mechanism for allowing separate threads in PL/SQL.
I work with John Bittner, one of the previous reviewers. This process was introduced to our environment by a master tuner and personal friend, Larry Elkins.
The only difference between your code and mine is that I issue just one commit at the end. Here are the numbers I've got: rebuilding the indexes sequentially consistently took 76 sec., while using dbms_job.submit() calls took around 40-42 sec.
I said "around", because the technique I used may not be perfect, though it served the purpose.
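The concurrent-rebuild technique being timed above might be sketched like this (a minimal, hypothetical version -- the table name INV and the rebuild options are illustrative, and job_queue_processes must be greater than zero for the jobs to run):

```sql
declare
    l_job binary_integer;
begin
    for x in ( select index_name
               from   user_indexes
               where  table_name = 'INV' )
    loop
        -- each submitted job rebuilds one index; they run concurrently
        dbms_job.submit
        ( job  => l_job,
          what => 'execute immediate ''alter index ' || x.index_name ||
                  ' rebuild parallel nologging'';' );
    end loop;
    commit;   -- the jobs only become runnable after the commit
end;
/

-- monitor progress while they run:
select job, what, this_date from user_jobs;
```

Issuing a single commit at the end, as described, releases all the jobs to the queue at once rather than one at a time.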
This is what we came up with concerning mass updates:

    INV     50M rows
    INVINS  10M rows (delta inserts and updates)
    INVDEL   7M rows (delta deletes and updates)

There are indexes on INV.KEY.

Execution plan (for deletes and updates):

    0     SELECT STATEMENT Optimizer=CHOOSE
    1  0    FILTER
    2  1      TABLE ACCESS (FULL) OF 'INV'
    3  1      INDEX (RANGE SCAN) OF 'IX_INVDEL01' (NON-UNIQUE)

    alter table INVTMP nologging;

    -- INVINS contains delta inserts and updates
    insert /*+ append */ into INVTMP
    select * from INVINS t1;

    -- INVDEL contains delta deletes and updates
    insert /*+ append */ into INVTMP
    select * from INV t1
    where not exists ( select null
                       from   INVDEL t2
                       where  t2.KEY = t1.KEY );

    alter table INVTMP logging;
    drop table INV;
    rename INVTMP to INV;

    -- rebuild indexes, etc.

(Note the hints must be written /*+ append */ -- without the '+' they are plain comments and the inserts are conventional, not direct-path.) This is the fastest approach we've tested.
Any comments or suggestions are welcome and appreciated.
When done, we swap the partition of original data with the 'dummy' table (the one containing the new values), rebuild the indexes in parallel, and voila!

I.e., to update field2 in a table:

1) First create your dummy hold table:

    create table xyz_HOLD as select * from xyz where rownum < 1;

Hi Tom, you suggested creating a new table, then dropping the original table and renaming the new table to the original name, instead of updating a table with millions of records.
But what happens to dependent objects? Everything will get invalidated. We have a similar situation: we delete around 3 million records from a 30 million row table every day.
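Putting the hold-table recipe and the dependent-object concern together, a hedged end-to-end sketch (xyz and field2 come from the example above; 'new value' and the use of rownum < 1 for an empty copy are illustrative assumptions):

```sql
-- 1) empty copy of the structure
create table xyz_hold as
select * from xyz where rownum < 1;

alter table xyz_hold nologging;

-- 2) direct-path "update": field2 is replaced on the way in
insert /*+ append */ into xyz_hold
select field1, 'new value' /* new field2 */, field3
from   xyz;

commit;

-- 3) swap and rebuild indexes in parallel nologging
drop table xyz;
rename xyz_hold to xyz;
-- (for partitioned data, exchange the partition instead of drop/rename)

-- dependent objects go INVALID after the drop/rename; they recompile
-- themselves on next use, or can be recompiled proactively:
exec dbms_utility.compile_schema( user )
```

As noted in the answer above, the invalidation is mostly harmless: each dependent object simply recompiles itself the next time it is called.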