Bytes IT Community

Array selection and update.

Greetings,

I am working on a project and we need to write a conversion script to
initialize a new field on a table. The number of records on this table
is on the order of millions so routine selection and update takes a
long time.

I am tasked with writing a pl/sql proc that utilizes array processing
to update the column.

Since all of the fields I am going to update are null, does it pay to
do an array-based select prior to updating? I was thinking of
array-selecting and then array-updating 5000 rows at a crack. Or should I
just write a single update statement? Does the former approach
perhaps give us the ability to commit incrementally where the latter
does not?

What I have tried to do is bulk-select the key fields into appropriate
arrays; I then bind those arrays to the update fields and call the
update statement. These steps are performed in a while loop until the
number of records retrieved is less than the max number to
fetch/pass.
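
Roughly, the loop I have looks like this (my_table, new_field, and the
literal value are stand-ins for our real names; batch size of 5000 as
mentioned above):

```sql
DECLARE
   CURSOR c IS
      SELECT rowid FROM my_table WHERE new_field IS NULL;
   TYPE RidTab IS TABLE OF ROWID INDEX BY BINARY_INTEGER;
   rids RidTab;
BEGIN
   OPEN c;
   LOOP
      -- bulk-fetch up to 5000 rowids at a crack
      FETCH c BULK COLLECT INTO rids LIMIT 5000;
      -- bind the whole array and update in one round trip
      FORALL i IN 1..rids.COUNT
         UPDATE my_table
            SET new_field = 'some_value'
          WHERE rowid = rids(i);
      COMMIT;  -- commit per batch
      EXIT WHEN c%NOTFOUND;
   END LOOP;
   CLOSE c;
END;
```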

If you have a small example of what I am trying to do, please
post it.

Thank You
}\/{ark
Jul 19 '05 #1
3 Replies



"Mark A Framness" <fa****@netnet.net> wrote in message
news:81**************************@posting.google.com...
[original question quoted in full; snipped]


Why not
update mytable set myField=WhateverYouWantTheFieldToBe;
commit;
That will be fastest.
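
If the script might be rerun, a predicate keeps it from touching rows
that were already set (this assumes, per the question, that the new
column is null only on uninitialized rows):

```sql
UPDATE mytable
   SET myField = WhateverYouWantTheFieldToBe
 WHERE myField IS NULL;  -- skip rows already initialized
COMMIT;
```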
Jim
Jul 19 '05 #2

Hi,

Please refer to the "Taking Advantage of Bulk Binds" chapter of the
PL/SQL User's Guide and Reference, Release 8.1.x, for more details.
How Do Bulk Binds Improve Performance?
======================================
The assigning of values to PL/SQL variables in SQL statements is
called binding. The binding of an entire collection at once is called
bulk binding. Bulk binds improve performance by minimizing the number
of context switches between the PL/SQL and SQL engines. With bulk
binds, entire collections, not just individual elements, are passed
back and forth. For example, the following DELETE statement is sent to
the SQL engine just once, with an entire nested table:

DECLARE
   TYPE NumList IS TABLE OF NUMBER;
   mgrs NumList := NumList(7566, 7782, ...);  -- manager numbers
BEGIN
   ...
   FORALL i IN mgrs.FIRST..mgrs.LAST
      DELETE FROM emp WHERE mgr = mgrs(i);
END;

In the example below, 5000 part numbers and names are loaded into
index-by tables. Then, all table elements are inserted into a database
table twice. First, they are inserted using a FOR loop, which
completes in 38 seconds. Then, they are bulk-inserted using a FORALL
statement, which completes in only 3 seconds.

SQL> SET SERVEROUTPUT ON
SQL> CREATE TABLE parts (pnum NUMBER(4), pname CHAR(15));

Table created.

SQL> GET test.sql
1 DECLARE
2 TYPE NumTab IS TABLE OF NUMBER(4) INDEX BY BINARY_INTEGER;
3 TYPE NameTab IS TABLE OF CHAR(15) INDEX BY BINARY_INTEGER;
4 pnums NumTab;
5 pnames NameTab;
6 t1 CHAR(5);
7 t2 CHAR(5);
8 t3 CHAR(5);
9 PROCEDURE get_time (t OUT NUMBER) IS
10 BEGIN SELECT TO_CHAR(SYSDATE,'SSSSS') INTO t FROM dual; END;
11 BEGIN
12 FOR j IN 1..5000 LOOP -- load index-by tables
13 pnums(j) := j;
14 pnames(j) := 'Part No. ' || TO_CHAR(j);
15 END LOOP;
16 get_time(t1);
17 FOR i IN 1..5000 LOOP -- use FOR loop
18 INSERT INTO parts VALUES (pnums(i), pnames(i));
19 END LOOP;
20 get_time(t2);
21 FORALL i IN 1..5000 -- use FORALL statement
22 INSERT INTO parts VALUES (pnums(i), pnames(i));
23 get_time(t3);
24 DBMS_OUTPUT.PUT_LINE('Execution Time (secs)');
25 DBMS_OUTPUT.PUT_LINE('---------------------');
26 DBMS_OUTPUT.PUT_LINE('FOR loop: ' || TO_CHAR(t2 - t1));
27 DBMS_OUTPUT.PUT_LINE('FORALL: ' || TO_CHAR(t3 - t2));
28* END;
SQL> /
Execution Time (secs)
---------------------
FOR loop: 38
FORALL: 3

PL/SQL procedure successfully completed.

To bulk-bind input collections, use the FORALL statement. To bulk-bind
output collections, use the BULK COLLECT clause.
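
A minimal BULK COLLECT sketch to go with the FORALL examples above
(this assumes the emp demo table, and fetches the whole result set into
a collection in a single round trip):

```sql
DECLARE
   TYPE NumTab IS TABLE OF NUMBER INDEX BY BINARY_INTEGER;
   empnos NumTab;
BEGIN
   -- one context switch fetches every empno into the collection
   SELECT empno BULK COLLECT INTO empnos FROM emp;
   DBMS_OUTPUT.PUT_LINE('Fetched ' || empnos.COUNT || ' rows');
END;
```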

Cheers!
Dhana

fa****@netnet.net (Mark A Framness) wrote in message news:<81**************************@posting.google.com>...
[original question quoted in full; snipped]

Jul 19 '05 #3

Jan
Adding extra procedural code around a single update statement would
probably not speed up the update.

What you can do is think about a parallel update (if your machine has
more than 2 processors) - either by using Oracle parallelism - e.g.

update /*+ parallel(my_table, 4) */ my_table
set ....

or split the table into n logical parts (by rowid) and schedule n jobs,
each running the update procedure for a particular part of the table.
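
For the hinted update to actually run as parallel DML, the session has
to enable it first, and the transaction must be committed before the
table is queried again (the degree of 4 is just an example):

```sql
ALTER SESSION ENABLE PARALLEL DML;

UPDATE /*+ parallel(my_table, 4) */ my_table
   SET new_field = 'some_value';

COMMIT;  -- required before further access to the modified table
```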
dh**********@db.com (Dhana) wrote in message news:<b6**************************@posting.google.com>...
[Dhana's reply, quoted in full above; snipped]

Jul 19 '05 #4
