Channel: SCN: Message List

Re: Optimization Issue


Consider using the UPDATE dbtab FROM TABLE itab syntax. As far as I understand, it sends the updates to the database in packages instead of one by one, which should reduce the overall number of database round trips.

 

The code will be something like this:

 

DATA:
  it_std_tab TYPE TABLE OF <DB table type>,
  wa_std_tab TYPE <DB table type>,
  l_count    TYPE i VALUE 0.

 

LOOP AT int_tab INTO wa_int.

  wa_std_tab-zzmaterial = ... " Copy the fields
  ...

  APPEND wa_std_tab TO it_std_tab.
  l_count = l_count + 1.

  " Partition the source itab to prevent the updates from getting too huge
  IF l_count = 1000.
    " Chunk is full, write it to the DB
    UPDATE std_tab FROM TABLE it_std_tab.
    CLEAR: it_std_tab, l_count. " Reset the chunk and the counter

    " Committing a transaction with 1 mln changes may cause timeouts itself,
    " so commit in chunks as well
    COMMIT WORK.
  ENDIF.

ENDLOOP.

" Don't forget the last, partially filled chunk
IF it_std_tab IS NOT INITIAL.
  UPDATE std_tab FROM TABLE it_std_tab.
  COMMIT WORK.
ENDIF.

 

Another option to try is the UPDATE ... SET syntax, which eliminates the need for intermediate tables/structures:

 

LOOP AT int_tab INTO wa_int.
  UPDATE std_tab
    SET zzmaterial = wa_int-zzmaterial
        ...
    WHERE ...
ENDLOOP.
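 

Note that this variant still issues one UPDATE per source row, so for a very large int_tab the same chunked-commit idea applies. A sketch (the counter name l_commit_count is mine, not from your program):

DATA l_commit_count TYPE i VALUE 0.

LOOP AT int_tab INTO wa_int.
  UPDATE std_tab
    SET zzmaterial = wa_int-zzmaterial
    WHERE ...
  l_commit_count = l_commit_count + 1.

  " Commit every 1000 rows so no single DB transaction grows too large
  IF l_commit_count = 1000.
    COMMIT WORK.
    CLEAR l_commit_count.
  ENDIF.
ENDLOOP.

COMMIT WORK. " commit the final partial chunk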

 

Important: in both cases you need to make sure that the UPDATE statements sent to the DB are supplied with the STD_TAB primary key (or other index) values. So you need to either include the key fields in it_std_tab (1st option) or provide them explicitly in the WHERE clause (2nd option).
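
For the first option, that means copying the key fields into the work area along with the fields you change. A sketch, assuming MATNR is a key field of STD_TAB (substitute your table's actual key):

  wa_std_tab-matnr      = wa_int-matnr.      " primary key field(s) first
  wa_std_tab-zzmaterial = wa_int-zzmaterial. " then the fields to update
  APPEND wa_std_tab TO it_std_tab.

UPDATE dbtab FROM TABLE itab locates each row by the primary key carried in the itab rows, so rows with empty key fields will simply not match anything.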

