
Wednesday, 29 January 2020

Unpivoting Oracle APEX meta-data

There are APEX dictionary views for most of the data that represents the 'source' of your APEX application - its meta-data.

Note: This post has been updated to reflect me not looking very hard, but I added a performance comparison... just because.

Today I found one place where I really wanted to run a query to find references to a particular piece of data - application substitution strings.

APEX Application Substitution Strings

And yes, I have a CSS rule to right-align those names, to make it easier to read.

I couldn't find any references in the APEX dictionary (correction, see below), so I looked in the Oracle data dictionary to find where it may live.
select * from all_tab_columns 
where column_name like 'SUB%03';
So not only are these stored in a place inaccessible to us mere mortal developers, they are also stored in 20 sets of name/value columns - not rows.

This means if you have a dozen applications, with references in slightly different locations for each application, then string searches might be a pain.

Unpivot to the rescue!
select * from (
  select id app_id, alias, name -- key facts
      -- all the substitution stringz!
      ,substitution_string_01, substitution_value_01
      ,substitution_string_02, substitution_value_02
      ,substitution_string_03, substitution_value_03
      ,substitution_string_04, substitution_value_04
      ,substitution_string_05, substitution_value_05
      ,substitution_string_06, substitution_value_06
      ,substitution_string_07, substitution_value_07
      ,substitution_string_08, substitution_value_08
      ,substitution_string_09, substitution_value_09
      ,substitution_string_10, substitution_value_10
      ,substitution_string_11, substitution_value_11
      ,substitution_string_12, substitution_value_12
      ,substitution_string_13, substitution_value_13
      ,substitution_string_14, substitution_value_14
      ,substitution_string_15, substitution_value_15
      ,substitution_string_16, substitution_value_16
      ,substitution_string_17, substitution_value_17
      ,substitution_string_18, substitution_value_18
      ,substitution_string_19, substitution_value_19
      ,substitution_string_20, substitution_value_20
  from apex_190200.WWV_FLOWS -- direct from underlying view
)
unpivot (
(str, val) -- new columns
for rec in -- denoted by 
  ((substitution_string_01, substitution_value_01) as '01'
  ,(substitution_string_02, substitution_value_02) as '02'
  ,(substitution_string_03, substitution_value_03) as '03'
  ,(substitution_string_04, substitution_value_04) as '04'
  ,(substitution_string_05, substitution_value_05) as '05'
  ,(substitution_string_06, substitution_value_06) as '06'
  ,(substitution_string_07, substitution_value_07) as '07'
  ,(substitution_string_08, substitution_value_08) as '08'
  ,(substitution_string_09, substitution_value_09) as '09'
  ,(substitution_string_10, substitution_value_10) as '10'
  ,(substitution_string_11, substitution_value_11) as '11'
  ,(substitution_string_12, substitution_value_12) as '12'
  ,(substitution_string_13, substitution_value_13) as '13'
  ,(substitution_string_14, substitution_value_14) as '14'
  ,(substitution_string_15, substitution_value_15) as '15'
  ,(substitution_string_16, substitution_value_16) as '16'
  ,(substitution_string_17, substitution_value_17) as '17'
  ,(substitution_string_18, substitution_value_18) as '18'
  ,(substitution_string_19, substitution_value_19) as '19'
  ,(substitution_string_20, substitution_value_20) as '20'
  )
)
order by app_id, str 
This query transposes all the string/value columns into rows, each denoted by the new "REC" column.

Note, this query can only be executed by those with the APEX Administrator Role, and if you want to have it within a view that could be executed by other schemas, then select access on wwv_flows is needed with grant option.
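For reference, granting that access might look like this - a sketch, assuming the APEX 19.2 schema and a hypothetical target schema:
grant select on apex_190200.wwv_flows to my_schema with grant option;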

Columns unpivoted into rows

By placing the query in a view, I can now make queries like this to find any substitution strings in any applications that have date references.
select * from apx_app_sub_strings
where val like 'APP_DATE%';
Pretty neat - well, at least for what I was trying to do.

This query may break in future versions, as it's based on an underlying, undocumented view.
I also think that once upon a time, there were fewer pairs.

Update: As the community quickly pointed out, I missed & forgot about a view already dedicated to such a task. I was too busy looking for a column number, when really I should have used APEX_APPLICATION_SUBSTITUTIONS.

I thought I'd see how 'they' solved the problem, and I was a little surprised to see a bunch of UNION statements.
SELECT ...
           f.substitution_string_18,
           f.substitution_value_18,
           f.substitution_string_19,
           f.substitution_value_19,
           f.substitution_string_20,
           f.substitution_value_20
      from wwv_flow_authorized auth,
           wwv_flows f
     where f.id = auth.application_id )
select workspace,
       workspace_display_name,
       application_id,
       application_name,
       substitution_string_01 as substitution_string,
       substitution_value_01  as substitution_value
  from substitution
where substitution_string_01 is not null
union all
select workspace,
       workspace_display_name,
       application_id,
       application_name,
       substitution_string_02 as substitution_string,
       substitution_value_02  as substitution_value
  from substitution
where substitution_string_02 is not null
union all...
After seeing this I couldn't help but run a brute force comparison, running both solutions x times.
iterations:5000
16.55 secs (.003310 secs per iteration) -- union all
 6.54 secs (.001308 secs per iteration) -- unpivot
I made sure the underlying join was the same, and I'm not all that surprised the unpivot did the job quicker.

Update 2: It appears 20.1 has gone with unpivot.

Thursday, 16 January 2020

Performance on Classic Reports with LOVs

Just an observation made yesterday that I thought was worth everyone's consideration.

I created a classic report on a reference table with a touch over 1000 rows, with pagination set at 150 rows per page.

That’s all I had on the page.

The table had its own set of lookups for a couple of columns, so I assigned two LOVs I had already defined in Shared Components.

Classic Report LOV

Next minute, I had a simple reference page taking 10 seconds to load.


This is information from apex_workspace_activity_log, showing results where no LOV was applied, two LOVs, and just one. I later found the number of rows shown in the pagination set also varied the result.
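For the curious, a sketch of the kind of query behind those figures, with a hypothetical application filter:
select page_id, view_date, elapsed_time
from apex_workspace_activity_log
where application_id = 102 -- hypothetical application
order by view_date desc;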

As I removed one of the LOVs, I quickly realised this was the problem.

I ran the page in debug mode, to see if something crazy was happening as it constructed the query.
When I used debug = YES, it pinned all the effort onto the one line item - but not the query itself.
rows loop: 150 row(s)

Looking again using LEVEL9, I could see that every row took a bit of work, not just the query.
In fact each row had three debug line items referencing my LOV lookup SQL.
begin begin SELECT name display_value, id return_value  
bulk collect into wwv_flow_utilities.g_display,wwv_flow_utilities.g_value 
FROM  my_secondary_ref 
WHERE SYSDATE BETWEEN eff_start AND COALESCE(eff_end, SYSDATE)
ORDER by 1
; end;
end;
Debug chart

Apparently these are only melded into the SQL for IR/IG, not Classic Reports. Context switching kills the page instead - well, at least 149 extra executions of the one statement.

I ran another test so I could check v$sqlarea, and sure enough, there are far more executions of this lookup than necessary - unnecessarily churning the CPU.
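The check itself is simple enough - a sketch of the kind of query I mean:
select executions, parse_calls, cpu_time, elapsed_time
from v$sqlarea
where sql_text like '%my_secondary_ref%';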

v$sqlarea status

I've tended to embed these in my queries anyway, often as some form of scalar subquery.
This is a habit I started as an Oracle Forms developer, since Post-Query lookups made repeated network calls that just slowed the application down.
select sm.*
  ,(select name
    from my_lookup m
    where sm.id = m.id) my_desc
from some_model sm
This makes me really wonder how much of our existing page generation time is spent on this work - even on small classic report regions, it all adds up.
If only there was a way I could find all occurrences of LOVs used in classic reports... wait a minute! I can query the APEX dictionary! ;p
select application_id, page_id, page_name, region_name
  ,(select maximum_rows_to_query 
    from apex_application_page_regions r
    where r.region_id = c.region_id) nbr_rows
  , column_alias, heading 
  , named_list_of_values, inline_list_of_values
  , column_is_hidden, condition_type
  ,build_option, column_comment
from apex_application_page_rpt_cols c
where application_id = 102
and display_as_code = 'TEXT_FROM_LOV'
order by nbr_rows desc
It's a shame such a nifty declarative feature impacts performance this way.

Tuesday, 10 December 2019

Where would we be if we just believe?

As a science aficionado, there are certain phrases that ... catch the eye.

Recently on Twitter there was an interesting thread that continued from Michelle Skamene's post on Top 15 Tuning Tips for APEX. Michelle provided a wonderful follow-up post summarising the outcomes of the thread.

Point 9 suggests we avoid HTML in our queries, and use HTML expressions. This is undeniably good practice, but there was a question regarding how much performance is gained. Patrick Wolf summarises it well (sorting, XSS, context switching).

I've been curious about this for a while myself, and what better way to verify the truth than do some science ;p

I thought about the recent experimentation I did with interpreted code, and recent client work I've done with bind variables, so I thought I'd not only compare timings with a simple test harness I use, but see what v$sqlarea had to say.

When considering what SQL to compare, I decided to use the simple bit of SQL from my AskTOM Office Hours demo app. I had seen how quickly the embedded HTML expanded to accommodate repeated business rules, so why not use something simple? If any difference was to be seen, let's see if it shows up with something basic, just like Juergen suggested.

The first query is just a simple query on scott.emp, with an extra expression to determine if the row is 'special'. This information is used declaratively within APEX, so those calculations are absent from this testing. The complexity hasn't disappeared - it has been transferred, but simplified. That's the reason it's best practice.
cursor c_1 is
select /*+ qb_name(c1) */ empno, ename, job, hiredate
  ,case
   when hiredate < date '2000-01-01'
     then 'special'
   end last_cent
from scott.emp
So, let's just concentrate on the differences in the SQL we can measure.

The second query has the HTML tags embedded, repeated on columns where I substituted the column into the class attribute in the previous example. The code required expands quickly, hence the argument to keep the SQL and presentation layer separate.
select /*+ qb_name(c2) */ empno
  ,'<b>'||ename||'</b>' ename
  ,case when hiredate < date '2000-01-01'
    then '<span style="color:purple;font-weight:bold;">'
      ||'<span title="'||job||'">'||to_char(hiredate,'fmddth Mon YYYY')||'</span>'
      ||'</span>'
   end as hireDate
  ,case when hiredate < date '2000-01-01'
     then '<span style="color:purple;font-weight:bold;">'||job||'</span>'
   end job
from scott.emp
The third query introduces context switching to PL/SQL due to the inclusion of htf.bold and apex_escape.html. This protects the output we'd otherwise have escaped using the relevant property. Or you could mitigate usage by sanitising data on the way in.
cursor c_3 is
select /*+ qb_name(c3) */ empno
  ,htf.bold(apex_escape.html(ename)) ename
  ,case when hiredate < date '2000-01-01'
    then '<span style="color:purple;font-weight:bold;">'
      ||'<span title="'||apex_escape.html(job)||'">'||to_char(hiredate,'fmddth Mon YYYY')||'</span>'
      ||'</span>'
   end as hireDate
  ,case when hiredate < date '2000-01-01'
     then '<span style="color:purple;font-weight:bold;">'||apex_escape.html(job)||'</span>'
   end job
from scott.emp;
If we just compare throughput, there is a clear loser. Context switching between SQL & PL/SQL is just too much, though I would like to think that in recent versions of the database those functions could be enhanced with devices such as the UDF pragma.
iterations:50000
     3.98 secs (.0000796 secs per iteration)
     5.16 secs (.0001032 secs per iteration)
    33.09 secs (.0006618 secs per iteration)
I just run these queries lots of times to measure throughput. Tom Kyte wrote a more sophisticated test harness called runstats, if you want juicier details.
declare
  c_iterations constant pls_integer := 50000;
  l_start      pls_integer;
  l_end        pls_integer;

  cursor c_1 is
    select empno from scott.emp; -- substitute each cursor (c_1 / c_2 / c_3) defined above
begin
  -- and start timing... (dbms_utility.get_time returns hundredths of a second)
  l_start := dbms_utility.get_time;

  for i in 1..c_iterations loop
    for x in c_1 loop
      null; -- fetch every row, discard the result
    end loop;
  end loop;

  l_end := dbms_utility.get_time - l_start;
  dbms_output.put_line( to_char(l_end/100, '99990.00')
                     || ' secs ('||(l_end/c_iterations/100) || ' secs per iteration)' );
end;
/
What about v$sqlarea?
select substr(sql_text, 1, 22) sql, fetches, parse_calls, buffer_gets, sharable_mem
   ,persistent_mem, runtime_mem, user_io_wait_time
   ,plsql_exec_time, cpu_time, elapsed_time, physical_read_bytes 
from v$sqlarea  
where sql_text like 'SELECT%SCOTT.EMP%'
order by 1

v$sqlarea results

These statistics validate the timings, and I think they also validate this as a performance tip that belongs in Michelle's list.
While the performance difference with/without HTML may be marginal in this case, the supplementary benefits make it clearly best practice.

Thursday, 28 November 2019

Interpreted code in APEX

A few years ago I posted a comparison between plugin code left in the source attribute, vs code that has been transferred to a PL/SQL package.

In the interests of good science, and since I wanted to chat about it at next week's Office Hours, I decided to repeat this test.

I had a little difficulty working out how I got the original metrics; I think APEX debugging has changed a little since I ran that test. Instead I considered looking at v$sqlarea to assess performance.

Turns out I quickly found the relevant queries using the following SQL, which uses a case statement to help me identify the difference between rows each time.
select
  case
  when sql_text like 'begin declare  begin wwv_flow_plugin_api.%' then 'API'
  when sql_text like 'begin declare function x return varchar2 is begin return null; %' then 'call dynamic'
  when sql_text like 'begin declare FUNCTION enkitec_sparkline_render%' then 'parse dynamic'
  when module = 'SQL Developer' then ' me'
  end which
  ,executions
  ,loads
  ,parse_calls
  ,disk_reads
  ,buffer_gets
  ,user_io_wait_time
  ,plsql_exec_time
  ,rows_processed
  ,cpu_time
  ,elapsed_time
  ,physical_read_requests
  ,physical_read_bytes
  ,locked_total
  ,pinned_total
  ,sql_text
from v$sqlarea
where sql_text like '%sparkline%'
order by which
When APEX invoked the code using an API call, it looked like

begin declare begin wwv_flow_plugin_api.g_dynamic_action_render_result := apx_plug_sparkline.render (p_dynamic_action => wwv_flow_plugin_api.g_dynamic_action,p_plugin => wwv_flow_plugin_api.g_plugin );end; end;

When APEX needed to parse the entire function, it looked like

begin declare FUNCTION enkitec_sparkline_render ( p_dynamic_action IN APEX_PLUGIN.T_DYNAMIC_ACTION, p_plugin IN APEX_PLUGIN.T_PLUGIN ) RETURN APEX_PLUGIN.T_DYNAMIC_ACTION_RENDER_RESULT IS

To do this test, all I needed to do was paste the PL/SQL back into the source attribute; I didn’t bother changing what was invoked in the callback fields.

After 50 page refreshes each in dev – parsing the code was at least twice as slow, based on CPU time.

I suspect the 'call dynamic' was the builder validating the code.

Basic glimpse

In an environment with more activity, I was able to compare the standard API call with a few hundred that included parsing the code.

Click/tap to embiggen

If I take 141312 CPU time, divide by 301, then multiply by 17778, I get parsing taking around 1.8x the time of using the API.
The same goes for elapsed time, while the PL/SQL execution time remains the same.

Plus there are disk reads, and an obscene number of buffer gets considering the execution ratio.

I tried a second plugin (nested reports), and while the CPU ratio seemed the same, the physical reads were over twice as high.

Remembering this is just placing the code in the box – I haven’t even referenced it.

Too many words and numbers? How about a graph.

If this seems a fair reason to reduce the amount of interpreted code you have…

Twice the work, for nothing.

... then how does this make you feel?

What's a buffer, and why do we want to get it?

It seems by removing the code from the source attribute of the plugin, we measurably reduce the amount of work the database does. Imagine if we reduced our interpreted PL/SQL usage throughout the application?

So I conclude

  1. Use bind variables
  2. Put your code in packages

Wednesday, 30 January 2019

Oracle USER vs sys_context

This post was inspired by the fact I couldn't find many good references for what I thought was 'previously discovered truth'. So what does any good science-nut do? Add their own contribution.

So here are two simple performance suggestions. The second was an added bonus I realised I could demonstrate simply.

1) Stop using USER


I'm using USER far less frequently anyway, since it has no context in Oracle APEX, but it is still a handy default value for created_by columns, and I'm sure some Forms programmers could add life with a small refactor.

Sven explores this in his excellent post regarding triggers in 12c (spoiler: there is typically no need for a trigger).
I thought Tim had a section in an article similar to this, but I couldn't find it again.
And there's a tweet. There's always a tweet.

I executed the following on a development server, and it took 47, 50, 50 seconds respectively.
declare 
  v_result varchar2(100);
begin
  for i in 1..1000000 loop
  v_result := user;
  end loop;
end;
/

50 seconds
Replacing USER with sys_context('userenv','session_user') was an order of magnitude faster, at 2.5, 2.4, and 2.4 seconds.
declare 
  v_result varchar2(100);
begin
  for i in 1..1000000 loop
  v_result := sys_context('userenv','session_user');
  end loop;
end;
/

2.5 seconds
That's because USER executes a query under the hood; sys_context does not, so there's no longer context switching between SQL and PL/SQL.

2) Stop using NVL


Instead, consider COALESCE, or other null-related functions.

So consider the previous test, but NVLing both expressions. At 58 seconds, the time taken seems like the sum of both, plus time to evaluate. Connor has more detail on the difference.
declare
  v_result varchar2(100);
begin
  for i in 1..1000000 loop
  v_result := nvl(sys_context('userenv','session_user'),user);
  end loop;
end;
/

58 seconds
By swapping the NVL with a COALESCE, you utilise a programming concept called 'short-circuit evaluation' - the second expression is only evaluated if the first is null. Results: 2.5, 3.1, 2.5 seconds.
declare
  v_result varchar2(100);
begin
  for i in 1..1000000 loop
  v_result := coalesce(sys_context('userenv','session_user'),user);
  end loop;
end;
/

2.5 seconds
I'm sure there's the odd "it depends", but if you don't take the performance freebies, what are things going to be like when you tackle the nasty queries?

Thursday, 1 November 2018

Lazy Loading Menu Count

I've been using the lazy loading concept demonstrated in Maxime's post quite a lot recently; I'd love to see this as a declarative feature one day.

I also wondered if I could apply this concept to the badge count in the side menu - so a longer running query that populates the count doesn't slow the page load.


Turns out it wasn't that hard, particularly since I already had the jQuery I needed from a previous requirement.

We first need to add a unique class to the link definition, so we can identify the menu item that needs updating.


The label contains a substitution string reference to an application item called LAZY_MENU, which is simply computed to zero on each page load.
Surrounded by square brackets, this turns it into the count badge, as sketched below.
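So the list entry label looks something like this (with a hypothetical entry name):
Tasks [&LAZY_MENU.]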

Add a dynamic action that executes PL/SQL on Page Load - probably on the global page, though this demo only fires on page 15.


The demo populates the item based on a query that counts how many columns are in my schema, plus a random number, so we can see it change each time.
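A sketch of that Page Load PL/SQL, with P15_COUNT nominated as an item to return (the random range is arbitrary):
select count(*) + round(dbms_random.value(1, 50))
into :P15_COUNT
from user_tab_columns;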

A subsequent JavaScript action is where the real action is at
$('#t_TreeNav span.lazy-menu')
  .closest('div.a-TreeView-content')
  .find('span.a-TreeView-badge')
  .text($v('P15_COUNT'));
The selector identifies the menu item based on the lazy-menu class entered in the link definition, then traverses the tree to locate the relevant badge, which is then updated with the page item value.

A more complete solution may also update the LAZY_MENU application item, ready to have reasonably fresh information for the next page load.
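A line like this at the end of the same PL/SQL block would do it:
apex_util.set_session_state('LAZY_MENU', :P15_COUNT);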

Or we could make our queries faster ;p

Friday, 8 December 2017

Friday Fun SQL Lesson - union all

Our office kitchen is unavailable this Friday, so the call was put out for pub lunch.

After a couple of replies I decided to enter my reply, in full nerd mode.
select * from people_coming_to_lunch;

People
--------
Kate
Scott
Karolina

3 rows selected.
And of course one of the other SQL geeks (name redacted) replied to extend the data set.
select * from people_coming_to_lunch
union 
select 'Shanequa'
from dual;
And I couldn't help myself. I had to play the performance card and suggest that UNION ALL would be the appropriate usage, and should be the default you type out. Always. Until you decide otherwise.

That's because sorts are expensive. And a UNION will need sorting to check for duplicates.

That sort of all the rows isn't necessary
And it will sort the data set even if there is a unique key on the data.
create table people_coming_to_lunch (people varchar2(30));
insert into people_coming_to_lunch values ('Scott');
insert into people_coming_to_lunch values ('Kate');
insert into people_coming_to_lunch values ('Karolina');

create unique index lunch_people on people_coming_to_lunch(people);

select * from people_coming_to_lunch
union all
select 'Shanequa' from dual
By using UNION ALL instead of UNION, you're telling the database not to even bother sorting the set to eliminate any potential duplicates, since your advanced human brain knows there will be no duplicates.

With only a few rows, the timing of sheer throughput is barely noticeable.
iterations:1000
     0.30 secs (.0003 secs per iteration)  -- UNION
     0.25 secs (.00025 secs per iteration) -- UNION ALL
 
iterations:10000
     1.72 secs (.000172 secs per iteration)
     1.09 secs (.000109 secs per iteration)
 
iterations:50000
    10.94 secs (.0002188 secs per iteration)
     8.48 secs (.0001696 secs per iteration)
So I turned it up a notch and added about 5000 rows to the table.
insert into people_coming_to_lunch
select table_name from all_tables;
 
5000 rows inserted
Here's the explain plan without the sort.

That's one less chunk of 5000 rows to process
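If you'd like to reproduce the plan yourself, a sketch using dbms_xplan:
explain plan for
select * from people_coming_to_lunch
union all
select 'Shanequa' from dual;

select * from table(dbms_xplan.display);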

Now the differences in performance stand out.
iterations:1000
     6.79 secs (.00679 secs per iteration) -- UNION
     2.85 secs (.00285 secs per iteration) -- UNION ALL
 
iterations:5000
    42.91 secs (.008582 secs per iteration)
    19.89 secs (.003978 secs per iteration)
 
iterations:5000
    31.70 secs (.00634 secs per iteration)
    22.83 secs (.004566 secs per iteration)
 
iterations:5000
    30.75 secs (.00615 secs per iteration)
    16.76 secs (.003352 secs per iteration)
Up to twice as long for the same statement?
No thanks, not when I could just type 4 extra characters to get an easy performance win.

Turns out this topic formed my first technical post. Back in 2009, after almost 10 years of using SQL, that was the first thing I blogged about. How about that.

Monday, 30 January 2017

APEX Survey Results: Addressing Performance

Yep, I'm still doing this. A bunch of questions to come, many worth the visit. Just a few weeks between drinks, so to speak. My annual review is a little late, too. Anyway...

Time for the performance questions in my 2015 survey. A favourite topic of mine, and my boss, Penny Cookson, lives for tuning.

Q9. How do you proactively address performance? (tick all that apply)




Tune SQL - I would be surprised if this wasn't the top result. Plenty of SQL used in APEX applications, and why not tune them?

Limit Interpreted PL/SQL - this is good practice in general, but in the world of APEX, this means moving inline code from application into packages. You can get pretty quick wins doing this with any plugins. Read here for details.

Materialised Views - a database construct that can aggregate complex information at regular intervals, to be queried many times with simpler SQL over fewer rows. I've seen interesting examples that obfuscate layers to external information using pipelined functions. Or you could just use it for your menu.

Care using v() - Particularly important when referencing page items, since this function executes a query on the session state table. While values such as APP_ID come from a persistent package variable, it's still context switching between SQL and PL/SQL, so it would be even better to use a bind variable.
where id = (select v('APP_ID') from dual)
I think any use of v() should be questioned. SQL queries should use bind variables (not substitution strings) and packages should be parameterised.

Page Workflow/Design - why refresh the entire page when a partial refresh would do? A well designed application will limit network traffic and the amount of queries necessary to serve the data. Refreshing regions on demand with dynamic actions is one of the most regular things I do. Declarative page modals in APEX 5.x have also made APEX life a lot more comfortable.

Global Page Modularisation - Performance is also about the developer not repeating tasks over and over, causing future maintenance headaches.

JS/CSS File Management - minifying code can reduce network traffic, as can well designed libraries, CDNs, and declarative options. I understand APEX Front End Boost can help.

Care with jQuery selectors - selectors can be abused just like table indexes. I have some commentary on this here.

Region Caching - possibly underutilised, but the ability to cache regions on the global page in APEX 5.0 opens options, as do some new APIs.

data-attributes - jQuery related, what I meant by this was to offer more information during report generation, which can help interactivity and reduce AJAX activity. Though read consistency issues should be considered. See a basic example here.

Other - a few people suggested a well designed data model. Touche. Even more got stuck into it in the next question on performance return on investment.

The User Interface attributes are also an area worth being familiar with, particularly in regard to application level file management.

I don't need to worry about performance - yeah, right.

Monday, 3 October 2016

Decommissioning triggers in 12c

I've been operating with a 12c environment this year and I can see some standard patterns of mine changing.

One is the use of triggers, or lack thereof. I live in a city with a certain evangelist who does not like triggers, so I was happy to see Sven Weller's "perfect trigger" post. The answer is: there is no trigger.


Well, it's one thing to say 'create all new tables like this', but what about our existing structure? What process should we follow to decommission these triggers?

If you take the weekly DB Design quiz at the PL/SQL Challenge website, you may recognise this example as a quiz from Sept 2016.

Existing Framework

Consider a table with a structure similar to the one I described in a post of mine from 2010
drop table orbiters;
drop sequence orbiter_seq;

create sequence orbiter_seq;

create table orbiters
 (id  number not null
 ,CONSTRAINT orbiter_pk PRIMARY KEY (id)
 ,position    number not null
 ,planet varchar2(15) not null
 ,name   varchar2(30) not null
 ,distance  number not null
 ,created_date date not null
 ,created_by varchar2(50) not null
 );

create or replace TRIGGER orbiters_biur
  BEFORE INSERT OR UPDATE ON orbiters
  FOR EACH ROW
DECLARE
BEGIN
  IF INSERTING THEN
    :NEW.id            := COALESCE(:NEW.id           , orbiter_seq.NEXTVAL);
    :NEW.created_date  := COALESCE(:NEW.created_date , SYSDATE);
    :NEW.created_by    := COALESCE(:NEW.created_by   , apex_application.g_user, USER);
  END IF;
END;
/

insert into orbiters (position,planet,name,distance) values (3,'Earth', 'Luna', 384) ;
insert into orbiters (position,planet,name,distance) values (3,'Earth', 'SpaceX', 400/1000);
insert into orbiters (position,planet,name,distance) values (3,'Earth', 'ISS', 400/1000);

select id, name from orbiters;

ID  NAME
1   Luna
2   SpaceX
3   ISS
It's not rocket surgery...

Audit columns

There are plenty of options around auditing your columns and tracking history, but for the pattern you're likely to see regularly, there is no need for a trigger.
alter table orbiters modify created_date default sysdate ;
alter table orbiters modify created_by default coalesce(
          sys_context('APEX$SESSION','app_user')
         ,regexp_substr(sys_context('userenv','client_identifier'),'^[^:]*')
         ,sys_context('userenv','session_user')
         );
Sven's coalesce to resolve the username when the environment may or may not be APEX is neat.

Default using sequence

Assigning the sequence in PL/SQL has no performance benefit over selecting it from dual. Instead, it needs to either be in the insert statement, or set as a default value, as 12c (finally) allows.
alter table orbiters modify id default orbiter_seq.NEXTVAL;
drop trigger orbiters_biur;
If you get ORA-02262, check your sequence exists and you've typed it properly.

Identity columns

Identity columns are ideal for new tables, but existing tables would require more work.
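For a new table, it's just part of the column definition - a minimal sketch:
create table new_orbiters
 (id   number generated by default on null as identity
 ,name varchar2(30) not null
 );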

Identity columns actually use a sequence under the hood, so while I thought I was on the right track by using this syntax:
alter table orbiters modify id GENERATED BY DEFAULT ON NULL AS IDENTITY start with 10;
It returns the following error.
ORA-30673: column to be modified is not an identity column
A few bloggers mention this fact; someone even posted the question on Reddit, of all places.

But I think the value equation of creating a new column, shifting data, foreign key references etc. is outweighed by the fact that performance is better regardless of whether you use an identity column or a sequence default. Tim demonstrates this here. I discuss resetting sequences in identity columns here.

Definitely remove the trigger

Here is the best part:
drop trigger orbiters_biur;

Conclusion

To decommission triggers and replace them with new 12c features, use these steps:

alter table orbiters modify id default orbiter_seq.NEXTVAL;
alter table orbiters modify created_date default sysdate;
alter table orbiters modify created_by default sys_context('APEX$SESSION','app_user');
drop trigger orbiters_biur;

Happy #db12c!

Related Posts

12 Column Upgrades
Synchronise Identity Column

Wednesday, 25 May 2016

More on CSS selector performance in Oracle APEX

Last month I wrote a post about CSS performance, including some performance test results.

I recently encountered this brilliant post on Medium that describes some best practices for CSS.

While APEX does a lot of this for you, I think it's worth a read by all developers. Even applying basic naming conventions can make code easier to read and understand.

There was also a section on performance that described an issue my previous example didn't explore: 'tree walking'. Each web page could be likened to a giant tree, and your CPU will like you if you can minimise the amount of traversal necessary to locate the component being sought.

So for APEX developers, consider the selector reading right to left, and try to keep it to two steps.

For instance, to identify a button within a classic report, you would define a static region ID for the report, and use the following selector
#p1_region a.t-Button

This would look for anchors with the standard Universal Theme button class, but only if they exist within the classic report region. Concise & effective.

You could use this selector to apply further CSS attributes to the buttons, perhaps via the Inline CSS page attribute.
#p1_region a.t-Button { font-weight : bold; }

Or you could use it as a jQuery selector for an on-click Dynamic Action, responding to the button press. Add the following as link attributes to the column with the link button; it adds an attribute to the anchor with information from a column aliased as SEASON in your SQL.
data-season="#SEASON#"
Note, to render a column link as a button, the link attributes would also contain a class listing like this, to look like the nearby image.
class="t-Button t-Button--warning t-Button--simple t-Button--large t-Button--stretch"

You can refer to this information within a JavaScript action to set a hidden page item with the data from the season column.
$s('P1_SEASON', $(this.triggeringElement).data('season'));

A subsequent PL/SQL action could then submit this value to session state and execute something related on the database, without submitting the page on click of the report row button.
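That PL/SQL action could be as simple as this sketch, with P1_SEASON listed in 'Items to Submit' and a hypothetical package procedure:
my_pkg.process_season(p_season => :P1_SEASON);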

While that has sidetracked from selector performance, it shows how a simple selector can be used within APEX to trigger events. A pattern I use regularly, done with minimum effort.

If you like this post, you may like my book /plug

Thursday, 28 April 2016

On CSS/jQuery Selector Performance

My post describing the use of a simple selector identifying page spinners was originally going to be about performance, but then I learned something I found very interesting.

I likened what I learned to Tom Kyte's essay (Asktom->Resources->Presentations->FalseKnowledge.htm) on Correlation vs Causation. The essence was that things change over time, we can't always trust authorities on a topic, and we must always test in our own environments. This aligns with skepticism in general, and here we are applying it to web development.

The following forum post described the subsequent code
https://community.oracle.com/thread/3908020
.u-Processing
{
  display:none;
}
I was going to suggest simply prefixing the provided class with the relevant HTML tag.
span.u-Processing
{
  display:none;
}
But I discovered a thing or two while drafting the post to explain my reasoning.

A while ago I incorporated much of my jQuery performance knowledge from this article
http://www.artzstudio.com/2009/04/jquery-performance-rules/
It was concise and it made sense. It also harmonised with other information I found on the topic. However, the post is now 7 years old, and its assertions are probably worth re-testing.

Consider the provided solution using .u-Processing. The period prefix means it's looking for any component on the page with the class u-Processing, as per the definition of the spinner.
<span class="u-Processing" ...

If the component had an id, you could use the # prefix to reference the ID, much like using a unique* index. (*In the web world, an ID is not guaranteed unique, but it should be as best practice, particularly to avoid logic errors.)

As I had previously interpreted it, the problem was that it needs to look at every single component on the page. If the selector was prefixed with the tag, this reduces the workload on the browser by filtering the DOM tree to just the span tags.
span.u-Processing
And of those span tags, which have the provided class.

The reasoning seems sound, right? Part of it relates to native JavaScript methods available for locating content. Now if only there was a way to test these assertions...

Well on jsperf.com you can set up such test cases and run them against different browsers. Similar websites are available for SQL (livesql.oracle.com) and JavaScript test cases (jsfiddle.net).

[Note: jsperf.com was alive when I started drafting this post in March 2016, but has been down for a few weeks in April? ]

In regard to browser performance, someone had already prepared an example based on the performance rules described.
https://jsperf.com/jquery-class-vs-tag-qualfied-class-selector/47
And the results on my laptop in 2016 painted a somewhat different picture.

Chrome / Linux / Laptop

And true to Tom's essay, when I tried the same tests across different browsers on a different platform, the results were widespread.


But like anything, it really depends on the page you're running it on. I did find the test case a little contrived, since it was based on a very small portion of HTML, so I attempted the same type of test using HTML extracted from a typical APEX-generated page to see how many entries it's actually sifting through.

http://jsperf.com/apex-tag-vs-id-vs-class


To give you a sense of scale, I ran some jQuery statements in the browser console of an even bigger page, one including a region selector so plenty of regions & reports.

This returns the number of elements on the page you could identify with a jQuery selector. In my case it returned about 3000.
$('*').length

There are only 148 spans.
$('span').length

And only one spinner.
$('.u-Processing-spinner')

Considering the amount of operations/second the browsers are capable of, these numbers may seem small, but everything adds up. It's worth keeping up with better practices to trim performance where ever possible.

So the lesson I read out of this is: if you're developing APEX applications and you have basic jQuery mechanics, or even just use jQuery selectors in your dynamic actions, then perhaps stick with a readable, logical option that you're comfortable with.

Either that or I still have a lot to learn about jQuery performance ;p

If anyone has good, current resource recommendations on the topic, I'd be happy to hear. Particularly if it relates to use in the #orclapex environment.

Thursday, 21 April 2016

Improving PL/SQL performance in APEX

One of the simplest tuning techniques is to encapsulate PL/SQL used in APEX within packages, minimising the size of anonymous blocks. This applies to any PL/SQL within the page, including computations, processes, plugins, dynamic actions, validations, shortcuts and dynamic PL/SQL regions.

This change can make a big impact on the execution time of PL/SQL, as it's processed at compile time instead of being parsed and compiled at runtime as dynamic PL/SQL.

Plug-ins can be wonderful black boxes, and consumers may not care how they work or look under the hood. I have a few other views on the use of plug-ins that I'll share one day, but this post looks at a way you can always improve their performance by shifting the supplied code into a PL/SQL package.

Example of freshly imported plug-in in APEX 5.0

You (re)move the supplied code, defining it as a function in your own package, perhaps apx_plugins.

Then modify the "Render function name" to prefix the call, ie: apx_plugins.render_save_before_exit
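A sketch of such a package spec, using the standard plug-in render signature (the package and function names follow the example above):
create or replace package apx_plugins as
  function render_save_before_exit (
     p_dynamic_action in apex_plugin.t_dynamic_action
    ,p_plugin         in apex_plugin.t_plugin
  ) return apex_plugin.t_dynamic_action_render_result;
end apx_plugins;
/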

I gathered performance data on two plugins using compiled vs dynamic code. The following graph shows the relative difference in rendering time between the two using 11g, where the red dots display how many data points I used.

APEX Plug-in Performance - Dynamic vs Compiled PL/SQL
This particular information was garnered from debug log execution time.
select message, avg(execution_time)
from apex_debug_messages
where message like '%sparkline%'
group by message;

Update: I have used Oracle's instrumentation to get better figures
http://www.grassroots-oracle.com/2019/11/on-interpreted-code-in-oracle-apex.html

The recommendation to move this code can be found in (at least) the help text for plugin PL/SQL, and the documentation.


Moving code to packages will also overcome other limitations, such as how much code fits in the attribute, though I think this boundary should only be reached in extreme circumstances. The Excel2Collection plug-in is an example - the author had to compress the PL/SQL code to make it fit and be packaged as a plug-in.

Encapsulating PL/SQL also provides more opportunity for re-use, reducing the chance of defects and allowing more granular version control.

I would like to think Oracle Forms developers have been doing this for years, in part to help minimise network traffic.

If I'm not mistaken, the same concepts can be applied to complex SQL, moving it into database views.

Daniel Hochleitner has also blogged on this topic.

Friday, 5 December 2014

Boosting APEX menu SQL performance

If you're using dynamic SQL to source your menus and you're utilising the apex_application_pages dictionary view to build your list of menu options - I suggest you consider using Materialized Views.

We have a reporting application that does just this; and the query also links to an authorisation table that decides what types of users see which reports.

You can create a snapshot (materialized view) using something as simple as
create materialized view apx_application_pages
as select * from apex_application_pages;
Then you can add an index on application_id and page_id, then you have the perfect substitute for your menu SQL.
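For example:
create index apx_application_pages_ix
on apx_application_pages (application_id, page_id);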

In our development instance, containing all sorts of experimental applications, apex_application_pages returns about 1500 rows - but it's a relatively complex view. Using the snapshot instead of the supplied view makes the following difference in throughput when running our query 50 times (on dev):
    33.93 secs (.6786 secs per iteration)
     0.17 secs (.0034 secs per iteration)
Just don't forget to refresh it when your application grows.
exec dbms_mview.refresh('apx_application_pages');

A simple concept, but it can make a very positive difference when done right. It's also applicable to a number of other related scenarios.

Monday, 17 March 2014

APEX 5 first peek - Performance

APEX 5 EA has shown some interesting improvements in regard to performance.

Icons

If you haven't noticed the swelling trend of flat design in the past few years, you haven't been using the internet or held a smart device. Here are three good articles about this modernism, and like me you may learn the word skeuomorphic.

How does this affect APEX? Well of course all the icons have changed again.
How does this affect performance? The new icons are base64 encoded strings.

Where previously they were rendered like this:
<img src="/i/apex/builder/menu-pkg-128.png" width="128" height="128" alt="">

Now a small spacer image is used, then dressed up with a class with the image definition in the CSS file.
<img src="/i/f_spacer.gif" width="128" height="128" alt="" class="gi-Icon-pkg">


APEX5 rendering of CSS icon
This improves performance of your page. I find these days performance increases come not just from improving your SQL & PL/SQL standards, but from CSS & jQuery considerations for the browser as well.

Exports

The feature list described the following in regard to "Optimised application export format"
The APEX export file format has been optimized to make it easier to interrogate with diff/merge tools, to consume less space, and to make it faster to import.

So I thought I'd check it out:
APEX 4.2 export
APEX 5 export
As you may notice from the samples - extra spaces have been trimmed and unnecessary lines have been removed. A smaller export size might be beneficial for some of the applications I've worked on.
A quick import/export of a 4.2 application shows the new export is 4/5 the original size - rather respectable.
4.2: 3,427,727 bytes
5.0: 2,730,066 bytes

It reminds me of much happiness when I discovered the TRIMSPOOL option in SQL*Plus.

A new export option, "Export with Original IDs", is available for those wanting, for instance, to use diff tools.

APEX variables

I've previously posted about the dangers of using certain substitution options in your SQL, namely v('ITEM') and &SESSION.

The APEX team have now introduced some sys_context() options - I've only discovered this so far thanks to Chris Neumueller commenting on Jeff Kemp's post on the same issue.

This query returned successfully in the SQL workshop:
select sys_context('APEX$SESSION','app_session') app_session from dual

Recently I've found using context variables quite useful for improving performance and the parameterisation of views - spurred on by Jeff's great post on the topic. And we can both thank Connor McDonald for the idea.
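A sketch of the idea - a view that filters itself using session context, so the calling SQL needs no binds (table and column names hypothetical):
create or replace view my_session_tasks as
select t.*
from tasks t
where t.app_user = sys_context('APEX$SESSION', 'app_user');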

Supplementary files

This topic requires a post on its own - and a bit more investigation, but maintenance and performance improvements have been gained.

--
No doubt there are some other performance nuggets to be found in the new release, not counting the productivity improvements for developers in the builder itself. Let me know if you find any!

Monday, 27 May 2013

Performance of APEX conditions

I often think about performance of conditional expressions. I'm not just talking about the expense of whatever the test is, but the difference between the condition types used.

I've heard about it from a few sources, and Tony Andrews blogged about it recently, but I've never really had a chance to quantify it. Roel Hartman has also blogged about this after reading my tweet. He shows some figures, but it's difficult to rinse, repeat & measure like a PL/SQL process.

For example, I've always encouraged this type of expression for consistency (which runs dynamically), particularly with new developers learning the differences between item substitution syntaxes.
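That is, a condition of type 'PL/SQL Expression', something like this (hypothetical item):
:P1_SHOW_REGION = 'Y'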

This performs better (declaratively):
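That is, the declarative condition type 'Value of Item / Column in Expression 1 = Expression 2', with:
Expression 1: P1_SHOW_REGION
Expression 2: Y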

I don't see this second debug statement when using the declarative method.
The benefit is minor, but if multiplied by 50 separate events, that’s 1.5 seconds extra in rendering time.

Use the low-code options when you can, your database will appreciate it. You could even nest regions to get around multiple conditions.

Wednesday, 14 March 2012

A sign to refactor SQL

Note to fellow programmer: just wondering why the yearn for multiple subqueries when it seems a simple group by would suffice?

If you see multiple instances of the one table in a query, why not see if it can be re-factored?
Admittedly the query was originally querying separate tables which got consolidated into a view, so the query pedigree was sharpened already - if that's the right term to use.

SELECT DISTINCT c.my_id
      ,c.my_name
      ,TO_CHAR(c.my_date, 'DD/MM/YYYY HH:MI AM') my_date
      ,(SELECT SUM(cl.my_total)
        FROM my_view cl
        WHERE cl.my_id = c.my_id) my_value
      ,(SELECT COUNT(1)
        FROM my_view cl
        WHERE cl.my_id = c.my_id
        AND   cl.special_nbr_field > 0) my_cnt
FROM   my_view c

vs

SELECT my_id
      ,my_name
      ,TO_CHAR(my_date, 'DD/MM/YYYY HH:MI AM') my_date
      ,SUM(my_total) my_value
      ,SUM(SIGN(special_nbr_field)) item_cnt
from my_view
group by my_id, my_date, my_name

Don't feel negative - I added an elegant use of SIGN


I fear my puns are getting worse...

Wednesday, 8 February 2012

APEX variables in SQL

When I was first learning Oracle Application Express, I found one of the trickiest things was deciphering when to use which substitution string. Martin Giffy D'Souza succinctly describes variable reference options here.

Oracle supplies a number of built-in substitution strings. The APEX documentation provides a number of examples here, for example generating links and referring to the session number:
(from the Oracle Documentation)
Trouble is, each syntax has its place, and if you use the wrong one you could impact the performance of your application - as I described here.

9 times out of 10, I reckon you should be using the bind variable syntax of :SESSION

What I find when I visit clients is that a number of developers - typically those self-taught - use the &SESSION. syntax in queries, which is bad, and this goes for any variable you want to reference.

The reason for this is that you're essentially flooding the shared SQL area in the database with similarly parsed SQL statements. This is bad because Oracle sweats when it has to do a hard parse on your queries, as opposed to recognising a query you've executed before and running it again with different bind variables - that's soft parsing, which Oracle can do blindfolded with its little toe.

To elaborate, I created two basic report pages with slightly different SQL statements.
select org_id, name -- Good query
  ,'f?p='||:app_id||':1:'||:app_session lnk
from organisations;

select org_id, name -- BAD query
  ,'f?p=&APP_ID.:1:&SESSION.' lnk
from organisations;
It's a subtle difference, but the highlighted line 6 is where the curry will burn.

I enabled tracing on my application by adding a parameter to the end of my URL
http://localhost:8080/apex40/f?p=105:7:1368517209058184::NO&p_trace=YES
(more information on tracing your Apex application can be found in the documentation)
I opened both pages a number of times, logging out a few times in the process to generate new session numbers.

Then I located my trace file, ran tkprof over it, opened up the output and searched for "organisations". Forgetting all the other information for the moment, there was one instance of this:
select org_id, name
  ,'f?p='||:app_id||':1:'||:app_session lnk
from organisations
And a number of instances that all looked very similar
select org_id, name
  ,'f?p=4000:1:1223945495716883' lnk
from organisations

select org_id, name
  ,'f?p=105:1:3914512356591996' lnk
from organisations

select org_id, name
  ,'f?p=105:1:4361599949844983' lnk
from organisations

select org_id, name
  ,'f?p=105:1:8029570771757325' lnk
from organisations
...
See a concerning trend?

A quick peek in v$sqlarea confirmed the same issue - although I would like to ask the Oracle APEX team (or someone who knows more than me in these matters) why the parse call count is above the execution count for my "good SQL" - it doesn't seem right to me.

Looking at it a second time, I notice the fourth result is from the application builder, so I would guess the extra parses come from me defining the page (5 to update the report... really?!)

One would need to obtain finer measurements to determine the hard vs soft parse count difference.

select sql_text,executions, parse_calls
from v$sqlarea 
where sql_text like '%--%from organisations%';

At the end of the day, please keep issues like this in mind when writing your queries.

Wednesday, 11 January 2012

APEX performance issues with v()

One of the checks in the APEX Advisor is to determine if any instances of the v() function exist within your SQL.

This confirms one of the first APEX best practices I remember hearing, that you should use bind variables instead of v() within your SQL.
It makes sense, since v() is essentially making an interrogation on the session state table (wwv_flow_data).

Patrick Wolf talks more about performance with determinism here. Martin Giffy D'Souza summaries APEX variables nicely here.

In the comment thread Patrick also suggests the wonderful scalar subquery solution that works on all db versions
WHERE id = (SELECT v('MY_ITEM') FROM DUAL)

The trouble is, I think it's a difficult situation to test - how would you go about determining how many times Oracle decides to invoke the v() function, and for which parameters?

I came across this case on a 10.2.0.4 db with APEX 3.1.2. I've added Groucho glasses to the query to protect the innocent/guilty.
It's also somewhat simplified - there were half a dozen instances of v(), but I've highlighted what turned out to be the offending line.

select * from a_6000_row_table p
where (  ...
or exists
  (select null
   from a_user_access_table aa
   where (aa.org_unit in (SELECT * FROM TABLE (InList_fn(a_pkg.genInList(p.year, p.org_unit))))
       or aa.org_unit in (SELECT * FROM TABLE (InList_fn(a_pkg.genDiffInList(p.year, p.id)))))
  and aa.inactive is null
  and aa.staff_id = v('LOCAL_USER_ID')
  and (  (aa.access_type in ('ABC','ABCXYZ'))
   or (   aa.access_type in ('DEF')
      and p.latest_status = 'YIPEE'))
  )

As it turns out, line 9 wasn't alone in causing the query to take at least 5 seconds to return rows 1-10 of 6000 in an interactive report.
The first part of the where clause - where it constructs a comma delimited list, converts it into a nested table, then interprets it as a table - also lends a hand in slowing down the query.

These function calls could probably be replaced with some form of WHERE EXISTS, but I think the original author had a steel plated encapsulation hat.

When first assessing this query, I thought the problem related to a stragg function in the column list which used a FOR-SELECT-LOOP, but then I spotted the obvious.
So I replaced all the v() references with bind variables, and all of a sudden the 5+ second query went sub one second.
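In this case, that meant changes like the following:
-- before
and aa.staff_id = v('LOCAL_USER_ID')
-- after
and aa.staff_id = :LOCAL_USER_ID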

I felt a little social, so I tweeted my little win, and Trent asked if I had more details. I was curious to isolate the exact improvement, as when I'd briefly played with the performance of v() in simple queries I came away unconvinced of any issues, but happy to play it safe.

Some tests by themselves
  1. Removing the IN list functions got the query to around 0.7s
  2. Changing v() to a bind variable was around 1s
  3. Changing v() to scalar subquery also gave a 1s query.
  4. Setting entire query to use bind variables got down to about 0.5s
For those tuning experts out there who know much more about the Oracle mechanics than I do, I'd be curious to know if you can shed some light on how/why in particular these improvements came about.
For instance, what do you suppose the relationship between the IN list functions and the use of v() is?

For reference - I squeezed in a function call at line 10 that counted how many times it was called. It was always 6000 times, regardless of bind variable usage until I took out the call to the IN list, then it was 9 records. I didn't isolate the significance of that second number.

I remember when learning APEX it was difficult to determine when to use what syntax to refer to values in session state, so two lessons affirmed here today:
  1. Bind variables work! Limit the use of v() function calls to assignments and conditions in PL/SQL packages
  2. Scalar subqueries are just as awesome.
And I'll just add two more related points
  1. Never (never say never) use &ITEM. syntax in queries - I've got a post in the works on that issue.
  2. Don't forget about the nv() function, allowing you to compare the same datatype if you're dealing with numbers. You need to make your own dv function...
Scott