Tuesday 22 December 2009

Apex 4.0 for Forms Developers

Less than a week ago I woke up to a flurry in the Apex folder of my RSS feed, and I knew something Apex-related had been announced!

After reading Dimitri's comments regarding Apex 4.0's Tree View vs Component View, I had to learn more about this new view as it immediately made me think of all the Forms programmers out there learning Apex.

In addition to Oracle's Apex page dedicated to Forms developers, their Oracle Development Tools Statement of Direction states that instead of creating migration tools, they wish to create environments that expose concepts similar to Forms and Reports, giving Forms and Reports developers the opportunity to become productive in a new, but familiar, environment.

So as soon as I saw Apex 4.0's tree view, it made me think of the Oracle Forms object navigator.


Compared with the Forms object navigator, today's technology gives the Apex team the ability to create a more dynamic environment. The browser environment, however, also brings its drawbacks.
Expanding and collapsing in Forms, while a minor feature, did create time-saving opportunities.

As Dimitri noted, the tree view also limits the information you can see at once, such as conditions. I can foresee that having to locate these via tool-tips may become frustrating.
On the plus side, the quick drag-and-drop feature will be handy, and the right mouse click is context-sensitive.


I do agree with Dimitri's comment that the tree view provides insight into the overall page process. This can assist Forms developers and those new to the Apex environment, but let's not forget the Page Events view.

Prior to Apex 4.0, this was one click away - however it's now three clicks and two pages away.
Utilities -> Additional Utilities -> Page Events

This Page Utilities option offers some new layouts that may turn out to be useful, plus a location to store links for some existing features.

The grid edit of pages is a nice way to modify those attributes and navigate to the relevant pages - but it does take time to reach. The page attributes also provide a potentially useful summary detailing counts of various page components.

There are also subtle changes around the place. One I do like: the runtime Developer Toolbar is now "frozen" to the screen like a semi-transparent frame.



After a quick browse through the Application Migrations section, there are no obvious changes to the Forms to Apex conversion utility - including fixes for some issues I uncovered while writing my presentation.

From an overall perspective, I think more screen real estate is used unnecessarily - particularly by the main application page. For those of us using laptops, screen height can be an issue. That being said, I do look forward to seeing how useful the various new dashboards can be.

As for the tree view, I'm undecided where I may spend most of my time. While it illustrates the page processing very well, I think those familiar with Apex intricacies may stick with the existing view, where more information is shown without interaction and double-clicking is not necessary.

This is all without mentioning the extra coding grunt being provided out of the box. Keep a close eye on the Apex blogs for more information on these.

Friday 18 December 2009

11g Documentation trick

One handy tip I learned at the conference wasn't at one of the seminars, but whilst having an amber ale with my esteemed colleague Connor McDonald.

Neither of us seems to be a fan of the expand/collapse feature in the 11g documentation, nor the navigation bar. However, it was Connor who took the time to find a simple edit to restore the look and feel to pre-11g levels.

Wherever your local version of the documentation is stored, simply rename this file:
.\dcommon\js\doccd.js

And instead of the documentation looking like this on entry, requiring more clicks to do anything of use:


It will look like this, all ready to search for your keyword of interest:


The main page will retain its little search area, although if you use that, it will search online - so you might as well utilise something a little more powerful and handy like Ubiquity.


I don't know what the folks at Oracle Documentation HQ think of this, but at least it's not tinkering unlawfully with the database.

Wednesday 16 December 2009

jQuery Datepicker

Recent suggestions indicate that jQuery will be included among the many new features of Apex 4.0, so why not start having a play with it now?

Take the example of the jQuery datepicker. The original date picker that comes with Apex reminds me of web applications from yesteryear, and it's welcome to see a feature that feels integrated with the web page.



The documentation for these features is thorough and at first glance can seem a little overwhelming, so why not illustrate an example specifically for Apex? There are a few sets of instructions out there in the Apex blogging world, but this is what I found useful when I wanted to experiment with the features, plus a few thoughts of my own.

1) Download the bundle of components from jqueryui.com/download
Here you can select the theme you're interested in - perhaps match it up with the theme you're using for your Apex application. You can also select which components will be included. If you want to keep your application light, deselect those you do not need; alternatively, you can prepare multiple copies and attach the relevant ones to the relevant pages in your application - you have a few options for this, which I'll mention shortly.

The download also includes an index.html in the development-bundle folder, allowing you to demonstrate with a local copy of the scripts.

2) Make the files visible to your workspace.
The following files are a must for the components to work:
jquery-1.3.2.min.js -- the core jQuery functionality; all components need this script
jquery-ui-1.7.2.custom.min.js -- includes whichever components you checked in the build stage on the jQuery website
jquery-ui-1.7.2.custom.css -- this, of course, represents the theme you selected. Without it things will look a little strange.

If you're working with the embedded PL/SQL gateway, you will need to upload the files as Static Files or Images in Shared Components. By the looks of the CSS, jQuery doesn't seem built to adapt easily to this Apex setup, so you may have difficulties incorporating the images. If you wish to use jQuery, I'd suggest using Apache.

If you're working with Apache, the files will need to reside somewhere in your /i/ folder. Note the images will need to be in a folder called ./images, relative to the location of the CSS file. For example:
./apex/images/css/jquery-ui-1.7.2.custom.css
./apex/images/css/images/ui-icons_ef8c08_256x240.png

3) Include the scripts as part of your page
When referring to file locations, Apache installations will use #IMAGE_PREFIX#, and the embedded gateway #WORKSPACE_IMAGES#.
Further to this, it would be better practice to store your file location in a computed application item that returns the relevant path depending on your environment, for instance:
<script src="&F_FILE_LOCATION.jquery-1.3.2.min.js" type="text/javascript"></script>
<script src="&F_FILE_LOCATION.jquery-ui-1.7.2.custom.min.js" type="text/javascript"></script>
<link type="text/css" rel="stylesheet" href="&F_FILE_LOCATION.jquery-ui-1.7.2.custom.css" />
Some people choose to place this within the template; others like to include it on page zero, where it may be conditionally included on various pages. This condition could be set so the jQuery scripts are only included on pages that utilise them. If you want to be really specific, you can include different scripts representing different components. You don't need to use the custom script you generated - you could refer to the specific .js files as required. Check out the documentation for further tips.
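For instance, if the script references live in a page-zero region, a PL/SQL Expression condition along these lines (page numbers hypothetical) would load jQuery only on the pages that need it:
:APP_PAGE_ID IN (1, 10, 15)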

4) Invoke the JQuery code
Within the page that will utilise the jQuery feature, you need to invoke the relevant function during page load. Again, there are a few options here:
a) Include the following script in the source of the page
<script type="text/javascript">
  $(document).ready(function(){
    $(".datepicker").datepicker({
      showOn: 'button',
      buttonImage: "#IMAGE_PREFIX#asfdcldr.gif",
      buttonImageOnly: true,
      dateFormat: 'dd-m-yy'
    });
  });
</script>
Edit - modified dateFormat to match documentation.

And apply to the item's HTML Form Element Attributes property:
class="datepicker"

b) Include the same code, but within the document-ready function add a reference to your selected items:
$('#P1_MY_DATE').datepicker();

c) Or apply the date picker to all items in the application
<script type="text/javascript">
  $(function(){
    // Convert all native Oracle Apex datepickers to jQuery ones:
    // remove the native picker icon, then attach the jQuery widget.
    $("td.datepicker")
      .next("td")
      .remove()
      .end()
      .children("input[type!=hidden]")
      .datepicker(
      { dateFormat : 'dd-M-yy'
      , showOn : 'button'
      , buttonImage: "#IMAGE_PREFIX#asfdcldr.gif"
      , buttonImageOnly : true
      });
  });
</script>

And voila! You'll have applied a jQuery component to your application. Why not start playing with other widgets?

Stay tuned for the implementation process come Apex 4.0.

Monday 14 December 2009

How Developers should ask for help

Back in October ZDNet Australia posted a brief video interview with Tom Kyte describing How developers should ask for help (linked by Eddie Awad). I thought I'd mention it now since I was talking to a few colleagues at the recent AUSOUG Conference and they didn't recall seeing the video.

When I was first learning Oracle, I spent about 20 minutes every morning reading entries on AskTom and learned a lot! You also get a feel for someone's personality when you continuously read their responses to questions posted by all varieties of people. Frankly, Tom's a witty fellow. I especially like his oft-used analogy of comparing apples to toaster ovens.

I also empathise with some of his pain in responding to questions that simply don't provide enough information. I must say I particularly agree with Dan Wilson's reply in the comments section. Regardless of the forum to which you may be posting a question, it's so important to provide context, a frame of reference, the ins and outs, and so on.

Put yourself in Tom's shoes - what information would you like to receive from someone asking you a question about a technological problem? Do you want to hear "Why doesn't this query work?" or would you like to be provided with a really simple test case that provides you the ability to tinker straight away?

I also liked Tom's explanation of why he doesn't respond with RTFM. Not only is it important for everyone to learn by doing, it's also important for people to learn how to effectively find the correct documentation themselves - which should be the first step to nutting out a problem before you even consider approaching anyone. I'm passionate about documentation, and technical queries on Oracle functionality spurred me into writing my presentation on documentation usage.

Another analogy: when my eldest child tries to tell me about something in a book he's reading, unless he puts it in context for me, I'm not really going to understand why it's funny. Or perhaps when I ask my father how to do some handy work around the house: I might ask him specifically how to do something, but if I don't give him the bigger picture, he may not know to explain an even easier solution using different tools I wasn't even aware of.

This concept holds true in so many fields and situations.

Friday 11 December 2009

Interesting topics while I was gone

Not surprisingly, the world still ticks - even when I'm on holidays.

I've finally caught up with my RSS feeds - the work related ones anyway - and I've compiled a brief list of those that caught my eye in particular. When trying to catch up on three weeks of feeds, the best reads tend to stand out.

Comparing Oracle & MySQL - this may become more relevant as time goes on, and will certainly be useful every time my MySQL friend asks me Oracle questions.
Oracle XE 11g - while I'm not waiting for it, that would be a useful release, particularly for those who want to tinker and learn from home.
asktom.oracle.com/tkyte - I spotted the loss not long after Tom jumped domains, so it's good to see all his reference scripts made available again.
Multibyte characters - I encountered this issue recently, but not because I was coding for another language. I was importing data files that contained characters that I wasn't expecting, so I had to consider width as Alex described even though I was substringing a particular length.
Unwrapping PL/SQL - It's amazing how many people are curious about this.
Apex & Web 2.0 - There are some great demo applications linked through here.

And as I write this my replacement laptop power adapter arrived - let's hope it works. No more excuses for me!

Apex Application Alterations

Here is a tidbit I noticed that may be useful to someone.

Sometimes there comes a need to compare two applications to determine differences. Typically, this would be comparing against an older version of the same application to see what has changed (and trying to determine why!)

Doing this via a text comparison of the export is tough. A simple compare of text files in TextPad gave me 5000 lines of differences for a copy of an application where I made one change! This is, of course, due to the differences in IDs for all the objects. Perhaps an expert in text comparison tools may suggest an alternative there.

The Application Express builder provides a comparison tool for you. Once you have one of the applications selected, click on the Application Reports link in the Task List.

One set of application reports is "Cross Application"


There are two comparison reports, as opposed to the attribute listings:


Now, this is an output of the differences between my two applications; one was a copy where I made an attribute change to an item in a form.

The Show Details checkbox option does little to illustrate the differences Apex found, but the report can be a good starting point for debugging and locating a problem you may have.

In this case the Application Attribute difference is obvious - the application alias would have changed.
Page 7 relates to my change, so it seems to be highlighting some difference in the page itself, in addition to the item I modified. If the algorithm is using some form of checksum to make the comparison, this is not unexpected.
I didn't look into page 35, where there is a blob displayed, nor the page templates.

If this is a task you perform semi-regularly, there may be anomalies/expected behaviour you come to expect from the comparison, and you'll learn what is safe to ignore.

Monday 30 November 2009

Review - Oracle Application Express Forms Converter



Recently in the blogosphere - not just Oracle-related blogs - I've noticed an increasing number of book reviews. Normally I've found these reviews are by authorities, as part of the regular marketing campaign. As with other industries, new forms of marketing evolve - in this case, one that offers professionals in the field an opportunity to critique new texts.

I was recently provided the opportunity to review a book discussing how to use Oracle's tool to convert Forms applications to Application Express: Oracle Application Express Forms Converter by Douwe Pieter van den Bos. This book, published by Packt Publishing, is the first of its kind on this subject matter, and you may have seen the cover if you regularly visit Oracle-based blogs - particularly those specialising in Apex. This opportunity came about in part because I'm presenting on this very topic at this year's AUSOUG conference series.

It's a tough topic. Oracle Forms has a large presence throughout the world, but it's ageing. Support is consistently being extended; however, this support is predominantly there to keep it stable. Oracle is suggesting customers move towards newer environments such as ADF and Application Express.

Converters between technologies have been around for a long time, and Oracle offers an Oracle Forms to Application Express conversion tool. There are many facets to a conversion between technologies, and the industry is typically aware that a tool will rarely be a silver bullet. There are such fundamental differences between the technologies that human interaction and guidance are always required.

The documentation accompanying the tool is regrettably lacking. While it describes the conversion process using the product, it doesn't go into detail about what else needs to be done - so the situation is ripe for a book offering.

When reading books - fiction, non-fiction, and that subset of the latter, technical manuals - I always seem to be lucky in selecting good ones. I rarely find myself reading a bad book, which is lucky since, as with movies, I don't want to stop regardless of how bad it is - curiosity killed that cat.

Unfortunately, I feel compelled to give this book a less than enthusiastic review. In the author's defence, that's partially due to what I feel is not only a disappointing tool, but a difficult concept to actualise as software.

The book starts very well, stating exactly what it offers and leaving the reader with no illusions about what the conversion itself provides. David Peake's foreword offers insight into how the tool came to be, and the author's preface reiterates how much work will be required to convert an application between the two disparate technologies.

Each chapter follows the presenter's copybook: say what you're going to say, say it, say what you've said. I like this; personally, it reconfirms what you've learnt, and it makes things a little easier to find in future. It reads very much like a blog or OTN article, which works well for me. I remember some texts from university being very dry because they just stated facts; when a book almost forms a conversation between the author and reader, concepts are easier to learn and your attention is kept.

The preface states that the book is "written for developers and analysts who know both Forms and Apex." I think this is the perfect target audience, as the best value on this subject comes from directing attention to developers who have a decent understanding of both products. I'd suggest readers whose knowledge of Application Express is low first read an Application Express specific book such as Pro Oracle Application Express by John Scott and Scott Spendolini (so many Scotts!). As a reader of this book I really want to know the ins and outs of conversion - not just the tool itself, but all the considerations that need to be made, since the tool will only handle a small percentage of the conversion.

For this topic I feel a fair and safe assumption would be, for example, a decent understanding of how to create an LOV in Apex. The author, however, spends time discussing some of these Application Express basics, which detracts from the main topic of conversion. During my presentation I stated the assumption that the audience knew how to enhance Apex - this allowed me to concentrate on the issues that would be faced during the conversion and which components needed to be targeted.

On a positive note, it does presume you've got decent knowledge of the sample application being converted, which is fine if, again, you consider that you want to understand the process - not a description of the sample application, which is really just a catalyst. There is always a concern that the sample application is tweaked and diluted to show the strengths of the tool; in a basic sample application I created during my experimentation, I found many more issues that needed to be considered.

After a strong start, where we heard how difficult a conversion process can be, I felt the chapters finished too quickly. For instance, Chapter 5 talked about "Getting your Logic Right" - where you can edit the queries used. I found this to be the buggiest part of the process, yet no mention of these problems was made.

At 10 pages, Chapter 6 on the actual conversion was thin. The author talked about how we can "edit the most important parts of the application", yet he just seemed to describe aesthetics - and even then only at surface-level detail. I found this process to be the source of the most frustrating elements of the conversion: item types weren't carried across; ordering of items was indeterminate; and usage of UI defaults was necessary but not mentioned in the book. For a large application, the list of pages to be generated would create more confusion for the developer than just creating all the pages from scratch. My sample application caused generation errors, so why didn't the author cover this possibility?

Chapter 7 is where the author questions our knowledge of Apex, presuming we don't know its standard features. I found the LOV example demonstrated the author deviating from the topic of conversion to that of educating about Apex, which is a shame because it was a good time to highlight the fact that we were missing components from the conversion; to describe how to properly address that; how long it may take to do so; and how we could use annotations to continue effective management of the conversion - probably the most useful feature of the conversion tool.

I think the author sometimes gets a little off topic, even though he promised not to earlier on when stating the target audience. For example, in his final chapter he talks about security and pulling multiple applications together - too far removed from the conversion process for my liking.

Documentation on Application Express is already pretty thin, especially compared with the comprehensive documentation found for the database. Don't expect this to be the manual for converting your Oracle Forms applications to Application Express - it's a wonderful starting point for understanding how to use the tool, but not great at mitigating the likely issues of your conversion.

In the interest of fairness, other reviews of this book can be found on the following blogs:
Marc Sewtz
Roel Hartman
David Peake

Sunday 29 November 2009

Good News Week

My recent leave from work has been most enjoyable, and to share the wonderful news my partner and I eloped to Fiji to get married!

It was just the two of us, but family & friends are happy with the news - some indicating they had their suspicions.

I thought my boss, Penny, had sprung us at the taxi rank in Melbourne. Either she said nothing, or she didn't spot Tracey's dress past our carefully placed luggage. I shall find out tomorrow.

We returned home to find everything secure, plants still alive, and a potential flood disaster averted by the neighbours. Back to work tomorrow...

Friday 13 November 2009

AUSOUG Conference Intermission

In terms of the Australian Oracle User Group Conference, right now represents the time between the successful Perth leg and the Melbourne finish.

For some, the series is over - one city is enough. For others, it's a quick re-jig of the presentation before whisking off to another state or three. For me, as exhausting as it was, I'm looking forward to viewing those great presentations I just couldn't fit in between all the others I wanted to see.

For us at Sage, we had a great time and our presentations went well. Penny's involvement in the ODTUG stream was a success. Ray & Eddie assisted in putting together a very organised look at putting Ajax into Oracle Application Express. I popped into Penny's best practices presentation while I was gearing up for mine, and I saw plenty of heads nodding and what I'm sure were people thinking "I must go back and do that when I get back to work!"
Multiple award winning Chris Muir kept churning out excellent information, and quite happily accepted the challenge to extend his presentation time when an opening appeared.
All this while Branka & Kate held the fort at the Sage booth, keeping people fed (with chocolate) and keyboards clean with our little trinkets.

My presentation on converting Oracle Forms to Oracle Application Express went well, although 90 minutes sure does seem to go quickly when there is so much to be said.

As for some of the others I attended:

Howard Ong - demonstrated a great introduction to Oracle Designer's replacement, the verbose Oracle SQL Developer Data Modeler. Despite some early bugs on Oracle's part, it shows much promise, and we all feel it will come along quickly, just like SQL Developer did.

Lynn Munsinger - showed Oracle Forms programmers like me why ADF will be a fairly simple transition. I was quite impressed with the comparisons she made, in particular the analogy of comparing a washing machine to a dishwasher when talking about conversions. It was great to hear an Oracle product manager confirm some of the sentiments of my own presentation.

Connor McDonald - this man once again outdid himself and walked away with the best (ODTUG stream) award for his SQL-ectric look at SQL, analytics and the MODEL clause. I think the crowd thinned a little the deeper he got into it, but it remained entertaining throughout.

Gabriel Ilarda - after a few years of jeering, he found himself on stage talking about Oracle Warehouse Builder. Despite Murphy's Law jumping all over his live demo, the message came across well.

Tim Armitage - we jokingly lamented those who didn't come to hear about Oracle 11gR2's best feature, Edition-Based Redefinition; they were probably enjoying Tim Hall talking about PL/SQL (which I hope to see in Melbourne). I was a little taken aback by some of the features regarding editions, but I think once I get into it more I'll see the great potential this feature has.

Many thanks to those international speakers who came to our remote capital; it was good to catch up with old friends and meet new ones. For those making their first or second appearance as a presenter - good work. I hope your sessions went well, and we all trust you'll be up for doing it all again next year.

To all heading to Melbourne next week, I'll see you there!

Thursday 5 November 2009

Congratulations Chris Muir

For those who haven't picked up their Nov/Dec issue of Oracle Magazine, check out page 49 to see Sage's own Chris Muir.

Congratulations mate on the Oracle Ace Director of the Year award. Unfortunately that link doesn't have any photos, but you can see some of his recent awards in his blog - including the (little) coveted Sage Computing Services "How famous can one man get" award.

You can meet & greet this humble man at this year's AUSOUG Conference. Be there and be square.

ps - if you aspire to become an Oracle Ace, check out the Ace Program web page (whose map seems to be missing Connor McDonald, and puts Chris Muir in Melbourne...) or this presentation, in which my very own blog happened to make a cameo.

Thursday 29 October 2009

Unexpectedly learning something new...

Today I was exploring the new SQL Developer with regard to managing Application Express.

There is an option, for example, to change the alias for an application. As usual there is the facility to see the SQL, so I thought why not have a look at what was going on.

I was presented with a useful looking anonymous block I never really thought about before.

I thought I'd try it out:
create table a ( a number);

insert into a values (1);

declare
  PRAGMA AUTONOMOUS_TRANSACTION;
begin
  insert into a values (2);

  commit;
end;
/

rollback;

select * from a;
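For reference, the final query should return only the value 2 - the autonomous block commits its insert independently of the outer transaction, while the insert of 1 is undone by the rollback:

         A
----------
         2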
Not long after, I found another potential use. I was doing some work with triggers, trying to decide on a method to create a new record on the same table that fired the trigger while avoiding the mutating table issue - perhaps this could do the trick without the need for a separate procedure defined with the pragma? Alas:
PLS-00710: PRAGMA AUTONOMOUS_TRANSACTION cannot be declared here

How about that? I like learning new things, and I happened to find something about PL/SQL while researching SQL Developer managing Application Express!

Monday 26 October 2009

Further empowering search products

One of my recent presentations was inspired by some colleagues talking about the difficulty they have searching the Oracle documentation for the information they require.

During this presentation I mention a book I read recently (alas, I can't remember which one) where they refer to the concept of the world's entire knowledge being stored on a device the size of a grain of sand. It seems a little fanciful now, but imagine what sounded fanciful back in 1901, and compare that to what we have today.

Of course, the reason I made this reference was that our search engines are going to need to become smarter. And we're going to need to be smarter about how we pool information from all resources.

For years now, hardware has been sustaining growth approximating Moore's Law. Hardware companies have certainly had their day in the sun, and that indeed continues. Oracle's recent partnership with Sun, releasing information on Exadata V2, shows us what's happening in our little sphere.

Microsoft's time in the sun is beginning to wane. Competition grows strong between Bing & Google, and a recent news item caught my attention.

On Google's official blog they announced their future integration with Twitter. I mentioned above how we need to be smarter with the way we integrate our information; I feel this is a good step towards that goal. Sites like Twitter are an untapped goldmine of current information, current news stories, and the current opinions of the masses.

One of my all-time favourite books explores (among other things) the power of real-time information. Richard Wiseman, psychologist and skeptic, recently used Twitter as a tool to conduct a scientific experiment. I heard him on a podcast recently, straining his brain trying to work out how to use social networking sites like this - sites that lead the cultural meme - as an effective tool for powerful studies on large and diverse populations. I'm sure integration of this information with powerful search engines will aid his work.

Arthur C. Clarke's book linked above mentions a generic "Search Engine" as a tool ultimately hard-wired into our brains, such that we can use it to search for information - say, looking up the background of someone you've just met while you strike up a conversation.

Considering the world's knowledge is now more than just an ancient library of information, I feel that "information finding" companies like Google will be the strong force over the next few decades.

I also believe the tide is turning away from journalism and towards blogs. I particularly find this with the scientific blogs I read. Recently it's been suggested that we may need to pay a subscription for news from major sources - just like buying a newspaper. I doubt such a model will work well, and for the type of news I'm interested in, I find science bloggers do a more accurate, objective and entertaining job. So perhaps future search engines also need to tailor to our preferences, our needs of the moment. We certainly see some of this now when we search for books on Amazon, or listen to music on iTunes or Last.fm - common purchases are linked together and strengthen connections, like the neural networks I learnt about at university.

Search facilities are the next big thing.

Now, before I finish, considering this is an Oracle-themed blog, I'd best mention that one of Oracle's best search facilities is Oracle Text (formerly interMedia Text). I'm waiting for the opportunity to explore this feature even more after developing a wonderful search facility a few years ago. Perhaps a future presentation is required to highlight this possibly underutilised feature.

My thoughts for this week conclude with a quote sourced from Twitter via my RSS feed.

He who does not research has nothing to teach ~ Unknown

Friday 23 October 2009

2009 AUSOUG Conference Program

Thanks Jeff, for reminding me the 2009 AUSOUG Conference program has been released.

Jeff has indeed listed some key presentations. Having a look through the program, it seems I'll be conflicted choosing between some of the topics.

I'll be looking forward to seeing how my university friend Gabriel Ilarda goes with his first presentation, although he'll face tough competition from Tim Hall on the current schedule.

I look forward to seeing everyone again this year, and if you haven't registered yet, the early bird rate has been extended - so get in while you can!

I'll be there in Perth and Melbourne, so come by the Sage Computing Services booth and say g'day.

Monday 19 October 2009

Building simple test scenarios

When I talk to people learning how to write queries in Oracle - particularly business analysts - a typical conversation consists of basics such as single-row functions, joins and aggregates. However, when the subject of DDL is raised, such as CREATE TABLE, I often get the reaction: "Oh, I don't need to know that".

I beg to differ. I regularly create little tables to test theories, confirm behaviour, and demonstrate test cases - in fact my schema is littered with them. I think this is an important skill, particularly the latter. If you're having trouble getting some task done, it is often helpful to simplify the problem to the lowest common denominator - a basic test case. Colleagues, DBAs, Oracle support - they'll all ask for the same thing. Visit asktom.oracle.com; Tom requests and uses them all the time.

So I think one should be fluent in creating and dropping small test tables. Take the example of demonstrating the handling of nulls by the AVG group function.
-- always good to have drop at the top so you can repeat the test from scratch
-- ... and clean up your schema!
drop table avg_test;
create table avg_test (a number);
insert into avg_test values (1);
insert into avg_test values (2);
insert into avg_test values (3);
insert into avg_test values (null);
select sum(a)
      ,avg(a)
      ,avg(nvl(a,0))
      ,sum(a)/count(a) my_avg
      ,sum(a)/count(*) my_Avg2
      ,count(a)
      ,count(*)
      ,count(1)
      ,count(null)
from avg_test;

    SUM(A)     AVG(A) AVG(NVL(A,0))     MY_AVG    MY_AVG2
---------- ---------- ------------- ---------- ----------
         6          2           1.5          2        1.5

  COUNT(A)   COUNT(*)   COUNT(1) COUNT(NULL)
---------- ---------- ---------- -----------
         3          4          4           0
So by creating a basic table with just one column, I can insert four rows that allow me to demonstrate and confirm the behaviour of AVG vs SUM/COUNT, and how COUNT(*) differs from a COUNT of a particular field. This obviously harmonises with the oft-forgotten sentence in the documentation:
If you specify the asterisk (*), then this function returns all rows, including duplicates and nulls.

With this in mind, I can then relate back to my actual data and see how the following two queries differ in behaviour purely because of the expression I've used within the COUNT function.
SELECT  count(*), daily_rate
FROM resources
GROUP by daily_rate
HAVING COUNT(*) > 1;

  COUNT(*) DAILY_RATE
---------- ----------
         2        100
         4
         2        120

SELECT  count(daily_rate), daily_rate
FROM resources
GROUP by daily_rate
HAVING COUNT(daily_rate) > 1;

COUNT(DAILY_RATE) DAILY_RATE
----------------- ----------
                2        100
                2        120
We've always got to consider nulls, even with aggregate functions!

In the context of a person learning how Oracle behaves, the simple process of creating small test tables can be quite valuable.

Wednesday 14 October 2009

Waiting for the OOW2009 announcement...

So as not to be lost in the flood of Oracle Open World blog entries, I'm keeping myself quiet - instead living vicariously through the wonderful tales of the conference.

I've been particularly interested in the perspectives coming from Tim Hall, but what I'm really waiting for is the "big announcement" for this year. Sure, there is plenty to be said about the new Exadata machines but I'm a software developer!

If my time zone calculations are correct, within 20-24 hours (gee, we really are on the other side of the world) Larry Ellison should be delivering his keynote speech.

I wonder if he'll include the news of a nearby launch date for Apex 4.0...

Tuesday 6 October 2009

On wrapping & obfuscating PL/SQL

The Oracle database provides the ability to obfuscate PL/SQL code using the wrap utility.

Many moons ago, when I was working on a 9i database, I encountered an issue where my information wasn't completely obfuscated. I had a package that was performing some encryption, and I wanted to ensure the seed to my encryption method was hidden. Take the following example:
create or replace procedure seeder is
  vc varchar2(20) := 'This string';
begin
  null;
end;
/
On line 2 I declare a string. I would expect this information to be wrapped just like the rest of my code; however, I assessed the wrapped version of the PL/SQL after using the following command:
wrap iname=c:\seeder.sql oname=c:\seeder.plb
I found that I could still see my string definition amongst the code (this is a partial copy from the resulting output):
...
2 :e:
1SEEDER:
1VC:
1VARCHAR2:
120:
1This string:
0
...
This wasn't acceptable, so my solution was to declare variables that contained one-character strings and concatenate these to form my seed. In hindsight, perhaps I could also have used CHR() to formulate the string.
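As a sketch of that hindsight idea (reusing the declaration from the example above), the seed could be assembled from character codes so that no readable literal survives wrapping:
  -- build 'This' from character codes; no string literal appears in the source
  vc varchar2(20) := chr(84)||chr(104)||chr(105)||chr(115);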

Recently, while discussing this topic, I wondered whether the current version of the database had the same issue, so I tried on 10gR2 using a combination of supplied PL/SQL packages.
exec  DBMS_DDL.CREATE_WRAPPED(dbms_metadata.get_ddl(object_type => 'PROCEDURE', name => 'SEEDER'));
And the resulting code had a different feel about it:
SAGE@sw10g> select dbms_metadata.get_ddl('PROCEDURE' ,'SEEDER') from dual;

DBMS_METADATA.GET_DDL('PROCEDURE','SEEDER')
--------------------------------------------------------------------------------

  CREATE OR REPLACE PROCEDURE "SAGE"."SEEDER" wrapped
a000000
b2
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
7
4f 92
t5QJKAdccmuGzbpujO65PNpHmLQwg5nnm7+fMr2ywFxpnp8yMlKyCQmldIsJaefHK+fHdMAz
uHRlJXxlUMNLwlxpKPqFVisW0T6XRwzqxIvAwDL+0h3l0pmBCC2LwIHHLYsJcKamqIi2Mg==

1 row selected.
So it seems that the algorithm has improved and being able to "see" strings in the wrapped code is no longer a problem.

Once again, another demonstration of how "known knowns" can change over time, and why you must always test behaviour on your version and your infrastructure.

Thursday 1 October 2009

More Aces than a deck of cards

... says the latest bulletin regarding the 2009 AUSOUG Conference.

Oracle Aces presenting this year include:
Connor McDonald - whom I've had the pleasure of working with in the past.
Lucas Jellema - will also be conducting some extra seminars while in Melbourne. 
Peter Koletzke - if you're into JDeveloper, he's your man. That link is a gold mine for middle-ware papers.
Tim Hall - best speaker from last year's conference series.
Penny Cookson - My effervescent yet humble foreman. Her presentations always get a giggle.

And of course I'll be there presenting my wares. Hope to see you, too!

Tuesday 29 September 2009

Thinking differently about application design

The theme for the 2008 AUSOUG Conference series was "Oracle Software from a User's Perspective". While you could infer a focus on the end user from some of the presentations, it's sometimes a difficult theme to focus on, especially when you're writing a technically heavy paper. It's more like relating yourself to your second cousin once removed (unless of course I've missed the point and we're talking about the developer/DBA as the user of Oracle development tools).

This doesn't mean we can't maintain our awareness for our ultimate customer - the end user.

Tom Kyte recently linked to a cartoon that painted 1000 words about the way we write applications. (Thanks Eric Burke.) We've all been on projects where screens look like this, and considering we write GUI applications for databases that contain structures like this, it's an interface that's inevitable. The same reason the 747 cockpit contains 1000+ knobs and switches.

However, we can still look for opportunities to simplify. The interface to Apple's iPod - iTunes - isn't exactly a dummy user's delight. There are plenty of options and interface difficulties. They've just transferred the complexity away from the entry point for 90% of the music player's use.

So what about your application? Are there points of entry that account for a large percentage of use? Is there a way to simplify the entry of key units of data? To go even more granular, are there little smarts you can add to certain types of fields, such as dates? Are there ways to minimise use of the mouse at all times? Is the first key field of your entry form user/time/context aware?

Many considerations to make. I read a lot of blogs, and I admit many of them are certainly from a technical perspective on the database, but there are a few I subscribe to that aren't even software related, yet still pertain to my job. In the same way that Microsoft employ psychologists to assist with the design of the menu structures and interfaces of their ubiquitous applications such as Word & Excel, I read some blogs that encourage thinking outside the square.

One such article I came across recently was from Seth Godin - The End of Dumb Software. I can't remember where I first stumbled across his blog, but he always has interesting perspectives as a marketer that I can either apply within my job as a database developer, or even utilise in day-to-day life.

He was having a gripe in particular about simple calendar applications. If 2 o'clock is selected, you'd presume 999 times out of 1000 that 2pm was intended - so why is that not the default? Why can't a calendar application presume most dates used are within working hours, with the option to change if required? He goes on to suggest he shouldn't need to "edit" an event to view further details, though I understand the architect's potential thinking in that scenario.

There are plenty of other quick examples he runs through that I think all developers, architects and designers should read through and consider. How many LOVs, search windows, and entry points of various kinds have you created that could have been written slightly differently to save the end user some time & frustration?

A simple example I can think of, in a Forms application I once worked on, was the ability to enter the letter "t" in a date field to represent (and be automatically translated into, on validation) today's date. How many times in your application might the user want to enter today's date? Turning it into one keystroke saves between 5 and 10 keystrokes, depending on date format settings (310809 to 31-AUG-2009). One keystroke vs eleven? I know what I'd prefer.
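A minimal sketch of that validation logic, assuming a character-based item that is converted on validation (block and item names are hypothetical):
-- Hypothetical WHEN-VALIDATE-ITEM trigger logic: expand 't' into today's date
IF UPPER(:ctl.entry_date) = 'T' THEN
  :ctl.entry_date := TO_CHAR(SYSDATE, 'DD-MON-YYYY');
END IF;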

Food for thought.

If you have other blogs in your RSS feed like Seth Godin, I'd be interested to hear about them.

Monday 28 September 2009

Advert: On-line Book Reviews

While doing some research I was directed to the online bookstore Packt Publishing. Some other online book stores have a chapter listing or a short excerpt; at Packt you seem to be able to download a sample chapter for every book you're interested in - at least all those I had a look at.

If you're a blogger, you've also got the opportunity to review books. I've chosen one to review, and I'll let developers like you know whether it's worth it. Stay tuned over the coming months; I hope to have it reviewed by the time the 2009 AUSOUG Conference series commences.

Online reference manuals are great, but sometimes you need more than a reference. I'm not a huge fan of books that regurgitate the documentation, but a best-practices book, or a guide to a new concept, case study or transformation process, never goes astray.

Friday 25 September 2009

SQL Developer 2.1 EAP

The latest major release of SQL Developer is now in the Early Adopter stage, starting at 2.1 (not to be confused with the much older & somewhat flaky 1.2 version).

I gave it a go and here are some quick observations:
  1. Unzipped footprint including the JRE is up from 166MB to 200MB (Update: I've just noticed the memory usage on my laptop has almost doubled to 237MB!)
  2. On opening I still have to tell it where java.exe is, not the nicest thing to have to do for those more basic users.
  3. Snappier to open, even considering I was migrating preferences from 1.5.4
  4. All icons are sharper and clearer
  5. Jobs included in navigator list
  6. Free data model viewer, which makes sense to me and seems fair.
  7. I can compile packages/procedures using F9 in worksheet - yay!
  8. ... and I get syntax errors more appropriately.
  9. Although, saying that, when I open a program from the navigator I get spurious errors on a valid compile.
  10. Syntax errors for queries aren't in popups, just in the results window. The number of rows fetched and the timing appear just above the results too, which is friendlier on the eye.
  11. I like the fact the tabs above the result set can be closed if you don't want them.
  12. I still can't figure out how to open two table views at the same time though.
  13. There is a unit testing component that will be interesting to explore, giving the nice people at Quest some more competition in the testing market.
  14. I like the preferences changes. One of which includes more options for SQL formatting, including a preview.
  15. The coding templates are also a good inclusion. Let's ensure good standards are more widely adopted!
  16. It didn't auto prompt me for a connection when I opened a new SQL file and attempted to execute.
  17. F6 for explain plan has moved to F10
  18. ...best I continue with work now and see how it performs with day-to-day tasks.
Official new feature list can be found here.

Some people think all it needs to do is run SQL and PL/SQL, but considering it's a GUI tool, we want it to be brisk and easily customisable - otherwise we'll stick with SQL*Plus. Overall, I like the new improvements. I'm sure the kinks will be ironed out quickly, and I look forward to the ongoing improvements to a good free product.

Tip o' the hat to Dimitri Gielis for the tip-off.

Monday 21 September 2009

Data Densification - filling dates example

Many years ago, when I was first learning Oracle (in the deep end), I came across the problem of data densification. I can't remember how I solved it back then, but I've encountered it a few times since. When writing my presentation on the MODEL clause, I explored the difference between two options. Recently, whilst investigating syntax definitions for ANSI-standard outer joins, I encountered another solution called a partitioned outer join - so I thought I'd compare all three for performance and readability.

The scenario is thus - I have an amount in a table for various dates throughout the year, for a particular person.
NAME          AMT DT
------ ---------- -----------
Scott         117 11-jan-2009
Scott         250 17-feb-2009
Scott         300 14-apr-2009
Scott          50 06-jun-2009
Wade         1231 17-mar-2009
Wade         2321 22-apr-2009
Wade         3122 30-sep-2009
Wade           59 31-oct-2009
However, in my report I want all months of the year displayed for each person, regardless of whether amounts exist. Let's add a cumulative figure in there for good measure. Included below it is the SQL for my test case.
NAME   MTH              AMT    CUM_AMT
------ --------- ---------- ----------
Scott  January          117        117
Scott  February         250        367
Scott  March              0        367
Scott  April            300        667
Scott  May                0        667
Scott  June              50        717
Scott  July               0        717
Scott  August             0        717
Scott  September          0        717
Scott  October            0        717
Scott  November           0        717
Scott  December           0        717
Wade   January            0          0
Wade   February           0          0
Wade   March           1231       1231
Wade   April           2321       3552
Wade   May                0       3552
Wade   June               0       3552
Wade   July               0       3552
Wade   August             0       3552
Wade   September       3122       6674
Wade   October           59       6733
Wade   November           0       6733
Wade   December           0       6733

DROP TABLE customer;
CREATE TABLE customer (
   name  VARCHAR2(10)
  ,amt   NUMBER
  ,dt    DATE
);

insert into customer values('Scott',117,  to_date('11-01-2009','DD-MM-YYYY'));
insert into customer values('Scott',250,  to_date('17-02-2009','DD-MM-YYYY'));
insert into customer values('Scott',300,  to_date('14-04-2009','DD-MM-YYYY'));
insert into customer values('Scott',50,   to_date('06-06-2009','DD-MM-YYYY'));
insert into customer values('Wade',1231,  to_date('17-03-2009','DD-MM-YYYY'));
insert into customer values('Wade',2321,  to_date('22-04-2009','DD-MM-YYYY'));
insert into customer values('Wade',3122,  to_date('30-09-2009','DD-MM-YYYY'));
insert into customer values('Wade',59,    to_date('31-10-2009','DD-MM-YYYY'));
This is a pre-10g solution, using an efficient method on the DUAL table to conjure records. I also use analytics to determine the cumulative amount. There are, however, a cartesian join and two full table scans on the customer table.
SELECT date_fill.name
      ,TO_CHAR(real_dt,'Month') mth
      ,NVL(amt,0) amt
      ,NVL(SUM(amt) OVER (PARTITION BY date_fill.name
                          ORDER BY real_dt ),0) cum_amt
FROM
  (SELECT name, TRUNC(dt,'mm') dt, SUM(amt) amt
   FROM   customer
   GROUP BY name, TRUNC(dt,'mm')
   ) actual_data -- Actual data
  ,(SELECT name, real_dt
    FROM  (SELECT DISTINCT name
           FROM   customer)
         ,(WITH mths AS (SELECT TRUNC(SYSDATE,'YYYY') real_dt
                         FROM DUAL CONNECT BY LEVEL <= 12)
           SELECT ADD_MONTHS(real_dt,ROWNUM-1) real_dt FROM mths)
   ) date_fill -- Distinct list with conjured dates
-- Outer join actual data with full date list
WHERE date_fill.real_dt = actual_data.dt(+)
AND   date_fill.name    = actual_data.name(+)
ORDER BY date_fill.name, date_fill.real_dt;

-------------------------------------------------------------------------------------
| Id  | Operation                           | Name     | Rows  | Bytes | Cost (%CPU)|
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |          |     2 |    74 |    11  (28)|
|   1 |  WINDOW SORT                        |          |     2 |    74 |    11  (28)|
|*  2 |   HASH JOIN OUTER                   |          |     2 |    74 |    11  (28)|
|   3 |    VIEW                             |          |     2 |    24 |     6  (17)|
|   4 |     MERGE JOIN CARTESIAN            |          |     2 |    24 |     6  (17)|
|   5 |      VIEW                           |          |     1 |     6 |     2   (0)|
|   6 |       COUNT                         |          |       |       |            |
|   7 |        VIEW                         |          |     1 |     6 |     2   (0)|
|   8 |         CONNECT BY WITHOUT FILTERING|          |       |       |            |
|   9 |          FAST DUAL                  |          |     1 |       |     2   (0)|
|  10 |      BUFFER SORT                    |          |     2 |    12 |     6  (17)|
|  11 |       VIEW                          |          |     2 |    12 |     4  (25)|
|  12 |        HASH UNIQUE                  |          |     2 |    12 |     4  (25)|
|  13 |         TABLE ACCESS FULL           | CUSTOMER |     8 |    48 |     3   (0)|
|  14 |    VIEW                             |          |     8 |   200 |     4  (25)|
|  15 |     HASH GROUP BY                   |          |     8 |   136 |     4  (25)|
|  16 |      TABLE ACCESS FULL              | CUSTOMER |     8 |   136 |     3   (0)|
-------------------------------------------------------------------------------------
Here is what some would consider a more elegant solution using the MODEL clause. The explain plan is far neater, and the performance was also better - there were no joins and half as many logical reads. The cumulative column is calculated using a nifty rule computation.
SELECT name, TO_CHAR(dt,'DD-MM-YYYY') dt, amt, cum_amt -- Model results
FROM (
   SELECT name, TRUNC(dt, 'MM') dt, SUM(amt) amt
   FROM   customer
   GROUP BY name, TRUNC(dt, 'MM')
)
MODEL
PARTITION BY (name)
DIMENSION BY (dt)
MEASURES (amt, cast(NULL AS NUMBER) cum_amt) -- Define calculated col
IGNORE NAV
RULES SEQUENTIAL ORDER(
   -- Conjure dates
   amt[FOR dt FROM TO_DATE('01-01-2009', 'DD-MM-YYYY')
              TO   TO_DATE('01-12-2009', 'DD-MM-YYYY')
              INCREMENT NUMTOYMINTERVAL(1, 'MONTH')
       ] = amt[CV(dt)] -- Apply amt for given date, if found
  ,cum_amt[ANY] = SUM(amt)[dt <= CV(dt)] -- Calculate cumulative
)
ORDER BY name, dt;
------------------------------------------------------------------------------
| Id  | Operation                    | Name     | Rows  | Bytes | Cost (%CPU)|
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |          |     8 |   136 |     5  (40)|
|   1 |  SORT ORDER BY               |          |     8 |   136 |     5  (40)|
|   2 |   SQL MODEL ORDERED          |          |     8 |   136 |     5  (40)|
|   3 |    HASH GROUP BY             |          |     8 |   136 |     5  (40)|
|   4 |     TABLE ACCESS FULL        | CUSTOMER |     8 |   136 |     3   (0)|
|   5 |    WINDOW (IN SQL MODEL) SORT|          |       |       |            |
------------------------------------------------------------------------------
The following solution is the one I stumbled upon while playing with ANSI joins, and I wanted to see how it compared with the others. We still have a join, but that's not necessarily an issue. It's certainly tidier looking than the first solution, and we don't need to fry our brains learning how to use MODEL. I have heard some resistance to ANSI syntax, however - not for the sake of going against the grain, but apparently there have been some security issues. I'm not qualified to comment on that, so let's just compare the performance in this particular test case.
SELECT c.name, TO_CHAR(real_dt,'Month') mth, NVL(amt,0) amt
      ,NVL(SUM(amt) OVER (PARTITION BY c.name
                          ORDER BY real_dt ),0) cum_amt
FROM customer c
PARTITION BY (name)
RIGHT OUTER JOIN
  (WITH mths AS (SELECT TRUNC(SYSDATE,'YYYY') real_dt
                 FROM DUAL CONNECT BY LEVEL <= 12)
   SELECT ADD_MONTHS(real_dt,ROWNUM-1) real_dt FROM mths) mths
ON (real_dt = TRUNC(c.dt,'mm'))
ORDER BY c.name, real_dt;
-------------------------------------------------------------------------------------
| Id  | Operation                           | Name     | Rows  | Bytes | Cost (%CPU)|
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |          |     2 |   130 |     8  (13)|
|   1 |  WINDOW SORT                        |          |     2 |   130 |     8  (13)|
|   2 |   VIEW                              |          |     2 |   130 |     8  (13)|
|   3 |    NESTED LOOPS PARTITION OUTER     |          |     2 |    46 |     8  (13)|
|   4 |     BUFFER SORT                     |          |       |       |            |
|   5 |      VIEW                           |          |     1 |     6 |     2   (0)|
|   6 |       COUNT                         |          |       |       |            |
|   7 |        VIEW                         |          |     1 |     6 |     2   (0)|
|   8 |         CONNECT BY WITHOUT FILTERING|          |       |       |            |
|   9 |          FAST DUAL                  |          |     1 |       |     2   (0)|
|* 10 |     FILTER                          |          |       |       |            |
|  11 |      SORT PARTITION JOIN            |          |     1 |    17 |     4  (25)|
|  12 |       TABLE ACCESS FULL             | CUSTOMER |     1 |    17 |     3   (0)|
-------------------------------------------------------------------------------------
I executed these three statements 1000 times each on my laptop on 10gR2 (with similar results in 11gR1) to see how throughput compares. I consistently found the following performance:
  • Original: 6 secs (.006 secs per iteration)
  • Model clause: 4.6 secs (.0046 secs per iteration)
  • Partitioned outer join: .35 secs (.00035 secs per iteration)
Personally I was a little surprised at the dramatic timing difference with the partitioned outer join over the original. I thought perhaps that would have been more comparable to the model clause solution. Something to bear in mind in future development.

If I needed to write this for a future project I'd probably take a closer look at the relevant data set using Tom Kyte's runstats package. This used to be located at http://asktom.oracle.com/tkyte/runstats.html, but the link is currently broken - possibly related to his recent move. However here is one of many of his forum entries addressing runstats. This package does more than just measure throughput; it will let you know how your application may scale by paying attention to latches. A DBA's perspective can be found here.
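For what it's worth, the simple throughput figures above need nothing fancier than a loop around each statement. Here's a minimal sketch of the sort of harness I mean - the COUNT(*) wrapper is purely my simplification to keep the fetch cheap (and it can itself influence the plan, which is exactly why runstats is the better tool):
SET SERVEROUTPUT ON
DECLARE
  l_start PLS_INTEGER;
  l_cnt   NUMBER;
BEGIN
  l_start := DBMS_UTILITY.GET_TIME;  -- centiseconds since an arbitrary epoch
  FOR i IN 1 .. 1000 LOOP
    SELECT COUNT(*) INTO l_cnt
    FROM  (SELECT name, dt, amt FROM customer);  -- swap in each candidate query here
  END LOOP;
  DBMS_OUTPUT.PUT_LINE((DBMS_UTILITY.GET_TIME - l_start)/100 || ' secs for 1000 runs');
END;
/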

Sunday 13 September 2009

Other uses for NULLIF

There are some features in the database that are understandably difficult to come across for the first time. If you've never had a specific need to fulfil, you aren't the type who meanders through the new features guide, or you're just relatively new to Oracle, you may not have come across useful day-to-day features & concepts such as WITH, DBMS_JOB or pipelining. (I just picked a few random ones that came to mind...)

There are also abundant single-row functions available; some are easy to use, but for others no obvious usage immediately stands out. I make no secret that my favourite online reference manual is the Oracle Database SQL Reference - I have a shortcut to a local version in my browser toolbar. Personally I prefer the 10g layout to the 11g one, but beggars can't be choosers I suppose.

I thoroughly recommend that beginners and even the more mature programmers peruse this manual, in particular the chapter on SQL Functions. I believe awareness is often the key to writing a successful application - or at least to not re-inventing the wheel.

So, to stop beating around the bush: what about the NULL-related functions, specifically NULLIF? Have you used it?
The manual illustrates an example showing those employees whose job has changed since they were hired. If their job hasn't changed, null is shown.
SELECT e.last_name, NULLIF(e.job_id, j.job_id) "Old Job ID"
FROM employees e, job_history j
WHERE e.employee_id = j.employee_id
ORDER BY last_name;

LAST_NAME                 Old Job ID
------------------------- ----------
De Haan                   AD_VP
Hartstein                 MK_MAN
Kaufling                  ST_MAN
Kochhar                   AD_VP
Kochhar                   AD_VP
Raphaely                  PU_MAN
Taylor                    SA_REP
Taylor
Whalen                    AD_ASST
Whalen
What if an employee record has multiple e-mail address columns - an alternative point of contact, say? In some scenarios/records the data could be unclean, with the alternate matching the primary, so instead of coding with these older constructs:
CASE WHEN email_address != email_address_alternate
THEN email_address_alternate
ELSE null
END email_address_alternate

DECODE(email_address, email_address_alternate, null, email_address_alternate) email_address_alternate
We can use a tidier:
NULLIF(email_address_alternate, email_address)
A similar situation applies to a mailing/delivery/home address (albeit ideally on a separate table) or a work/mobile/home number, but in this case the COALESCE function is more appropriate. I plan a blog entry on this function in the near future, along with performance considerations.
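As a quick taste ahead of that entry - COALESCE simply returns the first non-null of its arguments, so picking a preferred contact number becomes a one-liner. The column and table names here are hypothetical:
-- Return the first phone number recorded, in order of preference
SELECT COALESCE(work_phone, mobile_phone, home_phone) best_contact
FROM   employee_contacts;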

Perhaps you are performing a data load and you need to ignore certain indicative data coming through:
NULLIF(surname, 'INACTIVE')
The final scenario, and one I've started to use regularly, is the old mathematical enigma of dividing by zero. (The wiki entry is a thrilling read, almost as fun as the images returned by a Google search.)

Instead of facing this throughout a report:
select 1/0 from dual
*
ERROR at line 1:
ORA-01476: divisor is equal to zero
We can do this to return null where price is zero, instead of an error:
SELECT (price-cost)/NULLIF(price,0) gp FROM sales;
Sweet.
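And if the report should show zero rather than null for a zero price, wrapping it in NVL (my own small variation on the above) does the trick:
-- NULLIF sidesteps ORA-01476; NVL maps the resulting null back to zero
SELECT NVL((price-cost)/NULLIF(price,0), 0) gp FROM sales;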

Thursday 10 September 2009

A LONG view ahead

Every time a new version is released I feel compelled to have a little peek at the definition of ALL_VIEWS, to see if columns such as ALL_VIEWS.TEXT have been converted from the LONG datatype. Alas, 11gR2 still utilises the LONG datatype; perhaps this is just something that has to persist through the ages.

However we can still get around the issue. The reason I find this of interest is sometimes during an impact analysis of say - dropping a column from a table, we need to determine what objects refer to this column. We could start with ALL_DEPENDENCIES, but this isn't granular enough for some analyses.

Due to the many restrictions with the LONG datatype, we can't use the INSTR function to search this dictionary view. Here's one work-around:
> DROP TABLE my_views;

> CREATE TABLE my_views(text CLOB);

> INSERT INTO my_views (text)
SELECT TO_LOB(text) 
FROM all_views;

SAGE@sw10g> SELECT *
  2  FROM   my_views
  3  WHERE INSTR(lower(text),'hiredate') > 0;

TEXT
------------------------------------------------------------------------
select "EMPNO","ENAME","JOB","MGR","HIREDATE","SAL","COMM","DEPTNO"

1 row selected.
Got any others?

I do! 
Here is my posting from April 2012 on using XML to solve the problem.
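In short, that trick lets DBMS_XMLGEN re-serialise the LONG as a CLOB on the fly. A rough sketch of the general shape (not that posting verbatim - and it queries ALL_VIEWS once per view, so don't expect it to be quick):
-- GETXML runs the inner query per view and returns the result as a CLOB,
-- which we can then search with INSTR
SELECT owner, view_name
FROM   all_views v
WHERE  INSTR(LOWER(DBMS_XMLGEN.GETXML(
          'SELECT text FROM all_views WHERE owner = ''' || v.owner ||
          ''' AND view_name = ''' || v.view_name || '''')), 'hiredate') > 0;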

Thursday 3 September 2009

PRECEDES follows FOLLOWS

Thanks to a tip-off from volleyball coach / Oracle guru Connor McDonald (and later from colleague Chris Muir), it seems 11gR2 was released while I was 30,000ft in the sky.

I wouldn't be practising what I preach if I didn't point you to one of the best books in the online Oracle Documentation - the New Features Guide.

If you want to keep up with Oracle technology, and learn a thing or two, every time a new version of Oracle is released I highly recommend a peruse through this book.

Keep a lookout in the blog community, because plenty of articles pop up around these times showing off the shiny new features. One feature I'll mention today is an extension to triggers.

In a recent presentation I included some thoughts on compound triggers and a quick note on the FOLLOWS clause, which allows you to indicate that a trigger fires after a specified trigger.

This can be useful if you need to extend proprietary Oracle software.
create or replace trigger package_trigger
after update of salary
on employees
for each row
begin
dbms_output.put_line('package_trigger');
end package_trigger;
/

create or replace trigger custom_stuff
after update of salary
on employees
for each row
follows package_trigger
begin
dbms_output.put_line('custom_stuff');
end custom_stuff;
/
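A quick way to see the ordering in action - assuming the triggers above are compiled against the HR EMPLOYEES table and serveroutput is on:
set serveroutput on
update employees set salary = salary where employee_id = 100;

1 row updated.

package_trigger
custom_stuff
Without FOLLOWS, the firing order of two triggers on the same event is undefined.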
I don't know whether it was an afterthought, or just one of the last features squeezed into 11gR1 at the pleading request of a client, but it was a little odd that a PRECEDES-type functionality wasn't included.

In 11gR2, however, it is now available.

There are caveats, however: PRECEDES may only be applied to a reverse crossedition trigger. Editions are a whole new ball game and I can't yet confidently tell you more about them, except that Connor's been excited about the prospect of these for quite some time, and their impact throughout the database seems widespread.
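From my reading of the docs, the syntax looks roughly like this - untested on my part, and it will only compile in an editions-enabled environment as part of an online upgrade (all names here are illustrative):
-- Sketch only: requires an edition-enabled schema and a child edition
create or replace trigger employees_reverse_xed
before insert or update on employees
for each row
reverse crossedition
precedes employees_forward_xed
begin
  null; -- translate new-edition column values back to the old representation here
end;
/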

Other features for the handy developer to keep an eye out for include:
  • Enhancements to the Forms->Apex conversion process
  • Analytical function improvements
  • Recursive WITH clause - ANSI compliant hierarchical queries
  • Flashback support for DDL
  • Improvements to Oracle Scheduler
Oracle keeps on Oracling.

Sunday 30 August 2009

Upcoming Presentation

G'day All,

I'll be presenting in Melbourne soon - I'll be in town for a few days and hopefully making best use of my time!

My topic will be Conditional Compilation - something touched on briefly during Steven Feuerstein's recent visit.

Where: VicUrban, Level 12, 700 Collins Street Docklands, 3008 (Beside Southern Cross Station)
When: Tuesday, September 08, 2009 05:15pm - 06:15pm
RSVP: RSVP to president.vic@ausoug.org.au to advise your attendance by close of business on 07-SEP-2009.

If you're from Melbourne and have an interest in PL/SQL, I'd enjoy your company.

Thursday 27 August 2009

Syntax Highlighting for SQL

Today I decided I can be a choosey chap.

Unhappy with the basic <pre> syntax for pre-formatted text such as my SQL queries and output, I was on the hunt for a Syntax Highlighter to suit my satisfaction.

Part of this search entailed reviewing how other bloggers do it, but considering I read most of my feeds through Google Reader, I don't see as much of the magic.

I like the way Laurent Schneider formats his, but I think it may be internal and cumbersome.

I found a bunch of reviews here.

A common/popular one seems to be Syntax Highlighter by Alex Gorbatchev. I've applied this to my blog although I'm not entirely happy with it.

I don't know if it's my penchant for SQL*Plus, or my softer choice of highlight colours in TextPad, but I don't like the colour choice of the strings or certain keywords - I mean, pink for aggregate functions? I had a look at the source and it doesn't seem immediately obvious how to customise these. I need the .js/.css files externally hosted anyway because I can't host them in blogger nor Google Docs.

It looks cleaner without the line numbers - I think I'll only add those when I want to mention particular lines.

It also comes out quite small in Google Chrome - a browser I find myself using more and more - can't wait for XMarks to be available in Chrome.

I also tried Google Code Prettifier, although it seemed to have minimal effect on SQL bar drawing a box around my code.

I don't know if I'm being picky or if there just isn't much out there for SQL. If anyone has any suggestions, I'd more than welcome them.

Thursday 20 August 2009

2009 AUSOUG Conference - Call for Papers

Hello Oracle Community,

The call for papers has been re-iterated for this year's AUSOUG 20:20 Foresight National Conference Series.

Always a good opportunity for networking, learning and a few fun & games, this year's conference series is ready for abstract submissions from young & old.

If you've never written a presentation before, I highly recommend it. If you have, then get cracking on the next one! I've (hopefully) got two great topics up my sleeve this year and I look forward to the experience.

The 2009 dates are:
  • Perth from 10-11 November at the Burswood Convention Centre
  • Melbourne from 16-17 November at the Sebel Albert Park Hotel
Visit www.ausoug.org.au/2020 for all the latest information.

Hope to see you in Perth and/or Melbourne!

Steven Feuerstein Perth Review - Session 2 - Writing Maintainable Code

Steven's second session remained light on for technical details, which was great for a Friday morning. He dealt with a topic that can be applied to any language and one that every developer faces on a day to day basis - the balancing act between deadlines and writing maintainable code.

As I write this I remember the old adage from Microsoft Secrets, and probably numerous software engineering books - the closer a bug gets to production, the exponentially more expensive it is to fix.

Steven mentioned a few basic mantras and elaborated on each. I think this was well received by the audience and it always makes a difference to hear the argument from authority.
Without elaborating too much, some I managed to note down include:
  • Construct and enforce coding standards - even to the extent of when to utilise SQL. Steven pointed us to a set of standards he follows and has put out for public comment.
  • Personally, I'm in favour of using Hungarian notation, including the datatype in the prefix of a variable, eg: ld_dob DATE;
  • Utilise your development tool's template facilities. Whether it be Toad, SQL Developer, Textpad... they all have facilities to generate, say, a package with all the stubs, exception handlers, comment headers etc you need to avoid the White Canvas of Death, as he calls a blank .sql file. This helps with the enforcement of coding standards.
  • Encapsulate "stuff" to make it more readable
    • define a package for constants, and don't hardcode ANY value. If there is a slim chance in 100 years it could change, it should be defined somewhere. This also goes for formulas such as a full name evaluation from numerous fields.
    • Use pragma exceptions, so we see pkg_exceptions.e_collection_empty instead of -22166 (see the sketch after this list)
    • Use PL/SQL APIs to each table (elaborated later in this post)
  • If you ever say Never or Always, it's a bug. Tom Kyte always says ;-)
  • Boolean functions read best when they return a boolean expression such as
    RETURN (this='TEST');
    as opposed to
    if this = 'TEST' then
    return true;
    else
    return false;
    end if;
  • Functions in general should work like a funnel, converging on a single RETURN point. This way you can avoid ORA-06503 - Function returned without a value. PL/SQL warnings can help detect this. Steven's accompanying words obviously helped describe what a funnel-shaped function looks like!
  • Always (is this a bug?) return a subtype - never a specific datatype, ie: VARCHAR2(100)
  • Visit Refactoring.com
  • Use instrumentation/assertions. (Hint - I'll be elaborating on this later in the year)
  • And the fastest way to get the job done is to take your time. Don't rush. Mistakes are made when people rush, and experts from any industry will tell you that if you take your time, the job will get done quicker with fewer mistakes. The best analogy I heard on this was (I think) from a lecturer at uni -
    "Two carpenters were asked to make the same cabinet. The first carpenter, and old man, spent the first 2 hours thinking - studying the plans, laying out his tools.
    The second carpenter, an apprentice, (put his cowboy hat on) and got cracking straight away - sawing wood, drilling holes, occasionally swearing and looking for his hammer. He looked over at the old man thinking (chuckling), crikey, I'm halfway through this cabinet already and he's still thinking about the job!
    The old man then starts making the cabinet. All the relevant bits of timber were within arms reach as he built the cabinet, every tool never more than a step away. Actual construction of his cabinet took 30 minutes. Meanwhile, four hours in, the apprentice is still putting 'finishing touches' on his cabinet (working out how to put the doors on straight).
    The old man beamed at his work. Every join flush, every screw lined up. When the apprentice finally finished his, he figured he'd do better next time. Practice makes perfect."
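On the pragma exceptions point above, the idea is a central package that gives the unnamed ORA/PLS errors a readable handle. A minimal sketch, with the error number taken from Steven's example:
CREATE OR REPLACE PACKAGE pkg_exceptions AS
  -- Readable handle for ORA-22166 "collection is empty" instead of a bare -22166
  e_collection_empty EXCEPTION;
  PRAGMA EXCEPTION_INIT(e_collection_empty, -22166);
END pkg_exceptions;
/
Callers can then write self-documenting handlers: EXCEPTION WHEN pkg_exceptions.e_collection_empty THEN ...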
As for coding practices, I had a flick through the book Steven was giving away on the day, and it looks well worth the read. Perhaps a book like that should be a standard text in a university curriculum?

Now to my first major criticism for the day: the concept of PL/SQL APIs to every table. I noticed Tim Hall also describes this sort of utopian development environment, where all DML (including SELECT) for a table is done via PL/SQL APIs and any results are returned via associative arrays. As I understand it, utopia is like the speed of light, or absolute zero - theoretically you can never reach it. And I think the reason in this particular case is the balance required between a tuned application and an abstracted/encapsulated one. To his credit, Steven himself said his focus is on PL/SQL performance and integrity; he's never been a SQL tuning fanatic.

Steven also mentioned Tom Kyte's mantra of "if it can be done solely in SQL, then do it in SQL". If there is no need to introduce PL/SQL for a solution, then don't.
E-Business Suite has been utilising table APIs for years, and quite successfully. It creates a well placed barrier and internalises all the business logic, turning say the insertion of a new HR employee into a procedure call whose black box touches a dozen tables to manage the setup. It also allows customisations to be neater and separate from the main install.

Software development theory aside, Steven did demonstrate a very nifty facility in his Toad suite to automatically generate all the typical APIs that may be required for a set of tables. Very tidy indeed.

There are many ideals in this session that need to be reiterated to every developer, and every newbie (I should stipulate that I don't think this is a derogatory term - we should all remember we were all there once) should also cover it thoroughly. 4/5 for this session. Points lost because I think Hungarian notation should be in the major standard ;-), and because further elaboration should be made on the utopian ideal of PL/SQL APIs - although time is always pressing for such a theological debate.

Monday 17 August 2009

Steven Feuerstein Perth Review - Session 1 - Code Therapy for Software Developers

Based on conversations I've had with various people since Friday's session, it seems his day seminar was a success. There were a touch over 100 people in the auditorium, with a fairly wide distribution of positions, skills & industry. Many thanks to AUSOUG and the members responsible for bringing Steven to our remote city (after the Sydney/Melbourne legs) - some of them only had minimal time to see Steven present; and thank you Branka for that wonderful food! (I can be easy to please.) It's just a shame it was a rainy weekend in Perth.

I'd like to take the opportunity to do a small review on each of the four sessions, although most of my comments will probably tie together scatterings of comments throughout the day.

I think his four topics were well selected. While obviously separate presentations in their own right, they were cohesive enough that you could see strong relationships between them throughout the day.
  1. Coding Therapy for Software Developers
  2. Writing Maintainable Code
  3. High Performance PL/SQL
  4. Automated Testing Options for PL/SQL
Even before Chris Muir introduced him, he took the time to demonstrate two old yet simple games that he believes a good developer should be comfortable with, at the same time obtaining good audience participation.
Mastermind he likened to the debugging mind - given various states of being, working out how they fit with the expected solution. It was a very good analogy, and it reminded me how I used to enjoy that game.
The other was The Set Game, something I stumbled across a few years ago, possibly via a comment on his website. It reflected a consistent tenet of his that development is closely tied with pattern matching.

His first session, entitled Coding Therapy for Software Developers, I personally found to be the most enjoyable session of the day. I think we need to see more of this style of presentation at these events - unfortunately they seem to fit better with the evangelist/guru types with years of experience. Not only was it a sidestep away from the usual technical presentation, it helped remind us why we enjoy our jobs and how we can modify/affirm our behaviours and appreciate our career choice.

He recommended a number of books during the presentation, and I noticed quite a few people note them down. Steven did give a good impression of enthusiasm for these books, and I will certainly be on the lookout for them. For those who missed them:

Christopher Alexander - A Pattern Language / The Timeless Way of Building - Steven mentioned that day-to-day problems already have (good) solutions, and if we can harness the concepts of pattern matching we can utilise the better, more efficient solutions more frequently. One of the development perspectives here being templates and modularisation/encapsulation.

Jeff Hawkins - On Intelligence - For many years engineers have been trying to build intelligent machines, and apparently this book explores the topic well.

Daniel Gilbert - Stumbling on Happiness - An interesting look at how our minds work, and why we delude ourselves in thinking we know what we all want to make us happy.

Steve McConnell - Code Complete - A more technical book mentioned later in the day discussing best practices in the art and science of creating software. I remember thoroughly enjoying the art vs science debate of development at university, I'm sure this will be a good read.

In fact listening to his comments that morning, I have a sneaking suspicion he's a fellow evolutionist - and I would quite happily have recommended a wonderful book I'm reading at the moment.

One of my hand written notes referred to a conversation about why birds don't look sick until they are near death. A pattern of evolution and adaptation - but I can't remember how it related to software development! Oops.

He certainly mentioned avoiding caffeine, alluding to memory - based on my understanding, anything learnt while under the influence of caffeine has a better chance of being remembered only when we have the same amount of caffeine in our system. Not really a state of being we should desire 24-7.

Steven is a charismatic speaker, and I'm sure his fairly generic American/Chicago accent has helped his success in the presentation theatre. I recall a conversation with Lynn Munsinger on this concept.

So for me, 5/5 for the first session. Stay tuned for my thoughts on the other three. I'd certainly be happy to see the same session again, and more like it.