Category Archives: ColdFusion

Comparing lists in Oracle

While Oracle is incredibly powerful, there are some things it generally isn’t asked to do. One of those things is comparing lists of values. You can compare recordsets, tables, strings, etc… but trying to compare two separate lists of values can be a challenge. Other languages such as ColdFusion have quite a few list functions baked in, but there are times I don’t want to (or can’t) switch from Oracle to CF for processing.

Below is a package I wrote with two functions I have found I need from time to time. If you find yourself needing them every day, you should probably re-evaluate your data model… but there are times you have to deal with the data you’ve got.

First, declare a datatype you will need.


CREATE TYPE LIST_PACK_TYPE AS TABLE OF VARCHAR2(4000);
/

Compile the package

CREATE OR REPLACE PACKAGE LIST_PACK 
AS
  FUNCTION LIST_CONVERT_FUNC
   ( P_LIST IN VARCHAR2,
     P_DELIM IN VARCHAR2 DEFAULT ',' )
   RETURN LIST_PACK_TYPE
   PIPELINED;
 --
 FUNCTION LIST_SINGLE_MATCH_FUNC
 ( P_LIST1 IN VARCHAR2,
   P_LIST2 IN VARCHAR2,
   P_DELIM IN VARCHAR2 DEFAULT ','
 )
 RETURN NUMBER;
 --
END LIST_PACK;
/

CREATE OR REPLACE PACKAGE BODY LIST_PACK 
AS
 FUNCTION LIST_CONVERT_FUNC
 ( P_LIST IN VARCHAR2,
   P_DELIM IN VARCHAR2 DEFAULT ',' 
 )
 RETURN LIST_PACK_TYPE
 PIPELINED
 AS
   L_STRING LONG := P_LIST ||P_DELIM;
   L_DELIM_INDEX PLS_INTEGER;
   L_INDEX PLS_INTEGER := 1;
 BEGIN
 LOOP
   L_DELIM_INDEX := INSTR(L_STRING, P_DELIM, L_INDEX);
   EXIT WHEN L_DELIM_INDEX = 0;
   PIPE ROW (SUBSTR(L_STRING, L_INDEX, L_DELIM_INDEX - L_INDEX));
   L_INDEX := L_DELIM_INDEX + 1;
 END LOOP;
 RETURN;
 END LIST_CONVERT_FUNC;
 --
 FUNCTION LIST_SINGLE_MATCH_FUNC
 ( P_LIST1 IN VARCHAR2,
   P_LIST2 IN VARCHAR2,
   P_DELIM IN VARCHAR2 DEFAULT ','
 )
 RETURN NUMBER
 IS
   v_MATCHES NUMBER;
   GET_OUT EXCEPTION;
 BEGIN
   IF TRIM(P_LIST1) IS NULL AND TRIM(P_LIST2) IS NULL THEN
     v_MATCHES := 0;
     RAISE GET_OUT;
   END IF;
   SELECT COUNT(*) INTO v_MATCHES 
   FROM ( SELECT COLUMN_VALUE 
          FROM TABLE(LIST_CONVERT_FUNC(P_LIST1,P_DELIM))
          INTERSECT 
          SELECT COLUMN_VALUE 
          FROM TABLE(LIST_CONVERT_FUNC(P_LIST2,P_DELIM))
        );
   IF v_MATCHES >= 1 THEN
     RETURN 1;
   ELSE 
     RETURN 0;
   END IF;
 EXCEPTION
   WHEN GET_OUT THEN
     RETURN v_MATCHES;
 END LIST_SINGLE_MATCH_FUNC;
 --
END LIST_PACK;
/

Examples:

sql>SELECT LIST_PACK.LIST_SINGLE_MATCH_FUNC('1,2,3','3,4,5') FROM DUAL;

LIST_PACK.LIST_SINGLE_MATCH_FUNC('1,2,3','3,4,5')
-------------------------------------------------
 1

sql>SELECT LIST_PACK.LIST_SINGLE_MATCH_FUNC('1,2,3','4,5,6') FROM DUAL;

LIST_PACK.LIST_SINGLE_MATCH_FUNC('1,2,3','4,5,6')
-------------------------------------------------
 0

sql>SELECT LIST_PACK.LIST_SINGLE_MATCH_FUNC('1,2,3','3,4,5',',') FROM DUAL;

LIST_PACK.LIST_SINGLE_MATCH_FUNC('1,2,3','3,4,5',',')
-----------------------------------------------------
 1

sql>SELECT LIST_PACK.LIST_SINGLE_MATCH_FUNC('1,2,3','4,5,6',',') FROM DUAL;

LIST_PACK.LIST_SINGLE_MATCH_FUNC('1,2,3','4,5,6',',')
-----------------------------------------------------
 0

sql>SELECT LIST_PACK.LIST_SINGLE_MATCH_FUNC('1|2|3','3|4|5','|') FROM DUAL;

LIST_PACK.LIST_SINGLE_MATCH_FUNC('1|2|3','3|4|5','|')
-----------------------------------------------------
 1

sql>SELECT LIST_PACK.LIST_SINGLE_MATCH_FUNC('1|2|3','4|5|6','|') FROM DUAL;

LIST_PACK.LIST_SINGLE_MATCH_FUNC('1|2|3','4|5|6','|')
-----------------------------------------------------
 0

sql>SELECT LIST_PACK.LIST_SINGLE_MATCH_FUNC('1|2|3','4|5|6',',') FROM DUAL;

LIST_PACK.LIST_SINGLE_MATCH_FUNC('1|2|3','4|5|6',',')
-----------------------------------------------------
 0

sql>SELECT LIST_PACK.LIST_SINGLE_MATCH_FUNC(NULL,NULL,',') FROM DUAL;

LIST_PACK.LIST_SINGLE_MATCH_FUNC(NULL,NULL,',')
-----------------------------------------------
 0
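The pipelined LIST_CONVERT_FUNC is also handy on its own whenever you need a delimited string as rows, for example to feed an IN clause. A minimal sketch (SOME_TABLE and SOME_COL are made-up names just for illustration):

-- Turn a delimited string into rows; one COLUMN_VALUE per element
SELECT COLUMN_VALUE
FROM   TABLE(LIST_PACK.LIST_CONVERT_FUNC('1,2,3'));

-- Or use it to filter another table (SOME_TABLE/SOME_COL are hypothetical)
SELECT *
FROM   SOME_TABLE
WHERE  SOME_COL IN ( SELECT COLUMN_VALUE
                     FROM   TABLE(LIST_PACK.LIST_CONVERT_FUNC('1,2,3')) );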

CFStoredProc – When Oracle and CF stop talking…

Today I had a coworker run into something that has caused me hours of pulling my hair out. He had a stored procedure in Oracle that he made some minor changes to, and then BAM! CF errors. There were several different errors served up from CF including (paraphrased) “wrong number/types of arguments”, “no statement parsed”, and my favorite, “unsupported datatype”. I am going from memory since I am at home, but you get the flavor of the messages even if my language is off.

I run into this problem about once every 8-10 months. Fortunately, I have hit it enough that I vaguely remember that it isn’t me. It is Oracle/CF not playing nice together.

Here is the scenario:
– Write a procedure that compiles fine. Runs from the command prompt and from CF pages just fine.
– Make some change to the procedure like adding an input variable.
– Update cfstoredproc arguments to match new Oracle definition
– Fail. Hard… unsympathetic fail. (See above error messages)
– Sometimes I find swearing under my breath helps at this point, but while I feel better, it does nothing to clear the error.

What I have gleaned from experience and some online resources is that CF is caching the call to the stored proc. The stored proc changed, but CF isn’t ready to try new things.

So the next question is “Great… now how do I fix it?”

(I should also mention that the call to the stored proc is in a CFC)

Rebooting the application (my CF app… not the server) does nothing. Clearing the query cache does nothing. Changing positions at my desk for new perspectives does nothing. So far, the server admins and I have found only two things that will fix the problem: 1) time (time heals many things), and 2) in CF Admin, disabling and re-enabling the database connection (Datasource … Disable connections checkbox).

Item number 2 above does the trick, but I gotta believe there is a better way. I’m hoping one of the three people who read my blog might have a suggestion. Anybody?

One in a million

How many times have you been creating or migrating data and had everything fail because of one stinkin’ record? Can you deal with it? Of course. There are a lot of ways to trap that. Try/Catch with single inserts at a time. Spend time finding the offending record. Etc. When you do bulk inserts from CF, you have to be especially careful if you don’t want your page to fail halfway through the operation with half the data saved.

Wouldn’t it be nice if you could basically say “Hey Oracle… I am going to insert a bunch of records. A few might not be valid, but please insert the rest and just let me know which ones failed. Please….”

Well, it turns out you can do just that. If you want to Google this, the proper term is DML Error Logging, and it arrived in Oracle 10g Release 2.

Since I prefer to show and then explain…

SQL> CREATE TABLE EXAMPLE_TAB
 2 ( EMP_ID NUMBER,
 3   EMP_NAME VARCHAR2(30),
 4   EMP_POSITION VARCHAR2(30),
 5   CONSTRAINT PK_EXAMPLE_TAB PRIMARY KEY (EMP_ID)
 6 );

Table created.

SQL> INSERT INTO EXAMPLE_TAB (EMP_ID, EMP_NAME, EMP_POSITION)
 2   VALUES (100,'Chris','Standing');

1 row created.

SQL> INSERT INTO EXAMPLE_TAB (EMP_ID, EMP_NAME, EMP_POSITION)
 2   VALUES (110,'Angela','Sitting');

1 row created.

SQL> INSERT INTO EXAMPLE_TAB (EMP_ID, EMP_NAME, EMP_POSITION)
 2   VALUES (120,'Monika','Running');

1 row created.

SQL> INSERT INTO EXAMPLE_TAB (EMP_ID, EMP_NAME, EMP_POSITION)
 2   VALUES (110,'Jon Connor','Crawling');
INSERT INTO EXAMPLE_TAB (EMP_ID, EMP_NAME, EMP_POSITION)
*
ERROR at line 1:
ORA-00001: unique constraint (SNWAENGDRAW.PK_EXAMPLE_TAB) violated
SQL> COMMIT;

Commit complete.

SQL> select count(*) from example_tab;

COUNT(*)
----------
 3

What happened? Three records were created and one failed. No big deal. Running this set of queries, it would be easy to identify the problem and fix it. If you had this in code somewhere, you would need to handle that ugly error message.

Let’s try it again with the error logging.

SQL> CREATE TABLE EXAMPLE_TAB
 2 ( EMP_ID NUMBER,
 3   EMP_NAME VARCHAR2(30),
 4   EMP_POSITION VARCHAR2(30),
 5   CONSTRAINT PK_EXAMPLE_TAB PRIMARY KEY (EMP_ID)
 6 );

Table created.

SQL> EXEC DBMS_ERRLOG.CREATE_ERROR_LOG( 'EXAMPLE_TAB' );

PL/SQL procedure successfully completed.

SQL> INSERT INTO EXAMPLE_TAB (EMP_ID, EMP_NAME, EMP_POSITION)
 2   VALUES (100,'Chris','Standing') LOG ERRORS REJECT LIMIT UNLIMITED;

1 row created.

SQL> INSERT INTO EXAMPLE_TAB (EMP_ID, EMP_NAME, EMP_POSITION)
 2   VALUES (110,'Angela','Sitting') LOG ERRORS REJECT LIMIT UNLIMITED;

1 row created.

SQL> INSERT INTO EXAMPLE_TAB (EMP_ID, EMP_NAME, EMP_POSITION)
 2   VALUES (120,'Monika','Running') LOG ERRORS REJECT LIMIT UNLIMITED;

1 row created.

SQL> INSERT INTO EXAMPLE_TAB (EMP_ID, EMP_NAME, EMP_POSITION)
 2   VALUES (110,'Jon Connor','Crawling') LOG ERRORS REJECT LIMIT UNLIMITED;

0 rows created.

SQL> commit;

Commit complete.

SQL> select count(*) from example_tab;

COUNT(*)
----------
 3

SQL> DESC ERR$_EXAMPLE_TAB;
 Name             Null?             Type
 ---------------------------------------------
 ORA_ERR_NUMBER$                    NUMBER
 ORA_ERR_MESG$                      VARCHAR2(2000)
 ORA_ERR_ROWID$                     ROWID
 ORA_ERR_OPTYP$                     VARCHAR2(2)
 ORA_ERR_TAG$                       VARCHAR2(2000)
 EMP_ID                             VARCHAR2(4000)
 EMP_NAME                           VARCHAR2(4000)
 EMP_POSITION                       VARCHAR2(4000)

SQL> SELECT ORA_ERR_MESG$ FROM ERR$_EXAMPLE_TAB;

ORA_ERR_MESG$
-------------------------------------------------------------------------
ORA-00001: unique constraint (SNWAENGDRAW.PK_EXAMPLE_TAB) violated

Neat. For the sake of brevity, I will talk in bullets about the above code.

  • “EXEC DBMS_ERRLOG.CREATE_ERROR_LOG( ‘EXAMPLE_TAB’ );” is telling Oracle, “Hey… log DML errors for the table EXAMPLE_TAB.”
  • When you tell Oracle to set up logging, it creates a new table named after your original table with “ERR$_” prepended to it.
  • Adding “LOG ERRORS REJECT LIMIT UNLIMITED” is telling Oracle, “Hey… I think this will work, but if it doesn’t, don’t throw an error. Just log the failure in that table I told you about.” Also… if dealing with a huge amount of data, you might want to think about not setting the limit to UNLIMITED.
  • Notice that the insert above that failed returned “0 rows created”.
  • If you start logging, have a plan in place to purge the data from the log table. Log tables are notorious for growing faster than you thought they would. At least monitor the size.
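Since log-table housekeeping comes up in that last bullet, here is a small hedged sketch of tagging a load so its failures are easy to find and purge later (EXAMPLE_TAB_STAGE is a made-up staging table used only for illustration):

-- Tag the batch so its failures are easy to identify in the log table
INSERT INTO EXAMPLE_TAB (EMP_ID, EMP_NAME, EMP_POSITION)
  SELECT EMP_ID, EMP_NAME, EMP_POSITION
  FROM   EXAMPLE_TAB_STAGE                     -- hypothetical staging table
  LOG ERRORS ('nightly load') REJECT LIMIT UNLIMITED;

-- Review the failures for that batch...
SELECT ORA_ERR_MESG$, EMP_ID
FROM   ERR$_EXAMPLE_TAB
WHERE  ORA_ERR_TAG$ = 'nightly load';

-- ...and clear them out once you are done with them
DELETE FROM ERR$_EXAMPLE_TAB WHERE ORA_ERR_TAG$ = 'nightly load';
COMMIT;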

This technique works with UPDATES as well. The constraint type is also flexible. I just chose a PK for my example.

When I first learned of this technique, my world shifted a little. I began going back over the hours and hours I spent trying to track down malformed records in bulk inserts/updates. Give this technique a try and keep it in your library. It isn’t appropriate for everything, but when it is… it can save you A LOT of time.

For a much better explanation of what you can do with the error logging:

http://tkyte.blogspot.com/2005/07/how-cool-is-this.html

Hierarchical Queries in Oracle

The concept of hierarchical queries is one that seems simple, but can quickly become confusing to those who don’t commonly write them. I still have to occasionally walk away from the keyboard when writing these queries and I have been doing it for years.

Everybody has seen the example queries with employees and managers. I am not going to rehash this example again. Instead I am going to provide a practical example looking at roles granted to a user in Oracle.

The same concept can be applied to inheritance and all sorts of other real world data rollup issues.

Privileges in Oracle can be granted on an object to a user or a role. It gets more difficult to traverse when you have roles granted to other roles.

Here is a basic query to start with; it still needs some work, but it gets us most of the way there.

SQL> SET LINESIZE 200;
SQL> SET PAGESIZE 50;
SQL> SELECT SYS_CONNECT_BY_PATH(GRANTED_ROLE,'-->') AS USER_AND_ROLE_STR
 2   FROM   ( SELECT NULL as GRANTEE,
 3                   USERNAME as GRANTED_ROLE
 4            FROM   DBA_USERS
 5            WHERE  USERNAME = 'CHRIS'
 6            UNION
 7            SELECT GRANTEE, GRANTED_ROLE
 8            FROM   DBA_ROLE_PRIVS
 9          )
 10 START WITH GRANTEE IS NULL
 11 CONNECT BY GRANTEE = PRIOR GRANTED_ROLE;

USER_AND_ROLE_STR
-----------------------------------------------------
-->CHRIS
-->CHRIS-->AA_USER
-->CHRIS-->CONNECT
-->CHRIS-->CONTRACT_TRACKING
-->CHRIS-->PROJPLANNING
-->CHRIS-->PROJPLANNING-->PLANNINGUPDT
-->CHRIS-->EDUSER
-->CHRIS-->PLANNINGUPDT
-->CHRIS-->GISUSER
-->CHRIS-->GISUSER-->TELE_USER
-->CHRIS-->MC_CLIENT
-->CHRIS-->AGENDAUSER
-->CHRIS-->VACUPDATE
-->CHRIS-->CAM_S
-->CHRIS-->CAM_U
-->CHRIS-->PLNING
-->CHRIS-->SDE_VIEWER
-->CHRIS-->SDE_VIEWER-->SDE_SYS_PRIVS
-->CHRIS-->ENGDEPT
-->CHRIS-->PLAN_DATA_ENTRY
-->CHRIS-->PLAN_DATA_ENTRY-->PLAN_USER
-->CHRIS-->PLAN_DATA_ENTRY-->PLAN_USER-->PLAN_VIEW
-->CHRIS-->PLAN_POWER_USER
-->CHRIS-->PLAN_POWER_USER-->PLAN_DATA_ENTRY
-->CHRIS-->PLAN_POWER_USER-->PLAN_DATA_ENTRY-->PLAN_USER
-->CHRIS-->PLAN_POWER_USER-->PLAN_DATA_ENTRY-->PLAN_USER-->PLAN_VIEW
-->CHRIS-->PR_USER
-->CHRIS-->TRACKING_USER
-->CHRIS-->PORTAL_ADMIN
-->CHRIS-->STRATEGIC_PLAN_ADMIN

30 rows selected.

Neat. The query above shows the user, roles granted to the user, and then roles granted to those roles.

What’s happening in the query…
1) We are looking for our user in the DBA_USERS object. And then we are getting a list of all the roles, and roles granted to roles from DBA_ROLE_PRIVS. The union of the two queries gives us our base set of data that we will then use.
2) The “START WITH” and “CONNECT BY” portions of the query are telling Oracle to look at the data as a tree using a parent/child relationship.
3) “SYS_CONNECT_BY_PATH” returns the path of the hierarchy set up with the “START WITH” and “CONNECT BY”. The first parameter is the column you want to work with; the second parameter is your delimiter. Note: Your delimiter cannot be something that can be found in the data of your column. Do that and you get a nasty error (ORA-30004). Example: If your data has commas, don’t use a comma as your delimiter.

Next, let’s clean things up a bit to make the data slightly more “pretty”. Well… as pretty as this type of data can be.

SQL> SET LINESIZE 200;
SQL> SET PAGESIZE 50;
SQL> SELECT LTRIM(SYS_CONNECT_BY_PATH(GRANTED_ROLE,'-->'),'-->') AS USER_AND_ROLE_STR
 2   FROM   ( SELECT NULL as GRANTEE,
 3                   USERNAME as GRANTED_ROLE
 4            FROM   DBA_USERS
 5            WHERE  USERNAME = 'CHRIS'
 6            UNION
 7            SELECT GRANTEE, GRANTED_ROLE
 8            FROM   DBA_ROLE_PRIVS
 9          )
 10  where  CONNECT_BY_ISLEAF = 1
 11  START WITH GRANTEE IS NULL
 12  CONNECT BY GRANTEE = PRIOR GRANTED_ROLE;

USER_AND_ROLE_STR
-----------------------------------------------------------
CHRIS-->AA_USER
CHRIS-->CONNECT
CHRIS-->CONTRACT_TRACKING
CHRIS-->PROJPLANNING-->PLANNINGUPDT
CHRIS-->EDUSER
CHRIS-->PLANNINGUPDT
CHRIS-->GISUSER-->TELE_USER
CHRIS-->MC_CLIENT
CHRIS-->AGENDAUSER
CHRIS-->VACUPDATE
CHRIS-->CAM_S
CHRIS-->CAM_U
CHRIS-->PLNING
CHRIS-->SDE_VIEWER-->SDE_SYS_PRIVS
CHRIS-->ENGDEPT
CHRIS-->PLAN_DATA_ENTRY-->PLAN_USER-->PLAN_VIEW
CHRIS-->PLAN_POWER_USER-->PLAN_DATA_ENTRY-->PLAN_USER-->PLAN_VIEW
CHRIS-->PR_USER
CHRIS-->TRACKING_USER
CHRIS-->PORTAL_ADMIN
CHRIS-->STRATEGIC_PLAN_ADMIN

21 rows selected.

So what’s different?
1) We have added an “LTRIM()” function that strips off our delimiter from the front of our return. You don’t really need that leading delimiter.
2) We are restricting our results to only the leaves instead of both the branch and the leaf. Think of the data as a tree with the trunk being the user you searched for. If a role is granted to the user and nothing is granted to that role, it is a leaf. If you have a role (1) granted to the “trunk”, and then another role (2) granted to that… role (1) is a branch and role (2) is a leaf. We still want to know about the branch (role (1)), and it still shows up in the leaf’s path, but having a separate row returned for it just muddies the water. CONNECT_BY_ISLEAF is a pseudocolumn that returns 1 for leaves and 0 for branches. In the query above, we reduce our results using this pseudocolumn.
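As an aside that is not in the queries above, the LEVEL pseudocolumn works off the same START WITH/CONNECT BY setup if you would rather see an indented tree than a path string. A minimal sketch:

SELECT LPAD(' ', 2 * (LEVEL - 1)) || GRANTED_ROLE AS ROLE_TREE
FROM   ( SELECT NULL AS GRANTEE,
                USERNAME AS GRANTED_ROLE
         FROM   DBA_USERS
         WHERE  USERNAME = 'CHRIS'
         UNION
         SELECT GRANTEE, GRANTED_ROLE
         FROM   DBA_ROLE_PRIVS
       )
START WITH GRANTEE IS NULL
CONNECT BY GRANTEE = PRIOR GRANTED_ROLE;

Each row comes back indented two spaces per level, which reads a little more like a tree when you are eyeballing the results.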

Where does the business logic go?

This is a debate I have both listened to and been a part of for years. When you are building an app of any respectable size, where does the business logic go? Specifically, for the sake of this discussion… In ColdFusion or Oracle?

To be honest about my opinion I think it is only fair to share where I “fit” into things. I would consider myself extremely proficient in data design, database programming, and query writing. Especially in the world of Oracle. I consider myself proficient, if not a bit rusty, in ColdFusion. The way our team is set up, I am the “backend” guy and therefore I live in the database. UI? Hacker. I follow what is going on, and can make modifications easily, but I am not the guy you want coding your front end.

I have been fortunate to be with my current company for quite a few years which means I have seen the full range of philosophies on this subject.

When I was a one man team, or working with a single other developer… if it could go in Oracle, it went in Oracle. Data updates all went through Packages and data retrieval, most of the time, did so as well. This worked well for us. I would be out front with the customers gathering requirements and designing the architecture for the app. Once I reached around an 80% completion range, the CF and UI guy (same guy) was brought in to bring it all together and get the app to the customer.

In the case above, the Oracle packages ended up serving similar needs as CF components. If you wanted a “project” object, you ended up with output parameters representing the “project” as well as recordsets representing things like invoices, checks, people/roles, etc. CF said “give me a project and everything about it”, Oracle did the heavy lifting. I liked it.

Several years back, I began working with a larger team that included quite a few more proficient CF and UI folks. Given the high number of applications to produce/maintain, our development strategy changed. All of the business logic got pulled out of Oracle and into CFCs. My role was still data architect, but business logic was rarely my realm and I also got labeled “the query guy”. We have a lot of apps and a lot of data to manage, so since I am the main Oracle guy, this works well for our team. We have more CF people, so in this case it makes sense to have the business logic where it is accessible to the most people to understand and maintain.

So my question is this… with all things being equal, where “should” it go? Or is it always a function of the skills/staff available on the project?

DECODE, CASE… Does it Matter?

If I had a nickel for every time I get asked that question… well, I wouldn’t have much money; but judging by the applications I have worked on, quite a few programmers are 50/50 on the topic. Their code reflects not knowing which to use.

When I discovered CASE many moons ago, I never looked back. It was infinitely more readable than DECODE and handled complex evaluations with simpler code. All things being close to equal, readability/simplicity is important to me. While I am working on something, I can remember day to day what I was thinking. When an app is in maintenance mode, deciphering nested DECODEs is a pain.

Why is DECODE so rampant in queries?
There are a couple of contributing factors at play here. First… if you want a simple if/then/else, DECODE is less typing. Secondly… before Oracle 8.1, DECODE was the only game in town for this type of logic in a query, outside of writing your own function. If you have been around for a while, you probably started with DECODE and it comes as naturally as using CFQUERY.

Before we get to the differences between DECODE and CASE, for those not familiar with one or the other, let’s see an equivalent example of both.

/*	WHAT IS IN THE TABLE?	*/

SQL> SELECT * FROM EMP;

    EMP_ID EMP_NAME   SUPERVISOR GENDER
---------- ---------- ---------- -
        10 Larry      Chris      M
        15 Tim        Chris      M
        20 Monika     Eric       F
        25 Jon Connor Angela     M

SQL> SELECT     DECODE(
  2                     SUPERVISOR,'Chris','Works for Chris',
  3                     'Does not work for Chris'
  4             ) AS DECODE_RESULT,
  5             (       CASE    WHEN SUPERVISOR = 'Chris' THEN 'Works for Chris'
  6                             ELSE 'Does not work for Chris'
  7                     END
  8             ) AS CASE_RESULT
  9  from       EMP;

DECODE_RESULT           CASE_RESULT
----------------------- -----------------------
Works for Chris         Works for Chris
Works for Chris         Works for Chris
Does not work for Chris Does not work for Chris
Does not work for Chris Does not work for Chris

Both are easy to read and both return the same results.

Let’s try nesting a bit and see how it looks.

SQL> SELECT     DECODE(
  2                     SUPERVISOR,'Chris','Works for Chris',
  3                     DECODE(
  4                             SUPERVISOR,'Eric','Works for Eric',
  5                             'No idea'
  6                     )
  7             ) as DECODE_RESULT1,
  8             DECODE(
  9                     SUPERVISOR,'Chris','Works for Chris','Eric','Works for Eric','No Idea'
 10             ) as DECODE_RESULT2,
 11             (       CASE    WHEN SUPERVISOR = 'Chris' THEN 'Works for Chris'
 12                             WHEN SUPERVISOR = 'Eric' THEN 'Works for Eric'
 13                             ELSE 'No idea'
 14                     END
 15             ) AS CASE_RESULT
 16  FROM       EMP;

DECODE_RESULT1  DECODE_RESULT2  CASE_RESULT
--------------- --------------- ---------------
Works for Chris Works for Chris Works for Chris
Works for Chris Works for Chris Works for Chris
Works for Eric  Works for Eric  Works for Eric
No idea         No Idea         No idea

In the above queries, there are two different styles for the DECODE. DECODE_RESULT2 is probably the easier of the two to read. I still think the CASE is the more readable.

If you aren’t careful, you can end up with DECODE(…,DECODE(…,(DECODE(…,)))). And if any of those DECODEs have multiple tests, then good luck debugging. (Good code formatting will help, but it can still get hairy.)

So what are some of the differences?

1) How they handle nulls is an important distinction.

SQL> SELECT     DECODE(
  2                     NULL,NULL,'I AM NULL',
  3                     'I AM NOT NULL'
  4             ) AS DECODE_RESULT
  5  FROM       DUAL;

DECODE_RE
---------
I AM NULL

SQL> SELECT     (       CASE    NULL
  2                             WHEN NULL THEN 'I AM NULL'
  3                             ELSE 'I AM NOT NULL'
  4                     END
  5             ) AS CASE_RESULT1
  6  FROM       DUAL;

CASE_RESULT1
-------------
I AM NOT NULL

SQL> SELECT     (       CASE    WHEN NULL IS NULL THEN 'I AM NULL'
  2                             ELSE 'I AM NOT NULL'
  3                     END
  4             ) AS CASE_RESULT1
  5  FROM       DUAL;

CASE_RESULT1
-------------
I AM NULL

If you aren’t careful, CASE can throw you off when testing for nulls. The simple CASE form compares with “=”, and NULL = NULL never evaluates to true, so use the searched form with IS NULL when nulls are in play.

2) CASE can work with logical operators other than equals. (You can also use ranges in CASE statements)

SQL> SELECT     DECODE(EMP_ID,<100,'Early Hire','New Hire') as DECODE_RESULT
  2  FROM       EMP;

SELECT     DECODE(EMP_ID,<100,'Early Hire','New Hire') as DECODE_RESULT
                         *
ERROR at line 1:
ORA-00936: missing expression

SQL> SELECT     (       CASE    WHEN EMP_ID<100 THEN 'Early Hire'
  2                             ELSE 'New Hire'
  3                     END
  4             ) AS CASE_RESULT
  5  FROM       EMP;

CASE_RESUL
----------
Early Hire
Early Hire
Early Hire
Early Hire
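Point 2 mentions ranges but only shows a less-than test, so here is a quick hedged sketch of a range with BETWEEN (the “wave” labels are made up just for illustration):

SELECT EMP_NAME,
       ( CASE WHEN EMP_ID BETWEEN 10 AND 19 THEN 'First wave'
              WHEN EMP_ID BETWEEN 20 AND 29 THEN 'Second wave'
              ELSE 'Later hire'
         END
       ) AS CASE_RESULT
FROM   EMP;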

3) CASE can use subqueries.

SQL> SELECT     (       CASE    WHEN SUPERVISOR = 'Chris' THEN 'Good supervisor'
  2                             WHEN EMP_ID NOT IN
  3                                  (    SELECT EMP_ID
  4                                       FROM   EMP
  5                                       WHERE  SUPERVISOR = 'Chris'
  6                                  ) THEN 'Questionable leadership'
  7                             ELSE    NULL
  8                     END
  9             ) AS CASE_RESULT
 10  FROM       EMP;

CASE_RESULT
-----------------------
Good supervisor
Good supervisor
Questionable leadership
Questionable leadership

4) CASE can be used as a PL/SQL construct. I won’t demonstrate that here, but it can be.
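For the curious, here is a minimal sketch of CASE as a PL/SQL statement anyway (not from the original post, just an illustration; assumes SERVEROUTPUT is on):

BEGIN
  CASE
    WHEN TO_CHAR(SYSDATE, 'DY', 'NLS_DATE_LANGUAGE=ENGLISH') IN ('SAT','SUN') THEN
      DBMS_OUTPUT.PUT_LINE('Weekend');
    ELSE
      DBMS_OUTPUT.PUT_LINE('Weekday');
  END CASE;
END;
/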

5) CASE can be used as a parameter to a procedure.

SQL> CREATE PROCEDURE STOOPID_PROC
  2  (  P_VAL IN VARCHAR2       )
  3  AS
  4  BEGIN
  5     DBMS_OUTPUT.PUT_LINE(P_VAL);
  6  END STOOPID_PROC;
  7  /

Procedure created.

SQL> EXEC STOOPID_PROC(DECODE(NULL,'NULL','NOT NULL'));

BEGIN STOOPID_PROC(DECODE(NULL,'NULL','NOT NULL')); END;

                   *
ERROR at line 1:
ORA-06550: line 1, column 20:
PLS-00204: function or pseudo-column 'DECODE' may be used inside a SQL
statement only
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored

SQL> EXEC STOOPID_PROC((CASE WHEN NULL IS NULL THEN 'NULL' ELSE 'NOT NULL' END));
NULL

PL/SQL procedure successfully completed.

6) CASE is ANSI. DECODE is not.

7) Speed. There is a difference, but I never seem to find a good answer on which way this one goes. Mostly, it appears CASE is faster, but I have never done a full on test to see. At least not yet.

*Note: You don’t need all the parentheses around the CASE statements. I like them.

Coalesce as an alternative to NVL

While this seems database specific, I think it hits on a query technique I see all the time in ColdFusion code.

How many times have you written an NVL statement in your (Oracle) query like this:
NVL(NVL(NVL(COL1,COL2),COL3),COL4)

Basically, in English this says… return COL1; if COL1 is null then COL2; if COL2 is also null then COL3; if COL3 is also null then COL4.

It isn’t very readable, and depending on the number of columns you are working with, the nesting can get “challenging” to debug/edit. Also… NVL is Oracle specific, whereas COALESCE is ANSI. This post, however, will be looking at how Oracle handles things.

There are some things you should be aware of when using COALESCE, but first let’s see it in action.

SQL>  CREATE TABLE CHRIS_IS_NEAT
2     (  COL1    VARCHAR2(10),
3        COL2    VARCHAR2(10),
4        COL3    VARCHAR2(10),
5        COL4    VARCHAR2(10)
6  );
Table created.

SQL> INSERT INTO CHRIS_IS_NEAT VALUES (NULL,NULL,'BLUE','YELLOW');
1 row created.
SQL> INSERT INTO CHRIS_IS_NEAT VALUES ('RED','BLUE','BROWN','YELLOW');
1 row created.
SQL> INSERT INTO CHRIS_IS_NEAT VALUES (NULL,NULL,NULL,'BLACK');
1 row created.
SQL> COMMIT;
Commit complete.

SQL>
SQL> SELECT     NVL(NVL(NVL(COL1,COL2),COL3),COL4),
2               COALESCE(COL1,COL2,COL3,COL4)
3    FROM       CHRIS_IS_NEAT;

NVL(NVL(NV COALESCE(C
---------- ----------
BLUE       BLUE
RED        RED
BLACK      BLACK

SQL> DROP TABLE CHRIS_IS_NEAT;
Table dropped.
SQL> PURGE RECYCLEBIN;
Recyclebin purged.

The COALESCE function returns the first non-NULL value in the list. I don’t know about you, but that is a lot easier to follow than the NVL syntax.

Important things to remember about COALESCE

  • Datatypes… Try to make them the same. If you do that, skip the rest of the mumbo-jumbo on this bullet. NVL will do an implicit conversion to the datatype of the first parameter if it can. COALESCE will not: if you don’t have consistent datatypes, you will get an “inconsistent datatypes” error (ORA-00932). If the columns are different types, make sure you convert them on the way into the function. Caveat… different numeric types will implicitly be converted to the argument with the highest numeric precedence, and if you have a non-numeric datatype that can be implicitly converted to a number, Oracle will try.
  • NVL evaluates both of its arguments. COALESCE stops evaluating as soon as it finds a non-NULL value; Oracle calls this “short-circuit evaluation*” in the docs. This can give you a small speed boost over NVL as well, though most of the time it will probably be too small to notice. (There is great disagreement on this point, but limited testing shows COALESCE “can” be faster depending on how it is being used.) There is a quick sketch of the behavior after this list.
  • If all columns passed in are null, COALESCE returns null.
  • My example above uses column names, but they could also be functions or constant values.

*Other examples of short-circuit functionality are DECODE and CASE.
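If you want to see the short-circuit behavior for yourself, here is a small hedged sketch. NOISY_FALLBACK is a throwaway helper written only for this demonstration:

CREATE OR REPLACE FUNCTION NOISY_FALLBACK RETURN VARCHAR2
AS
BEGIN
  DBMS_OUTPUT.PUT_LINE('fallback evaluated');
  RETURN 'FALLBACK';
END NOISY_FALLBACK;
/

SET SERVEROUTPUT ON;

-- DUMMY is 'X' (not null), so COALESCE should stop at the first argument
-- and the helper should never run.
SELECT COALESCE(DUMMY, NOISY_FALLBACK()) FROM DUAL;

-- NVL is documented to evaluate both arguments, so expect
-- 'fallback evaluated' to print after this one.
SELECT NVL(DUMMY, NOISY_FALLBACK()) FROM DUAL;

DROP FUNCTION NOISY_FALLBACK;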

Aggregating your data in the query

How the data is stored rarely matches how we want it displayed. How often you need to aggregate data will normally hinge on the degree of normalization your data architect has chosen.

As with quite a few of the items I have written about or plan to write about, there are multiple ways to accomplish what is being described. Chances are that if you are a ColdFusion developer, you already perform this type of aggregation in the middle tier. That way is no less valid, especially if you are dealing with a shared query and messing with it (technical term there) might cause other developers or applications problems. If given the opportunity though, I always try to let the database do what the database does well. Deal with the data.

Oracle 11g introduced some new functionality that easily allows you to aggregate data directly in your query. The function is called listagg(). Prior to Oracle 11g, to accomplish the same goal you needed to create custom datatypes and write your own functions. While the solution was genius, it was often more complex than people wanted to take on. If you are on an older version of Oracle, I recommend checking out the AskTom (http://asktom.oracle.com/) site and search for “stragg”. I could write a post about the function, but I don’t think I could add more than Tom’s site provides.

I am going to demonstrate the listagg() function in SQL*Plus; obviously the query demonstrated can go in your cfquery tag.

/*	CREATE SOME TEMP TABLES	*/
SQL> CREATE TABLE CCH_PERSON
  2  (  PERSON_PK               NUMBER,
  3     PERSON_NAME             VARCHAR2(30),
  4     IS_ACTIVE               NUMBER(1) DEFAULT 1 NOT NULL,
  5     PROJECT_FK              NUMBER,
  6     CONSTRAINT PK_CCH_PERSON PRIMARY KEY(PERSON_PK)
  7  );

Table created.

SQL>
SQL> CREATE TABLE CCH_PROJECTS
  2  (  PROJECT_PK      NUMBER,
  3     PROJECT_NAME    VARCHAR2(30),
  4     CONSTRAINT PK_CCH_PROJECTS PRIMARY KEY(PROJECT_PK)
  5  );

Table created.

SQL>
SQL> CREATE TABLE CCH_PERSON_PROJECTS_J
  2  (  PERSON_FK       NUMBER  NOT NULL,
  3     PROJECT_FK      NUMBER  NOT NULL
  4  );

Table created.

SQL> set feedback off;
SQL>
SQL> /* CREATE SOME PEOPLE      */
SQL> INSERT INTO CCH_PERSON (PERSON_PK, PERSON_NAME     ) VALUES ('10','Chris');
SQL> INSERT INTO CCH_PERSON (PERSON_PK, PERSON_NAME     ) VALUES ('11','Anthony');
SQL> INSERT INTO CCH_PERSON (PERSON_PK, PERSON_NAME     ) VALUES ('12','Tim');
SQL> INSERT INTO CCH_PERSON (PERSON_PK, PERSON_NAME     ) VALUES ('13','John');
SQL> INSERT INTO CCH_PERSON (PERSON_PK, PERSON_NAME     ) VALUES ('14','Fred');
SQL> INSERT INTO CCH_PERSON (PERSON_PK, PERSON_NAME     ) VALUES ('15','Sam');
SQL> commit;
SQL>
SQL> /* CREATE SOME PROJECTS    */
SQL> INSERT INTO CCH_PROJECTS(PROJECT_PK, PROJECT_NAME) VALUES ('1','Big Blog');
SQL> INSERT INTO CCH_PROJECTS(PROJECT_PK, PROJECT_NAME) VALUES ('2','Project Tracking');
SQL> INSERT INTO CCH_PROJECTS(PROJECT_PK, PROJECT_NAME) VALUES ('3','Property DB');
SQL> INSERT INTO CCH_PROJECTS(PROJECT_PK, PROJECT_NAME) VALUES ('4','Employee Relations');
SQL> INSERT INTO CCH_PROJECTS(PROJECT_PK, PROJECT_NAME) VALUES ('5','Billing System');
SQL> INSERT INTO CCH_PROJECTS(PROJECT_PK, PROJECT_NAME) VALUES ('6','World Domination');
SQL> INSERT INTO CCH_PROJECTS(PROJECT_PK, PROJECT_NAME) VALUES ('7','Pet Project');
SQL> commit;
SQL>
SQL> /* MAKE PEOPLE WORK ON PROJECTS    */
SQL> INSERT INTO CCH_PERSON_PROJECTS_J (PERSON_FK, PROJECT_FK) VALUES ('10','1');
SQL> INSERT INTO CCH_PERSON_PROJECTS_J (PERSON_FK, PROJECT_FK) VALUES ('10','2');
SQL> INSERT INTO CCH_PERSON_PROJECTS_J (PERSON_FK, PROJECT_FK) VALUES ('10','3');
SQL> INSERT INTO CCH_PERSON_PROJECTS_J (PERSON_FK, PROJECT_FK) VALUES ('11','4');
SQL> INSERT INTO CCH_PERSON_PROJECTS_J (PERSON_FK, PROJECT_FK) VALUES ('11','5');
SQL> INSERT INTO CCH_PERSON_PROJECTS_J (PERSON_FK, PROJECT_FK) VALUES ('12','6');
SQL> INSERT INTO CCH_PERSON_PROJECTS_J (PERSON_FK, PROJECT_FK) VALUES ('13','7');
SQL> INSERT INTO CCH_PERSON_PROJECTS_J (PERSON_FK, PROJECT_FK) VALUES ('15','5');
SQL> INSERT INTO CCH_PERSON_PROJECTS_J (PERSON_FK, PROJECT_FK) VALUES ('15','1');
SQL> INSERT INTO CCH_PERSON_PROJECTS_J (PERSON_FK, PROJECT_FK) VALUES ('15','6');
SQL> COMMIT;
SQL> set feedback on;

--	RUN A QUERY TO FIND OUT WHAT PEOPLE ARE WORKING ON
SQL> SELECT     A.PERSON_NAME,
  2             C.PROJECT_NAME
  3  FROM       CCH_PERSON A,
  4             CCH_PERSON_PROJECTS_J B,
  5             CCH_PROJECTS C
  6  WHERE      A.PERSON_PK     = B.PERSON_FK(+)
  7     AND     B.PROJECT_FK    = C.PROJECT_PK(+)
  8  ORDER BY A.PERSON_NAME;

PERSON_NAM PROJECT_NAME
---------- ------------------------------
Anthony    Employee Relations
Anthony    Billing System
Chris      Big Blog
Chris      Project Tracking
Chris      Property DB
Fred
John       Pet Project
Sam        World Domination
Sam        Big Blog
Sam        Billing System
Tim        World Domination

11 rows selected.

SQL> -- ADD SOME COLUMN FORMATTING SO THE DISPLAY IS NICE
SQL> column PERSON_NAME format a10
SQL> column PROJECTS format a46
SQL>
SQL> -- AGGREGATE THE DATA
SQL> SELECT     A.PERSON_NAME,
  2             LISTAGG(C.PROJECT_NAME,', ') WITHIN GROUP (ORDER BY A.PERSON_NAME) AS PROJECTS
  3  FROM       CCH_PERSON A,
  4             CCH_PERSON_PROJECTS_J B,
  5             CCH_PROJECTS C
  6  WHERE      A.PERSON_PK = B.PERSON_FK(+)
  7     AND     B.PROJECT_FK = C.PROJECT_PK(+)
  8  GROUP BY A.PERSON_NAME
  9  ORDER BY A.PERSON_NAME;

PERSON_NAM PROJECTS
---------- ----------------------------------------------
Anthony    Billing System, Employee Relations
Chris      Big Blog, Project Tracking, Property DB
Fred
John       Pet Project
Sam        Big Blog, Billing System, World Domination
Tim        World Domination

6 rows selected.

SQL> -- CLEAN UP AFTER YOURSELF
SQL> DROP TABLE CCH_PERSON;
Table dropped.

SQL> DROP TABLE CCH_PERSON_PROJECTS_J;
Table dropped.

SQL> DROP TABLE CCH_PROJECTS;
Table dropped.

SQL> PURGE RECYCLEBIN;
Recyclebin purged.

Ok… So what is happening here? You are running a query to collect two columns, Person and Project. Person is the value that you group on and Project is the data you want to aggregate. ListAgg() has two input parameters: the column to aggregate and the delimiter. The ‘WITHIN GROUP (ORDER BY …)’ clause then tells Oracle how to order the values within each group; the GROUP BY determines the groups themselves.

The above example is using Listagg() as a group-set aggregate. It can also be used as an analytic function, but that would be a considerably longer post. Maybe down the road.

Listagg() is an easy but powerful function to implement… As soon as you use it once, you will want to implement it everywhere.
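Since the analytic flavor came up above, here is a small hedged taste of it using the same CCH_ tables (OVER with a PARTITION BY instead of a GROUP BY):

-- Each row keeps its own person/project pair, but also carries the full
-- aggregated project list for that person.
SELECT     A.PERSON_NAME,
           C.PROJECT_NAME,
           LISTAGG(C.PROJECT_NAME, ', ')
             WITHIN GROUP (ORDER BY C.PROJECT_NAME)
             OVER (PARTITION BY A.PERSON_NAME) AS ALL_PROJECTS
FROM       CCH_PERSON A,
           CCH_PERSON_PROJECTS_J B,
           CCH_PROJECTS C
WHERE      A.PERSON_PK  = B.PERSON_FK(+)
   AND     B.PROJECT_FK = C.PROJECT_PK(+)
ORDER BY   A.PERSON_NAME;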

Insert or update? Upsert!

For years, when saving data to the database I have always run the logic “Does the record exist? If so, update it. If not, insert a new record.”

Difficult? No. Do I get tired of writing it? Yeah.

While there are many ways to deal with this, below is the method I like to use.

I have the same example below in both SQL Plus and ColdFusion.

SQL Plus Example

CREATE TABLE UPSERT_TAB
(	UPSERT_TAB_PK	NUMBER(1),
	UPSERT_VALUE 	VARCHAR2(30)
);

GRANT SELECT, INSERT, UPDATE ON UPSERT_TAB TO SNWACF;  -- the CF example below inserts and updates, so SELECT alone is not enough

INSERT INTO UPSERT_TAB VALUES (1,'Dog');
INSERT INTO UPSERT_TAB VALUES (2,'Cat');
COMMIT;

/*	TRY TO UPDATE A RECORD THAT DOES NOT EXIST	*/
begin
	UPDATE	UPSERT_TAB
	SET 	UPSERT_TAB_PK = 3,
	        UPSERT_VALUE = 'Big Foot'
	WHERE 	UPSERT_TAB_PK = 3;
	--
	IF ( sql%rowcount = 0 )
	    THEN
	    INSERT INTO UPSERT_TAB
	        VALUES (3, 'Big Foot');
	END IF;
end;
/

SQL> begin
  2     UPDATE  UPSERT_TAB
  3     SET     UPSERT_TAB_PK = 3,
  4             UPSERT_VALUE = 'Big Foot'
  5     WHERE   UPSERT_TAB_PK = 3;
  6     --
  7     IF ( sql%rowcount = 0 )
  8         THEN
  9         INSERT INTO UPSERT_TAB
 10             VALUES (3, 'Big Foot');
 11     END IF;
 12  end;
 13  /

PL/SQL procedure successfully completed.

SQL>
SQL> SELECT * FROM UPSERT_TAB;

UPSERT_TAB_PK UPSERT_VALUE
------------- ------------------------------
            1 Dog
            2 Cat
            3 Big Foot

/*	Now try an update	*/

SQL> begin
  2     UPDATE  UPSERT_TAB
  3     SET     UPSERT_TAB_PK = 2,
  4             UPSERT_VALUE = 'Cats are weird'
  5     WHERE   UPSERT_TAB_PK = 2;
  6     --
  7     IF ( sql%rowcount = 0 )
  8         THEN
  9         INSERT INTO UPSERT_TAB
 10             VALUES (2, 'Cats are weird');
 11     END IF;
 12  end;
 13  /

PL/SQL procedure successfully completed.

SQL> SELECT * FROM UPSERT_TAB;

UPSERT_TAB_PK UPSERT_VALUE
------------- ------------------------------
            1 Dog
            2 Cats are weird
            3 Big Foot

/*	rollback the data and try it in CF	*/
DELETE FROM UPSERT_TAB WHERE UPSERT_TAB_PK = 3;
UPDATE UPSERT_TAB SET UPSERT_VALUE = 'Cat' WHERE UPSERT_TAB_PK = 2;
COMMIT;

Now try the same thing in ColdFusion

<b>Fresh table of data before we start messin' with it.</b><BR>

<cfquery name="getFreshTable" datasource="#application.strDatasource#" >
	SELECT * FROM UPSERT_TAB
</cfquery>

<cfdump var="#getFreshTable#" metainfo="false" label="getFreshTable">
<!----------------------------------------------------------------------->

<br><br>
<b>Run the first upsert and then look at the table again.</b><br>

<cfquery name="Upsert1" datasource="#application.strDatasource#">
	begin
		UPDATE  UPSERT_TAB
		SET     UPSERT_TAB_PK	= <cfqueryparam cfsqltype="cf_sql_numeric" value="3" >,
				UPSERT_VALUE 	= <Cfqueryparam cfsqltype="cf_sql_varchar" value="Big Foot">
		WHERE   UPSERT_TAB_PK 	= <cfqueryparam cfsqltype="cf_sql_numeric" value="3" >;
		--
		IF ( sql%rowcount = 0 )
		THEN
			INSERT INTO UPSERT_TAB
				VALUES (
					<cfqueryparam cfsqltype="cf_sql_numeric" value="3" >, 
					<Cfqueryparam cfsqltype="cf_sql_varchar" value="Big Foot">
				);
		END IF;
	end;
</cfquery>

<cfquery name="getUpsertResult1" datasource="#application.strDatasource#" >
	SELECT * FROM UPSERT_TAB
</cfquery>

<cfdump var="#getUpsertResult1#" metainfo="false" label="getUpsertResult1">
<!-------------------------------------------------------------------->

<br><Br>
<b>Run the second upsert and then look at the table yet again.</b>

<cfquery name="upsert2" datasource="#application.strDatasource#">
	begin
		UPDATE  UPSERT_TAB
		SET     UPSERT_TAB_PK = <cfqueryparam cfsqltype="cf_sql_numeric" value="2" >,
				UPSERT_VALUE = <Cfqueryparam cfsqltype="cf_sql_varchar" value="Cats are weird">
		WHERE   UPSERT_TAB_PK = <cfqueryparam cfsqltype="cf_sql_numeric" value="2" >;
		--
		IF ( sql%rowcount = 0 )
		THEN
			INSERT INTO UPSERT_TAB
				VALUES (
					<cfqueryparam cfsqltype="cf_sql_numeric" value="2" >, 
					<Cfqueryparam cfsqltype="cf_sql_varchar" value="Cats are weird">
				);
		END IF;
	end;
</cfquery>

<cfquery name="getUpsertResult2" datasource="#application.strDatasource#" >
	SELECT * FROM UPSERT_TAB
</cfquery>

<cfdump var="#getUpsertResult2#" metainfo="false" label="getUpsertResult2">

Screenshot: upsert
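For what it is worth, Oracle’s MERGE statement is another way to get the same insert-or-update behavior in a single statement. This is not how the example above works, just an alternative sketch against the same UPSERT_TAB:

MERGE INTO UPSERT_TAB T
USING ( SELECT 3 AS UPSERT_TAB_PK, 'Big Foot' AS UPSERT_VALUE FROM DUAL ) S
ON    ( T.UPSERT_TAB_PK = S.UPSERT_TAB_PK )
WHEN MATCHED THEN
  UPDATE SET T.UPSERT_VALUE = S.UPSERT_VALUE
WHEN NOT MATCHED THEN
  INSERT (UPSERT_TAB_PK, UPSERT_VALUE)
  VALUES (S.UPSERT_TAB_PK, S.UPSERT_VALUE);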

Pivot your query

*Note: Syntax below is for Oracle.

Similar functionality can easily be implemented in ColdFusion, but I lean towards putting this type of thing in the originating query instead of massaging the data after retrieval from the database.  Massaging data can be fun, but like many things in life, not always a good idea.

Every time I hear the word “pivot”, I can’t help but think of Ross and Joey trying to carry a sofa. Get the reference? No? Ok… moving on…

In the past, when you wanted to “pivot” rows of data into columns, you could use the “max(decode(…))” method to accomplish the task (If anybody wants me to explain that, hit me up in the comments).  In Oracle 11g, the “pivot” and “unpivot” keywords have been added that will allow pivoting your data from rows to columns as well as from columns to rows.  I will demonstrate the “pivot” method.  This is a technique that should be in the library for anybody wishing to analyze data.

First, let’s set up some test data to work with.

Note: I wouldn’t design tables this way, but for the sake of illustration I will keep it simple.

CREATE TABLE WORKER
( WORKER_ID           NUMBER,
  WORKER_NAME         VARCHAR2(30),
  WORKER_DEPT_NO      NUMBER,
  WORKER_JOB          VARCHAR2(30),
  WORKER_SALARY       NUMBER(6),
  WORKER_FUNCTION     VARCHAR2(30)
);

INSERT INTO WORKER VALUES (100,'Chris',20,'Managerial',100000,'MANAGE');
INSERT INTO WORKER VALUES (110,'Tim',20,'Programmer',60000,'WORK');
INSERT INTO WORKER VALUES (120,'Larry',30,'Programmer',75000,'WORK');
INSERT INTO WORKER VALUES (130,'Mike',30,'Managerial',130000,'MANAGE');
INSERT INTO WORKER VALUES (140,'Lisa',30,'Programmer',65000,'WORK');
INSERT INTO WORKER VALUES (150,'Jan',30,'Programmer',55000,'WORK');
INSERT INTO WORKER VALUES (160,'Akash',30,'Intern',25000,'WORK');
INSERT INTO WORKER VALUES (170,'Summer',20,'Intern',22000,'WORK');
INSERT INTO WORKER VALUES (200,'Ian',20,'Intern',23000,'WORK');
INSERT INTO WORKER VALUES (180,'Josh',40,'Managerial',110000,'MANAGE');
INSERT INTO WORKER VALUES (190,'Monika',40,'Programmer',95000,'WORK');
COMMIT;

Get a count of staff by department number.

SELECT    WORKER_DEPT_NO, COUNT(WORKER_DEPT_NO) AS WORKER_COUNT
FROM      WORKER
GROUP BY  WORKER_DEPT_NO;

WORKER_DEPT_NO WORKER_COUNT
-------------- ------------
            30            5
            40            2
            20            4

The query returns three rows; one for each department with a count of staff for that department. Neat. Ok… not really.

What if you wanted the department numbers as columns? Pivot is the answer.

WITH QRY_WORKER AS
 ( SELECT WORKER_DEPT_NO
   FROM   WORKER
 )
SELECT *
FROM   QRY_WORKER
PIVOT ( COUNT(WORKER_DEPT_NO) FOR (WORKER_DEPT_NO) IN (20,30,40) );

        20         30         40
---------- ---------- ----------
         4          5          2

So that is a little neater. If you are unfamiliar with the “WITH” clause above, go read about it! You will be glad you did. (Maybe I will write a future post about that.) Now we have department numbers as columns and the counts represented in the data. Note that the values for the department numbers are explicitly listed.
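For reference, and since the old max(decode(…)) approach got a mention earlier, this is roughly what the pre-11g version of that first pivot looks like. For counts you use COUNT (or SUM) rather than MAX, but the shape is the same:

SELECT COUNT(DECODE(WORKER_DEPT_NO, 20, 1)) AS DEPT_20,
       COUNT(DECODE(WORKER_DEPT_NO, 30, 1)) AS DEPT_30,
       COUNT(DECODE(WORKER_DEPT_NO, 40, 1)) AS DEPT_40
FROM   WORKER;

Each DECODE returns 1 when the department matches and NULL otherwise, and COUNT only counts the non-NULL results.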

Let’s try something similar finding min/max salaries for each of the non-managerial positions by department.

/* MAKE THE OUTPUT PRETTY IN SQLPLUS. PRETTY IS A SUBJECTIVE TERM */
SET LINESIZE 100;
WITH QRY_WORKER AS
 ( SELECT WORKER_DEPT_NO, WORKER_JOB, WORKER_SALARY
   FROM   WORKER
 )
SELECT *
FROM   QRY_WORKER
PIVOT ( MIN(WORKER_SALARY) AS MINSAL,
        MAX(WORKER_SALARY) AS MAXSAL
        FOR (WORKER_JOB)
        IN ( 'Intern'     AS INTERN,
             'Programmer' AS PROGRAMMER
           )
      )
ORDER BY WORKER_DEPT_NO;

WORKER_DEPT_NO INTERN_MINSAL INTERN_MAXSAL PROGRAMMER_MINSAL PROGRAMMER_MAXSAL
-------------- ------------- ------------- ----------------- -----------------
            20         22000         23000             60000             60000
            30         25000         25000             55000             75000
            40                                         95000             95000

More data would make this more interesting; a small data set means there isn’t a big variation in the data represented by min/max… but you get the idea.

In the query above, three rows come back. The inline view QRY_WORKER contains three columns: WORKER_JOB is the pivot and gets transposed into multiple columns, WORKER_SALARY is aggregated, and WORKER_DEPT_NO is left for grouping, so each of its three distinct values returns one row. As before, note that the pivot values are explicitly listed.

Go clean up after yourself.

SQL> DROP TABLE WORKER;
Table dropped.
SQL> PURGE RECYCLEBIN;
Recyclebin purged.