D.I.Y. MySQL 5.1 monitoring

I wrote recently about using events and the new PROCESSLIST table in MySQL 5.1 to keep track of the number of connected processes.  Although having the PROCESSLIST available as an INFORMATION_SCHEMA table is useful, it seemed to me that having SHOW GLOBAL STATUS exposed in a similar fashion would be far more useful.  So at the MySQL UC last year, I asked Brian Aker if that would be possible.  I know how many enhancement requests MySQL has to deal with, so I was really happy to see that table appear in the latest 5.1 build (5.1.14 beta).

This table, together with the EVENT scheduler, lets us keep track of the values of status variables over time without having to have any external daemon running.  This won't come anywhere near matching what MySQL has made available in Merlin, but it could still be fairly useful.  So let's build a simple system that uses events to keep track of "interesting" status variables.

I also thought I'd take the opportunity of working with the MySQL Workbench to design the tables involved.  Unfortunately, the product is effectively unusable on my system until at least rc7 (due when?), and since it only supports Windows, I'm not up for downloading the source and compiling.  Oh well.  I used our own (Quest Software's) Toad Data Modeler instead.

The idea is to schedule some events that trap (snapshot) the contents of the PROCESSLIST and GLOBAL_STATUS tables into tables that can be used to track trends and monitor performance in a simple way.  I created three tables to hold the data:


  1. GHSNAP_STATUS_VARIABLES lists the variable names we consider "interesting".  If the variable_name is listed here and CAPTURE_FLAG=1, then we will capture the value of the variable.  VARIABLE_SOURCE indicates the table from which the data is obtained; I only implemented GLOBAL_STATUS for now.
  2. GHSNAP_SNAPSHOTS contains one row for each snapshot we take.  The UPTIME column is used to detect server restarts so we don't calculate nonsense deltas or rates.
  3. GHSNAP_SNAPSHOT_VALUES contains one row for each variable for each snapshot.  We capture the raw value of the variable, its change since the last snapshot, and the rate of change per second.
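As a sketch of the shape of those tables (the column names below are inferred from the descriptions above, not copied from the actual installation script):

```sql
-- Sketch only: column names are my reconstruction from the descriptions above
CREATE TABLE ghsnap_status_variables (
   variable_name   VARCHAR(64) PRIMARY KEY,
   variable_source VARCHAR(30),       -- table the data comes from, e.g. GLOBAL_STATUS
   capture_flag    TINYINT            -- 1 = capture this variable
);

CREATE TABLE ghsnap_snapshots (
   snapshot_id        INT AUTO_INCREMENT PRIMARY KEY,
   snapshot_timestamp DATETIME,
   uptime             BIGINT          -- used to detect server restarts
);

CREATE TABLE ghsnap_snapshot_values (
   snapshot_id          INT,
   variable_name        VARCHAR(64),
   variable_value       BIGINT,       -- raw value at snapshot time
   variable_value_delta BIGINT,       -- change since the previous snapshot
   variable_value_rate  DECIMAL(20,4),-- change per second
   PRIMARY KEY (snapshot_id, variable_name)
);
```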

This file installs the tables, stored procedures and events.  You may have to run it from the MySQL Query Browser to avoid errors caused by the command-line client not processing DELIMITER statements properly.  The script creates a new database, GH_SNAPSHOTS.  Three stored procedures are created:

  1. ghsnap_populate_variables creates default entries in the GHSNAP_STATUS_VARIABLES table and is run only during installation.  By default I set it up to capture all the INNODB%, COM% and QCACHE% variables but you might want to change the CAPTURE_FLAG for those you do/don't want.
  2. ghsnap_take_snapshot takes a snapshot of GLOBAL_STATUS and stores it in the snapshot tables.  It also works out if the server has been restarted and if not, calculates rates and deltas.
  3. ghsnap_delete_snapshots deletes all snapshots more than a certain number of days old.
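The heart of ghsnap_take_snapshot is the delta/rate arithmetic.  It amounts to something like the following sketch (table and column names follow the descriptions above; the @-variables are illustrative, not the actual procedure body):

```sql
-- Sketch of the delta/rate calculation. Only run when the new UPTIME is
-- higher than the previous snapshot's, i.e. the server has not restarted.
UPDATE ghsnap_snapshot_values new_val
  JOIN ghsnap_snapshot_values old_val
    ON old_val.variable_name = new_val.variable_name
   AND old_val.snapshot_id   = new_val.snapshot_id - 1
   SET new_val.variable_value_delta = new_val.variable_value - old_val.variable_value,
       new_val.variable_value_rate  = (new_val.variable_value - old_val.variable_value)
                                      / @seconds_since_last_snapshot
 WHERE new_val.snapshot_id = @current_snapshot_id;
```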

There are two events defined:

  1. ghsnap_take_snap_event runs (by default) every five minutes and simply executes ghsnap_take_snapshot.
  2. ghsnap_delete_snap_event runs every 5 hours (by default) and deletes snapshots more than 1 day old (you can edit the installation script to change this default).
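The event definitions themselves are tiny.  Roughly (this is my sketch, not the install script verbatim; in particular the retention argument to ghsnap_delete_snapshots is assumed):

```sql
CREATE EVENT ghsnap_take_snap_event
    ON SCHEDULE EVERY 5 MINUTE
    DO CALL ghsnap_take_snapshot();

CREATE EVENT ghsnap_delete_snap_event
    ON SCHEDULE EVERY 5 HOUR
    DO CALL ghsnap_delete_snapshots(1);   -- delete snapshots more than 1 day old
```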

There are some obvious additional things we could do with this (optimize deletes via partitioning, calculate ratios, capture other state, etc.), but it probably has some real value already.  Now I can issue queries such as this:

select snapshot_id,snapshot_timestamp ,
       sum( case variable_name when 'COM_COMMIT' then variable_value_rate end
          ) commit_ps,
       sum( case variable_name when 'COM_SELECT' then variable_value_rate end
          ) select_ps,
       sum( case variable_name when 'COM_INSERT' then variable_value_rate end
          ) insert_ps,
       sum( case variable_name when 'COM_DELETE' then variable_value_rate end
          ) delete_ps,
       sum( case variable_name when 'COM_UPDATE' then variable_value_rate end
          ) update_ps
  from ghsnap_snapshots join ghsnap_snapshot_values using (snapshot_id)
where snapshot_id>70
group by snapshot_id,snapshot_timestamp

to view the activity on my server.  Using a reporting or charting tool I can generate useful charts.  For instance, using the BIRT module in Eclipse I created the following chart showing my SQL execution rates:


Nothing earth-shattering, but still useful: with the event scheduler, stored procedures and the new INFORMATION_SCHEMA tables, we don't need any external infrastructure to capture this information.

We'll probably implement something like this in the next version of our freeware Spotlight on MySQL, along with better diagnostics on replication and exploiting the fact that in 5.1 you can get SELECT access to the logs.  Feel free to check it out.

P.S. Don't forget to set EVENT_SCHEDULER=1 to enable events on your 5.1 server.  And watch out on Windows, where the event scheduler in 5.1.14 still seems a bit unstable.


10g tracing quick start

Oracle’s released a few new facilities to help with tracing in 10g; here’s a real quick wrap-up of the most significant:

Using the new client identifier

You can tag database sessions with a client identifier that can later be used to pick out the sessions to trace.  You can set the identifier like this:
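In 10g that call is DBMS_SESSION.SET_IDENTIFIER; 'GUY1' here is just a sample identifier, matching the trace call below:

```sql
BEGIN
   DBMS_SESSION.set_identifier ('GUY1');
END;
```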


You can set this from a login trigger if you don’t have access to the source code.  To set trace on for a matching client id, you use DBMS_MONITOR.CLIENT_ID_TRACE_ENABLE:


   DBMS_MONITOR.client_id_trace_enable (client_id      => 'GUY1',
                                        waits          => TRUE,
                                        binds          => FALSE);

You can add wait and/or bind variable information to the trace file using the flags shown.
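The matching DBMS_MONITOR.CLIENT_ID_TRACE_DISABLE call turns tracing off again when you're done:

```sql
BEGIN
   DBMS_MONITOR.client_id_trace_disable (client_id => 'GUY1');
END;
```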

Tracing by Module and/or action

Many Oracle-aware applications set module and action properties, and you can use these to enable tracing as well.  The SERV_MOD_ACT_TRACE_ENABLE procedure lets you turn tracing on for sessions matching particular service, module, action and (for clusters) instance identifiers.  You can see the current values for these using the following query:

SELECT DISTINCT instance_name, service_name, module, action
           FROM gv$session JOIN gv$instance USING (inst_id);
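You can then feed those values into SERV_MOD_ACT_TRACE_ENABLE.  The service and module names below are illustrative only; substitute the output of the query above:

```sql
BEGIN
   DBMS_MONITOR.serv_mod_act_trace_enable (
      service_name => 'MYSERVICE',   -- illustrative: use values from
      module_name  => 'MYMODULE',    -- the query above
      action_name  => 'MYACTION',
      waits        => TRUE,
      binds        => FALSE);
END;
```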



10g time model query

Joining the 10g time model to the traditional wait interface views and taking advantage of the wait_class data is something most of us have probably done.  Here are my standard queries that do just that.

COLUMN wait_class format a20
COLUMN name format a30
COLUMN time_secs format 999,999,999,999.99
COLUMN pct format 99.99

SELECT   wait_class, NAME, ROUND (time_secs, 2) time_secs,
         ROUND (time_secs * 100 / SUM (time_secs) OVER (), 2) pct
    FROM (SELECT n.wait_class, e.event NAME, e.time_waited / 100 time_secs
            FROM v$system_event e, v$event_name n
           WHERE n.NAME = e.event AND n.wait_class <> 'Idle'
                 AND time_waited > 0
           UNION
           SELECT 'CPU', 'server CPU', SUM (VALUE / 1000000) time_secs
            FROM v$sys_time_model
           WHERE stat_name IN ('background cpu time', 'DB CPU'))
ORDER BY time_secs DESC;

This generates CPU and wait event times broken down by event name:

WAIT_CLASS           NAME                                     TIME_SECS    PCT


Unit testing stored procedures with Junit

Anyone who has used an automated unit testing framework such as JUnit knows just how life-changing an automated test suite can be.  Once you've experienced validating that recent changes have not broken old code, or discovering subtle bugs via JUnit that would otherwise have remained undetected, you naturally want to have this capability in all your programming environments.

Giuseppe Maxia has written a few stored procedure snippets to assist with automated unit testing of MySQL routines.  Unfortunately, the MySQL stored procedure language itself does not have the capabilities needed to fully implement the sort of unit testing we would like.  In particular, the inability of a stored procedure to capture the result sets generated by another stored procedure prevents one stored procedure from fully unit testing another.

So I decided that, for me, JUnit offered the best solution.  I created an extension to JUnit that contains some assertions useful when unit testing stored procedures, and extended my PlainSql JDBC wrapper classes to allow a stored procedure call to return a single object that contains all its result sets and the values of OUT or INOUT parameters.  This object can be made the target of the various assertions.

If you're not familiar with Java and/or JUnit, you might feel that this solution is not for you.  However, the amount of Java programming you need to do is minimal, and GUI environments such as Eclipse make it very easy to set up.  The rest of this article is available here; it contains instructions, my Java classes and examples for setting up JUnit test cases for MySQL stored procedures.


MySQL stored procedures with Ruby

Ruby's getting an incredible amount of attention recently, largely as a result of Ruby on Rails.  I've played a little with Ruby on Rails and it certainly is the easiest way I've seen so far to develop web interfaces to a back-end database.

At the same time, I've been shifting from Perl to Java as my language of choice for any serious database utility development.  But I still feel the need for something dynamic and hyper-productive when I'm writing something one-off or for my own use.  I've been playing with Python, but if Ruby has the upper hand as a web platform then maybe I should try Ruby.

So, seeing as I've just finished the MySQL stored procedure book, the first thing is to see if I can use MySQL stored procedures from Ruby.

Database support for Ruby, MySQL included, is kind of all over the place.  There's a DBI option (similar to Perl's) which provides a consistent interface, and there are also native drivers.  For MySQL there is a pure-Ruby native driver and a driver written in C.  Since the DBI is based on the native driver, I thought I'd try the native driver first.  The pure-Ruby driver gave me some problems, so I started with the C driver on Linux (RHAS4).

Retrieving multiple result sets

The main trick with stored procedures is that they might return multiple result sets. OUT or INOUT parameters can be an issue too, but you can always work around that using session variables. 
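The session-variable workaround looks like this (foo and @out_param are illustrative; any user variable passed as the OUT argument can be read back afterwards):

```sql
CALL foo(@out_param);   -- pass a user (session) variable as the OUT argument
SELECT @out_param;      -- then read the value back with an ordinary SELECT
```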

If you try to call a stored procedure that returns a result set, you'll at first get a "procedure foo() can't return a result set in the given context" error.  This is because the CLIENT_MULTI_RESULTS flag is not set by default when the connection is created.  Luckily, we can set it in our own code:

dbh.real_connect("", "root", "secret", "prod", 3306, nil, Mysql::CLIENT_MULTI_RESULTS)

The "query" method normally returns a result set as soon as it is called, but I found it easier to retrieve each result set manually, so I set the query_with_result attribute to false:

dbh.query_with_result = false

The next_result and more_results methods are implemented in the Ruby MySQL driver, but there's something weird about the more_results C API call that causes problems in Python and PHP too.  In Ruby, the more_results call returns true whether or not there is an additional result.  The only reliable way I found to determine if there is another result set is to try to grab the results and bail out if an exception fires (the exception doesn't generate an error code, by the way):
    begin
      dbh.query("CALL foo()")
    rescue Mysql::Error => e    # no error code is set; treat as "no more results"
      no_more_results = true
    end

We can then advance to the next result set at the end of each rowset loop.  So here's a method that dumps all the result sets from a stored procedure call as XML using this approach (I know the Ruby is probably crap; it's only my third Ruby program):

def procXML(dbh, sql)
  printf("<?xml version='1.0'?>\n")
  printf("<proc sql=\"%s\">\n", sql)
  dbh.query(sql)                       # query_with_result is false, so fetch manually
  result_no = 0
  no_more_results = false
  until no_more_results
    begin
      rs = dbh.store_result            # raises Mysql::Error when no result set remains
    rescue Mysql::Error => e
      no_more_results = true
    end
    next if no_more_results
    printf("\t<resultset id=%d columns=%d>\n", result_no += 1, rs.fetch_fields.length)
    rowno = 0
    rs.each do |row|
      printf("\t\t<row no=%d>\n", rowno += 1)
      rs.fetch_fields.each_with_index do |col, i|
        printf("\t\t\t<colvalue column=\"%s\">%s</colvalue>\n", col.name, row[i])
      end
      printf("\t\t</row>\n")
    end
    printf("\t</resultset>\n")
    dbh.next_result                    # advance to the next result set, if any
  end
  printf("</proc>\n")
end

No C programming required!

Whew!  No need to hack into the C code.  So you can use MySQL stored procedures in Ruby with the existing native C driver.  The problem is that the C driver is not yet available as a binary on Windows, and compiling it turned out to be beyond my old brain (and yes, I used MinGW and all the other "right" things).  Hopefully the MySQL binary driver will be available in the one-click Ruby installer eventually.

The above code doesn't work using the pure-Ruby driver on Windows, by the way; there's an "out of sequence" error when trying to execute the stored proc.  I might hack around on that later (at the moment I'm at 35,000 ft with 15 minutes of battery left on the way to the MySQL UC).  For now, if you want to use MySQL stored procedures in a Ruby program on Windows, I can't help.

Note that Ruby seems to hit a bug that causes MySQL to go away if there are two calls to the same stored proc in the same session and the stored proc was created using server-side prepared statements.  Hopefully this will be fixed soon, but for now, if you get a "MySQL server has gone away" error you might be hitting the same problem.  Wez posted on this problem here.

I suppose the end of this investigation will probably be to see if there's any way to use stored procedure calls to maintain a Rails ActiveRecord object.  Not that I think you'd necessarily want to, but it would probably be a good learning exercise.