Entries by Guy Harrison (90)

Monday, July 17, 2017

Announcing dbKoda!

I'm very excited to announce the release of dbKoda - a next generation database development and administration tool now available for MongoDB.

Those who've been following me know that I've been working with databases since the early Mesozoic period and I've worked in database tooling for almost two decades.

Working with next generation databases like MongoDB has been a lot of fun, but did make me realise how much need there is for a strong tooling ecosystem around these new databases.  I like to think that I made significant contributions to tooling for relational databases and had a strong desire to build something for post-relational systems.

Consequently, late last year I founded the company Southbank Software and this week we launched our first product - dbKoda (www.dbKoda.com).

dbKoda is a modern, open source database development tool.  The first release targets MongoDB.  It is a 100% JavaScript application which runs on Linux, Mac, or Windows.  It features a rich editing environment with syntax highlighting, code completion, and formatting.  It also offers easy graphical access to common MongoDB administration and configuration tasks.


I'm really excited about dbKoda - I hope that it will become the foundation for a product family that will support modern database development across a wide range of environments.   And working closely with the small team of brilliant dbKoda developers has been an absolute privilege.

Check out the dbKoda website and download dbKoda here.  You can also check out an introductory video on dbKoda.  Please also follow dbKoda on Twitter at https://twitter.com/db_Koda.


Wednesday, November 30, 2016

Optimizing the order of MongoDB aggregation steps

MongoDB does have a query optimizer, and in most cases it's effective at picking the best of multiple possible plans.  However, it's worth remembering that in the case of the aggregate function, the sequence in which the various steps execute is completely under your control.  The optimizer won't reorder steps into the optimal sequence to get you out of trouble.


Optimizing the order of steps comes down mainly to reducing the amount of data in the pipeline as early as possible – this reduces the amount of work that has to be done by each successive step.  The corollary is that steps which perform a lot of work on data should be placed after any filter steps.
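The effect is easy to model outside of MongoDB.  Here's a plain-JavaScript sketch (not MongoDB code – the stage functions and counters are invented for illustration) that counts how many documents each stage must process when a filter runs early versus late:

```javascript
// Toy pipeline model: each stage processes whatever the previous
// stage emits, so a filter placed early shrinks all downstream work.
function pipelineWork(docs, stages) {
  let processed = 0;
  let current = docs;
  for (const stage of stages) {
    processed += current.length; // documents this stage must examine
    current = stage(current);
  }
  return { results: current, processed };
}

// 1,000 documents, of which 10% have status "C"
const docs = Array.from({ length: 1000 }, (_, i) => ({
  id: i,
  status: i % 10 === 0 ? "C" : "X",
}));
const matchC = (ds) => ds.filter((d) => d.status === "C"); // like $match
const project = (ds) => ds.map((d) => ({ id: d.id, status: d.status })); // like $project

const early = pipelineWork(docs, [matchC, project]); // filter first
const late = pipelineWork(docs, [project, matchC]); // filter last

console.log(early.processed); // 1100 documents examined in total
console.log(late.processed);  // 2000 documents examined in total
```

Both orderings return identical results, but filtering first nearly halves the total work here; in a real pipeline containing expensive stages like $lookup the difference is far larger.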

Nowhere is this more important than in $lookup steps.  Since each $lookup step performs a separate collection lookup – hopefully using an index – we should make sure we delay them until the data has been filtered as far as possible.  Consider this aggregation function, which generates a “top 10” list of product purchases by customer:

   1: var output=db.orders.aggregate([
   2:       {$sample:{size:sampleSize}},
   3:       {$match:{orderStatus:"C"}},
   4:       {$project:{CustId:1,lineItems:1}},
   5:       {$unwind:"$lineItems"},
   6:       {$group:{_id:{ CustId:"$CustId",ProdId:"$lineItems.prodId"},
   7:                 "prodCount":{$sum:"$lineItems.prodCount"},
   8:                 "prodCost":{$sum:"$lineItems.Cost"}}},
   9:       {$sort:{prodCost:-1}},
  10:       {$limit:10},
  11:       {$lookup:{
  12:                    from: "customers",
  13:                      as: "c",
  14:              localField: "_id.CustId",
  15:            foreignField: "_id"
  16:       }},
  17:       {$lookup:{
  18:                    from: "products",
  19:                      as: "p",
  20:              localField: "_id.ProdId",
  21:            foreignField: "_id"
  22:       }},
  23:       {$unwind:"$p"},{$unwind:"$c"}, //Get rid of single element arrays
  24:       {$project:{"Customer":"$c.CustomerName","Product":"$p.ProductName",
  25:        prodCount:1,prodCost:1,_id:0}}
  26:     ]);

Lines 11-22 perform lookups on the customers and products collections to get customer and product names.

We could have done these lookups much earlier in the pipeline.  For instance, this code returns the exact same results, but performs the lookups a little earlier in the sequence:

   1: var output=db.orders.aggregate([
   2:       {$sample:{size:sampleSize}},
   3:       {$match:{orderStatus:"C"}},
   4:       {$project:{CustId:1,lineItems:1}},
   5:       {$unwind:"$lineItems"},
   6:       {$group:{_id:{ CustId:"$CustId",ProdId:"$lineItems.prodId"},
   7:                 "prodCount":{$sum:"$lineItems.prodCount"},
   8:                 "prodCost":{$sum:"$lineItems.Cost"}}},
   9:       {$lookup:{
  10:                    from: "customers",
  11:                      as: "c",
  12:              localField: "_id.CustId",
  13:            foreignField: "_id"
  14:       }},
  15:       {$lookup:{
  16:                    from: "products",
  17:                      as: "p",
  18:              localField: "_id.ProdId",
  19:            foreignField: "_id"
  20:       }},
  21:       {$sort:{prodCost:-1}},
  22:       {$limit:10},
  23:       {$unwind:"$p"},{$unwind:"$c"}, //Get rid of single element arrays
  24:       {$project:{"Customer":"$c.CustomerName","Product":"$p.ProductName",
  25:        prodCount:1,prodCost:1,_id:0}}
  26:     ]);

The difference in performance is striking.  By moving the $lookup a few lines earlier, we have created a much less scalable solution:

[Chart: elapsed time with the $lookup steps placed before vs. after the $limit step]

When the $lookups come before the $limit step, we have to perform as many lookups as there are matching rows.  When we move them after the $limit, we only have to perform 10.  It’s an obvious but important optimization.

The aggregation framework is similar in nature to Pig (see this post).  Both provide a procedural way of processing data which is philosophically different from what we have become familiar with in the SQL world.  The main thing to remember is that you are in control of the execution plan in an aggregation pipeline.  As the Pig programmers like to say, “it uses the query optimizer between your ears”!

Monday, November 7, 2016

Bulk inserts in MongoDB

Like most database systems, MongoDB provides API calls that allow multiple documents to be inserted in a single operation.  I’ve written about similar interfaces in Oracle in the past – for instance in this post.

Array/bulk interfaces improve database performance dramatically by reducing the number of round trips between the client and the database.  To realize how fundamental an optimization this is, imagine that you have a group of people to take across a river.  You have a boat that can carry 100 people at a time, but for some reason you are only taking one person across on each trip – not smart, right?  Failing to take advantage of array inserts is very similar: you are sending network packets that could carry hundreds of documents across with only a single document in each.
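The arithmetic behind the analogy is simple.  A quick sketch (the function name is mine, not a MongoDB API) of round trips as a function of batch size:

```javascript
// Number of client-server round trips needed to insert nDocs
// documents when each network "boat" carries batchSize documents.
function roundTrips(nDocs, batchSize) {
  return Math.ceil(nDocs / batchSize);
}

console.log(roundTrips(100000, 1));    // 100000 round trips - one per document
console.log(roundTrips(100000, 100));  // 1000 round trips
console.log(roundTrips(100000, 1000)); // 100 round trips
```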

Coding bulk inserts in MongoDB is a little more work, but far from rocket science.  The exact syntax varies depending on the language.  Here we’ll look at a little bit of JavaScript code. 

 

   1: if (orderedFlag==1) 
   2:   bulk=db.bulkTest.initializeOrderedBulkOp();
   3: else 
   4:   bulk=db.bulkTest.initializeUnorderedBulkOp(); 
   5:  
   6: for (i=1;i<=NumberOfDocuments;i++) {
   7:   //Insert a row into the bulk batch
   8:   var doc={_id:i,i:i,zz:zz};
   9:   bulk.insert(doc);
  10:   // Execute the batch if batchsize reached
  11:   if (i%batchSize==0) {
  12:     bulk.execute();
  13:     if (orderedFlag==1)
  14:       bulk=db.bulkTest.initializeOrderedBulkOp();
  15:     else
  16:       bulk=db.bulkTest.initializeUnorderedBulkOp();
  17:   }
  18: }
  19: bulk.execute();

On lines 2 or 4 we initialize a bulk object for the “bulkTest” collection.  There are two ways to do this – ordered or unordered.  Ordered guarantees that the documents are inserted in the order they are presented to the bulk object.  Otherwise, MongoDB can optimize the inserts into multiple streams which may not insert in order.

On line 9 we add documents to the “bulk” object.  When we hit an appropriate batch size (line 11), we execute the batch (line 12) and reinitialize the bulk object (lines 14 or 16).  We do a further execute at the end (line 19) to make sure all documents are inserted. 

I inserted 100,000 documents into a collection on my laptop, using various batch sizes (i.e., the number of documents inserted between execute() calls).  I tried both ordered and unordered bulk operations.  The results are charted below:

[Chart: elapsed time to insert 100,000 documents at various batch sizes, ordered vs. unordered]

The results are pretty clear – inserting in batches improves performance dramatically.  Initially, every increase in batch size reduces elapsed time, but eventually the improvement levels off.  I believe MongoDB transparently limits batches to 1,000 documents per operation anyway, but even before that point, the chances are your network packets will be filled up and you won’t see any further reduction in elapsed time by increasing the batch size.  To use the analogy above – the rowboat is full!

For my example there was no real difference between ordered and unordered bulk operations, but this might reflect a limitation of my laptop.  Something to play with next time…

When inserting multiple documents into a MongoDB collection you should generally take advantage of the massive performance advantages offered by the bulk operations interface.

Wednesday, August 24, 2016

Graph Lookup in MongoDB 3.3

Dedicated graph databases such as Neo4j specialize in traversing graphs of relationships – such as those you might find in a social network.  Many non-graph databases have been incorporating graph compute engines to perform similar tasks.  In the MongoDB 3.3 release, we now have the ability to perform simple graph traversal using the $graphLookup aggregation framework function.  This will become a production feature in the 3.4 release.

The new feature is documented in MongoDB Jira SERVER-23725.  The basic syntax is shown here:

{$graphLookup: {
        from: <name of collection to look up into>,
        startWith: <expression>,
        connectFromField: <name of field in documents from “from”>,
        connectToField: <name of field in documents from “from”>,
        as: <name of field in output documents>,
        maxDepth: <optional - non-negative integer>,
        depthField: <optional - name of field in output documents>
}}

I originally started playing with this capability using the POKEC dataset, which represents data from a real social network in Slovakia.  The relationship file soc-pokec-relationships.txt.gz contains the social network for about 1.2 million people.  I loaded it into MongoDB using this Perl script.  The following pipeline did the trick:

gzip -dc ~/Downloads/soc-pokec-relationships.txt | perl loadit.pl | mongoimport -d GraphTest -c socialGraph --drop

Now we have a collection with records like this:

> db.socialGraph.findOne()
{
    "_id" : ObjectId("57b841b02e2a30792c8bb6bd"),
    "person" : 1327456,
    "name" : "User# 1327456",
    "friends" : [
        427220,
        488072,
        975403,
        1322901,
        1343431,
        51639,
        54468,
        802341
    ]
}

We can expand the social network for a single person using a syntax like this:

db.socialGraph.aggregate([
    {
        $match: {person:1476767}
    },
    {
        $graphLookup: {
            from: "socialGraph",
            startWith: [1476767],
            connectFromField: "friends",
            connectToField: "person",
            as: "socialNetwork",
            maxDepth:2,
            depthField:"depth"
        }
    },
    {
       $project: {_id:0,name:1,"Network":"$socialNetwork.name",
                                 "Depth":"$socialNetwork.depth" }
    },
    {$unwind: "$Network"}
])

What we are doing here is starting with person 1476767, then following the elements of the friends array out to two levels – i.e.: to “friends of friends”.
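To make the traversal concrete, here is a toy in-memory model of what $graphLookup is doing, written in plain JavaScript rather than as MongoDB code (one simplification to note: this sketch counts depth from the start document, while the real operator assigns depth 0 to the first matched documents):

```javascript
// Breadth-first expansion from a start node, following the "friends"
// array (connectFromField) back to the "person" field (connectToField),
// up to maxDepth additional hops.
function expandNetwork(people, startId, maxDepth) {
  const byId = new Map(people.map((p) => [p.person, p]));
  const found = new Map(); // person id -> depth at which we reached it
  let frontier = [startId];
  for (let depth = 0; depth <= maxDepth && frontier.length > 0; depth++) {
    const next = [];
    for (const id of frontier) {
      if (found.has(id)) continue; // already visited
      const doc = byId.get(id);
      if (!doc) continue;
      found.set(id, depth);
      next.push(...doc.friends);
    }
    frontier = next;
  }
  return found;
}

// A tiny invented social graph for illustration
const people = [
  { person: 1, friends: [2, 3] },
  { person: 2, friends: [4] },
  { person: 3, friends: [] },
  { person: 4, friends: [] },
];

// One hop reaches persons 1, 2 and 3, but not friend-of-friend 4
console.log(expandNetwork(people, 1, 1));
```

The frontier can grow geometrically with each extra hop, which is exactly why deep traversals of a densely connected graph blow out so quickly.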

Increasing the maxDepth exponentially increases the amount of data we have to cope with.  This is the well-known “six degrees of separation” effect – most people in a social network are linked by 6-7 hops, so once we get past that we are effectively traversing the entire set.  Unfortunately, this meant that traversing more than 3 levels deep caused me to run out of memory:

assert: command failed: {
    "ok" : 0,
    "errmsg" : "$graphLookup reached maximum memory consumption",
    "code" : 40099
} : aggregate failed

The graph lookup can consume at most 100MB of memory, and currently doesn't spill to disk, even if the allowDiskUse: true clause is specified within the aggregation arguments.  SERVER-23980 is open to correct this, but it doesn't appear to have been scheduled yet.

So I tried building a “flatter” network so that I wouldn’t run out of memory.  This JavaScript builds the network and this JavaScript runs some performance tests.  I tried expanding the network both with and without a supporting index on the connectToField (person, in this case).  Here are the results (note the logarithmic scale):

[Chart: network expansion elapsed time by depth, with and without an index on the connectToField (log scale)]

For shallow networks, having an index on the connectToField makes an enormous difference.  But as the depth increases, the index performance advantage diminishes, and eventually performance matches that of the unindexed case.  In this example data the crossover just happens to occur at about seven degrees of separation, but it will clearly depend on the nature of the data.

The $graphLookup operator is a very powerful addition to the MongoDB aggregation framework and continues the trend of providing richer query capabilities within the server.  Mastering the aggregation framework is clearly a high priority for anyone wanting to exploit the full power of MongoDB.

Monday, July 4, 2016

Join performance in MongoDB 3.2 using $lookup

One of the key tenets of MongoDB schema design is to account for the absence of server-side joins.  Data is joined all the time inside application code, of course, but traditionally there’s been no way to perform joins within the server itself.

This changed in 3.2 with the introduction of the $lookup operator within the aggregation framework.  $lookup performs the equivalent of a left outer join – i.e., it retrieves matching documents from another collection, and returns an empty array in the output field if no match is found.
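If left-outer-join semantics are unfamiliar, this plain-JavaScript sketch shows the behaviour (the helper and sample data are invented, not MongoDB code, and it is simplified to plain equality – the real $lookup will also match individual elements when the foreignField refers into an array):

```javascript
// Left outer join: every document from "left" appears in the output,
// with an "as" array holding its matches from "right" (possibly empty).
function lookup(left, right, localField, foreignField, as) {
  return left.map((l) => ({
    ...l,
    [as]: right.filter((r) => r[foreignField] === l[localField]),
  }));
}

const films = [
  { _id: 1, Title: "ACADEMY DINOSAUR" },
  { _id: 2, Title: "ACE GOLDFINGER" },
];
const rentals = [
  { filmId: 1, customer: "SUSAN WILSON" },
  { filmId: 1, customer: "REBECCA SCOTT" },
];

const joined = lookup(films, rentals, "_id", "filmId", "customerData");
console.log(joined[0].customerData.length); // 2 matches for film 1
console.log(joined[1].customerData.length); // 0 - film 2 is kept, array is empty
```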

Here’s an example using the MongoDB version of the Sakila dataset that I converted from MySQL back in this post.

   1: db.films.aggregate([
   2:     {$match:{"Actors.First name" : "CHRISTIAN",
   3:              "Actors.Last name" : "GABLE"}},
   4:     {$lookup: {
   5:         from: "customers",
   6:         as: "customerData",
   7:         localField: "_id",
   8:         foreignField: "Rentals.filmId"
   9:     }},
  10:     {$unwind: "$customerData"},
  11:     {$project:{"Title":1,
  12:                "FirstName":"$customerData.First Name",
  13:                "LastName" :"$customerData.Last Name"}},
  14: ])

 

What we’re doing here is finding all customers who have ever rented a film starring “Christian Gable”.  We start by finding those films in the films collection (lines 2-3), then use $lookup to retrieve customer data (lines 4-9).  The films collection embeds actors in the “Actors” array; the customers collection embeds the films that have been rented in the “Rentals” array.

The result of the join contains, for each film, all the customers who have rented it, returned as an array, so we use the $unwind operator to “flatten” them out (line 10).  The resulting output looks like this:

{ "_id" : 1, "Title" : "ACADEMY DINOSAUR", "FirstName" : "SUSAN", "LastName" : "WILSON" }
{ "_id" : 1, "Title" : "ACADEMY DINOSAUR", "FirstName" : "REBECCA", "LastName" : "SCOTT" }
{ "_id" : 1, "Title" : "ACADEMY DINOSAUR", "FirstName" : "DEBRA", "LastName" : "NELSON" }
{ "_id" : 1, "Title" : "ACADEMY DINOSAUR", "FirstName" : "MARIE", "LastName" : "TURNER" }
{ "_id" : 1, "Title" : "ACADEMY DINOSAUR", "FirstName" : "TINA", "LastName" : "SIMMONS" }

One thing we need to be careful about here is join performance.  The $lookup function is going to be executed once for each document returned by our $match condition.  There is – AFAIK – no equivalent of a hash or sort-merge join operation possible here, so we need to make sure that we've used an index.  Unfortunately, the explain() command doesn’t help us.  It tells us only whether we have used an index for the initial $match, but doesn't show us whether we used an index within the $lookup.

Here's the explain output from the operation above (TL;DR):

> db.films.explain().aggregate([
... {$match:{"Actors.First name" : "CHRISTIAN",
...          "Actors.Last name" : "GABLE"}},
...     {$lookup: {
... from: "customers",
... as: "customerData",
... localField: "_id",
... foreignField: "Rentals.filmId"
... }},
... {$unwind: "$customerData"},
... {$project:{"Title":1,
...            "FirstName":"$customerData.First Name",
...            "LastName" :"$customerData.Last Name"}},
...
... ])
{
        "waitedMS" : NumberLong(0),
        "stages" : [
                {
                        "$cursor" : {
                                "query" : {
                                        "Actors.First name" : "CHRISTIAN",
                                        "Actors.Last name" : "GABLE"
                                },
                                "fields" : {
                                        "Title" : 1,
                                        "customerData.First Name" : 1,
                                        "customerData.Last Name" : 1,
                                        "_id" : 1
                                },
                                "queryPlanner" : {
                                        "plannerVersion" : 1,
                                        "namespace" : "sakila.films",
                                        "indexFilterSet" : false,
                                        "parsedQuery" : {
                                                "$and" : [
                                                        {
                                                                "Actors.First name" : {
                                                                        "$eq" : "CHRISTIAN"
                                                                }
                                                        },
                                                        {
                                                                "Actors.Last name" : {
                                                                        "$eq" : "GABLE"
                                                                }
                                                        }
                                                ]
                                        },
                                        "winningPlan" : {
                                                "stage" : "COLLSCAN",
                                                "filter" : {
                                                        "$and" : [
                                                                {
                                                                        "Actors.First name" : {
                                                                                "$eq" : "CHRISTIAN"
                                                                        }
                                                                },
                                                                {
                                                                        "Actors.Last name" : {
                                                                                "$eq" : "GABLE"
                                                                        }
                                                                }
                                                        ]
                                                },
                                                "direction" : "forward"
                                        },
                                        "rejectedPlans" : [ ]
                                }
                        }
                },
                {
                        "$lookup" : {
                                "from" : "customers",
                                "as" : "customerData",
                                "localField" : "_id",
                                "foreignField" : "Rentals.filmId",
                                "unwinding" : {
                                        "preserveNullAndEmptyArrays" : false
                                }
                        }
                },
                {
                        "$project" : {
                                "Title" : true,
                                "FirstName" : "$customerData.First Name",
                                "LastName" : "$customerData.Last Name"
                        }
                }
        ],
        "ok" : 1
}

However, we can see the queries created by the $lookup function if we enable profiling.  For instance, with profiling turned on, we can see that a full collection scan of customers has been generated for every film document joined:

[Profiler output: repeated COLLSCAN queries against the customers collection, one per joined film document]

These “nested” collection scans are bad news.  Below are the results of a benchmark in which I joined two collections using $lookup, with and without an index.  As you can see, the unindexed $lookup degrades steeply as the number of documents to be joined increases.  The solution is obvious:

 Always create an index on the foreignField attributes in a $lookup, unless the collections are of trivial size. 


[Chart: $lookup elapsed time vs. number of documents joined, indexed vs. unindexed foreignField]
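A toy cost model in plain JavaScript (all names invented) shows why: without an index the server effectively performs a nested-loop join, examining the whole foreign collection once per joined document, whereas an index behaves more like the Map probe below:

```javascript
// Compare documents examined by an unindexed (nested-loop) join with
// probes into a pre-built Map, which stands in for a foreignField index.
function joinCosts(left, right, localField, foreignField) {
  // Unindexed: scan all of "right" for every document in "left"
  let scanned = 0;
  const matches = [];
  for (const l of left) {
    for (const r of right) {
      scanned++;
      if (r[foreignField] === l[localField]) matches.push([l, r]);
    }
  }

  // "Indexed": build the lookup structure once, then probe per document
  const index = new Map();
  for (const r of right) {
    const key = r[foreignField];
    if (!index.has(key)) index.set(key, []);
    index.get(key).push(r);
  }
  let probes = 0;
  for (const l of left) {
    probes++;
    index.get(l[localField]); // one probe instead of a full scan
  }
  return { scanned, probes };
}

const films = Array.from({ length: 100 }, (_, i) => ({ _id: i }));
const rentals = Array.from({ length: 1000 }, (_, i) => ({ filmId: i % 100 }));
const { scanned, probes } = joinCosts(films, rentals, "_id", "filmId");
console.log(scanned); // 100000 documents examined without an index
console.log(probes);  // 100 probes with one
```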

The MongoDB company is putting a lot of new features into the aggregation framework: they clearly intend to create a very powerful and flexible capability that matches and maybe even exceeds what can be done with SQL.  Indeed, the aggregation framework seems poised to become a dataflow language similar to Pig.  Anyone wanting to do any serious work in MongoDB should make sure they are very comfortable with aggregate.  And if you use $lookup to perform joins in aggregate, make sure there is an index on the foreignField attribute.