Experimental Thoughts » SQL: Ideas on Databases, Logic, and Language, by Jeff Davis (http://thoughts.davisjeff.com)

Applications vs. Platforms
Sat, 16 Jun 2012

When are you developing an application, and when are you developing a platform? A lot of discussion about programming comes down to this question; and the less-helpful discussions can usually be traced to confusion over this point.

For instance, an article here (by an author I respect) entitled Your Coding Philosophies are Irrelevant makes a fairly typical point: that it’s hard to make a connection between a good end-user experience and particular programming practices or philosophies. It’s similar to an argument that the ends justify the means, though hopefully not fraught with the same moral problems.

On the other hand, advocates of development styles, programming languages, etc., point out how their approach helps manage the complexity of software development. This section of Learn You a Haskell extols the “safety” of a type system (presumably implying that your program will work better).

So who is right? If you are developing an application, then you need to set the philosophies aside, pick up whatever tools are most convenient, and build. But if you are building a platform, taking care with the methods you choose and the interfaces or languages you design is crucial.

For applications, every moment you spend trying to decide on a programming language is a costly distraction from the problem you are trying to solve and the potential users trying to solve it. Over time, you should practice and hone your skills, and carefully choose (or develop) the right tools and platforms to make building future applications easier. But when you have an idea, the only thing you should be thinking about is “build, build, build”.

If you are developing a platform — which I’ll define here as a building block for applications[1] — the potential users are developers and they already have many ways to solve their problem. You are trying to make it easier to develop existing applications and inspire the development of new applications that previously seemed out of reach. Imagining specifically what the developers would do can lead to an overly-specific solution when a good general solution is possible. Furthermore, if you can see in so much detail what a developer should do with your platform, then maybe you should be building an application, instead. Being surprised by how developers use the platform is a good sign.

The best example of a philosophy and platform that really does matter is the relational database management system. There’s no doubt that SQL comes with its share of opinion and philosophy, and it tries to guide you into this philosophy at every turn. The availability of primary and foreign keys in most implementations strongly encourages you to follow some basic patterns for modeling your business. Yet there’s also no doubt that SQL is wildly successful: it is the de facto platform for the lion’s share of applications that interact with data.
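
To make that concrete, here is a minimal sketch of the kind of schema that philosophy nudges you toward (the tables and columns are hypothetical, invented just for illustration):

CREATE TABLE customer
(
  customer_id INTEGER PRIMARY KEY,
  name        TEXT NOT NULL
);

CREATE TABLE purchase
(
  purchase_id INTEGER PRIMARY KEY,
  customer_id INTEGER NOT NULL REFERENCES customer,  -- every purchase belongs to a real customer
  placed_at   TIMESTAMPTZ NOT NULL
);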

SQL offers some pretty tremendous advantages:

  • ROLLBACK — if you encounter a problem in the middle of an operation, you can just reset as though nothing happened. This is probably the most important advantage — nothing kills your inspiration while developing an application like struggling with error-recovery bugs. See my previous article.
  • Separation of data from structure — allows your application to work as well on 10 million rows as it does on the 10 rows you tested with. You might need to add an index or something, but there’s no need to retest the application. Adding an index will never change the query results (except maybe the order of the results if you forget an ORDER BY).
  • Declarative constraints — have you ever tried implementing a UNIQUE constraint? If so, I hope you were careful to avoid race conditions. And I hope you were mindful of performance, and didn’t just lock the whole data set. Regardless, there’s no reason to go through that complexity (which, again, distracts from the purpose of the application) because you can just declare it (see the sketch after this list).
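
Here is a minimal sketch of two of those points, ROLLBACK and a declared constraint, with no application code involved (the table is hypothetical):

CREATE TABLE account
(
  email   TEXT UNIQUE,  -- declared; no hand-rolled duplicate checks, no manual locking
  balance NUMERIC
);

BEGIN;
INSERT INTO account VALUES ('a@example.com', 100);
-- ...the application hits a bug here...
ROLLBACK;  -- the table is exactly as it was before BEGIN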

Notice that these are all benefits to the application built on SQL. The SQL implementation itself doesn’t enjoy those benefits — there’s a lot of imperative code behind something like ROLLBACK.

Although PostgreSQL still strongly encourages some common SQL practices and philosophies, it permits much more flexibility in this regard than most systems. User-defined types, functions, and the variety of procedural languages offer a lot of options when developing applications, while still benefiting from all of the advantages I listed above.

So, philosophies do matter when designing a platform. A lot. Just don’t get distracted by them when trying to build an innovative application — but hopefully, at that time, you’ve already honed your skills with good platforms like PostgreSQL.

[1] Although libraries often fall into this category, some special-purpose libraries are better thought of as applications. For instance, a (de)compression library solves a specific user need and interacts with certain formats; building an application only requires a user interface on top. Contrast that with a database library, where you have no idea what ultimate application might be built.

Taking a step back from ORMs
Sun, 26 Feb 2012

Do object-relational mappers (ORMs) really improve application development?

When I started developing web applications, I used perl. Not even all of perl, mostly just a bunch of “if” statements and an occasional loop that happened to be valid perl (aside: I remember being surprised that I was allowed to write a loop that would run on a shared server, because “what if it didn’t terminate?!”). I didn’t use databases; I used a mix of files, regexes to parse them, and flock to control concurrency (not because of foresight or good engineering, but because I ran into concurrency-related corruption).

I then made the quantum leap to databases. I didn’t see the benefits instantaneously[1], but it was clearly a major shift in the way I developed applications.

Why was it a quantum leap? Well, many reasons, which are outside of the scope of this particular post, but which I’ll discuss more in the future. For now, I’ll just cite the overwhelming success of SQL over a long period of time; and the pain experienced by anyone who has built and maintained a few NoSQL applications[2].

I don’t think ORMs are a leap forward; they are just an indirection[3] between the application and the database. Although it seems like you could apply the same kind of “success” argument, it’s not the same. First of all, ORM users are a subset of SQL users, and I think there are a lot of SQL users that are perfectly content without an ORM. Second, many ORM users feel the need to “drop down” to the SQL level frequently to get the job done, which means you’re not really in new territory.

And ORMs do have a cost. Any tool that uses a lot of behind-the-scenes magic will cause a certain amount of trouble — just think for a moment on the number of lines of code between the application and the database (there and back), and imagine the subtle semantic problems that might arise.

To be more concrete: one of the really nice things about using a SQL DBMS is that you can easily query the database as though you were the application. So, if you are debugging the application, you can quickly see what’s going wrong by seeing what the application sees right before the bug is hit. But you quickly lose that ability when you muddy the waters with thousands of lines of code between the application error and the database[4]. I believe the importance of this point is vastly under-appreciated; it’s one of the reasons that I think a SQL DBMS is a quantum leap forward, and it applies to novices as well as experts.

A less-tangible cost to ORMs is that developers are tempted to remain ignorant of the SQL DBMS and the tools that it has to offer. All these features in a system like PostgreSQL are there to solve problems in the easiest way possible; they aren’t just “bloat”. Working with multiple data sources is routine in any business environment, but if you don’t know about support for foreign tables in postgresql, you’re likely to waste a lot of time re-implementing similar functionality in the application. Cache invalidation (everything from memcache to statically-rendered HTML) is a common problem — do you know about LISTEN/NOTIFY? If your application involves scheduling, and you’re not using Temporal Keys, there is a good chance you are wasting development time and performance; and likely sacrificing correctness. The list goes on and on.
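
To give one of those features a concrete shape, cache invalidation with LISTEN/NOTIFY can be as small as the following sketch (the channel name and payload are made up; the payload form needs PostgreSQL 9.0 or later):

-- in the session(s) that maintain the cache (memcache, rendered HTML, ...):
LISTEN user_cache;

-- in the code path (or trigger) that changes the underlying data:
NOTIFY user_cache, 'user:42';

Each listening session is then told which key to invalidate, with no polling.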

Of course there are reasons why so many people use ORMs, at least for some things. A part of it is that application developers may think that learning SQL is harder than learning an ORM, which I think is misguided. But a more valid reason is that ORMs do help eliminate boilerplate in some common situations.

But are there simpler ways to avoid boilerplate? It seems like we should be able to do so without something as invasive as an ORM. For the sake of brevity, I’ll be using hashes rather than objects, but the principle is the same. The following examples are in ruby using the ‘pg’ gem (thanks Michael Granger for maintaining that gem!).

First, to retrieve records as a hash, it’s already built into the ‘pg’ gem. Just index into the result object, and you get a hash. No boilerplate there.
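
For example (a sketch; the table and columns are hypothetical):

# 'conn' is a PG::Connection object
res = conn.exec("SELECT name, email FROM users WHERE id = $1", [42])
res[0]  # => {"name"=>"Alice", "email"=>"alice@example.com"}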

Second, to do an insert, there is a little boilerplate. You have to build a string (yuck), put in the right table name, make the proper field list (unless you happen to know the column ordinal positions, yuck again), and then put in the values. And if you add or change fields, you probably need to modify it. Oh, and be sure to avoid SQL injection!

Fortunately, once we’ve identified the boilerplate, it’s pretty easy to solve:

# 'conn' is a PG::Connection object
def sqlinsert(conn, table, rec)
  table     = conn.quote_ident(table)
  rkeys     = rec.keys.map{|k| conn.quote_ident(k.to_s)}.join(",")
  positions = (1..rec.keys.length).map{|i| "$" + i.to_s}.join(",")
  query     = "INSERT INTO #{table}(#{rkeys}) VALUES(#{positions})"
  conn.exec(query, rec.values)
end

The table and column names are properly quoted, and the values are passed in as parameters. And, if you add new columns to the table, the routine still works; you just end up with defaults for the unspecified columns.
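
A call might look like this (the table and fields are hypothetical):

# builds: INSERT INTO "users"("name","email") VALUES($1,$2)
sqlinsert(conn, "users", {:name => "Alice", :email => "alice@example.com"})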

I’m sure others can come up with other examples of boilerplate that would be nice to solve. But the goal is not perfection; we only need to do enough to make simple things simple. And I suspect that only requires a handful of such routines.

So, my proposal is this: take a step back from ORMs, and consider working more closely with SQL and a good database driver. Try to work with the database, and find out what it has to offer; don’t use layers of indirection to avoid knowing about the database. See what you like and don’t like about the process after an honest assessment, and whether ORMs are a real improvement or a distracting complication.

[1]: At the time, MySQL was under a commercial license, so I tried PostgreSQL shortly thereafter. I switched between the two for a while (after MySQL became GPL), and settled on PostgreSQL because it was much easier to use (particularly for date manipulation).

[2]: There may be valid reasons to use NoSQL, but I’m skeptical that “ease of use” is one of them.

[3]: Some people use the term “abstraction” to describe an ORM, but I think that’s misleading.

[4]: The ability to explore the data through an ORM from a REPL might resemble the experience of using SQL. But it’s not nearly as useful, and certainly not as easy: if you determine that the data is wrong in the database, you still need to figure out how it got that way, which again involves thousands of lines between the application code that requests a modification and the resulting database update.

SQL: the successful cousin of Haskell
Sun, 25 Sep 2011

Haskell is a very interesting language, and shows up on sites like http://programming.reddit.com frequently. It’s somewhat mind-bending, but very powerful and has some great theoretical advantages over other languages. I have been learning it on and off for some time, never really getting comfortable with it but being inspired by it nonetheless.

But discussion on sites like reddit usually falls a little flat when someone asks a question like:

If haskell has all these wonderful advantages, what amazing applications have been written with it?

The responses to that question usually aren’t very convincing, quite honestly.

But what if I told you there was a wildly successful language, in some ways the most successful language ever, and it could be characterized by:

  • lazy evaluation
  • declarative
  • type inference
  • immutable state
  • tightly controlled side effects
  • strict static typing

Surely that would be interesting to a Haskell programmer? Of course, I’m talking about SQL.

Now, it’s all falling into place. All of those theoretical advantages become practical when you’re talking about managing a lot of data over a long period of time, and trying to avoid making any mistakes along the way. Really, that’s what relational database systems are all about.

I speculate that SQL is so successful and pervasive that it stole the limelight from languages like haskell, because the tough problems that haskell would solve are already solved in so many cases. Application developers can hack up a SQL query and run it over 100M records in 7 tables, glance at the result, and turn it over to someone else with near certainty that it’s the right answer! Sure, if you have a poorly-designed schema and have all kinds of special cases, then the query might be wrong too. But if you have a mostly-sane schema and mostly know what you’re doing, you hardly even need to check the results before using the answer.

In other words, if the query compiles, and the result looks anything like what you were expecting (e.g. the right basic structure), then it’s probably correct. Sound familiar? That’s exactly what people say about haskell.

It would be great if haskell folks would get more involved in the database community. It looks like a lot of useful knowledge could be shared. Haskell folks would be in a better position to find out how to apply theory where it has already proven to be successful, and could work backward to find other good applications of that theory.

Competing directly in the web application space against languages like ruby and javascript is going to be an uphill battle even if haskell is better in that space. I’ve worked with some very good ruby developers, and I honestly couldn’t begin to tell them where haskell might be a practical advantage for web application development. Again, I don’t know much about haskell aside from the very basics. But if someone like me who is interested in haskell and made some attempt to understand it and read about it still cannot articulate a practical advantage, clearly there is some kind of a problem (either messaging or technical). And that’s a huge space for application development, so that’s a serious concern.

However, the data management space is also huge — a large fraction of those applications exist primarily to collect data or present data. So, if haskell folks could work with the database community to advance data management, I believe that would inspire a lot of interesting development.

Database for a Zoo: the problem and the solution
Wed, 21 Sep 2011

Let’s say you’re operating a zoo, and you have this simple constraint:

You can put many animals of the same type into a single cage; or distribute them among many cages; but you cannot mix animals of different types within a single cage.

This rule prevents, for example, assigning a zebra to live in the same cage as a lion. Simple, right?

How do you enforce it? Any ideas yet? Keep reading: I will present a solution that uses a generalization of the standard UNIQUE constraint.

(Don’t dismiss the problem too quickly. As with most simple-sounding problems, it’s a fairly general problem with many applications.)

First of all, let me say that, in one sense, it’s easy to solve: see if there are any animals already assigned to the cage, and if so, make sure they are the same type. That has two problems:

  1. You have to remember to do that each time. It’s extra code to maintain, possibly an extra round-trip, slightly annoying, and won’t work unless all access to the database goes through that code path.
  2. More subtly, the pattern “read, decide what to write, write” is prone to race conditions when another process writes after you read and before you write. Without excessive locking, this is hard to get right — and likely to pass tests during development before failing in production.

[ Aside: if you use true serializability in PostgreSQL 9.1, that completely solves problem #2, but problem #1 remains. ]

Those are exactly the kinds of problems that a DBMS is meant to solve. But what to do? Unique indexes don’t seem to solve the problem very directly, and neither do foreign keys. I believe that they can be combined to solve the problem by using two unique indexes, a foreign key, and an extra table, but that sounds painful (perhaps someone else has a simpler way to accomplish this with SQL standard features?). Row locking and triggers might be an alternative, but also not a very clean solution.

A better solution exists in PostgreSQL 9.1 using Exclusion Constraints (Exclusion Constraints were introduced in 9.0, but this solution requires the slightly-more-powerful version in 9.1). If you have never seen an Exclusion Constraint before, I suggest reading a previous post of mine.

Exclusion Constraints have the following semantics (copied from documentation link above):

The EXCLUDE clause defines an exclusion constraint, which guarantees that if any two rows are compared on the specified column(s) or expression(s) using the specified operator(s), not all of these comparisons will return TRUE. If all of the specified operators test for equality, this is equivalent to a UNIQUE constraint…

First, as a prerequisite, we need to install btree_gist into our database (make sure you have the contrib package itself installed first):

CREATE EXTENSION btree_gist;

Now, we can use an exclude constraint like so:

CREATE TABLE zoo
(
  animal_name TEXT,
  animal_type TEXT,
  cage        INTEGER,
  UNIQUE      (animal_name),
  EXCLUDE USING gist (animal_type WITH <>, cage WITH =)
);

Working from the definition above, what does this exclusion constraint mean? If any two tuples in the relation are ever compared (let’s call these TupleA and TupleB), then the following will never evaluate to TRUE:

TupleA.animal_type <> TupleB.animal_type AND
TupleA.cage        =  TupleB.cage

[ Observe how this would be equivalent to a UNIQUE constraint if both operators were "=". The trick is that we can use a different operator -- in this case, "<>" (not equals). ]

Results: 

=> insert into zoo values('Zap', 'zebra', 1);
INSERT 0 1
=> insert into zoo values('Larry', 'lion', 2);
INSERT 0 1
=> insert into zoo values('Zachary', 'zebra', 1);
INSERT 0 1
=> insert into zoo values('Zeta', 'zebra', 2);
ERROR:  conflicting key value violates exclusion constraint "zoo_animal_type_cage_excl"
DETAIL:  Key (animal_type, cage)=(zebra, 2) conflicts with existing key (animal_type, cage)=(lion, 2).
=> insert into zoo values('Zeta', 'zebra', 3);
INSERT 0 1
=> insert into zoo values('Lenny', 'lion', 2);
INSERT 0 1
=> insert into zoo values('Lance', 'lion', 1);
ERROR:  conflicting key value violates exclusion constraint "zoo_animal_type_cage_excl"
DETAIL:  Key (animal_type, cage)=(lion, 1) conflicts with existing key (animal_type, cage)=(zebra, 1).
=> select * from zoo order by cage;
 animal_name | animal_type | cage
-------------+-------------+------
 Zap         | zebra       |    1
 Zachary     | zebra       |    1
 Larry       | lion        |    2
 Lenny       | lion        |    2
 Zeta        | zebra       |    3
(5 rows)

And that is precisely the constraint that we need to enforce!

  1. The constraint is declarative, so you don’t have to deal with different access paths to the database or different versions of the code. Merely the fact that the constraint exists means that PostgreSQL will guarantee it.
  2. The constraint is also immune from race conditions — as are all EXCLUDE constraints — because again, PostgreSQL guarantees it.

Those are nice properties to have, and if used properly, will simplify the overall application complexity and improve robustness.

Building SQL Strings Dynamically, in 2011
Sat, 09 Jul 2011

I saw a recent post, Avoid Smart Logic for Conditional WHERE Clauses, which actually recommended, “the best solution is to build the SQL statement dynamically—only with the required filters and bind parameters”. Ordinarily I appreciate that author’s posts, but this time I think that he let confusion run amok, as can be seen in a thread on reddit.

To dispel that confusion: parameterized queries don’t have any plausible downsides; always use them in applications. Saved plans have trade-offs; use them sometimes, and only if you understand the trade-offs.

When query parameters are conflated with saved plans, it creates FUD about SQL systems because it mixes the fear around SQL injection with the mysticism around the SQL optimizer. Such confusion about the layers of a SQL system is a big part of the reason that some developers move to the deceptive simplicity of NoSQL systems (I say “deceptive” here because it often just moves an even greater complexity into the application — but that’s another topic).

The confusion started with this query from the original article:

SELECT first_name, last_name, subsidiary_id, employee_id
FROM employees
WHERE ( subsidiary_id    = :sub_id OR :sub_id IS NULL )
  AND ( employee_id      = :emp_id OR :emp_id IS NULL )
  AND ( UPPER(last_name) = :name   OR :name   IS NULL )

[ Aside: In PostgreSQL those parameters should be $1, $2, and $3; but that's not relevant to this discussion. ]

The idea is that one such query can be used for several types of searches. If you want to ignore one of those WHERE conditions, you just pass a NULL as one of the parameters, which makes one side of the OR always TRUE, so the condition might as well not be there. Each condition can either be there and have one argument (restricting the results of the query), or be ignored by passing a NULL argument; thus effectively giving you 8 queries from one SQL string. By eliminating the need to use different SQL strings depending on which conditions you want to use, you reduce the opportunity for error.

The problem is that the article says this kind of query is a problem. The reasoning goes something like this:

  1. Using bind parameters forces the plan to be saved and reused for multiple queries.
  2. When a plan is saved for multiple queries, the planner doesn’t have the actual argument values.
  3. Because the planner doesn’t have the actual argument values, the “x IS NULL” conditions aren’t constant at plan time, and therefore the planner isn’t able to simplify the conditions (e.g., if one condition is always TRUE, just remove it).
  4. Therefore it makes a bad plan.

However, #1 is simply untrue, at least in PostgreSQL. PostgreSQL can save the plan, but you don’t have to. See the documentation for PQexecParams. Here’s an example in ruby using the “pg” gem (EDIT: Note: this does not use any magic query-building behind the scenes; it uses a protocol-level feature in the PostgreSQL server to bind the arguments):

require 'rubygems'
require 'pg'

conn = PGconn.connect("dbname=postgres")

conn.exec("CREATE TABLE foo(i int)")
conn.exec("INSERT INTO foo SELECT generate_series(1,10000)")
conn.exec("CREATE INDEX foo_idx ON foo (i)")
conn.exec("ANALYZE foo")

# Query using parameters. The planner sees the real arguments, so it will
# make the same plan as if you inlined them into the SQL string. In
# this case, 3 is not NULL, so it is simplified to just "WHERE i = 3",
# and it will choose to use an index on "i" for a fast search.
res = conn.exec("explain SELECT * FROM foo WHERE i = $1 OR $1 IS NULL", [3])
res.each{ |r| puts r['QUERY PLAN'] }
puts

# Now, the argument is NULL, so the condition is always true, and
# removed completely. It will surely choose a sequential scan.
res = conn.exec("explain SELECT * FROM foo WHERE i = $1 OR $1 IS NULL", [nil])
res.each{ |r| puts r['QUERY PLAN'] }
puts

# Saves the plan. It doesn't know whether the argument is NULL or not
# yet (because the arguments aren't provided yet), so the plan might
# not be good.
conn.prepare("myplan", "SELECT * FROM foo WHERE i = $1 OR $1 IS NULL")

# We can execute this with:
res = conn.exec_prepared("myplan",[3])
puts res.to_a.length
res = conn.exec_prepared("myplan",[nil])
puts res.to_a.length

# But to see the plan, we have to use the SQL string form so that we
# can use EXPLAIN. This plan should use an index, but because we're
# using a saved plan, it doesn't know to use the index. Also notice
# that it wasn't able to simplify the conditions away like it did for
# the sequential scan without the saved plan.
res = conn.exec("explain execute myplan(3)")
res.each{ |r| puts r['QUERY PLAN'] }
puts

# ...and use the same plan again, even with different argument.
res = conn.exec("explain execute myplan(NULL)")
res.each{ |r| puts r['QUERY PLAN'] }
puts

conn.exec("DROP TABLE foo")

See? If you know what you are doing, and want to save a plan, then save it. If not, do the simple thing, and PostgreSQL will have the information it needs to make a good plan.

My next article will be a simple introduction to database system architecture that will hopefully make SQL a little less mystical.

Why PostgreSQL Already Has Query Hints
Sat, 05 Feb 2011

This is a counterpoint to Josh’s recent post: Why PostgreSQL Doesn’t Have Query Hints. I don’t really disagree, except that I think that there are many different definitions of “hints” floating around, leading to a lot of confusion. I could subtitle this post “More Terminology Confusion” after my previous entry.

So, let’s pick a reasonable definition: “hints are some mechanism to influence the SQL planner to choose a better plan”. Why did I choose that definition? Because it’s the actual use case. If a user encounters a bad plan, or an unstable plan, they need a way to get it to choose a better plan. There’s plenty of room to argue about the right way to do that and the wrong way, but almost every DBMS allows some form of hints. Including PostgreSQL.

Here are a few planner variables you can tweak (out of many):

  • enable_seqscan
  • enable_mergejoin
  • enable_indexscan

Not specific enough for you? Well, you can try plantuner to pick or forbid specific indexes.

Want to enforce join order? Try setting from_collapse_limit.

Want to get even more specific? You can set the selectivity of individual operators.
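
For example, to steer a single problematic query away from a sequential scan without affecting anything else, the tweak can be scoped to one transaction (the query is hypothetical):

BEGIN;
SET LOCAL enable_seqscan = off;  -- reverts automatically at COMMIT/ROLLBACK
EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;
COMMIT;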

There is a philosophical difference between PostgreSQL’s approach and that of many other systems. In PostgreSQL, you are encouraged to specify costs and selectivities rather than exact plans. There are good reasons for that, such as the sheer number of possible plans for even moderately complex queries (as Josh points out). Additionally, specifying exact plans tends to lead you into exactly the type of trouble you are trying to avoid by specifying hints in the first place — after input cardinalities change, the previous plan may now be a very poor one.

PostgreSQL clearly has a set of mechanisms that could be called “hints”. It turns out that there are actually quite a lot of ways to control the plan in postgres; but they generally aren’t recommended except as a solution to a specific problem someone posts to the performance list. That is part of the postgresql culture: a bit like getting a prescription from a doctor, so that the doctor can see the whole picture, help you look for alternative solutions, and weigh the side effects of the treatment against the benefits. I’m exaggerating, of course — these tweaks are documented (well, most of them), and anyone can use them; you just won’t hear them shouted from the rooftops as recommendations.

Except in this post, I suppose, which you should use at your own risk.

Big Company Uses Product XYZ
Thu, 11 Nov 2010

Joshua Drake’s recent article makes some interesting points, but there’s one thing in particular I find missing among many of these discussions. From the article:

It appeared they felt we should be impressed that Facebook runs on MySQL not PostgreSQL. … The problem I have, is that Facebook data is worthless.

All of the concentration is on the company, and whether their use case matters (of course it does, at least to them and their customers). But phrases like “runs on” and “uses” are used too loosely, in my opinion.

Even with celebrity endorsements — for example, a basketball player endorsing shoes — at least they use shoes in roughly the same manner as you might. The shoes might not help you play basketball in any appreciable way, but at least “use” means the same for both the basketball player and you.

However, do you think that running a query at [insert big company here] involves just using the “mysql” client, logging in, and running any ad-hoc query you want? I doubt it. I suspect that the data is always spread around in complex ways with complex caches, and there’s a lot of custom supporting code to get the right information from the right cache at the right time. For every new query, they can unleash a team of very good engineers to build the necessary caches, provision the necessary servers, distribute data to the right places, write the code to populate and read the caches appropriately, and integrate it into the general data-movement architecture.

If your environment looks like that, then a lot of the little problems go away. One might complain that Slony is hard to set up; but in an environment like the one above, it’s insignificant. If there’s some missing feature, you can write it. If something is bothering you, you can fix it. People do that all the time with PostgreSQL, and many of those things get released in the community version. For MySQL, they tend to build up as “patch sets” (or forks, some might call them). I suspect that PostgreSQL gets more contributions because it does everything possible to make the process of community contribution smooth — clean code, no copyright assignment requirement, well-defined “commit fests”, community review, and a diverse group of core members, committers, and contributors. PostgreSQL also has a rock-solid foundation, giving developers more confidence to build the features they need without destabilizing the product.

If your environment doesn’t look like that, and you just want to use the product directly, then take advantage of that. Use the product that makes your life easier, helps you catch errors before they become problems, and keeps your data safe. By the time you scale up, you will be using the DBMS in such a radically different way that it almost doesn’t matter what DBMS you started with.

Exclusion Constraints are generalized SQL UNIQUE
Sat, 25 Sep 2010

Say you are writing an online reservation system. The first requirement you’ll encounter is that no two reservations may overlap (i.e. no schedule conflicts). But how do you prevent that?

It’s worth thinking about your solution carefully. My claim is that no existing SQL DBMS has a good solution to this problem before PostgreSQL 9.0, which has just been released. This new release includes a feature called Exclusion Constraints (authored by me), which offers a good solution to a class of problems that includes the “schedule conflict” problem.

I previously wrote a two part series (Part 1 and Part 2) on this topic. Chances are that you’ve run into a problem similar to this at one time or another, and these articles will show you the various solutions that people usually employ in the real world, and the serious problems and limitations of those approaches.

The rest of this article will be a brief introduction to Exclusion Constraints to get you started using a much better approach.

First, install PostgreSQL 9.0 (the installation instructions are outside the scope of this article), and launch psql.

Then, install two modules: “temporal” (which provides the PERIOD data type and associated operators) and “btree_gist” (which provides btree functionality via GiST).

Before installing these modules, make sure that PostgreSQL 9.0 is installed and that the 9.0 pg_config is in your PATH environment variable. Also, $SHAREDIR means the directory listed when you run pg_config --sharedir.

To install Temporal PostgreSQL:

  1. download the tarball
  2. unpack the tarball, go into the directory, and type “make install”
  3. In psql, type: \i $SHAREDIR/contrib/period.sql

To install BTree GiST (these directions assume you installed from source, some packages may help here, like Ubuntu’s “postgresql-contrib” package):

  1. Go to the postgresql source “contrib” directory, go to btree_gist, and type “make install”.
  2. In psql, type: \i $SHAREDIR/contrib/btree_gist.sql

Now that you have those modules installed, let’s start off with some basic Exclusion Constraints:

DROP TABLE IF EXISTS a;
CREATE TABLE a(i int);
ALTER TABLE a ADD EXCLUDE (i WITH =);

That is identical to a UNIQUE constraint on a.i, except that it uses the Exclusion Constraints mechanism; it even uses a normal BTree to enforce it. The performance will be slightly worse because of some micro-optimizations for UNIQUE constraints, but only slightly, and the performance characteristics should be the same (it’s just as scalable). Most importantly, it behaves the same under high concurrency as a UNIQUE constraint, so you don’t have to worry about excessive locking. If one person inserts 5, that will prevent other transactions from inserting 5 concurrently, but will not interfere with a transaction inserting 6.

Let’s take apart the syntax a little. The normal BTree is the default, so that’s omitted. The (i WITH =) is the interesting part, of course. It means that one tuple TUP1 conflicts with another tuple TUP2 if TUP1.i = TUP2.i. No two tuples may exist in the table if they conflict. In other words, there are no two tuples TUP1 and TUP2 in the table, such that TUP1.i = TUP2.i. That’s the very definition of UNIQUE, so that shows the equivalence. NULLs are always permitted, just like with UNIQUE constraints.

Now, let’s see if they hold up for multi-column constraints:

DROP TABLE IF EXISTS a;
CREATE TABLE a(i int, j int);
ALTER TABLE a ADD EXCLUDE (i WITH =, j WITH =);

The conditions for a conflicting tuple are ANDed together, just like UNIQUE. So now, in order for two tuples to conflict, TUP1.i = TUP2.i AND TUP1.j = TUP2.j. This is strictly a more permissive constraint, because conflicts require both conditions to be met. Therefore, this is identical to a UNIQUE constraint on (a.i, a.j).

What can we do that UNIQUE can’t? Well, for starters we can use something other than a normal BTree, such as Hash or GiST (for the moment, GIN is not supported, but that’s only because GIN doesn’t support the full index AM API; amgettuple in particular):

DROP TABLE IF EXISTS a;
CREATE TABLE a(i int, j int);
ALTER TABLE a ADD EXCLUDE USING gist (i WITH =, j WITH =);
-- alternatively using hash, which doesn't support
-- multi-column indexes at all
ALTER TABLE a ADD EXCLUDE USING hash (i WITH =);

So now we can do UNIQUE constraints using hash or gist. But that’s not a real benefit, because a normal btree is probably the most efficient way to support that, anyway (Hash may be in the future, but for the moment it doesn’t use WAL, which is a major disadvantage).

The difference really comes from the ability to change the operator to something other than “=“. It can be any operator that is:

  • Commutative
  • Boolean
  • Searchable by the given index access method (e.g. btree, hash, gist).

For BTree and Hash, the only operator that meets those criteria is “=”. But many data types (including PERIOD, CIRCLE, BOX, etc.) support lots of interesting operators that are searchable using GiST. For instance, “overlaps” (&&).

Ok, now we are getting somewhere. It’s impossible to specify the constraint that no two tuples contain values that overlap with each other using a UNIQUE constraint; but it is possible to specify such a constraint with an Exclusion Constraint! Let’s try it out.

DROP TABLE IF EXISTS b;
CREATE TABLE b (p PERIOD);
ALTER TABLE b ADD EXCLUDE USING gist (p WITH &&);
INSERT INTO b VALUES('[2009-01-05, 2009-01-10)');
INSERT INTO b VALUES('[2009-01-07, 2009-01-12)'); -- causes ERROR

Now, try out various combinations (including COMMITs and ABORTs), and try with concurrent sessions also trying to insert values. You’ll notice that potential conflicts cause transactions to wait on each other (like with UNIQUE) but non-conflicting transactions proceed unhindered. A lot better than LOCK TABLE, to say the least.

To be useful in a real situation, let’s make sure that the semantics extend nicely to a more complete problem. In reality, you generally have several exclusive resources in play, such as people, rooms, and time. But out of those, “overlaps” really only makes sense for time (in most situations). So we need to mix these concepts a little.

CREATE TABLE reservation(room TEXT, professor TEXT, during PERIOD);

-- enforce the constraint that the room is not double-booked
ALTER TABLE reservation
    ADD EXCLUDE USING gist
    (room WITH =, during WITH &&);

-- enforce the constraint that the professor is not double-booked
ALTER TABLE reservation
    ADD EXCLUDE USING gist
    (professor WITH =, during WITH &&);

Notice that we actually need to enforce two constraints, which is expected because there are two time-exclusive resources: professors and rooms. Multiple constraints on a table are ORed together, in the sense that an ERROR occurs if any constraint is violated. For the academic readers out there, this means that exclusion constraint conflicts are specified in disjunctive normal form (consistent with UNIQUE constraints).

The semantics of Exclusion Constraints extend in a clean way to support this mix of atomic resources (rooms, people) and resource ranges (time). Try it out, again with a mix of concurrency, commits, aborts, conflicting and non-conflicting reservations.

Exclusion constraints allow solving this class of problems quickly (in a couple lines of SQL) in a way that’s safe, robust, generally useful across many applications in many situations, and with higher performance and better scalability than other solutions.

Additionally, Exclusion Constraints support all of the advanced features you’d expect from a system like PostgreSQL 9.0: deferrability, applying the constraint to only a subset of the table (allows a WHERE clause), or using functions/expressions in place of column references.
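
For example, building on the reservation table above, a deferrable constraint that only applies to a subset of rows might look like this sketch (the confirmed column is hypothetical, added just for illustration):

ALTER TABLE reservation ADD COLUMN confirmed BOOLEAN NOT NULL DEFAULT false;

-- only confirmed reservations exclude each other, and conflicts are
-- reported at COMMIT rather than immediately
ALTER TABLE reservation
    ADD EXCLUDE USING gist (room WITH =, during WITH &&)
    WHERE (confirmed)
    DEFERRABLE INITIALLY DEFERRED;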

Flexible Schemas and PostgreSQL
Thu, 06 May 2010

First, what is a “flexible schema”? It’s hard to pin down an exact definition, but it’s used to mean a data model that permits changes in application data structures without needing to migrate old data or incur other administrative hassles.

That’s a worthwhile goal. Applications often grow organically, especially in the early, exploratory stages of development. For example, you may decide to track when a user last did something on the website, so that you can adapt news and notices for those users (e.g. “Did you know that we added feature XYZ since you last visited?”). Developers have a need to produce a prototype quickly to work out the edge cases (do we update that timestamp for all actions, or only certain ones?), and probably a need to put it in production so that the users can benefit sooner.

A common worry is that ALTER TABLE will be a major performance problem. That’s sometimes a problem, but in PostgreSQL, you can add a column to a table in constant time (not dependent on the size of the table) in most situations. I don’t think this is a good reason to avoid ALTER TABLE, at least in PostgreSQL (other systems may impose a greater burden).

There are good reasons to avoid ALTER TABLE, however. We’ve only defined one use case for this new “last updated” field, and it’s a fairly loose definition. If we use ALTER TABLE as a first reaction for tracking any new application state, we’d end up with lots of columns with overlapping meanings (all subtly different), and it would be challenging to keep them consistent with each other. More importantly, adding new columns without thinking through the meaning and the data migration strategy will surely cause confusion and bugs. For example, if you see the following table:

    CREATE TABLE users
    (
      name         TEXT,
      email        TEXT,
      ...,
      last_updated TIMESTAMPTZ
    );

you might (reasonably) assume that the following query makes sense:

    SELECT * FROM users
      WHERE last_updated < NOW() - '1 month'::INTERVAL;

Can you spot the problem? Old user records (before the ALTER TABLE) will have NULL for last_updated timestamps, and will not satisfy the WHERE condition even though they intuitively qualify. There are two parts to the problem:

  1. The presence of the last_updated field fools the author of the SQL query into making assumptions about the data, because it seems so simple on the surface.
  2. NULL semantics allow the query to be executed even without complete information, leading to a wrong result.

Let’s try changing the table definition:

    CREATE TABLE users
    (
      name       TEXT,
      email      TEXT,
      ...,
      properties HSTORE
    );

HSTORE is a set of key/value pairs. Some tuples might have the last_updated key in the properties attribute, and others may not. This accomplishes two things:

  1. There’s no need for ALTER TABLE or cluttering of the namespace with a lot of nullable columns.
  2. The name “properties” is vague enough that query writers would (hopefully) be on their guard, understanding that not all records will share the same properties.

You could still write the same (wrong) query against the second table with minor modification. Nothing has fundamentally changed. But we are using a different development strategy that’s easy on application developers during rapid development cycles, yet does not leave a series of pitfalls for users of the data. When a certain property becomes universally recorded and has a concrete meaning, you can plan a real data migration to turn it into a relation attribute instead.
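
Concretely, the “same (wrong) query with minor modification” mentioned above might look like this against the hstore column (a sketch; rows without the key still yield NULL and silently drop out):

    SELECT * FROM users
      WHERE (properties -> 'last_updated')::TIMESTAMPTZ
            < NOW() - '1 month'::INTERVAL;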

Now, we need some guiding principles about when to use a complex type to represent complex information, and when to use separate columns in the table. To maximize utility and minimize confusion, I believe the best guiding principle is the meaning of the data you’re storing across all tuples. When defining the attributes of a relation, if you find yourself using vague nouns such as “properties,” or resorting to complex qualifications (lots of “if/then” branching in your definition), consider less constrained data types like HSTORE. Otherwise, it’s best to nail down the meaning in terms of appropriate nouns, which will help keep the DBMS smart and queries simple (and correct). See Choosing Data Types and further guidance in reference [1].

I believe there are three reasons why application developers feel that relational schemas are “inflexible”:

  1. A reliance on NULL semantics to make things “magically work,” when in reality, it just makes queries succeed that should fail. See my previous posts: None, nil, Nothing, undef, NA, and SQL NULL and What is the deal with NULLs?.
  2. The SQL database industry has avoided interesting types, like HSTORE, for a long time. See my previous post: Choosing Data Types.
  3. ORMs make a fundamental false equivalence between an object attribute and a table column. There is a relationship between the two, of course; but they are simply not the same thing. This is a direct consequence of “The First Great Blunder”[2].

EDIT: I found a more concise way to express my fundamental point — During the early stages of application development, we only vaguely understand our data. The most important rule of database design is that the database should represent reality, not what we wish reality was like. Therefore, a database should be able to express that vagueness, and later be made more precise when we understand our data better. None of this should be read to imply that constraints are less important or that we need not understand our data. These ideas mostly apply only at very early stages of development, and even then, prudent use of constraints often makes that development much faster.

[1] Date, C.J.; Darwen, Hugh (2007). Databases, Types, and the Relational Model. pp. 377-380 (Appendix B, “A Design Dilemma”).

[2] Date, C.J. (2000). An Introduction To Database Systems, p. 865.

Temporal PostgreSQL Roadmap
Wed, 10 Mar 2010

Why are temporal extensions in PostgreSQL important? Quite simply, managing time data is one of the most common requirements, and current general-purpose database systems don’t provide us with the basic tools to do it. Every general-purpose DBMS falls short both in terms of usability and performance when trying to manage temporal data.

What is already done?

  • PERIOD data type, which can represent anchored intervals of time; that is, a chunk of time with a definite beginning and a definite end (in contrast to a SQL INTERVAL, which is not anchored to any specific beginning or end time).
    • Critical for usability because it acts as a set of time, so you can easily test for containment and other operations without using awkward constructs like BETWEEN or lots of comparisons (and keeping track of inclusivity/exclusivity of boundary points).
    • Critical for performance because you can index the values for efficient “contains” and “overlaps” queries (among others).
  • Temporal Keys (called Exclusion Constraints; available in the next release of PostgreSQL, 9.0), which can enforce the constraint that no two periods of time (usually for a given resource, like a person) overlap. See the documentation (look for the word “EXCLUDE”), and see my previous articles (part 1 and part 2) on the subject.
    • Critical for usability to avoid procedural, error-prone hacks to enforce the constraint with triggers or by splitting time into big chunks.
    • Critical for performance because it performs comparably to a UNIQUE index, unlike the other procedural hacks which are generally too slow to use for most real systems.

What needs to be done?

  • Range Types — Aside from PERIOD, which is based on TIMESTAMPTZ, it would also be useful to have very similar types based on, for example, DATE. It doesn’t stop there, so the natural conclusion is to generalize PERIOD into “range types” which could be based on almost any subtype.
  • Range Keys, Foreign Range Keys — If Range Types are known to the Postgres engine, that means that we can have syntactic sugar for range keys (like temporal keys, except for any range type), etc., that would internally use Exclusion Constraints.
  • Range Join — If Range Types are known to the Postgres engine, there could be syntactic sugar for a “range join,” that is, a join based on “overlaps” rather than “equals”. More importantly, there could be a new join type, a Range Merge Join, that could perform this join efficiently (without a Range Merge Join, a range join would always be a nested loop join). A hand-written version of such a join is sketched after this list.
  • Simple table logs — The ability to easily create an effective “audit log” or similar trigger-based table log, that can record changes and be efficiently queried for historical state or state changes.
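
To make the range join idea concrete, here is what such a join has to look like when written by hand today (a sketch using the PERIOD type; the tables are hypothetical):

SELECT s.employee, p.project
FROM shift s
JOIN project_window p ON s.during && p.during;  -- join on "overlaps" rather than "equals"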

I’ll be speaking on this subject (specifically, the new Exclusion Constraints feature) in the upcoming PostgreSQL Conference EAST 2010 (my talk description) in Philadelphia later this month and PGCon 2010 (my talk description) in Ottawa this May. In the past, these conferences and others have been a great place to get ideas and help me move the temporal features forward.

The existing features have been picking up a little steam lately. The temporal-general mailing list has some traffic now — fairly low, but enough that others contribute to the discussions, which is a great start. I’ve also received some great feedback from a number of people, including the folks at PGX. There’s still a ways to go before we have all the features we want, but progress is being made.
