Flexible Schemas and PostgreSQL
by Jeff Davis, 6 May 2010

First, what is a “flexible schema”? It’s hard to pin down an exact definition, but it’s used to mean a data model that permits changes in application data structures without needing to migrate old data or incur other administrative hassles.

That’s a worthwhile goal. Applications often grow organically, especially in the early, exploratory stages of development. For example, you may decide to track when a user last did something on the website, so that you can adapt news and notices for those users (e.g. “Did you know that we added feature XYZ since you last visited?”). Developers need to produce a prototype quickly to work out the edge cases (do we update that timestamp for all actions, or only certain ones?), and probably need to put it in production so that users can benefit sooner.

A common worry is that ALTER TABLE will be a major performance problem. It sometimes is, but in PostgreSQL, you can add a column to a table in constant time (independent of the table’s size) in most situations. I don’t think this is a good reason to avoid ALTER TABLE, at least in PostgreSQL (other systems may impose a greater burden).
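
As a minimal sketch (using the users table defined below): adding a nullable column with no default is a catalog-only change in PostgreSQL, so existing rows are never rewritten.

    -- Catalog-only change: existing rows are not rewritten, so this runs
    -- in constant time regardless of how large the table is.
    ALTER TABLE users ADD COLUMN last_updated TIMESTAMPTZ;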

There are good reasons to avoid ALTER TABLE, however. We’ve only defined one use case for this new “last updated” field, and it’s a fairly loose definition. If we use ALTER TABLE as a first reaction for tracking any new application state, we’d end up with lots of columns with overlapping meanings (all subtly different), and it would be challenging to keep them consistent with each other. More importantly, adding new columns without thinking through the meaning and the data migration strategy will surely cause confusion and bugs. For example, if you see the following table:

    CREATE TABLE users
    (
      name         TEXT,
      email        TEXT,
      ...,
      last_updated TIMESTAMPTZ
    );

you might (reasonably) assume that the following query makes sense:

    SELECT * FROM users
      WHERE last_updated < NOW() - '1 month'::INTERVAL;

Can you spot the problem? Old user records (those inserted before the ALTER TABLE) will have a NULL last_updated timestamp, and will not satisfy the WHERE condition even though they intuitively qualify. There are two parts to the problem:

  1. The presence of the last_updated field fools the author of the SQL query into making assumptions about the data, because it seems so simple on the surface.
  2. NULL semantics allow the query to be executed even without complete information, leading to a wrong result (the snippet below demonstrates this).
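
The second point is easy to demonstrate: comparing NULL with anything yields NULL, not FALSE, and WHERE only keeps rows for which the condition is TRUE.

    -- NULL compared with anything is NULL, and WHERE treats NULL as
    -- "do not include", so pre-ALTER rows silently drop out.
    SELECT NULL::TIMESTAMPTZ < NOW() - '1 month'::INTERVAL;  -- yields NULL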

Let’s try changing the table definition:

    CREATE TABLE users
    (
      name       TEXT,
      email      TEXT,
      ...,
      properties HSTORE
    );

HSTORE is a set of key/value pairs (in PostgreSQL, it’s a data type provided by the hstore extension). Some tuples might have the last_updated key in the properties attribute, and others may not; a minimal sketch of writing such a row follows the list below. This accomplishes two things:

  1. There’s no need for ALTER TABLE or cluttering of the namespace with a lot of nullable columns.
  2. The name “properties” is vague enough that query writers would (hopefully) be on their guard, understanding that not all records will share the same properties.
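
Here’s that sketch (the specific email address and the use of NOW() are just illustrations; hstore stores values as text, so the timestamp is cast on the way in):

    CREATE EXTENSION IF NOT EXISTS hstore;

    -- Record the property only where it is actually known. COALESCE
    -- handles rows whose properties column is still NULL.
    UPDATE users
       SET properties = COALESCE(properties, ''::HSTORE)
                        || HSTORE('last_updated', NOW()::TEXT)
     WHERE email = 'alice@example.com';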

You could still write the same (wrong) query against the second table with minor modification. Nothing has fundamentally changed. But we are using a different development strategy that’s easy on application developers during rapid development cycles, yet does not leave a series of pitfalls for users of the data. When a certain property becomes universally recorded and has a concrete meaning, you can plan a real data migration to turn it into a relation attribute instead.
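
To make that concrete, here is the same query against the hstore column, plus a version that handles missing properties explicitly (the cast is needed because hstore values are text):

    -- Just as wrong as before: rows lacking the key yield NULL and
    -- silently drop out of the result.
    SELECT * FROM users
      WHERE (properties -> 'last_updated')::TIMESTAMPTZ
            < NOW() - '1 month'::INTERVAL;

    -- More honest: decide explicitly what to do with rows that lack the
    -- property (here, treat them as qualifying). COALESCE also covers
    -- rows whose properties column is NULL.
    SELECT * FROM users
      WHERE NOT COALESCE(properties ? 'last_updated', FALSE)
         OR (properties -> 'last_updated')::TIMESTAMPTZ
            < NOW() - '1 month'::INTERVAL;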

Now, we need some guiding principles about when to use a complex type to represent complex information, and when to use separate columns in the table. To maximize utility and minimize confusion, I believe the best guiding principle is the meaning of the data you’re storing across all tuples. When defining the attributes of a relation, if you find yourself using vague nouns such as “properties,” or resorting to complex qualifications (lots of “if/then” branching in your definition), consider less constrained data types like HSTORE. Otherwise, it’s best to nail down the meaning in terms of appropriate nouns, which will help keep the DBMS smart and queries simple (and correct). See Choosing Data Types and further guidance in reference [1].

I believe there are three reasons why application developers feel that relational schemas are “inflexible”:

  1. A reliance on NULL semantics to make things “magically work,” when in reality, it just makes queries succeed that should fail. See my previous posts: None, nil, Nothing, undef, NA, and SQL NULL and What is the deal with NULLs?.
  2. The SQL database industry has avoided interesting types, like HSTORE, for a long time. See my previous post: Choosing Data Types.
  3. ORMs make a fundamental false equivalence between an object attribute and a table column. There is a relationship between the two, of course; but they are simply not the same thing. This is a direct consequence of “The First Great Blunder”[2].

EDIT: I found a more concise way to express my fundamental point — During the early stages of application development, we only vaguely understand our data. The most important rule of database design is that the database should represent reality, not what we wish reality was like. Therefore, a database should be able to express that vagueness, and later be made more precise when we understand our data better. None of this should be read to imply that constraints are less important or that we need not understand our data. These ideas mostly apply only at very early stages of development, and even then, prudent use of constraints often makes that development much faster.

[1] Date, C.J.; Darwen, Hugh (2007). Databases, Types, and the Relational Model. pp. 377-380 (Appendix B, “A Design Dilemma”).

[2] Date, C.J. (2000). An Introduction to Database Systems. p. 865.

Scalability and the Relational Model
by Jeff Davis, 7 March 2010

The relational model is just a way to represent reality. It happens to have some very useful properties, such as closure over many operations; but it is a purely logical model of reality. You can implement relational operations using hash joins, MapReduce, or pen and paper.

So, right away, it’s meaningless to talk about the scalability of the relational model. Given a particular question, it might be difficult to break it down into bite-sized pieces and distribute it to N worker nodes. But going with MapReduce doesn’t solve that scalability problem — it just means that you will have a hard time defining a useful map or reduce operation, or you will have to change the question into something easier to answer.

There may exist scalability problems in:

  • SQL, which defines requirements outside the scope of the relational model, such as ACID properties and transactional semantics.
  • Traditional architectures and implementations of SQL, such as the “table is a file” equivalence, lack of sophisticated types, etc.
  • Particular implementations of SQL — e.g. “MySQL can’t do it, so the relational model doesn’t scale”.

Why are these distinctions important? As with many debates, terminology confusion is at the core, and it prevents us from dealing with the problems directly. If SQL is defined in a way that causes scalability problems, we need to identify precisely the requirements that cause a problem, so that we can move forward without losing all that has been gained. If the traditional architectures are not suitable for some important use cases, they need to be adapted. If a particular implementation is not suitable, developers need to switch, or demand that it be made competitive.

The NoSQL movement (or at least the hype surrounding it) is far too disorganized to make real progress. Usually, incremental progress is best; and sometimes a fresh start is best, after drawing on years of lessons learned. But it’s never a good idea to start over with complete disregard for the past. For instance, an article from Digg starts off great:

“The fundamental problem is endemic to the relational database mindset, which places the burden of computation on reads rather than writes.”

That’s good because he blames it on the mindset, not the model, and then identifies a specific problem. But then the article completely falls flat:

“Computing the intersection with a JOIN is much too slow in MySQL, so we have to do it in PHP.”

A join is faster in PHP than in MySQL? Why bother even discussing SQL versus NoSQL if your particular implementation of SQL (MySQL) can’t even do a hash join, the exact operation that you need? Particularly when almost every other implementation can (including PostgreSQL)? That kind of reasoning won’t lead to solutions.
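
For the record, the intersection the article describes is a straightforward self-join, and PostgreSQL’s planner can execute it as a hash join. A sketch against a hypothetical diggs(user_id, item_id) table:

    CREATE TABLE diggs (user_id INT, item_id INT);

    -- Items dugg by both user 1 and user 2. With realistic data volumes
    -- (and an ANALYZE), EXPLAIN will typically show a Hash Join here.
    SELECT a.item_id
      FROM diggs a
      JOIN diggs b USING (item_id)
     WHERE a.user_id = 1
       AND b.user_id = 2;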

So, where do we go from here?

  1. Separate the SQL model from the other requirements (some of which may limit scalability) when discussing improvements.
  2. Improve the SQL model (my readers know that I’ve criticized SQL’s logical problems many times in the past).
  3. Improve the implementations of SQL, particularly how tables are physically stored.
  4. If you’re particularly ambitious, come up with a relational alternative to SQL that takes into account what’s been learned after decades of SQL and can become the next general-purpose DBMS language.

EDIT 2010-03-09: I should have cited Josh Berkus’s talk on Relational vs. Non-Relational (complete list of PGX talks), which was part of the inspiration for this post.
