In my last post, Why DBMSs are so complex, I raised the issue of type mismatches between the application language and the DBMS.
Type matching between the DBMS and the application is as important as types themselves for successful application development. If a type behaves one way in the DBMS, and a “similar” type behaves slightly differently in the application, that can only cause confusion. And it’s a source of unnecessary awkwardness: you already have to define the types that suit your business in one place, so why redefine them somewhere else, on top of a different basic type system?
At least we’re using PostgreSQL, the most extensible database available, where you can define sophisticated types and make them perform like native features.
But there are still problems. Most notably, it’s a non-trivial challenge to find an appropriate way to model NULLs in the application language. You can’t not use them in the DBMS, because the SQL spec generates them from oblivion, e.g. from an outer join or an aggregate function, even when you have no NULLs in your database. So the only way to model the same semantics in your application is to somehow make your application language understand NULL semantics.
```
=> -- aggregate with one NULL input
=> select sum(column1) from (values (NULL::int)) t;
 sum
-----

(1 row)

=> -- aggregate with two inputs, one of them NULL
=> select sum(column1) from (values (1),(NULL)) t;
 sum
-----
   1
(1 row)

=> -- aggregate with no input
=> select sum(column1) from (values (1),(NULL)) t where false;
 sum
-----

(1 row)

=> -- + operator
=> select 1 + NULL;
 ?column?
----------

(1 row)
```
I’ll divide the “NULL-ish” values of various languages into two broad categories:
- Separate type, few operators defined, error early, no 3VL — Python, Ruby, and Haskell fall into this category, because their “NULL-ish” values (None, nil, and Nothing, respectively) usually result in an immediate exception, unless the operator to which the NULL-ish value is passed handles it as a special case. Few built-in operators are defined for arguments of these types. These fail to behave like SQL NULL because they employ no three-valued logic (3VL) at all, and thus fail the fourth portion of the SQL example.
- Member of all types, every operator defined — Perl and R fall into this category. Perl’s undef can be passed through many built-in operators (like +), but never uses 3VL, so it fails the fourth portion of the SQL example. R uses a kind of 3VL for its NA value, but applies it everywhere, so sum(c(1,NA)) results in NA (thus failing the second portion of the SQL example). In R you can omit NAs from the sum explicitly (not a very good solution, by the way), but then it fails the first portion of the SQL example.
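The first category’s fail-early behavior is easy to demonstrate in Python: None supports almost no operators, so a stray NULL-ish value blows up at the point of use rather than propagating.

```python
# Python's None is the sole value of its own type (NoneType), and almost
# no operators accept it, so misuse fails immediately:
try:
    1 + None
except TypeError as e:
    print(e)  # unsupported operand type(s) for +: 'int' and 'NoneType'
```

Contrast that with SQL, where 1 + NULL quietly yields NULL.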
As far as I can tell (correct me if I’m mistaken), none of these languages support the third portion of the SQL example: the sum of an empty list in SQL is NULL. The languages that I tested with a built-in sum operator (Python, R, Haskell) all return 0 when passed an empty list.
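For comparison, here is Python’s behavior, which treats the empty sum as the additive identity rather than as an unknown:

```python
# SQL's SUM over zero rows yields NULL; Python's built-in sum() returns 0.
print(sum([]))   # 0
print(sum([1]))  # 1
# sum([1, None]) does not use 3VL at all: it raises TypeError instead.
```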
Languages from the first category appear safer, because you catch errors earlier rather than later. However, transforming SQL NULLs into None, nil, or Nothing is actually quite dangerous: a change in the data you store in your database (inserting NULLs, or deleting records that may be aggregated) or even a change in a query (an outer join, or an aggregate that may have no input) can produce NULLs, and therefore exceptions, that evade even rigorous testing procedures and sneak into production.
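A sketch of that danger using Python’s stdlib sqlite3 (the tables and data here are invented for illustration): neither table contains a NULL, yet the outer join manufactures one as soon as an unmatched row appears.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, customer TEXT);
    CREATE TABLE shipments (order_id INTEGER, shipped TEXT);
    INSERT INTO orders VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO shipments VALUES (1, '2024-01-01');
""")
# No column in either table holds a NULL, but the LEFT JOIN creates one
# for order 2; the driver hands it to Python as None.
rows = conn.execute("""
    SELECT o.id, s.shipped
    FROM orders o LEFT JOIN shipments s ON s.order_id = o.id
    ORDER BY o.id
""").fetchall()
print(rows)  # [(1, '2024-01-01'), (2, None)]
```

Any code downstream that does arithmetic or string operations on `shipped` will raise TypeError the first time an unshipped order shows up, possibly long after testing.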
Languages from the second category tend to pass the “undef” or “NA” along deeper into the application, which can cause unintuitive and difficult-to-trace problems. Perhaps worse, something will always happen, and the result usually looks like a correct answer even when it is wrong.
So where does that leave us? I think the blame here rests entirely on the SQL standard’s definition of NULL, and the inconsistency between “not a value at all” and “the third logical value” (both of which can be used to describe NULL in different contexts). Not much can be done about that, so I think the best strategy is to try to interpret and remove NULLs as early as possible. They can be removed from result sets before returning to the client by using COALESCE, and they can be removed after they reach the client with client code. Passing them along as some kind of special value is only useful if your application already must be thoroughly aware of that special value.
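Both removal strategies can be sketched with Python’s stdlib sqlite3 (the table here is hypothetical): COALESCE strips the NULL inside the query, while client code can do the same after the fact.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")  # empty: the aggregate has no input

# SUM over zero rows yields NULL, which the driver maps to Python's None
(raw,) = conn.execute("SELECT SUM(x) FROM t").fetchone()
print(raw)     # None

# Strategy 1: remove the NULL in the result set, before it reaches the client
(cooked,) = conn.execute("SELECT COALESCE(SUM(x), 0) FROM t").fetchone()
print(cooked)  # 0

# Strategy 2: remove it in client code, once it has reached the application
total = raw if raw is not None else 0
print(total)   # 0
```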
Note: Microsoft has defined a “DBNull” value, and from browsing the docs, it appears a substantial amount of work went into making it behave like SQL NULL. This includes a special set of SQL types and operators. Microsoft appears to be making a lot of progress matching DBMS and application types more closely, but I think the definition of SQL NULL is a lost cause.