Do object-relational mappers (ORMs) really improve application development?
When I started developing web applications, I used Perl. Not even all of Perl: mostly just a bunch of “if” statements and an occasional loop that happened to be valid Perl. (As an aside, I remember being surprised that I was allowed to run a loop on a shared server, because “what if it didn’t terminate?!”) I didn’t use databases; I used a mix of files, regexes to parse them, and flock to control concurrency (not because of foresight or good engineering, but because I ran into concurrency-related corruption).
I then made the quantum leap to databases. I didn’t see the benefits instantaneously, but it was clearly a major shift in the way I developed applications.
Haskell is a very interesting language, and shows up on sites like http://programming.reddit.com frequently. It’s somewhat mind-bending, but very powerful and has some great theoretical advantages over other languages. I have been learning it on and off for some time, never really getting comfortable with it but being inspired by it nonetheless.
But discussion on sites like reddit usually falls a little flat when someone asks a question like:
If Haskell has all these wonderful advantages, what amazing applications have been written with it?
Quite honestly, the responses to that question usually aren’t very convincing.
But what if I told you there was a wildly successful language, in some ways the most successful language ever, and it could be characterized by:
- lazy evaluation
- type inference
- immutable state
- tightly controlled side effects
- strict static typing
Surely that would be interesting to a Haskell programmer? Of course, I’m talking about SQL.
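To make the parallel concrete, here is a minimal sketch using Python’s sqlite3 module (the table and data are made up for illustration). A SELECT is a pure expression: it declares *what* you want, and the engine decides how, and how much, to evaluate, which is an analogue of lazy evaluation.

```python
import sqlite3

# A small in-memory database; the table "nums" is hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE nums (n INTEGER)")
conn.executemany("INSERT INTO nums VALUES (?)", [(i,) for i in range(1, 11)])

# The query below never mutates anything, and with LIMIT the engine
# can stop evaluating as soon as three rows are produced.
rows = conn.execute(
    "SELECT n * n FROM nums WHERE n % 2 = 0 LIMIT 3"
).fetchall()
print(rows)  # -> [(4,), (16,), (36,)]
```

Note also that the result column’s type is inferred from the expression `n * n`; nothing in the query names a type explicitly.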
I saw a recent post, Avoid Smart Logic for Conditional WHERE Clauses, which actually recommended: “the best solution is to build the SQL statement dynamically—only with the required filters and bind parameters”. Ordinarily I appreciate that author’s posts, but this time I think he let confusion run amok, as can be seen in a thread on reddit.
To dispel that confusion: parameterized queries don’t have any plausible downsides; always use them in applications. Saved plans have trade-offs; use them sometimes, and only if you understand the trade-offs.
When query parameters are conflated with saved plans, it creates FUD about SQL systems, because it mixes the fear around SQL injection with the mysticism around the SQL optimizer. Such confusion about the layers of a SQL system is a big part of the reason that some developers move to the deceptive simplicity of NoSQL systems (I say “deceptive” here because it often just moves an even greater complexity into the application — but that’s another topic).
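To show that dynamic SQL and bind parameters are compatible, here is a minimal sketch using Python’s sqlite3 module (the table, columns, and helper function are hypothetical). The statement text is assembled dynamically, with only the filters actually requested, yet every value still travels as a bind parameter, so there is no injection risk and no saved plan involved.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, dept TEXT, salary INTEGER)")
conn.executemany("INSERT INTO employees VALUES (?, ?, ?)", [
    ("alice", "eng", 100),
    ("bob", "sales", 80),
    ("carol", "eng", 120),
])

def find_employees(dept=None, min_salary=None):
    # Build the WHERE clause from only the requested filters,
    # but never interpolate values into the SQL text itself.
    clauses, params = [], []
    if dept is not None:
        clauses.append("dept = ?")
        params.append(dept)
    if min_salary is not None:
        clauses.append("salary >= ?")
        params.append(min_salary)
    sql = "SELECT name FROM employees"
    if clauses:
        sql += " WHERE " + " AND ".join(clauses)
    return [row[0] for row in conn.execute(sql, params)]

print(find_employees(dept="eng", min_salary=110))  # -> ['carol']
```

Each distinct combination of filters produces a different statement, so the optimizer plans each one with the parameters it will actually see — which is exactly the behavior the original article wanted, without giving up parameterization.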
The confusion started with this query from the original article:
In my last post, Why DBMSs are so complex, I raised the issue of type mismatches between the application language and the DBMS.
Type matching between the DBMS and the application is as important as types themselves for successful application development. If a type behaves one way in the DBMS, and a “similar” type behaves slightly differently in the application, that can only cause confusion. And it’s a source of unnecessary awkwardness: you already need to define the types that suit your business best in one place; why should you have to redefine them somewhere else, on top of a different basic type system?