In my last post, Why DBMSs are so complex, I raised the issue of type mismatches between the application language and the DBMS.
Type matching between the DBMS and the application is as important for successful application development as the types themselves. If a type behaves one way in the DBMS and a “similar” type behaves slightly differently in the application, confusion is inevitable. It’s also a source of unnecessary awkwardness: you already define the types that suit your business best in one place; why redefine them somewhere else, on top of a different basic type system?
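To make the mismatch concrete, here’s a minimal sketch using Python’s built-in sqlite3 module (the table and values are invented for illustration): SQLite’s type affinity silently converts a string that looks like a number into an integer, so the value that comes back is not the type the application sent.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n INTEGER)")

# Insert a Python str into an INTEGER column.
conn.execute("INSERT INTO t VALUES (?)", ("42",))

row = conn.execute("SELECT n FROM t").fetchone()

# SQLite's INTEGER affinity converted the text on the way in,
# so the application gets back an int, not the str it sent.
print(type(row[0]), row[0])  # <class 'int'> 42
```

A “similar” type on each side, quietly behaving differently: exactly the confusion described above.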
Developing a database application means converting your data into types understood by the DBMS, generating SQL code, sending it to the DBMS for evaluation, retrieving the result, and converting the result back into types understood by your application language. It’s even stranger when you use parameterized SQL: you generate some code with placeholders instead of literals, and then you send the literals separately (by “stranger” I certainly don’t mean “worse”: parameterized SQL is a huge improvement).
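A sketch of that round trip, again with Python’s sqlite3 (table and data invented for illustration): the SQL is a template with `?` placeholders, the literals travel separately, and the results are converted back into application types.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")

# Python values are converted into DBMS types on the way in...
conn.execute("INSERT INTO users VALUES (?, ?)", (1, "Ada"))

# ...the SQL text and the literals are sent separately...
rows = conn.execute("SELECT name FROM users WHERE id = ?", (1,)).fetchall()

# ...and the result is converted back into Python types.
print(rows)  # [('Ada',)]
```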
If this design were suggested for any purpose other than interacting with a DBMS, it would be met with immediate (and justifiable) suspicion. The use of “eval” is strongly discouraged, and many popular languages don’t even support it. So why is this design so widely accepted for database management systems?
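For comparison, here is the same pattern transplanted into a single language (the expression and numbers are invented for illustration): building source code as a string and handing it to eval works, but no reviewer would accept it when a direct call is available, and that is essentially what generating SQL text does.

```python
numbers = [1, 2, 3]

# The SQL-generation pattern in plain Python:
# build source code as a string, then evaluate it.
code = "sum([" + ", ".join(str(n) for n in numbers) + "])"
result = eval(code)  # evaluates the string "sum([1, 2, 3])"

# The direct call any reviewer would insist on instead.
direct = sum(numbers)

print(result, direct)  # 6 6
```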