My employer is evaluating RT, so we have a test instance set up, and it’s in a working state. The application is written in Perl with the Mason framework, supports many databases (including PostgreSQL), and generally has a decent feature set.
We need to migrate from our previous system, which means importing that data into RT. RT’s tools and APIs didn’t meet our importing needs, so we looked into the schema to do the import directly. I decided the easiest way to figure out how the schema fits together was to just turn on query logging. We reloaded the page, and all of a sudden a deluge of output appeared in our “tail -f”. At first I thought it was just the normal number of queries generated by some abstraction layer, and we could dig through them. But it was about 700 queries for a single page load.
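For reference, turning on statement logging in PostgreSQL is a one-line configuration change (the exact config path and reload mechanism depend on your installation):

```
# postgresql.conf -- log every statement the server executes
log_statement = 'all'
```

After a reload (`pg_ctl reload`, or `SELECT pg_reload_conf();` from psql), every query the application issues shows up in the server log, ready for “tail -f”.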
Additionally, as you might expect from this type of application, the schema is pretty bad. It’s basically a lot of auto-generated id references all over the place pointing through poorly-named tables, a lot of “type” attributes (that hold the name of an application object class), and some bizarre organization. For instance, to associate a comment with a ticket, you have to join through the “transactions” table (a log of operations that were performed). There are also no foreign keys, at all, which is just wonderful when trying to figure out the schema.
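As a sketch of what that comment lookup ends up looking like (the table and column names here are our reading of the schema, so treat them as assumptions rather than a definitive reference), you go Tickets to Transactions to the comment content:

```sql
-- Hypothetical reconstruction: fetch the comments on ticket 42.
-- There is no direct ticket reference on the comment itself; you have
-- to join through the Transactions operation log instead.
SELECT a.Content
FROM Tickets t
JOIN Transactions x ON x.ObjectType = 'RT::Ticket'  -- "type" attribute holding a Perl class name
                   AND x.ObjectId   = t.id
JOIN Attachments  a ON a.TransactionId = x.id
WHERE t.id = 42
  AND x.Type = 'Comment';
```

Note the 'RT::Ticket' string: that’s one of those “type” attributes holding an application class name, and since there are no foreign keys, nothing in the database enforces that ObjectId actually points at a ticket.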
There are some simple, well-established best practices for database development that are easy to find. It’s not even hard to support multiple databases: you can just write separate data access code for each database you want to support. Yet application developers who know nothing about data management decide they can do it better. RT is a perfect example of this. I’m not saying all developers are like this, but I think there’s a much greater tendency for developers to re-invent data management than to, say, invent a new memory management scheme, a new filesystem API, or a new sorting algorithm.