The higher-ups in my company were told by good friends that flat files are the way to go, and that we should switch everything we do from SQL Server to them. We have over 300 servers and hundreds of different databases. Of just the few I'm involved with, quite a few hold > 10 billion records, with upwards of 100k new records a day and who knows how many updates... A couple of others and I need to come up with a response explaining why we shouldn't do this. Most of our stuff is ASP.NET, with some legacy ASP. Our idea was to write a simple console app that tests and times the same interactions (large inserts, searches, updates, and so on) against both a flat file stored on the network and SQL Server over the network, including things like random network disconnects. That would show them how bad flat files can be, especially when you're dealing with millions of records.
What things should I use in my response? What should I do with my demo code to illustrate this?
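A rough sketch of the console demo I have in mind, written here in Python with SQLite standing in for SQL Server and a local CSV standing in for the network flat file (both are stand-ins; the real test would use ADO.NET against our actual servers and a UNC share), might look like this:

```python
import csv
import sqlite3
import time

N = 100_000  # record count; scale this up to make the gap obvious
rows = [(i, f"patient{i}", i % 97) for i in range(N)]  # hypothetical records

# --- Flat file: bulk insert, then a keyed lookup ---
t0 = time.perf_counter()
with open("records.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)
flat_insert = time.perf_counter() - t0

t0 = time.perf_counter()
with open("records.csv", newline="") as f:
    # No index exists: every lookup is a full scan of the file.
    hit = next(r for r in csv.reader(f) if r[0] == str(N - 1))
flat_search = time.perf_counter() - t0

# --- Database: same insert, lookup via an indexed primary key ---
db = sqlite3.connect("records.db")
db.execute("CREATE TABLE IF NOT EXISTS records "
           "(id INTEGER PRIMARY KEY, name TEXT, ward INTEGER)")
db.execute("DELETE FROM records")

t0 = time.perf_counter()
with db:  # one transaction: all rows commit, or none do
    db.executemany("INSERT INTO records VALUES (?, ?, ?)", rows)
db_insert = time.perf_counter() - t0

t0 = time.perf_counter()
hit = db.execute("SELECT * FROM records WHERE id = ?", (N - 1,)).fetchone()
db_search = time.perf_counter() - t0

print(f"flat file: insert {flat_insert:.3f}s, search {flat_search:.3f}s")
print(f"database:  insert {db_insert:.3f}s, search {db_search:.3f}s")
```

The searched-for record is deliberately the last one written, so the flat-file scan pays full cost; the indexed lookup does not care where the row sits.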
My short list so far:
- Security
- Concurrent access
- Performance with large amounts of data
- Amount of time to do such a massive rewrite/switch and huge $ cost
- Lack of transactions
- PITA to map relational data to flat files
- NTFS handles directories containing huge numbers of files poorly
- Lack of ad-hoc data searching/manipulation
- Enforcing data integrity
- Recovery from network outage
- Client delay while waiting for other clients' changes to commit
- Most everybody stopped using flat files for this type of storage long ago for good reason
- Load balancing/replication
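To demo the transactions/integrity bullets specifically, the sketch below (again Python with SQLite as a stand-in, and a hypothetical `flat_update` helper) shows that changing one field in a flat file means reading and rewriting the entire file, with no rollback if the process dies mid-rewrite, whereas the database does it in one atomic statement:

```python
import csv
import sqlite3

# Hypothetical records: id, name, balance
rows = [(i, f"cust{i}", 100) for i in range(10_000)]

with open("data.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)

def flat_update(target_id, new_balance):
    # Changing ONE field: read every record, modify in memory,
    # rewrite every record. A crash mid-rewrite leaves a truncated,
    # corrupt file; there is no rollback.
    with open("data.csv", newline="") as f:
        data = [r for r in csv.reader(f)]
    for r in data:
        if r[0] == str(target_id):
            r[2] = str(new_balance)
    with open("data.csv", "w", newline="") as f:
        csv.writer(f).writerows(data)

flat_update(42, 250)

# Database: one statement touches one row, inside a transaction.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT, balance INTEGER)")
db.executemany("INSERT INTO t VALUES (?, ?, ?)", rows)
with db:  # atomic: commits fully or not at all
    db.execute("UPDATE t SET balance = ? WHERE id = ?", (250, 42))
print(db.execute("SELECT balance FROM t WHERE id = 42").fetchone())
```

Kill the process partway through `flat_update`'s second `open(...)` and the flat file is gone; kill it during the `UPDATE` and the database simply rolls back.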
I fear that this will be a great post on the Daily WTF someday if I can't stop it now.
Additionally
Does anyone know whether anything in HIPAA could be used in this fight? Many of our records are patient records...