The reason that database management systems usually either use exactly one write-ahead log file or a custom file system is that ordinary file systems are really bad at guaranteeing atomicity or ordering of writes. They are optimized for their own internal consistency and for performance, not for application-level transactions.
Therefore, only the order of writes within a single file can be trusted, unless you create a partition or image file with a specialized file system.
You can, though, write a transaction number with every write to the multiple files, and make it obvious whether a write completed, e.g. with XML- or JSON-style blocks that carry begin/end markers like <element>...</element> or { ... }, &c. Your code can then easily detect gaps across the files and determine the last consistent state after a crash, as in the sketch below.
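A minimal sketch of that recovery scheme in Python (the file names, record layout, and helper names are illustrative, not from the original answer): every write appends one JSON line tagged with a transaction number, and recovery takes the highest transaction that is complete in every file.

    import json

    def append_record(path, txn_id, payload):
        # One self-delimiting JSON object per line; a torn write leaves
        # an unparseable last line instead of silently corrupt data.
        with open(path, 'a', encoding='utf-8') as f:
            f.write(json.dumps({'txn': txn_id, 'data': payload}) + '\n')

    def last_complete_txn(path):
        last = -1
        with open(path, encoding='utf-8') as f:
            for line in f:
                try:
                    last = json.loads(line)['txn']
                except ValueError:   # incomplete line: stop at the gap
                    break
        return last

    # the last consistent state is the highest transaction number
    # that made it completely into *all* files
    consistent = min(last_complete_txn(p) for p in ('file1', 'file2'))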
To keep that last consistent state from being arbitrarily old because some write sat in a cache for minutes, any of these approaches can be combined with sync / fsync. Using sync / fsync also makes something like transactional commits possible, i.e. a guarantee that everything up to this point has been written, at least from the file system's point of view.
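Such a commit point looks roughly like this in Python (a sketch with an illustrative file name; flush() empties the user-space buffer, os.fsync() asks the kernel to push its cache to the device):

    import os

    with open('file1', 'ab') as f:
        f.write(b'{"txn": 42, "data": "..."}\n')
        f.flush()              # user-space buffer -> kernel page cache
        os.fsync(f.fileno())   # kernel page cache -> storage device
    # only after fsync returns do we treat everything up to here as committed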
Even then, your storage system might still lose the last write in a power outage, be it a hard disk or SSD with an internal cache, a NAS, &c.
The guarantees these systems give can vary wildly, which is a challenge to be considered with all approaches, whether one is using the file system or a traditional RDBMS for storage.
Using a UPS is definitely a great addition if you're writing to a built-in hard disk or SSD, especially if you shut your system down in a controlled manner once the UPS signals that it's about time to do so ^^
d1 = open('file1', 'wb+')  # the built-in open() takes one path at a time
d2 = open('file2', 'wb+')
I think this statement will put a write-lock on that file. The only thing you have to ensure is that a different process checks the lock status of the file. – Conditioned
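For what it's worth, a plain open() does not take any lock by itself on POSIX systems, so the lock status another process is supposed to check has to be an explicit advisory lock, e.g. via fcntl.flock (a POSIX-only sketch with an illustrative file name; on Windows one would reach for msvcrt.locking instead):

    import fcntl

    f = open('file1', 'wb+')
    # advisory exclusive lock; LOCK_NB makes this raise BlockingIOError
    # instead of waiting if another process already holds the lock
    fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
    try:
        f.write(b'...')
    finally:
        fcntl.flock(f, fcntl.LOCK_UN)
        f.close()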