PHP DataMapper with multiple persistence layers
I am writing a system in PHP that has to write to three persistence layers:

  • One web service
  • Two databases (one MySQL, one MSSQL)

The reason for this is legacy systems, and it cannot be changed.

I want to use the DataMapper pattern and am trying to establish the best way to achieve what I want. I have an interface like the following:

<?php
    $service = $factory->getService()->create($entity);
?>

Below is some contrived and cut down code for brevity:

<?php
class Post extends AbstractService
{
    protected $_mapper;

    public function create(Entity $post)
    {
        return $this->_mapper->create($post);
    }
}

abstract class AbstractMapper
{
    protected $_persistence;

    public function create(Entity $entity)
    {
        $data = $this->_prepareForPersistence($entity);
        return $this->_persistence->create($data);
    }

    abstract protected function _prepareForPersistence(Entity $entity);
}
?>

My question is this: because there are three persistence layers, there will likely need to be three mappers, one for each. I'd like a clean, design-pattern-inspired interface to make this work.

I see it as having three options:

  1. Inject three mappers into the Service and call create on each
  2. $_mapper is an array/collection and it iterates through them calling create on each
  3. $_mapper is actually a container object that acts as a further proxy and calls create on each
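To make option 3 concrete, here is a minimal sketch of a composite mapper that proxies `create()` to any number of concrete mappers behind one interface (the class and interface names are illustrative, not part of the existing codebase):

```php
<?php
// Hypothetical sketch of option 3: the service still holds a single
// $_mapper, but that mapper fans the call out to the real mappers.
interface MapperInterface
{
    public function create($entity);
}

class CompositeMapper implements MapperInterface
{
    /** @var MapperInterface[] */
    private $mappers;

    public function __construct(array $mappers)
    {
        $this->mappers = $mappers;
    }

    public function create($entity)
    {
        $results = [];
        foreach ($this->mappers as $mapper) {
            $results[] = $mapper->create($entity);
        }
        return $results;
    }
}
```

The service would be constructed with `new CompositeMapper([$webServiceMapper, $mysqlMapper, $mssqlMapper])` and never know there is more than one backend.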

Something strikes me as wrong with each of these solutions, so I would appreciate any feedback or recognised design patterns that might fit.

Exclamation answered 25/10, 2012 at 8:7 Comment(0)

I had to solve a similar problem many years ago, in the days of PEAR DB. In that particular case, there was a need to replicate the data across multiple databases.

We did not have the problem of the different databases having different mappings, though, so it was a fair bit simpler.

What we did was to facade the DB class and override the getResult function (or whatever it was called). This function analysed the SQL: if it was a read, it would send it to just one backend; if it was a write, it would send it to all of them.

This actually worked really well for a very heavily utilised site.

From that background, I would suggest entirely facading all of the persistence operations. Once you have done that, the implementation details are less relevant and can be changed at any time.
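That read-to-one/write-to-all facade could be sketched roughly as below, assuming each backend connection exposes a `query()` method (the class name, the method name, and the SELECT check are all illustrative assumptions, not the original PEAR DB code):

```php
<?php
// Sketch of a persistence facade: reads go to one backend,
// writes fan out to every backend.
class ReplicatingDb
{
    /** @var object[] backend connections, each with a query() method */
    private $backends;

    public function __construct(array $backends)
    {
        $this->backends = $backends;
    }

    public function query($sql)
    {
        // Crude read detection; real SQL analysis would be more careful.
        if (preg_match('/^\s*SELECT/i', $sql)) {
            return $this->backends[0]->query($sql);
        }
        $results = [];
        foreach ($this->backends as $backend) {
            $results[] = $backend->query($sql);
        }
        return $results;
    }
}
```

Once callers only see the facade, how many backends sit behind it becomes an implementation detail.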

From this perspective, any of your implementation ideas seem like a reasonable approach. There are various things you will want to think about though.

  • What if one of the backends throws an error?
  • What is the performance impact of writing to three database servers?
  • Can the writes be done asynchronously? (If so, ask the first question again.)
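On the first point, one common approach is to attempt every backend, collect the failures, and report partial success afterwards rather than aborting on the first error. A sketch, with hypothetical names (`PartialWriteException`, `createOnAll` are not from the question):

```php
<?php
// Attempt the write on every backend; surface all failures at once.
class PartialWriteException extends \RuntimeException
{
    /** @var array backend name => error message */
    public $errors;

    public function __construct(array $errors)
    {
        parent::__construct('Some backends failed');
        $this->errors = $errors;
    }
}

function createOnAll(array $mappers, $entity)
{
    $errors = [];
    foreach ($mappers as $name => $mapper) {
        try {
            $mapper->create($entity);
        } catch (\Exception $e) {
            $errors[$name] = $e->getMessage();
        }
    }
    if ($errors !== []) {
        throw new PartialWriteException($errors);
    }
}
```

Whether a partial write should then be rolled back or retried is a business decision the caller has to make.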

There is potentially another way to solve this problem as well: stored procedures and triggers. If you have a primary database server, you could write a trigger which, on commit (or thereabouts), connects to the other databases and synchronises the data.

If the data update does not need to be immediate, you could get the primary database to log changes and have another script that regularly "fed" this data into the other system. Again, the issue of errors will need to be considered.

Hope this helps.

Spank answered 24/2, 2013 at 19:28 Comment(0)

First, a bit of terminology: what you call three layers are, in fact, three modules. That is, you have three modules within the persistence layer.

Now, the basic premise of this problem is this: you MUST have three different pieces of persistence logic, corresponding to three different storage sources. This is something you can't avoid. Therefore, the question is just about how to invoke the write operation on these modules (assuming that for reads you don't need to call all three; if you do, that is a separate question anyway).

Of the three options you have listed, in my opinion the first is better, because it is the simplest of the three. The other two still need to call the three modules separately, with the additional work of introducing a container or some other data structure. You can't avoid calling the three modules somewhere.

If you go with the first option, you obviously need to work with interfaces, to provide a uniform abstraction for the user/client (in this case, the service).
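A minimal sketch of that first option, with all three mappers injected explicitly behind one interface (the interface and class names here are illustrative):

```php
<?php
// Option 1: the service knows about all three mappers directly,
// but only through a shared interface.
interface PostMapperInterface
{
    public function create($post);
}

class PostService
{
    private $webServiceMapper;
    private $mysqlMapper;
    private $mssqlMapper;

    public function __construct(
        PostMapperInterface $webServiceMapper,
        PostMapperInterface $mysqlMapper,
        PostMapperInterface $mssqlMapper
    ) {
        $this->webServiceMapper = $webServiceMapper;
        $this->mysqlMapper = $mysqlMapper;
        $this->mssqlMapper = $mssqlMapper;
    }

    public function create($post)
    {
        $this->webServiceMapper->create($post);
        $this->mysqlMapper->create($post);
        $this->mssqlMapper->create($post);
    }
}
```

The three calls are explicit and easy to follow, at the cost of touching the service whenever a backend is added or removed.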

My point is that:

  1. There is an inherent complexity in your problem which you can't simplify further.
  2. The first option is better, because the other two make things more complex, not simpler.

Motherofpearl answered 28/2, 2013 at 17:39 Comment(0)

I think option #2 is the best, and I would go with that. If you had 10+ mappers, then option #3 would make sense, shifting the create logic into the mapper itself; but since you have a reasonable number of mappers, it makes more sense to just inject them and iterate over them. Extending the functionality by adding another mapper would then be a matter of adding one line to your dependency injection configuration.
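A sketch of that option 2, assuming all mappers share a `create()` method (class and variable names here are illustrative, not from the question):

```php
<?php
// Option 2: the service iterates over an injected array of mappers,
// so adding a backend changes only the wiring, not the service.
class PostService
{
    /** @var object[] one mapper per persistence backend */
    private $mappers;

    public function __construct(array $mappers)
    {
        $this->mappers = $mappers;
    }

    public function create($post)
    {
        foreach ($this->mappers as $mapper) {
            $mapper->create($post);
        }
    }
}

// Wiring: a fourth backend is just one more element in this array.
// $service = new PostService([$webServiceMapper, $mysqlMapper, $mssqlMapper]);
```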

Piled answered 27/2, 2013 at 22:51 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.