.NET Object persistence options

I have a question that I just don't feel I've found a satisfactory answer for; either that, or I've not been looking in the right place.

Our system was originally built using .NET 1.1 (however, the projects all now support 3.5), and all entities are persisted to the database using stored procedures and a "SQLHelper" class that has the standard ExecuteReader and ExecuteNonQuery type methods.

So what generally happens is we'll have our entities, for example User and Role, and another class called UserIO that persists those objects to the database with methods like:

    public static void SaveUser(User user) // static method on the UserIO class

The reason for the separate IO file is to keep the IO separate from the entity. However, isn't it more satisfying to just call:

User.Save()

Maybe I'm wrong, but it just doesn't feel right to have these "IO" files scattered all over the place. So I'm thinking about looking at other options for persistence, and I wondered where the best place to start would be. I have used DataSets in the past but had some mixed experiences, particularly with their performance. I know LINQ is around now, but I've heard that rather than LINQ to SQL I should be using the ADO.NET Entity Framework; then somebody else told me the Entity Framework isn't quite right yet and I should be waiting for .NET 4.0. If that's the case, and with .NET 4.0 just around the corner, should I just carry on with my "IO" file approach and move to the Entity Framework when 4.0 is finally released? Or is there perhaps a more elegant class structure I could use, e.g. utilizing partial classes?
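
To illustrate what I mean, here's a rough sketch of the alternative I have in mind (the partial-class split is hypothetical, and SQLHelper is our existing helper):

    // User.cs - the entity itself, free of any IO code.
    public partial class User
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    // User.Persistence.cs - the persistence half of the same class, so
    // callers can write user.Save() while the IO still lives in its own file.
    public partial class User
    {
        public void Save()
        {
            // the same stored-procedure call UserIO.SaveUser makes today, e.g.
            // SQLHelper.ExecuteNonQuery("usp_SaveUser", Id, Name);
        }
    }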

I should say, I'm not looking at completely replacing the data access that already exists; I'm more concerned with the new entities I'm creating.

I'm sorry if this question is a little general; however, I don't have many people around to bounce this kind of thought off.

Monkish asked 4/3, 2010 at 18:44 Comment(0)

I have successfully used Entity Framework 3.5. There are some people, whom I would characterize as purists, who felt that Entity Framework violated some set of rules and should not be used.

In my opinion, the only rules that matter are your own. I recommend you begin experimenting with Entity Framework 3.5, since you have it now. Also, as soon as you can, you (and just about everyone else) need to begin experimenting with .NET 4.0. The Release Candidate is available for free, so there's no reason not to at least know what's available.

It's possible that you'll find you like the EF changes in 4.0 so much that you'll want to wait for it. It's just as likely that you won't feel a need to wait, and can go ahead and benefit from EF as it is in 3.5. I have, and I'm very glad I didn't wait.
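
For what it's worth, here's a minimal sketch of what day-to-day EF 3.5 code looks like. MyEntities is the designer-generated ObjectContext and Users its mapped entity set; both names are placeholders for whatever your model generates:

    using System;
    using System.Linq;

    class EfSketch
    {
        static void ListUsers()
        {
            // MyEntities derives from ObjectContext; it's generated from
            // the .edmx model, so the name here is just a placeholder.
            using (var context = new MyEntities())
            {
                var users = from u in context.Users
                            orderby u.Name
                            select u;

                foreach (var user in users)
                    Console.WriteLine(user.Name);

                // changes to tracked entities are written back with:
                // context.SaveChanges();
            }
        }
    }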

Edibles answered 4/3, 2010 at 18:53 Comment(2)
Thanks, that's some good encouragement not to ignore EF completely. I have been meaning to look at the RC; as with everything, though, it's finding the time! Would you say that EF is suitable for coming in alongside our existing data access? My question for you is very similar to the one for @LBushkin. Thanks, Ed.Monkish
@MrEdmundo: I have successfully used EF 3.5 in production applications. There are very few problems with it, and they are quite small, in my opinion.Edibles

If you are looking for object-relational mapping frameworks, NHibernate and the ADO.NET Entity Framework are the best-known options to look into. There's a longer list here: http://en.wikipedia.org/wiki/List_of_object-relational_mapping_software#.NET

As for the general question of how to design an object model for persistence, much of the design choice depends on the complexity of the system, the extensibility you require, whether you need to support multiple persistence stores (SQL Server, Oracle, the file system, etc.), and so on. The pattern you describe looks like a Data Transfer Object (DTO). It's a common design for separating persistence logic from business logic.

As an aside, a general principle of good system design is the single responsibility principle. When building a system, you have to decide whether it makes sense to combine different responsibilities into a single class. Combining responsibilities can often complicate a system and create design conflicts that are difficult to resolve.
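
As a rough illustration of that separation (all of the names below are hypothetical), the DTO carries state and nothing else, while the persistence responsibility lives behind its own abstraction:

    // The DTO: state only - no persistence logic, no business logic.
    public class UserDto
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    // Persistence is a separate responsibility, so it gets its own type.
    // Swapping SQL Server for Oracle or the file system means swapping
    // this implementation, not touching the DTO.
    public interface IUserStore
    {
        UserDto Load(int id);
        void Save(UserDto user);
    }

    public class SqlUserStore : IUserStore
    {
        public UserDto Load(int id)
        {
            // SELECT + mapping would go here
            return null;
        }

        public void Save(UserDto user)
        {
            // INSERT/UPDATE (or stored-procedure call) would go here
        }
    }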

Asleep answered 4/3, 2010 at 18:48 Comment(2)
Thank you for putting a name on my rather long-winded description :). Do you think that the type of mapping models you've described are suitable for coming in late in a product's life cycle (as in my case), or are they more suited to ground-up development?Monkish
It depends. Some libraries are good at working with POCOs; others are not. Libraries that require the consumer to implement numerous interfaces or inherit from specific base classes are clearly harder to introduce into a project late in the game. NHibernate, for example, is relatively good at working with POCOs - so you may be able to use it even late in a project. However, keep in mind that there is something to be said for consistency of implementation. Having multiple different ways to persist data in an application may not be conducive to long-term maintenance or developer on-boarding.Asleep

A pattern that I use quite regularly gives each object the following:

  • Data Transfer Object (DTO) - this keeps the memory used by datasets as small as possible.
  • Business object - takes at least the above DTO as a constructor argument, and performs any function on the DTO that is not a CRUD function.
  • CRUD / persistence methods in a repository class.

The latter can be done in either of two ways: you can have one big repository class, which is fine for applications / components with only a few objects, or you can have a separate repository for each object. A rough sketch of the three pieces follows.
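
Something like this, with all names invented for the example:

    // 1. DTO: just the fields, nothing else.
    public class OrderDto
    {
        public int Id { get; set; }
        public decimal Total { get; set; }
    }

    // 2. Business object: wraps the DTO and holds the non-CRUD behaviour.
    public class Order
    {
        private readonly OrderDto _dto;

        public Order(OrderDto dto) { _dto = dto; }

        public bool QualifiesForFreeShipping()
        {
            return _dto.Total >= 50m; // example business rule only
        }
    }

    // 3. Repository: owns the CRUD, either one big class or one per object.
    public class OrderRepository
    {
        public OrderDto GetById(int id)
        {
            // data access goes here
            return null;
        }

        public void Save(OrderDto dto)
        {
            // data access goes here
        }
    }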

Take a look at Rudy Lacovara's blog. He's recently done a series of posts on efficient data access using a similar pattern.

Adest answered 4/3, 2010 at 19:24 Comment(0)

It is a common pattern to have a repository that is separate from the model. The name "IO" is an unusual choice for such a pattern, but valid. Now, depending on who you talk to (TDD nuts come to mind), you may get flack for using a static class.
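
If that objection ever matters to you (unit testing is the usual reason), the standard alternative is an instance repository behind an interface. A hypothetical sketch, reusing the User entity from the question:

    // An interface lets tests substitute a fake store, which a static
    // UserIO class can't offer.
    public interface IUserRepository
    {
        void Save(User user);
    }

    public class SqlUserRepository : IUserRepository
    {
        public void Save(User user)
        {
            // the same stored-procedure call the static UserIO makes today
        }
    }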

Melodious answered 4/3, 2010 at 18:47 Comment(0)

Having a set of classes that implement the data functions is often called tiered programming (http://en.wikipedia.org/wiki/N-tier). By separating out the classes that access the data tier, you create a system which is much more maintainable. If you merged those functions into the classes that implement the business rules in your application, you would lose many of the advantages of a multi-tiered design.

Having the data access functions split into their own classes is good (three cheers for the designer); however, having them spread all over the place is bad. Ideally you would want the source for those functions to all be in the same directory or file (depending on the project size). If they are all together, you gain many advantages; having them split off into many (random?) locations defeats the purpose of modularizing this code.
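
To make that concrete (namespaces and names are invented for the example), the business tier calls into one well-known data-access location instead of scattering queries around:

    // Data tier: every data access class lives in one namespace/folder.
    namespace MyApp.DataAccess
    {
        public class CustomerData
        {
            public string GetCustomerName(int id)
            {
                // query goes here
                return string.Empty;
            }
        }
    }

    // Business tier: rules only; it talks to the data tier, never to SQL.
    namespace MyApp.Business
    {
        public class CustomerService
        {
            private readonly MyApp.DataAccess.CustomerData _data =
                new MyApp.DataAccess.CustomerData();

            public string DescribeCustomer(int id)
            {
                return "Customer: " + _data.GetCustomerName(id);
            }
        }
    }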

Scratchboard answered 4/3, 2010 at 18:55 Comment(0)
