Some tips for report model design:
1. Build a data mart
Several tools are similar to Report Builder: Business Objects and Oracle Discoverer, to name a couple. They all have metadata layers that get you some of the way to an end-user reporting tool; however, they still really need to be spoon-fed data in a suitable format in order to produce an effective solution. This means that you really need to think in terms of building some sort of data mart as well.
Without clean data, the tools will expose all of the gotchas in the production database, so users will have to understand these to get correct results out. This means that the reporting should really come off a clean data source.
You have approximately zero control over the SQL that these tools produce, so they are quite capable of producing queries that will herniate your production database. This means that your reporting should take place on a separate server. A schema that is friendly to ad-hoc tools (such as a star schema) will mitigate the worst of the potential issues with performance.
2. Clean the data
There is no developer in the loop with ad-hoc tools, so users will naively use the tool without knowing what the data issues are. Inaccurate query results will always be viewed as the fault of the tool. For credibility, these pitfalls need to be eliminated from the data set upstream of the tool.
3. Make the navigation robust and idiot-proof
Report Builder can set up restrictions on moving from one entity to another. Without these, it's possible to join multiple tables together in an m:m relationship. This is called a Fan Trap and will return incorrect totals. You need to set up the model so that individual fact tables are aggregated on common dimensions - i.e. rolled up before they are joined. Getting this right eliminates a class of errors. Most tools have some mechanism for preventing this.
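To see why joining two fact tables directly produces a fan trap, here is a minimal sketch using Python's stdlib `sqlite3`. The table and column names (`sales`, `shipments`, `customer_id`) are invented for illustration; the point is the row multiplication and the pre-aggregation fix.

```python
import sqlite3

# Hypothetical mini-schema: one customer with two sales rows and two
# shipment rows. All names here are made up for this sketch.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE sales(customer_id INT, amount INT);
CREATE TABLE shipments(customer_id INT, qty INT);
INSERT INTO sales VALUES (1, 100), (1, 200);
INSERT INTO shipments VALUES (1, 5), (1, 7);
""")

# Naive join: each sales row pairs with each shipment row (2 x 2 = 4 rows),
# so both totals are inflated -- the fan trap.
naive = con.execute("""
SELECT SUM(s.amount), SUM(sh.qty)
FROM sales s JOIN shipments sh ON s.customer_id = sh.customer_id
""").fetchone()
print(naive)  # (600, 24) instead of the true (300, 12)

# Fix: roll each fact table up to the common grain first, then join
# the aggregates.
fixed = con.execute("""
SELECT s.total_amount, sh.total_qty
FROM (SELECT customer_id, SUM(amount) AS total_amount
      FROM sales GROUP BY customer_id) s
JOIN (SELECT customer_id, SUM(qty) AS total_qty
      FROM shipments GROUP BY customer_id) sh
  ON s.customer_id = sh.customer_id
""").fetchone()
print(fixed)  # (300, 12)
```

The model-level restrictions mentioned above exist precisely to stop the tool from ever generating the naive join.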
4. Make the data aggregate
You get this for free from Business Objects, but you will have to put an aggregate measure over each base measure explicitly with Report Builder. Hide the base measures and expose the aggregates. This means that the system will roll up the data to the grain of the dimensions the user has chosen.
Conclusion
Placing an ad-hoc tool directly over a production database is not likely to work well. The data will have too many pitfalls and the schema will not lend itself to reporting. This means that you are up for some work building a data mart to scrub the data and prep it for the tool. If you are spending significant time building ad-hoc extracts, there might be a business case simply in the developer time this would save later on.
EDIT: The Report Model Wizard (like most such things) makes quite a mess when run. You'll have to tweak the settings, such as restricting the generation of irrelevant aggregates. In the past I've had quite good results by generating sums, hiding all of the base measures and exposing the aggregates as if they were base measures. This gave behaviour much like Business Objects. On specific instances you might also want to expose count, min/max or averages as well.
The particular instance I'm thinking of was quite a large report model with about 1,500 fields in it, so the aggregate-fest generated by the wizard was unmanageable at 10,000+ fields in total. You can also set up folder structures a bit like Analysis Services and use these to organise the fields. Finally, if entered, the description on a field will show up as a tooltip when you hover over it in the end-user tool.