Firstly, indexed views have two gotchas:
1) An updated row in the base table must be able to propagate through to the indexed view without the engine having to reference any other rows in the base table. This is why you can use SUM() and COUNT_BIG(), since those aggregates can be updated just by knowing what is in the changed rows, but you can't use MIN() or MAX(), since the engine would have to look back through the base table to recompute them (see the example after this list).
All the seemingly arbitrary restrictions on indexed views (no TVFs, no subqueries, no unions, etc.) boil down to the above reason. The engine has to be able to maintain the index without constantly looking at the whole base table.
2) AFTER triggers on the base table still get processed before the indexed views are updated.
These limit the complexity of what you can accomplish with indexed views alone.
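For example, here is a minimal sketch of gotcha 1 (the table and column names are made up for illustration): the view itself can be created, but the CREATE INDEX step is rejected because the engine cannot maintain MAX() from the changed rows alone.
CREATE TABLE orders (customer_id INT NOT NULL, amount MONEY NOT NULL)
GO
CREATE VIEW orders_max_per_customer
WITH SCHEMABINDING
AS
SELECT
     customer_id
    ,MAX(amount)  AS [max_amount] --not maintainable from the changed rows alone
    ,COUNT_BIG(*) AS [count_rows]
FROM dbo.orders
GROUP BY customer_id
GO
--This fails: if a customer's current maximum is deleted, the engine would have
--to rescan that customer's remaining rows in the base table to find the new one.
CREATE UNIQUE CLUSTERED INDEX UC__orders_max_per_customer
ON orders_max_per_customer (customer_id)
GO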
Secondly, test that denormalization will actually help. If you are just instantiating simple joins and you are already I/O bound, that will make the bottleneck worse. On the other hand, if you are precomputing large aggregates or taking vertical slices of extremely wide joins, it's more likely to be an improvement.
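One low-effort way to check, as a sketch (run it against a realistic data volume), is to turn on I/O and timing statistics, run the original aggregate and the denormalized version back to back, and compare logical reads and elapsed time:
SET STATISTICS IO ON
SET STATISTICS TIME ON
GO
--the original query, aggregating the base table directly
SELECT YEAR(...), MONTH(...), SUM(...)
FROM huge_data_table
GROUP BY YEAR(...), MONTH(...)
GO
--the same result served from the denormalized version (the monthly_summary view from the pattern below)
SELECT [year_...], [month_...], [sum_...]
FROM monthly_summary
GO
SET STATISTICS IO OFF
SET STATISTICS TIME OFF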
Thirdly, the pattern for using indexed views looks like this:
CREATE TABLE huge_data_table ( ... )
GO
CREATE VIEW huge_data_table_monthly_summary_index
WITH SCHEMABINDING --required before the view can be indexed
AS
SELECT
     YEAR(...)    AS [year_...]
    ,MONTH(...)   AS [month_...]
    ,SUM(...)     AS [sum_...]
    ,COUNT_BIG(*) AS [count_...] --COUNT_BIG(*) is required when the view has a GROUP BY
FROM dbo.huge_data_table --schema-bound views must use two-part names
GROUP BY YEAR(...), MONTH(...)
GO
CREATE UNIQUE CLUSTERED INDEX UC__xyz
ON huge_data_table_monthly_summary_index
([year_...],[month_...])
GO
CREATE VIEW monthly_summary AS
SELECT
     [year_...]
    ,[month_...]
    ,[sum_...]
    ,[count_...]
FROM huge_data_table_monthly_summary_index WITH (NOEXPAND) --force use of the view's index
GO
Then you can query monthly_summary and get the aggregates without having to recompute them every time. The engine updates huge_data_table_monthly_summary_index automatically whenever huge_data_table changes.
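As an illustration of that maintenance (same placeholders as above), any ordinary DML against the base table shows up in the summary immediately, because the view's index is maintained synchronously as part of the same transaction:
INSERT INTO huge_data_table ( ... ) VALUES ( ... )
GO
SELECT [year_...], [month_...], [sum_...], [count_...]
FROM monthly_summary --already reflects the new row; no trigger or job involved
GO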
It may seem like magic, but it works. Use it as far as you can in lieu of triggers/jobs/etc to synchronize tables. I have found that I can usually break down a complex reporting job into smaller, simpler pieces that can use indexed views, and joining the intermediate results on the fly at reporting time turns out to be fast enough to get the job done.
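As a sketch of that decomposition (the view names here are hypothetical; each one wraps its own indexed view built with the pattern above), the report-time join looks like this:
SELECT
     s.[year_...]
    ,s.[month_...]
    ,s.[sum_...] AS [monthly_sales]
    ,r.[sum_...] AS [monthly_returns]
FROM monthly_sales_summary s
LEFT JOIN monthly_returns_summary r
    ON  r.[year_...]  = s.[year_...]
    AND r.[month_...] = s.[month_...]
Each piece stays simple enough to satisfy the indexed-view restrictions, and the join runs over small pre-aggregated results instead of the base tables.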
If that's not enough for your needs, you are probably in the realm of log shipping or mirroring to a separate reporting server. There's not much middle ground where syncing manually makes sense.