What's so bad about running an IIS web application from a UNC share, as suggested in the DotNetNuke docs?
I've been taught in the past that it is unwise to run a web application from a UNC share. The reasons I remember are security, rights and authorization trouble, and performance. However, the DotNetNuke documentation says:

The web farm configuration that DotNetNuke initially supports involves two or more front end web servers ("web-heads") whose IIS website root directories are mapped to a common UNC share on a remote file server. The UNC share contains the application source code as well as any static content for the individual sites.
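As I understand it, the mapping itself would be a one-line change per web head; a sketch of what I think they mean (the site name and \\fileserver\dnnroot share are my placeholders, not from the docs):

```shell
rem On each web head, point the IIS site's root at the shared UNC path
%windir%\system32\inetsrv\appcmd.exe set vdir "DNN Site/" -physicalPath:"\\fileserver\dnnroot"
```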

Somehow this sounds to me like a poor man's configuration, and I feel like I'd be opening a potential Pandora's box. Is it wise to follow DotNetNuke Corp's suggestion here?

Basie answered 14/7, 2011 at 15:28 Comment(0)
There are a few things that can become problematic with this type of configuration in DotNetNuke.

  1. This method, although a "web farm" scenario, results in a single point of failure: the UNC share becomes your choke point, and if it goes down, all nodes go down.
  2. Disk IO and network communication configuration can be an issue. This is related to the number of "file system watchers" that can be opened/maintained on remote content. This issue isn't too big a deal in MOST cases, but it can be a royal PITA when it happens.
  3. Security can be an issue, but typically only when first setting up the configuration. You need to be sure you properly assign permissions so that the application's user account has full access to the UNC share.
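For point 3, a minimal sketch of the permission grant, assuming a hypothetical domain service account DOMAIN\svc-dnn used as the app pool identity (the account and share names are placeholders):

```shell
rem Grant the app pool account modify rights, inherited down the folder tree
rem (OI)=object inherit, (CI)=container inherit, M=modify
icacls \\fileserver\dnnroot /grant "DOMAIN\svc-dnn:(OI)(CI)M"
```

Remember that share-level permissions must also allow the account; the effective access is the more restrictive of the NTFS and share permissions.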

My guess as to why this is the "default" recommendation from DotNetNuke Corporation is the following. NOTE: these are ONLY my opinions.

  1. This configuration is the "least" complicated when it comes to real-time content syncing. With only one file system, there is no need to talk about replication and things of that nature.
  2. Default caching uses file-based caching; with both servers going to the same file system, cache expiration is easy to manage. With replicated file systems it wouldn't work.
Malchy answered 14/7, 2011 at 19:30 Comment(3)
I guess I was also under the impression that an installation on each system would give me the possibility to update one, leave the other running, then update the other, achieving 100% uptime during upgrades. Using a UNC share creates a single point of failure, as you also mention in (1).Basie
Your upgrade scenario wouldn't work either: the database would be updated but the site files wouldn't, which would put the system in limbo.Malchy
Good point; I guess that scenario should be struck through, and continuity is simply not an option during upgrades, unless we consider a whole different and more complex setup. This is the accepted answer, but a lot of info is also in the second answer by ScottS. Thanks go to the both of you.Basie
There is nothing inherently wrong with using a UNC share. At a previous company we operated dozens of web servers, and they all used UNC shares (not on DNN). There were over 80k paying subscribers, of which tens of thousands used the applications every day. It worked very well.

To address Mitchel's points:

1.) Single point of failure is only an issue if you make it an issue. There is plenty of redundancy available in the various SAN/NAS solutions.

2.) IO will not be an issue with any decent SAN or NAS. I have never had a problem with file system watchers. DNN doesn't directly use any, and in the unlikely event that the built-in ASP.NET watchers created a problem, I would probably disable them.
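If the ASP.NET file-change notifications ever did become a problem, later ASP.NET versions (4.5 and up) expose a switch for exactly that; a sketch (check the available fcnMode values against your framework version):

```xml
<configuration>
  <system.web>
    <!-- Disable ASP.NET file change notification watchers for this application -->
    <httpRuntime fcnMode="Disabled" />
  </system.web>
</configuration>
```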

3.) I don't see security as any more of an issue than with any other solution. You must be sure to control access to your files and set up permissions appropriately. With local disks you may choose to leave permissions more open than on a network, but you probably should secure both equally well. There is an extra configuration step related to using a UNC path, but the effort spent configuring security is minuscule compared to the weeks, if not months, of effort involved in creating a site that is worthy of a web farm.
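That extra step is typically telling IIS which account to use when reaching the UNC physical path; a sketch using appcmd (the site name and DOMAIN\svc-dnn account are hypothetical, and the password is a placeholder):

```shell
rem Have IIS access the UNC physical path with a specific account
%windir%\system32\inetsrv\appcmd.exe set vdir "DNN Site/" -userName:DOMAIN\svc-dnn -password:***
```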

I totally agree with Mitchel's opinions on why not to use file synchronization.

I know there are some people out there running DNN sites with file synchronization. I don't know of any who have not had to work around issues caused by the file synchronization. Personally I doubt that getting a site running well with file synchronization is cheaper than using a UNC on a SAN once you count the labor spent sorting out the quirks of file synchronization.

Diagnose answered 14/7, 2011 at 20:23 Comment(2)
Thanks for the elaboration on Mitchel's points. However, the SPoF remains an issue regardless of the SAN/NAS solution chosen, e.g. when doing an upgrade of the core website.Basie
@Diagnose - regarding #2, it isn't an issue with the SAN; it is a WINDOWS limitation on the number of file watchers (file-based caching and ASP.NET process monitoring)Malchy
