Best practices for large solutions in Visual Studio (2008) [closed]

We have a solution with 100+ projects, most of them C#. Naturally, it takes a long time to both open and build, so I am looking for best practices for such beasts. Some of the questions I am hoping to get answers to:

  • How do you best handle references between projects?
    • Should "copy local" be on or off?
  • Should every project build to its own folder, or should they all build to the same output folder (they are all part of the same application)?
  • Are solution folders a good way of organizing stuff?

I know that splitting the solution up into multiple smaller solutions is an option, but that comes with its own set of refactoring and building headaches, so perhaps we can save that for a separate thread :-)

Emu answered 27/3, 2009 at 14:40 Comment(8)
May I ask why a single solution needs 100+ projects? Are they each creating their own assembly?Cooperative
Yes, they are, and each of them represents a logically separate piece of functionality.Emu
We have a similar thing here in Java ... one framework, around 200 plugins by now. Puts Eclipse to the test when having all of them loaded and I bet it's not that rare (although this is academia, so probably different from The Real World™).Inglorious
@Emu - Isn't that what classes and namespaces are for? What benefit do separate assemblies bring you? I can think of some, such as potentially smaller upgrades and memory use (if some aren't loaded), but there is certainly a cost to having 100+ assemblies to compile and manage.Bein
@Eyvind, was this question answered?Bisset
@Mark I feel your pain. We're up to 94 projects here. Can't do much about it now, since we'd have to halt development on multiple teams to restructure. We've got 3 EXE projects that reference the other 91. I'm experimenting with one common output folder, so far the gains are very impressive.Quirita
Man, 100+? That is nothing for what we have... pushing nearly 600 here now...Innoxious
why was this question closed? (and as not constructive) Seriously?Pubes

You might be interested in these two MSBuild articles that I have written.

MSBuild: Best Practices For Creating Reliable Builds, Part 1

MSBuild: Best Practices For Creating Reliable Builds, Part 2

Specifically, in Part 2 there is a section "Building large source trees" that you might want to take a look at.

To briefly answer your questions here though:

  • CopyLocal? For sure turn this off
  • Build to one or many output folders? Build to one output folder
  • Solution folders? This is a matter of taste.
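As a concrete illustration of the first two points, here is roughly what they look like in a .csproj (all names and paths below are invented for the example): the `Private` metadata on a Reference item is what the IDE's "Copy Local" checkbox sets, and `OutputPath` is where the project builds to.

```xml
<!-- Fragment of a hypothetical .csproj -->
<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">
  <!-- Point every project at one common output folder (path is illustrative) -->
  <OutputPath>..\..\bin\</OutputPath>
</PropertyGroup>
<ItemGroup>
  <Reference Include="ThirdParty.Library">
    <HintPath>..\..\lib\ThirdParty.Library.dll</HintPath>
    <!-- Private=False is what "Copy Local = False" writes into the project -->
    <Private>False</Private>
  </Reference>
</ItemGroup>
```

With every project writing to the same `OutputPath`, the copy step that CopyLocal performs becomes redundant, which is where the build-time saving comes from.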

Sayed Ibrahim Hashimi

My Book: Inside the Microsoft Build Engine: Using MSBuild and Team Foundation Build

Bisset answered 22/6, 2009 at 21:59 Comment(6)
Thanks man, I'll be sure to check those out!Emu
I will say that I have this book and it's awesome. It has helped me automate my TFS builds and the local dev builds very extensively. I'm now working on incremental builds, and the book goes into the subject in great depth. Worth the price. Thanks Sayed. Keep up the great work.Heall
Thanks irperez for the commentsBisset
-1 for the UNCONDITIONAL recommendation to disable CopyLocal. Setting CopyLocal=false can cause different issues at deployment time. See my blog post "Do NOT Change "Copy Local” project references to false, unless understand subsequences." ( geekswithblogs.net/mnf/archive/2012/12/09/…)Thievery
Could "build to one output folder" cause problems when compiling with multiple threads?Seals
Or what if your solution contains multiple "top-level" builds--e.g. web sites or executables which share many of the libraries--which are independently deployable? (Granted--this may be a case for solution splitting, but assume, for the moment, that this is not possible at this time) It wouldn't be feasible to compile to a single output directory. Wouldn't the advice then be: CopyLocal should be False for libraries but left to True for these "top-level" projects?Prove

+1 for sparing use of solution folders to help organise stuff.

+1 for project building to its own folder. We initially tried a common output folder and this can lead to subtle and painful to find out-of-date references.

FWIW, we use project references within solutions, and although NuGet is probably a better choice these days, we have found svn:externals to work well for both 3rd-party and (framework-type) in-house assemblies. Just get into the habit of using a specific revision number instead of HEAD when referencing svn:externals (guilty as charged :)
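For reference, a pinned svn:externals definition looks like this (svn 1.5+ syntax; the URLs, folder names, and revision numbers are made up for illustration):

```
# svn:externals property value on the referencing directory
-r1234 ^/framework/tags/2.1 framework
-r5678 https://svn.example.com/thirdparty/log4net/tags/1.2.10 log4net
```

With the revision pinned, a checkout today and a checkout next year fetch the same bits; upgrading a dependency becomes a deliberate, reviewable property edit instead of a surprise from HEAD.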

Bein answered 27/3, 2009 at 14:51 Comment(6)
+1 I also have problems with the common output folder solution. I didn't understand what "project references for solutions" means, please explain a little bit more...Tights
As in, we use VS->Add Reference->Projects instead of VS->Add Reference->Browse. A small point, I know, but some folks do it differently (and I think that causes more headaches). If you look at the MSBuild csproj text you will see a ProjectReference element instead of Reference.Bein
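In csproj terms, the difference between the two looks roughly like this (paths, project names, and the GUID are illustrative):

```xml
<!-- File reference (Add Reference -> Browse): MSBuild only knows about a DLL -->
<ItemGroup>
  <Reference Include="OurStuff.Data">
    <HintPath>..\..\bin\OurStuff.Data.dll</HintPath>
  </Reference>
</ItemGroup>

<!-- Project reference (Add Reference -> Projects): MSBuild knows the producing
     project, so it can order the build and pick up fresh outputs -->
<ItemGroup>
  <ProjectReference Include="..\OurStuff.Data\OurStuff.Data.csproj">
    <Project>{11111111-2222-3333-4444-555555555555}</Project>
    <Name>OurStuff.Data</Name>
  </ProjectReference>
</ItemGroup>
```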
Interesting answer! I think we zigged where you zagged. We have no solution folders at all and rely on project naming (projects are named the same as the namespace). All of our output goes into a single folder, which makes the developer setup much closer to the deployment one. If dependencies are set correctly then we don't have out-of-date references.Rabbinical
aye, the common output directory leads to a lot of pain, especially when using resharper. Out of date references indeed.Shama
Don't quite understand the problem with a common output folder if you use this with Project References.Quirita
@Kevin, it's several years now, but from memory one example is two projects referencing the same assembly, say an external library which is not project referenced. All fine when they're on the same version, but then a new release of this library comes out, but only one project updates their reference, which version gets used in the common output folder?Bein

Unload projects you don't use often, and buy an SSD. An SSD doesn't improve compile time, but Visual Studio becomes twice as fast to open/close/build.

Muttonhead answered 24/6, 2009 at 15:27 Comment(2)
It's worth noting that unloading a project is a per-user setting, so developers on your team can load only the projects they care about. This has really helped our team out.Pinwork
You can use Solution Load Manager extension visualstudiogallery.msdn.microsoft.com/… to define what to not load by defaultThievery

We have a similar problem as we have 109 separate projects to deal with. To answer the original questions based on our experiences:

1. How do you best handle references between projects

We use the 'add reference' context menu option. If 'project' is selected, then the dependency is added to our single, global solution file by default.

2. Should "copy local" be on or off?

Off in our experience. The extra copying just adds to the build times.

3. Should every project build to its own folder, or should they all build to the same output folder (they are all part of the same application)?

All of our output is put in a single folder called 'bin'. The idea is that this folder is the same as when the software is deployed. This helps prevent issues that occur when the developer setup is different from the deployment setup.

4. Are solution folders a good way of organizing stuff?

No in our experience. One person's folder structure is another's nightmare. Deeply nested folders just increase the time it takes to find anything. We have a completely flat structure but name our project files, assemblies and namespaces the same.


Our way of structuring projects relies on a single solution file. Building it takes a long time, even if the projects themselves have not changed. To help with this, we usually create another 'current working set' solution file. Any projects that we are working on get added to this. Build times are vastly improved, although one problem we have seen is that IntelliSense fails for types defined in projects that are not in the current set.

A partial example of our solution layout:

\bin

OurStuff.SLN

OurStuff.App.Administrator
OurStuff.App.Common
OurStuff.App.Installer.Database
OurStuff.App.MediaPlayer
OurStuff.App.Operator
OurStuff.App.Service.Gateway
OurStuff.App.Service.CollectionStation
OurStuff.App.ServiceLocalLauncher
OurStuff.App.StackTester
OurStuff.Auditing
OurStuff.Data
OurStuff.Database
OurStuff.Database.Constants
OurStuff.Database.ObjectModel
OurStuff.Device
OurStuff.Device.Messaging
OurStuff.Diagnostics
...
[etc]
Rabbinical answered 3/5, 2009 at 9:8 Comment(1)
Could "build to one output folder" cause problems when compiling with multiple threads?Seals

We work on a similarly large project here. Solution folders have proved to be a good way of organising things, and we tend to just leave copy local set to true. Each project builds to its own folder, and then for each deployable project we know we have the correct subset of the binaries in place.

As for the time spent opening and building, that's going to be hard to fix without breaking the solution into smaller ones. You could investigate parallelising the build (google "Parallel MS Build" for a way of doing this and integrating it into the UI) to improve speed here. Also, look at the design and see whether refactoring some of your projects to end up with fewer overall might help.
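As a starting point, MSBuild 3.5 can already build independent projects of a solution in parallel from the command line (the solution name and CPU count below are illustrative):

```
msbuild OurSolution.sln /m:4 /p:Configuration=Release
```

The `/m` switch sets the number of concurrent build nodes; projects only build in parallel when they don't reference each other, so the gain depends on the shape of the dependency graph.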

Hames answered 27/3, 2009 at 14:49 Comment(0)

In terms of easing the building pain, you can use the "Configuration Manager..." option for builds to enable or disable building of specific projects. You can have a "Project [n] Build" that could exclude certain projects and use that when you're targeting specific projects.
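Under the hood, a project builds under a given solution configuration only if its entry in the .sln file has a Build.0 line. A sketch with placeholder GUIDs, where the second project is excluded from a hypothetical "CoreOnly" configuration:

```
GlobalSection(ProjectConfigurationPlatforms) = postSolution
	{1A2B3C4D-0000-0000-0000-000000000001}.CoreOnly|Any CPU.ActiveCfg = Debug|Any CPU
	{1A2B3C4D-0000-0000-0000-000000000001}.CoreOnly|Any CPU.Build.0 = Debug|Any CPU
	{1A2B3C4D-0000-0000-0000-000000000002}.CoreOnly|Any CPU.ActiveCfg = Debug|Any CPU
EndGlobalSection
```

Configuration Manager edits exactly these entries when you tick or untick the Build checkbox per project.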

As far as the 100+ projects go, I know you don't want to get hammered in this question about the benefits of cutting down your solution size, but I think you have no other option when it comes to speeding up the load time (and memory usage) of devenv.

Shadchan answered 27/3, 2009 at 14:49 Comment(4)
Hm, was this downvoted for being wrong, or just for disagreeing? I can live with the former if you can tell me why.Shadchan
Agreed, I dislike downvoting without comment, so Michael you get my +1 to balance the equation ;-)Bein
btw, I personally don't like mucking around with configuration management for this sort of thing. Why? Because if you accidentally commit your modified configuration, then your build server or other developers will be affected.Bein
Agreed. Actually, the same problem applies with solutions with too many projects. :)Shadchan

What I typically do here depends a bit on how the "debug" process actually happens. Typically, though, I do NOT set copy local to true. I set up the build directory for each project to output everything to the desired end point.

Therefore, after each build I have a populated folder with all DLLs and any Windows/web applications, and all items are in the proper location. Copy local isn't needed since the DLLs end up in the right place in the end.

Note

The above works for my solutions, which are typically web applications. I have not experienced any issues with references, but it might be possible!

Preordain answered 27/3, 2009 at 14:51 Comment(0)

We have a similar issue. We solve it using smaller solutions. We have a master solution that opens everything, but performance on it is bad. So we segment smaller solutions by developer type: DB developers have a solution that loads the projects they care about, and service developers and UI developers likewise. It's rare for somebody to have to open the whole solution to get their day-to-day work done. It's not a panacea -- it has its upsides and downsides. See "multi-solution model" in this article (ignore the part about using VSS :)

Neogene answered 3/5, 2009 at 3:0 Comment(0)

I think with solutions this large the best practice is to break them up. You can think of the "solution" as a place to bring together the projects and other pieces needed to solve a problem. By breaking the 100+ projects into multiple solutions, each specialized in only part of the overall problem, you can deal with less at a given time, thereby speeding your interactions with the required projects and simplifying the problem domain.

Each solution would produce the output it is responsible for. This output should carry version information, which can be set in an automated process. When the output is stable, you can update the references in dependent projects and solutions to the latest internal distribution. If you still want to step into the code and access the source, you can do this with the Microsoft symbol server, which Visual Studio can use to let you step into referenced assemblies and even fetch the source code.

Simultaneous development can be achieved by specifying interfaces up front and mocking out assemblies that are still under development, so that you can develop against dependencies that are not yet complete.

I find this to be a best practice because there is no limit to how complex the overall effort can get when you break it down physically in this manner. Putting all the projects into a single solution will eventually hit an upper limit.

Hope this information helps.

Mat answered 6/5, 2009 at 2:58 Comment(0)

We have 60+ projects and we don't use a shared solution file. We have a mix of C# and VB.Net projects. Performance was always an issue. We don't work on all the projects at the same time, so each developer creates their own solution files based on the projects they're working on. The solution files don't get checked into our source control.

All class library projects build to a CommonBin folder at the root of the source directory; executable/web projects build to their individual folders.

We don't use project references; instead, we use file-based references from the CommonBin folder. I wrote a custom MSBuild task that inspects the projects and determines the build order.
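The custom task itself isn't shown, but its core idea can be sketched: recover a dependency graph from each project's file references into CommonBin, then topologically sort it. This Python sketch uses invented project names and inline stand-ins for .csproj contents (a real task would read the files from disk and handle more reference shapes):

```python
import re

# Hypothetical stand-ins for .csproj file contents. Each HintPath into the
# CommonBin folder names a sibling project's output assembly, which is how
# the dependency graph is recovered.
projects = {
    "OurStuff.Database": "",
    "OurStuff.Data": r"<HintPath>..\CommonBin\OurStuff.Database.dll</HintPath>",
    "OurStuff.App.Operator": (
        r"<HintPath>..\CommonBin\OurStuff.Data.dll</HintPath>"
        r"<HintPath>..\CommonBin\OurStuff.Database.dll</HintPath>"
    ),
}

def build_order(projects):
    """Topologically sort projects so each one's references are built first."""
    deps = {
        name: {m for m in re.findall(r"CommonBin\\([\w.]+)\.dll", xml)
               if m in projects}  # ignore references to external assemblies
        for name, xml in projects.items()
    }
    order, done = [], set()

    def visit(name, stack=()):
        if name in done:
            return
        if name in stack:
            raise ValueError("Circular reference involving " + name)
        for dep in sorted(deps[name]):
            visit(dep, stack + (name,))
        done.add(name)
        order.append(name)

    for name in sorted(deps):
        visit(name)
    return order

print(build_order(projects))
# ['OurStuff.Database', 'OurStuff.Data', 'OurStuff.App.Operator']
```

The depth-first walk also catches circular references, which with file-based references otherwise only surface as stale DLLs rather than as build errors.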

We have been using this for a few years now and have no complaints.

Pentheus answered 5/6, 2009 at 17:52 Comment(1)
Would you mind sharing your custom msbuild task?Asuncion

It all has to do with your definition and view of what a solution and a project are. In my mind a solution is just that: a logical grouping of projects that solve a very specific requirement. We develop a large Intranet application. Each application within that Intranet has its own solution, which may also contain projects for exes or Windows services. Then we have a centralized framework with things like base classes, helpers, and httphandlers/httpmodules. The base framework is fairly large and is used by all applications. By splitting up the many solutions in this way you reduce the number of projects required by a solution, as most of them have nothing to do with one another.

Having that many projects in a solution is just bad design. There should be no reason to have that many projects under one solution. The other problem I see is with project references: they can really screw you up eventually, especially if you ever want to split your solution into smaller ones.

My advice is to split things up and develop a centralized framework (your own implementation of Enterprise Library, if you will). You can either GAC it to share it, or you can directly reference the file location so that you have a central store. You could use the same tactic for centralized business objects as well.

If you want to reference the DLL directly, you will want to reference it in your project with copy local false (somewhere like c:\mycompany\bin\mycompany.dll). At runtime you will need to add some settings to your app.config or web.config to make it reference a file not in the GAC or the runtime bin. In actuality it doesn't matter whether it's copy local or not, or whether the DLL ends up in the bin or is even in the GAC, because the config will override both of those. I think it is bad practice to copy local and have a messy system. You will most likely have to copy local temporarily if you need to debug into one of those assemblies, though.
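A sketch of the config this describes, with made-up assembly details: a codeBase element redirects the loader to a fixed path. Note that a codeBase pointing outside the application's own folder only works for strong-named assemblies:

```xml
<!-- Hypothetical app.config: load a strong-named assembly from a central
     location outside both the GAC and the application's bin folder -->
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="MyCompany.Framework"
                          publicKeyToken="1234567890abcdef"
                          culture="neutral" />
        <codeBase version="1.0.0.0"
                  href="file:///c:/mycompany/bin/MyCompany.Framework.dll" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
```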

You can read my article on how to use a DLL globally without the GAC. I really dislike the GAC mostly because it prevents xcopy deployment and does not trigger an autorestart on applications.

http://nbaked.wordpress.com/2010/03/28/gac-alternative/

Prophylaxis answered 2/4, 2010 at 5:16 Comment(0)

Setting CopyLocal=false will reduce build time, but can cause different issues at deployment time.

There are many scenarios in which you need to leave 'Copy Local' set to True, e.g. top-level projects, second-level dependencies, and DLLs called by reflection.

My experience with setting CopyLocal=false wasn't successful. See the summary of pros and cons in my blog post "Do NOT Change "Copy Local” project references to false, unless understand subsequences."

Thievery answered 9/12, 2012 at 2:36 Comment(0)
