Kevin's and Robin's answers are the most accurate. Oscar's answer is pretty close to correct. But neither the GNUstep documentation nor logancautrell's reasons for the existence of zones are quite correct.
Zones were originally created -- first NXZone, then NSZone -- to ensure that objects allocated from a single zone would be relatively contiguous in memory; that much is true. As it turns out, this does not reduce the amount of memory an app uses; in most cases it ends up increasing it slightly.
The larger purpose was to be able to mass-destroy a set of objects.
For example, if you were to load a complicated document into a document-based application, tearing down the object graph when the document was closed could be surprisingly expensive.
Thus, if all the objects for a document were allocated from a single zone, and the allocation metadata for that zone lived in the zone as well, then destroying every object related to the document would be as cheap as simply destroying the zone (which was really cheap -- "here, system, have these pages back" -- one function call).
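In API terms, the intended pattern looked roughly like the sketch below. This is only illustrative: DocumentNode is a made-up class, and on modern runtimes +allocWithZone: ignores its zone argument entirely, so this shows the original design, not current behavior.

    #import <Foundation/Foundation.h>

    // Hypothetical class standing in for one node of a document's object graph.
    @interface DocumentNode : NSObject
    @end
    @implementation DocumentNode
    @end

    int main(void) {
        // Create a zone to hold everything belonging to one document.
        NSZone *docZone = NSCreateZone(8192, 4096, YES);

        // Allocate the document's objects from that zone, keeping them
        // (roughly) contiguous and their allocation metadata together.
        DocumentNode *root = [[DocumentNode allocWithZone:docZone] init];
        (void)root;

        // Closing the document was meant to be this cheap: one call that
        // hands the zone's pages back instead of tearing down the graph
        // object by object. (NSRecycleZone is the surviving public call;
        // on modern systems it does not actually reclaim pages this way.)
        NSRecycleZone(docZone);
        return 0;
    }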
This proved unworkable. First, if a single reference to an object in the zone leaked out of the zone, your app would go BOOM as soon as the document was closed, and there was no way for the object to tell whatever was referring to it to stop. Secondly, this model also fell prey to the "scarce resource" problem so often encountered in GC'd systems. That is, if the object graph of the document held onto non-memory resources, there was no way to clean up those resources efficiently prior to zone destruction.
In the end, the combination of not nearly enough of a performance win (how often do you really close complex documents?) and all the added fragility made zones a bad idea. It was too late to change the APIs, though, and we are left with the vestiges.