A closure is just a definition with "free variables", together with an external environment that provides bindings for those free variables. In `x + 1`, `x` is a free variable; there's no definition of what `x` is, and so you can only say what value this expression has in an environment that includes an `x`. In `y => x + y`, `x` is still free but `y` is not; the expression defines a function, and the function's parameter is the binding for `y`.
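To make that concrete, here is a minimal Scala sketch of the same idea (the names `FreeVariables`, `addX` and the specific values are just illustrative):

```scala
object FreeVariables {
  // `x` is bound by the enclosing scope; it is the "external environment"
  // that gives the free variable in the expressions below a value.
  val x = 10

  // `x + 1` on its own has a free variable `x`; it only has a value because
  // this definition occurs in a scope that binds `x`.
  val xPlusOne = x + 1                  // 11

  // In `y => x + y`, `y` is bound by the function's parameter, but `x` is
  // still free: the function closes over the surrounding binding of `x`.
  val addX: Int => Int = y => x + y

  def main(args: Array[String]): Unit =
    println(addX(5))                    // 15
}
```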
A "lexical closure" in the context of Scala is basically just a "closure" (because all its closures are lexical) but technically "closure" is a more general concept. "Closure" on its own doesn't say where the environment comes from; "lexical closure" specifies that it is the lexical scope of the definition of the closure (i.e. where it occurs in the source code) that determines the environment.
So a "side effecting lexical closure" is just one of those, that has side effects.
My take on that comment is that Apocalisp was contrasting the mathematical idea of a function with the programming idea of a parameterised block of code to execute. Whether or not that's what he meant, I'll expand on my own view of that contrast:
In mathematics a function is basically just a mapping from values in some input set (the function's domain) to values in some output set (its codomain). Under this view a function isn't a special restricted form of procedure where we disallow side effects; the concept of "having side effects" just doesn't apply to it. Asking whether a mathematical function has side effects is like asking whether the colour yellow has side effects; even the answer "no" isn't really correct. All you can do with a mathematical function is ask what value in its codomain corresponds to a given value in its domain; if I have a function described by `{ 1 -> 11, 2 -> 22, 3 -> 33 }` and I ask what codomain value corresponds to 2, it doesn't make sense to answer "22, and object foo's `count` attribute is now 7".
In idealised functional programming we view code as merely a way to define the mappings that correspond to the functions we want to define. Most interesting functions are infinite (or at least impractically vast), so we don't do it by writing out a literal map from inputs to outputs but rather by writing down more-or-less abstract rules that describe how outputs correspond to inputs. Of course in practice we do spend a lot of time thinking operationally about how our code will be executed too, but generally a functional programmer would rather think about definitions first.
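As a small illustration of "rules rather than literal maps" (the name `elevenTimes` and the rule itself are just made up for the example):

```scala
object RuleView {
  // The same mapping as before, but extended to all Ints by a rule rather
  // than an enumeration: each input corresponds to eleven times itself.
  def elevenTimes(n: Int): Int = n * 11

  def main(args: Array[String]): Unit = {
    println(elevenTimes(2))      // 22, same as the literal map
    println(elevenTimes(1000))   // 11000, an input the finite map never covered
  }
}
```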
On the other hand, what is called a function in traditional imperative programming has very little to do with the mathematical notion of a function; rather than a mapping from values in a domain to values in a codomain, a programmer's function is a sequence of steps to be executed one after the other (possibly parameterised by input values and returning an output value). Each of these steps may have effects, so you can't ignore their existence and say that it's just another way to define the domain -> codomain mapping, and you can't examine the steps as things on their own, independent of their context.
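A sketch of that "sequence of steps" view, with a hypothetical `process` function and `log` buffer standing in for any effectful steps:

```scala
import scala.collection.mutable.ListBuffer

object ImperativeSteps {
  val log = ListBuffer.empty[String]

  // Syntactically a "function", but really a parameterised sequence of steps:
  // each step may have effects (here, appending to `log`), so it can't be
  // read as nothing more than a domain -> codomain mapping.
  def process(n: Int): Int = {
    log += s"received $n"        // step 1: record the input
    val doubled = n * 2          // step 2: compute something
    log += s"returning $doubled" // step 3: record the output
    doubled
  }

  def main(args: Array[String]): Unit = {
    println(process(21))   // 42
    println(log)           // the effects are observable afterwards
  }
}
```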
In a programming language that wants to support both functional programming and imperative programming, you use the same language elements to define both mathematical functions and programmer's functions. Or if you use the term function exclusively to refer to mathematical functions, you use the same language elements to define both functions and "something else that isn't a function". I took Apocalisp's phrase "lexical closure" as describing what Scala's function-definition syntax defines apart from the notion of function, and when you further add that it's a "side effecting lexical closure" then it's definitely not a function that you're talking about.
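To put that last point in Scala terms (the names `square` and `addToTotal` are just illustrative): exactly the same function-definition syntax produces one thing that behaves like a mathematical function and one thing that is only a side-effecting lexical closure.

```scala
object SameSyntax {
  // A mathematical function: the result depends only on the argument.
  val square: Int => Int = x => x * x

  // A "side effecting lexical closure": same shape, but it closes over
  // mutable state and changes it every time it is applied.
  var total = 0
  val addToTotal: Int => Int = x => { total += x; total }

  def main(args: Array[String]): Unit = {
    println(square(4))       // always 16
    println(addToTotal(4))   // 4 this time...
    println(addToTotal(4))   // ...8 the next: not a mapping from 4 to one value
  }
}
```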