Why is exporting fixity declarations a "bad idea"?
According to David MacQueen in his Reflections on Standard ML report,

Lexically-scoped infix directives were an ill-advised legacy from Pop-2. They complicate parsing and they do not work well with the module system because infix directives cannot be exported by structures or specified in signatures (and it would not be a good idea to add such a feature).

Now, I do agree with his claim that private-only scoping for infix[r] declarations was a bad idea: it renders imported user-defined operators quite useless. But the solution I imagined was to actually export those declarations in signatures, not to get rid of them altogether.
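To make the limitation concrete, here is a small Standard ML sketch (the structure and operator names are invented for illustration): an infix directive declared inside a structure does not follow the operator out of it, so every client must repeat the directive.

```sml
structure Pair = struct
  infix 6 ++
  (* componentwise addition on int pairs *)
  fun (x, y) ++ (a, b) = (x + a, y + b)
end

(* Outside the structure the fixity is gone: Pair.++ is an ordinary
   value and must be applied in prefix form... *)
val v1 = Pair.++ ((1, 2), (3, 4))

(* ...unless each client restates the directive itself: *)
infix 6 ++
val op ++ = Pair.++
val v2 = (1, 2) ++ (3, 4)
```

Nothing in a signature can express the `infix 6 ++` part, which is exactly the gap the question is about.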
Out of the languages I know to have fixity declarations (ML and derivatives, Haskell and derivatives, Prolog, ...), I'm interested in whether any of them document benefits and/or drawbacks of that idea. More generally, a language implementor's established design experience could bring interesting insight into the disadvantages of what seems (from a user's perspective) like a nice convenience feature.

So my question is, in short: are there citations or known issues in the literature that support MacQueen's opinion against exporting fixity?

Pacifica answered 6/6, 2021 at 0:8 Comment(4)
My opinion: the fixity of common operators is well known, but it is hard in general to visually parse code that uses unfamiliar operators of unknown fixity. Even after learning their fixity, you need time to become familiar enough to parse them at a glance. They seem like a lot of work unless you are planning to spend a lot of time with the code.Cavalryman
@mods: I reworded the question so that it invites fewer opinion-based answers.Pacifica
@rajashekar, in my experience, there are often conventions/patterns that can help a lot. Examples: cons-like operations will generally be infixr and snoc-like ones infixl, to allow convenient repeated application. Cons and snoc will bind more tightly than append. If operations form a semiring, then the "multiplication" will bind more tightly than the "addition". Mixing unrelated operators without parentheses is typically avoided to reduce confusion.Log
@MothMan Most of software writing is opinion based. The remainder is newbies trying to solve NP-hard problems in P.Ponceau
One problem: if you see something like x op1 y op2 z, you don't know whether it means op1(x, op2(y, z)) or op2(op1(x, y), z). Prolog has write_canonical, which writes terms in pure prefix form, so you can always see exactly how something has been parsed.
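The same two readings show up in SML as the choice between infix and infixr at equal precedence. A small sketch (the operator name is invented; subtraction is used so the grouping is observable):

```sml
(* Right-associative at precedence 5 *)
infixr 5 <+>
fun x <+> y = x - y

val r = 10 <+> 3 <+> 2
(* infixr parses this as 10 <+> (3 <+> 2) = 10 - (3 - 2) = 9 *)

(* Had we declared  infix 5 <+>  instead, the same expression
   would parse as (10 <+> 3) <+> 2 = (10 - 3) - 2 = 5 *)
```

A reader who does not know the operator's declared fixity cannot tell which result the source text denotes.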

There are also issues around exporting operator declarations from modules (SWI-Prolog seems to have got this right: an op/3 term can appear in a module's export list, so importing the module brings the operator's fixity along).

Other than that, I can't think of any disadvantages. There are enormous advantages in readability and consistency. For example, a variant of DCGs defines an operator -->>, similar to the existing -->; this makes things far more readable than if everything had to be written in prefix notation.

Charters answered 6/6, 2021 at 2:17 Comment(2)
> There are also issues of exporting operator declarations from modules (I'm interested in those). > One possibility is if you see something like x op1 y op2 z, you don't know... (wouldn't the same be said of any function in a module? I believe consulting a module's signatures and documentation when using it is a given during development, which means exposure to infix signatures, if exported, is also a given.)Pacifica
In Prolog, x op1 y op2 z is a data structure. write_canonical will show it in prefix form (nothing is being evaluated): ?- write_canonical(a+b*c). +(a,*(b,c))Charters
