Is this property of a functor stronger than a monad?

While thinking about how to generalize monads, I came up with the following property of a functor F:

inject :: (a -> F b) -> F(a -> b) 

-- which should be a natural transformation in both a and b.

In absence of a better name, I call the functor F bindable if there exists a natural transformation inject shown above.

The main question is whether this property is already known and has a name, and how it is related to other well-known properties of functors (such as being applicative, monadic, pointed, traversable, etc.).

The motivation for the name "bindable" comes from the following consideration: Suppose M is a monad and F is a "bindable" functor. Then one has the following natural morphism:

fbind :: M a -> (a -> F(M b)) -> F(M b)

This is similar to the monadic "bind",

bind :: M a -> (a -> M b) -> M b

except the result is decorated with the functor F.

The idea behind fbind was that a generalized monadic operation can produce not just a single result M b but a "functor-ful" F of such results. I want to express the situation when a monadic operation yields several "strands of computation" rather than just one; each "strand of computation" being again a monadic computation.

Note that every functor F has the morphism

eject :: F(a -> b) -> a -> F b

which is converse to "inject". But not every functor F has "inject".
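
For reference, eject can be written uniformly for any functor; this is essentially the same one-line definition that appears in the answers below:

eject :: Functor f => f (a -> b) -> a -> f b
eject fab x = fmap ($ x) fab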

Examples of functors that have "inject": F t = (t,t,t) or F t = c -> (t,t) where c is a constant type. Functors F t = c (constant functor) or F t = (c,t) are not "bindable" (i.e. do not have "inject"). The continuation functor F t = (t -> r) -> r also does not seem to have inject.
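
For instance, here is a sketch of inject for the triple functor F t = (t,t,t); the helper name is mine, for illustration only:

-- inject for F t = (t,t,t): read off each component pointwise.
injectTriple :: (a -> (b, b, b)) -> (a -> b, a -> b, a -> b)
injectTriple f =
  ( \a -> case f a of (x, _, _) -> x
  , \a -> case f a of (_, y, _) -> y
  , \a -> case f a of (_, _, z) -> z )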

The existence of "inject" can be formulated in a different way. Consider the "reader" functor R t = c -> t where c is a constant type. (This functor is applicative and monadic, but that's beside the point.) The "inject" property then means R (F t) -> F (R t), in other words, that R commutes with F. Note that this is not the same as the requirement that F be traversable; that would have been F (R t) -> R (F t), which is always satisfied for any functor F with respect to R.

So far, I was able to show that "inject" implies "fbind" for any monad M.

In addition, I showed that every functor F that has "inject" will also have these additional properties:

  • it is pointed

point :: t -> F t

  • if F is "bindable" and applicative then F is also a monad

  • if F and G are "bindable" then so is the pair functor F * G (but not F + G)

  • if F is "bindable" and A is any profunctor then the (pro)functor G t = A t -> F t is bindable

  • the identity functor is bindable.

Open questions:

  • is the property of being "bindable" equivalent to some other well-known properties, or is it a new property of a functor that is not usually considered?

  • are there any other properties of the functor "F" that follow from the existence of "inject"?

  • do we need any laws for "inject", and would that be useful? For instance, we could require that R (F t) be isomorphic to F (R t) in one or both directions.

Existence answered 22/9, 2016 at 21:56 Comment(37)
Side question: Do you have some useful piece of code which demonstrates the usefulness of this?Driblet
@Driblet Not yet. Before writing any code, I want to understand the properties of the functors I'm going to use. It's much easier and faster to work with types than with actual code.Existence
How did you show that every bindable functor is pointed?Gallenz
I think you've invented Traversable, but can't check right nowIndemonstrable
@jberryman: it's certainly not equivalent to Traversable (e.g. you can easily traverse Const), but it does seem related.Aroma
@Indemonstrable I think it's actually the "opposite" of Traversable in some sense. sequenceA would be able to yank -> to the outside of a Traversable (using (->) r's Applicative instance) but not the other way round. IOW sequenceA :: Traversable t => t (a -> b) -> a -> t bApocalypse
@Gallenz To show that inject implies point, first we set a = F t in inject and get inject :: (F t -> F t) -> F(F t -> t). We then apply this to the identity morphism id :: F t -> F t. We obtain a (naturally defined) value of F(F t -> t). Now, we apply the "strength" of the functor: x -> F y -> F (x,y) and project this to obtain the morphism x -> F y -> F x. Now we set x = t and y = F t -> t in this morphism. We obtain t -> F(F t -> t) -> F t. Since we already have a natural value of F (F t -> t), we thus obtain a natural transformation t -> F t.Existence
@BenjaminHodgson I think the property F (a->b) -> a -> F b holds for any functor F, not only for traversable.Existence
#13765824Aroma
@Existence Ah, so basically point x = fmap (const x) (inject id)?Gallenz
@Gallenz Right. This question has been closed as duplicate, but the other question does not really have a satisfactory answer as to what properties characterize F such that inject exists.Existence
@Existence Ah, yes, you're right. Thanks!Apocalypse
@winitzki: agreed, I reverted the closing for this reason. I reckon that this property is much stronger than Monad and in fact any such functor F is equivalent to the function functor (Log F ->), where Log basically counts the number of times that t appears in F t, which must be fixed and >0. But I don't have a proof for this, and in fact I doubt that Log would be actually computable, at least not in Haskell types. It might be possible mathematically, but I wouldn't be surprised if it required the axiom of choice or something.Aroma
@Aroma Thank you! Let's hope someone has a better answer.Existence
@Aroma I guess what you are saying is that F must be representable with a constant type: that is, there must be a constant type b such that F t is isomorphic to b -> t. But, so far, I have some examples of a "bindable" F that are not of this form. Namely, if A is a profunctor then A t -> t is bindable. An example is the "writer" contrafunctor A t = (t -> r). Then F t = (t -> r) -> t is bindable but not of the form b -> t for a constant b.Existence
This looks way stronger than a monad to me. From a purely practical point of view, there's no way we can construct something like inject id :: IO (IO a -> a). That would be very dangerous, effectively providing access to unsafePerformIO in pure code with the mild requirement that eventually this is being called from IO (which always happens, since we start from main). Basically we only have to do main = do upIO <- inject id ; print (pureF upIO 12) ; ... to allow the purely typed pureF to run side-effects. Scary.Homogeneous
Note that I asked this question: #26264911 a while ago, and as far as anyone could find, the only functors with this property are Identity and (->) a. So I suspect it's much stronger than Monad.Griego
@jcast Thank you for linking to your older question. So it's called promote and not inject. I found a wider class of examples within exponential-polynomial functors, as I detailed in my question. You can construct these functors from Identity, by taking pairs, and by mapping from a contrafunctor.Existence
@Existence Oh, can you? Your question was hard for me to read :). Would you mind summarizing your construction as an answer on my question, so I can switch my accepted answer?Griego
How about F = Maybe, inject _ = Nothing, or any other functor F with F Void nonempty? Surely you want some kind of condition on inject.Alidis
@ReidBarton Indeed, I want a condition that would prohibit trivial inject instances for nontrivial functors. Perhaps eject . inject = id or vice versa, where eject :: F (a->b) -> a -> F b.Existence
Distributive f gets you distribute :: Functor g => g (f a) -> f (g a), which from a Haskell type standpoint is more powerful than your function. I don't know if it satisfies your category-theoretic laws. Every Representable functor is distributive, and the documentation says the other way round holds mathematically.Uncircumcised
If I calculated correctly, for your F t = (t -> r) -> t example, eject . inject is the identity but inject . eject isn't the identity.Alidis
@Uncircumcised Thank you! This link is very useful, I will read that documentation.Existence
@ReidBarton Interesting! I will try to check your calculations.Existence
@ReidBarton I checked and got the same result. For "bindable" polynomial functors F, both eject . inject and inject . eject are identity. The non-identity appears only as inject . eject and only for "bindable" functors F of the form F t = A t -> G t where A is a contrafunctor and G is bindable. The reason is that the chain of morphisms A (a->t) -> A t -> A (a->t) does not yield identity, while A t -> A (a->t) -> A t does.Existence
@Uncircumcised I think you should make that an answerIndemonstrable
I'd like to ruminate on this a bit more, and at the same time wait for other people to say something. Then, if no one wants to, perhaps I would write an answer, summarizing what everyone said here: 0) improved definition of "bindable" with the law eject . inject = id 1) "bindable" is strictly stronger than monad, and strictly weaker than "distributive". 2) specific examples of bindable and non-bindable functors, as well as bindable that is not distributive.Existence
@leftroundabout: Isn't Log f = forall a. f a -> a?Peekaboo
@TomEllis This Log will not always work as a "logarithm". It's a weird type function that does not have any of the properties of the usual logarithm except for a single example: if f a = r -> a then Log f = r. Ordinary logarithm has properties such as log(a*b) = log(a)+log(b), which this Log does not have.Existence
@Existence Firstly, I'm not claiming it has any properties besides the one @leftroundabout speculated about: (Log F ->) is isomorphic to F (for the class of F under consideration). Secondly, it's not clear to me that it doesn't have the property you claim (where we use Product f g a = Product (f a) (g a) and Sum f g a = Sum (Either (f a) (g a)), of course).Peekaboo
@TomEllis Many examples of using Log show properties very different from what you would expect. 1) We have Log (Sum F G) = Product (Log F) (Log G) instead of log(a*b) = log(a)+log(b). 2) Defining a functor g as type g a = (Int, a, a), what would you expect Log g to be? Calculations show Log g = Int -> Bool. 3) What would you expect Log Maybe to be? Calculations show that Log Maybe is a recursive type equivalent to natural numbers, data Nat = One | Succ Nat. I'd say all of these examples show quite unexpected results, showing that Log is not at all similar to a logarithm.Existence
Correction: Calculations show that Log Maybe is void. I mixed it up with Log h where h is defined by type h a = (Maybe a) -> a. For that type constructor, Log h is equivalent to the natural numbers type Nat defined by data Nat = One | Succ Nat.Existence
@Existence It's interesting. It does seem to have some bizarre properties. It would be nice to see them written down somewhere bigger than a tiny text box.Peekaboo
@TomEllis All of that, and much more, will hopefully be in the book I'm writing now. github.com/winitzki/sofpExistence
Apologies if this has already been answered somewhere, but what exactly is the instance for (a -> (b, b)) -> (a, a) -> (b, b)? Does it throw away some data? Maybe all the instances are obvious to the folks in the discussion, but it would be helpful for the slowpokes among us if you could just explicitly write out the instances you're suggesting in your question/answer @winitzki.Creosol
Assuming Whatever f => (f a, f a) is not an instance I probably know what this is, but it's hard to say without knowing what the actual instance should look like.Creosol

To improve terminology a little bit, I propose to call these functors "rigid" instead of "bindable". The motivation for saying "rigid" will be explained below.

Definition

A functor f is called rigid if it has the inject method as shown. Note that every functor has the eject method.

class (Functor f) => Rigid f where
  inject :: (a -> f b) -> f(a -> b)

  eject :: f(a -> b) -> a -> f b
  eject fab x = fmap (\ab -> ab x) fab

The law of "nondegeneracy" must hold:

eject . inject = id

Properties

A rigid functor is always pointed:

instance (Rigid f) => Pointed f where
  point :: t -> f t
  point x = fmap (const x) (inject id)

If a rigid functor is applicative then it is automatically monadic:

instance (Rigid f, Applicative f) => Monad f where
  bind :: f a -> (a -> f b) -> f b
  bind fa afb = (inject afb) <*> fa

The property of being rigid is neither weaker nor stronger than the property of being monadic: if a functor is rigid, it does not seem to follow that it is automatically monadic (although I don't know of specific counterexamples for this case); if a functor is monadic, it does not follow that it is rigid (there are counterexamples).

Basic counterexamples of monadic functors that are not rigid are Maybe and List. These are functors that have more than one constructor: such functors cannot be rigid.

The problem with implementing inject for Maybe is that inject must transform a function of type a -> Maybe b into Maybe(a -> b) while Maybe has two constructors. A function of type a -> Maybe b could return different constructors for different values of a. However, we are supposed to construct a value of type Maybe(a -> b). If for some a the given function produces Nothing, we don't have a b so we can't produce a total function a->b. Thus we cannot return Just(a->b); we are forced to return Nothing as long as the given function produces Nothing even for one value of a. But we cannot check that a given function of type a -> Maybe b produces Just(...) for all values of a. Therefore we are forced to return Nothing in all cases. This will not satisfy the law of nondegeneracy.
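
In code, the only total definition available is the degenerate one mentioned in the comments, and it violates the law of nondegeneracy stated above:

-- inject for Maybe is forced to discard its argument, so
-- eject (injectMaybe k) = const Nothing, which differs from k
-- whenever k returns a Just.
injectMaybe :: (a -> Maybe b) -> Maybe (a -> b)
injectMaybe _ = Nothing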

So, we can implement inject if f t is a container of "fixed shape" (having only one constructor). Hence the name "rigid".

Another explanation as to why rigidity is more restrictive than monadicity is to consider the naturally defined expression

(inject id) :: f(f a -> a) 

where id :: f a -> f a. This shows that we can have an f-algebra f a -> a for any type a, as long as it is wrapped inside f. Not every monad has such an algebra; for example, the various "future" monads as well as the IO monad describe computations of type f a that do not allow us to extract values of type a - we shouldn't be able to have a method of type f a -> a even if wrapped inside an f-container. This shows that the "future" monads and the IO monad are not rigid.
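
As a small positive illustration of my own: for the reader functor f t = r -> t, whose inject is just flip, the wrapped algebra inject id simply evaluates a reader at the environment it is handed:

-- inject id for the reader functor, i.e. \r g -> g r
readerAlgebras :: r -> ((r -> a) -> a)
readerAlgebras = flip id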

A property that is strictly stronger than rigidity is distributivity from one of E. Kmett's packages. A functor f is distributive if we can interchange the order as in p (f t) -> f (p t) for any functor p. Rigidity is the same as being able to interchange the order only with respect to the "reader" functor r t = a -> t. So, all distributive functors are rigid.

All distributive functors are necessarily representable, which means they are equivalent to the "reader" functor c -> t with some fixed type c. However, not all rigid functors are representable. An example is the functor g defined by

type g t = (t -> r) -> t

The functor g is not equivalent to c -> t for any fixed type c.

Constructions and examples

Further examples of rigid functors that are not representable (i.e. not "distributive") are functors of the form a t -> f t where a is any contrafunctor and f is a rigid functor. Also, the Cartesian product and the composition of two rigid functors is again rigid. In this way, we can produce many examples of rigid functors within the exponential-polynomial class of functors.

My answer to What is the general case of QuickCheck's promote function? also lists the constructions of rigid functors:

  1. f = Identity
  2. if f and g are both rigid then the functor product h t = (f t, g t) is also rigid
  3. if f and g are both rigid then the composition h t = f (g t) is also rigid
  4. if f is rigid and g is any contravariant functor then the functor h t = g t -> f t is rigid
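
Here is a hedged sketch of how inject can be written for constructions 2-4, in terms of the Rigid class above; the wrapper names Product, Compose and Hom are mine, introduced only for illustration:

import Data.Functor.Contravariant (Contravariant (..))

data    Product f g t = Product (f t) (g t)
newtype Compose f g t = Compose (f (g t))
newtype Hom     g f t = Hom (g t -> f t)   -- g contravariant, f rigid

-- construction 2: inject each component pairwise
injectProduct :: (Rigid f, Rigid g) => (a -> Product f g b) -> Product f g (a -> b)
injectProduct k =
  Product (inject (\a -> case k a of Product x _ -> x))
          (inject (\a -> case k a of Product _ y -> y))

-- construction 3: inject the outer layer, then the inner one
injectCompose :: (Rigid f, Rigid g) => (a -> Compose f g b) -> Compose f g (a -> b)
injectCompose k =
  Compose (fmap inject (inject (\a -> case k a of Compose x -> x)))

-- construction 4: adapt the contravariant argument with contramap const
injectHom :: (Contravariant g, Rigid f) => (a -> Hom g f b) -> Hom g f (a -> b)
injectHom k =
  Hom (\w -> inject (\a -> case k a of Hom h -> h (contramap const w)))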

One other property of rigid functors is that the type f () is equivalent to (), i.e. there is only one distinct value of the type f (). This value is point (), where point is defined above for any rigid functor f. (I have a proof but I will not write it here, because I could not find an easy one-line proof.) A consequence is that a rigid functor must have only one constructor. This immediately shows that Maybe, Either, List etc. cannot be rigid.

Connection with monads

If f is a monad that has a monad transformer of the "composed-outside" kind, t m a = f (m a), then f is a rigid functor.

The "rigid monads" are possibly a subset of rigid functors because construction 4 only yields a rigid monad if f is also a rigid monad rather than an arbitrary rigid functor (but the contravariant functor g can still be arbitrary). However, I do not have any examples of a rigid functor that is not also a monad.

The simplest example of a rigid monad is type r a = (a -> p) -> a, the "search monad". (Here p is a fixed type.)

To prove that a monad f with the "composed-outside" transformer t m a = f (m a) also has an inject method, we consider the transformer t m a with the foreign monad m chosen as the reader monad, m a = r -> a. Then the function inject with the correct type signature is defined as

 inject = join @t . return @r . (fmap @m (fmap @f return @m))

with appropriate choices of type parameters.

The non-degeneracy law follows from the monadic naturality of t: the monadic morphism m -> Identity (substituting a value of type r into the reader) is lifted to the monadic morphism t m a -> t Id a. I omit the details of this proof.

Use cases

Finally, I found two use cases for rigid functors.

The first use case was the original motivation for considering rigid functors: we would like to return several monadic results at once. If m is a monad and we want to have fbind as shown in the question, we need f to be rigid. Then we can implement fbind as

fbind :: m a -> (a -> f (m b)) -> f (m b)
fbind ma afmb = fmap (bind ma) (inject afmb)

We can use fbind to have monadic operations that return more than one monadic result (or, more generally, a rigid functor-ful of monadic results), for any monad m.

The second use case grows out of the following consideration. Suppose we have a program p :: a that internally uses a function f :: b -> c. Now, we notice that the function f is very slow, and we would like to refactor the program by replacing f with a monadic "future" or "task", or generally with a Kleisli arrow f' :: b -> m c for some monad m. We, of course, expect that the program p will become monadic as well: p' :: m a. Our task is to refactor p into p'.

The refactoring proceeds in two steps: First, we refactor the program p so that the function f is explicitly an argument of p. Assume that this has been done, so that now we have p = q f where

q :: (b -> c) -> a

Second, we replace f by f'. We now assume that q and f' are given. We would like to construct the new program q' of the type

q' :: (b -> m c) -> m a

so that p' = q' f'. The question is whether we can define a general combinator that will refactor q into q',

refactor :: ((b -> c) -> a) -> (b -> m c) -> m a

It turns out that refactor can be constructed only if m is a rigid functor. In trying to implement refactor, we find essentially the same problem as when we tried to implement inject for Maybe: we are given a function f' :: b -> m c that could return different monadic effects m c for different b, but we are required to construct m a, which must represent the same monadic effect for all b. This cannot work, for instance, if m is a monad with more than one constructor.

If m is rigid (and we do not need to require that m be a monad), we can implement refactor:

refactor bca bmc = fmap bca (inject bmc)
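
As a quick concrete check (my own illustration), take m to be the reader functor m a = r -> a, whose inject is flip; then refactor simply threads the environment through every call of the replaced function:

refactorReader :: ((b -> c) -> a) -> (b -> r -> c) -> r -> a
refactorReader q f' = \r -> q (\b -> f' b r)
-- which is exactly fmap q (flip f'), i.e. fmap bca (inject bmc) for this m.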

If m is not rigid, we cannot refactor arbitrary programs. So far we have seen that the search monad is rigid, but the "future"-like monads and the IO monad are not rigid. This again shows that rigidity is, in a sense, a stronger property than monadicity.

Existence answered 28/9, 2016 at 1:12 Comment(3)
The functor W r defined as type W r t = (t->r) -> t is actually a monad. The definition of join is join :: W r (W r t) -> W r t; join ww = \y -> ww (\w -> y (w y)) y and I checked that all monad laws hold. More generally, if M is a monad then the functor g t = (M t -> r) -> M t is also a monad, as are the functors r -> M t and M(r -> t).Existence
Is the continuation monad rigid?Ideal
@PyRulez No, the continuation monad is not rigid. The required type for inject is not inhabitable for the continuation monad. In my answer to #26264911 I have detailed some constructions of rigid functors, and the continuation monad does not fit there.Existence

Here is one possible presentation of rigid functors. I have taken the liberty to bikeshed your names a bit, for reasons I'll soon get to:

flap :: Functor f => f (a -> b) -> a -> f b
flap u a = ($ a) <$> u 

class Functor g => Rigid g where
    fflip :: (a -> g b) -> g (a -> b)
    fflip f = (. f) <$> extractors

    extractors :: g (g a -> a)
    extractors = fflip id

-- "Left inverse"/non-degeneracy law: flap . fflip = id

instance Rigid ((->) r) where
    fflip = flip
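
For illustration only (not part of the original answer), here is a Rigid instance for a hypothetical fixed-shape pair container, matching the question's F t = (t,t) example; giving extractors is enough, since fflip has a default in terms of it:

data V2 a = V2 a a

instance Functor V2 where
    fmap f (V2 x y) = V2 (f x) (f y)

instance Rigid V2 where
    extractors = V2 (\(V2 x _) -> x) (\(V2 _ y) -> y)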

Some remarks on my phrasing:

  • I have changed the names of inject and eject to fflip and flap, mainly because, to my eyes, flap looks more like injecting, due to things like this:

    sweep :: Functor f => f a -> b -> f (a, b)
    sweep u b = flap ((,) <$> u) b
    
  • I took the flap name from relude. It is a play on flip, which is fitting because it is one of two symmetrical ways of generalising it. (We can either pull the function outside of an arbitrary Functor, as in flap, or pull a Rigid functor outside of a function, as in fflip.)

  • extractors and fflip are interdefinable, making it possible to write, for example, this neat instance for the search/selection monad:

    newtype Sel r a = Sel { runSel :: (a -> r) -> a }
        deriving (Functor, Applicative, Monad) via SelectT r Identity
    
    instance Rigid (Sel r) where
        -- Sel r (Sel r a -> a) ~ ((Sel r a -> a) -> r) -> Sel r a -> a
        extractors = Sel $ \k m -> m `runSel` \a -> k (const a)
    

A significant fact about extractors is that it gives rise to the following combinator:

distributeLike :: (Rigid g, Functor f) => f (g a) -> g (f a)
distributeLike m = (<$> m) <$> extractors

distributeLike is a more general version of distribute from the Distributive class. A lawful distribute, in turn, must abide by the following laws, which are dual to the laws of Traversable:

-- Identity law
fmap runIdentity . distribute = runIdentity

-- Composition law
fmap getCompose . distribute = distribute . fmap distribute . getCompose

-- Naturality law (always holds, by parametricity)
-- For any natural transformation t
fmap t . distribute = distribute . t
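
Before relating these laws to fflip and flap, it may help to spell out (with aliases of my own, not from the original answer) how both arise from distributeLike, with the reader functor in one of the two roles:

fflipAlias :: Rigid g => (a -> g b) -> g (a -> b)
fflipAlias = distributeLike     -- f instantiated to ((->) a)

flapAlias :: Functor f => f (a -> b) -> a -> f b
flapAlias = distributeLike      -- g instantiated to ((->) a), via the Rigid ((->) r) instance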

Since fflip is distributeLike with reader (that is, the function functor) as the other functor, and flap is distribute for reader, both flap . fflip = id and fflip . flap = id are special cases of...

-- m :: f (g a)
distributeLike (distributeLike m) = m

... with appropriate choices of f and g. Now, the property above can be shown to be equivalent to the following conditions:

  1. distributeLike for g follows the identity law of distributive functors (which, by the way, is equivalent to the rigid law);

  2. distributeLike for f also follows the identity law of distributive functors;

  3. Either of the following equivalent conditions hold:

    a. distributeLike for f follows the composition law of distributive functors; or

    b. All f a -> a functions made available by extractors for f are natural in a.

In particular, as flap is a lawful distribute, flap . fflip = id amounts to the identity law for g (condition #1), and fflip . flap = id, to f being distributive (conditions #2 and #3).

(The conditions above can be established by analysing distributeLike . distributeLike = id in terms of extractors, following a similar strategy to the one I applied to the composition law in the "The roadblock, and a detour" section of my post "Every Distributive is Representable".)

For the sake of illustration, let's consider the case of Sel r. As you note, it is rigid but not distributive: its distributeLike follows the identity law but not the composition one. Accordingly, fflip . flap = id does not hold.

With respect to finding a place for Rigid in the type class constellation, I would highlight condition #3b as being particularly interesting. It appears that, given how extractors @f :: forall a. f (f a -> a) is fully polymorphic in a, for it to provide non-natural f a -> a extractors f must not be strictly positive, corresponding to construction #4 in the "Constructions and examples" section of your answer. The lack of strict positivity makes it possible for extractors to incorporate non-naturality (through a user-supplied contravariant argument) without having it specified explicitly in its definition. That being so, only functors that aren't strictly positive, such as Sel r, might be rigid without being distributive.

Miscellaneous remarks

  • Looking at fflip and flap from a monadic point of view, we might say that rigid monads are equipped with an injective conversion from Kleisli arrows to static arrows. With distributive monads, the conversion is upgraded to an isomorphism, which is a generalisation of how Applicative and Monad are equivalent for Reader. One curious aspect of non-distributive rigid monads is that fflip being injective but not surjective implies that there are more static arrows than Kleisli arrows, which is a quite unusual state of affairs.

  • extractors condenses much of what Distributive is about. For any distributive functor g, there is a g (g a -> a) value in which each position is filled with a matching g a -> a natural extractor function. With rigid functors that aren't distributive, this tidy correspondence no longer holds. With Sel r, for instance, every a -> r gives rise to an extractor, which generally isn't natural. That ultimately precludes having distribute/fflip (and also, by the way, tabulate) as isomorphisms. In fact, the very notion of a shape with well-defined positions arguably breaks down when dealing with functors that aren't strictly positive.

  • Distributive is dual to Traversable, and there are several correspondences between facts about the two classes. (In particular, the presentation of Distributive as Representable, in terms of an isomorphism to the reader functor, mirrors the shape-and-contents formulation of Traversable, which can be expressed in terms of an isomorphism to some list-like functor.) That being so, one might wonder if a notion analogous to Rigid makes sense for Traversable. I believe it does, though it is unclear how useful such a concept could possibly be. One example of a "co-rigid" pseudo-traversable would be a data structure equipped with a traversal that duplicates effects, but then discards the corresponding duplicate elements upon rebuilding the structure under the applicative layer, so that the identity law is followed, but not the composition one.

Marinmarina answered 30/6, 2019 at 2:54 Comment(10)
It is very interesting to see that Rigid has one fewer law than Distributive. Two questions: 1) Is Traversable similar in that it has two laws, and we can omit one law to have a weaker typeclass (your conjectured "co-Rigid")? 2) What exactly is the usefulness of Distributive functors, and are there any examples of Distributive other than the Reader monad?Existence
[1/3] (1) Indeed. Implementation details aside, the Traversable isomorphism amounts to a clear function, which empties the traversable structure giving out its shape and a list of its contents, and a fill function, which remakes the structure from shape and contents. fill . clear = id is equivalent to the identity law of Traversable, and adding clear . fill = id amounts to adding the composition law. A pseudo-traversable class with only the identity law is conceivable, but I suspect it wouldn't see much use -- the main problem being that traversals wouldn't compose cleanly.Marinmarina
[2/3] (2a) All distributive functors are isomorphic to Reader r for some r, though working with the non-function form can be more convenient depending on what one wants to do. Examples include infinite streams, fixed length vectors, and more generally any data structure with a fixed shape.Marinmarina
[3/3] (2b) While there are some nifty things one might do with the Distributive methods (for instance, with a vector type like Duo a = Duo a a, cotraverse @Duo @[] can be used to zip a list of vectors with a fold), the class has a rather tiny interface. The real power comes with Representable, which gives direct access to the isomorphism, thus making it possible to do with the functorial values almost anything you'd do with a function. (Distributive and Representable are separate classes mostly for the sake of packaging a simpler subset of the interface separately.)Marinmarina
@Existence Beyond Distributive and Representable, there is still a further layer of bells and whistles that can be added to the interface, resulting in Adjunction, the class of Hask/Hask adjunctions. The answer I wrote about it the other day is somewhat relevant when it comes to what Distributive-like things can do.Marinmarina
What about a traversable that has clear . fill = id but not fill . clear = id? Would that be useful?Existence
[1/2] @Existence I believe a pseudo-traversable with only the composition law would be able to violate identity through rearrangements, as long as the rearrangements were idempotent. A traversal that only visits a certain number of elements from the structure might be a reasonable example.Marinmarina
[2/2] @Existence Also, I am beginning to suspect that Witherable might have something to do with that idea (even though it is not exactly the same situation, as the rearrangements there are element-dependent): filtering is idempotent, and the witherable laws include a weakened identity and an unchanged composition law. So yes, it sounds like this other notion might be more fruitful.Marinmarina
@Existence Revisiting this, I noticed a flaw in my reasoning: there is no reasonable revert for Sel r with the proper higher-rank type ((forall x. Sel r x -> x) -> a) -> Sel r a. The extractors and fflip we wrote arise from an imitation of revert with type ((Sel r a -> a) -> a) -> Sel r a. Given this less polymorphic type and Sel r a not being strictly positive, we can cheat and use a monomorphic-in-a k :: a -> r function to extract a result. That means most of my considerations about revert/evert don't apply to Sel r. I'll look into a way to fix this answer...Marinmarina
@Existence I have rewritten the answer, removing the various tangents about revert/evert, and focusing on what looks like a nice, positive result: it appears that only functors that aren't strictly positive (i.e. your construction #4) can be rigid without being distributive. By the way, considering the (counter-)example of Sel was vital towards making my understanding of Distributive clearer; thanks for getting me to think about it!Marinmarina

We are all familiar with the Traversable typeclass, which can be boiled down to the following:

class Functor t => Traversable t
  where
  sequenceA :: Applicative f => t (f a) -> f (t a)

This makes use of the concept of an Applicative functor. There is a laws-only strengthening of the categorical concept underlying Applicative that goes like this:

-- Laxities of a lax monoidal endofunctor on Hask under (,)
zip :: Applicative f => (f a, f b) -> f (a, b)
zip = uncurry $ liftA2 (,)

husk :: Applicative f => () -> f ()
husk = pure

-- Oplaxities of an oplax monoidal endofunctor on ... (this is trivial for all endofunctors on Hask)
unzip :: Functor f => f (a, b) -> (f a, f b)
unzip fab = (fst <$> fab, snd <$> fab)

unhusk :: f () -> ()
unhusk = const ()

-- The class
class Applicative f => StrongApplicative f

-- The laws
-- zip . unzip = id
-- unzip . zip = id
-- husk . unhusk = id
-- unhusk . husk = id -- this one is trivial
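
For instance (anticipating a point made further down), the reader functor (->) a satisfies all of these laws, so a minimal instance would be:

instance StrongApplicative ((->) a)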

The linked question and its answers have more details, but the gist is that StrongApplicatives model some notion of "fixed size" for functors. This typeclass has an interesting connection to Representable functors. For reference, Representable is:

class Functor f => Representable x f | f -> x
  where
  rep :: f a -> (x -> a)
  unrep :: (x -> a) -> f a

instance Representable a ((->) a)
  where
  rep = id
  unrep = id
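
As an extra hedged example (the Duo type is borrowed from the comments elsewhere on this page, not from this answer), a two-element vector is represented by Bool:

data Duo a = Duo a a
  deriving Functor

instance Representable Bool Duo
  where
  rep (Duo x y) = \b -> if b then y else x
  unrep h = Duo (h False) (h True)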

An argument by @Daniel Wagner shows that StrongApplicative is a generalization of Representable, in that every Representable is StrongApplicative. Whether there are any StrongApplicatives that are not Representable is not yet clear.

Now, we know that Traversable is formulated in terms of Applicative, and runs in one direction. Since StrongApplicative promotes the Applicative laxities to isomorphisms, perhaps we want to use this extra equipment to invert the distributive law that Traversable supplies:

class Functor t => Something t
  where
  unsequence :: StrongApplicative f => f (t a) -> t (f a)

It just so happens that (->) a is a StrongApplicative, and in fact a representative specimen (if you'll pardon the pun) of the genus of Representable StrongApplicative functors. Hence we can write your inject/promote operation as:

promote :: Something f => (a -> f b) -> f (a -> b)
promote = unsequence

We mentioned before that StrongApplicative is a superclass of the family of Representable functors. From examining the type of unsequence, it is obvious that the stronger a constraint we place on the polymorphic applicative, the easier it will be to implement unsequence (and hence the more instances of the resulting class).

So in a sense there is a hierarchy of "detraversable" functors that flows in the opposite direction to a hierarchy of applicative effects with respect to which you might wish to detraverse them. The hierarchy of "inner" functors would go like this:

Functor f => Applicative f => StrongApplicative f => Representable x f

And the corresponding hierarchy of detraversable/distributive functors might go like this:

Distributive t <= ADistributive t <= SADistributive t <= RDistributive t

With definitions:

class RDistributive t
  where
  rdistribute :: Representable x f => f (t a) -> t (f a)

  default rdistribute :: (SADistributive t, StrongApplicative f) => f (t a) -> t (f a)
  rdistribute = sadistribute

class RDistributive t => SADistributive t
  where
  sadistribute :: StrongApplicative f => f (t a) -> t (f a)

  default sadistribute :: (ADistributive t, Applicative f) => f (t a) -> t (f a)
  sadistribute = adistribute

class SADistributive t => ADistributive t
  where
  adistribute :: Applicative f => f (t a) -> t (f a)

  default adistribute :: (Distributive t, Functor f) => f (t a) -> t (f a)
  adistribute = distribute

class ADistributive t => Distributive t
  where
  distribute :: Functor f => f (t a) -> t (f a)

Our definition of promote can be generalized to depend on RDistributive (since (->) a itself is indeed a representable functor):

promote :: RDistributive f => (a -> f b) -> f (a -> b)
promote = rdistribute

In a strange turn of events, once you get down to the bottom of this hierarchy (i.e. to Distributive), your promise of detraversability has become so strong relative to your demands that the only functors for which you can implement it are themselves Representable. An example of such a distributive, representable (and hence rigid) functor is that of pairs:

data Pair a = Pair { pfst :: a, psnd :: a }
  deriving Functor

instance RDistributive Pair
instance SADistributive Pair
instance ADistributive Pair
instance Distributive Pair
  where
  distribute x = Pair (pfst <$> x) (psnd <$> x)
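
As a quick sanity check of the Pair instances above (an illustration of mine, using the classes defined in this answer):

pairExample :: Pair [Int]
pairExample = distribute [Pair 1 2, Pair 3 4]
-- pfst pairExample == [1, 3]
-- psnd pairExample == [2, 4]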

Of course if you make a strong demand of the polymorphic "inner functor", for example Representable x f in RDistributive, instances like this become possible:

newtype Weird r a = Weird { runWeird :: (a -> r) -> a }
  deriving Functor

instance RDistributive (Weird r)
  where
  rdistribute = fmap unrep . promoteWeird . rep
    where
    promoteWeird :: (x -> Weird r a) -> Weird r (x -> a)
    promoteWeird f = fmap (. f) $ Weird $ \k m -> m `runWeird` \a -> k (const a)

TODO: Check where (if anywhere) in the hierarchy all the other examples of rigid functors fall.

As I said I haven't thought about it super carefully, so maybe the folks here that have devoted some thought to the rigid functor concept can immediately poke holes in it. Alternately, maybe it makes things fall into place that I can't yet see.

It's probably worthwhile thinking about some laws for these untraversing typeclasses. An obvious one that suggests itself is sequence . unsequence = id and unsequence . sequence = id wherever the functor supports both Traversable and Untraverse.

It's also worth mentioning that the interaction of "distributive law"s of functors with monads and comonads is quite well studied, so that might have some relevance to the monad related discussion in your posts.

Creosol answered 10/3, 2020 at 21:2 Comment(1)
"Whether there are any StrongApplicatives that are not Representable is not yet clear" -- StrongApplicative does imply Representable. husk . unhusk = id means () <$ u = husk () for any u. That being so, the shape of husk () is the only possible one for a StrongApplicative, and having a single shape amounts to being Representable/Distributive.Marinmarina
