Does the difference between Mealy and Moore state machines have any real significance when it comes to a C implementation? What would that difference normally be?
A long time ago, it was much easier for me to understand the Mealy/Moore advantages and disadvantages in RTL. The distinction made sense there: output depending on the current state versus output depending on the current state plus the current input, as did the fact that a Mealy machine can sometimes be built with one fewer state. Associating timing diagrams with each FSM implementation also made the difference between them clearer.
Say I'm making a state machine in C. In the Mealy case a LUT is indexed by the current state and the current inputs; in the Moore case the output LUT is indexed by the current state alone, and a separate lookup returns the next state. In either case the output is produced after the LUT lookup returns (I think, though I could be wrong). I haven't found a clear way in which a Mealy machine has an advantage when coded in C. Code readability, speed, code density, and ease of design may all be relevant topics, but from my perspective the two models seem almost the same.
Maybe this difference matters only to academics, and in practice the difference between C implementations is negligible. If you know the key ways a C state machine implementation would differ between Mealy and Moore, and whether either has real, significant advantages, I'd be curious to know. I'd like to emphasize: I'm not asking about RTL implementations.
I did see one other Mealy/Moore post here: Mealy v/s. Moore
But that isn't really the level of explanation I'm looking for.