How to create a two-way mapping between an input space and a higher-dimensional sparse constrained space?

I think this problem can be solved with ML because there are some properties of the output space that I want to achieve.

Problem: find a mapping D1 <-> D2, where D1 is the input space and D2 is a space with the following properties. D2 has higher dimensionality (probably by orders of magnitude), and each of its dimensions is constrained to a natural number between 0 and N. There is a probability P that a +-1 change to a random dimension in D2 has no effect on the mapping back to D1, a probability P2 that such a change affects only a single dimension in D1, a probability P3 that it affects two dimensions, and so on.
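For concreteness, here is a minimal sketch (my own, not part of the question; the decode callable and the mutation_profile name are hypothetical) of how P, P2, P3, ... could be estimated empirically for any candidate D1 <-> D2 mapping:

```python
import random
from collections import Counter

def mutation_profile(decode, d2, n_max, trials=10000):
    """Estimate how many D1 dimensions change after a single +-1 mutation in D2.

    Returns a dict mapping "number of changed D1 dimensions" to its empirical
    frequency, so result.get(0, 0) approximates P, result.get(1, 0) approximates
    P2, result.get(2, 0) approximates P3, and so on.
    """
    baseline = decode(d2)
    counts = Counter()
    for _ in range(trials):
        mutant = list(d2)
        j = random.randrange(len(mutant))  # pick a random D2 dimension
        # apply a +-1 change, clamped to the allowed range [0, N]
        mutant[j] = min(n_max, max(0, mutant[j] + random.choice([-1, 1])))
        changed = sum(a != b for a, b in zip(baseline, decode(mutant)))
        counts[changed] += 1
    return {k: v / trials for k, v in counts.items()}
```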

The goal is to create a mapping that would allow genetic algorithms to be applied in the D2 space, the rationale being that this is how DNA works and it is obviously effective.

Genetic algorithms applied directly to D1 can be next to useless if there are hidden relationships between its dimensions. That is the main reason D2 is needed: in D2 such relationships would be minimized, and where they do exist, the magnitude of their impact back on D1 would be randomized.

Statolith asked 4/11, 2015 at 6:13 Comment(2)
It looks to me like you are looking for a redundant representation. In genetic programming you have quite a few to choose from. One example is ReNCoDe, which actually has two levels of redundancy. The first comes from the fact that under the hood it uses an Artificial Regulatory Network, which achieves redundancy through a majority rule mapping genes into proteins. The second is when the ReNCoDe algorithm maps proteins into functions/terminals. – Weeden
A couple of catches: first, you cannot track modifications on the upper level precisely back to the lower level (if that is necessary); second, maybe in this case the dimensionality of the upper level is smaller (instead of bigger). I did not fully understand that part of the question. – Weeden

It sounds like you are looking for an "error-correcting code." In this case, D1 is your initial representation, and D2 is a redundant code. The theory of these codes lets you calculate the probability of recovering the correct representation, given the size of the code D2 and the probability that D2 is corrupted.
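As a minimal illustration of the idea (my own sketch, using a plain integer repetition code rather than anything from a specific reference): each D1 coordinate is copied R times to form D2, and decoding takes the median of each block, so a single +-1 mutation in D2 is absorbed and maps back to the same D1 point.

```python
import random

N = 255   # assumed upper bound on each coordinate
R = 5     # redundancy factor: each D1 coordinate becomes R coordinates in D2

def encode(d1):
    """D1 -> D2: repeat every coordinate R times."""
    return [x for x in d1 for _ in range(R)]

def decode(d2):
    """D2 -> D1: take the median of each block of R copies."""
    return [sorted(d2[i:i + R])[R // 2] for i in range(0, len(d2), R)]

d1 = [12, 200, 7]
d2 = encode(d1)

# GA-style point mutation: +-1 to one random D2 coordinate, clamped to [0, N].
j = random.randrange(len(d2))
d2[j] = min(N, max(0, d2[j] + random.choice([-1, 1])))

print(decode(d2) == d1)   # True: a single mutation never changes the decoded D1 vector
```

A practical code would use a smarter redundancy scheme than plain repetition, but the principle, and the probability calculations mentioned above, stay the same.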

One really excellent reference for binary error-correcting codes is David MacKay's Information Theory, Inference, and Learning Algorithms, particularly Part II. Note that this is not exactly what you want, as you mentioned natural numbers from 0 to N, not binary numbers. You can also search for "analog error correcting codes," which might get you closer to exactly what you're requesting here.

As for genetic algorithms, these can apparently also be applied to the problem of discovering good error-correcting codes, for example in this paper.

Ullund answered 12/11, 2015 at 18:25 Comment(0)
