Combining Predictors

I say we start linking Predictors together.

A linear string of Predictors

If we start simple, and set the Predictor's Input width to 1 (basically reducing it to a transcoder), we can create a string of Predictors.

Each Predictor will have its output fed into the next Predictor's input.

The first Predictor receives the input and the last one produces the output.
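To make the wiring concrete, here is a minimal sketch of such a string. The Predictor class below is only a toy stand-in (it just predicts the last value it saw), and its step() interface is my assumption; the real Predictor from the earlier posts would go in its place.

```python
class Predictor:
    """Toy width-1 Predictor: predicts the last value it saw.
    Stand-in for the real Predictor; interface is assumed."""
    def __init__(self):
        self.last = 0

    def step(self, value):
        prediction = self.last  # predict from history
        self.last = value       # then update internal state
        return prediction

def run_chain(chain, stream):
    """Feed the stream through the string: each Predictor's
    output becomes the next Predictor's input."""
    outputs = []
    for value in stream:
        for predictor in chain:
            value = predictor.step(value)
        outputs.append(value)
    return outputs

chain = [Predictor() for _ in range(3)]
print(run_chain(chain, [1, 2, 3, 4, 5]))  # [0, 0, 0, 1, 2]
```

With this toy predictor the string just delays the stream by its length; a real Predictor would transform the stream at each link instead.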

What would a string like this do? It would compress the incoming stream of inputs.

Not much of a feat, eh?

A hierarchy of layers

So we extend it: we allow inputs wider than 1, and organize these Predictors in layers.

The first layer will have N1 Predictors. Each Predictor will have several inputs.

Each Predictor in a layer will redirect its output to every Predictor in the next layer.

By setting up hierarchies with varying numbers of layers, and of Predictors per layer, we can create all kinds of compressors.
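A sketch of such a hierarchy, under the same caveat: LayerPredictor is a toy stand-in that just averages its inputs, and the layer sizes [4, 2, 1] are an arbitrary illustration, not anything from the original design.

```python
class LayerPredictor:
    """Toy wide-input Predictor: outputs the mean of its inputs.
    Stand-in for the real Predictor."""
    def step(self, inputs):
        return sum(inputs) / len(inputs)

def build_hierarchy(layer_sizes):
    """One list of Predictors per layer, e.g. [4, 2, 1]."""
    return [[LayerPredictor() for _ in range(n)] for n in layer_sizes]

def run_hierarchy(layers, inputs):
    """Every Predictor in a layer sees all outputs of the previous layer."""
    values = inputs
    for layer in layers:
        values = [p.step(values) for p in layer]
    return values

hierarchy = build_hierarchy([4, 2, 1])
print(run_hierarchy(hierarchy, [1.0, 2.0, 3.0, 4.0]))  # [2.5]
```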

Connecting Predictors

If we connect every Predictor in a layer to every Predictor in the next layer, we end up with a very densely connected network.

Apart from these densely connected networks, we can devise various other ways to link the Predictors of one layer to the next.

And if we want, we can also link Predictors within the same layer. We can even link a Predictor from a higher layer back to one in a lower layer.
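Once the wiring is arbitrary, it is easiest to think of the Predictors as nodes in a graph and the links as edges; lateral links and feedback links are then just edges like any other. The sketch below assumes this graph view, with a toy averaging update standing in for the real Predictor step.

```python
def make_network(num_nodes, links):
    """links: list of (source, target) index pairs."""
    inputs_of = {n: [] for n in range(num_nodes)}
    for src, dst in links:
        inputs_of[dst].append(src)
    return inputs_of

def run_step(inputs_of, state):
    """One synchronous update: each node averages the previous
    outputs of the nodes that feed it (toy rule, stand-in for
    the real Predictor update)."""
    new_state = []
    for node in range(len(state)):
        sources = inputs_of[node]
        if sources:
            new_state.append(sum(state[s] for s in sources) / len(sources))
        else:
            new_state.append(state[node])  # no inputs: keep value
    return new_state

# 0 and 1 feed 2; 2 and 3 are linked laterally; 3 feeds back to 0.
links = [(0, 2), (1, 2), (2, 3), (3, 2), (3, 0)]
inputs_of = make_network(4, links)
state = run_step(inputs_of, [1.0, 3.0, 0.0, 0.0])
print(state)
```

The feedback edge (3, 0) means the network's state at one step influences its lower layers at the next, which is exactly what a pure feed-forward hierarchy cannot do.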

I think the possibilities are endless...
