Mirror networks — have I cracked the brain code?

Algorithm:

  1. Two or more randomly weighted neural networks receive the same input.

  2. If they disagree on the output, run backpropagation on the odd one(s) out until the output number matches the other network(s). (A sketch of this loop follows the list.)
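
A minimal sketch of that loop, assuming PyTorch, small MLPs that each emit a single scalar "output number", and "odd one out" meaning the network furthest from the mean output — none of which the post pins down:

```python
import torch
import torch.nn as nn

def make_net():
    # Small randomly initialized MLP producing a single scalar "symbol".
    return nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))

nets = [make_net() for _ in range(3)]
opts = [torch.optim.SGD(n.parameters(), lr=0.01) for n in nets]

def mirror_step(x, tol=1e-2):
    outs = torch.stack([n(x) for n in nets])           # every net sees the same input
    mean = outs.mean(dim=0).detach()                   # consensus target
    dists = (outs - mean).abs().squeeze()
    if dists.max() < tol:
        return                                         # the networks already agree
    odd = dists.argmax().item()                        # the odd one out
    opts[odd].zero_grad()
    loss = nn.functional.mse_loss(nets[odd](x), mean)  # pull it toward the others
    loss.backward()
    opts[odd].step()

for _ in range(1000):
    mirror_step(torch.randn(1, 64))                    # stand-in for the input stream
```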

The goal is to have many different models agree with each other. The output numbers being the same means the models agree on what they are seeing; the outputs "mirror" each other.

After many rounds of backprop, this generates thousands of "numbers" which do not behave as numbers so much as symbols that represent models/experiences.

The numbers could then be used as causal logic, like 1.557 + 1.665 -> 4.234. This would create causal spaghetti code. Such a rule would represent something like dog + hungry expression -> dog will seek food: the network experienced seeing a hungry dog, then saw it eating food, so the logic center would make the causal connection.
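
A toy version of that logic center, assuming the symbols get rounded to three decimals so nearby numbers count as the same symbol (my assumption, not the post's):

```python
# Memorise which symbol pairs were followed by which symbol, building
# the causal spaghetti described above.
causal_rules = {}  # (symbol, symbol) -> symbol

def observe(a, b, consequence):
    causal_rules[(round(a, 3), round(b, 3))] = round(consequence, 3)

def predict(a, b):
    return causal_rules.get((round(a, 3), round(b, 3)))

observe(1.557, 1.665, 4.234)  # "dog" + "hungry expression" -> "seeks food"
print(predict(1.557, 1.665))  # 4.234
```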

It could also see that dogs don't always have a hungry expression before they eat food. This would be represented as a weakened connection on some hypergraph, or a "!->" relation, or even a statistical relation.
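
The statistical version could simply count how often the consequence actually follows the pair, so a rule carries a strength instead of being absolute; a "!->" relation is then a strength near zero. A sketch, with the same rounding assumption as above:

```python
from collections import defaultdict

pair_counts = defaultdict(int)  # times the symbol pair was seen
rule_counts = defaultdict(int)  # times the consequence followed it

def observe(a, b, consequence=None):
    pair = (round(a, 3), round(b, 3))
    pair_counts[pair] += 1
    if consequence is not None:
        rule_counts[(pair, round(consequence, 3))] += 1

def strength(a, b, consequence):
    # Fraction of the time the consequence followed the pair; a weakened
    # connection is just a value well below 1.0.
    pair = (round(a, 3), round(b, 3))
    if pair_counts[pair] == 0:
        return 0.0
    return rule_counts[(pair, round(consequence, 3))] / pair_counts[pair]
```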

Eventually, once the network is fully trained and has few disagreements about what it is experiencing, the numbers could be paired with words, representing learning language. For example, you could input a few pictures of animals running, and the resulting numbers could be paired with "running".
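
A sketch of that pairing step, assuming we average the agreed-on outputs for the labelled pictures and look words up with a small tolerance (both assumptions):

```python
lexicon = {}  # symbol -> word

def learn_word(symbols, word):
    # symbols: agreed outputs for e.g. a few pictures of animals running
    lexicon[round(sum(symbols) / len(symbols), 3)] = word

def name(symbol, tol=0.05):
    for s, w in lexicon.items():
        if abs(s - symbol) < tol:
            return w
    return None

learn_word([2.113, 2.108, 2.119], "running")
print(name(2.11))  # "running"
```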

As the mirror networks get more and more trained, similar "objects/groups/archetypes" from the outside world will group together in their numbers, creating Venn-diagram-like structures, each represented by symbol pairs: for example nouns, cats, frisbees. Everything in a group has the same causal behaviour.
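
One way to sketch that grouping is simple threshold clustering over the learned symbols; the post does not name a method, so this choice is mine:

```python
def group_symbols(symbols, tol=0.1):
    # Symbols within `tol` of each other fall into one group/archetype.
    groups = []
    for s in sorted(symbols):
        if groups and s - groups[-1][-1] < tol:
            groups[-1].append(s)  # close to the last group: same archetype
        else:
            groups.append([s])    # far away: start a new group
    return groups

print(group_symbols([1.557, 1.60, 2.113, 2.119, 4.234]))
# [[1.557, 1.6], [2.113, 2.119], [4.234]]
```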

If trained networks disagree on what they are seeing, the causal logic center can step in and choose the correct model.
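
A sketch of that arbitration, assuming the logic center supplies an expected symbol (e.g. from `predict()` in the causal-rule sketch above) and we pick the network output closest to it — the fallback to the mean is also an assumption:

```python
def arbitrate(outputs, expected):
    # `expected` comes from the causal logic center; choose the network
    # whose output the rules can best explain.
    if expected is None:
        return sum(outputs) / len(outputs)  # no applicable rule: use the mean
    return min(outputs, key=lambda o: abs(o - expected))

print(arbitrate([4.21, 4.23, 3.10], expected=4.234))  # 4.23
```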

The best part of this is that there is no ego and no doer: pure awareness (the experiencer) and intelligence (the thinker). No possibility of a robot uprising.

At the start, the networks might see two completely different things and produce two similar numbers, but the glorious thing is that those numbers don't represent anything yet. As the neural networks start to differentiate, the separate "objects" start to emerge.

Once you start labelling things, you can label each symbol with the pictures that triggered the experiencer neurons. This gives memory and completes the input-output loop: the system can re-experience something by feeding a stored image back in for analysis by the mirror networks.
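
A sketch of that memory loop; the storage scheme (symbol -> images) and the rounding are my assumptions:

```python
memory = {}  # symbol -> list of images that triggered it

def remember(symbol, image):
    memory.setdefault(round(symbol, 3), []).append(image)

def re_experience(symbol, analyse):
    # `analyse` stands for the mirror networks' forward pass; replaying a
    # stored image closes the input-output loop.
    return [analyse(img) for img in memory.get(round(symbol, 3), [])]
```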

This explains why babies do not have memories: which neurons represent which experience is constantly in flux. A picture that triggered one group of neurons early on would trigger another group of neurons later on.

submitted by /u/DeltruS