So I’ve been thinking about this for a while, and today I hit on what I think is an interesting DNN design. It’s based on what OpenNARS is doing, and I wonder if anyone is doing anything similar:
Have an RNN take natural language sentences as input and return a sentence as output. This would be trained on logical X & Y -> Z triplets, then quintuplets, etc.
Have another RNN answer questions true or false (on a scale) given background info: X & Y -> Z? Train it on the same data, but this one can also include negative examples like X & Y !-> Z. Use it as an adversarial network to continue training the first network. This is important because, like with NARS, logic is 2D (for NARS it’s frequency and confidence; for this I guess it’s soundness and validity).
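To make the training setup concrete, here's a toy sketch (pure data plumbing, no actual RNNs; all names are hypothetical) of how one X & Y -> Z triplet might be turned into examples for both networks:

```python
def make_examples(x, y, z):
    """Build training examples for both networks from an X & Y -> Z triplet.

    Network 1 (generator): sentence pair as input -> conclusion as target.
    Network 2 (validator): full implication -> truth score, plus a
    negated '!->' variant as a low-scoring adversarial example.
    """
    gen_example = {"input": f"{x} & {y}", "target": z}
    val_examples = [
        {"input": f"{x} & {y} -> {z}", "score": 1.0},   # valid implication
        {"input": f"{x} & {y} !-> {z}", "score": 0.0},  # negated counterexample
    ]
    return gen_example, val_examples

gen, val = make_examples("it rains", "I'm outside", "I get wet")
```

The negated variant is what gives network 2 something to push back with when it scores network 1's outputs.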
Now, treat these two networks as adversarial, and the platform as a whole as genetic.
Select Phase – Pick two sentences from a database of knowledge.
Crossover – Give them to network 1 and return one or more children.
Score – Check that the output is valid with network 2.
Mutate – Randomly invert the output some of the time.
Put the new sentence in the database.
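The select/crossover/score/mutate loop above could be sketched like this, with toy stand-ins for the two RNNs (the `generator`, `validator`, and `negate` functions are hypothetical placeholders, not real models):

```python
import random

# Toy stand-ins for the two RNNs; real ones would be trained seq2seq models.
def generator(a, b):
    """'Network 1' crossover: combine two parent sentences into children."""
    return [f"{a} & {b} -> Z"]

def validator(child):
    """'Network 2' score: validity on a 0-1 scale (toy rule)."""
    return 0.0 if "!->" in child else 0.9

def negate(child):
    """Mutation: invert the implication."""
    return child.replace("->", "!->")

def evolve_step(db, mutation_rate=0.1):
    """One generation: select, crossover, mutate, score, insert."""
    parent_a, parent_b = random.sample(db, 2)    # Select
    for child in generator(parent_a, parent_b):  # Crossover
        if random.random() < mutation_rate:      # Mutate
            child = negate(child)
        if validator(child) > 0.5:               # Score
            db.append(child)                     # put new sentence in database
    return db

db = ["X", "Y"]
evolve_step(db, mutation_rate=0.0)  # deterministic demo: mutation disabled
```

One design question this surfaces: whether mutation should happen before or after scoring. Mutating before (as above) lets network 2 filter out inverted nonsense; mutating after would deliberately inject it.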
In NARS there is a system for detecting sentences that are the same, so new computations that generate an existing sentence add "evidence" to the claim (its frequency). Since this is natural language, that would be the hard part: keeping the database from growing out of control and merging duplicate knowledge.
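One way the merging might work is to compare sentence embeddings and bump an evidence count when a new sentence is close enough to an existing one. A minimal sketch, assuming some embedding function (here a toy bag-of-words stand-in for a learned encoder):

```python
import math
from collections import Counter

def embed(sentence):
    """Toy bag-of-words embedding (stand-in for a learned sentence encoder)."""
    return Counter(sentence.lower().split())

def cosine(u, v):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(u[k] * v.get(k, 0) for k in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def add_with_merge(db, sentence, threshold=0.9):
    """Add a sentence to db, merging near-duplicates by adding evidence.

    db maps canonical sentence -> evidence count (NARS-style frequency).
    """
    vec = embed(sentence)
    for existing in db:
        if cosine(vec, embed(existing)) >= threshold:
            db[existing] += 1  # same knowledge: accumulate evidence
            return db
    db[sentence] = 1           # genuinely new knowledge
    return db

db = {}
add_with_merge(db, "birds can fly")
add_with_merge(db, "Birds can fly")        # duplicate up to casing: merged
add_with_merge(db, "penguins cannot fly")  # distinct: new entry
```

A real version would need embeddings that capture paraphrase ("birds fly" vs. "a bird can fly"), which bag-of-words obviously won't; the threshold is also a guess.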
I think validity might also be measurable by humans, or against an environment, as an added score, like in reinforcement learning.
Are there any similar Q&A networks out there in NLP?
from Artificial Intelligence http://ift.tt/2rm20Gj