Here is the idea of the research topic: it seems like humans can play videos (e.g. imaginations, memories) and audio (e.g. sub-vocalizations, imagined sounds) in their heads. Is this similar to how computers play videos (frame by frame)? How does the human brain decide what to "play next"? These "videos" can apparently be played while the eyes are open, and they seem to be used to make decisions (e.g. "if I go forward, I might get hit, so I…"). If it is possible, how can we make a computer mimic these abilities?
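For a rough computational analogue (not an answer to the neuroscience side of the question), model-based reinforcement learning does something similar: a learned "world model" predicts what happens next given the current state and a candidate action, and the agent rolls that model forward "in imagination" to score plans before acting. Below is a minimal toy sketch of that idea; the names (`ToyState`, `predict_next`, `imagine_rollout`) and the hand-coded 1-D world are hypothetical illustrations, not an existing library or the way the brain actually works.

```python
# Toy illustration of "imagining before acting": a hand-written rule stands in
# for what would normally be a learned predictive model of the environment.

from dataclasses import dataclass


@dataclass
class ToyState:
    position: int  # agent's position on a 1-D track
    wall_at: int   # position of an obstacle ("getting hit" = reaching it)


def predict_next(state: ToyState, action: str) -> tuple[ToyState, float]:
    """Predict the next state and a reward for one imagined step.

    In a real system this would be a model learned from observed "frames",
    not a hand-coded rule.
    """
    if action == "forward":
        new_pos = state.position + 1
        if new_pos == state.wall_at:
            return ToyState(new_pos, state.wall_at), -10.0  # "I might get hit"
        return ToyState(new_pos, state.wall_at), 1.0        # progress is good
    return state, 0.0  # "stay" changes nothing


def imagine_rollout(state: ToyState, plan: list[str]) -> float:
    """Play the 'video in the head': simulate a plan step by step, without
    acting in the real world, and add up the predicted rewards."""
    total = 0.0
    for action in plan:
        state, reward = predict_next(state, action)
        total += reward
    return total


if __name__ == "__main__":
    start = ToyState(position=0, wall_at=2)
    plans = {
        "keep going forward": ["forward", "forward", "forward"],
        "one step, then stop": ["forward", "stay", "stay"],
    }
    # Decide by comparing imagined outcomes, then pick the highest-scoring plan.
    for name, plan in plans.items():
        print(name, "->", imagine_rollout(start, plan))
```

The "what to play next" question maps onto which imagined rollouts the planner chooses to expand, which is an open research question in its own right (search, learned priors, etc.).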
What degrees or study pathways are good for becoming knowledgeable enough to do research on this? If these questions are stupid or on the wrong track, then my question can be revised to: what degrees would help me understand why these questions are stupid?
submitted by /u/vjlomocso