Abstract

Visual imagery during sleep has long been a topic of persistent speculation, but its private nature has hampered objective analysis. Here we present a neural decoding approach in which machine-learning models predict the contents of visual imagery during the sleep-onset period, given measured brain activity, by discovering links between human functional magnetic resonance imaging patterns and verbal reports with the assistance of lexical and image databases. Decoding models trained on stimulus-induced brain activity in visual cortical areas showed accurate classification, detection, and identification of contents. Our findings demonstrate that specific visual experience during sleep is represented by brain activity patterns shared by stimulus perception, providing a means to uncover subjective contents of dreaming using objective neural measurement.
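The core decoding idea described in the abstract — training a model on stimulus-induced brain activity and applying it to activity measured in another condition — can be illustrated with a toy sketch. The actual study used sparse multinomial logistic regression on fMRI voxel patterns from visual cortex; here synthetic "voxel" data stands in for real recordings, and all variable names are illustrative, not the authors' code.

```python
# Toy sketch: train a decoder on "perception" activity patterns, then test
# whether it generalizes to held-out patterns (standing in for sleep-onset
# activity). Synthetic data only; illustrative of the approach, not the study.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_voxels, n_per_class = 200, 40

# Simulate two visual content categories with distinct mean voxel patterns.
means = rng.normal(0.0, 1.0, size=(2, n_voxels))
X_perception = np.vstack(
    [rng.normal(means[c], 1.0, size=(n_per_class, n_voxels)) for c in (0, 1)]
)
y_perception = np.repeat([0, 1], n_per_class)

# Train the decoder on stimulus-induced ("perception") activity only.
decoder = LogisticRegression(max_iter=1000).fit(X_perception, y_perception)

# Apply the perception-trained decoder to new patterns from the same
# underlying representation, mimicking the cross-condition transfer test.
X_new = np.vstack(
    [rng.normal(means[c], 1.0, size=(10, n_voxels)) for c in (0, 1)]
)
y_new = np.repeat([0, 1], 10)
accuracy = decoder.score(X_new, y_new)
print(f"cross-condition decoding accuracy: {accuracy:.2f}")
```

Above-chance accuracy on the held-out patterns is the analogue of the paper's finding: content-specific activity patterns learned during perception carry over to another condition.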