From generating human-like conversations via high-end chatbots to automating many facets of our day-to-day tasks, AI is certainly making new strides across the globe. While the clamour surrounding AI replacing humans in many jobs refuses to abate, its awe-inspiring abilities are also giving rise to renewed optimism about future possibilities that could aid humanity.

Dreams are integral to the human experience. They not only inspire but also occasionally surprise us with their rawness. However, not everyone is capable of describing their dreams vividly. Much is lost in translation, leaving many to wonder whether the succession of images, ideas, and sensations could ever be captured in a physical form. While neuroscientists around the world have been grappling with the mammoth task of converting mental images into something tangible, AI seems to have paved the way.

A recent study has demonstrated that AI can read brain scans and offer realistic reconstructions of mental images. Researchers Shinji Nishimoto and Yu Takagi of Japan's Osaka University recreated high-resolution images from scans of brain activity. The technology, according to the duo, has potential applications that include exploring how animals perceive the world around them, recording dreams in humans, and even aiding communication with people suffering from paralysis.

Dream interpretations

This is not the first time that something of this scale has been attempted. Earlier studies reported using AI to read brain scans and create images of landscapes and faces. It is, however, the first time that the image-generation algorithm known as Stable Diffusion has been used. As part of the study, the researchers gave the default Stable Diffusion system additional training: they linked text descriptions of thousands of photos to the brain patterns recorded when participants in brain-scan studies viewed those same images. While earlier AI algorithms used to decode brain scans relied on large data sets, Stable Diffusion achieved the feat with less training, essentially by incorporating image captions into its algorithm. Ariel Goldstein, a cognitive neuroscientist at Princeton University who was not involved with the study, called it a novel approach that combines textual and visual information to decipher the brain.

Recording brain activity

The study suggests that the AI algorithm drew on information gathered from regions of the brain involved in perceiving images, such as the occipital and temporal lobes. The system interpreted information from functional magnetic resonance imaging (fMRI) scans. According to the researchers, when a person looks at an image, the temporal lobes register information about its contents, while the occipital lobes record layout and perspective. All of this information is captured by fMRI, which detects changes in blood flow to active regions of the brain. This recorded information, the researchers say, can be converted into an imitation of the viewed image with the help of AI.

The additional training for the Stable Diffusion algorithm was based on an online data set provided by the University of Minnesota, consisting of brain scans from four participants who each viewed 10,000 pictures. A portion of the scans was withheld from training and later used to test the AI system.
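For readers who want a concrete picture of what "linking brain patterns to image captions" might look like in practice, here is a minimal, purely illustrative sketch, not the authors' actual code. It fits a simple ridge regression from fMRI voxel patterns to the text-embedding space that a diffusion model conditions on, and holds out a portion of the scans for testing, as the study did. All file names, array shapes, and variables here are hypothetical.

```python
# Illustrative sketch only: a linear mapping from fMRI voxel patterns to
# caption-embedding vectors, in the spirit of the study described above.
# All file names and array shapes are hypothetical.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Hypothetical data: one row per viewed image.
# X: fMRI voxel activations (n_images x n_voxels)
# Y: embeddings of the images' text captions (n_images x embed_dim)
X = np.load("fmri_voxels.npy")         # e.g. shape (10000, 5000)
Y = np.load("caption_embeddings.npy")  # e.g. shape (10000, 768)

# Withhold a portion of the scans for testing, as the study did.
X_train, X_test, Y_train, Y_test = train_test_split(
    X, Y, test_size=0.1, random_state=0
)

# Ridge regression keeps the mapping simple and regularised,
# which suits noisy fMRI data.
decoder = Ridge(alpha=1000.0)
decoder.fit(X_train, Y_train)

# Predict caption embeddings for held-out brain scans.
Y_pred = decoder.predict(X_test)
print("predicted embedding shape:", Y_pred.shape)
```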
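One way such a predicted embedding could then drive image generation is to hand it to a Stable Diffusion pipeline in place of the usual encoded text prompt. The sketch below assumes Hugging Face's diffusers library, whose StableDiffusionPipeline accepts precomputed prompt_embeds; tiling a single decoded vector across the token positions is a crude stand-in for a proper projection and is labelled as such.

```python
# Illustrative sketch only: feeding a decoded embedding to Stable Diffusion.
# Assumes the Hugging Face diffusers library; the tiling below is a crude
# stand-in for projecting one decoded vector to the token-level layout.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Hypothetical: one decoded 768-dim vector from the ridge model above,
# tiled across the 77 token positions the text encoder would normally fill.
decoded = torch.tensor(Y_pred[0], dtype=torch.float16, device="cuda")
prompt_embeds = decoded.repeat(77, 1).unsqueeze(0)  # shape (1, 77, 768)

image = pipe(prompt_embeds=prompt_embeds).images[0]
image.save("reconstruction.png")
```

In the spirit of the article's own description, a caption-like signal decoded from the temporal lobes would supply the image's "contents", while layout and perspective decoded from the occipital lobes would shape the final picture; the sketch above covers only the first half of that pairing.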