Mind - Wikipedia
One is that the system that is having syntactic thoughts about its own syntactic thoughts would have to have its symbols grounded in the real world for it to feel like something to be having higher-order thoughts.
The intention of this clarification is to exclude systems such as a computer running a program when there is in addition some sort of control or even overseeing program checking the operation of the first program. We would want to say that in such a situation it would feel like something to be running the higher-level control program only if the first-order program was symbolically performing operations on the world and receiving input about the results of those operations, and if the higher-order system understood what the first-order system was trying to do in the world.
The issue of symbol grounding is considered further in Section 3. The point here is that it is helpful to be able to think about particular one-off plans, and to correct them; and that this type of operation is very different from the slow learning of fixed rules by trial and error, or the application of fixed rules by a supervisory part of a computer program.
The view I suggest on such qualia is as follows. Information from our sensory systems (e.g. taste, touch, and vision), and about emotional and motivational states, provides inputs to the system that plans. Given that these inputs must be represented in the system that plans, we may ask whether it is more likely that we would be conscious of them or that we would not. I suggest that it would be a very special-purpose system that would allow such sensory inputs, and emotional and motivational states, to be part of linguistically based planning and yet remain unconscious, given that the processing being performed by this system is inherently conscious, as suggested above.
It seems much more parsimonious to hold that we would be conscious of such sensory, emotional, and motivational qualia because they would be used, or would be available to be used, in this type of linguistically based higher-order thought processing system, and this is what I propose.
It would require a very special machine to enable this higher-order linguistically-based thought processing, which is conscious by its nature, to occur without the sensory, emotional and motivational states which must be taken into account by the higher-order thought system becoming felt qualia.
The sensory, emotional, and motivational qualia are thus accounted for by the evolution of a linguistic (i.e. syntactic) system that can reflect on and correct its lower-order processing. It may be that much non-human animal behaviour, provided that it does not require flexible linguistic planning and correction by reflection, could take place according to reinforcement-guidance. Such behaviours might appear very similar to human behaviour performed in similar circumstances, but need not imply qualia. It would be primarily by virtue of a system for reflecting on flexible, linguistic, planning behaviour that humans, and animals close to humans with demonstrable syntactic manipulation of symbols and the ability to think about these linguistic processes, would be different from other animals, and would have evolved qualia.
Certain constraints arise here.
For example, in the sensory pathways, the nature of the representation may change as it passes through a hierarchy of processing levels, and in order to be conscious of the information in the form in which it is represented in early processing stages, the early processing stages must have access to the part of the brain necessary for consciousness. An example is provided by processing in the taste system. In the primate primary taste cortex, neurons respond to taste independently of hunger, yet in the secondary taste cortex, neurons respond to the taste of food only when hunger is present.
Now the quality of the tastant (sweet, salt, etc.) and its intensity are still perceived when hunger is absent. The implication of this is that, for quality and intensity information about taste, we must be conscious of what is represented in the primary taste cortex (or perhaps in another area connected to it that bypasses the secondary taste cortex), and not of what is represented in the secondary taste cortex.
I suggest that this correspondence arises because pleasure is the subjective state that represents in the conscious system a signal that is positively reinforcing (rewarding), and that inconsistent behaviour would result if the representations did not correspond to a signal for positive reinforcement in both the conscious and the non-conscious processing systems.
I do not suggest this at all. Instead, the arguments I have put forward above suggest that we are only conscious of representations when we have higher-order thoughts about them. Thus, in the example given, there must be connections to the language areas from the primary taste cortex, which need not be direct, but which must bypass the secondary taste cortex, in which the information is represented differently [Rolls].
There must also be pathways from the secondary taste cortex, not necessarily direct, to the language areas, so that we can have higher-order thoughts about the pleasantness of the representation in the secondary taste cortex. A schematic diagram incorporating this anatomical prediction about human cortical neural connectivity in relation to consciousness illustrates that early cortical stages in information processing may need access to language areas that bypass subsequent levels in the hierarchy, so that consciousness of what is represented in early cortical stages, and which may not be represented in later cortical stages, can occur.
Higher-order linguistic thoughts (HOLTs) could be implemented in the language cortex itself, and would not need a separate cortical area. The position to which the above arguments lead is that conscious processing does indeed have a causal role in the elicitation of behaviour, but only under the set of circumstances in which higher-order thoughts play a role in correcting or influencing lower-order thoughts.
The sense in which the consciousness is causal is then, it is suggested, that the higher-order thought is causally involved in correcting the lower-order thought; and that it is a property of the higher-order thought system that it feels like something when it is operating. As we have seen, some behavioural responses can be elicited when there is not this type of reflective control of lower-order processing, nor indeed any contribution of language.
There are many brain-processing routes to output regions, and only one of these involves conscious, verbally represented processing that can later be recalled (see Fig.). Consider grief, which may occur when a reward is terminated and no immediate action is possible [Rolls].
It may be adaptive by leading to a cessation of the formerly rewarded behaviour, and thus facilitating the possible identification of other positive reinforcers in the environment.
In humans, grief may be particularly potent because it becomes represented in a system that can plan ahead, and understand the enduring implications of the loss. Thinking about or verbally discussing emotional states may also in these circumstances help, because this can lead towards the identification of new or alternative reinforcers, and of the realization that, for example, negative consequences may not be as bad as feared.
Free will would in this scheme involve the use of language to check many moves ahead on a number of possible series of actions and their outcomes, and then, with this information, to make a choice from the likely outcomes of different possible series of actions. The operation of brain machinery must be relatively deterministic, for it has evolved to provide reliable outputs for given inputs. The issue of whether the brain operates deterministically (Section 4) is not, therefore, I suggest, the central or most interesting question about free will.
Why might this arise? One suggestion is that if one is an organism that can think about its own long-term multistep plans, then for those plans to be consistently and thus adaptively executed, the goals of the plans would need to remain stable, as would memories of how far one had proceeded along the execution path of each plan. If one felt each time one came to execute, perhaps on another day, the next step of a plan, that the goals were different, or if one did not remember which steps had already been taken in a multistep plan, the plan would never be usefully executed.
So, given that it does feel like something to be doing this type of planning using higher-order thoughts, it would have to feel as if one were the same agent, acting towards the same goals, from day to day, for which autobiographical memory would be important. If it feels like anything to be the actor, according to the suggestions of the higher-order thought theory, then it should feel like the same thing from occasion to occasion to be the actor, and no special further construct is needed to account for self-identity.
Humans without such a feeling of being the same person from day to day might be expected to have, for example, inconsistent goals from day to day, or a poor recall memory. It may be noted that the ability to recall previous steps in a plan, and bring them into the conscious, higher-order thought system, is an important prerequisite for long-term planning which involves checking each step in a multistep process. However, as stated above, one does not feel that there are straightforward criteria in this philosophical field of enquiry for knowing whether the suggested theory is correct; so it is likely that theories of consciousness will continue to undergo rapid development; and current theories should not be taken to have practical implications.
How are representations grounded in the world? I therefore now develop what I understand by representations being grounded in the world. From the firing of small ensembles of neurons in the hippocampus, it is possible to know where in allocentric space a monkey is looking [Rolls, Treves, Robertson et al.]. What is being measured in each example is the mutual information between the firing of an ensemble of neurons and which stimuli are present in the world.
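This notion of mutual information between stimulus identity and ensemble firing can be made concrete with a toy sketch. The stimulus labels, the discretized response labels, and the plug-in estimator below are illustrative assumptions, not the analyses from the cited recordings.

```python
import math
from collections import Counter

def mutual_information(stimuli, responses):
    """Estimate I(S; R) in bits from paired samples of stimulus labels
    and discretized ensemble-response labels (plug-in estimator)."""
    n = len(stimuli)
    p_s = Counter(stimuli)
    p_r = Counter(responses)
    p_sr = Counter(zip(stimuli, responses))
    mi = 0.0
    for (s, r), count in p_sr.items():
        joint = count / n
        mi += joint * math.log2(joint * n * n / (p_s[s] * p_r[r]))
    return mi

# Toy example: each stimulus reliably evokes a distinct response pattern,
# so the ensemble carries 1 bit about which of two stimuli was present.
stimuli = ["sweet", "salt", "sweet", "salt"]
patterns = ["A", "B", "A", "B"]
print(mutual_information(stimuli, patterns))  # 1.0
```

When the response labels are unrelated to the stimuli, the same estimator returns zero bits, which is the sense in which "reading off the code" is quantified.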
In this sense, one can read off the code that is being used at the end of each of these sensory systems. What is the content of the representation? So which particular neurons fire as a result of the self-organization to represent a particular object or stimulus is arbitrary. What meaning, therefore, does the particular ensemble that fires to an object have? How is the representation grounded in the real world?
This is the case in that the representation may be activated by any view of the object or face. But it still does not provide the representation with any meaning in terms of the real world. What actions might one make, or what emotions might one feel, if that arbitrary set of temporal cortex visual cells was activated? I suggest that one type of meaning of representations in the brain is provided by their reward or punishment value: In the case of primary reinforcers such as the taste of food or pain, the activation of these representations would have meaning in the sense that the animal would work to obtain the activation of the taste of food neurons when hungry, and to escape from stimuli that cause the neurons representing pain to be activated.
Evolution has built the brain so that genes specify these primary reinforcing stimuli, and so that their representations in the brain should be the targets for actions [Rolls]. For example, the touch of a solid object such as a table might become associated with evidence from the motor system that attempts to walk through the table result in cessation of movement.
In this second sense, meaning will be conferred on the visual sensory representation because of its associations in the sensory-motor world. Thus it is suggested that there are two ways by which sensory representations can be said to be grounded, that is, to have meaning, in the real world.
The fact that some stimuli are reinforcers but may not be adaptive as goals for action is no objection.
Genes are limited in number, and cannot allow for every eventuality, such as the availability to humans of non-nutritive saccharin as a sweetener. The genes can just build reinforcement systems whose activation is generally likely to increase the fitness of the genes specifying the reinforcer, or may have increased their fitness in the recent past.
This is a novel, Darwinian, approach to the issue of symbol grounding. Language in the current theory is defined by syntactic manipulation of symbols, and does not necessarily imply verbal or natural language. This enables correction of errors that cannot be easily corrected by reward or punishment received at the end of the reasoning, due to the credit assignment problem.
That is, there is a need for some type of supervisory and monitoring process, to detect where errors in the reasoning have occurred. It is having such a HOST brain system, and its becoming engaged (even if only a little), that according to the HOST theory is associated with phenomenal consciousness. Put another way, the point is that credit assignment when reward or punishment is received is straightforward in a one-layer network, in which the reinforcement can be used directly to correct the nodes (or responses) in error, but is very difficult in a multistep linguistic process executed once.
Very complex mappings in a multilayer network can be learned if hundreds of learning trials are provided. But once these complex mappings are learned, their success or failure in a new situation on a given trial cannot be evaluated and corrected by the network. Indeed, the complex mappings achieved by such networks are distributed across many connection weights, so the contribution of any single step cannot easily be isolated and corrected. In contrast, to correct a multistep, single-occasion, linguistically based plan or procedure, recall of the steps just made in the reasoning or planning, and perhaps of related episodic material, needs to occur, so that the link in the chain that is most likely to be in error can be identified.
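The easy case of credit assignment can be sketched in a toy example (a hypothetical illustration, not a model from the text): in a one-layer network trained by the delta rule, the reinforcement-like error signal acts directly on exactly the weights responsible for the error, which is what a multistep, one-off plan lacks.

```python
def train_one_layer(samples, epochs=20, lr=0.5):
    """Delta-rule learning in a one-layer network: the error signal acts
    directly on the weights responsible, so credit assignment is immediate.
    (Task and parameters are illustrative assumptions.)"""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            y = 1.0 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0.0
            err = target - y            # reinforcement-like error signal
            w[0] += lr * err * x[0]     # each weight in error is corrected
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# Learn the AND function over repeated trials; after training every
# input is classified correctly.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_one_layer(samples)
print([1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0 for x, _ in samples])  # [0, 0, 0, 1]
```

Note that this direct weight correction is only available because the error on each trial is attributable to a single layer; nothing analogous identifies the faulty step in a one-shot chain of reasoning.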
This may be part of the reason why there is a close relationship between declarative memory systems, which can explicitly recall memories, and consciousness. Should these count as higher-order linguistic thought processes? My current response to this is that they should not, to the extent that they operate with fixed rules to correct the operation of a system that does not itself involve linguistic thoughts about symbols grounded semantically in the external world.
If, on the other hand, it were possible to implement on a computer such a higher-order linguistic thought supervisory correction process, correcting first-order one-off linguistic thoughts with symbols grounded in the real world as described at the end of Section 3, then prima facie such a system might be held to be conscious. Indeed, if it were possible in a thought experiment to reproduce the neural connectivity and operation of a human brain on a computer, then prima facie it would also have the attributes of consciousness. Raw sensory feels, and subjective states associated with emotional and motivational states, may not necessarily arise first in evolution.
In addition, given that a linguistic system can control behavioural output, several parallel streams might produce maladaptive behaviour, apparent, for example, as inconsistent or conflicting actions. The close relationship between, and the limited capacity of, both the stream of consciousness and auditory-verbal short-term working memory may be that both implement the capacity for syntax in neural networks. For example, the code about which visual stimulus has been shown can be read off from the end of the visual system without taking the temporal aspects of the neuronal firing into account; much of the information about which stimulus is shown is available in short times of 30–50 ms, and cortical neurons need fire for only this long during the identification of objects [Tovee, Rolls, Treves et al.].
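Reading off such a code from spike counts in a short window, while ignoring temporal structure, can be illustrated as follows. The spike times and response templates below are hypothetical toy data, not recorded responses.

```python
def rate_code(spike_trains, window_ms=50):
    """Spike count per neuron within a short window; the temporal
    structure of spikes inside the window is deliberately discarded."""
    return tuple(sum(1 for t in train if t < window_ms)
                 for train in spike_trains)

def decode(spike_trains, templates, window_ms=50):
    """Nearest-template read-out of which stimulus was shown, using
    only the rate code (toy data, squared-distance matching)."""
    counts = rate_code(spike_trains, window_ms)
    return min(templates,
               key=lambda s: sum((c - t) ** 2
                                 for c, t in zip(counts, templates[s])))

# Toy templates for two stimuli across a three-neuron ensemble.
templates = {"face": (5, 1, 0), "object": (0, 2, 6)}
trial = [[3, 9, 15, 22, 40, 70], [12], [60, 80]]  # spike times in ms
print(decode(trial, templates))  # face
```

The point of the sketch is that the counts within the first 50 ms already separate the two templates; no spike-timing information is consulted.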
The fact that the binding must be implemented in neuronal networks may well place limitations on consciousness that lead to some of its properties, such as its unitary nature. However, the fact that oscillations and neuronal synchronization are especially evident in anaesthetized cats does not impress as strong evidence that oscillations and synchronization are critical features of consciousness, for most people would hold that anaesthetized cats are not conscious.
The advantages for a system of being able to do this have been described, and this has been suggested as the reason why consciousness evolved. The evidence that consciousness arises by virtue of having a system that can perform higher-order linguistic processing is, however, and I think might remain, circumstantial.
The evidence described here suggests that it does feel like something when we are performing a certain type of information processing, but does not produce a strong reason for why it has to feel like something. It just does, when we are using this linguistic processing system capable of higher-order thoughts.
Evidence also comes from neurological cases: from, for example, split-brain patients, who may confabulate conscious stories about what is happening in their other, non-language hemisphere; and from cases such as frontal lobe patients, who can say consciously what they should be doing but nevertheless may be doing the opposite. The force of this type of case is that much of our behaviour may normally be produced by routes about which we cannot verbalize and of which we are not conscious.
Does consciousness cause our behaviour?
The view that I currently hold is that the information processing that is related to consciousness (activity in a linguistic system capable of higher-order thoughts, used for planning and for correcting the operation of lower-order linguistic systems) can play a causal role in producing our behaviour. It is, I postulate, a property of processing in this system capable of higher-order thoughts that it feels like something to be performing that type of processing. It is in this sense that I suggest that consciousness can act causally to influence our behaviour: consciousness is the property that occurs when a linguistic system is thinking about its lower-order thoughts, which may be useful in correcting plans.
Most humans would find it very implausible, though, to posit that they could be thinking about their own thoughts, and reflecting on their own thoughts, without being conscious. This type of processing does appear, for most humans, to be necessarily conscious. What, then, would such a system require? First, a linguistic system, not necessarily verbal, but implementing syntax between symbols grounded in the environment, would be needed.
This system would be necessary for a multi-step one-off planning system. Then a higher-order thought system also implementing syntax and able to think about the representations in the first-order linguistic system, and able to correct the reasoning in the first-order linguistic system in a flexible manner, would be needed.
The system would also need to have its representations grounded in the world, as discussed in Section 3. So my view is that consciousness can be implemented in neuronal networks (and that this is a topic worth discussing), but that the neuronal networks would have to implement the type of higher-order linguistic processing described in this paper, and would also need to be grounded in the world.
If the external evidence is contrary to the noise-influenced decision, then the firing rates of the neurons in the winning attractor are not supported by the external evidence, and are lower than expected.
The second attractor network allows decisions to be made about whether to change the decision made by the first network, and, for example, abort the trial or strategy (see Fig.). The second network, the confidence decision network, is in effect monitoring the decisions taken by the first network, and can cause a change of strategy or behaviour if the decision taken by the first network does not appear to be a confident one. This is described in detail elsewhere [Insabato, Pannunzi, Rolls et al.].
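A minimal rate-model sketch can illustrate the idea. This is an assumption-laden toy, not the published Insabato, Pannunzi, Rolls et al. model: a first network with two mutually inhibiting populations settles into an attractor, and the winner's final rate, which is lower when the external evidence is weak or conflicts with the noise-influenced choice, serves as the confidence signal read by a second, monitoring stage.

```python
import random

def attractor_decision(evidence, noise=0.4, steps=300, seed=1):
    """First network: two populations with self-excitation and mutual
    inhibition race to an attractor. Returns the chosen population and
    the winner's final rate (higher when the evidence supports the
    choice). All parameters are illustrative assumptions."""
    rng = random.Random(seed)
    r1, r2 = 0.3, 0.3
    for _ in range(steps):
        in1 = 0.4 + evidence + rng.gauss(0, noise)
        in2 = 0.4 - evidence + rng.gauss(0, noise)
        r1 += 0.02 * (-r1 + max(0.0, in1 + 0.6 * r1 - 0.8 * r2))
        r2 += 0.02 * (-r2 + max(0.0, in2 + 0.6 * r2 - 0.8 * r1))
    return (1, r1) if r1 > r2 else (2, r2)

def confidence_monitor(winner_rate, threshold=1.5):
    """Second network, reduced here to a threshold read-out: keep the
    decision if the winning rate signals confidence, otherwise abort."""
    return "keep" if winner_rate >= threshold else "abort"

choice, rate = attractor_decision(0.4, noise=0.0)   # strong evidence
print(choice, confidence_monitor(rate))             # 1 keep
choice, rate = attractor_decision(0.05, noise=0.0)  # weak evidence
print(choice, confidence_monitor(rate))             # 1 abort
```

The design choice mirrors the text: the monitoring stage never inspects the stimulus itself, only the winning firing rate of the first network, and on low-confidence trials it signals a change of strategy.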
Figure: network architecture for decisions about confidence estimates.

Turing published "Computing Machinery and Intelligence" in Mind (1950), in which he proposed that machines could be tested for intelligence using questions and answers. This process is now known as the Turing test. The term artificial intelligence (AI) was first used by John McCarthy, who considered it to mean "the science and engineering of making intelligent machines".
AI is studied in the overlapping fields of computer science, psychology, neuroscience, and engineering, dealing with intelligent behavior, learning, and adaptation, and is usually developed using customized machines or computers.
Research in AI is concerned with producing machines to automate tasks requiring intelligent behavior. Examples include control, planning and scheduling, the ability to answer diagnostic and consumer questions, and handwriting, natural language, speech, and facial recognition. As such, the study of AI has also become an engineering discipline, focused on providing solutions to real-life problems, knowledge mining, software applications, strategy games like computer chess, and other video games.
One of the biggest limitations of AI is in the domain of actual machine comprehension. Consequently, natural language understanding and connectionism (in which the behavior of neural networks is investigated) are areas of active research and development. The debate about the nature of the mind is relevant to the development of artificial intelligence. If the mind is indeed a thing separate from or higher than the functioning of the brain, then hypothetically it would be much more difficult to recreate within a machine, if it were possible at all.
If, on the other hand, the mind is no more than the aggregated functions of the brain, then it will be possible to create a machine with a recognisable mind (though possibly only with computers much different from today's), by simple virtue of the fact that such a machine already exists in the form of the human brain.

In religion

Many religions attribute spiritual qualities to the human mind. These are often tightly connected to their mythology and ideas of the afterlife.
The Indian philosopher-sage Sri Aurobindo attempted to unite the Eastern and Western psychological traditions with his integral psychology, as have many philosophers and new religious movements. Judaism teaches that "moach shalit al halev", the mind rules the heart.
Humans can approach the Divine intellectually, through learning and behaving according to the Divine Will as enclothed in the Torah, and use that deep logical understanding to elicit and guide emotional arousal during prayer. Christianity has tended to see the mind as distinct from the soul (Greek nous), and sometimes further distinguished from the spirit. Western esoteric traditions sometimes refer to a mental body that exists on a plane other than the physical.
Hinduism's various philosophical schools have debated whether the human soul (Sanskrit atman) is distinct from, or identical to, Brahman, the divine reality. Taoism sees the human being as contiguous with natural forces, and the mind as not separate from the body.
Confucianism sees the mind, like the body, as inherently perfectible. In Buddhism, the arising and passing of the mental aggregates in the present moment is described as being influenced by five causal laws. According to the Buddhist philosopher Dharmakirti, the mind has two fundamental qualities: clarity and cognizance. If something does not have those two qualities, it cannot validly be called mind.
You cannot have a mind, whose function is to cognize an object, existing without cognizing an object. Mind, in Buddhism, is also described as being "space-like" and "illusion-like". Mind is space-like in the sense that it is not physically obstructive.
It has no qualities which would prevent it from existing. In Mahayana Buddhism, mind is illusion-like in the sense that it is empty of inherent existence. This does not mean that it does not exist; it means that it exists in a manner that is counter to our ordinary way of misperceiving how phenomena exist, according to Buddhism.