# Artificial Intelligence and Mind

Dr. Nergis Ustoglu, Istanbul Maltepe State Hospital, Istanbul, Turkey

*Global Journal of Computer Science and Technology: H, Information & Technology*

Abstract- Given the absence of any indication of subjective experience in artificial intelligence simulations, even in models with very large numbers of neurons and synapses, the cause should lie in the structure of the system, not in the numbers. This structure may involve an extra dimension carrying a different kind of information (quale). While a larger number of transistors allows more complex computation, it does not bring "awareness".

Within DARPA's SyNAPSE program, which has the ambitious goal of engineering a revolutionary system of compact, low-power neuromorphic and synaptronic chips built from novel synapse-like nanodevices, the C2 simulator running on LLNL's Dawn Blue Gene/P supercomputer, with 147,456 CPUs and 144 TB of main memory, was compared against the numbers of neurons and synapses in mammalian cortices [1]. A cortical simulation at a scale exceeding that of the cat cerebral cortex was achieved: a cat cortex, with about 6.1 × 10¹² synapses, was matched by a model with 0.9 × 10⁹ neurons and 0.9 × 10¹³ synapses, using probabilistic connectivity and a simulation time step of 1 ms, running only 83 times slower than real time per hertz of average neuronal firing rate. The simulation was successful in memory storage, in uncovering relationships in data, and in pattern recognition, but it did not have the consciousness of a cat with subjective experience. Applying Occam's Razor, is it possible to say that the awareness of a system, which is not connected to any processor, belongs to the structure of the system, even though the connection between the network and data processing is clear?

One of the co-founders of phenomenology, Franz Brentano [2], speaks of a science based on internal perception. Mental acts can fold over themselves completely and take themselves as an object; the mind can become conscious of a perception by taking that perception as an object. In artificial intelligence, even though a self-folding loop can continue forever, this will never be possible. Neither the multiplicity of synapses nor the excess number of processors can be the solution. As expressed in Gödel's incompleteness theorems, a proof of a system's consistency is not possible within the system itself. We all perceive three spatial dimensions and one time dimension; thus, within this system, it is not possible to understand "awareness". So our perception, our memory, and the self-experience of our concepts need a more advanced system with an additional dimension. The algorithms used in artificial intelligence are built on the dimensions that we can perceive, which do not overlap with subjective experience. Unlike in a computer, the brain's internalization of an apple in memory could be supplied by an additional spatial dimension, without depending on a function. The fourth dimension, which can be visualized as intertwined with the three spatial dimensions, can be interpreted as "awareness", just as matter is formed in three dimensions. What artificial neural networks, which process at nanosecond scale, lack compared to the human brain, which operates at millisecond scale, may be some structural property of the hardware, not the computation. Artificial neural networks are mathematical systems built as neuromorphic networks of processing units connected in a weighted manner.
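As a minimal illustration of what "processing units connected in a weighted manner" means, the sketch below implements a single weighted unit in Python. The function name `weighted_unit`, the logistic activation, and the example values are illustrative assumptions, not the neuron model actually used by the C2 simulator described in [1].

```python
import math

def weighted_unit(inputs, weights, bias=0.0):
    """One processing unit: a weighted sum of inputs passed through a
    logistic activation. Purely illustrative; real neuromorphic
    simulators such as C2 use spiking neuron models instead."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # logistic squashing to (0, 1)

# Example: three inputs with hand-picked, illustrative weights.
print(weighted_unit([0.5, 1.0, 0.2], [0.8, -0.4, 1.5]))
```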
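The scale figures quoted above from [1] can be put in perspective with some back-of-envelope arithmetic. The sketch below is a rough estimate only, not a statement about how C2 actually allocates memory or work; it assumes decimal terabytes and simply divides the published totals.

```python
# Back-of-envelope ratios derived from the numbers quoted from [1].
neurons = 0.9e9             # model neurons
synapses = 0.9e13           # model synapses
main_memory_bytes = 144e12  # 144 TB of main memory on Dawn (decimal TB assumed)
cpus = 147_456              # Blue Gene/P cores

print(f"synapses per neuron:       {synapses / neurons:,.0f}")           # ~10,000
print(f"bytes of memory / synapse: {main_memory_bytes / synapses:.1f}")  # ~16 bytes
print(f"neurons per CPU core:      {neurons / cpus:,.0f}")               # ~6,100
```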
Despite Moore's law and the enormous progress in computational analysis and memory storage, nothing has been achieved in terms of awareness and self-sufficiency. Similarly, a larger space is required for algorithms that are more complex and sensitive, like the homunculus in the brain. In a living being with a nervous system, unlike in a computer, all of these algorithms work on input data, each with a certain "awareness". To feel, to suffer, to take pleasure, and to know what you think require extra information (awareness). This information (quale), which is so different from what we perceive, could be due to some extra data supplied by neurons at a micro level. It is not possible to observe "awareness" from within the system using these brain-made algorithms in the three spatial and one time dimensions. While the quale is formed in that extra dimension, plasticity and computation are carried out in the sub-dimensions. Consciousness develops its algorithm through the connections between neurons after receiving inputs, connections that can turn back onto and cover themselves, supplied by the extra dimension.

# I. Introduction

Artificial intelligence researchers have tried to mimic the activities carried out by the brain in terms of their functional aspects. Alan Turing claimed that if a machine, substituted for a human, can convince the interrogator that it is human, then one can conclude that the machine is thinking. Can a machine that has knowledge of all things have human awareness? Sensation and awareness, which are experienced in an extra dimension with different laws, would also supply the autonomy of the mind.

# II. Conclusion

The fact that artificial intelligence simulations have no sensation and awareness is not a matter of the number of transistors or the network, but of the extra dimension where the "quale" is formed.

# References

1. R. Ananthanarayanan, S. K. Esser, H. D. Simon, and D. S. Modha, "The Cat is Out of the Bag: Cortical Simulations with 10⁹ Neurons, 10¹³ Synapses," Proceedings of SC '09, USA, November 14-20, 2009.
2. F. Brentano, "The Distinction between Mental and Physical Phenomena" (1874), in Philosophy of Mind: Classical and Contemporary Readings, D. Chalmers (Ed.), Oxford University Press, 2002.