# Introduction

Author: University of California, Los Angeles, 405 Hilgard Ave. e-mail: markburg@cs.ucla.edu

For a long time, Turing machines dominated theoretical computer science because researchers assumed that they were absolute and all-encompassing models of computers and computations. Although Turing machines are functionally equivalent to many other models of computers and computations, such as partial recursive functions, cellular automata or random access machines (RAM), which are called recursive algorithms or recursive automata, computer scientists and mathematicians have mostly used Turing machines for the theoretical exploration of computational problems. However, with the discovery of super-recursive algorithms and the exploration of unconventional computations, more powerful models than Turing machines came to the forefront of computer science (Burgin, 2005; Burgin and Dodig-Crnkovic, 2012). One of the most natural extensions of conventional algorithmic models is the inductive Turing machine, an innovative model of computations, algorithms and information processing systems that is more powerful than the Turing machine. Inductive Turing machines can solve problems unmanageable by Turing machines, providing means for decreasing the complexity of computations and decision-making (Burgin, 2005). Consequently, in comparison with Turing machines and other recursive algorithms, inductive Turing machines represent the next step in the development of computer science as well as in the advancement of network and computational technology. In addition, inductive Turing machines supply more adequate models of computations, algorithms, networks, and information processing systems than recursive algorithms and automata. As a result, inductive Turing machines have found diverse applications in algorithmic information theory and complexity studies (Burgin, 2004; 2010), software testing (Burgin, Debnath and Lee, 2009), high performance computing (Burgin, 1999), machine learning (Burgin and Klinger, 2004), software engineering (Burgin and Debnath, 2004; 2005), computer networks (Burgin, 2006; Burgin and Gupta, 2012) and evolutionary computations (Burgin and Eberbach, 2009; 2009a). For instance, inductive Turing machines can perform all types of machine learning: TxtEx-learning, TxtFin-learning, TxtBC-learning, and TxtEx*-learning (Beros, 2013). While the traditional approach to machine learning models learning processes using functions, e.g., limit partial recursive functions (Gold, 1967), inductive Turing machines are automata, which can compute values of the modeling functions and perform other useful operations, while functions only describe such operations. Inductive Turing machines also provide efficient tools for algorithmic information theory, which is one of the indispensable areas of information theory and is based on the complexity of algorithms and automata (Chaitin, 1977; Burgin, 2010). There are different kinds and types of complexity with a diversity of complexity measures.

One of the most popular and important of them is Kolmogorov complexity, also called algorithmic complexity, which has turned into an important and popular tool in many areas such as information theory, computer science, software development, probability theory, and statistics. Algorithmic complexity has found applications in medicine, biology, neurophysiology, physics, economics, and hardware and software engineering. In biology, algorithmic complexity is used for protein identification (Dewey, 1996; 1997). In physics, problems of quantum gravity are analyzed based on the algorithmic complexity of a given object. In particular, the algorithmic complexity of the Schwarzschild black hole has been estimated (Dzhunushaliev, 1998; Dzhunushaliev and Singleton, 2001).
Benci, et al (2002) apply algorithmic complexity to chaotic dynamics. Zurek elaborates a formulation of thermodynamics by including algorithmic complexity and randomness in the definition of physical entropy (Zurek, 1991). Gurzadyan (2003) uses Kolmogorov complexity as a descriptor of the Cosmic Microwave Background (CMB) radiation maps. Kreinovich and Kunin (2004) apply Kolmogorov complexity to problems in classical mechanics, while Yurtsever (2000) employs Kolmogorov complexity in quantum mechanics. Tegmark (1996) discusses what the algorithmic complexity of the whole universe can be. The main problem with this discussion is that the author identifies the physical universe with physical models of this universe. To get valid results on this issue, it is necessary to define algorithmic complexity for physical systems because conventional algorithmic complexity is defined only for such symbolic objects as words and texts (Li and Vitanyi, 1997). Then it is necessary to show that there is a good correlation between the algorithmic complexity of the universe and the algorithmic complexity of its model used by Tegmark (1996). In economics, a new approach to understanding the complex behavior of financial markets using algorithmic complexity has been developed (Mansilla, 2001). In neurophysiology, algorithmic complexity is used to measure characteristics of brain functions (Shaw, et al, 1999). Algorithmic complexity has been useful in the development of software metrics and other problems of software engineering (Burgin and Debnath, 2003; Lewis, 2001). Crosby and Wallach (2003) use algorithmic complexity to study low-bandwidth denial of service attacks that exploit algorithmic deficiencies in many common applications' data structures.
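The contrast that these applications exploit, between regular objects with short descriptions and irregular objects without them, can be illustrated with compression. Compressed length is a computable upper bound on the algorithmic complexity of a string, since the compressed form together with a fixed decompressor restores the original exactly; Kolmogorov complexity itself is not computable, so this is only a bound. The snippet below is an illustrative sketch, not part of the constructions in this paper.

```python
import os
import zlib

def description_length(text: bytes) -> int:
    # Length of a zlib-compressed description of `text`: a computable
    # upper bound (up to an additive constant for the decompressor)
    # on the algorithmic complexity of `text`.
    return len(zlib.compress(text, 9))

regular = b"ab" * 5000           # a highly regular word of 10000 symbols
random_like = os.urandom(10000)  # incompressible with high probability

# A regular word admits a description far shorter than itself, while
# random data does not compress, mirroring low vs. high complexity.
```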
Thus, we see that Kolmogorov/algorithmic complexity is a frequent term in present-day scientific literature, in various fields and with diverse meanings, appearing in some contexts as the precise concept of algorithmic complexity, while being a vague idea of complexity in general in other texts. The reason for this is that people study and create more and more complex systems. Algorithmic complexity in its classical form gives an estimate of how many bits of information we need to build or restore a given text by algorithms from a given class. This forms the foundation of algorithmic information theory (Chaitin, 1977; Burgin, 2010). Conventional Kolmogorov, or recursive algorithmic, complexity and its modifications, such as uniform complexity, prefix complexity, monotone complexity, process complexity, conditional Kolmogorov complexity, quantum Kolmogorov complexity, time-bounded Kolmogorov complexity, space-bounded Kolmogorov complexity, conditional resource-bounded Kolmogorov complexity, time-bounded prefix complexity, and resource-bounded Kolmogorov complexity, use conventional, i.e., recursive, algorithms, such as Turing machines. Inductive complexity studied in this paper is a special type of the generalized Kolmogorov complexity (Burgin, 1990), which is based on inductive Turing machines. It is possible to apply inductive algorithmic complexity in all cases where Kolmogorov complexity is used and even in situations where Kolmogorov complexity is not defined. In particular, inductive algorithmic complexity has been used in the study of the complexity of mathematical problems (Calude, et al, 2012; Hertel, 2012; Burgin, et al, 2013), as well as for the exploration of other problems (Burgin, 2010a). The goal of this work is to find properties of inductively computable and inductively decidable sets and functions, applying these properties to inductive algorithmic complexity. This paper has the following structure.
In Section 2, we give definitions of simple inductive Turing machines, which can compute much more than Turing machines. In Section 3, we study relations between inductively computable sets, inductively recognizable sets, inductively decidable sets, and inductively computable functions. In Section 4, we use the obtained relations to advance inductive algorithmic complexity and algorithmic information theory. Utilization of inductive algorithmic complexity makes these relations more exact because, for infinitely many objects, inductive algorithmic complexity is essentially smaller than Kolmogorov complexity (Burgin, 2004). Section 5 contains conclusions and directions for further research.

# II. Simple Inductive Turing Machines as a Computational Model

Here we consider only simple inductive Turing machines (Burgin, 2005) and for simplicity call them inductive Turing machines, although there are other kinds of inductive Turing machines. A simple inductive Turing machine M works with words in some alphabet and has the same structure and functioning rules as a Turing machine with three heads and three linear tapes (registers): the input tape (register), the output tape (register) and the working tape (register). Any inductive Turing machine of the first order is functionally equivalent to a simple inductive Turing machine. Inductive Turing machines of higher orders are more powerful than simple inductive Turing machines, allowing computation of more functions and sets. The machine M works in the following fashion. At the beginning, an input word w is written on the input tape, which is a read-only tape. Then the machine M rewrites the word w from the input tape to the working tape and starts working with it. From time to time in this process, the machine M rewrites the word from the working tape to the output tape, erasing what was previously written on the output tape.
In particular, when the machine M comes to a final state, it rewrites the word from the working tape to the output tape and stops without changing the state. The machine M gives the result when M halts in a final state, or when M never stops but at some step of the computation, the content of the output tape (register) stops changing. The computed result of M is the word that is written on the output tape of M. In all other cases, M does not give the result. This means that a simple inductive Turing machine can do what a Turing machine can do, but in some cases, it produces its results without stopping. Namely, it is possible that in the sequence of computations, after some step, the word (say, w) on the output tape (in the output register) is not changing, while the inductive Turing machine continues working. Then this word w is the final result of the inductive Turing machine. Note that if an inductive Turing machine gives the final result, it is produced after a finite number of steps, that is, in finite time, even when the machine does not stop. So, contrary to the confusing claims of some researchers, an inductive Turing machine does not need infinite time to produce a result. We assume that inductive Turing machines work with finite words in some alphabet Σ or with natural numbers represented by such words. Consequently, inductive Turing machines compute sets X of finite words in Σ, i.e., X ⊆ Σ* where Σ* is the set of all finite words in the alphabet Σ, or sets of natural numbers Z ⊆ N represented by words. As it is possible to code any alphabet by words in the alphabet {0, 1}, we can assume (when necessary) that this binary alphabet is used by all considered inductive Turing machines. If an inductive Turing machine M transforms words from Σ* into words from Σ*, then Σ* is called the domain and codomain of M. If an inductive Turing machine M transforms numbers from N into numbers from N, then N is called the domain and codomain of M.
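This mode of producing results, output stabilization rather than halting, can be modeled in a few lines. The sketch below is only illustrative: an inductive computation is represented as a stream of intermediate outputs, and the result is the value on which the stream eventually stops changing; the finite step budget exists only to make the demonstration terminate.

```python
from itertools import islice

def stabilized_value(outputs, budget):
    # Watch a stream of intermediate outputs for `budget` steps and
    # return the last value seen together with the number of steps it
    # has remained unchanged.  A true inductive Turing machine needs
    # no such budget; it is only here so the demo itself halts.
    last, unchanged = None, 0
    for value in islice(outputs, budget):
        if value == last:
            unchanged += 1
        else:
            last, unchanged = value, 0
    return last, unchanged

def limit_search(property_holds):
    # Inductive-style computation: output 0 until a witness n with
    # property_holds(n) appears, then output 1 forever.  The output
    # stabilizes (on 1 or on 0) even though the machine never stops.
    verdict, n = 0, 0
    while True:
        if property_holds(n):
            verdict = 1
        yield verdict
        n += 1

# The output stabilizes on 1 once the witness n = 7 is found.
value, _ = stabilized_value(limit_search(lambda n: n * n == 49), 1000)
```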
The set of words (numbers) for which the machine M is defined (gives the result) is called the definability domain of M. The set of words (numbers) computed (generated) by the machine M is called the range of M. Informally, an inductively computable set consists of all final results of some inductive Turing machine M.

# III. Inductively Computable and Inductively Decidable Sets

Definition 3.1. A set X ⊆ Σ* or X ⊆ N is called inductively computable if it is the range of some inductive Turing machine.

The sets Σ* and N are simple examples of inductively computable sets. Recall that a recursively computable set, which is also called a recursively enumerable set, is the range of some Turing machine or of another recursive algorithm (Burgin, 2005). Inductively computable sets are closely related to inductively computable functions, which have the form f: Σ* → Σ* or g: N → N.

Definition 3.2. A function f is called inductively computable if there is an inductive Turing machine M that computes f, i.e., given an arbitrary input x, the machine M computes the value f(x) when f is defined for x and does not give the result when f is undefined for x.

The domain, codomain, definability domain and range of an inductively computable function coincide with the domain, codomain, definability domain and range, respectively, of the inductive Turing machine that computes this function. Recall that a recursively computable function, which is also called a partial recursive function, is a function computed by some Turing machine or by another recursive algorithm (Burgin, 2005). As it is possible to simulate any Turing machine by an inductive Turing machine (Burgin, 2005), we have the following result.

Proposition 3.1. Any recursively computable function is inductively computable.

Definition 3.3. a) A set X ⊆ Σ* or X ⊆ N is called inductively recognizable, also called inductively semidecidable, if there is an inductive Turing machine M that gives the result 1 for input x if and only if x belongs to X. b) A set X ⊆ Σ* or X ⊆
N is called inductively corecognizable if there is an inductive Turing machine M that gives the result 1 for input x if and only if x does not belong to X.

Definition 3.4. A set X ⊆ Σ* or X ⊆ N is called inductively decidable if there is an inductive Turing machine M that gives the result 1 for any input x from X and gives the result 0 for any input z from Σ*\X (from N\X).

Informally, a set X is inductively decidable if some inductive Turing machine M can indicate whether an arbitrary element belongs to X or does not belong. In other words, a set X is inductively decidable if its indicator (characteristic) function is inductively computable.

Lemma 3.1. A set X ⊆ Σ* or X ⊆ N is inductively recognizable if and only if it is inductively computable.

Proof. Sufficiency. Let us consider an inductively computable set X. By definition, there is an inductive Turing machine M X the range of which is equal to X. It is possible to assume that the machine M X gives the result if and only if its output stabilizes. To show that X is inductively recognizable, we add a new component (subprogram) C to the machine M X , building the new inductive Turing machine N X . After each step of the machine M X (as a subprogram of the machine N X ), the subprogram C checks whether the two consecutive intermediate outputs of M X are equal. When they are equal, the machine N X gives the intermediate output 1 and then the machine M X (as a subprogram of the machine N X ) makes the next step. When the two consecutive intermediate outputs of M X are not equal, the machine N X gives the intermediate output 1, followed by the intermediate output 0, and then the machine M X (as a subprogram of the machine N X ) makes the next step. This construction shows that the output of N X stabilizes if and only if the output of M X stabilizes. It means that the inductive Turing machine N X gives the result 1 for any input x from X and does not give the result otherwise.
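The subprogram C of this construction admits a compact sketch. Modeling intermediate outputs as a stream (an illustrative assumption, not the paper's formalism), the recognizer's output stream stabilizes on 1 exactly when the stream of M X stabilizes:

```python
def recognizer_stream(m_outputs):
    # Sketch of N_X from Lemma 3.1: after each intermediate output of
    # M_X, emit 1 if it repeats the previous output, and emit 1 then 0
    # if it changed.  The emitted stream stabilizes (on 1) iff the
    # stream of M_X stabilizes.
    previous = object()          # sentinel distinct from any output
    for out in m_outputs:
        if out == previous:
            yield 1              # consecutive outputs equal
        else:
            yield 1              # consecutive outputs differ:
            yield 0              # ...spoil any pending stabilization
        previous = out

# M_X stabilizing on "w" makes the recognizer stream end in 1, 1, 1, ...
tail = list(recognizer_stream(["u", "w", "w", "w", "w"]))[-3:]
```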
Thus, the set X is inductively recognizable.

Necessity. Let us consider an inductively recognizable set X. By definition, there is an inductive Turing machine K X that gives the result 1 for any input x from X and otherwise either does not give the result or gives the result 0. It is possible to assume that the machine K X gives the result if and only if its output stabilizes and all intermediate outputs of K X are equal either to 1 or to 0. To show that X is inductively computable, we transform the machine K X , building the new inductive Turing machine N X . At the beginning, N X stores the input x. Then, when the machine K X gives the intermediate output 1, the machine N X gives the intermediate output x. When the machine K X gives the intermediate output 0 for the first time, the machine N X gives the intermediate output w, which is not equal to x. When the machine K X gives the intermediate output 0 the next time, the machine N X gives the intermediate output x. The time after that, when the machine K X gives the intermediate output 0, the machine N X gives the intermediate output w, and so on. Thus, even if the machine K X obtains the result 0, the machine N X does not give the result. In such a way, the machine N X obtains the result if and only if the machine K X obtains the result 1. In addition, all results of N X belong to the set X and only to it because K X computes the indicator function of X. Thus, X is equal to the range of the function computed by N X and consequently, X is inductively computable. Lemma is proved.

Lemma 3.2. The range of a total monotone inductively computable function is inductively decidable.

Proof. Let us consider a total monotone inductively computable function f with the range X. Then there is an inductive Turing machine M that computes f. We build an inductive Turing machine K that gives 1 as its result for all inputs from X and gives 0 as its result for all inputs that do not belong to X. It means that K decides the set X.
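Such a machine K has a simple finite shadow, sketched below under two illustrative assumptions that are not the paper's: "monotone" is read as strictly increasing on the natural numbers (so f(k) >= k and only arguments up to x can yield x), and the values of f are available outright rather than in the limit, whereas the machine K of the proof simulates M on all candidate arguments in parallel.

```python
def decides_range(f, x):
    # Finite sketch of the machine K from Lemma 3.2, for a total,
    # strictly increasing f on the natural numbers: f(k) >= k, so the
    # search for an argument with f(k) == x is bounded by x itself.
    # Returns 1 if x is in the range of f and 0 otherwise.
    return 1 if any(f(k) == x for k in range(x + 1)) else 0

# With f(k) = 2k + 1 the range is the odd numbers:
# decides_range(lambda k: 2 * k + 1, 7) == 1
# decides_range(lambda k: 2 * k + 1, 8) == 0
```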
To achieve this goal, we include the machine M as a part (in the form of a subroutine) of the machine K and define the functioning of K in the following way. When K obtains a word x as its input, the goal is to determine whether x belongs to the set X or does not belong. To do this, the machine K starts simulating the machine M in a parallel mode for all inputs x 1 , x 2 , … , x n that do not exceed x. It means that each step is repeated for all inputs x 1 , x 2 , … , x n , then the next step is also repeated for all inputs x 1 , x 2 , … , x n , and so on. On each step, the machine K compares the intermediate outputs of the machine M with the word x. When at least one of the intermediate outputs of the machine M for these inputs is equal to x, the machine K gives the intermediate output 1. When no intermediate output of the machine M for these inputs coincides with x, the machine K gives the intermediate output 0. As the inductive Turing machine M computes a total function, all intermediate outputs start repeating at some step of the computation of the machine M. That is why the word x belongs to X if, on this step, it coincides with one of the outputs. By construction, the machine K then continues to repeat the output 1 forever. If the word x does not coincide with any of the outputs of the machine M, then the word x does not belong to X because x can be the value of f only for the arguments x 1 , x 2 , … , x n , x, as f is a monotone function. In such a way, the machine K decides whether an arbitrary word belongs to X or does not belong. Lemma is proved.

These lemmas allow us to prove the existence of definite relations between inductively computable sets and inductively decidable sets.

Theorem 3.1. Any infinite inductively computable set contains an infinite inductively decidable subset.

Proof. Let us consider an infinite inductively computable set X. By Lemma 3.1, the set X is inductively recognizable.
It means that there is an inductive Turing machine K X that gives the result 1 for any input x from X and otherwise either does not give the result or gives the result 0. It is possible to assume that the machine K X gives the result if and only if its output stabilizes and all intermediate outputs of K X are equal either to 1 or to 0 (Burgin, 2005). As we know, there is the natural order in the set N and the lexicographical order in the set Σ* (cf., for example, (Burgin, 2005)). It means that the domain of any inductive Turing machine is the ordered sequence { x 1 , x 2 , x 3 , … , x n , …} where x k < x k + 1 for all k = 1, 2, 3, … . To find an inductively decidable subset in the set X, we extend the alphabet Σ by adding a new symbol # and build a new inductive Turing machine M, which contains the machine K X as a component (subroutine), a component (subroutine) G, which generates the words x 1 , x 2 , x 3 , … , x n , … one after another, and a counter C as another component (subroutine). The machine M processes information in cycles organized in the following way.

Cycle 1*1: When the machine M gets the word x 1 as its input, it gives x 1 to the machine K X , which starts processing it. At the same time, the counter C counts the number of steps made by K X . When the machine K X gives the intermediate output 1, the machine M gives the intermediate output x 1 , which is stored in the memory of M. When the machine K X gives the intermediate output 0, the machine M gives the intermediate output #, the machine K X stops processing x 1 , the number n 1 of steps made by K X is stored in the memory of M and the generator G generates the word x 2 .

Cycle 1*2: Then the machine M gives x 2 to the machine K X , which starts processing it. At the same time, the counter C counts the number of steps made by K X . When the machine K X makes less than n 1 steps, the machine M always gives the intermediate output x 2 . After n 1 steps, when the machine K X gives the intermediate output 1, the machine M gives the intermediate output x 2 .
When the machine K X gives the intermediate output 0, the machine M gives the intermediate output #, the machine K X stops processing x 2 , the number n 2 of steps made by K X is stored in the memory of M and the machine K X starts once more processing the word x 1 . At the same time, the counter C counts the number of steps made by K X .

Cycle 1*3: When the machine K X makes less than n 2 steps, the machine M always gives the intermediate output x 1 . After n 2 steps, when the machine K X gives the intermediate output 1, the machine M gives the intermediate output x 1 . When the machine K X gives the intermediate output 0, the machine M gives the intermediate output #, the machine K X stops processing x 1 , the number n 3 of steps made by K X is stored in the memory of M and the machine K X starts once more processing the word x 2 . At the same time, the counter C counts the number of steps made by K X .

Cycle 1*4: When the machine K X makes less than n 3 steps, the machine M always gives the intermediate output x 2 . After n 3 steps, when the machine K X gives the intermediate output 1, the machine M gives the intermediate output x 2 . When the machine K X gives the intermediate output 0, the machine M gives the intermediate output #, the machine K X stops processing x 2 , the number n 4 of steps made by K X is stored in the memory of M and the generator G generates the word x 3 .

Cycle 1*5: Then the machine M gives x 3 to the machine K X , which starts processing it. At the same time, the counter C counts the number of steps made by K X . When the machine K X makes less than n 4 steps, the machine M always gives the intermediate output x 3 . After n 4 steps, when the machine K X gives the intermediate output 1, the machine M gives the intermediate output x 3 .
When the machine K X gives the intermediate output 0, the machine M gives the intermediate output #, the machine K X stops processing x 3 , the number n 5 of steps made by K X is stored in the memory of M and the machine K X starts once more processing the word x 1 . At the same time, the counter C counts the number of steps made by K X . This process continues until it stabilizes, which happens because the definability domain of the machine K X is not empty. In such a way, the machine M makes the machine K X process more and more elements x n , making more and more steps with each of them as its input. As the definability domain of the machine K X is not empty, at some step m 1 , the machine K X continues forever repeating 1 as its output for an input x k . By construction, the machine M continues forever repeating x k as its output for the input x 1 . It means that M(x 1 ) = x k . Note that x 1 ≤ x k and x k may not be the least element in the definability domain X of the machine K X .

Given the word x 2 as its input, the machine M performs similar cycles as before but with pairs of words (x i , x j ).

Cycle 2*1: Thus, at the beginning, when the machine M gets the word x 2 as its input, it gives x 2 and the word x 3 generated by G to the machine K X , which starts processing both words in a parallel mode. At the same time, the counter C counts the number of steps made by K X . When the machine K X gives the intermediate output 1 for both inputs, the machine M gives the intermediate output x 3 , which is stored in the memory of M. When the machine K X gives the intermediate output 0 for the input x 2 before it gives the intermediate output 0 for the input x 3 , the machine M gives the intermediate output #, the machine K X stops processing the pair (x 2 , x 3 ), the number n 1 of steps made by K X is stored in the memory of M, the generator G generates the word x 4 and the machine M goes to the cycle 2*2.
When the machine K X gives the intermediate output 0 for the input x 3 at the same time as or before it gives the intermediate output 0 for the input x 2 , the machine M gives the intermediate output #, the machine K X stops processing the pair (x 2 , x 3 ), the number n 2 of steps made by K X is stored in the memory of M, the generator G generates the word x 4 and the machine M goes to the cycle 2*3.

Cycle 2*2: Then the machine M gives the pair (x 3 , x 4 ) to the machine K X , which starts processing it in a parallel mode. At the same time, the counter C counts the number of steps made by K X . When the machine K X makes less than n 1 steps, the machine M always gives the intermediate output x 4 . After n 1 steps, when the machine K X gives the intermediate output 1 for both inputs, the machine M gives the intermediate output x 4 . When the machine K X gives the intermediate output 0 for the input x 3 before it gives the intermediate output 0 for the input x 4 , the machine M gives the intermediate output #, the machine K X stops processing the pair (x 3 , x 4 ), the number n 3 of steps made by K X is stored in the memory of M and the machine M goes to the cycle 2*4. When the machine K X gives the intermediate output 0 for the input x 4 at the same time as or before it gives the intermediate output 0 for the input x 3 , the machine M gives the intermediate output #, the machine K X stops processing the pair (x 3 , x 4 ), the number n 4 of steps made by K X is stored in the memory of M, the generator G generates the word x 5 and the machine M goes to the cycle 2*5.

Cycle 2*3: Then the machine M gives the pair (x 2 , x 4 ) to the machine K X , which starts processing it in a parallel mode. At the same time, the counter C counts the number of steps made by K X . When the machine K X makes less than n 2 steps, the machine M always gives the intermediate output x 4 .
After n 2 steps, when the machine K X gives the intermediate output 1 for both inputs, the machine M gives the intermediate output x 4 . When the machine K X gives the intermediate output 0 for the input x 2 before it gives the intermediate output 0 for the input x 4 , the machine M gives the intermediate output #, the machine K X stops processing the pair (x 2 , x 4 ), the number n 5 of steps made by K X is stored in the memory of M and the machine M goes to the cycle 2*6. When the machine K X gives the intermediate output 0 for the input x 4 at the same time as or before it gives the intermediate output 0 for the input x 2 , the machine M gives the intermediate output #, the machine K X stops processing the pair (x 2 , x 4 ), the number n 6 of steps made by K X is stored in the memory of M, the generator G generates the word x 5 and the machine M goes to the cycle 2*7, and so on.

This process continues until it stabilizes, which happens because the definability domain of the machine K X is infinite. In such a way, the machine M makes the machine K X process more and more pairs (x i , x j ), functioning in a parallel mode and making more and more steps with each pair as its inputs. As in the case of the input x 1 , the machine M at first finds the word x k for which the machine K X continues forever repeating 1 as its output and then locates a word x n > x k for which the machine K X also continues forever repeating 1 as its output. The machine M can do this because the definability domain of the machine K X is infinite. When the machine M finds this word x n , it continues forever repeating x n as its output for the input x 2 . It means that M(x 2 ) = x n and x n > x k . Note that x 2 ≤ x n and x n may not be the least element in the definability domain X of the machine K X that is larger than x k .
Given the word x 3 as its input, the machine M performs similar cycles as before but with triples of words (x i , x t , x j ) as inputs to the machine K X , which processes them in a parallel mode. In this case, the machine M at first finds the word x k for which the machine K X continues forever repeating 1 as its output and then locates a word x n > x k for which the machine K X also continues forever repeating 1 as its output. After this, the machine M finds the word x p > x n for which the machine K X continues forever repeating 1 as its output. The machine M can do this because the definability domain of the machine K X is infinite. When the machine M finds this word x p , it continues forever repeating x p as its output for the input x 3 . It means that M(x 3 ) = x p and x p > x n > x k . Note that x 3 ≤ x p and x p may not be the least element in the definability domain X of the machine K X that is larger than x n . In such a way, the machine M finds results for any input x i , computing a total monotone function. By Lemma 3.2, the range Z of this function is inductively decidable and, by construction, it is infinite. Theorem is proved.

This result allows us to find additional properties of inductive algorithmic complexity (cf. Section 4). Let us consider the set R M = {(x, t); given the input x, the inductive Turing machine M gives the result in not more than t steps}, i.e., R M consists of all pairs (x, t), in which x is a word from {0, 1}* and t is a natural number.

Lemma 3.3. The set R M is inductively decidable.

Proof. We build an inductive Turing machine K that gives 1 as its result for all inputs from R M and gives 0 as its result for all inputs that do not belong to R M . It means that K decides the set R M . To achieve this goal, we include the machine M as a part (in the form of a subroutine) of the machine K and define the functioning of K in the following way. When K obtains a pair (x, t) as its input, it starts simulating the machine M for the input x.
When step number t is made, the machine K gives the intermediate output 1. Then the machine K makes one more step simulating the machine M for the input x and compares the new intermediate output of the machine M with its previous result. When these outputs coincide, the machine K gives the intermediate output 1. Otherwise, the machine K gives the final output 0 and stops. After each intermediate output 1, the machine K makes one more step simulating the machine M for the input x and compares the new intermediate output of the machine M with its previous result. When these outputs coincide, the machine K gives the intermediate output 1. As a result, the inductive Turing machine K gives 1 when the outputs of M start repeating from the step t and gives 0 as its result otherwise. In such a way, the machine K decides whether an arbitrary pair (x, t) belongs to R M or does not belong.

Now we find additional relations between inductively computable sets and inductively decidable sets. Taking a binary relation R ⊆ Σ* × Σ*, it is possible to consider two projections of this relation:

The left projection Pr l R = { x; ∃y ( (x, y) ∈ R )}

The right projection Pr r R = { y; ∃x ( (x, y) ∈ R )}

Theorem 3.2. A set X is inductively computable if and only if it is the left projection of an inductively decidable binary relation.

Proof. Necessity. Let us consider an inductively computable set X. By definition, there is an inductive Turing machine M, which computes X. Let us consider the set R M = {(x, t); given the input x, the inductive Turing machine M gives the result in not more than t steps}, i.e., R M consists of all pairs (x, t), in which x is a word from {0, 1}* and t is a natural number. By Lemma 3.3, the set R M is inductively decidable and Pr l R M = X because an element x is computed by M if and only if there is a number t such that, given the input x, the inductive Turing machine M gives the result in not more than t steps.
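The check performed by the machine K of Lemma 3.3, whether the intermediate outputs of M stop changing from step t on, can be sketched as follows. The sketch inspects a finite window of outputs, an illustrative assumption: the actual machine K answers in the limit, re-confirming stability with every further step.

```python
from itertools import islice

def stabilized_by(m_outputs, t, window):
    # Sketch of K from Lemma 3.3: return 1 if the intermediate outputs
    # of M remain constant from step t on (checked up to `window`
    # steps), i.e. (x, t) is in R_M, and 0 otherwise.
    outs = list(islice(m_outputs, window))
    if len(outs) < t:
        return 0                 # M has not even made t outputs
    tail = outs[t - 1:]          # outputs from step t onward (1-based)
    return 1 if all(o == tail[0] for o in tail) else 0

# Outputs 0, 0, 1, 1, 1, ...: the result has appeared by step 3 but
# not by step 2.
# stabilized_by(iter([0, 0, 1, 1, 1, 1]), 3, 6) == 1
# stabilized_by(iter([0, 0, 1, 1, 1, 1]), 2, 6) == 0
```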
Note that X = Pr_r R°_M, where R°_M = {(t, x); (x, t) ∈ R_M}, and thus R°_M is also inductively decidable.

Sufficiency. Let us consider an inductively decidable binary relation R ⊆ Σ* × Σ* and its left projection Pr_l R = {x; ∃y ((x, y) ∈ R)}, which we denote by X. By definition, there is an inductive Turing machine K_R that gives the result 1 for any input (x, y) from R and gives the result 0 for any input (z, u) that does not belong to R. To show that the set X is inductively computable, we extend the alphabet Σ by adding the new symbol # and build a new inductive Turing machine M, which computes X. The machine M contains the machine K_R as a component (subroutine), a component (subroutine) G, which generates all words x_1, x_2, x_3, …, x_n, … in the alphabet Σ one after another, and a counter C as another component (subroutine). The machine M processes information in cycles organized in the following way.

Cycle 1: When the machine M gets a word w as its input, the generator G produces the word x_1 and the machine M gives the pair (w, x_1) to the machine K_R, which starts processing it. At the same time, the counter C counts the number of steps made by K_R. When the machine K_R gives the intermediate output 1, the machine M gives the intermediate output w, which is stored in the memory of M. When the machine K_R gives the intermediate output 0, the machine M gives the intermediate output #, the machine K_R stops processing the pair (w, x_1), the number n_1 of steps made by K_R is stored in the memory of M, and the generator G generates the word x_2.

Cycle 2: Then the machine M gives the pair (w, x_2) to the machine K_R, which starts processing it. At the same time, the counter C counts the number of steps made by K_R. While the machine K_R has made fewer than n_1 steps, the machine M always gives the intermediate output w. After n_1 steps, when the machine K_R gives the intermediate output 1, the machine M gives the intermediate output w.
When the machine K_R gives the intermediate output 0, the machine M gives the intermediate output #, the machine K_R stops processing the pair (w, x_2), the number n_2 of steps made by K_R is stored in the memory of M, and the machine K_R starts processing the pair (w, x_1) once more. At the same time, the counter C counts the number of steps made by K_R.

Cycle 3: While the machine K_R has made fewer than n_2 steps, the machine M always gives the intermediate output w. After n_2 steps, when the machine K_R gives the intermediate output 1, the machine M gives the intermediate output w. When the machine K_R gives the intermediate output 0, the machine M gives the intermediate output #, the machine K_R stops processing the pair (w, x_1), the number n_3 of steps made by K_R is stored in the memory of M, and the machine K_R starts processing the pair (w, x_2) once more. At the same time, the counter C counts the number of steps made by K_R.

Cycle 4: While the machine K_R has made fewer than n_3 steps, the machine M always gives the intermediate output w. After n_3 steps, when the machine K_R gives the intermediate output 1, the machine M gives the intermediate output w. When the machine K_R gives the intermediate output 0, the machine M gives the intermediate output #, the machine K_R stops processing the pair (w, x_2), the number n_4 of steps made by K_R is stored in the memory of M, and the generator G generates the word x_3.

Cycle 5: Then the machine M gives the pair (w, x_3) to the machine K_R, which starts processing it. At the same time, the counter C counts the number of steps made by K_R. While the machine K_R has made fewer than n_4 steps, the machine M always gives the intermediate output w. After n_4 steps, when the machine K_R gives the intermediate output 1, the machine M gives the intermediate output w.
When the machine K_R gives the intermediate output 0, the machine M gives the intermediate output #, the machine K_R stops processing the pair (w, x_3), the number n_5 of steps made by K_R is stored in the memory of M, and the machine K_R starts processing the pair (w, x_1) once more; this process continues while the counter C counts the number of steps made by K_R. This process stabilizes if and only if the machine K_R stabilizes while processing a pair (w, x) for some x. If this happens, the machine M computes the word w, and in this case w ∈ X. Otherwise, w does not belong to the range of M, and in this case w also does not belong to X. As w is an arbitrary word, this means that the machine M computes the set X.

# IV. Inductive Algorithmic Complexity

Here we study inductive algorithmic complexity for finite objects such as natural numbers or words in a finite alphabet; usually it is the binary alphabet {0, 1}. Note that if M is a Turing machine, then the algorithmic complexity AC_M(x) with respect to M coincides with the Kolmogorov complexity C_M(x) with respect to M. If M is a prefix Turing machine, then the algorithmic complexity IC_M(x) is the prefix Kolmogorov complexity K_M(x). However, as in the case of conventional Kolmogorov complexity, we need an invariant complexity of objects. This is achieved by using a universal simple inductive Turing machine (Burgin, 2004; 2005). Note that inductive complexity is a special case of generalized Kolmogorov complexity (Burgin, 1990), which, in turn, is a kind of axiomatic dual complexity measure (Burgin, 2005). The prefix inductive complexity IK(x) is optimal in the class of prefix inductive complexities IK_T(x). Optimality is based on the relation ⪯ defined for functions f(n) and g(n) that take values in natural numbers:

f(n) ⪯ g(n) if there is a real number c such that f(n) ≤ g(n) + c for almost all n ∈ N.

Let us consider a class H of functions that take values in natural numbers.
Then a function f(n) is called optimal for H if f(n) ⪯ g(n) for any function g(n) from H. In the context of the axiomatic theory of dual complexities, such a function f(n) is called additively optimal for the class H. Results from the axiomatic theory of dual complexities (Burgin, 1990; 2010) imply the following theorem.

Theorem 4.1. The function IC(x) is optimal in the class of all prefix inductive complexities IK_T(x) with respect to a prefix simple inductive Turing machine T.

As there is a simple inductive Turing machine M such that M(x) = x for all words x in the alphabet {1, 0}, we have the following result.

Proposition 4.1. IC(x) is a total function.

Let us assume for simplicity that inductive Turing machines work with words in some finite alphabet and that all these words are well ordered, that is, any set of words contains a least element. It is possible to find such orderings, for example, in (Li and Vitanyi, 1997).

Theorem 4.1. If h is an increasing inductively computable function that is defined on an infinite inductively computable set W and tends to infinity when l(x) → ∞, then for infinitely many elements x from W, we have h(x) > IC(x).

Proof. Let us consider an increasing inductively computable function f that is defined on an infinite inductively computable set W and tends to infinity when l(x) → ∞. Then, by Theorem X1, W contains an infinite inductively decidable subset V. Because the set V is infinite, the restriction h of the function f to the set V tends to infinity when l(x) → ∞. By Theorem 5.3.12 from (Burgin, 2005), for infinitely many elements x from V, we have h(x) > IC(x). As V is a subset of W, for infinitely many elements x from W, we have h(x) > IC(x). The theorem is proved.

Since the composition of two increasing functions is an increasing function, and the composition of a recursive function and an inductively computable function is an inductively computable function, we have the following result.

Corollary 4.1.
If h(x) and g(x) are increasing functions, h(x) is inductively computable and defined on an infinite inductively computable set W, g(x) is a recursive function, and both tend to infinity when l(x) → ∞, then for infinitely many elements x from W, we have g(h(x)) > IC(x).

Corollary 4.2. The function IC(x) is not inductively computable. Moreover, no inductively computable function f(x) defined on an infinite inductively computable set of numbers can coincide with IC(x) on the whole of its domain of definition.

As Kolmogorov complexity C(x) is inductively computable (Burgin, 2005), Theorem X3 implies the following result.

Theorem 4.2. For any increasing recursive function h(x) that tends to infinity when l(x) → ∞ and any inductively computable set W, there are infinitely many elements x from W for which h(C(x)) > IC(x).

Corollary 4.3. In any inductively computable set W, there are infinitely many elements x for which C(x) > IC(x).

Corollary 4.4. For any natural number a, in any inductively computable set W there are infinitely many elements x for which log_a(C(x)) > IC(x).

Corollary 4.5. In any inductively computable set W, there are infinitely many elements x for which log_2(C(x)) > IC(x).

If log_2(C(x)) > IC(x), then C(x) > 2^IC(x). At the same time, for any natural number k, the inequality 2^n > k⋅n is true almost everywhere. This and Corollary X7 imply the following result.

Corollary 4.6. For any natural number k, in any inductively computable set W there are infinitely many elements x for which C(x) > k⋅IC(x).

Corollary 4.7. In any inductively computable set W, there are infinitely many elements x for which C(x) > 2^IC(x).

Corollary 4.8. For any natural number a, in any inductively computable set W there are infinitely many elements x for which C(x) > a^IC(x).
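Both the ordering f(n) ⪯ g(n) used in this section and the inequality 2^n > k⋅n are "almost everywhere" claims. The following sketch makes them concrete on finite ranges; the function names are our own illustrations, and a finite test can of course only sample, not prove, a statement about all but finitely many n.

```python
# Finite-range illustrations of the two "almost everywhere" claims above.

def least_additive_constant(f, g, n_max):
    """Smallest c >= 0 with f(n) <= g(n) + c for all n in [0, n_max),
    modeling the relation f(n) ⪯ g(n) on a finite sample."""
    return max(0, max(f(n) - g(n) for n in range(n_max)))

def threshold(k, n_max=100):
    """Least N (searched up to n_max) with 2**n > k*n for all n >= N,
    illustrating that 2**n > k*n holds almost everywhere."""
    last_failure = -1
    for n in range(n_max):
        if 2 ** n <= k * n:
            last_failure = n
    return last_failure + 1
```

For example, least_additive_constant(lambda n: n + 3, lambda n: n, 100) yields the constant 3, so n + 3 ⪯ n, and threshold(4) shows that 2^n > 4⋅n from n = 5 on, which is the kind of fact used to pass from Corollary 4.5 to Corollary 4.6.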
In addition, it is possible to apply the obtained results to the inductive algorithmic complexity of inductively computable functions, which are infinite objects but have a finite representation when they are enumerated.

# V. Conclusion

We have found some basic properties of inductively computable, recognizable, and decidable sets, as well as of inductively computable functions, where computation, recognition, and decision are performed by simple inductive Turing machines. These results show that inductive Turing machines form a natural extension of Turing machines, allowing an essential increase in the power of computations and decision-making. We also applied the obtained results to algorithmic information theory, demonstrating how inductive Turing machines allow obtaining more information while essentially decreasing complexity in comparison with Turing machines. The results obtained in this paper extend and improve similar results from (Burgin, 2004; 2005). At the same time, simple inductive Turing machines form only the first level of the constructive hierarchy of inductive Turing machines (Burgin, 2005). Thus, it would be interesting to study similar properties arising at the higher levels of the constructive hierarchy. Besides, it would be useful to consider these problems in the axiomatic theory of algorithms (Burgin, 2010b).

Definition 3.1. A set X ⊆ Σ* or X ⊆ N is called inductively computable if there is an inductive Turing machine M with the range X.

© 2016 Global Journals Inc. (US)

# References

Benci, V., Bonanno, C., Galatolo, S., Menconi, G. and Virgilio, M. (2002) Dynamical Systems and Computable Information, Preprint in Physics cond-mat/0210654 (electronic edition).
Beros, A. A. (2013) Learning Theory in the Arithmetical Hierarchy, Preprint in Mathematics math.LO/1302.7069 (electronic edition).
Burgin, M. S. (1990) Generalized Kolmogorov Complexity and other Dual Complexity Measures, Cybernetics and System Analysis, v. 26.
Burgin, M. (1999) Super-recursive Algorithms as a Tool for High Performance Computing, Proceedings of the High Performance Computing Symposium, San Diego.
Burgin, M. (2004) Algorithmic Complexity of Recursive and Inductive Algorithms, Theoretical Computer Science, no. 1/3.
Burgin, M. (2005) Super-recursive Algorithms, Springer, New York/Heidelberg/Berlin.
Burgin, M. (2006) Algorithmic Control in Concurrent Computations, Proceedings of the 2006 International Conference on Foundations of Computer Science, CSREA Press, Las Vegas, June 2006.
Burgin, M. (2010) Theory of Information: Fundamentality, Diversity and Unification, World Scientific, New York/London/Singapore.
Burgin, M. (2010a) Algorithmic Complexity of Computational Problems, International Journal of Computing & Information Technology, v. 2.
Burgin, M. (2010b) Measuring Power of Algorithms, Computer Programs, and Information Automata, Nova Science Publishers, New York.
Burgin, M., Calude, C. S. and Calude, E. (2013) Inductive Complexity Measures for Mathematical Problems, International Journal of Foundations of Computer Science, v. 24, no. 4.
Burgin, M. and Debnath, N. C. (2003) Complexity of Algorithms and Software Metrics, Proceedings of the ISCA 18th International Conference "Computers and their Applications", International Society for Computers and their Applications, Honolulu, Hawaii.
Burgin, M. and Debnath, N. (2004) Measuring Software Maintenance, Proceedings of the ISCA 19th International Conference "Computers and their Applications", ISCA, Seattle, Washington.
Burgin, M. and Debnath, N. (2005) Journal for Computational Methods in Science and Engineering, v. 5, no. 1, Supplement.
Burgin, M. and Debnath, N. (2009) Super-Recursive Algorithms in Testing Distributed Systems, Proceedings of the ISCA 24th International Conference "Computers and their Applications", ISCA, New Orleans, Louisiana, April 2009.
Burgin, M., Debnath, N. and Lee, H. K. (2009) Measuring Testing as a Distributed Component of the Software Life Cycle, Journal for Computational Methods in Science and Engineering, v. 9, no. 1.
Burgin, M. and Dodig-Crnkovic, G. (2012) From the Closed Universe to an Open World, Proceedings of the Symposium on Natural Computing/Unconventional Computing and its Philosophical Significance, AISB/IACAP World Congress, Birmingham, UK, July 2-6, 2012.
Burgin, M. and Eberbach, E. (2009) Universality for Turing Machines, Inductive Turing Machines and Evolutionary Algorithms, Fundamenta Informaticae, v. 91.
Burgin, M. and Eberbach, E. (2009a) On Foundations of Evolutionary Computation: An Evolutionary Automata Approach, in Handbook of Research on Artificial Immune Systems and Natural Computing: Applying Complex Adaptive Technologies (Hongwei Mo, Ed.), IGI Global, Hershey, Pennsylvania.
Burgin, M. and Eberbach, E. (2012) Evolutionary Automata: Expressiveness and Convergence of Evolutionary Computation, Computer Journal, v. 55, no. 9.
Burgin, M. and Gupta, B. Second-level Algorithms, Super-recursivity, and Recovery Problem in
Yurtsever, U. (2000) Quantum Mechanics and Algorithmic Randomness, Complexity, v. 6, no. 1.
Zurek, W. H. (1991) Algorithmic information content, Church-Turing thesis, physical entropy, and Maxwell's demon, in Information Dynamics, Adv. Sci. Inst. Ser. B Phys., v. 256, Plenum.