LEARNING CORRECTION GRAMMARS / Carlucci, Lorenzo; Case, John; Jain, Sanjay. - In: THE JOURNAL OF SYMBOLIC LOGIC. - ISSN 0022-4812. - 74:2(2009), pp. 489-516. [10.2178/jsl/1243948324]
LEARNING CORRECTION GRAMMARS
CARLUCCI, LORENZO;
2009
Abstract
We investigate a new paradigm in the context of learning in the limit, namely, learning correction grammars for classes of computably enumerable (c.e.) languages. Knowing a language may feature a representation of it in terms of two grammars, where the second grammar is used to make corrections to the first grammar. Such a pair of grammars can be seen as a single description of (or grammar for) the language. We call such grammars correction grammars. Correction grammars capture the observable fact that people do correct their linguistic utterances during their usual linguistic activities. We show that learning correction grammars for classes of c.e. languages in the TxtEx-model (i.e., converging to a single correct correction grammar in the limit) is sometimes more powerful than learning ordinary grammars even in the TxtBc-model (where the learner is allowed to converge to infinitely many syntactically distinct but correct conjectures in the limit). For each n >= 0, there is a similar learning advantage, again in learning correction grammars for classes of c.e. languages, but where we compare learning correction grammars that make n + 1 corrections to those that make n corrections. The concept of a correction grammar can be extended into the constructive transfinite, using the idea of counting down from notations for transfinite constructive ordinals. This transfinite extension can also be conceptualized as being about learning Ershov-descriptions for c.e. languages. For u a notation in Kleene's general system (O, <_o) of ordinal notations for constructive ordinals, we introduce the concept of a u-correction grammar, where u is used to bound the number of corrections that the grammar is allowed to make. We prove a general hierarchy result: if u and v are notations for constructive ordinals such that u <_o v, then there are classes of c.e. languages that can be TxtEx-learned by conjecturing v-correction grammars but not by conjecturing u-correction grammars. Surprisingly, we show that, above "omega-many" corrections, it is not possible to strengthen the hierarchy: TxtEx-learning u-correction grammars of classes of c.e. languages, where u is a notation in O for any ordinal, can be simulated by TxtBc-learning w-correction grammars, where w is any notation for the smallest infinite ordinal omega.
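For readers unfamiliar with the terminology, the following LaTeX sketch formalizes the central notions of the abstract along standard inductive-inference conventions; the symbols W_i, L, T, and M reflect common usage in the field and are our notational assumptions, not quotations from the paper.

\begin{itemize}
  \item A correction grammar for a c.e. language $L$ is (a code for) a pair $(p,q)$ of
        ordinary grammars with $L = W_p \setminus W_q$, where $W_i$ denotes the $i$-th
        c.e. set: the second grammar $q$ supplies the corrections removed from the
        enumeration given by $p$.
  \item TxtEx-learning with correction grammars: on every text $T$ for $L$ (an
        enumeration of exactly the elements of $L$), the learner $M$'s conjectures
        converge to one fixed pair $(p,q)$ with $L = W_p \setminus W_q$.
  \item TxtBc-learning only requires that all but finitely many of $M$'s conjectures
        $(p,q)$ satisfy $L = W_p \setminus W_q$; the conjectures may keep changing
        syntactically.
  \item For a notation $u \in O$, a $u$-correction grammar generalizes the pair to an
        Ershov-style description in which the number of corrections made at each
        argument is bounded by counting down from $u$ along $<_o$.
\end{itemize}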