
Replacing limit learners with equally powerful one-shot query learners

Steffen Lange1 and Sandra Zilles2

1 Fachhochschule Darmstadt, FB Informatik, Haardtring 100, 64295 Darmstadt, Germany
2 Technische Universität Kaiserslautern, FB Informatik, Postfach 3049, 67653 Kaiserslautern, Germany

Abstract. Different formal learning models address different aspects of human learning. Below we compare Gold-style learning—interpreting learning as a limiting process in which the learner may change its mind arbitrarily often before converging to a correct hypothesis—to learning via queries—interpreting learning as a one-shot process in which the learner is required to identify the target concept with just one hypothesis.
Although these two approaches seem rather unrelated at first glance, we provide characterizations of different models of Gold-style learning (learning in the limit, conservative inference, and behaviourally correct learning) in terms of query learning. Thus we describe the circumstances which are necessary to replace limit learners by equally powerful one-shot learners. Our results are valid in the general context of learning indexable classes of recursive languages.
In order to achieve the learning capability of Gold-style learners, the crucial parameters of the query learning model are the type of queries (membership, restricted superset, or restricted disjointness queries) and the underlying hypothesis space (uniformly recursive, uniformly r. e., or uniformly 2-r. e. families). The characterizations of Gold-style language learning are formulated in dependence of these parameters.
Undeniably, there is no formal scheme spanning all aspects of human learning.
Thus each learning model analysed within the scope of learning theory addresses only special facets of our understanding of learning.
For example, Gold's [8] model of identification in the limit is concerned with learning as a limiting process of creating, modifying, and improving hypotheses about a target concept. These hypotheses are based upon instances of the target concept offered as information. In the limit, the learner is supposed to stabilize on a correct guess, but during the learning process one will never know whether or not the current hypothesis is already correct. Here the ability to change its mind is a crucial feature of the learner.
In contrast to that, Angluin's [2, 3] model of learning with queries focusses on learning as a finite process of interaction between a learner and a teacher. The learner asks questions of a specified type about the target concept and the teacher—having the target concept in mind—answers these questions truthfully.
After finitely many steps of interaction the learner is supposed to return its sole hypothesis—correctly describing the target concept. Here the crucial features of the learner are its ability to demand special information on the target concept and its restrictiveness in terms of mind changes. Since a query learner is required to identify the target concept with just a single hypothesis, we refer to this phenomenon as one-shot learning.
Our analysis concerns common features and coincidences between these two seemingly unrelated approaches, thereby focussing our attention on the identification of formal languages, ranging over indexable classes of recursive languages, as target concepts, see [1, 10, 14]. In case such coincidences exist, their revelation might allow for transferring theoretically approved insights from one model to the other. In this context, our main focus will be on characterizations of Gold-style language learning in terms of learning via queries. Characterizing different types of Gold-style language learning in such a way, we will point out interesting correspondences between the two models. In particular, our results demonstrate how learners identifying languages in the limit can be replaced by one-shot learners without loss of learning power. That means, under certain circumstances the capability of limit learners is equal to that of one-shot learners using queries.
The crucial question in this context is what abilities of the teacher are required to achieve the learning capability of Gold-style learners for query learners.
In particular, it is of importance which types of queries the teacher is able to answer (and thus the learner is allowed to ask). This addresses two facets: first, the kind of information prompted by the queries (we consider membership, restricted superset, and restricted disjointness queries) and second, the hypothesis space used by the learner to formulate its queries and hypotheses (we consider uniformly recursive, uniformly r. e., and uniformly 2-r. e. families). Note that both aspects affect the demands on the teacher.
Our characterizations reveal the corresponding necessary requirements that have to be made on the teacher. Thereby we formulate coincidences of the learning capabilities assigned to Gold-style learners and query learners in a quite general context, considering three variants of Gold-style language learning. Moreover, we compare our results to several insights in Gold-style learning via oracles, see [13] for a formal background. As a byproduct of our analysis, we provide a special indexable class of recursive languages which can be learned in a behaviourally correct manner3 in case a uniformly r. e. family is chosen as a hypothesis space, but which is not learnable in the limit, no matter which hypothesis space is chosen. Although such classes have already been offered in the literature, see [1], up to now all examples—to the authors' knowledge—are defined via diagonalisation in a rather involved manner. In contrast to that, the class we provide below is very simply and explicitly defined without any diagonal construction.

3 Behaviourally correct learning is a variant of learning in the limit, see for example [7, 4, 13]. A definition is given later on.
Preliminaries and basic results

Familiarity with standard mathematical, recursion theoretic, and language theoretic notions and notations is assumed, see [12, 9]. From now on, a fixed finite alphabet Σ with {a, b} ⊆ Σ is given. A word is any element from Σ∗ and a language any subset of Σ∗. The complement L̄ of a language L is the set Σ∗ \ L.
Any infinite sequence t = (wi)i∈N with {wi | i ∈ N} = L is called a text for L.
A family (Ai)i∈N of languages is uniformly recursive (uniformly r. e.) if there is a recursive (partial recursive) function f such that Ai = {w ∈ Σ∗ | f(i, w) = 1} for all i ∈ N. A family (Ai)i∈N is uniformly 2-r. e., if there is a recursive function g such that Ai = {w ∈ Σ∗ | g(i, w, n) = 1 for all but finitely many n} for all i ∈ N. Note that for uniformly recursive families membership is uniformly decidable.
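As an illustrative sketch (a toy family of our own, not from the paper): for Li = {a^k | k ≤ i}, membership is uniformly decided by one total function f, witnessing uniform recursiveness.

```python
# Toy uniformly recursive family (our own example): L_i = { a^k | k <= i }.
# A single total function f decides membership uniformly in the index i.
def f(i, w):
    """Return 1 if the word w belongs to L_i, else 0."""
    return 1 if all(c == "a" for c in w) and len(w) <= i else 0
```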
Let C be a class of recursive languages over Σ∗. C is said to be an indexable class of recursive languages (in the sequel we will write indexable class for short), if there is a uniformly recursive family (Li)i∈N of all and only the languages in C.
Such a family will subsequently be called an indexing of C.
A family (Ti)i∈N of finite languages is recursively generable, if there is a recursive function that, given i ∈ N, enumerates all elements of Ti and stops.
In the sequel, let ϕ be a Gödel numbering of all partial recursive functions and Φ the associated Blum complexity measure, see [5].
Gold-style language learning

Let C be an indexable class, H = (Li)i∈N any uniformly recursive family (called hypothesis space), and L ∈ C. An inductive inference machine (IIM) M is an algorithmic device that reads longer and longer initial segments σ of a text and outputs numbers M(σ) as its hypotheses. An IIM M returning some i is construed to hypothesize the language Li. Given a text t for L, M identifies L from t with respect to H in the limit, if the sequence of hypotheses output by M, when fed t, stabilizes on a number i (i. e., past some point M always outputs the hypothesis i) with Li = L. M identifies C in the limit from text with respect to H, if it identifies every L′ ∈ C from every corresponding text. LimTxtrec denotes the collection of all indexable classes C′ for which there are an IIM M′ and a uniformly recursive family H′ such that M′ identifies C′ in the limit from text with respect to H′. A quite natural and often studied modification of LimTxtrec is defined by the model of conservative inference, see [1]. M is a conservative IIM for C with respect to H, if M performs only justified mind changes, i. e., if M, on some text t for some L ∈ C, outputs hypotheses i and later j, then M must have seen some element w ∉ Li before returning j. The collection of all indexable classes identifiable from text by a conservative IIM is denoted by ConsvTxtrec.
Note that ConsvTxtrec ⊂ LimTxtrec [14]. Since we consider learning from text only, we will assume in the sequel that all languages to be learned are non-empty.
One main aspect of human learning is modelled in the approach of learning in the limit: the ability to change one's mind during learning. Thus learning is considered as a process in which the learner may change its hypothesis arbitrarily often until reaching its final correct guess. In particular, it is in general impossible to find out whether or not the final hypothesis has been reached, i. e., whether or not a success in learning has already eventuated.
Note that in the given context, where only uniformly recursive families are considered as hypothesis spaces for indexable classes, LimTxtrec coincides with the collection of all indexable classes identifiable from text in a behaviourally correct manner, see [7]: If C is an indexable class, H = (Li)i∈N a uniformly recursive family, and M an IIM, then M is a behaviourally correct learner for C from text with respect to H, if for each L ∈ C and each text t for L, all but finitely many outputs i of M when fed t fulfil Li = L. Here M may alternate different correct hypotheses arbitrarily often instead of converging to a single hypothesis. Defining the notion BcTxtrec correspondingly as usual yields BcTxtrec = LimTxtrec (a folklore result). In particular, each IIM BcTxt-identifying an indexable class C′ in some uniformly recursive family H′ can be modified to an IIM LimTxt-identifying C′ in H′.
This coincidence no longer holds, if more general types of hypothesis spaces are considered. Assume C is an indexable class and H+ = (Ui)i∈N is any uniformly r. e. family of languages comprising C. Then it is also conceivable to use H+ as a hypothesis space. LimTxtr.e. (BcTxtr.e.) denotes the collection of all indexable classes learnable as in the definition of LimTxtrec (BcTxtrec), if the demand for a uniformly recursive family H as a hypothesis space is loosened to demanding a uniformly r. e. family H+ as a hypothesis space. Interestingly, LimTxtrec = LimTxtr.e. (a folklore result), i. e., in learning in the limit, the capabilities of IIMs do not increase, if the constraints concerning the hypothesis space are weakened by allowing for arbitrary uniformly r. e. families. In contrast to that, in the context of BcTxt-identification, weakening these constraints yields an add-on in learning power, i. e., BcTxtrec ⊂ BcTxtr.e. In particular, LimTxtrec ⊂ BcTxtr.e. and so LimTxt- and BcTxt-learning no longer coincide for identification with respect to arbitrary uniformly r. e. families, see also [4, 1].
Hence, in what follows, our analysis of Gold-style language learning will focus on the inference types LimTxtrec, ConsvTxtrec, and BcTxtr.e.
The main results of our analysis will be characterizations of these inference types in the query learning model. For that purpose we will make use of well-known characterizations concerning so-called families of telltales, see [1].
Definition 1. Let (Li)i∈N be a uniformly recursive family and (Ti)i∈N a family of finite non-empty sets. (Ti)i∈N is called a family of telltales for (Li)i∈N, if for all i, j ∈ N:
1. Ti ⊆ Li.
2. If Ti ⊆ Lj ⊆ Li, then Lj = Li.
The concept of telltale families is the best known notion to illustrate the specific differences between indexable classes in LimTxtrec, ConsvTxtrec, and BcTxtr.e. Telltale families and their algorithmic structure have turned out to be characteristic for identifiability in our three models, see [1, 10, 14, 4]:

Theorem 1. Let C be an indexable class of languages.
1. C ∈ LimTxtrec iff there is an indexing of C possessing a uniformly r. e. family of telltales.
2. C ∈ ConsvTxtrec iff there is a uniformly recursive family comprising C and possessing a recursively generable family of telltales.
3. C ∈ BcTxtr.e. iff there is an indexing of C possessing a family of telltales.
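As a small sanity check (a toy family of our own, not from the paper): for Li = {a^k | 1 ≤ k ≤ i + 1}, the singletons Ti = {a^(i+1)} form a family of telltales, since a language of the family containing a^(i+1) and contained in Li must equal Li. Both conditions of Definition 1 can be verified by brute force on a finite prefix of the family.

```python
# Toy family L_i = { a^k | 1 <= k <= i+1 } with candidate telltales
# T_i = { a^(i+1) }; we check Definition 1 on the first n indices.
def L(i):
    return {"a" * k for k in range(1, i + 2)}

def T(i):
    return {"a" * (i + 1)}

def is_telltale_prefix(n):
    """Check both telltale conditions for all indices i, j < n."""
    for i in range(n):
        if not T(i) <= L(i):                              # condition 1
            return False
        for j in range(n):
            if T(i) <= L(j) <= L(i) and L(j) != L(i):     # condition 2
                return False
    return True
```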
The notion of telltales is closely related to the notion of locking sequences, see [6]. If H = (Ui)i∈N is a hypothesis space, M an IIM, and L a language, then any finite text segment σ of L is called a LimTxt-locking sequence for M and L (a BcTxt-locking sequence for M, L, and H), if M(σ) = M(σσ′) (UM(σ) = UM(σσ′)) for all finite text segments σ′ of L. If L is LimTxt-learned by M (BcTxt-learned by M) respecting H, then there exists a LimTxt-locking sequence σ for M and L (a BcTxt-locking sequence for M, L, and H). Moreover, UM(σ) = L must be fulfilled for each such locking sequence.
Language learning via queries

In the query learning model, a learner has access to a teacher that truthfully answers queries of a specified kind. A query learner M is an algorithmic device that, depending on the replies to the previous queries, either computes a new query or returns a hypothesis and halts, see [2]. Its queries and hypotheses are coded as natural numbers; both will be interpreted with respect to an underlying hypothesis space. When learning an indexable class C, any indexing H = (Li)i∈N of C may form a hypothesis space. So, as in the original definition, see [2], when learning C, M is only allowed to query languages belonging to C.
More formally, let C be an indexable class, let L ∈ C, let H = (Li)i∈N be an indexing of C, and let M be a query learner. M learns L with respect to H using some type of queries if it eventually halts and its only hypothesis, say i, correctly describes L, i. e., Li = L. So M returns its unique and correct guess i after only finitely many queries. Moreover, M learns C with respect to H using some type of queries, if it learns every L′ ∈ C with respect to H using queries of the specified type. Below we consider, for learning a target language L:

Membership queries. The input is a string w and the answer is ‘yes' or ‘no', depending on whether or not w belongs to L.
Restricted superset queries. The input is an index of a language L′ ∈ C. The answer is ‘yes' or ‘no', depending on whether or not L′ is a superset of L.
Restricted disjointness queries. The input is an index of a language L′ ∈ C. The answer is ‘yes' or ‘no', depending on whether or not L′ and L are disjoint.4

4 The term "restricted" is used to distinguish these types of query learning from learning with superset (disjointness) queries, where, together with each negative answer, the learner is provided a counterexample, i. e., a word in L \ Lj (in L ∩ Lj).
MemQ, rSupQ, and rDisQ denote the collections of all indexable classes C′ for which there are a query learner M′ and a hypothesis space H′ such that M′ learns C′ with respect to H′ using membership, restricted superset, and restricted disjointness queries, respectively. In the sequel we will omit the term "restricted" for convenience. In the literature, see [2, 3], more types of queries such as (restricted) subset queries and equivalence queries have been analysed, but in what follows we concentrate on the three types explained above.
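For concreteness, the three kinds of answers a teacher gives can be sketched over finite stand-in languages (a toy of our own; real targets are languages over Σ∗, which a finite model cannot fully capture):

```python
# Toy teacher answering the three query types for a fixed target
# language; finite sets stand in for languages over Sigma^*.
class Teacher:
    def __init__(self, target):
        self.target = set(target)

    def membership(self, w):
        # 'yes' iff the word w belongs to the target language
        return w in self.target

    def restricted_superset(self, L):
        # 'yes' iff the queried language is a superset of the target
        return set(L) >= self.target

    def restricted_disjointness(self, L):
        # 'yes' iff the queried language and the target are disjoint
        return not (set(L) & self.target)
```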
Note that, in contrast to the Gold-style models introduced above, learning via queries focusses on the aspect of one-shot learning, i. e., it is concerned with learning scenarios in which learning may eventuate without mind changes.
Having a closer look at the different models of query learning, one easily finds negative learnability results. For instance, the class Csup consisting of the language L∗ = {a}∗ ∪ {b} and all languages {a^k | k ≤ i}, i ∈ N, is not learnable with superset queries. Assume a query learner M learns Csup with superset queries in an indexing (Li)i∈N of Csup and consider a scenario for M learning L∗. Obviously, a query j is answered ‘yes', iff Lj = L∗. After finitely many queries, M hypothesizes L∗. Now let i be maximal, such that a query j with Lj = {a^k | k ≤ i} has been posed. The above scenario is also feasible for the language {a^k | k ≤ i + 1}. Given this language as a target, M will return a hypothesis representing L∗ and thus fail. This yields a contradiction, so Csup ∉ rSupQ.

Moreover, as can be verified easily, the class Cdis consisting only of the languages {a} and {a, b} is not learnable with disjointness queries.
Both examples point to a drawback of Angluin's query model, namely the demand that a query learner is restricted to pose queries concerning languages contained in the class of possible target languages. Note that the class Csup would be learnable with superset queries, if it was additionally permitted to query the language {a}∗, i. e., to ask whether or not this language is a superset of the target language. Similarly, Cdis would be learnable with disjointness queries, if it was additionally permitted to query the language {b}. That means there are very simple classes of languages, for which any query learner must fail just because it is barred from asking the "appropriate" queries.
To overcome this drawback, it seems reasonable to allow the query learner to formulate its queries with respect to any uniformly recursive family comprising the target class C. So let C be an indexable class. An extra query learner for C is permitted to query languages in any uniformly recursive family (L′i)i∈N comprising C. We say that C is learnable with extra superset (disjointness) queries respecting (L′i)i∈N iff there is an extra query learner M learning C with respect to (L′i)i∈N using superset (disjointness) queries concerning (L′i)i∈N. rSupQrec (rDisQrec) denotes the collection of all indexable classes C learnable with extra superset (disjointness) queries respecting a uniformly recursive family.
Our classes Csup and Cdis witness rSupQ ⊂ rSupQrec and rDisQ ⊂ rDisQrec. Note that both classes would already be learnable, if in addition to the superset (disjointness) queries the learner was allowed to ask a membership query for the word b. So the capabilities of rSupQ-learners (rDisQ-learners) already increase with the additional permission to ask membership queries. Yet, as Theorem 2 shows, combining superset or disjointness queries with membership queries does not yield the same capability as extra queries do. For convenience, denote the family of classes which are learnable with a combination of superset (disjointness) queries and membership queries by rSupMemQ (rDisMemQ).
Theorem 2.
1. rSupQ ⊂ rSupMemQ ⊂ rSupQrec.
2. rDisQ ⊂ rDisMemQ ⊂ rDisQrec.

Proof. ad 1. rSupQ ⊆ rSupMemQ is evident; the class Csup yields the inequality.
In order to prove rSupMemQ ⊆ rSupQrec, note that, for any word w and any language L, w ∈ L iff Σ∗ \ {w} ⊉ L. This helps to simulate membership queries with extra superset queries. Further details are omitted.
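This simulation step can be made concrete in a small sketch of our own (a finite word universe stands in for Σ∗, which does not affect the equivalence w ∈ L iff Σ∗ \ {w} ⊉ L):

```python
# Finite-universe sketch: a membership query reduces to one extra
# superset query for the (relative) complement of {w}.
def superset_teacher(query, target):
    return query >= target          # restricted superset query

def member_via_superset(w, target, universe):
    # w in L  iff  (universe \ {w}) is NOT a superset of L
    return not superset_teacher(universe - {w}, target)
```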
rSupQrec \ rSupMemQ ≠ ∅ is witnessed by the class C of all languages Lk and Lk,l for k, l ∈ N, where Lk = {a^k b^z | z ∈ N}, Lk,l = Lk, if ϕk(k) is undefined, and Lk,l = {a^k b^z | z ≤ Φk(k) ∨ z > Φk(k) + l}, if ϕk(k) is defined, see [10].
To verify C ∈ rSupQrec choose a uniformly recursive family comprising C and all languages L∗k = {a^k b^z | z ≤ Φk(k)}, k ∈ N. Note that L∗k = Lk, if ϕk(k) is undefined. An rSupQ-learner M for C may act on the following instructions.
- For k = 0, 1, 2, . . . ask a superset query concerning Lk, until the answer ‘yes' is received for the first time, i. e., until some k with Lk ⊇ L is found.
- Pose a superset query concerning the language L∗k. (* Note that L∗k is a superset of the target language iff L∗k is infinite iff ϕk(k) is undefined. *)
If the answer is ‘yes', then output a hypothesis representing Lk and stop.
If the answer is ‘no' (* in this case ϕk(k) is defined *), then compute Φk(k).
Pose a superset query concerning Lk,1. (* Note that, for any target language L ⊆ Lk, this query will be answered with ‘yes' iff a^k b^(Φk(k)+1) ∉ L. *)
If the answer is ‘no', then output a hypothesis representing Lk and stop.
If the answer is ‘yes', then, for any l = 2, 3, 4, . . ., pose a superset query concerning Lk,l. As soon as such a query is answered with ‘no', for some l, output a hypothesis representing Lk,l−1 and stop.
The details verifying that M learns C with extra superset queries are omitted.
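A hedged sketch of this learner, with the recursion-theoretic ingredients mocked: a table `halts` (our own device) plays the role of the values Φk(k), storing None when ϕk(k) is undefined, and words a^k b^z are tracked by the exponent z for a fixed k. This is a toy model of the strategy above, not the actual Gödel-numbering construction.

```python
# Languages are named by tuples: ("L", k), ("Lstar", k), ("Lkl", k, l).
def z_in(name, z, halts):
    """Is a^k b^z in the language denoted by `name`?"""
    kind, k = name[0], name[1]
    h = halts[k]
    if kind == "L" or h is None:   # L*_k and L_{k,l} collapse to L_k
        return True                # when phi_k(k) is undefined
    if kind == "Lstar":
        return z <= h
    l = name[2]                    # kind == "Lkl"
    return z <= h or z > h + l

def is_superset(query, target, halts, bound=50):
    """Teacher for restricted superset queries (checked up to `bound`,
    enough for the small parameters used here)."""
    if query[1] != target[1]:      # different a^k-prefix: never a superset
        return False
    return all(z_in(query, z, halts)
               for z in range(bound) if z_in(target, z, halts))

def learn_sup(target, halts):
    ask = lambda q: is_superset(q, target, halts)
    k = 0
    while not ask(("L", k)):       # find least k with L_k ⊇ target
        k += 1
    if ask(("Lstar", k)):          # 'yes' iff phi_k(k) is undefined
        return ("L", k)
    if not ask(("Lkl", k, 1)):     # a^k b^(Phi_k(k)+1) belongs to target
        return ("L", k)
    l = 2
    while ask(("Lkl", k, l)):
        l += 1
    return ("Lkl", k, l - 1)
```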
In contrast to that one can show that C ∉ rSupMemQ. Otherwise the halting problem with respect to ϕ would be decidable. Details are omitted.
Hence rSupMemQ ⊂ rSupQrec.

ad 2. rDisQ ⊆ rDisMemQ is obvious; the class Cdis yields the inequality.
In order to prove rDisMemQ ⊆ rDisQrec, note that, for any word w and any language L, w ∈ L iff {w} and L are not disjoint. This helps to simulate membership queries with extra disjointness queries. Further details are omitted.
To prove the existence of a class in rDisQrec \ rDisMemQ, define an indexable class C consisting of L0 = {b} and all languages Li+1 = {a^(i+1), b}, i ∈ N.
To show that C ∈ rDisQrec choose a uniformly recursive family comprising C as well as {a}∗ and all languages {a^(i+1)}, i ∈ N. A learner M identifying C with extra disjointness queries may work according to the following instructions.
- Pose a disjointness query concerning {a}∗. (* Note that the only possible target language disjoint with {a}∗ is L0. *)
If the answer is ‘yes', then return a hypothesis representing L0 and stop.
If the answer is ‘no', then, for i = 0, 1, 2, . . ., ask a disjointness query concerning {a^(i+1)}, until the answer ‘no' is received for the first time. (* Note that this must eventually happen. *) As soon as such a query is answered with ‘no', for some i, output a hypothesis representing Li+1 and stop.
The details verifying that M learns C with extra disjointness queries are omitted.
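The instructions above admit a direct sketch (our own toy encoding; the infinite language {a}∗ is modelled by a predicate over words):

```python
# Sketch of the extra-disjointness-query learner for the class
# C = { L_0 = {b} } ∪ { L_{i+1} = {a^(i+1), b} }.
def in_a_star(w):
    return all(c == "a" for c in w)     # membership in {a}^*

def disjoint(query_pred, target):
    # teacher: 'yes' iff the queried language misses every target word
    return not any(query_pred(w) for w in target)

def learn_dis(target):
    """Return the hypothesis index: 0 for L_0, i+1 for L_{i+1}."""
    if disjoint(in_a_star, target):
        return 0                        # only L_0 is disjoint from {a}^*
    i = 0
    while disjoint(lambda w, i=i: w == "a" * (i + 1), target):
        i += 1                          # search for the a-word in the target
    return i + 1
```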
In contrast one can show that C ∉ rDisMemQ. For that purpose, to deduce a contradiction, assume that there is a query learner M identifying C with disjointness and membership queries respecting an indexing (L′i)i∈N of C. Consider a learning scenario of M for the target language L0. Obviously, each disjointness query is answered with ‘no'; a membership query for a word w is answered with ‘no' iff w ≠ b. After finitely many queries, M must return a hypothesis representing L0.
Now let i be maximal, such that a membership query concerning a word a^i has been posed. The scenario described above is also feasible for the language {a^(i+1), b}. If this language constitutes the target, then M will return a hypothesis representing L0 and thus fail. This yields the desired contradiction.
Hence rDisMemQ ⊂ rDisQrec.

Characterizations of Gold-style inference types

Characterizations in the query model

One main difference between Gold-style and query learning lies in the question whether or not a current hypothesis of a learner is already correct. A Gold-style learner is allowed to change its mind arbitrarily often (thus in general this question can not be answered), whereas a query learner has to find a correct representation of the target object already in the first guess, i. e., within "one shot" (and thus the question can always be answered in the affirmative). Another difference is certainly the kind of information provided during the learning process. So, at first glance, these models seem to focus on very different aspects of human learning and do not seem to have much in common.
Thus the question arises, whether there are any similarities in these models at all and whether there are aspects of learning both models capture. This requires a comparison of both models concerning the capabilities of the corresponding learners. In particular, one central question in this context is whether Gold-style (limit) learners can be replaced by equally powerful (one-shot) query learners.
The rather trivial examples of classes not learnable with superset or disjointness queries already show that quite general hypothesis spaces—such as in learning with extra queries—are an important demand, if such a replacement shall be successful. In other words, we demand a more potent teacher, able to answer more general questions than in Angluin's original model. Astonishingly, this demand is already sufficient to reach the capabilities of conservative limit learners: in [11] it is shown that the collection of indexable classes learnable with extra superset queries coincides with ConsvTxtrec. And, moreover, this also holds for the collection of indexable classes learnable with extra disjointness queries.
Theorem 3. rSupQrec = rDisQrec = ConsvTxtrec.

Proof. rSupQrec = ConsvTxtrec holds by [11]. Thus it remains to prove that rSupQrec = rDisQrec. For that purpose let C be any indexable class.
First assume C ∈ rDisQrec. Then there is a uniformly recursive family (Li)i∈N and a query learner M, such that M learns C with extra disjointness queries with respect to (Li)i∈N. Now define L′2i = Li and L′2i+1 = Σ∗ \ Li for all i ∈ N; (L′i)i∈N is uniformly recursive and comprises C.
Suppose L is a target language. A query learner M′ identifying L with extra superset queries respecting (L′i)i∈N is defined via the following instructions:
- Simulate M when learning L.
- If M poses a disjointness query concerning Li, then pose a superset query concerning L′2i+1 to your teacher. If the answer is ‘yes', then transmit the answer ‘yes' to M. If the answer is ‘no', then transmit the answer ‘no' to M.
(* Note that Li ∩ L = ∅ iff L ⊆ Σ∗ \ Li iff L′2i+1 ⊇ L. *)
- If M hypothesizes Li, then output a representation for L′2i.
It is not hard to verify that M′ learns C with extra superset queries with respect to (L′i)i∈N. Hence C ∈ rSupQrec. This implies rDisQrec ⊆ rSupQrec. The opposite inclusion rSupQrec ⊆ rDisQrec is verified analogously.
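The heart of this simulation is the equivalence Li ∩ L = ∅ iff Σ∗ \ Li ⊇ L, which a small finite-universe sketch of our own makes explicit:

```python
# Finite-universe sketch of the reduction: a disjointness query for L_i
# is answered exactly like a superset query for its complement.
def superset_teacher(query, target):
    return query >= target                  # restricted superset query

def simulated_disjointness(L_i, target, universe):
    # L_i ∩ L = ∅  iff  (universe \ L_i) ⊇ L
    return superset_teacher(universe - L_i, target)
```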
As initially in Gold-style learning, we have only considered uniformly recursive families as hypothesis spaces for query learners. Similarly to the notion of BcTxtr.e., it is conceivable to permit more general hypothesis spaces also in the query model, i. e., to demand an even more potent teacher. Thus, by rSupQr.e. (rDisQr.e.) we denote the collection of all indexable classes which are learnable with superset (disjointness) queries respecting a uniformly r. e. family. Interestingly, this relaxation helps to characterize learning in the limit in terms of query learning.
Theorem 4. rDisQr.e. = LimTxtrec.

Proof. First we show rDisQr.e. ⊆ LimTxtrec. For that purpose, let C ∈ rDisQr.e.
be an indexable class. Fix a uniformly r. e. family (Ui)i∈N and a query learner M identifying C with disjointness queries with respect to (Ui)i∈N.
The following IIM M′ LimTxt-identifies C with respect to (Ui)i∈N. Given a text segment σ of length n, M′ interacts with M simulating a learning process for n steps. In step k, k ≤ n, depending on how M′ has replied to the previous queries posed by M, the learner M computes either (i) a new query i or (ii) a hypothesis i. In case (ii), M′ returns the hypothesis i and stops simulating M.
In case (i), M′ checks whether there is a word in σ, which is found in Ui within n steps. If such a word exists, M′ transmits the answer ‘no' to M; else M′ transmits the answer ‘yes' to M. If k < n, M executes step k + 1, else M′ returns any auxiliary hypothesis and stops simulating M. Given segments σ of a text for some target language, if their length n is large enough, M′ answers all queries of M correctly and M returns its sole hypothesis within n steps. So, the hypotheses returned by M′ stabilize on this correct guess.
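A hedged sketch of this simulation in a finite toy of our own: `enumerate_U(i, n)` stands for enumerating the r. e. language Ui for n steps, and `query_learner` for the one-shot learner M (here a simple learner for the singleton class Ui = {a^i}).

```python
# The IIM M' answers M's disjointness queries from its finite text
# segment sigma, enumerating each queried language U_i for only n steps.
def iim(sigma, query_learner, enumerate_U, n):
    def answer(i):
        # 'yes' (disjoint) unless some word of sigma shows up among the
        # first n enumerated elements of U_i
        seen = set(enumerate_U(i, n))
        return not any(w in seen for w in sigma)
    return query_learner(answer, n)

# Toy instantiation: U_i = { a^i }, targets are singletons, and the
# one-shot learner asks disjointness queries for U_1, U_2, ... in turn.
def enumerate_U(i, n):
    return ["a" * i][:n]

def query_learner(answer, max_steps):
    for i in range(1, max_steps + 1):
        if not answer(i):        # not disjoint: a^i is in the target
            return i
    return 0                     # auxiliary hypothesis (budget exhausted)
```

On longer and longer text segments, the returned hypotheses stabilize on the correct index, mirroring the convergence argument above.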
Hence C ∈ LimTxtr.e. (= LimTxtrec) and therefore rDisQr.e. ⊆ LimTxtrec.

Second we show that LimTxtrec ⊆ rDisQr.e. So let C ∈ LimTxtrec be an indexable class. Fix an indexing H = (Li)i∈N of C and an IIM M, such that M LimTxt-identifies C with respect to H.
Let (Ui)i∈N be any Gödel numbering of all r. e. languages and (wx)x∈N an effective enumeration of Σ∗. Suppose L ∈ C is the target language. An rDisQ-learner M′ for L with respect to (Ui)i∈N is defined to act on the following instructions, starting in step 0. Note that Gödel numbers (representations in (Ui)i∈N) can be computed for all queries to be asked. Step n reads as follows:
- Ask disjointness queries for {w0}, . . ., {wn}. Let L[n] be the set of words wx, x ≤ n, for which the corresponding query is answered with ‘no'. (* Note that L[n] = L ∩ {wx | x ≤ n}. *)
- Let (σx,y)y∈N be an effective enumeration of all finite text segments for L[x], x ≤ n. For all x, y ≤ n pose a disjointness query for the complement Σ∗ \ LM(σx,y) and thus build Candn = {σx,y | x, y ≤ n and L ∩ (Σ∗ \ LM(σx,y)) = ∅} from the queries answered with ‘yes'.
(* Note that Candn = {σx,y | x, y ≤ n and L ⊆ LM(σx,y)}. *)
- For all σ ∈ Candn, pose a disjointness query for the language
U′σ = Σ∗, if M(σσ′) ≠ M(σ) for some text segment σ′ of LM(σ); U′σ = ∅, otherwise.
(* Note that U′σ is uniformly r. e. in σ and U′σ ∩ L = ∅ iff σ is a LimTxt-locking sequence for M and LM(σ). *)
If all these disjointness queries are answered with ‘no', then go to step n + 1. Otherwise, if σ ∈ Candn is minimal fulfilling U′σ ∩ L = ∅, then return a hypothesis representing LM(σ) and stop.
M′ identifies L with disjointness queries respecting (Ui)i∈N, because (i) M′ eventually returns a hypothesis and (ii) this hypothesis is correct for L. To prove (i), note that M is a LimTxt-learner for L respecting (Li)i∈N. So there are i, x, y such that M(σx,y) = i, Li = L, and σx,y is a LimTxt-locking sequence for M and L.
Then U′σx,y ∩ L = ∅ and the corresponding disjointness query is answered with ‘yes'.
Thus M′ returns a hypothesis. To prove (ii), assume M′ returns a hypothesis representing LM(σ) for some text segment σ of L. Then, by definition of M′, L ⊆ LM(σ) and σ is a LimTxt-locking sequence for M and LM(σ). In particular, σ is a LimTxt-locking sequence for M and L. Since M learns L in the limit from text, this implies L = LM(σ). Hence the hypothesis M′ returns is correct for L.
Therefore C ∈ rDisQr.e., i. e., LimTxtrec ⊆ rDisQr.e.
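Stripped of everything non-effective, the construction above can be sketched in a fully finite toy of our own: two languages, a trivial limit learner M, and the locking-sequence condition checked by brute force over bounded segments (in the proof these quantifiers are handled by r. e. disjointness queries).

```python
import itertools

# Finite toy of the one-shot learner built from a limit learner M:
# enumerate text segments of the target, keep those sigma with
# L ⊆ L_M(sigma) (the set Cand_n), and return M(sigma) for the first
# sigma that is a LimTxt-locking sequence.
languages = {1: {"a"}, 2: {"a", "b"}}

def M(sigma):                      # a toy limit learner
    return 2 if "b" in sigma else 1

def segments(words, max_len):
    for n in range(1, max_len + 1):
        yield from itertools.product(sorted(words), repeat=n)

def one_shot(target, max_len=3):
    for sigma in segments(target, max_len):
        i = M(sigma)
        if target <= languages[i]:                       # sigma in Cand_n
            locking = all(M(sigma + tau) == M(sigma)     # locking check
                          for tau in segments(languages[i], max_len))
            if locking:
                return i
    return None
```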
Reducing the constraints concerning the hypothesis spaces even more, let rSupQ2-r.e. (rDisQ2-r.e.) denote the collection of all indexable classes which are learnable using superset (disjointness) queries with respect to a uniformly 2-r. e. family.5 This finally allows for a characterization of the classes in BcTxtr.e.

Theorem 5. rSupQ2-r.e. = rDisQ2-r.e. = BcTxtr.e.
Proof. First we show rSupQ2-r.e. ⊆ BcTxtr.e. and rDisQ2-r.e. ⊆ BcTxtr.e. For that purpose, let C ∈ rSupQ2-r.e. (rDisQ2-r.e.) be an indexable class, (Li)i∈N an indexing of C. Fix a uniformly 2-r. e. family (Vi)i∈N and a query learner M identifying C with superset (disjointness) queries respecting (Vi)i∈N.
5 With analogous definitions for Gold-style learning one easily obtains LimTxt2-r.e. = LimTxtr.e. = LimTxtrec and BcTxt2-r.e. = BcTxtr.e.
To obtain a contradiction, assume that C ∉ BcTxtr.e. By Theorem 1, (Li)i∈N does not possess a telltale family. In other words, there is some i ∈ N, such that for any finite set W ⊆ Li there exists some j ∈ N satisfying W ⊆ Lj ⊂ Li. (∗)
Consider M when learning Li. In the corresponding learning scenario S
– M poses queries representing Vi1, . . ., Vim,
– the answers are ‘no' for Vi1, . . ., Vik and ‘yes' for Vik+1, . . ., Vim,
– afterwards M returns a hypothesis representing Li.
That means, for all z ∈ {1, . . ., k}, we have Viz ⊉ Li (Viz ∩ Li ≠ ∅). In particular, for all z ∈ {1, . . ., k}, there is a word wz ∈ Li \ Viz (wz ∈ Li ∩ Viz). Let W = {w1, . . ., wk} (⊆ Li). By (∗) there is some j ∈ N satisfying W ⊆ Lj ⊂ Li.
Now note that the above scenario S is also feasible for Lj: wz ∈ Lj implies Viz ⊉ Lj (Viz ∩ Lj ≠ ∅) for all z ∈ {1, . . ., k}, and Viz ⊇ Li ⊇ Lj (Viz ∩ Li = ∅ implies Viz ∩ Lj = ∅) for all z ∈ {k + 1, . . ., m}. Thus all queries in S are answered truthfully for Lj. Since M hypothesizes Li in the scenario S, and Li ≠ Lj, M fails to identify Lj. This is the desired contradiction.
Hence C ∈ BcTxtr.e., so rSupQ2-r.e. ⊆ BcTxtr.e. and rDisQ2-r.e. ⊆ BcTxtr.e.
Second we show that BcTxt r.e. ⊆ rSupQ r.e. ⊆ rDisQ 2-r.e.
let C ∈ BcTxt r.e. be an indexable class. Fix a uniformly r. e. family (Ui)i∈ and an IIM M , such that M BcTxt r.e.-identifies C with respect to (Ui)i∈ .
Let (Vi)i∈N be a uniformly 2-r. e. family such that indices can be computed for all queries to be asked below. Let (wx)x∈N be an effective enumeration of Σ∗.
Assume L ∈ C is the target language. A query learner M′ identifying L with superset (disjointness) queries respecting (Vi)i∈N is defined according to the following instructions, starting in step 0. Step n reads as follows:
- Ask superset queries for Σ∗ \ {wi} (disjointness queries for {wi}) for all i ≤ n.
Let L[n] be the set of words wx, x ≤ n, for which the corresponding query is answered with ‘no'. (* Note that L[n] = L ∩ {wx | x ≤ n}. *)
- For every x, let (σxy)y∈N be an effective enumeration of all finite text segments for L[x]. For all x, y ≤ n pose a superset query for UM(σxy) (a disjointness query for the complement Σ∗ \ UM(σxy)) and thus build Candn = {σxy | x, y ≤ n, UM(σxy) ⊇ L} from the queries answered with ‘yes'. (* For disjointness queries, the answer ‘yes' means (Σ∗ \ UM(σxy)) ∩ L = ∅, i.e., again UM(σxy) ⊇ L. *)
- For all σ ∈ Candn, pose a superset (disjointness) query for the language
V′σ = {w ∈ Σ∗ | UM(σ) ≠ UM(σσ′) for some text segment σ′ of UM(σ)} .
(* Note that V′σ is uniformly 2-r. e. in σ and that V′σ ⊉ L iff V′σ ∩ L = ∅ iff σ is a BcTxt-locking sequence for M and UM(σ). *)
If all these superset queries are answered with ‘yes' (all these disjointness queries are answered with ‘no'), then go to step n + 1. Otherwise, if σ ∈ Candn is minimal fulfilling V′σ ⊉ L and thus V′σ ∩ L = ∅, then return a hypothesis representing UM(σ) and stop.
M′ learns L with superset (disjointness) queries in (Vi)i∈N, because (i) M′ eventually returns a hypothesis and (ii) this hypothesis is correct for L. To prove (i), note that M is a BcTxt-learner for L in (Ui)i∈N. So there are x, y such that σxy is a BcTxt-locking sequence for M, L, and (Ui)i∈N. Hence the corresponding language V′σxy satisfies V′σxy ∩ L = ∅, and the corresponding superset query is answered with ‘no' (the disjointness query with ‘yes'). Thus M′ returns a hypothesis. To prove (ii), suppose M′ returns a hypothesis representing UM(σ) for a text segment σ of L. Then, by definition of M′, L ⊆ UM(σ) and σ is a BcTxt-locking sequence for M, UM(σ), and (Ui)i∈N. In particular, σ is a BcTxt-locking sequence for M, L, and (Ui)i∈N. As M BcTxt-learns L, this implies L = UM(σ) and the hypothesis of M′ is correct for L.
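The first instruction of each step n exploits a simple duality: the restricted superset query for Σ∗ \ {w} is answered ‘no' exactly when w belongs to the target, so M′ can recover any finite portion of L. A minimal sketch of this sub-routine in Python, with the teacher simulated by a membership test on a toy finite target (the enumeration and the example language are illustrative assumptions, not from the paper):

```python
def superset_answer(w, target):
    """Teacher's truthful answer to the restricted superset query for the
    language Sigma* \\ {w}: 'yes' iff that language covers the target,
    i.e. iff w is NOT in the target."""
    return w not in target

def recover_prefix(words, target, n):
    """Compute L[n] = L intersected with {w_0, ..., w_n} using only
    superset queries, as in the first instruction of step n."""
    return {w for w in words[:n + 1] if not superset_answer(w, target)}

# Toy enumeration of Sigma* and a toy target language L (assumed example).
words = ["a", "b", "ab", "ba", "aa", "bb"]
target = {"a", "ab", "bb"}

print(recover_prefix(words, target, 3))  # target words among w_0..w_3
```

In the paper the target is only accessible through the teacher; the membership test here merely plays the teacher's role in the simulation.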
Therefore C ∈ rSupQ 2-r.e. (C ∈ rDisQ 2-r.e.), and hence BcTxt r.e. ⊆ rSupQ 2-r.e. as well as BcTxt r.e. ⊆ rDisQ 2-r.e.

Characterizations in the model of learning with oracles—a comparison

In our characterizations we have seen that the capability of query learners strongly depends on the hypothesis space and thus on the demands concerning the abilities of the teacher. Of course, a teacher might have to be more potent to answer questions with respect to some uniformly r. e. family than to work in some uniformly recursive family. For instance, teachers of the first kind might have to be able to solve the halting problem with respect to some Gödel numbering. In other words, the learner might use such a teacher as an oracle for the halting problem. The problem we consider in the following is to specify nonrecursive sets A ⊆ N such that A-recursive6 query learners using uniformly recursive families as hypothesis spaces are as powerful as recursive learners using uniformly r. e. or uniformly 2-r. e. families. For instance, we know that LimTxt rec = rDisQ r.e. So we would like to specify a set A such that LimTxt rec equals the collection of all indexable classes which can be identified with A-recursive rDisQ rec-learners. The latter collection will be denoted rDisQ rec[A]. Subsequently, similar notions are used correspondingly.
In the Gold-style model, the use of oracles has been analysed for example in [13]. Most of the claims below use K-recursive or Tot-recursive learners, where K = {i | ϕi(i) is defined} and Tot = {i | ϕi is a total function}. Concerning coincidences in Gold-style learning, the use of oracles is illustrated by Lemma 1.
Lemma 1. 1. [13] ConsvTxt rec[K] = LimTxt rec.
2. ConsvTxt rec[Tot] = LimTxt rec[K] = BcTxt r.e.
3. BcTxt r.e.[A] = BcTxt r.e. for all A ⊆ N.
Proof. ad 3. Let A ⊆ N. By definition, BcTxt r.e. ⊆ BcTxt r.e.[A]. Thus it remains to prove the opposite inclusion, namely BcTxt r.e.[A] ⊆ BcTxt r.e. For that purpose let C ∈ BcTxt r.e.[A] be an indexable class. Fix an A-recursive IIM M such that C is BcTxt r.e.-learned by M. Moreover, let (Li)i∈N be an indexing of C.
Striving for a contradiction, assume C ∉ BcTxt r.e. By Theorem 1, (Li)i∈N does not possess a telltale family. In other words, there is some i ∈ N such that for any finite set W ⊆ Li there exists some j ∈ N satisfying W ⊆ Lj ⊂ Li.
6 A-recursive means recursive with the help of an oracle for the set A.
Since M is a BcTxt-learner for Li in some hypothesis space H, there must be a BcTxt-locking sequence σ for M, Li, and H. If W denotes the set of words occurring in σ, there is some language Lj ∈ C with W ⊆ Lj ⊂ Li. Thus σ is a BcTxt-locking sequence for M, Lj, and H. In particular, M fails to BcTxt r.e.-identify Lj. This yields the contradiction. Hence BcTxt r.e.[A] = BcTxt r.e.
ad 2. The proofs of ConsvTxt rec[Tot] ⊆ BcTxt r.e. and LimTxt rec[K] ⊆ BcTxt r.e. are obtained by similar means as the proof of 3. It suffices to use Theorem 1 for ConsvTxt rec and LimTxt rec instead of the accordant statement for BcTxt r.e.
Note that LimTxt rec[K] = BcTxt r.e. is already verified in [4].
Next we prove BcTxt r.e. ⊆ ConsvTxt rec[Tot] and BcTxt r.e. ⊆ LimTxt rec[K].
For that purpose, let C be an indexable class in BcTxt r.e. By Theorem 1 there is an indexing (Li)i∈N of C which possesses a family of telltales. Next we show:
(i) (Li)i∈N possesses a Tot-recursively generable (uniformly K-r. e.) family of telltales.
(ii) A ConsvTxt rec-learner (LimTxt rec-learner) for C can be computed from any recursively generable (uniformly r. e.) family of telltales for (Li)i∈N.
To prove (i), let (wx)x∈N be an effective enumeration of all words in Σ∗.
Given i ∈ N, let a function fi enumerate a set Ti as follows.
- fi(0) = wz for z = min{x | wx ∈ Li}.
- If fi(0), . . . , fi(n) are computed, then test whether or not there is some j ∈ N (some j ≤ n) such that {fi(0), . . . , fi(n)} ⊆ Lj ⊂ Li. (* Note that this test is Tot-recursive (K-recursive). *)
- If such a number j exists, then fi(n + 1) = wz for z = min{x | wx ∈ Li \ {fi(0), . . . , fi(n)}}. If no such number j exists, then fi(n + 1) = fi(n).
With Ti = {fi(x) | x ∈ N}, it is not hard to verify that (Ti)i∈N is a Tot-recursively generable (uniformly K-r. e.) family of telltales for (Li)i∈N. Here note that, in the case of using a Tot-oracle, Ti = {fi(x) | fi(y + 1) ≠ fi(y) for all y < x}.
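For a finite toy class, the inclusion test "is there some j with {fi(0), . . . , fi(n)} ⊆ Lj ⊂ Li?" is outright decidable, so the enumeration of fi and Ti can be simulated without any oracle. A sketch under that simplification (the chain class below is a hypothetical example, and the decidable test stands in for the Tot- or K-oracle):

```python
def telltale(i, langs, words):
    """Enumerate f_i and return T_i = {f_i(x) | x in N} for langs[i],
    following the instructions above. All languages are finite sets,
    so the test 'exists j with T subset of L_j proper subset of L_i'
    is decidable and plays the role of the oracle."""
    L = langs[i]
    f = [next(w for w in words if w in L)]           # f_i(0)
    while True:
        T = set(f)
        # Is there some j with T ⊆ L_j ⊂ L_i (proper inclusion)?
        if any(T <= Lj and Lj < L for Lj in langs):
            f.append(next(w for w in words if w in L - T))
        else:                                        # f_i stabilizes
            return T

# Toy chain class L_0 ⊃ L_1 ⊃ L_2 over the words 0..3.
words = [0, 1, 2, 3]
langs = [{0, 1, 2}, {0, 1}, {0}]

print([sorted(telltale(i, langs, words)) for i in range(3)])
```

For the chain class, the telltale of each Li must exhaust Li itself, since every proper prefix of the enumeration is contained in a proper sublanguage.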
Finally, (ii) holds since Theorem 1.1/1.2 has a constructive proof, see [1, 10].
Claims (i) and (ii) imply C ∈ ConsvTxt rec[Tot] and C ∈ LimTxt rec[K]. So BcTxt r.e. ⊆ ConsvTxt rec[Tot] and BcTxt r.e. ⊆ LimTxt rec[K].
Since this proof is constructive, as are the proofs of our characterizations above, we can deduce results like, for example, rDisQ rec[K] = LimTxt rec: given C ∈ LimTxt rec, a K-recursive conservative IIM for C can be constructed from a LimTxt rec-learner for C. Moreover, a rDisQ rec-learner for C can be constructed from a conservative IIM for C. Thus, a K-recursive rDisQ rec-learner for C can be constructed from a LimTxt rec-learner. Similar results are obtained by combining Lemma 1 with our characterizations above. This proves the following theorem.
Theorem 6. 1. rSupQ rec[K] = rDisQ rec[K] = LimTxt rec.
2. rSupQ rec[Tot] = rDisQ rec[Tot] = BcTxt r.e.
3. rSupQ 2-r.e.[A] = rDisQ 2-r.e.[A] = BcTxt r.e. for all A ⊆ N.
Our characterizations have revealed a correspondence between Gold-style learning and learning via queries—between limiting and one-shot learning processes.
Crucial in this context is that the learner may ask the "appropriate" queries.
Thus the choice of hypothesis spaces and, correspondingly, the ability of the teacher is decisive. If the teacher is potent enough to answer disjointness queries in some uniformly r. e. family of languages, then, by Theorem 4, learning with disjointness queries coincides with learning in the limit. Interestingly, given uniformly recursive or uniformly 2-r. e. families as hypothesis spaces, disjointness and superset queries coincide respecting the learning capabilities. As it turns out, this coincidence is not valid if the hypothesis space may be any uniformly r. e. family. That means, rDisQ r.e. (= LimTxt rec) is not equal to the collection of all indexable classes learnable with superset queries in uniformly r. e. families.
Theorem 7. LimTxt rec ⊂ rSupQ r.e.
Proof. To verify LimTxt rec ⊆ rSupQ r.e., the proof of LimTxt rec ⊆ rDisQ r.e. can be adapted. It remains to quote a class in rSupQ r.e. \ LimTxt rec.
Let, for all k, n ∈ N, Clim contain the languages Lk = {a^k b^z | z ≥ 0} and
Lk,n = {a^k b^z | z ≤ m}, if m ≤ n is minimal such that ϕk(m) is undefined;
Lk,n = {a^k b^z | z ≤ n} ∪ {b^{n+1} a^{y+1}}, if ϕk(x) is defined for all x ≤ n and y = max{Φk(x) | x ≤ n}.
Clim is an indexable class; the proof is omitted due to space constraints.
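To make the case distinction in the definition of Lk,n concrete, the following toy rendering models ϕk by a dict of defined values and the Blum complexity Φk by a dict of step counts; both stand-ins and the string encoding of a^k b^z are assumptions for illustration only:

```python
def L_k_n(k, n, phi, Phi):
    """Return the language L_{k,n} from the definition of C_lim.
    phi maps x to phi_k(x) where defined (a toy stand-in for a partial
    recursive function); Phi maps x to the Blum complexity Phi_k(x)
    wherever phi_k(x) is defined."""
    undefined = [m for m in range(n + 1) if m not in phi]
    if undefined:                       # some m <= n with phi_k(m) undefined
        m = min(undefined)
        return {f"a^{k} b^{z}" for z in range(m + 1)}
    # phi_k(x) defined for all x <= n: add the marker word b^{n+1} a^{y+1}
    y = max(Phi[x] for x in range(n + 1))
    return {f"a^{k} b^{z}" for z in range(n + 1)} | {f"b^{n+1} a^{y+1}"}

# phi_k defined on 0..2 only; Phi_k gives step counts for those inputs.
phi = {0: 7, 1: 7, 2: 7}
Phi = {0: 3, 1: 5, 2: 4}

print(sorted(L_k_n(1, 2, phi, Phi)))  # total up to 2: extra b-a word
print(sorted(L_k_n(1, 4, phi, Phi)))  # phi_k(3) undefined: plain prefix
```

The marker word b^{n+1} a^{y+1} is what a learner can exploit: its presence reveals that ϕk converged on all inputs up to n.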
To show Clim ∈ rSupQ r.e., let (Ui)i∈N be a Gödel numbering of all r. e. languages. Assume L ∈ Clim is the target language. A learner M identifying L with superset queries respecting (Ui)i∈N is defined to act on the following instructions:
- For k = 0, 1, 2, . . . , ask a superset query concerning Lk ∪ {b^r a^s | r, s ∈ N}, until the answer ‘yes' is received for the first time.
- Pose a superset query concerning the language Lk.
If the answer is ‘no', then, for r, s = 0, 1, 2, . . . , ask a superset query concerning Lk ∪ {b^{r+1} a^{s+1}}, until the answer ‘yes' is received for the first time. Output a hypothesis representing Lk,r and stop.
If the answer is ‘yes', then pose a superset query for the language
U′ = {a^k b^z | z ≤ n}, if n is minimal such that ϕk(n) is undefined;
U′ = Lk, if ϕk is a total function.
(* Note that U′ is uniformly r. e. in k and that U′ is a superset of L iff U′ = L. *)
If the answer is ‘yes', then return a hypothesis representing U′ and stop.
If the answer is ‘no', then return a hypothesis representing Lk and stop.
The details proving that M rSupQ r.e.-identifies Clim respecting (Ui)i∈N are omitted.
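Those omitted details can at least be illustrated. The sketch below simulates M on finite targets from Clim, with the teacher reduced to a decidable superset test and ϕk again modelled by a dict; the word encoding, the search bound, and the symbolic return values are all toy assumptions, not the paper's construction verbatim:

```python
# Words are modelled as tuples: ("ab", k, z) for a^k b^z and
# ("ba", r, s) for b^r a^s.
def ab(k, z): return ("ab", k, z)
def ba(r, s): return ("ba", r, s)

def make_teacher(target):
    """Truthful teacher for restricted superset queries: a query is a
    membership predicate for the (possibly infinite) queried language,
    and the answer is 'yes' iff it covers the finite target."""
    return lambda covers: all(covers(w) for w in target)

def learn(teacher, phi, bound=10):
    """The rSupQ-learner M for C_lim sketched above, restricted to
    finite targets so every query stays decidable. phi is the toy
    stand-in for phi_k (dict of defined values)."""
    # Find k with L contained in L_k union {b^r a^s | r, s in N}.
    k = next(k for k in range(bound)
             if teacher(lambda w: w[0] == "ba" or w[1] == k))
    # Is L_k itself a superset of L?
    if teacher(lambda w: w == ab(k, w[2])):
        # 'yes': query U' (computable here because phi is a dict).
        undef = [n for n in range(bound) if n not in phi]
        if undef and teacher(lambda w: w == ab(k, w[2]) and w[2] <= min(undef)):
            return {ab(k, z) for z in range(min(undef) + 1)}   # U'
        return ("Lk", k)                # symbolic index for L_k
    # 'no': L contains a b-a word; search for it.
    r, s = next((r, s) for r in range(bound) for s in range(bound)
                if teacher(lambda w: w == ab(k, w[2]) or w == ba(r + 1, s + 1)))
    return ("Lkr", k, r)                # symbolic index for L_{k,r}

phi = {0: 1, 1: 1}                      # phi_k undefined from 2 onwards
target = {ab(3, 0), ab(3, 1), ab(3, 2)} # a finite member of C_lim
print(learn(make_teacher(target), phi))
```

Note that only the simulation is recursive; in the actual proof the queried language U′ is merely r. e., which is exactly why the hypothesis space must be a uniformly r. e. family.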
Clim ∉ LimTxt rec holds, since otherwise Tot would be K-recursive.
To verify this, assume M is an IIM learning Clim in the limit from text. Let k ≥ 0. To decide whether or not ϕk is a total function, proceed as follows: Let σ be a LimTxt-locking sequence for M and Lk. (* Note that σ exists by assumption and thus can be found by a K-recursive procedure. *) If there is some x ≤ max{z | a^k b^z occurs in σ} such that ϕk(x) is undefined (* also a K-recursive test *), then return ‘0'. Otherwise return ‘1'.
It remains to show that ϕk is total if this procedure returns ‘1'. So let the procedure return ‘1'. Assume ϕk is not total and n is minimal such that ϕk(n) is undefined. By definition, the language L = {a^k b^z | z ≤ n} belongs to Clim. Moreover, since the procedure returned ‘1', every word a^k b^z in σ satisfies z < n.
Then the sequence σ found in the procedure is also a text segment for L and, by choice—since L ⊂ Lk—a LimTxt-locking sequence for M and L. As M(σ) is correct for Lk, M fails to identify L. This is a contradiction; hence ϕk is total.
Thus the set Tot is K-recursive—a contradiction. So Clim ∉ LimTxt rec.
Since rSupQ r.e. ⊆ rSupQ 2-r.e. = BcTxt r.e., one easily obtains rSupQ r.e. ⊆ BcTxt r.e. Whether or not these two collections are equal remains an open question. Still it is possible to prove that any indexable class containing just infinite languages is in rSupQ r.e. iff it is in BcTxt r.e. We omit the proof. In contrast to that, there are classes of only infinite languages in BcTxt r.e. \ LimTxt rec.
Moreover, note that the indexable class Clim defined in the proof of Theorem 7 belongs to BcTxt r.e. \ LimTxt rec. Up to now, the literature has not offered many such classes. The first example can be found in [1], but its definition is quite involved and uses a diagonalisation. In contrast to that, Clim is defined compactly and explicitly without a diagonal construction and is—to the authors' knowledge—the first such class known in BcTxt r.e. \ LimTxt rec.
1. D. Angluin. Inductive inference of formal languages from positive data. Inform.
Control, 45:117–135, 1980.
2. D. Angluin. Queries and concept learning. Machine Learning, 2:319–342, 1988.
3. D. Angluin. Queries revisited. Theoret. Comput. Sci., 313:175–194, 2004.
4. G. Baliga, J. Case, S. Jain. The synthesis of language learners. Inform. Comput., 152:16–43, 1999.
5. M. Blum. A machine-independent theory of the complexity of recursive functions.
J. ACM, 14:322–336, 1967.
6. L. Blum, M. Blum. Toward a mathematical theory of inductive inference. Inform.
Control, 28:125–155, 1975.
7. J. Case, C. Lynes. Machine inductive inference and language identification. In: Proc. ICALP 1982, LNCS 140, 107–115, Springer, 1982.
8. E. M. Gold. Language identification in the limit. Inform. Control, 10:447–474, 1967.
9. J. E. Hopcroft, J. D. Ullman. Introduction to Automata Theory, Languages, and Computation. Addison-Wesley Publishing Company, 1979.
10. S. Lange, T. Zeugmann. Language learning in dependence on the space of hypotheses. In: Proc. COLT 1993, 127–136, ACM Press, 1993.
11. S. Lange, S. Zilles. On the learnability of erasing pattern languages in the query model. In: Proc. ALT 2003, LNAI 2842, 129–143, Springer, 2003.
12. H. Rogers, Jr. Theory of Recursive Functions and Effective Computability, MIT Press, 1987.
13. F. Stephan. Degrees of Computing and Learning. Habilitationsschrift, Ruprecht-Karls-Universität Heidelberg, 1999.
14. T. Zeugmann, S. Lange. A guided tour across the boundaries of learning recursive languages. In: Algorithmic Learning for Knowledge-Based Systems, LNAI 961, 190–258, Springer, 1995.

Source: http://www2.cs.uregina.ca/~zilles/langeZ04COLTqueries.pdf
