Journal of Artificial Intelligence Research 2 (1995) 263–286. Submitted 8/94; published 1/95.

Solving Multiclass Learning Problems via Error-Correcting Output Codes

Thomas G. Dietterich, Department of Computer Science, 303 Dearborn Hall, Oregon State University, Corvallis, OR USA
Ghulum Bakiri, Department of Computer Science, University of Bahrain, Isa Town, Bahrain

Abstract. Multiclass learning problems involve finding a definition for an unknown function f(x) whose range is a discrete set containing k > 2 values (i.e., k "classes"). The definition is acquired by studying collections of training examples of the form ⟨x_i, f(x_i)⟩. Existing approaches to multiclass learning problems include direct application of multiclass algorithms such as the decision-tree algorithms C4.5 and CART, application of binary concept learning algorithms to learn individual binary functions for each of the k classes, and application of binary concept learning algorithms with distributed output representations. This paper compares these three approaches to a new technique in which error-correcting codes are employed as a distributed output representation. We show that these output representations improve the generalization performance of both C4.5 and backpropagation on a wide range of multiclass learning tasks. We also demonstrate that this approach is robust with respect to changes in the size of the training sample, the assignment of distributed representations to particular classes, and the application of overfitting avoidance techniques such as decision-tree pruning. Finally, we show that, like the other methods, the error-correcting code technique can provide reliable class probability estimates. Taken together, these results demonstrate that error-correcting output codes provide a general-purpose method for improving the performance of inductive learning programs on multiclass problems.
1. Introduction

The task of learning from examples is to find an approximate definition for an unknown function f(x) given training examples of the form ⟨x_i, f(x_i)⟩. For cases in which f takes only the values {0, 1} (binary functions), there are many algorithms available. For example, the decision-tree methods, such as C4.5 (Quinlan, 1993) and CART (Breiman, Friedman, Olshen, & Stone, 1984), can construct trees whose leaves are labeled with binary values. Most artificial neural network algorithms, such as the perceptron algorithm (Rosenblatt, 1958) and the error backpropagation (BP) algorithm (Rumelhart, Hinton, & Williams, 1986), are best suited to learning binary functions. Theoretical studies of learning have focused almost entirely on learning binary functions (Valiant, 1984; Natarajan, 1991).

In many real-world learning tasks, however, the unknown function f often takes values from a discrete set of "classes": {c_1, ..., c_k}. For example, in medical diagnosis, the function might map a description of a patient to one of k possible diseases. In digit recognition (e.g., LeCun, Boser, Denker, Henderson, Howard, Hubbard, & Jackel, 1989), the function maps each hand-printed digit to one of k = 10 classes. Phoneme recognition systems (e.g., Waibel, Hanazawa, Hinton, Shikano, & Lang, 1989) typically classify a speech segment into one of 50 to 60 phonemes.

Decision-tree algorithms can be easily generalized to handle these "multiclass" learning tasks. Each leaf of the decision tree can be labeled with one of the k classes, and internal nodes can be selected to discriminate among these classes. We will call this the direct multiclass approach. Connectionist algorithms are more difficult to apply to multiclass problems. The standard approach is to learn k individual binary functions f_1, ..., f_k, one for each class.

© 1995 AI Access Foundation and Morgan Kaufmann Publishers. All rights reserved.
To assign a new case, x, to one of these classes, each of the f_i is evaluated on x, and x is assigned the class j of the function f_j that returns the highest activation (Nilsson, 1965). We will call this the one-per-class approach, since one binary function is learned for each class.

An alternative approach explored by some researchers is to employ a distributed output code. This approach was pioneered by Sejnowski and Rosenberg (1987) in their widely known NETtalk system. Each class is assigned a unique binary string of length n; we will refer to these strings as "codewords." Then n binary functions are learned, one for each bit position in these binary strings. During training for an example from class i, the desired outputs of these n binary functions are specified by the codeword for class i. With artificial neural networks, these n functions can be implemented by the n output units of a single network. New values of x are classified by evaluating each of the n binary functions to generate an n-bit string s. This string is then compared to each of the k codewords, and x is assigned to the class whose codeword is closest, according to some distance measure, to the generated string s.

As an example, consider Table 1, which shows a six-bit distributed code for a ten-class digit-recognition problem. Notice that each row is distinct, so that each class has a unique codeword. As in most applications of distributed output codes, the bit positions (columns) have been chosen to be meaningful. Table 2 gives the meanings for the six columns. During learning, one binary function will be learned for each column. Notice that each column is also distinct and that each binary function to be learned is a disjunction of the original classes. For example, f_vl(x) = 1 iff f(x) is 1, 4, or 5.
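The per-bit training setup described above can be sketched in a few lines of Python. This is an illustration of the general scheme, not code from the paper; the three-class codewords below are invented for the example.

```python
# Sketch: relabel one k-class training set into n binary training sets,
# one per bit position of the distributed output code.
# The codewords here are illustrative, not the paper's Table 1.
codewords = {
    "a": (0, 0, 1),
    "b": (0, 1, 0),
    "c": (1, 0, 0),
}

def binary_training_sets(examples, codewords, n_bits):
    """Split (x, class) pairs into n_bits binary training sets.

    The target for bit position b on an example of class i is
    bit b of class i's codeword.
    """
    per_bit = [[] for _ in range(n_bits)]
    for x, cls in examples:
        for bit, target in enumerate(codewords[cls]):
            per_bit[bit].append((x, target))
    return per_bit

examples = [([1.0, 2.0], "a"), ([0.5, 0.1], "c")]
sets = binary_training_sets(examples, codewords, 3)
# sets[0] is the training set for the first bit-position function.
```

Each of the resulting binary training sets would then be handed to an ordinary binary learner (a decision tree, or one output unit of a network).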
To classify a new hand-printed digit, x, the six functions f_vl, f_hl, f_dl, f_cc, f_ol, and f_or are evaluated to obtain a six-bit string. Then the distance of this string to each of the ten codewords is computed. The nearest codeword, according to Hamming distance (which counts the number of bits that differ), corresponds to class 4, so this predicts that f(x) = 4. This process of mapping the output string to the nearest codeword is identical to the decoding step for error-correcting codes (Bose & Ray-Chaudhuri, 1960; Hocquenghem, 1959). This suggests that there might be some advantage to employing error-correcting codes as a distributed representation. Indeed, the idea of employing error-correcting, distributed representations can be traced to early research in machine learning (Duda, Machanik, & Singleton, 1963).

Table 1: A distributed code for the digit recognition task. (One six-bit codeword per class; columns vl, hl, dl, cc, ol, or.)

Table 2: Meanings of the six columns for the code in Table 1.

  Column position   Abbreviation   Meaning
  1                 vl             contains vertical line
  2                 hl             contains horizontal line
  3                 dl             contains diagonal line
  4                 cc             contains closed curve
  5                 ol             contains curve open to left
  6                 or             contains curve open to right

Table 3: A 15-bit error-correcting output code for a ten-class problem. (One codeword per class; bit positions f_0 through f_14.)

Table 3 shows a 15-bit error-correcting code for the digit-recognition task. Each class is represented by a codeword drawn from an error-correcting code. As with the distributed encoding of Table 1, a separate Boolean function is learned for each bit position of the error-correcting code. To classify a new example x, each of the learned functions f_0(x), ..., f_14(x) is evaluated to produce a 15-bit string. This is then mapped to the nearest of the ten codewords. This code can correct up to three errors out of the 15 bits.
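The nearest-codeword decoding step can be sketched as follows. This is a minimal Python illustration; the five-bit codewords are made-up placeholders, not the codes of Tables 1 or 3.

```python
# Decode an n-bit output string to the class with the nearest codeword,
# using Hamming distance (number of differing bit positions).
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def decode(output_bits, codewords):
    """Return the class whose codeword is closest to output_bits."""
    return min(codewords, key=lambda c: hamming(output_bits, codewords[c]))

# Illustrative codewords for three classes.
codewords = {
    0: (0, 0, 0, 0, 0),
    1: (1, 1, 1, 0, 0),
    2: (0, 0, 1, 1, 1),
}

# One flipped bit relative to class 1's codeword still decodes to class 1.
predicted = decode((1, 0, 1, 0, 0), codewords)
```

Ties go to whichever class `min` encounters first; the paper's evaluation methods for ties are described later in Section 2.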
This error-correcting code approach suggests that we view machine learning as a kind of communications problem in which the identity of the correct output class for a new example is being "transmitted" over a channel. The channel consists of the input features, the training examples, and the learning algorithm. Because of errors introduced by the finite training sample, poor choice of input features, and flaws in the learning process, the class information is corrupted. By encoding the class in an error-correcting code and "transmitting" each bit separately (i.e., via a separate run of the learning algorithm), the system may be able to recover from the errors.

This perspective further suggests that the one-per-class and "meaningful" distributed output approaches will be inferior, because their output representations do not constitute robust error-correcting codes. A measure of the quality of an error-correcting code is the minimum Hamming distance between any pair of codewords. If the minimum Hamming distance is d, then the code can correct at least ⌊(d − 1)/2⌋ single-bit errors. This is because each single-bit error moves us one unit away from the true codeword (in Hamming distance). If we make only ⌊(d − 1)/2⌋ errors, the nearest codeword will still be the correct codeword. (The code of Table 3 has minimum Hamming distance seven, and hence it can correct errors in any three bit positions.) The Hamming distance between any two codewords in the one-per-class code is two, so the one-per-class encoding of the k output classes cannot correct any errors. The minimum Hamming distance between pairs of codewords in a "meaningful" distributed representation tends to be very low. For example, in Table 1, the Hamming distance between the codewords for classes 4 and 5 is only one. In these kinds of codes, new columns are often introduced to discriminate between only two classes.
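The minimum-distance calculation and the ⌊(d − 1)/2⌋ correction bound can be checked with a short script (illustrative Python, using a one-per-class code for k = 4 as in the discussion above):

```python
from itertools import combinations

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def min_row_distance(code):
    """Minimum Hamming distance over all pairs of codewords (rows)."""
    return min(hamming(r1, r2) for r1, r2 in combinations(code, 2))

def errors_corrected(code):
    """A code with minimum distance d corrects floor((d - 1) / 2) bit errors."""
    return (min_row_distance(code) - 1) // 2

# One-per-class code for k = 4: any two rows differ in exactly two bits,
# so d = 2 and no single-bit errors can be corrected.
one_per_class = [
    (1, 0, 0, 0),
    (0, 1, 0, 0),
    (0, 0, 1, 0),
    (0, 0, 0, 1),
]
print(min_row_distance(one_per_class), errors_corrected(one_per_class))  # 2 0
```

Applying `errors_corrected` to a code with minimum distance seven, like that of Table 3, returns 3, matching the correction capability stated above.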
Those two classes will therefore differ only in one bit position, so the Hamming distance between their output representations will be one. This is also true of the distributed representation developed by Sejnowski and Rosenberg (1987) in the NETtalk task.

In this paper, we compare the performance of the error-correcting code approach to the three existing approaches: the direct multiclass method (using decision trees), the one-per-class method, and (in the NETtalk task only) the meaningful distributed output representation approach. We show that error-correcting codes produce uniformly better generalization performance across a variety of multiclass domains for both the C4.5 decision-tree learning algorithm and the backpropagation neural network learning algorithm. We then report a series of experiments designed to assess the robustness of the error-correcting code approach to various changes in the learning task: length of the code, size of the training set, assignment of codewords to classes, and decision-tree pruning. Finally, we show that the error-correcting code approach can produce reliable class probability estimates. The paper concludes with a discussion of the open questions raised by these results. Chief among these questions is the issue of why the errors being made in the different bit positions of the output are somewhat independent of one another. Without this independence, the error-correcting output code method would fail. We address this question for the case of decision-tree algorithms in a companion paper (Kong & Dietterich, 1995).

Table 4: Data sets employed in the study, giving the number of features, classes, training examples, and test examples for each data set: glass (10-fold xval), vowel, POS (10-fold xval), soybean, audiologyS, ISOLET (1,559 test examples), letter (4,000 test examples), and NETtalk (1000 words = 7,229 letters for training; 1000 words = 7,242 letters for testing).
2. Methods

This section describes the data sets and learning algorithms employed in this study. It also discusses the issues involved in the design of error-correcting codes and describes four algorithms for code design. The section concludes with a brief description of the methods applied to make classification decisions and evaluate performance on independent test sets.

2.1 Data Sets

Table 4 summarizes the data sets employed in the study. The glass, vowel, soybean, audiologyS, ISOLET, letter, and NETtalk data sets are available from the Irvine Repository of machine learning databases (Murphy & Aha, 1994).[1] The POS (part of speech) data set was provided by C. Cardie (personal communication); an earlier version of the data set was described by Cardie (1993). We did not use the entire NETtalk data set, which consists of a dictionary of 20,003 words and their pronunciations. Instead, to make the experiments feasible, we chose a training set of 1000 words and a disjoint test set of 1000 words at random from the NETtalk dictionary. In this paper, we focus on the percentage of letters pronounced correctly (rather than whole words). To pronounce a letter, both the phoneme and stress of the letter must be determined. Although there are 54 × 6 syntactically possible combinations of phonemes and stresses, only 140 of these appear in the training and test sets we selected.

[1] The repository refers to the soybean data set as "soybean-large," the audiologyS data set as "audiology.standardized," and the letter data set as "letter-recognition."

2.2 Learning Algorithms

We employed two general classes of learning methods: algorithms for learning decision trees and algorithms for learning feed-forward networks of sigmoidal units (artificial neural networks). For decision trees, we performed all of our experiments using C4.5, Release 1, which is an older (but substantially identical) version of the program described in Quinlan (1993).
We have made several changes to C4.5 to support distributed output representations, but these have not affected the tree-growing part of the algorithm. For pruning, a fixed confidence factor was used. C4.5 contains a facility for creating "soft thresholds" for continuous features. We found experimentally that this improved the quality of the class probability estimates produced by the algorithm in the glass, vowel, and ISOLET domains, so the results reported for those domains were computed using soft thresholds.

For neural networks, we employed two implementations. In most domains, we used the extremely fast backpropagation implementation provided by the CNAPS neurocomputer (Adaptive Solutions, 1992). This performs simple gradient descent with a fixed learning rate. The gradient is updated after presenting each training example; no momentum term was employed. A potential limitation of the CNAPS is that inputs are only represented to eight bits of accuracy, and weights are only represented to 16 bits of accuracy. Weight-update arithmetic does not round, but instead performs jamming (i.e., forcing the lowest-order bit to 1 when low-order bits are lost due to shifting or multiplication). On the speech recognition, letter recognition, and vowel data sets, we employed the OPT system distributed by Oregon Graduate Institute (Barnard & Cole, 1989). This implements the conjugate gradient algorithm and updates the gradient after each complete pass through the training examples (known as per-epoch updating). No learning rate is required for this approach.

Both the CNAPS and OPT attempt to minimize the squared error between the computed and desired outputs of the network. Many researchers have employed other error measures, particularly cross-entropy (Hinton, 1989) and classification figure-of-merit (CFM; Hampshire II & Waibel, 1990). Many researchers also advocate using a softmax normalizing layer at the outputs of the network (Bridle, 1990).
While each of these configurations has good theoretical support, Richard and Lippmann (1991) report that squared error works just as well as these other measures in producing accurate posterior probability estimates. Furthermore, cross-entropy and CFM tend to overfit more easily than squared error (Lippmann, personal communication; Weigend, 1993). We chose to minimize squared error because this is what the CNAPS and OPT systems implement.

With either neural network algorithm, several parameters must be chosen by the user. For the CNAPS, we must select the learning rate, the initial random seed, the number of hidden units, and the stopping criteria. We selected these to optimize performance on a validation set, following the methodology of Lang, Hinton, and Waibel (1990). The training set is subdivided into a subtraining set and a validation set. While training on the subtraining set, we observed generalization performance on the validation set to determine the optimal settings of learning rate and network size and the best point at which to stop training. The training-set mean squared error at that stopping point is computed, and training is then performed on the entire training set using the chosen parameters, stopping at the indicated mean squared error. Finally, we measure network performance on the test set.

For most of the data sets, this procedure worked very well. However, for the letter recognition data set, it was clearly choosing poor stopping points for the full training set. To overcome this problem, we employed a slightly different procedure to determine the stopping epoch. We trained on a series of progressively larger training sets (all of which were subsets of the final training set). Using a validation set, we determined the best stopping epoch on each of these training sets. We then extrapolated from these training sets to predict the best stopping epoch on the full training set.
For the glass and POS data sets, we employed ten-fold cross-validation to assess generalization performance. We chose training parameters based on only one "fold" of the ten-fold cross-validation. This creates some test-set contamination, since examples in the validation-set data of one fold are in the test-set data of other folds. However, we found that there was little or no overfitting, so the validation set had little effect on the choice of parameters or stopping points. The other data sets all come with designated test sets, which we employed to measure generalization performance.

2.3 Error-Correcting Code Design

We define an error-correcting code to be a matrix of binary values such as the matrix shown in Table 3. The length of a code is the number of columns in the code. The number of rows in the code is equal to the number of classes in the multiclass learning problem. A "codeword" is a row in the code. A good error-correcting output code for a k-class problem should satisfy two properties:

- Row separation. Each codeword should be well-separated in Hamming distance from each of the other codewords.

- Column separation. Each bit-position function f_i should be uncorrelated with the functions to be learned for the other bit positions f_j, j ≠ i. This can be achieved by insisting that the Hamming distance between column i and each of the other columns be large and that the Hamming distance between column i and the complement of each of the other columns also be large.

The power of a code to correct errors is directly related to the row separation, as discussed above. The purpose of the column separation condition is less obvious. If two columns i and j are similar or identical, then when a deterministic learning algorithm such as C4.5 is applied to learn f_i and f_j, it will make similar (correlated) mistakes.
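Both separation properties can be measured mechanically. The sketch below is illustrative Python (the 4-class, 3-bit code is invented for the example, not drawn from the paper); it computes the minimum row separation and the minimum column separation, where each column is compared against every other column and against that column's complement:

```python
from itertools import combinations

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def column(code, j):
    return tuple(row[j] for row in code)

def complement(bits):
    return tuple(1 - b for b in bits)

def separations(code):
    """Return (min row distance, min column distance) for a binary code matrix.

    Column distance takes the smaller of the distance to each other column
    and the distance to that column's complement, since a learner treats a
    bit function and its complement symmetrically.
    """
    row_sep = min(hamming(a, b) for a, b in combinations(code, 2))
    cols = [column(code, j) for j in range(len(code[0]))]
    col_sep = min(
        min(hamming(ci, cj), hamming(ci, complement(cj)))
        for ci, cj in combinations(cols, 2)
    )
    return row_sep, col_sep

# Illustrative 4-class, 3-bit code.
code = [
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]
```

A code-design procedure can use such a measure to reject candidate columns that are identical or complementary to existing columns, which is exactly the failure mode the column separation condition guards against.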
Error-correcting codes only succeed if the errors made in the individual bit positions are relatively uncorrelated, so that the number of simultaneous errors in many bit positions is small. If there are many simultaneous errors, the error-correcting code will not be able to correct them (Peterson & Weldon, 1972). The errors in columns i and j will also be highly correlated if the bits in those columns are complementary. This is because algorithms such as C4.5 and backpropagation treat a class and its complement symmetrically. C4.5 will construct identical decision trees if the 0-class and 1-class are interchanged. The maximum Hamming distance between two columns is attained when the columns are complementary.