======================================================================
David W. Aha                                                March 1990
Documentation file for IB1, IB2, IB3, and IB4
Updated on 3/9/94 (references no longer current)
======================================================================

The algorithms IB1, IB2, IB3, and IB4 (previously named Proximity,
Growth, NTGrowth, and Bloom respectively) are implemented in C without
much attention to optimization.  Although the code is relatively well
documented, it will require some time to understand well enough to
modify.

The code consists of 5 source files and one file for data
specifications:

1. ibl.c: front end
2. training.c: contains the training functions
3. testing.c: contains the testing functions
4. utility.c: contains lower-level functions
5. printing.c: contains functions for outputting information
6. datastructures.h: contains the data specifications

I have included an example run, which involves the following files:

1. example-call: executes the example system call
2. trainfile: contains the training instances
3. testfile: contains the testing instances
4. outputfile: contains the output generated by the example system call
5. descriptionfile: describes which attributes are targets and which
   are predictors
6. namesfile: describes the typing of each attribute

This particular example comes from the LED-25 domain, which contains
24 binary-valued predictor attributes and one concept attribute that
can take on 1 of 10 possible values (i.e., the 10 decimal digits).
The first 7 predictor attributes are relevant; the remaining 17 are
irrelevant.  Thus, the example shows off the utility of IB4, which
can tolerate irrelevant attributes.  Lots of k-nn variants exist that
tune weight settings; as of this writing I have not seen a comparison
of them.  Hmmm.

After compiling, typing "ibl" always yields the following description
of the expected input parameters (printed after an error message, if
one applies):

> Usage: description namesfile trainfile testfile outputfile seed [options]
>
> Required Parameters:
>   descriptionfile  contains the predictee/predictor info
>   namesfile        contains the datafile's format information
>   trainfile        contains training instances
>   testfile         contains testing instances
>   outputfile       will contain the experiment's results
>   seed             is used to initialize random variable generator
>
> User Parameters: (name, default, and brief description)
>   -signif_accept    (75 confidence)  above class frequency
>   -signif_drop      (75 confidence)  below class frequency
>   -ib1              (off)  Act like IB1 rather than IB3
>   -ib2              (on)   Act like IB2 rather than IB3
>   -ib3              (off)  Act like IB3
>   -ib4              (off)  Act like IB4 rather than IB3
>   -k                (1)    Number of nearest acceptables wanted
>   -norm_none        (off)  No normalization option
>   -norm_linear      (on)   Linear normalization option
>   -norm_sd          (off)  Standard deviation normalize option
>   -missing_maxdiff  (on)   Assume maximum possible difference
>   -missing_ave      (off)  Assume average or most frequent value
>   -missing_ignore   (off)  Ignore, Normalize the sim'y results
>
> Convenience Options:
>   -testrate             (100)  How often to run on test set
>   -reportrate           (25)   How often to report on things
>   -startup              (0)    When to start testing/reporting
>   -overlap              (off)  IB4: set of binary concepts
>   -best_concept_only    (off)  IB4: if overlap, give only best
>   -probability_weights  (off)  IB4: Toggles between 2 methods
>   -printweights         (off)  IB4: always print attribute weights
>   -testlast             (off)  Test after finished?
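For concreteness, a call to the compiled program has this shape (the
seed value of 77 and the choice of options here are illustrative, not
necessarily what the included example-call uses):

  ibl descriptionfile namesfile trainfile testfile outputfile 77 -ib4 -printweights -testlast

Several of the options above control how similarity is computed
(normalization and the treatment of missing values).  For orientation
only, the following C fragment sketches the flavor of that
computation under the -norm_linear and -missing_maxdiff defaults.  It
is a minimal sketch, not a transcription of utility.c; the type
attval, the MISSING marker, and the functions attdiff and similarity
are invented for illustration:

  #include <stdio.h>
  #include <math.h>

  #define MISSING -1.0   /* hypothetical marker for a missing value */

  /* One attribute value.  This sketch assumes numeric attributes have
     already been linearly normalized into [0,1] (the -norm_linear
     default) and that nominal values are coded as small nonnegative
     integers. */
  typedef struct {
    double value;
    int is_nominal;
  } attval;

  /* Per-attribute difference.  Under -missing_maxdiff, a missing
     value is assumed maximally different: difference 1.0 on the
     normalized scale.  Nominal attributes are compared by simple
     mismatch. */
  static double attdiff(attval a, attval b)
  {
    if (a.value == MISSING || b.value == MISSING)
      return 1.0;
    if (a.is_nominal)
      return (a.value == b.value) ? 0.0 : 1.0;
    return fabs(a.value - b.value);
  }

  /* Negated Euclidean distance over the predictor attributes, so
     that larger values mean "more similar". */
  static double similarity(attval *x, attval *y, int n_predictors)
  {
    int i;
    double d, sum = 0.0;

    for (i = 0; i < n_predictors; i++) {
      d = attdiff(x[i], y[i]);
      sum += d * d;
    }
    return -sqrt(sum);
  }

  int main(void)
  {
    attval x[2] = { {0.25, 0}, {2.0, 1} };  /* numeric 0.25, nominal code 2 */
    attval y[2] = { {0.75, 0}, {2.0, 1} };  /* numeric 0.75, same nominal   */

    printf("similarity = %g\n", similarity(x, y, 2));  /* prints -0.5 */
    return 0;
  }

A stored instance maximizing this (negated-distance) similarity
supplies the predicted class; the -k option asks for the k nearest
acceptable instances instead of just one.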
Brief description of the data format:

The descriptionfile tells the algorithms which attributes to predict
and which to use as predictors.  The namesfile tells the algorithms
the number and types of the attributes used to describe the
instances.  The format for each attribute is either:

  <attribute name>: numeric
or
  <attribute name>: nominal <value-1>,<value-2>,...,<value-n>

The values in the instances are expected to be separated by commas.
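For illustration, a namesfile for a hypothetical domain with one
numeric predictor, one nominal predictor, and a two-valued concept
might read as follows (the attribute names and values are invented;
see the included namesfile for the actual LED example):

  age: numeric
  color: nominal red,green,blue
  class: nominal yes,no

A matching line in a trainfile or testfile would then be a
comma-separated instance such as:

  37.5,green,yes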
Output lines:

Te-accuracy:    test accuracy (percent correct)
Tr-accuracy:    training set accuracy (percent correct) --
                comprehensive!  This begins keeping records when the
                2nd instance is processed.
Recently:       Tr-accuracy since the last report or test
                -- all reports are printed to the console
                -- all test results are printed to the output file
Total Storage:  number of instances in the concept description
Accepteds:      number of instances in the concept description that
                are used for classification

Typical problems:

1. The message received is:

     FATAL ERROR in translate_instance
     Impossible to interpret attribute number n...

   Probable fix: update the constants in datastructures.h.  Perhaps
   the value of MAX_NUMBER_OF_VALUES is too small.  The values of the
   other constants may also need to be increased.

2. Watch out for the strict format of namesfiles.  The single space
   after the word "nominal" is important.

3. Errors often occur when the constant MAX_NUMBER_OF_ATTRIBUTES is
   set to too small a value.  In that case, the program will not run
   and will report an error message to the effect that you're asking
   it to work with too many attributes for its array sizes.  Just
   increase it to the correct size, along with the somewhat redundant
   constant MAX_NUMBER_OF_PREDICTORS.  Do the same for
   MAX_NUMBER_OF_INSTANCES as needed.

4. IB2-4 are incremental algorithms, so they'll work poorly if you
   feed them all instances of one class before all instances of the
   other classes; don't be surprised if your results then differ from
   published results!  (I usually randomly permute the training set
   before processing.)

5. Currently, I expect to be accessible at aha@aic.nrl.navy.mil for
   the foreseeable future.  Please contact me if you have any
   questions regarding this code, or if you're up for a chat on such
   algorithms.

Relevant References: I've included some of my own and a few you'd
want to have if you decided to take up this subject.  Note that this
list is probably out of date by now.  Enjoy.

Note: any subdirectories of this directory concern specific
experiments I've run with these algorithms.

======================================================================

{Aboaf,~E., Drucker,~S., \& Atkeson,~C. (1989).  Task-level robot
learning: Juggling a tennis ball more accurately.  In {\it
Proceedings of the IEEE International Conference on Robotics and
Automation}.  IEEE Press.}

{Aha,~D.~W. (1989).  {\it Incremental learning of independent,
overlapping, and graded concepts with an instance-based process
framework} (Technical Report 89-10).  Irvine, CA: University of
California, Department of Information and Computer Science.}

{Aha,~D.~W. (1989).  Incremental, instance-based learning of
independent and graded concept descriptions.  In {\it Proceedings of
the Sixth International Workshop on Machine Learning} (pp. 387--391).
Ithaca, NY: Morgan Kaufmann.}

{Aha,~D.~W. (1990).  {\it A study of instance-based learning
algorithms for supervised learning tasks: Mathematical, empirical,
and psychological evaluations} (Technical Report 90-42).  Irvine, CA:
University of California, Department of Information and Computer
Science.}

{Aha,~D.~W. (1991).  Case-based learning algorithms.  In {\it
Proceedings of the DARPA Case-Based Reasoning Workshop} (pp.
147--158).  Washington, D.C.: Morgan Kaufmann.}

{Aha,~D.~W. (1991).  Incremental constructive induction: An
instance-based approach.  In {\it Proceedings of the Eighth
International Workshop on Machine Learning} (pp. 117--121).
Evanston, IL: Morgan Kaufmann.}

{Aha,~D.~W. (1992).  Tolerating noisy, irrelevant, and novel
attributes in instance-based learning algorithms.  {\it International
Journal of Man-Machine Studies}, {\it 36}, 267--287.}

{Aha,~D.~W. (1992).  Generalizing from case studies: A case study.
In {\it Proceedings of the Ninth International Conference on Machine
Learning} (pp. 1--10).  Aberdeen, Scotland: Morgan Kaufmann.}

{Aha,~D.~W., \& Goldstone,~R.~L. (1990).  Learning attribute
relevance in context in instance-based learning algorithms.  In {\it
Proceedings of the Twelfth Annual Conference of the Cognitive Science
Society} (pp. 141--148).  Cambridge, MA: Lawrence Erlbaum.}

{Aha,~D.~W., \& Goldstone,~R.~L. (1992).  Concept learning and
flexible weighting.  In {\it Proceedings of the Fourteenth Annual
Conference of the Cognitive Science Society} (pp. 534--539).
Bloomington, IN: Lawrence Erlbaum.}

{Aha,~D.~W., \& Kibler,~D. (1989).  Noise-tolerant instance-based
learning algorithms.  In {\it Proceedings of the Eleventh
International Joint Conference on Artificial Intelligence} (pp.
794--799).  Detroit, MI: Morgan Kaufmann.}

{Aha,~D.~W., Kibler,~D., \& Albert,~M.~K. (1991).  Instance-based
learning algorithms.  {\it Machine Learning}, {\it 6}, 37--66.}

{Aha,~D.~W., \& Salzberg,~S.~L. (1993).  Learning to catch: Applying
nearest neighbor algorithms to dynamic control tasks.  In {\it
Proceedings of the Fourth International Workshop on Artificial
Intelligence and Statistics} (pp. 363--368).  Ft. Lauderdale, FL:
Unpublished.}

{Albert,~M.~K., \& Aha,~D.~W. (1991).  Analyses of instance-based
learning algorithms.  In {\it Proceedings of the Ninth National
Conference on Artificial Intelligence} (pp. 553--558).  Anaheim, CA:
AAAI Press.}

{Atkeson,~C. (1989).  Using local models to control movement.  In
{\it Proceedings of Neural Information Processing Systems}.}

{Bareiss,~R. (1989).  The experimental evaluation of a case-based
learning apprentice.  In {\it Proceedings of a Case-Based Reasoning
Workshop} (pp. 162--167).  Pensacola Beach, FL: Morgan Kaufmann.}

{Bareiss,~R. (1989).  {\it Exemplar-based knowledge acquisition.}
San Diego, CA: Academic Press.}

{Barsalou,~L.~W. (1989).  On the indistinguishability of exemplar
memory and abstraction in category representation.  In T.~K.~Srull \&
R.~S.~Wyer (Eds.), {\it Advances in social cognition}.  Hillsdale,
NJ: Lawrence Erlbaum.}

{Bradshaw,~G. (1987).  Learning about speech sounds: The NEXUS
project.  In {\it Proceedings of the Fourth International Workshop on
Machine Learning} (pp. 1--11).  Irvine, CA: Morgan Kaufmann.}

{Branting,~L.~K. (1989).  Integrating generalizations with
exemplar-based reasoning.  In {\it Proceedings of the Eleventh Annual
Conference of the Cognitive Science Society} (pp. 139--146).  Ann
Arbor, MI: Lawrence Erlbaum.}  (See Karl's terrific dissertation
also.)

{Breiman,~L., Friedman,~J.~H., Olshen,~R.~A., \& Stone,~C.~J. (1984).
{\it Classification and regression trees.}  Belmont, CA: Wadsworth
International Group.}

{Brooks,~L. (1978).  Nonanalytic concept formation and memory for
instances.  In E.~Rosch \& B.~B.~Lloyd (Eds.), {\it Cognition and
categorization}.  Hillsdale, NJ: Lawrence Erlbaum.}

{Busemeyer,~J.~R., Dewey,~G.~I., \& Medin,~D.~L. (1984).
Evaluation of exemplar-based generalization and the abstraction of
categorical information.  {\it Journal of Experimental Psychology:
Learning, Memory, and Cognition}, {\it 10}, 638--648.}

{Cain,~T., Pazzani,~M.~J., \& Silverstein,~G. (1991).  Using domain
knowledge to influence similarity judgement.  In {\it Proceedings of
the Case-Based Reasoning Workshop} (pp. 191--202).  Washington, DC:
Morgan Kaufmann.}

{Callan,~J.~P., Fawcett,~T.~E., \& Rissland,~E.~L. (1991).  CABOT: An
adaptive approach to case-based search.  In {\it Proceedings of the
Twelfth International Joint Conference on Artificial Intelligence}
(pp. 803--808).  Sydney, Australia: Morgan Kaufmann.}

{Cardie,~C. (1993).  Using decision trees to improve case-based
learning.  To appear in {\it Proceedings of the Tenth International
Conference on Machine Learning}.  Amherst, MA: Morgan Kaufmann.}

{Clark,~P.~E. (1988).  A comparison of exemplar-based and rule-based
concept representations.  In {\it Proceedings of an International
Workshop on Machine Learning and Meta Reasoning Logics} (pp. 69--82).
Sesimbra, Portugal: Unpublished.}

{Clark,~P.~E. (1989).  {\it Exemplar-based reasoning in geological
prospect appraisal} (Technical Report 89-034).  Glasgow, Scotland:
University of Strathclyde, Turing Institute.}

{Connell,~M.~E., \& Utgoff,~P.~E. (1987).  Learning to control a
dynamic physical system.  In {\it Proceedings of the Sixth National
Conference on Artificial Intelligence} (pp. 456--460).  Seattle, WA:
Morgan Kaufmann.}

{Cost,~S., \& Salzberg,~S. (1990).  {\it A weighted nearest neighbor
algorithm for learning with symbolic features} (Technical Report
JHU-90/11).  Baltimore, MD: The Johns Hopkins University, Department
of Computer Science.}  - More recently published in MLj.

{Cover,~T.~M. (1968).  Estimation by the nearest neighbor rule.  {\it
Institute of Electrical and Electronics Engineers Transactions on
Information Theory}, {\it 14}, 50--55.}

{Cover,~T.~M., \& Hart,~P.~E. (1967).  Nearest neighbor pattern
classification.  {\it Institute of Electrical and Electronics
Engineers Transactions on Information Theory}, {\it 13}, 21--27.}

If anything, get this:

{Dasarathy,~B.~V. (Ed.). (1991).  {\it Nearest neighbor (NN) norms:
NN pattern classification techniques.}  Los Alamitos, CA: IEEE
Computer Society Press.}

{Devijver,~P.~A. (1986).  On the editing rate of the Multiedit
algorithm.  {\it Pattern Recognition Letters}, {\it 4}, 9--12.}

{Duda,~R.~O., \& Hart,~P.~E. (1973).  {\it Pattern classification and
scene analysis.}  New York, NY: Wiley.}

{Elliot,~T., \& Scott,~P.~D. (1991).  Instance-based and
generalization-based learning procedures applied to solving
integration problems.  In {\it Proceedings of the Eighth Conference
of the Society for the Study of Artificial Intelligence} (pp.
256--265).  Leeds, England: Springer-Verlag.}

{Fisher,~D.~H. (1989).  Noise-tolerant concept clustering.  In {\it
Proceedings of the Eleventh International Joint Conference on
Artificial Intelligence} (pp. 825--830).  Detroit, MI: Morgan
Kaufmann.}

{Fix,~E., \& Hodges,~J.~L., Jr. (1951).  {\it Discriminatory
analysis, nonparametric discrimination, consistency properties}
(Technical Report 4).  Randolph Field, TX: United States Air Force,
School of Aviation Medicine.}

{Fix,~E., \& Hodges,~J.~L., Jr. (1952).  {\it Discriminatory
analysis: Small sample performance} (Technical Report 11).  Randolph
Field, TX: United States Air Force, School of Aviation Medicine.}

{Fogarty,~T.~C. (in press).  First nearest neighbor classification on
Frey and Slate's letter recognition problem.
To appear in {\it Machine Learning}.}

{Hart,~P.~E. (1968).  The condensed nearest neighbor rule.  {\it
Institute of Electrical and Electronics Engineers Transactions on
Information Theory}, {\it 14}, 515--516.}

{Hellman,~M.~E. (1970).  The nearest neighbor classification rule
with a reject option.  {\it Institute of Electrical and Electronics
Engineers Transactions on Systems, Science, and Cybernetics}, {\it
6}, 179--185.}

{Hintzman,~D.~L. (1984).  MINERVA II: A simulation model of human
memory.  {\it Behavior Research Methods, Instruments, \& Computers},
{\it 16}, 96--101.}

{Hintzman,~D.~L. (1986).  ``Schema abstraction'' in a multiple-trace
memory model.  {\it Psychological Review}, {\it 93}, 411--428.}

{Hintzman,~D.~L. (1988).  Judgments of frequency and recognition
memory in a multiple-trace memory model.  {\it Psychological Review},
{\it 95}, 528--551.}

{Hintzman,~D.~L., \& Ludlam,~G. (1984).  Differential forgetting of
prototypes and old instances: Simulation by an exemplar-based
classification model.  {\it Memory \& Cognition}, {\it 8}, 378--382.}

{Homa,~D., Sterling,~S., \& Trepel,~L. (1981).  Limitations of
exemplar-based generalization and the abstraction of categorical
information.  {\it Journal of Experimental Psychology: Human Learning
and Memory}, {\it 7}, 418--439.}

{Hurwitz,~J.~B. (1991).  Learning rule-based and probabilistic
categories in a hidden pattern-unit network model.  Unpublished
manuscript.  Harvard University, Department of Psychology, Cambridge,
MA.}

{Jabbour,~K., Riveros,~J.~F.~V., Landsbergen,~D., \& Meyer,~W.
(1987).  ALFA: Automated load forecasting assistant.  In {\it
Proceedings of the 1987 IEEE Power Engineering Society Summer
Meeting}.  San Francisco, CA.}

{Kelly,~J.~D., Jr., \& Davis,~L. (1991).  A hybrid genetic algorithm
for classification.  In {\it Proceedings of the Twelfth International
Joint Conference on Artificial Intelligence} (pp. 645--650).  Sydney,
Australia: Morgan Kaufmann.}

{Kibler,~D., \& Aha,~D.~W. (1987).  Learning representative exemplars
of concepts: An initial case study.  In {\it Proceedings of the
Fourth International Workshop on Machine Learning} (pp. 24--30).
Irvine, CA: Morgan Kaufmann.  Also in J.~W.~Shavlik \&
T.~G.~Dietterich (Eds.), {\it Readings in machine learning}.  San
Mateo, CA: Morgan Kaufmann.}

{Kibler,~D., Aha,~D.~W., \& Albert,~M. (1989).  Instance-based
prediction of real-valued attributes.  {\it Computational
Intelligence}, {\it 5}, 51--57.}

{Kibler,~D., \& Aha,~D.~W. (1988).  Comparing instance-averaging with
instance-filtering learning algorithms.  In {\it Proceedings of the
Third European Working Session on Learning} (pp. 63--80).  Glasgow,
Scotland: Pitman.}

{Kibler,~D., \& Aha,~D.~W. (1989).  Comparing instance-saving with
instance-averaging learning algorithms.  In D.~P.~Benjamin (Ed.),
{\it Change of representation and inductive bias}.  Boston, MA:
Kluwer.}

{Koh,~K., \& Meyer,~D.~E. (1989).  Induction of continuous
stimulus-response relations.  In {\it Proceedings of the Eleventh
Annual Conference of the Cognitive Science Society} (pp. 233--240).
Ann Arbor, MI: Lawrence Erlbaum.}

{Kruschke,~J.~K. (1990).  {\it ALCOVE: A connectionist model of
category learning} (Technical Report 19).  Bloomington, IN: Indiana
University, Department of Psychology.}

{Kruschke,~J.~K. (1991).  Dimensional attention learning in models of
human categorization.  In {\it Proceedings of the Thirteenth Annual
Conference of the Cognitive Science Society} (pp. 281--286).
Chicago, IL: Lawrence Erlbaum.}

{Kruschke,~J.~K. (1991).
{\it ALCOVE: An exemplar-based connectionist model of category
learning} (Technical Report 47).  Bloomington, IN: Indiana
University, Department of Psychology.}

{Kruschke,~J.~K. (1992).  ALCOVE: An exemplar-based connectionist
model of category learning.  {\it Psychological Review}, {\it 99},
22--44.}

{Kurtzberg,~J.~M. (1987).  Feature analysis for symbol recognition by
elastic matching.  {\it International Business Machines Journal of
Research and Development}, {\it 31}, 91--95.}

{Lehnert,~W.~G. (1987).  Case-based problem solving with a large
knowledge base of learned cases.  In {\it Proceedings of the Sixth
National Conference on Artificial Intelligence} (pp. 301--306).
Seattle, WA: Morgan Kaufmann.}

{Logan,~G.~D. (1988).  Toward an instance theory of automatization.
{\it Psychological Review}, {\it 95}, 492--527.}

{Medin,~D.~L., Altom,~M.~W., \& Murphy,~T.~D. (1984).  Given versus
induced category representations: Use of prototype and exemplar
information in classification.  {\it Journal of Experimental
Psychology: Learning, Memory, and Cognition}, {\it 10}, 333--352.}

{Medin,~D.~L., Altom,~M.~W., Edelson,~S.~M., \& Freko,~D. (1982).
Correlated symptoms and simulated medical classification.  {\it
Journal of Experimental Psychology: Learning, Memory, and Cognition},
{\it 8}, 37--50.}

{Medin,~D.~L., Dewey,~G.~I., \& Murphy,~T.~D. (1983).  Relationships
between item and category learning: Evidence that abstraction is not
automatic.  {\it Journal of Experimental Psychology: Learning,
Memory, and Cognition}, {\it 9}, 607--625.}

{Medin,~D.~L., \& Edelson,~S.~M. (1988).  Problem structure and the
use of base-rate information from experience.  {\it Journal of
Experimental Psychology: General}, {\it 117}, 68--85.}

{Medin,~D.~L., \& Schaffer,~M.~M. (1978).  Context theory of
classification learning.  {\it Psychological Review}, {\it 85},
207--238.}

{Medin,~D.~L., \& Schwanenflugel,~P.~J. (1981).  Linear separability
in classification learning.  {\it Journal of Experimental Psychology:
Human Learning and Memory}, {\it 7}, 355--368.}

{Medin,~D.~L., \& Shoben,~E.~J. (1988).  Context and structure in
conceptual combination.  {\it Cognitive Psychology}, {\it 20},
158--190.}

{Moore,~A.~W. (1990).  Acquisition of dynamic control knowledge for a
robotic manipulator.  In {\it Proceedings of the Seventh
International Conference on Machine Learning} (pp. 244--252).
Austin, TX: Morgan Kaufmann.}  - Andrew has an upcoming paper in MLj
with Chris Atkeson; excellent paper.

{Nosofsky,~R.~M. (1984).  Choice, similarity, and the context theory
of classification.  {\it Journal of Experimental Psychology:
Learning, Memory, and Cognition}, {\it 10}, 104--114.}

{Nosofsky,~R.~M. (1986).  Attention, similarity, and the
identification-categorization relationship.  {\it Journal of
Experimental Psychology: General}, {\it 115}, 39--57.}

{Nosofsky,~R.~M. (1987).  Attention and learning processes in the
identification and categorization of integral stimuli.  {\it Journal
of Experimental Psychology: Learning, Memory, and Cognition}, {\it
13}, 87--108.}

{Nosofsky,~R.~M., Clark,~S.~E., \& Shin,~H.~J. (1989).  Rules and
exemplars in categorization, identification, and recognition.  {\it
Journal of Experimental Psychology: Learning, Memory, and Cognition},
{\it 15}, 282--304.}

{Nosofsky,~R.~M. (1989).  Further tests of an exemplar-similarity
approach to relating identification and categorization.  {\it
Perception \& Psychophysics}, {\it 45}, 279--290.}

{Ortony,~A., Vondruska,~R.~J., Foss,~M.~A., \& Jones,~L.~E. (1985).
Salience, similes, and asymmetry of similarity.
{\it Journal of Memory and Language}, {\it 24}, 569--594.}

{Penrod,~C.~S., \& Wagner,~T.~J. (1977).  Another look at the edited
nearest neighbor rule.  {\it Institute of Electrical and Electronics
Engineers Transactions on Systems, Man and Cybernetics}, {\it 7},
92--94.}

{Porter,~B.~W. (1989).  Similarity assessment: Computation vs.
representation.  In {\it Proceedings of a Workshop on Case-Based
Reasoning} (pp. 82--84).  Pensacola Beach, FL: Morgan Kaufmann.}

{Porter,~B.~W., Bareiss,~R., \& Holte,~R.~C. (1990).  Knowledge
acquisition and heuristic classification in weak-theory domains.
{\it Artificial Intelligence}, {\it 45}, 229--263.}

{Quinlan,~J.~R. (1993).  Combining instance-based learning and
model-based learning.  To appear in {\it Proceedings of the Tenth
International Conference on Machine Learning}.  Amherst, MA: Morgan
Kaufmann.}

{Reed,~S.~K. (1972).  Pattern recognition and categorization.  {\it
Cognitive Psychology}, {\it 3}, 382--407.}

{Rosch,~E. (1978).  Principles of categorization.  In E.~Rosch \&
B.~B.~Lloyd (Eds.), {\it Cognition and categorization.}  Hillsdale,
NJ: Lawrence Erlbaum.}

{Rosch,~E., \& Mervis,~C.~B. (1975).  Family resemblances: Studies in
the internal structure of categories.  {\it Cognitive Psychology},
{\it 7}, 573--605.}

{Salzberg,~S.~L. (1990).  {\it Learning with nested generalized
exemplars.}  Boston, MA: Kluwer.}

{Salzberg,~S.~L. (1991).  A nearest hyperrectangle learning method.
{\it Machine Learning}, {\it 6}, 251--276.}

{Samuel,~A.~L. (1959).  Some studies in machine learning using the
game of checkers.  {\it IBM Journal of Research and Development},
{\it 3}, 211--229.}

{Sebestyen,~G.~S. (1962).  {\it Decision-making processes in pattern
recognition}.  New York, NY: Macmillan.}  Wonderful, but out of
print.  Contains the first edited k-nn algorithm I could find.

{Seidel,~R. (1987).  On the number of faces in higher-dimensional
Voronoi diagrams.  In {\it Proceedings of the Third Annual Symposium
on Computational Geometry} (pp. 181--185).  Waterloo, Ontario:
Association for Computing Machinery.}

{Shepard,~R.~N. (1987).  Toward a universal law of generalization for
psychological science.  {\it Science}, {\it 237}, 1317--1323.}

{Smith,~E.~E., \& Medin,~D.~L. (1981).  {\it Categories and
concepts}.  Cambridge, MA: Harvard University Press.}  A great place
to start from.

{Stanfill,~C. (1987).  Memory-based reasoning applied to English
pronunciation.  In {\it Proceedings of the Sixth National Conference
on Artificial Intelligence} (pp. 577--581).  Seattle, WA: Morgan
Kaufmann.}

{Stanfill,~C., \& Waltz,~D. (1986).  Toward memory-based reasoning.
{\it Communications of the Association for Computing Machinery},
{\it 29}, 1213--1228.}

{Stanfill,~C. (1988).  Learning to read: A memory-based model.  In
{\it Proceedings of a Case-Based Reasoning Workshop} (pp. 402--413).
Clearwater Beach, FL: Morgan Kaufmann.}

{Stanfill,~C., \& Waltz,~D. (1988).  The memory-based reasoning
paradigm.  In {\it Proceedings of a Case-Based Reasoning Workshop}
(pp. 414--424).  Clearwater Beach, FL: Morgan Kaufmann.}

{Tan,~M., \& Schlimmer,~J.~C. (1990).  Two case studies in
cost-sensitive concept acquisition.  In {\it Proceedings of the
Eighth National Conference on Artificial Intelligence} (pp.
854--860).  Boston, MA: American Association for Artificial
Intelligence Press.}

{Tomek,~I. (1976).  A generalization of the $k$-NN rule.  {\it
Institute of Electrical and Electronics Engineers Transactions on
Systems, Man, and Cybernetics}, {\it 6}, 121--126.}

{Tomek,~I. (1976).  An experiment with the edited nearest neighbor
rule.
{\it Institute of Electrical and Electronics Engineers Transactions
on Systems, Man, and Cybernetics}, {\it 6}, 448--452.}

{Tversky,~A. (1977).  Features of similarity.  {\it Psychological
Review}, {\it 84}, 327--352.}

{Tversky,~A., \& Hutchinson,~J.~W. (1986).  Nearest neighbor analysis
of psychological spaces.  {\it Psychological Review}, {\it 93},
3--22.}

{Volper,~D.~J., \& Hampson,~S.~E. (1987).  Learning and using
specific instances.  {\it Biological Cybernetics}, {\it 57}, 57--71.}

!!!
{Voronoi,~G. (1908).  Nouvelles applications des param\`{e}tres
continus \`{a} la th\'{e}orie des formes quadratiques, deuxi\`{e}me
m\'{e}moire: recherches sur les parall\'{e}llo\`{e}dres primitifs.
{\it Journal f\"{u}r die reine und angewandte Mathematik}, {\it 134},
198--287.}

{Waltz,~D. (1987).  Applications of the connection machine.  {\it
Computer}, {\it 20}, 85--97.}

{Waltz,~D. (1990).  Massively parallel AI.  In {\it Proceedings of
the Eighth National Conference on Artificial Intelligence} (pp.
1117--1122).  Boston, MA: AAAI Press.}

{Widmer,~G. (1993).  Plausible explanations and instance-based
learning in mixed symbolic/numeric domains.  In {\it Proceedings of
the Second International Workshop on Multi-Strategy Learning}.
Harper's Ferry, West Virginia: publisher unknown.}

{Wilson,~D. (1972).  Asymptotic properties of nearest neighbor rules
using edited data.  {\it Institute of Electrical and Electronics
Engineers Transactions on Systems, Man and Cybernetics}, {\it 2},
408--421.}

{Wolpert,~D.~H. (1990).  Constructing a generalizer superior to
NETtalk via a mathematical theory of generalization.  {\it Neural
Networks}, {\it 3}, 445--452.}

{Zhang,~J. (1990).  A method that combines inductive learning with
exemplar-based learning.  In {\it Proceedings of Tools for Artificial
Intelligence} (pp. 31--37).  Herndon, VA: IEEE Computer Society
Press.}

{Zhang,~J. (1992).  Selecting typical instances in instance-based
learning.  In {\it Proceedings of the Ninth International Conference
on Machine Learning} (pp. 470--479).  Aberdeen, Scotland: Morgan
Kaufmann.}