Co-training
Co-training is a machine learning algorithm used when there are only small amounts of labeled data and large amounts of unlabeled data. One of its uses is in text mining for search engines. It was introduced by Avrim Blum and Tom Mitchell in 1998.
Algorithm design
Co-training is a semi-supervised learning technique that requires two views of the data. It assumes that each example is described using two different feature sets that provide different, complementary information about the instance. Ideally, the two views are conditionally independent (i.e., the two feature sets of each instance are conditionally independent given the class) and each view is sufficient (i.e., the class of an instance can be accurately predicted from each view alone). Co-training first learns a separate classifier for each view using any labeled examples. The most confident predictions of each classifier on the unlabeled data are then used to iteratively construct additional labeled training data.
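The iterative loop just described can be summarized in a short sketch. The Python code below is a minimal illustration under assumptions of our own, not the authors' implementation: GaussianNB stands in for any base learner exposing predict_proba, the pool size k is arbitrary, and y uses -1 to mark unlabeled examples.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def co_train(X1, X2, y, n_rounds=10, k=5):
    """Minimal co-training loop. X1 and X2 are the two views of the
    same examples; y holds class labels, with -1 marking unlabeled
    examples. Returns the two per-view classifiers."""
    y = y.copy()
    clf1, clf2 = GaussianNB(), GaussianNB()
    for _ in range(n_rounds):
        labeled = y != -1
        if labeled.all():
            break
        # Train one classifier per view on the currently labeled data.
        clf1.fit(X1[labeled], y[labeled])
        clf2.fit(X2[labeled], y[labeled])
        # Each classifier pseudo-labels the k unlabeled examples it is
        # most confident about; those labels join the training set.
        for clf, X in ((clf1, X1), (clf2, X2)):
            unlabeled = np.where(y == -1)[0]
            if len(unlabeled) == 0:
                break
            proba = clf.predict_proba(X[unlabeled])
            most_confident = unlabeled[np.argsort(proba.max(axis=1))[-k:]]
            y[most_confident] = clf.predict(X[most_confident])
    return clf1, clf2
```

Blum and Mitchell's original procedure additionally draws candidates from a smaller random pool of unlabeled examples and adds a fixed number of positive and negative labels per round; the sketch omits those refinements for brevity.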
The original co-training paper described experiments using co-training to classify web pages into "academic course home page" or not; the classifier correctly categorized 95% of 788 web pages with only 12 labeled web pages as examples. The paper has been cited over 1,000 times and received the 10-Year Best Paper Award at the 25th International Conference on Machine Learning (ICML 2008), a renowned computer science conference.
Krogel and Scheffer showed in 2004 that co-training is only beneficial if the data sets used in classification are independent. Co-training can only work if one of the classifiers correctly labels a piece of data that the other classifier previously misclassified. If both classifiers agree on all the unlabeled data, i.e., they are not independent, labeling the data does not create new information. When they applied co-training to problems in functional genomics, co-training worsened the results, as the dependence of the classifiers was greater than 60%.
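One way to make this failure mode concrete is to measure how often the two view classifiers agree on the unlabeled pool before any pseudo-labels are added. The sketch below is an illustrative diagnostic of our own, not Krogel and Scheffer's actual dependence measure:

```python
def agreement_rate(clf1, clf2, X1_unlabeled, X2_unlabeled):
    """Fraction of unlabeled examples on which the two view
    classifiers predict the same class. A rate near 1.0 means the
    classifiers are highly dependent, so pseudo-labeling adds
    little new information and co-training is unlikely to help."""
    p1 = clf1.predict(X1_unlabeled)
    p2 = clf2.predict(X2_unlabeled)
    return (p1 == p2).mean()
```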
Uses
Co-training has been used to classify web pages using the text on the page as one view and the anchor text of hyperlinks on other pages that point to the page as the other view. Simply put, the text in a hyperlink on one page can give information about the page it links to. Co-training can work on "unlabeled" text that has not already been classified or tagged, which is typical for the text appearing on web pages and in emails. According to Tom Mitchell, "The features that describe a page are the words on the page and the links that point to that page. The co-training models utilize both classifiers to determine the likelihood that a page will contain data relevant to the search criteria." The text-based classifier can thus check the judgments of the link-based classifier, and vice versa, hence the term "co-training". Mitchell claims that other search algorithms are 86% accurate, whereas co-training is 96% accurate.
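As an illustration of how such two views might be constructed (toy strings and generic TF-IDF features, not the setup from Mitchell's experiments):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy stand-ins for the two views of a web page: the words on the
# page itself, and the anchor text of hyperlinks pointing to it.
page_text = [
    "syllabus lectures homework exams office hours",
    "cheap flights hotel deals book now",
]
anchor_text = [
    "cs101 course home page",
    "best travel discounts here",
]

# Each view gets its own independently fitted feature extractor,
# yielding the two feature sets the algorithm operates on.
X1 = TfidfVectorizer().fit_transform(page_text).toarray()    # view 1
X2 = TfidfVectorizer().fit_transform(anchor_text).toarray()  # view 2
```

The dense arrays X1 and X2 could then be passed to a co-training loop such as the sketch in the Algorithm design section above.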
Co-training was used on FlipDog.com, a job search site, and by the U.S. Department of Labor for a directory of continuing and distance education. It has been used in many other applications, including statistical parsing and visual detection.