Can automated text classification improve content analysis of software project data?

John Noll, Dominik Seichter, Sarah Beecham

Research output: Contribution to journal › Conference article › peer-review

Abstract

Content analysis is a useful approach for analyzing unstructured software project data, but it is labor-intensive and slow. Can automated text classification (using supervised machine learning) be used to reduce the labor or improve the speed of content analysis? We conducted a case study involving data from a previous study that employed content analysis of an open source software project. We used a human-coded data set of 3256 samples to create training sets ranging in size from 100 to 3000 samples, and trained an 'ensemble' text classifier to assign one of five categories to a test set of samples. The results show that the automated classifier could be trained to recognize categories, but much less accurately than the human coders. In particular, both precision and recall for low-frequency categories were very low (less than 20%). Nevertheless, we hypothesize that automated classifiers could be used to filter a sample to identify common categories before human researchers examine the remainder for the more difficult categories.
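The abstract does not name the toolchain or the composition of the 'ensemble' classifier. As a rough illustration of the experimental setup it describes, the sketch below assumes Python with scikit-learn, TF-IDF features, and a hypothetical three-model majority-vote ensemble; it trains on increasing subset sizes and reports per-category precision and recall, mirroring the 100-to-3000-sample sweep.

# Minimal sketch of the experiment the abstract describes. The paper does
# not specify its implementation; scikit-learn, TF-IDF features, and this
# particular voting ensemble are assumptions for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.ensemble import VotingClassifier
from sklearn.pipeline import make_pipeline
from sklearn.metrics import classification_report

def evaluate(train_texts, train_labels, test_texts, test_labels, sizes):
    """Train the ensemble on increasing subsets of the human-coded data
    and print per-category precision/recall on a held-out test set."""
    for n in sizes:  # e.g. sizes = range(100, 3001, 100)
        clf = make_pipeline(
            TfidfVectorizer(),
            VotingClassifier([
                ("nb", MultinomialNB()),
                ("lr", LogisticRegression(max_iter=1000)),
                ("svm", LinearSVC()),
            ]),  # hard (majority) vote over three base classifiers
        )
        clf.fit(train_texts[:n], train_labels[:n])
        predictions = clf.predict(test_texts)
        print(f"--- training size {n} ---")
        # zero_division=0 silences warnings when a rare category is never
        # predicted, which is exactly the low-frequency failure mode the
        # study reports (precision/recall under 20%).
        print(classification_report(test_labels, predictions, zero_division=0))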

Original language: English
Article number: 6681372
Pages (from-to): 300-303
Number of pages: 4
Journal: International Symposium on Empirical Software Engineering and Measurement
Publication status: Published - 2013
Event: 2013 ACM/IEEE International Symposium on Empirical Software Engineering and Measurement, ESEM 2013 - Baltimore, MD, United States
Duration: 10 Oct 2013 - 11 Oct 2013

Keywords

  • Content Analysis
  • Machine Learning
  • Open Source Software
  • Qualitative Research
  • Software Engineering
  • Text Classification
