In several areas of research, data collection in the usual sense may not be feasible or may not be required. Data not collected directly to study the underlying phenomenon may nevertheless be available in documents, artifacts, public pronouncements, images, newspaper reports, television serials and the like. Messages bearing on the research questions that are covertly contained in such data have to be deciphered, coded and then subjected to some form of analysis to draw relevant conclusions. This exercise, known as Content Analysis, has great appeal to social scientists. Usually, more than one coder (also called a referee or rater) codes the data into categories, and agreement among the raters is examined using simple statistical tools. Once sufficient agreement among the raters has been established, the relevant research hypotheses are tested using statistical techniques appropriate to categorical data. The present chapter discusses several measures of concordance among raters applicable to different situations and provides their standard errors, so that observed values of the measures can be tested for significance. Several illustrations are provided to facilitate application of the techniques used in Content Analysis.
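As a simple illustration of the kind of agreement measure discussed in the chapter, Cohen's kappa for two raters corrects the observed proportion of agreement for agreement expected by chance. The sketch below is a minimal implementation; the rater names, category labels, and codings are hypothetical and serve only to show the computation.

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two raters over the same items."""
    n = len(ratings_a)
    # Observed proportion of items on which the two raters agree
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected agreement by chance, from each rater's marginal proportions
    counts_a = Counter(ratings_a)
    counts_b = Counter(ratings_b)
    p_e = sum(counts_a[c] * counts_b[c] for c in counts_a) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codings of ten items into three categories by two raters
rater1 = ["pos", "pos", "neg", "neu", "pos", "neg", "neg", "pos", "neu", "pos"]
rater2 = ["pos", "neg", "neg", "neu", "pos", "neg", "pos", "pos", "neu", "pos"]
kappa = cohens_kappa(rater1, rater2)
```

Here the observed agreement is 0.8 on ten items, the chance-expected agreement from the marginal proportions is 0.38, and kappa is about 0.68, indicating substantial but not perfect concordance.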