Honey's content analysis technique
This technique was developed in a consultancy environment. Unlike other content analyses, which simply categorise the meaning of a set of constructs, Honey's technique utilises some of the ratings available in the repertory grids from which the pool of constructs being categorised is taken. In this way, it manages to aggregate the meanings shared by a group of people while reflecting some of the individual provenance of their private meanings.
Each interviewee is asked to rate all the elements on a single supplied construct, as well as on a set of his or her own constructs elicited in the usual way. The supplied construct relates directly to the topic of the grid, and to the purpose of the overall grid investigation. So, in a study of how a group of local authority employees construe effective supervision by their direct managers, the construct "Overall, a more effective boss versus Overall, a less effective boss" might be used (here, the elements might be a set of 8 to 10 "managers I have known"). In a course in which undergraduates are asked to reflect on the ways in which they learn, the supplied construct might be "Overall, more conducive to learning versus Overall, less conducive to learning" (the elements might be 8 to 10 occasions in the individual interviewee's life from which s/he really, but really, learnt something important).
The content analysis proceeds on two assumptions:
(a) that elicited constructs express personal ways by which each respondent understands the supplied construct; they are personal aspects of that construct; and
(b) that this personal meaning can be expressed as a matter of degree: some elicited constructs lie closer to the personal meaning of the supplied construct than others.
For each interviewee, the sum of differences between the ratings of the elements on each elicited construct and the ratings of the elements on the supplied construct is computed. (As with any pairwise comparison of ratings on constructs, the directionality of constructs has to be taken into account by checking for reversals.) A simple transformation of these sums of differences into percentage matching scores caters for the situation in which different interviewees may have been working with different numbers of elements. Finally, the constructs of all interviewees are pooled, and the pool is categorised using conventional content analysis techniques (see e.g. Neuendorf, 2002).
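The sum-of-differences and percentage-match computation can be sketched as follows. This is a minimal illustration, not Honey's published formula: the exact normalisation is an assumption, and the reversal check here simply rescores the elicited construct with its poles swapped and keeps whichever orientation matches better.

```python
def percent_match(elicited, supplied, min_rating=1, max_rating=5):
    """Percentage match between an elicited construct's element ratings and
    the supplied construct's ratings, with a reversal check.

    Illustrative normalisation (an assumption, not Honey's exact formula):
    100% = identical ratings, 0% = maximally different ratings.
    """
    assert len(elicited) == len(supplied)
    max_sd = (max_rating - min_rating) * len(elicited)  # largest possible sum of differences
    sd = sum(abs(e - s) for e, s in zip(elicited, supplied))
    # Reversal check: rescore with the elicited construct's poles swapped.
    reversed_ratings = [max_rating + min_rating - e for e in elicited]
    sd_rev = sum(abs(e - s) for e, s in zip(reversed_ratings, supplied))
    sd = min(sd, sd_rev)  # keep the better-matching orientation
    return 100.0 * (1 - sd / max_sd)

# Eight elements rated 1..5 on one elicited construct and on the supplied construct:
elicited = [1, 2, 5, 4, 3, 1, 2, 5]
supplied = [2, 2, 4, 4, 3, 1, 1, 5]
print(round(percent_match(elicited, supplied), 1))  # → 90.6
```

Because the sum of differences is divided by the maximum possible for that grid, interviewees who rated different numbers of elements still yield comparable percentages.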
The result of this procedure is rather powerful: every construct has
attached to it a percentage matching score, which indicates its
personal relevance to the topic of the study as defined by each
individual interviewee's own definition of "relevance" (the match
is between the ratings on each construct and the individual's
ratings on the supplied construct, after all).
This becomes particularly valuable at the final step of the content analysis, which usually involves the investigator choosing a set of constructs to exemplify each category identified in the content analysis. By choosing those constructs which share a category's meaning and which have the highest percentage matching scores, one automatically chooses constructs on which there is consensus across the group of interviewees and which represent each interviewee's own understanding of the topic of the grid.
Honey recognises that different interviewees have different construct similarity metrics: a match of 82% between a given elicited construct and the supplied construct may be unremarkable in an interviewee whose construct structure for the topic is somewhat "obsessive", i.e. implicationally tight, while representing a very high degree of agreement in another interviewee whose construct structure is relatively loose, with matching scores all of the order of 60% to 70%. He therefore advocates that, when selecting sample constructs in the last stage of the content analysis, as well as choosing ones with similar meaning and high percentage matching scores, one should pay particular attention to constructs whose matching scores are especially high for the individual who contributed them.
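One way to operationalise "high for the given individual" is to judge each score against that interviewee's own distribution of scores rather than against a single group-wide cut-off. The criterion below (a standard-deviation threshold) is an illustrative choice, not Honey's own formula; the data are invented.

```python
from statistics import mean, stdev

def standout_constructs(scores_by_person, threshold=1.0):
    """Flag constructs whose % match is unusually high *for that interviewee*.

    scores_by_person maps interviewee -> {construct_label: percent_match}.
    Illustrative criterion (an assumption): a construct stands out if its
    score exceeds the interviewee's own mean by `threshold` sample standard
    deviations, so tight and loose construers are judged on their own scale.
    """
    flagged = {}
    for person, scores in scores_by_person.items():
        values = list(scores.values())
        if len(values) < 2:
            continue  # cannot estimate spread from a single score
        m, sd = mean(values), stdev(values)
        flagged[person] = [
            label for label, score in scores.items()
            if sd > 0 and (score - m) / sd >= threshold
        ]
    return flagged

# Hypothetical scores: a tight construer (all matches high) and a loose one.
data = {
    "tight construer": {"c1": 82.0, "c2": 85.0, "c3": 84.0, "c4": 96.0},
    "loose construer": {"c1": 60.0, "c2": 65.0, "c3": 62.0, "c4": 88.0},
}
print(standout_constructs(data))  # c4 stands out for both, on each person's own scale
```

Note that the loose construer's 88% and the tight construer's 96% are both flagged: each is exceptional relative to that individual's other matching scores.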
As with any content analysis, which represents a process of sociality by which the investigator construes the interviewees' construing, it is very advisable to carry out inter-rater reliability checks between at least two independent investigators on the first content analysis, followed by a mutually agreed redefinition of the categories and a repetition of the content analysis using these agreed categories, before terminating the analysis. Reliability figures of at least 0.90 for pooled construct sets of 200-400 constructs can be achieved with a little care in category definition: see e.g. Dick and Jankowicz (2001). Perreault and Leigh (1989) provide an excellent review of relevant reliability measures, in an article which deserves to be better known among psychologists interested in measuring the reliability of their category schemes. The article indicates some of the inadequacies of favourite measures such as % agreement and Cohen's Kappa, and offers a powerful alternative measure.
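Perreault and Leigh's reliability index for two coders can be sketched as follows. This is my reading of their published formula; verify it against the original article before relying on it in a study.

```python
import math

def perreault_leigh_ir(judge_a, judge_b, n_categories):
    """Perreault & Leigh (1989) reliability index I_r for two coders.

    judge_a, judge_b: parallel lists of category codes assigned to the same
    constructs. Unlike raw % agreement, I_r takes the number of available
    categories into account; the article argues it behaves better than
    Cohen's kappa for this kind of coding task.
    """
    assert len(judge_a) == len(judge_b)
    k = n_categories
    agreement = sum(a == b for a, b in zip(judge_a, judge_b)) / len(judge_a)
    if agreement < 1 / k:
        return 0.0  # agreement below the chance floor for k categories
    return math.sqrt((agreement - 1 / k) * (k / (k - 1)))

# Hypothetical data: two coders sorting 10 constructs into 4 categories, agreeing on 9.
coder_1 = ["A", "B", "A", "C", "D", "A", "B", "C", "A", "D"]
coder_2 = ["A", "B", "A", "C", "D", "A", "B", "C", "B", "D"]
print(round(perreault_leigh_ir(coder_1, coder_2, n_categories=4), 3))  # → 0.931
```

Here 90% raw agreement across four categories yields I_r ≈ 0.93, illustrating how the index rescales agreement relative to what chance alone would produce.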
References
- Dick, P. & Jankowicz, A.D. (2001). A social constructionist account of police culture and its influence on the representation and progression of female officers: a repertory grid analysis in a UK police force. Policing, 24(2), 181-199.
- Honey, P. (1979). The repertory grid in action. Industrial and Commercial Training, 11(11), 452-459.
- Neuendorf, K.A. (2002). The Content Analysis Guidebook. Thousand Oaks, CA: Sage.
- Perreault, W.D. Jr. & Leigh, L.E. (1989). Reliability of nominal data based on qualitative judgements. Journal of Marketing Research, XXVI, May, 135-148.
Devi Jankowicz