PERSONAL
CONSTRUCT PSYCHOLOGY AND CONTENT ANALYSIS
Bob Green
Community Forensic Mental Health Service, Brisbane, Australia
Published PCP research is
dominated by
analyses of construct and element ratings, or the use of structural
measures.
An attraction of grids is the potential to examine subjectivity through
such
‘quantitative’ analyses. In contrast, significantly less use has been
made of
approaches for analysing the content of construing. Content analysis
has the
potential to provide techniques for systematically analysing construct
content.
However, content analysis raises some fundamental questions for PCP
researchers
and practitioners. The present paper reviews the various approaches
taken by
PCP researchers to content analysis. Limitations associated with these
approaches are discussed along with recommendations for future PCP
content analyses.
Key words: Personal Construct Psychology,
content analysis.
INTRODUCTION
For many people, PCP is about repertory grids. Neimeyer (1985) reported that 96% of empirical research published between 1954 and 1981 utilised repertory grids. While that percentage may since have fallen, it is unlikely that the dominance of grids has diminished. An attraction of
grids is
the potential to examine subjectivity through ‘quantitative’ analyses.
In contrast,
significantly less use has been made of systematic approaches to
analysing the
content of construing. However, as Fransella
et al. (2004)
remind us “grids are about constructs” and
construing.
Content analysis is a set of techniques
that have the potential to assist in examining not only grid content,
but any
data generated clinically or for a range of purposes by PCP
practitioners and
others. According to Krippendorf
(2004a), “Content analysis is a research technique
for making
replicable and valid inferences from texts (or other meaningful matter)
to the
contexts of their use”. This definition contains three
fundamental aspects of
content analysis, namely: (a) the findings from a content analysis
should be
able to be replicated by others, (b) the analysis should measure what
it claims
to measure and, (c) content analysis is not limited to textual data.
There are three basic approaches to
content
analysis. The first is the frequency count of words. This approach is
probably
least useful to the analysis of a single or even multiple repertory
grids
because of the relatively small number of words in a grid. A second
approach is
to examine the co-occurrence of words. For example, the number of times
the
words “boring” and “presentation” go together. Again, this approach is
more
likely to be suited to relatively larger texts, as the likelihood of
two words
appearing together will always be less than the likelihood of either
word
appearing individually. Five hundred to 2,000 words has been suggested
as an
optimal length, because if texts are too long it is highly likely words
will
co-occur (Miller and Rierchert
2001).
The third major approach to content analysis is the coding of text units (e.g., words, sentences or paragraphs) using some form of coding scheme. While it is this last approach that will be the primary focus of this paper, and which has been most often used in PCP research, other data that have been analysed by PCP researchers will also be considered.
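The first two approaches can be sketched in a few lines of code. The sample text and window size below are illustrative assumptions only, not part of any published scheme.

```python
from collections import Counter

def word_frequencies(text):
    """Approach 1: a simple frequency count of words."""
    return Counter(text.lower().split())

def cooccurrences(text, window=4):
    """Approach 2: count pairs of words appearing within `window` words
    of each other (an assumed definition of 'going together')."""
    words = text.lower().split()
    pairs = Counter()
    for i, w in enumerate(words):
        for other in words[i + 1:i + window]:
            pairs[tuple(sorted((w, other)))] += 1
    return pairs

text = "the boring presentation was boring and the presentation was long"
freqs = word_frequencies(text)
pairs = cooccurrences(text, window=4)
print(freqs["boring"])                    # 2
print(pairs[("boring", "presentation")])  # 3
```

As the text notes, pair counts are bounded above by the individual word counts, which is why co-occurrence analysis is better suited to longer texts than to single grids.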
THE ISSUE OF INDIVIDUAL
MEANING
Content analysis has not historically
been
widely used in PCP research. One reason for this could be the concern
that
constructs are not necessarily equivalent to word labels (Kelly
1955).
Shaw
(1994)
has described four possible relationships that may
exist between constructs and word labels, namely, agreement between
constructs
and word labels, different words being used for the same construct, the
same
word being used for different constructs and different words being used
for different
constructs. A second reason may be the concern that individual meaning cannot be readily categorised by another person, especially in the absence of elaboration or an understanding of context and application (Yorke, 1989).
When describing the analysis of
self-characterisations, Kelly noted, “A
literalistically minded clinician, who
does not realize that he is setting out to learn a new language, may
seriously
misinterpret what his client means, simply because he presumes that the
client
agrees with the dictionary” (Kelly, 1955). Kelly was, however, interested in how an individual was both similar to others and unique. This issue has also been discussed by Duck (1983) in terms of what Kelly meant by two individuals having similarity of construing, i.e., is similarity to be understood in terms of structure, content or the conclusions drawn about events?
ANALYSIS OF GRID DATA
Another way to describe content analysis
is
in terms of whether the categories utilised in a content analysis are
theory or
data driven (Simon and Xenos 2004).
Data driven content analysis develops the categories
from the raw data, whereas theory driven approaches categorise data in
terms of
categories developed on the basis of theory or on empirical grounds.
PCP
research has adopted both approaches. Examples of both types of content
analysis will be discussed along with respective advantages and
disadvantages
of these approaches.
Data driven approaches
Data driven content analysis has been
favoured by authors working in the business or management field (Honey, 1979; Jankowicz, 2004; Stewart
and Stewart, 1982; Sypher and
Zorn, 1988;
Wright,
2004).
The basic approach recommended by these authors is
to take individual repertory grids and cut up the grids so each
elicited
construct is on a separate sheet of paper. These constructs are then
sorted
into groups of similar constructs. It is recommended by these authors
that
another person groups the same constructs and that negotiation occur
regarding
any disagreements. In effect, this is a two-stage process, of
developing
categories from the data and then reliably allocating the constructs to
the categories.
This, however, is also an iterative process that might require several
cycles
before satisfactory reliability is obtained (Jankowicz, 2004). In
effect, data
driven approaches involve an individual construing the constructs of
others.
A primary advantage of these approaches
is
that the categories reflect the constructs they were developed from,
and are
closer to the raw data. Disadvantages include the potential for low
replicability
by others (e.g., if a rigorous and transparent approach to coding
development
is not adopted) and potential difficulties in resolving disagreements.
Honey
(1979) has recommended the use of two additional coders to counter
potential
bias of the code developer. However, the available literature offers
little
empirical basis for determining the number of coders that should be
used. In
general, the more coders are employed, the more certain one can be
regarding
the reliability of the coding scheme, i.e., that if other coders were to code the data, similar results would be obtained (Noda et al., 2001, discuss this issue in relation to diagnostic decisions). Further, Krippendorf
(2004a)
has cautioned against colleagues acting as the second
coder because they are more likely to be aware of the researcher’s aims
or
general approach and are less likely to be truly independent.
Stewart
and Stewart (1982)
also propose that supplied (theory derived)
categories can be used (e.g. ‘propositional’, ‘sensory’ and
‘evaluative’;
‘ends’ versus ‘means to ends’; ‘people’ versus ‘technical issues’) in
combination with data derived categories. Another hybrid approach has
been described
by Honey (1979)
who recommends the inclusion of supplied constructs
to sum each individual’s perspective on the topic of interest. This
approach
has been discussed in detail by Jankowicz
(2004)
and involves computing matching scores between the
elicited and supplied constructs. These scores are used to aggregate
constructs
across a sample and to examine the extent individual constructs match
the
ratings of the supplied ‘summary’ construct.
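The general idea of a matching score between an elicited and a supplied construct can be sketched as below. This is a minimal illustration only: the rating scale, the percentage formula, and the treatment of reversed poles are assumptions for the sketch, not Jankowicz's (2004) exact specification.

```python
def matching_score(elicited, supplied, scale_min=1, scale_max=5):
    """Percentage similarity between two sets of element ratings.

    100 means identical ratings. The reversed form allows for the
    elicited construct being aligned with the supplied construct's
    opposite pole, so the better of the two orientations is returned.
    """
    max_diff = (scale_max - scale_min) * len(elicited)
    direct = 100 * (1 - sum(abs(a - b) for a, b in zip(elicited, supplied)) / max_diff)
    reversed_elicited = [scale_max + scale_min - r for r in elicited]
    inverse = 100 * (1 - sum(abs(a - b) for a, b in zip(reversed_elicited, supplied)) / max_diff)
    return max(direct, inverse)

# Hypothetical ratings of six elements on a 1-5 scale.
elicited = [1, 2, 5, 4, 1, 3]
supplied = [5, 4, 1, 2, 5, 3]
print(matching_score(elicited, supplied))  # 100.0 (a perfect reversed match)
```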
A small number of other studies have
utilised data driven approaches. Content analysis was used to examine
40 mental
health professionals’ judgments about the suitability for release of
security
patients (Green, 1996; Green and Baglioni, 1997). The first author
reviewed the
data for commonly recurring constructs and developed a 19-category coding scheme. To test the reliability of this coding scheme a random sample
of eight
grids (20% of the grids) and two grids selected for their anticipated
difficulty to code were coded by three independent coders. Following
the
attainment of a kappa value of 0.82, another three coders then coded
the full
set of 316 constructs. Intercoder agreement for the full data set was
0.73
(n=316) and 0.74 for the six constructs (n=240) nominated as most
important by
each participant. Disagreement between the coders was resolved after
the
reliability analysis. An alternative, more rigorous approach would have
been to
further refine the coding scheme and employ a fresh set of coders.
A more complex approach designed to
develop
a group cognitive map from constructs generated by multiple individuals
has
been described by Hill (1995).
Data that was analysed included the original elicited (“chosen”) construct pole, the most superordinate constructs, and the subordinate construct which describes how the superordinate construct will be achieved. Contrast poles were included along with the preferred poles in the group cognitive map “where possible”. Other features included translating words into noun equivalents and the use of key-word-in-context lists to examine words in context. Words shared across participants were examined for
superordinate
and subordinate linkages and implications. Constructs without
superordinate and
subordinate links were excluded, as a key concept underlying the
approach was
the assumption that a construct is defined by the constructs that are
superordinate and subordinate to it. The seven participants who
generated the
constructs were shown the completed individual and group cognitive maps
and confirmation
was sought regarding accuracy of the maps. Participants were also asked
to rate
whether their individual map was an “accurate expression” of their
views and
whether the group cognitive map accurately subsumed their views. This method was developed to facilitate team building; it is labour intensive and would be complex to implement with the large numbers of participants typically recruited to research studies.
Simple theory driven
coding schemes
PCP researchers have utilised a variety
of
theory driven approaches to content analysis. A number of studies have
utilised
only a small number of categories that were used to code constructs.
For
example, Bieri et al. (1958)
calculated an External Construct Score based on the
number of constructs categorised as ‘external’ (e.g., external
qualities such
as physical characteristics, relationships, interests and activities),
while Little (1968)
categorised constructs as ‘psychological’ or ‘role’.
This latter approach was adopted by Duck
and Spencer (1972)
who added an ‘other’ category in their analysis of
constructs related to friendship formation. Walker
et al. (1988)
coded constructs in terms of whether the constructs
concerned ‘people’ or ‘problems’, whether the constructs were ‘global’
and
‘subordinate’, and whether there was evidence of preemptive construing
and
impermeability. A second coder categorised the constructs in terms of
these
latter constructs and inter-coder agreement of 99% was reported. As is the case for many PCP studies, agreement was based on raw percent agreement, which does not control for chance agreement (Krippendorf, 2004a; Neuendorf, 2002).
The content analysis codes suggested by Stewart and
Stewart (1982)
also utilised a small number of predefined
categories.
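Both raw percent agreement and a chance-corrected coefficient such as Cohen's kappa can be computed directly from two coders' category assignments; the toy codings and category labels below are invented for illustration.

```python
from collections import Counter

def percent_agreement(codes1, codes2):
    """Raw proportion of units on which two coders agree."""
    return sum(a == b for a, b in zip(codes1, codes2)) / len(codes1)

def cohens_kappa(codes1, codes2):
    """Cohen's kappa: observed agreement corrected for the agreement
    expected by chance given each coder's marginal category frequencies."""
    n = len(codes1)
    p_obs = percent_agreement(codes1, codes2)
    f1, f2 = Counter(codes1), Counter(codes2)
    p_chance = sum(f1[cat] * f2[cat] for cat in f1) / (n * n)
    return (p_obs - p_chance) / (1 - p_chance)

# Two coders assigning ten constructs to three categories (invented data).
coder1 = ["moral", "emotional", "moral", "relational", "moral",
          "emotional", "relational", "moral", "emotional", "moral"]
coder2 = ["moral", "emotional", "moral", "relational", "emotional",
          "emotional", "relational", "moral", "emotional", "moral"]
print(percent_agreement(coder1, coder2))       # 0.9
print(round(cohens_kappa(coder1, coder2), 3))  # 0.844
```

Note how the kappa value falls below the raw agreement of 0.9, since some of that agreement is attributable to chance.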
Landfield content analysis
categories
Landfield
(1971)
developed a 32-category coding scheme that included
a small number of sub-categories to indicate time (e.g. past, present,
future)
or degree (e.g., high or low). This coding scheme has been described in
detail
by Fransella (1972)
and Winter
(1992).
The coding scheme has been used in full or in part
in a number of studies, including research into people who stutter (Fransella 1972),
sex offenders (Horley
1988),
suicide (Landfield
1976)
and psychotherapy (Winter
1992).
Landfield’s own data on intercoder agreement, reported by Fransella (1972), showed raw percentage agreement ranging from 59% to 75%. For her own study, Fransella (1972) did not report intercoder agreement data, but did indicate there was a “substantial minority” of constructs that could have been allotted to several categories. In such situations the implicit construct was used to determine the appropriate category. The problems with
intercoder
agreement in this study may have been reduced if more structured
training had
been provided. Harter (2004)
obtained an average kappa of 0.72 when a selection of
347 of Landfield’s constructs were coded by three raters (though it is
of note
three other coders were dropped because of “unreliable” ratings). An
average
kappa of 0.65 was obtained when new data (i.e., the 60% of 1960 constructs not included in Landfield’s scoring index) was coded.
Two studies (Burke and
Noller, 1995;
Harter,
2004)
have utilised a computer program, AUTOREP (Murphy and
Neimeyer, 1986)
that contains a dictionary of 1,500 constructs based
on previous research by Landfield. Burke
and Noller (1995)
noted some constructs were scored for at least four different
categories. This issue was raised as one of the limitations of
Landfield’s
coding scheme (Feixas et al. 2002).
Other limitations identified by Feixas et al.
included the use of overlapping, non-exclusive categories, coding of
construct
poles as separate entities, as well as inclusion of categories with
differing
levels of abstraction and type (e.g., inclusion of time categories). Feixas et al.
also
described the coding scheme as being
non-comprehensive because categories with low inter-coder agreement
were
dropped. While the effect of dropping such categories on the
comprehensiveness
of the coding scheme is not certain, alternative strategies have been
proposed
in more recent texts on content analysis. Neuendorf (2002), for example, has proposed that once problematic categories are identified, further coder training or code-book revision should be undertaken (see also Hruschka et al., 2004). Category revision is an
issue
relevant to both data and theory driven content analysis.
Feixas’s ‘Classification
System for
Personal Constructs (CSPC)’
Bearing in mind the above limitations, Feixas et al. (2002)
developed a six-category coding scheme that consisted
of 45 sub-categories. Central features of this coding scheme were that it was developed only for coding psychological constructs, and that the major categories were hierarchically organised (the highest category was ‘moral’), so that if a construct might be considered to fit into two categories, the construct was allocated to the higher order category. Another feature was that constructs were coded as
a bi-polar entity rather than both poles being coded separately. While Feixas et al. (2002)
were critical of the Landfield scheme for coding both
construct poles, argument can be mounted for both approaches,
especially when
the poles appear to come from different domains. This latter issue has
been
considered by Yorke (1989) who has provided examples of bipolar constructs that are antonyms (“straight” constructs) and constructs in which the superordinate relationship between the poles is difficult to discern (“bent” constructs). Yorke has suggested “bent” constructs may be composed of poles from separate constructs. Clearly, coding such a construct into a single category is problematic.
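The CSPC's tie-breaking rule, allocating a construct that fits several categories to the highest order one, can be expressed directly. The category ordering below is an invented placeholder, not the CSPC's actual hierarchy.

```python
# Invented ordering, highest first; the CSPC's actual categories differ.
HIERARCHY = ["moral", "emotional", "relational", "personal", "intellectual"]

def allocate(candidate_categories):
    """Allocate a construct that plausibly fits several categories
    to the highest order category in the hierarchy."""
    return min(candidate_categories, key=HIERARCHY.index)

print(allocate(["relational", "moral"]))  # moral
```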
The coding scheme was developed using a
sound methodology (e.g., independent coders and use of a development
and an
analysis data set). It would be expected that the three hours of
training which
was provided to coders contributed to the high level of inter-coder
agreement
reported. While both raw percentage and chance corrected measures of agreement were calculated, somewhat surprisingly the level of chance corrected agreement (kappa = 0.95) was higher than the raw percentage agreement (0.87).
Similar levels of agreement were obtained by Haritos
et al. (2004)
for coding of constructs elicited from element role
titles (kappa=0.90) and constructs elicited from ‘acquaintance’
elements
(kappa=0.87). An overall kappa of 0.89 was obtained though coders could
not
agree on how to code 12.4% of the constructs. This paper is also of
interest
because it examined the relationship between role titles versus
acquaintance
elements, the types of constructs generated and grid structure.
A lower level of agreement (kappa=0.74)
was
obtained in a study which used a modified version of the CSPC (Neimeyer et al., 2001).
This modified coding scheme featured the addition of
two major categories (e.g., a higher order ‘existential’ category and a
lower
order ‘concrete descriptor’ category) as well as an additional
‘self-criticism/acceptance’ sub-category that was added to the
‘personal’
category. To date the CSPC and the modified form have received limited
use
though are promising developments. A major issue remains how constructs
that
come from different domains are treated. A second issue that faces all
attempts
at the content analysis of constructs is that constructs are often
single words
and as such there is a higher likelihood of ambiguity than when a
clause is
available. This issue will be considered further when the
Gottschalk–Gleser content
analysis scales are considered (Gottschalk,
1995;
Viney,
1983).
Neimeyer ‘Content analysis
of Death Constructs’
Neimeyer
et al. (1984)
developed a 25-category scheme to code death
constructs. The coding scheme was developed in two stages, with three
independent coders coding a sub-set of the initial development data
set.
Following this second stage the original categories were revised, with
categories being added as well as deleted. Data on raw percentage
intercoder
agreement (0.81–1.0) was provided. Although such a coding scheme will have limited use because of its focus on death, what is useful is the publication of how a large number of words were categorised. This dictionary allows for greater transparency and reproducibility than was found in other coding schemes, and could also be utilised for the purpose of developing a computerised analysis.
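A published word-to-category dictionary lends itself to a simple computerised first pass, with unmatched labels flagged for manual coding. The entries below are invented placeholders, not Neimeyer et al.'s actual dictionary.

```python
# Invented placeholder entries; the published dictionary differs.
DICTIONARY = {
    "final": "finality",
    "peaceful": "peaceful",
    "afterlife": "continued existence",
}

def code_constructs(constructs):
    """Look each construct label up in the dictionary;
    collect unknown labels for manual coding."""
    coded, unmatched = {}, []
    for label in constructs:
        category = DICTIONARY.get(label.lower())
        if category is None:
            unmatched.append(label)
        else:
            coded[label] = category
    return coded, unmatched

coded, unmatched = code_constructs(["Final", "afterlife", "serene"])
print(coded)      # {'Final': 'finality', 'afterlife': 'continued existence'}
print(unmatched)  # ['serene']
```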
ANALYSIS OF TEXTUAL AND
NARRATIVE DATA
Word co-occurrence data
Although not a content analysis as such, an example of utilising word co-occurrence that has relevance to PCP is Rosenberg and Jones (1972).
In this study personality trait co-occurrence was
analysed using cluster analysis and multidimensional scaling to
determine
dimensions of person perception. Gara (1982) has considered extensions
of this
approach and applications to understanding individual meaning and
therapy. More
recently, a variety of approaches based on word co-occurrence have been
discussed in the general content analysis literature. Other researchers
have
examined word-word correlations using cluster or factor analysis (Hogenraad et al., 2003),
correspondence, cluster or factor analysis of word
frequency by variable tables (Lebart
et al., 1998),
and factor analysis of word frequency matrices (Simon
and Xenos, 2004).
The latter paper compared manual coding and results
obtained by factor analysis, and concluded that the latter approach
provided a
richer coding system that better represented concerns raised in news
articles. Lebart et al. (1998)
have argued that such approaches do not involve the
researcher imposing on data to the same extent as when data is coded using categories, i.e., when a researcher develops codes, the potential for bias is introduced. The basic assumption of these approaches is that counts of
introduced. The basic assumption of these approaches is that counts of
words
reveal the structure in texts, through examining the relationship
between words
that constitute the text. A disadvantage of these approaches is that
larger
texts are required than is typically generated by repertory grids or
self-characterisations. Another consideration is the issue identified
by
Harter, Erbes and Hart (2004) and Yorke (1989) regarding the problems
associated with trying to infer meaning from an individual word (e.g.,
just) in
the absence of context, such as “a just war” as opposed to “just war
after
war”.
Gottschalk-Gleser-Viney
content analysis
scales
A series of content analysis scales were
developed for the purpose of ‘objectively’ inferring psychological
states from
verbal reports. These sophisticated scales feature differential weights
to indicate
intensity and magnitude, correction for number of words and weighted
categories (Gottschalk et al., 1969;
Gottschalk,
1995).
These scales had their origins in psychoanalysis and
clinical practice, and have sought to correlate psychological states
with
physiological and other variables.
The first published scales included
scales
to measure anxiety, hostility, social-alienation and personal
disorganisation (Gottschalk et al., 1969).
Subsequently, a number of scales have been developed
by Viney (1983)
to measure cognitive anxiety, sociality, pawn and
origin, positive affect and life stress (Viney,
1981;
Viney,
1983).
The cognitive anxiety scale, in particular, was developed
to investigate Kellian anxiety, i.e., individuals’ experiences of being
unable
to make sense of events (Viney
1983).
Typically, data is collected by asking an individual
to speak for five minutes about “any interesting or dramatic personal
life
experiences they have had”. The response of each individual is taped,
transcribed and analysed using the standardised scales. Rather than
individual
words, the unit of analysis is the clause, as it is considered to more
meaningfully convey an individual’s thoughts, feelings and actions
toward themselves
and others (Gottschalk 1995).
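The general shape of such a scale score, weighted clause codings corrected for the length of the verbal sample, can be sketched as follows. The weights and the per-hundred-words correction here are illustrative assumptions only, not the published Gottschalk-Gleser scoring formula.

```python
def scale_score(clause_weights, total_words):
    """Sum of the weights assigned to scorable clauses, normalised
    per 100 words of the verbal sample.

    Illustrative only: the published scales use specific category
    weights and their own correction procedure.
    """
    return 100 * sum(clause_weights) / total_words

# Hypothetical weights assigned to four scorable clauses in a 250-word sample.
print(scale_score([1, 2, 1, 3], total_words=250))  # 2.8
```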
The various Gottschalk-Gleser scales
have
been extensively used. The original authors and others (Gottschalk,
1995;
Viney,
1983) have undertaken numerous validation studies and reported high levels of reliability. This has been
achieved through careful definition of terms, scale testing, and a
strong emphasis
on coder training (Viney, 1983).
However, extensive training in the use of the scales
is required. In describing use of the Hostility-Outward scale, Gift et al. (1986)
noted that anyone who has had the appropriate training
(approximately 20 hours) could score the scale. Computerised scoring,
which has
been developed, will reduce the need for extensive training, though
careful
transcription and preparation of text is still required (see Roberts
and
Robinson, 2004, for a general discussion of this topic). It follows
that this
approach is dependent on the quality of transcription. Another
consideration
for PCP researchers and clinicians concerns the compatibility of the
psychoanalytic assumptions underlying this approach and associated
constructs
with PCP. Careful examination of the scale scoring systems (see Gottschalk, 1995)
will
assist in determining the compatibility of the
scales with PCP theoretical constructs. Viney (1981) has addressed this
issue
by developing PCP consistent scales using principles derived from the
Gottschalk-Gleser approach and utilising the Anxiety scale as a measure
of
Kellian threat (Viney, 1993). Another advantage of this approach is that the time demands on research participants are minimal, though Viney (1983)
has noted data quality is dependent on individuals’
verbal ability. This method of data collection also has the advantage
of being
relatively unobtrusive and focuses on events that are individually
salient (Viney, 1983).
The analysis of
self-characterisations
Kelly
(1955)
described his approach to analysing
self-characterisation in terms of examining content and organisation,
area or
topical analysis, themes or cause-and-effect relationships, dimensional
analysis and the application of professional constructs. Kelly
distinguished
his analysis from approaches that relied primarily on verbal or
syntactical
analysis, and stated that he was primarily interested in understanding
“the
dichotomized alternatives between which the client must continually and
consecutively choose”.
While Kelly’s approach was specifically
directed
toward clinical application, Jackson
(1988;
1990)
has attempted to develop a more standardised approach
to the content analysis of self-characterisation. Jackson
(1988)
described eight categories which were used to score a
self-characterisation. Each category reflected one of Kelly’s
corollaries. For
example, the first category labelled ‘self-esteem’ was intended to
reflect the
sociality corollary. To score this item a count was made of the number
of times
a person referred to the views taken of him or her by others.
Similarly, the
second category labelled ‘non-psychological statements’ referred to the
experience corollary. This category was scored by counting the number
of times
a person referred to his or her past, or possible future in
psychological
terms.
A paper published two years later (Jackson, 1990)
makes no reference to the corollaries. Although some category titles were changed (‘self-esteem’ became ‘views of others’, and ‘non-psychological statements’ became ‘history and future’), the category definitions remained essentially the same. Two additions were a count
a count
of the number of prompts and a count of the number of non-psychological
statements (e.g., purely behavioural statements, activities and
physical
descriptions). These items were then deducted from the total score.
These
papers and another source (Houston,
1998)
describe this scoring method, provide sample
self-characterisations and examples of how they were scored. Houston (1998)
provides a particularly detailed example of the scoring
approach. This scoring approach has potential usefulness; however, when attempting to replicate the scores by independently coding the provided self-characterisations, it became apparent that more detailed definitions of terms are required to ensure adequate inter-coder agreement.
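Jackson's (1990) scoring, as described above (category counts, with deductions for prompts and non-psychological statements), could be tallied as in the sketch below. The counts are invented, and the exact arithmetic is an assumption based on the published description.

```python
def jackson_score(category_counts, prompts, non_psych_statements):
    """Total of the category counts, minus the two deductions
    described in Jackson (1990) (assumed arithmetic)."""
    return sum(category_counts.values()) - prompts - non_psych_statements

# Invented counts from a hypothetical self-characterisation.
counts = {"views of others": 4, "history and future": 3}
print(jackson_score(counts, prompts=2, non_psych_statements=1))  # 4
```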
An alternative method of scoring
self-characterisations has been described by Klevjer and Walker (2002).
Self-characterisations were segmented into phrases and each phrase
scored
according to McAdams’ (1994) three-tiered personality framework, i.e., dispositional traits (Level I), personal concerns (Level II), and life story or narrative (Level III). For Level I, phrases were scored as either ‘Big Five’
(based on a list developed by Costa and McCrae 1992) or ‘non-Big Five’.
Level II
was scored using 11 sub-categories, including personal striving, coping
strategies, values, skills, needs, while Level III was scored for
changes in
tense and obvious plots. Additional categories included ‘How others
view me’,
‘belief about self’, ‘belief about others,’ and ‘personal emotion’. No
phrase
was scored for the same level more than once, but could be classified
into more
than one level, e.g., change of tense and trait characteristic. A
second coder
checked 70 of the units (intercoder agreement = 88%). The case for
using a
coding approach based on empirically derived theory has been argued by
McAdams
and Zeldow (1993).
Longer texts
While the theory and data driven
approaches, which have been described, can be applied to longer texts, Feixas and Villegas (1991)
have described a set of procedures for analysing
autobiographical texts. This approach is more suited to longer texts,
or
multiple texts created over time. One focus of this method is on
identifying
different types of constructs, for example, simple evaluative constructs (e.g., “Ross is sex mad”), meta-evaluative constructs (e.g., “John thinks Ann is pretty”) or relational constructs (e.g., “Ross treated me badly”). Behavioural or
descriptive constructs are not recorded. Following exclusion of
elements
associated with fewer than five constructs and constructs that are
applied to
only one element, constructs are coded as ‘negative’, ‘positive’ or
‘neutral’.
Following exclusion of ‘neutral’ constructs, the evaluative constructs
applied
to various elements can be examined for “coherence”, i.e., an element
is
considered coherent if more than 85% of constructs are either
‘positive’ or
‘negative’. Elements below this
threshold are considered ambivalent and situational analysis is
undertaken to determine
factors associated with this ambivalence. Additionally, construct and
element
relationships can be examined by cluster analysis of a
construct-element matrix
and various indices such as intensity and density calculated. Analyses
can also
be conducted using the meta evaluative and relational constructs. The
authors
identified the limitations of their approach as the need for an
extensive
number of constructs that are applied repetitively across diverse
elements and
multiple characters that include the text’s author. It was considered
beneficial
by Feixas and Villegas (1991)
that texts cover as wide a range as possible of an
author’s life. To use this method considerable understanding is
required both
of the text preparation procedures and the various analysis methods.
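Feixas and Villegas's coherence criterion can be computed straightforwardly once the constructs applied to an element have been coded for valence; the codings below are invented for illustration.

```python
def is_coherent(valences, threshold=0.85):
    """An element is coherent if more than `threshold` of the non-neutral
    constructs applied to it share one valence; otherwise it is
    considered ambivalent and subject to situational analysis."""
    judged = [v for v in valences if v in ("positive", "negative")]
    if not judged:
        return False
    top_share = max(judged.count("positive"), judged.count("negative")) / len(judged)
    return top_share > threshold

# Invented valence codings of constructs applied to one element.
print(is_coherent(["positive"] * 9 + ["negative"]))      # True  (0.9 > 0.85)
print(is_coherent(["positive"] * 6 + ["negative"] * 4))  # False (0.6, ambivalent)
```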
CONCLUDING COMMENTS
A diverse range of PCP content analysis
applications has been reviewed. In developing a content analysis,
decisions
have to be made regarding the unit of analysis, the method of analysing
the data
and the reliability of the coding. Data driven approaches are flexible
and
appealing because they are closer to, and derived from the raw data. It
can
also be argued that the researcher or clinician is not attempting to
force data
into predetermined categories that may not be applicable to the data.
In
contrast, McAdams and Zeldow (1993) have stated the case for theory
driven
approaches, “For our taste, well
validated measures of carefully articulated
constructs are preferable to omnibus systems that promise to cover the
universe
by presenting a list of theoretically decontextualised terms and topics”.
Theory driven approaches are also more transparent, more readily
applied by
others and make explicit assumptions, which may not be as apparent in
data driven
approaches. When considering which approach to adopt, it is important to weigh the respective advantages and limitations, as well as the range of approaches that are available.
Further, it is considered that attention
to
developments in the broader content analysis literature (see Kolbe and
Burnett,
1991, for a review of identified weaknesses) can significantly enhance
PCP
derived content analyses. One example of this is the work by Viney (1983).
There are, however, particular issues that PCP researchers
and clinicians need to consider. These issues include:
1. Consideration of whether the focus should be on a single construct pole, both individual poles or the construct as a single entity.
2. Less reliance on single word labels to describe constructs, so as to ensure a fuller description of an individual’s meaning (see Yorke, 1989).
3. Use of appropriate statistics that take into account chance agreement to measure inter-coder reliability (see Krippendorf, 2004b, for a recent review).
4. More comprehensive definition and operationalisation of categories, and an iterative process of category development to obtain a satisfactory level of reliability (Jankowicz, 2004).
5. Use of multiple coders and greater independence of coders to increase the reproducibility of findings.
The preceding review has discussed some
of
the theoretical and practical challenges that face PCP researchers and
clinicians who intend to use content analysis. Appropriate attention to
methodological
issues can positively contribute to analyses that more comprehensively
and
rigorously examine individual and group construing, while providing
models for
future researchers, thus enhancing the status of PCP research.
REFERENCES
Bieri, J, Bradburn, W. M. &
Galinsky, M. D. (1958). Sex
differences in perceptual behavior. Journal
of Personality, 26, 1-12.
Burke, M. & Noller, P. (1995).
Content
analysis of changes in self-construing during a career transition. Journal of
Constructivist Psychology, 8, 213-226.
Costa, P. T. & McCrae, R. R. (1992). The NEO-PI-R: professional manual. Odessa: Psychological Assessment Resources.
Duck, S. (1983). Two individuals in
search
of agreement: the commonality corollary. In: J. R. Adams-Weber & J.
Mancuso
(Eds), Application of personal
construct theory (pp 222-234). Toronto: Academic
Press.
Duck, S. W. & Spencer, C. (1972).
Personal constructs and friendship formation. Journal of Personality and Social
Psychology, 23, 40-45.
Feixas, G., Geldschläger, H. &
Neimeyer, R. A. (2002). Content analysis of personal constructs. Journal of
Constructivist Psychology, 15, 1-19.
Feixas, G. & Villegas, M. (1991).
Personal construct analysis of autobiographical texts: a method
presentation
and case illustration. International
Journal of Personal Construct Psychology,
4, 51-83.
Fransella, F. (1972). Personal change and
reconstruction. Research on a treatment of stuttering. London:
Academic Press.
Fransella, F., Bell, R. & Bannister,
D.
(2004). A manual of repertory grid
technique. London: Routledge.
Gara, M. A. (1982). Back to basics in personality study: the individual person's own organization of experience: the individuality corollary. In J. Mancuso & J. Adams-Webber (Eds.), The construing person (pp. 170-197). New York: Praeger.
Gift, T., Cole, R. & Wynne, L. (1986). An interpersonal measure of hostility based on speech content. In: L. A. Gottschalk, F. Lolas & L. L. Viney (Eds.), Content analysis of verbal behavior: significance in clinical medicine and psychiatry (pp. 87-92). Berlin: Springer-Verlag.
Gottschalk, L. (1995). Content analysis
of
verbal behavior. New findings and
clinical applications. Hillsdale: Lawrence
Erlbaum.
Gottschalk, L. A., Winget, C. N. &
Gleser, G. C. (1969). Manual of
instructions for using the Gottschalk-Gleser
content analysis scales: anxiety, hostility, and social
alienation-personal disorganisation. Berkeley: University of
California Press.
Green, B. (1996). The release of
forensic
patients. Australian Social Work, 49,
47-53.
Green, B. & Baglioni, A. (1997).
Judging the suitability for release of patients from a maximum security
hospital by hospital and community staff. International Journal of Law and
Psychiatry, 20, 323-335.
Haritos, A., Gindidis, A., Doan, C., et
al.
(2004). The effect of element role titles on construct structure and
content. Journal
of Constructivist Psychology, 17, 221-236.
Harter, S., Erbes, C.R. & Hart, C.C.
(2004). Content analysis of the personal constructs of female sexual
abuse survivors
elicited through repertory grid technique. Journal of Constructivist
Psychology, 17, 27-43.
Hill, R. A. (1995). Content analysis for
creating and depicting aggregated personal construct derived cognitive
maps.
In: R. A. Neimeyer & G. J. Neimeyer (Eds.), Advances in personal construct psychology, Volume 3 (pp. 101-132). Greenwich: JAI Press.
Hogenraad, R., Mckenzie, D. &
Péladeau,
N. (2003). Force and influence in content analysis: the production of
new
social knowledge. Quality and
Quantity, 37, 221-238.
Honey, P. (1979). The repertory grid in
action. How to use it to conduct an attitude survey. Industrial and Commercial
Training, 11, 452-459.
Horley, J. (1988). Cognitions of child
sexual abusers. The Journal of Sex
Research, 25, 542-545.
Houston, J. (1998). Making sense with
offenders. Personal constructs, therapy and change. Chichester;
John Wiley.
Hruschka,
D. J., Schwartz, D., Cobb St. John, D., et al. (2004). Reliability in
coding
open-ended data: lessons learned from HIV behavioral research. Field Methods,
16, 307-331.
Jackson, S. R. (1988).
Self-characterisation: dimensions of meaning. In: F. Fransella &
L. Thomas
(Eds.), Experimenting with personal
construct psychology (pp 223-231). London:
Routledge and Kegan Paul.
Jackson, S. R. (1990).
Self-characterisation: development and deviance in adolescent
construing. In:
P. Maitland (Ed.), Personal
construct theory deviancy and social work (pp
60-68). London: Inner London Probation Service & Centre for
Personal
Construct Psychology.
Jankowicz, D. (2004). The easy guide to
repertory grids. Chichester: John Wiley.
Kelly, G. (1955). The psychology of
personal constructs. New York: Norton.
Klevjer, I. & Walker, B. (2002).
Beyond
the 'Big Five': a qualitative study of age differences in personality. Australian
Journal of Psychology, (supplement), 54, 5.
Kolbe, R.H. & Burnett, M.S. (1991).
Content-analysis research reliability and objectivity. Journal of Consumer Research,
18, 243-250.
Krippendorf, K. (2004a). Content analysis.
An introduction to its methodology. Thousand Oaks: Sage.
Krippendorf, K. (2004b). Reliability in
content analysis: some common misconceptions and recommendations. Human
Communication Research, 30, 411-433.
Landfield, A. (1971). Personal construct
systems in psychotherapy. Lincoln: University of Nebraska.
Landfield, A. (1976). A personal
construct
approach to suicidal behaviour. In P. Slater (Ed.), The measurement of
intrapersonal space by grid technique. Explorations of intrapersonal
space (pp.
93-107). London: John Wiley.
Lebart, L., Salem, A. & Berry, L.
(1998). Exploring textual data. Dordrecht:
Kluwer Academic Publishing.
Little, B. R. (1968). Factors affecting
the
use of psychological vs. non-psychological constructs on the rep test. Bulletin
of the British Psychological Society, 21, 34.
McAdams, D.P. & Zeldow, P.B. (1993).
Construct validity and content analysis. Journal of Personality Assessment, 61,
243-245.
McAdams, D.P. (1994). Can personality
change? Levels of stability and growth in personality across the
lifespan. In
T. Heatherton & J. Weinberger (Eds.). Can personality change? (pp.
299-313). Washington: APA Books.
Miller, M. & Riechert, B. P. (2001). Frame mapping: a quantitative method for investigating issues in the public sphere. In: M. D. West (Ed.), Theory, method, and practice in computer content analysis (pp. 61-75). Westport: Ablex.
Murphy, M. & Neimeyer, R. A. (1986).
AUTOREP:
software reference manual. Memphis: Memphis State University.
Neimeyer, R. A. (1985). The development of
personal construct psychology. Lincoln: University of Nebraska.
Neimeyer, R. A., Anderson, A. &
Stockton, L. (2001). Snakes versus ladders: a validation of laddering
technique
as a measure of hierarchical structure. Journal of Constructivist Psychology,
14, 85-105.
Neimeyer, R. A., Fontana, D. J. &
Gold,
K. (1984). A manual for content analysis of death constructs. In F. R. Epting & R. A. Neimeyer (Eds.), Personal meanings of death: applications of personal construct theory to clinical practice (pp. 213-234). Washington: Hemisphere Publishing.
Neuendorf, K. A. (2002). The content
analysis guidebook. Thousand Oaks: Sage Publications.
Noda, A.N., Kraemer, H.C., Yesavage,
J.A.
& Periyakoil, V. (2001). How many raters are needed to make a
reliable
diagnosis? International Journal of
Methods in Psychiatric Research, 10, 119-125.
Roberts,
F. & Robinson, J. D. (2004). Interobserver agreement on first-stage
conversation analytic transcription. Human
Communication Research, 30, 376-410.
Rosenberg, S. & Jones, R. (1972). A
method for investigating and representing a person's implicit theory of
personality:
Theodore Dreiser's view of people. Journal
of Personality and Social Psychology,
22, 372-386.
Shaw, M. (1994). Methodology for sharing
personal construct systems. Journal
of Constructivist Psychology, 7, 35-52.
Simon, A. F. & Xenos, M. (2004).
Dimensional reduction of word-frequency data as a substitute for
intersubjective content analysis. Political
Analysis, 12, 63-75.
Stewart, V. & Stewart, A. (1982). Business
applications of repertory grid. London: McGraw-Hill.
Viney, L. L. (1981). Content analysis: a
research tool for community psychologists. American Journal of Community
Psychology, 9, 269-281.
Viney, L. (1983). The assessment of
psychological states through content analysis of verbal communications.
Psychological
Bulletin, 94, 542-563.
Viney, L.L. (1993). Listening to what my
clients and I say: content analysis categories and scales. In: G.
Neimeyer
(Ed.). Constructivist assessment: a
casebook (pp 104-142). Newbury Park: Sage.
Walker, B. M., Ramsey, F. L. & Bell,
R.
C. (1988). Dispersed and undispersed dependency. International Journal of
Personal Construct Psychology, 1, 63-80.
Winter, D. A. (1992). Personal construct
psychology in clinical practice: theory, research and applications.
London: Routledge.
Wright, R. P. (2004). Mapping cognitions
to
better understand attitudinal and behavioral responses in appraisal
research. Journal of Organizational
Behavior, 25, 339-374.
Yorke, M. (1989). The intolerable
wrestle:
words, numbers, and meanings. International
Journal of Personal Construct
Psychology, 2, 65-76.
ACKNOWLEDGEMENTS
Comments on an earlier draft of this
paper
by Devi Jankowicz, and clarification by Robin Hill and Beverly Walker
on their
methods are appreciated.
ABOUT THE
AUTHOR
Bob
Green is a social worker, working in a
forensic mental health service. He has conducted research using
Personal
Construct Psychology to examine the judgments of mental health
professionals
and has presented papers on Personal Construct Psychology in relation
to a
range of topics, including schizophrenia and criminal behaviour. He is
currently completing a Ph.D., examining individuals’ anticipations
regarding
cannabis use.
Email: bgreen@dyson.brisnet.org.au
REFERENCE
Green, B. (2004).
Personal construct
psychology and content analysis. Personal
Construct Theory & Practice, 1,
82-91
(Retrieved from http://www.pcp-net.org/journal/pctp04/green04.html)
Received: 9 Aug 2004 – Accepted: 26 Oct
2004 -
Published: 30 Dec 2004