Thursday, January 13, 2011

Past input into the Systems...from Margith:)


Margith A. Strand

Initial Work of mine... Margith Strand

Critical Discourse Analysis (CDA) is a rapidly developing area of language study. It regards discourse as a "form of social practice" (Fairclough, 1995), and considers the context of language use to be crucial to discourse (Wodak, 2001).  It takes particular interest in the relation between language and power; claims can be made that cultural and economic dimensions are significant in the creation and maintenance of power relations.  The key figures in this area include Fairclough (1992a, b, c, 1993, 1995a, b, 1996, 1998, 2000, 2001, 2003), van Dijk (1993, 1997, 1998a, b, 1999, 2001), Gee (1999, 2005), van Leeuwen (1993, 1995, 1996), Wodak (1996, 2000, 2001), and Scollon (2001).  It is generally agreed that Critical Discourse Analysis is an approach consisting of different perspectives and different methods for studying the relationship between the use of language and social context.

It is within this construction of language and power that I am addressing the field of Distance Education, with special attention to the Constructivistic frame of network formats in language and the delivery expression of Semiotic Assessment.  These objectives will be pursued within the context of Teaching and Learning in the realm of Distance Education.  The methodology for analysis will be constructed through the application of an in-text, intertextual, attitudinal-evaluative, reductionistic, and social-atomistic framework as backdrop within the platform layout of the online classroom and arena.

Organizational behavior and organizational management techniques of analysis will also be used to validate and describe the construction of the Social Psychological parameters of Group Processes, Group Decision Making, Organizational culture, Cross-cultural analysis, Personality (preferential attitudinal measurements), Work Design (platform effectiveness), Motivation (environmental constraints and measurements), Perception (attitudinal measurements), Leadership effectiveness (facilitator effectivity), and Performativity.

One of the first principles of Critical Discourse Analysis is that it addresses social problems.  C.D.A. not only focuses on language and language use, but also on the linguistic characteristics of social and cultural processes.  It is to this end that I have decided to employ C.D.A. for analytical expression in my Constructivistic examination of the field of Distance Teaching and Learning.  Critical Discourse Analysis aims to derive results which are of practical relevance to the social, cultural, political and even economic contexts (Fairclough & Wodak, 1997).

The second principle of C.D.A. is that power relations are discursive.  C.D.A. explains through this method of approach how social relations of power are exercised and negotiated in and through discourse (Fairclough & Wodak, 1997).  The next principle is that discourse constitutes society and culture.  This means that every instance of language use makes its own contribution to reproducing and transforming society and culture, including relations of power (Fairclough & Wodak, 1997). In the design and application of C.D.A., I wish to integrate the approach constructed in the organizational patterning of Distance Education, namely the Teaching and Learning field, and bring to light the constructivistic details of the protocol exhibited within a science realm of online courses.


Beginnings of beginnings....and onto...Discourse:)

Dissertation General/ November 13, 2009
Margith A. Strand


Topic/Question:
What is the systematic analysis of a closed space of the distance education field through a Grounded Theory perspective, i.e. gerunding?

Intent:
To formulate the essentials in the distance education field so that improvement features can be generated upon observation for other model theatres.



Research:
Investigate the data generated through three years of student-Instructor interaction. Look for features placed at the beginning of the study.

Data Analysis: Formulation of Gerunds
Placement of data and significance of gerunding thematics found from the student-Instructor profiles.


Re-Question
Reinvestigate the thematics and verify the gerunding picture noted for the sample population found.  Define the gerunds as found for the groups and compare the results between differing environmental factors such as course topics.


Search and Find by Margith Strand/ May 24, 2010/ Used in Concept for Dissertation



Principal Components and Factor Analysis
General Purpose
Basic Idea of Factor Analysis as a Data Reduction Method
Factor Analysis as a Classification Method
Miscellaneous Other Issues and Statistics

General Purpose
The main applications of factor analytic techniques are: (1) to reduce the number of variables and (2) to detect structure in the relationships between variables, that is, to classify variables. Therefore, factor analysis is applied as a data reduction or structure detection method (the term factor analysis was first introduced by Thurstone, 1931). The topics listed below will describe the principles of factor analysis, and how it can be applied towards these two purposes. We will assume that you are familiar with the basic logic of statistical reasoning as described in Elementary Concepts. Moreover, we will also assume that you are familiar with the concepts of variance and correlation; if not, we advise that you read the Basic Statistics topic at this point.
There are many excellent books on factor analysis. For example, a hands-on how-to approach can be found in Stevens (1986); more detailed technical descriptions are provided in Cooley and Lohnes (1971); Harman (1976); Kim and Mueller, (1978a, 1978b); Lawley and Maxwell (1971); Lindeman, Merenda, and Gold (1980); Morrison (1967); or Mulaik (1972). The interpretation of secondary factors in hierarchical factor analysis, as an alternative to traditional oblique rotational strategies, is explained in detail by Wherry (1984).
Confirmatory factor analysis. Structural Equation Modeling (SEPATH) allows you to test specific hypotheses about the factor structure for a set of variables, in one or several samples (e.g., you can compare factor structures across samples).

Correspondence analysis. Correspondence analysis is a descriptive/exploratory technique designed to analyze two-way and multi-way tables containing some measure of correspondence between the rows and columns. The results provide information which is similar in nature to those produced by factor analysis techniques, and they allow you to explore the structure of categorical variables included in the table. For more information regarding these methods, refer to Correspondence Analysis.

Basic Idea of Factor Analysis as a Data Reduction Method

Suppose we conducted a (rather "silly") study in which we measure 100 people's height in inches and centimeters. Thus, we would have two variables that measure height. If in future studies, we want to research, for example, the effect of different nutritional food supplements on height, would we continue to use both measures? Probably not; height is one characteristic of a person, regardless of how it is measured.
Let's now extrapolate from this "silly" study to something that you might actually do as a researcher. Suppose we want to measure people's satisfaction with their lives. We design a satisfaction questionnaire with various items; among other things we ask our subjects how satisfied they are with their hobbies (item 1) and how intensely they are pursuing a hobby (item 2). Most likely, the responses to the two items are highly correlated with each other. (If you are not familiar with the correlation coefficient, we recommend that you read the description in Basic Statistics - Correlations) Given a high correlation between the two items, we can conclude that they are quite redundant.

Combining Two Variables into a Single Factor. You can summarize the correlation between two variables in a scatterplot. A regression line can then be fitted that represents the "best" summary of the linear relationship between the variables. If we could define a variable that would approximate the regression line in such a plot, then that variable would capture most of the "essence" of the two items. Subjects' single scores on that new factor, represented by the regression line, could then be used in future data analyses to represent that essence of the two items. In a sense we have reduced the two variables to one factor. Note that the new factor is actually a linear combination of the two variables.
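To make this concrete, here is a small sketch in Python with NumPy (the data and variable names are invented for illustration, not taken from the text): two highly correlated items are standardized and replaced by their first principal component, a single factor that correlates strongly with both.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented data: a common "satisfaction" signal plus item-specific noise,
# giving two highly correlated questionnaire items.
essence = rng.normal(size=500)
item1 = essence + 0.3 * rng.normal(size=500)
item2 = essence + 0.3 * rng.normal(size=500)

# Standardize the items, then take the first principal component of
# their correlation matrix; it plays the role of the regression line.
X = np.column_stack([item1, item2])
Z = (X - X.mean(axis=0)) / X.std(axis=0)
R = np.corrcoef(Z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(R)      # eigenvalues in ascending order
first_pc = eigvecs[:, -1]                 # direction of maximal variance
if first_pc.sum() < 0:                    # fix the arbitrary sign
    first_pc = -first_pc

# One score per subject: a linear combination of the two items.
factor_scores = Z @ first_pc
print(np.corrcoef(factor_scores, Z[:, 0])[0, 1])  # high correlation
print(np.corrcoef(factor_scores, Z[:, 1])[0, 1])  # high correlation
```

The single factor carries most of the shared information, which is exactly the sense in which the two redundant items are reduced to one.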

Principal Components Analysis. The example described above, combining two correlated variables into one factor, illustrates the basic idea of factor analysis, or of principal components analysis to be precise (we will return to this later). If we extend the two-variable example to multiple variables, then the computations become more involved, but the basic principle of expressing two or more variables by a single factor remains the same.


Extracting Principal Components. We do not want to go into the details about the computational aspects of principal components analysis here, which can be found elsewhere (references were provided at the beginning of this section). However, basically, the extraction of principal components amounts to a variance maximizing (varimax) rotation of the original variable space. For example, in a scatterplot we can think of the regression line as the original X axis, rotated so that it approximates the regression line. This type of rotation is called variance maximizing because the criterion for (goal of) the rotation is to maximize the variance (variability) of the "new" variable (factor), while minimizing the variance around the new variable (see Rotational Strategies).

Generalizing to the Case of Multiple Variables. When there are more than two variables, we can think of them as defining a "space," just as two variables defined a plane. Thus, when we have three variables, we could plot a three-dimensional scatterplot, and, again, we could fit a plane through the data.
With more than three variables it becomes impossible to illustrate the points in a scatterplot; however, the logic of rotating the axes so as to maximize the variance of the new factor remains the same.
Multiple orthogonal factors. After we have found the line on which the variance is maximal, there remains some variability around this line. In principal components analysis, after the first factor has been extracted, that is, after the first line has been drawn through the data, we continue and define another line that maximizes the remaining variability, and so on. In this manner, consecutive factors are extracted. Because each consecutive factor is defined to maximize the variability that is not captured by the preceding factor, consecutive factors are independent of each other. Put another way, consecutive factors are uncorrelated or orthogonal to each other.
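The orthogonality of consecutive factors can be checked numerically. A short NumPy sketch (invented data; nothing here comes from the text itself):

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented data: 100 cases on 3 correlated variables.
X = rng.normal(size=(100, 3)) @ rng.normal(size=(3, 3))
Z = (X - X.mean(axis=0)) / X.std(axis=0)

# Extract all principal components of the correlation matrix, ordered
# so that each successive factor captures the largest remaining variance.
R = np.corrcoef(Z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Component scores: one column per factor.
scores = Z @ eigvecs

# Consecutive factors are uncorrelated: the correlation matrix of the
# scores is the identity up to numerical precision.
C = np.corrcoef(scores, rowvar=False)
print(np.round(C, 6))
```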

How many Factors to Extract? Remember that, so far, we are considering principal components analysis as a data reduction method, that is, as a method for reducing the number of variables. The question then is, how many factors do we want to extract? Note that as we extract consecutive factors, they account for less and less variability. The decision of when to stop extracting factors basically depends on when there is only very little "random" variability left. The nature of this decision is arbitrary; however, various guidelines have been developed, and they are reviewed in Reviewing the Results of a Principal Components Analysis under Eigenvalues and the Number-of-Factors Problem.

Reviewing the Results of a Principal Components Analysis. Without further ado, let us now look at some of the standard results from a principal components analysis. To reiterate, we are extracting factors that account for less and less variance. To simplify matters, you usually start with the correlation matrix, where the variances of all variables are equal to 1.0. Therefore, the total variance in that matrix is equal to the number of variables. For example, if we have 10 variables each with a variance of 1 then the total variability that can potentially be extracted is equal to 10 times 1. Suppose that in the satisfaction study introduced earlier we included 10 items to measure different aspects of satisfaction at home and at work. The variance accounted for by successive factors would be summarized as follows:
STATISTICA FACTOR ANALYSIS
Eigenvalues (factor.sta)
Extraction: Principal components

Value   Eigenvalue   % total variance   Cumul. Eigenvalue   Cumul. %
1         6.118369        61.18369            6.11837        61.1837
2         1.800682        18.00682            7.91905        79.1905
3          .472888         4.72888            8.39194        83.9194
4          .407996         4.07996            8.79993        87.9993
5          .317222         3.17222            9.11716        91.1716
6          .293300         2.93300            9.41046        94.1046
7          .195808         1.95808            9.60626        96.0626
8          .170431         1.70431            9.77670        97.7670
9          .137970         1.37970            9.91467        99.1467
10         .085334          .85334           10.00000       100.0000
Eigenvalues
In the second column (Eigenvalue) above, we find the variance on the new factors that were successively extracted. In the third column, these values are expressed as a percent of the total variance (in this example, 10). As we can see, factor 1 accounts for 61 percent of the variance, factor 2 for 18 percent, and so on. As expected, the sum of the eigenvalues is equal to the number of variables. The last two columns contain the cumulative eigenvalues and the cumulative percent of variance extracted. The variances extracted by the factors are called the eigenvalues. This name derives from the computational issues involved.
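The bookkeeping in the table is easy to reproduce. In this NumPy sketch (invented data standing in for the 10 satisfaction items), the eigenvalues of the correlation matrix sum to the number of variables, and the percent and cumulative columns follow directly:

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented stand-in for 10 questionnaire items sharing a common factor.
X = rng.normal(size=(200, 10)) + rng.normal(size=(200, 1))
R = np.corrcoef(X, rowvar=False)

# Eigenvalues of the correlation matrix, largest first.
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]

# Each standardized variable contributes variance 1.0, so the total is 10.
print(round(eigvals.sum(), 6))                               # → 10.0
print(np.round(100 * eigvals / eigvals.size, 2))             # "% total variance"
print(np.round(100 * np.cumsum(eigvals) / eigvals.size, 2))  # "Cumul. %"
```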

Eigenvalues and the Number-of-Factors Problem

Now that we have a measure of how much variance each successive factor extracts, we can return to the question of how many factors to retain. As mentioned earlier, by its nature this is an arbitrary decision. However, there are some guidelines that are commonly used, and that, in practice, seem to yield the best results.

The Kaiser criterion. First, we can retain only factors with eigenvalues greater than 1. In essence this is like saying that, unless a factor extracts at least as much as the equivalent of one original variable, we drop it. This criterion was proposed by Kaiser (1960), and is probably the one most widely used. In our example above, using this criterion, we would retain 2 factors (principal components).
The scree test. A graphical method is the scree test first proposed by Cattell (1966). We can plot the eigenvalues shown above in a simple line plot.

Cattell suggests finding the place where the smooth decrease of eigenvalues appears to level off to the right of the plot. To the right of this point, presumably, you find only "factorial scree" - "scree" is the geological term referring to the debris which collects on the lower part of a rocky slope. According to this criterion, we would probably retain 2 or 3 factors in our example.
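Both guidelines can be applied directly to the eigenvalues in the table above. A short sketch (the scree "elbow" is judged by eye in practice; here the successive drops are simply printed for inspection):

```python
import numpy as np

# The ten eigenvalues from the table above.
eigvals = np.array([6.118369, 1.800682, 0.472888, 0.407996, 0.317222,
                    0.293300, 0.195808, 0.170431, 0.137970, 0.085334])

# Kaiser criterion: retain factors whose eigenvalue exceeds 1, i.e. that
# extract at least as much variance as one original variable.
kaiser_k = int((eigvals > 1).sum())
print(kaiser_k)  # → 2

# Scree inspection: the first two drops are large, after which the
# decrease levels off into the "factorial scree."
print(np.round(eigvals[:-1] - eigvals[1:], 3))
```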

Which criterion to use. Both criteria have been studied in detail (Browne, 1968; Cattell & Jaspers, 1967; Hakstian, Rogers, & Cattell, 1982; Linn, 1968; Tucker, Koopman & Linn, 1969). Theoretically, you can evaluate those criteria by generating random data based on a particular number of factors. You can then see whether the number of factors is accurately detected by those criteria. Using this general technique, the first method (Kaiser criterion) sometimes retains too many factors, while the second technique (scree test) sometimes retains too few; however, both do quite well under normal conditions, that is, when there are relatively few factors and many cases. In practice, an additional important aspect is the extent to which a solution is interpretable. Therefore, you usually examine several solutions with more or fewer factors, and choose the one that makes the best "sense." We will discuss this issue in the context of factor rotations below.

Principal Factors Analysis

Before we continue to examine the different aspects of the typical output from a principal components analysis, let us now introduce principal factors analysis. Let us return to our satisfaction questionnaire example to conceive of another "mental model" for factor analysis. We can think of subjects' responses as being dependent on two components. First, there are some underlying common factors, such as the "satisfaction-with-hobbies" factor we looked at before. Each item measures some part of this common aspect of satisfaction. Second, each item also captures a unique aspect of satisfaction that is not addressed by any other item.

Communalities. If this model is correct, then we should not expect that the factors will extract all variance from our items; rather, only that proportion that is due to the common factors and shared by several items. In the language of factor analysis, the proportion of variance of a particular item that is due to common factors (shared with other items) is called communality. Therefore, an additional task facing us when applying this model is to estimate the communalities for each variable, that is, the proportion of variance that each item has in common with other items. The proportion of variance that is unique to each item is then the respective item's total variance minus the communality. A common starting point is to use the squared multiple correlation of an item with all other items as an estimate of the communality (refer to Multiple Regression for details about multiple regression). Some authors have suggested various iterative "post-solution improvements" to the initial multiple regression communality estimate; for example, the so-called MINRES method (minimum residual factor method; Harman & Jones, 1966) will try various modifications to the factor loadings with the goal to minimize the residual (unexplained) sums of squares.
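A common way to compute the squared-multiple-correlation starting estimate uses the inverse of the correlation matrix: SMC_i = 1 - 1/(R⁻¹)_ii. A NumPy sketch with invented items (two common factors plus unique noise per item):

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented items: two common factors, each measured by two items,
# plus unique noise per item.
n = 500
f1, f2 = rng.normal(size=n), rng.normal(size=n)
X = np.column_stack([f1 + 0.5 * rng.normal(size=n),
                     f1 + 0.5 * rng.normal(size=n),
                     f2 + 0.5 * rng.normal(size=n),
                     f2 + 0.5 * rng.normal(size=n)])
R = np.corrcoef(X, rowvar=False)

# Starting communality estimate for each item: its squared multiple
# correlation with all other items, via the inverse correlation matrix.
smc = 1.0 - 1.0 / np.diag(np.linalg.inv(R))
print(np.round(smc, 3))  # proportion of variance shared with other items
```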
Principal factors vs. principal components. The defining characteristic then that distinguishes between the two factor analytic models is that in principal components analysis we assume that all variability in an item should be used in the analysis, while in principal factors analysis we only use the variability in an item that it has in common with the other items. A detailed discussion of the pros and cons of each approach is beyond the scope of this introduction (refer to the general references provided in Principal components and Factor Analysis - Introductory Overview). In most cases, these two methods usually yield very similar results. However, principal components analysis is often preferred as a method for data reduction, while principal factors analysis is often preferred when the goal of the analysis is to detect structure (see Factor Analysis as a Classification Method).

Factor Analysis as a Classification Method
Let us now return to the interpretation of the standard results from a factor analysis. We will henceforth use the term factor analysis generically to encompass both principal components and principal factors analysis. Let us assume that we are at the point in our analysis where we basically know how many factors to extract. We may now want to know the meaning of the factors, that is, whether and how we can interpret them in a meaningful manner. To illustrate how this can be accomplished, let us work "backwards," that is, begin with a meaningful structure and then see how it is reflected in the results of a factor analysis. Let us return to our satisfaction example; shown below is the correlation matrix for items pertaining to satisfaction at work and items pertaining to satisfaction at home.
STATISTICA FACTOR ANALYSIS
Correlations (factor.sta)
Casewise deletion of MD, n=100

Variable   WORK_1   WORK_2   WORK_3   HOME_1   HOME_2   HOME_3
WORK_1       1.00      .65      .65      .14      .15      .14
WORK_2        .65     1.00      .73      .14      .18      .24
WORK_3        .65      .73     1.00      .16      .24      .25
HOME_1        .14      .14      .16     1.00      .66      .59
HOME_2        .15      .18      .24      .66     1.00      .73
HOME_3        .14      .24      .25      .59      .73     1.00
The work satisfaction items are highly correlated amongst themselves, and the home satisfaction items are highly intercorrelated amongst themselves. The correlations across these two types of items (work satisfaction items with home satisfaction items) are comparatively small. It thus seems that there are two relatively independent factors reflected in the correlation matrix, one related to satisfaction at work, the other related to satisfaction at home.
Factor Loadings. Let us now perform a principal components analysis and look at the two-factor solution. Specifically, let us look at the correlations between the variables and the two factors (or "new" variables), as they are extracted by default; these correlations are also called factor loadings.
STATISTICA FACTOR ANALYSIS
Factor Loadings (Unrotated)
Principal components

Variable   Factor 1   Factor 2
WORK_1      .654384    .564143
WORK_2      .715256    .541444
WORK_3      .741688    .508212
HOME_1      .634120   -.563123
HOME_2      .706267   -.572658
HOME_3      .707446   -.525602

Expl.Var   2.891313   1.791000
Prp.Totl    .481885    .298500
Apparently, the first factor is generally more highly correlated with the variables than the second factor. This is to be expected because, as previously described, these factors are extracted successively and will account for less and less variance overall.
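The claim that loadings are correlations between variables and factors can be verified directly. In this sketch (NumPy, invented two-cluster data loosely mimicking the work/home example), loadings computed as eigenvector times the square root of the eigenvalue match the correlations between each variable and the factor scores:

```python
import numpy as np

rng = np.random.default_rng(4)

# Invented data: three "work" items and three "home" items.
n = 100
work, home = rng.normal(size=n), rng.normal(size=n)
X = np.column_stack([work + 0.7 * rng.normal(size=n) for _ in range(3)] +
                    [home + 0.7 * rng.normal(size=n) for _ in range(3)])
Z = (X - X.mean(axis=0)) / X.std(axis=0)
R = np.corrcoef(Z, rowvar=False)

# Two-factor solution: loadings are eigenvectors scaled by sqrt(eigenvalue).
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1][:2]
loadings = eigvecs[:, order] * np.sqrt(eigvals[order])

# Factor scores, and the correlation of every variable with every factor.
scores = Z @ eigvecs[:, order]
corrs = np.corrcoef(np.column_stack([Z, scores]), rowvar=False)[:6, 6:]

print(np.allclose(corrs, loadings))  # → True: the loadings ARE these correlations
```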
Rotating the Factor Structure. We could plot the factor loadings shown above in a scatterplot. In that plot, each variable is represented as a point. In this plot we could rotate the axes in any direction without changing the relative locations of the points to each other; however, the actual coordinates of the points, that is, the factor loadings would of course change. In this example, if you produce the plot it will be evident that if we were to rotate the axes by about 45 degrees we might attain a clear pattern of loadings identifying the work satisfaction items and the home satisfaction items.
Rotational strategies. There are various rotational strategies that have been proposed. The goal of all of these strategies is to obtain a clear pattern of loadings, that is, factors that are somehow clearly marked by high loadings for some variables and low loadings for others. This general pattern is also sometimes referred to as simple structure (a more formalized definition can be found in most standard textbooks). Typical rotational strategies are varimax, quartimax, and equamax.
We have described the idea of the varimax rotation before (see Extracting Principal Components), and it can be applied to this problem as well. As before, we want to find a rotation that maximizes the variance on the new axes; put another way, we want to obtain a pattern of loadings on each factor that is as diverse as possible, lending itself to easier interpretation. Below is the table of rotated factor loadings.
STATISTICA FACTOR ANALYSIS
Factor Loadings (Varimax normalized)
Extraction: Principal components

Variable   Factor 1   Factor 2
WORK_1      .862443    .051643
WORK_2      .890267    .110351
WORK_3      .886055    .152603
HOME_1      .062145    .845786
HOME_2      .107230    .902913
HOME_3      .140876    .869995

Expl.Var   2.356684   2.325629
Prp.Totl    .392781    .387605
Interpreting the Factor Structure. Now the pattern is much clearer. As expected, the first factor is marked by high loadings on the work satisfaction items, the second factor is marked by high loadings on the home satisfaction items. We would thus conclude that satisfaction, as measured by our questionnaire, is composed of those two aspects; hence we have arrived at a classification of the variables.
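For the curious, the rotation itself is only a few lines. Below is a minimal SVD-based varimax sketch (my own implementation, without the Kaiser row-normalization used in the "Varimax normalized" table, so the numbers come out close to but not identical with those above), applied to the unrotated loadings:

```python
import numpy as np

def varimax(L, tol=1e-8, max_iter=500):
    """Rotate loadings L (variables x factors) to maximize the variance
    of the squared loadings within each factor (raw varimax)."""
    p, k = L.shape
    rotation = np.eye(k)
    crit = 0.0
    for _ in range(max_iter):
        LR = L @ rotation
        u, s, vt = np.linalg.svd(
            L.T @ (LR**3 - LR @ np.diag((LR**2).sum(axis=0)) / p))
        rotation = u @ vt
        if s.sum() < crit * (1 + tol):   # stop once the criterion stalls
            break
        crit = s.sum()
    return L @ rotation

# Unrotated loadings from the table above.
L = np.array([[0.654384,  0.564143],
              [0.715256,  0.541444],
              [0.741688,  0.508212],
              [0.634120, -0.563123],
              [0.706267, -0.572658],
              [0.707446, -0.525602]])

rotated = varimax(L)
# Each item now loads highly on one factor and weakly on the other.
print(np.round(np.abs(rotated), 3))
```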
Consider another example, this time with four additional Hobby/Misc variables added to our earlier example.
In the plot of factor loadings above, 10 variables were reduced to three specific factors, a work factor, a home factor, and a hobby/misc. factor. Note that the factor loadings for each factor are spread out over the values of the other two factors but are high for its own values. For example, the factor loadings for the hobby/misc variables (shown in green in the original plot) have both high and low "work" and "home" values, but all four of these variables have high factor loadings on the "hobby/misc" factor.

Oblique Factors. Some authors (e.g., Cattell & Khanna; Harman, 1976; Jennrich & Sampson, 1966; Clarkson & Jennrich, 1988) have discussed in some detail the concept of oblique (non-orthogonal) factors, in order to achieve more interpretable simple structure. Specifically, computational strategies have been developed to rotate factors so as to best represent "clusters" of variables, without the constraint of orthogonality of factors. However, the oblique factors produced by such rotations are often not easily interpreted. To return to the example discussed above, suppose we would have included in the satisfaction questionnaire above four items that measured other, "miscellaneous" types of satisfaction. Let us assume that people's responses to those items were affected about equally by their satisfaction at home (Factor 1) and at work (Factor 2). An oblique rotation will likely produce two correlated factors with less-than-obvious meaning, that is, with many cross-loadings.

Van Manen/ Catherine Adams/ Search and Find: Margith Strand/ June 11, 2010/ Fielding Graduate University

Catherine Adams & Max van Manen (2009). The Phenomenology of Space in Writing Online. Educational Philosophy and Theory 41 (1): 10-21.
In this paper we explore the phenomenon of writing online. We ask, 'Is writing by means of online technologies affected in a manner that differs significantly from the older technologies of pen on paper, typewriter, or even the word processor in an off-line environment?' In writing online, the author is engaged in a spatial complexity of physical, temporal, imaginal, and virtual experience: the writing space, the space of the text, cyber space, etc. At times, these may provide a conduit to a writerly understanding of human phenomena. We propose that an examination of the phenomenological features of online writing may contribute to a more pedagogically sensitive understanding of the experiences of online seminars, teaching and learning.

18 Critical Discourse Analysis by Teun A. Van Dijk/ Search and Find by Margith Strand/ May 26, 2010

18 Critical Discourse Analysis
TEUN A. VAN DIJK
0 Introduction: What Is Critical Discourse Analysis?
Critical discourse analysis (CDA) is a type of discourse analytical research that primarily
studies the way social power abuse, dominance, and inequality are enacted,
reproduced, and resisted by text and talk in the social and political context. With
such dissident research, critical discourse analysts take explicit position, and thus
want to understand, expose, and ultimately resist social inequality.
Some of the tenets of CDA can already be found in the critical theory of the
Frankfurt School before the Second World War (Agger 1992b; Rasmussen 1996). Its
current focus on language and discourse was initiated with the "critical linguistics"
that emerged (mostly in the UK and Australia) at the end of the 1970s (Fowler et al.
1979; see also Mey 1985). CDA also has counterparts in "critical" developments in
sociolinguistics, psychology, and the social sciences, some already dating back to the
early 1970s (Birnbaum 1971; Calhoun 1995; Fay 1987; Fox and Prilleltensky 1997;
Hymes 1972; Ibanez and Iniguez 1997; Singh 1996; Thomas 1993; Turkel 1996; Wodak
1996). As is the case in these neighboring disciplines, CDA may be seen as a reaction
against the dominant formal (often "asocial" or "uncritical") paradigms of the 1960s
and 1970s.
CDA is not so much a direction, school, or specialization next to the many other
"approaches" in discourse studies. Rather, it aims to offer a different "mode" or
"perspective" of theorizing, analysis, and application throughout the whole field. We
may find a more or less critical perspective in such diverse areas as pragmatics,
conversation analysis, narrative analysis, rhetoric, stylistics, sociolinguistics, ethnography,
or media analysis, among others.
Crucial for critical discourse analysts is the explicit awareness of their role in society.
Continuing a tradition that rejects the possibility of a "value-free" science, they
argue that science, and especially scholarly discourse, are inherently part of and
influenced by social structure, and produced in social interaction. Instead of denying
or ignoring such a relation between scholarship and society, they plead that such
relations be studied and accounted for in their own right, and that scholarly practices
be based on such insights. Theory formation, description, and explanation, also in
discourse analysis, are sociopolitically "situated," whether we like it or not. Reflection
on the role of scholars in society and the polity thus becomes an inherent part
of the discourse analytical enterprise. This may mean, among other things, that discourse
analysts conduct research in solidarity and cooperation with dominated groups.
Critical research on discourse needs to satisfy a number of requirements in order to
effectively realize its aims:
• As is often the case for more marginal research traditions, CDA research has to be
"better" than other research in order to be accepted.
• It focuses primarily on social problems and political issues, rather than on current
paradigms and fashions.
• Empirically adequate critical analysis of social problems is usually multidisciplinary.
• Rather than merely describe discourse structures, it tries to explain them in terms of
properties of social interaction and especially social structure.
• More specifically, CDA focuses on the ways discourse structures enact, confirm,
legitimate, reproduce, or challenge relations of power and dominance in society.
Fairclough and Wodak (1997: 271-80) summarize the main tenets of CDA as follows:
1. CDA addresses social problems
2. Power relations are discursive
3. Discourse constitutes society and culture
4. Discourse does ideological work
5. Discourse is historical
6. The link between text and society is mediated
7. Discourse analysis is interpretative and explanatory
8. Discourse is a form of social action.
Whereas some of these tenets have also been discussed above, others need a more
systematic theoretical analysis, of which we shall present some fragments here as a
more or less general basis for the main principles of CDA (for details about these
aims of critical discourse and language studies, see, e.g., Caldas-Coulthard and
Coulthard 1996; Fairclough 1992a, 1995a; Fairclough and Wodak 1997; Fowler et al.
1979; van Dijk 1993b).
1 Conceptual and Theoretical Frameworks
Since CDA is not a specific direction of research, it does not have a unitary theoretical
framework. Within the aims mentioned above, there are many types of CDA, and
these may be theoretically and analytically quite diverse. Critical analysis of conversation
is very different from an analysis of news reports in the press or of lessons and
teaching at school. Yet, given the common perspective and the general aims of CDA,
we may also find overall conceptual and theoretical frameworks that are closely
related. As suggested, most kinds of CDA will ask questions about the way specific
discourse structures are deployed in the reproduction of social dominance, whether
they are part of a conversation or a news report or other genres and contexts. Thus,
the typical vocabulary of many scholars in CDA will feature such notions as "power,"
"dominance," "hegemony," "ideology," "class," "gender," "race," "discrimination,"
"interests," "reproduction," "institutions," "social structure," and "social order," besides
the more familiar discourse analytical notions.

"Learning How to Believe: Epistemic Development in Cultural Context" By Eli Gottlieb/ Search and Find: May 26, 2010 by Margith Strand

Eli Gottlieb
Mandel Leadership Institute
Jerusalem, Israel
Over the last decade, researchers have become increasingly interested in students’
beliefs about the nature of knowledge and how these beliefs develop. Although initial
psychological accounts portrayed epistemic development as a domain-independent
process of cognitive maturation, recent studies have found trajectories of epistemic
development to vary considerably across contexts. However, few studies have focused
on cultural context. This article examines the role community values and practices
play in fostering particular epistemological orientations by comparing the
epistemological beliefs of 5th, 8th, and 12th graders (N = 200) from General and
Religious schools in Israel regarding 2 controversies: belief in God and punishment of
children. In both controversies, older participants were less likely than younger participants
to consider the controversy rationally decidable. However, this shift
emerged earlier in the God controversy than in the punishment controversy. In the
God controversy, General pupils were less likely than Religious pupils to consider
the question rationally decidable or their own beliefs infallible. But no such school
differences were observed in the punishment controversy. Qualitative and quantitative
analyses linked these differences to divergent discourse practices at General and
Religious schools, suggesting that the relations between learning and epistemic development
are more intricate than has been assumed hitherto.
Epistemology is an area of philosophy concerned with questions of what knowledge
is and how it is justified. Although few people give these questions such detailed
and sustained attention as professional philosophers, anyone attempting to
acquire, produce, or evaluate knowledge relies, at least implicitly, on some set of
epistemological beliefs. Such beliefs are of obvious interest to educators. To understand
how students acquire, evaluate, and justify knowledge, we need to understand...

THE JOURNAL OF THE LEARNING SCIENCES, 16(1), 5–35. Copyright © 2007, Lawrence Erlbaum Associates, Inc.
Correspondence should be addressed to Eli Gottlieb, Mandel Leadership Institute, P.O. Box 10613, Jerusalem 93553, Israel. E-mail: gottlieb@mandelinstitute.org.il