Thursday, January 13, 2011

Past input into the Systems...from Margith:)



Initial Work of mine...Margith Strand/

Critical Discourse Analysis (CDA) is a rapidly developing area of language study. It regards discourse as a "form of social practice" (Fairclough, 1995), and considers the context of language use to be crucial to discourse (Wodak, 2001).  It takes particular interest in the relation between language and power.  Claims can be made that cultural and economic dimensions are significant in the creation and maintenance of power relations.  The key figures in this area include Fairclough (1992a, b, c, 1993, 1995a, b, 1996, 1998, 2000, 2001, 2003), van Dijk (1993, 1997, 1998a, b, 1999, 2001), Gee (1999, 2005), van Leeuwen (1993, 1995, 1996), Wodak (1996, 2000, 2001), and Scollon (2001).  It is generally agreed that Critical Discourse Analysis is viewed as an approach that consists of different perspectives and different methods for studying the relationship between the use of language and social context. 

It is within this construction of language and power that I am addressing the field of Distance Education, with special attention to the Constructivistic frame of network formats in language and the delivery expression of Semiotic Assessment.  These objectives will be addressed within the context of Teaching and Learning in the realm of Distance Education.  The methodology for analysis will be constructed through the application of an in-text, intertextual, attitudinal-evaluative, reductionistic, and socially atomistic framework, set against the backdrop of the platform layout of the online classroom and arena. 

Organizational behavior and organizational management techniques of analysis will also be used to validate and describe the construction of the Social Psychological parameters of Group Processes, Group Decision Making, Organizational Culture, Cross-cultural Analysis, Personality (preferential attitudinal measurements), Work Design (platform effectiveness), Motivation (environmental constraints and measurements), Perception (attitudinal measurements), Leadership Effectiveness (facilitator effectivity), and Performativity. 

One of the first principles of Critical Discourse Analysis is that it addresses social problems.  C.D.A. focuses not only on language and language use, but also on the linguistic characteristics of social and cultural processes.  It is to this end that I have decided to employ C.D.A. for analytical expression in my Constructivistic examination of the field of Distance Teaching and Learning.  Critical Discourse Analysis aims to derive results which are of practical relevance to the social, cultural, political and even economic contexts (Fairclough & Wodak, 1997).

The second principle of C.D.A. is that power relations are discursive.  C.D.A. explains through this method of approach how social relations of power are exercised and negotiated in and through discourse (Fairclough & Wodak, 1997).  The next principle is that discourse constitutes society and culture.  This means that every instance of language use makes its own contribution to reproducing and transforming society and culture, including relations of power (Fairclough & Wodak, 1997). In the design and application of C.D.A., I wish to integrate the approach constructed in the organizational patterning of Distance Education, namely the Teaching and Learning field, and bring to light the constructivistic details of the protocol exhibited within a science realm of online courses. 


Beginnings of beginnings....and onto...Discourse:)

Dissertation General/ November 13, 2009
Margith A. Strand


Topic/Question:
What is a systematic analysis of a closed space of the distance education field through a Grounded Theory perspective, i.e., gerunding?

Intent:
To formulate the essentials in the distance education field so that improvement features can be generated upon observation for other model theatres.



Research:
Investigate the data generated through three years of student-Instructor interaction. Look for features placed at the beginning of the study.

Data Analysis: Formulation of Gerunds
Placement of data and significance of gerunding thematics found from the student-Instructor profiles.


Re-Question
Reinvestigate the thematics and verify the gerunding picture noted for the sample population found.  Define the gerunds as found for the groups and compare the results between differing environmental factors such as course topics.


Search and Find by Margith Strand/ May 24, 2010/ Used in Concept for Dissertation

Source: StatSoft.com Textbook (StatSoft, creators of STATISTICA data analysis software and services)
Principal Components and Factor Analysis
General Purpose
Basic Idea of Factor Analysis as a Data Reduction Method
Factor Analysis as a Classification Method
Miscellaneous Other Issues and Statistics

General Purpose
The main applications of factor analytic techniques are: (1) to reduce the number of variables and (2) to detect structure in the relationships between variables, that is to classify variables. Therefore, factor analysis is applied as a data reduction or structure detection method (the term factor analysis was first introduced by Thurstone, 1931). The topics listed below will describe the principles of factor analysis, and how it can be applied towards these two purposes. We will assume that you are familiar with the basic logic of statistical reasoning as described in Elementary Concepts. Moreover, we will also assume that you are familiar with the concepts of variance and correlation; if not, we advise that you read the Basic Statistics topic at this point.
There are many excellent books on factor analysis. For example, a hands-on how-to approach can be found in Stevens (1986); more detailed technical descriptions are provided in Cooley and Lohnes (1971); Harman (1976); Kim and Mueller, (1978a, 1978b); Lawley and Maxwell (1971); Lindeman, Merenda, and Gold (1980); Morrison (1967); or Mulaik (1972). The interpretation of secondary factors in hierarchical factor analysis, as an alternative to traditional oblique rotational strategies, is explained in detail by Wherry (1984).
Confirmatory factor analysis. Structural Equation Modeling (SEPATH) allows you to test specific hypotheses about the factor structure for a set of variables, in one or several samples (e.g., you can compare factor structures across samples).

Correspondence analysis. Correspondence analysis is a descriptive/exploratory technique designed to analyze two-way and multi-way tables containing some measure of correspondence between the rows and columns. The results provide information which is similar in nature to those produced by factor analysis techniques, and they allow you to explore the structure of categorical variables included in the table. For more information regarding these methods, refer to Correspondence Analysis.

Basic Idea of Factor Analysis as a Data Reduction Method

Suppose we conducted a (rather "silly") study in which we measure 100 people's height in inches and centimeters. Thus, we would have two variables that measure height. If in future studies, we want to research, for example, the effect of different nutritional food supplements on height, would we continue to use both measures? Probably not; height is one characteristic of a person, regardless of how it is measured.
Let's now extrapolate from this "silly" study to something that you might actually do as a researcher. Suppose we want to measure people's satisfaction with their lives. We design a satisfaction questionnaire with various items; among other things we ask our subjects how satisfied they are with their hobbies (item 1) and how intensely they are pursuing a hobby (item 2). Most likely, the responses to the two items are highly correlated with each other. (If you are not familiar with the correlation coefficient, we recommend that you read the description in Basic Statistics - Correlations) Given a high correlation between the two items, we can conclude that they are quite redundant.

Combining Two Variables into a Single Factor. You can summarize the correlation between two variables in a scatterplot. A regression line can then be fitted that represents the "best" summary of the linear relationship between the variables. If we could define a variable that would approximate the regression line in such a plot, then that variable would capture most of the "essence" of the two items. Subjects' single scores on that new factor, represented by the regression line, could then be used in future data analyses to represent that essence of the two items. In a sense we have reduced the two variables to one factor. Note that the new factor is actually a linear combination of the two variables.
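As an aside (not part of the original StatSoft text), the mechanics can be sketched in a few lines of Python; the data and variable names here are hypothetical and simulated only to show how two standardized items are projected onto their first principal component:

import numpy as np

# Hypothetical example: reduce two correlated satisfaction items to one factor
# by projecting standardized scores onto the first principal component.
rng = np.random.default_rng(0)
hobby_satisfaction = rng.normal(size=100)
hobby_intensity = 0.8 * hobby_satisfaction + 0.6 * rng.normal(size=100)

X = np.column_stack([hobby_satisfaction, hobby_intensity])
X = (X - X.mean(axis=0)) / X.std(axis=0)              # standardize both items

corr = np.corrcoef(X, rowvar=False)                    # 2 x 2 correlation matrix
eigvals, eigvecs = np.linalg.eigh(corr)
first_pc = eigvecs[:, np.argmax(eigvals)]              # direction of maximal variance
factor_scores = X @ first_pc                           # one score per subject

print("variance explained by the single factor:", eigvals.max() / eigvals.sum())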

Principal Components Analysis. The example described above, combining two correlated variables into one factor, illustrates the basic idea of factor analysis, or of principal components analysis to be precise (we will return to this later). If we extend the two-variable example to multiple variables, then the computations become more involved, but the basic principle of expressing two or more variables by a single factor remains the same.


Extracting Principal Components. We do not want to go into the details about the computational aspects of principal components analysis here, which can be found elsewhere (references were provided at the beginning of this section). However, basically, the extraction of principal components amounts to a variance maximizing (varimax) rotation of the original variable space. For example, in a scatterplot we can think of the regression line as the original X axis, rotated so that it approximates the regression line. This type of rotation is called variance maximizing because the criterion for (goal of) the rotation is to maximize the variance (variability) of the "new" variable (factor), while minimizing the variance around the new variable (see Rotational Strategies).

Generalizing to the Case of Multiple Variables. When there are more than two variables, we can think of them as defining a "space," just as two variables defined a plane. Thus, when we have three variables, we could plot a three-dimensional scatterplot, and, again we could fit a plane through the data.
With more than three variables it becomes impossible to illustrate the points in a scatterplot, however, the logic of rotating the axes so as to maximize the variance of the new factor remains the same.
Multiple orthogonal factors. After we have found the line on which the variance is maximal, there remains some variability around this line. In principal components analysis, after the first factor has been extracted, that is, after the first line has been drawn through the data, we continue and define another line that maximizes the remaining variability, and so on. In this manner, consecutive factors are extracted. Because each consecutive factor is defined to maximize the variability that is not captured by the preceding factor, consecutive factors are independent of each other. Put another way, consecutive factors are uncorrelated or orthogonal to each other.
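A short Python sketch (not from the source; the data are simulated) illustrates this successive extraction: the component scores obtained from the eigenvectors of a correlation matrix are mutually uncorrelated, i.e., orthogonal:

import numpy as np

# Simulated illustration: successive orthogonal components from a correlation matrix.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))
X[:, 1] += X[:, 0]                                     # make two items correlate
X = (X - X.mean(axis=0)) / X.std(axis=0)

eigvals, eigvecs = np.linalg.eigh(np.corrcoef(X, rowvar=False))
order = np.argsort(eigvals)[::-1]                      # largest variance first
scores = X @ eigvecs[:, order]                         # successive factor scores

# The component scores are mutually uncorrelated (off-diagonals ~ 0).
print(np.round(np.corrcoef(scores, rowvar=False), 6))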

How many Factors to Extract? Remember that, so far, we are considering principal components analysis as a data reduction method, that is, as a method for reducing the number of variables. The question then is, how many factors do we want to extract? Note that as we extract consecutive factors, they account for less and less variability. The decision of when to stop extracting factors basically depends on when there is only very little "random" variability left. The nature of this decision is arbitrary; however, various guidelines have been developed, and they are reviewed in Reviewing the Results of a Principal Components Analysis under Eigenvalues and the Number-of-Factors Problem.

Reviewing the Results of a Principal Components Analysis. Without further ado, let us now look at some of the standard results from a principal components analysis. To reiterate, we are extracting factors that account for less and less variance. To simplify matters, you usually start with the correlation matrix, where the variances of all variables are equal to 1.0. Therefore, the total variance in that matrix is equal to the number of variables. For example, if we have 10 variables each with a variance of 1 then the total variability that can potentially be extracted is equal to 10 times 1. Suppose that in the satisfaction study introduced earlier we included 10 items to measure different aspects of satisfaction at home and at work. The variance accounted for by successive factors would be summarized as follows:
STATISTICA Factor Analysis: Eigenvalues (factor.sta)
Extraction: Principal components

Value   Eigenvalue   % Total Variance   Cumulative Eigenvalue   Cumulative %
1       6.118369     61.18369            6.11837                 61.1837
2       1.800682     18.00682            7.91905                 79.1905
3        .472888      4.72888            8.39194                 83.9194
4        .407996      4.07996            8.79993                 87.9993
5        .317222      3.17222            9.11716                 91.1716
6        .293300      2.93300            9.41046                 94.1046
7        .195808      1.95808            9.60626                 96.0626
8        .170431      1.70431            9.77670                 97.7670
9        .137970      1.37970            9.91467                 99.1467
10       .085334       .85334           10.00000                100.0000
Eigenvalues
In the second column (Eigenvalue) above, we find the variance on the new factors that were successively extracted. In the third column, these values are expressed as a percent of the total variance (in this example, 10). As we can see, factor 1 accounts for 61 percent of the variance, factor 2 for 18 percent, and so on. As expected, the sum of the eigenvalues is equal to the number of variables. The fourth and fifth columns contain the cumulative eigenvalues and the cumulative percent of variance extracted. The variances extracted by the factors are called the eigenvalues. This name derives from the computational issues involved.
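The same kind of summary can be produced for any correlation matrix with a short Python sketch (not from the StatSoft text; the three-item matrix below is hypothetical and only shows the calling pattern):

import numpy as np

def eigenvalue_table(R):
    # Variance extracted by successive principal components of a correlation matrix R:
    # eigenvalue, % of total variance, cumulative eigenvalue, cumulative %.
    eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]
    pct = 100 * eigvals / eigvals.sum()
    return np.column_stack([eigvals, pct, np.cumsum(eigvals), np.cumsum(pct)])

# Hypothetical 3-item correlation matrix, used only to demonstrate the function.
R = np.array([[1.0, 0.6, 0.5],
              [0.6, 1.0, 0.4],
              [0.5, 0.4, 1.0]])
print(eigenvalue_table(R))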

Eigenvalues and the Number-of-Factors Problem

Now that we have a measure of how much variance each successive factor extracts, we can return to the question of how many factors to retain. As mentioned earlier, by its nature this is an arbitrary decision. However, there are some guidelines that are commonly used, and that, in practice, seem to yield the best results.

The Kaiser criterion. First, we can retain only factors with eigenvalues greater than 1. In essence this is like saying that, unless a factor extracts at least as much as the equivalent of one original variable, we drop it. This criterion was proposed by Kaiser (1960), and is probably the one most widely used. In our example above, using this criterion, we would retain 2 factors (principal components).
The scree test. A graphical method is the scree test first proposed by Cattell (1966). We can plot the eigenvalues shown above in a simple line plot.

Cattell suggests finding the place where the smooth decrease of eigenvalues appears to level off to the right of the plot. To the right of this point, presumably, you find only "factorial scree" - "scree" is the geological term referring to the debris which collects on the lower part of a rocky slope. According to this criterion, we would probably retain 2 or 3 factors in our example.
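Both rules can be illustrated with a short Python sketch using the eigenvalues from the table above (the plotting code is a generic matplotlib example, not STATISTICA output):

import numpy as np
import matplotlib.pyplot as plt

# Eigenvalues from the satisfaction example above.
eigvals = np.array([6.118369, 1.800682, 0.472888, 0.407996, 0.317222,
                    0.293300, 0.195808, 0.170431, 0.137970, 0.085334])

# Kaiser criterion: keep factors whose eigenvalue exceeds 1.0.
n_kaiser = int(np.sum(eigvals > 1.0))
print("factors retained by the Kaiser criterion:", n_kaiser)   # -> 2

# Scree test: plot eigenvalues and look for the point where the curve levels off.
plt.plot(np.arange(1, len(eigvals) + 1), eigvals, marker="o")
plt.xlabel("Factor number")
plt.ylabel("Eigenvalue")
plt.title("Scree plot")
plt.show()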

Which criterion to use. Both criteria have been studied in detail (Browne, 1968; Cattell & Jaspers, 1967; Hakstian, Rogers, & Cattell, 1982; Linn, 1968; Tucker, Koopman & Linn, 1969). Theoretically, you can evaluate those criteria by generating random data based on a particular number of factors. You can then see whether the number of factors is accurately detected by those criteria. Using this general technique, the first method (Kaiser criterion) sometimes retains too many factors, while the second technique (scree test) sometimes retains too few; however, both do quite well under normal conditions, that is, when there are relatively few factors and many cases. In practice, an additional important aspect is the extent to which a solution is interpretable. Therefore, you usually examine several solutions with more or fewer factors and choose the one that makes the best "sense." We will discuss this issue in the context of factor rotations below.

Principal Factors Analysis

Before we continue to examine the different aspects of the typical output from a principal components analysis, let us now introduce principal factors analysis. Let us return to our satisfaction questionnaire example to conceive of another "mental model" for factor analysis. We can think of subjects' responses as being dependent on two components. First, there are some underlying common factors, such as the "satisfaction-with-hobbies" factor we looked at before. Each item measures some part of this common aspect of satisfaction. Second, each item also captures a unique aspect of satisfaction that is not addressed by any other item.

Communalities. If this model is correct, then we should not expect that the factors will extract all variance from our items; rather, only that proportion that is due to the common factors and shared by several items. In the language of factor analysis, the proportion of variance of a particular item that is due to common factors (shared with other items) is called communality. Therefore, an additional task facing us when applying this model is to estimate the communalities for each variable, that is, the proportion of variance that each item has in common with other items. The proportion of variance that is unique to each item is then the respective item's total variance minus the communality. A common starting point is to use the squared multiple correlation of an item with all other items as an estimate of the communality (refer to Multiple Regression for details about multiple regression). Some authors have suggested various iterative "post-solution improvements" to the initial multiple regression communality estimate; for example, the so-called MINRES method (minimum residual factor method; Harman & Jones, 1966) will try various modifications to the factor loadings with the goal to minimize the residual (unexplained) sums of squares.
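A minimal Python sketch of the squared-multiple-correlation starting estimate (not from the source text; it uses the standard identity SMC_i = 1 - 1/[R^-1]_ii and the work/home correlation matrix shown later in this section):

import numpy as np

def smc_communalities(R):
    # Initial communality estimates: the squared multiple correlation of each
    # item with all other items, computed as 1 - 1/diag(inverse of R).
    R_inv = np.linalg.inv(R)
    return 1.0 - 1.0 / np.diag(R_inv)

# Work/home satisfaction correlation matrix used later in this section.
R = np.array([[1.00, .65, .65, .14, .15, .14],
              [ .65, 1.00, .73, .14, .18, .24],
              [ .65, .73, 1.00, .16, .24, .25],
              [ .14, .14, .16, 1.00, .66, .59],
              [ .15, .18, .24, .66, 1.00, .73],
              [ .14, .24, .25, .59, .73, 1.00]])
print(np.round(smc_communalities(R), 3))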
Principal factors vs. principal components. The defining characteristic then that distinguishes between the two factor analytic models is that in principal components analysis we assume that all variability in an item should be used in the analysis, while in principal factors analysis we only use the variability in an item that it has in common with the other items. A detailed discussion of the pros and cons of each approach is beyond the scope of this introduction (refer to the general references provided in Principal components and Factor Analysis - Introductory Overview). In most cases, these two methods usually yield very similar results. However, principal components analysis is often preferred as a method for data reduction, while principal factors analysis is often preferred when the goal of the analysis is to detect structure (see Factor Analysis as a Classification Method).

Factor Analysis as a Classification Method
Let us now return to the interpretation of the standard results from a factor analysis. We will henceforth use the term factor analysis generically to encompass both principal components and principal factors analysis. Let us assume that we are at the point in our analysis where we basically know how many factors to extract. We may now want to know the meaning of the factors, that is, whether and how we can interpret them in a meaningful manner. To illustrate how this can be accomplished, let us work "backwards," that is, begin with a meaningful structure and then see how it is reflected in the results of a factor analysis. Let us return to our satisfaction example; shown below is the correlation matrix for items pertaining to satisfaction at work and items pertaining to satisfaction at home.
STATISTICA Factor Analysis: Correlations (factor.sta)
Casewise deletion of MD, n=100

Variable   WORK_1   WORK_2   WORK_3   HOME_1   HOME_2   HOME_3
WORK_1     1.00      .65      .65      .14      .15      .14
WORK_2      .65     1.00      .73      .14      .18      .24
WORK_3      .65      .73     1.00      .16      .24      .25
HOME_1      .14      .14      .16     1.00      .66      .59
HOME_2      .15      .18      .24      .66     1.00      .73
HOME_3      .14      .24      .25      .59      .73     1.00
The work satisfaction items are highly correlated amongst themselves, and the home satisfaction items are highly intercorrelated amongst themselves. The correlations across these two types of items (work satisfaction items with home satisfaction items) are comparatively small. It thus seems that there are two relatively independent factors reflected in the correlation matrix, one related to satisfaction at work, the other related to satisfaction at home.
Factor Loadings. Let us now perform a principal components analysis and look at the two-factor solution. Specifically, let us look at the correlations between the variables and the two factors (or "new" variables), as they are extracted by default; these correlations are also called factor loadings.
STATISTICA Factor Analysis: Factor Loadings (Unrotated)
Extraction: Principal components

Variable   Factor 1    Factor 2
WORK_1      .654384     .564143
WORK_2      .715256     .541444
WORK_3      .741688     .508212
HOME_1      .634120    -.563123
HOME_2      .706267    -.572658
HOME_3      .707446    -.525602
Expl.Var   2.891313    1.791000
Prp.Totl    .481885     .298500
Apparently, the first factor is generally more highly correlated with the variables than the second factor. This is to be expected because, as previously described, these factors are extracted successively and will account for less and less variance overall.
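For readers who want to reproduce these numbers, here is a hedged Python sketch (not STATISTICA output): unrotated principal component loadings are the eigenvectors of the correlation matrix scaled by the square roots of their eigenvalues. Because the printed correlation matrix is rounded to two decimals and eigenvector signs are arbitrary, the results will match the table above only approximately, and possibly up to sign:

import numpy as np

# Work/home satisfaction correlation matrix from the table above.
R = np.array([[1.00, .65, .65, .14, .15, .14],
              [ .65, 1.00, .73, .14, .18, .24],
              [ .65, .73, 1.00, .16, .24, .25],
              [ .14, .14, .16, 1.00, .66, .59],
              [ .15, .18, .24, .66, 1.00, .73],
              [ .14, .24, .25, .59, .73, 1.00]])

eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Unrotated principal component loadings: eigenvector * sqrt(eigenvalue).
loadings = eigvecs[:, :2] * np.sqrt(eigvals[:2])
print(np.round(loadings, 3))                        # close to the two-factor table above
print(np.round((loadings**2).sum(axis=0), 3))       # explained variance per factor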
Rotating the Factor Structure. We could plot the factor loadings shown above in a scatterplot. In that plot, each variable is represented as a point. In this plot we could rotate the axes in any direction without changing the relative locations of the points to each other; however, the actual coordinates of the points, that is, the factor loadings would of course change. In this example, if you produce the plot it will be evident that if we were to rotate the axes by about 45 degrees we might attain a clear pattern of loadings identifying the work satisfaction items and the home satisfaction items.
Rotational strategies. There are various rotational strategies that have been proposed. The goal of all of these strategies is to obtain a clear pattern of loadings, that is, factors that are somehow clearly marked by high loadings for some variables and low loadings for others. This general pattern is also sometimes referred to as simple structure (a more formalized definition can be found in most standard textbooks). Typical rotational strategies are varimax, quartimax, and equamax.
We have described the idea of the varimax rotation before (see Extracting Principal Components), and it can be applied to this problem as well. As before, we want to find a rotation that maximizes the variance on the new axes; put another way, we want to obtain a pattern of loadings on each factor that is as diverse as possible, lending itself to easier interpretation. Below is the table of rotated factor loadings.
STATISTICA Factor Analysis: Factor Loadings (Varimax normalized)
Extraction: Principal components

Variable   Factor 1    Factor 2
WORK_1      .862443     .051643
WORK_2      .890267     .110351
WORK_3      .886055     .152603
HOME_1      .062145     .845786
HOME_2      .107230     .902913
HOME_3      .140876     .869995
Expl.Var   2.356684    2.325629
Prp.Totl    .392781     .387605
Interpreting the Factor Structure. Now the pattern is much clearer. As expected, the first factor is marked by high loadings on the work satisfaction items, the second factor is marked by high loadings on the home satisfaction items. We would thus conclude that satisfaction, as measured by our questionnaire, is composed of those two aspects; hence we have arrived at a classification of the variables.
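The varimax rotation itself can be sketched in Python (not from the source; this uses the commonly cited SVD-based formulation of Kaiser's varimax criterion; STATISTICA's "varimax normalized" additionally applies Kaiser row normalization, so the values below will be close to, but not identical with, the rotated table above):

import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    # SVD-based varimax rotation of a factor loading matrix (Kaiser's criterion).
    p, k = loadings.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L**3 - (gamma / p) * L @ np.diag(np.sum(L**2, axis=0))))
        R = u @ vt
        d_new = np.sum(s)
        if d_new < d * (1 + tol):
            break
        d = d_new
    return loadings @ R

# Unrotated two-factor loadings from the table shown earlier.
unrotated = np.array([[ .654384,  .564143],
                      [ .715256,  .541444],
                      [ .741688,  .508212],
                      [ .634120, -.563123],
                      [ .706267, -.572658],
                      [ .707446, -.525602]])
print(np.round(varimax(unrotated), 3))   # approximately the rotated table (up to sign)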
Consider another example, this time with four additional Hobby/Misc variables added to our earlier example.
In the plot of factor loadings above, 10 variables were reduced to three specific factors, a work factor, a home factor and a hobby/misc. factor. Note that factor loadings for each factor are spread out over the values of the other two factors but are high for its own values. For example, the factor loadings for the hobby/misc variables (in green) have both high and low "work" and "home" values, but all four of these variables have high factor loadings on the "hobby/misc" factor.

Oblique Factors. Some authors (e.g., Cattell & Khanna; Harman, 1976; Jennrich & Sampson, 1966; Clarkson & Jennrich, 1988) have discussed in some detail the concept of oblique (non-orthogonal) factors, in order to achieve more interpretable simple structure. Specifically, computational strategies have been developed to rotate factors so as to best represent "clusters" of variables, without the constraint of orthogonality of factors. However, the oblique factors produced by such rotations are often not easily interpreted. To return to the example discussed above, suppose we would have included in the satisfaction questionnaire above four items that measured other, "miscellaneous" types of satisfaction. Let us assume that people's responses to those items were affected about equally by their satisfaction at home (Factor 1) and at work (Factor 2). An oblique rotation will likely produce two correlated factors with less-than-obvious meaning, that is, with many cross-loadings.

Van Manen/ Catherine Adams/Search and Find: Margith Strand/ June 11, 2010/ Fielding Graduate University

Catherine Adams & Max van Manen (2009). The Phenomenology of Space in Writing Online. Educational Philosophy and Theory 41(1): 10-21.
In this paper we explore the phenomenon of writing online. We ask, 'Is writing by means of online technologies affected in a manner that differs significantly from the older technologies of pen on paper, typewriter, or even the word processor in an off-line environment?' In writing online, the author is engaged in a spatial complexity of physical, temporal, imaginal, and virtual experience: the writing space, the space of the text, cyber space, etc. At times, these may provide a conduit to a writerly understanding of human phenomena. We propose that an examination of the phenomenological features of online writing may contribute to a more pedagogically sensitive understanding of the experiences of online seminars, teaching and learning.


18 Critical Discourse Analysis by Teun A. Van Dijk/ Search and Find by Margith Strand/ May 26, 2010

18 Critical Discourse Analysis
TEUN A. VAN DIJK
0 Introduction: What Is Critical Discourse Analysis?
Critical discourse analysis (CDA) is a type of discourse analytical research that primarily
studies the way social power abuse, dominance, and inequality are enacted,
reproduced, and resisted by text and talk in the social and political context. With
such dissident research, critical discourse analysts take explicit position, and thus
want to understand, expose, and ultimately resist social inequality.
Some of the tenets of CDA can already be found in the critical theory of the
Frankfurt School before the Second World War (Agger 1992b; Rasmussen 1996). Its
current focus on language and discourse was initiated with the "critical linguistics"
that emerged (mostly in the UK and Australia) at the end of the 1970s (Fowler et al.
1979; see also Mey 1985). CDA has also counterparts in "critical" developments in
sociolinguistics, psychology, and the social sciences, some already dating back to the
early 1970s (Birnbaum 1971; Calhoun 1995; Fay 1987; Fox and Prilleltensky 1997;
Hymes 1972; Ibanez and Iniguez 1997; Singh 1996; Thomas 1993; Turkel 1996; Wodak
1996). As is the case in these neighboring disciplines, CDA may be seen as a reaction
against the dominant formal (often "asocial" or "uncritical") paradigms of the 1960s
and 1970s.
CDA is not so much a direction, school, or specialization next to the many other
"approaches" in discourse studies. Rather, it aims to offer a different "mode" or
"perspective" of theorizing, analysis, and application throughout the whole field. We
may find a more or less critical perspective in such diverse areas as pragmatics,
conversation analysis, narrative analysis, rhetoric, stylistics, sociolinguistics, ethnography,
or media analysis, among others.
Crucial for critical discourse analysts is the explicit awareness of their role in society.
Continuing a tradition that rejects the possibility of a "value-free" science, they
argue that science, and especially scholarly discourse, are inherently part of and
influenced by social structure, and produced in social interaction. Instead of denying
or ignoring such a relation between scholarship and society, they plead that such
relations be studied and accounted for in their own right, and that scholarly practices
be based on such insights. Theory formation, description, and explanation, also in
discourse analysis, are sociopolitically "situated," whether we like it or not. Reflection
on the role of scholars in society and the polity thus becomes an inherent part
of the discourse analytical enterprise. This may mean, among other things, that discourse
analysts conduct research in solidarity and cooperation with dominated groups.
Critical research on discourse needs to satisfy a number of requirements in order to
effectively realize its aims:
• As is often the case for more marginal research traditions, CDA research has to be
"better" than other research in order to be accepted.
• It focuses primarily on social problems and political issues, rather than on current
paradigms and fashions.
• Empirically adequate critical analysis of social problems is usually multidisciplinary.
• Rather than merely describe discourse structures, it tries to explain them in terms of
properties of social interaction and especially social structure.
• More specifically, CDA focuses on the ways discourse structures enact, confirm,
legitimate, reproduce, or challenge relations of power and dominance in society.
Fairclough and Wodak (1997: 271-80) summarize the main tenets of CDA as follows:
1. CDA addresses social problems
2. Power relations are discursive
3. Discourse constitutes society and culture
4. Discourse does ideological work
5. Discourse is historical
6. The link between text and society is mediated
7. Discourse analysis is interpretative and explanatory
8. Discourse is a form of social action.
Whereas some of these tenets have also been discussed above, others need a more
systematic theoretical analysis, of which we shall present some fragments here as a
more or less general basis for the main principles of CDA (for details about these
aims of critical discourse and language studies, see, e.g., Caldas-Coulthard and
Coulthard 1996; Fairclough 1992a, 1995a; Fairclough and Wodak 1997; Fowler et al.
1979; van Dijk 1993b).
1 Conceptual and Theoretical Frameworks
Since CDA is not a specific direction of research, it does not have a unitary theoretical
framework. Within the aims mentioned above, there are many types of CDA, and
these may be theoretically and analytically quite diverse. Critical analysis of conversation
is very different from an analysis of news reports in the press or of lessons and
teaching at school. Yet, given the common perspective and the general aims of CDA,
we may also find overall conceptual and theoretical frameworks that are closely
related. As suggested, most kinds of CDA will ask questions about the way specific
discourse structures are deployed in the reproduction of social dominance, whether
they are part of a conversation or a news report or other genres and contexts. Thus,
the typical vocabulary of many scholars in CDA will feature such notions as "power,"
"dominance," "hegemony," "ideology," "class," "gender," "race," "discrimination,"
"interests," "reproduction," "institutions," "social structure," and "social order," besides
the more familiar discourse analytical notions.

"Learning How to Believe: Epistemic Development in Cultural Context" By Eli Gottlieb/ Search and FInd: May 26, 2010 by Margith Strand

Eli Gottlieb
Mandel Leadership Institute
Jerusalem, Israel
Over the last decade, researchers have become increasingly interested in students’
beliefs about the nature of knowledge and how these beliefs develop. Although initial
psychological accounts portrayed epistemic development as a domain-independent
process of cognitive maturation, recent studies have found trajectories of epistemic
development to vary considerably across contexts. However, few studies have focused
on cultural context. This article examines the role community values and practices
play in fostering particular epistemological orientations by comparing the
epistemological beliefs of 5th, 8th, and 12th graders (N = 200) from General and Religious
schools in Israel regarding 2 controversies: belief in God and punishment of
children. In both controversies, older participants were less likely than younger participants
to consider the controversy rationally decidable. However, this shift
emerged earlier in the God controversy than in the punishment controversy. In the
God controversy, General pupils were less likely than Religious pupils to consider
the question rationally decidable or their own beliefs infallible. But no such school
differences were observed in the punishment controversy. Qualitative and quantitative
analyses linked these differences to divergent discourse practices at General and
Religious schools, suggesting that the relations between learning and epistemic development
are more intricate than has been assumed hitherto.
Epistemology is an area of philosophy concerned with questions of what knowledge
is and how it is justified. Although few people give these questions such detailed
and sustained attention as professional philosophers, anyone attempting to
acquire, produce, or evaluate knowledge relies, at least implicitly, on some set of
epistemological beliefs. Such beliefs are of obvious interest to educators. To understand
how students acquire, evaluate, and justify knowledge, we need to understand...

The Journal of the Learning Sciences, 16(1), 5–35. Copyright © 2007, Lawrence Erlbaum Associates, Inc.


For My information/Margith Strand

Three Common Small Group Networks  [Social Nodal]

CHAIN   |   WHEEL   |   ALL-CHANNEL

(Diagrams of the three network patterns are not reproduced here; see p. 318 of Robbins.)

Process Analysis/ From Search on May 18, 2011 by Margith Strand

Process Analysis

Sampling Plans
General Purpose
Computational Approach
Means for H0 and H1
Alpha and Beta Error Probabilities
Fixed Sampling Plans
Sequential Sampling Plans
Summary
Process (Machine) Capability Analysis
Introductory Overview
Computational Approach
Process Capability Indices
Process Performance vs. Process Capability
Using Experiments to Improve Process Capability
Testing the Normality Assumption
Tolerance Limits
Gage Repeatability and Reproducibility
Introductory Overview
Computational Approach
Plots of Repeatability and Reproducibility
Components of Variance
Summary
Non-Normal Distributions
Introductory Overview
Fitting Distributions by Moments
Assessing the Fit: Quantile and Probability Plots
Non-Normal Process Capability Indices (Percentile Method)
Weibull and Reliability/Failure Time Analysis
General Purpose
The Weibull Distribution
Censored Observations
Two- and three-parameter Weibull Distribution
Parameter Estimation
Goodness of Fit Indices
Interpreting Results
Grouped Data
Modified Failure Order for Multiple-Censored Data
Weibull CDF, Reliability, and Hazard Functions

Sampling plans are discussed in detail in Duncan (1974) and Montgomery (1985); most process capability procedures (and indices) were only recently introduced to the US from Japan (Kane, 1986); however, they are discussed in three excellent recent hands-on books by Bhote (1988), Hart and Hart (1989), and Pyzdek (1989); detailed discussions of these methods can also be found in Montgomery (1991).

Step-by-step instructions for the computation and interpretation of capability indices are also provided in the Fundamental Statistical Process Control Reference Manual published by the ASQC (American Society for Quality Control) and AIAG (Automotive Industry Action Group, 1991; referenced as ASQC/AIAG, 1991). Repeatability and reproducibility (R & R) methods are discussed in Grant and Leavenworth (1980), Pyzdek (1989) and Montgomery (1991); a more detailed discussion of the subject (of variance estimation) is also provided in Duncan (1974).

Step-by-step instructions on how to conduct and analyze R & R experiments are presented in the Measurement Systems Analysis Reference Manual published by ASQC/AIAG (1990). In the following topics, we will briefly introduce the purpose and logic of each of these procedures. For more information on analyzing designs with random effects and for estimating components of variance, see Variance Components.
Sampling Plans
General Purpose
A common question that quality control engineers face is to determine how many items from a batch (e.g., shipment from a supplier) to inspect in order to ensure that the items (products) in that batch are of acceptable quality. For example, suppose we have a supplier of piston rings for small automotive engines that our company produces, and our goal is to establish a sampling procedure (of piston rings from the delivered batches) that ensures a specified quality. In principle, this problem is similar to that of on-line quality control discussed in Quality Control. In fact, you may want to read that section at this point to familiarize yourself with the issues involved in industrial statistical quality control.
Acceptance sampling. The procedures described here are useful whenever we need to decide whether or not a batch or lot of items complies with specifications, without having to inspect 100% of the items in the batch. Because of the nature of the problem – whether to accept a batch – these methods are also sometimes discussed under the heading of acceptance sampling.

Advantages over 100% inspection. An obvious advantage of acceptance sampling over 100% inspection of the batch or lot is that reviewing only a sample requires less time, effort, and money. In some cases, inspection of an item is destructive (e.g., stress testing of steel), and testing 100% would destroy the entire batch. Finally, from a managerial standpoint, rejecting an entire batch or shipment (based on acceptance sampling) from a supplier, rather than just a certain percent of defective items (based on 100% inspection) often provides a stronger incentive to the supplier to adhere to quality standards.
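As a hedged illustration of how such a plan behaves (not from the source; the plan parameters n = 50 and c = 2 are hypothetical), the probability of accepting a batch under a single sampling plan can be computed from the binomial distribution:

from math import comb

def acceptance_probability(n, c, p):
    # Probability of accepting a lot under a single sampling plan:
    # inspect n items, accept if at most c defectives are found,
    # when the true fraction defective is p (binomial approximation).
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

# Hypothetical plan: sample 50 piston rings, accept the batch if 2 or fewer are defective.
for p in (0.01, 0.02, 0.05, 0.10):
    print(f"true defect rate {p:.0%}: P(accept) = {acceptance_probability(50, 2, p):.3f}")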

Computational Approach
In principle, the computational approach to the question of how large a sample to take is straightforward. Elementary Concepts discusses the concept of the sampling distribution. Briefly, if we were to take repeated samples of a particular size from a population of, for example, piston rings and compute their average diameters, then the distribution of those averages (means) would approach the normal distribution with a particular mean and standard deviation (or standard error; in sampling distributions the term standard error is preferred, in order to distinguish the variability of the means from the variability of the items in the population). Fortunately, we do not need to take repeated samples from the population in order to estimate the location (mean) and variability (standard error) of the sampling distribution. If we have a good idea (estimate) of what the variability (standard deviation or sigma) is in the population, then we can infer the sampling distribution of the mean. In principle, this information is sufficient to estimate the sample size that is needed in order to detect a certain change in quality (from target specifications). Without going into the details about the computational procedures involved, let us next review the particular information that the engineer must supply in order to estimate required sample sizes.
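The standard normal-approximation formula for this sample-size question can be sketched as follows (not from the source; the sigma and delta values are hypothetical, and the sketch assumes a two-sided test on the mean):

from scipy.stats import norm

def required_sample_size(sigma, delta, alpha=0.05, beta=0.10):
    # Approximate sample size needed to detect a shift of the process mean by
    # `delta` units, given process standard deviation `sigma`, a false-alarm
    # probability alpha (two-sided) and a miss probability beta.
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(1 - beta)
    n = ((z_alpha + z_beta) * sigma / delta) ** 2
    return int(n) + 1   # round up to a whole number of items

# Hypothetical piston-ring example: sigma = 0.01 cm, detect a 0.005 cm shift.
print(required_sample_size(sigma=0.01, delta=0.005))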

My Notes from May 17, 2010/ Margith Strand

Available Materials and Content Analysis
The first purpose of the use of available materials is to explore the nature of the data and the subjects, to get an insight into the total situation.  A second purpose of available materials is to suggest hypotheses.  A third use of available materials is to test hypotheses. 

Content analysis is a method of studying and analyzing communications in a systematic, objective, and quantitative manner for the purpose of measuring variables.  Most content analysis has not been done to measure variables, as such.  Rather, it has been used to determine the relative emphasis or frequency of various communication phenomena: propaganda, trends, styles, changes in content, readability. 

Definition and Categorization of Universe
Categorization, or the partitioning of U, is perhaps the most important part of content analysis.  It is most important because it is a direct reflection of the theory and the problem of any study.  It spells out, in effect, the variables of the hypothesis. 

Units of Analysis
Berelson lists five major units of analysis: words, themes, characters, items, and space-time measures.  The word is the smallest unit.  The theme is a very useful though much more difficult unit.  A theme is often a sentence, a proposition about something. Themes are combined into sets of themes.  Like the theme, the item unit is important.  The item is a whole production: an essay, a news story, a television program, a class recitation or discussion. 

In my study: I believe that I can use the following categorizations- Word Meaning, Contextual Meaning, Organization, Thought, Specifics, Express Ideas, Inferences, Literary Devices, and Determine Writer Purpose.  These categories can be utilized to signify the passage of the Abstract to Concrete formation of the process of the C.T.C.C. idea, as discussed earlier in the Concept Paper.
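As a hedged sketch of how such a coding scheme could be tallied (the posting identifiers and category assignments below are hypothetical), category frequencies over the coded units can be computed as follows:

from collections import Counter

# Hypothetical sketch: tally how often each coding category is assigned to the
# units of analysis (here, student-instructor postings coded by hand).
coded_units = [
    ("posting_001", "Word Meaning"),
    ("posting_001", "Inferences"),
    ("posting_002", "Contextual Meaning"),
    ("posting_003", "Determine Writer Purpose"),
    ("posting_003", "Inferences"),
]

category_counts = Counter(category for _, category in coded_units)
total = sum(category_counts.values())
for category, count in category_counts.most_common():
    print(f"{category}: {count} ({count / total:.0%} of coded units)")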

Reference: Kerlinger, F. N. (1967). Foundations of Behavioral Research. New York: Holt, Rinehart and Winston, Inc.

Search and Find: May 18, 2010 by Margith Strand/January 13, 2011

http://books.google.com/books?hl=en&lr=&id=SEpl7643WE0C&oi=fnd&pg=PA1&dq=statistical+processing+for+qualitative+research&ots=QaOuaTII0c&sig=XEg98fX_18O1YueLYs44iEUS8Yk#v=onepage&q=statistical%20processing%20for%20qualitative%20research&f=false

Discourse....

http://books.google.com/books?id=eDfdQSdo3HEC&lpg=PR6&ots=X0mqRTUCq8&dq=discourse%20analysis%20and%20process%20theory&lr&pg=PA22#v=onepage&q=discourse%20analysis%20and%20process%20theory&f=false

Wednesday, January 12, 2011

Go to:

Systems Analysis: try [profile Margith Strand / in search]

......For the sake of....Search- Find on June 25, 2010 by Margith Strand

Summary: Constructivism as a paradigm or worldview posits that learning is an active, constructive process. The learner is an information constructor. People actively construct or create their own subjective representations of objective reality. New information is linked to prior knowledge, thus mental representations are subjective.

Originators and important contributors: Vygotsky, Piaget, Dewey, Vico, Rorty, Bruner
Keywords: Learning as experience, activity and dialogical process; Problem Based Learning (PBL); Anchored instruction; Vygotsky’s Zone of Proximal Development (ZPD); cognitive apprenticeship (scaffolding); inquiry and discovery learning.

Constructivism

A reaction to didactic approaches such as behaviorism and programmed instruction, constructivism states that learning is an active, contextualized process of constructing knowledge rather than acquiring it. Knowledge is constructed based on personal experiences and hypotheses of the environment. Learners continuously test these hypotheses through social negotiation. Each person has a different interpretation and construction of knowledge process. The learner is not a blank slate (tabula rasa) but brings past experiences and cultural factors to a situation.

NOTE: A common misunderstanding regarding constructivism is that instructors should never tell students anything directly but, instead, should always allow them to construct knowledge for themselves. This is actually confusing a theory of pedagogy (teaching) with a theory of knowing. Constructivism assumes that all knowledge is constructed from the learner’s previous knowledge, regardless of how one is taught. Thus, even listening to a lecture involves active attempts to construct new knowledge.
Vygotsky’s social development theory is one of the foundations for constructivism.

What is Contextual Learning? Search-find on July 2, 2010 by Margith Strand

What Is Contextual Learning?

What is the best way to convey the many concepts that are taught in a particular course so that all students can use and retain that information? How can the individual lessons be understood as interconnected pieces that build upon each other? How can a teacher communicate effectively with students who wonder about the reason for, the meaning of, and the relevance of what they study? How can we open the minds of a diverse student population so they can learn concepts and techniques that will open doors of opportunity for them throughout their lives? These are the challenges teachers face every day, the challenges that a curriculum and an instructional approach based on contextual learning can help them face successfully.
The majority of students in our schools are unable to make connections between what they are learning and how that knowledge will be used. This is because the way they process information and their motivation for learning are not touched by the traditional methods of classroom teaching. The students have a difficult time understanding academic concepts (such as math concepts) as they are commonly taught (that is, using an abstract, lecture method), but they desperately need to understand the concepts as they relate to the workplace and to the larger society in which they will live and work. Traditionally, students have been expected to make these connections on their own, outside the classroom.
However, growing numbers of teachers today (especially those frustrated by repeated lack of student success in demonstrating basic proficiency on standard tests) are discovering that most students' interest and achievement in math, science, and language improve dramatically when they are helped to make connections between new information (knowledge) and experiences they have had, or with other knowledge they have already mastered. Students' involvement in their schoolwork increases significantly when they are taught why they are learning the concepts and how those concepts can be used outside the classroom. And most students learn much more efficiently when they are allowed to work cooperatively with other students in groups or teams.

Contextualized learning is a proven concept that incorporates much of the most recent research in cognitive science. It is also a reaction to the essentially behaviorist theories that have dominated American education for many decades. The contextual approach recognizes that learning is a complex and multifaceted process that goes far beyond drill-oriented, stimulus-and-response methodologies.
According to contextual learning theory, learning occurs only when students (learners) process new information or knowledge in such a way that it makes sense to them in their own frames of reference (their own inner worlds of memory, experience, and response). This approach to learning and teaching assumes that the mind naturally seeks meaning in context, that is, in relation to the person's current environment, and that it does so by searching for relationships that make sense and appear useful.
Building upon this understanding, contextual learning theory focuses on the multiple aspects of any learning environment, whether a classroom, a laboratory, a computer lab, a worksite, or a wheat field. It encourages educators to choose and/or design learning environments that incorporate as many different forms of experience as possible (social, cultural, physical, and psychological) in working toward the desired learning outcomes.

In such an environment, students discover meaningful relationships between abstract ideas and practical applications in the context of the real world; concepts are internalized through the process of discovering, reinforcing, and relating. For example, a physics class studying thermal conductivity might measure how the quality and amount of building insulation material affect the amount of energy required to keep the building heated or cooled. Or a biology or chemistry class might learn basic scientific concepts by studying the spread of AIDS or the ways in which farmers suffer from and contribute to environmental degradation.





Meta Learning/ Search-find on July 2, 2010 by Margith Strand

Meta learning
From Wikipedia, the free encyclopedia
This article is about meta learning in social psychology. For meta learning in computer science, see Meta learning (computer science).
Metalearning in education

Metalearning was originally described by Donald B. Maudsley (1979) as "the process by which learners become aware of and increasingly in control of habits of perception, inquiry, learning, and growth that they have internalized". Maudsley sets the conceptual basis of his theory as synthesized under headings of assumptions, structures, change process, and facilitation. Five principles were enunciated to facilitate meta-learning. Learners must: (a) have a theory, however primitive; (b) work in a safe supportive social and physical environment; (c) discover their rules and assumptions; (d) reconnect with reality-information from the environment; and (e) reorganize themselves by changing their rules/assumptions.
The idea of metalearning was later used by John Biggs (1985) to describe the state of ‘being aware of and taking control of one’s own learning’. You can define metalearning as an awareness and understanding of the phenomenon of learning itself as opposed to subject knowledge. Implicit in this definition is the learner’s perception of the learning context, which includes knowing what the expectations of the discipline are and, more narrowly, the demands of a given learning task. Within this context, metalearning depends on the learner’s conceptions of learning, epistemological beliefs, learning processes and academic skills, summarized here as a learning approach. A student who has a high level of metalearning awareness is able to assess the effectiveness of her/his learning approach and regulate it according to the demands of the learning task. Conversely, a student who is low in metalearning awareness will not be able to reflect on her/his learning approach or the nature of the learning task set. In consequence, s/he will be unable to adapt successfully when studying becomes more difficult and demanding. (Norton et al. 2004)
Meta learning model for teams and relationships

Meta learning is the dynamic process whereby a system (relationship, team or organization) manages to dissolve limiting dynamics such as point attractors and limit cycles that impede effective action and evolve liberating and creative dynamics represented by complex attractors whose trajectories in phase space, by never repeating themselves, can portray creative and innovative processes (see complexor). These trajectories have a fractal nature, hence their complex order in which highly creative processes are possible. High performance teams are able to "meta learn" and this differentiates them from the inability of low performance teams to transcend their limiting behaviors that impede innovation and creativity (Losada, 1999; Losada & Heaphy, 2004; Fredrickson & Losada, 2005).

The meta learning model was derived from thousands of time series data generated at two human interaction laboratories in Ann Arbor, Michigan, and Cambridge, Massachusetts. These time series portrayed the interaction dynamics of business teams doing typical business tasks such as strategic planning. These teams were classified into three performing categories: high, medium and low. Performance was evaluated by the profitability of the teams, the level of satisfaction of their clients, and 360-degree evaluations.

The meta learning model comprises three state variables and one control parameter. The control parameter is connectivity and reflects the level of attunement and responsiveness that team members have to one another. The three state variables are inquiry-advocacy, positivity-negativity, and other-self (external-internal focus). The state variables are linked by a set of nonlinear differential equations (Losada, 1999; Fredrickson & Losada, 2005; for a graphical representation of the meta learning model see Losada & Heaphy, 2004). When connectivity is low, there is a preponderance of advocacy and self orientation (internal focus) and more negativity than positivity. When connectivity is high there is a dynamical equilibrium between inquiry and advocacy as well as internal and external focus and the ratio of positivity-to-negativity is at least 2.9. This ratio is known as the Losada line, because it separates high from low performance teams as well as flourishing from languishing in individuals and relationships (Fredrickson & Losada, 2005; Waugh & Fredrickson, 2006; Fredrickson, 2009).
The Meta Learning model was developed by Marcial Losada and is now widely used by business organizations and universities.
See also: metacognition, metaknowledge

Humanism...

HUMANISM AS AN INSTRUCTIONAL PARADIGM
 
To appear as a chapter in C. Dills & A. Romiszowski (Eds.), Instructional development: State of the art paradigms in the field (Volume Three). Englewood Cliffs, NJ: Educational Technology Publications, in press.


Ralph G. Brockett
Associate Professor
University of Tennessee
Knoxville, Tennessee
 



INTRODUCTION

Effective instructional design begins with an understanding of the basic assumptions that underlie the design and development process. This is, essentially, the philosophy that an educator brings to the instructional situation. One's educational philosophy can be articulated by responding to such questions as: (a) What do I believe about human nature? (b) What is the basic purpose of learning and instruction? (c) What do I believe about the abilities and potential of the learners with whom I work?
Humanism provides a way of looking at the instructional design process that emphasizes the strengths the learner brings to the instructional setting. It is an optimistic perspective that celebrates the potential of learners to successfully engage in the instructional process. Although humanism is sometimes subject to criticism regarding its basic tenets, and is perceived by some as an irrelevant way to deal with the instructional needs of the present and future, most of these criticisms are based on misunderstandings of beliefs underlying the paradigm and how these beliefs are played out in practice. Ultimately, humanism can and should have an important role to play in the future of instructional development.
The purpose of this chapter is to offer an examination of humanism and its potential within the state of the art of instructional design theory and practice. The chapter will begin with a look at the philosophical and psychological underpinnings of humanism. Emphasis will then shift to an examination of how principles of humanism can be applied to the instructional development process. Next, some potential limitations of the paradigm will be mentioned. Finally, several conclusions will be gleaned from the discussion relative to the value of humanism as an instructional design model.
 

THE NATURE OF HUMANISM

Humanism has variously been described as a philosophy, a theory of psychology, and an approach to educational practice. Each of these is accurate. Philosophy and psychology provide a foundation for the understanding of humanism, while education serves as a "playing field" upon which these principles are implemented in practice. This section will examine the philosophical and psychological backgrounds while the following section focuses upon the application of these principles to instructional practice.
 

Humanism as a Philosophy

Humanism is a paradigm that emphasizes the freedom, dignity, and potential of humans. According to Lamont (1965), humanism can be defined as "a philosophy of joyous service for the greater good of all humanity in this natural world and advocating the methods of reason, science, and democracy" (p. 12). Elias and Merriam (1980) state that humanism is "as old as human civilization and as modern as the twentieth century" (p. 109). Early threads of humanist thought can be found in the works of Confucius, Greek philosophers such as Protagoras and Aristotle, the Renaissance philosophers Erasmus and Montaigne, Spinoza in the 17th century, and Rousseau in the 18th century. In the 20th century, Bertrand Russell, George Santayana, Albert Schweitzer, and Reinhold Niebuhr have all made contributions to contemporary humanism. Similarly, Nietzsche, Tillich, Buber, and Sartre have contributed to the development of existentialism, a contemporary form of humanism (Elias & Merriam, 1980; Lamont, 1965).
Rooted in the idea that "human beings are capable of making significant personal choices within the constraints imposed by heredity, personal history, and environment" (Elias & Merriam, 1980, p. 118), principles of humanist philosophy stress the importance of the individual and specific human needs. Lamont (1965) has outlined 10 central propositions of humanist philosophy. These can be summarized as follows:
 
1. Humanism is based on a naturalistic metaphysics that views all forms of the supernatural as myth;
2. Humanism believes that humans are an evolutionary product of nature and, since body and personality are inseparably united, one "can have no conscious survival after death" (p. 13);
3. Humanism holds that "human beings possess the power or potentiality of solving their own problems, through reliance primarily upon reason and scientific method applied with courage and vision" (p. 13);
4. Humanism holds that because individuals "possess freedom of creative choice and action," they are, within limits, "masters of their own destiny"; in this way, humanism is in contrast with views of universal determinism, as well as fatalism and predestination (p. 13);
5. Humanism stresses a view of ethics or morality based in present-life experiences and relationships and emphasizes "this-worldly happiness, freedom, and progress" of all humans (p. 13);
6. Humanism believes that individuals attain the good life by combining personal growth and satisfaction with commitment to the welfare of the entire community;
7. Humanism places great value in aesthetics, and thus, emphasizes the value of art and the awareness of beauty;
8. Humanism values actions that will promote the establishment of "democracy, peace, and a high standard of living" throughout the world (p. 14);
9. Humanism advocates the use of reason and scientific method and, as such, supports democratic procedures such as freedom of expression and civil liberties in all realms of life;
10. Humanism supports "the unending questioning of basic assumptions and convictions, including its own" (p. 14).
In summarizing the essence of these points, Lamont (1965) offers the following observation:
Humanism is the viewpoint that men [sic] have but one life to lead and should make the most of it in terms of creative work and happiness; that human happiness is its own justification and requires no sanction or support from supernatural sources; that in any case the supernatural, usually conceived of in the form of heavenly gods or immortal heavens, does not exist; and that human beings, using their own intelligence and cooperating liberally with one another, can build an enduring citadel of peace and beauty upon this earth. (p. 14)
 
In a discussion of humanistic philosophy directed toward its application to the field of adult education, Elias and Merriam (1980) summarize the major beliefs of humanism as: (a) human nature is inherently good; (b) individuals are essentially free and autonomous within the constraints of heredity, personal history, and environment; (c) each person is unique with unlimited potential for growth; (d) self-concept plays a key role in influencing development; (e) individuals possess an urge toward self-actualization; (f) reality is a personally defined construct; and (g) individuals are responsible to themselves and to others. While it is clear that the ideas presented by Elias and Merriam are compatible with those of Lamont, by emphasizing notions such as self-concept and self-actualization, the Elias and Merriam description serves as a natural link between humanism as a philosophy and as a theory of psychology.
 

Humanistic Psychology

For the first half of the 20th century, psychology was primarily influenced by two schools of thought. One of these was psychoanalytic theory, perhaps best represented by Freudian psychoanalysis. The other was behaviorism, reflected in the research and theories of Watson, Hull, and Skinner. However, throughout the 1930s, 1940s, and 1950s psychologists such as Gordon Allport, Henry Murray, Gardner Murphy, and George Kelly began to present views "which rejected both the mechanistic premises of behaviorism and the biological reductionism of classical psychoanalysis" (Smith, 1990, p. 8). Thus, it was in response both to the determinism inherent in psychoanalysis and to the limited importance placed on affect, dignity, and freedom in behaviorism that what is sometimes called the "third force" of psychology arose: humanism.
In describing the development of humanistic psychology, Smith (1990) has noted that the approach began to be recognized as a "movement" during the mid-1960s. It is important to note, however, that there is no single conception of humanistic psychology; rather, many individuals contributed different elements to the movement. For instance, Charlotte Buhler emphasized the notion of life-span development. Rollo May emphasized European existentialism and phenomenology. The encounter group movement was a vital aspect of humanistic psychology in the 1960s and 1970s, particularly through the work of J.L. Moreno and his development of psychodrama, Kurt Lewin and field theory, and Fritz Perls and his work with Gestalt therapy. And Viktor Frankl, in part through his personal experiences during the Holocaust, developed logotherapy as "an account of the human predicament that emphasizes the human need to place death and suffering in a context of human meaning that can be lived with" (Smith, 1990, p. 14).
Probably the two individuals who have had the greatest influence on humanistic psychology, however, were Carl Rogers and Abraham Maslow. Rogers' approach to therapy was originally described as "nondirective counseling" (1942), but was later recast as "client-centered therapy" (1951, 1961). The essence of Rogers' thinking was that human beings have a tendency toward self-actualization; however, the way in which individuals are socialized often blocks that urge. According to Rogers, a therapeutic relationship based on the values of unconditional positive regard, accurate empathic understanding, and honesty and integrity can help individuals fulfill their greatest potential (Smith, 1990). Through this process, Rogers demonstrated his belief in the potential of his clients and his trust in their ability to take responsibility for their lives.
 
A major goal of Rogerian therapy is to help individuals foster a greater level of self-direction. According to Rogers, self-direction "means that one chooses - and then learns from the consequences" (Rogers, 1961, p. 171). Self-direction means that a person can see a situation clearly and take responsibility for it (Rogers, 1983). This notion of self-direction has important implications for educational practice, which will be discussed later in this chapter as well as in subsequent chapters in this volume by Sisco and Hiemstra.
 
Maslow developed a theory of human motivation originally presented in his 1954 book Motivation and Personality, which was revised in 1970. This theory holds that needs are arranged in ascending order: physiological needs, safety, love and belonging, esteem, and self-actualization. Maslow described the first four levels as "deficiency" needs, in that one must be able to meet needs at a lower level prior to working toward the needs at the next level.
 
As with Rogers, Maslow designated "self-actualization" as an ideal to work toward achieving. Self-actualization, according to Maslow, is the highest level of human growth, where one's potential has been most fully realized. Maslow held that self-actualizers tend to "possess a more efficient view of reality and a corresponding tolerance of ambiguity; be accepting of themselves and others; demonstrate spontaneous behavior that is in tune with their own values and not necessarily tied to the common beliefs and practices of the culture; focus on problems that lie outside of themselves, thus demonstrating a highly ethical concern; maintain a few extremely close interpersonal relationships rather than seek out a large number of less intense friendships; and possess high levels of creativity" (Brockett & Hiemstra, 1991, p. 126).