I just posted current validity information about CEEA v4.2 (formerly CREE) on IEE's website. Currently, 12 schools have collected surveys from at least part of their student body and staff, and we should have data from over 20 schools in the fall.
The preliminary data results can be seen at: http://excellenceandethics.com/assess/CEEA_v4.2_ReliabilityValidity.pdf
As I shared before, all scales, including those added in version 4.2, have Cronbach's alphas ranging from .83 to .94. These are excellent internal-consistency results, supporting the reliability of the CEEA scales. A survey's validity is further demonstrated by how well the pattern of relationships in the data matches what theory would predict. Let me mention just a couple of observations that point to the strong validity of the CEEA.
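For readers who want to check internal consistency on their own survey data, Cronbach's alpha is straightforward to compute from the raw item responses. The sketch below uses a small hypothetical response matrix (not CEEA data) purely to illustrate the formula.

```python
# Cronbach's alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(total score)),
# where k is the number of items in the scale.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array of shape (n_respondents, k_items)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of each respondent's summed score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-item Likert scale (1-5) answered by six respondents.
responses = np.array([
    [4, 4, 5, 4, 4],
    [3, 3, 3, 2, 3],
    [5, 5, 4, 5, 5],
    [2, 2, 3, 2, 2],
    [4, 5, 4, 4, 4],
    [3, 3, 2, 3, 3],
])
print(round(cronbach_alpha(responses), 3))  # → 0.958
```

With items this consistent, alpha lands near the top of the range reported above; real scales with noisier items produce lower values.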
Individuals tend to perceive themselves more positively than they perceive others, especially when asked to report on ability rather than actual behavior. In these data, just as one would expect, students on average report much higher perceptions of their own Competencies of Excellence/Ethics (3.74/3.82) than of their peers' behaviors, captured in the Culture of Excellence/Ethics scales (2.88/2.91).
When examining the pattern of bivariate correlations in the student data, the strongest predictors of student competencies and school culture are Faculty Practices Impacting Excellence/Ethics and Faculty Support for & Engagement of Students. At the same time, Student Safety is barely correlated with students' reports of their own competencies but strongly correlated with their perceptions of student culture. Taken together, this pattern of relationships confirms what would be expected theoretically.
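A bivariate correlation pattern like this is just a matrix of pairwise Pearson r values between scale scores. The sketch below shows how such a matrix is produced; the scale names and numbers are hypothetical stand-ins, not the CEEA results.

```python
# Pairwise Pearson correlations between hypothetical scale scores.
import numpy as np

# One hypothetical score per student on each of three scales.
faculty_practices    = np.array([3.1, 2.4, 4.0, 3.5, 2.8, 3.9, 2.2, 3.3])
student_competencies = np.array([3.4, 2.6, 4.2, 3.6, 3.0, 4.0, 2.5, 3.4])
student_safety       = np.array([3.0, 3.8, 2.9, 3.1, 3.6, 2.7, 3.4, 3.2])

scores = np.vstack([faculty_practices, student_competencies, student_safety])
r = np.corrcoef(scores)  # 3x3 symmetric matrix; each row of `scores` is one variable

print(np.round(r, 2))
```

Reading such a matrix, one looks for exactly the kind of pattern described above: which scales correlate strongly with competencies, and which barely at all.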
There is a similar pattern in the faculty data. Faculty give the highest ratings on items about their own practices impacting excellence and ethics (4.17 and 4.09). However, in the correlation data, these same scales are only barely correlated with faculty assessments of student competencies and the student culture of excellence and ethics (.111 to .169). Instead, faculty perceptions of what other faculty do (measured by scales such as Faculty Support for & Engagement of Students and Faculty Beliefs & Behaviors) are all much stronger predictors of student competencies and culture (.412 to .553).
For students, perceptions of peers (the Culture scales) are only modestly correlated with reports of their own competencies (.269 to .306). In the faculty data, however, the student culture scales are stronger predictors of student competencies than any of the faculty practices/behaviors scales (.610 to .688). This is to be expected, as faculty tend to think similarly about students as a group and somewhat differently about themselves and their colleagues. (In statistical and research-methods language, one would refer to this pattern of findings as evidence of convergent/divergent validity.)
More work remains to collect the remaining data and generate school CEEA (CREE) reports. While giving strong support to the validity of the instrument, these results also identify a range of concerns, such as the discrepancy, just mentioned, between faculty's beliefs about their own work and their perceptions of what happens around them in the school. When studied carefully and discussed with an open mind by school leadership teams and faculty, the CEEA reports should provide excellent entry points for serious dialogue and decision-making for improvement.