Outcomes
Outcome | Description
---|---
Outcome 1 | Students will demonstrate knowledge of the contributions of significant Western thinkers to ongoing intellectual debate about moral, social, and political alternatives.
Outcome 2 | Students will demonstrate knowledge of the major trends and movements that have shaped and responded to this debate: e.g., monotheism, humanism, etc.
Outcome 3 | Students will demonstrate the ability to think critically about moral, social, and political arguments in the Western intellectual tradition, evaluating the logic of these arguments and relating them to the historical and cultural context.
Outcome 4 | Students will consider moral, social, and political issues from an interdisciplinary perspective.
Fall 2020
Method
For Fall 2020, Outcome 4 was assessed in all sections using either course essay assignments or essay questions embedded in the courses' final exams. Instructors were encouraged to use a Canvas assessment rubric to record and submit their data; however, some instructors submitted their assessments by email to the area chair.
Percentage of students assessed
Outcome 4 (Fall 2020): 286 students assessed out of 672 enrolled = 43% of students assessed
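For clarity, the participation rate reported above is simply the number of students assessed divided by the number enrolled; the Fall 2020 figure works out as follows.

$$\text{participation rate} = \frac{\text{students assessed}}{\text{students enrolled}} = \frac{286}{672} \approx 43\%$$

The Spring 2021 rate reported later in this document is computed the same way (168 / 737 ≈ 23%).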
Figure 1. The Canvas rubric for Outcome Four.
Figure 2. Comparative Students Enrolled vs. Assessed for Outcome Four, 2009-2020
Faculty Participation by Department or School
Department or School | Number of Sections Taught / Percentage of College Total | Number of Sections Assessed / Percentage of Dept. or School Total |
---|---|---|
Philosophy | 8 / 38% | 3 / 38% |
History | 4 / 19% | 3 / 75% |
English | 6 / 29% | 3 / 50% |
Education | 1 / ~5% | 1 / 100% |
Political Science | 0 / 0% | N/A |
Language & Literatures | 1 / ~5% | 0 / 0% |
Provost's Office | 1 / ~5% | 0 / 0% |
Total | 21 / 100% | 10 / 48% (overall) |
Spring 2021
Method
For Spring 2021, Outcome 3 was assessed in all sections using either course essay assignments or essay questions embedded in the courses' final exams. Instructors were required to use a Canvas assessment rubric to record and submit their data.
Percentage of students assessed
Outcome 3 (Spring 2021): 168 students assessed out of 737 enrolled = 23% of students assessed
Figure 4. The Canvas rubric for Outcome Three.
Figure 5. Comparative Students Enrolled vs. Assessed for Outcome Three, 2009-2020
Faculty Participation by Department or School
Department or School | Number of Sections Taught / Percentage of College Total | Number of Sections Assessed / Percentage of Dept. or School Total |
---|---|---|
Philosophy | 7 / 33% | 3 / 43% |
History | 4 / 24% | 0 / 0% |
English | 6 / 29% | 1 / 17% |
Education | 1 / 5% | 1 / 100% |
Political Science | 0 / 0% | N/A |
Language & Literatures | 2 / 5% | 0 / 0% |
Total | 21 / 100% | 5 / 24% |
Analysis and Reflection for 2020-2021 Assessments
I. Analysis
This year's assessment occurred during the ongoing COVID-19 pandemic, which influenced not only student learning but also the assessment process itself. Readers should bear this in mind throughout the analysis that follows.
The assessment results suggest that most students meet or exceed each assessed outcome: 90% for Outcome Three and 61% for Outcome Four.
Outcome Four's results are solidly in line with past performance, although there is little evidence of continual improvement. In fact, scores were below those of Spring 2020, although that assessment drew on a voluntary dataset covering only 11% of enrolled students. Setting aside that semester's data, and reading the Academic Year 2014-2015 results with some scrutiny (as that data resulted from a formal assessment of Outcome Three), it is apparent that a greater percentage of students are exceeding the outcome standard than in 2011 or 2009.
Outcome Three assessment may demonstrate continual improvement, as just under half (45%) of the students exceeded the outcome standard. However, the overall assessment participation rate was among the lowest recorded, which undercuts the certainty of that analysis. Happily, the Spring 2021 Approaching and Not Meeting scores, which are often interpreted as bellwethers of instructional and learning difficulties, fall within the general trends of the comparative assessment data.
Some items of note concerning the 2020-2021 assessment:
- Because of the COVID-19 pandemic, most HUMN 220, 221, and 222 sections were taught online, which remains an atypical delivery method for these courses.
- The fall semester recorded the highest percentage of enrolled HUMN students ever assessed for Outcome Four.
- Faculty participation for this mandatory assessment of both outcomes was the lowest since 2015.
- This year's assessment is the first using the Canvas outcome tool and the assessment rubrics designed by the college's Canvas team.
- Faculty have yet to assess a single section of HUMN 222, Black Humanities.
II. Reflection
Traditionally, area assessment analysis has centered on two points: reliability and validity.
Reliability
A review of the trend lines for a decade's worth of assessment (see Figures 3 and 6) suggests a certain amount of consistency in HUMN 220, 221, and 222 assessment. Given that most students meet or exceed the assessed outcomes, the faculty have interpreted this consistency as a marker of reliability. Reliability can also be ensured through independent replications of a measurement procedure on the same persons or population (see Kane 1992). For the five-year General Education assessment cycle, the faculty are, in effect, treating a semester cohort as the same "population" as the preceding cohort assessed a half-decade earlier.
Validity
The validity of our assessment depends mainly on its summative character: most of our assessment relies on the achievement of content goals and instructional objectives. Late-semester assessment, which is what we do for HUMN 220, 221, and 222, increases the validity of the data.
The commitment to faculty independence has relieved instructors of the burden of a mandatory shared assignment, which is a common strategy for guaranteeing artifact consistency. However, guidance suggests that instructors use expository assignments for HUMN 220, 221, and 222 assessment. Similarly, faculty independence dissuades the introduction of instructor-independent (or third-party) assessment, which could make the assessment process more valid by reducing bias in the application of the instrument (in this case, the Canvas rubrics).
The greatest anxiety over the validity of our assessment reports, both this year and in past semesters, concerns the relatively small sample size of the assessment data (see Figures 2 and 5). All instructors should submit assessment data for their courses, but not all do. While in certain semesters some departments' faculty participate at lower rates than others, non-compliance does not necessarily center on a single department, nor on faculty of a given rank or status. (The 2008-09 coordinator's statement that "[p]art-time instructors, who constitute a quarter or more of the Hum[n] teaching staff, almost never respond" has not held for a long while.)
However, the sample size does not necessarily affect the assessment's validity, if our overall chain of inference concerning the assessment (the artifact is appropriate, the instrument is accurate) is correct.
In both of these matters, reliability and validity, the faculty should reflect on the above assumptions.