INTD 105 conducted an assessment during the spring semester of 2021. This assessment looked at how well the course covers each of the SUNY Basic Communication learning outcomes, as well as selected GLOBE outcomes. In the following, we discuss this assessment and the conclusions we draw from it.

Doug Baldwin

Gillian Paku

INTD 105 Coordinators

Method

This assessment had two immediate motivations: a review of INTD 105 and 106 conducted during the summer and fall of 2019 by an outside consultant (Prof. Michael Murphy of SUNY Oswego, chair of the SUNY Council on Writing), and Geneseo’s periodic assessment of each general education area. Following one of Prof. Murphy’s recommendations, we wanted a portfolio-based assessment that would help us understand how well INTD 105 aligns with Geneseo’s GLOBE outcomes; in accordance with SUNY requirements, we also needed to assess how well INTD 105 achieves each of the SUNY GER Basic Communication outcomes. Fortunately, some of the SUNY outcomes align well with the GLOBE outcomes we were interested in, so we were able to fold five GLOBE outcomes into SUNY outcomes; we assessed the sixth SUNY outcome independently of GLOBE. The result was an assessment of the following outcomes:

SUNY Outcome                 | GLOBE Outcome
BC1. Reading and Responding  | Critical Thinking. Explicate and evaluate the assumptions underlying the claims of self and others.
BC2. Reasoning               | Critical Thinking. Draw soundly reasoned and appropriately limited conclusions on the basis of evidence.
BC3. Writing                 | Communication. Compose written texts that effectively inform or persuade, following Standardized English conventions and practices of academic disciplines.
BC4. Research and Evidence   | Information Literacy. Search effectively and efficiently for relevant information, evidence, and data.
BC5. Information Literacy    | Information Literacy. Evaluate the credibility of information obtained.
BC6. Revision                | (assessed independently of GLOBE)
We asked each spring 2021 INTD 105 instructor to assess three pieces of writing for these outcomes, using a rubric we developed. This rubric draws on Prof. Murphy’s report, a follow-up retreat with INTD 105 instructors and friends held in November 2019, inspiration from the AAC&U VALUE rubrics, consultation with INTD 105 instructors in a February 2020 workshop, and subsequent brainstorming by the INTD 105 co-coordinators. The result, with explanatory notes, is available on the first page of the “rubric workspace” at

https://docs.google.com/document/d/19aVxRMmHSZYokmb8_b4QfoKHw-dp1fdqNU1R1A-np6s/edit?usp=sharing

(Despite the 2020 date, the COVID-19 pandemic delayed use of this rubric until spring 2021.) For ease of use, the INTD 105 coordinators and CIT loaded this rubric into Canvas so that instructors could enter their assessment results through the “Speed Grader” tool and raw data could then be exported from Canvas for analysis. CIT loaded the rubric in early March 2021, and we then met with interested instructors to train them on Canvas grading and the rubric. Because of this timing, most assessment data come from work students did in the second half of the semester. Although we asked instructors to evaluate three pieces of writing, not all were able to do so.

We wanted to capture students’ broad writing ability at or near the end of the course, but without looking at just a single data point for each student and outcome. Guided by Barbara Walvoord’s recommendations in Assessing and Improving Student Writing in College: A Guide for Institutions, General Education, Departments, and Classrooms (Jossey-Bass, 2014), and by Kevin Gannon’s keynote address at Geneseo’s 2021 Assesstivus, we used a weighted average of all of each student’s scores within each learning outcome to produce a summary score for that student on that outcome. We weighted the latest score twice as heavily as the earlier ones, thus emphasizing performance late in the course while still including earlier performance. As a further gauge of how the course changed students’ writing ability, we looked at the difference between each student’s first and last scores on each outcome. These numbers are interesting but hard to interpret, as discussed below.
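As an illustration of this weighting (a sketch, not the actual analysis script), a student’s summary score on one outcome could be computed as follows, assuming the scores are listed in chronological order:

```python
def summary_score(scores):
    """Weighted average of one student's rubric scores on one outcome,
    given in chronological order, with the latest score counted twice."""
    weights = [1] * (len(scores) - 1) + [2]
    total = sum(w * s for w, s in zip(weights, scores))
    return total / sum(weights)

# A hypothetical student scoring 2, 3, 3 on successive assessments:
print(summary_score([2, 3, 3]))  # (2 + 3 + 2*3) / 4 = 2.75
```

The doubled weight on the final assessment emphasizes late-semester performance without discarding the earlier data points.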

Results

Seven sections of INTD 105 (out of 17) provided assessment data. The actual number of students for whom we received usable data varies by learning outcome and kind of analysis, and so we give those numbers individually below.

Our main measure of how thoroughly INTD 105 achieves its learning outcomes is the number of students whose summary scores are at or above the rubric’s mastery level (3 out of 4 points), as follows:

Outcome                      | Total # of Students | Percent at or Above Mastery
BC1. Reading and Responding  | 100                 | 78%
BC2. Reasoning               | 99                  | 74%
BC3. Writing                 | 100                 | 72%
BC4. Research and Evidence   | 98                  | 60%
BC5. Information Literacy    | 87                  | 70%
BC6. Revision                | 97                  | 45%
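The mastery percentages above follow from a simple tally of summary scores against the rubric’s threshold. As a sketch (using hypothetical scores, not the actual data):

```python
# Mastery threshold: 3 out of 4 points on the rubric.
MASTERY = 3.0

def percent_at_mastery(summary_scores):
    """Percentage of summary scores at or above the mastery level."""
    at_or_above = sum(s >= MASTERY for s in summary_scores)
    return 100 * at_or_above / len(summary_scores)

# Hypothetical summary scores for four students:
print(percent_at_mastery([3.5, 2.75, 3.0, 2.0]))  # 50.0
```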

We consider these percentages to be satisfactory for the “Reading and Responding,” “Reasoning,” “Writing,” and “Information Literacy” outcomes. However, they are lower than we feel they should be for the “Research and Evidence” and “Revision” outcomes.

We also examined the difference in assessment score between each student’s last and first assessment. Theoretically this gives us a measure of how much each student improved or regressed over the course (or, because of the timing of the assessment, at least over its second half). Data on individual assessments, however, need to be treated with a great deal of skepticism: particularly during the pandemic, students may have been under different degrees of personal stress or pressure from other courses while doing different assignments for INTD 105; different assignments likely have different difficulties; instructors’ understanding of the rubric may change over time; and so on. Furthermore, the structure of INTD 105, in which each instructor develops their own exercises within a framework of common learning outcomes, means that scores from different sections are necessarily also scores on different assignments. Counts or averages over all students in all sections average these uncertainties away to some extent.

With that in mind, we calculated the average change in score, in “points” as used by the rubric, for each outcome, as well as the percentage of students who improved (i.e., got a higher score on their last assessment than on their first), regressed (got a lower score on their last assessment than on their first), or showed no change. A positive change in scores indicates improvement. Because this analysis requires students who were assessed at least twice, and some students weren’t, the number of students in it is slightly smaller than the number analyzed for mastery. The results are as follows (percentages may not always add up to 100 because of rounding):

Outcome                      | # Students | Avg Change | % Improved | % Unchanged | % Regressed
BC1. Reading and Responding  | 95         | 0.15       | 25%        | 61%         | 14%
BC2. Reasoning               | 95         | 0.19       | 33%        | 52%         | 16%
BC3. Writing                 | 95         | 0.25       | 34%        | 56%         | 11%
BC4. Research and Evidence   | 94         | 0.39       | 36%        | 59%         | 5%
BC5. Information Literacy    | 86         | 0.30       | 34%        | 56%         | 10%
BC6. Revision                | 93         | 0.11       | 29%        | 52%         | 19%

These results suggest that, on average, INTD 105 students improve on all outcomes during the course. Some students seem to get worse, however, although, as discussed above, it’s unclear how many of these cases, if any, really represent a loss of writing ability rather than artifacts of the assessment method. In any case, a larger fraction of students improved than regressed on each outcome, although the same uncertainty applies to those numbers. On every outcome, the largest fraction of students appears unchanged, which could be due to the relatively coarse scale used in the rubric (only 4 points) as well as to the measurement issues already mentioned.
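The change analysis described above can be sketched as follows (illustrative code under the assumption that each student’s scores are chronologically ordered; not the actual analysis pipeline):

```python
def change_stats(per_student_scores):
    """Average last-minus-first score change on one outcome, plus the
    percentage of students who improved, stayed unchanged, or regressed.
    Students assessed fewer than twice are excluded."""
    deltas = [s[-1] - s[0] for s in per_student_scores if len(s) >= 2]
    n = len(deltas)
    avg_change = sum(deltas) / n
    improved = round(100 * sum(d > 0 for d in deltas) / n)
    unchanged = round(100 * sum(d == 0 for d in deltas) / n)
    regressed = round(100 * sum(d < 0 for d in deltas) / n)
    return avg_change, improved, unchanged, regressed

# Hypothetical scores for three students (one assessed only once):
print(change_stats([[2, 3], [3, 3], [4]]))  # (0.5, 50, 50, 0)
```

Note that the rounding of each percentage independently is why the reported percentages may not sum to exactly 100.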

Reflections and Next Steps

We recognize that many factors might call into question the results of this assessment and the conclusions reached from it. Most importantly, it was conducted in the midst of the COVID-19 pandemic, when myriad personal and emotional stresses no doubt distracted both students and instructors from the course. Beyond that, any assessment of a multi-section, in-progress course inevitably lacks the kinds of controls and supervision needed for a truly scientifically and statistically rigorous study. Nonetheless, we believe that useful conclusions are possible, and we reflect on some below.

Reflecting on our analysis of student mastery of learning outcomes, we are pleased that around three quarters of the students mastered the “Reading and Responding,” “Reasoning,” “Writing,” and “Information Literacy” outcomes. Because we based this analysis on an average of multiple writing samples, while emphasizing the latest one, we are confident that this conclusion represents a pattern of sustained mastery as students finish INTD 105. While the fraction of students attaining mastery could always be higher, three quarters is a good fraction to see in a first-year writing course, particularly considering that a portfolio assessment looks for sustained mastery of each outcome rather than merely a one-time success at it. We hope to raise this fraction in the future, but will concentrate most of our effort on areas where this assessment suggests more serious problems.

One of those problems is the lower mastery that students attained for the “Research and Evidence” and “Revision” outcomes (60% and 45%, respectively). We don’t know why these outcomes had such low levels of mastery, but we want students to reach a level closer to the 70% to 80% seen with other outcomes. We plan to start working towards this goal by discussing these outcomes with the instructors who assessed them; it’s possible that this discussion will expose some methodological problem affecting these two outcomes that we weren’t aware of. Assuming no obvious problems in the assessment method, we can share materials, brainstorm course modifications, and conduct other professional development related to these outcomes in the periodic workshops we hold for INTD 105 instructors.

We notice that the “Research and Evidence” outcome, one of the problematic ones for student mastery, had the best results in our analysis of student improvement (0.39 points average improvement, with 36% of individual students improving). While we have less faith in that analysis than in the analysis of mastery, this comparison suggests that INTD 105 may help students in significant ways even when it doesn’t bring them to sustained mastery.

Our analysis of student improvement relies more on individual students’ performance on individual assignments, and so, for reasons described in the “Results” section of this report, is more subject to error than our analysis of mastery. We have very little faith that the exact averages or percentages we found would be repeatable in other offerings of INTD 105, but we do believe that the overall pattern is real: the overall change in students’ writing ability while taking INTD 105 is positive, and the majority of individuals improve or maintain their writing ability. Unfortunately, this assessment was not controlled carefully enough to establish that this improvement is caused by INTD 105, as opposed to being something that would have happened anyhow, but it is highly likely that INTD 105 contributes to it in some way. The question of students whose writing apparently regresses is also a nagging one, and we hope to understand it better through discussions with instructors and perhaps through future assessments that gather more reliable data on improvement versus regression.

Although the total number of students assessed is enough to give reasonable confidence in the averages we calculated, it would have been nice to have more instructors participating in this assessment. The fact that this was a portfolio assessment and so required instructors to score multiple assignments for each student probably contributed to low buy-in. Even though we tried hard to create Canvas tools that minimized the burden on instructors, we probably did ask more of them than most other assessments at Geneseo do. Doing this assessment in a pandemic year, when we had less opportunity to establish close connections with instructors, instructors had less opportunity to feel close to their classes, and instructors were stretched thin just keeping their classes running, certainly also contributed. All the same, we feel that the portfolio approach gave us better data than we would get from assessing a single assignment, and we hope to continue it in the future, hopefully under less stressful circumstances, and revised based on instructor feedback to have wider acceptance.

Finally, an important goal for us in this assessment was to measure how well INTD 105 supports the GLOBE outcomes. Five of the six outcomes we assessed are both SUNY and GLOBE outcomes. Four of those five are ones for which students achieved satisfactory mastery in our analysis, and the fifth (“Research and Evidence”) is the one where 60% achieved mastery. We therefore conclude that INTD 105 is already playing an important role in the College’s move to GLOBE, and that said role can increase as we respond to the present findings. We recommend that, as the College implements a GLOBE-based general education curriculum, INTD 105 remain the entry to the writing part of that curriculum. We also note, however, that writers do not become proficient in one fell swoop (or course); writing is a skill that develops continually over time. Geneseo’s revised general education must also include an upper-level writing component, a recommendation that also appears in Michael Murphy’s review.
