Technological convergence and human connectivity

This unit was centered on 21st century literacies, automated essay scoring technologies, and policy. There was an overarching theme of technological advancement and the advent of digital assessment technologies in this unit. I was interested in what Diane Penrod had to say (in Composition in Convergence) about how technological convergence affects program assessment and classroom assessment in the following four ways: 1. “interactively” (through networking apparatuses like hyperlinks, or, more generally, the phenomenon of information created by and for a participatory culture), 2. “graphically” (i.e., composing in virtual spaces allows students opportunities to exercise visual rhetoric more freely, compared to composing in more traditional spaces), 3. “perspectively” (the motivations for “the establishment of connections” and the portrayal of online composers’ identities (43)), and 4. “theoretically” (we have to look at the situation of networked composing knowing that it requires a different kind of thought process, understanding, and approach) (Penrod 40). Dr. Neal writes, in Writing Assessment and the Revolution of Digital Texts and Technologies, that “as hypermedia is becoming more of an integral component of 21st century literacy both in and outside the classroom, it is becoming increasingly significant in the assessment of writing. This has implications that span the continuum of theoretical and practical” (Neal 93). I think it’s important to remember this continuum; our understanding of advancements in assessment is developing alongside the actual developments, and awareness of this simultaneous formulation of thought and practice underscores Penrod’s four points.

I found these four points interesting because each is founded in the notion of human-to-human connection and interaction (though often via media or technological interface). The first point’s connection with communication is inherent; we have more ways to interact, participate, and communicate with each other because of digital media and technology. Composing graphically gives us another avenue of composition; that is, we can reach more people because we compose in multiple ways, appealing to different learning styles. The third point, like the first, is inherently based in communication and connection, and the fourth point, the salience of developing new theory to tackle technological convergence, entails a more esoteric, but still intensified, form of communication (one that is situated academically; scholars examining this situation, like all theorists, make connections with each other through conferences, collaborations, publications, etc.).

Based on Penrod’s points here, I think I can say that a trend I noticed in this unit was that when technology is used to foster communication between people, it seems to be a generally constructive force. Writing programs and institutions are doing beautiful things with ePortfolios, digital literacies and rhetoric, etc. (and I should mention, here, this fascinating and elegantly simple idea from Whithaus: “the judging of students’ composing skills in digital compositions is not simply a matter of applying new rubrics based on the structural qualities of web pages and sites. Composing and cognition in electronic environments are affected by the differences in what types of evidence is presented as well as how the delivery of that evidence is shaped” (Whithaus 6). My interpretation of his idea is this: our understanding of digital composing does not need to change so much as it needs to be born, or formed).

But when technology is not used for the primary purpose of forging human connections, when it is used almost as a barrier between groups of people (as in an AES situation, where machines divvy out scores that decide who does and does not get to join the institution), then we have ethical and logistical problems.

One of the scholars we read who has a notably negative opinion about AES is William Condon, who writes that “the type of test AES can score is in conflict with the needs of a student to learn how to improve as a writer and of a teacher, who needs to know how to facilitate that improvement” (Condon). So, in Condon’s terms, if we use AES in an inappropriate context (the definition of which I’m sure varies from person to person, but here I think it would be safe to assume a general learning environment qualifies), we are short-changing students, who need feedback on their writing that goes beyond an analysis of surface-level craft; machines can’t (at least, not yet) assess content. But Condon doesn’t even seem convinced that AES is a wise decision at all: “In short, the ability of AES to evaluate samples that reveal writing ability—not just fluency, accuracy of text production, and sophistication of vocabulary… is scant, if not non-existent, and for that reason assessments that can be scored via AES are poor predictors of students’ success in courses that require them to think, to write with an awareness of purpose and audience, and to control the writing process” (Condon).


Composition student agency in the form of freedom, choice, and responsibility

One of the ideological structures I felt this unit was significantly founded upon was the idea that students in a composition setting should have (and be expected to act upon) their own degree of agency. We discussed four major subjects, and three of them entail a degree of applicable and exercised student agency in order to function properly, and this agency seems to take three forms (in this unit):

  1. The critique of rubrics– I’ve seen all kinds of rubrics across the humanities departments here at FSU. There’s a strictness spectrum: they can be so free-form that they (arguably) don’t need to exist at all (such as a rubric that is worded so nebulously and is so open to interpretation that any answer could be the right answer (I’ve run into several of these during my time in the visual arts department)), and on the other end of the spectrum, some are so rigid that they don’t allow for deviation from a standard of expectation. Balester writes in “How Writing Rubrics Fail” that instead of tossing rigid rubrics, we should revise them so that they’re more socially inclusive, and instead of making students conform to set standards and labeling anything that does not conform as “error,” we should encourage writing that is authentic and rhetorically effective. The kind of agency I see fitting in here is an allowance, on the part of the assessor, for students to write in an authentic way without being told that they’re wrong for doing so. This agency is the power to wield and hone your own natural voice (and perhaps part of that agency is the confidence you would glean from such an affordance: your way would be one more right way to write). And students are empowered when they are given the freedom to write about whatever they want, using whatever media form they want: “when students are able to choose not only the subject about which they are writing but also the media through which they reach their audiences, the selection of the composition media becomes a significant rhetorical choice… [w]riting teachers’ jobs should include making students aware of the situations within which they compose” (Whithaus 47-48). This kind of flexibility can be realized within rubrics to make them more empowering for students.
  2. Directed self placement– Directed self-placement means that students are able to choose which class they take. They aren’t sorted like first-years at Hogwarts. They have some level of power in that they are able to dictate where they go next (in terms of writing classes).
  3. Portfolios– A writing portfolio situated in a composition course (as a method of assessment) puts a lot of power into the hands of the students. Since they’re not bound to several specific deadlines, they are left (and expected) to self-regulate and hold themselves accountable for maintaining a writing practice and process. This is Spiderman-esque agency: “With great power comes great responsibility.” The agency students receive as part of participating in a portfolio-writing process (including drafting, editing, and revision activities) entails that they will need to develop the responsibility to use the power they’ve been given well. This is something that preoccupies Ricky Lam in his article, “Two portfolio systems: EFL students’ perceptions of writing ability, text improvement, and feedback”: “Promotion of metacognitive skills (i.e., self assessment and awareness-raising tasks) in the portfolio compilation is likely to heighten EFL learners’ rhetorical awareness and discourse competence, and help them internalize essential linguistic features of a piece of good writing” (Lam 150). Essentially, mindfulness is inescapable if the student creating the portfolio is aware of their own process and practice; not only that, they become more rhetorically effective and craft-savvy because of that mindfulness. To Lam, this mindfulness is something to be encouraged: “students should be encouraged to negotiate both meaning and form during self revision in order to make text revision a productive event in the writing classroom” (Lam 150).

Superficiality in Assessment

This week’s readings (particularly the Klein/Taub piece) had me wondering how much of writing assessment is (inadvertently) tied up in superficial aspects of students’ compositions (and I am not necessarily implying a negative connotation with “superficial”; I mean “surface-level,” as with handwriting). Today I will be addressing the articles that discussed the interesting and noteworthy decisions assessors made when assessing essays composed by hand or on a computer, but we can definitely see how assessment decisions about pieces of writing could be rooted in superficial aspects elsewhere within institutional contexts, namely in situations where race or gender alterity is involved (as we can see in the studies and observations made by Ball, Kelly-Riley, Haswell, and Anson).

Klein and Taub noticed significant differences in the grades given to handwritten papers based on surface-level differences, like poor legibility, coloring, and general aesthetic variation (and these differences could perhaps indicate something to us about the way teachers have understood (and currently understand) compositional aesthetics, and whether or not there is an argument to be made for aesthetic standards in writing). These surface-level differences, once noted, affected the grades of the compositions, but did they affect (or were they tied up in) the deeper meanings and intentions of the composition? I love visual rhetoric, and I’m usually the first person to suggest that the visual is always tied to the verbal, but (and I don’t mean to generate a paradox here) I wouldn’t associate the choice to write in a pink gel pen with a student’s decision to write about their first family pet, for example. What about those arbitrary decisions in the handwriting process that affect the student detrimentally?

I know I have more questions than answers here, but I am genuinely curious about other perspectives and practices that address this situation (probably because I’ve taken quite a few tests myself where the essay had to be handwritten, and I have wondered before how different it would be to grade a handwritten test compared to a typed essay). What I mean to ask is: if superficial elements of the handwritten essay aren’t tied up with the deeper meaning of the student’s answer (or at least, if you aren’t able to make a case that they are), is it fair to treat them as if they are make-or-break aspects of the composition?

I do think there might be a hierarchy of aesthetic importance here. I would be quicker to understand a poor grade for illegibility (which Klein and Taub document (141)) than one for aesthetic preference, because the former inhibits reader comprehension while the latter probably only distracts or diverts the reader, if that.

And I know Chen says that “there were no statistically or practically significant scoring differences between handwritten and transcribed computer responses to the three writing tasks” (58) within that particular article, but the researchers there also note that a balance is created when teachers understand they are dealing with both handwritten and computer-composed essays, and then cut the hand-writers some slack because they know they don’t have the same advantages as a person typing (namely speed, familiarity, legibility, and convenience (Whithaus 11-12)). That is, even though Chen perceives an evenness between both kinds of assessments, the knowledge of different composing methods influences the assessors’ decisions regardless, and from the beginning too, so the distinction is really quite important and deserving of attention.


Attitudes toward Assessment Technologies

In interacting with the texts for this week, I kept catching myself trying to determine others’ attitudes and opinions (positivity/negativity, optimism/pessimism, cynicism/suspicion, etc.) about assessment technology. I tried to pick up on these feelings through each scholar’s varying definitions and examinations of technology in assessment, a concept which is nuanced and complex. I’ll begin with Inoue, who has an amazing quote (embedded in the Prezi) that delivers a fleshed-out vision of “writing assessment as technology:”

it is…

  • “historically and materially situated”
  • “a process in which power is made, used, and transformed”
  • something that “consists of sets of artifacts and technical codes”
  • “manipulated by institutionally-sanctioned agents”
  • “constructed for particular purposes that have relations to abstract ideas, concepts, people, and places” and
  • its “products, its effects or outcomes, shape and are shaped by, the racial, class-based, gender, and other socio-political arrangements in its particular place or context” (Inoue).

Inoue repeatedly draws attention to the fact that assessment technologies are situated within and constructed/defined by society; the process is also cyclical: we produce the technology, it affects us, we produce a technological amendment, addition, augmentation, etc., and the process continues. To Inoue, it seems, assessment technology is something that can affect a person only because of how it has been created, and in that way it functions almost as a cultural mirror. And Inoue seems pretty enthusiastic about examining culture indirectly by examining assessment technology practices and developments.

Huot presses the idea that technology should be used to connect people rather than evaluate people. He also points out the conveniences and affordances of assessment technologies (for instance, “technology, like any other human activity, can help to promote certain social, political and ideological values” (Huot 141)), and wonders whether assessment itself has become a technology. So his outlook (or ideal, perhaps) seems to be a very positive one. But he also seems a bit wary of technology: he makes the effort to caution against using technology in writing assessment to the extent that it will obscure “the essential purpose of assessment as research and inquiry, as a way of asking and answering questions about students’ writing and the programs designed to teach students to write” (148).

Neal seems to convey the message that we should neither worship nor abhor technology, but rather observe it and utilize it when we need to. It’s a very level-headed approach. Like Inoue, he realizes that assessment technologies are culturally situated and thus should be treated accordingly: “It is not our goal to find an assessment instrument that is neutral and fair. Instead, we need to develop and implement writing assessment technologies that reflect values consistent with teaching and learning that have been the foundation of our discipline and that work in specific contexts for specific purposes. To do that we need to consider the context surrounding the assessment, including the student populations; the content of our curricula; the decisions we make based on the results of assessment instruments and just as important, the social and educational consequences that result from those decisions” (23). And in terms of “predicting the evolution of writing assessment technologies,” we should always be on the lookout (to the point of vigilance) for social change, especially educational change (47). Thus, it seems Neal is advocating a presence of mind in thinking about assessment technologies; since the technologies are embedded in culture, we must pay attention to the ways cultures are developing in order to capture technology’s best potential.


Two nebulous concepts

I found our initial discussion of reliability and validity in class to be helpful, especially when Dr. Neal showed us that the conventional understanding of the two concepts is not necessarily the understanding we should adopt. I hope I am remembering this correctly, but I believe he said that many understand validity to mean “it measures what it purports to measure” and reliability to be “a necessary but insufficient condition of validity.” We (drawing on Messick) should understand that validity and reliability, as concepts, imply a dual focus on accuracy and appropriateness; but we are going to continue wondering what is accurate and appropriate. Trying to clarify the nature of the concepts intrigued me, and, as a result, I want to seek out definitions of, or ideas related to, the concepts as they appear in the texts from the theorists we read last week. What do they have to say about the nature of reliability and validity? That’s what I want to know.

Is Pamela Moss addressing the theoretical chasm between validity and reliability when she writes that “[a] sound program of validity research begins with a clear statement of both the purpose and the intended interpretation of meaning of test scores and then examines, through logical analysis, the coherence of tests with that understanding” (Moss 155)? To an extent, I feel that Camp discusses how we have tried to establish balances between the two concepts within testing scenarios; a specific example she provides of a type of assessment that tries to compromise between validity and reliability is the format that combines the “multiple-choice test, the impromptu writing sample, and in many cases a combination of the two” (Camp 103).

Of this particular model and approach she says that “the combination in particular has represented for many, including developers of smaller scale assessments, a compromise accommodating both traditional psychometric expectations for reliability and the concern for validity expressed by teachers of writing and others convinced that judgments about writing ability should be based on writing performance” (Camp 103). Camp is critical of this model later: “The traditional formats for writing assessment, including the writing sample, seem insufficient… Multiple-choice tests of writing seem more remote than ever… The streamlined performance represented by the single impromptu writing sample, which corresponds to only a small portion of what we now understand to be involved in writing, no longer seems a strong basis for validity” (Camp 108). Regardless, compromise is the key word on 103 because it implies a halfway meeting. From this quote we can understand that there is a dissonance between these two concepts; they are paradoxically intertwined in a relationship of conflict and symbiosis.

Cherry and Meyer write that “a test cannot be valid unless it is reliable, but the opposite is not true; a test can be reliable but still not be valid. Thus, reliability is a necessary but not a sufficient condition for validity” (Cherry and Meyer 30). To these theorists, the concepts are interdependent in a one-directional way, and they privilege validity, saying it, above all, is the fundamentally necessary goal to pursue in the crafting of assessment.


Positivity and Student Identity (Blog 1)

There’s a saying, usually attributed to Mother Teresa, that I’ve been thinking about in relation to assessment: “I was once asked why I don’t participate in anti-war demonstrations. I said that I will never do that, but as soon as you have a pro-peace rally, I’ll be there.” This quote, which urges its hearers to focus on positivity rather than negativity, has affected me ever since my mother first told it to me as a child. I fight to maintain a positive focus and attitude, and I try to share that attitude with others (namely my students).

But there is something about the notion of “assessment” that, for years, escaped my laser beam of mindful, intentional positivity. I naturally associated the concept and practice of assessment with fear, intimidation, and negative self-definition (because I dreaded the thought of earning bad marks on a high-stakes assessment, and our high school teachers had wonderful, but very high, expectations). I’ve even heard people disregard assessment feedback in attempts to reestablish their sense of intellectual identity (e.g., “I bombed the GRE, but I don’t do well on timed tests. How I did on that test is not who I am”). Before this unit, it honestly never occurred to me to consider assessment as a tool to build students up, or at least to construct their identity in a positive way.

From among the histories that we studied, I credit Dr. Yancey with my paradigm shift here. Her essay implies that the fusion of trust and the gift of responsibility in the form of portfolio assessment can actually foster a more well-rounded sense of identity in a student (Yancey 145). Under this approach, students transcend their status as government or institutional numbers and are acknowledged (and thus dignified), just like we discussed in class on Tuesday.

I try to imagine an ideal assessment, one “in which reliability and validity work in harmony rather than tension” (Huot et al. 29); is it one where the student is prompted to improve their writing ability through incentive rather than fear? How can we use assessment to generate a Mother Teresa-esque focus on positive student potential? How can technology facilitate this kind of development? And (I’m asking this question because of my personal interest in teaching across genres in comp.) would such an assessment situation warrant us to augment “the scope of writing activities to include creative and imaginative works” so that we “focus on idea and content within writing rather than solely form” (Behizadeh and Engelhard Jr. 197)? Should we visualize assessment as more of a link in a chain instead of a bottom line?
