Critical Reflection and Agency in Computing Index

Assessment Instrument 2025

A psychometric instrument, with evidence of validity, for measuring computing students' attitudes and perspectives toward ethical development. Grounded in Critically Conscious Computing theory and Paulo Freire's concept of conscientização, this index helps educators understand and track students' development of both critical reflection (awareness of socio-technical issues) and critical agency (confidence and power to address these issues).


Overview

The Critical Reflection and Agency in Computing Index, also called the Critical Computing Index (CCI), provides computing educators with a theoretically grounded tool to systematically measure attitudes and perspectives toward ethical development. Through rigorous psychometric validation with 938 participants across two studies, we've established strong evidence for the instrument's reliability and validity.

Theoretical Framework: Critically Conscious Computing

The index is grounded in Critically Conscious Computing, a framework inspired by Paulo Freire's concept of conscientização (critical consciousness). We define Critically Conscious Computing as a form of computing practice in which individuals:

  1. Critical Reflection — Analyze the sociotechnical implications of computing systems and practices, including issues of power, politics, culture, and equity.
  2. Critical Agency — Develop the sense of agency to question and challenge norms, assumptions, and practices in computing.
  3. Critical Action — Take action to serve the needs and interests of diverse and marginalized communities, whether through creating new technologies, reshaping existing ones, or resisting the use of computing in harmful ways.

Critical consciousness theory posits that as individuals develop a deeper understanding of systemic issues (reflection) and recognize their own capacity and power to effect change (agency), they are more likely to engage in transformative praxis (action). This index measures the first two components — reflection and agency — as foundational attitudes that precede ethical action in computing contexts.

Full Index

The index consists of 31 Likert-scale items distributed across two main constructs. We recommend administering with a 6-point Likert scale (Strongly Disagree to Strongly Agree).

Critical Reflection (22 items)

We operationalize this construct as a scale comprising technosolutionism (reverse-coded), valuing marginalized perspectives, and valuing ethics training. We also include two standalone items, as well as seven Valuing Technical Training comparison items that are not scored as part of the ethics subscales (see Legend).

Question Wording: Computing technologies have wide-ranging impacts on society. Please indicate the extent to which you agree or disagree with the following statements:
Standalone Items
  1. Computing should inform, not replace, human decision-making.
  2. We should prioritize computational solutions over human judgment. (R)
Technosolutionism (4 items, α = .79)
  1. With enough resources, computing technologies can solve any problem. (R)
  2. Datasets that are large enough can overcome any bias in collection. (R)
  3. Biases in datasets can always be corrected with the right techniques. (R)
  4. Computing technologies benefit everyone equally. (R)
Valuing Marginalized Perspectives (2 items, α = .63)
  1. Considering issues of social justice should be a fundamental consideration in the design and development of any computing system.
  2. Developing computer software for public use requires input from marginalized groups.
Question Wording: Different professions require different training. Please indicate the extent to which you agree or disagree that the following should be part of training for every software engineer:
Valuing Ethics Training (7 items, α = .89)
  1. The social impacts of software.
  2. The environmental impacts of software.
  3. Legal considerations in software development.
  4. Ethical implications of topics being studied.
  5. Collaborating on software development projects with local community groups.
  6. Guidelines for discussing ethical issues with others.
  7. A software development code of ethics.
Valuing Technical Training (comparison items)
  1. Computer architectures. (0)
  2. Databases. (0)
  3. Technical programming skills. (0)
  4. Software quality assurance and testing. (0)
  5. Computer science theory and algorithms. (0)
  6. Identifying requirements to build software. (0)
  7. Data structures. (0)

Critical Agency (9 items)

We operationalize this construct as a scale comprising personal effectiveness (the belief in one's ability to uphold ethical conduct and communicate ethics perspectives) and system responsiveness (the belief that ethical concerns raised will be heard and addressed in computing projects and workplaces).

Question Wording: Please indicate the extent to which you agree or disagree with the following:
Personal Effectiveness (5 items, α = .89)
  1. I have a good understanding of the important ethical and social impacts to consider when developing software.
  2. I am able to participate in discussions about ethics and social impacts of computing.
  3. I am confident in my own ability to uphold ethical conduct in software development.
  4. I am better informed about the ethics and societal impacts of technology than most of my software developer peers.
  5. When working on computing projects with others, I could effectively voice my perspectives on ethical issues.
System Responsiveness (4 items, α = .84)
  1. There are processes within workplaces to handle reported ethical computing violations or concerns.
  2. When I talk about ethical computing issues, my peers usually pay attention.
  3. Software development professionals are allowed to have a say about ethical computing concerns at their workplaces.
  4. When ethical computing concerns are raised by employees, workplaces are responsive to addressing these concerns.

Legend

  • (R) = Reverse-coded item (higher raw scores indicate less critical reflection; reverse-code before scoring)
  • (0) = Comparison item (not scored as part of ethics subscales)
  • α = Cronbach's alpha (internal consistency reliability)

How to Use It

Administration

  • We recommend using a 6-point Likert scale format (from "Strongly Disagree" to "Strongly Agree")
  • Administration time: approximately 5-15 minutes
  • Can be administered online or in paper format
  • Copy a ready-made Google Form version to use as a starting point for online administration

Scoring & Analysis

  1. Reverse-code items marked with (R) before any analysis. For a 6-point scale, reverse-code as: 1→6, 2→5, 3→4, 4→3, 5→2, 6→1. After reverse-coding, higher scores indicate weaker technosolutionist beliefs (i.e., more critical reflection).
  2. Check internal reliability for each subscale in your sample using Cronbach's alpha (α). When α ≥ .70, calculate the subscale score as the mean of its items. When α < .70, analyze items individually rather than averaging into a composite score.
  3. Calculate subscale scores by averaging responses across items within each reliable subscale. Higher scores indicate stronger endorsement of the construct as operationalized. (A scoring sketch covering steps 1-3 follows this list.)
  4. For the Valuing Technical Training comparison items: These are not scored as part of the ethics subscales. Instead, compare the mean of Valuing Ethics Training items against the mean of Valuing Technical Training items to assess whether students differentially value ethical vs. technical education.
  5. Select appropriate statistical tests based on your design:
    • Pre/post (paired) data: Use the Wilcoxon Signed Rank Test for paired comparisons. Check that the distribution of differences is symmetric (e.g., via Anderson-Darling normality test or graphical inspection); if not symmetric, use the Sign Test instead.
    • Independent samples: Use the Wilcoxon Rank Sum Test (Mann-Whitney U) for comparisons between groups.
  6. Report effect sizes alongside significance tests: r = Z / √N for Wilcoxon tests, or Cliff's delta for Sign Tests. (A sketch of these tests also follows this list.)
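
For concreteness, here is a minimal scoring sketch in Python (pandas/numpy) covering steps 1-3. The DataFrame layout and column names (e.g., ts_1 through ts_4 for the technosolutionism items) are assumptions for illustration, not part of the instrument.

    # Scoring sketch (pandas/numpy). Column names such as ts_1..ts_4 are
    # hypothetical placeholders for however you label items in your data.
    import numpy as np
    import pandas as pd

    def reverse_code(df, items, scale_points=6):
        """Reverse-code the listed items (e.g., 1->6, ..., 6->1 on a 6-point scale)."""
        out = df.copy()
        out[items] = (scale_points + 1) - out[items]
        return out

    def cronbach_alpha(df, items):
        """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of totals)."""
        k = len(items)
        item_variances = df[items].var(axis=0, ddof=1)
        total_variance = df[items].sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

    def subscale_score(df, items, alpha_threshold=0.70):
        """Per-respondent subscale mean, returned only if alpha meets the threshold."""
        alpha = cronbach_alpha(df, items)
        if alpha < alpha_threshold:
            print(f"alpha = {alpha:.2f} < {alpha_threshold}: analyze items individually")
            return None
        return df[items].mean(axis=1)

    # Example usage with hypothetical column names:
    # ts_items = ["ts_1", "ts_2", "ts_3", "ts_4"]   # technosolutionism (all reverse-coded)
    # df = reverse_code(df, ts_items)
    # df["technosolutionism"] = subscale_score(df, ts_items)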
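
A companion sketch for steps 5-6 uses scipy.stats for the Wilcoxon tests and the r = Z / √N effect size. Variable names are illustrative; note that conventions differ on whether N counts pairs or total observations, so state the convention you use when reporting.

    # Pre/post and group-comparison sketch (scipy). Inputs are arrays of
    # per-respondent subscale scores; variable names are illustrative.
    import numpy as np
    from scipy import stats

    def paired_comparison(pre, post):
        """Wilcoxon Signed Rank Test on paired pre/post scores, with r = Z / sqrt(N)."""
        pre, post = np.asarray(pre, dtype=float), np.asarray(post, dtype=float)
        # The test assumes the distribution of differences is roughly symmetric;
        # inspect it first and fall back to the Sign Test if it is not.
        res = stats.wilcoxon(post, pre)
        z = stats.norm.isf(res.pvalue / 2)   # |Z| recovered from the two-sided p-value
        r = z / np.sqrt(len(pre))            # N taken here as the number of pairs; conventions vary
        return res.statistic, res.pvalue, r

    def group_comparison(group_a, group_b):
        """Wilcoxon Rank Sum (Mann-Whitney U) Test for independent samples."""
        res = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
        return res.statistic, res.pvalue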

Recommended Research Applications

  • Pre/Post Assessment: Measure changes in critical consciousness before and after ethics courses
  • Initial Assessment: Tailor targeted interventions based on specific class profiles
  • Program Evaluation: Compare outcomes across different pedagogical approaches
  • Cross-Institutional Studies: Our standardized measurement enables comparison across institutions

Terminology

When reporting results, we recommend using the following terms consistently:

  • Index: The full instrument, comprising multiple scales. Example: the Critical Computing Index (CCI).
  • Construct: A theoretical concept the index aims to measure. Examples: Critical Reflection, Critical Agency.
  • Operationalization: How a construct is concretely defined and measured via a group of items. Examples: Technosolutionism, Personal Effectiveness.
  • Scale: The set of items that together measure a construct. Example: the Critical Reflection scale (22 items).
  • Sub-scale: A subset of a scale measuring a specific facet of the construct. Examples: Technosolutionism, Valuing Ethics Training.
  • Item: An individual survey question within an operationalization. Example: "We should prioritize computational solutions over human judgment."

How to Cite

Development Paper

Aadarsh Padiyath, Mark Guzdial, and Barbara Ericson. 2025. Development of the Critical Reflection and Agency in Computing Index. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (CHI '25), April 26–May 1, 2025, Yokohama, Japan. ACM, New York, NY, USA, 11 pages. https://doi.org/10.1145/3706598.3713189

Validation Paper

Aadarsh Padiyath, Casey Fiesler, Mark Guzdial, and Barbara Ericson. 2025. Validation of the Critical Reflection and Agency in Computing Index: Do Computing Ethics Courses Make a Difference? In ACM Conference on International Computing Education Research V.1 (ICER 2025 Vol. 1), August 3–6, 2025, Charlottesville, VA, USA. ACM, New York, NY, USA, 19 pages. https://doi.org/10.1145/3702652.3744208

Limitations & Considerations

  • Geographic scope: Validated with U.S.-based computing students and professionals; generalizability to other cultural contexts may require additional validation.
  • Self-report measures: Responses are subject to social desirability bias and the Hawthorne effect.
  • Temporal stability: As notions of computing ethics evolve, the constructs as operationalized and measured may need periodic review and updates.

Questions or Feedback?

If you're using this index in your research or have questions about implementation and analysis, please reach out to aadarsh@umich.edu.