Content filtering is often done with good intent, but it can also create equity and privacy issues.

This post is co-authored by Chris Gilliard and Hugh Culik. Chris and Hugh are both community college professors.

In the late 1990s, a student works in her high school library, searching for information on E. E. Cummings. She types in the poet's name and waits, and suddenly a buzzer at the front desk rings, alerting the teacher and staff that a student is looking for "inappropriate" material. The student's screen flashes a red command to report to the front desk, and the buzzer continues to blare so everyone can see her walk of shame. The buzzer -- a customized add-on to the off-the-shelf filter -- speaks on behalf of someone's decision that "cummings" is a plural noun, not the name of a New England poet. After explaining herself, the chagrined student abandons that search and begins another one, this time for Emily Dickinson. And then imagine Emily Dickinson blushing in her grave when the buzzer rings again at the sound of her name.

Twenty years ago, before the "black boxes" became invisible and silent, buzzers alerted us when someone pushed against a boundary. We try to reassure ourselves that today, the road to information has become clearer, unencumbered by bells, whistles, or buzzers.
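To make the mechanism behind that buzzer concrete, here is a minimal sketch -- in Python, with a hypothetical blocklist and alert behavior of our own invention, not the internals of any actual filtering product -- of how naive substring matching produces exactly these false positives: any query containing a blocked string is treated as a request for "inappropriate" material, no matter whose name it actually is.

# A minimal sketch of naive substring-based keyword filtering. The blocklist
# and the alert behavior are hypothetical, meant only to illustrate how a
# search for a poet's name can trip a filter aimed at something else entirely.

BLOCKED_TERMS = ["cumming", "dick"]  # hypothetical blocklist entries

def is_blocked(query: str) -> bool:
    """Flag a query if any blocked term appears anywhere inside it."""
    normalized = query.lower()
    return any(term in normalized for term in BLOCKED_TERMS)

def search(query: str) -> None:
    if is_blocked(query):
        # The 1990s version rang a physical buzzer; today's filters tend to
        # block silently and log the "violation" instead.
        print(f"BLOCKED AND LOGGED: {query!r}")
    else:
        print(f"OK: {query!r}")

search("E. E. Cummings poems")    # false positive: flagged
search("Emily Dickinson poems")   # also flagged, for the same reason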

Today, Nina is a community college student enrolled in a composition class that explores issues of surveillance, privacy, and online identity. She participates in a class discussion of revenge porn, and she decides that it offers a topic for exploring the connection between the digital world and issues of gender, power, and free speech. She goes to one of the computer labs and does a quick search.

Her effort proves unproductive. It produces information about ABC's hit show Revenge but nothing about what everyone reading this column understands "revenge porn" to mean. For Nina, the concept doesn't exist -- not because the Internet contains no information on revenge porn but because Nina's version of the Internet is filtered. Because the filters between her and the Internet block access to information, she reasonably believes that the issue is marginal and that other topics might prove more fruitful. She moves on to something else, unaware of the invisible walls that prevent her from accessing information that might allow her to do her work. The student has been digitally redlined, walled off from information based on the IT policies of her institution.

Little does she know that the limits of her world are being shaped by the limits imposed on the information she can access. Because she's a community college student, it's likely that she is hemmed in by many invisible boundaries. When she uses JSTOR, she is probably using one of its smaller versions, which offer far fewer journals. When her curiosity points her to enrollment in a course outside her major, her advisor uses the "intrusive advising" recommended by the Guided Pathways to Success program to keep her from exploring options she did not know existed when she came to college. Her college's acceptable-use policy is likely to exclude her from P2P services. These informational boundaries at her community college make her less competitive than the graduates of "regular" universities where digital redlining is far less common.

At the community college where we teach -- as at many community colleges nationwide -- digital resources are scarce, students and faculty are embedded in working-class realities, and digital redlining imposes losses that directly limit the futures of our students. But the issue isn't confined to community colleges. Its pervasive role in educational technologies needs to be recognized and integrated into the judgments we make about how edtech can categorize students and limit their choices. This issue speaks to us because we see the consequences of such practices on a daily basis, not only in education but in the world of post-industrial inequities.

THE ROOTS OF REDLINING

In the United States, redlining began informally but was institutionalized in the National Housing Act of 1934. At the behest of the Federal Home Loan Bank Board, the Home Owners' Loan Corporation (HOLC) created maps for America's largest cities that color-coded the areas where loans would be differentially available. The difference among these areas was race. In Detroit, redlining was a practice that efficiently barred specific groups—African-Americans, Eastern Europeans, Arabs—from access to mortgages and other financial resources. We can still see landmarks such as the Birwood Wall, a six-foot-high wall explicitly built to mark the boundary between white and black neighborhoods. Even though the evidence is clear, there is a general failure to acknowledge that redlining was a conscious policy.

DIGITAL REDLINING

What does this have to do with digital tools, data analytics, algorithms, and filters? It may have to do with the growing sense that digital justice isn't only about who has access but also about what kind of access they have, how it’s regulated, and how good it is. Just as we need to understand how the Birwood Wall limited financial opportunity, so also do we need to understand how the shape of information access controls the intellectual (and, ultimately, financial) opportunities of some college students. If we emphasize the consequences of differential access, we see one facet of the digital divide; if we ask about how these consequences are produced, we are asking about digital redlining. The comfortable elision in "edtech" is dangerous; it needs to be undone by emphasizing the contexts, origins, aims, and ideologies of technologies.

Digital redlining becomes more obvious if we examine how community colleges are embedded in American class structures. For about 50 percent of U.S. undergraduates, higher education means enrollment in these institutions. They offer a distinct education that emerges from the intersection of largely working class students with the institutional forces that shape their curricula, assessment, and pedagogy. These students face powerful forces—foundation grants, state funding, and federal programs—that configure education as job training and service to corporate needs. These colleges sometimes rationalize this strategy by emphasizing community college as a means of escaping poverty, serving community needs, and avoiding student debt.

Digital redlining arises out of policies that regulate and track students' engagement with information technology. Acceptable use policies (AUPs) are central to regulating this engagement. To better understand how digital redlining works at community colleges, we sampled acceptable use policies from across the range of Carnegie Classifications. These policies, like the HOLC maps, create boundaries. The boundaries not only control information access and filtering but also determine how student data is collected, retained, and passed on to third parties. The modern filter not only limits access to knowledge; it also tracks when people knock against those limits. In this environment, curiosity looks a lot like transgression.

Certainly there are differences between the students of community colleges and those attending "higher level" colleges. What is more important is that we can see how those differences are reinscribed in AUPs in ways that reinforce class boundaries. For example, network filtering plays an essential role in stopping malware, viruses, and child pornography. But beyond those legal requirements, the informational geography of community college students can be heavily redlined through content filters that block Web sites and reinforce restrictive pedagogies. The dangers of digital redlining have been recognized by the White House in areas of filtering and personalization and by prominent civil rights groups in terms of broadband access, but digital redlining is a concern in colleges as well. We don't have to rely on hypotheticals: We see it at our own institution on a daily basis, and we see how these restrictions are codified in institutions' acceptable use policies.

The demographic profile of community college students is distinctly working class, and our interest in digital redlining prompted us to review about 30 acceptable use policies at such institutions. These revealed a consistent emphasis on prohibited uses, prohibited services, and limited privacy rights. To evaluate the class biases of these institutional policies, we interviewed CIOs from four categories of the Carnegie Classification of Institutions of Higher Education, an upper-level information officer at an R1 institution, and the associate dean of a digital writing program at another R1. These interviews led to a further sampling and assessment of AUPs from each of the Carnegie Classifications. A review of 109 AUPs produced straightforward results: The more research-based the institution, the more its policies treated IT as an environment shared by a variety of stakeholders. Institutions that emphasized job training and certification, on the other hand, saw IT as a tool for transmitting information as determined by the school. These deeply different approaches to digital technologies are a form of redlining that can discourage working-class students from the open-ended inquiry supported at more elite institutions and limit their access to it.

Digital redlining is not a renaming of the digital divide. It is a different thing, a set of education policies, investment decisions, and IT practices that actively create and maintain class boundaries through strictures that discriminate against specific groups. The digital divide is a noun; it is the consequence of many forces. In contrast, digital redlining is a verb, the "doing" of difference, a "doing" whose consequences reinforce existing class structures. In one era, redlining created differences in physical access to schools, libraries, and home ownership. Now, the task is to recognize how digital redlining is integrated into edtech to produce the same kinds of discriminatory results. Armed with the history of redlining, and understanding its digital resurrection, we glimpse the use of technologies to reinforce the boundaries of race, class, ethnicity, and gender. Our experience is that this problem is seldom recognized as an urgent educational issue.

Returning to Nina, the community college student who wants information about revenge porn: After researching the topic on a filtered and monitored connection, she reasonably believes there isn't much to the topic. For her, the digital redlining occurs because she—like many community college students—relies on the school for Internet access beyond her phone. If the school restricts information access, knowledge doesn't simply become invisible; it does not exist. The filter—a tool configured by policy and implemented by an individual's interpretation of that policy—digitally redlines the intellectual territory of the community college students in ways not seen at "higher level" institutions where the working class does not predominate.

Digital redlining isn't recognized as a postsecondary problem, we would guess, for three reasons. First, these issues typically aren't problematic at higher-level institutions because, as our research shows, the IT policies at R1 institutions lean toward an open environment. Second, the socioeconomic status of students at R1s means that their options for accessing the Internet are not limited to their time on campus. Third, it isn't recognized as a problem at community colleges because the institutions themselves often share in a class consciousness that sees education as job training. Training is fundamentally a transmission process: The instructor has predetermined processes and goals that the student must absorb. It assumes that there is an authority that determines what a student needs to know and what the outcome will be. Under such conditions, technologies will not be interrogated to reveal how they actively reinscribe social structures. Indeed, with filtering in place, attempts to research the limits placed on technology can be blocked, and the person asking the questions will have their searches logged. Therein lies the danger.

Moving forward, the framing, construction, and critique of tech policy at institutions need to take into account the extent to which those policies either open up or wall off access to knowledge and information. Do the policies erect invisible walls that restrict access, or do they allow for the kinds of interest-driven learning students need to take control of their education? We also have to ask questions about the implicit pedagogy of any technology we are looking to adopt. Does it restrict or promote openness and access?

For software developers, the questions are not so different. Great edtech is rooted in clearly defined and articulated pedagogy. As you shepherd a concept into a product, can you describe the pedagogical underpinnings of that tech? To that end, we would challenge developers to ask themselves that question and to engage faculty (the pedagogy experts) at all stages of developing tech. Further, developers need to take into account the populations and the institutions for whom they are developing. Digital redlining takes place when policymakers and designers make conscious choices about what kinds of tech and education are "good enough" for certain populations, but it also happens through the failure to interrogate policy and design. We all have a role to play in these interrogations -- any of us can ask about the policies and technologies that filter our access and track our interactions. To think about digital redlining is to historicize contemporary digital culture, an urgent task if these technologies are to help students reach their goals.

Image Credit: Home Owners' Loan Corporation, Philadelphia redlining map. This map is in the public domain.

Chris Gilliard has been a professor for 20 years, teaching writing, literature, and digital studies at a variety of institutions, including Purdue University, Michigan State University, the University of Detroit, and currently Macomb Community College. His students have gone on to graduate programs at a variety of schools: University of Colorado, University of Michigan, University of Illinois, Columbia, University of Chicago, and elsewhere. Chris is interested in questions of privacy, surveillance, data mining, and the rise of our algorithmically determined future.