While practitioners reported many similarities in their EJ assessment experiences, results from the practitioner interviews revealed several themes relating to variations in practitioner roles, challenges accessing data, differences in the subjective nature of assessing disproportionate impacts, and internal and external EJ collaboration. Each of the following sections will describe the main themes using quotes to reinforce thematic findings.
3.1 Practitioners support EJ in the NEPA process with varying levels of specialization and ability to influence EJ/CIA practice in the agency.
Three types of practitioner roles and orientations to EJ were revealed through the interviews: Type 1, a general NEPA practitioner; Type 2, a NEPA practitioner with additional EJ/CIA SME responsibilities; and Type 3, a dedicated EJ/CIA SME who does not necessarily engage directly in the NEPA review process. Type 1 practitioners often described EJ as one of many environmental review processes they carried out within their broader NEPA responsibilities and rarely identified specialization in the topic. Type 2 practitioners were formally designated as an EJ or CIA SME in their department, carrying out responsibilities similar to a Type 1 practitioner's but with the added duties of serving as an EJ specialist and developing some internal guidance. Finally, Type 3 practitioners operated as dedicated EJ resources in the agency, with EJ or CIA review as their primary responsibility, often engaging in more formalized EJ policy and guidance development and collaboration for the agency.
NEPA emerged as a prevailing cornerstone for practitioners to orient themselves to EJ within their agencies, either through environmental review or public involvement (PI) in project delivery. Frank, a district/regional environmental planner with over twenty years’ experience, described his role primarily “as a NEPA practitioner in [his agency’s] transportation planning and project development arena, preparing NEPA documentation, submitting NEPA documentation for approval, and documentation for EJ or community impact assessment, as needed.” Type 1 practitioners like Frank noted having other responsibilities in the environmental permitting process such as wetland permitting, state permits, Army Corps, and other related permits.
Environmental planners in several states had a designated role as either a socioeconomic or EJ SME. This can be thought of as a designated specialty on top of their primary role as environmental planners (NEPA+SME). As Mr. O indicated,
“I review any possible socioeconomic or environmental justice concerns and also help with the guidance and implementation of those processes, as well as overseeing most of the environmental portion of a project…any of our seven sections. But, my job as the subject matter expert is to be well versed in the socioeconomic and environmental justice aspect.”
This additional, specialized knowledge sets Type 2 practitioners like Mr. O apart from traditional environmental planners, as it formalizes EJ expertise within the agency and provides a focal person for resources, review, and implementation.
Finally, Type 3 practitioners share a common trait within their respective agencies as dedicated socioeconomic, CIA, or EJ SMEs responsible for overseeing EJ/CIA at some level within the agency. Angie and Heather described their experiences in their agency where they “oversee that whole (EJ/CIA) process” and “develop the guidance that [they] give to [their] staff and district staff.” Agencies interviewed that had dedicated EJ/CIA SMEs described more structure in EJ/CIA review, guidance, and implementation, further formalizing EJ/CIA within the agency. Juliet expressed the tension an agency experiences without a dedicated specialist (Type 3) and the questions arising from that tension,
“’Should we bring in an actual specialist just to focus on [EJ]?’...my title is NEPA Program Manager. I’m not an EJ specialist. So, I’m teaching myself while doing this, so that I’m constantly asking myself, ‘Are my efforts enough?’”
This tension highlights a desire from some practitioners for the creation of Type 3 roles within their agency to carry out EJ analysis more effectively, which was often described as being internally limited due to resource constraints, particularly when an agency only contained Type 1 practitioners.
Where Type 1 and Type 2 or 3 practitioners existed within an agency (typically district/region and headquarters/division, respectively), Type 1 practitioners described having an opportunity to readily interact with the other types of practitioners in their agency for guidance, document review, and support. Additionally, Type 2 and 3 practitioners discussed being resources for other departments on topics of socioeconomics or EJ and, in some cases, interacting directly with EJ peers in other state agencies and the state FHWA office in the review process. Further discussions of practitioner collaboration will be covered in a later theme.
3.2 Access to updated and disaggregated data is a challenge for both urban and rural practitioners.
While discussing data analysis, many practitioners noted the importance of going beyond quantitative data to support analysis with qualitative data collection and impact assessment. Ground truthing data was described as an extension of initial quantitative data assessments and not a substitute. Additionally, data access and aggregation were noted as an issue for both urban and rural practitioners.
Decennial Census, American Community Survey (ACS), and EJSCREEN data are a nearly universal starting point for practitioners when determining the level of EJ analysis or impact a project may initiate. However, practitioners often followed their list of quantitative data sources with descriptions of needs for on-the-ground assessments and verification of data. As Juliet noted, “when you’re looking at numbers, you take away the human part of it…the numbers [aren’t] going to tell you what’s important to that community.” Similarly, Swan noted that “impacts to people [aren’t] the same analysis as analyzing soil conditions.” Practitioners expressed how quantitative data can only get them so far, often serving merely as a baseline for further, deeper qualitative EJ impact analysis.
While qualitative analysis is important, several practitioners described a tension between gathering data in a computer environment and what could be described as a reticence of practitioners or project team members to engage with the community. One practitioner explained how “a lot of times what [she sees] is folks will look at Census data, but there’s no ground truthing…there may still be something else missing if you’re not out in the community, understanding the situation.” Patto summarized this point with a reflection on the effect a practitioner’s perspective, often that of a community outsider, can have absent further contextual and relational understanding,
“…there are a lot of things that I don’t think are immediately obvious until you really start talking to people within the community, and then you find out some stuff that otherwise, without being a member of that community, you would never know.”
Other practitioners emphasized the project-level consequences of failing to engage in ground-truthing early enough, noting that a project which does not fully leverage community context, engagement, and input is left vulnerable to delays and increased costs when EJ impacts are not identified sooner in the process.
Finally, while quantitative data may be generally available, several practitioners in both urban and rural contexts noted data access limitations and difficulties with aggregation. These limitations often leave practitioners wanting for greater levels of information and seeking additional methods for understanding impacts. Angie, an urban practitioner, noted her agency’s struggle:
“it’s not always easy to get the information we need as far as…who’s living where, who might we impact. The Census data is only so good, it’s only so up to date, it only gets down to a certain level, and it’s not all easily available.”
Grace B., a rural practitioner, also described struggles with Census data not being “super accurate, or not accurate” at a rural aggregation level and being let down by other data sources as well because “the information is just not available for the area that [they] are looking at.” Several practitioners noted their agencies leveraging relationships in the community, performing door-to-door surveys, capturing information in the PI process, or developing their own internal data sources as ways of combating these publicly available data limitations. As a result, practitioners and agencies operate under considerable uncertainty when carrying out EJ community identification and impact analysis. These uncertainties create opportunities for differences of opinion in the process and for challenges to results and findings.
3.3 Differences of opinion and the subjective nature of disproportionate impacts can make reaching consensus difficult.
Disproportionate impact determinations were described by many practitioners as “murky,” a “smell test,” or “subjective” in nature, creating opportunities for differences of opinion. The need for increased guidance and definition of EJ was discussed as an approach to improve clarity; however, practitioners also noted increased guidance could restrict decision-making flexibility.
Determining what qualified as impacts to EJ communities, as with other EJ practitioner experiences, varied from state to state, and as Mr. O noted “once we start getting into impacts and remediation, it gets a little bit fuzzy.” The geographic area and unit of analysis can influence the calculated severity of a particular impact, especially when impacts can be direct or indirect. As Stacy discussed, “it totally depends based on your project. We don’t have definitive [limits], it’s just what are your potential impacts and how will that relate to the area that you’re in? So, it’s context and the intensity of the impact.” Stacy was not alone, as other practitioners noted using a “context and intensity” approach in their agency assessments of impacts. One practitioner described it as “a kind of…smell test just to see if it registers in your gut as though it may be a disproportionate effect…to an EJ community.”
While some practitioners admitted they had yet to encounter a disproportionate impact assessment due to the nature of their agency context and personal background, those with experience assessing disproportionate impacts often provided uncertain responses about process and outcome. As Frank noted, “there’s no broad understanding of [disproportionate impact], that [he’s] aware of, that transfers from one person to the other.” One practitioner described determining disproportionate impacts as “a little black box.” Discussions of the subjective, individually determined and experienced nature of impacts yielded no clear consensus among practitioners, other than agreement that there was no consensus. Practitioner rationale around this problem indicated it may be an inherent feature of the entire impact determination process: “it’s going to always be a qualitative exercise and you’ll always have room for disagreement.” Ultimately, working toward consensus through the PI process and in conjunction with the local FHWA office was explicitly described by several practitioners as the current solution.
Further complicating practitioners’ ability to develop consensus, a lack of EJ guidance and definitions was often cited as a source of confusion and “a floating atmosphere.” Mike, a Type 3 (SME) practitioner, described the complex relationship between practitioners and guidance in this way:
“Every year I hope FHWA is going to define some of these things a little more clearly. I mean, it’s good, sometimes, to have room to maneuver, so you can try to do your best job of calling out what the true impacts are and bringing adequate mitigation into play, but it would be helpful [to have] some clear guidelines.”
Other practitioners echoed Mike’s sentiment, expressing frustration with sparse guidance on EJ impacts and their interdependencies, which can hinder more accurate representations of impacts in project documentation and in overall project decision-making.
Thresholds were a useful tool for some practitioners in determining when potential impacts might require further evaluation. One practitioner described how they recently changed their tract demographic threshold from 40% to 15%, where “if [the tract has] more than 15% low-income or People of Color, then [they] need to take a closer look at the impacts.” Not all practitioners used thresholds; those who did not indicated that clearer guidance from either FHWA or their own agency would make “decision-making easier.”
While thresholds were described as helpful for determining whether further review or a higher NEPA documentation level was required, individual opinions of practitioners and stakeholders could still lead to disagreement about impact assessment. One practitioner noted “how people use the tools and what they ultimately conclude, or may say in a conclusion, can be different from one… practitioner to the next.” Differences in practitioners’ individual judgement and assessment highlight a significant methodological and interpersonal hurdle practitioners face within their own agencies and in interacting with EJ communities. Such differences also create a potential source of disparity in the treatment of impacts and mitigation across practitioners.
3.4 Practitioner collaboration through internal and external working groups is becoming more common and can help create more cohesive approaches and reduce functional and individual isolation.
In several interviews, practitioners described collaboration among EJ specialists as a common benefit. Internal (within the agency) and external (statewide) working groups focusing on EJ, CIA, civil rights (Title VI, etc.), or other topical intersections provided practitioners with opportunities to develop cohesion of vision and process using existing resources in their own contexts. Less commonly, several practitioners noted their isolation or compartmentalization on project teams and within their agencies.
Collaboration benefited practitioners on several levels. Practitioners described collaborative efforts as an informal back-and-forth or checks-and-balances system between practitioners and other agency departments or state agencies. Examples ranged from something as simple as project-level decision-making to statewide agreement on factors for EJ impact identification and analysis. Peter, among other practitioners, noted how “competing definitions and interests [between Title VI, EJ, equity, etc.]…has made things complicated.” Collaborative measures appear to be one step practitioners and agencies are taking to reduce complexity and increase efficiency in their processes.
Aside from quantifiable benefits, Mr. O discussed how the good relationships his agency developed with other state agencies helped facilitate the EJ impact analysis process and created space where project teams are willing to hear each other out while working toward a decision. While practitioners primarily discussed benefits to their own work and processes, benefits were not described only in unilateral terms, but included discussions of cross-agency impacts, such as benefits for FHWA and other collaborating agencies. One of the increasingly prevalent mechanisms practitioners described for facilitating internal and external collaboration was topic-specific working groups. EJ, CIA, civil rights, equity, and similarly aligned topics served either individually or collectively as the organizing charter for working groups. Often, this varied with the degree of specialization present within an agency; larger, more specialized agencies created working groups within departments and across the agency, while smaller agencies tended to discuss developing statewide working groups. Reported meeting frequencies included monthly, quarterly, and annually, generally increasing as the degree of practitioner specialization increased.
Angie, who works in an agency with Type 1 and 2 practitioners, described how their EJ working group started “based on the expectations and new guidance coming out of the federal level on EJ. [They]…pulled together a group from environmental…planning…MPOs…[DEI]…planning…transit…local programs…a whole host of people who do different things.” A few practitioners echoed Angie’s experience in starting their own working groups. Agencies with primarily Type 1 practitioners noted state agencies reaching out to them to better understand their process, and vice versa. Describing a more ad-hoc approach, Stacy described being “curious about what other government agencies within [her] state are doing…[and is] looking to understand [adverse impacts] better.” Carol, who meets with her working group monthly, discussed how working groups in her agency created an environment for sharing “ideas, interesting projects, innovative practices, new guidance, [and asking] questions.” Practitioners at large agencies noted this was especially important for them as it is difficult to coordinate actions across multiple groups who all have the same goal.
Further highlighting the importance of collaboration and working groups, practitioners in some states expressed frustration with the isolation or compartmentalization in their departments or agencies. On one level, practitioners hoped other departments involved in project delivery would catch “the EJ fever,” allowing for earlier or clearer integration into the project delivery process. John, a Type 2 practitioner, described how it is “easy to get siloed within my discipline. I don’t always have a clear idea or understanding of where I (CIA) will fit for each specific project.” Others described how even topically similar groups in an agency do not interact with each other, keeping their processes separate in a way one practitioner frustratedly described as a “you just do your thing” mentality. Michelle highlighted how, without this type of working group collaboration inside her agency, she “[feels] so alone in [her state]” and relies on outside conferences and groups to find a shared experience that gives her “the will to continue moving forward” in her EJ SME role.
These examples illustrate the importance of collaboration in any form for supporting practitioners and improving project delivery through increased information sharing and collective goal orientation. While the benefits appear to accrue regardless of agency size and specialization, it also appears that not every agency is engaging in collaborative project delivery practices, let alone across topical areas like EJ. The following section discusses each of these themes in greater depth, compares them with the extant literature, shares limitations, and offers opportunities for further research and development.