Methodology

The 2019 Congressional Capacity Study is a collaborative research project conducted by a core team of political scientists, Alexander C. Furnas and Timothy M. LaPira. The project followed the successful fielding of the 2017 Congressional Capacity Survey, which collected original qualitative and quantitative data.

From May through October 2019, the research team fielded an online questionnaire targeting all congressional staff with political or legislative responsibilities based primarily in Washington, D.C. offices. The sampling frame was defined broadly by staffers' geographic location to best capture those who contribute to Congress's legislative, appropriations, oversight, or general public policy operations.

Purpose

The survey questionnaire sought to learn more about the backgrounds, career paths, policy views, technical knowledge, substantive expertise, and job experiences of congressional staffers, as well as the procedures and organizational structures that allow them to assist members of Congress in doing their work in the most effective and democratically responsive ways. The sampling and fielding process was designed to yield as broad and representative a sample of congressional staff as possible.

Sample Construction

We constructed the sampling frame from the full LegiStorm contact list as of July 18, 2017, which included individuals' names, employers, and official email addresses. The contact list contained the full census of 10,369 legislative branch employees with a Washington, D.C. office address, spanning 729 House, Senate, and bicameral offices and organizational units. The list excluded legislative support agencies (such as the Congressional Research Service, Government Accountability Office, and the Congressional Budget Office) that employ personnel as federal civil servants. From this list of organizational units, the research team selected 633 organizational units whose names suggested a primary mission contributing to legislative operations, as best as could be determined from public information about the office. Primarily, these units are members' personal offices, standing committees, and party leadership offices. Secondarily, we included "other" administrative offices (such as the House Parliamentarian)1 and institutionalized caucuses or member organizations (such as the Senate Caucus on International Narcotics Control and the House Republican Study Committee).

The sampling frame excludes offices with exclusively administrative, facilities, or maintenance missions (such as the House Office of Logistics and Support and the Senate Office of Printing, Graphics and Direct Mail). We also excluded staffers with primarily administrative responsibilities, limiting our sample to legislative, communications, and political management staff.

Table 17 summarizes the 6,505 individuals that this process identified as primarily employed as political appointees in the legislative branch. The table cross-tabulates prospective respondents by chamber and office type.

Table 17 | 2019 Sampling Frame

         Personal   Committee   Party Leadership   Total
House    2,756      953         184                3,893
Senate   1,768      741         103                2,612
Total    4,524      1,694       287                6,505
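The cell counts in Table 17 are internally consistent; a minimal Python check (ours, not part of the study's code) confirms the row, column, and grand totals:

```python
# Cell counts from Table 17: (chamber, office type) -> prospective respondents.
frame = {
    ("House", "Personal"): 2756, ("House", "Committee"): 953, ("House", "Party Leadership"): 184,
    ("Senate", "Personal"): 1768, ("Senate", "Committee"): 741, ("Senate", "Party Leadership"): 103,
}

def row_total(chamber):
    """Sum across office types for one chamber."""
    return sum(n for (c, _), n in frame.items() if c == chamber)

def col_total(office):
    """Sum across chambers for one office type."""
    return sum(n for (_, o), n in frame.items() if o == office)

assert row_total("House") == 3893 and row_total("Senate") == 2612
assert col_total("Personal") == 4524 and col_total("Committee") == 1694
assert sum(frame.values()) == 6505  # grand total matches the reported frame size
```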

The process intentionally makes no assumptions about individual staffers within an office based on common job titles, in order to maximize the variety of staff included. Moreover, job titles are notoriously noisy descriptions of staffers' actual responsibilities within an office. This sampling frame conservatively biases toward over-coverage of prospective participants who may reasonably be thought of as politically appointed staff engaged in legislative operations, even if their role is tangential (such as press secretaries). The ex ante expectation was that response rates would be artificially deflated, because we were likely inviting non-legislative staff employed in legislative offices, such as office operations managers, who would be more likely to decline to participate in the survey.

Fielding Process and Timeline

The survey was offered exclusively online using the James Madison University license to the Qualtrics survey platform, in three sequential data collection stages between May and October 2019. Each of the 6,505 prospective staffers was contacted first by mail with a personalized survey link, and then by email with a personalized link, to identify respondents with existing biographical data and to maintain strict confidentiality. In addition to these direct contacts, the research team recruited senior legislative staffers in our professional networks to spread the word as much as they were willing, and partnered with external validator groups, including the Congressional Management Foundation, R Street, the Legislative Capacity Working Group, and Demand Progress, to promote participation. The fielding process was conducted over the course of five months in 2019, as follows:

  1. May 14-17: Initial invitation emails sent in batches of 100.
  2. June 2: Email response declines and survey completions identified, dropped from first follow-up contact list.
  3. June 4-5: First reminder email sent.
  4. June 30: Second round email response declines and survey completions identified, dropped from second follow-up contact list.
  5. July 2-10: Second reminder email sent.
  6. August 1-5: Third round email response declines and survey completions identified, dropped from third follow-up contact list.
  7. September 5-11: Third reminder email sent.
  8. October 28: Survey closed and response data collected from Qualtrics.
  9. December 6: LegiStorm biographical data delivered for in-sample and out-of-sample staffers.
  10. December 7-18: Survey response and LegiStorm biographical data processing.

Response Rates and Post-stratification Weights

The overall response rate was 5.5 percent (355 of 6,505). The margin of error at the 95 percent confidence level is 5.2 percent. Post-stratification survey weights (psweight) were calculated using the survey package in R. For inclusion in the numerator when calculating weights, respondents were counted as having taken the survey if they agreed to participate and responded to at least one other question. The provided psweights are the inverse probability of selection for each respondent, conditioning on the joint distribution of chamber, office type, and party in the population, using the sampling frame purchased from LegiStorm.
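The logic of the inverse-probability-of-selection weights can be sketched in a few lines. This is an illustrative Python stand-in for the R survey-package computation described above, and the cell counts below are invented for illustration only:

```python
from collections import Counter

# Hypothetical population cell counts: (chamber, office type, party) -> N.
population = Counter({
    ("House", "Personal", "D"): 1400,
    ("House", "Personal", "R"): 1356,
    ("Senate", "Committee", "D"): 380,
})

# Hypothetical respondents, one tuple per completed survey.
respondents = [
    ("House", "Personal", "D"),
    ("House", "Personal", "D"),
    ("House", "Personal", "R"),
    ("Senate", "Committee", "D"),
]

# Weight = inverse probability of selection within each cell: N_cell / n_cell.
sample = Counter(respondents)
psweight = {cell: population[cell] / sample[cell] for cell in sample}
```

With these invented counts, a House personal-office Democrat has weight 1400 / 2 = 700, because respondents in that cell stand in for 700 frame members each.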

Linking Back to Biographical Data

In addition to the variables collected with the survey instrument, respondents were subsequently linked back to the original biographical and payroll data in the sampling frame purchased from LegiStorm. Selected variables were used to calculate psweights and to define subgroups by chamber, party, office type, job title, seniority, and gender.

Categorizing Staffer Responsibility

Our analyses rely on unique staffer responsibility classifications. To develop these classifications, we employ a hybrid human- and machine-based coding protocol. Under this protocol, certain job titles receive automatic coding decisions, assigned via a simple algorithm in Python. More ambiguous job titles, however, were referred to research assistants, who investigated the staffer's responsibilities for the specified year and quarter in greater detail. This additional investigation involved searching for staffers in quarterly volumes of the Congressional Yellow Book, which lists factors such as the staffer's office location (Washington versus the district), policy portfolio (if one exists), and, occasionally, more descriptive job titles. This information was incorporated systematically into coding decisions, as delineated in the coding protocol.

While some studies have opted to fully automate similar coding decisions, such automation is highly likely to introduce both measurement error and systematic bias. Careful human coding can capture cross-sectional differences and over-time changes in naming conventions and more accurately reflect staffers' responsibilities. Our codes reflect a staffer's primary office responsibilities. For example, if a chief of staff (CoS) has legislative issues associated with her Congressional Yellow Book entry, we assume that a larger portion of her time is occupied by legislative matters than that of a CoS with no associated legislative issues. This captures common differences in CoS duties between offices. Chiefs are particularly important because their salaries occupy a significant portion of a member's representational allowance (MRA). Other titles are less consequential, but our process nevertheless treats them with equal care. Interns, for example, are known to perform multiple functions for the office, even though many focus primarily on answering phone calls or giving tours. In some cases, a staffer will have multiple responsibilities, with no single responsibility dominant. In these cases, we split the staffer's salary equally among the categories.

Another crucial feature of the coding process is that it is designed to minimize underestimation of legislative and constituency service investment. Any code may be overridden by the presence of legislative responsibilities within the Congressional Yellow Book. In practice, these cases were double-checked to make sure member offices are credited with the fullest possible measure of legislative investment. Constituency service is handled similarly. The procedure therefore renders the "Legislative," "Constituency Service," and "Communications" codes as the most precise coding categories available in the dataset. Each of these categories has informative, concrete coding rules to capture underlying responsibilities accurately. By contrast, "Political Management" and "Administrative" serve as residual categories determined by a combination of salary information, absence of legislative responsibilities, and presence in Washington.
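The shape of this protocol can be sketched in Python. The title lists, category names, and the Yellow Book flag below are simplified stand-ins for the actual coding rules, not the study's code:

```python
# Simplified stand-in for the automatic title-to-code rules.
AUTO_CODES = {
    "legislative director": "Legislative",
    "legislative assistant": "Legislative",
    "press secretary": "Communications",
    "caseworker": "Constituency Service",
}

def code_staffer(job_title, yellow_book_legislative=False, manual_code=None):
    """Assign one responsibility category to a staffer-quarter.

    Unambiguous titles are coded automatically; ambiguous titles fall back
    to a human coder's decision (manual_code). A Congressional Yellow Book
    entry listing legislative issues overrides any other code, so offices
    are credited with the fullest possible measure of legislative investment.
    """
    title = job_title.strip().lower()
    code = AUTO_CODES.get(title, manual_code)
    if yellow_book_legislative:
        code = "Legislative"
    return code or "Administrative"  # residual category when nothing else applies

print(code_staffer("Press Secretary"))                               # Communications
print(code_staffer("Chief of Staff", yellow_book_legislative=True))  # Legislative
```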

Question Wording and Variable Construction

Ideology: We measure staffer ideology as a latent variable derived from a five-question battery taken from Heinz and colleagues,2 validated by Kevin Esterling.3 We use the items in this battery to create an ideal point estimate for each staffer using a Partial Credit Model (PCM), an extension of Rasch item-response theory (IRT) models that is appropriate for ordinal variables.4 We standardize this ideology score to have a mean of 0 and a standard deviation of 1.

Ideology Battery Questions

Q19 Thinking about YOUR OWN personal opinions—not what you think your boss believes—what do you think about the following? [Responses on a Likert-type agreement scale: (1) Strongly agree (2) Somewhat agree (3) Neither agree nor disagree (4) Somewhat disagree (5) Strongly disagree]

q19.1 The protection of consumer interests is best insured by a vigorous competition among sellers rather than by federal government regulation on behalf of consumers.

q19.2 There is too much power concentrated in the hands of a few large companies for the good of the country.

q19.3 One of the most important roles of government is to help those who cannot help themselves, such as the poor, the disadvantaged, and the unemployed.

q19.4 All Americans should have access to quality medical care regardless of ability to pay.

q19.5 The differences in income among occupations should be reduced.

For purposes of scale construction, q19.1 was reverse-coded so that higher numbers always correspond to more conservative responses.
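The reverse coding and standardization steps can be sketched as follows. This is only the scaffolding around the measurement model, not the PCM estimation itself, which the study performs separately:

```python
import statistics

def reverse_code(x, scale_min=1, scale_max=5):
    """Flip a response on the 1-5 Likert scale: 1 <-> 5, 2 <-> 4, 3 -> 3."""
    return scale_max + scale_min - x

def standardize(scores):
    """Rescale a list of scores to mean 0 and standard deviation 1."""
    mu = statistics.mean(scores)
    sd = statistics.stdev(scores)
    return [(s - mu) / sd for s in scores]

# One hypothetical respondent's raw battery answers.
responses = {"q19.1": 2, "q19.2": 4, "q19.3": 5, "q19.4": 4, "q19.5": 3}
responses["q19.1"] = reverse_code(responses["q19.1"])
print(responses["q19.1"])  # 4: "somewhat agree" with the conservative item now scores high
```

After this step, higher values on every item indicate a more conservative response, so the battery can be fed to the PCM with a consistent orientation.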

Party Identification Battery

Q21 Generally speaking, do you usually think of YOURSELF as a Republican, a Democrat, an Independent, or something else? Note: your response here may differ from your boss’s party. (1) Republican (2) Democrat (3) Independent (4) Other (5) No preference

[Display This Question if Q21== 1] Q23 Would you call yourself a strong Republican or a not very strong Republican? (1) Strong (2) Not very strong

[Display This Question if Q21== 2] Q25 Would you call yourself a strong Democrat or a not very strong Democrat? (1) Strong (2) Not very strong

[Display This Question if Q21 == 3, 4, or 5] Q27 Do you think of yourself as closer to the Republican or Democratic party? (1) Republican (2) Democratic
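The branching above can be combined into a single partisanship measure. The sketch below folds independents who lean toward a party into "leaner" categories, a common construction that is our illustration rather than a variable documented in the survey codebook:

```python
def party_id(q21, q23=None, q25=None, q27=None):
    """Collapse the Q21/Q23/Q25/Q27 branching into one partisanship label."""
    if q21 == 1:  # Republican: strength asked in Q23
        return "Strong Republican" if q23 == 1 else "Not very strong Republican"
    if q21 == 2:  # Democrat: strength asked in Q25
        return "Strong Democrat" if q25 == 1 else "Not very strong Democrat"
    # Independent / Other / No preference: Q27 asks which party feels closer.
    if q27 == 1:
        return "Lean Republican"
    if q27 == 2:
        return "Lean Democratic"
    return "Independent"

print(party_id(q21=2, q25=1))    # Strong Democrat
print(party_id(q21=3, q27=1))    # Lean Republican
```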

Citations
  1. Institutional officers and their staff are arbitrarily attributed to the respective majority party, though they operate in fact as nonpartisan employees.
  2. Heinz, J. P., E. O. Laumann, R. L. Nelson, and R. H. Salisbury. Washington, D.C., Representatives: Private Interests in National Policymaking, 1982-83. ICPSR, 1999.
  3. Esterling, Kevin M. "Constructing and Repairing our Bridges: Statistical Considerations When Placing Agents into Legislative Preference Space." Available at SSRN 3107266, 2018.
  4. Gerhard H. Fischer and Ivo W. Molenaar, Rasch Models: Foundations, Recent Developments, and Applications (Springer Science & Business Media, 2012); Patrick Mair and Reinhold Hatzinger, "Extended Rasch Modeling: The eRm Package for the Application of IRT Models in R," 2007.
