Table of Contents
- Summary of Findings
- Introduction
- What Americans Think about Why Men Do and Do Not Take Leave from Work to Care for Loved Ones
- Who Has Access to and Uses Family and Medical Leave from Work?
- Six in 10 Americans Anticipate Needing to Take Leave from Work in the Future
- Affording Leave: How Americans Get Pay When They Take Leave and How They Cover the Gaps
- Conclusion
- Methods
- Bibliography
Methods
This research was underpinned by two separate data collections: an online survey and a series of five online discussions. Detailed methods for each data collection are outlined below.
Focus Group Methods
In order to understand the experiences and beliefs of a broad swath of American adult men and women regarding men and caregiving, we conducted five three-day online discussions using 20|20 Research's facilitation platform, QualBoard. 20|20 recruited and screened focus group participants for each of the online discussions. The groups were conducted over four weeks in May 2019 and included a total of 68 participants, who were compensated for their time. Each of the five groups drew on a separate population from across the United States:
- A general population group of men 18 and older,
- A general population group of women 18 and older,
- A group of fathers of children ages zero to eight,
- A group of men who are currently caring for another adult, and
- A group of men who work in caregiving professions such as nursing or early childhood education. Physicians were excluded.
Better Life Lab at New America supplied 20|20 with six open-ended discussion prompt modules: on each of the three days that a board was active, a first module was released early in the morning and a second in the early afternoon. Modules from previous days remained open for respondents to engage with on the following days, and the boards stayed live for five days to give participants extra time to finish answering questions. Participants could respond to moderators, moderators could ask participants follow-up questions to learn more about their experiences, and participants could ask questions of each other or comment on one another's thoughts. Participants were asked about their experiences with leave, how they would feel about their employer offering a paid leave benefit, how they feel about coworkers using such a benefit, and how they would feel about the government offering a universal paid leave policy. Researchers at the Better Life Lab used a grounded theory methodology to develop a coding scheme for the focus group transcripts and analyzed the data using these codes to identify common themes.
All moderators for the focus groups were women and interacted with participants using their actual first names and portraits as their avatars, which may have limited the disclosures some men made about their feelings around caregiving and paid leave. Other than those participants who explicitly gave us permission to report on their stories as journalists after the focus groups concluded, all focus group participant names have been changed to pseudonyms chosen by the authors of this report. The promise of anonymity in all public records may have encouraged participants to be open and honest.
Both the quantitative and qualitative components of the study included modules specifically on paid family and medical leave, including questions about whether Americans have taken leave, how they paid for it, whether they anticipate needing leave in the future, and why they think men do or do not take leave.
The transcripts of these focus group discussions were coded using a grounded theory methodology. Coders began by reading the full transcripts of all five discussion boards. Coders then read through the transcripts a second time, noting themes. Themes were generated based on clear differences amongst participants on the questions; on common attitudes, beliefs, and behaviors; and on participants' stated desires, motivations, and barriers. The coders then compared their notes and established a common list of codes that covered all of the noted findings, collapsing overlapping categories together without losing differences or details, and including working definitions of each code and how it should be applied. Using the established list of approximately 60 codes across the categories of Behavior, Beliefs, and Attitudes, coders went back through the five transcripts, tagging utterances with the relevant codes. Coders ran two tests of coding accuracy, comparing their application of codes on the answers to two distinct discussion questions in two groups' transcripts. Coders agreed on the application of codes in over 90 percent of cases. The key trends and themes these codes revealed are detailed throughout the report, with select quotations from participants that best exemplify these findings.
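The percent-agreement check described above amounts to a simple calculation. A hedged sketch follows; the code labels and utterances are invented for illustration, as the report states only that coders agreed in over 90 percent of cases.

```python
# Hypothetical sketch of a percent-agreement test between two coders.
# The codes below are invented examples, not the actual coding scheme.

def percent_agreement(coder_a, coder_b):
    """Share of utterances to which both coders applied the same code."""
    assert len(coder_a) == len(coder_b)
    matches = sum(1 for a, b in zip(coder_a, coder_b) if a == b)
    return matches / len(coder_a)

# Codes applied independently by two coders to the same ten utterances
coder_a = ["Belief", "Attitude", "Behavior", "Belief", "Attitude",
           "Behavior", "Belief", "Belief", "Attitude", "Behavior"]
coder_b = ["Belief", "Attitude", "Behavior", "Belief", "Behavior",
           "Behavior", "Belief", "Belief", "Attitude", "Behavior"]

print(percent_agreement(coder_a, coder_b))  # 0.9, i.e., 9 of 10 codes match
```

Simple percent agreement does not correct for chance agreement, so measures such as Cohen's kappa are sometimes preferred; the report describes only percent agreement.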
Survey Methodology
The study included a nationally representative online and phone survey of 2,966 adults residing in the United States. The survey was fielded between April 25 and May 16, 2019, with an average interview length of 14 minutes and an overall margin of error of +/- 2.75%. The survey was conducted in English and Spanish by NORC at the University of Chicago on its AmeriSpeak platform for New America. Funded and operated by NORC at the University of Chicago, AmeriSpeak® is a probability-based panel designed to be representative of the U.S. household population. Randomly selected U.S. households are sampled with a known, non-zero probability of selection from the NORC National Sample Frame, and then contacted by U.S. mail, email, telephone, and field interviewers (face to face).
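The reported margin of error can be roughly reconciled with the sample size. The sketch below computes the simple-random-sampling margin of error for n=2,966 and infers the design effect implied by the reported +/- 2.75%; that design effect is an inference from weighting, not a figure stated by NORC.

```python
import math

# Hedged sketch: 95% margin of error for a proportion, optionally
# inflated by a design effect. The design effect below is inferred
# from the reported +/-2.75%, not stated in the methodology.

def moe(n, deff=1.0, p=0.5, z=1.96):
    """Margin of error for a proportion at the 95% confidence level."""
    return z * math.sqrt(deff * p * (1 - p) / n)

n = 2966
srs_moe = moe(n)                        # simple-random-sampling MOE, about 1.8%
implied_deff = (0.0275 / srs_moe) ** 2  # roughly 2.3, consistent with weighting
print(round(srs_moe * 100, 1), round(implied_deff, 1))
```

The gap between the 1.8% simple-random figure and the reported 2.75% is expected: post-stratification and raking introduce unequal weights, which inflate variance.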
The survey includes an oversample of men ages 18 and older, as well as two additional non-probability oversamples: fathers of children ages zero to eight and men who currently work in caregiving professions. NORC partnered with Dynata for the fathers-of-zero-to-eight-year-olds and professional male caregiver samples. The oversamples of men and fathers are included in this analysis. The professional caregiver oversample is separate, cannot be weighted back to the general population sample, and is not included in this analysis, nor is it included in the n=2,966. This research was conducted to support a better understanding of the perceived caregiving responsibilities of men and women, with a focus on the parenting and caregiving roles of men.
Panelists were offered the cash equivalent of $3. Toward the end of the field period, the incentive was increased to the cash equivalent of $7. New America and NORC collaborated on the writing of the survey instrument.
Sampling
A general population of U.S. adults age 18 years and older was selected from NORC’s AmeriSpeak Panel for this study. Additionally, male respondents from the Dynata panel were screened for parental status (fathers of zero to eight year-olds) and professional occupation (professional male caregivers).
The sample for a specific study is selected from the AmeriSpeak Panel using sampling strata based on age, race, Hispanic ethnicity, education, and gender (48 sampling strata in total). The size of the selected sample per sampling stratum is determined by the population distribution for each stratum. In addition, sample selection takes into account expected differential survey completion rates by demographic groups so that the set of panel members with a completed interview for a study is a representative sample of the target population. If the panel household has more than one active adult panel member, only one adult in the household is eligible for selection (random within-household sampling). Panelists selected for an AmeriSpeak study earlier in the business week are not eligible for sample selection until the following business week.
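The allocation logic described above can be illustrated with a toy example: each stratum's target is proportional to its population share, inflated by the inverse of its expected completion rate. The strata, shares, and rates below are invented; NORC's design uses 48 strata.

```python
# Invented toy example of proportional stratum allocation adjusted for
# expected differential completion rates. Illustrative values only.

population_share = {"stratum_A": 0.30, "stratum_B": 0.50, "stratum_C": 0.20}
expected_completion = {"stratum_A": 0.40, "stratum_B": 0.25, "stratum_C": 0.50}
target_completes = 1000  # completed interviews wanted across all strata

# Invitations per stratum: population share times target, inflated by the
# inverse of that stratum's expected completion rate
invitations = {
    s: round(target_completes * share / expected_completion[s])
    for s, share in population_share.items()
}
print(invitations)  # {'stratum_A': 750, 'stratum_B': 2000, 'stratum_C': 400}
```

Inviting more panelists in strata with lower expected completion is what keeps the set of completed interviews, rather than the set of invitations, representative of the target population.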
A small sample of English-speaking AmeriSpeak web-mode panelists was invited on April 12 for a pretest. In total, NORC collected 40 pretest interviews. The initial pretest data were reviewed by NORC and delivered to New America.
Changes to the CATI question text (i.e., question text or response options customized for phone interviews) were made before fielding the main survey, which targeted 3,200 interviews.
In total, NORC collected 3,297 interviews, 3,040 by web mode and 257 by phone mode.
Data Processing
NORC prepared a fully labeled data file of respondent survey data and demographic data for New America. For quality control, NORC applied the following cleaning rules to the survey data:
- Removed respondents who completed the survey in 2 minutes or less (10 cases)
- Removed suspicious grid item respondents (13 cases)
- Removed over-collection of doctors to match the contract requirement of the male professional caregiver sample composition of less than 10 percent doctors (99 cases randomly selected).
Statistical Weighting
NORC produced two weights for this survey data:
- Weight1: Post-stratification weights of General Population, aged 18+ (n=2,966)
- Weight2: Post-stratification weights of Fathers (n=1,158)
The third population group for this survey—men who work in caregiving professions (n=331)—did not receive weights. This sample should be analyzed unweighted.
Statistical weights for the study-eligible respondents were calculated starting from the panel base sampling weights.
Panel base sampling weights for all sampled housing units are computed as the inverse of probability of selection from the NORC National Frame (the sampling frame that is used to sample housing units for AmeriSpeak) or address-based sample. The sample design and recruitment protocol for the AmeriSpeak Panel involves subsampling of initial non-respondent housing units. These subsampled non-respondent housing units are selected for an in-person follow-up. The subsample of housing units that are selected for the nonresponse follow-up (NRFU) have their panel base sampling weights inflated by the inverse of the subsampling rate. The base sampling weights are further adjusted to account for unknown eligibility and nonresponse among eligible housing units. The household-level nonresponse adjusted weights are then post-stratified to external counts for the number of households obtained from the Current Population Survey. Then, these household-level post-stratified weights are assigned to each eligible adult in every recruited household. Furthermore, a person-level nonresponse adjustment accounts for nonresponding adults within a recruited household.
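The household-level steps described above can be sketched with hypothetical numbers: an inverse-probability base weight, inflation for the nonresponse follow-up (NRFU) subsample, and a post-stratification ratio to an external household count. Every value below is invented for illustration.

```python
# Hypothetical illustration of the household-level weighting chain.
# All probabilities and counts are assumed, not NORC's actual values.

p_selection = 0.0004        # assumed probability of selection from the frame
nrfu_subsample_rate = 0.25  # assumed NRFU subsampling rate

base_weight = 1 / p_selection                    # inverse probability of selection
nrfu_weight = base_weight / nrfu_subsample_rate  # inflated for NRFU cases

# Post-stratify so that weighted households match an external count (e.g., CPS)
external_households = 1_000_000
weighted_households = 800_000  # assumed sum of nonresponse-adjusted weights
household_weight = nrfu_weight * (external_households / weighted_households)
print(round(household_weight))  # 12500
```

Each step preserves the logic of the text: units sampled or retained with lower probability carry proportionally larger weights, and the final ratio aligns the weighted household total with the external benchmark.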
Finally, panel weights are raked to external population totals associated with age, sex, education, race/Hispanic ethnicity, housing tenure, telephone status, and Census Division. The external population totals are obtained from the Current Population Survey. The weights adjusted to the external population totals are the final panel weights.
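Raking (iterative proportional fitting) can be sketched in a few lines. The margins and respondent records below are invented; the actual panel weights are raked over more dimensions (age, sex, education, race/Hispanic ethnicity, housing tenure, telephone status, and Census Division), with totals drawn from the Current Population Survey.

```python
# Minimal raking sketch over two invented margins (sex and age group).

respondents = [
    {"sex": "M", "age": "18-44"},
    {"sex": "M", "age": "45+"},
    {"sex": "F", "age": "18-44"},
    {"sex": "F", "age": "45+"},
    {"sex": "F", "age": "45+"},
]
weights = [1.0] * len(respondents)

# Assumed external population totals for each margin
targets = {
    "sex": {"M": 50.0, "F": 50.0},
    "age": {"18-44": 55.0, "45+": 45.0},
}

def margin_sums(dim):
    """Current weighted total for each category of one margin."""
    sums = {}
    for r, w in zip(respondents, weights):
        sums[r[dim]] = sums.get(r[dim], 0.0) + w
    return sums

for _ in range(50):  # re-rake until both margins converge
    for dim, target in targets.items():
        sums = margin_sums(dim)
        for i, r in enumerate(respondents):
            weights[i] *= target[r[dim]] / sums[r[dim]]

print(margin_sums("sex"), margin_sums("age"))  # both margins match their targets
```

Adjusting one margin disturbs the others slightly, which is why the procedure cycles through the margins repeatedly until the weighted distributions stabilize.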
Study-specific base sampling weights are derived using a combination of the final panel weight and the probability of selection associated with the sampled panel member. Since not all sampled panel members respond to the survey interview, an adjustment is needed to account for survey non-respondents. This adjustment reduces potential nonresponse bias associated with sampled panel members who did not complete the survey interview for the study.
Thus, the nonresponse-adjusted survey weights for the study are adjusted via a raking ratio method to general population totals associated with the following socio-demographic characteristics: age (four levels), Hispanic ethnicity, and education, each controlled by gender and father status, as well as age (seven levels), race/Hispanic ethnicity, and Census Division, each controlled by gender. The same nonresponse-adjusted survey weights are also adjusted via a raking ratio method to father totals associated with age (four levels), Hispanic ethnicity, and education.
For the weights of fathers with children ages zero to eight, calibration techniques were used to adjust the opt-in father sample from Dynata. The final opt-in respondents are assigned a base weight of one, then are adjusted via a raking ratio method to population totals associated with age (four levels), Hispanic ethnicity, and education. The combined AmeriSpeak and Dynata opt-in panel sample weight is obtained by determining an optimal composition factor for combining the final raked AmeriSpeak and opt-in panel samples; the optimal composition factor for the combined weights is computed based on a criterion of minimizing the mean squared error associated with key survey estimates. The purpose of calibration is to bring the weighted distributions of the nonprobability sample in line with the population distributions for characteristics correlated with the survey variables. Such calibration adjustments help to reduce potential bias, yielding more accurate population estimates. Finally, the combined weights for fathers with children ages zero to eight, together with all other fathers, produce the final fathers' weight. The weights, adjusted to the external population totals, are the final study weights.
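The combination step can be sketched with a single composition factor. NORC chose its factor by minimizing mean squared error for key estimates; as a stand-in, the sketch below uses the common Kish effective-sample-size heuristic, so it is not NORC's estimator. All weights below are invented.

```python
# Simplified sketch of combining a pre-raked probability sample (AmeriSpeak)
# and a pre-raked opt-in sample (Dynata) with one composition factor.
# Heuristic and weights are assumptions, not NORC's actual method or data.

def effective_n(weights):
    """Kish effective sample size: (sum of weights)^2 / sum of squared weights."""
    return sum(weights) ** 2 / sum(w * w for w in weights)

amerispeak_w = [1.2, 0.8, 1.5, 0.9, 1.1]  # assumed raked AmeriSpeak father weights
dynata_w = [1.0, 1.0, 1.3, 0.7]           # assumed raked opt-in father weights

# Composition factor: each sample's share of the total effective sample size
lam = effective_n(amerispeak_w) / (effective_n(amerispeak_w) + effective_n(dynata_w))

combined = [lam * w for w in amerispeak_w] + [(1 - lam) * w for w in dynata_w]
print(round(lam, 2))  # a bit over one half, since AmeriSpeak contributes more
```

Whatever criterion sets the factor, the design choice is the same: the more informative sample (larger effective size, lower expected error) receives the larger share of influence in the combined estimates.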
Raking and re-raking are done during the weighting process so that the weighted demographic distribution of the survey completes resembles the demographic distribution of the target population. The assumption is that the key survey items are related to the demographics; therefore, by aligning the survey respondent demographics with the target population, the key survey items should also come into closer alignment with the target population.