Jewish Americans in 2020: Answers to frequently asked questions

Here are answers to some frequently asked questions about Pew Research Center’s report “Jewish Americans in 2020.”

How did you find the Jews in your survey? Did you use lists of synagogue members?

No, we did not use any membership lists from synagogues or other Jewish organizations. Nor did we select people because they have common Jewish names. Although the survey focuses on Jewish adults, we began by taking a large sample of the general public. We randomly selected residential addresses from a U.S. Postal Service computerized list and mailed letters to people across the country, asking them to take a short survey. That survey was what we call a “screener.” It contained about 25 questions, including a few about religion and other elements of Jewish identity.

If respondents indicated they consider themselves Jewish in any of several possible ways (including culturally, ethnically or because of their family background), they were asked to take a longer survey and were offered a modest incentive – usually $10, sometimes $20 for those who took the survey online, and $50 for those who completed the survey by mail – to complete the extended questionnaire. To ensure that people could participate even if they did not have internet access, they could choose to take the screener either online (at a secure website) or on paper (by mailing it back to us in a postage-paid envelope). The same was true for the extended questionnaire.

How did you confirm that respondents are actually Jewish? Who do you count as Jewish?

Generally, we relied on people’s self-identification, as we do in all our surveys. First, we asked the respondents, “What is your present religion, if any?” Then, we included two additional questions to determine whether they were eligible to take the extended survey. One was: “Aside from religion, do you consider yourself to be Jewish in any way (for example, ethnically, culturally or because of your family’s background)?” The other asked whether the respondents were raised Jewish or had a parent who was Jewish. Anyone who said they are Jewish in the first question, or who answered “yes” to either of the follow-up questions, was eligible for the longer survey.

However, not everyone who took the extended survey was automatically counted as Jewish in our report on the survey’s findings. For analytical purposes, we applied a narrower definition of Jewishness that includes two groups: Jews by religion, i.e., people who say their religion is Jewish and do not profess any other religion (this excludes people who say they are both Jewish and Christian, for example); and Jews of no religion, i.e., people who describe themselves, religiously, as atheist, agnostic or “nothing in particular” but who had at least one Jewish parent or were raised Jewish, and who still consider themselves Jewish in some way (such as ethnically, culturally or because of their family background). For more information on who counts as Jewish in the survey, see this sidebar.

What did you do to make sure that Orthodox Jews are adequately represented?

Orthodox Jews are a challenging group to survey for several reasons, including that they make up a relatively small slice of all U.S. Jews and that they tend to live in geographic clusters, mostly in and around major cities. Working with experts on the Jewish population and leading survey methodologists, we took several steps to ensure we covered this important subgroup.

First, we included a paper option – allowing respondents to fill out the survey by hand, on a printed questionnaire, and mail it back to us in a pre-paid envelope – in part because internet use may be lower among Orthodox Jews than among U.S. adults overall. Second, we stratified our sample of the U.S. public; basically, we divided the country into several geographic strata, or layers, allowing us to “oversample” areas where many Jews reside. We mailed more letters to these areas than we would have otherwise – though, crucially, we also accounted for this in the weighting of the survey results, ensuring that respondents in each geographic area are represented in proportion to their true share of the U.S. population. Third, we paid particular attention to three counties in New York and New Jersey (Kings and Rockland counties in New York and Ocean County in New Jersey) that have large Orthodox communities, oversampling within these counties and handling them specially in the weighting. And, finally, the weighting also modeled Orthodox Jews independently so their unique demographic profile would be better reflected in the data. See the Methodology for more details.
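The logic of oversampling and reweighting described above can be sketched in a few lines. This is an illustrative example only, with invented strata and sampling rates, not Pew's actual design: each respondent's base weight is the inverse of their probability of selection, so people from oversampled areas count for less in the final estimates, restoring each area to its true population proportion.

```python
# Hypothetical strata and sampling rates (for illustration only --
# these are NOT the actual strata or rates used in the Pew survey).
strata = {
    # name: (population share, sampling rate applied)
    "high_density": (0.10, 0.004),    # oversampled area
    "medium_density": (0.30, 0.001),
    "low_density": (0.60, 0.0005),
}

def design_weight(stratum: str) -> float:
    """Base weight = 1 / probability of selection in that stratum."""
    _, rate = strata[stratum]
    return 1.0 / rate

# A respondent from an oversampled area gets a smaller weight than one
# from a lightly sampled area, offsetting their overrepresentation.
print(design_weight("high_density"))  # 250.0
print(design_weight("low_density"))   # 2000.0
```

In practice the base weights are then further adjusted (raked) to known population benchmarks, which is where the special handling of the Orthodox population enters.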

Since 2013, has the Jewish population been growing or shrinking?

The Jewish population appears to be holding steady as a share of the total U.S. adult population. In 2013, we estimated that 2.2% of U.S. adults were Jewish. Based on the new survey, an estimated 2.4% are Jewish. Given the complexity of the two surveys and the methodological differences between them, those percentages are quite similar. We think it's safer to conclude that the adult Jewish population has roughly kept pace with the growth of the overall U.S. population than to focus on small differences between the 2013 and 2020 Jewish “incidence” rates (the percentage of Jews found in each survey’s random sample of the general public).

In absolute numbers – as opposed to percentages – there probably are more Jews in the United States now than there were in 2013. The 2020 estimate for Jews of all ages in the U.S. is 7.5 million, compared with 6.7 million in 2013. But we’re not claiming that the U.S. Jewish population has risen by exactly 800,000 in seven years. As we say in our report, the precision of these population estimates “should not be exaggerated.” Even though they are derived from samples of the U.S. public that are very large compared with most surveys (more than 68,000 interviews in the 2020 study), they are still subject to sampling error and other practical considerations that produce uncertainty. What’s more, the 2013 study was conducted via random-digit-dial phone interviewing, while the 2020 study was conducted via mail and web. This methodological change complicates the comparisons, and it’s always possible, as with any two surveys, that one estimate might be a little high and the other a little low. Yet another consideration is that there are many possible definitions of Jewishness, and different definitions can produce very different population sizes. In sum, we need to be humble about our capacity to precisely quantify changes in the Jewish population based on two surveys that used different sampling methods.

Why didn’t you conduct the survey over the phone like you did in 2013? Why can’t I compare the 2013 and 2020 survey results?

For several decades, many national polling organizations, including Pew Research Center, administered surveys mostly by telephone. However, in the last several years more and more organizations have moved most of their U.S. surveys online, as has the Center. The main reason for the shift is that fewer people are participating in telephone polling, causing response rates to plummet.

For this same reason, we decided to conduct the new survey of U.S. Jews by mail and on the web instead of over the phone. This proved to be a beneficial decision, as the survey obtained an overall response rate of 16.6%. By comparison, the average response rate for national telephone surveys had dropped to under 6%.

However, one consequence of changing from a telephone survey (in which a live interviewer asks the questions) to a survey conducted online or via mail (in which respondents complete a written questionnaire by themselves) is the potential for what pollsters call mode effects. In a nutshell, a mode effect occurs when people answer questions differently in a self-administered web or paper survey than they would if they were answering them on a telephone with a live interviewer. This can happen, for example, with sensitive topics (e.g., drug use) because respondents may be reluctant to give socially undesirable answers when speaking to another person. But mode effects can also happen for innocuous reasons, simply because people process questions that they can see and read themselves differently from questions that they can only hear read to them.

To determine which questions were subject to mode effects, the Center conducted an experiment alongside (but separate from) the Jewish survey. We found that questions about concrete experiences generally did not exhibit a discernible mode effect. But the experiment found several types of questions that are not comparable. These include questions that are sensitive or difficult, which have a higher refusal rate on the phone mode; questions with socially desirable answers; and questions subject to “recency” effects because of the order in which response options must be offered on the phone. See Appendix B of the report for more details.

Why does the new Pew Research Center report include results for some groups (e.g., Orthodox Jews, Conservative Jews, Reform Jews, and those with no denominational affiliation) but not others (e.g., Haredi Jews and Modern Orthodox Jews)? How do you decide which of these kinds of crosstabulations to include in the report?

Our report examines the attitudes, experiences and characteristics of the U.S. Jewish population as a whole as well as a variety of key subgroups of the Jewish population. For example, many of our analyses compare the views of Jews by religion to those of Jews of no religion. We also often examine responses offered by Jewish respondents from various denominational streams of Judaism. We look at how the views of younger Jews compare with those of older Jews, and how men's views compare with women's. These are just a few examples of how we crosstabulate findings throughout our report.

Of course, our report does not include every conceivable “crosstab” that might be of interest to readers. This would be impossible, because the number is practically limitless. But there are some fairly obvious crosstabulations that we would have liked to include in our report but could not because we had insufficient sample size to do so.

For example, we would have liked to compare the characteristics of Haredi Jews with those of Modern Orthodox Jews. Similarly, the survey’s samples of Jewish respondents who are Black, Hispanic, Asian, or who identify with other races or multiracial categories are too small to analyze separately, either as individual subgroups or even combined into a single “non-White” category. And we would have liked to compare the views of younger Jews to those of older Jews within various denominational streams. But we do not report results for these kinds of crosstabs because, in our view, we have too few interviews with people in these categories to support reliable analysis.

How does Pew Research Center determine when there is a sufficient number of people in any given category to be able to report on their characteristics? Our general rule of thumb is that the subgroup in question must have an effective sample size of at least 100 people. That is to say, even after taking into account the loss of precision that occurs in all surveys as a result of the sampling design (e.g., stratifying areas of the country into high-density and low-density Jewish areas, and then oversampling the high-density areas) and the weighting of the data (to ensure that demographic groups are represented in their proper proportion), estimates based on the subgroup in question must be at least as precise as estimates based on a simple random sample of 100 people.

Another way of saying the same thing is that Pew Research Center generally does not report results for subgroups of the population if the margin of error for estimates based on members of the group in question exceeds +/- 10 percentage points. There are a very limited number of exceptions to this general rule of thumb that are included in the report based on the carefully considered judgment of the researchers who worked on the project. These exceptions are reported only if the substantive point being illustrated by the analysis holds true even given the relatively large margin of error of the estimate in question. Such exceptions are few and far between, however, and all of the standard crosstabulations that routinely appear in the charts and tables in the report are based on groups with effective sample sizes of at least 100.
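The link between these two rules of thumb can be made concrete. A common approximation of effective sample size is Kish's formula, n_eff = (Σw)² / Σw², and the worst-case 95% margin of error for a proportion is 1.96·√(0.25/n_eff). The sketch below is illustrative; the Kish approximation is a standard simplification and not necessarily the exact calculation the Center uses:

```python
import math

def effective_sample_size(weights):
    """Kish approximation: n_eff = (sum of weights)^2 / (sum of squared weights)."""
    s = sum(weights)
    return s * s / sum(w * w for w in weights)

def margin_of_error(n_eff, z=1.96):
    """Worst-case (p = 0.5) 95% margin of error, in percentage points."""
    return z * math.sqrt(0.25 / n_eff) * 100

# With equal weights, the effective size equals the raw count:
# 100 equal-weight interviews give a margin of error just under 10 points.
n_eff_equal = effective_sample_size([1.0] * 100)
print(round(margin_of_error(n_eff_equal), 1))  # 9.8

# Unequal weights shrink the effective size: 150 raw interviews
# with varied weights can behave like far fewer.
n_eff_unequal = effective_sample_size([1.0] * 75 + [3.0] * 75)
print(round(n_eff_unequal))  # 120
```

This is why a subgroup can have well over 100 raw interviews yet still fall short of the effective-sample-size threshold once the design and weighting are taken into account.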

Perceptive readers may notice that there are some groups that were analyzed in the 2013 study that had margins of error as large as +/- 12.9 percentage points. The Center has tightened its standards since 2013 and no longer reports results (in this study or in other studies) for groups with margins of error that large. As a practical matter, this means there were subgroups of the Jewish population that were reported on and discussed in the 2013 study that are not discussed in the 2020 study.

How many “Jews of color” are there?

The phrase “Jews of color” has begun to appear frequently in Jewish publications in recent years, but there does not seem to be a precise, widely accepted definition of who is included and who isn’t. Consider people from the Middle East, for example. The U.S. census historically has classified Americans of Middle Eastern origin as White, but some don’t view themselves that way. Or take Jews of Ashkenazi (European) heritage whose grandparents fled from Europe to Latin America in the 1930s and who recently moved to the United States. Should they count as Hispanic, Latino or people of color? Some scholars and activists say “yes.” Others say “no.”

The 2020 survey did not ask Jewish Americans whether they consider themselves people of color, so we don’t know how many self-identify with that phrase. And, in the absence of a clear definition, we’re reluctant to assign people to a category. As a result, Pew Research Center does not have an estimate of the number of “Jews of color” in the United States. However, the survey did include several questions about race, ethnicity, Jewish heritage categories (Ashkenazi, Sephardic and Mizrahi) and country of birth. These questions can be examined separately or in combination to explore various kinds of diversity in the Jewish population.

For example, 4% of Jewish adults identify as Hispanic, 1% as Black, and less than 1% as Asian. In all, 92% identify as White and non-Hispanic, while a total of 8% identify with other racial or ethnic categories, including multiracial. Separately, two-thirds of U.S. Jews identify as Ashkenazi (following the Jewish customs of Europe), 3% as Sephardic (following the Jewish customs of medieval Spain), 1% as Mizrahi (following the Jewish customs of the Middle East), and 6% as some combination of those or other categories; the remainder say that they don’t know which heritage category applies to them or that none of them do.

If one were to take the 8% of Jewish adults who identify as Hispanic, Black, Asian, other (non-White) race or multiracial and subtract from them everyone who also says their Jewish heritage is solely Ashkenazi (European), one would be left with 5% of U.S. Jews. Alternatively, if one were to add to the same 8% all those who consider themselves Sephardic or Mizrahi, the total would rise to 14% of all U.S. Jews. And if one were to include a measure of geographic diversity by adding – on top of the previous 14% figure – all Jewish adults born outside the U.S., Canada, Europe or the former Soviet Union, or who have a parent born anywhere besides the U.S., Canada, Europe or the former Soviet Union, then 17% of U.S. Jews would fall into all those categories, combined. These are just a few examples of the many ways these overlapping measures can be examined.
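The combinations described above are set unions over overlapping flags, which is why the totals grow less than additively. A toy sketch, using invented respondents rather than Pew's actual data, shows the mechanics:

```python
def union_share(respondents, *keys):
    """Share of respondents flagged on at least one of the given dimensions."""
    hits = sum(1 for r in respondents if any(r[k] for k in keys))
    return hits / len(respondents)

# Hypothetical records (not Pew microdata), with flags for:
# (non-White or Hispanic, Sephardic/Mizrahi heritage, foreign background)
people = [
    (1, 0, 0), (1, 1, 0), (0, 1, 0), (0, 0, 1), (0, 0, 0),
    (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 1, 1), (0, 0, 0),
]

# Because the categories overlap, each added dimension raises the
# combined share by less than its standalone share.
print(union_share(people, 0))        # 0.2  race/ethnicity alone
print(union_share(people, 0, 1))     # 0.4  adding Sephardic/Mizrahi
print(union_share(people, 0, 1, 2))  # 0.5  adding foreign background
```

The same union logic, applied to the survey's actual flags, is what produces the 8%, 14% and 17% figures in the text.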

Why didn’t you offer a nonbinary option when you asked people about their gender?

Until the summer of 2020, many surveys that Pew Research Center conducted in the U.S. included a standard demographic question asking: “Are you male or female?” This question was worded to be consistent with the way the U.S. Census Bureau asks about gender, which is necessary for statistical reasons to ensure that surveys are representative of U.S. adults overall.

However, in June and July of 2020 – after data collection for the new survey of Jewish Americans had been completed – the Center conducted an experiment to see whether we could devise a new version of this question that would reflect changing norms around gender identity while still aligning with the U.S. Census Bureau’s question. Ultimately, we determined that a new version can ask: “Do you describe yourself as a man, a woman, or in some other way?” This question is now standard in our surveys conducted in the United States, including in our American Trends Panel. Unfortunately, the change came after the 2020 Jewish survey had been fielded.

What impact did the coronavirus pandemic have on the survey? Did it cause any delays?

Yes, 2020 was a challenging time for everyone, including those of us at Pew Research Center, and it affected some aspects of our work, including slowing down our analysis of the results of the Jewish survey. Fortunately, the fielding of the survey was mostly completed before the pandemic struck the United States in full force. We had received 86% of the responses to the screening survey, and 74% of the extended surveys, before March 15, 2020.

But the coronavirus outbreak shut down much of the economy while we still were mailing out paper versions of the survey, and for a short time we worried that mail deliveries might be curtailed. In the end, the U.S. Postal Service kept operating, and we continued to receive completed surveys in the mail through April and May of 2020.

We concluded the fieldwork in June, about a month behind schedule, and then spent several months cleaning the data and working through our rigorous quality-control processes. Since the project also included a mode experiment, those processes were repeated separately for the mode study data as well.

In the meantime, our offices closed, our staff began working from their homes, and our broader research agenda shifted to include a series of surveys about the impact of the pandemic on American religious life. It became apparent in the fall of 2020 that we would not finish the Jewish survey project before the end of the year, as we originally had hoped. But our panel of expert advisers encouraged us to take the time needed to carry out a sophisticated modeling approach to the weighting of the data, to analyze the data thoroughly from multiple perspectives, and to write a comprehensive report on the survey’s findings.

In the end, the publication of the report, in May 2021, is roughly six months behind the schedule we had set for ourselves two years earlier, in early 2019. Some of the results – particularly on political questions – are not as timely as we had hoped they would be, and the data on the economic well-being of U.S. Jews, as well as rates of attendance at religious services, largely reflects conditions prior to the onset of the pandemic.
