
Libraries and Learning

Methodology

The Educational Ecosystem 2015 Survey, sponsored by Pew Research Center, obtained telephone interviews with a nationally representative sample of 2,752 adults living in the United States. Interviews were conducted via landline (n=963) and cellphone (n=1,789; including 1,059 without a landline phone). The survey was conducted by Princeton Survey Research Associates International (PSRAI). The interviews were administered in English and Spanish by Princeton Data Source, LLC from Oct. 13 to Nov. 15, 2015. Statistical results are weighted to correct known demographic discrepancies. The margin of sampling error for the complete set of weighted data is ±2.1 percentage points. For results based on internet users1 (n=2,428), the margin of sampling error is ±2.3 percentage points.

Details on the design, execution and analysis of the survey are discussed below.

DESIGN AND DATA COLLECTION PROCEDURES

Sample Design

A combination of landline and cellular random-digit-dial (RDD) samples was used to represent all adults in the United States who have access to either a landline or cellular telephone. Both samples were provided by Survey Sampling International, LLC (SSI) according to PSRAI specifications.

Numbers for the landline sample were drawn with equal probabilities from active blocks (area code + exchange + two-digit block number) that contained one or more residential directory listings. The cellular sample was not list-assisted, but was drawn through a systematic sampling from dedicated wireless 100-blocks and shared service 100-blocks with no directory-listed landline numbers.
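As a rough sketch of this selection logic, the snippet below draws ten-digit numbers from hypothetical 100-blocks. The block lists and sample sizes are placeholders for illustration only (the actual frames were supplied by SSI); this is not the sampling code used for this survey.

import random

# Hypothetical 100-blocks: area code + exchange + two-digit block number.
# Real blocks come from the SSI frames; these are placeholders.
landline_blocks = ["20255512", "31255500", "41555991"]  # blocks with >= 1 residential listing
wireless_blocks = ["20256780", "31256791", "41556802"]  # dedicated wireless, no listed landlines

def draw_landline_numbers(blocks, k, rng=random):
    """Draw k numbers with equal probability across eligible landline 100-blocks."""
    return [rng.choice(blocks) + f"{rng.randrange(100):02d}" for _ in range(k)]

def draw_cell_numbers(blocks, k, rng=random):
    """Systematic sample of wireless 100-blocks, then a random two-digit suffix."""
    interval = max(len(blocks) // k, 1)
    start = rng.randrange(interval)
    sampled = blocks[start::interval][:k]
    return [b + f"{rng.randrange(100):02d}" for b in sampled]

print(draw_landline_numbers(landline_blocks, 5))
print(draw_cell_numbers(wireless_blocks, 2))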

Contact Procedures

Interviews were conducted from Oct. 13 to Nov. 15, 2015. As many as seven attempts were made to contact every sampled telephone number. Sample was released for interviewing in replicates, which are representative subsamples of the larger sample. Using replicates to control the release of sample ensures that complete call procedures are followed for the entire sample. Calls were staggered over times of day and days of the week to maximize the chance of making contact with potential respondents. Interviewing was spread as evenly as possible across the days in field. When necessary, each telephone number was called at least one time during the day in an attempt to complete an interview.

For the landline sample, interviewers asked to speak with the youngest adult male or female currently at home based on a random rotation. If no male/female was available, interviewers asked to speak with the youngest adult of the other gender. This systematic respondent selection technique has been shown to produce samples that closely mirror the population in terms of age and gender when combined with cell interviewing.

For the cellular sample, interviews were conducted with the person who answered the phone. Interviewers verified that the person was an adult and in a safe place before administering the survey. The cellular respondents were offered a post-paid cash reimbursement for their participation.

WEIGHTING AND ANALYSIS

Weighting is generally used in survey analysis to compensate for sample designs and patterns of non-response that might bias results. The sample was weighted to match national adult general population parameters. A two-stage weighting procedure was used to weight this dual-frame sample.

The first stage of weighting corrected for different probabilities of selection associated with the number of adults in each household and each respondent's telephone usage patterns.2 This weighting also adjusts for the overlapping landline and cell sample frames and the relative sizes of each frame and each sample.

The first-stage weight for the ith case can be expressed as:

\[
WT_i = \left[\, \frac{S_{LL}}{F_{LL}} \times AD_i \times LL_i \;+\; \frac{S_{CP}}{F_{CP}} \times CP_i \,\right]^{-1}
\]

Where:
S_LL = the size of the landline sample
F_LL = the size of the landline sample frame
S_CP = the size of the cell sample
F_CP = the size of the cell sample frame
AD_i = the number of adults in household i
LL_i = 1 if respondent i has a landline phone, otherwise LL_i = 0
CP_i = 1 if respondent i has a cellphone, otherwise CP_i = 0
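As a concrete illustration, the first-stage weight can be computed directly from these quantities. The sketch below assumes hypothetical frame and sample sizes (the S_LL, F_LL, S_CP and F_CP values shown are placeholders, not the actual SSI frame counts):

def first_stage_weight(adults, has_landline, has_cell, s_ll, f_ll, s_cp, f_cp):
    """Reciprocal of the respondent's combined selection probability
    across the overlapping landline and cell RDD frames."""
    p_landline = (s_ll / f_ll) * adults * (1 if has_landline else 0)
    p_cell = (s_cp / f_cp) * (1 if has_cell else 0)
    return 1.0 / (p_landline + p_cell)

# Hypothetical sample and frame sizes, for illustration only.
S_LL, F_LL = 50_000, 40_000_000   # landline sample / frame
S_CP, F_CP = 60_000, 50_000_000   # cell sample / frame

# A dual-user respondent in a two-adult household:
print(first_stage_weight(adults=2, has_landline=True, has_cell=True,
                         s_ll=S_LL, f_ll=F_LL, s_cp=S_CP, f_cp=F_CP))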

The second stage of weighting balances sample demographics to population parameters. The sample is balanced to match national population parameters for sex, age, education, race, Hispanic origin, region (U.S. Census definitions), population density and telephone usage. The Hispanic origin was split out based on nativity: U.S. born and non-U.S. born. The white, non-Hispanic subgroup was also balanced on age, education and region.

The basic weighting parameters came from the U.S. Census Bureau’s 2013 American Community Survey (ACS) data.3 The population density parameter was derived from Census 2010 data. The telephone usage parameter came from an analysis of the July-December 2014 National Health Interview Survey.4

Weighting was accomplished using Sample Balancing, a special iterative sample weighting program that simultaneously balances the distributions of all variables using a statistical technique called the Deming Algorithm. Weights were trimmed to prevent individual interviews from having too much influence on the final results. The use of these weights in statistical analysis ensures that the demographic characteristics of the sample closely approximate the demographic characteristics of the national population. Table 1 compares weighted and unweighted sample distributions to population parameters.

Table 1: Sample Demographics
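The iterative balancing described above can be sketched as a simple raking (iterative proportional fitting) loop. The code below is a minimal illustration of the Deming Algorithm idea, not the Sample Balancing program itself; the two demographic margins and the trimming bounds are hypothetical:

import numpy as np

def rake(weights, categories, targets, iterations=50):
    """Iteratively adjust weights so each raking variable's weighted
    distribution matches its population target (iterative proportional fitting)."""
    w = weights.astype(float).copy()
    for _ in range(iterations):
        for var, target in targets.items():
            codes = categories[var]
            total = w.sum()
            factors = np.ones_like(w)
            for level, share in target.items():
                mask = codes == level
                current = w[mask].sum() / total
                if current > 0:
                    factors[mask] = share / current
            w = w * factors
    return w

# Hypothetical data: six respondents and two raking variables.
base_weights = np.ones(6)
categories = {
    "sex": np.array(["M", "M", "F", "F", "F", "M"]),
    "region": np.array(["NE", "S", "S", "W", "MW", "W"]),
}
targets = {
    "sex": {"M": 0.48, "F": 0.52},
    "region": {"NE": 0.18, "MW": 0.21, "S": 0.38, "W": 0.23},
}

w = rake(base_weights, categories, targets)
w = np.clip(w, 0.3, 3.0)   # trim extreme weights (illustrative bounds)
print(np.round(w, 3))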

Effects of Sample Design on Statistical Inference

Post-data collection statistical adjustments require analysis procedures that reflect departures from simple random sampling. PSRAI calculates the effects of these design features so that an appropriate adjustment can be incorporated into tests of statistical significance when using these data. The so-called “design effect” or deff represents the loss in statistical efficiency that results from unequal weights, which reflect both the sample design and systematic non-response. The total sample design effect for this survey is 1.28.

PSRAI calculates the composite design effect for a sample of size n, with each case having a weight w_i, as:

\[
\mathrm{deff} = \frac{n \sum_{i=1}^{n} w_i^{2}}{\left( \sum_{i=1}^{n} w_i \right)^{2}}
\]
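A direct translation of this formula, applied to a small hypothetical weight vector (the survey’s reported value for the full sample is 1.28):

import numpy as np

def design_effect(weights):
    """Approximate design effect: n * sum(w_i^2) / (sum(w_i))^2."""
    w = np.asarray(weights, dtype=float)
    return w.size * np.sum(w ** 2) / np.sum(w) ** 2

print(design_effect([0.6, 0.9, 1.0, 1.1, 1.4, 2.0]))   # hypothetical weights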

In a wide range of situations, the adjusted standard error of a statistic should be calculated by multiplying the usual formula by the square root of the design effect (√deff). Thus, the formula for computing the 95% confidence interval around a percentage is:

\[
\hat{p} \;\pm\; 1.96 \times \sqrt{\mathrm{deff}} \times \sqrt{\frac{\hat{p}\,(1-\hat{p})}{n}}
\]

where p̂ is the sample estimate and n is the unweighted number of sample cases in the group being considered.

The survey’s margin of error is the largest 95% confidence interval for any estimated proportion based on the total sample – the one around 50%. For example, the margin of error for the entire sample is ±2.1 percentage points. This means that in 95 out of every 100 samples drawn using the same methodology, estimated proportions based on the entire sample will be no more than 2.1 percentage points away from their true values in the population. It is important to remember that sampling fluctuations are only one possible source of error in a survey estimate. Other sources, such as respondent selection bias, questionnaire wording and reporting inaccuracy, may contribute additional error of greater or lesser magnitude.
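As a check on the figures reported above (deff = 1.28, n = 2,752), the half-width of the 95% confidence interval at p̂ = 0.5 reproduces the reported margin of error:

\[
1.96 \times \sqrt{1.28} \times \sqrt{\frac{0.5 \times 0.5}{2{,}752}} \;\approx\; 1.96 \times 1.131 \times 0.00953 \;\approx\; 0.021
\]

that is, ±2.1 percentage points.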

RESPONSE RATE

Table 2 reports the disposition of all sampled telephone numbers ever dialed from the original telephone number samples. The response rate estimates the fraction of all eligible sample cases that were ultimately interviewed. Response rates are computed according to American Association for Public Opinion Research standards.5 The response rate for both the landline and cellular samples was 9%.
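The report does not specify which AAPOR rate is shown; as an illustration, AAPOR Response Rate 3 (RR3), a definition commonly used for RDD telephone surveys, has the general form:

\[
RR3 = \frac{I}{(I + P) + (R + NC + O) + e\,(UH + UO)}
\]

where I is the number of completed interviews, P partial interviews, R refusals and break-offs, NC non-contacts, O other eligible non-interviews, UH and UO cases of unknown household or other unknown eligibility, and e the estimated proportion of unknown-eligibility cases that are actually eligible.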

Table 2. Sample Disposition
  1. Internet user definition includes those who use the internet or email at least occasionally or access the internet on a cellphone, tablet or other mobile handheld device at least occasionally.
  2. i.e., whether respondents have only a landline telephone, only a cellphone or both kinds of telephone.
  3. ACS analysis was based on all adults excluding those living in institutional group quarters.
  4. Blumberg SJ, Luke JV. Wireless substitution: Early release of estimates from the National Health Interview Survey, July-December, 2014. National Center for Health Statistics. Jun 2015.
  5. The American Association for Public Opinion Research. 2011. Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys. 7th edition. AAPOR.