Sunday, July 27, 2008

The Census Bureau's Random Groups Standard Error Estimator

Rather than a simple random sample, the Census Bureau relied upon a systematic sampling method to select sample units (where a sample unit was defined as either a household or a person living in group quarters) for the 1990 Census sample. This made things easier for census takers--instead of trying to develop a random sample of households in their area, they would administer the Long Form questionnaire to every sixth household (or every eighth household, or every other household, depending on what part of the country they lived in). This procedure should (hopefully) have resulted in a sample that was more reflective of the population as a whole than a more haphazard scheme would have produced.
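
If you like to see the mechanics spelled out, a minimal sketch of this kind of take-every-kth selection is below. The function name, the random starting point, and the household IDs are my own invention for illustration; this is not code from the Census Bureau.

    import random

    def systematic_sample(units, k):
        """Take every k-th unit from an ordered list, starting at a random
        position within the first interval (illustrative sketch only)."""
        start = random.randrange(k)   # random start somewhere in the first k units
        return units[start::k]        # then every k-th unit after that

    # Hypothetical example: a one-in-six sample of 30 household IDs
    households = list(range(1, 31))
    print(systematic_sample(households, k=6))   # e.g. [3, 9, 15, 21, 27]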

The formulas for the standard error a few posts back were for a simple random sample. One thing that survey statisticians like to do is come up with different formulas when the sampling method is something other than simple random sampling. Generally, your target is a smaller standard error, which makes your confidence intervals narrower and your survey estimates appear more precise.

For estimation for the 1990 Census, the US was divided into just over 60,000 distinct weighting areas, or areas for which sample weights were derived. A "sample weight" is the number of units in the larger population which a sample unit represents (for weighting purposes). For example, if the sampling ratio for households is one-in-six, then the initial guess at how many population households a sampled household represents is six (itself plus five others). As mentioned previously, everyone answered certain questions (on the Short Form), and a systematic sample was selected for the Long Form. Information from questions which everyone answered was used to adjust the sample weights. For example, since everyone answered the question on "race", the sample weights were adjusted to make certain that the sample estimates of persons by "race" equaled the population count of people by race. This hopefully made the estimates from the sample data more reflective of the population.
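
The simplest version of that kind of adjustment is post-stratification: scale the weights within each category so that the weighted sample total matches the 100-percent count. The sketch below is only meant to illustrate that one idea. The names and numbers are invented, and the actual 1990 weighting involved considerably more steps than this.

    def poststratify(sample, pop_counts):
        """Scale each unit's weight so that weighted totals within each category
        match the known population counts (simplest possible adjustment)."""
        totals = {}
        for unit in sample:                        # weighted sample total per category
            totals[unit["race"]] = totals.get(unit["race"], 0.0) + unit["weight"]
        for unit in sample:                        # weight *= population count / weighted total
            unit["weight"] *= pop_counts[unit["race"]] / totals[unit["race"]]
        return sample

    # Hypothetical illustration: three sampled people, each starting at weight 6 (one-in-six)
    sample = [{"race": "White", "weight": 6.0}, {"race": "White", "weight": 6.0},
              {"race": "Black", "weight": 6.0}]
    poststratify(sample, pop_counts={"White": 13, "Black": 5})
    print([u["weight"] for u in sample])           # [6.5, 6.5, 5.0]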

For each of the 60,000 weighting areas, a standard error estimate, using the random-groups method, was computed for each of 1,804 sample data items.

Within each weighting area, sample units (a sample unit being either a housing unit or a person residing in group quarters) were assigned systematically among 25 random groups.

For each of the 25 random groups, a separate estimate of the total for each of 1,804 sample data items was computed by multiplying the weighted count for the sample data item within the random group by 25. For each data item for which the total number of people with a particular characteristic was estimated from the sample data, the random-groups standard error estimate was then computed from the 25 different estimates of the total from the random groups.

This is hard to describe without formulas. You had better look this up in my paper:

http://losinger.110mb.com/documents/Random_Groups.pdf
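
That said, a rough sketch of the computation as I have described it may help: units are dealt out among k = 25 groups, each group's weighted count is inflated by 25 to give a separate estimate of the total, and the spread of those 25 estimates gives the standard error. The code below uses the usual textbook form of the random-groups variance estimator; the exact formula the Census Bureau used is in the paper and may differ in detail (for instance, in whether deviations are taken from the mean of the group estimates or from the full-sample estimate). The numbers in the usage example are made up.

    import math

    def random_groups_se(units, k=25):
        """Random-groups standard error for an estimated total (illustrative sketch).

        Each unit is a (weight, has_characteristic) pair.  Units are assigned
        systematically to k groups; each group's weighted count, times k, is a
        separate estimate of the total; the SE comes from the spread of those
        k estimates: sqrt( sum_i (X_i - Xbar)^2 / (k*(k-1)) ).
        """
        group_totals = [0.0] * k
        for i, (weight, has_char) in enumerate(units):
            if has_char:
                group_totals[i % k] += weight          # one simple systematic assignment
        estimates = [k * t for t in group_totals]      # k separate estimates of the total
        mean = sum(estimates) / k
        return math.sqrt(sum((x - mean) ** 2 for x in estimates) / (k * (k - 1)))

    # Hypothetical data: 250 sampled persons with weight 6, the first 40 with the characteristic
    units = [(6.0, i < 40) for i in range(250)]
    print(round(random_groups_se(units), 1))           # 15.0 for these made-up data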

For each data item within a weighting area, a design effect was calculated as the ratio of the random-groups standard error estimator to the simple random sampling standard error for a one-in-six random sample (mentioned in my previous post).
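
In symbols (my notation, not necessarily the paper's):

    \mathrm{deff} = \frac{\widehat{\mathrm{SE}}_{\mathrm{rg}}}{\mathrm{SE}_{\mathrm{srs}}},
    \qquad
    \mathrm{SE}_{\mathrm{srs}} = \sqrt{\left(\frac{1}{f} - 1\right)\hat{Y}\left(1 - \frac{\hat{Y}}{N}\right)}
                               = \sqrt{5\,\hat{Y}\left(1 - \frac{\hat{Y}}{N}\right)} \quad \text{for } f = \frac{1}{6},

where \hat{Y} is the estimated total for the data item, N is the number of people (or housing units) in the weighting area, and f is the sampling rate. The second expression is just the usual binomial-based standard error of an estimated total under one-in-six simple random sampling (check the earlier post for the exact form the Census Bureau published). A design effect above one says the actual design and weighting did worse for that item than a one-in-six simple random sample would have; below one, better.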


For a state report of sample data, the design effects for each data item were averaged across the weighting areas in the state. Then, a generalized design effect for each data item type (for example, all data items that dealt with occupation) was computed. The generalized design effect was weighted in favor of data items that had higher population estimates.
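
I read "weighted in favor of data items that had higher population estimates" as a weighted average along these lines (this is my reconstruction of the description, not a formula quoted from Census Bureau documentation):

    \overline{\mathrm{deff}}_{\mathrm{type}}
      = \frac{\sum_{j \in \mathrm{type}} \hat{Y}_j \, \mathrm{deff}_j}
             {\sum_{j \in \mathrm{type}} \hat{Y}_j},

where \mathrm{deff}_j is data item j's design effect (already averaged over the weighting areas in the state) and \hat{Y}_j is its estimated state-level total.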

In my paper, I present a hypothetical example of data that might have arisen from the random-groups method. For a weighting area in Vermont, weighted counts of Whites and Blacks are listed for the 25 random groups. In my hypothetical weighting area, there are no persons of any other race. The standard errors assuming simple random sampling are the same for Whites and Blacks (as one would expect for a binomial variable). However, the random-groups standard error estimate is much higher for Whites than for Blacks. And the design effect is nearly five times higher for the estimate of Whites than for the estimate of Blacks. Since the generalized design effect computed for groups of data items was weighted in favor of data items that had higher population estimates, the generalized design effect computed for race for the state of Vermont was quite high.

Data on race were frequently included in 1990 U.S. census sample data products. Because race was asked of every census respondent (i.e., it was a census 100-percent data item), and because the weighting process used by the Census Bureau effectively forced the sample estimates by race to match the 100-percent Census counts by race, the standard errors for estimates of race probably should have been considered to be zero. However, generalized design effects were still published by race, although they were set to arbitrary constants for all reports (rather than being computed by this method).

More on my proposed modification next time.
