Stat Testing and Base Sizes
Statistical testing (or stat testing) helps you understand whether differences between groups of data are meaningful — or if they could have occurred by random chance. In Glass, stat testing is built into the Build Crosstab tool, allowing you to compare results between banner points (such as demographics, waves, or custom variables) with a single click.
What Stat Testing Does
Statistical testing identifies whether two groups’ results are significantly different at a given confidence level. For example:
58% of Males selected “Yes,” vs. 49% of Females — is that difference real or random?
Stat testing answers that question. In the Glass platform, you can apply statistical tests directly in Crosstabs to compare:
Demographic groups (e.g., Gender, Age, Region)
Custom segments or variables
Waves in longitudinal studies
When enabled, Glass will calculate the statistical significance of differences between groups for each question and flag them with letters in your export.
How to Enable Stat Testing
Stat testing is available on the Crosstab Configuration page:
Go to the Build Crosstab icon on the left sidebar.
Click the Crosstab Configuration tab.
Check the box labeled Include Stat Testing.
Choose your confidence level:
90% (α = 0.10)
95% (α = 0.05) - default and most commonly used
99% (α = 0.01)
When you export your Crosstab file, statistically significant differences will be shown with letters in the table (for example, “A,” “B,” or “C”) corresponding to the groups in your banner.
Pro Tip: Start with the 95% confidence level. It strikes the right balance between sensitivity and reliability for most survey applications.
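If you're curious what these confidence levels mean numerically, here is a minimal sketch (standard statistics, not Glass's internal code) showing the two-sided critical z-value each level implies:

```python
from scipy.stats import norm

# Two-sided critical z-values for the confidence levels listed above.
# A difference is flagged when its z-statistic exceeds this threshold.
for confidence, alpha in [(0.90, 0.10), (0.95, 0.05), (0.99, 0.01)]:
    z_crit = norm.ppf(1 - alpha / 2)  # e.g., 1.96 for 95% confidence
    print(f"{confidence:.0%} confidence (alpha = {alpha}): |z| > {z_crit:.2f}")
```

Raising the confidence level raises the bar a difference must clear, so 99% flags fewer (but more certain) differences than 90%.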
How It Works in Glass
The Glass platform performs column-based statistical testing. That means:
Each column in your Crosstab (each banner point) is compared to the other columns in the same question.
Differences are tested using standard two-proportion z-tests (for categorical questions) or t-tests (for means of numeric scales).
Stat testing is only applied where there are valid respondent counts in both groups being compared.
If you’re viewing results for multiple waves (for example, Wave 1 vs. Wave 2), Glass will test each wave against the others and show where differences are statistically significant.
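To illustrate the column test described above, here is a textbook two-proportion z-test in Python. This is a standard formulation, not Glass's internal implementation, and the counts are hypothetical (chosen to mirror the 58% vs. 49% example):

```python
from math import sqrt
from scipy.stats import norm

def two_proportion_z_test(x1: int, n1: int, x2: int, n2: int) -> tuple[float, float]:
    """Compare two column proportions; returns (z, two-sided p-value)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                        # pooled proportion under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))  # standard error under H0
    z = (p1 - p2) / se
    p_value = 2 * (1 - norm.cdf(abs(z)))                  # two-sided
    return z, p_value

# Hypothetical example: 145/250 = 58% vs. 123/250 = ~49%
z, p = two_proportion_z_test(145, 250, 123, 250)
print(f"z = {z:.2f}, p = {p:.3f}")  # p < 0.05, so significant at the 95% level
```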
Understanding Base Sizes
Every percentage in your Crosstab is based on a base size - the number of respondents included in that cell or group.
You’ll find these base sizes displayed directly under each banner point in your export. They’re critical for interpreting results correctly because small bases can make differences look bigger (or smaller) than they really are.
Guidelines for interpreting base sizes:
n < 30: Treat results as directional only — too small for reliable statistical testing.
n = 30–100: Use caution; results can be unstable or highly variable.
n > 100: Generally reliable for comparing subgroups.
n > 300: Highly stable results; strong confidence for subgroup comparisons and stat testing.
Pro Tip: If your base size is too small, Glass may not run statistical tests for that cell — this prevents false positives from unreliable sample sizes.
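To see why small bases are risky, here is a short sketch (the standard margin-of-error formula with hypothetical base sizes, not Glass's code) showing how much a 50% result can wobble at each of the thresholds above:

```python
from math import sqrt

# 95% margin of error for an observed proportion of 50% (the worst case)
# at base sizes spanning the guidelines above.
p, z_95 = 0.50, 1.96
for n in [25, 50, 100, 300, 1000]:
    moe = z_95 * sqrt(p * (1 - p) / n)
    print(f"n = {n:>4}: ±{moe:.1%}")

# n =   25: ±19.6%  -> directional only
# n =  100: ±9.8%   -> usable with care
# n =  300: ±5.7%   -> stable
```

At n = 25, a "difference" of 10 points is well inside the noise; at n = 300 it is not.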
What Stat Testing Flags Mean
In your Crosstab export:
Each banner column is assigned a letter (A, B, C, etc.).
When you see a letter next to a percentage, that value is significantly higher than the value in the column represented by that letter.
Example:
Gender             | % Selected
A. Male (n=250)    | 58% B
B. Female (n=250)  | 49%
This means the male result (58%) is significantly higher than the female result (49%) at your selected confidence level.
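The letter flags can be reproduced with a simple pairwise loop. The sketch below uses hypothetical columns and a plain z-test at 95% confidence; it is an illustration of the flagging logic, not Glass's code:

```python
from math import sqrt

# Hypothetical banner columns: (letter, count selecting "Yes", base size)
columns = [("A", 145, 250), ("B", 123, 250), ("C", 80, 200)]
Z_CRIT = 1.96  # 95% confidence

def beats(x1, n1, x2, n2):
    """True if column 1's proportion is significantly higher than column 2's."""
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (x1 / n1 - x2 / n2) / se > Z_CRIT

for letter, x, n in columns:
    flags = [other for other, x2, n2 in columns
             if other != letter and beats(x, n, x2, n2)]
    print(f"{letter}. {x / n:.0%} {' '.join(flags)}")
    # Prints e.g. "A. 58% B C" -- A is significantly higher than B and C
```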
Stat Testing Across Overlapping Segments
One important consideration when interpreting results is whether the groups you’re comparing are mutually exclusive or overlapping.
Mutually exclusive segments (e.g., Male vs. Female) have no shared respondents. Statistical tests between them are fully independent and appropriate.
Overlapping segments, however, share some of the same respondents — for example:
“Millennials” and “Brand Users” may overlap if many Millennials use the brand.
“Heavy Buyers” and “Loyal Customers” might include many of the same people.
When you run stat testing between overlapping groups, Glass will still calculate differences, but the results should be interpreted with caution. Because the same respondents may appear in both groups, their answers are not fully independent, which means:
The test’s independence assumption is violated: shared respondents make the two columns positively correlated, so the standard formula misstates the variance of the difference (typically overstating it, which can hide real differences).
The larger the overlap, the further the reported significance level drifts from the nominal one.
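For intuition, the sketch below (standard variance algebra with hypothetical segment sizes, not Glass's adjustment) compares the naive independent-samples standard error with one that accounts for respondents shared by both groups:

```python
from math import sqrt

# Hypothetical overlapping segments, e.g. "Millennials" (n1) and "Brand Users" (n2),
# sharing n_overlap of the same respondents on a yes/no question.
n1, n2, n_overlap = 300, 280, 150
p = 0.50  # assume a common underlying proportion for simplicity

var1 = p * (1 - p) / n1
var2 = p * (1 - p) / n2
cov = n_overlap * p * (1 - p) / (n1 * n2)  # shared respondents correlate the columns

se_naive = sqrt(var1 + var2)           # what an independent-samples test assumes
se_true = sqrt(var1 + var2 - 2 * cov)  # accounts for the positive covariance

print(f"naive SE: {se_naive:.4f}, overlap-adjusted SE: {se_true:.4f}")
# The naive SE is larger here, so the independent test is conservative;
# either way, its nominal confidence level no longer strictly applies.
```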
Best Practices for Overlapping Segments:
Use stat testing as directional guidance only for overlapping groups.
Clearly document in your report when comparisons include shared respondents.
Where possible, create mutually exclusive variables for testing (e.g., “Brand User Only” vs. “Non-User”).
For broad exploratory insight, overlapping segments are fine — but for formal statistical claims, always use non-overlapping group definitions.
Pro Tip: You can verify whether segments overlap by checking your Survey Variables or Definitions tabs in your Crosstab export – if the same respondent IDs appear in multiple segments, those groups are overlapping.
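If you have exported each segment’s respondent IDs, that check is a one-line set intersection. The IDs and segment names below are hypothetical:

```python
# Hypothetical respondent ID lists pulled from a Crosstab export's Definitions tab.
millennials = {101, 102, 103, 104}
brand_users = {102, 104, 201, 202}

shared = millennials & brand_users
print(f"Shared respondents: {sorted(shared)}")  # non-empty -> overlapping segments
overlap_rate = len(shared) / min(len(millennials), len(brand_users))
print(f"Overlap: {overlap_rate:.0%} of the smaller segment")
```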
