Discover the normal distribution through random processes
At each peg, a ball has equal probability of going left or right. After n rows, the number of right-bounces follows a binomial distribution. With many rows and equal probability, the distribution approximates a normal (Gaussian) curve — the Central Limit Theorem in action. Standard deviation σ = √(np(1-p)) measures spread. This underlies all statistical inference, from polls to quantum measurements.
Plinko Probability turns the famous game-show board into a rigorous statistics laboratory. Each ball dropped through the board hits a series of pegs, bouncing left or right at each one independently. After traveling through all the rows, the ball lands in one of the bins at the bottom, and the histogram of where many balls land reveals the underlying probability distribution. When each peg has an equal 50/50 chance of sending the ball left or right, the number of right-bounces out of n rows follows a binomial distribution. The critical insight — the Central Limit Theorem in action — is that this binomial distribution increasingly resembles a normal (Gaussian, bell-shaped) curve as the number of rows grows. This happens because the final bin position is the sum of many small independent random steps, and the CLT says that sums of independent random variables tend toward a normal distribution regardless of the individual step distribution. The mean of the distribution is μ = n × p (where p is the probability of going right) and the standard deviation is σ = √(n × p × (1-p)). Plinko makes these abstract formulas physically visible: you can literally watch the bell curve assemble itself one ball at a time, and you can see how changing the bias (prob_right) skews the histogram away from the center.
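The board is easy to imitate in code: each ball is just a sum of independent left/right decisions. A minimal Monte Carlo sketch, assuming a 6-row board with a fair 50/50 peg (the helper names `plinko_trial` and `simulate` are ours, not part of the lab):

```python
import random
import math

def plinko_trial(rows, p_right=0.5):
    """Count right-bounces for one ball falling through `rows` pegs."""
    return sum(1 for _ in range(rows) if random.random() < p_right)

def simulate(n_balls, rows, p_right=0.5):
    counts = [0] * (rows + 1)  # one bin per possible right-bounce count
    for _ in range(n_balls):
        counts[plinko_trial(rows, p_right)] += 1
    return counts

random.seed(1)
rows, p, n_balls = 6, 0.5, 100_000
counts = simulate(n_balls, rows, p)

# Sample mean and standard deviation of the landing bins
mean = sum(k * c for k, c in enumerate(counts)) / n_balls
var = sum(c * (k - mean) ** 2 for k, c in enumerate(counts)) / n_balls
print(f"sample mean  {mean:.3f}   theory mu    {rows * p}")
print(f"sample sigma {math.sqrt(var):.3f}   theory sigma {math.sqrt(rows * p * (1 - p)):.3f}")
```

With 100,000 balls the sample mean and spread land very close to the theoretical μ = n × p = 3 and σ = √(n × p × (1-p)) ≈ 1.22.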
Misconception: Past balls influence where the next ball will land — if many balls fell right recently, the next is more likely to go left.
Correct: Each peg encounter is fully independent of every other. There is no memory in the board. This is the gambler's fallacy: the belief that random outcomes must 'balance out' in the short run. In reality, the probability at each peg is exactly prob_right on every bounce, regardless of what previous balls did. The long-run distribution converges to the expected shape not because individual outcomes compensate, but because the law of large numbers averages out fluctuations over many trials.
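Independence can be checked directly: the frequency of a right-bounce immediately after a left-bounce should match the unconditional frequency. A small sketch under the fair-peg assumption (variable names are ours):

```python
import random

random.seed(0)
p_right = 0.5
bounces = [random.random() < p_right for _ in range(200_000)]

# Frequency of "right" overall vs. immediately after a "left"
overall = sum(bounces) / len(bounces)
after_left = [b for prev, b in zip(bounces, bounces[1:]) if not prev]
conditional = sum(after_left) / len(after_left)

print(f"P(right) ~ {overall:.3f}   P(right | previous was left) ~ {conditional:.3f}")
```

Both frequencies come out near 0.5: knowing the previous outcome tells you nothing about the next one, which is exactly what the gambler's fallacy denies.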
Misconception: All bins are equally likely because the ball makes random choices at every peg.
Correct: The choices are random, but the number of different paths leading to each bin varies enormously. The center bin can be reached by many different sequences of lefts and rights (e.g., C(6,3) = 20 paths for a 6-row board), while the extreme bins can only be reached by one path each (all right or all left). More paths to a bin means higher probability — exactly the logic of Pascal's triangle and the binomial coefficients.
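The path counts are just binomial coefficients, which the Python standard library computes directly. A sketch for the 6-row board used in the example:

```python
from math import comb

rows = 6
paths = [comb(rows, k) for k in range(rows + 1)]
total = sum(paths)  # 2**rows = 64 equally likely paths when p = 0.5
print("paths per bin:", paths)  # [1, 6, 15, 20, 15, 6, 1]
print("bin probabilities:", [round(c / total, 4) for c in paths])
```

The center bin's 20 paths versus 1 path for each extreme bin is why the histogram peaks in the middle even though every individual bounce is a fair coin flip.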
Misconception: Increasing the number of rows makes the distribution narrower because there are more pegs to average over.
Correct: More rows make the distribution wider, not narrower. Standard deviation grows as √(n × p × (1-p)), which increases with n. However, if you normalize by dividing by n (looking at the fraction of right-bounces rather than the count), the normalized distribution does become narrower as n increases. The absolute spread grows; the relative spread shrinks. This is an important distinction in understanding the CLT.
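The two opposing trends fall straight out of the formula. A quick sketch tabulating both (the row counts chosen here are arbitrary illustrations):

```python
import math

p = 0.5
for n in (6, 24, 96, 384):
    sigma = math.sqrt(n * p * (1 - p))  # spread of the right-bounce count
    # sigma grows with sqrt(n); sigma / n shrinks as 1 / sqrt(n)
    print(f"n = {n:4d}   sigma = {sigma:6.3f}   sigma/n = {sigma / n:.4f}")
```

Each quadrupling of n doubles the absolute spread σ but halves the relative spread σ/n, which is the √n scaling at the heart of the CLT.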
Misconception: The normal distribution that emerges is exact for any number of rows.
Correct: The binomial distribution only approximates a normal distribution, and the approximation improves as n (number of rows) increases and as p stays near 0.5. For small n or extreme p values (near 10% or 90%), the distribution is visibly discrete and asymmetric rather than bell-shaped. The simulation makes this visible: compare the histogram at 2 rows vs. 12 rows, or at prob_right 50% vs. 20%.
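The quality of the approximation can be measured by comparing the exact binomial probability of each bin to the normal density evaluated at the same point. A sketch using the same row/bias settings the simulation suggests comparing:

```python
from math import comb, sqrt, pi, exp

def binom_pmf(n, p, k):
    """Exact probability of k right-bounces in n rows."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def normal_pdf(x, mu, sigma):
    """Gaussian density with the matching mean and spread."""
    return exp(-((x - mu) ** 2) / (2 * sigma**2)) / (sigma * sqrt(2 * pi))

errors = {}
for n, p in ((2, 0.5), (12, 0.5), (12, 0.2)):
    mu, sigma = n * p, sqrt(n * p * (1 - p))
    errors[(n, p)] = max(
        abs(binom_pmf(n, p, k) - normal_pdf(k, mu, sigma)) for k in range(n + 1)
    )
    print(f"n = {n:2d}, p = {p}: worst per-bin error {errors[(n, p)]:.4f}")
```

The worst-case error shrinks sharply from 2 rows to 12 rows at p = 0.5, and grows again when the bias is pushed to p = 0.2, matching what the histograms show.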
The final bin position is the sum of many independent random steps, each taking value 0 (left) or 1 (right). The Central Limit Theorem states that the sum of a large number of independent, identically distributed random variables tends toward a normal distribution regardless of the shape of the individual variable's distribution. With 6 or more rows, the binomial distribution of total right-bounces is already a reasonable approximation to a Gaussian bell curve centered at n × p with standard deviation √(n × p × (1-p)).
The number of paths from the top peg to each bottom bin is given by the binomial coefficients, which form Pascal's triangle. For a board with n rows, the k-th bin (counting from zero) has C(n,k) paths leading to it. Because each path is equally likely (when prob_right = 50%), the relative probabilities of the bins are proportional to the corresponding row of Pascal's triangle — a direct geometric connection between combinatorics and probability.
With prob_right exactly 50%, the theoretical distribution is symmetric, but any finite sample will typically show some asymmetry due to random variation. With 100 balls and 6 rows, the standard error on each bin count is large enough to produce noticeable left-right imbalance in many individual runs. This is not a flaw — it is exactly what random sampling looks like. Running the simulation multiple times and comparing shows how the average of many runs converges to the symmetric theoretical shape.
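This sampling noise is easy to see numerically by repeating a 100-ball run and recording the left-right imbalance each time. A sketch (the `run` helper and the fixed seed are ours):

```python
import random

random.seed(7)

def run(n_balls=100, rows=6, p=0.5):
    """One run's imbalance: (balls right of center bin) - (balls left of it)."""
    counts = [0] * (rows + 1)
    for _ in range(n_balls):
        counts[sum(random.random() < p for _ in range(rows))] += 1
    center = rows // 2
    return sum(counts[center + 1:]) - sum(counts[:center])

imbalances = [run() for _ in range(10)]
print("per-run imbalance:", imbalances)
print("mean over runs:", sum(imbalances) / len(imbalances))
```

Individual runs wobble noticeably to one side or the other even though the theoretical distribution is perfectly symmetric; the average over many runs sits much closer to zero.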
The formula σ = √(n × p × (1-p)) predicts the spread of the landing distribution. With n = 6 rows and p = 0.5, σ = √(6 × 0.5 × 0.5) ≈ 1.22 bins. Note that the 68/95 rule is a large-n normal approximation; for small discrete cases like n = 6, the exact binomial probabilities differ from those percentages. For n = 6, p = 0.5, the exact probability of landing within 1 bin of the center is about 78%, not 68%. You can verify this by counting the fraction of balls in the central bins after dropping 100 balls. The formula also shows why increasing rows widens the distribution: σ grows with √n.
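The 78% figure can be verified exactly by counting paths instead of dropping balls, since every path is equally likely at p = 0.5:

```python
from math import comb

n = 6
total = 2 ** n          # 64 equally likely paths
center = n // 2         # bin 3
# Paths ending within one bin of center: bins 2, 3, and 4
within_one = sum(comb(n, k) for k in (center - 1, center, center + 1))
print(f"P(within 1 bin of center) = {within_one}/{total} = {within_one / total:.5f}")
```

The count is 15 + 20 + 15 = 50 paths out of 64, i.e. 78.125%, noticeably above the 68% that the large-n normal rule would suggest for one standard deviation.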
Not exactly — Plinko follows a binomial distribution, which is discrete (balls can only land in whole-numbered bins), while the normal distribution is continuous. However, the binomial distribution converges toward the normal distribution as the number of rows increases, provided prob_right is not too close to 0 or 1. For small n or extreme p, use exact binomial probabilities rather than the normal approximation.
Setting prob_right away from 50% models any binary random process with unequal probabilities: a biased coin, an allele with 30% transmission probability, a manufacturing process where 20% of items are defective, or a medical test with a known false-positive rate. The shift in the histogram peak to μ = n × p and the change in spread to σ = √(n × p × (1-p)) illustrate how the binomial distribution generalizes beyond the fair-coin case, making it applicable across biology, quality control, genetics, and opinion polling.
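The shift in peak and spread under bias falls straight out of the two formulas. A sketch tabulating a few illustrative p values (chosen by us, not prescribed by the lab):

```python
from math import sqrt

n = 6
for p in (0.5, 0.3, 0.2):
    mu = n * p                    # peak of the histogram
    sigma = sqrt(n * p * (1 - p)) # spread around the peak
    print(f"p = {p}: peak near bin {mu:.1f}, sigma = {sigma:.3f}")
```

Lowering p drags the peak toward the left edge (μ = 1.2 bins at p = 0.2) and also tightens the spread slightly, since p(1-p) is maximized at p = 0.5.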