Most strategies for the basic risk communication problem of expressing the probability of a well-defined event presume that the probability is precisely characterized as a real number. In practice, however, such probabilities can often only be estimated from data limited in abundance and precision. Likewise, risk analyses often yield imprecisely specified probabilities because of measurement error, small sample sizes, model uncertainty, and demographic uncertainty arising when continuous variables are estimated from discrete data. Under the theory of confidence structures, the binomial probability of an event estimated from binary data with k successes out of n trials is associated with a particular structure that has the form of a p-box, i.e., bounds on a cumulative distribution function. When n is large, this structure approximates the beta distribution obtained by Bayesians under a binomial sampling model and the Jeffreys prior, and asymptotically it approximates the scalar frequentist estimate k/n. But when n is small, it is imprecise and cannot be approximated by any single distribution because of demographic uncertainty. These confidence structures make apparent how the size of n governs the reliability of the estimate: the larger n is, the more reliable the probability estimate. When a risk analysis yields a result in the form of a precise distribution or an imprecise p-box for an event’s probability, we can approximate that result with the confidence structure corresponding to a binomial probability estimated for some values of k and n. Thus we can characterize the event probability from the risk analysis with a terse, natural-language expression of the form “k out of n”, where k and n are nonnegative integers and 0 ≤ k ≤ n. We call this the equivalent binomial count, and argue that it condenses both the probability and the uncertainty about that probability into a form that psychometric research suggests will be intelligible to humans.
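The large-n beta approximation described above can be sketched in a few lines. The following is an illustrative computation, not code from the paper (the function name is ours): under the Jeffreys prior Beta(1/2, 1/2), observing k successes in n trials yields the posterior Beta(k + 1/2, n − k + 1/2), whose mean and standard deviation follow from the standard beta-distribution formulas. For a fixed ratio k/n, the spread shrinks as n grows, which is exactly what the size of n communicates about the reliability of the estimate:

```python
from math import sqrt

def jeffreys_posterior(k, n):
    """Mean and standard deviation of the Beta(k + 1/2, n - k + 1/2)
    posterior for a binomial probability under the Jeffreys prior."""
    a, b = k + 0.5, n - k + 0.5
    mean = a / (a + b)
    # Variance of Beta(a, b) is ab / ((a+b)^2 (a+b+1)).
    sd = sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
    return mean, sd

# Same success fraction k/n = 0.3, but very different sample sizes:
small = jeffreys_posterior(3, 10)      # few data: wide posterior
large = jeffreys_posterior(300, 1000)  # much data: narrow posterior
```

Both calls give a posterior mean near 0.3, but the standard deviation for n = 1000 is far smaller than for n = 10, illustrating why “3 out of 10” and “300 out of 1000” convey the same probability with very different reliability.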
Gigerenzer calls such integer pairs “natural frequencies” because humans appear to natively understand their implications, including what the size of n says about the reliability of the probability estimate. We describe preliminary data collected with Amazon Mechanical Turk that appear to show that humans correctly understand these expressions.