I have been trying to figure this out for several days now, to no avail. I also asked in another forum and nobody seemed to have a clue. Hopefully you guys can help.

I understand that, basically, the Law of Large Numbers says that the difference between the sample mean and the expected value (population mean) shrinks as the number of trials grows (toward infinity). What I'm trying to figure out is the

__minimum number of trials__ required for this convergence to get (near) the expected value at a specific confidence level, e.g. 95%. Admittedly my math isn't great, and perhaps this isn't even possible.

Here's the problem: say you have a finite population (a discrete distribution, I assume) with an expected value of 30 and a standard deviation of 100. How many trials are necessary to be 95% confident that the sample mean has converged to the expected value? Hopefully I am making this clear.

I've figured out how to use Chebyshev's inequality to calculate the trials needed for a specific confidence level and margin of error (e.g. a biased coin with a 48% chance of heads, 95% confidence, and a 1% margin of error requires 49,920 trials). But that is different from what I am asking, because my mean and standard deviation are absolute values, not probabilities. The only other approach I've tried so far was squaring the coefficient of variation and dividing it by alpha, but I was told elsewhere that this was not correct. I also suspect it's wrong because the number of trials it gives can exceed the size of the finite population.
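For reference, here's a sketch in Python of the Chebyshev calculation I've been using, written so it takes an absolute standard deviation and an absolute margin of error rather than probabilities (the function names and the ±10 margin in the example are just my own choices for illustration). I've also included the textbook normal-approximation formula and the usual finite-population correction, since my population is finite:

```python
import math
from statistics import NormalDist

def chebyshev_trials(sigma, margin, alpha):
    """Chebyshev bound: P(|sample_mean - mu| >= margin) <= sigma^2 / (n * margin^2).
    Setting the right-hand side equal to alpha and solving for n gives
    n >= sigma^2 / (alpha * margin^2)."""
    return math.ceil(sigma**2 / (alpha * margin**2))

def normal_approx_trials(sigma, margin, alpha):
    """CLT-based approximation: n >= (z_{alpha/2} * sigma / margin)^2.
    Much tighter than Chebyshev, but assumes the sample mean is
    approximately normally distributed."""
    z = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    return math.ceil((z * sigma / margin) ** 2)

def finite_population_correction(n, population_size):
    """Standard finite-population correction: when sampling without
    replacement from N items, the required sample size shrinks to
    n / (1 + (n - 1) / N)."""
    return math.ceil(n / (1 + (n - 1) / population_size))

# The coin example from my post: variance = p * (1 - p) = 0.48 * 0.52
print(chebyshev_trials(math.sqrt(0.48 * 0.52), 0.01, 0.05))

# My absolute-valued case: sigma = 100, with a chosen margin of +/-10
print(chebyshev_trials(100, 10, 0.05))       # 2000
print(normal_approx_trials(100, 10, 0.05))   # 385
```

What this makes obvious to me is that both formulas need an absolute margin of error as an input, which is exactly the quantity I don't know how to choose in my problem. (Squaring the coefficient of variation and dividing by alpha is, as far as I can tell, just the Chebyshev formula with the margin set equal to the mean itself, which would explain why it behaves badly.)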

Any ideas? Is this calculation even possible?