
Advanced Statistics Advanced Probability and Statistics Math Forum 
November 9th, 2008, 11:25 AM  #1 
Newbie Joined: Nov 2008 Posts: 1 Thanks: 0  Theoretical
This may be more imagination than knowledge, but I don't think it will hurt to have it seen anyway...

Hello, the following is an explanation of the formula that I wanted you to look over. Even though I am very amateur in this area, I hope that you find this document to be worth your time and attention. Thank you.

This first page consists of some of the variables and symbols that I will be using in the formula, so that you can get familiar with them and refer back to them if necessary. I hope the explanations are clear. I threw it together pretty quickly, and just a few aren't included below; I assume that you will already know what some of the symbols and variables stand for, because they are commonly used in many mathematical statements:

N = a number of trials of the particular event being measured
N0 = the number of trials that have been run by the time the first successful result is attained
M = the number of successful results

[((N0 + 1) - N0)…N0]1 / M1 means that I add one to the number of trials, then subtract the number of trials to get the number 1. The [ ]1 could be seen to represent the fact that I am not just looking at numbers when I come up with the numbers in the calculations, but at numbers that represent a certain group of trials that I want to consider. (Example: 5 trials go by before I get my first result. That would mean [((5 + 1) - 5)…5]1 / 1.) ((N0 + 1) - N0)…N0 means that I am looking at the numbers 1-5, up until I get to the number 5.

If you look to the denominator, you will first see 1, 2, 3…x. I am looking to divide in order to get an average, so for this first set of calculations, where one ratio has been achieved, I would simply use the number 1. As more and more ratios are attained, I divide by however many ratios I have at the top (in the numerator), after the results have been added together.

N = log(1 - DC) / log(1 - p)
This gives you the number of trials N required before there is a certain chance (the "degree of certainty," DC) that a successful outcome will occur. Also see the following website for further explanation: http://saliu.tripod.com/Saliu2.htm

The [/] / x(1) 1, 2, 3…x portion of the formula is, within that portion of the formula, simply a replacement for the theoretical probability. It is an empirical probability; in fact, it is the empirical probability attained in the other portions of the formula. [/]1, 2, 3…x is meant to represent the first number in the ratio after it has been averaged.

In my explanation I give the following as the final version of the formula:

1 - (1 - p)^(N - M) ([/] / x) + 1

…with the value for [/] / x having been determined by the following formula:

[[((N0 + 1) - N0)…N0]1 / M1 + …[((2N1 + 1) - 2N1)…2N1]2 / M2 + …[/]3, 4, 5…x < [2(N = log(1 - DC) / log(1 - [/] / x(1) 1, 2, 3…x))] / [/]1, 2, 3…x] 1, 2, 3…x < [2(N = log(1 - DC) / log(1 - [/] / x(1) 1, 2, 3…x))] / [/]1, 2, 3…x

The only problem is that in using the final version, 1 - (1 - p)^(N - M) ([/] / x) + 1, I was attempting to use the results from previous trials in order to determine a current probability that updated itself after every trial. After some discussion with Mr. Don Prohaska, the individual with whom I have been corresponding, I was informed that one cannot use previous trials to gain insight into what the probability would be for the "next" trial, other than to say that the probability is the same for each and every trial. I was told that this formula would only succeed in giving me the probability of a certain number of successes occurring within a certain number of trials, given the original probability of the event in question. He stated that one trial is independent of another. Still, I understand that there is a way to make a measurement that takes previous trials into consideration: Markov chains.
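The relation N = log(1 - DC) / log(1 - p) is easy to compute directly. Here is a minimal Python sketch (the function name is mine, just for illustration):

```python
import math

def trials_for_certainty(p, dc):
    """Number of trials N needed so that the chance of seeing at
    least one success reaches the degree of certainty DC, given a
    per-trial probability p: N = log(1 - DC) / log(1 - p)."""
    return math.log(1 - dc) / math.log(1 - p)

# Fair coin: how many flips until there is a 99.9% chance of at
# least one head?
n = trials_for_certainty(0.5, 0.999)
print(math.ceil(n))  # 10
```

For p = 0.5 and DC = 0.999 the raw value is about 9.97, so 10 flips suffice.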
I still have to do some reading and study on this method so that I may gain an understanding of how it works, but in the meantime I have made some changes to my formula in order to take a different approach. I have a different variation on the final version, where I replace 1 - (1 - p)^(N - M) ([/] / x) + 1 with N / ([/] / x) - M.

Now, let me go into the first version for a while. With the first version, I am actually attempting to do two things at once (maybe even three, come to think of it). The formula takes the number of trials compared to the number of successes and gets an initial empirical probability from that information (or some other, more complex method can replace that one if necessary). It then takes that information and determines the most ideal number of trials to be used in the measurement. Here, I am reasoning that if you measure too few or too many trials in a measurement of the empirical probability, your results will be less accurate. I therefore try to determine the "perfect number" of trials to use in the measurement. When measuring the coin toss, I believe I determined that number to be no fewer than 2 trials.

Lastly, I want to decide what the chance is that a particular outcome will occur. From what I have been told, it is possible to determine the chance that a particular outcome will occur given its probability and the number of trials that have gone by without a success. In other words, if you flip a coin, it has a 0.5 probability of landing on heads. Now, let's say that you flipped the coin several times and it landed on tails time and time again, and that you got to the 10th flip without it ever landing on heads. By the 10th flip, you would have about a 99.9% chance (I am thinking that chance is different from probability) of the coin having landed on heads. My formula just takes this type of scenario a step further and adjusts the percentage chance according to a changing number of successes as well as a changing number of trials.
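Read in the other direction, the same relation gives the degree of certainty for a fixed number of trials: DC = 1 - (1 - p)^N, which for 10 fair-coin flips comes out to about 99.9%. A quick sketch (function name is mine):

```python
def degree_of_certainty(p, n):
    """Degree of certainty: the chance of at least one success
    somewhere within n trials of per-trial probability p,
    DC = 1 - (1 - p)**n."""
    return 1 - (1 - p) ** n

print(degree_of_certainty(0.5, 10))  # 0.9990234375
```

Note that this is the chance of at least one head somewhere in the 10 flips; the probability of any single flip stays 0.5, which is the point Mr. Prohaska raised.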
In other words, let's say it landed on heads on the 10th flip. My formula would then adjust the value of the percentage chance of it landing on heads on the 11th flip as well. To get a better explanation of some of my formula and what I am attempting, copy and paste the following site into your browser and take a look at the info about the "Degree of Certainty": http://saliu.tripod.com/Saliu2.htm You can also visit the following to see if you agree with more info listed by the maker of that site: http://www.saliu.com/theoryofprobability.html

Again, I attempted to use the "empirical probability" and the "degree of certainty" to determine how many trials to consider: simply dividing the number of successes by the number of trials; using the ratio of trials to successes to determine the interval between individual empirical measurements; adding all the individual empirical values together; dividing that sum by the number of intervals in order to get an average, which is then used as the "empirical probability"; inserting that value for "[/] / x(1)"; inserting a target value for the "DC" (which was 99.9%); solving for "N"; and then multiplying the answer by 2.

[[((N0 + 1) - N0)…N0]1 / M1 + …[((2N1 + 1) - 2N1)…2N1]2 / M2 + …[/]3, 4, 5…x < [2(N = log(1 - DC) / log(1 - [/] / x(1) 1, 2, 3…x))] / [/]1, 2, 3…x] 1, 2, 3…x < [2(N = log(1 - DC) / log(1 - [/] / x(1) 1, 2, 3…x))] / [/]1, 2, 3…x

After that, I consider all of the trials that fall within that range and determine the DC for that group of items. Also, I adjust the DC as trials and successes occur over a period of time where the number of trials and successes changes: 1 - (1 - p)^(N - M) ([/] / x) + 1. (Even though I explain these steps in a linear manner, all of it is really occurring simultaneously, because one calculation affects or is input into another, even though it is all one formula.)
If you are uncertain as to what I am doing in the formulas I've been going over, you can get an even more detailed explanation in the document attached to this email. Now, if you agree with Mr. Prohaska and say that 1 - (1 - p)^(N - M) ([/] / x) + 1 is useless for determining a probability that makes use of previous trials, then I have another method. It replaces the previous formula with N / ([/] / x) - M. Just as in the previous formula, [/] / x is the symbolic representation of the empirical probability; it lies at the heart of the mathematical process, and the value of that expression is determined in a long series of calculations.

This second variation is very simple, and it makes use of the probability of the events it measures, but it isn't restricted by the limitations that govern the formulas and equations based on the three basic axioms of probability. This formula takes an additional step beyond those axioms even while obeying and making new use of them. Of course, my experience is limited and I could be wrong, but I believe this could better allow for measurements of how prior trials affect a current outcome. I believe the reason is that the three accepted axioms only create a narrow definition of how probability is structured for a single trial: theoretical probability. Therefore, when one uses these axioms as a base or foundation on which to build equations and formulas, the limits of their definition are transferred into those equations. In other words, since these axioms define the characteristics of probability for one event, the math created from them is geared to measure only one event; hence no one is better able to measure a series of events where past outcomes affect the current outcome. Forgive me if my language or terminology is not exact as I attempt to go through an explanation here.
The 1st axiom suggests that the probability of a particular event must be a real number that is equal to or less than 1 but equal to or greater than 0: 1 ≥ P(E) ≥ 0.

Taking this a step further, beyond a single-trial theoretical probability and an axiom that defines the characteristics of a single-trial event, one can expand upon this rule and say that the number of successes within a particular series of trials, multiplied by the probability, must always be a real number that is equal to or less than the number of trials but equal to or greater than 0. Here, Nx = the number of successful outcomes for a particular event and Px = the respective probability for that particular event:

N ≥ Nx(Px) ≥ 0

Even though I've never seen any version of this equation, this would be the basis for attaining an empirical probability, the same way the previous axiom allows one to define a theoretical probability.

When considering the second axiom, the probability of the sample space (the universe of possible outcomes) must equal 1: P(S) = 1. Expand upon this also and one can attain another, more encompassing version of the axiom, where the universal laws governing multiple trials are defined as well as those that govern one single event. It says the following: when the number of successes of each particular event is multiplied by its respective probability, and those results are all added together, the total must be equal to N. Here, N1 = the number of successes that were observed for a particular event and P1 = the probability of that same event:

N1(P1) + … + Nx(Px) = N

In addition to those variations on the first two axioms, and following a similar line of thought, you can also observe that the expected number of successes is equal to the number of trials multiplied by the probability: N(P).
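One version of this addition rule that holds exactly: over a full set of mutually exclusive outcomes, the expected counts N(Px) always sum to N, because the probabilities sum to 1. A quick check in Python (names are mine, for illustration only):

```python
# For N trials over mutually exclusive, exhaustive outcomes, the
# expected success counts E_i = N * P_i satisfy sum(E_i) = N,
# since the probabilities P_i sum to 1.
def expected_counts(n_trials, probs):
    return [n_trials * p for p in probs]

counts = expected_counts(10, [0.5, 0.5])   # fair coin, 10 flips
print(counts, sum(counts))  # [5.0, 5.0] 10.0
```

The same holds for a die: expected_counts(60, [1/6] * 6) gives six tens, summing to 60.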
Now, when you subtract the actual number of successes that have been observed, you are able to make comparisons and gauge the likelihood of one event occurring against another possible event: N(P) - M. This is the difference between the expected number of results and the actual number of results, and it can be either positive or negative: a positive number indicates that there are fewer successes than expected, and a negative number indicates that there are more successes than expected. It will be a whole number, a decimal, or a combination of the two.

As you can see in my formula, instead of using a theoretical probability (represented by the variable "P"), I use an empirical probability (represented by the symbols [/] / x). I end up with the formula N / ([/] / x) - M.

This formula is applied to each event, and the event that is calculated to have the highest value is viewed as the most probable, based on the original probability and the event's "performance" over any given number of past trials. A quick example: let's say you're flipping a coin. For now, let's just assume that the empirical probability is 0.5 for heads and 0.5 for tails (or you could say the ratio is 2/1 for each: two trials per success). Let's say you record ten trials where you got heads 7 times and tails 3 times. If you put the info for each into the formula, you get the following: (10 / (2/1)) - 7 = -2 for heads and (10 / (2/1)) - 3 = 2 for tails. These results are telling you that, given the value of the probability, there are 2 more heads than there should be, and that there should be 2 more tails than there are in the results. This tells you that you should be looking for tails as a result soon, using the value 0 as a pivot point toward which either measurement is attracted. In other words, the number of successes vs. the number of trials of any event will always tend toward a balance where it becomes proportional to the probability.
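Here is that calculation in Python (the function name is mine, and the empirical probability is entered as a trials-per-success ratio):

```python
def deviation_from_expected(n_trials, trials_per_success, m_successes):
    """N / ([/] / x) - M: expected successes minus observed
    successes, with the empirical probability expressed as a
    trials-per-success ratio (e.g. 2 for a fair coin)."""
    return n_trials / trials_per_success - m_successes

print(deviation_from_expected(10, 2, 7))  # -2.0 (heads: 2 more than expected)
print(deviation_from_expected(10, 2, 3))  #  2.0 (tails: 2 fewer than expected)
```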
I admit that one problem here is that unless you know when a streak is over, it isn't nearly as valuable to know whether or not an event is "over- or under-drawn." But I'm sure there are ways to measure this accurately, and I have a method in mind myself. Also, there are observations in established mathematics that support the previous ideas. Here is some math and logic to justify my previous paragraph.

Consider an equation that you may be familiar with: f(n) = q^(n-1) p, where q = 1 - p. I got this one from Mr. Prohaska, and my understanding is that it is the geometric probability: it gives you the probability that the first success occurs on trial n. Given the coin-flipping example, you have a 0.5 probability of getting heads (or tails) on the first trial. By contrast, the probability of the coin not landing on heads until the 10th flip would be (1/2)^(10-1) × (1/2) ≈ 0.001. The same calculation applies to tails. This is a much lower probability than for the first trial.

If you use your imagination a bit to picture the process, you will see in your mind's eye that this effect happens going either way: a large number of heads in a row or a large number of tails in a row. The further you get from the first trial without a success in a particular event, the lower the probability, and therefore the rarer the occurrence. That means there are very few runs in which you will have 10 failures in a row; most of the successes should happen within the first few trials. This means that, more often than not, the occurrence of successes will be proportional to the probability. In other words, if there is a probability of 0.5 (or 1/2), there will tend to be 1 success for every couple of trials.
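The geometric formula is easy to check numerically. A minimal sketch (function name is mine):

```python
def geometric_pmf(p, n):
    """f(n) = q**(n-1) * p: probability that the first success
    occurs on trial n, for per-trial success probability p."""
    q = 1 - p
    return q ** (n - 1) * p

print(geometric_pmf(0.5, 1))   # 0.5
print(geometric_pmf(0.5, 10))  # 0.0009765625  (about 1 in 1024)
```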
And even when this does not happen and a streak occurs (two or more heads in a row, for instance), it will eventually (and I think this may tend to occur relatively soon afterward) be balanced out, because the opposite event is just as likely to yield the same results as the event in question (2 or more tails in a row). Given this info, that successes tend to proportion themselves to the probability, and given the previous formula, N / ([/] / x) - M, which tracks by subtraction the relationship between the expected number of successes for a given number of trials and the actual number of successes, it can be concluded that the value of N / ([/] / x) - M will tend toward 0, and that 0 will act as a pivot point to which the calculation will always return, passing back and forth between the + and - sides of the number line. This is because all the calculation does is track the number of successes in relation to the most probable number of successes, with 0 indicating that the number of successes is exactly equal to the expected number.

1 - (1 - p)^(N - M) ([/] / x) + 1

Here, the expression "([/] / x) + 1" is equal to the following:

[[((N0 + 1) - N0)…N0]1 / M1 + …[((2N1 + 1) - 2N1)…2N1]2 / M2 + …[/]3, 4, 5…x < [2(N = log(1 - DC) / log(1 - [/] / x(1) 1, 2, 3…x))] / [/]1, 2, 3…x] 1, 2, 3…x < [2(N = log(1 - DC) / log(1 - [/] / x(1) 1, 2, 3…x))] / [/]1, 2, 3…x

N = a number of trials of the particular event being measured
N0 = the number of trials that have been run by the time the first successful result is attained
M = the number of successful results

[((N0 + 1) - N0)…N0]1 / M1 means that I add one to the number of trials, then subtract the number of trials to get the number 1. I then place ellipsis marks from there to the variable that represents the number of trials. That says that I want to look at all the numbers between 1 and N0. That would mean [((5 + 1) - 5)…5]1 / 1. Another way to say it is 5 + 1 = 6, minus the original 5, which equals 1, since I am doing this inside the [ ]1 symbol.
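Here is a quick sketch of the quantity the formula tracks for a fair coin: expected heads so far minus observed heads, after every flip (the names and setup are just illustrative):

```python
import random

def running_deviation(flips, trials_per_success=2):
    """After each flip, compute N / ([/] / x) - M for a fair coin:
    expected heads so far minus observed heads. flips is a list of
    1 (heads) and 0 (tails)."""
    values, heads = [], 0
    for n, flip in enumerate(flips, start=1):
        heads += flip
        values.append(n / trials_per_success - heads)
    return values

random.seed(1)
flips = [random.randint(0, 1) for _ in range(20)]
print(running_deviation(flips))
```

A value of 0 means observed heads exactly match the expected count; positive means fewer heads than expected, negative means more.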
1…5, or ((N0 + 1) - N0)…N0, means that I am looking at the numbers 1-5, up until I get to the number 5. I also have / M1. Since M1 is always equal to 1 in this formula, I will be left with 5/1. That is my first "ratio": 5 trials to 1 result. If you then look to the denominator, you will first see 1, 2, 3…x. As more and more ratios are attained, I divide by however many ratios I have at the top (in the numerator).

I am left with the next step, which is +…[((2N + 1) - 2N)…2N] / M2. Let us say that 5 more trials have occurred since the first measurement. This part of the formula means that I am looking to add more results to the first group of trials that have been recorded, in order to get a new and more accurate measurement. Here, the number I want to count up to will always be twice the number of trials considered in the first result. 2N here would mean 2 × 5, which is 10. That is, on the tenth trial I will take another measurement and then average the results in order to get a more accurate empirical probability.

Again, +… means that once the first measurement is made, a new target is calculated for where the next measurement will take place. The ellipses mean that there is a wait until that target is met, and also that everything between the target and the last measurement is considered in the new measurement. The target is going to be 2N, or 2 × 5, which is 10. Therefore, you get ((2 × 5 + 1) - 2 × 5), which is equal to 1. Then you are looking at all the trials from 1…2N, that is, 1-10. Let's say that this time 2 additional results were attained. That means M2 would then be equal to 3: the first result added to the 2 new results that were recorded during trials 6-10. What you have now is 10/3. This is then added to 5/1 and divided by 2 in order to get an average. The next set of trials will be a little different, though.
Even though it is not represented in the previous equation, the new average will be represented by the following set of signs and variables: ([/] / x)2, in replacement of Nx. In order to get a new target, you multiply it by 3; the next time, you multiply it by 4; and so on. Of course, you will soon have to round the numbers to the nearest whole number. The last term on the top begins with +…[/]3, 4, 5…x. This is just used to represent all the "overlapping" groups of trials to come that are not written out in full in this formula.
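The overlapping-measurement scheme can be sketched in code. This is only one reading of the worked example above (first ratio at N0, second at 2 × N0, later targets set by multiplying the running average by 3, 4, and so on), not an exact transcription of the formula:

```python
def ratio_average(outcomes):
    """Sketch of the overlapping measurements. outcomes is a list
    of 0/1 trial results with at least one success. The first
    target is N0 (the trial of the first success), the second is
    2*N0; after that, each target is the running average ratio
    times 3, 4, ... (rounded). Each measurement is cumulative
    trials / cumulative successes; the result is their average."""
    n0 = outcomes.index(1) + 1      # trial of the first success
    ratios = []
    k = 2                           # multiplier for the next target
    target = n0
    while target <= len(outcomes):
        successes = sum(outcomes[:target])
        ratios.append(target / successes)
        avg = sum(ratios) / len(ratios)
        target = round(avg * k)     # next measurement point
        k += 1
    return sum(ratios) / len(ratios)

# The worked example: first head on trial 5, two more by trial 10,
# giving ratios 5/1 and 10/3 and an average of 25/6 (about 4.17).
print(ratio_average([0, 0, 0, 0, 1, 0, 1, 0, 0, 1]))
```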

