December 4th, 2016, 01:39 AM  #21 
Newbie Joined: Mar 2016 From: Saskatoon, Saskatchewan, Canada Posts: 29 Thanks: 1 Math Focus: Logic 
I must make sure you understand that I'm arguing that Bell's inequality is an 'error' when USED as a means to prove or disprove QM, not that it lacks mathematical validity in itself. What needs to be compared is how the two similar problems, the Monty Hall problem and Bell's inequality, are being used. They are identical in form, but Bell's theorem is being used to treat the 2/3 solution as the default correct one.

What is false in using real experiments for the Monty Hall problem is that NATURE itself is NOT 'fair' in treating possibilities as evenly distributed. As such, there are two kinds of probabilities, of which only the 1/2 is correct if asking this about nature. Ironically, the identical form is being used by QM experiments that default to TRUSTING that the LOGIC of the Monty Hall problem IS correct via nature; then, when NATURE demonstrates the 1/2 solution is correct, they treat NATURE itself as at fault and NOT the faulty use of the logic. It is like using a calculator set in base 2 and taking a result that reads "10" to mean base 10's "10".

In the 'experiments' used to prove the validity of the Monty Hall problem, one cheats by using what is falsely believed to be sincerely 'random', when all computers necessarily draw random numbers using a logical assumption based on consistent 'fairness'. You can't have both. This is also its own kind of "uncertainty principle": if you try to make a selection of odds 'fair', it loses being sincerely 'random'; if you try to make it 'random', it can no longer BE 'fair'. Do you follow?

[P.S. I haven't used probability nomenclature, and using P(x) form conflicts with how I normally use this for predicate logic and functions elsewhere. While I will eventually use it, if you can follow using simple fractions, can you do so? It'll also make it easier for others who lack the literal statistical symbols to follow. Thanks.]
December 5th, 2016, 03:02 AM  #22  
Senior Member Joined: Apr 2014 From: Glasgow Posts: 1,993 Thanks: 652 Math Focus: Physics, mathematical modelling, numerical and computational solutions
I wouldn't call P(A) and P(A|B) two different 'kinds' of probability; they just describe how the probability of a result changes depending on some prior knowledge.
The Monty Hall problem was a quiz question set up on purpose to have evenly distributed probabilities, yet leading to a rather counter-intuitive result, because people forget that the probability of choosing the winning door given that a false door has been revealed is not equal to the probability of choosing the winning door on its own.

If you roll an unfair, weighted die and get a result, that process is still random and you can still use bog-standard statistics to determine useful things, like expectation values, standard deviations and all that good stuff. Quantum mechanics makes use of discrete probabilities and of probability density functions, which are continuous distributions. Rarely are those distributions flat, but they are still random systems.

The uncertainty principle has nothing to do with probability and is one of the most common misconceptions picked up by students studying quantum mechanics. The uncertainty principle is more closely tied to measurements of pairs of observables and the ability to obtain those observables to a certain precision. It has little to do with Bell's theorem or entanglement.

Also... computers are the worst example of randomness, because the way they derive random numbers is by using a predetermined sequence of pseudo-random numbers and an index based on the system clock.
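The swap/stay asymmetry discussed in this post can be checked with a quick Monte Carlo sketch. The door numbering, function name and trial count below are my own illustration, not from the thread:

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def play(switch, n_doors=3):
    """Play one Monty Hall game; return True if the player wins."""
    winning = random.randrange(n_doors)
    choice = random.randrange(n_doors)
    # Host opens a losing door that is neither the prize nor the player's pick.
    opened = random.choice(
        [d for d in range(n_doors) if d not in (winning, choice)]
    )
    if switch:
        # Switch to the one remaining unopened door.
        choice = next(d for d in range(n_doors) if d not in (choice, opened))
    return choice == winning

n = 100_000
stay = sum(play(False) for _ in range(n)) / n
swap = sum(play(True) for _ in range(n)) / n
print(f"stay ≈ {stay:.3f}, swap ≈ {swap:.3f}")  # ≈ 1/3 and ≈ 2/3
```

Over many games the stay strategy wins about one third of the time and the swap strategy about two thirds, matching the standard analysis described in the post.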
Last edited by Benit13; December 5th, 2016 at 03:06 AM.  
December 6th, 2016, 01:28 AM  #23  
Newbie Joined: Mar 2016 From: Saskatoon, Saskatchewan, Canada Posts: 29 Thanks: 1 Math Focus: Logic
For the Monty Hall problem, the error is in assuming the probability is 2/3 IF only ONE game is played. It reduces to the 1/2 correctly interpreted by most people hearing of this puzzle, because the way vos Savant originally told the story lacks clarity about how often one CAN play. The distinction in people's minds is the INDEPENDENCE of the game. I just gave this example on another site: a lottery with normal independent odds of winning of 1/14 million treats each individual's purchase independently. HOWEVER, the odds of ANYONE winning are anywhere from 1/3 to 1/10, depending on how many tickets are sold. As such, the 'increase' here is due to the fact that we remove the independent nature of a 'win', because we are no longer concerning ourselves with ONE independent ticket purchase.
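The lottery arithmetic in this post is easy to make concrete: the per-ticket odds stay fixed, while the chance that anyone at all wins grows with the number of tickets sold. A minimal sketch using the 1-in-14-million figure quoted in the post (the ticket counts are illustrative, not from the thread):

```python
# Probability that at least one of n independent tickets wins,
# given per-ticket odds p: 1 - (1 - p)**n.
p = 1 / 14_000_000  # per-ticket odds quoted in the post

for n in (1, 1_000_000, 5_000_000, 10_000_000):
    anyone = 1 - (1 - p) ** n
    print(f"{n:>10} tickets sold -> P(anyone wins) = {anyone:.4f}")
```

With one ticket the probability is just p; with millions of tickets sold, "someone wins" becomes likely even though each individual ticket's odds never change.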
If you do not know the difference between 'fairness' and 'randomness', I find this lack of understanding highly odd and suspect. Nature treats "independent" events as distinct from a collection of them. You do NOT have a better chance of winning a lottery simply by buying more tickets over time in independent games. Your chances DO improve where you buy more tickets in one game, though. So in the Monty Hall game, treating an independent single game as though all possibilities sincerely exist requires you to PROVE this true by PRESENTING all the parallel worlds in which this game is simultaneously being played. When you use multiple games to justify the odds of 2/3, this only occurs IN EXPERIMENT because the experiment ASSURES the 'fairness' (the equal distribution) among all possibilities. But then the 2/3 result by this method is flawed if it is assumed to be 'random'.

This directly relates to the Bell-EPR experiments, because they FALSELY treat the collection of events as valid when NATURE itself, unlike human mathematicians, should be trusted by default. That is why you get the ACTUAL 1/2 results in this. YET the QM experimenters turn this around: they treat the mathematics involved as the inerrant factor by DEMANDING that Nature must abide by a 2/3 result. Because nature does not, instead of assuming the mathematics being used is illegitimate, they assert that the experiment PROVES that nature itself is 'flawed' (weird). Superposition is a fraud. And since I already understand the experiment SHOULD be 1/2 AND Nature DOES prove this as 1/2, why would you think that I, and not you, am making the mistake? I'm guessing that, regardless of my validity here, the QM experiments are about politics, just as with a lot of the crap going on in science that inappropriately uses math and logic.
December 6th, 2016, 01:34 AM  #24  
Newbie Joined: Mar 2016 From: Saskatoon, Saskatchewan, Canada Posts: 29 Thanks: 1 Math Focus: Logic
A perfectly 'random' system is necessarily 'unfair'!! I said that you CAN'T have a system that is 'random' AND 'fair' at the same time. The more random, the less 'fair'; the more 'fair', the less random. "Fairness" in this context is like having inside knowledge of a trade deal that gives you a more 'fair' chance to win in the stock market. But then it is not 'random' if you have such privileged information, which is why it is illegal.
December 6th, 2016, 03:32 AM  #25  
Senior Member Joined: Apr 2014 From: Glasgow Posts: 1,993 Thanks: 652 Math Focus: Physics, mathematical modelling, numerical and computational solutions
Some people like to make "randomness" a continuous parameter that varies between 0 and 1, where 1 is a flat distribution and 0 is a Kronecker delta, but that's a detail.
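One concrete way to realise the 0-to-1 "randomness" parameter mentioned here is normalised Shannon entropy. This particular measure is my choice for illustration, not necessarily what the poster had in mind:

```python
import math

def normalized_entropy(probs):
    """Shannon entropy scaled to [0, 1]: 1 for a flat distribution,
    0 for a delta distribution (all mass on one outcome)."""
    n = len(probs)
    h = -sum(p * math.log(p) for p in probs if p > 0)
    return h / math.log(n)

print(normalized_entropy([0.25, 0.25, 0.25, 0.25]))  # flat -> 1.0
print(normalized_entropy([1.0, 0.0, 0.0, 0.0]))      # delta -> 0.0
```

A weighted die would land somewhere strictly between these two extremes, which matches the idea that it is still random even though its distribution is not flat.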
I don't mind someone trying to point out flaws in existing theories, but it might help if you actually looked at Bell's theorem itself rather than working with analogies.
Last edited by Benit13; December 6th, 2016 at 03:43 AM.  
December 6th, 2016, 03:36 AM  #26  
Senior Member Joined: Apr 2014 From: Glasgow Posts: 1,993 Thanks: 652 Math Focus: Physics, mathematical modelling, numerical and computational solutions
 
December 7th, 2016, 02:37 PM  #27  
Newbie Joined: Mar 2016 From: Saskatoon, Saskatchewan, Canada Posts: 29 Thanks: 1 Math Focus: Logic
I am still not familiar enough with formal statistics to make sense of your interpretation without a digression into it. As such, if you want to discuss the distinction between dependence and independence of the factors involved, I prefer you either not use predetermined concepts from an area I am not familiar with, or RECONSTRUCT your own understanding relative to me here.

The "independence" I am referring to means that when events are repeated, EACH local probability cannot be extended to multiple events. For example, if you buy a 6/49 lottery ticket, the odds of winning are approximately 1/(14 million); so, if you buy one ticket for one draw, you do NOT increase your odds by being sure to play in every draw separately. The odds reset at each game. BUT, if you ask the odds of ANYONE winning, those 'odds' DO increase. Without doing the math, we know that "at least someone" will win within only a few draws on average.

If you find something specific in my explanation that DOES show my error in conflating an independent versus a dependent event, please point it out specifically. You seem to be telling me that I'm making some error in this without being willing to point to where you believe I am doing so.
"Randomness" is the relative UNPREDICTABILITY of a specific outcome given two or more possibilities. So if I toss a coin, the sincere 'randomness' of the result is perfectly unpredictable UNTIL the toss is completed. This is like Heisenberg's uncertainty principle in the non-Copenhagen interpretation. Our inability to determine the outcome specifically, beyond SAYING we have "1 possibility out of 2 possibilities" for a coin toss, does NOT speak more than this. It does not assert that nature itself assures that, for one independent toss with IDENTICAL factors going in, our world splits into two identical worlds, one with a head and the other a tail. That is, the toss is still 'determined' by nature to be one hundred percent a head and zero percent a tail, OR zero percent a head and one hundred percent a tail.

What the "Copenhagen" interpretation says, though, is that (a) Nature itself is indeterminate with respect to itself even given a fixed set of initial factors, and (b) we CAN actually prove this by using precisely an experiment as laid out by the example of the boxes and applying Bell's theorem. What the scientists involved inappropriately do is think they can opt to choose the 'collective' statistical probability (like 1/2 for tossing coins) and then show that nature contradicts it, without realizing they are using a different KIND of statistic (like the 1-or-0 outcome of a single tossed coin). They then mistake the fact that 1/2 equals neither 0 nor 1 for evidence that nature itself (having only a 0 or 1 outcome in independent events) is 'flawed' when assumed "DETERMINED", and thus that nature itself is indeterminate.

For the Monty Hall problem, in only ONE game the odds are 1/2 to win prior to the final result, NOT 2/3, because nature doesn't CARE whether we originally began with three doors or not.
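The distinction drawn above between a single toss (which comes out definitely heads or tails) and the collective statistic (which tends to 1/2) can be shown directly; the simulation below is my own sketch, not part of the original argument:

```python
import random

random.seed(1)  # fixed seed for a reproducible run
tosses = [random.randrange(2) for _ in range(100_000)]

# Every individual toss has a definite 0-or-1 outcome ...
assert all(t in (0, 1) for t in tosses)

# ... yet the long-run frequency of heads approaches 1/2.
freq = sum(tosses) / len(tosses)
print(f"frequency of heads ≈ {freq:.3f}")
```

Both statements hold at once: no single toss is "1/2 heads", but the collection of tosses is described by the 1/2 statistic.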
Removing one third of the losing possibilities is equal to gaining one third of the non-losing possibility. I just gave the following example elsewhere. Let us say that I charged you $3 per game of the Monty Hall puzzle-version game, and let us assume that the prize is $3. If we played multiple games, you'd win only 1/2 of the time, as you lose all the money involved regardless of whether you interpret the odds to win as 1/2 or 2/3. THIS is the way to treat the reality of such a puzzle, because it 'costs' in reality, whereas assuming it 'free' cheats in at least SOME WAY SOMEWHERE!

[I made some errors in what follows and will redo this example in a follow-up post instead of re-editing what may have already been read.] If you chose to switch all the time, deluded that it matters, then if you played 99 games, your input cost is $297 total. Mine also is $297, as the 'prize' I'm offering you to win. You win 2/3 of the games (66/99) [$198], so those wins cancel out your investment in those games. In fact, you still lose 1/2 of your total investment for playing, at $198/$297. I also lose this amount, simply because we each contributed $3 to each game. I'd lose the $198 invested in putting up the prize, as you would for playing. We might agree to split the remaining loss the game consumes and each have a balance of $0.
Last edited by Scott Mayers; December 7th, 2016 at 03:21 PM.  
December 7th, 2016, 03:53 PM  #28 
Newbie Joined: Mar 2016 From: Saskatoon, Saskatchewan, Canada Posts: 29 Thanks: 1 Math Focus: Logic 
To correct the game analogy using money: assume you play at $1 per game and I offer a $2 prize if you win. If you lose, I retain the prize AND take the extra $1 profit. In one unique game, you either lose $1 OR gain $1. This is what I meant by playing one unique game. Only where we play multiple games do you gain an advantage. For 99 games, you'd win 2/3 of them by switching, for winnings of $132 and a gain of $132 - $99 = $33. This is a gain of one third of what you invested, at my equivalent loss, over the long run, and is why only in repeated games do you have the advantage.

Thus, for the Monty Hall game, playing only one game acts as the independent factor. In Bell's theorem being used with three doors, EACH event is independent with respect to Nature, unlike the single event in the Monty Hall problem. This is because neither the 'host' nor the 'guest', played by Mulder and Scully, knows which matching results they'll get until the results are shown. So neither gets privileged knowledge prior to the other upon opening their doors. So the mathematical expectation MUST be 1/2 and NOT 2/3. Last edited by Scott Mayers; December 7th, 2016 at 04:00 PM. Reason: Determining dollar signs
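The arithmetic of the paid game described in this post can be checked with exact fractions. This only verifies the stated stakes together with the standard 2/3 switching probability; it takes no side in the wider argument:

```python
from fractions import Fraction

# Stakes as described in the post: $1 per game, $2 prize, 99 games,
# switching winning with probability 2/3 (exact arithmetic via Fraction).
games, stake, prize = 99, 1, 2
p_win = Fraction(2, 3)

winnings = games * p_win * prize   # 66 expected wins * $2 = $132
cost = games * stake               # $99 staked in total
profit = winnings - cost           # expected net gain
print(profit)
```

The expected profit comes out to $33, which is one third (not one half) of the $99 staked: $33/$99 = 1/3.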
December 8th, 2016, 04:21 AM  #29  
Senior Member Joined: Apr 2014 From: Glasgow Posts: 1,993 Thanks: 652 Math Focus: Physics, mathematical modelling, numerical and computational solutions
$\displaystyle \Delta x \Delta p \ge \frac{\hbar}{2}$

which states that if you simultaneously measure the position, x, and momentum, p, of a quantum particle (or quantum system), the measurement errors of those two measurements are fundamentally constrained by $\displaystyle \frac{\hbar}{2}$. That is, if you try to measure p to a very high precision with a measuring device (that is, you reduce the measurement error $\displaystyle \Delta p$), then because of the inequality, $\displaystyle \Delta x$ must consequently increase (i.e. become imprecise), regardless of how good your measuring device is. A similar uncertainty principle can also be found for energy and time. The uncertainty principles are derived from the momentum and position operators and a commutativity test.

As I previously stated, one of the biggest misconceptions among students in QM courses is that Heisenberg's uncertainty has something to do with wave collapse. It does not. The state of an unobserved particle is uncertain until it is measured, yes, but this is not Heisenberg's uncertainty principle.
What Bell's theorem does is show that, for a certain situation (entanglement), no local hidden-variable theory can exist which allows for a deterministic evaluation of the outcome of the entangled system.
$\displaystyle E(\text{win}) = 3 \times \frac{2}{3} + 3 \times \frac{1}{3} = 2 + 1 = 3$

This is half the number of trials (6 in this example: 3 games where you swap and 3 where you don't), so if you choose at random whether to swap or not, the probability of winning is E(win)/n = 3/6 = 1/2.

In the Monty Hall problem, there is a choice of whether to swap to the other door or not. If people aren't informed about the probability of winning if they swap to the other door, then they are effectively reducing the expected number of wins from 4 (always swapping) to some other number between 4 and 2. 2 (never swapping) is the worst-case expectation value.
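The expectation-value calculation in this post can be reproduced with exact fractions. The reading that "half the number of trials" refers to 6 trials, 3 swapping and 3 staying, is my interpretation of the post:

```python
from fractions import Fraction

# 6 trials: 3 games swapping (win probability 2/3)
# and 3 games staying (win probability 1/3).
trials = 6
p_swap, p_stay = Fraction(2, 3), Fraction(1, 3)

expected_wins = 3 * p_swap + 3 * p_stay  # = 2 + 1 = 3
win_prob = expected_wins / trials        # probability when the swap/stay
                                         # choice is made at random
print(expected_wins, win_prob)
```

The expected number of wins is 3 out of 6 trials, i.e. a win probability of 1/2 for a player who swaps or stays at random, even though always swapping gives 2/3.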
In quantum mechanics, the explanation is about as dry as it gets: the particles are better described as quantum particles with wave-particle duality, rather than bog-standard particles or bog-standard waves, and therefore the outcome of the result is dictated by quantum mechanical operators, such as the one in the Schrödinger equation or whatever, acting on wave functions, with projections of those continuous observables yielding eigenvalues, which are the set of measurable observables.

Now... try explaining that to a layperson. You have to start talking about paths and integrating over all possible paths and all this stuff... The reality is, the best understanding of quantum mechanics is obtained by just discarding preconceived notions of reality and logic, getting a QM textbook and following it through. Last edited by Benit13; December 8th, 2016 at 04:30 AM.
December 8th, 2016, 01:38 PM  #30 
Newbie Joined: Mar 2016 From: Saskatoon, Saskatchewan, Canada Posts: 29 Thanks: 1 Math Focus: Logic 
Hi Benit, There is a lot to consider above and I'll address these points separately.

(1) I'm hesitant to use the terminology of probability precisely BECAUSE there are differences in interpretation. I can't trust that you'll have the same interpretation as I do even where we use the same terms, which is why I prefer the expansion. I also thank you for doing this, as it helps to see which interpretation you accept. You might think there is only ONE, but this is not the case. And my first understanding, in the context of the above, is THAT you interpret probabilities as a 'practical' measure that we humans use, not as something about nature itself. But, to me, this proves you don't actually understand the controversy that Einstein had with Bohr, and that the "Einstein, Podolsky, Rosen" paper was intentionally directed against the Copenhagen interpretation of QM. The Copenhagen interpretation was that NATURE itself cannot 'determine' reality locally. That is, it treats probability AS a superposition of all possibilities in REALITY until we observe it. This IS the "collapsing" concept being mentioned. It says that our ACT of observing makes a 'really' diverse reality collapse to only ONE unique reality.

So I'm confused at your apparent acceptance of the math involved as being merely practical while also holding that nature itself can HAVE real 'superpositions'. These are contradictory views. One says that the moon exists even if we aren't looking at it; the other says the moon is both there and not there, and only asserts being real WHEN we look at it. Can you clarify how you can treat the Heisenberg uncertainty principle as just about practicality YET turn around and accept the belief that science somewhere has proven that superposition exists? You can't hold these beliefs simultaneously unless you are justly indeterminate (or random) yourself.
(2) You didn't seem to see how the Monty Hall problem relates to the Bell/QM experiments as summarily (and appropriately) presented by Brian Greene's approach of explaining this by example. I'm presently demonstrating HOW these relate elsewhere and can do so here too if you are still not on board with this. Have you changed your mind, or do you need proof of the comparison?

(3) On "randomness" versus "fairness", I'm not sure you understand this as I do. You didn't relate it to the comparison with Heisenberg's uncertainty principle either. You CAN treat the formulation of this in the same way: the more certain you are that something is 'random', the less predictability you can expect by whatever standard of 'weights' you use to assign a probability. For instance, a truly 'random' coin toss is permanently unpredictable and so loses sincere 'fairness', because nature doesn't care to make each possibility equally 'probable', or it would ALWAYS behave like clockwork. Absolute 'fairness' of a coin toss would be perfectly predictable, like clockwork: if "heads" is the initial toss, then each subsequent toss would follow as Tails, Heads, Tails, Heads, Tails, Heads, ... This means that each 'toss' treats each outcome fairly in each trial. It would thus be easy to guarantee (know) the odds as long as you know the initial toss. This is perfectly 'determinate'. So when "fair" as this, if nature held this fairness, you would ALWAYS get exactly ONE Heads of every TWO tosses, and likewise for Tails.

If perfectly unfair, there would be no reason to assert that 1,000,000,000,000 Heads in a row is any less possible for nature to randomly generate than a single Heads. This would be a perfect state of unpredictability, and it is how you can relate this to the uncertainty principle: just replace position and velocity with fairness and randomness. The inequality involved arises when we COMPARE the two measures by a common standard.
For coin tosses, this means that 1 head maps to 2 possible outcomes, a head or a tail. The expected outcome, 1/2, then replaces the Planck constant [whose numerical coefficient, 1/2 × 1/(2π) of h, reduces to 1/(4π)], with the coefficient here being 2, as in [2 (coefficient) × 1/2 (constant of comparison)]. Then this is the same as asserting that the measure of randomness times the measure of fairness is less than or equal to 1. Because nature rarely allows us to observe an exact agreement of 1/2 for coin tosses, the product can only equal 1 exactly in the limiting cases: our actual measures are necessarily less than this exactness unless we KNOW for certain that something is PERFECTLY random OR PERFECTLY fair. All other possibilities must be less than 100%.

Can you now see this? ...or do I need to be even more precise? I'll leave this at this point to see your response.

P.S. Thank you for your participation with me here! If I turn out to be wrong, then I'll be willing to eat my hat. (Okay, I hope it doesn't matter that I don't own a hat!)
