My Math Forum  

Physics Forum
December 4th, 2016, 12:39 AM   #21
Newbie
 
Joined: Mar 2016
From: Saskatoon, Saskatchewan, Canada

Posts: 29
Thanks: 1

Math Focus: Logic
I must make sure you understand that I'm arguing that Bell's inequality is an 'error' when USED as a means to prove or disprove QM, not that it lacks mathematical validity in itself.

What needs to be compared is how the two similar problems of the Monty Hall problem and Bell's Inequality are being used. They are identical in form, but Bell's theorem is being used to treat the 2/3 solution as the default correct one.

What is false in using real experiments for the Monty Hall problem is that NATURE itself is NOT 'fair' in treating possibilities as evenly distributed. As such, there are two kinds of probabilities, of which only the 1/2 is correct if asking this about nature.

Ironically, the identical form is being used by QM experiments that default to TRUSTING that the LOGIC of the Monty Hall problem IS correct via nature; then, when NATURE demonstrates the 1/2 solution is correct, they treat NATURE itself as at fault and NOT the faulty use of the logic. It is like using a calculator set in Base-2 that reads "10" as some result to mean Base-10's "10".

In the 'experiments' used to prove the validity of the Monty Hall problem, the experimenters cheat by using what they falsely believe is sincerely 'random', when all computers necessarily draw random numbers using a logical assumption based on consistent 'fairness'. You can't have both.

This is also its own kind of "Uncertainty Principle": if you try to make a selection of odds 'fair', it loses being sincerely 'random'; if you try to make it 'random', it can no longer BE 'fair'.

Do you follow?

[P.S. I haven't used probability nomenclature and using P(x) form conflicts with how I use this normally for predicate logic and functions elsewhere. While I will eventually use it, if you can follow using simple fractions, can you do so? It'll also make it harder for those others who lack the literal Statistical symbols to follow. Thanks.]
Scott Mayers is offline  
 
December 5th, 2016, 02:02 AM   #22
Senior Member
 
Joined: Apr 2014
From: Glasgow

Posts: 2,068
Thanks: 692

Math Focus: Physics, mathematical modelling, numerical and computational solutions
Quote:
Originally Posted by Scott Mayers View Post
I must make sure you understand that I'm arguing that the Bell's inequality is an 'error' when USED as a means to prove or disprove QM, not that it lacks mathematical validity in itself.
Sure...

Quote:
What need to be compared is how the two similar problems of the Monty Hall problem and the Bell's Inequality is being used. They are identical in form but Bell's theorem is being used to treat the 2/3 solution as the defaulted correct one.
No... Bell's theorem has nothing to do with the Monty Hall problem. Furthermore, I'm not even sure you've identified a problem with the Monty Hall problem either.

Quote:
What is false in using real experiments for the Monty Hall problem is that NATURE is itself NOT 'fair' to treat possibilities evenly distributed. As such, there are the two kinds of probabilities of which only the 1/2 is correct if asking this about nature.
I'm not aware of an assumption that the probability density functions in Bell's theorem need to be evenly distributed.

I wouldn't call P(A) and P(A|B) two different 'kinds' of probabilities, they just describe how the probabilities of a result change depending on some prior knowledge.

Quote:
Ironically, the identical form is being used by QM experiments that default to TRUSTING that the LOGIC of the Monty Hall problem IS correct via nature and then when NATURE demonstrates the 1/2 solution is correct, it treats NATURE itself at fault and NOT the faulty use of the logic. It is like using a calculator set in Base-2 that reads "10" as some result to mean Base-10's "10".

In the 'experiments' used to prove the validity of Monty Hall's problem, it cheats by using what they falsely believe is sincerely 'random' when all computers necessarily draw random numbers using a logical assumption based on consistent 'fairness'. You can't have both.
The above makes no sense at all. What are you trying to say?

The Monty Hall problem was a quiz-question where the problem was set up on purpose to have evenly distributed probabilities, leading nevertheless to a rather counter-intuitive result, because people forget that the probability of choosing the winning door given that a false door has been presented is not equal to the probability of choosing the winning door on its own.

If you roll an unfair, weighted die and get a result, that process is still random and you can still use bog-standard statistics to determine useful things, like expectation values and standard deviations and all that good stuff.
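For instance, a quick calculation with a hypothetical weighted die (the weights below are invented purely for illustration):

```python
# Expectation and standard deviation of an unfair (weighted) die.
# The weights here are made up for illustration; they just need to sum to 1.
faces = [1, 2, 3, 4, 5, 6]
probs = [0.1, 0.1, 0.1, 0.1, 0.1, 0.5]  # heavily biased toward rolling a 6

mean = sum(f * p for f, p in zip(faces, probs))
variance = sum(p * (f - mean) ** 2 for f, p in zip(faces, probs))
std = variance ** 0.5

print(mean)  # expectation value E[X] = 4.5 for these weights
print(std)   # standard deviation, ~1.80
```

The process is still perfectly random (six possible outcomes per trial), just not fair, and the usual statistical machinery applies unchanged.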

Quantum mechanics makes use of discrete probabilities and probability density functions, which are continuous distributions. Rarely are those distributions flat, but they are still random systems.

The uncertainty principle has nothing to do with probability; this is one of the most common misconceptions picked up by students studying quantum mechanics. The uncertainty principle is more closely tied to measurements of conjugate observables and the ability to obtain those observables to a certain precision. It has little to do with Bell's theorem or entanglement.

Also... computers are the worst example of randomness, because the way they derive random numbers is with a deterministic pseudo-random number generator, typically seeded from something like the system clock.
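That determinism is easy to demonstrate: seed the generator with the same value and you get back the identical "random" sequence (a minimal sketch using Python's standard library):

```python
import random

# A pseudo-random generator is fully deterministic: the same seed
# always reproduces the same "random" sequence of die rolls.
random.seed(42)
first_run = [random.randint(1, 6) for _ in range(5)]

random.seed(42)
second_run = [random.randint(1, 6) for _ in range(5)]

print(first_run == second_run)  # True: nothing truly random here
```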

Quote:
This is also its own kind of "Uncertainty Principle": If you try to make a selection of odds 'fair', it loses itself being sincerely 'random'; If you try to make it 'random', it can no longer BE 'fair'.
No, this is not true. 'Unfair' systems are still random systems. The uncertainty principle is associated with the precision of measured observables and is something different entirely.
Thanks from topsquark

Last edited by Benit13; December 5th, 2016 at 02:06 AM.
Benit13 is offline  
December 6th, 2016, 12:28 AM   #23
Newbie
 
Joined: Mar 2016
From: Saskatoon, Saskatchewan, Canada

Posts: 29
Thanks: 1

Math Focus: Logic
Quote:
Originally Posted by Benit13 View Post
No... Bell's theorem has nothing to do with the Monty Hall problem. Furthermore, I'm not even sure you've identified a problem with the Monty Hall problem either.
It has everything to do with this. And I'm betting that given the times, Bell just as likely borrowed this thought from Martin Gardner's original prison version.

For the Monty Hall problem, the error is assuming that the probability, IF only ONE game is played, is 2/3. It reduces to only 1/2, as correctly interpreted by most people hearing this puzzle, because the way DeVos originally told the story lacks clarity about how often one CAN play.

The distinction in people's minds is due to the INDEPENDENCE of the game.

I just gave this example on another site: a lottery that has normal independent odds of winning of 1 in 14 million treats each individual's purchase independently. HOWEVER, the odds of ANYONE winning are anywhere from 1/3 to 1/10 depending on how many tickets are sold. As such, the 'increase' here is due to the fact that we remove the independent nature of a 'win', because we are not concerning ourselves with ONE independent ticket purchase.
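To put rough numbers on this (a sketch; the ticket counts below are purely illustrative):

```python
# Probability that at least one of n independently chosen tickets wins,
# given per-ticket jackpot odds p. Ticket counts are illustrative only.
p = 1 / 14_000_000  # approximate 6/49 jackpot odds

def p_anyone_wins(n_tickets):
    """P(at least one winner) = 1 - P(nobody wins)."""
    return 1 - (1 - p) ** n_tickets

print(p_anyone_wins(1))           # a single ticket: ~7.1e-8, unchanged
print(p_anyone_wins(5_000_000))   # ~0.30 if five million tickets are sold
```

Each individual ticket keeps its 1-in-14-million odds, but the chance that *someone* wins climbs steadily with the number of tickets in play.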


Quote:
I'm not aware of an assumption that the probability density functions in Bell's theorem need to be evenly distributed.

I wouldn't call P(A) and P(A|B) two different 'kinds' of probabilities, they just describe how the probabilities of a result change depending on some prior knowledge.
You are going beyond even what is necessary to the logic going into this. I'm not sure why you are insisting on imposing more than is necessary here.

If you do not know the difference between 'fairness' and 'randomness', I find this lack of understanding highly odd and suspect.

Nature treats "independent" events as distinct from a collection of them. You do NOT have a better chance at winning a lottery by simply buying more tickets over time in independent games. Your chances DO improve when you buy more tickets in one game, though.

So in the Monty Hall game, treating an independent single game as though all possibilities sincerely exist requires you PROVE this true by PRESENTING all parallel worlds that this game is simultaneously being played.

When you use multiple games to justify the odds of 2/3, this only occurs IN EXPERIMENT because it ASSURES the 'fairness' (the equal distribution) among all possibilities. But then the 2/3 result by this method is flawed if it assumes this is 'random'.

This directly relates to the Bell-EPR experiments because they FALSELY treat the collection of events as valid when NATURE itself, unlike human mathematicians, is defaulted to be trusted. That is why you get the 1/2 ACTUAL results in this. YET, the QM-experimenters turn this around: they are treating the mathematics involved as the inerrant factor by DEMANDING that Nature must abide to showing a 2/3 result. Because nature does not, instead of assuming the mathematics being used is illegitimate, they assert that the experiment PROVES that nature itself is 'flawed' (weird).

Superposition is a fraud. And since I already understand that the experiment SHOULD be 1/2, AND Nature DOES prove this as 1/2, why would you think that I, and not you, am making the mistake?

Regardless of my validity here, I think that the QM experiments are about politics, just as with a lot of the crap going on in science inappropriately using math and logic.
Scott Mayers is offline  
December 6th, 2016, 12:34 AM   #24
Newbie
 
Joined: Mar 2016
From: Saskatoon, Saskatchewan, Canada

Posts: 29
Thanks: 1

Math Focus: Logic
Quote:
Originally Posted by Benit13 View Post
No, this is not true. 'Unfair' systems are still random systems. The uncertainty principle is associated with the precision of measured observables and is something different entirely.
This is precisely HOW you SHOULD interpret this.

A perfectly 'random' system is necessarily 'unfair'!! I said that you CAN'T have a system where you CAN be 'random' AND 'fair' at the same time. The more random, the less 'fair'; the more 'fair', the less random.

"Fairness" in this context is like having inside knowledge of a trade deal that gives you a more 'fair' chance to win in the Stock Market. But then it is not 'random' if you have such privileged information and why it is illegal.
Scott Mayers is offline  
December 6th, 2016, 02:32 AM   #25
Senior Member
 
Joined: Apr 2014
From: Glasgow

Posts: 2,068
Thanks: 692

Math Focus: Physics, mathematical modelling, numerical and computational solutions
Quote:
Originally Posted by Scott Mayers View Post
It has everything to do with this. And I'm betting that given the times, Bell just as likely borrowed this thought from Martin Gardner's original prison version.
Well, it looks like we have reached an impasse. Good luck to you.

Quote:
You are going beyond even what is necessary to the logic going into this. I'm not sure why you are insisting on imposing more than is necessary here?
Well, if you keep insisting that Bell's theorem is wrong because $\displaystyle P(A|B) \neq P(A)$ in a standard statistics problem (which is daft), I will keep insisting you are wrong with reasons why.

Quote:
If you are not knowing the difference between 'fairness' and 'randomness', I find this lack of understanding highly odd and suspect.
Random things are things that have a set of possible outcomes with each trial rather than a single outcome. That's it. Fairness is associated with each outcome having the same probability given a trial. In continuous probability distributions, this is a flat distribution. In discrete probabilities, they all just have the same value.

Some people like to make "randomness" a continuous parameter that varies between 0 and 1, where 1 is a flat distribution and 0 is a Kronecker delta, but that's a detail.

Quote:
Nature treats "independent" events distinct from a collection of them. You do NOT have a better chance at winning a lottery by simply buying more tickets in time from independent games. Your chances DO improve where you buy more tickets in one game though.
Agreed... increasing the number of trials makes you more likely to obtain a single successful trial.

Quote:
So in the Monty Hall game, treating an independent single game as though all possibilities sincerely exist requires you PROVE this true by PRESENTING all parallel worlds that this game is simultaneously being played.
That's a weird way of stating "you need to know the set of all the possible outcomes", but okay... I don't see an issue with that.

Quote:
When you use multiple games to justify the odds of 2/3, this only occurs IN EXPERIMENT because it ASSURES the 'fairness' (the equal distribution) among all possibilities. But then the 2/3 result by this method is flawed if it assumes this is 'random'.
That last sentence is not true. It can be shown theoretically that the probability of getting the winning door given that a losing door has been shown to you is 2/3 if you choose the other door. The fact that it is experimentally viable is just reassuring. The Monty Hall problem is random by design because for a given trial (opening a door) there is a set of possible outcomes (win or lose).
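The 2/3 figure is also straightforward to check numerically; a quick simulation of the game (seed and trial count chosen arbitrarily):

```python
import random

def play_monty_hall(switch, rng):
    """Simulate one Monty Hall game; return True if the player wins."""
    doors = [0, 1, 2]
    prize = rng.choice(doors)
    pick = rng.choice(doors)
    # The host opens a losing door that the player did not pick.
    opened = rng.choice([d for d in doors if d != pick and d != prize])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == prize

rng = random.Random(0)
trials = 100_000
switch_wins = sum(play_monty_hall(True, rng) for _ in range(trials))
stay_wins = sum(play_monty_hall(False, rng) for _ in range(trials))

print(switch_wins / trials)  # ~2/3 when always switching
print(stay_wins / trials)    # ~1/3 when always sticking
```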

Quote:
This directly relates to the Bell-EPR experiments because they FALSELY treat the collection of events as valid when NATURE itself, unlike human mathematicians, is defaulted to be trusted.

That is why you get the 1/2 ACTUAL results in this. YET, the QM-experimenters turn this around: they are treating the mathematics involved as the inerrant factor by DEMANDING that Nature must abide to showing a 2/3 result. Because nature does not, instead of assuming the mathematics being used is illegitimate, they assert that the experiment PROVES that nature itself is 'flawed' (weird).
No, experimenters don't demand anything, they just measure the number of entangled particle pairs that pass through with a given spin angular momentum and polarisation. Comparing the results to Bell's inequality shows a violation.

I don't mind someone trying to point out flaws in existing theories, but it might help if you actually look at Bell's theorem rather than working with analogies.

Quote:
Superposition is a fraud.
Well, classical wave experiments show that superposition occurs and are easily performed at home with equipment you can buy from a shop. Young's double slit shows that you can achieve similar outcomes with photons, electrons, atoms and even some molecules (buckyballs spring to mind).

Quote:
And as I already understand the experiment SHOULD be 1/2 AND Nature DOES prove this as 1/2, how would you think that I and not you are making the mistake?

I'm guessing that regardless of my validity here, I think that the QM experiments are about politics just as with a lot of the crap going on in science inappropriately using math and logic.
Don't throw your toys out of the pram because someone disagrees with you on the internet.

Last edited by Benit13; December 6th, 2016 at 02:43 AM.
Benit13 is offline  
December 6th, 2016, 02:36 AM   #26
Senior Member
 
Joined: Apr 2014
From: Glasgow

Posts: 2,068
Thanks: 692

Math Focus: Physics, mathematical modelling, numerical and computational solutions
Quote:
Originally Posted by Scott Mayers View Post
A perfectly 'random' system is necessarily 'unfair'!! I said that you CAN'T have a system where you CAN be 'random' AND 'fair' at the same time. The more random, the less 'fair'; the more 'fair', the less random.

"Fairness" in this context is like having inside knowledge of a trade deal that gives you a more 'fair' chance to win in the Stock Market. But then it is not 'random' if you have such privileged information and why it is illegal.
This is not true. A die is random because it has 6 possible outcomes rather than 1. That die is 'fair' if the probability of each outcome is the same (1/6).
Benit13 is offline  
December 7th, 2016, 01:37 PM   #27
Newbie
 
Joined: Mar 2016
From: Saskatoon, Saskatchewan, Canada

Posts: 29
Thanks: 1

Math Focus: Logic
Quote:
Originally Posted by Benit13 View Post
Quote:
Originally Posted by Scott Mayers
I'm not sure why you are insisting on imposing more than is necessary here?
Well, if you keep insisting that Bell's theorem is wrong because $\displaystyle P(A|B) \neq P(A)$ in a standard statistics problem (which is daft), I will keep insisting you are wrong with reasons why.
I tried to correct your misinterpretation: it is not THAT Bell's Theorem is the issue, but rather that its USE by scientists in modern experiments, to supposedly prove THAT the EPR argument is wrong and that quantum mechanics' Copenhagen interpretation is correct, is what is 'wrong'. I could have used a better title.

I still am not familiar enough with Statistics 'formally' to make sense of your interpretation without a digression into that. As such, if you want to discuss the distinction of dependence versus independence of factors involved, I prefer you either not use some predetermined concepts of an area I am not familiar with or RECONSTRUCT your own understanding relative to me here.

The "independence" I am referring to means that when events are repeated, EACH local probability cannot be extended to multiple events. For example, if you buy a 6/49 lottery ticket, the odds to win are approximately 1/(14 Million) so, if you buy one ticket to one draw, you do NOT increase your odds by being sure to play in every draw distinctly. The odds reset at each game.

BUT, if you ask the odds of ANYONE winning, the 'odds' of this DO increase. Without concerning the math, we know that "at least someone" will win within only a few draws on average.

If you find something specific about my explanation that DOES show my own error in assimilating an independent versus a dependent event, please point this out specifically. You seem to be telling me that I'm making some error in this without being willing to point out where you believe I am doing this.

Quote:
Random things are things that have a set of possible of outcomes with each trial rather than single outcome. That's it. Fairness is associated with each outcome having the same probability given a trial. In continuous probability distributions, this is a flat distribution. In discrete probabilities, they all just have the same value.

Some people like to make "randomness" a continuous parameter that varies between 0 and 1 where 1 is a flat distribution and 0 is Kronecker delta, but that's a detail.
Without knowing the specific background on the math history, I can't determine what you are or are not understanding of me here.

"Randomness" is the relative UNPREDICTABILITY of a specific outcome given two or more possibilities. So if I tossed a coin, the sincere 'randomness' of the result is perfectly unpredictable UNTIL the toss is completed. This is like the Heisenberg's uncertainty principle in the NON-Copenhagen interpretation. The nature of our lack of ability to determine the outcome specifically other than to SAY we have "1 possibility out of 2 possibilities" for a coin toss does NOT speak more than this. It doesn't assert that nature itself assures that of one independent toss that our worlds split identically in two worlds with one having a head and the other a tails when we have IDENTICAL factors going in. That is, it is still 'determined' by nature to be one hundred percent a head and zero of tails OR zero percent a head and one hundred percent a tail.

What the "Copenhagen" interpretation says though is that (a) Nature itself is indeterminate with respect to itself even in a contingent set of initial factors, and (b) that we CAN actually prove this by using precisely an experiment as laid out by the example of the boxes and applying Bell's Theorem.

What the scientists involved inappropriately do is think that they can opt to choose the 'collective' statistical probability (like 1/2 for tossing coins) and then show by nature that it contradicts this, not realizing they are using a different KIND of statistic (like the 1 or 0 outcomes of tossed coins). They then mistake the fact that, since 1/2 equals neither 0 nor 1 (for example using coins), nature itself (having only a 0 or 1 outcome in independent events) is 'flawed' to be assumed "DETERMINED". And thus, nature itself is indeterminate.

For the Monty Hall problem, in only ONE game, the odds are 1/2 to win prior to the final result and NOT 2/3 because nature doesn't CARE whether we originally began with three doors or not. Removing one third of the losing possibilities is equal to gaining one third of the non-losing possibility.

I just gave the following example elsewhere:

Let us say that I charged you \$3/game of the Monty Hall puzzle-version game. Then let us assume that the prize is \$3. If we played multiple games you'd win only 1/2 of the time as you lose in all the money involved regardless of how you interpret the odds as either 1/2 or 2/3 to win. THIS is the way to treat the reality of such a puzzle because it 'costs' in reality where assuming it 'free' cheats in at least SOME WAY SOMEWHERE!

[made some errors in what follows and will redo this example in a follow up post instead of re-editing what may have been already read.]
If you chose to switch all the time deluded that it matters, then if you played 99 games, your input cost is \$297 total. Mine also is \$297 as the 'prize' I'm offering you to win. You win 2/3 of the games (66/99) [\$198] and so those wins cancel out your investment in for those games. In fact, you still lose 1/2 of your total investment for playing at \$198/\$297. I also lose this amount which is simply because we each contributed \$3 to each game. I'd lose \$198 invested in for putting the expense into the prize as you would for playing. We might agree to split the remaining loss the game consumes and each have a balance of $0.

Quote:
Well, classical wave experiments show that superposition occurs and are easily performed at home with equipment you can buy from a shop. Young's double slit shows that you can achieve similar outcomes with photons, electrons, atoms and even some molecules (buckyballs springs to mind).
This still ignores that the very interpretation of the OBSERVATION of the interference pattern is itself subject to indeterminacy. In other words, you can't treat specific observations that you like as being privileged to being correct but others not when subject to the same identical flaws. If we trust the pattern as 'proof' of a superposition of all possible paths, then you are begging that that observation is NOT subject to the Heisenberg's uncertainty principle as an "observation" in and of itself!!

Last edited by Scott Mayers; December 7th, 2016 at 02:21 PM.
Scott Mayers is offline  
December 7th, 2016, 02:53 PM   #28
Newbie
 
Joined: Mar 2016
From: Saskatoon, Saskatchewan, Canada

Posts: 29
Thanks: 1

Math Focus: Logic
To correct the game analogy using money,

Assume you play at \$1/game and I offer a \$2 prize if you win. If you lose, I get to retain the prize back AND the extra \$1 profit.

In one game uniquely, you either lose \$1 OR gain \$1. This is what I meant by playing one unique game.

Only where we play multiple games do you gain an advantage. For 99 games, you'd win 2/3 of them by switching, for winnings of \$132 and a net gain of \$132 - \$99 = \$33. This is a gain of 1/3 of what you invested, at my corresponding 1/3 loss in the long run, and why only in repeated games do you have the advantage. Thus, for the Monty Hall game, playing only one game acts as the independent factor.
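The arithmetic above can be laid out explicitly (using the 2/3 switching figure assumed for the repeated games):

```python
# Expected outcome of the hypothetical pay-to-play Monty Hall game
# described above: $1 per game, $2 prize, always switching.
# Uses the standard 2/3 win probability for the switching strategy.
games = 99
cost_per_game = 1
prize = 2
p_win_switch = 2 / 3

expected_wins = p_win_switch * games          # 66 of 99 games
winnings = expected_wins * prize              # $132 paid out
net_gain = winnings - games * cost_per_game   # $33 profit over 99 games

print(expected_wins, winnings, net_gain)
```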



In Bell's Theorem being used with three doors, EACH event is independent with respect to Nature, unlike the single event in the Monty Hall problem. This is because neither the 'host' nor the 'guest', played by Mulder and Scully, knows which results they'll get that match until the results are shown. So neither gets privileged knowledge prior to the other upon opening their doors. So that MUST be 1/2 and NOT 2/3 as a mathematical expectation.

Last edited by Scott Mayers; December 7th, 2016 at 03:00 PM. Reason: Determining dollar signs
Scott Mayers is offline  
December 8th, 2016, 03:21 AM   #29
Senior Member
 
Joined: Apr 2014
From: Glasgow

Posts: 2,068
Thanks: 692

Math Focus: Physics, mathematical modelling, numerical and computational solutions
Quote:
Originally Posted by Scott Mayers View Post
I tried to correct your misinterpretation THAT Bell's Theorem is not the issue but rather that its USE by scientists in modern experiments to supposedly prove THAT the EPR argument is wrong and that quantum mechanic's Copenhagen interpretation is correct, is what is 'wrong'. I could have used a better title.

I still am not familiar enough with Statistics 'formally' to make sense of your interpretation without a digression into that. As such, if you want to discuss the distinction of dependence versus independence of factors involved, I prefer you either not use some predetermined concepts of an area I am not familiar with or RECONSTRUCT your own understanding relative to me here.
Okay... but the statistics I am stating on these forums is fairly basic statistics that is found in any school textbook, so if you are finding it difficult to understand my terminology, I can point you towards some statistics resources.

Quote:
The "independence" I am referring to means that when events are repeated, EACH local probability cannot be extended to multiple events. For example, if you buy a 6/49 lottery ticket, the odds to win are approximately 1/(14 Million) so, if you buy one ticket to one draw, you do NOT increase your odds by being sure to play in every draw distinctly. The odds reset at each game.
Sure, the probability of winning the jackpot doesn't change with the number of trials.

Quote:
BUT, if you ask the odds of ANYONE winning, the 'odds' of this DO increase. Without concerning the math, we know that "at least someone" will win within only a few draws on average.
Agreed, the likelihood of a single win increases if the number of trials increases.

Quote:
If you find something specific about my explanation that DOES show my own error in assimilating an independent versus and independent event, please point this out specifically. You seem to be telling me that I'm making some error in this without being willing to point where you believe I am doing this.
I've already stated in previous posts what your errors are. If you are not sure, re-read the previous posts.

Quote:
Without knowing the specific background on the math history, I can't determine what you are or are not understanding of me here.

"Randomness" is the relative UNPREDICTABILITY of a specific outcome given two or more possibilities.

So if I tossed a coin, the sincere 'randomness' of the result is perfectly unpredictable UNTIL the toss is completed. This is like the Heisenberg's uncertainty principle in the NON-Copenhagen interpretation.
No, that is not Heisenberg's uncertainty principle at all. Heisenberg's uncertainty principle is

$\displaystyle \Delta x \Delta p \ge \frac{\hbar}{2}$

which states that if you simultaneously measure the position, x, and momentum, p, of a quantum particle (or quantum system), the product of the uncertainties of those two measurements is fundamentally bounded below by $\displaystyle \frac{\hbar}{2}$. That is, if you try to measure p to a very high precision with a measuring device (that is, you reduce the measurement error $\displaystyle \Delta p$), then because of the inequality, $\displaystyle \Delta x$ must consequently increase (i.e. become imprecise), regardless of how good your measuring device is. A similar uncertainty principle can also be found for energy and time. The uncertainty principles are derived based on the momentum and position operators and a commutativity test.
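For completeness, the $\displaystyle \frac{\hbar}{2}$ bound follows from the general Robertson uncertainty relation applied to the position-momentum commutator:

$\displaystyle \Delta A \, \Delta B \ge \frac{1}{2}\left|\left\langle [\hat{A}, \hat{B}] \right\rangle\right|, \qquad [\hat{x}, \hat{p}] = i\hbar \quad\Rightarrow\quad \Delta x \, \Delta p \ge \frac{\hbar}{2}$

so the bound is a statement about operators and their commutator, not about wave collapse or unpredictability of outcomes.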

As I previously stated, one of the biggest misconceptions by students in QM courses is that Heisenberg's uncertainty has something to do with wave collapse. It does not. The state of an unobserved particle is uncertain until it is measured, yes, but this is not Heisenberg's uncertainty principle.

Quote:
The nature of our lack of ability to determine the outcome specifically other than to SAY we have "1 possibility out of 2 possibilities" for a coin toss does NOT speak more than this. It doesn't assert that nature itself assures that of one independent toss that our worlds split identically in two worlds with one having a head and the other a tails when we have IDENTICAL factors going in. That is, it is still 'determined' by nature to be one hundred percent a head and zero of tails OR zero percent a head and one hundred percent a tail.
It's fine here until that last sentence, which makes no sense. Both interpretations try to explain what might be going on with an unobserved quantum system and what wave collapse means to that system. Unobserved quantum systems are inherently random and not deterministic, regardless of which interpretation you are using. That is because there is no set of mathematics which currently exists that allows one to determine with absolute certainty whether an outcome is going to be one result or another. That is, there is never a situation in quantum mechanics where there is only a single outcome and the probability of that outcome is 1. If some sort of system seems to start behaving like that, it isn't a random process and therefore not a quantum system.

What Bell's theorem does is show that for a certain situation (entanglement) no theory can exist which allows for a deterministic evaluation of the outcome of an entangled system.

Quote:
What the "Copenhagen" interpretation says though is that (a) Nature itself is indeterminate with respect to itself even in a contingent set of initial factors, and (b) that we CAN actually prove this by using precisely an experiment as laid out by the example of the boxes and applying Bell's Theorem.
Point a) is fine, but point b) is wrong. Bell's theorem doesn't prove that quantum systems are indeterminate, it specifies a constraint by which a particular quantum system, involving two particles, can or can't be treated in the same way as individual particles with regular run-of-the-mill statistics. It is as much a statement about how compound states are treated in quantum systems as it is about wave collapse.

Quote:
What the scientists involved inappropriately do is to think that they can opt to choose the 'collective' statistic probability (like 1/2 for tossing coins) and then show by nature that it contradicts this by not realizing they are using a different KIND of statistic (like the 1 or 0 outcomes of tossed coins).
The latter part makes no sense. Is it random or not? If it is random, you can use statistics (in some form). If it is also a quantum system then Bell's theorem is relevant. If the system is not random, then you can't use statistics at all (not even regular statistics) and Bell's theorem is irrelevant. Also... what makes the choice of probabilities in Bell's experiment 'collective'? The theory involves probability density functions for the angular momentum, some of which are based on individual angular momentum states and one which treats the photons as a compound system.

Quote:
Then they mistaken the fact that since 1/2 doesn't equal 0 nor 1, for example using coins, that nature itself (having only a 0 or 1 outcome in independent events) makes nature 'flawed' to be assumed "DETERMINED". And thus, nature itself is indeterminate.
All quantum systems are indeterminate by definition. If it is a deterministic system, it is not a quantum system. It makes no sense to claim that scientists are incorrect in assuming their systems are random when the whole point of studying them is because they are random and therefore rather difficult to understand.

Quote:
For the Monty Hall problem, in only ONE game, the odds are 1/2 to win prior to the final result and NOT 2/3 because nature doesn't CARE whether we originally began with three doors or not. Removing one third of the losing possibilities is equal to gaining one third of the non-losing possibility.
This is just dead wrong. The probability of a result given some other result (written P(A|B)) is often not the same as the unconditional probability of that result, P(A), on its own. This is elementary statistics, and the Monty Hall problem is only one example; it is easy to construct other scenarios where conditioning changes the probability.
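To make that concrete, here is a short Python sketch (the events A and B are my own illustrative example, not from this thread) that enumerates two fair dice and shows P(A|B) ≠ P(A):

```python
from fractions import Fraction

# Enumerate all 36 equally likely outcomes of two fair dice.
outcomes = [(d1, d2) for d1 in range(1, 7) for d2 in range(1, 7)]

A = [o for o in outcomes if o[0] + o[1] == 8]   # event A: the sum is 8
B = [o for o in outcomes if o[0] == 6]          # event B: first die shows 6
A_and_B = [o for o in A if o in B]              # both events occur

P_A = Fraction(len(A), len(outcomes))           # unconditional: 5/36
P_A_given_B = Fraction(len(A_and_B), len(B))    # conditional: 1/6

print(P_A, P_A_given_B)  # 5/36 vs 1/6 — conditioning changed the probability
```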

Quote:
I just gave the following example elsewhere:

Let us say that I charged you \$3/game of the Monty Hall puzzle-version game. Then let us assume that the prize is \$3.
If these are the conditions, I would never play your game, because the winnings are the same as the charge; it would be a waste of time unless you made the game fun in some way.

Quote:
If we played multiple games you'd win only 1/2 of the time as you lose in all the money involved regardless of how you interpret the odds as either 1/2 or 2/3 to win. THIS is the way to treat the reality of such a puzzle because it 'costs' in reality where assuming it 'free' cheats in at least SOME WAY SOMEWHERE!
No, the probability of winning in each trial depends on that final decision of whether to stick with the door originally chosen or swap to the other. If I undergo the Monty Hall problem and decide to stick with the door I originally chose, the probability of me winning is 1/3 (P(win|stick)). If I change to the other door instead, my chances of winning are 2/3 (P(win|swap)). The expected number of wins can then be calculated based on the number of trials and the number of times I pick one strategy or the other. For example, let's say I play the Monty Hall problem 6 times and I stick with the door I originally chose every time. The expected number of wins would be 2. If I swap every time, the expected number of wins increases to 4. If I choose to stick 3 times and choose to swap 3 times, then the expected number of wins is

$\displaystyle E(win) = 3 \times \frac{2}{3} + 3 \times \frac{1}{3} = 2 + 1 = 3$

This is half the number of trials, so if you choose at random whether to swap or not, the probability of winning is E(win)/n = 3/6 = 1/2.

In the Monty Hall problem, there is a choice of whether to swap to the other door or not. If people aren't informed about the probability of winning when they swap, they are effectively reducing the expected number of wins from 4 to some number between 4 and 2, where 2 is the worst-case expectation value.
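The expectation values above are easy to check empirically with a short Monte Carlo sketch (illustrative code of my own, not from the thread):

```python
import random

def play(swap, rng):
    """One round of Monty Hall; returns True if the player wins."""
    doors = [0, 1, 2]
    prize = rng.choice(doors)
    pick = rng.choice(doors)
    # Host opens a losing door that is neither the pick nor the prize.
    opened = rng.choice([d for d in doors if d != pick and d != prize])
    if swap:
        # Switch to the only remaining closed door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == prize

rng = random.Random(0)
n = 100_000
stick_wins = sum(play(False, rng) for _ in range(n))
swap_wins = sum(play(True, rng) for _ in range(n))

print(stick_wins / n)  # ≈ 1/3
print(swap_wins / n)   # ≈ 2/3
```

Mixing the two strategies evenly lands the win rate at the average of the two, 1/2, exactly as calculated above.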

Quote:
[made some errors in what follows and will redo this example in a follow up post instead of re-editing what may have been already read.]
No worries

Quote:
This still ignores that the very interpretation of the OBSERVATION of the interference pattern is itself subject to indeterminacy. In other words, you can't treat specific observations that you like as being privileged to being correct but others not when subject to the same identical flaws.
Agreed... Bell's theorem does not say anything about whether one should accept one interpretation over the other.

Quote:
If we trust the pattern as 'proof' of a superposition of all possible paths, then you are begging that that observation is NOT subject to the Heisenberg's uncertainty principle as an "observation" in and of itself!!
This makes no sense. Superposition happens in experiments with individual particles (like electrons) whether we like it or not; the challenge is to try and explain what's happening.

In quantum mechanics, the explanation is about as dry as it gets: the particles are better described as quantum particles with wave-particle duality than as bog-standard particles or bog-standard waves. The outcome of an experiment is therefore dictated by quantum-mechanical operators (the Schrodinger equation or whatever) acting on wave functions, with projections of those continuous observables yielding eigenvalues, which form the set of measurable results. Now... try explaining that to a lay-person. You have to start talking about paths, and integrating over all possible paths, and all this stuff... The reality is, the best understanding of quantum mechanics is obtained by discarding preconceived notions of reality and logic, getting a QM textbook, and following it through.

Last edited by Benit13; December 8th, 2016 at 03:30 AM.
Benit13 is offline  
December 8th, 2016, 12:38 PM   #30
Newbie
 
Joined: Mar 2016
From: Saskatoon, Saskatchewan, Canada

Posts: 29
Thanks: 1

Math Focus: Logic
Hi Benit,

There is a lot to consider above and I'll redress these distinctly.

(1) I'm hesitant to use the terminology of probability precisely BECAUSE there are differences in interpretation. I can't trust that you'll have the same interpretation as I do even where we use the same notation, which is why I prefer the expansion. I also thank you for doing this, as it helps to see which interpretation you accept. You might think there is only ONE, but this is not the case. My first understanding from your reply above is THAT you interpret the probabilities as a 'practical' measure that we humans use, not something about nature itself. But, to me, this proves you don't actually understand the controversy that Einstein had with Bohr, and what the "Einstein, Podolsky, Rosen" paper was intentionally arguing against in the Copenhagen interpretation of QM.

The Copenhagen interpretation was that NATURE itself cannot 'determine' reality locally. That is, it treats probability AS a superposition of all possibilities in REALITY until we observe it. This IS the "collapsing" concept being mentioned. It says that our ACT of observing makes a 'real' diverse reality collapse to only ONE unique reality. So I'm confused at your apparent acceptance of the math involved as merely practical while you also hold that nature itself can still HAVE real 'superpositions'. These are contradictory views. One says that the moon exists even if we aren't looking at it; the other says the moon is both there and not there and only becomes real WHEN we look at it.

Can you clarify how you appear to treat the Heisenberg Uncertainty principle as just about practicality YET then turn around and accept some belief that science somewhere has proven that superposition exists? You can't hold these beliefs simultaneously unless you are justly indeterminate (or random) yourself.

(2) You didn't seem to see how the Monty Hall problem relates to the Bell/QM experiments as summarily (and appropriately) presented by Brian Greene's approach to explain this by example. I'm presently demonstrating HOW these relate elsewhere and can do so here too if you are still not on board with this. Have you changed your mind or do you need proof of comparison?

(3) On "Randomness" versus "Fairness", I'm not sure if you understand this as I do. You didn't relate it to the comparison with Heisenberg's Uncertainty principle either. You CAN treat the formulation of this in the same way: the more certain you are that something is 'random', the less predictability you can expect from whatever standard of 'weights' you assign to the probabilities. For instance, a truly 'random' coin toss is permanently unpredictable and so loses sincere 'fairness', because nature doesn't care to make each possibility equally 'probable', or it would ALWAYS behave like clockwork.

Absolute 'fairness' of a coin toss would be perfectly predictable like clockwork as:

If "heads" is initial, then we'd have each toss as follows,

Tails
Heads
Tails
Heads
Tails
Heads
...

This means that each 'toss' treats each outcome fairly in each trial. It would thus be easy to guarantee the odds as long as you know the initial toss. This is perfectly 'determinate'. So if nature held to "fairness" of this kind, you would ALWAYS get exactly ONE Heads in every TWO tosses: 1/2 Heads and 1/2 Tails.
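Whether or not one accepts this framing, the contrast between a perfectly alternating sequence and genuinely random tosses can be sketched in a few lines of Python (illustrative code of my own, not from the discussion):

```python
import random

# Deterministic alternation ("perfectly fair" in the sense above):
# 0 = Heads, 1 = Tails, strictly alternating.
alternating = [i % 2 for i in range(1000)]
print(sum(alternating) / len(alternating))  # exactly 0.5 over any even run

# Genuinely random tosses: the fraction of Heads only *tends toward* 1/2
# as the number of tosses grows; it is almost never exactly 1/2.
rng = random.Random(42)
tosses = [rng.randint(0, 1) for _ in range(1000)]
print(sum(tosses) / len(tosses))  # close to 0.5, but not guaranteed exact
```

The alternating sequence is perfectly predictable from its first entry; the random one is not, which is the trade-off being claimed.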

If perfectly unfair, there would be no reason to assert that 1000000000000 Heads in a row is as equally possible for nature to randomly generate as only 1 uniquely being Heads. This would be a perfect state of unpredictability, and it is how you can relate this to the Uncertainty principle: just replace position and velocity with fairness and randomness. The inequality arises when we COMPARE the two measures by a common standard. For coin tosses, this is that 1 head maps to 2 possible outcomes, a head or a tail. The expected outcome, 1/2, then plays the role of the Planck constant. The numerical coefficient [1/2 × 1/(2π), as h is reduced to h/(4π)] is here replaced by 2, as in [2 (coefficient) × 1/2 (constant of comparison) = 1]. Then this is the same as asserting that the measure of randomness × the measure of fairness is less than or equal to 1.

Because nature rarely allows us to observe an exact ratio of 1/2 for coin tosses, the more 'fair' something is, the less 'random' it is when the product exactly equals 1. Our actual measures are necessarily less than this exactness unless we KNOW for certain that something is PERFECTLY Random OR PERFECTLY Fair. All other possibilities must be less than 100%.

Can you now see this? ...or do I need to be even more precise?


I'll leave this at this point to see your response.

P.S. Thank you for your participation with me here! If I turn out to be wrong, then I'll be willing to eat my hat. (Okay, I hope it doesn't matter that I don't own a hat! )
Scott Mayers is offline  