
Math Software: Mathematica, Matlab, Calculators, Graphing Software
May 29th, 2013, 06:35 PM  #1 
Senior Member | Joined: Jan 2013 | Posts: 209 | Thanks: 3

Brainstorming: Help design a pure math op sys at bit level?
The plan is an absolutely minimalist math software defined at the bit level, statistically converging toward its defined behaviors but always with an exponentially small chance it will do something else. It will be offered to everyone and is capable of becoming the foundation of a complete global infrastructure at the bit level, so there is no need for anyone to fight about it. Nobody owns the most basic truths of math. The number system starts from random bits and proceeds by deriving later operators, and all of it is statistical only, not exactly repeatable except in convergence...

* Hard-coded driver optimization: the ability to peephole-optimize common calculations at runtime, like sine, Cooley-Tukey, or sigmoid, with transparent integration at the bit level.
* Variable-bit-length integer addresses with an integer value at each; only the raw data at this level, lazily evaluated by default.
* Lisp/Scheme/continuation/lambda-style immutable objects and nothing else.
* Harmony Search of any subset of the address space.
* Central Limit Theorem: the fact that if you sum C^2 random numbers which are each +1 or -1, the sum has a standard deviation of C. This will probably affect the addressing. The basic idea is a positional number system whose digits are each sqrt(2) times smaller, instead of 2 times smaller, than the last, though I don't think it's exactly that, in order to hold statistically constant the otherwise-expanding bell curve as each next random number is added (constant standard deviation as more data flows in).
* User interface of 1 pixel per integer address; lay them out on screen any way each user likes, with each user choosing how to define the semantics of the data structures built on top.
* Huffman compression of any subset of the address space.
* Derivation of PI from 2 sets of random bits.
* Derivation of E from sequentially 1 more random bit at a time, or it may need to be the expansion of a square, which would be something like 2*oldSize + 1.
* Derivation of relative PLUS of two's-complement-style binary fractions, or maybe the sqrt(2) kind with 2 times as many digits and many representations of the same scalar.
* Derivation of relative MULTIPLY.
* Derivation of relative EXPONENT.
* Bayes' Rule, as in Bayesian networks.
* Restricted Boltzmann Machine using sigmoid and simulated annealing.
* One recursion unit of the Cooley-Tukey Fast Fourier Transform, to be used in combination with other operators as a replacement for sine and cosine, which are really just two's complement integers wrapped around the 2 sets of random bits that statistically form a circle.
* Neuromodulation as recursion of simulated annealing.
* Topological manifolds using my "amount of dimension" math: "How to calculate number of dimensions using only dot product" viewtopic.php?f=22&t=37721
* Zero-sum universe made of non-unitary (ordinary unbalanced AI) associative-memory objects such as Boltzmann machines, which converge from many states to fewer states, mapping them in any way. Zero sum guarantees convergence toward statistical accuracy overall even if we do not know how to balance many of the parts individually. It takes only 2 random bits on average to return a random-bit observation of any fractional chance, even if the fraction is lazily evaluated as an irrational number like pi/5, since we only need to calculate its binary digits up to the number of random bits consumed before getting the first 1. Consume random bits until you get the first 1, counting how many were consumed, then go directly to that binary digit and return it as the random-bit observation (this was known hundreds of years ago, I hear).
* Binary search of recursive cardinalities, instead of a hard-coded 2 levels of cardinality like IEEE 754 64-bit floating point's 11 exponent bits, 52 digit bits, and 1 sign bit.
The 11 and 52 would be variable-length bit streams, lazily evaluated and usable recursively, similar to how Scheme optimizes tail recursion; on that point I'm still very confused about how to proceed, except it looks very much like there's some simple way to make it all fit together.
* High-dimensional, and variable-dimensional, fractional Fourier transform, which in plain English means: if your ears are a 2D grid of time vs. frequency, you can rotate simultaneous notes between frequency and time, so they're not so much simultaneous but become closer together in frequency. Rotation was derived early in the system.
* Unification of Newcomb's Paradox and the Cooley-Tukey FFT recursion unit in the context of Harmony Search and time-symmetric universal math operators; in other words, Rock Paper Scissors as a measure of general intelligence.
* Root control flow at each decentralized unit of calculation is entropy-based, like Entropica or some uses of a Boltzmann machine.

Of course we won't get to all of these right away. Start small with just the basics of math: the central limit theorem, conditional probability, etc. Add the later operators when needed for practical apps. For now I'd be happy to see plus and multiply derived instead of pushed into IEEE standards of bit layouts, more like how Eurisko calculated everything out to ridiculously impractical levels of abstraction and then asked you for faster code to peephole-optimize it.

Purely minimalist math operating system, a nanokernel really. Hopefully the whole definition of the system will fit on 1 printed page if written in math, though probably much more for the practical code to calculate the math, since there are severe translation difficulties between math and Java/C/Lisp/etc. Eventually I'd like to explore a wireless gray-goo style of cell processor in which each tiny part of a computer has a wireless range of about 1 millimeter.

2 test cases: Rule 110 and Conway's Game of Life. Can it outperform Hashlife?
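The central limit theorem bullet above can be sanity-checked numerically: summing C^2 independent +1/-1 values gives a sum with standard deviation C. A minimal Python sketch (the function name and trial count are my own illustrative choices, not part of the proposed system):

```python
import random

def clt_stddev(c, trials=2000):
    """Empirical standard deviation of the sum of c*c random +1/-1 values.

    Each +1/-1 value has variance 1, so the sum of c**2 of them has
    variance c**2 and standard deviation exactly c.
    """
    sums = [sum(random.choice((-1, 1)) for _ in range(c * c))
            for _ in range(trials)]
    mean = sum(sums) / trials
    variance = sum((s - mean) ** 2 for s in sums) / trials
    return variance ** 0.5

# Converges toward 8 as trials grows.
print(clt_stddev(8))
```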
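One plausible reading of "derivation of PI from 2 sets of random bits" is a Monte Carlo circle test: read each bit set off as a fixed-point fraction in [0, 1) and count how many (x, y) pairs land inside the unit quarter circle. A sketch under that assumption (`samples` and `bits` are illustrative parameters):

```python
import random

def estimate_pi(samples=200_000, bits=16):
    """Monte Carlo estimate of pi from two streams of random bits.

    The fraction of uniformly random points (x, y) in the unit square
    with x*x + y*y < 1 converges to pi/4.
    """
    inside = 0
    for _ in range(samples):
        x = random.getrandbits(bits) / (1 << bits)
        y = random.getrandbits(bits) / (1 << bits)
        if x * x + y * y < 1.0:
            inside += 1
    return 4.0 * inside / samples

# Converges toward 3.14159... as samples grows.
print(estimate_pi())
```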
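The 2-random-bits-on-average trick described in the zero-sum bullet can be written out directly. P(first 1 appears at position k) = 2^-k, so returning the k-th binary digit of p yields a 1 with probability sum over k of p_k * 2^-k = p. The names `bernoulli_from_bits` and `fraction_bit` are hypothetical helpers of mine, not identifiers from the post:

```python
import random

def bernoulli_from_bits(p_bits):
    """Return a single random bit that is 1 with probability p.

    p_bits(k) lazily returns the k-th binary digit of p (k = 1 is the
    digit worth 1/2).  Consume fair random bits until the first 1,
    counting how many were consumed, then return that digit of p.
    Expected number of random bits consumed is sum(k * 2**-k) = 2.
    """
    k = 1
    while random.getrandbits(1) == 0:
        k += 1
    return p_bits(k)

def fraction_bit(num, den):
    """Lazy k-th binary digit of the rational fraction num/den."""
    return lambda k: (num << k) // den % 2

# Estimate P(bit == 1) for p = 3/8 (binary 0.011).
hits = sum(bernoulli_from_bits(fraction_bit(3, 8)) for _ in range(100_000))
print(hits / 100_000)  # converges toward 0.375
```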
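For the Rule 110 test case, here is a minimal, deliberately unoptimized reference step on a periodic ring, useful as a baseline before asking whether a Hashlife-style design outperforms it. The representation (a Python list of 0/1 cells) is my own choice:

```python
def rule110_step(cells):
    """One step of Rule 110 on a list of 0/1 cells with wraparound.

    The new value of each cell is bit (left*4 + center*2 + right) of
    the number 110 (binary 01101110), which encodes the rule table.
    """
    n = len(cells)
    return [(110 >> (cells[(i - 1) % n] * 4 + cells[i] * 2
                     + cells[(i + 1) % n])) & 1
            for i in range(n)]

# A single live cell on a 16-cell ring, evolved a few generations.
row = [0] * 15 + [1]
for _ in range(5):
    row = rule110_step(row)
print(row)
```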
I want to understand the most basic things first. This is just how I see it all fitting together long term. This is one thing, not a collection of features. Multiply is not a separate operator from plus, or from the central limit theorem of how sums of +1 and -1 converge. A bell curve and a wave are not data structures written in later; they are derived from the central limit theorem on sets of random bits.

Is it capable of a tech singularity? Every computer is. Some computers go more directly to the core of math. A computer does only what you program it to. As an operating system, it's a blank slate.

We are missing some core operators of math, which are instead awkwardly built on top of arbitrary data structures like base 10 and a two's complement format of integers whose possible values together do not sum to 0, since there is 1 more negative value than positive. That would be fixed by adding 1/2 to all such values before any further mappings, especially the sqrt(2) style of thing with the central limit theorem doubling the number of digits; I'm not exactly sure of that, but it's something like that.

Anyone want to help brainstorm a minimalist design for such a bit-level statistical operating system (nanokernel)? It's just math. Math has no degrees of freedom; it is the way it is because of the facts of combinations of 1s and 0s. God cannot make 17 not be a prime number. It has no degrees of freedom.
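The two's complement asymmetry mentioned above, and the proposed +1/2 fix, can be checked concretely for 8-bit integers. A plain illustration, not part of any proposed bit layout:

```python
# n-bit two's complement has one more negative value than positive
# (-128..127 for n = 8), so the values do not sum to zero; shifting
# every value by +1/2 restores the symmetry.
n = 8
values = range(-(1 << (n - 1)), 1 << (n - 1))   # -128 .. 127

print(sum(values))                    # the asymmetry: sums to -128
print(sum(v + 0.5 for v in values))   # shifted by 1/2: sums to 0.0
```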

