Introduction to the proposed scheme for evaluating physical constants.
Mathematical constants used in derivations.
The mathematical constants to be used are mainly π, the isoperimetric quotient, the number density formula, and a sphere packing density factor. These will be applied in an overall scheme to quantify the physical constants as products of integer powers of 2, 3, and π. The exponents on 2 and 3 are also allowed to take values ending in .5, representing factors of the square root of 2 and the square root of 3 respectively.
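To make the representation concrete, here is a minimal sketch in Python of how a constant could be encoded under this scheme. The function name and the sample exponents are illustrative only, not values asserted in this paper; the assertion enforcing an integer exponent on π reflects the scheme as just stated.

    import math

    def constant_value(a, b, c):
        # Evaluate 2^a * 3^b * pi^c.
        # a and b may end in .5 (a half exponent on 2 or 3 represents
        # sqrt(2) or sqrt(3)); c is kept to integers per the scheme.
        assert float(c).is_integer(), "pi exponent must be an integer"
        return (2 ** a) * (3 ** b) * (math.pi ** c)

    # Illustrative evaluations (not asserted constant values):
    print(constant_value(2, 0, 1))    # 2^2 * pi = 4*pi = 12.566...
    print(constant_value(0.5, 0, 0))  # 2^0.5 = sqrt(2) = 1.414...
    print(constant_value(0, 1.5, 0))  # 3^1.5 = 3*sqrt(3) = 5.196...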
Existing fundamental physical constant values at NIST.
It seems reasonable to question the current physical constant values, as they bear no resemblance to π, with the single exception of μ0, which is 4π. All the others represent either established relationships and/or measured quantities; they are but decimal attempts at precision. Currently, the accepted authority on these values, NIST, admits to having to fudge values a bit, as the values they have do not quite work out when applied to the known relationships, so they use a smoothing technique to minimize their overall errors.
Derivation of the speed of light.
A key element of the derivation is establishing a proper value for the speed of light. The current value, although deemed exact, is better described as agreed upon by mutual decision; it would be difficult to accept that it is exact at 2.99792458 followed by endless zeros. NIST derives its speed of light from this equation:
the speed of light c = √[ 1 / ( μ0 × ξ0 ) ].
I also used this equation to derive the speed of light.
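As a check on the equation itself, the following sketch evaluates it with the pre-2019 CODATA-style values cited in this text (NIST writes the electric constant as ε0; this paper's ξ0 is used here as the variable name). It recovers the accepted 299792458 m/s.

    import math

    mu0 = 4 * math.pi * 1e-7   # magnetic constant, N/A^2 (exact by pre-2019 definition)
    xi0 = 8.854187817e-12      # electric constant, F/m (NIST decimal value)

    c = math.sqrt(1 / (mu0 * xi0))
    print(c)                   # ~299792458.0 m/s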
While NIST shows a value for the permittivity of a vacuum, ξ0, I would suggest their value is merely close to the right one and, more importantly, misses the concept of where this value comes from mathematically. This decimal value not only ignores π, it ignores the underlying themes involved. I would like to address this idea of a MATHEMATICAL under-structure as a theme.
Empty physical space.
Imagine EMPTY space as μ0. Its value of 4π also makes it equal to 720 degrees, since π radians = 180 degrees and therefore 4π radians = 4 × 180 = 720 degrees. Coincidentally, the interior angles of polyhedra such as the tetrahedron, rhombic dodecahedron, pentagonal dodecahedron, and cube, and even the sphere, all share this feature. One might then think of a chunk of empty space when visualizing the μ0 constant.
Occupied physical space.
In contrast stands the permittivity of a vacuum, ξ0. This time imagine SOLID matter. From the perspective of the author, it is logical to equate this with a sphere. The sphere, like all polyhedra, manifests a mathematical value known as the isoperimetric quotient: 36π times the volume squared divided by the surface area cubed. Spheres have a value of exactly 1, and all other polyhedra have values less than 1.
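A short sketch of the quotient as just defined, assuming the standard formula Q = 36π V^2 / A^3; the helper name is my own. It confirms Q = 1 for the sphere and Q = π/6 ≈ 0.5236 for the cube.

    import math

    def isoperimetric_quotient(volume, area):
        # Q = 36*pi * V^2 / A^3; equals 1 for a sphere, < 1 for everything else.
        return 36 * math.pi * volume ** 2 / area ** 3

    r = 1.0
    print(isoperimetric_quotient(4/3 * math.pi * r**3, 4 * math.pi * r**2))  # 1.0 (sphere)
    print(isoperimetric_quotient(1.0, 6.0))                                  # pi/6 ~ 0.5236 (unit cube)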
NIST's value for ξ0 is 8.854187817 (× 10^-12 in SI units).
Whereas 1 / (36π) = 8.84194128288308 (× 10^-3), with significant digits 8.84194128288308.
The difference in significant digits, 8.854187817 − 8.841941283 = 0.012246534, amounts to just 0.0000000000000122465 m-3 kg-1 s4 A2 at NIST's 10^-12 scale. This seems reasonable enough justification to follow up on the ramifications of this tiny adjustment to the constant value for the permittivity of a vacuum, aka the electric constant, aka ξ0.
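The gap just described can be checked directly; this is a plain numerical comparison, with the factor of 10^3 used only to align the significant digits of the two values.

    import math

    nist_digits = 8.854187817         # significant digits of NIST's xi0 (x 10^-12 in SI)
    proposed    = 1 / (36 * math.pi)  # 0.0088419412828..., digits 8.8419412828...

    print(proposed)                       # 0.008841941282883075
    print(nist_digits - proposed * 1e3)   # ~0.012246534, i.e. ~1.2246534e-14 at the 10^-12 scale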
When μ0, valued at 4π, is plugged in with ξ0 as 1 / (36π), then the speed of light squared, per NIST's equation, =
1 / ( μ0 ξ0 ) =
1 / [ 4π × (1/36π) ] = 36π / 4π = 9, so the speed of light = √9 = 3 EXACTLY! Correspondingly, c times μ0 derives the characteristic impedance of a vacuum as 12π, using the NIST equation for it: μ0 times the speed of light.
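A sketch of that substitution, using the proposed values with powers of ten set aside as this scheme does; the mathematics reduces 1/(μ0 ξ0) to 36π/4π = 9 exactly, and floating point agrees up to rounding.

    import math

    mu0 = 4 * math.pi          # proposed magnetic constant
    xi0 = 1 / (36 * math.pi)   # proposed electric constant

    c_squared = 1 / (mu0 * xi0)
    print(c_squared)             # 9.0 (up to float rounding)
    print(math.sqrt(c_squared))  # 3.0 -> the proposed speed of light
    print(mu0 * 3)               # mu0 * c = 12*pi = 37.699...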
Exponential usage
Additionally, it should not seem strange that some kind of low-exponent qualifier in 2, 3, and π might indeed be pertinent, given that the volume of a sphere is (4/3)πr^3. Also, since spheres are known to cluster, there arises the potential for mathematically applying both number density and sphere packing density attributes to those clusters. Just imagine the reverse: if all the above were the established values and I tried to foist the current NIST values as my theory, they would look like decimal values from hell, not directly related to π.
Interestingly, since the constants are manifested as powers of the factors 2, 3, and π, we can multiply by adding exponents and divide by subtracting them.
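A minimal sketch of that exponent arithmetic, holding each constant as a triple (a, b, c) meaning 2^a * 3^b * pi^c; the function names are my own. Note that 36π = 2^2 · 3^2 · π, so the proposed ξ0 corresponds to the triple (-2, -2, -1).

    import math

    def multiply(x, y):
        # Multiply two constants by adding exponent triples componentwise.
        return tuple(i + j for i, j in zip(x, y))

    def divide(x, y):
        # Divide two constants by subtracting exponent triples componentwise.
        return tuple(i - j for i, j in zip(x, y))

    def value(t):
        a, b, c = t
        return (2 ** a) * (3 ** b) * (math.pi ** c)

    mu0 = (2, 0, 1)      # 4*pi
    xi0 = (-2, -2, -1)   # 1/(36*pi)

    c_squared = divide((0, 0, 0), multiply(mu0, xi0))  # 1 / (mu0 * xi0)
    print(c_squared, value(c_squared))                 # (0, 2, 0) -> 9.0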
Comparing results from cross-multiplication.
The methodology employed cross-multiplying both mathematical and physical constants and placing the results in decimal sequence, normalizing all results to be at least 1.0 and less than 10. This is a procedure for comparing significant digits: equality between two values means that the significant digits are the same, though their powers of 10 may not be. For example, the comparison scheme first converts ALL values to lie within 1 to less than 10, so 0.12345, 1.2345, 12.345, and 123450000000000 are each compared as 1.2345.
I must also note the special scheme used to compare significant digits between different constants. Most importantly, the units were simply ignored; however, and this is quite important, they could just as well have been carried along as data. Then all values are ranged within 1 ≤ x < 10. The final step is to ignore the 10^n factor, noting that it too can be carried along as data. Now we can compare apples to apples: we wind up with values having equal significant digits or not. Note too that when the significant digits do not match, equality CAN be ruled out.
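A sketch of this normalization and comparison, under the assumption that "equal significant digits" means equality after scaling into [1, 10) and rounding to a chosen number of places; the helper names and the rounding depth are my own choices.

    import math

    def normalize(x):
        # Scale |x| into [1, 10), leaving only the significant digits.
        x = abs(x)
        return x / (10 ** math.floor(math.log10(x)))

    def same_digits(x, y, places=6):
        # Compare two values by significant digits, ignoring units and powers of ten.
        return round(normalize(x), places) == round(normalize(y), places)

    print(normalize(0.12345))            # 1.2345
    print(normalize(123450000000000))    # 1.2345
    print(same_digits(0.12345, 12.345))  # True
    print(same_digits(0.12345, 12.346))  # False -> equality ruled out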
Variables used for selection of individual constant values.
Interestingly, my criteria for choosing specific 2, 3, π selections were based upon several factors: the exponents had to be low, the candidates had to be close in decimal value, and there had to be events where two constants multiplied together equaled the selected value. After many years of plugging in perhaps thousands of constants (mostly mathematical),
patterns and relationships appeared. It is only recently that I allowed my "decimal closeness" criterion to range far enough from the existing value to find what I believe to be the proper value for Avogadro's number, AN. Given their value of 6.02214199, I never looked lower than 6.0 for any candidates. All those I tried seemed "unconnected" to things, judging by the equations that, when multiplied together, gave a "candidate" value. Long story shortened: I was given reason to believe that this value might be somewhat lower, and when I tried 5.9 to 6.0, things connected this time. Funny how 6.02214199 − 5.89462752182205 is only 0.12751447, but it seems like more because it stretches between integers.
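To illustrate the kind of search this describes (and only to illustrate; this sketch does not claim to reproduce how the AN value was actually arrived at), here is a brute-force scan for low-exponent 2^a · 3^b · π^c combinations near a target, with half-integer steps on 2 and 3 and integer powers of π:

    import math
    from itertools import product

    def search(target, max_exp=6, tolerance=0.01):
        # Scan 2^a * 3^b * pi^c for values within `tolerance` (relative) of target.
        halves = [k / 2 for k in range(-2 * max_exp, 2 * max_exp + 1)]
        ints = range(-max_exp, max_exp + 1)
        hits = []
        for a, b, c in product(halves, halves, ints):
            v = (2 ** a) * (3 ** b) * (math.pi ** c)
            if abs(v - target) / target < tolerance:
                hits.append(((a, b, c), v))
        return hits

    # Candidates near the proposed Avogadro digits:
    for exps, v in search(5.89462752182205)[:5]:
        print(exps, v)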
Using the above values for the constants results in many thousands of equations, each EXACTLY equivalent decimally, to endless decimals. NIST has fewer than 10, and only because they have been "forced". Any values derived from the "exact" 2.99792458000000000000000... will be invalid. Measure light all you want, but its value relates to the permittivity of a vacuum, ξ0, which relates to the isoperimetric value of a sphere and a factor of, MATHEMATICALLY and EXACTLY, 36π... and this "forces" light to be exactly 3.
Of particular interest is that Planck's constant times c^2 times 2π yields the first radiation constant c1 as 3/8, versus 3.74177107 for NIST. Also of some interest: their Josephson constant times their von Klitzing constant equals 4π − 0.017477696, whereas in this scheme that result is precisely 4π.