Prof. Roger A. Rydin
Prof. Roger A. Rydin Abstracts
Titles
  • A Philosophic Discussion of the Concept of Mass and the Principle of Isolation (2013) [Updated 5 years ago]
  • A New Approach to Finding Magic Numbers for Heavy and Superheavy Elements (2011) [Updated 7 years ago]
    by Roger A. Rydin
  • Finding Magic Numbers for Heavy and Super Heavy Elements (2011) [Updated 7 years ago]
    by Roger A. Rydin
  • New Magic Numbers in the Continent of Isotopes (2011) [Updated 7 years ago]
  • The Theory of Mercury's Anomalous Precession (2011) [Updated 1 year ago]
    by Roger A. Rydin
  • The Mantra of Theoretical Physics: Relativity Reigns (2010) [Updated 1 year ago]
    by Roger A. Rydin
  • Comments on Infinite Energy #87 Letters, on Nuclear Fuel Reprocessing and the Smart Grid using Wind Power (Letter to the Editor) (2010) [Updated 1 year ago]
  • Wave Mechanics without Waves: A New Classical Model for Nuclear Reactions (2009) [Updated 7 years ago]
    by Roger A. Rydin
  • Critique of Electromagnetic Models of the Nucleus, re Older Theory (2009) [Updated 7 years ago]
    by Roger A. Rydin
  • Some of Bob Heaston's Influential Thoughts that Affected Modern Physics (2009) [Updated 1 year ago]
  • Le Verrier's 1859 Paper on Mercury, and Possible Reasons for Mercury's Anomalous Precession (2009) [Updated 1 year ago]
    by Roger A. Rydin
  • Electrodynamic Model of the Nucleus (Letter to the Editor) (2009) [Updated 5 years ago]
    by Charles William Lucas, Roger A. Rydin
  • The Problem with Theoretical Physics (2008) [Updated 1 year ago]
    by Roger A. Rydin
  • A Case Against Tired Light and the Big Bang (2007) [Updated 1 year ago]
  • The Big Bang in Controversy (2007) [Updated 1 year ago]
  • Experimental Evidence that the Density of the Universe Is Not Constant (2006) [Updated 1 year ago]
  • Comments on the Scientific Consensus of Climate Change (2006) [Updated 7 years ago]
    by Roger A. Rydin
  • Using a Chemical Engineering Fortran Code to Develop a PC-Based Simulator for Accelerator-Driven Subcritical Systems (ADSS) (2006) [Updated 7 years ago]
  • New Model for a Spherical 'Big Bang' (2005) [Updated 1 year ago]
  • Applications of the Monte Carlo Adjoint Shielding Methodology (2005) [Updated 7 years ago]
  • Einstein's Greatest Blunder (2004) [Updated 1 year ago]
  • Comments on The Structure of Matter in the Meta Model (2004) [Updated 7 years ago]
  • Cross Correlation of Deep Redshift Galactic Pencil Survey Data (2003) [Updated 7 years ago]
  • A New Nuclear Energy Source for Supernovae, Etc. (2003) [Updated 7 years ago]
  • The Big Wave Model of the Universe (2002) [Updated 7 years ago]
  • Critique of the Meta Model of the Universe (2002) [Updated 1 year ago]

  • Abstract Details
  • A Philosophic Discussion of the Concept of Mass and the Principle of Isolation (2013) [Updated 5 years ago]

    All known bodies have the property of mass: a pound of feathers has the same mass as a pound of iron. But feathers and iron are made of different elements, each of whose mass is less than the sum of the masses of its constituent protons and neutrons, so where did the excess mass go when those elements were formed? If the iron is accelerated to near light speed, its mass seems to increase, so where does this extra mass come from, if the increase happens at all? If the mass is located in a distant, fast-moving galaxy, has it changed? These properties are explained in terms of the electromagnetic nucleus model of Charles Lucas.
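
    The mass deficit mentioned here is easy to quantify. Below is a minimal sketch (using standard tabulated masses, rounded; these are textbook values, not numbers from the paper) computing how much lighter an iron-56 nucleus is than its separated protons and neutrons:

        # Mass defect of Fe-56: the nucleus weighs less than its parts.
        # Standard tabulated values in MeV/c^2 and u (approximate).
        M_PROTON, M_NEUTRON, M_ELECTRON = 938.272, 939.565, 0.511
        U_TO_MEV = 931.494              # 1 atomic mass unit in MeV/c^2

        Z, N = 26, 30                   # protons and neutrons in Fe-56
        ATOMIC_MASS_U = 55.9349         # Fe-56 atomic mass (includes electrons)

        nuclear_mass = ATOMIC_MASS_U * U_TO_MEV - Z * M_ELECTRON
        constituents = Z * M_PROTON + N * M_NEUTRON
        binding = constituents - nuclear_mass   # the "missing" mass, as energy

        print(f"B = {binding:.0f} MeV, B/A = {binding / (Z + N):.2f} MeV")
        # -> B = 492 MeV, B/A = 8.79 MeV: the iron nucleus is about 0.9%
        #    lighter than the sum of its separated nucleons.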


  • A New Approach to Finding Magic Numbers for Heavy and Superheavy Elements (2011) [Updated 7 years ago]
    by Roger A. Rydin

    Annals of Nuclear Energy 38: 238-242 (2011).

    For at least sixty years, scientists have known that certain numbers of protons or neutrons in nuclei form closed shells of some kind, conferring additional stability on nuclei that possess them. Nuclei exhibiting this enhanced stability in both protons and neutrons are called doubly magic. Only recently, Lucas has explained that the magic numbers are really composites of several subshells filling, rather than single shells. In addition, his theory leads to the conclusion that protons and neutrons fill subshells differently. Because protons are charged particles, Coulomb repulsion drives them as far from each other as possible, so they tend to occupy the outer regions of nuclei. Neutrons, being uncharged but possibly polarizable, tend to occupy both outer and inner shells, and may increase their number in an outer shell when the nucleus is heavy, much as electrons fill inner shells in the Lanthanide and Actinide series.

    Using these ideas, and following a simple modification of Lucas' geometrical packing scheme, individual candidates for new magic proton numbers and new magic neutron numbers have been identified. Remarkably, these new magic numbers correspond to a very large extent with the experimentally identified superheavy element distribution, and even correspond to magic numbers suggested by very sophisticated theoretical physics methods and computations. As an added bonus, the newly suggested magic numbers correspond to the long-lived Thorium and Uranium isotopes, and to the Fermium isotopes, which may help explain the shape of the Peninsula of Heavy Isotopes. They also suggest going back to reassess somewhat lighter isotopes to see if some magic effects have been missed.
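
    For contrast, the conventional shell-model picture already treats magic numbers as running totals of filled subshells. The sketch below reproduces the familiar sequence from the standard textbook level ordering, where each (n, l, j) level holds 2j + 1 identical nucleons; this is the orthodox scheme, not Lucas' packing geometry:

        # Conventional shell-model subshells in filling order; "True" marks
        # a large energy gap after the level. Magic numbers are the running
        # totals of occupancies (2j + 1) at those gaps.
        subshells = [
            ("1s1/2", 2, True),  ("1p3/2", 4, False), ("1p1/2", 2, True),
            ("1d5/2", 6, False), ("2s1/2", 2, False), ("1d3/2", 4, True),
            ("1f7/2", 8, True),  ("2p3/2", 4, False), ("1f5/2", 6, False),
            ("2p1/2", 2, False), ("1g9/2", 10, True), ("1g7/2", 8, False),
            ("2d5/2", 6, False), ("2d3/2", 4, False), ("3s1/2", 2, False),
            ("1h11/2", 12, True),
        ]

        total, magic = 0, []
        for name, occupancy, gap_follows in subshells:
            total += occupancy
            if gap_follows:
                magic.append(total)

        print(magic)   # -> [2, 8, 20, 28, 50, 82]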


  • Finding Magic Numbers for Heavy and Super Heavy Elements (2011) [Updated 7 years ago]
    by Roger A. Rydin

    For at least sixty years, scientists have known that certain numbers of protons or neutrons in nuclei form closed shells of some kind, conferring additional stability on nuclei that possess them. Nuclei exhibiting this enhanced stability in both protons and neutrons are called doubly magic. Only recently, Lucas has explained that the magic numbers are really composites of several subshells filling, rather than single shells. Because protons are charged particles, Coulomb repulsion drives them as far from each other as possible, so they tend to occupy the outer regions of nuclei. Neutrons, being uncharged but possibly polarizable, tend to occupy both outer and inner shells, and may increase their number in an outer shell when the nucleus is heavy, much as electrons fill inner shells in the Lanthanide and Actinide series.

    Using a simple modification of Lucas' geometrical packing scheme, individual candidates for new magic proton numbers and new magic neutron numbers have been identified. Remarkably, these new magic numbers correspond to a very large extent with the experimentally identified superheavy element distribution. As an added bonus, the newly suggested magic numbers correspond to the long-lived Thorium and Uranium isotopes, and to the Fermium isotopes, which may help explain the shape of the Peninsula of Heavy Isotopes. They also suggest going back to reassess somewhat lighter isotopes in the Continent. This has been done for several isotopes, and magic effects do indeed appear.


  • New Magic Numbers in the Continent of Isotopes (2011) [Updated 7 years ago]

    Annals of Nuclear Energy

    Recently, Lucas has explained that the magic numbers are really composites of several subshells filling, rather than single shells. In addition, his theory leads to the conclusion that protons and neutrons fill subshells differently, because the protons tend to occupy outer shells first, while neutrons tend to occupy both outer and inner shells. Using a simple modification of Lucas' geometrical packing scheme, new magic proton and neutron numbers were postulated for the superheavy nuclei, which matched the island of stability distribution to a large extent. These magic numbers were also tested against some nuclei in the peninsula of isotopes, and found to be relevant.

    Now, in consideration of a suggestion made many years ago by Linus Pauling, new magic numbers between 50 and 82 are postulated in the continent of isotopes, and compared to experimental data. These new numbers have strong support among the three elements considered by Pauling, namely Cadmium, Tin and Tellurium, and the results suggest that the range of isotopes found, their stability, and their lifetimes are strongly affected by the new magic numbers. In addition, if 58 nucleons are taken to be magic, then the double-hump fission distribution for heavy isotopes is matched better than by using the previous magic number of 50.


  • The Theory of Mercury's Anomalous Precession (2011) [Updated 1 year ago]
    by Roger A. Rydin

    Urbain Le Verrier published a preliminary paper in 1841 on the Theory of Mercury, and a definitive paper in 1859. He discovered a small unexplained shift in the perihelion of Mercury of 39″ per century. The results were corrected in 1895 by Simon Newcomb, who increased the anomalous shift by about 10%. Albert Einstein, at the end of his 1916 paper on General Relativity, gave a specific solution for the perihelion shift which exactly matched the discrepancy. Dating from the 1947 Clemence review paper, that explanation and precise value have remained to the present time, completely accepted by theoretical physicists as absolutely true. Modern numerical fittings of planetary orbits, called Ephemerides, contain linearized General Relativity corrections that cannot be turned off, so it is impossible to check whether discrepancies between observation and computation of the magnitude needed to support the General Relativity estimates still exist.

    The highly technical 1859 Le Verrier paper was written in French. The partial translation given here throws light on Le Verrier's analysis and thought processes, and points out that the masses he used for Earth and Mercury are quite different from present-day values. A 1924 paper by a professor of Celestial Mechanics critiques both the Einstein and the Le Verrier analyses, and a 1993 paper gives a different and better fit to some of Le Verrier's data. Nonetheless, the effect of errors in planet masses seems to give new condition equations that do not change the perihelion discrepancy by a large amount. The question now is whether the excess shift of the perihelion of Mercury is real and has been properly explained in terms of General Relativity, or whether there are other reasons for the observations. There are significant arguments that General Relativity has not been proven experimentally, and that it contains mathematical errors that invalidate its predictions. Vankov has analyzed Einstein's 1915 derivation and concludes that when an inconsistency is corrected, there is no perihelion shift at all!
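
    For reference, the General Relativity value debated here can be reproduced from the standard closed-form advance per orbit, Δφ = 6πGM/[c²a(1−e²)]. A minimal sketch follows, using standard ephemeris constants (the numerical inputs are my assumptions, not values from the paper):

        import math

        # GR perihelion advance per orbit for Mercury.
        GM_SUN = 1.32712440018e20   # m^3/s^2
        C      = 299792458.0        # m/s
        A      = 5.7909e10          # semi-major axis, m
        E      = 0.205630           # eccentricity
        PERIOD = 87.969             # orbital period, days

        dphi = 6 * math.pi * GM_SUN / (C**2 * A * (1 - E**2))  # rad/orbit
        orbits_per_century = 36525.0 / PERIOD
        arcsec = dphi * orbits_per_century * (180 / math.pi) * 3600

        print(f"{arcsec:.1f} arcseconds per century")   # -> about 43.0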


  • The Mantra of Theoretical Physics: Relativity Reigns (2010) [Updated 1 year ago]
    by Roger A. Rydin

    Theoretical physicist Lee Smolin laments in his book that very little progress has taken place in theoretical physics since 1980, primarily because almost all the funding and faculty positions in physics have gone to string theorists, who have produced theories that cannot be experimentally verified. In his opinion, more promising new directions are needed, led by seers and thinkers like Einstein, so that the great mysteries of the universe can be solved. His own special preference is for Quantum Gravity research. His approach is to develop new theories, find experiments to test them, and then show that they produce unexpected predictions that can be verified; experiments come in after the theory has been developed, not before. All acceptable theories must contain the successful aspects of Special Relativity, General Relativity, and field theory à la Maxwell's equations, and those theories should be compatible with the Standard Model of particles. His goal is the unification of as much of physics as possible.

    The approach taken in this paper is to show that all of Smolin's criteria for valid new theories contain flaws, and that these criteria have not been proven true experimentally, as he asserts. Errors have crept in since Einstein's development of Special Relativity and General Relativity that have set theoretical physics back a century. Maxwell's equations should be stated in terms of potentials rather than fields, which dooms the Standard Model. Indeed, we need to go back and find out where we went wrong, and develop a new particle model and a new theory of movement in the universe.
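
    For orientation, the potential formulation alluded to replaces the fields E and B by the scalar potential φ and the vector potential A through the standard textbook relations (quoted for reference, not a result of this paper):

        \mathbf{E} = -\nabla\varphi - \frac{\partial \mathbf{A}}{\partial t}, \qquad \mathbf{B} = \nabla \times \mathbf{A}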


  • Comments on Infinite Energy #87 Letters, on Nuclear Fuel Reprocessing and the Smart Grid using Wind Power (Letter to the Editor) (2010) [Updated 1 year ago]

    I purchased my first issue of Infinite Energy because I noted that several people I have met at NPA meetings were taking part in the discussion. Since you offer a forum for all opinions, I would like to comment on two things I read in this issue.

    First, regarding the letter about Thomson scattering causing redshift, I must strongly disagree with the conclusions of the author, Dean Mamas. It is absolutely clear that Compton scattering cannot do the job, but I argue that Thomson scattering cannot do it either. The deep redshift pencil surveys of galaxies in six directions indicate that the density of galaxies is periodic with a period of about 400 million light years, is damped approximately as 1-over-r-squared from Earth, and is spherically symmetric about an origin in the vicinity of Virgo, about 70 million light years from Earth. Charles Sven has remarked on the spherical symmetry in the direction of Virgo, and I have demonstrated in my last NPA papers that the density drops in a similar manner in the entire Sloan SDSS survey.

    The equations solved by Mamas do not include the n(r) density variation, so the exponential solution is not correct. Tom Van Flandern has concluded that only a uniform density of exotic matter could provide a tired-light redshift. But this argues that exotic matter, if it exists, cannot interact with a non-uniform distribution of real matter if it is to remain uniform; and if it does interact, then it will fail to meet the conditions needed for tired light.
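
    The missing-density objection can be made explicit. Any scattering-based tired-light model obeys an attenuation law of the standard radiative-transfer form (my notation, with σ the scattering cross section; this is not Mamas's equation):

        \frac{dI}{dr} = -\sigma\, n(r)\, I \quad\Longrightarrow\quad I(r) = I_{0}\, \exp\!\left(-\sigma \int_{0}^{r} n(r')\, dr'\right)

    Only for constant n does this collapse to the simple exponential; with the observed 1-over-r-squared falloff, the exponent is no longer linear in distance, so a linear Hubble-type redshift does not follow.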

    Experimentally, the galaxy periodicity is too perfect to be caused by a non-uniform tired-light variation. Periodicity does not mean quantization, which is Halton Arp's explanation. Hence, the only conclusion that can be drawn is that the galaxies are actually moving in space in a radial direction from an origin, and that movement has nothing to do with the Big Bang model; that is, General Relativity has nothing to do with the movement at all! The movement is in space, but space is not uniformly expanding in space, which is consistent with Peter Erickson's argument about absolute space. The data also falsify the idea of a static universe.

    The second paper I would like to comment on is Walter Chase's article on the Dynamics of Black Holes. He assumes that the only things we know about black holes are that they are massive, that nothing gets out, and that the mathematics of General Relativity solely governs what happens inside them. Actually, we know a fair amount about black holes. We have pulsars, neutron stars that are made up entirely of a ball of neutrons at nuclear density. There is some data on quark stars that apparently have measured densities greater than nuclear density, so some of their neutrons have been crushed to their quark constituents. We also know that there is a Chandrasekhar mass limit of a few solar masses beyond which a neutron star must collapse to a black hole if more mass is added. And finally, we know that gravity does get out, so gravity must not be bound by the speed of light, but acts almost instantaneously, as Van Flandern reports. Newton's action-at-a-distance seems to work, even though we still can't explain why.

    At the most recent NPA meeting, Bob Heaston presented an analysis of the history of Einstein's derivation of General Relativity over a ten-year period, concentrating primarily on the physics terms on the right-hand side of the equations. He concluded that Einstein made a conceptual error when he set the speed of light equal to 1.0, which had the effect of creating a singularity in the solutions. Heaston calls the offending term Eta = Gm/rc², which is a measure of how much mass is contained in a ball of a given size. He concludes that Eta ≤ 1.0, where 1.0 is the Heaston Limit at which mass spontaneously converts to energy. Hence, a singularity is not possible, and with that, the Big Bang, inflation, string theory and black hole singularities are physically impossible. Peter Erickson argues forcefully in his book that space-time is also impossible, that time is here this instant and is replaced the next instant, and that motion bends in space while space remains absolute. Hence, Erickson argues that the left-hand-side tensor equations in General Relativity are also wrong.
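
    To give a feel for the scale of this parameter, here is a minimal sketch evaluating Eta = Gm/rc² for a few objects (standard constants with round-number masses and radii, assumed for illustration only):

        # Eta = G*m/(r*c^2): fraction of the "Heaston limit" reached by
        # a mass m contained within radius r. Round-number estimates.
        G = 6.674e-11        # m^3 kg^-1 s^-2
        C = 299792458.0      # m/s

        objects = {
            "Earth":        (5.97e24, 6.371e6),
            "Sun":          (1.989e30, 6.957e8),
            "neutron star": (2.8e30, 1.2e4),   # ~1.4 solar masses, ~12 km
        }

        for name, (m, r) in objects.items():
            eta = G * m / (r * C**2)
            print(f"{name:>12}: Eta = {eta:.2e}")
        # Earth ~ 7e-10, Sun ~ 2e-6, neutron star ~ 0.17: only the
        # densest objects approach the limit.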

    The inescapable conclusion is that the Schwarzschild metric doesn't apply to the black hole problem at all, and thus Chase's mathematical solutions are not meaningful. The black hole is a finite-sized ball of crushed neutrons whose gravitational attraction gathers any mass that enters, and causes light to bend in a circle at the event horizon such that the net effect is total internal reflection. Whether or not we can form equations that govern such things as temperature and pressure in such an environment depends first on knowing what is inside and how that matter interacts with itself. It reminds me of what Dorothy said on arriving in Oz: "this isn't Kansas anymore". We should derive our mathematics from the experimental results, instead of deriving beautiful mathematical relationships and then looking for physical situations to which they apply!


  • Wave Mechanics without Waves: A New Classical Model for Nuclear Reactions (2009) [Updated 7 years ago]
    by Roger A. Rydin

    A comparison is made between the conventional way nuclear reactions were considered to take place circa 1960, and how they might take place using a new Classical electromagnetic model of the nucleus proposed by Charles Lucas. The old analysis used an analytic model of a compound nucleus and the linear Schrödinger wave equation to predict reaction cross sections by treating the wave solutions as quantum mechanical probabilities. Actual experimental data had to be inserted into the equation to get the response for each case.

    The new model considers the perturbed nucleus in terms of mechanical vibrations that lead to unstable states that foster decay to restore stability. The form of the resulting balance equation is similar to the Schrödinger wave equation, so the mechanical model has the potential to produce similar results without using waves.


  • Critique of Electromagnetic Models of the Nucleus, re Older Theory (2009) [Updated 7 years ago]
    by Roger A. Rydin

    Early 1950s models of nuclei considered that they were made up solely of protons and neutrons having approximately equal sizes. A large nucleus was an approximately spherical ensemble made up of these packed entities. Nuclear reactions were treated using Quantum Mechanics, by solving the Schrödinger wave equation for wave amplitudes inside and outside a nucleus and relating these to cross sections.

    Beginning in the 1990s, new Classical Electromagnetic models of nuclei were developed. These are quite different from the theoretical Standard Model, which uses quarks and gluons, etc. Electrons and protons were considered to be spinning charged fibers, and a neutron was considered to be a paired combination of an electron and a proton. With these new nucleon models, nuclei were crudely modeled as concentric shells of these particles placed in a 3D spatial configuration by a geometrical mapping theory called Combinatorial Geometry. This theory was able to correctly predict the "Magic Numbers" as combinations of shells filling and emptying as nucleons were added. An improved Semi-Empirical Binding Energy Formula was developed, which accurately predicts the binding energies of stable and radioactive isotopes, and also correctly predicts their spins. The Electromagnetic model was improved by making detailed spatial and directional force balances using a variational minimization technique, which predicts decay energies for various reactions.
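
    For orientation, the conventional baseline that such an improved formula is measured against is the standard Bethe-Weizsäcker semi-empirical binding energy relation (textbook form and coefficients, quoted here for reference; this is not the improved Lucas formula itself):

        B(A, Z) = a_V A - a_S A^{2/3} - a_C \frac{Z(Z-1)}{A^{1/3}} - a_A \frac{(A-2Z)^2}{A} + \delta(A, Z)

    with typical coefficients a_V ≈ 15.8, a_S ≈ 18.3, a_C ≈ 0.714, a_A ≈ 23.2 MeV, and δ the pairing term.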

    In 2004, a new Electromagnetic model of all types of particles was developed based upon a three-level scheme of wrapping fractionally charged fibers. This new fiber model has yet to be used to redo the nucleus calculations. However, Electromagnetic models have the potential to describe nuclear reactions in terms of unstable vibrations whose equations are analogs to the Schr?dinger equation. The purpose of this paper is to discuss all of these models, and to predict where the research should go next.


  • Some of Bob Heaston's Influential Thoughts that Affected Modern Physics (2009) [Updated 1 year ago]

    Beginning in about 1977, Bob Heaston spent his spare time as a theoretical physicist, trying to make sense of Modern Physics as it pertained to General Relativity and the Standard Model of particles. He took the time to read extensively in the literature, and to try to understand what the famous theoretical physicists were doing with old and new concepts. He was especially interested in the interconnections among the various constants of physics, including critical points, lengths, times and forces. He explored the various fundamental magnitudes in physics, numerical coincidences and bounds. He was particularly interested in the unification of the four forces, which he defined differently from the standard definition as the Heaston Equations.

    Bob made a major discovery by expressing the Heaston Limit, where mass inherently converts to energy. The consequence of the Heaston Limit is that there are no singularities in General Relativity. The result is: no Big Bang; no inflation; no string theory; finite black holes; no Hawking miniature black holes. Bob noted that Planck's Constant did not appear in General Relativity, but had to be included somewhere if the forces were to be unified. He defined a Quantum Force to do this task, but it is not clear what this force does or how it replaces the Weak Force. However, such a quantum force does exist in Cahill's new theory of gravity. Bob also did not understand the Standard Model, and called it a mathematical construct that should be replaced by a better particle model. Such a model may correspond to Lucas' 3-level Charged-Fiber Model of Particles and its particle decay reactions. Based on such Electrodynamic models as developed by Lucas, Bergman, and Boudreaux and Baxter, Rydin has postulated that unstable nuclear vibrations can account for beta and positron decay, alpha decay, fission and nuclear de-excitation. What is still needed is a model for the photon and for the graviton that binds it into mass. Bob Heaston was a driving force in pushing physics research in new directions.


  • Le Verrier's 1859 Paper on Mercury, and Possible Reasons for Mercury's Anomalous Precession (2009) [Updated 1 year ago]
    by Roger A. Rydin

    Urbain Le Verrier became famous by discovering Neptune from its effect on the orbit of Uranus. He turned his attention to Mercury, publishing a preliminary paper in 1841 and a definitive paper in 1859 on the Theory of Mercury. He discovered an unexplained shift in the perihelion of Mercury, which he attributed to the presence of a small unknown planet he called Vulcan, which was never found. The results were corrected in 1895 by Simon Newcomb, who increased the anomalous shift by about 10% but offered no new explanation for it. Albert Einstein, at the end of his 1916 paper on General Relativity, derived a specific solution for the perihelion shift which exactly matched the discrepancy. Dating from the 1947 Clemence review paper, that explanation and precise value have remained to the present time, completely accepted by theoretical physicists as absolutely true.

    The highly technical 1859 Le Verrier paper was written in French, and astronomers and theorists have since moved on from the study of the Solar System to the study of the Universe. The partial translation given here throws light on Le Verrier's analysis and thought processes, and points out that the masses he used for Earth and Mercury are quite different from present-day values, possibly leading to a different fit to the old data. A 1924 paper by a professor of Celestial Mechanics critiques both the Einstein and the Le Verrier analyses, and a 1993 paper gives a different and better fit to some of Le Verrier's data. Nonetheless, the effect of errors in planet masses seems to give new condition equations that do not change the perihelion discrepancy by a large amount. The question now is whether the excess shift of the perihelion of Mercury has been properly explained in terms of General Relativity, or whether there are other reasons for the observations, such as effects from a comet, the asteroid belt, etc.
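
    The "specific solution" referred to above is the standard closed-form advance per orbit from the General Relativity literature (quoted here for reference):

        \Delta\phi = \frac{6\pi G M_{\odot}}{c^{2} a (1 - e^{2})} \approx 5.0\times 10^{-7}\ \text{rad/orbit} \approx 43''\ \text{per century for Mercury}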


  • Electrodynamic Model of the Nucleus (Letter to the Editor) (2009) [Updated 5 years ago]
    by Charles William Lucas, Roger A. Rydin

    Nuclear Science and Engineering, 161: 255-256 (2009). Nuclear engineers are well aware of the importance of the closed nuclear shell "magic numbers" to nuclear engineering. Magic numbers are responsible for double-hump fission curves, for the existence of delayed neutrons, and for xenon poisoning and xenon-induced power oscillations in reactors. Engineers are also aware of the binding energy per nucleon curve, and of the fact that fusion is energetically possible for low-A nuclides while fission and alpha decay are energetically possible for high-A nuclides. All of this information came from experimental data. The magic numbers were inferred by noting discontinuities in nuclear systematics studies [1]. The binding energy data were qualitatively fit by the semiempirical mass equation, which was an attempt to combine the liquid drop model and the quantized nuclear shell model. For more than 40 years, no theory was put forward that could quantitatively explain why all of these ideas worked...
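
    The binding energy per nucleon curve mentioned here is easy to reproduce from the standard Bethe-Weizsäcker formula quoted earlier (textbook coefficients, assumed for illustration; this is the conventional model, not the Lucas one):

        # Binding energy per nucleon from the semi-empirical mass formula.
        aV, aS, aC, aA, aP = 15.8, 18.3, 0.714, 23.2, 12.0   # MeV

        def pairing(A, Z):
            if A % 2 == 1:
                return 0.0                      # odd-A: no pairing term
            sign = 1.0 if Z % 2 == 0 else -1.0  # even-even vs. odd-odd
            return sign * aP / A**0.5

        def binding(A, Z):
            return (aV * A - aS * A**(2/3) - aC * Z * (Z - 1) / A**(1/3)
                    - aA * (A - 2*Z)**2 / A + pairing(A, Z))

        for name, A, Z in [("He-4", 4, 2), ("Fe-56", 56, 26), ("U-238", 238, 92)]:
            print(f"{name:>6}: B/A = {binding(A, Z) / A:.2f} MeV")
        # B/A peaks near iron (~8.8 MeV) and falls for heavy nuclides (~7.6),
        # which is why fusion pays at low A and fission/alpha decay at high A.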


  • The Problem with Theoretical Physics (2008) [Updated 1 year ago]
    by Roger A. Rydin

    The Editor of the Australian magazine Cosmos asked, "Is it time to call a spade a spade, and admit that theoretical physics is heading down the wrong track?" In this particular issue was a feature article discussing inflation and string theory, and others about Stephen Hawking, the Standard Model of particle physics, and dark matter and energy. All pointed to various failures of accepted theory to adequately predict experimental results, even after years of trying. This paper discusses what nuclear engineer Roger Rydin and some of his associates in the Natural Philosophy Alliance (NPA) have to say about critical cosmological data that is being ignored, about obvious errors in relativity theory, and about new particle and gravitational theories that might point the way to new directions in physics. All of these NPA authors point to theoretical failures in the Big Bang model, to unwarranted corrections made to experimental cosmological data to make it fit the Big Bang model, and to conceptual errors in particle physics brought about by including relativistic theories and corrections that do not adequately explain all the experimental data. Many of these new ideas represent simpler theoretical models that explain more experimental data, and hence meet the essence of Occam's Razor as areas worth developing.


  • A Case Against Tired Light and the Big Bang (2007) [Updated 1 year ago]

    "Tired light" has been proposed as a mechanism whereby measured redshifts can be explained for a static universe that is not moving at all. Hubble's law normally states that galactic distance varies linearly with redshift, interpreted as relative velocity in a Doppler sense. A tired photon supposedly loses energy linearly with distance traveled, exactly compensating for Hubble's law. Experimental evidence is presented that the universe is highly heterogeneous and that its density decreases strongly with distance from the Earth. Since highly correlated patterns of periodicity also arise in this environment, the conditions for a linear loss of photon energy with distance are not met. Furthermore, at least for Compton scattering, energy loss can only occur when the photon changes direction. This makes it impossible for a single photon to travel in a net straight line and still lose energy, regardless of the distribution of scattering masses. Hence, "tired light" is a false premise on two counts: the existence of a non-uniform density distribution of matter in the universe, and the lack of a viable physical mechanism to cause energy loss with distance traveled.
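
    The Compton point can be made quantitative with the standard scattering formula (quoted for reference): the wavelength shift vanishes for an undeflected photon,

        \Delta\lambda = \frac{h}{m_{e} c}\,(1 - \cos\theta) = 0 \quad \text{when } \theta = 0,

    so there is no energy loss without a change of direction, which is exactly the dilemma described above.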


  • The Big Bang in Controversy (2007) [Updated 1 year ago]

    Sepp Hasslberger blog, blog.hasslberger.com/2007/03/the_big_bang_in_controversy.html, March 14, 2007, posted October 2009.


  • Experimental Evidence that the Density of the Universe Is Not Constant (2006) [Updated 1 year ago]

    The primary assumption of the Big Bang model is that the universe is homogeneous and isotropic, and hence the density of the universe is constant everywhere. A secondary assumption is that the expansion of the Universe has no fixed center. Both assumptions are contradicted by voluminous experimental evidence. Independent analyses place the center of the universe at about a hundred fifty million light years from the Milky Way. General graphical analysis of the Sloan Survey indicates that the spatial density of galaxies, taken as a surrogate measure of the density of matter in the universe, drops from this center in an approximate 1-over-r-squared pattern. Furthermore, analysis of the Deep Galactic Pencil Surveys indicates that a radial ~450 million light year periodic variation is superimposed on this pattern. The same pattern exists for Quasars and other artifacts.
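
    A minimal sketch of the density pattern described above follows; the functional form and parameter values are illustrative assumptions of mine, not fitted values from the surveys:

        import math

        # Illustrative galaxy-density model: 1/r^2 falloff from a center
        # near Virgo with a superimposed ~450 Mly radial periodicity.
        R0, AMP, PERIOD = 100.0, 0.5, 450.0   # Mly; assumed round numbers

        def density(r):
            """Relative galaxy density at radius r (Mly) from the center."""
            return (R0 / r)**2 * (1.0 + AMP * math.cos(2.0 * math.pi * r / PERIOD))

        for r in (100, 325, 550, 775, 1000):
            print(f"r = {r:4d} Mly: n = {density(r):.3f}")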


  • Comments on the Scientific Consensus of Climate Change (2006) [Updated 7 years ago]
    by Roger A. Rydin

    Just the other day at my granddaughter's birthday party in Chapel Hill, one of the parents of a toddler asked me about Global Warming. He said, "Why can't scientists give us ordinary people a clear answer about climate change? Don't they have the right answer all worked out, so they can show us the proof? You are a scientist, so don't you know something about it?" These are fair questions, which deserve a thoughtful answer.

    I admitted that this was not my particular field of study, but that I didn't think the data was at all conclusive as to whether or not climate change is real. I told him that contemporary scientists have often been wrong about things, for example about the Earth being at the center of the universe, which was subsequently proven untrue. Nonetheless, it got Galileo into a lot of trouble. Even if 90% of scientists are in consensus on a given subject, does that mean they are right, or are the other 10% right, or just a single dissident? After all, science is not a popularity contest, like American Idol. The winner is not the one who gets the most votes, even if the voting is restricted to highly qualified voters.


  • Using a Chemical Engineering Fortran Code to Develop a PC-Based Simulator for Accelerator-Driven Subcritical Systems (ADSS) (2006) [Updated 7 years ago]

    2006 International Conference on Scientific Computing CSC'06: June 26-29, 2006, Las Vegas, USA, 2006.

    There has been substantial interest in accelerator-driven subcritical systems (ADSS) for producing power and for destroying high-level radioactive waste. However, there are questions about the best way to control such a system throughout its fuel cycle. Nuclear reactor technology is now decades old, and the technique of controlling reactors by adjusting reactivity with control rods is understood well enough to assure the safe and reliable operation of reactors in civilian public utilities.

    The dynamics of a neutron-source-driven subcritical reactor are different from, and faster than, those of a critical reactor, and control of the source can also be used. This requires detailed system-level modeling. Noting that, at a top-level view, an ADSS must contain many of the same components and concepts as a chemical process system, and that there is much published software available which models these components and concepts for chemical process systems, it seems reasonable to inquire whether such code might be usefully applied to model ADSS dynamics. This paper presents the results of this inquiry and proposes further study.
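
    The control point can be illustrated with the simplest balance. In one-group point kinetics with the delayed-neutron precursors in equilibrium, a subcritical core driven by an external source S settles at n = S·Λ/(−ρ), so the level tracks both the source strength and the subcriticality. The sketch below uses illustrative parameter values of mine, not numbers from the paper:

        # Equilibrium level of a source-driven subcritical core:
        # dn/dt = (rho - beta)/L * n + lam * c + S,  dc/dt = beta/L * n - lam * c.
        # With precursors in equilibrium (lam * c = beta * n / L) this
        # reduces to rho * n / L + S = 0, i.e. n = S * L / (-rho).
        L = 1e-5        # prompt neutron generation time, s (illustrative)
        S = 1.0e5       # external (accelerator) source strength (illustrative)

        for rho in (-0.05, -0.02, -0.01):
            n = S * L / (-rho)
            print(f"rho = {rho:+.3f}: n = {n:6.1f}")   # level scales as 1/|rho|

        # Doubling S doubles n at fixed rho. The approach to this level occurs
        # on a prompt time scale of order L/|rho - beta| (with beta ~ 0.0065,
        # a few tenths of a millisecond here), which is why the dynamics are
        # faster than those of a critical reactor.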


  • New Model for a Spherical 'Big Bang' (2005) [Updated 1 year ago]

    A new model is postulated for the formation of the Universe, based upon deep-pencil-survey experimental data which indicate that galaxies in the Universe are moving radially in a spherical sense from an origin in the Virgo Cluster, rather than being carried along in space that is expanding uniformly in a centerless sense. The Cosmic Microwave Background is centered in Virgo, the only known blueshifted galaxies are in the direction of Virgo, and a unique starless galaxy of hydrogen has just been discovered there. All these observations are consistent with the new model.


  • Applications of the Monte Carlo Adjoint Shielding Methodology (2005) [Updated 7 years ago]

    American Nuclear Society Topical Meeting on Monte Carlo, MC2005, Chattanooga, TN, April 17-21, 2005, pp. 12/1-12.

    The Monte Carlo Adjoint Shielding (MASH) method has been developed to handle the case of a very complicated radiation shield that is irradiated from a distant source. It is not possible to accurately transport radiation to the vicinity of the shield and then through the shield to a dose point using Monte Carlo alone, even with bootstrap methods. Likewise, it is not possible to use discrete methods for the complete problem because of the shield geometry. The solution is to use a discrete method to transport radiation to the vicinity of the shield and then couple the incident radiation with an adjoint or importance Monte Carlo solution that starts all adjoint particles from the detector position. The methodology has been verified by experiment to give accuracies of 10-20% for shielded vehicles 200-1000 meters from a prompt fission source, and for vehicles situated on a large fallout field. The method has also been applied to a small concrete building and to a covered foxhole. Possible future applications are to large buildings in an urban environment, but more research has to be done to verify that the method is applicable to these problems.
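
    The coupling at the heart of such forward-adjoint methods can be written generically as the standard duality relation (my notation, not the paper's): the dose at the detector is obtained by folding the externally computed forward flux into the adjoint (importance) function over the coupling surface,

        D = \int_{S} \int_{E} \int_{\Omega} \phi(\mathbf{r}, E, \boldsymbol{\Omega})\, \phi^{\dagger}(\mathbf{r}, E, \boldsymbol{\Omega})\, |\boldsymbol{\Omega}\cdot\hat{\mathbf{n}}|\, d\boldsymbol{\Omega}\, dE\, dA,

    where φ comes from the discrete-ordinates calculation of the radiation environment and φ† from the adjoint Monte Carlo run started at the dose point.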


  • Einstein's Greatest Blunder (2004) [Updated 1 year ago]

    The 1915 exposition of Einstein's Theory of General Relativity (GR), plus the 1929 empirical statement of Hubble's Law, were the basis of an assumption that both of these applied to the evolution of the Universe. That idea persists to this day. These equations are so mathematically appealing that they are also assumed to apply to black holes and to inflation and string theory, even though those solutions are not amenable to experimental confirmation. Einstein admitted that the addition of what he called the "Cosmological Constant" to his tensor set of 10 coupled second-order equations, forcing a static balance, was his "Greatest Blunder". Actually, his greatest blunder was allowing the scientific community to believe that his GR equations applied to the evolution of the Universe at all!


  • Comments on The Structure of Matter in the Meta Model (2004) [Updated 7 years ago]

  • Cross Correlation of Deep Redshift Galactic Pencil Survey Data (2003) [Updated 7 years ago]

  • A New Nuclear Energy Source for Supernovae, Etc. (2003) [Updated 7 years ago]

  • The Big Wave Model of the Universe (2002) [Updated 7 years ago]

  • Critique of the Meta Model of the Universe (2002) [Updated 1 year ago]

    In reading Van Flandern's book [1] Dark Matter, Missing Planets & New Comets, which is aptly subtitled "Paradoxes Resolved and Origins Illuminated", it is possible to agree with many of the author's arguments, and yet be unsatisfied with the thrust of many others. It is only fair to examine these points of agreement and disagreement in some depth, if for no other reason than to set a healthy dialog in motion. The present discussion will be limited to the material presented in the first five chapters of the book, which describe the META Model of the universe.

    If the reader expects to find that the META Model is a phenomenological description of how the universe started, how it evolved, and what is going to happen to it, he will be sorely disappointed. As I did, the reader will reach the end of Chapter 5 and its last sentence, "This completes the exposition of the META Model", and not have the slightest idea of how the various arguments presented in those chapters fit together to describe the universe! In fact, he will have to jump to the very end of Chapter 22, to a section denoted "Note added in proof", to find out that the essence of the META Model is that the "universe is infinite in both space and time, and is not expanding at all"! In other words, it has always been the way it is, and it will continue to be that way in the future. Since the META Model says that the universe is constant, the expansion of the universe must apparently be an illusion.

    Just prior to this startling conclusion is the statement, ?If the field of astronomy were not presently over-invested in the expanding universe paradigm, it is clear that modern observations would now compel us to adopt a static universe model as the basis of any sound cosmological theory?. The seven tests that are used in the book only compare Friedmann uniform expansion models to static models, and do not include any other alternatives, so that the comparison is incomplete. What if Einstein?s General Theory of Relativity has nothing at all to do with the evolution of the universe? On the subject of what might cause the redshift if it is not due to the expansion velocity of the universe, the author states that he favors the explanation that the particle or wave serving as the carrier of gravity, dubbed ?gravitons?, would cause an apparent redshift by inelastic scattering interactions with the light passing large distances through the universe. This is hardly either a proof of validity of the META Model, or a ringing endorsement of how it works. It is also at odds with Arp?s [2] explanation of redshift using Narliker?s theory that lets mass grow as a function of time. In the absence of such a META proof, we must examine the individual concepts that make up the META Model.