Tuesday, August 14, 2012

Why is the expansion of the universe accelerating?

I started looking around a little bit to figure out what empirical evidence there is that the expansion of the universe is accelerating.

Reference 1

If a plot of distance as a function of recession speed has a shallower slope at large recession speeds, then the universe was expanding more rapidly in the past than it is now. If a plot of distance versus recession speed is steeper at large recession speeds, then the universe was expanding more slowly in the past. (This verbal description, I realize, may be a bit confusing; a look at Figure 28-17 of the textbook should make things clearer.)  From Universe (Sixth Edition) by Roger A. Freedman and William J. Kaufmann III (W. H. Freeman & Co., New York, 2002).

Reference 2
Around minute 30:00 there is a graph showing that the observed feature runs in the direction opposite to what the standard cosmological model expects. Instead of getting a higher redshift as they look out at more distant supernovae, they are getting a lower redshift.

If I am understanding everything correctly, the supernova data turned out opposite to what they should have expected if the Standard Model is correct.  I think, though, that if they really took a look at Milne's Model, they would find a much simpler explanation for the data they have.

Monday, August 13, 2012

Emilio Segre Lecture

Today, I watched a full hour video of Andrew Lange discussing cosmology:  http://www.youtube.com/watch?v=e_4bMIqmV9U

Here is a short discussion:


I was struck by the "radical" idea of Guth, Steinhardt, and Susskind that the universe was once tinier than the nucleus of an atom.  Not because it was too amazing that it should have been that small, but for the opposite reason.  The idea that it was any finite size at all is problematic, because it implies an infinite number of independent, non-causally connected events occurring throughout a finite space.

I think it is a rational expectation that the universe started from a geometric point in space-time.  If you have it start at any finite size, whether a meter across, a micron across, or smaller than the nucleus of an atom, it is all the same, because you would be talking about a bunch of events happening in different spots, spontaneously, all at the same time, without any communication between them.

As far as inflation goes, perhaps Lange would not think it such a "fantastic" idea if he were aware of the simplest implications of the Lorentz Transformation for an event in the immediate past.  The trouble, I think, is that though the mathematics is really obvious, the idea that acceleration can cause a past event to move further into the past is difficult to "believe."  So long as they won't permit Special Relativity in cosmology, of course, they won't be able to see how simple it is.
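The arithmetic behind that claim is short enough to check directly. For an event on the observer's own worldline at time t < 0 (i.e., in the immediate past, at x = 0), a Lorentz boost to velocity v gives t' = γ(t - vx/c²) = γt, which is further in the past for any v ≠ 0. A minimal sketch, in units where c = 1 (the sample velocities are just illustrations):

```python
import math

def boost(t, x, v):
    """Lorentz-boost the event (t, x) into a frame moving at velocity v (c = 1)."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    t_prime = gamma * (t - v * x)
    x_prime = gamma * (x - v * t)
    return t_prime, x_prime

# An event one time unit in the past, at the observer's own location.
t, x = -1.0, 0.0

for v in (0.0, 0.6, 0.9):
    t_prime, _ = boost(t, x, v)
    print(f"v = {v}: t' = {t_prime:.3f}")
# For v = 0.6, gamma = 1.25, so t' = -1.25: after the boost the event
# sits further in the past than it did before.
```

The same calculation applied to larger boosts pushes the event arbitrarily far into the past, since γ grows without bound as v approaches c.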

Thursday, August 2, 2012

Clusters, Galaxies, Supernovae, and Hypernovae

I taught Astronomy for the first time this past summer at Carl Sandburg College.  It was an interesting experience, since I had taught physics quite a bit but had never even taken an astronomy course.  During the few weeks prior to the class I was racing through the textbook as fast as I could.

When I came to the part about globular clusters and open clusters, I started to fit some things together.


The basic idea here is that a similar event might have drastically different results depending on the surrounding density.  In these videos I'm imagining something like a Type II supernova occurring in three different environments--high density, medium density, and low density.  In the high density case, the end result is a full-blown galaxy.  In the medium density case, the end result is a globular cluster.  In the low density case, the result is an open cluster.

We could go lower-density still, to the point where we just get nebulae--gas and dust clouds out of the supernova explosion--or we could go to the other extreme, a hypernova.

In an ordinary Type II supernova, "The enormous inward gravitational pull of matter ensures catastrophe...  Gravity overwhelms the pressure of the hot gas, and the star implodes, falling in on itself.  The core temperature rises to nearly 10 billion kelvin.  At these temperatures individual photons are energetic enough to split iron into lighter nuclei and then break those lighter nuclei apart until only protons and neutrons remain.  This process is known as photodisintegration.  In less than a second, the collapsing core undoes all the effects of nuclear fusion that occurred during the previous 10 million years.  But to split iron and lighter nuclei into smaller pieces requires a lot of energy.  After all, this splitting is the opposite of the fusion reaction that generated the star's energy during earlier times.  The process thus absorbs some of the core's heat energy, reducing the pressure and accelerating the collapse....  There is nothing to prevent the collapse from continuing all the way to the point at which the neutrons themselves come into contact with each other, at the incredible density of 10^15 kg/m^3... by the time the collapse is actually halted, the core overshoots its point of equilibrium and may reach a density as high as 10^15 kg/m^3 before beginning to reexpand.  Like a fast-moving ball hitting a brick wall, the core becomes compressed, stops, then rebounds with a vengeance."  (From Astronomy: A Beginner's Guide to the Universe, Chaisson and McMillan)

Now imagine an environment where the infalling matter is so dense that this photodisintegration/compaction/reexpansion process repeats itself in a region outside the core.

Wednesday, August 1, 2012

AGAINST The Standard Cosmological Model

Video:  Scientific Method  (May 1, 2012)
Some of the text in this video comes from Astronomy: A Beginner's Guide to the Universe, Sixth Edition, by Chaisson and McMillan, pages 18-20.  My impression of the Standard Model of Cosmology is that it does not, in fact, meet the criteria of the Scientific Method.  It has not made predictions that have been repeatedly confirmed.  Instead, it has made no predictions at all.  The basis of the standard model is to have so many predictions out there by so many different people that when something turns out to be right, one of those people will get a prize.  But that's not a prediction.  That's just a lottery.

July 9, 2012.

From:  http://www.ast.cam.ac.uk/~pettini/Physical%20Cosmology/lecture02.pdf

Context:  http://www.physicsforums.com/showpost.php?p=3989700&postcount=47

This is me expressing some dismay over a blunder which I think Einstein himself made, but which it seems no one has ever noticed. I should point out that my initial reaction, "This is a nonsense argument," has become somewhat more nuanced. If General Relativists will stand by their argument and acknowledge that it means there is an observer-dependent force on distant objects, then it is no more unbelievable than the idea that the time at distant locations is observer dependent. The only trouble I have with it now is that they have already rejected the idea that distant time is observer dependent by invoking a cosmological scale factor a(t). They have just replaced one unbelievable statement (relative distant time) with another unbelievable statement (relative distant force).

I don't have any problem with having two possible hypotheses--the trouble I have is that the first hypothesis (observer-dependent time) is rejected and the second (observer-dependent force) is accepted, and nowhere in the literature do you see any acknowledgement that these two ideas are different.

July 10, 2012
Still reading the same document as above.  In this video it is probably not terribly clear what I'm trying to get across; I'm trying to remember what I know about hyperbolic geometry and relativistic velocity addition.  What I want to make clear here is that the Pettini article uses an incorrect form of velocity addition, and anything derived from an incorrect formula is probably also incorrect.
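For reference, the special-relativistic composition of two collinear velocities u and v is w = (u + v)/(1 + uv/c²), not the Galilean w = u + v. A quick sketch of the difference, in units where c = 1 (the sample values are illustrative, not taken from the Pettini notes):

```python
def add_velocities(u, v):
    """Relativistic composition of collinear velocities, in units where c = 1."""
    return (u + v) / (1.0 + u * v)

u, v = 0.5, 0.5
print("Galilean:    ", u + v)                 # 1.0 -- reaches c, unphysical
print("Relativistic:", add_velocities(u, v))  # 0.8 -- always below c
```

However large u and v become (short of c), the relativistic sum stays below c, which is exactly the hyperbolic (rapidity-addition) structure the Galilean formula misses.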

In this video, I'm whining and making excuses for the first 30 seconds or so, but then I give you some good animations of Milne's Model.  This is to show the striking difference between the Kinematic Universe described by A.E. Milne and the Stretchy-Space universe described by Einstein and Eddington.  This video highlights the difference between having distant time be an observer-dependent quantity and having distant force be an observer-dependent quantity.  I have an incomplete comment at the end: "If you do your bookkeeping right, you get the same answer."  What does this mean?  Either the distant forces are NOT in fact observer dependent, and you get the same answer (zero) for the force for all observers, or the force IS observer dependent, in which case all observers find distant objects accelerating toward them.

Here is a record of the Edit War I had on Wikipedia with "scienceapologist."  I eventually gave up the edit war, because I actually have some semblance of a life, and I don't have the kind of time to devote to fighting a fairly sizable group of people who want to present false information on Wikipedia.  I believe they reason that since scientific consensus holds that Milne was wrong, their duty is to present scientific consensus rather than what Milne's model actually was.  In my opinion, they are wrong.  They should allow Milne's actual words, and text and equations from Milne's actual books, to be presented in the article on Milne's model.

Why can't I find any documentation on why people don't believe in using Lorentz Transformations on the large scale?  Twofish Quant says "you have to do some digging."  Really?  Don't you think, if I've been asking this question at least once per year for the last 10 years on internet forums, that SOMEONE would have known by now?  No, the reason I can't find any documentation on it is that it was simply forgotten.  Nobody gave it any real thought outside Milne and Epstein.  In this video I try to explain something that seems to me so obvious it hardly needs explanation, yet I find myself stumbling to explain it.  I will need to redo this sometime in the future.

TwofishQuant claims that the density of the universe was about 100 times the density of water just three minutes after the Big Bang.  I disagree, and believe the mass of the 10 or 20 nearest stars was packed into a radius of about a mile.  This enormous density is greater than that of neutron stars, but collapse would have been prevented by a balance of forces, i.e., the sum of forces in an infinite isotropic distribution is zero.  (That's the central disagreement here, by the way... Einstein and Eddington apparently believed, via this "Birkhoff's Theorem" argument, that the sum of forces in an infinite isotropic distribution was observer dependent.)
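As a rough check of the figure I have in mind (the inputs here are my own illustrative assumptions: about 15 solar masses packed into a sphere of one-mile radius), the implied density does indeed exceed the roughly 10^17-10^18 kg/m^3 typical of neutron stars:

```python
import math

M_SUN = 1.989e30   # kg, one solar mass
MILE = 1609.34     # m, one mile

mass = 15 * M_SUN                            # assumed: ~15 solar masses
radius = MILE                                # assumed: one-mile radius
volume = (4.0 / 3.0) * math.pi * radius**3   # sphere volume
density = mass / volume

print(f"density = {density:.2e} kg/m^3")     # on the order of 1e21 kg/m^3
print(density > 1e18)                        # exceeds neutron-star densities
```

Even swapping in 10 or 20 stars instead of 15 changes the result by less than a factor of two, so the qualitative point stands either way.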

General Relativity experts frequently invoke "the data" (in general) to support their argument that the Standard Model is correct.  It's very strange that most of "the data" they invoke (specifically) runs counter to their expectations.  Here I'm expressing a wish that there were more people analyzing Milne's model besides myself, and that an honest effort could be made to fit the data to Milne's model--as a living, breathing theory--instead of treating Milne's model as a null hypothesis that we want to reject via statistics, because "we" already believe it's nonsense anyway.