Astrophysics and Gravitation:
Planck’s Early Results
Planck Collaboration (2011). Planck Early Results: The Planck Mission. arXiv: 1101.2022v1
The Early Results papers from the Planck Collaboration are based on data acquired by the Planck satellite between August 13, 2009 and June 6, 2010. This work is “an overview of the history of Planck in its first year of operations” and was released alongside Planck’s Early Release Compact Source Catalogue, “the first data product based on Planck to be released publicly”. Andrew Jaffe has a great summary of the results so far.
For more, see Planck: First results.
Sukanya Chakrabarti, Frank Bigiel, Philip Chang, & Leo Blitz (2011). Finding Dark Galaxies From Their Tidal Imprints. arXiv: 1101.0815v1
From the abstract:
We describe ongoing work on a new method that allows one to determine the mass and relative position (in galactocentric radius and azimuth) of galactic companions purely from analysis of observed disturbances in gas disks….This approach has broad implications for many areas of astrophysics — for the indirect detection of dark matter (or dark-matter dominated dwarf galaxies), and for galaxy evolution in its use as a decipher for the dynamical impact of satellites on galactic disks. Here, we provide a proof of principle of the method by applying it to infer and quantitatively characterize optically visible galactic companions of local spirals, from the analysis of observed disturbances in outer gas disks.
The tl;dr version is that they have a technique for detecting companion galaxies that need not be optically visible (which is great, because sometimes we can’t see things for reasons other than their being made of dark matter), and this paper acts as a proof of concept by using it to correctly infer and characterize galaxies we can already observe. Does it say anything about having detected a dark matter galaxy? No. If such things existed, the method could be used to detect them (provided they were acting as companions to regular-matter galaxies), but it says nothing about their existence. I genuinely feel I read a different paper than the authors of the two articles below.
Not So Standard Standard Candle
From the NASA press release:
Astronomers have turned up the first direct proof that “standard candles” used to illuminate the size of the universe, termed Cepheids, shrink in mass, making them not quite as standard as once thought. The findings, made with NASA’s Spitzer Space Telescope, will help astronomers make even more precise measurements of the size, age and expansion rate of our universe.
Obviously, this is rather significant, but the immediate consequence of standard candles not being standard isn’t that it will allow for more accurate future measurements; it’s that it calls into question the measurements we already have (of things like galactic distances).
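To see why a miscalibrated candle matters, here’s a minimal sketch (hypothetical numbers and the plain inverse-square law; none of this is from the Spitzer paper): if a Cepheid’s true luminosity differs from the calibrated value by some fraction, every distance inferred from its measured flux is biased by the square root of that factor.

```python
import math

def inferred_distance(flux, assumed_luminosity):
    # Inverse-square law: F = L / (4 pi d^2)  =>  d = sqrt(L / (4 pi F))
    return math.sqrt(assumed_luminosity / (4 * math.pi * flux))

# Hypothetical numbers, purely for illustration.
true_L = 1.0e30   # true luminosity (arbitrary units)
d_true = 100.0    # true distance (arbitrary units)
flux = true_L / (4 * math.pi * d_true**2)  # the flux we would actually measure

# If the calibration overstates L by 10%, every inferred distance
# comes out inflated by sqrt(1.10), i.e. roughly 5%.
d_biased = inferred_distance(flux, 1.10 * true_L)
print(d_biased / d_true)  # sqrt(1.10) ≈ 1.0488
```

The point being: a systematic error in the candle’s assumed brightness doesn’t just blur future measurements, it silently shifts every distance already on the books.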
From the study’s lead author, Massimo Marengo*:
When using Cepheids as standard candles, we must be extra careful because, much like actual candles, they are consumed as they burn.
*He’s also an author on a wonderfully titled paper, Close Binaries with Infrared Excess: Destroyers of Worlds?.
For more, see Cosmology Standard Candle not so Standard After All.
Dynamical Coupled Dark Energy?
Baldi, M., & Pettorino, V. (2011). High-z massive clusters as a test for dynamical coupled dark energy. Monthly Notices of the Royal Astronomical Society: Letters. DOI: 10.1111/j.1745-3933.2010.00975.x
The recent detection by Jee et al. of the massive cluster XMMU J2235.3−2557 at a redshift z ≈ 1.4, with an estimated mass M200 = (6.4 ± 1.2) × 10^14 M⊙, has been claimed to be a possible challenge to the standard ΛCDM cosmological model. More specifically, the probability to detect such a cluster has been estimated to be ∼0.005 if a ΛCDM model with Gaussian initial conditions is assumed, resulting in a 3σ discrepancy from the standard cosmological model. In this Letter we propose to use high-redshift clusters as the one detected in Jee et al. to compare the cosmological constant scenario with interacting dark energy models. We show that coupled dark energy models, where an interaction is present between dark energy and cold dark matter, can significantly enhance the probability to observe very massive clusters at high redshift.
So I actually haven’t read this paper yet, but was told by a cosmologist friend that it might turn out to be significant. I think it also might be too outside my area for me to have a serious comment on, but for the astro/dark/cosmologists out there, it could be worth a read.
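As a rough sanity check on the numbers in the abstract (my own back-of-the-envelope, not from the paper): a detection probability of ~0.005 does correspond to roughly the quoted 3σ under a Gaussian, depending on whether you count one tail or two.

```python
from statistics import NormalDist

p = 0.005  # detection probability quoted in the abstract

# Two-tailed: how many sigma enclose all but p of a standard Gaussian?
sigma_two_tailed = NormalDist().inv_cdf(1 - p / 2)
# One-tailed equivalent, for comparison.
sigma_one_tailed = NormalDist().inv_cdf(1 - p)

print(round(sigma_two_tailed, 2))  # ≈ 2.81, i.e. roughly the quoted 3σ
print(round(sigma_one_tailed, 2))  # ≈ 2.58
```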
High Energy Physics and Particles:
Beyond Feynman Diagrams
Turok, N. (2011). Particle physics: Beyond Feynman’s diagrams. Nature, 469 (7329), 165–166. DOI: 10.1038/469165a
Generations of physicists have spent much of their lives using Richard Feynman’s famous diagrams to calculate how particles interact. New mathematical tools are simplifying the results and suggesting improved underlying principles.
So this was a news piece more than anything else, but I thought Neil raised some interesting questions in it: Feynman diagrams aren’t cutting edge anymore; should we be looking for better approaches in QFT?
The formulation of quantum field theory used in Feynman’s rules emphasizes locality, the principle that particle interactions occur at specific points in space-time; and unitarity, the principle that quantum-mechanical probabilities must sum to unity. However, the price of making these features explicit is that a huge amount of redundancy (technically known as gauge freedom) is introduced at intermediate steps, only to eventually cancel out in the final, physical result.
There are non-Feynman-calculus approaches to some classical (you know what I mean) problems — see: An Operator Product Expansion for Polygonal null Wilson Loops and The All-Loop Integrand For Scattering Amplitudes in Planar N=4 SYM (new formulations of QFT) — but do these methods provide more insight than Feynman diagrams while escaping some of their problems? Neil thinks so (and many others agree).
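The classic illustration of this kind of collapse (the standard textbook example, not one specifically cited in Turok’s piece) is the Parke–Taylor formula: the tree-level maximally-helicity-violating (MHV) amplitude for n gluons, which naively requires summing a rapidly growing number of Feynman diagrams, reduces in spinor-helicity notation to a single term:

```latex
A_n^{\text{MHV}}\left(1^-, 2^-, 3^+, \dots, n^+\right)
  \;\propto\;
  \frac{\langle 1\,2 \rangle^{4}}
       {\langle 1\,2 \rangle \langle 2\,3 \rangle \cdots \langle n\,1 \rangle}
```

Here the ⟨i j⟩ are spinor products; the gauge redundancy described above cancels wholesale, leaving a final answer far simpler than any intermediate step.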
Quantum field theory is the most powerful mathematical formalism known to physics, successfully predicting, for example, the magnetic moment of the electron to one part in a trillion. The recent discovery of mathematical structures that are now seen to control quantum field theory is likely to be of enormous significance, allowing us not only to calculate complex physical processes relevant to real experiments, but also to tackle fundamental questions such as the quantum structure of space-time itself. The fact that the new formulations of the theory jettison much of the traditional language of quantum field theory, and yet are both simpler and more effective, suggests that an improved set of founding principles may also be at hand.
He makes a convincing argument. Maybe someday we won’t even be teaching Feynman diagrams for practical use.