God created man so that man would create what God fails to do



Tuesday, 24 September 2013

Dmitri Ivanenko and Lev Landau - two archival photos




Dmitri Ivanenko and Lev Landau, two Soviet physicists of genius.






D. Ivanenko and L. Landau (1927)





D. Ivanenko, L. Landau and Jennie Kannegisser (the future Lady Peierls)




Tuesday, 10 September 2013

Who's who among universities in 2013


The new world university ranking, QS Top University Rankings 2013/2014, has been published; it covers 800 universities.

The top ten positions are occupied by six universities from the USA and four from the United Kingdom.

In the top twenty: 11 from the USA, 6 from the United Kingdom, 2 from Switzerland, 1 from Canada.

In the first 50 universities: 19 from the USA; 8 from the United Kingdom; 5 from China (including 3 from Hong Kong); 4 from Australia; 3 from Canada; 2 each from Switzerland, France, Japan and Singapore; 1 each from Germany, the Netherlands and Korea.

See # for the ranking of Russian universities.


Wednesday, 21 August 2013

My Lectures on Supergeometry



G. Sardanashvily, Lectures on supergeometry, arXiv: 0910.0092

Elements of supergeometry are an ingredient of many contemporary classical and quantum field models involving odd fields; this is the case, for instance, in SUSY field theory, BRST theory, and supergravity. Addressed to theoreticians, these Lectures aim to summarize the relevant material on the supergeometry of modules over graded commutative rings, graded manifolds, and supermanifolds.

Contents

1. Graded tensor calculus, 2. Graded differential calculus and connections, 3. Geometry of graded manifolds, 4. Superfunctions, 5. Supermanifolds, 6. DeWitt supermanifolds, 7. Supervector bundles, 8. Superconnections, 9. Principal superconnections, 10. Supermetric, 11. Graded principal bundles.

Introduction

Supergeometry is phrased in terms of Z_2-graded modules and sheaves over Z_2-graded commutative algebras. Their algebraic properties naturally generalize those of modules and sheaves over commutative algebras, but supergeometry is not a particular case of noncommutative geometry because of a different definition of graded derivations.

In these Lectures, we address supergeometry of modules over graded commutative rings (Lecture 2), graded manifolds (Lectures 3 and 11) and supermanifolds.

It should be emphasized from the beginning that graded manifolds are not supermanifolds, though every graded manifold determines a DeWitt H∞-supermanifold, and vice versa (see Theorem 6.2 below). Both graded manifolds and supermanifolds are phrased in terms of sheaves of graded commutative algebras. However, graded manifolds are characterized by sheaves on smooth manifolds, while supermanifolds are constructed by gluing sheaves of supervector spaces. Note that there are different types of supermanifolds; these are H∞-, G∞-, GH∞-, G-, and DeWitt supermanifolds. For instance, supervector bundles are defined in the category of G-supermanifolds.
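The sign rule behind that last distinction can be stated in one line. A graded derivation Δ of Grassmann parity [Δ] of a graded commutative algebra A obeys the graded Leibniz rule (the standard definition, recalled here for orientation):

```latex
\Delta(ab) \;=\; \Delta(a)\,b \;+\; (-1)^{[\Delta][a]}\, a\,\Delta(b),
\qquad a,b \in \mathcal{A},\quad [a],[\Delta] \in \mathbb{Z}_2 .
```

It is this parity-dependent sign factor, absent in the derivations used in noncommutative geometry, that keeps supergeometry from being a special case of the latter.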





Saturday, 10 August 2013

Is supersymmetry illusory?

“Despite the success of the Large Hadron Collider, evidence for the follow-up theory – supersymmetry – has proved elusive” #

“All would be perfect except that no one has detected any of the many expected supersymmetric particles.” #

Thus it seems that supersymmetries, described by the generalization of Lie algebras to Lie superalgebras, are illusory. The same applies to supergravity, which is based on a superextension of the Poincaré Lie algebra.
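For orientation (the standard N = 1 superextension in four dimensions, stated generically rather than as anything specific to this post), the odd generators Q adjoined to the Poincaré algebra satisfy, in two-component spinor notation,

```latex
\{Q_\alpha, \bar Q_{\dot\beta}\} \;=\; 2\,(\sigma^\mu)_{\alpha\dot\beta}\, P_\mu,
\qquad
\{Q_\alpha, Q_\beta\} = 0,
\qquad
[P_\mu, Q_\alpha] = 0 .
```

The anticommutators are exactly what takes one from Lie algebras to Lie superalgebras.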


At the same time, we observe particles of both even Grassmann parity (photons) and odd parity (fermions). Moreover, gauge symmetries are parameterized by odd ghosts, and BRST theory is at present the generally accepted technique of gauge field quantization. These facts motivate us to develop a Grassmann-graded Lagrangian theory of even and odd fields in general.

References


G. Giachetta, L. Mangiarotti, G. Sardanashvily, Advanced Classical Field Theory (2009)






Sunday, 4 August 2013

Thursday, 25 July 2013

Who is who in modern cosmology history


V. Mukhanov and A. Starobinsky were awarded the 2013 Gruber Cosmology Prize (#). The following Alexei Starobinsky Laureate Profile (#) provides a brilliant sketch of the history of modern cosmology.

In 1979, the universe was in trouble – at least from a cosmologist’s point of view.  Compelling evidence for the Big Bang theory – an interpretation of the universe as expanding over time – dated only to the mid-1960s.  But already theorists found themselves confronting a problem that threatened to undermine that theory:  Why is the universe so uniform, or homogeneous, on scales much greater than the size of its largest structures – the web of superclusters of galaxies that span hundreds of millions of light-years (a light year being the distance light travels in a year, or about 6 trillion miles)? 

According to the Big Bang theory, galaxies on the whole are being carried away from one another on the expansion of space itself, so that no matter where you are in the universe, the rest of the universe seems to be receding from you.  Yet if you look at the most distant part of the universe in one direction, and the most distant part of the universe in the opposite direction, they will be remarkably similar.  They’re billions of light years apart, double the distance that light or any other kind of information could have traveled since the Big Bang, so how could they “know” to be alike?

Alexei Starobinsky, then a senior researcher at the Landau Institute for Theoretical Physics in Moscow, wasn’t working on that problem, but he helped to solve it anyway. 

He had been trying, instead, to figure out how the origin of a Big Bang universe might have worked, a task that took him down the rabbit hole of quantum gravity – the attempt to combine quantum mechanics and the general theory of relativity.  In 1979, he discovered that the universe could have gone through an extraordinarily rapid exponential expansion in the first moments of its existence. In the same year he calculated the generation of gravitational waves during this exponential expansion.

Shortly afterwards, the American physicist Alan Guth proposed the brilliant idea that a stage of exponential expansion of the early universe, which he called “inflation,” could explain the incredible uniformity of our universe and resolve many other outstanding problems of Big Bang cosmology. This clarified the potential significance of the regime of exponential expansion. However, Guth immediately recognized that his proposal had a flaw: the world described by his scenario would become either empty or very non-uniform at the end of inflation. This problem was solved by Andrei Linde, who introduced several major modifications of inflationary theory, such as “new inflation” (later also developed by Albrecht and Steinhardt), “chaotic inflation”, and “eternal chaotic inflation.” A new cosmological paradigm was born.

Starobinsky’s work inspired two of his fellow theoreticians in Moscow, Viatcheslav Mukhanov and G. V. Chibisov (now deceased).  While an exponential expansion of a newborn universe would explain the large-scale homogeneity we see today, Mukhanov and Chibisov also realized that Heisenberg’s uncertainty principle prohibits absolute homogeneity. 
“There would always remain small wiggles, or small inhomogeneities, in the distribution of the matter,” Mukhanov explains.  “But normally these kinds of inhomogeneities are extremely small.”  What would have happened, Mukhanov and Chibisov wondered, to the inhomogeneities that were present during the exponential expansion?  In 1981, Mukhanov and Chibisov concluded that the exponentially rapid expansion would stretch tiny quantum fluctuations to an enormously large size. After that, these fluctuations would grow in amplitude and become the seeds for the galaxy formation.

“We were thinking we could take these small inhomogeneities and amplify them in the expanding universe,” Mukhanov says.  He and Chibisov concluded that in a certain sense these primordial wiggles would be the universe today:  the things that make the universe inhomogeneous on smaller scales; the structures that make the universe more than empty space. 
               
In 1982, several scientists, including Starobinsky, outlined a theory of quantum fluctuations generated in new inflation. This theory was very similar to the theory developed by Mukhanov and Chibisov in the context of the Starobinsky model. Investigation of inflationary fluctuations culminated in 1985 in the work by Mukhanov, who developed a rigorous theory of these fluctuations applicable to a broad class of inflationary models, including new and chaotic inflation.


Later, cosmologists calculated how those inhomogeneities would appear in the cosmic microwave background (CMB), the relic radiation dating from the moment when the universe was 380,000 years old. At that time, hydrogen atoms and photons (packets of light) decoupled, leaving a kind of “flashbulb” image that pervades the universe to this day. Since then, numerous observations of the CMB have found an exquisite match with Mukhanov and Chibisov's theoretical predictions, most recently in the release of data from the European Space Agency's Planck observatory.


Monday, 1 July 2013

Impact Factor 2012 of Journals in mathematical physics

The new Impact Factor 2012 figures have been announced.

The Impact Factor 2012 of some journals close to our International Journal of Geometric Methods in Modern Physics (IJGMMP) in subject and style is the following:

Journal Title    IF 2012   IF 2011   IF 2010   IF 2009   IF 2008   5-Year IF
—                2.415     1.819     0.842     0.969     0.916     1.367
—                1.971     1.941     2.000     2.067     2.075     2.012
—                1.766     1.564     1.641     1.577     1.540     1.514
—                1.296     1.291     1.291     1.318     1.085     1.284
—                1.092     1.213     1.290     1.190     1.258     1.102
—                1.055     0.818     0.652     0.714     0.683     0.911
IJGMMP (WS)      0.951     0.856     0.757     1.612     1.464     1.265
—                0.756     0.643     0.734     0.658     0.576     0.626

(The other journal titles did not survive extraction; only their figures remain.)
See also Total List of journals in mathematical physics.

Sunday, 16 June 2013

«Teorminimum-XXI». A modern course of theoretical physics.

This course was conceived as a modern «Teorminimum-XXI», an alternative to the famous Course of Theoretical Physics by Landau and Lifshitz, which reflects the state of theoretical physics in the middle of the last century. Already then, in the 1970s, a quite different theoretical physics was emerging, based on the mathematical apparatus of differential geometry and algebraic topology. It was stimulated by the success of gauge field theory as a universal mechanism for describing the fundamental interactions, and by its rigorous mathematical formulation in terms of the geometry of fibre bundles.

Fibre bundles, connections and jet manifolds; supersymmetries, supergeometry and noncommutative geometry; homology and cohomology; solitons, instantons and topological charges; higher-dimensional models; topological field theory; anomalies; quantum groups and Hopf algebras; geometric and deformation quantization; groupoids, algebroids, and so on. All this constitutes the standard content of contemporary quantum and field models, and none of it is to be found in Landau and Lifshitz, nor in the overwhelming majority of Russian university textbooks and courses.

The course of theoretical physics presented here, "Modern Methods of Field Theory", comprises five volumes:

G.A. Sardanashvily, Modern Methods of Field Theory. 1. Geometry and Classical Fields (URSS, 1996; 2nd ed. 2011)

G.A. Sardanashvily, Modern Methods of Field Theory. 2. Geometry and Classical Mechanics (URSS, 1998)

G.A. Sardanashvily, Modern Methods of Field Theory. 3. Algebraic Quantum Theory (URSS, 1999; 2nd ed. 2011)

G.A. Sardanashvily, Modern Methods of Field Theory. 4. Geometry and Quantum Fields (URSS, 2000)

G.A. Sardanashvily, Modern Methods of Field Theory. 5. Gravitation (URSS, 1996; 2nd ed. 2011).

This is a kind of adapted «Teorminimum-XXI» for those who intend to take up modern theoretical and mathematical physics. For professional work, however, it is not sufficient. An expert exposition of the necessary mathematical methods and theoretical models is given in the monographs:

L. Mangiarotti, G. Sardanashvily, Connections in Classical and Quantum Field Theory (World Scientific, 2000),

G. Giachetta, L. Mangiarotti, G. Sardanashvily, Geometric and Algebraic Topological Methods in Quantum Mechanics (World Scientific, 2005),

G. Giachetta, L. Mangiarotti, G. Sardanashvily, Advanced Classical Field Theory (World Scientific, 2009),

G. Giachetta, L. Mangiarotti, G. Sardanashvily, Geometric Methods in Classical and Quantum Mechanics (World Scientific, 2010),

G. Sardanashvily, Lectures on Differential Geometry of Modules and Rings. Application to Quantum Theory (Lambert Academic Publishing, Saarbrücken, 2012),

G. Sardanashvily, Advanced Differential Geometry for Theoreticians. Fiber bundles, jet manifolds and Lagrangian theory (Lambert Academic Publishing, Saarbrücken, 2013),


which are available on the Monographs page of my website, on its duplicate at Google, and on Mendeley.

Teorminimum-XXI on Facebook

Monday, 10 June 2013

My review: “Geometric formulation of non-autonomous mechanics”

G. Sardanashvily, Geometric formulation of non-autonomous mechanics, Int. J. Geom. Methods Mod. Phys. 10 (2013) 1350061 #

Abstract

We address classical and quantum mechanics in the general setting of arbitrary time-dependent transformations. Classical non-relativistic mechanics is formulated as a particular field theory on smooth fibre bundles over the time axis R. Connections on these bundles describe reference frames. Quantum time-dependent mechanics is phrased in the geometric terms of Banach and Hilbert bundles and connections on them. A quantization scheme in this language is geometric quantization.

Introduction 

The technique of symplectic manifolds is well known to provide an adequate Hamiltonian formulation of autonomous mechanics. A typical example is a mechanical system whose configuration space is a manifold M and whose phase space is the cotangent bundle T*M of M, provided with the canonical symplectic form Ω. Every autonomous Hamiltonian system locally is of this type.

However, this geometric formulation of autonomous mechanics does not extend to mechanics under time-dependent transformations, because the symplectic form Ω fails to be invariant under such transformations. As a palliative, time-dependent mechanics has been developed on a configuration space Q = RxM, where R is the time axis. Its phase space RxT*M is provided with the pull-back presymplectic form. However, this presymplectic form, too, is broken by time-dependent transformations.

We address non-relativistic mechanics in the case of arbitrary time-dependent transformations. Its configuration space is a fibre bundle Q->R. Its velocity space is the first-order jet manifold of sections of Q->R. A phase space is the vertical cotangent bundle V*Q of Q->R.

This formulation of non-relativistic mechanics is similar to that of classical field theory on fibre bundles over a base of dimension > 1. The difference between mechanics and field theory, however, lies in the fact that connections on bundles over R are flat; they fail to be dynamic variables, but instead describe reference frames.
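In adapted bundle coordinates (t, q^i) on Q -> R, this setting reads as follows (a standard coordinate presentation of this geometry, added here for the reader's convenience):

```latex
J^1Q:\ (t,\, q^i,\, q^i_t),
\qquad
V^*Q:\ (t,\, q^i,\, p_i),
\qquad
\Gamma = dt \otimes \bigl(\partial_t + \Gamma^i(t,q)\,\partial_i\bigr),
```

where J^1Q is the velocity space, V^*Q the phase space, and the flat connection Γ singles out a reference frame: its integral sections, the curves q^i(t) with q^i_t = Γ^i, play the role of observers at rest in that frame.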

Note that relativistic mechanics is adequately formulated as a particular classical string theory of one-dimensional submanifolds.

Time-dependent integrable Hamiltonian systems and mechanics with time-dependent parameters are also considered.



Sunday, 2 June 2013

Quantum field theory: generating functionals as a measure (from my Scientific Biography)

Third period (1978 - 1990) ...

The fact that a gravitational field is, by its physical nature, a Higgs field drew my attention to the general problem of describing a Higgs vacuum. In unified models of fundamental interactions, its occurrence is regarded as a kind of phase transition at a certain energy (temperature). I made various attempts to approach this problem [37,62]; in particular, I developed the idea that a Higgs vacuum is a source of a Higgs gravitational field [51]. However, no substantive theory of a Higgs vacuum has been constructed so far.

Working on the problem of the Higgs vacuum, I met the fact that, in general, there is no mathematically correct formulation of quantum field theory, either axiomatic or in the form of perturbation theory. The latter is formulated in terms of so-called functional integrals, but these are not true integrals; their properties are postulated by analogy with integrals on finite-dimensional spaces so as to reproduce Feynman diagrams. Pursuing the well-known GNS construction in quantum theory, I knew its extension to unnormed involutive algebras and, in particular, to the algebra of free fields represented by test functions (rapidly decreasing at infinity). These functions form a nuclear space S. Continuous forms on this algebra are generalized functions, which make up the dual space S'. The problem was to describe a system of interacting fields that appear and disappear at certain instants of time. Such a system is characterized by chronological forms on the space S, but these are not continuous. However, I found that, after a Wick rotation to Euclidean fields, their chronological forms (Euclidean Green functions) are continuous. Moreover, they are derived from a generating functional which, by the well-known Bochner–Minlos theorem, defines a measure on the space S' of generalized functions. This construction provides a good mathematical basis for quantum field theory [54,59]. It was extended to fermion fields [57]. However, the problem lies in the fact that these measures cannot be written in explicit form, with the exception of Gaussian measures for fields without interaction. In addition, the properties of these measures differ from those assumed for functional integrals in perturbative quantum field theory. For example, there is no translationally invariant Lebesgue measure on S', and I even tried to use this fact to describe a Higgs vacuum [62].
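The one case where the measure can be written explicitly, the Gaussian measure of a free field, can at least be checked numerically. The sketch below is my own toy illustration, not anything from the cited papers: it assumes a free Euclidean scalar field on a one-dimensional periodic lattice, where the quadratic action S = ½ φᵀAφ defines a Gaussian measure whose two-point function is the Green function G = A⁻¹, and verifies this by Monte Carlo sampling.

```python
import numpy as np

# Free Euclidean scalar field on a periodic 1-D lattice of N sites.
# The action S[phi] = 1/2 phi^T A phi, A = -laplacian + m^2, defines a
# Gaussian measure; its generating functional Z[J] = exp(1/2 J^T G J)
# is determined by the covariance (Green function) G = A^{-1}.
N, m2 = 16, 1.0
lap = -2.0 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)
lap[0, -1] = lap[-1, 0] = 1.0              # periodic boundary terms
A = -lap + m2 * np.eye(N)
G = np.linalg.inv(A)                       # exact two-point function

# Sample the measure and check <phi_x phi_y> = G_xy numerically.
rng = np.random.default_rng(0)
L = np.linalg.cholesky(G)                  # G = L L^T
phi = rng.standard_normal((200_000, N)) @ L.T   # samples with cov G
emp = phi.T @ phi / len(phi)               # empirical covariance
print(np.abs(emp - G).max())               # small sampling error
```

For an interacting (non-Gaussian) action no such closed-form covariance exists, which is exactly the obstruction discussed above.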


Later I repeatedly returned to attempts to construct the generating functional of quantum field theory as a measure, but so far without success. However, in the language of measures I have described inequivalent representations of algebras of canonical commutation relations modeled on nuclear spaces [105].

References:

G. Sardanashvily, True functional integrals in Algebraic Quantum Field Theory, arXiv: hep-th/9410107


Monday, 20 May 2013

Against the Impact Factor



There is a pressing need to improve the ways in which the output of scientific research is evaluated by funding agencies, academic institutions, and other parties.

To address this issue, a group of editors and publishers of scholarly journals met during the Annual Meeting of The American Society for Cell Biology (ASCB) in San Francisco, CA, on December 16, 2012. The group developed a set of recommendations, referred to as the San Francisco Declaration on Research Assessment. We invite interested parties across all scientific disciplines to indicate their support by adding their names to this Declaration.

The outputs from scientific research are many and varied, including: research articles reporting new knowledge, data, reagents, and software; intellectual property; and highly trained young scientists. Funding agencies, institutions that employ scientists, and scientists themselves, all have a desire, and need, to assess the quality and impact of scientific outputs. It is thus imperative that scientific output is measured accurately and evaluated wisely.

The Journal Impact Factor is frequently used as the primary parameter with which to compare the scientific output of individuals and institutions. The Journal Impact Factor, as calculated by Thomson Reuters, was originally created as a tool to help librarians identify journals to purchase, not as a measure of the scientific quality of research in an article. With that in mind, it is critical to understand that the Journal Impact Factor has a number of well-documented deficiencies as a tool for research assessment. These limitations include: A) citation distributions within journals are highly skewed; B) the properties of the Journal Impact Factor are field-specific: it is a composite of multiple, highly diverse article types, including primary research papers and reviews; C) Journal Impact Factors can be manipulated (or "gamed") by editorial policy; and D) data used to calculate the Journal Impact Factors are neither transparent nor openly available to the public.
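Deficiency (A) is easy to illustrate with synthetic data. The following toy model (a lognormal distribution, a common stand-in for skewed citation counts; these are not real journal data) shows how the mean, which is what an impact-factor-style average reports, sits far above what a typical article receives:

```python
import numpy as np

# Toy "citations per article" for one journal: heavy-tailed, as real
# citation distributions are. Lognormal is used purely for illustration.
rng = np.random.default_rng(42)
citations = np.floor(rng.lognormal(mean=0.5, sigma=1.2, size=10_000))

mean_cites = citations.mean()        # the impact-factor-style average
median_cites = np.median(citations)  # what a typical article gets
print(mean_cites, median_cites)      # the mean greatly exceeds the median
```

A few highly cited articles drag the average up, so the journal-level mean says little about any individual paper.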

Below we make a number of recommendations for improving the way in which the quality of research output is evaluated. Outputs other than research articles will grow in importance in assessing research effectiveness in the future, but the peer-reviewed research paper will remain a central research output that informs research assessment. Our recommendations therefore focus primarily on practices relating to research articles published in peer-reviewed journals but can and should be extended by recognizing additional products, such as datasets, as important research outputs. These recommendations are aimed at funding agencies, academic institutions, journals, organizations that supply metrics, and individual researchers.

A number of themes run through these recommendations:
  • the need to eliminate the use of journal-based metrics, such as Journal Impact Factors, in funding, appointment, and promotion considerations;
  • the need to assess research on its own merits rather than on the basis of the journal in which the research is published; and
  • the need to capitalize on the opportunities provided by online publication (such as relaxing unnecessary limits on the number of words, figures, and references in articles, and exploring new indicators of significance and impact).
We recognize that many funding agencies, institutions, publishers, and researchers are already encouraging improved practices in research assessment. Such steps are beginning to increase the momentum toward more sophisticated and meaningful approaches to research evaluation that can now be built upon and adopted by all of the key constituencies involved.

The signatories of the San Francisco Declaration on Research Assessment support the adoption of the following practices in research assessment.

General Recommendation
1. Do not use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist's contributions, or in hiring, promotion, or funding decisions.

For funding agencies
2. Be explicit about the criteria used in evaluating the scientific productivity of grant applicants and clearly highlight, especially for early-stage investigators, that the scientific content of a paper is much more important than publication metrics or the identity of the journal in which it was published.
3. For the purposes of research assessment, consider the value and impact of all research outputs (including datasets and software) in addition to research publications, and consider a broad range of impact measures including qualitative indicators of research impact, such as influence on policy and practice.

For institutions
4. Be explicit about the criteria used to reach hiring, tenure, and promotion decisions, clearly highlighting, especially for early-stage investigators, that the scientific content of a paper is much more important than publication metrics or the identity of the journal in which it was published.
5. For the purposes of research assessment, consider the value and impact of all research outputs (including datasets and software) in addition to research publications, and consider a broad range of impact measures including qualitative indicators of research impact, such as influence on policy and practice.

For publishers
6. Greatly reduce emphasis on the journal impact factor as a promotional tool, ideally by ceasing to promote the impact factor or by presenting the metric in the context of a variety of journal-based metrics (e.g., 5-year impact factor, EigenFactor, SCImago, h-index, editorial and publication times, etc.) that provide a richer view of journal performance.
7. Make available a range of article-level metrics to encourage a shift toward assessment based on the scientific content of an article rather than publication metrics of the journal in which it was published.
8. Encourage responsible authorship practices and the provision of information about the specific contributions of each author.
9. Whether a journal is open-access or subscription-based, remove all reuse limitations on reference lists in research articles and make them available under the Creative Commons Public Domain Dedication.
10. Remove or reduce the constraints on the number of references in research articles, and, where appropriate, mandate the citation of primary literature in favor of reviews in order to give credit to the group(s) who first reported a finding.

For organizations that supply metrics
11. Be open and transparent by providing data and methods used to calculate all metrics.
12. Provide the data under a licence that allows unrestricted reuse, and provide computational access to data, where possible.
13. Be clear that inappropriate manipulation of metrics will not be tolerated; be explicit about what constitutes inappropriate manipulation and what measures will be taken to combat this.
14. Account for the variation in article types (e.g., reviews versus research articles), and in different subject areas when metrics are used, aggregated, or compared.

For researchers
15. When involved in committees making decisions about funding, hiring, tenure, or promotion, make assessments based on scientific content rather than publication metrics.
16. Wherever appropriate, cite primary literature in which observations are first reported rather than reviews in order to give credit where credit is due.
17. Use a range of article metrics and indicators on personal/supporting statements, as evidence of the impact of individual published articles and other research outputs.
18. Challenge research assessment practices that rely inappropriately on Journal Impact Factors and promote and teach best practice that focuses on the value and influence of specific research outputs.