There are many paths to take if you are interested in doing fundamental physics research in hopes of discovering the secrets of the universe (awkward phrasing, I know: it makes the universe sound like the Bush administration, I guess?). Here are my three favorite ways to do fundamental theoretical physics.
- Do it yourself. This is the traditional method. Of course, you have to be more than a bit delusional to think you might actually make a net positive contribution, but such long odds don’t seem to deter many people from choosing this method.
- Build a computer to do it for you. If you are a believer that artificial intelligence is just around the corner, then why would you think that you stand any chance against the coming mighty computer intelligence? So quit your job as a theoretical physicist and convince Google to build you a mighty physics brain. Call it, oh, I don’t know, “Deep Thought”. Just make sure you are careful in wording your queries to the machine.
- Join SETI. The often overlooked approach. I mean, if (*ahem*) there are aliens, and if (*ahem*) SETI can find them, the aliens are likely to be more advanced than us (*ahem*), and then we can simply ask them our deepest physics questions. Of course, this may not get you the answer to your physics question in your lifetime, or even in the lifetime of the alien, but I do believe that is the least of your worries with this method.
I suspect some level of perhaps misplaced self-confidence is a prerequisite for a career in theoretical physics.
I think a typing pool with an infinite number of monkeys probably offers the best chance of a successful result in theoretical physics. However, it might be possible for an expert in ESP to read the minds of an infinite number of typing monkeys and to intuit a successful unified theory before any of them types it up. Anyone who believes he/she can arrive at a successful unified theory single-handed surely belongs in this latter class of ‘thinkers’. I include myself in this group. I am thinking the universe is curved like a banana…
So, it seems the surefire way to success is to build your own radio telescopes to do SETI yourself and, yes, build your own AI system to analyze the results, converse with any aliens, keep the electric bills paid after you’re gone, and, of course, run random Monte Carlo simulations.
I’ll get started tonight!
A British zoo tried the whole monkey + typewriter thing a year or so ago. Turns out all the monkeys did was defecate on the typewriters.
Find the right giant(s), and get them to give you a lift up on their shoulders.
For a while the favorite method was: wait for Ed Witten to write another paper or give another presentation.
A British zoo tried the whole monkey + typewriter thing a year or so ago. Turns out all the monkeys did was defecate on the typewriters.
So, it did successfully emulate theoretical physics!
(Just kidding. Don’t mean it, so don’t flame me!)
Hence one reason to collaborate is to guarantee that someone other than an anonymous referee actually knows what you’re doing.
That assumes one can find a willing collaborator. This is easier said than done sometimes. I, for one, have only one or two papers with co-authors and not for lack of trying (and I don’t think my personality is that abrasive – The Pontiff himself managed to survive dinner with me a few months ago).
So, it did successfully emulate theoretical physics!
Yes, except we defecate on our laptops rather than typewriters (after all, it is the 21st century).
I have stated since 1975, and never been contradicted, that I am the first person in the world to solve an open scientific question, from the literature, with a genetic algorithm. This is a methodology-specific subset of “Build a computer to do it for you.”
The problem dealt with nonlinear waves defined by a set of differential equations thought to be unsolvable, but which I solved by having the GA (which I wrote in APL, running on a CDC 6600) search the space of possible formulae, with “fitness” defined as how closely a formula matched the data from simulations of the nonlinear waves.
Once the software I’d built told me the answer, I first of all had a publication, and then painstakingly found better and better ways to reverse engineer the answer with shorter and shorter proofs, culminating in a 1-line proof (not counting the notational set-up) that assumes the reader knows all about Krohn-Rhodes decomposition of the semigroup of differential operators and convolution integrals of nonlinear responses to Dirac deltas, and has a willingness to do silly things in Laplace-transform phase space.
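For readers who have never seen the technique, here is a minimal sketch in Python (not a reconstruction of the 1975 APL program): a toy genetic algorithm that evolves small expression trees and scores each candidate formula by how closely it reproduces some sample data. The target data, the grammar of primitives, and every parameter here are hypothetical, and it uses mutation-only evolution for brevity where a real GA would typically add crossover.

```python
# A minimal sketch of "search the space of possible formulae with a GA,
# fitness = match to simulation data". All specifics here are hypothetical.
import math
import random

random.seed(0)

# Hypothetical "simulation data" to fit: y = tanh(x) sampled on a grid.
XS = [i / 10.0 for i in range(-20, 21)]
TARGET = [math.tanh(x) for x in XS]

# Expression trees over a tiny grammar: constants, x, +, *, and tanh.
def random_tree(depth=3):
    if depth == 0 or random.random() < 0.3:
        return random.choice(['x', round(random.uniform(-2, 2), 2)])
    op = random.choice(['+', '*', 'tanh'])
    if op == 'tanh':
        return ('tanh', random_tree(depth - 1))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, float):
        return tree
    if tree[0] == 'tanh':
        return math.tanh(evaluate(tree[1], x))
    a, b = evaluate(tree[1], x), evaluate(tree[2], x)
    return a + b if tree[0] == '+' else a * b

def fitness(tree):
    # Lower is better: sum of squared errors against the sample data.
    return sum((evaluate(tree, x) - y) ** 2 for x, y in zip(XS, TARGET))

def mutate(tree):
    # Replace a random subtree with a fresh random one.
    if random.random() < 0.3 or not isinstance(tree, tuple):
        return random_tree(2)
    parts = list(tree)
    i = random.randrange(1, len(parts))
    parts[i] = mutate(parts[i])
    return tuple(parts)

population = [random_tree() for _ in range(200)]
for generation in range(50):
    population.sort(key=fitness)
    survivors = population[:50]                      # truncation selection
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(150)]   # offspring by mutation

best = min(population, key=fitness)
print('best formula:', best, 'error:', fitness(best))
```

In practice one would also penalize formula length, since otherwise the GA happily "memorizes" the data with a bloated expression rather than finding a compact law.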
This breakthrough did not result in my PhD dissertation being approved (nor was it rejected), because those few who could do the Math did not understand the problem, and vice versa.
Hence one reason to collaborate is to guarantee that someone other than an anonymous referee actually knows what you’re doing.
If one must collaborate (and I think it a VERY good idea) then the Web is the best collaborationware since writing was invented. But that’s another story…
careful is not flippant. Symmetries evince large bore puffery. The real world is about symmetry breakings. Exceptions not the rules do the hard work.
The weaker the interaction the greater the symmetry breakings. The universe is powered by snit. No interaction is as weak as gravitation. Do left and right shoes vacuum free fall identically? Somebody should look (pdf).
Ian’s statement: “I, for one, have only one or two papers with co-authors and not for lack of trying (and I don’t think my personality is that abrasive” saddens me.
(1) Collaboration is a key to success and happiness;
(2) I probably have a much more abrasive personality, which has not stood in the way of my having approximately 100 published collaborators, including at least one Nobel laureate whose name I drop too often;
(3) Thus, as I assert without proof that abrasion is not correlated with collaboration, we must dig deeper to find the fundamental law;
(4) A candidate for that law is this: “If 2 people collaborate, this does not mean that each has only 1/2 as much work to do. It is closer to 60% or 70% — better experimental results need to be found.”
(5) If you ask each of N collaborators on a successful project how much of the contribution was theirs, personally, the sum far exceeds 100% with 200% to 300% not being unusual;
(6) If you ask each of N collaborators on an UNsuccessful project how much of the contribution was theirs, personally, the sum falls significantly below 100% with 5% to 10% not being unusual;
(7) Wake up and smell the overpriced coffee: the 21st century is the age of Wikiscience, where no paper is ever complete, but merely represented by a current on-line version with MANY contributors, most of whom never meet face-to-face, some pseudonymous, and an unknown number not human at all.
Cf. Phys Rev papers with 300 co-authors, Erdos Number, Bacon Number, Asimov Number, and the classic Nature letter to the editor on LPU = “Least Publishable Unit.”
Why use an infinite number of monkeys when an infinite number of graduate students are cheaper to feed? Just don’t count on the NSF funding it before the (real) proposal to build a trailer park around a Doppler radar … to act as tornado bait.
Jonathan, you published a paper based on a calculation done in APL? I am *very* impressed.
CCPhysicist: Collaborate, always collaborate, and make computers do the heavy lifting; remember the LPU (Least Publishable Unit) and always break up your research results into as many publications as feasible. The ones that I referred to include the following (a few oddball others crept in here, on subjects such as Quantum Computing and Science Fiction):
Partial List of my Fundamental Computational Biophysics and Molecular Biology publications/presentations:
Jonathan V. Post, “Analysis of Enzyme Waves: Success through Simulation”, Proceedings of the Summer Computer Simulation Conference, Seattle, WA, 25-27 August 1980, pp. 691-695, AFIPS Press, 1815 North Lynn Street, Suite 800, Arlington, VA 22209
Jonathan V. Post, “Simulation of Metabolic Dynamics”, Proceedings of the Fourth Annual Symposium on Computer Applications in Medical Care, Washington, DC, 2-5 November 1980
Jonathan V. Post, “Enzyme System Cybernetics”, Proceedings of the International Conference on Applied Systems Research and Cybernetics, Acapulco, Mexico, 12-15 December 1980
Jonathan V. Post, “Enzyme System Cybernetics”, Applied Systems Research and Cybernetics, ed. G.E. Lasker, Pergamon Press, 1981, Vol. IV, pp. 1883-1888, ISBN: 0-08-027196-0 (set), ISBN: 0-08-0271201 (Vol. IV)
Jonathan V. Post, “Alternating Current Chemistry, Enzyme Waves, and Metabolic Chaos”, NATO Workshop on Coherent and Emergent Phenomena in Biomolecular Systems, Tucson, AZ, 15-19 January 1991
Jonathan V. Post, “Nonlinear Enzyme Waves, Simulated Metabolism Dynamics, and Protein Nanotechnology”, poster session, 2nd Artificial Life Workshop, Santa Fe, NM, 5-9 February 1990
Jonathan V. Post, “Continuous Semigroups, Nonlinear Enzyme Waves, and Simulated Metabolism Dynamics”, accepted for Semigroup Forum (mathematics journal), 15 May 1990 [not published, as employer accidentally erased the only digital file of the paper]
Jonathan V. Post, “Is Functional Identity of Products a Necessary Condition for the Selective Neutrality of Structural Gene Allele?”, Population Biologists of New England (PBONE), Brown University, Providence, RI, June 1976
Jonathan V. Post, “Enzyme Kinetics and Selection of Structural Gene Products — A Theoretical Consideration”, Society for the Study of Evolution, Ithaca, NY, June 1977
Jonathan V. Post, “Birth of the Biocomputer”, color-videotaped lecture to an audience of 200 at the opening of A.P.P.L.E.’s new world headquarters, Kent, WA, 15 March 1983
Jonathan V. Post et al., “Part Human, Part Machine”, panel discussion on cyborgs, prosthesis, robots, and nanotechnology, Westercon 37, Portland Marriott, Portland, OR, 30 June 1984
Jonathan V. Post (moderator), Prof. Vernor Vinge, Paul Preuss, Greg Bear, F. Eugene Yates (Director, Crump Institute for Medical Engineering, UCLA), “New Machines, New Life Forms”, UCLA Extension’s Symposium on Science and Science Fiction, Westwood, CA, 9 November 1986
Jonathan V. Post, Dean R. Lambe, Laura Mixon, Walter John Williams, “Nanotechnology”, panel discussion, Nolacon: 46th World Science Fiction Convention, Sheraton Grand B, New Orleans, LA, 4 September 1988
Jonathan Vos Post, “The Evolution of Controllability in Enzyme System Dynamics”, Proc. 5th International Conference on Complex Systems, Boston, Massachusetts, 16-21 May 2004.

Abstract: A building block of all living organisms’ metabolism is the “enzyme chain.” A chemical “substrate” diffuses into the (open) system. A first enzyme transforms it into a first intermediate metabolite. A second enzyme transforms the first intermediate into a second intermediate metabolite. Eventually, an Nth intermediate, the “product”, diffuses out of the open system. What we most often see in nature is that the behavior of the first enzyme is regulated by a feedback loop sensitive to the concentration of product. This is accomplished by the first enzyme in the chain being “allosteric”, with one active site for binding with the substrate and a second active site for binding with the product. Normally, as the concentration of product increases, the catalytic efficiency of the first enzyme is decreased (inhibited). To anthropomorphize: when the enzyme chain is making too much product for the organism’s good, the first enzyme in the chain is told, “whoa, slow down there.” Such feedback can lead to oscillation or, as this author first pointed out, “nonperiodic oscillation” (for which, at the time, the term “chaos” had not yet been introduced). But why that single feedback loop, known as “endproduct inhibition” [Umbarger, 1956], and not other possible control systems? What exactly is evolution doing in adapting systems to do complex things with control of flux (flux meaning the mass of chemicals flowing through the open system in unit time)? This publication emphasizes the results of Kacser and the results of Savageau, in the context of this author’s theory.

Other publications by this author [Post, 9 refs] explain the context and literature on the dynamic behavior of enzyme system kinetics in living metabolisms; the use of interactive computer simulations to analyze such behavior; the emergent behaviors “at the edge of chaos”; the mathematical solution, in the neighborhood of steady state, of previously unsolved systems of nonlinear Michaelis-Menten equations [Michaelis-Menten, 1913]; and a deep reason for those solutions in terms of Krohn-Rhodes decomposition of the semigroup of differential operators of the systems of nonlinear Michaelis-Menten equations.

Living organisms are not test tubes in which chemical reactions have reached equilibrium. They are made of cells, each of which is an “open system” whose membrane energy, entropy, and certain molecules can pass through. Due to conservation of mass, the rate of stuff going in (averaged over time) equals the rate of stuff going out. That rate is called “flux.” If what comes into the open system varies as a function of time, then what is inside the system varies as a function of time, and what leaves the system varies as a function of time. Post’s related publications provide a general solution to the relationship between the input function of time and the output function of time, in the neighborhood of steady state. But the behavior of the open system, in its complexity, can also be analyzed in terms of mathematical Control Theory. This leads immediately to questions of “Control of Flux.”

For a draft paper on the wiki of ICCS [International Conference on Complex Systems], see also:
http://necsi.org/community/wiki/index.php/Evolutionary_channel_capacity
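To make the feedback mechanism in that abstract concrete, here is a toy sketch in Python, assuming nothing from the paper itself: a short chain of Michaelis-Menten enzymes in which the first, allosteric enzyme is throttled by the end product through a Hill-type inhibition factor. Every rate constant is invented for illustration; depending on how steep the inhibition is and how fast material flows in and out, the product concentration either settles to a steady state or keeps oscillating, which is the qualitative behavior described above.

```python
# A toy sketch (not the author's actual model) of an enzyme chain with
# end-product inhibition of the first, allosteric enzyme. Michaelis-Menten
# rate laws with hypothetical parameters; integrated by simple forward Euler.

def mm_rate(s, vmax=1.0, km=0.5):
    # Michaelis-Menten rate for one enzyme acting on substrate concentration s.
    return vmax * s / (km + s)

def inhibition(p, ki=0.3, n=4):
    # Allosteric (Hill-type) inhibition factor from end-product concentration p.
    return 1.0 / (1.0 + (p / ki) ** n)

def simulate(steps=20000, dt=0.01, influx=0.2, efflux=1.0, n_intermediates=4):
    # State: concentrations of the substrate, the intermediates, and the product.
    conc = [0.0] * (n_intermediates + 2)
    history = []
    for _ in range(steps):
        rates = []
        # First enzyme: Michaelis-Menten, throttled by the end product.
        rates.append(mm_rate(conc[0]) * inhibition(conc[-1]))
        # Remaining enzymes: plain Michaelis-Menten down the chain.
        for i in range(1, n_intermediates + 1):
            rates.append(mm_rate(conc[i]))
        d = [0.0] * len(conc)
        d[0] = influx - rates[0]                     # substrate diffuses in
        for i in range(1, n_intermediates + 1):
            d[i] = rates[i - 1] - rates[i]           # intermediates hand off
        d[-1] = rates[-1] - efflux * conc[-1]        # product diffuses out
        conc = [c + dt * dc for c, dc in zip(conc, d)]
        history.append(conc[-1])                     # record product level
    return history

product = simulate()
# Depending on the parameters (inhibition steepness, influx, efflux), the
# product level may oscillate or converge; compare the late-time spread.
print(min(product[-2000:]), max(product[-2000:]))
```

A stiffer parameter set would call for a proper ODE solver rather than forward Euler, but the qualitative point about feedback through the allosteric first enzyme survives.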
Quantum Coincidence with Amino Acid Molecular Weight
by Jonathan Vos Post
Draft 4.0 of 20 April 2007, 8 pp., 3200 words

0.0 ABSTRACT: This reports and speculates on what I presume to be a coincidence: the Mean Molecular Weight of the 20 Standard Human Amino Acids, in Daltons, is within one part in one thousand of the inverse of the dimensionless fine structure constant.

Kip Thorne at Caltech asks: “How do you know that this is a coincidence?” Mere coincidence is suggested by at least 8 factors: (a) different organisms have slightly different amino acids; (b) humans have amino acids altered (methylation) after incorporation in peptides; (c) humans have hydroxyproline as a 21st amino acid, but only in collagen; (d) the mean molecular weight of human amino acids has changed substantially over time by evolution of the genetic code (we calculate this change explicitly); (e) there is no causal connection between amino acids and the fine structure constant; (f) there is no consensus mechanism connecting the fine structure constant with the mass of the hydrogen atom; (g) isotope differences are significant at second order (especially Carbon and Oxygen); (h) the fine structure constant may be changing over time.

This is discussed in four contexts: (a) why this is surely a mere coincidence; (b) parallels to a recent Li and Zhang paper in arXiv; (c) relationship with companion papers by Jonathan Vos Post; (d) distinguishing this from bogus Intelligent Design arguments.

Open questions relate to other organisms, the history of amino acid regulation in the genome, future changes, and the correlations between physical constants and mathematical biology.

[end abstract]
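The headline number is easy to check. Using commonly tabulated average molecular weights of the 20 standard amino acids as free molecules (my own lookup, not figures from the draft), the mean comes out near 136.9 Da, roughly 0.1% below 1/α ≈ 137.036, i.e. right at the one-part-in-one-thousand level the abstract claims:

```python
# A quick back-of-the-envelope check of the claimed coincidence, using commonly
# tabulated average molecular weights (in Daltons) of the 20 standard amino
# acids as free molecules. These figures are my own lookup, not taken from the
# draft paper, so treat the last digit with caution.
AMINO_ACID_MW = {
    'Gly': 75.07,  'Ala': 89.09,  'Ser': 105.09, 'Pro': 115.13, 'Val': 117.15,
    'Thr': 119.12, 'Cys': 121.16, 'Leu': 131.17, 'Ile': 131.17, 'Asn': 132.12,
    'Asp': 133.10, 'Gln': 146.15, 'Lys': 146.19, 'Glu': 147.13, 'Met': 149.21,
    'His': 155.16, 'Phe': 165.19, 'Arg': 174.20, 'Tyr': 181.19, 'Trp': 204.23,
}

INVERSE_ALPHA = 137.035999  # CODATA value of 1/alpha (dimensionless)

mean_mw = sum(AMINO_ACID_MW.values()) / len(AMINO_ACID_MW)
relative_gap = abs(mean_mw - INVERSE_ALPHA) / INVERSE_ALPHA

print(f'mean molecular weight: {mean_mw:.2f} Da')
print(f'1/alpha:               {INVERSE_ALPHA:.3f}')
print(f'relative difference:   {relative_gap:.2%}')   # roughly 0.1 percent
```

Slightly different tabulations (and point (d) of the abstract, the evolution of the genetic code) would shift this mean, which is part of why the author leans toward calling it a coincidence.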
1. arXiv:0710.5046 [pdf]: “Comparative Quantum Cosmology: Causality, Singularity, and Boundary Conditions”, Philip V. Fellman, Jonathan Vos Post, Christine M. Carmichael, Andrew Carmichael Post; 17 pages, 2 figures; 7th International Conference on Complex Systems; subjects: General Relativity and Quantum Cosmology (gr-qc).
2. arXiv:0707.4036 [pdf]: “Disrupting Terrorist Networks, a dynamic fitness landscape approach”, Philip V. Fellman, Jonathan P. Clemens, Roxana Wright, Jonathan Vos Post, Matthew Dadmun; 12 pages, 8 figures; Proceedings of the 2006 annual meeting of the North American Association for Computation in the Social and Organizational Sciences; subjects: Adaptation and Self-Organizing Systems (nlin.AO).
3. arXiv:0707.0854 [pdf]: “Adaptation and Coevolution on an Emergent Global Competitive Landscape”, Philip V. Fellman, Jonathan Vos Post, Roxana Wright, Usha Dasari; 16 pages; 5th International Conference on Complex Systems; subjects: Adaptation and Self-Organizing Systems (nlin.AO).
4. arXiv:0707.0324 [pdf]: “Quantum Nash Equilibria and Quantum Computing”, Philip V. Fellman, Jonathan Vos Post; 18 pages; 6th International Conference on Complex Systems; journal-ref: InterJournal Complex Systems, 1846, 2006; subjects: Physics and Society (physics.soc-ph); Computational Physics (physics.comp-ph).
====================
Yeee!!! JVP writes a lot. As for collaboration, the sum of the work has to add up to more than one. If everybody works on their own part of the problem, you have to connect the parts. That means everybody works on the interfaces. Interfaces in the typical logical thought train are fuzzy, so everybody does a lot of work on the interfaces.
Dear David, what you say about fuzzy trains at interfaces is well said. That reminds me of the since-demolished Pennsylvania Station in New York, which Isaac Asimov admitted was the inspiration for Trantor.
I’m no Isaac Asimov by writing volume — he wrote 90 words per minute, 10 hours per day, 363 days per year, and sold every word. But my magazine, journal, anthology, and proceedings print stream is smaller than my stream of online quasi-publications, this being the 21st-century age of wikiscience.
You may also find 167 of my comments or sequences linked to the arXiv in the Online Encyclopedia of Integer Sequences, indexed and linked to here (10 per page, 17 pages):
http://www.research.att.com/~njas/sequences/?q=jonathan+vos+post+arxiv&language=english&go=Search
As of this morning, I have contributed 1,983 of the 140,000+ web pages of the Online Encyclopedia of Integer Sequences, roughly 1/72 of the decade’s total from thousands of contributors. That’s much more than my 241 contributions to the university-hosted Prime Curios web site, or the 19 in Wolfram-hosted MathWorld. Add in those 4 papers coauthored with Fellman et al. on the arXiv, and I have 2,247 edited on-line publications. These do not count the 1,000+ pages of my own web domain, nor blogs, and in some cases they are not traditionally edited or refereed, but they are nearly so in reputability.
Adding the hardcopy and the pixel publications, presentations, and broadcasts gets me to roughly 3,000 total. That’s almost a third as many as the KGB-connected Communist head of a USSR institute who had his name put as coauthor on every publication by his vast fleet of scientists and engineers. Then there’s that guy who uses bots to edit demographic pages at Wikipedia, who claims to have meta-authored over 100,000 web pages.
I learned a lot about co-authorship from Science Fiction — Eando Binder, Boris and Arkady Strugatsky, Kornbluth and several others, Niven and Pournelle. I learned more at Caltech, where the prima donnas of Physics form metastable 2-particle and 3-particle resonances, and lab-stabilized clusters. Then grad school, where my department had a prolific, collaborative, and plagiaristic Chairman. Then in Hollywood, where having a good lawyer is more important than having a good accountant, and where the difference between “and” and “&” in co-authorship leads to half the work of the Writers Guild arbitrators. To preserve our marriage, my wife and I shall not write another novel together. Co-authors argue at the interfaces. By definition, no man can win an argument with his wife…
July 19, 2008, 1:54 pm
A Book With 90,000 Authors
By Noam Cohen
Among the unlikelier announcements made at Wikipedia’s conference in Alexandria, Egypt, was the bold claim on Friday that the online encyclopedia was about to make history in print publishing: creating the book with the most credited individual authors ever — approximately 90,000.
[truncated]