INAUGURAL ARTICLE by a Recently Elected Academy Member: Algorithms, complexity, and the sciences
Authors: Christos Papadimitriou
Affiliation: Simons Institute for the Theory of Computing, University of California, Berkeley, CA 94720
Abstract: Algorithms, perhaps together with Moore’s law, compose the engine of the information technology revolution, whereas complexity—the antithesis of algorithms—is one of the deepest realms of mathematical investigation. After introducing the basic concepts of algorithms and complexity, and the fundamental complexity classes P (polynomial time) and NP (nondeterministic polynomial time, or search problems), we discuss briefly the P vs. NP problem. We then focus on certain classes between P and NP which capture important phenomena in the social and life sciences, namely the Nash equilibrium and other equilibria in economics and game theory, and certain processes in population genetics and evolution. Finally, an algorithm known as multiplicative weights update (MWU) provides an algorithmic interpretation of the evolution of allele frequencies in a population under sex and weak selection. All three of these equivalences are rife with domain-specific implications: The concept of Nash equilibrium may be less universal—and therefore less compelling—than has been presumed; selection on gene interactions may entail the maintenance of genetic variation for longer periods than selection on single alleles predicts; whereas MWU can be shown to maximize, for each gene, a convex combination of the gene’s cumulative fitness in the population and the entropy of the allele distribution, an insight that may be pertinent to the maintenance of variation in evolution.

Information technology has inundated and changed our world, as it is transforming the ways we live, work, play, learn, interact, and understand science and the world around us. One driving force behind this deluge is quite obvious: Computer hardware has become cheaper, faster, and more innovative over the past half century, riding as it does on the exponent of Moore’s law (1). Progress in efficient algorithms—methods for solving computational problems in ways that take full advantage of fast hardware—is arguably of even greater importance.

Algorithms have been known since antiquity. In the third century BC, Euclid wrote about his algorithm for finding the greatest common divisor of two integers. The French scholar G. Lamé noted in 1845 (2) that Euclid’s algorithm is efficient, because it terminates after a number of arithmetic operations that grows proportionately to the length of the input—what we call today the number of bits of the two numbers (a short sketch of the algorithm appears below). [In fact, one of the very few works on the subject of algorithms that have been published in PNAS is a 1976 article by Andrew Yao and Donald Knuth, revisiting and refining that analysis (3).] In the ninth century CE, the Arab mathematician Al Khwarizmi codified certain elementary algorithms for adding, dividing, etc., decimal numbers—the precise algorithms we learn today in elementary school. In fact, these simple and powerful algorithms were a major incentive for the eventual adoption of the decimal number system in Europe (ca. 1500 CE), an innovation that helped precipitate a social and scientific revolution comparable in impact to the one we are living in now.

The study of efficient algorithms—algorithms that perform the required tasks within favorable time limits—started in the 1950s, soon after the first computers appeared, and is now a very well-developed mathematical field within computer science.
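To make the discussion of Euclid’s algorithm concrete, here is a minimal sketch in Python (not from the article itself; the function name euclid_gcd is ours). The comment records the standard argument, going back to Lamé’s analysis, for why the number of iterations grows with the bit length of the input.

```python
def euclid_gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace (a, b) by (b, a mod b)
    until the remainder is zero; the last nonzero value is the gcd."""
    while b != 0:
        a, b = b, a % b  # every two iterations at least halve a, so the
                         # loop runs O(log min(a, b)) times -- proportional
                         # to the bit length of the input, as Lame observed
    return a

print(euclid_gcd(252, 198))  # prints 18
```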
By the 1960s, researchers had begun to measure algorithms by the criterion of polynomial time, that is, to consider an algorithm efficient, or satisfactory, if the total number of operations it performs is always bounded from above by a polynomial function (as opposed to an exponential function) of the size of the input. For example, sorting n numbers can be done with about n log n comparisons, whereas discovering the best alignment of two DNA sequences with n nucleotides can take, in the worst case, time proportional to n² (but can be performed in linear time for sequences that do align well); these are both considered “satisfactory” according to this criterion.
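As an illustration of the quadratic bound mentioned above, here is a minimal sketch of the classic dynamic program for global sequence alignment (in the spirit of Needleman–Wunsch); the function name alignment_score and the scoring parameters (match, mismatch, gap) are illustrative assumptions, not values from the article.

```python
def alignment_score(s: str, t: str,
                    match: int = 1, mismatch: int = -1, gap: int = -1) -> int:
    """Global alignment by dynamic programming: fills an
    (len(s)+1) x (len(t)+1) table, hence time proportional to n^2
    for two sequences of length n."""
    n, m = len(s), len(t)
    # score[i][j] = best score aligning the prefixes s[:i] and t[:j]
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap          # s[:i] aligned entirely against gaps
    for j in range(1, m + 1):
        score[0][j] = j * gap          # gaps aligned against t[:j]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if s[i - 1] == t[j - 1]
                                          else mismatch)
            score[i][j] = max(diag,                   # match or substitution
                              score[i - 1][j] + gap,  # gap in t
                              score[i][j - 1] + gap)  # gap in s
    return score[n][m]

print(alignment_score("GATTACA", "GCATGCU"))
```

Restricting the table to a narrow band around the main diagonal is the standard way to recover the near-linear running time mentioned above for sequences that align well.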
Keywords: lens of computation; complexity of equilibria; multiplicative weights update