Evolution of Genetic Health Improvements in Humans Stalls over Last Millennium


Had an arrow in his back not felled the legendary Iceman some 5,300 years ago, he would have likely dropped dead from a heart attack. Written in the DNA of his remains was a propensity for cardiovascular disease.

Heart problems were much more common in the genes of our ancient ancestors than in ours today, according to a new study by the Georgia Institute of Technology, which computationally compared genetic disease factors in modern humans with those of people through the millennia.

Overall, the news from the study is good. Evolution appears, through the ages, to have weeded out genetic influences that promote disease, while promulgating influences that protect from disease.

Charted data clearly illustrate a progressive improvement over the millennia in the genetic foundations of health for nearly all diseases examined. Smaller shapes indicate better overall foundations. The dotted circular line labeled 50% indicates the average modern human disease allele occurrence. (Credit: Georgia Tech / Lachance, Berens, Cooper, Callahan)

Evolutionary Double-take

But for us modern folks, there's also a hint of bad news. That generally healthy trend might have reversed in the last 500 to 1,000 years, meaning that, with the exception of cardiovascular ailments, disease risks found in our genes may be on the rise. For mental health in particular, our genetic underpinnings look markedly worse than those of our ancient forebears.

Though the long-term positive trend looks very clear in the data, it's too early to tell if the initial impression of a shorter-term reversal will hold. Further research in this brand-new field could dismiss it.

"That could well happen," said principal investigator Joe Lachance, an assistant professor in Georgia Tech's School of Biological Sciences. "But it was still perplexing to see a good many of our ancestors' genomes looking considerably healthier than ours do. That wasn't really expected."

Lachance, former postdoctoral assistant Ali Berens, and undergraduate student Taylor Cooper published their results in the journal Human Biology. They hope that by better understanding our evolutionary history, researchers will someday be able to project the genomic health of future human populations, and perhaps their medical needs as well.

Georgia Tech Assistant Professor Joe Lachance (l.) and undergraduate researcher Taylor Cooper (r.) performed the comparative genetic analysis with former Georgia Tech researcher Ali Berens (not shown). Credit: Georgia Tech / Christopher Moore

Dismal Distant Past

Despite what may be a striking recent negative trend, the study's main finding is that genetic risks to health clearly appear to have diminished through the millennia. "That was to be expected because larger populations are better able to purge disease-causing genetic variants," Lachance said.

The researchers scoured DNA records covering thousands of years of human remains along with those of our distant evolutionary cousins, such as Neanderthals, for genetic locations, or "loci," associated with common diseases. "We looked at heart disease, digestive problems, dental health, muscle disorders, psychiatric issues, and some other traits," Cooper said.

After determining that they could computationally compare 3,180 disease loci common to ancients and modern humans, the researchers checked for genetic variants, or "alleles," associated with the likelihood of those diseases, or associated with the protection from them. Nine millennia ago and before that, the genetic underpinnings of the diseases looked dismal.
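The study's actual scoring pipeline is not reproduced in this article, but the bookkeeping it describes, tallying risk-associated versus protective alleles at loci shared between ancient and modern genomes, can be sketched roughly as follows. The locus IDs, genotypes, and risk alleles below are invented for illustration.

```python
# Illustrative sketch, not the study's code: score a genotype by the share of
# risk-associated alleles it carries across a set of shared disease loci.
# Locus IDs, genotypes, and risk alleles are hypothetical.

def genetic_risk_index(genotype, risk_alleles):
    """Fraction of examined allele copies that match the risk-associated variant.

    genotype     : dict mapping locus -> tuple of two alleles, e.g. ("A", "G")
    risk_alleles : dict mapping locus -> allele associated with elevated risk
    """
    counted = 0
    risky = 0
    for locus, allele_pair in genotype.items():
        if locus not in risk_alleles:
            continue  # locus not shared between the data sets; skip it
        counted += len(allele_pair)
        risky += sum(1 for allele in allele_pair if allele == risk_alleles[locus])
    return risky / counted if counted else float("nan")

# Hypothetical example: heterozygous for the risk allele at one of two loci.
risk_alleles = {"rs0001": "A", "rs0002": "T"}
individual = {"rs0001": ("A", "G"), "rs0002": ("C", "C")}
print(genetic_risk_index(individual, risk_alleles))  # 0.25
```

Applied to each ancient and modern genome over the same set of shared loci, an index like this gives a crude basis for the kind of comparison the researchers describe.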

"Humans way back then, and Neanderthals and Denisovans -- they're our distant evolutionary cousins -- they appear to have had a lot more alleles that promoted disease than we do," Lachance said. "The genetic risks for cardiovascular disease were particularly troubling in the past."

A young-looking but sick and weak Neanderthal (CC BY-NC-ND 2.0)

Crumbling Health Genetics?

As millennia marched on, overall genetic health foundations got much better, the study's results showed. The frequency of alleles that promote disease dropped while protective alleles rose at a steady clip.

Then again, there's that nagging initial impression in the study's data that, for a few centuries now, things may have gone off track. "Our genetic risk was on a downward trend, but in the last 500 or 1,000 years, our lifestyles and environments changed," Lachance said.

This is speculation, but perhaps better food, shelter, clothing, and medicine have made humans less susceptible to disease alleles, so having them in our DNA is no longer as likely to kill us before we reproduce and pass them on.


A Grain of Data Salt

The improvement over millennia in the genetic foundations of health, seen in the analysis of select genes from 147 ancient individuals, stands out so clearly that the researchers have had to wonder whether the reversal in recent centuries, which seems so inconsistent with that long-term trend, might simply be a coincidence of the initial data set. They would like to analyze more data sets before drawing firm conclusions about the apparent reversal.

"We'd like to see more studies done on samples taken from humans who lived from 400 years ago to now," Cooper said.

They would also like to do more research on how the genetic health of ancient individuals compares with that of modern humans. "We may be overestimating the genetic health of previous hominins (humans and evolutionary cousins including Neanderthals)," Lachance said, "and we may need to shift estimates of hereditary disease risks for them over, which would mean they all had a lot worse health than we currently think."

Until then, the researchers are taking the apparent slump in the genetic bedrock of health in recent centuries with a grain of salt. But that does not change the main observation.

"The trend shows clear long-term reduction over millennia in ancient genetic health risks," said Berens, a former postdoctoral assistant. Viewed in graphs, the improvement is eye-popping.

Modern people have much worse genetic likelihoods for psychiatric disorders, as well as for headaches and migraines. (CC0)

More Psychiatric Disorders

If the initial finding on the reversal does eventually hold up, it will mean that people who lived in the window of time from 2,000 to 6,000 years ago appear to have had, on the whole, DNA less prone to promoting disease than we do today, particularly for mental health. We moderns racked up much worse genetic likelihoods for depression, bipolar disorder, and schizophrenia.

"We did look genetically better on average for cardiovascular and dental health," Lachance said. "But at every time interval we examined, ancient individuals looked healthier for psychiatric disorders, and we looked worse."

Add to that a higher potential for migraine headaches.

The Iceman Cometh

Drilling down in the data leads to individual genetic health profiles of famous ancients like the Altai Neanderthal, the Denisovan, and "Ötzi" the Iceman. Ötzi, like us, was Homo sapiens.

Along with his dicey heart, the Iceman probably contended with lactose intolerance and allergies. A propensity for both was also written in his DNA, but so was a likelihood of strapping muscles and enviable levelheadedness, making him a potentially formidable hunter or warrior.

Reproduction of Ötzi, South Tyrol Museum of Archaeology (CC BY-SA 3.0)

With his bow, recovered near his cadaver on a high mountain pass, Ötzi could have easily slain prey or foe at 100 paces. But the bow was unfinished and unstrung one fateful day around 3,300 BC, leaving the Iceman with little defense against the enemy archer who punctured an artery near his left shoulder blade.

The Iceman probably bled to death within minutes. Eventually, snow entombed him, and he lay frozen in the ice until a summer glacier melt in 1991 re-exposed him to view. Two German hikers came upon his mummified corpse that September on a ridge above Austria's Ötztal valley, which gave the popular press fodder to nickname him "Ötzi."


DNA Tatters

The near-ideal condition of his remains, including his genetic material, has proven a treasure trove for scientific study. But Ötzi is an extraordinary exception.

Usually, flesh-bare, dry bones or fragments are all that is left of ancient hominins or even just people who died a century ago. "Ancient DNA samples may not contain complete genomic information, and that can limit comparison possibilities, so we have to rely on mathematical models to account for the gaps," Berens said.

Collecting and analyzing more DNA samples from ancient individuals will require vigorous effort by researchers across disciplines. But the added data will give scientists a better idea of where the genetic underpinnings of human health came from, and where they're headed for our great-grandchildren.

Top image: Reconstruction of a Neanderthal in the Neanderthal Museum, Mettmann, Germany.

The article, originally titled ‘You and some 'cavemen' get a genetic checkup’ and written by Ben Brumfield, was first published on ScienceDaily.

Source: Georgia Institute of Technology. "You and some 'cavemen' get a genetic checkup: Evolution has improved upon the genetic foundations of human health for millennia. But could that have recently gone in reverse?" ScienceDaily. ScienceDaily, 23 August 2017. www.sciencedaily.com/releases/2017/08/170823140650.htm



Evolution and the Origins of Disease

The principles of evolution by natural selection are finally beginning to inform medicine. By Randolph M. Nesse and George C. Williams.

Thoughtful contemplation of the human body elicits awe--in equal measure with perplexity. The eye, for instance, has long been an object of wonder, with the clear, living tissue of the cornea curving just the right amount, the iris adjusting to brightness and the lens to distance, so that the optimal quantity of light focuses exactly on the surface of the retina. Admiration of such apparent perfection soon gives way, however, to consternation. Contrary to any sensible design, blood vessels and nerves traverse the inside of the retina, creating a blind spot at their point of exit.

The body is a bundle of such jarring contradictions. For each exquisite heart valve, we have a wisdom tooth. Strands of DNA direct the development of the 10 trillion cells that make up a human adult but then permit his or her steady deterioration and eventual death. Our immune system can identify and destroy a million kinds of foreign matter, yet many bacteria can still kill us. These contradictions make it appear as if the body was designed by a team of superb engineers with occasional interventions by Rube Goldberg.

In fact, such seeming incongruities make sense but only when we investigate the origins of the body's vulnerabilities while keeping in mind the wise words of distinguished geneticist Theodosius Dobzhansky: "Nothing in biology makes sense except in the light of evolution." Evolutionary biology is, of course, the scientific foundation for all biology, and biology is the foundation for all medicine. To a surprising degree, however, evolutionary biology is just now being recognized as a basic medical science. The enterprise of studying medical problems in an evolutionary context has been termed Darwinian medicine. Most medical research tries to explain the causes of an individual's disease and seeks therapies to cure or relieve deleterious conditions. These efforts are traditionally based on consideration of proximate issues, the straightforward study of the body's anatomic and physiological mechanisms as they currently exist. In contrast, Darwinian medicine asks why the body is designed in a way that makes us all vulnerable to problems like cancer, atherosclerosis, depression and choking, thus offering a broader context in which to conduct research.

The evolutionary explanations for the body's flaws fall into surprisingly few categories. First, some discomforting conditions, such as pain, fever, cough, vomiting and anxiety, are actually neither diseases nor design defects but rather are evolved defenses. Second, conflicts with other organisms--Escherichia coli or crocodiles, for instance--are a fact of life. Third, some circumstances, such as the ready availability of dietary fats, are so recent that natural selection has not yet had a chance to deal with them. Fourth, the body may fall victim to trade-offs between a trait's benefits and its costs; a textbook example is the sickle cell gene, which also protects against malaria. Finally, the process of natural selection is constrained in ways that leave us with suboptimal design features, as in the case of the mammalian eye.

Perhaps the most obviously useful defense mechanism is coughing; people who cannot clear foreign matter from their lungs are likely to die from pneumonia. The capacity for pain is also certainly beneficial. The rare individuals who cannot feel pain fail even to experience discomfort from staying in the same position for long periods. Their unnatural stillness impairs the blood supply to their joints, which then deteriorate. Such pain-free people usually die by early adulthood from tissue damage and infections. Cough or pain is usually interpreted as disease or trauma but is actually part of the solution rather than the problem. These defensive capabilities, shaped by natural selection, are kept in reserve until needed.

Less widely recognized as defenses are fever, nausea, vomiting, diarrhea, anxiety, fatigue, sneezing and inflammation. Even some physicians remain unaware of fever's utility. No mere increase in metabolic rate, fever is a carefully regulated rise in the set point of the body's thermostat. The higher body temperature facilitates the destruction of pathogens. Work by Matthew J. Kluger of the Lovelace Institute in Albuquerque, N.M., has shown that even cold-blooded lizards, when infected, move to warmer places until their bodies are several degrees above their usual temperature. If prevented from moving to the warm part of their cage, they are at increased risk of death from the infection. In a similar study by Evelyn Satinoff of the University of Delaware, elderly rats, who can no longer achieve the high fevers of their younger lab companions, also instinctively sought hotter environments when challenged by infection.

A reduced level of iron in the blood is another misunderstood defense mechanism. People suffering from chronic infection often have decreased levels of blood iron. Although such low iron is sometimes blamed for the illness, it actually is a protective response: during infection, iron is sequestered in the liver, which prevents invading bacteria from getting adequate supplies of this vital element.

Morning sickness has long been considered an unfortunate side effect of pregnancy. The nausea, however, coincides with the period of rapid tissue differentiation of the fetus, when development is most vulnerable to interference by toxins. And nauseated women tend to restrict their intake of strong-tasting, potentially harmful substances. These observations led independent researcher Margie Profet to hypothesize that the nausea of pregnancy is an adaptation whereby the mother protects the fetus from exposure to toxins. Profet tested this idea by examining pregnancy outcomes. Sure enough, women with more nausea were less likely to suffer miscarriages. (This evidence supports the hypothesis but is hardly conclusive. If Profet is correct, further research should discover that pregnant females of many species show changes in food preferences. Her theory also predicts an increase in birth defects among offspring of women who have little or no morning sickness and thus eat a wider variety of foods during pregnancy.)

Another common condition, anxiety, obviously originated as a defense in dangerous situations by promoting escape and avoidance. A 1992 study by Lee A. Dugatkin of the University of Louisville evaluated the benefits of fear in guppies. He grouped them as timid, ordinary or bold, depending on their reaction to the presence of smallmouth bass. The timid hid, the ordinary simply swam away, and the bold maintained their ground and eyed the bass. Each guppy group was then left alone in a tank with a bass. After 60 hours, 40 percent of the timid guppies had survived, as had only 15 percent of the ordinary fish. The entire complement of bold guppies, on the other hand, wound up aiding the transmission of bass genes rather than their own.

Selection for genes promoting anxious behaviors implies that there should be people who experience too much anxiety, and indeed there are. There should also be hypophobic individuals who have insufficient anxiety, either because of genetic tendencies or antianxiety drugs. The exact nature and frequency of such a syndrome is an open question, as few people come to psychiatrists complaining of insufficient apprehension. But if sought, the pathologically nonanxious may be found in emergency rooms, jails and unemployment lines.

The utility of common and unpleasant conditions such as diarrhea, fever and anxiety is not intuitive. If natural selection shapes the mechanisms that regulate defensive responses, how can people get away with using drugs to block these defenses without doing their bodies obvious harm? Part of the answer is that we do, in fact, sometimes do ourselves a disservice by disrupting defenses.

Herbert L. DuPont of the University of Texas at Houston and Richard B. Hornick of Orlando Regional Medical Center studied the diarrhea caused by Shigella infection and found that people who took antidiarrhea drugs stayed sick longer and were more likely to have complications than those who took a placebo. In another example, Eugene D. Weinberg of Indiana University has documented that well-intentioned attempts to correct perceived iron deficiencies have led to increases in infectious disease, especially amebiasis, in parts of Africa. Although the iron in most oral supplements is unlikely to make much difference in otherwise healthy people with everyday infections, it can severely harm those who are infected and malnourished. Such people cannot make enough protein to bind the iron, leaving it free for use by infectious agents.

On the morning-sickness front, an antinausea drug was recently blamed for birth defects. It appears that no consideration was given to the possibility that the drug itself might be harmless to the fetus but could still be associated with birth defects, by interfering with the mother's defensive nausea.

Another obstacle to perceiving the benefits of defenses arises from the observation that many individuals regularly experience seemingly worthless reactions of anxiety, pain, fever, diarrhea or nausea. The explanation requires an analysis of the regulation of defensive responses in terms of signal-detection theory. A circulating toxin may come from something in the stomach. An organism can expel it by vomiting, but only at a price. The cost of a false alarm--vomiting when no toxin is truly present--is only a few calories. But the penalty for a single missed authentic alarm--failure to vomit when confronted with a toxin--may be death.

Natural selection therefore tends to shape regulation mechanisms with hair triggers, following what we call the smoke-detector principle. A smoke alarm that will reliably wake a sleeping family in the event of any fire will necessarily give a false alarm every time the toast burns. The price of the human body's numerous "smoke alarms" is much suffering that is completely normal but in most instances unnecessary. This principle also explains why blocking defenses is so often free of tragic consequences. Because most defensive reactions occur in response to insignificant threats, interference is usually harmless; the vast majority of alarms that are stopped by removing the battery from the smoke alarm are false ones, so this strategy may seem reasonable. Until, that is, a real fire occurs.
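A toy expected-cost calculation, with invented numbers, makes the smoke-detector logic concrete: even when nearly every alarm is false, responding every time can still be the cheaper strategy.

```python
# Toy signal-detection calculation; the numbers are invented for illustration.
# Question: should the body "vomit" whenever it detects a possible toxin?

p_toxin = 0.01              # chance a given alarm reflects a real toxin
cost_false_alarm = 1.0      # cost of vomiting unnecessarily (a few calories)
cost_missed_toxin = 1000.0  # cost of failing to expel a real toxin

# Expected cost per alarm of the two extreme strategies:
always_respond = cost_false_alarm * (1 - p_toxin)  # pay a small price very often
never_respond = cost_missed_toxin * p_toxin        # pay a huge price occasionally

print(f"always respond: {always_respond:.2f}, never respond: {never_respond:.2f}")
# always respond: 0.99, never respond: 10.00. The hair trigger wins even though
# 99% of the alarms it answers are false.
```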

Conflicts with Other Organisms

Natural selection is unable to provide us with perfect protection against all pathogens, because they tend to evolve much faster than humans do. E. coli, for example, with its rapid rates of reproduction, has as much opportunity for mutation and selection in one day as humanity gets in a millennium. And our defenses, whether natural or artificial, make for potent selection forces. Pathogens either quickly evolve a counterdefense or become extinct. Amherst College biologist Paul W. Ewald has suggested classifying phenomena associated with infection according to whether they benefit the host, the pathogen, both or neither. Consider the runny nose associated with a cold. Nasal mucous secretion could expel intruders, speed the pathogen's transmission to new hosts or both [see "The Evolution of Virulence," by Paul W. Ewald; Scientific American, April 1993]. Answers could come from studies examining whether blocking nasal secretions shortens or prolongs illness, but few such studies have been done.

Humanity won huge battles in the war against pathogens with the development of antibiotics and vaccines. Our victories were so rapid and seemingly complete that in 1969 U.S. Surgeon General William H. Stewart said that it was "time to close the book on infectious disease." But the enemy, and the power of natural selection, had been underestimated. The sober reality is that pathogens apparently can adapt to every chemical researchers develop. ("The war has been won," one scientist more recently quipped. "By the other side.")

Antibiotic resistance is a classic demonstration of natural selection. Bacteria that happen to have genes that allow them to prosper despite the presence of an antibiotic reproduce faster than others, and so the genes that confer resistance spread quickly. As shown by Nobel laureate Joshua Lederberg of the Rockefeller University, they can even jump to different species of bacteria, borne on bits of infectious DNA. Today some strains of tuberculosis in New York City are resistant to all three main antibiotic treatments; patients with those strains have no better chance of surviving than did TB patients a century ago. Stephen S. Morse of Columbia University notes that the multidrug-resistant strain that has spread throughout the East Coast may have originated in a homeless shelter across the street from Columbia-Presbyterian Medical Center. Such a phenomenon would indeed be predicted in an environment where fierce selection pressure quickly weeds out less hardy strains. The surviving bacilli have been bred for resistance.

Many people, including some physicians and scientists, still believe the outdated theory that pathogens necessarily become benign after long association with hosts. Superficially, this makes sense. An organism that kills rapidly may never get to a new host, so natural selection would seem to favor lower virulence. Syphilis, for instance, was a highly virulent disease when it first arrived in Europe, but as the centuries passed it became steadily more mild. The virulence of a pathogen is, however, a life history trait that can increase as well as decrease, depending on which option is more advantageous to its genes.

For agents of disease that are spread directly from person to person, low virulence tends to be beneficial, as it allows the host to remain active and in contact with other potential hosts. But some diseases, like malaria, are transmitted just as well--or better--by the incapacitated. For such pathogens, which usually rely on intermediate vectors like mosquitoes, high virulence can give a selective advantage. This principle has direct implications for infection control in hospitals, where health care workers' hands can be vectors that lead to selection for more virulent strains.

In the case of cholera, public water supplies play the mosquitoes' role. When water for drinking and bathing is contaminated by waste from immobilized patients, selection tends to increase virulence, because more diarrhea enhances the spread of the organism even if individual hosts quickly die. But, as Ewald has shown, when sanitation improves, selection acts against classical Vibrio cholerae bacteria in favor of the more benign El Tor biotype. Under these conditions, a dead host is a dead end. But a less ill and more mobile host, able to infect many others over a much longer time, is an effective vehicle for a pathogen of lower virulence. In another example, better sanitation leads to displacement of the aggressive Shigella flexneri by the more benign S. sonnei.

New Environments, New Threats

Such considerations may be relevant for public policy. Evolutionary theory predicts that clean needles and the encouragement of safe sex will do more than save numerous individuals from HIV infection. If humanity's behavior itself slows HIV transmission rates, strains that do not soon kill their hosts have the long-term survival advantage over the more virulent viruses that then die with their hosts, denied the opportunity to spread. Our collective choices can change the very nature of HIV.

Conflicts with other organisms are not limited to pathogens. In times past, humans were at great risk from predators looking for a meal. Except in a few places, large carnivores now pose no threat to humans. People are in more danger today from smaller organisms' defenses, such as the venoms of spiders and snakes. Ironically, our fears of small creatures, in the form of phobias, probably cause more harm than any interactions with those organisms do. Far more dangerous than predators or poisoners are other members of our own species. We attack each other not to get meat but to get mates, territory and other resources. Violent conflicts between individuals are overwhelmingly between young men in competition, and they give rise to organizations that advance these aims. Armies, again usually composed of young men, serve similar objectives, at huge cost.

Even the most intimate human relationships give rise to conflicts having medical implications. The reproductive interests of a mother and her infant, for instance, may seem congruent at first but soon diverge. As noted by biologist Robert L. Trivers in a now classic 1974 paper, when her child is a few years old, the mother's genetic interests may be best served by becoming pregnant again, whereas her offspring benefits from continuing to nurse. Even in the womb there is contention. From the mother's vantage point, the optimal size of a fetus is a bit smaller than that which would best serve the fetus and the father. This discord, according to David Haig of Harvard University, gives rise to an arms race between fetus and mother over her levels of blood pressure and blood sugar, sometimes resulting in hypertension and diabetes during pregnancy.

Making rounds in any modern hospital provides sad testimony to the prevalence of diseases humanity has brought on itself. Heart attacks, for example, result mainly from atherosclerosis, a problem that became widespread only in this century and that remains rare among hunter-gatherers. Epidemiological research furnishes the information that should help us prevent heart attacks: limit fat intake, eat lots of vegetables, and exercise hard each day. But hamburger chains proliferate, diet foods languish on the shelves, and exercise machines serve as expensive clothing hangers throughout the land. The proportion of overweight Americans is one third and rising. We all know what is good for us. Why do so many of us continue to make unhealthy choices?

Our poor decisions about diet and exercise are made by brains shaped to cope with an environment substantially different from the one our species now inhabits. On the African savanna, where the modern human design was fine-tuned, fat, salt and sugar were scarce and precious. Individuals who had a tendency to consume large amounts of fat when given the rare opportunity had a selective advantage. They were more likely to survive famines that killed their thinner companions. And we, their descendants, still carry those urges for foodstuffs that today are anything but scarce. These evolved desires--inflamed by advertisements from competing food corporations that themselves survive by selling us more of whatever we want to buy--easily defeat our intellect and willpower. How ironic that humanity worked for centuries to create environments that are almost literally flowing with milk and honey, only to see our success responsible for much modern disease and untimely death.

Increasingly, people also have easy access to many kinds of drugs, especially alcohol and tobacco, that are responsible for a huge proportion of disease, health care costs and premature death. Although individuals have always used psychoactive substances, widespread problems materialized only following another environmental novelty: the ready availability of concentrated drugs and new, direct routes of administration, especially injection. Most of these substances, including nicotine, cocaine and opium, are products of natural selection that evolved to protect plants from insects. Because humans share a common evolutionary heritage with insects, many of these substances also affect our nervous system.

This perspective suggests that it is not just defective individuals or disordered societies that are vulnerable to the dangers of psychoactive drugs; all of us are susceptible because drugs and our biochemistry have a long history of interaction. Understanding the details of that interaction, which is the focus of much current research from both a proximate and evolutionary perspective, may well lead to better treatments for addiction.

The relatively recent and rapid increase in breast cancer must be the result in large part of changing environments and ways of life, with only a few cases resulting solely from genetic abnormalities. Boyd Eaton and his colleagues at Emory University reported that the rate of breast cancer in today's "nonmodern" societies is only a tiny fraction of that in the U.S. They hypothesize that the amount of time between menarche and first pregnancy is a crucial risk factor, as is the related issue of total lifetime number of menstrual cycles. In hunter-gatherers, menarche occurs at about age 15 or later, followed within a few years by pregnancy and two or three years of nursing, then by another pregnancy soon after. Only between the end of nursing and the next pregnancy will the woman menstruate and thus experience the high levels of hormones that may adversely affect breast cells.

In modern societies, in contrast, menarche occurs at age 12 or 13--probably at least in part because of a fat intake sufficient to allow an extremely young woman to nourish a fetus--and the first pregnancy may be decades later or never. A female hunter-gatherer may have a total of 150 menstrual cycles, whereas the average woman in modern societies has 400 or more. Although few would suggest that women should become pregnant in their teens to prevent breast cancer later, early administration of a burst of hormones to simulate pregnancy may reduce the risk. Trials to test this idea are now under way at the University of California at San Diego.

Trade-offs and Constraints

Compromise is inherent in every adaptation. Arm bones three times their current thickness would almost never break, but Homo sapiens would be lumbering creatures on a never-ending quest for calcium. More sensitive ears might sometimes be useful, but we would be distracted by the noise of air molecules banging into our eardrums.

Such trade-offs also exist at the genetic level. If a mutation offers a net reproductive advantage, it will tend to increase in frequency in a population even if it causes vulnerability to disease. People with two copies of the sickle cell gene, for example, suffer terrible pain and die young. People with two copies of the "normal" gene are at high risk of death from malaria. But individuals with one of each are protected from both malaria and sickle cell disease. Where malaria is prevalent, such people are fitter, in the Darwinian sense, than members of either other group. So even though the sickle cell gene causes disease, it is selected for where malaria persists. Which is the "healthy" allele in this environment? The question has no answer. There is no one normal human genome--there are only genes.
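In standard population-genetics terms, which the essay leaves implicit, this heterozygote advantage settles at an equilibrium allele frequency set by the relative fitness costs of the two homozygotes. A minimal sketch with purely illustrative selection coefficients:

```python
# Minimal sketch of heterozygote advantage (balancing selection), with
# illustrative (not measured) fitness values:
#   w_AA = 1 - s  (no sickle allele: vulnerable to malaria)
#   w_AS = 1      (one copy: protected from malaria, no sickle cell disease)
#   w_SS = 1 - t  (two copies: sickle cell disease)

def equilibrium_sickle_frequency(s, t):
    """Equilibrium frequency of the sickle allele under overdominance: s / (s + t)."""
    return s / (s + t)

def next_generation(q, s, t):
    """One generation of selection on sickle-allele frequency q, random mating."""
    p = 1.0 - q
    w_aa, w_as, w_ss = 1.0 - s, 1.0, 1.0 - t
    mean_w = p * p * w_aa + 2 * p * q * w_as + q * q * w_ss
    return (p * q * w_as + q * q * w_ss) / mean_w

s, t = 0.15, 0.80  # hypothetical fitness costs: malaria vs. sickle cell disease
q = 0.01           # start the sickle allele at low frequency
for _ in range(200):
    q = next_generation(q, s, t)

# The simulated frequency converges on the analytical equilibrium (about 0.16 here).
print(round(q, 3), round(equilibrium_sickle_frequency(s, t), 3))
```

With these assumed costs the sickle allele stabilizes at roughly 16 percent and persists indefinitely, despite being harmful in homozygotes, which is the point of the paragraph above.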

Many other genes that cause disease must also have offered benefits, at least in some environments, or they would not be so common. Because cystic fibrosis (CF) kills one out of 2,500 Caucasians, the responsible genes would appear to be at great risk of being eliminated from the gene pool. And yet they endure. For years, researchers mused that the CF gene, like the sickle cell gene, probably conferred some advantage. Recently a study by Gerald B. Pier of Harvard Medical School and his colleagues gave substance to this informed speculation: having one copy of the CF gene appears to decrease the chances of the bearer acquiring a typhoid fever infection, which once had a 15 percent mortality.

Aging may be the ultimate example of a genetic trade-off. In 1957 one of us (Williams) suggested that genes that cause aging and eventual death could nonetheless be selected for if they had other effects that gave an advantage in youth, when the force of selection is stronger. For instance, a hypothetical gene that governs calcium metabolism so that bones heal quickly but that also happens to cause the steady deposition of calcium in arterial walls might well be selected for even though it kills some older people. The influence of such pleiotropic genes (those having multiple effects) has been seen in fruit flies and flour beetles, but no specific example has yet been found in humans. Gout, however, is of particular interest, because it arises when a potent antioxidant, uric acid, forms crystals that precipitate out of fluid in joints. Antioxidants have antiaging effects, and plasma levels of uric acid in different species of primates are closely correlated with average adult life span. Perhaps high levels of uric acid benefit most humans by slowing tissue aging, while a few pay the price with gout.

Other examples are more likely to contribute to more rapid aging. For instance, strong immune defenses protect us from infection but also inflict continuous, low-level tissue damage. It is also possible, of course, that most genes that cause aging have no benefit at any age--they simply never decreased reproductive fitness enough in the natural environment to be selected against. Nevertheless, over the next decade research will surely identify specific genes that accelerate senescence, and researchers will soon thereafter gain the means to interfere with their actions or even change them. Before we tinker, however, we should determine whether these actions have benefits early in life.

Because evolution can take place only in the direction of time's arrow, an organism's design is constrained by structures already in place. As noted, the vertebrate eye is arranged backward. The squid eye, in contrast, is free from this defect, with vessels and nerves running on the outside, penetrating where necessary and pinning down the retina so it cannot detach. The human eye's flaw results from simple bad luck; hundreds of millions of years ago, the layer of cells that happened to become sensitive to light in our ancestors was positioned differently from the corresponding layer in ancestors of squids. The two designs evolved along separate tracks, and there is no going back.

Such path dependence also explains why the simple act of swallowing can be life-threatening. Our respiratory and food passages intersect because in an early lungfish ancestor the air opening for breathing at the surface was understandably located at the top of the snout and led into a common space shared by the food passageway. Because natural selection cannot start from scratch, humans are stuck with the possibility that food will clog the opening to our lungs.

The path of natural selection can even lead to a potentially fatal cul-de-sac, as in the case of the appendix, that vestige of a cavity that our ancestors employed in digestion. Because it no longer performs that function, and as it can kill when infected, the expectation might be that natural selection would have eliminated it. The reality is more complex. Appendicitis results when inflammation causes swelling, which compresses the artery supplying blood to the appendix. Blood flow protects against bacterial growth, so any reduction aids infection, which creates more swelling. If the blood supply is cut off completely, bacteria have free rein until the appendix bursts. A slender appendix is especially susceptible to this chain of events, so appendicitis may, paradoxically, apply the selective pressure that maintains a large appendix. Far from arguing that everything in the body is perfect, an evolutionary analysis reveals that we live with some very unfortunate legacies and that some vulnerabilities may even be actively maintained by the force of natural selection.

Evolution of Darwinian Medicine

Despite the power of the Darwinian paradigm, evolutionary biology is just now being recognized as a basic science essential for medicine. Most diseases decrease fitness, so it would seem that natural selection could explain only health, not disease. A Darwinian approach makes sense only when the object of explanation is changed from diseases to the traits that make us vulnerable to diseases. The assumption that natural selection maximizes health also is incorrect--selection maximizes the reproductive success of genes. Those genes that make bodies having superior reproductive success will become more common, even if they compromise the individual's health in the end.

Finally, history and misunderstanding have presented obstacles to the acceptance of Darwinian medicine. An evolutionary approach to functional analysis can appear akin to naive teleology or vitalism, errors banished only recently, and with great effort, from medical thinking. And, of course, whenever evolution and medicine are mentioned together, the specter of eugenics arises. Discoveries made through a Darwinian view of how all human bodies are alike in their vulnerability to disease will offer great benefits for individuals, but such insights do not imply that we can or should make any attempt to improve the species. If anything, this approach cautions that apparent genetic defects may have unrecognized adaptive significance, that a single "normal" genome is nonexistent and that notions of "normality" tend to be simplistic.

The systematic application of evolutionary biology to medicine is a new enterprise. Like biochemistry at the beginning of this century, Darwinian medicine very likely will need to develop in several incubators before it can prove its power and utility. If it must progress only from the work of scholars without funding to gather data to test their ideas, it will take decades for the field to mature. Departments of evolutionary biology in medical schools would accelerate the process, but for the most part they do not yet exist. If funding agencies had review panels with evolutionary expertise, research would develop faster, but such panels remain to be created. We expect that they will.

The evolutionary viewpoint provides a deep connection between the states of disease and normal functioning and can integrate disparate avenues of medical research as well as suggest fresh and important areas of inquiry. Its utility and power will ultimately lead to recognition of evolutionary biology as a basic medical science.

RANDOLPH M. NESSE and GEORGE C. WILLIAMS are the authors of the 1994 book Why We Get Sick: The New Science of Darwinian Medicine. Nesse received his medical degree from the University of Michigan Medical School in 1974. He is now professor of psychiatry at that institution and is director of the Evolution and Human Adaptation Program at the university's Institute for Social Research. Williams received his doctorate in 1955 from the University of California, Los Angeles, and quickly became one of the world's foremost evolutionary theorists. A member of the National Academy of Sciences, he is professor emeritus of ecology and evolution at the State University of New York at Stony Brook and edits the Quarterly Review of Biology.


Have human societies evolved? Evidence from history and pre-history

I ask whether social evolutionary theories found in sociology, archaeology, and anthropology are useful in explaining human development from the Stone Age to the present day. My data are partly derived from the four volumes of The Sources of Social Power, but I add statistical data on the growth of complexity and power in human groups. I distinguish three levels of evolutionary theory. The first level offers a minimalist definition of evolution in terms of social groups responding and adapting to changes in their social and natural environment. This is acceptable but trivial. The hard part is to elucidate what kinds of response are drawn from what changes, and all sociology shares in this difficulty. This model also tends to over-emphasize external pressures and neglect human inventiveness. The second level of theory is “multilineal” evolution in which various paths of endogenous development, aided by diffusion of practices between societies, dominate the historical and pre-historical record. This is acceptable as a model applied to some times, places, and practices, but when applied more generally it slides into a multi-causal analysis that is also conventional in the social sciences. The third level is a theory of general evolution for the entire human experience. Here materialist theories are dominant but they are stymied by their neglect of ideological, military, and political power relations. There is no acceptable theory of general social evolution. Thus the contribution of social evolutionary theory to the social sciences has been limited.



How the language you speak aligns to your genetic origins and may impact research on your health

Fig. 1: Population structure and genetic affinities of South-Eastern Bantu-speaking (SEB) groups from South Africa correspond to both linguistic phylogeny and geographic distribution. Credit: Nature Communications (2021). DOI: 10.1038/s41467-021-22207-y

A new study challenges the presumption that all South-Eastern-Bantu speaking groups are a single genetic entity.

The South-Eastern-Bantu (SEB) language family includes isiZulu, isiXhosa, siSwati, Xitsonga, Tshivenda, Sepedi, Sesotho and Setswana.

Almost 80% of South Africans speak one of the SEB family languages as their first language. Their origins can be traced to farmers of West-Central Africa whose descendants, over the past two millennia, spread south of the equator and finally into Southern Africa.

Since then, varying degrees of sedentism [the practice of living in one place for a long time], population movements and interaction with Khoe and San communities, as well as people speaking other SEB languages, ultimately generated what are today distinct Southern African languages such as isiZulu, isiXhosa and Sesotho.

Despite these linguistic differences, these groups are treated mostly as a single group in genetic studies.

Understanding genetic diversity in a population is critical to the success of disease genetic studies. If two genetically distinct populations are treated as one, the methods normally used to find disease genes could become error prone.

Consideration of these genetic differences is critical to providing a reliable understanding of the genetics of complex diseases, such as diabetes and hypertension, in South Africans.

Dr. Dhriti Sengupta and Dr. Ananyo Choudhury in the Sydney Brenner Institute for Molecular Bioscience (SBIMB) at Wits University were joint lead authors of the paper published in Nature Communications on 7 April 2021.

The study comprised a multidisciplinary team of geneticists, bioinformaticians, linguists, historians and archaeologists from Wits University (Michèle Ramsay, Scott Hazelhurst, Shaun Aron and Gavin Whitelaw), the University of Limpopo, and partners in Belgium, Sweden and Switzerland.

"South Eastern Bantu-speakers have a clear linguistic division—they speak more than nine distinct languages—and their geography is clear: some of the groups are found more frequently in the north, some in central, and some in southern Africa. Yet despite these characteristics, the SEB groups have so far been treated as a single genetic entity," says Choudhury.

The study found that SEB speaking groups are too different to be treated as a single genetic unit.

"So if you are treating say, Tsonga and Xhosa, as the same population—as was often done until now—you might get a completely wrong gene implicated for a disease," says Sengupta.

The study, titled "Genetic substructure and complex demographic history of South African Bantu speakers," aimed to find out whether the SEB speakers are indeed a single genetic entity or if they have enough genetic differences to be grouped into smaller units.

Genetic data from more than 5000 participants speaking eight different southern African languages were generated and analyzed.

These languages are isiZulu, isiXhosa, siSwati, Xitsonga, Tshivenda, Sepedi, Sesotho and Setswana.

Participants were recruited from research sites in Soweto in Gauteng, Agincourt in Mpumalanga, and Dikgale in Limpopo province.

Genetic differences reflect geography, language and history

The study detected major variations in the genetic contribution from the Khoe and San into SEB-speaking groups: some groups have received a substantial genetic influx from Khoe and San people, while others have had very little genetic exchange with these groups.

This variation ranged on average from about 2% in Tsonga to more than 20% in Xhosa and Tswana.

This suggests that SEB speaking groups are too different to be treated as a single genetic unit.

"The study showed that there could be substantial errors in disease gene discovery and disease risk estimation if the differences between South-Eastern-Bantu speaking groups are not taken into consideration," says Sengupta.

The genetic data also show major differences in the history of these groups over the last 1000 years. Genetic exchanges were found to have occurred at different points in time, suggesting a unique journey of each group across the southern African landscape over the past millennium.

These genetic differences are strong enough to impact the outcomes of biomedical genetic research.

Sengupta emphasized, however, that ethnolinguistic identities are complex and cautioned against extrapolating broad conclusions from the findings regarding genetic differences.

"Although genetic data showed differences [separation] between groups, there was also a substantial amount of overlap [similarity]. So while findings regarding differences could have huge value from a research perspective, they should not be generalized," she says.

A genetic blueprint for future health

A common approach to identify if a genetic variant causes or predisposes us to a disease is to take a set of individuals with a disease (e.g., high blood pressure or diabetes) and another set of healthy individuals without the disease, and then compare the occurrence of many genetic variants in the two sets.

If a variant shows a notable frequency difference between the two sets, it is assumed that the genetic variant could be associated with the disease.
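A minimal sketch of that comparison, using invented allele counts and a chi-square test on the resulting 2x2 table; the study's own statistical pipeline is considerably more elaborate.

```python
# Sketch of the case-control comparison described above, with invented counts.
# Each participant contributes two alleles, so 500 cases give 1,000 allele calls.

from scipy.stats import chi2_contingency

#            risk allele, other allele
cases    = [240, 760]    # allele counts among 500 hypothetical cases
controls = [180, 820]    # allele counts among 500 hypothetical controls

chi2, p_value, dof, expected = chi2_contingency([cases, controls])
print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}")
# A small p-value suggests the variant's frequency differs between the two sets,
# which is taken as evidence of association, provided the two sets are otherwise
# genetically similar.
```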

"However, this approach depends entirely on the underlying assumption that the two groups consist of genetically similar individuals. One of the major highlights of our study is the observation that Bantu-speakers from two geographic regions—or two ethnolinguistic groups—cannot be treated as if they are the same when it comes to disease genetic studies," says Choudhury.

Future studies, especially those testing a small number of variants, need to be more nuanced and have balanced ethnolinguistic and geographic representation, he says.

This study is the second landmark study in African population genetics, published in the last six months, led by researchers in the Sydney Brenner Institute for Molecular Bioscience in the Faculty of Health Sciences at Wits University.

Professor Michèle Ramsay, director of the SBIMB and corresponding author of the study, says: "The in-depth analysis of several large African genetic datasets has just begun. We look forward to mining these datasets to provide new insights into key population histories and the genetics of complex diseases in Africa."


3. Improvements ahead: How humans and AI might evolve together in the next decade

Other questions to the experts in this canvassing invited their views on the hopeful things that will occur in the next decade and asked for examples of specific applications that might emerge. What will human-technology co-evolution look like by 2030? Participants in this canvassing expect the rate of change to fall in a range anywhere from incremental to extremely impactful. Generally, they expect AI to continue to be targeted toward efficiencies in workplaces and other activities, and they say it is likely to be embedded in most human endeavors.

The greatest share of participants in this canvassing said automated systems driven by artificial intelligence are already improving many dimensions of their work, play and home lives and they expect this to continue over the next decade. While they worry over the accompanying negatives of human-AI advances, they hope for broad changes for the better as networked, intelligent systems are revolutionizing everything, from the most pressing professional work to hundreds of the little “everyday” aspects of existence.

One respondent’s answer covered many of the improvements experts expect as machines sit alongside humans as their assistants and enhancers. An associate professor at a major university in Israel wrote, “In the coming 12 years AI will enable all sorts of professions to do their work more efficiently, especially those involving ‘saving life’: individualized medicine, policing, even warfare (where attacks will focus on disabling infrastructure and less in killing enemy combatants and civilians). In other professions, AI will enable greater individualization, e.g., education based on the needs and intellectual abilities of each pupil/student. Of course, there will be some downsides: greater unemployment in certain ‘rote’ jobs (e.g., transportation drivers, food service, robots and automation, etc.).”

This section begins with experts sharing mostly positive expectations for the evolution of humans and AI. It is followed by separate sections that include their thoughts about the potential for AI-human partnerships and quality of life in 2030, as well as the future of jobs, health care and education.

AI will be integrated into most aspects of life, producing new efficiencies and enhancing human capacities

Many of the leading experts extolled the positives they expect to continue to expand as AI tools evolve to do more things for more people.

Martijn van Otterlo, author of “Gatekeeping Algorithms with Human Ethical Bias” and assistant professor of artificial intelligence at Tilburg University in the Netherlands, wrote, “Even though I see many ethical issues, potential problems and especially power imbalance/misuse issues with AI (not even starting about singularity issues and out-of-control AI), I do think AI will change most lives for the better, especially looking at the short horizon of 2030 even more-so, because even bad effects of AI can be considered predominantly ‘good’ by the majority of people. For example, the Cambridge Analytica case has shown us the huge privacy issues of modern social networks in a market economy, but, overall, people value the extraordinary services Facebook offers to improve communication opportunities, sharing capabilities and so on.”


Vint Cerf, Internet Hall of Fame member and vice president and chief internet evangelist at Google, said, “I see AI and machine learning as augmenting human cognition a la Douglas Engelbart. There will be abuses and bugs, some harmful, so we need to be thoughtful about how these technologies are implemented and used, but, on the whole, I see these as constructive.”

Mícheál Ó Foghlú, engineering director and DevOps Code Pillar at Google’s Munich office, said, “The trend is that AI/ML models in specific domains can out-perform human experts (e.g., certain cancer diagnoses based on image-recognition in retina scans). I think it would be fairly much the consensus that this trend would continue, and many more such systems could aid human experts to be more accurate.”

Craig Mathias, principal at Farpoint Group, an advisory firm specializing in wireless networking and mobile computing, commented, “Many if not most of the large-scale technologies that we all depend upon – such as the internet itself, the power grid, and roads and highways – will simply be unable to function in the future without AI, as both solution complexity and demand continue to increase.”

Matt Mason, a roboticist and the former director of the Robotics Institute at Carnegie Mellon University, wrote, “AI will present new opportunities and capabilities to improve the human experience. While it is possible for a society to behave irrationally and choose to use it to their detriment, I see no reason to think that is the more likely outcome.”

Mike Osswald, vice president of experience innovation at Hanson Inc., commented, “I’m thinking of a world in which people’s devices continuously assess the world around them to keep a population safer and healthier. Thinking of those living in large urban areas, with devices forming a network of AI input through sound analysis, air quality, natural events, etc., that can provide collective notifications and insight to everyone in a certain area about the concerns of environmental factors, physical health, even helping provide no quarter for bad actors through community policing.”

Barry Hughes, senior scientist at the Center for International Futures at the University of Denver, commented, “I was one of the original test users of the ARPANET and now can hardly imagine living without the internet. Although AI will be disruptive through 2030 and beyond, meaning that there will be losers in the workplace and growing reasons for concern about privacy and AI/cyber-related crime, on the whole I expect that individuals and societies will make choices on use and restriction of use that benefit us. Examples include likely self-driving vehicles at that time, when my wife’s deteriorating vision and that of an increased elderly population will make it increasingly liberating. I would expect rapid growth in use for informal/non-traditional education as well as some more ambivalent growth in the formal-education sector. Big-data applications in health-related research should be increasingly productive, and health care delivery should benefit. Transparency with respect to its character and use, including its developers and their personal benefits, is especially important in limiting the inevitable abuse.”

Dana Klisanin, psychologist, futurist and game designer, predicted, “People will increasingly realize the importance of interacting with each other and the natural world and they will program AI to support such goals, which will in turn support the ongoing emergence of the ‘slow movement.’ For example, grocery shopping and mundane chores will be allocated to AI (smart appliances), freeing up time for preparation of meals in keeping with the slow food movement. Concern for the environment will likewise encourage the growth of the slow goods/slow fashion movement. The ability to recycle, reduce, reuse will be enhanced by the use of in-home 3D printers, giving rise to a new type of ‘craft’ that is supported by AI. AI will support the ‘cradle-to-grave’ movement by making it easier for people to trace the manufacturing process from inception to final product.”

Liz Rykert, president at Meta Strategies, a consultancy that works with technology and complex organizational change, responded, “The key for networked AI will be the ability to diffuse equitable responses to basic care and data collection. If bias remains in the programming it will be a big problem. I believe we will be able to develop systems that will learn from and reflect a much broader and more diverse population than the systems we have now.”

Michael R. Nelson, a technology policy expert for a leading network services provider who worked as a technology policy aide in the Clinton administration, commented, “Most media reports focus on how machine learning will directly affect people (medical diagnosis, self-driving cars, etc.) but we will see big improvements in infrastructure (traffic, sewage treatment, supply chain, etc.).”

Gary Arlen, president of Arlen Communications, wrote, “After the initial frenzy recedes about specific AI applications (such as autonomous vehicles, workplace robotics, transaction processing, health diagnoses and entertainment selections), specific applications will develop – probably in areas barely being considered today. As with many new technologies, the benefits will not apply equally, potentially expanding the haves-and-have-nots dichotomy. In addition, as AI delves into new fields – including creative work such as design, music/art composition – we may see new legal challenges about illegal appropriation of intellectual property (via machine learning). However, the new legal tasks from such litigation may not need a conventional lawyer – but could be handled by AI itself. Professional health care AI poses another type of dichotomy. For patients, AI could be a bonanza, identifying ailments, often in early stages (based on early symptoms), and recommending treatments. At the same time, such automated tasks could impact employment for medical professionals. And again, there are legal challenges to be determined, such as liability in the case of a wrong action by the AI. Overall, there is no such thing as ‘most people,’ but many individuals and groups – especially in professional situations – WILL live better lives thanks to AI, albeit with some severe adjustment pains.”

Tim Morgan, a respondent who provided no identifying details, said, “Algorithmic machine learning will be our intelligence amplifier, exhaustively exploring data and designs in ways humans alone cannot. The world was shocked when IBM’s Deep Blue computer beat Garry Kasparov in 1997. What emerged later was the realization that human and AI ‘centaurs’ could combine to beat anyone, human or AI. The synthesis is more than the sum of the parts.”

Marshall Kirkpatrick, product director of influencer marketing, responded, “If the network can be both decentralized and imbued with empathy, rather than characterized by violent exploitation, then we’re safe. I expect it will land in between, hopefully leaning toward the positive. For example, I expect our understanding of self and freedom will be greatly impacted by an instrumentation of a large part of memory, through personal logs and our data exhaust being recognized as valuable just like when we shed the term ‘junk DNA.’ Networked AI will bring us new insights into our own lives that might seem as far-fetched today as it would have been 30 years ago to say, ‘I’ll tell you what music your friends are discovering right now.’ AI is most likely to augment humanity for the better, but it will take longer and not be done as well as it could be. Hopefully we’ll build it in a way that will help us be comparably understanding to others.”

Daniel A. Menasce, professor of computer science at George Mason University, commented, “AI and related technologies coupled with significant advances in computer power and decreasing costs will allow specialists in a variety of disciplines to perform more efficiently and will allow non-specialists to use computer systems to augment their skills. Some examples include health delivery, smart cities and smart buildings. For these applications to become reality, easy-to-use user interfaces, or better yet transparent user interfaces will have to be developed.”


David Wells, chief financial officer at Netflix, responded, “Technology progression and advancement has always been met with fear and anxiety, giving way to tremendous gains for humankind as we learn to enhance the best of the changes and adapt and alter the worst. Continued networked AI will be no different but the pace of technological change has increased, which is different and requires us to more quickly adapt. This pace is different and presents challenges for some human groups and societies that we will need to acknowledge and work through to avoid marginalization and political conflict. But the gains from better education, medical care and crime reduction will be well worth the challenges.”

Rik Farrow, editor of login: for the USENIX Association, wrote, “Humans do poorly when it comes to making decisions based on facts, rather than emotional issues. Humans get distracted easily. There are certainly things that AI can do better than humans, like driving cars, handling finances, even diagnosing illnesses. Expecting human doctors to know everything about the varieties of disease and humans is silly. Let computers do what they are good at.”

Steve Crocker, CEO and co-founder of Shinkuro Inc. and Internet Hall of Fame member, responded, “AI and human-machine interaction has been under vigorous development for the past 50 years. The advances have been enormous. The results are marbled through all of our products and systems. Graphics, speech [and] language understanding are now taken for granted. Encyclopedic knowledge is available at our fingertips. Instant communication with anyone, anywhere exists for about half the world at minimal cost. The effects on productivity, lifestyle and reduction of risks, both natural and man-made, have been extraordinary and will continue. As with any technology, there are opportunities for abuse, but the challenges for the next decade or so are not significantly different from the challenges mankind has faced in the past. Perhaps the largest existential threat has been the potential for nuclear holocaust. In comparison, the concerns about AI are significantly less.”

James Kadtke, expert on converging technologies at the Institute for National Strategic Studies at the U.S. National Defense University, wrote, “Barring the deployment of a few different radically new technologies, such as general AI or commercial quantum computers, the internet and AI [between now and 2030] will proceed on an evolutionary trajectory. Expect internet access and sophistication to be considerably greater, but not radically different, and also expect that malicious actors using the internet will have greater sophistication and power. Whether we can control both these trends for positive outcomes is a public policy issue more than a technological one.”

Tim Morgan, a respondent who provided no identifying details, said, “Human/AI collaboration over the next 12 years will improve the overall quality of life by finding new approaches to persistent problems. We will use these adaptive algorithmic tools to explore whole new domains in every industry and field of study: materials science, biotech, medicine, agriculture, engineering, energy, transportation and more. … This goes beyond computability into human relationships. AIs are beginning to understand and speak the human language of emotion. The potential of affective computing ranges from productivity-increasing adaptive interfaces, to ‘pre-crime’ security monitoring of airports and other gathering places, to companion ‘pets’ which monitor their aging owners and interact with them in ways that improve their health and disposition. Will there be unseen dangers or consequences? Definitely. That is our pattern with our tools. We invent them, use them to improve our lives and then refine them when we find problems. AI is no different.”

Ashok Goel, director of the human-centered computing Ph.D. program at Georgia Tech, wrote, “Human-AI interaction will be multimodal: We will directly converse with AIs, for example. However, much of the impact of AI will come in enhancing human-human interaction across both space (we will be networked with others) and time (we will have access to all our previously acquired knowledge). This will aid, augment and amplify individual and collective human intelligence in unprecedented and powerful ways.”

David Cake, a leader with Electronic Frontiers Australia and vice-chair of the ICANN GNSO Council, wrote, “In general, machine learning and related technologies have the capacity to greatly reduce human error in many areas where it is currently very problematic and make available good, appropriately tailored advice to people to whom it is currently unavailable, in literally almost every field of human endeavour.”

Fred Baker, an independent networking technologies consultant, longtime leader in the Internet Engineering Task Force and engineering fellow with Cisco, commented, “In my opinion, developments have not been ‘out of control,’ in the sense that the creation of Terminator’s Skynet or the HAL 9000 computer might depict them. Rather, we have learned to automate processes in which neural networks have been able to follow data to its conclusion (which we call ‘big data’) unaided and uncontaminated by human intuition, and sometimes the results have surprised us. These remain, and in my opinion will remain, to be interpreted by human beings and used for our purposes.”

Bob Frankston, software innovation pioneer and technologist based in North America, wrote, “It could go either way. AI could be a bureaucratic straitjacket and tool of surveillance. I’m betting that machine learning will be like the X-ray in giving us the ability to see new wholes and gain insights.”

Perry Hewitt, a marketing, content and technology executive, wrote, “Today, voice-activated technologies are an untamed beast in our homes. Some 16% of Americans have a smart speaker, and yet they are relatively dumb devices: They misinterpret questions, offer generic answers and, to the consternation of some, are turning our kids into a**holes. I am bullish on human-machine interactions developing a better understanding of and improving our daily routines. I think in particular of the working parent, often although certainly not exclusively a woman, who carries so much information in their head. What if a human-machine collaboration could stock the house with essentials, schedule the pre-camp pediatrician appointments and prompt drivers for the alternate-side parking/street cleaning rules? The ability for narrow AI to assimilate new information (the bus is supposed to come at 7:10 but a month into the school year is known to actually come at 7:16) could keep a family connected and informed with the right data, and reduce the mental load of household management.”

John McNutt, a professor in the school of public policy and administration at the University of Delaware, responded, “Throwing out technology because there is a potential downside is not how human progress takes place. In public service, a turbulent environment has created a situation where knowledge overload can seriously degrade our ability to do the things that are essential to implement policies and serve the public good. AI can be the difference between a public service that works well and one that creates more problems than it solves.”

Randy Marchany, chief information security officer at Virginia Tech and director of Virginia Tech’s IT Security Laboratory, said, “AI-human interaction in 2030 will be in its ‘infancy’ stage. AI will need to go to ‘school’ in a manner similar to humans. They will amass large amounts of data collected by various sources but need ‘ethics’ training to make good decisions. Just as kids are taught a wide variety of info and some sort of ethics (religion, social manners, etc.), AI will need similar training. Will AI get the proper training? Who decides the training content?”

Robert Stratton, cybersecurity expert, said, “While there is widespread acknowledgement in a variety of disciplines of the potential benefits of machine learning and artificial intelligence technologies, progress has been tempered by their misapplication. Part of data science is knowing the right tool for a particular job. As more-rigorous practitioners begin to gain comfort and apply these tools to other corpora it’s reasonable to expect some significant gains in efficiency, insight or profitability in many fields. This may not be visible to consumers except through increased product choice, but it may include everything from drug discovery to driving.”

A data analyst for an organization developing marketing solutions said, “Assuming that policies are in place to prevent the abuse of AI and programs are in place to find new jobs for those who would be career-displaced, there is a lot of potential in AI integration. By 2030, most AI will be used for marketing purposes and be more annoying to people than anything else as they are bombarded with personalized ads and recommendations. The rest of AI usage will be its integration into more tedious and repetitive tasks across career fields. Implementing AI in this fashion will open up more time for humans to focus on long-term and in-depth tasks that will allow further and greater societal progression. For example, AI can be trained to identify and codify qualitative information from surveys, reviews, articles, etc., far faster and in greater quantities than even a team of humans can. By having AI perform these tasks, analysts can spend more time parsing the data for trends and information that can then be used to make more-informed decisions faster and allow for speedier turn-around times. Minor product faults can be addressed before they become widespread, scientists can generate semiannual reports on environmental changes rather than annual or biannual.”

Helena Draganik, a professor at the University of Gdańsk in Poland, responded, “AI will not change humans. It will change the relations between them because it can serve as an interpreter of communication. It will change our habits (as an intermediation technology). AI will be a great commodity. It will help in cases of health problems (diseases). It will also generate a great ‘data industry’ (big data) market and a lack of anonymity and privacy. Humanity will more and more depend on energy/electricity. These factors will create new social, cultural, security and political problems.”

There are those who think there won’t be much change by 2030.

Christine Boese, digital strategies professional, commented, “I believe it is as William Gibson postulated, ‘The future is already here, it’s just not very evenly distributed.’ What I know from my work in user-experience design and in exposure to many different Fortune 500 IT departments working in big data and analytics is that the promise and potential of AI and machine learning is VASTLY overstated. There has been so little investment in basic infrastructure that entire chunks of our systems won’t even be interoperable. The AI and machine learning code will be there, in a pocket here, a pocket there, but system-wide, it is unlikely to be operating reliably as part of the background radiation against which many of us play and work online.”

An anonymous respondent wrote, “While various deployments of new data science and computation will help firms cut costs, reduce fraud and support decision-making that involves access to more information than an individual can manage, organisations, professions, markets and regulators (public and private) usually take many more than 12 years to adapt effectively to a constantly changing set of technologies and practices. This generally causes a decline in service quality, insecurity over jobs and investments, new monopoly businesses distorting markets and social values, etc. For example, many organisations will be under pressure to buy and implement new services, but unable to access reliable market information on how to do this, leading to bad investments, distractions from core business, and labour and customer disputes.”

Mario Morino, chairman of the Morino Institute and co-founder of Venture Philanthropy Partners, commented, “While I believe AI/ML will bring enormous benefits, it may take us several decades to navigate through the disruption and transition they will introduce on multiple levels.”

Daniel Berninger, an internet pioneer who led the first VoIP deployments at Verizon, HP and NASA, currently founder at Voice Communication Exchange Committee (VCXC), said, “The luminaries claiming artificial intelligence will surpass human intelligence and promoting robot reverence imagine exponentially improving computation pushes machine self-actualization from science fiction into reality. The immense valuations awarded Google, Facebook, Amazon, Tesla, et al., rely on this machine-dominance hype to sell infinite scaling. As with all hype, pretending reality does not exist does not make reality go away. Moore’s Law does not concede the future to machines, because human domination of the planet does not owe to computation. Any road map granting machines self-determination includes ‘miracle’ as one of the steps. You cannot turn a piece of wood into a real boy. AI merely ‘models’ human activity. No amount of improvement in the development of these models turns the ‘model’ into the ‘thing.’ Robot reverence attempts plausibility by collapsing the breadth of human potential and capacities. It operates via ‘denialism’ with advocates disavowing the importance of anything they cannot model. In particular, super AI requires pretending human will and consciousness do not exist. Human beings remain the source of all intent and the judge of all outcomes. Machines provide mere facilitation and mere efficiency in the journey from intent to outcome. The dehumanizing nature of automation and the diseconomy of scale of human intelligence is already causing headaches that reveal another AI Winter arriving well before 2030.”

Paul Kainen, futurist and director of the Lab for Visual Mathematics at Georgetown University, commented, “Quantum cat here: I expect complex superposition of strong positive, negative and null as typical impact for AI. For the grandkids’ sake, we must be positive!”

The following one-liners from anonymous respondents also tie into AI in 2030:

  • An Internet Hall of Fame member wrote, “You’ll talk to your digital assistant in a normal voice and it will just be there – it will often anticipate your needs, so you may only need to talk to it to correct or update it.”
  • The director of a cognitive research group at one of the world’s top AI and large-scale computing companies predicted that by 2030, “Smartphone-equivalent devices will support true natural-language dialog with episodic memory of past interactions. Apps will become low-cost digital workers with basic commonsense reasoning.”
  • An anonymous Internet Hall of Fame member said, “The equivalent of the ‘Star Trek’ universal translator will become practical, enabling travelers to better interact with people in countries they visit, facilitate online discussions across language barriers, etc.”
  • An Internet of Things researcher commented, “We need to balance between human emotions and machine intelligence – can machines be emotional? – that’s the frontier we have to conquer.”
  • An anonymous respondent wrote, “2030 is still quite possibly before the advent of human-level AI. During this phase AI is still mostly augmenting human efforts – increasingly ubiquitous, optimizing the systems that surround us and being replaced when their optimization criteria are not quite perfect – rather than pursuing those goals programmed into them, whether we find the realization of those goals desirable or not.”
  • A research scientist who works for Google said, “Things will be better, although many people are deeply worried about the effects of AI.”
  • An ARPANET and internet pioneer wrote, “The kind of AI we are currently able to build is good for data analysis but far, far away from ‘human’ levels of performance. The next 20 years won’t change this, but we will have valuable tools to help analyze and control our world.”
  • An artificial intelligence researcher working for one of the world’s most powerful technology companies wrote, “AI will enhance our vision and hearing capabilities, remove language barriers, reduce time to find information we care about and help in automating mundane activities.”
  • A manager with a major digital innovation company said, “Couple the information storage with the ever-increasing ability to rapidly search and analyze that data, and the benefits to augmenting human intelligence with this processed data will open up new avenues of technology and research throughout society.”

Other anonymous respondents commented:

  • “AI will help people to manage the increasingly complex world we are forced to navigate. It will empower individuals to not be overwhelmed.”
  • “AI will reduce human error in many contexts: driving, workplace, medicine and more.”
  • “In teaching, it will enhance knowledge about student progress and how to meet individual needs; it will offer guidance options based on the unique preferences of students that can guide learning and career goals.”
  • “2030 is only 12 years from now, so I expect that systems like Alexa and Siri will be more helpful but still of only medium utility.”
  • “AI will be a useful tool; I am quite a ways away from fearing SkyNet and the rise of the machines.”
  • “AI will produce major benefits in the next 10 years, but ultimately the question is one of politics: Will the world somehow manage to listen to the economists, even when their findings are uncomfortable?”
  • “I strongly believe that an increasing use of numerical control will improve the lives of people in general.”
  • “AI will help us navigate choices, find safer routes and avenues for work and play, and help make our choices and work more consistent.”
  • “Many factors will be at work to increase or decrease human welfare, and it will be difficult to separate them.”

AI will optimize and augment people’s lives

The hopeful experts in this sample generally expect that AI will work to optimize, augment and improve human activities and experiences. They say it will save time and it will save lives via health advances and the reduction of risks and of poverty. They hope it will spur innovation and broaden opportunities, increase the value of human-to-human experiences, augment humans and increase individuals’ overall satisfaction with life.

Clay Shirky, writer and consultant on the social and economic effects of internet technologies and vice president at New York University, said, “All previous forms of labor-saving devices, from the lever to the computer, have correlated with increased health and lifespan in the places that have adopted them.”

Jamais Cascio, research fellow at the Institute for the Future, wrote, “Although I do believe that in 2030 AI will have made our lives better, I suspect that popular media of the time will justifiably highlight the large-scale problems: displaced workers, embedded bias and human systems being too deferential to machine systems. But AI is more than robot soldiers, autonomous cars or digital assistants with quirky ‘personalities.’ Most of the AI we will encounter in 2030 will be in-the-walls, behind-the-scenes systems built to adapt workspaces, living spaces and the urban environment to better suit our needs. Medical AI will keep track of medication and alert us to early signs of health problems. Environmental AI will monitor air quality, heat index and other indicators relevant to our day’s tasks. Our visual and audio surroundings may be altered or filtered to improve our moods, better our focus or otherwise alter our subconscious perceptions of the world. Most of this AI will be functionally invisible to us, as long as it’s working properly. The explicit human-machine interface will be with a supervisor system that coordinates all of the sub-AI – and undoubtedly there will be a lively business in creating supervisor systems with quirky personalities.”

Mike Meyer, chief information officer at Honolulu Community College, wrote, “Social organizations will be increasingly administered by AI/ML systems to ensure equity and consistency in provisioning of services to the population. The steady removal of human emotion-driven discrimination will rebalance social organizations, creating truly equitable opportunity for all people for the first time in human history. People will be part of these systems as censors, in the old imperial Chinese model, providing human emotional intelligence where that is needed to smooth social management. All aspects of human existence will be affected by the integration of AI into human societies. Historically this type of base paradigmatic change is both difficult and unstoppable. The results will be primarily positive but will produce problems, both in the process of change and in the form of totally new problems that result from the ways people adapt to the new technology-based processes.”

Mark Crowley, an assistant professor, expert in machine learning and core member of the Institute for Complexity and Innovation at the University of Waterloo in Ontario, Canada, wrote, “While driving home on a long commute from work, the human will be reading a book in the heads-up screen of the windshield. The car will be driving autonomously on the highway for the moment. The driver will have an idea to note down and add to a particular document; all this will be done via voice. In the middle of this, a complicated traffic arrangement will be seen approaching via other networked cars. The AI will politely interrupt the driver, put away the heads-up display and warn the driver they may need to take over in the next 10 seconds or so. The conversation will be flawless and natural, like Jarvis in ‘Avengers,’ even charming. But it will be task-focused to the car, personal events, notes and news.”

Theodore Gordon, futurist, management consultant and co-founder of the Millennium Project, commented, “There will be ups and downs, surely, but the net is, I believe, good. The most encouraging uses of AI will be in early warning of terror activities, incipient diseases and environmental threats and in improvements in decision-making.”

Yvette Wohn, director of the Social Interaction Lab and expert on human-computer interaction at the New Jersey Institute of Technology, said, “One area in which artificial intelligence will become more sophisticated will be in its ability to enrich the quality of life so that the current age of workaholism will transition into a society where leisure, the arts, entertainment and culture are able to enhance the well-being of society in developed countries and solve issues of water production, food growth/distribution and basic health provision in developing countries.”

Ken Goldberg, distinguished chair in engineering, director of AUTOLAB’s and CITRIS’ “people and robots” initiative, and founding member of the Berkeley Artificial Intelligence Research Lab at the University of California, Berkeley, said, “As in the past 50+ years, AI will be combined with IA (intelligence augmentation) to enhance humans’ ability to work. One example might be an AI-based ‘Devil’s Advocate’ that would challenge my decisions with insightful questions (as long as I can turn it off periodically).”

Rich Ling, a professor of media technology at Nanyang Technological University, responded, “The ability to address complex issues and to better respond to and facilitate the needs of people will be the dominant result of AI.”

An anonymous respondent wrote, “There will be an explosive increase in the number of autonomous cognitive agents (e.g., robots), and humans will interact more and more with them, being unaware, most of the time, whether they are interacting with a robot or with another human. This will increase the number of personal assistants and the level of service.”


Fred Davis, mentor at Runway Incubator in San Francisco, responded, “As a daily user of the Google Assistant on my phone and both Google Home and Alexa, I feel like AI has already been delivering significant benefits to my daily life for a few years. My wife and I take having an always-on omnipresent assistant on hand for granted at this point. Google Home’s ability to tell us apart and even respond with different voices is a major step forward in making computers people-literate, rather than the other way around. There’s always a concern about privacy, but so far it hasn’t caused us any problems. Obviously, this could change and instead of a helpful friend I might look at these assistants as creepy strangers. Maintaining strict privacy and security controls is essential for these types of services.”

Andrew Tutt, an expert in law and author of “An FDA for Algorithms,” which called for “critical thought about how best to prevent, deter and compensate for the harms that they cause,” said, “AI will be absolutely pervasive and absolutely seamless in its integration with everyday life. It will simply become accepted that AI are responsible for ever-more-complex and ever-more-human tasks. By 2030, it will be accepted that when you wish to hail a taxi the taxi will have no driver – it will be an autonomously driven vehicle. Robots will be responsible for more-dynamic and complex roles in manufacturing plants and warehouses. Digital assistants will play an important and interactive role in everyday interactions ranging from buying a cup of coffee to booking a salon appointment. It will no longer be unexpected to call a restaurant to book a reservation, for example, and speak to a ‘digital’ assistant who will pencil you in. These interactions will be incremental but become increasingly common and increasingly normal. My hope is that the increasing integration of AI into everyday life will vastly increase the amount of time that people can devote to tasks they find meaningful.”

L. Schomaker, professor at the University of Groningen and scientific director of the Artificial Intelligence and Cognitive Engineering (ALICE) research institute, said, “In the 1990s, you went to a PC on a desktop in a room in your house. In the 2010s you picked a phone from your pocket and switched it on. By 2030 you will be online 24/7 via miniature devices such as in-ear continuous support, advice and communications.”

Michael Wollowski, associate professor of computer science and software engineering at Rose-Hulman Institute of Technology and expert in the Internet of Things, diagrammatic systems, and artificial intelligence, wrote, “Assuming that industry and government are interested in letting the consumer choose and influence the future, there will be many fantastic advances of AI. I believe that AI and the Internet of Things will bring about a situation in which technology will be our guardian angel. For example, self-driving cars will let us drive faster than we ever drove before, but they will only let us do things that they can control. Since computers have much better reaction time than people, it will be quite amazing. Similarly, AI and the Internet of Things will let us conduct our lives to the fullest while ensuring that we live healthy lives. Again, it is like having a guardian angel that lets us do things, knowing they can save us from stupidity.”

Steve King, partner at Emergent Research, said, “2030 is less than 12 years away. So … the most likely scenario is AI will have a modest impact on the lives of most humans over this time frame. Having said that, we think the use of AI systems will continue to expand, with the greatest growth coming from systems that augment and complement human capabilities and decision-making. This is not to say there won’t be negative impacts from the use of AI. Jobs will be replaced, and certain industries will be disrupted. Even scarier, there are many ways AI can be weaponized. But like most technological advancements, we think the overall impact of AI will be additive – at least over the next decade or so.”

Vassilis Galanos, a Ph.D. student and teaching assistant actively researching future human-machine symbiosis at the University of Edinburgh, commented, “2030 is not that far away, so there is no room for extremely utopian/dystopian hopes and fears. … Given that AI is already used in everyday life (social-media algorithms, suggestions, smartphones, digital assistants, health care and more), it is quite probable that humans will live in a harmonious co-existence with AI as much as they do now – to a certain extent – with computer and internet technologies.”

Charlie Firestone, communications and society program executive director and vice president at the Aspen Institute, commented, “I remain optimistic that AI will be a tool that humans will use, far more widely than today, to enhance quality of life such as medical remedies, education and the environment. For example, the AI will help us to conserve energy in homes and in transportation by identifying exact times and temperatures we need, identifying sources of energy that will be the cheapest and the most efficient. There certainly are dire scenarios, particularly in the use of AI for surveillance, a likely occurrence by 2030. I am hopeful that AI and other technologies will identify new areas of employment as it eliminates many jobs.”

Pedro U. Lima, an associate professor of computer science at Instituto Superior Técnico in Lisbon, Portugal, said, “Overall, I see AI-based technology relieving us from repetitive and/or heavy and/or dangerous tasks, opening new challenges for our activities. I envisage autonomous mobile robots networked with a myriad of other smart devices, helping nurses and doctors at hospitals in daily activities, working as a ‘third hand’ and (physical and emotional) support to patients. I see something similar happening in factories, where networked robot systems will help workers on their tasks, relieving them from heavy duties.”

John Laird, a professor of computer science and engineering at the University of Michigan, responded, “There will be a continual off-loading of mundane intellectual and physical tasks on to AI and robotic systems. In addition to helping with everyday activities, it will significantly help the mentally and physically impaired and disabled. There will also be improvements in customized/individualized education and training of humans, and conversely, the customization of AI systems by everyday users. We will be transitioning from current programming practices to user customization. Automated driving will be a reality, eliminating many deaths but also having significant societal changes.”

Steven Polunsky, director of the Alabama Transportation Policy Research Center at the University of Alabama, wrote, “AI will allow public transportation systems to better serve existing customers by adjusting routes, travel times and stops to optimize service. New customers will also see advantages. Smart transportation systems will allow public transit to network with traffic signals and providers of ‘last-mile’ trips to minimize traffic disruption and inform decision making about modal (rail, bus, mobility-on-demand) planning and purchasing.”

Sanjiv Das, a professor of data science and finance at Santa Clara University, responded, “AI will enhance search to create interactive reasoning and analytical systems. Search engines today do not know ‘why’ we want some information and hence cannot reason about it. They also do not interact with us to help with analysis. An AI system that collects information based on knowing why it is needed and then asks more questions to refine its search would be clearly available well before 2030. These ‘search-thinking bots’ will also write up analyses based on parameters elicited from conversation and imbue these analyses with different political (left/right) and linguistic (aggressive/mild) slants, chosen by the human, using advances in language generation, which are already well under way. These ‘intellectual’ agents will become companions, helping us make sense of our information overload. I often collect files of material on my cloud drive that I found interesting or needed to read later, and these agents would be able to summarize and engage me in a discussion of these materials, very much like an intellectual companion. It is unclear to me if I would need just one such agent, though it seems likely that different agents with diverse personalities may be more interesting! As always, we should worry what the availability of such agents might mean for normal human social interaction, but I can also see many advantages in freeing up time for socializing with other humans as well as enriched interactions, based on knowledge and science, assisted by our new intellectual companions.”

Lawrence Roberts, designer and manager of ARPANET, the precursor to the internet and Internet Hall of Fame member, commented, “AI voice recognition, or text, with strong context understanding and response will allow vastly better access to websites, program documentation and voice call answering, and all such interactions will greatly relieve user frustration with getting information. It will mostly provide service where little or no human support is being replaced, since such support is largely not available today. One example is finding and/or doing a new or unused function of the program or website one is using. Visual, 3D-space-recognition AI will support better-than-human robot activity, including vehicles, security surveillance, health scans and much more.”

Christopher Yoo, a professor of law, communication and computer and information science at the University of Pennsylvania Law School, responded, “AI is good at carrying out tasks that follow repetitive patterns. In fact, AI is better than humans. Shifting these functions to machines will improve performance. It will also allow people to shift their efforts to high-value-added and more-rewarding directions, an increasingly critical consideration in developing world countries where population is declining. Research on human-computer interaction (HCI) also reveals that AI-driven pattern recognition will play a critical role in expanding humans’ ability to extend the benefits of computerization. HCI once held that our ability to gain the benefit from computers would be limited by the total amount of time people can spend sitting in front of a screen and inputting characters through a keyboard. The advent of AI-driven HCI will allow that to expand further and will reduce the amount of customization that people will have to program in by hand. At the same time, AI is merely a tool. All tools have their limits and can be misused. Even when humans are making the decisions instead of machines, blindly following the results of a protocol without exercising any judgment can have disastrous results. Future applications of AI will thus likely involve both humans and machines if they are to fulfill their potential.”

Joseph Konstan, distinguished professor of computer science specializing in human-computer interaction and AI at the University of Minnesota, predicted, “Widespread deployment of AI has immense potential to help in key areas that affect a large portion of the world’s population, including agriculture, transportation (more efficiently getting food to people) and energy. Even as soon as 2030, I expect we’ll see substantial benefits for many who are today disadvantaged, including the elderly and physically handicapped (who will have greater choices for mobility and support) and those in the poorest part of the world.”

The future of work: Some predict new work will emerge or solutions will be found, while others have deep concerns about massive job losses and an unraveling society

A number of expert insights on this topic were shared earlier in this report. These additional observations add to the discussion of hopes and concerns about the future of human jobs. This segment starts with comments from those who are hopeful that the job situation and related social issues will turn out well. It is followed by statements from those who are pessimistic.

Respondents who were positive about the future of AI and work

Bob Metcalfe, Internet Hall of Fame member, co-inventor of Ethernet, founder of 3Com and now professor of innovation and entrepreneurship at the University of Texas at Austin, said, “Pessimists are often right, but they never get anything done. All technologies come with problems, sure, but … generally, they get solved. The hardest problem I see is the evolution of work. Hard to figure out. Forty percent of us used to know how to milk cows, but now less than 1% do. We all used to tell elevator operators which floor we wanted, and now we press buttons. Most of us now drive cars and trucks and trains, but that’s on the verge of being over. AIs are most likely not going to kill jobs. They will handle parts of jobs, enhancing the productivity of their humans.”

Stowe Boyd, founder and managing director at Work Futures, said, “There is a high possibility that unchecked expansion of AI could rapidly lead to widespread unemployment. My bet is that governments will step in to regulate the spread of AI, to slow the impacts of this phenomenon as a result of unrest by the mid 2020s. That regulation might include, for example, not allowing AIs to serve as managers of people in the workplace, but only to augment the work of people on a task or process level. So, we might see high degrees of automation in warehouses, but a human being would be ‘in charge’ in some sense. Likewise, fully autonomous freighters might be blocked by regulations.”

An anonymous respondent wrote, “Repeatedly throughout history people have worried that new technologies would eliminate jobs. This has never happened, so I’m very skeptical it will this time. Having said that, there will be major short-term disruptions in the labor market and smart governments should begin to plan for this by considering changes to unemployment insurance, universal basic income, health insurance, etc. This is particularly the case in America, where so many benefits are tied to employment. I would say there is almost zero chance that the U.S. government will actually do this, so there will be a lot of pain and misery in the short and medium term, but I do think ultimately machines and humans will peacefully coexist. Also, I think a lot of the projections on the use of AI are ridiculous. Regardless of the existence of the technology, cross-state shipping is not going to be taken over by automated trucks any time soon because of legal and ethical issues that have not been worked out.”

Steven Miller, vice provost and professor of information systems at Singapore Management University, said, “It helps to have a sense of the history of technological change over the past few hundred years (even longer). Undoubtedly, new ways of using machines and new machine capabilities will be used to create economic activities and services that were either a) not previously possible, or b) previously too scarce and expensive, and now can be plentiful and inexpensive. This will create a lot of new activities and opportunities. At the same time, we know some existing tasks and jobs with a high proportion of those tasks will be increasingly automated. So we will simultaneously have both new opportunity creation as well as technological displacement. Even so, the long-term track record shows that human societies keep finding ways of creating more and more economically viable jobs. Cognitive automation will obviously enhance the realms of automation, but even with tremendous progress in this technology, there are and will continue to be limits. Humans have remarkable capabilities to deal with and adapt to change, so I do not see the ‘end of human work.’ The ways people and machines combine together will change – and there will be many new types of human-machine symbiosis. Those who understand this and learn to benefit from it will prosper.”

Henry E. Brady, dean of the Goldman School of Public Policy at the University of California, Berkeley, wrote, “AI can replace people in jobs that require sophisticated and accurate pattern matching – driving, diagnoses based upon medical imaging, proofreading and other areas. There is also the fact that in the past technological change has mostly led to new kinds of jobs rather than the net elimination of jobs. Furthermore, I also believe that there may be limits to what AI can do. It is very good at pattern matching, but human intelligence goes far beyond pattern matching and it is not clear that computers will be able to compete with humans beyond pattern matching. It also seems clear that even the best algorithms will require constant human attention to update, check and revise them.”


Geoff Livingston, author and futurist, commented, “The term AI misleads people. What we should call the trend is machine learning or algorithms. ‘Weak’ AI as it is called – today’s AI – reduces repetitive tasks that most people find mundane. This in turn produces an opportunity to escape the trap of the proletariat, being forced into monotonous labor to earn a living. Instead of thinking of the ‘Terminator,’ we should view the current trend as an opportunity to seek out and embrace the tasks that we truly love, including more creative pursuits. If we embrace the inevitable evolution of technology to replace redundant tasks, we can encourage today’s youth to pursue more creative and strategic pursuits. Further, today’s workers can learn how to manage machine learning or embrace training to pursue new careers that they may enjoy more. My fear is that many will simply reject change and blame technology, as has often been done. One could argue much of today’s populist uprising we are experiencing globally finds its roots in the current displacements caused by machine learning as typified by smart manufacturing. If so, the movement forward will be troublesome, rife with dark bends and turns that we may regret as cultures and countries.”

Marek Havrda, director at NEOPAS and strategic adviser for the GoodAI project, a private research and development company based in Prague that focuses on the development of artificial general intelligence and AI applications, explained the issue from his point of view, “The development and implementation of artificial intelligence has brought about questions of the impact it will have on employment. Machines are beginning to fill jobs that have been traditionally reserved for humans, such as driving a car or prescribing medical treatment. How these trends may unfold is a crucial question. We may expect the emergence of ‘super-labour,’ a labour defined by super-high-added-value of human activity due to augmentation by AI. Apart from the ability to deploy AI, super-labour will be characterised by creativity and the ability to co-direct and supervise safe exploration of business opportunities together with perseverance in attaining defined goals. An example may be that by using various online, AI gig workers (and maybe several human gig workers), while leveraging AI to its maximum potential … at all aspects from product design to marketing and after-sales care, three people could create a new service and ensure its smooth delivery for which a medium-size company would be needed today. We can expect growing inequalities between those who have access and are able to use technology and those who do not. However, it seems more important how big a slice of the AI co-generated ‘pie’ is accessible to all citizens in absolute terms (e.g., having enough to finance public service and other public spending) which would make everyone better off than in pre-AI age, than the relative inequalities.”

Yoram Kalman, an associate professor at the Open University of Israel and member of The Center for Internet Research at the University of Haifa, wrote, “In essence, technologies that empower people also improve their lives. I see that progress in the area of human-machine collaboration empowers people by improving their ability to communicate and to learn, and thus my optimism. I do not fear that these technologies will take the place of people, since history shows that again and again people used technologies to augment their abilities and to be more fulfilled. Although in the past, too, it seemed as if these technologies would leave people unemployed and useless, human ingenuity and the human spirit always found new challenges that could best be tackled by humans.”

Thomas H. Davenport, distinguished professor of information technology and management at Babson College and fellow of the MIT Initiative on the Digital Economy, responded, “So far, most implementations of AI have resulted in some form of augmentation, not automation. Surveys of managers suggest that relatively few have automation-based job loss as the goal of their AI initiatives. So while I am sure there will be some marginal job loss, I expect that AI will free up workers to be more creative and to do more unstructured work.”

Yvette Wohn, director of the Social Interaction Lab and expert on human-computer interaction at the New Jersey Institute of Technology, commented, “Artificial intelligence will be naturally integrated into our everyday lives. Even though people are concerned about computers replacing the jobs of humans the best-case scenario is that technology will be augmenting human capabilities and performing functions that humans do not like to do. Smart farms and connected distribution systems will hopefully eliminate urban food deserts and enable food production in areas not suited for agriculture. Artificial intelligence will also become better at connecting people and provide immediate support to people who are in crisis situations.”

A principal architect for a major global technology company responded, “AI is a prerequisite to achieving a post-scarcity world, in which people can devote their lives to intellectual pursuits and leisure rather than to labor. The first step will be to reduce the amount of labor required for production of human necessities. Reducing tedium will require changes to the social fabric and economic relationships between people as the demand for labor shrinks below the supply, but if these challenges can be met then everyone will be better off.”

Tom Hood, an expert in corporate accounting and finance, said, “By 2030, AI will stand for Augmented Intelligence and will play an ever-increasing role in working side-by-side with humans in all sectors to add its advanced and massive cognitive and learning capabilities to critical human domains like medicine, law, accounting, engineering and technology. Imagine a personal bot powered by artificial intelligence working by your side (in your laptop or smartphone) making recommendations on key topics by providing up-to-the-minute research or key pattern recognition and analysis of your organization’s data? One example is a CPA in tax given a complex global tax situation amid constantly changing tax laws in all jurisdictions who would be able to research and provide guidance on the most complex global issues in seconds. It is my hope for the future of artificial intelligence in 2030 that we will be augmenting our intelligence with these ‘machines.’”

A professor of computer science expert in systems who works at a major U.S. technological university wrote, “By 2030, we should expect advances in AI, networking and other technologies enabled by AI and networks, e.g., the growing areas of persuasive and motivational technologies, to improve the workplace in many ways beyond replacing humans with robots.”

The following one-liners from anonymous respondents express a bright future for human jobs:

  • “History of technology shows that the number of new roles and jobs created will likely exceed the number of roles and jobs that are destroyed.”
  • “AI will not be competing with humanity but augmenting it for the better.”
  • “We make a mistake when we look for direct impact without considering the larger picture – we worry about a worker displaced by a machine rather than focus on broader opportunities for a better-trained and healthier workforce where geography or income no longer determine access not just to information but to relevant and appropriate information paths.”
  • “AI can significantly improve usability and thus access to the benefits of technology. Many powerful technical tools today require detailed expertise, and AI can bring more of those to a larger swath of the population.”

Respondents who have fears about AI’s impact on work

A section earlier in this report shared a number of key experts’ concerns about the potential negative impact of AI on the socioeconomic future if steps are not taken soon to begin to adjust to a future with far fewer jobs for humans. Many additional respondents to this canvassing shared fears about this.

Wout de Natris, an internet cybercrime and security consultant based in Rotterdam, Netherlands, wrote, “Hope: Advancement in health care, education, decision-making, availability of information, higher standards in ICT-security, global cooperation on these issues, etc. Fear: Huge segments of society, especially the middle classes who carry society in most ways, e.g., through taxes, savings and purchases, will be rendered jobless through endless economic cuts by industry, followed by governments due to lower tax income. Hence all of society suffers. Can governments and industry refrain from an overkill of surveillance? Otherwise privacy values keep declining, leading to a lower quality of life.”

Jonathan Taplin, director emeritus at the University of Southern California’s Annenberg Innovation Lab, wrote, “My fear is that the current political class is completely unprepared for the disruptions that AI and robotics applied at scale will bring to our economy. While techno-utopians point to universal basic income as a possible solution to wide-scale unemployment, there is no indication that anyone in politics has an appetite for such a solution. And because I believe that meaningful work is essential to human dignity, I’m not sure that universal basic income would be helpful in the first place.”

Alex Halavais, an associate professor of social technologies at Arizona State University, wrote, “AI is likely to rapidly displace many workers over the next 10 years, and so there will be some potentially significant negative effects at the social and economic level in the short run.”

Uta Russmann, professor in the department of communication at FHWien der WKW University of Applied Sciences for Management & Communication, said, “Many people will not be benefitting from this development, as robots will do their jobs. Blue-collar workers, people working in supermarkets stacking shelves, etc., will be needed less, but the job market will not offer them any other possibilities. The gap between rich and poor will increase as the need for highly skilled and very well-paid people increases and the need for less skilled workers will decrease tremendously.”

Ross Stapleton-Gray, principal at Stapleton-Gray and Associates, an information technology and policy consulting firm, commented, “Human-machine interaction could be for good or for ill. It will be hugely influenced by decisions on social priorities. We may be at a tipping point in recognizing that social inequities need to be addressed, so, say, a decreased need for human labor due to AI will result in more time for leisure, education, etc., instead of increasing wealth inequity.”

Aneesh Aneesh, author of “Global Labor: Algocratic Modes of Organization” and professor at the University of Wisconsin, Milwaukee, responded, “Just as automation left large groups of working people behind even as the United States got wealthier as a country, it is quite likely that AI systems will automate the service sector in a similar way. Unless the welfare state returns with a vengeance, it is difficult to see the increased aggregate wealth resulting in any meaningful gains for the bottom half of society.”

Alper Dincel of T.C. Istanbul Kultur University in Turkey wrote, “Unqualified people won’t find jobs, as machines and programs take over easy work in the near future. Machines will also solve performance problems. There is no bright future for most people if we don’t start trying to find solutions.”

Jason Abbott, professor and director at the Center for Asian Democracy at University of Louisville, said, “AI is likely to create significant challenges to the labor force as previously skilled (semi-skilled) jobs are replaced by AI – everything from AI in trucks and distribution to airlines, logistics and even medical records and diagnoses.”

Kenneth R. Fleischmann, an associate professor at the University of Texas at Austin’s School of Information, responded, “In corporate settings, I worry that AI will be used to replace human workers to a disproportionate extent, such that the net economic benefit of AI is positive, but that economic benefit is not distributed equally among individuals, with a smaller number of wealthy individuals worldwide prospering, and a larger number of less wealthy individuals worldwide suffering from fewer opportunities for gainful employment.”

Gerry Ellis, founder and digital usability and accessibility consultant at Feel The BenefIT, responded, “Technology has always been far more quickly developed and adopted in the richer parts of the world than in the poorer regions where new technology is generally not affordable. AI cannot be taken as a stand-alone technology but in conjunction with other converging technologies like augmented reality, robotics, virtual reality, the Internet of Things, big data analysis, etc. It is estimated that around 80% of jobs that will be done in 2030 do not exist yet. One of the reasons why unskilled and particularly repetitive jobs migrate to poor countries is because of cheap labour costs, but AI combined with robotics will begin to do many of these jobs. For all of these reasons combined, the large proportion of the earth’s population that lives in the under-developed and developing world is likely to be left behind by technological developments. Unless the needs of people with disabilities are taken into account when designing AI related technologies, the same is true for them (or I should say ‘us,’ as I am blind).”

Karen Oates, director of workforce development and financial stability for La Casa de Esperanza, commented, “Ongoing increases in the use of AI will not benefit the working poor and low-to-middle-income people. Having worked with these populations for 10 years I’ve already observed many of these people losing employment when robots and self-operating forklifts are implemented. Although there are opportunities to program and maintain these machines, realistically people who have the requisite knowledge and education will fill those roles. The majority of employers will be unwilling to invest the resources to train employees unless there is an economic incentive from the government to do so. Many lower-wage workers won’t have the confidence to return to school to develop new knowledge/skills when they were unsuccessful in the past. As the use of AI increases, low-wage workers will lose the small niche they hold in our economy.”

Peggy Lahammer, director of health/life sciences at Robins Kaplan LLP and legal market analyst, commented, “Jobs will continue to change and as many disappear new ones will be created. These changes will have an impact on society as many people are left without the necessary skills.”

A European computer science professor expert in machine learning commented, “The social sorting systems introduced by AI will most likely define and further entrench the existing world order of the haves and the have-nots, making social mobility more difficult and precarious given the unpredictability of AI-driven judgements of fit. The interesting problem to solve will be the fact that initial designs of AI will come with built-in imaginaries of what constitutes ‘good’ or ‘correct.’ The level of flexibility designed in to allow for changes in normative perceptions and judgements will be key to ensuring that AI-driven systems support rather than obstruct productive social change.”

Stephen McDowell, a professor of communication at Florida State University and expert in new media and internet governance, commented, “Much of our daily lives is made up of routines and habits that we repeat, and AI could assist in these practices. However, just because some things we do are repetitive does not mean they are insignificant. We draw a lot of meaning from things we do on a daily, weekly or annual basis, whether by ourselves or with others. Cultural practices such as cooking, shopping, cleaning, coordinating and telling stories are crucial parts of building our families and larger communities. Similarly, at work, some of the routines are predictable, but are also how we gain a sense of mastery and expertise in a specific domain. In both these examples, we will have to think about how we define knowledge, expertise, collaboration, and growth and development.”

David Sarokin, author of “Missed Information: Better Information for Building a Wealthier, More Sustainable Future,” commented, “My biggest concern is that our educational system will not keep up with the demands of our modern times. It is doing a poor job of providing the foundations to our students. As more and more jobs are usurped by AI-endowed machines – everything from assembling cars to flipping burgers – those entering the workplace will need a level of technical sophistication that few graduates possess these days.”

Justin Amyx, a technician with Comcast, said, “My worry is automation. Automation occurs usually with mundane tasks that fill low-paying, blue-collar-and-under jobs. Those jobs will disappear – lawn maintenance, truck drivers and fast food, to name a few. Those un-skilled or low-skilled workers will be jobless. Unless we have training programs to take care of worker displacement there will be issues.”

The future of health care: Great expectations for many lives saved, extended and improved, mixed with worries about data abuses and a divide between ‘the haves and have-nots’

Many of these experts have high hopes for continued incremental advances across all aspects of health care and life extension. They predict a rise in access to various tools, including digital agents that can perform rudimentary exams with no need to visit a clinic, a reduction in medical errors and better, faster recognition of risks and solutions. They also worry about a widening health care divide between those who can afford cutting-edge tools and treatments and those less privileged, and they express concerns about potential data abuses, such as the denial of insurance, coverage or benefits for select people or procedures.

Leonard Kleinrock, Internet Hall of Fame member and co-director of the first host-to-host online connection and professor of computer science at the University of California, Los Angeles, predicted, “As AI and machine learning improve, we will see highly customized interactions between humans and their health care needs. This mass customization will enable each human to have her medical history, DNA profile, drug allergies, genetic makeup, etc., always available to any caregiver/medical professional that they engage with, and this will be readily accessible to the individual as well. Their care will be tailored to their specific needs and the very latest advances will be able to be provided rapidly after the advances are established. The rapid provision of the best medical treatment will provide great benefits. In hospital settings, such customized information will dramatically reduce the occurrence of medical injuries and deaths due to medical errors. My hope and expectation is that intelligent agents will be able to assess the likely risks and the benefits that ensue from proposed treatments and procedures, far better than is done now by human evaluators, such humans, even experts, typically being poor decision makers in the face of uncertainty. But to bring this about, there will need to be carefully conducted tests and experimentation to assess the quality of the outcomes of AI-based decision making in this field. However, as with any ‘optimized’ system, one must continually be aware of the fragility of optimized systems when they are applied beyond the confines of their range of applicability.”

Kenneth Grady, futurist, founding author of the Algorithmic Society blog and adjunct and advisor at the Michigan State University College of Law, responded, “In the next dozen years, AI will still be moving through a phase where it will augment what humans can do. It will help us sift through, organize and even evaluate the mountains of data we create each day. For example, doctors today still work with siloed data. Each patient’s vital signs, medicines, dosage rates, test results and side effects remain trapped in isolated systems. Doctors must evaluate this data without the benefit of knowing how it compares to the thousands of other patients around the country (or world) with similar problems. They struggle to turn the data into effective treatments by reading research articles and mentally comparing them to each patient’s data. As it evolves, AI will improve the process. Instead of episodic studies, doctors will have near-real-time access to information showing the effects of treatment regimes. Benefits and risks of drug interactions will be identified faster. Novel treatments will become evident more quickly. Doctors will still manage the last mile, interpreting the analysis generated through AI. This human in the loop approach will remain critical during this phase. As powerful as AI will become, it still will not match humans on understanding how to integrate treatment with values. When will a family sacrifice effectiveness of treatment to prolong quality of life? When two life-threatening illnesses compete, which will the patient want treated first? This will be an important learning phase, as humans understand the limits of AI.”

Charles Zheng, a researcher into machine learning and AI with the National Institute of Mental Health, commented, “In the year 2030, I expect AI will be more powerful than it currently is, but not yet at human level for most tasks. A patient checking into a hospital will be directed to the correct desk by a robot. The receptionist will be aided by software that listens to their conversation with the patient and automatically populates the information fields without needing the receptionist to type the information. Another program cross-references the database in the cloud to check for errors. The patient’s medical images would first be automatically labeled by a computer program before being sent to a radiologist.”

A professor of computer science expert in systems who works at a major U.S. technological university wrote, “By 2030 … physiological monitoring devices (e.g., lower heartbeats and decreasing blood sugar levels) could indicate lower levels of physical alertness. Smart apps could detect those decaying physical conditions (at an individual level) and suggest improvements to the user (e.g., taking a coffee break with a snack). Granted, there may be large-scale problems caused by AI and robots, e.g., massive unemployment, but the recent trends seem to indicate that small improvements, such as the health monitor apps outlined above, would be more easily developed and deployed successfully.”

Kenneth Cukier, author and senior editor at The Economist, commented, “AI will be making more decisions in life, and some people will be uneasy with that. But these are decisions that are more effectively done by machines, such as assessing insurance risk, the propensity to repay a loan or to survive a disease. A good example is health care: Algorithms, not doctors, will be diagnosing many diseases, even if human doctors are still ‘in the loop.’ The benefit is that healthcare can reach down to populations that are today underserved: the poor and rural worldwide.”

Gabor Melli, senior director of engineering for AI and machine learning for Sony PlayStation, responded, “My hope is that by 2030 most of humanity will have ready access to health care and education through digital agents.”

Kate Eddens, research scientist at the Indiana University Network Science Institute, responded, “There is an opportunity for AI to enhance human ability to gain critical information in decision-making, particularly in the world of health care. There are so many moving parts and components to understanding health care needs and deciding how to proceed in treatment and prevention. With AI, we can program algorithms to help refine those decision-making processes, but only when we train the AI tools on human thinking, a tremendous amount of real data and actual circumstances and experiences. There are some contexts in which human bias and emotion can be detrimental to decision-making. For example, breast cancer is over-diagnosed and over-treated. While mammography guidelines have changed to try to reflect this reality, strong human emotion powered by anecdotal experience leaves some practitioners unwilling to change their recommendations based on evidence and advocacy groups reluctant to change their stance based on public outcry. Perhaps there is an opportunity for AI to calculate a more specific risk for each individual person, allowing for a tailored experience amid the broader guidelines. If screening guidelines change to ‘recommended based on individual risk,’ it lessens the burden on both the care provider and the individual. People still have to make their own decisions, but they may be able to do so with more information and a greater understanding of their own risk and reward. This is such a low-tech and simple example of AI, but one in which AI can – importantly – supplement human decision-making without replacing it.”

Angelique Hedberg, senior corporate strategy analyst at RTI International, said, “The greatest advancements and achievements will be in health – physical, mental and environmental. The improvements will have positive trickle-down impacts on education, work, gender equality and reduced inequality. AI will redefine our understanding of health care, optimizing existing processes while simultaneously redefining how we answer questions about what it means to be healthy, bringing care earlier in the cycle due to advances in diagnostics and assessment; i.e., in the future preventative care identifies and initiates treatment for illness before symptoms present. The advances will not be constrained to humans; they will include animals and the built environment. This will happen across the disease spectrum. Advanced ‘omics’ will empower better decisions. There will be a push and a pull by the market and individuals. This is a global story, with fragmented and discontinuous moves being played out over the next decade as we witness wildly different experiments in health across the globe. This future is full of hope for individuals and communities. My greatest hope is for disabled individuals and those currently living with disabilities. I’m excited for communities and interpersonal connections as the work in this future will allow for and increase the value of the human-to-human experiences. Progress is often only seen in retrospect; I hope the speed of exponential change allows everyone to enjoy the benefits of these collaborations.”

An anonymous respondent wrote, “In health care, I hope AI will improve diagnostics and reduce the number of errors. Doctors cannot recall all the possibilities; they have problems correlating all the symptoms and recognizing the patterns. I hope that in the future patients will be interviewed by computers, which will correlate the described symptoms with results of tests. I hope that with the further development of AI and cognitive computing there will be fewer errors in reports of medical imaging and diagnosis.”

Eduardo Vendrell, a computer science professor at the Polytechnic University of Valencia in Spain, responded, “In the field of health, many solutions will appear that will allow us to anticipate current problems and discover other risk situations more efficiently. The use of personal gadgets and other domestic devices will allow us to interact directly with professionals and institutions in any situation of danger or deterioration of our health.”

Monica Murero, director of the E-Life International Institute and associate professor in sociology of new technology at the University of Naples Federico II in Italy, commented, “In health care, I foresee positive outcomes in terms of reducing human mistakes, which are currently still causing several failures. Also, I foresee an increased development of mobile (remote) 24/7 health care services and personalized medicine thanks to AI and human-machine collaboration applied to the field.”

Uta Russmann, professor in the department of communication at FHWien der WKW University of Applied Sciences for Management & Communication, said, “Life expectancy is increasing (globally) and human-machine/AI collaboration will help older people to manage their life on their own by taking care of them, helping them in the household (taking down the garbage, cleaning up, etc.) as well as keeping them company – just like cats and dogs do, but it will be a much more ‘advanced’ interaction.”

Lindsey Andersen, an activist at the intersection of human rights and technology for Freedom House and Internews, now doing graduate research at Princeton University, commented, “AI will augment human intelligence. In health care, for example, it will help doctors more accurately diagnose and treat disease and continually monitor high-risk patients through internet-connected medical devices. It will bring health care to places with a shortage of doctors, allowing health care workers to diagnose and treat disease anywhere in the world and to prevent disease outbreaks before they start.”

An anonymous respondent said, “The most important place where AI will make a difference is in health care of the elderly. Personal assistants are already capable of many important tasks to help make sure older adults stay in their home. But adding to that emotion detection, more in-depth health monitoring and AI-based diagnostics will surely enhance the power of these tools.”

Denis Parra, assistant professor of computer science in the school of engineering at the Pontifical Catholic University of Chile, commented, “I live in a developing country. Whilst there are potential negative aspects of AI (loss of jobs), for people with disabilities AI technology could improve their lives. I imagine people entering a government office or health facility where people with eye- or ear-related disabilities could effortlessly interact to state their necessities and resolve their information needs.”

Timothy Leffel, research scientist, National Opinion Research Center (NORC) at the University of Chicago, said, “Formulaic transactions and interactions are particularly ripe for automation. This can be good in cases where human error can cause problems, e.g., for well-understood diagnostic medical testing.”

Jean-Daniel Fekete, researcher in human-computer interaction at INRIA in France, said, “Humans and machines will integrate more, improving health through monitoring and easing via machine control. Personal data will then become even more revealing and intrusive and should be kept under personal control.”

Joe Whittaker, a former professor of sciences and associate director of the NASA GESTAR program, now associate provost at Jackson State University, responded, “My hope is that AI/human-machine interface will become commonplace especially in the academic research and health care arena. I envision significant advances in brain-machine interface to facilitate mitigation of physical and mental challenges. Similar uses in robotics should also be used to assist the elderly.”

James Gannon, global head of eCompliance for emerging technology, cloud and cybersecurity at Novartis, responded, “AI will increase the speed of development and the availability of drugs and therapies for orphan indications. AI will assist in general lifestyle and health care management for the average person.”

Jay Sanders, president and CEO of the Global Telemedicine Group, responded, “AI will bring collective expertise to the decision point, and in health care, bringing collective expertise to the bedside will save many lives now lost by individual medical errors.”

Geoff Arnold, CTO for the Verizon Smart Communities organization, said, “One of the most important trends over the next 12 years is the aging population and the high costs of providing them with care and mobility. AI will provide better data-driven diagnoses of medical and cognitive issues and it will facilitate affordable AV-based paratransit for the less mobile. It will support, not replace, human care-givers.”

John Lazzaro, retired professor of electrical engineering and computer science, University of California, Berkeley, commented, “When I visit my primary care physician today, she spends a fair amount of time typing into an EMS application as she’s talking to me. In this sense, the computer has already arrived in the clinic. An AI system that frees her from this clerical task – that can listen and watch and distill the doctor-patient interaction into actionable data – would be an improvement. A more-advanced AI system would be able to form a ‘second opinion’ based on this data as the appointment unfolds, discreetly advising the doctor via a wearable. The end goal is a reduction in the number of ‘false starts’ in patient diagnosis. If you’ve read Lisa Sanders’s columns in the New York Times, where she traces the arc of difficult diagnoses, you understand the real clinical problem that this system addresses.”

Steve Farnsworth, chief marketing officer at Demand Marketing, commented, “Machine learning and AI offer tools to turn that into actionable data. One project using machine learning and big data already was able to predict SIDS correctly 94% of the time. Imagine AI looking at diagnostics, tests and successful treatments of millions of medical cases. We would instantly have a deluge of new cures and know the most effective treatment options using only the data, medicines and therapies we have now. The jump in quality health care alone for humans is staggering. This is only one application for AI.”

Daniel Siewiorek, a professor with the Human-Computer Interaction Institute at Carnegie Mellon University, predicted, “AI will enable systems to perform labor-intensive activities where there are labor shortages. For example, consider recovery from an injury. There is a shortage of physical therapists to monitor and correct exercises. AI would enable a virtual coach to monitor, correct and encourage a patient. Virtual coaches could take on the persona of a human companion or a pet, allowing the aging population to live independently.”

Joly MacFie, president of the Internet Society, New York chapter, commented, “AI will have many benefits for people with disabilities and health issues. Much of the aging baby boomer generation will be in this category.”

The overall hopes for the future of health care are tempered by concerns that there will continue to be inequities in access to the best care and worries that private health data may be used to limit people’s options.

Craig Burdett, a respondent who provided no identifying details, wrote, “While most AI will probably be a positive benefit, the possible darker side of AI could lead to a loss of agency for some. For example, in a health care setting an increasing use of AI could allow wealthier patients access to significantly-more-advanced diagnosis agents. When coupled with a supportive care team, these patients could receive better treatment and a greater range of treatment options. Conversely, less-affluent patients may be relegated to automated diagnoses and treatment plans with little opportunity for interaction to explore alternative treatments. AI could, effectively, manage long-term health care costs by offering lesser treatment (and sub-optimal recovery rates) to individuals perceived to have a lower status. Consider two patients with diabetes. One patient, upon diagnosis, modifies their eating and exercise patterns (borne out by embedded diagnostic tools) and would benefit from more advanced treatment. The second patient fails to modify their behaviour resulting in substantial ongoing treatment that could be avoided by simple lifestyle choices. An AI could subjectively evaluate that the patient has little interest in their own health and withhold more expensive treatment options leading to a shorter lifespan and an overall cost saving.”

Sumandra Majee, an architect at F5 Networks Inc., said, “AI, deep learning, etc., will become more a part of daily life in advanced countries. This will potentially widen the gap between technology-savvy, economically well-to-do folks and the folks with limited access to technology. However, I am hopeful that in the field of healthcare, especially when it comes to diagnosis, AI will significantly augment the field, allowing doctors to do a far better job. Many of the routine aspects of checkups can be done via technology. There is no reason an expert human has to be involved in basic A/B testing to reach a conclusion. Machines can be implemented for those tasks and human doctors should only do the critical parts. I do see AI playing a negative role in education, where students may not often actually do the hard work of learning through experience. It might actually make the overall population dumber.”

Timothy Graham, a postdoctoral research fellow in sociology and computer science at Australian National University, commented, “In health care, we see current systems already under heavy criticism (e.g., the My Health Record system in Australia, or the NHS Digital program), because they are nudging citizens into using the system through an ‘opt-out’ mechanism and there are concerns that those who do not opt out may be profiled, targeted and/or denied access to services based on their own data.”

Valarie Bell, a computational social scientist at the University of North Texas, commented, “Let’s say medical diagnosis is taken over by machines, computers and robotics – how will stressful prognoses be communicated? Will a hologram or a computer deliver ‘the bad news’ instead of a physician? Given the health care industry’s inherent profit motives it would be easy for them to justify how much cheaper it would be to simply have devices diagnose, prescribe treatment and do patient care, without concern for the importance of human touch and interactions. Thus, we may devolve into a health care system where the rich actually get a human doctor while everyone else, or at least the poor and uninsured, get the robot.”

The following one-liners from anonymous respondents also tie into the future of health care:

  • “People could use a virtual doctor for information and first-level response so much time could be saved!”
  • “The merging of data science and AI could benefit strategic planning of the future research and development efforts that should be undertaken by humanity.”
  • “I see economic efficiencies and advances in preventive medicine and treatment of disease, however, I do think there will be plenty of adverse consequences.”
  • “Data can reduce errors – for instance, in clearly taking into account the side effects of a medicine or use of multiple medications.”
  • “Human-machine/AI collaboration will reduce barriers to proper medical treatment through better recordkeeping and preventative measures.”
  • “AI can take over many of the administrative tasks current doctors must do, allowing them more time with patients.”

The future of education: High hopes for advances in adaptive and individualized learning, but some doubt that there will be any significant progress and worry over the digital divide

Over the past few decades, experts and amateurs alike have predicted that the internet would have large-scale impacts on education. So far, the results have not lived up to the hype. Some respondents to this canvassing said the advent of AI could foster those changes. They expect to see more options for affordable adaptive and individualized learning solutions, including digital agents or “AI assistants” that work to enhance student-teacher interactions and effectiveness.

Barry Chudakov, founder and principal of Sertain Research and author of “Metalifestream,” commented, “In the learning environment, AI has the potential to finally demolish the retain-to-know learning (and regurgitate) model. Knowing is no longer retaining – machine intelligence does that; it is making significant connections. Connect and assimilate becomes the new learning model.”

Lou Gross, professor of mathematical ecology and expert in grid computing, spatial optimization and modeling of ecological systems at the University of Tennessee, Knoxville, said, “I see AI as assisting in individualized instruction and training in ways that are currently unavailable or too expensive. There are hosts of school systems around the world that have some technology but are using it in very constrained ways. AI use will provide better adaptive learning and help achieve a teacher’s goal of personalizing education based on each student’s progress.”

Guy Levi, chief innovation officer for the Center for Educational Technology, based in Israel, wrote, “In the field of education AI will promote personalization, which almost by definition promotes motivation. The ability to move learning forward all the time by a personal AI assistant, which opens the learning to new paths, is a game changer. The AI assistants will also communicate with one another and will orchestrate teamwork and collaboration. The AI assistants will also be able to manage diverse methods of learning, such as productive failure, teach-back and other innovating pedagogies.”

Micah Altman, a senior fellow at the Brookings Institution and head scientist in the program on information science at MIT Libraries, wrote, “These technologies will help to adapt learning (and other environments) to the needs of each individual by translating language, aiding memory and providing us feedback on our own emotional and cognitive state and on the environment. We all need adaptation; each of us, practically every day, is at times tired, distracted, fuzzy-headed or nervous, which limits how we learn, how we understand and how we interact with others. AI has the potential to assist us to engage with the world better – even when conditions are not ideal – and to better understand ourselves.”

Shigeki Goto, Asia-Pacific internet pioneer, Internet Hall of Fame member and a professor of computer science at Waseda University, commented, “AI is already applied to personalized medicine for an individual patient. Similarly, it will be applied to learning or education to realize ‘personalized learning’ or tailored education. We need to collect data which covers both successful learning and failure experiences, because machine learning requires positive and negative data.”

Andreas Kirsch, fellow at Newspeak House, formerly with Google and DeepMind in Zurich and London, wrote, “Higher education outside of normal academia will benefit further from AI progress and empower more people with access to knowledge and information. For example, question-and-answer systems will improve. Tech similar to Google Translate and WaveNet will lower the barrier of knowledge acquisition for non-English speakers. At the same time, child labor will be reduced because robots will be able to perform the tasks far cheaper and faster, forcing governments in Asia to find real solutions.”

Kristin Jenkins, executive director of BioQUEST Curriculum Consortium, said, “One of the benefits of this technology is the potential to have really effective, responsive education resources. We know that students benefit from immediate feedback and the opportunity to practice applying new information repeatedly to enhance mastery. AI systems are perfect for analyzing students’ progress, providing more practice where needed and moving on to new material when students are ready. This allows time with instructors to focus on more-complex learning, including 21st-century skills.”

Mike Meyer, chief information officer at Honolulu Community College, commented, “Adult education availability and relevance will undergo a major transformation. Community colleges will become more directly community centers for both occupational training and greatly expanded optional liberal arts, art, crafts and hobbies. Classes will, by 2030, be predominantly augmented-reality-based, with a full mix of physical and virtual students in classes presented in virtual classrooms by national and international universities and organizations. The driving need will be expansion of knowledge for personal interest and enjoyment as universal basic income or equity will replace the automated tasks that had provided subsistence jobs in the old system.”

Jennifer Groff, co-founder of the Center for Curriculum Redesign, an international non-governmental organization dedicated to redesigning education for the 21st century, wrote, “The impact on learning and learning environments has the potential to be one of the most positive future outcomes. Learning is largely intangible and invisible, making it a ‘black box’ – and our tools to capture and support learning to this point have been archaic. Think large-scale assessment. Learners need tools that help them understand where they are in a learning pathway, how they learn best, what they need next and so on. We’re only just beginning to use technology to better answer these questions. AI has the potential to help us better understand learning, gain insights into learners at scale and, ultimately, build better learning tools and systems for them. But as a large social system, it is also prey to the complications of poor public policy that ultimately warps and diminishes AI’s potential positive impact.”

Norton Gusky, an education-technology consultant, wrote, “By 2030 most learners will have personal profiles that will tap into AI/machine learning. Learning will happen everywhere and at any time. There will be appropriate filters that will limit the influence of AI, but ethical considerations will also be an issue.”

Cliff Zukin, professor of public policy and political science at Rutgers University’s School of Planning and Public Policy and the Eagleton Institute of Politics, said, “It takes ‘information’ out of the category of a commodity, and more information makes for better decisions and is democratizing. Education, to me, has always been the status leveler, correcting, to some extent, for birth luck and social mobility. This will be like Asimov’s ‘Foundation,’ where everyone is plugged into the data-sphere. There is a dark side (later) but overall a positive.”

However, some expect that there will be a continuing digital divide in education, with the privileged having more access to advanced tools and more capacity for using them well, while the less-privileged lag behind.

Henning Schulzrinne, co-chair of the Internet Technical Committee of the IEEE Communications Society, professor at Columbia University and Internet Hall of Fame member, said, “Human-mediated education will become a luxury good. Some high school- and college-level teaching will be conducted partially by video and AI-graded assignments, using similar platforms to the MOOC [massive open online courses] models today, with no human involvement, to deal with increasing costs for education (‘robo-TA’).”

Joe Whittaker, a former professor of sciences and associate director of the NASA GESTAR program, now associate provost at Jackson State University, responded, “Huge segments of society will be left behind or excluded completely from the benefits of digital advances – many persons in underserved communities as well as others who are socio-economically challenged. This is due to the fact that these persons will be under-prepared generally, with little or no digital training or knowledge base. They rarely have access to the relatively ubiquitous internet, except when at school or in the workplace. Clearly, the children of these persons will be greatly disadvantaged.”

Some witnesses of technology’s evolution over the past few decades feel that its most-positive potential has been disappointingly delayed. After witnessing the slower-than-expected progress of tech’s impact on public education since the 1990s, they are less hopeful than others.

Ed Lyell, longtime educational technologies expert and professor at Adams State University, said education has been held back to this point by the tyranny of the status quo. He wrote, “By 2030, lifelong learning will become more widespread for all ages. The tools already exist, including Khan Academy and YouTube. We don’t have to know as much, just how to find information when we want it. We will have on-demand, 24/7 ‘schooling.’ This will make going to sit-down classroom schools more and more a hindrance to our learning. The biggest negative will be from those protecting current, status-quo education including teachers/faculty, school boards and college administrators. They are protecting their paycheck- or ego-based role. They will need training, counseling and help to embrace the existing and forthcoming change as good for all learners. Part of the problem now is that they do not want to acknowledge the reality of how current schools are today. Some do a good job, yet these are mostly serving already smarter, higher-income communities. Parents fight to have their children have a school like they experienced, forgetting how inefficient and often useless it was. AI can help customize curricula to each learner and guide/monitor their journey through multiple learning activities, including some existing schools, on-the-job learning, competency-based learning, internships and such. You can already learn much more, and more efficiently, using online resources than almost all of the classes I took in my public schooling and college, all the way through getting a Ph.D.”

A consultant and analyst also said that advances in education have been held back by entrenched interests in legacy education systems, writing, “The use of technology in education is minimal today due to the existence and persistence of the classroom-in-a-school model. As we have seen over the last 30 years, the application of artificial intelligence in the field of man/machine interface has grown in many unexpected directions. Who would have thought back in the late 1970s that the breadth of today’s online (i.e., internet) capabilities could have emerged? I believe we are just seeing the beginning of the benefits of the man/machine interface for mankind. The institutionalized education model must be eliminated to allow education of each and every individual to grow. The human brain can be ‘educated’ 24 hours a day by intelligent ‘educators’ who may not even be human in the future. Access to information is no longer a barrier as it was 50 years ago. The next step now is to remove the barrier of structured human delivery of learning in the classroom.”

Brock Hinzmann, a partner in the Business Futures Network who worked for 40 years as a futures researcher at SRI International, was hopeful in his comments but also issued a serious warning. He wrote: “Most of the improvements in the technologies we call AI will involve machine learning from big data to improve the efficiency of systems, which will improve the economy and wealth. It will improve emotion and intention recognition, augment human senses and improve overall satisfaction in human-computer interfaces. There will also be abuses in monitoring personal data and emotions and in controlling human behavior, which we need to recognize early and thwart. Intelligent machines will recognize patterns that lead to equipment failures or flaws in final products and be able to correct a condition or shut down and pinpoint the problem. Autonomous vehicles will be able to analyze data from other vehicles and sensors in the roads or on the people nearby to recognize changing conditions and avoid accidents. In education and training, AI learning systems will recognize learning preferences, styles and progress of individuals and help direct them toward a personally satisfying outcome.

“However, governments or religious organizations may monitor people’s emotions and activities using AI to direct them to ‘feel’ a certain way, to monitor them and to punish them if their emotional responses at work, in education or in public do not conform to some norm. Education could become indoctrination; democracy could become autocracy or theocracy.”


Early humans

Genetic measurements indicate that the ape lineage which would lead to Homo sapiens diverged from the lineage that would lead to chimpanzees and bonobos, the closest living relatives of modern humans, around 4.6 to 6.2 million years ago. [23] Anatomically modern humans arose in Africa about 300,000 years ago, [24] and reached behavioural modernity about 50,000 years ago. [25]

Modern humans spread rapidly from Africa into the frost-free zones of Europe and Asia around 60,000 years ago. [26] The rapid expansion of humankind to North America and Oceania took place at the climax of the most recent ice age, when temperate regions of today were extremely inhospitable. Yet, humans had colonized nearly all the ice-free parts of the globe by the end of the Ice Age, some 12,000 years ago. [27] Other hominids such as Homo erectus had been using simple wood and stone tools for millennia, but as time progressed, tools became far more refined and complex.

Perhaps as early as 1.8 million years ago, but certainly by 500,000 years ago, humans began using fire for heat and cooking. [28] They also developed language in the Paleolithic period [29] and a conceptual repertoire that included systematic burial of the dead and adornment of the living. Early artistic expression can be found in the form of cave paintings and sculptures made from ivory, stone, and bone, showing a spirituality generally interpreted as animism, or even shamanism. [30] During this period, all humans lived as hunter-gatherers, and were generally nomadic. [31] Archaeological and genetic data suggest that the source populations of Paleolithic hunter-gatherers survived in sparsely wooded areas and dispersed through areas of high primary productivity while avoiding dense forest cover. [32]

Rise of civilization

The Neolithic Revolution, beginning around 10,000 BCE, saw the development of agriculture, which fundamentally changed the human lifestyle. Farming developed around 10,000 BCE in the Middle East, around 7000 BCE in what is now China, around 6000 BCE in the Indus Valley and Europe, and around 4000 BCE in the Americas. [33] Cultivation of cereal crops and the domestication of animals occurred around 8500 BCE in the Middle East, where wheat and barley were the first crops and sheep and goats were domesticated. [34] In the Indus Valley, crops were cultivated by 6000 BCE, along with domesticated cattle. The Yellow River valley in China cultivated millet and other cereal crops by about 7000 BCE, but the Yangtze valley domesticated rice earlier, by at least 8000 BCE. In the Americas, sunflowers were cultivated by about 4000 BCE, and maize and beans were domesticated in Central America by 3500 BCE. Potatoes were first cultivated in the Andes Mountains of South America, where the llama was also domesticated. [33] Metal-working, starting with copper around 6000 BCE, was first used for tools and ornaments. Gold soon followed, with its main use being for ornaments. The need for metal ores stimulated trade, as many of the areas of early human settlement were lacking in ores. Bronze, an alloy of copper and tin, was first known from around 2500 BCE, but did not become widely used until much later. [35]

Though early proto-cities appeared at Jericho and Catal Huyuk around 6000 BCE, [36] the first civilizations did not emerge until around 3000 BCE in Egypt [37] and Mesopotamia. [38] These cultures saw the invention of the wheel, [39] mathematics, [40] bronze-working, sailing boats, the potter's wheel, woven cloth, construction of monumental buildings, [41] and writing. [42] Scholars now recognize that writing may have independently developed in at least four ancient civilizations: Mesopotamia (between 3400 and 3100 BCE), Egypt (around 3250 BCE), [43] [44] China (2000 BCE), [45] and lowland Mesoamerica (by 650 BCE). [46]

Farming permitted far denser populations, which in time organized into states. Agriculture also created food surpluses that could support people not directly engaged in food production. [47] The development of agriculture permitted the creation of the first cities. These were centres of trade, manufacturing and political power. [48] Cities established a symbiosis with their surrounding countrysides, absorbing agricultural products and providing, in return, manufactured goods and varying degrees of military control and protection.

The development of cities was synonymous with the rise of civilization. [a] Early civilizations arose first in Lower Mesopotamia (3000 BCE), [50] [51] followed by Egyptian civilization along the Nile River (3000 BCE), [11] the Harappan civilization in the Indus River Valley (in present-day India and Pakistan; 2500 BCE), [52] [53] and Chinese civilization along the Yellow and Yangtze Rivers (2200 BCE). [12] [13] These societies developed a number of unifying characteristics, including a central government, a complex economy and social structure, sophisticated language and writing systems, and distinct cultures and religions. Writing facilitated the administration of cities, the expression of ideas, and the preservation of information. [54]

Entities such as the Sun, Moon, Earth, sky, and sea were often deified. [55] Shrines developed, which evolved into temple establishments, complete with a complex hierarchy of priests and priestesses and other functionaries. Typical of the Neolithic was a tendency to worship anthropomorphic deities. Among the earliest surviving written religious scriptures are the Egyptian Pyramid Texts, the oldest of which date to between 2400 and 2300 BCE. [56]

Cradles of civilization

The Bronze Age is part of the three-age system (Stone Age, Bronze Age, Iron Age) that, for some parts of the world, effectively describes the early history of civilization. During this era the most fertile areas of the world saw city-states and the first civilizations develop. These were concentrated in fertile river valleys: the Tigris and Euphrates in Mesopotamia, the Nile in Egypt, [57] the Indus in the Indian subcontinent, [52] and the Yangtze and Yellow Rivers in China.

Sumer, located in Mesopotamia, is the first known complex civilization, developing the first city-states in the 4th millennium BCE. [51] It was in these cities that the earliest known form of writing, cuneiform script, appeared around 3000 BCE. [58] [59] Cuneiform writing began as a system of pictographs. These pictorial representations eventually became simplified and more abstract. [59] Cuneiform texts were written on clay tablets, on which symbols were drawn with a blunt reed used as a stylus. [58] Writing made the administration of a large state far easier.

Transport was facilitated by waterways—by rivers and seas. The Mediterranean Sea, at the juncture of three continents, fostered the projection of military power and the exchange of goods, ideas, and inventions. This era also saw new land technologies, such as horse-based cavalry and chariots, that allowed armies to move faster.

These developments led to the rise of territorial states and empires. In Mesopotamia there prevailed a pattern of independent warring city-states and of a loose hegemony shifting from one city to another. In Egypt, by contrast, an initial division into Upper and Lower Egypt was shortly followed by the unification of the entire valley around 3100 BCE and a lasting internal peace. [60] In Crete the Minoan civilization had entered the Bronze Age by 2700 BCE and is regarded as the first civilization in Europe. [61] Over the next millennia, other river valleys saw monarchical empires rise to power. In the 25th–21st centuries BCE, the empires of Akkad and Sumer arose in Mesopotamia. [62]

Over the following millennia, civilizations developed across the world. Trade increasingly became a source of power as states that controlled important resources or trade routes rose to dominance. By 1400 BCE, Mycenaean Greece had begun to develop; [63] it ended with the Late Bronze Age collapse, which started to affect many Mediterranean civilizations between 1200 and 1150 BCE. In India, this era was the Vedic period, which laid the foundations of Hinduism and other cultural aspects of early Indian society, and ended in the 6th century BCE. [64] From around 550 BCE, many independent kingdoms and republics known as the Mahajanapadas were established across the subcontinent. [65]

As complex civilizations arose in the Eastern Hemisphere, the indigenous societies in the Americas remained relatively simple and fragmented into diverse regional cultures. During the formative stage in Mesoamerica (about 1500 BCE to 500 CE), more complex and centralized civilizations began to develop, mostly in what is now Mexico, Central America, and Peru. They included civilizations such as the Olmec, Maya, Zapotec, Moche, and Nazca. They developed agriculture, growing crops unique to the Americas such as maize, chili peppers, cocoa, tomatoes, and potatoes, and created distinct cultures and religions. These ancient indigenous societies would be greatly affected, for good and ill, by European contact during the early modern period.

Axial Age

Beginning in the 8th century BCE, the "Axial Age" saw the development of a set of transformative philosophical and religious ideas, mostly independently, in many different places. Chinese Confucianism, Indian Buddhism and Jainism, and Jewish monotheism are all claimed by some scholars to have developed in the 6th century BCE. (Karl Jaspers' Axial-Age theory also includes Persian Zoroastrianism, but other scholars dispute his timeline for Zoroastrianism.) In the 5th century BCE, Socrates and Plato made substantial advances in the development of ancient Greek philosophy.

In the East, three schools of thought would dominate Chinese thinking well into the 20th century. These were Taoism, Legalism, and Confucianism. The Confucian tradition, which would become particularly dominant, looked for political morality not to the force of law but to the power and example of tradition. Confucianism would later spread to the Korean Peninsula and toward Japan.

In the West, the Greek philosophical tradition, represented by Socrates, Plato, Aristotle, and other philosophers, [66] along with accumulated science, technology, and culture, diffused throughout Europe, Egypt, the Middle East, and Northwest India, starting in the 4th century BCE after the conquests of Alexander III of Macedon (Alexander the Great). [67]

Regional empires

The millennium from 500 BCE to 500 CE saw a series of empires of unprecedented size develop. Well-trained professional armies, unifying ideologies, and advanced bureaucracies created the possibility for emperors to rule over large domains whose populations could attain numbers upwards of tens of millions of subjects. The great empires depended on military annexation of territory and on the formation of defended settlements to become agricultural centres. The relative peace that the empires brought encouraged international trade, most notably the massive trade routes in the Mediterranean, the maritime trade web in the Indian Ocean, and the Silk Road. In southern Europe, the Greeks (and later the Romans), in an era known as "classical antiquity," established cultures whose practices, laws, and customs are considered the foundation of contemporary Western culture.

There were a number of regional empires during this period. The kingdom of the Medes helped to destroy the Assyrian Empire in tandem with the nomadic Scythians and the Babylonians. Nineveh, the capital of Assyria, was sacked by the Medes in 612 BCE. [68] The Median Empire gave way to successive Iranian empires, including the Achaemenid Empire (550–330 BCE), the Parthian Empire (247 BCE–224 CE), and the Sasanian Empire (224–651 CE).

Several empires began in modern-day Greece. First was the Delian League (from 477 BCE) [69] and the succeeding Athenian Empire (454–404 BCE), centred in present-day Greece. Later, Alexander the Great (356–323 BCE), of Macedon, founded an empire of conquest, extending from present-day Greece to present-day India. [70] [71] The empire divided shortly after his death, but the influence of his Hellenistic successors made for an extended Hellenistic period (323–31 BCE) [72] throughout the region.

In Asia, the Maurya Empire (322–185 BCE) existed in present-day India. [73] In the 3rd century BCE, most of South Asia was united under the Maurya Empire by Chandragupta Maurya, and it flourished under Ashoka the Great. From the 3rd century CE, the Gupta dynasty oversaw the period referred to as ancient India's Golden Age. From the 4th to 6th centuries, northern India was ruled by the Gupta Empire. In southern India, three prominent Dravidian kingdoms emerged: the Cheras, Cholas, [74] and Pandyas. The ensuing stability contributed to ushering in the golden age of Hindu culture in the 4th and 5th centuries.

In Europe, the Roman Empire, centered in present-day Italy, began in the 7th century BCE. [75] In the 3rd century BCE the Roman Republic began expanding its territory through conquest and alliances. [76] By the time of Augustus (63 BCE – 14 CE), the first Roman Emperor, Rome had already established dominion over most of the Mediterranean. The empire would continue to grow, controlling much of the land from England to Mesopotamia, reaching its greatest extent under the emperor Trajan (died 117 CE). In the 3rd century CE, the empire split into western and eastern regions, with (usually) separate emperors. The western empire would fall, in 476 CE, to Germanic forces led by Odoacer. The eastern empire, now known as the Byzantine Empire, with its capital at Constantinople, would continue for another thousand years, until Constantinople was conquered by the Ottoman Empire in 1453.

In China, the Qin dynasty (221–206 BCE), the first imperial dynasty of China, was followed by the Han Empire (206 BCE – 220 CE). The Han Dynasty was comparable in power and influence to the Roman Empire that lay at the other end of the Silk Road. Han China developed advanced cartography, shipbuilding, and navigation. The Chinese invented blast furnaces, and created finely tuned copper instruments. As with other empires during the Classical Period, Han China advanced significantly in the areas of government, education, mathematics, astronomy, technology, and many others. [77]

In Africa, the Kingdom of Aksum, centred in present-day Ethiopia, established itself by the 1st century CE as a major trading empire, dominating its neighbours in South Arabia and Kush and controlling the Red Sea trade. It minted its own currency and carved enormous monolithic steles such as the Obelisk of Axum to mark its emperors' graves.

Successful regional empires were also established in the Americas, arising from cultures established as early as 2500 BCE. [78] In Mesoamerica, vast pre-Columbian societies were built, the most notable being the Zapotec Empire (700 BCE – 1521 CE), [79] and the Maya civilization, which reached its highest state of development during the Mesoamerican Classic period (c. 250–900 CE), [80] but continued throughout the Post-Classic period until the arrival of the Spanish in the 16th century CE. Maya civilization arose as the Olmec mother culture gradually declined. The great Mayan city-states slowly rose in number and prominence, and Maya culture spread throughout the Yucatán and surrounding areas. The later empire of the Aztecs was built on neighbouring cultures and was influenced by conquered peoples such as the Toltecs.

Some areas experienced slow but steady technological advances, with important developments such as the stirrup and moldboard plough arriving every few centuries. There were, however, in some regions, periods of rapid technological progress. Most important, perhaps, was the Hellenistic period in the region of the Mediterranean, during which hundreds of technologies were invented. [81] Such periods were followed by periods of technological decay, as during the Roman Empire's decline and fall and the ensuing early medieval period.

Declines, falls, and resurgence

The ancient empires faced common problems associated with maintaining huge armies and supporting a central bureaucracy. These costs fell most heavily on the peasantry, while land-owning magnates increasingly evaded centralized control and its costs. Barbarian pressure on the frontiers hastened internal dissolution. China's Han dynasty fell into civil war in 220 CE, beginning the Three Kingdoms period, while its Roman counterpart became increasingly decentralized and divided about the same time in what is known as the Crisis of the Third Century. The great empires of Eurasia were all located on temperate and subtropical coastal plains. From the Central Asian steppes, horse-based nomads, mainly Mongols and Turks, dominated a large part of the continent. The development of the stirrup and the breeding of horses strong enough to carry a fully armed archer made the nomads a constant threat to the more settled civilizations.

The gradual break-up of the Roman Empire, spanning several centuries after the 2nd century CE, coincided with the spread of Christianity outward from the Middle East. [82] The Western Roman Empire fell under the domination of Germanic tribes in the 5th century, [83] and these polities gradually developed into a number of warring states, all associated in one way or another with the Catholic Church. [84] The remaining part of the Roman Empire, in the eastern Mediterranean, continued as what came to be called the Byzantine Empire. [85] Centuries later, a limited unity would be restored to western Europe through the establishment in 962 of a revived "Roman Empire", [86] later called the Holy Roman Empire, [87] comprising a number of states in what is now Germany, Austria, Switzerland, the Czech Republic, Belgium, Italy, and parts of France. [88] [89]

In China, dynasties would rise and fall, but, by sharp contrast to the Mediterranean-European world, dynastic unity would be restored. After the fall of the Eastern Han Dynasty [90] and the demise of the Three Kingdoms, nomadic tribes from the north began to invade in the 4th century, eventually conquering areas of northern China and setting up many small kingdoms. [ citation needed ] The Sui Dynasty successfully reunified the whole of China [91] in 581, [92] and laid the foundations for a Chinese golden age under the Tang dynasty (618–907).

The term "Post-classical Era", though derived from the Eurocentric name of the era of "Classical antiquity", takes in a broader geographic sweep. The era is commonly dated from the 5th-century fall of the Western Roman Empire, which fragmented into many separate kingdoms, some of which would later be confederated under the Holy Roman Empire.

The Eastern Roman, or Byzantine, Empire survived until late in the Post-classical, or Medieval, period.

The Post-classical period also encompasses the Early Muslim conquests, the subsequent Islamic Golden Age, and the commencement and expansion of the Arab slave trade, followed by the Mongol invasions of the Middle East, Central Asia, and Eastern Europe [ citation needed ] and the founding around 1280 of the Ottoman Empire. [93] South Asia saw a series of middle kingdoms of India, followed by the establishment of Islamic empires in India.

In western Africa, the Mali Empire and the Songhai Empire developed. On the southeast coast of Africa, Arabic ports were established where gold, spices, and other commodities were traded. This allowed Africa to join the Southeast Asia trading system, bringing it into contact with Asia; this contact, along with Muslim culture, resulted in the Swahili culture.

China experienced the successive Sui, Tang, Song, Yuan, and early Ming dynasties. Middle Eastern trade routes along the Indian Ocean, and the Silk Road through the Gobi Desert, provided limited economic and cultural contact between Asian and European civilizations.

During the same period, civilizations in the Americas, such as the Mississippian culture, Ancestral Puebloans, Inca, Maya, and Aztecs, reached their zenith. All would be compromised by, then conquered after, contact with European colonists at the beginning of the Modern period.

Greater Middle East

Prior to the advent of Islam in the 7th century, the Middle East was dominated by the Byzantine Empire and the Persian Sasanian Empire, which frequently fought each other for control of several disputed regions. This was also a cultural battle, with the Byzantine Hellenistic and Christian culture competing against the Persian Iranian traditions and Zoroastrian religion. The formation of the Islamic religion created a new contender that quickly surpassed both of these empires. Islam greatly affected the political, economic, and military history of the Old World, especially the Middle East.

From their centre on the Arabian Peninsula, Muslims began their expansion during the early Postclassical Era. By 750 CE, they came to conquer most of the Near East, North Africa, and parts of Europe, ushering in an era of learning, science, and invention known as the Islamic Golden Age. The knowledge and skills of the ancient Near East, Greece, and Persia were preserved in the Postclassical Era by Muslims, who also added new and important innovations from outside, such as the manufacture of paper from China and decimal positional numbering from India.

Much of this learning and development can be linked to geography. Even prior to Islam's presence, the city of Mecca had served as a centre of trade in Arabia, and the Islamic prophet Muhammad himself was a merchant. With the new Islamic tradition of the Hajj, the pilgrimage to Mecca, the city became even more a centre for exchanging goods and ideas. The influence held by Muslim merchants over African-Arabian and Arabian-Asian trade routes was tremendous. As a result, Islamic civilization grew and expanded on the basis of its merchant economy, in contrast to the Europeans, Indians, and Chinese, who based their societies on an agricultural landholding nobility. Merchants brought goods and their Islamic faith to China, India, Southeast Asia, and the kingdoms of western Africa, and returned with new discoveries and inventions.

Motivated by religion and dreams of conquest, European leaders launched a number of Crusades to try to roll back Muslim power and retake the Holy Land. The Crusades were ultimately unsuccessful and served more to weaken the Byzantine Empire, especially with the 1204 sack of Constantinople. The Byzantine Empire began to lose increasing amounts of territory to the Ottoman Turks. Arab domination of the region had ended in the mid-11th century with the arrival of the Seljuq Turks, migrating south from the Turkic homelands in Central Asia. In the early 13th century, a new wave of invaders, the armies of the Mongol Empire, swept through the region but was eventually eclipsed by the Turks [ citation needed ] and the founding of the Ottoman Empire in modern-day Turkey around 1280. [93]

North Africa saw the rise of polities formed by the Berbers, such as the Marinid dynasty in Morocco, the Zayyanid dynasty in Algeria, and the Hafsid dynasty in Tunisia. The region would later be called the Barbary Coast and would host pirates and privateers who used several North African ports for their raids against the coastal towns of several European countries in search of slaves to be sold in North African markets as part of the Barbary slave trade.

Starting with the Sui dynasty (581–618), the Chinese began expanding into eastern Central Asia and confronted Turkic nomads, who were becoming the dominant ethnic group in Central Asia. [94] [95] Originally the relationship was largely cooperative, but in 630 the Tang dynasty began an offensive against the Turks, [96] capturing areas of the Mongolian Ordos Desert. The Hephthalites had been the most powerful of the nomad groups in the 6th and 7th centuries, controlling much of the region. In the 8th century, Islam began to penetrate the region and soon became the sole faith of most of the population, though Buddhism remained strong in the east. [ weasel words ] [ citation needed ] The desert nomads of Arabia could militarily match the nomads of the steppe, and the early Arab Empire gained control over parts of Central Asia. [94] In the 9th through 13th centuries the region was divided among several powerful states, including the Samanid Empire, [ citation needed ] the Seljuk Empire, [97] and the Khwarezmid Empire. The largest empire to rise out of Central Asia developed when Genghis Khan united the tribes of Mongolia. The Mongol Empire spread to comprise all of Central Asia and China as well as large parts of Russia and the Middle East. [ citation needed ] After Genghis Khan died in 1227, [98] most of Central Asia continued to be dominated by a successor state, the Chagatai Khanate. In 1369, Timur, a Turkic leader in the Mongol military tradition, conquered most of the region and founded the Timurid Empire. Timur's large empire collapsed soon after his death, however, and the region then became divided into a series of smaller khanates created by the Uzbeks, including the Khanate of Khiva, the Khanate of Bukhara, and the Khanate of Kokand, all of whose capitals are located in present-day Uzbekistan.

In the aftermath of the Byzantine–Sasanian wars, the Caucasus saw Armenia and Georgia flourish as independent realms free from foreign suzerainty. However, with the Byzantine and Sasanian empires exhausted from war, the Arabs were given the opportunity to proceed to the Caucasus during the early Muslim conquests. By the 13th century, the arrival of the Mongols saw the region invaded and subjugated once again.

Europe

Europe during the Early Middle Ages was characterized by depopulation, deurbanization, and barbarian invasion, all of which had begun in Late Antiquity. The barbarian invaders formed their own new kingdoms in the remains of the Western Roman Empire. In the 7th century, North Africa and the Middle East, once part of the Eastern Roman Empire, became part of the Caliphate after conquest by Muhammad's successors. Although there were substantial changes in society and political structures, most of the new kingdoms incorporated as many of the existing Roman institutions as they could. Christianity expanded in western Europe, and monasteries were founded. In the 7th and 8th centuries the Franks, under the Carolingian dynasty, established an empire covering much of western Europe; [ citation needed ] it lasted until the 9th century, when it succumbed to pressure from new invaders—the Vikings, [99] Magyars, and Saracens.

During the High Middle Ages, which began after 1000, the population of Europe increased greatly as technological and agricultural innovations allowed trade to flourish and crop yields to increase. Manorialism—the organization of peasants into villages that owed rents and labour service to nobles—and feudalism—a political structure whereby knights and lower-status nobles owed military service to their overlords in return for the right to rents from lands and manors—were two of the ways of organizing medieval society that developed during the High Middle Ages. Kingdoms became more centralized after the decentralizing effects of the break-up of the Carolingian Empire. The Crusades, first preached in 1095, were an attempt by western Christians from nations such as the Kingdom of England, the Kingdom of France and the Holy Roman Empire to regain control of the Holy Land from the Muslims and succeeded for long enough to establish some Christian states in the Near East. Italian merchants imported slaves to work in households or in sugar processing. [ citation needed ] Intellectual life was marked by scholasticism and the founding of universities, while the building of Gothic cathedrals was one of the outstanding artistic achievements of the age.

The Late Middle Ages were marked by difficulties and calamities. Famine, plague, and war devastated the population of western Europe. [ citation needed ] The Black Death alone killed approximately 75 to 200 million people between 1347 and 1350. [100] [101] It was one of the deadliest pandemics in human history. Starting in Asia, the disease reached Mediterranean and western Europe during the late 1340s, [102] and killed tens of millions of Europeans in six years; between a third and a half of the population perished.

The Middle Ages witnessed the first sustained urbanization of northern and western Europe. The medieval period lasted until the beginning of the early modern period in the 16th century, [20] marked by the rise of nation states, [103] the division of Western Christianity in the Reformation, [104] the rise of humanism in the Italian Renaissance, [105] and the beginnings of European overseas expansion, which allowed for the Columbian Exchange.

In Central and Eastern Europe, in 1386, the Kingdom of Poland and the Grand Duchy of Lithuania (the latter including territories of modern Belarus and Ukraine), facing depredations by the Teutonic Knights and later also threats from Muscovy, the Crimean Tatars, and the Ottoman Empire, formed a personal union through the marriage of Poland's Queen Jadwiga to Lithuanian Grand Duke Jogaila, who became King Władysław II Jagiełło of Poland. For the next four centuries, until the 18th-century Partitions of the Polish-Lithuanian Commonwealth by Prussia, Russia, and Austria, the two polities maintained a federated condominium that was long Europe's largest state. The Commonwealth welcomed diverse ethnicities and religions, including most of the world's Jews, furthered scientific thought (e.g., Copernicus's heliocentric theory), and, in a last-ditch effort to preserve its sovereignty, adopted the Constitution of 3 May 1791, the world's second modern written constitution after the U.S. Constitution, which had gone into effect in 1789.

Sub-Saharan Africa

Medieval Sub-Saharan Africa was home to many different civilizations. The Kingdom of Aksum declined in the 7th century as Islam cut it off from its Christian allies, and its people moved further into the Ethiopian Highlands for protection. They eventually gave way to the Zagwe dynasty, famed for its rock-cut architecture at Lalibela. The Zagwe would in turn fall to the Solomonic dynasty, which claimed descent from the Aksumite emperors [ citation needed ] and would rule the country well into the 20th century. In the West African Sahel region, many Islamic empires rose, such as the Ghana Empire, the Mali Empire, the Songhai Empire, and the Kanem–Bornu Empire. They controlled the trans-Saharan trade in gold, ivory, salt, and slaves.

South of the Sahel, civilizations rose in the coastal forests where horses and camels could not survive. [ citation needed ] These included the Yoruba city of Ife, noted for its art, [106] the Oyo Empire, the Kingdom of Benin of the Edo people centred in Benin City, the Igbo Kingdom of Nri, which produced advanced bronze art at Igbo-Ukwu, and the Akan, who are noted for their intricate architecture. [ citation needed ]

Central Africa saw the birth of several states, including the Kingdom of Kongo. In what is now Southern Africa, native Africans created various kingdoms such as the Kingdom of Mutapa. They flourished through trade with the Swahili people on the East African coast. They built large defensive stone structures without mortar, such as Great Zimbabwe, capital of the Kingdom of Zimbabwe, Khami, capital of the Kingdom of Butua, and Danangombe (Dhlo-Dhlo), capital of the Rozwi Empire. The Swahili people themselves were the inhabitants of the East African coast from Kenya to Mozambique who traded extensively with Asians and Arabs, who introduced them to Islam. They built many port cities, such as Mombasa, Zanzibar, and Kilwa, which were known to Chinese sailors under Zheng He and to Islamic geographers.

South Asia

In northern India, after the fall (550 CE) of the Gupta Empire, the region was divided into a complex and fluid network of smaller kingdoms. [ citation needed ]

Early Muslim incursions began in the west in 712 CE, when the Arab Umayyad Caliphate annexed much of present-day Pakistan. Arab military advance was largely halted at that point, but Islam still spread in India, largely due to the influence of Arab merchants along the western coast.

The ninth century saw a Tripartite Struggle for control of northern India among the Pratihara Empire, the Pala Empire, and the Rashtrakuta Empire. Important states that emerged later in the period, during the 14th century, included the Bahmani Sultanate and the Vijayanagara Empire.

Post-classical dynasties in South India included those of the Chalukyas, the Hoysalas, the Cholas, the Islamic Mughals, the Marathas, and the Kingdom of Mysore. Science, engineering, art, literature, astronomy, and philosophy flourished under the patronage of these kings. [ citation needed ]

Northeast Asia

After a period of relative disunity, China was reunified by the Sui dynasty in 581 [ citation needed ] and under the succeeding Tang dynasty (618–907) China entered a Golden Age. [107] The Tang Empire competed with the Tibetan Empire (618–842) for control of areas in Inner and Central Asia. [108] The Tang dynasty eventually splintered, however, and after half a century of turmoil the Song dynasty reunified China, [ citation needed ] when it was, according to William McNeill, the "richest, most skilled, and most populous country on earth". [109] Pressure from nomadic empires to the north became increasingly urgent. By 1142, North China had been lost to the Jurchens in the Jin–Song Wars, and the Mongol Empire [110] conquered all of China in 1279, along with almost half of Eurasia's landmass. After about a century of Mongol Yuan dynasty rule, the ethnic Chinese reasserted control with the founding of the Ming dynasty (1368).

In Japan, the imperial lineage had been established by this time, and during the Asuka period (538–710) the Yamato Province developed into a clearly centralized state. [111] Buddhism was introduced, and there was an emphasis on the adoption of elements of Chinese culture and Confucianism. The Nara period of the 8th century [112] marked the emergence of a strong Japanese state and is often portrayed as a golden age. [ citation needed ] During this period, the imperial government undertook great public works, including government offices, temples, roads, and irrigation systems. [ citation needed ] The Heian period (794 to 1185) saw the peak of imperial power, followed by the rise of militarized clans, and the beginning of Japanese feudalism. The feudal period of Japanese history, dominated by powerful regional lords (daimyōs) and the military rule of warlords (shōguns) such as the Ashikaga shogunate and Tokugawa shogunate, stretched from 1185 to 1868. The emperor remained, but mostly as a figurehead, and the power of merchants was weak.

Postclassical Korea saw the end of the Three Kingdoms era, the three kingdoms being Goguryeo, Baekje, and Silla. Silla conquered Baekje in 660 and Goguryeo in 668, [113] marking the beginning of the North–South States Period (남북국시대), with Unified Silla in the south and Balhae, a successor state to Goguryeo, in the north. [114] In 892 CE, this arrangement reverted to the Later Three Kingdoms, with Goguryeo (then called Taebong and eventually named Goryeo) emerging as dominant and unifying the entire peninsula by 936. [115] The Goryeo dynasty ruled until 1392, when it was succeeded by the Joseon dynasty, which ruled for approximately 500 years.

Southeast Asia

The beginning of the Middle Ages in Southeast Asia saw the fall (550 CE) of the Kingdom of Funan to the Chenla Empire, which was then replaced by the Khmer Empire (802 CE). The Khmer people's capital city, Angkor, was the largest city in the world prior to the industrial age and contained over a thousand temples, the most famous being Angkor Wat.

The Sukhothai (1238 CE) and Ayutthaya (1351 CE) kingdoms were major powers of the Thai people, who were influenced by the Khmer.

Starting in the 9th century, the Pagan Kingdom rose to prominence in modern Myanmar. Its collapse brought about political fragmentation that ended with the rise of the Toungoo Empire in the 16th century.

Other notable kingdoms of the period include the Srivijayan Empire and the Lavo Kingdom (both coming into prominence in the 7th century), the Champa and the Hariphunchai (both about 750), the Đại Việt (968), Lan Na (13th century), Majapahit (1293), Lan Xang (1354), and the Kingdom of Ava (1364).

This period saw the spread of Islam to present-day Indonesia (beginning in the 13th century) and the emergence of the Malay states, including the Malacca Sultanate and the Bruneian Empire.

In the Philippines, several polities arose during this period, including the Rajahnate of Maynila, the Rajahnate of Cebu, and the Rajahnate of Butuan.

Oceania

In the region of Oceania, the Tuʻi Tonga Empire was founded in the 10th century CE and expanded between 1200 and 1500. Tongan culture, language, and hegemony spread widely throughout Eastern Melanesia, Micronesia, and Central Polynesia during this period, [116] influencing East 'Uvea, Rotuma, Futuna, Samoa, and Niue, as well as specific islands and parts of Micronesia (Kiribati, Pohnpei, and miscellaneous outliers), Vanuatu, and New Caledonia (specifically, the Loyalty Islands, with the main island being predominantly populated by the Melanesian Kanak people and their cultures). [117]

In northern Australia, there is evidence that some Aboriginal populations regularly traded with Makassan fishermen from Indonesia before the arrival of Europeans. [118] [119]

At around the same time, a powerful thalassocracy appeared in Eastern Polynesia, centered around the Society Islands, specifically on the sacred Taputapuatea marae, which drew in Eastern Polynesian colonists from places as far away as Hawaii, New Zealand (Aotearoa), and the Tuamotu Islands for political, spiritual and economic reasons, until the unexplained collapse of regular long-distance voyaging in the Eastern Pacific a few centuries before Europeans began exploring the area.

Indigenous written records from this period are virtually nonexistent, as it seems that all Pacific Islanders, with the possible exception of the enigmatic Rapa Nui and their currently undecipherable Rongorongo script, had no writing systems of any kind until after their introduction by European colonists. However, some indigenous prehistories can be estimated and academically reconstructed through careful, judicious analysis of native oral traditions, colonial ethnography, archeology, physical anthropology, and linguistics research.

Americas

In North America, this period saw the rise of the Mississippian culture in the modern-day United States c. 800 CE, marked by the extensive 12th-century urban complex at Cahokia. The Ancestral Puebloans and their predecessors (9th – 13th centuries) built extensive permanent settlements, including stone structures that would remain the largest buildings in North America until the 19th century. [120]

In Mesoamerica, the Teotihuacan civilization fell and the Classic Maya collapse occurred. The Aztec Empire came to dominate much of Mesoamerica in the 14th and 15th centuries.

In South America, the 14th and 15th centuries saw the rise of the Inca. The Inca Empire of Tawantinsuyu, with its capital at Cusco, spanned the entire Andes, making it the most extensive pre-Columbian civilization. The Inca were prosperous and advanced, known for an excellent road system and unrivaled masonry.

In the linear, global, historiographical approach, modern history (the "modern period," the "modern era," "modern times") is the history of the period following post-classical history (in Europe known as the "Middle Ages"), spanning from about 1500 to the present. "Contemporary history" includes events from around 1945 to the present. (The definitions of both terms, "modern history" and "contemporary history", have changed over time, as more history has occurred, and so have their start dates.) [121] [122] Modern history can be further broken down into periods:

  • The early modern period began around 1500 and ended around 1815. Notable historical milestones included the continued European Renaissance (whose start is dated variously between 1200 and 1401), the Age of Exploration, the Islamic gunpowder empires, the Protestant Reformation, [123] [124] and the American Revolution. With the Scientific Revolution, new information about the world was discovered via empirical observation [125] and the scientific method, by contrast with the earlier emphasis on reason and "innate knowledge". The Scientific Revolution received impetus from Johannes Gutenberg's introduction to Europe of printing, using movable type, and from the invention of the telescope and microscope. Globalization was fuelled by international trade and colonization.
  • The late modern period began sometime around 1750–1815, as Europe experienced the Industrial Revolution and the military-political turbulence of the French Revolution and the Napoleonic Wars, which were followed by the Pax Britannica. The late modern period continues either to the end of World War II, in 1945, or to the present. Other notable historical milestones included the Great Divergence and the Russian Revolution.
  • Contemporary history (a period also dubbed Pax Americana in geopolitics) includes historic events from approximately 1945 that are closely relevant to the present time. Major developments include the Cold War, continual hot wars and proxy wars, the Jet Age, the DNA revolution, the Green Revolution, [b] artificial satellites and global positioning systems (GPS), development of the supranational European Union, the Information Age, rapid economic development in India and China, increasing terrorism, and a daunting array of global ecological crises headed by the imminent existential threat of runaway global warming.

The defining features of the modern era developed predominantly in Europe, and so different periodizations are sometimes applied to other parts of the world. When the European periods are used globally, this is often in the context of contact with European culture in the Age of Discovery. [127]

In the humanities and social sciences, the norms, attitudes, and practices arising during the modern period are known as modernity. The corresponding terms for post-World War II culture are postmodernity or late modernity.

Early modern period

The "Early Modern period" [c] was the period between the Middle Ages and the Industrial Revolution—roughly 1500 to 1800. [20] The Early Modern period was characterized by the rise of science, and by increasingly rapid technological progress, secularized civic politics, and the nation state. Capitalist economies began their rise, initially in northern Italian republics such as Genoa. The Early Modern period saw the rise and dominance of mercantilist economic theory, and the decline and eventual disappearance, in much of the European sphere, of feudalism, serfdom, and the power of the Catholic Church. The period included the Protestant Reformation, the disastrous Thirty Years' War, the Age of Exploration, European colonial expansion, the peak of European witch-hunting, the Scientific Revolution, and the Age of Enlightenment. [d]

Renaissance

Europe's Renaissance – the "rebirth" of classical culture, beginning in the 14th century and extending into the 16th – comprised the rediscovery of the classical world's cultural, scientific, and technological achievements, and the economic and social rise of Europe.

The Renaissance engendered a culture of inquisitiveness which ultimately led to Humanism [128] and the Scientific Revolution. [129]

This period, which saw social and political upheavals, and revolutions in many intellectual pursuits, is also celebrated for its artistic developments and the attainments of such polymaths as Leonardo da Vinci and Michelangelo, who inspired the term "Renaissance man."

European expansion

During this period, European powers came to dominate most of the world. Although the most developed regions of European classical civilization were more urbanized than any other region of the world, European civilization had undergone a lengthy period of gradual decline and collapse. During the Early Modern Period, Europe was able to regain its dominance; historians still debate the causes.

Europe's success in this period stands in contrast to other regions. For example, one of the most advanced civilizations of the Middle Ages was China. It had developed an advanced monetary economy by 1000 CE. China had a free peasantry who were no longer subsistence farmers, and could sell their produce and actively participate in the market. According to Adam Smith, writing in the 18th century, China had long been one of the richest, most fertile, best cultivated, most industrious, most urbanized, and most prosperous countries in the world. It enjoyed a technological advantage and had a monopoly in cast iron production, piston bellows, suspension bridge construction, printing, and the compass. However, it seemed to have long since stopped progressing. Marco Polo, who visited China in the 13th century, describes its cultivation, industry, and populousness almost in the same terms as travellers would in the 18th century.

One theory of Europe's rise holds that Europe's geography played an important role in its success. The Middle East, India and China are all ringed by mountains and oceans but, once past these outer barriers, are nearly flat. By contrast, the Pyrenees, Alps, Apennines, Carpathians and other mountain ranges run through Europe, and the continent is also divided by several seas. This gave Europe some degree of protection from the peril of Central Asian invaders. Before the era of firearms, these nomads were militarily superior to the agricultural states on the periphery of the Eurasian continent and, as they broke out into the plains of northern India or the valleys of China, were all but unstoppable. These invasions were often devastating. The Golden Age of Islam was ended by the Mongol sack of Baghdad in 1258. India and China were subject to periodic invasions, and Russia spent a couple of centuries under the Mongol-Tatar yoke. Central and western Europe, logistically more distant from the Central Asian heartland, proved less vulnerable to these threats.

Geography contributed to important geopolitical differences. For most of their histories, China, India, and the Middle East were each unified under a single dominant power that expanded until it reached the surrounding mountains and deserts. [ citation needed ] In 1600 the Ottoman Empire controlled almost all the Middle East, [130] the Ming dynasty ruled China, [131] [132] and the Mughal Empire held sway over India. By contrast, Europe was almost always divided into a number of warring states. Pan-European empires, with the notable exception of the Roman Empire, tended to collapse soon after they arose. Another doubtless important geographic factor in the rise of Europe was the Mediterranean Sea, which, for millennia, had functioned as a maritime superhighway fostering the exchange of goods, people, ideas and inventions.

Nearly all the agricultural civilizations have been heavily constrained by their environments. Productivity remained low, and climatic changes easily instigated boom-and-bust cycles that brought about civilizations' rise and fall. By about 1500, however, there was a qualitative change in world history. Technological advance and the wealth generated by trade gradually brought about a widening of possibilities. [133]

Many have also argued that Europe's institutions allowed it to expand, that property rights and free-market economics were stronger than elsewhere due to an ideal of freedom peculiar to Europe. In recent years, however, scholars such as Kenneth Pomeranz have challenged this view. Europe's maritime expansion unsurprisingly—given the continent's geography—was largely the work of its Atlantic states: Portugal, Spain, England, France, and the Netherlands. Initially the Portuguese and Spanish Empires were the predominant conquerors and sources of influence, and their union resulted in the Iberian Union, the first global empire on which the "sun never set". Soon the more northern English, French and Dutch began to dominate the Atlantic. In a series of wars fought in the 17th and 18th centuries, culminating with the Napoleonic Wars, Britain emerged as the new world power.

Regional developments

Persia came under the rule of the Safavid Empire in 1501, succeeded by the Afsharid Empire in 1736, the Zand Empire in 1751, and the Qajar Empire in 1794. Areas to the north and east in Central Asia were held by Uzbeks and Pashtuns. The Ottoman Empire, after taking Constantinople in 1453, quickly gained control of the Middle East, the Balkans, and most of North Africa.

In Africa, this period saw a decline in many civilizations and an advancement in others. The Swahili Coast declined after coming under the Portuguese Empire and later the Omani Empire. In West Africa, the Songhai Empire fell to the Moroccans in 1591 when they invaded with guns. The Bono State gave birth to numerous Akan states in search of gold, such as Akwamu, Akyem, Fante, and Adanse. [134] The southern African Kingdom of Zimbabwe gave way to smaller kingdoms such as Mutapa, Butua, and Rozvi. Ethiopia suffered from the 1531 invasion by the neighbouring Muslim Adal Sultanate, and in 1769 entered the Zemene Mesafint (Age of Princes), during which the Emperor became a figurehead and the country was ruled by warlords, though the royal line would later recover under Emperor Tewodros II. The Ajuran Sultanate, in the Horn of Africa, began to decline in the 17th century and was succeeded by the Geledi Sultanate. Other civilizations in Africa advanced during this period. The Oyo Empire experienced its golden age, as did the Kingdom of Benin. The Ashanti Empire rose to power in what is modern-day Ghana in 1670. The Kingdom of Kongo also thrived during this period.

In China, the Ming gave way in 1644 to the Qing, the last Chinese imperial dynasty, which would rule until 1912. Japan experienced its Azuchi–Momoyama period (1568–1603), followed by the Edo period (1603–1868). The Korean Joseon dynasty (1392–1910) ruled throughout this period, successfully repelling 16th- and 17th-century invasions from Japan and China. Japan and China were significantly affected during this period by expanded maritime trade with Europe, particularly the Portuguese in Japan. During the Edo period, Japan would pursue isolationist policies, to eliminate foreign influences.

On the Indian subcontinent, the Delhi Sultanate and the Deccan sultanates would give way, beginning in the 16th century, to the Mughal Empire. [ citation needed ] Starting in the northwest, the Mughal Empire would by the late 17th century come to rule the entire subcontinent, [135] except for the southernmost Indian provinces, which would remain independent. Against the Muslim Mughal Empire, the Hindu Maratha Empire was founded on the west coast in 1674, gradually gaining territory—a majority of present-day India—from the Mughals over several decades, particularly in the Mughal–Maratha Wars (1681–1701). The Maratha Empire would in 1818 fall under the control of the British East India Company, with all former Maratha and Mughal authority devolving in 1858 to the British Raj.

In 1511 the Portuguese overthrew the Malacca Sultanate in present-day Malaysia and Indonesian Sumatra. The Portuguese held this important trading territory (and the valuable associated navigational strait) until overthrown by the Dutch in 1641. The Johor Sultanate, centred on the southern tip of the Malay Peninsula, became the dominant trading power in the region. European colonization expanded with the Dutch in the Netherlands East Indies, the Portuguese in East Timor, and the Spanish in the Philippines. Into the 19th century, European expansion would affect the whole of Southeast Asia, with the British in Myanmar and Malaysia and the French in Indochina. Only Thailand would successfully resist colonization.

The Pacific islands of Oceania would also be affected by European contact, starting with the circumnavigational voyage of Ferdinand Magellan, who landed on the Marianas and other islands in 1521. Also notable were the voyages (1642–44) of Abel Tasman to present-day Australia, New Zealand, and nearby islands, and the voyages (1768–1779) of Captain James Cook, who made the first recorded European contact with Hawaii. Britain would found its first colony in Australia in 1788.

In the Americas, the western European powers vigorously colonized the newly discovered continents, largely displacing the indigenous populations and destroying the advanced civilizations of the Aztecs and the Incas. Spain, Portugal, Britain, and France all made extensive territorial claims, and undertook large-scale settlement, including the importation of large numbers of African slaves. Portugal claimed Brazil. Spain claimed the rest of South America, Mesoamerica, and southern North America. Britain colonized the east coast of North America, and France colonized the central region of North America. Russia made incursions onto the northwest coast of North America, with a first colony in present-day Alaska in 1784, and the outpost of Fort Ross in present-day California in 1812. [136] In 1762, in the midst of the Seven Years' War, France secretly ceded most of its North American claims to Spain in the Treaty of Fontainebleau. Thirteen of the British colonies declared independence as the United States of America in 1776, ratified by the Treaty of Paris in 1783, ending the American Revolutionary War. Napoleon Bonaparte regained France's claims from Spain in 1800, but sold them to the United States in 1803 as the Louisiana Purchase.

In Russia, Ivan the Terrible was crowned in 1547 as the first Tsar of Russia, and by annexing the Turkic khanates in the east, transformed Russia into a regional power. The countries of western Europe, while expanding prodigiously through technological advancement and colonial conquest, competed with each other economically and militarily in a state of almost constant war. Often the wars had a religious dimension, either Catholic versus Protestant, or (primarily in eastern Europe) Christian versus Muslim. Wars of particular note include the Thirty Years' War, the War of the Spanish Succession, the Seven Years' War, and the French Revolutionary Wars. Napoleon came to power in France in 1799, an event foreshadowing the Napoleonic Wars of the early 19th century.

Late modern period

1750–1914

The Scientific Revolution changed humanity's understanding of the world and led to the Industrial Revolution, a major transformation of the world's economies. The Scientific Revolution in the 17th century had had little immediate effect on industrial technology; only in the second half of the 18th century did scientific advances begin to be applied substantially to practical invention. The Industrial Revolution began in Great Britain and used new modes of production—the factory, mass production, and mechanization—to manufacture a wide array of goods faster and using less labour than previously required. The Age of Enlightenment also led to the beginnings of modern democracy in the late-18th century American and French Revolutions. Democracy and republicanism would grow to have a profound effect on world events and on quality of life.

After Europeans had achieved influence and control over the Americas, imperial activities turned to the lands of Asia and Oceania. In the 19th century the European states had a social and technological advantage over Eastern lands. [ citation needed ] Britain gained control of the Indian subcontinent, Egypt, and the Malay Peninsula; the French took Indochina, while the Dutch cemented their control over the Dutch East Indies. The British also colonized Australia, New Zealand, and South Africa, with large numbers of British colonists emigrating to these colonies. Russia colonized large pre-agricultural areas of Siberia. In the late 19th century, the European powers divided the remaining areas of Africa. Within Europe, economic and military challenges created a system of nation states, and ethno-linguistic groupings began to identify themselves as distinctive nations with aspirations for cultural and political autonomy. This nationalism would become important to peoples across the world in the 20th century.

During the Second Industrial Revolution, the world economy became reliant on coal as a fuel, as new methods of transport, such as railways and steamships, effectively shrank the world. Meanwhile, industrial pollution and environmental damage, present since the discovery of fire and the beginning of civilization, accelerated drastically.

The advantages that Europe had developed by the mid-18th century were two: an entrepreneurial culture [137] and the wealth generated by the Atlantic trade (including the African slave trade). By the late 16th century, silver from the Americas accounted for much of the Spanish empire's wealth. [ citation needed ] The profits of the slave trade and of West Indian plantations amounted to 5% of the British economy at the time of the Industrial Revolution. [138] While some historians conclude that, in 1750, labour productivity in the most developed regions of China was still on a par with that of Europe's Atlantic economy, [139] other historians such as Angus Maddison hold that the per-capita productivity of western Europe had by the late Middle Ages surpassed that of all other regions. [140]

1914–1945

The 20th century opened with Europe at an apex of wealth and power, and with much of the world under its direct colonial control or its indirect domination. Much of the rest of the world was influenced by heavily Europeanized nations: the United States and Japan.

As the century unfolded, however, the global system dominated by rival powers was subjected to severe strains, and ultimately yielded to a more fluid structure of independent nations organized on Western models.

This transformation was catalyzed by wars of unparalleled scope and devastation. World War I led to the collapse of four empires – Austria-Hungary, the German Empire, the Ottoman Empire, and the Russian Empire – and weakened Great Britain and France.

In the war's aftermath, powerful ideologies rose to prominence. The Russian Revolution of 1917 created the first communist state, while the 1920s and 1930s saw militaristic fascist dictatorships gain control in Italy, Germany, Spain, and elsewhere.

Ongoing national rivalries, exacerbated by the economic turmoil of the Great Depression, helped precipitate World War II. The militaristic dictatorships of Europe and Japan pursued an ultimately doomed course of imperialist expansionism, in the course of which Nazi Germany (Germany under Adolf Hitler) orchestrated the genocide of six million Jews in the Holocaust, while Imperial Japan murdered millions of Chinese.

The World War II defeat of the Axis Powers opened the way for the advance of communism into Central Europe, Yugoslavia, Bulgaria, Romania, Albania, China, North Vietnam, and North Korea.

Contemporary history

1945–2000

When World War II ended in 1945, the United Nations was founded in the hope of preventing future wars, [141] as the League of Nations had been formed following World War I. [142] The war had left two countries, the United States and the Soviet Union, with principal power to influence international affairs. [143] Each was suspicious of the other and feared a global spread of the other's, respectively capitalist and communist, political-economic model. This led to the Cold War, a forty-five-year stand-off and arms race between the United States and its allies, on one hand, and the Soviet Union and its allies on the other. [144]

With the development of nuclear weapons during World War II, and with their subsequent proliferation, all of humanity was put at risk of nuclear war between the two superpowers, as demonstrated by many incidents, most prominently the October 1962 Cuban Missile Crisis. Such a war being viewed as impractical, the superpowers instead waged proxy wars in non-nuclear-armed Third World countries. [145] [146]

In China, Mao Zedong implemented industrialization and collectivization reforms as part of the Great Leap Forward (1958–1962), leading to the starvation deaths (1959–1961) of tens of millions of people.

Between 1969 and 1972, as part of the Cold War space race, twelve men landed on the Moon and safely returned to Earth. [e]

The Cold War ended peacefully in 1991 after the Pan-European Picnic, the subsequent fall of the Iron Curtain and the Berlin Wall, and the collapse of the Eastern Bloc and the Warsaw Pact. The Soviet Union fell apart, partly due to its inability to compete economically with the United States and Western Europe. However, the United States likewise began to show signs of slippage in its geopolitical influence, [148] [f] even as its private sector, now less inhibited by the claims of the public sector, increasingly sought private advantage to the prejudice of the public weal. [g] [h] [i]

In the early postwar decades, the colonies in Asia and Africa of the Belgian, British, Dutch, French, and other west European empires won their formal independence. [153] However, these newly independent countries often faced challenges in the form of neocolonialism, sociopolitical disarray, poverty, illiteracy, and endemic tropical diseases. [154] [j] [k]

Most Western European and Central European countries gradually formed a political and economic community, the European Union, which expanded eastward to include former Soviet-satellite countries. [157] [158] [159] The European Union's effectiveness was handicapped by the immaturity of its common economic and political institutions, [l] somewhat comparable to the inadequacy of United States institutions under the Articles of Confederation prior to the adoption of the U.S. Constitution that came into force in 1789. Asian, African, and South American countries followed suit and began taking tentative steps toward forming their own respective continental associations.

Cold War preparations to deter or to fight a third world war accelerated advances in technologies that, though conceptualized before World War II, had been implemented for that war's exigencies, such as jet aircraft, rocketry, and electronic computers. In the decades after World War II, these advances led to jet travel, artificial satellites with innumerable applications including global positioning systems (GPS), and the Internet—inventions that have revolutionized the movement of people, ideas, and information.

However, not all scientific and technological advances in the second half of the 20th century required an initial military impetus. That period also saw ground-breaking developments such as the discovery of the structure of DNA, [161] the consequent sequencing of the human genome, the worldwide eradication of smallpox, the discovery of plate tectonics, manned and unmanned exploration of space and of previously inaccessible parts of Earth, and foundational discoveries in physics concerning phenomena ranging from the smallest entities (particle physics) to the greatest (physical cosmology).

21st century

The 21st century has been marked by growing economic globalization and integration, with consequent increased risk to interlinked economies, as exemplified by the Great Recession of the late 2000s and early 2010s. [162] This period has also seen the expansion of communications with mobile phones and the Internet, which have caused fundamental societal changes in business, politics, and individuals' personal lives.

Worldwide competition for resources has risen due to growing populations and industrialization, especially in India, China, and Brazil. The increased demands are contributing to increased environmental degradation and to global warming.

International tensions were heightened in connection with the efforts of some nuclear-armed states to induce North Korea to give up its nuclear weapons, and to prevent Iran from developing nuclear weapons. [163]

In 2020, the COVID-19 pandemic became the first pandemic in the 21st century to substantially disrupt global trading and cause recessions in the global economy. [164]


GENETICS OF RESISTANCE

The appearance and dissemination of antibiotic-resistant pathogens have stimulated countless studies of the genetic aspects of the different phenomena associated with resistance development, such as gene pickup, heterologous expression, HGT, and mutation (29, 58, 149). The genetics of plasmids is not discussed in any detail here, nor are the interactions between plasmid-encoded and chromosomal resistances, except to say that early preconceptions about the stability, ubiquity, and host ranges of r genes and their vectors have largely proved unfounded. For example, acquisition of resistance has long been assumed to incur a serious energy cost to the microorganism, and indeed, many resistant mutants may be growth limited under laboratory conditions. As a result, it was considered that multidrug-resistant strains would be unstable and short-lived in the absence of selection (10). However, as frequently demonstrated, laboratory conditions (especially culture media) do not duplicate real-life circumstances; available evidence suggests that pathogens with multiple mutations and combinations of r genes evolve and survive successfully in vivo. Two recent studies of the development of multimutant, multidrug-resistant S. aureus and M. tuberculosis provide examples that overturn earlier beliefs. In the first study, isolates from a hospitalized patient treated with vancomycin were sampled at frequent intervals after hospital admission and analyzed by genome sequencing. In the steps to the development of the final (mortal) isolate, 35 mutations could be identified over the course of 3 months (96)! Similarly, it has been reported that genome sequencing of antibiotic-resistant strains of M. tuberculosis revealed 29 independent mutations in an MDR strain and 35 mutations in an XDR strain. The functions of these mutations are not understood; they could well be compensatory changes. Such studies emphasize the need for detailed systems biology analyses of resistance development in situ.

Resistance Gene Transmission

Essentially any of the accessory genetic elements found in bacteria are capable of acquiring r genes and promoting their transmission; the type of element involved varies with the genus of the pathogen. There are similarities but also clear differences between the Gram-positive and Gram-negative bacteria; nonetheless, plasmid-mediated transmission is far and away the most common mechanism of HGT (100). Surprisingly, bacteriophages carrying antibiotic r genes have rarely been identified in the environment or in hospital isolates of resistant bacteria; however, there is no question about the association of phages with the insertional mechanisms required for the formation of mobile resistance elements and with the functions of chromosomally associated r genes. They are frequently seen as phage “fingerprints” flanking genes encoding resistance or virulence on different vectors. It appears that such events are quite common in S. aureus (127).

Gene transmission by conjugation has been studied extensively in the laboratory and in microcosms approximating environmental conditions, and the frequencies of the transfer events often vary significantly. Experiments suggest that frequencies of conjugative transmission in nature are probably several orders of magnitude higher than those under laboratory conditions (129). It has been shown that transfer in the intestinal tracts of animals and humans occurs ad libitum (126); it's a bordello down there! Recent studies have demonstrated diverse antibiotic r genes in the human gut microbiome (128).

In the streptococci, meningococci, and related genera, the exchange of both virulence and pathogenicity genes is highly promiscuous; the principal mechanism for DNA traffic appears to be transformation (52, 70, 131). Finally, with respect to direct DNA acquisition in the environment, Acinetobacter spp. are naturally competent, and HGT is frequent (14); pathogenic strains typically carry large genomic islands (83, 108). Might Acinetobacter and related environmental genera play roles in the capture and passage of r genes from environment to clinic? Such processes surely involve multiple steps and intermediate bacterial strains, but it has been suggested that heterogeneous gene exchange occurs readily in networks of multihost interactions (48).

Horizontal gene transfer has occurred throughout evolutionary history, and one can consider two independent sets of events, largely differentiated by their time span and the strength of selection pressure. What happened during the evolution of bacteria and other microbes and organisms over several billions of years cannot be compared to the phenomenon of antibiotic resistance development and transfer over the last century. The contemporary selection pressure of antibiotic use and disposal is much more intense; selection is largely for survival in hostile environments rather than for traits providing fitness in slowly evolving populations.
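
To make the contrast concrete, the following toy calculation (an illustrative sketch only, not taken from the review) uses the standard deterministic haploid selection recursion p' = p(1 + s)/(1 + ps) to show how much faster a rare resistant lineage can reach majority when selection is intense than when it is weak; the starting frequency and selection coefficients are arbitrary assumptions.

def generations_to_majority(p0: float, s: float) -> int:
    """Generations until a resistant lineage starting at frequency p0
    exceeds 50% of the population, given selection coefficient s."""
    p, gens = p0, 0
    while p < 0.5:
        p = p * (1 + s) / (1 + p * s)  # discrete-time haploid selection step
        gens += 1
    return gens

if __name__ == "__main__":
    p0 = 1e-6  # a rare resistant variant
    # Assumed values: weak background selection vs. intense antibiotic selection
    for s in (0.01, 0.1, 1.0):
        print(f"s = {s}: resistant majority after ~{generations_to_majority(p0, s)} generations")

Under these assumed values, the weakly selected variant needs on the order of a thousand generations to predominate, while the intensely selected one needs only a few dozen, which is the sense in which anthropogenic selection compresses the timescale of resistance evolution.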

Consistent with the concept of the recent evolution of antibiotic resistance plasmids and multiresistant strains, studies with collections of bacterial pathogens isolated before the “antibiotic era” showed that plasmids were common but r genes were rare (38). Genome sequence analyses of environmental microbes revealed that they are replete with plasmids—mostly large and often carrying multigene pathways responsible for the biodegradation of xenobiotic molecules, such as the polychlorinated phenolic compounds that have been used and distributed widely since the days of the industrial revolution. In summary, what is occurring in our lifetimes is an evolutionary process intensified by anthropogenic influences rather than the slower, random course of natural evolution. The processes of gene acquisition, transfer, modification, and expression that were already in place are expanding and accelerating in the modern biosphere.

Laboratory studies have characterized numerous genetic mechanisms implicated in the evolution of antibiotic-resistant populations; the roles of plasmids, phages, and transformation are well established, but other processes may exist. For example, bacterial cell-cell fusion might be favored in complex mixed microbial communities, such as those found in biofilms (61). The efficiency of the processes is not critical; selection and the efficiency of heterologous gene expression are likely the most important constraints. However, low-level expression of a potential r gene in a new host may provide partial protection from an antagonist (5); subsequent gene tailoring by mutation with selection would lead to improved expression. Promoter function under environmental conditions is not well understood (32); it appears that promoters of Gram-positive origin can function well in Gram-negative bacteria but that the converse is not often true. Does this imply a favored direction for bacterial gene transfer, as Brisson-Noël et al. have suggested (24)? During therapeutic use, the exposure of bacterial pathogens to high concentrations of antibiotics for extended periods creates severe selection pressure and leads to higher levels of resistance. The pathway from an environmental gene to a clinical r gene is not known, but it obviously occurs with some facility. Knowledge of the intermediate steps in this important process would be revealing—how many steps are there from source to clinic?

In the laboratory, HGT occurs under a variety of conditions and can be enhanced by physical means that facilitate DNA exchange, for example, physical proximity by immobilization on a filter or agar surface, and there are likely numerous other environmental factors that promote gene uptake. It is worth noting that antibiotics, especially at subinhibitory concentrations, may facilitate the process of antibiotic resistance development (41). For example, they have been shown to enhance gene transfer and recombination (35), in part through activating the SOS system (66, 67); in addition, antimicrobials have been shown to induce phage production from lysogens. Such factors may play important roles in enhancing the frequency of gene exchange in environments such as farms, hospitals, and sewage systems, which provide ideal incubation conditions for r gene acquisition.

On the positive side, it should be noted that studies of antibiotic resistance mechanisms and their associated gene transfer mechanisms in pathogens have played seminal roles in the development of recombinant DNA methods, providing the experimental foundation for the modern biotechnology industry (72). The use of restriction enzymes and plasmid cloning techniques completely transformed biology. The subsequent extension of bacterial recombinant DNA methods to plant, animal, and human genetic manipulations required only minor technical modifications, such as the adoption of bifunctional antibiotics and their cognate r genes as selection markers in both pro- and eukaryotes. The applications are truly universal, with increasingly evident benefits to all aspects of pure and applied biology.

Integrons

Integrons are unusual gene acquisition elements that were first identified and characterized by Stokes and Hall in 1987 (132); retrospective analyses have indicated that they were associated with the r genes present in the Shigella isolates that characterized the first wave of transferable plasmid-mediated resistance in Japan in the 1950s (83). The “Japanese” plasmids were studied for some 30 years before the integron structure was identified, although their resistance determinant components were recognized early as composite elements of the plasmids. Figure 5 shows the structure of an integron, its essential functions, and its resistance determinants. Integrons are not themselves mobile genetic elements but can become so in association with a variety of transfer and insertion functions (74). They are critical intermediates in the pickup and expression of r genes (the upstream promoter is highly efficient) and are the source of the majority of the transferable antibiotic r genes found in gammaproteobacteria (60). Recently, it was demonstrated that the process of integron gene capture and expression is activated by the SOS system (37, 66, 67). In a broader view, increasing evidence suggests that components such as integrons and their gene cassettes have played important roles in genome evolution and fluidity within the bacterial kingdom (21, 93).

Figure 5. Integron structure and gene capture mechanism. The figure indicates the basic elements of integrons as found in bacterial genomes. The structure consists of an integrase gene (int), with the Pint and Pc promoters at the 3′ end of the gene and an associated cassette attachment or insertion site (attI). The integrase catalyzes the sequential recombination of circularized gene cassettes into the distal attachment site to create an operon-like arrangement of r genes (ant1r, ant2r, and so on) transcribed from the strong Pc promoter (132). Three classes of integrons have been identified; they differ in their integrase genes.

There have been many excellent reviews on the topic of integrons. The complete three-dimensional structure of an integrase has been determined, and the mechanism of r gene cassette acquisition is now well understood (22). Well over 100 cassettes have been identified, covering all the major classes of antibiotics (118). There is functional and genomic evidence that these elements, long thought to be exclusive to Gram-negative bacteria, are present in Gram-positive bacteria as well (97); however, a general role for integrons in antibiotic resistance development in Gram-positive bacteria remains to be established. Most striking is the discovery of very large numbers of integron cassettes in natural environments that do not code for (known) resistance characters. These findings came from high-throughput sequencing of soil metagenomic DNAs and from PCR analyses, using integron-integrase-specific primers, of DNA samples isolated from diverse soils, sediments, and other natural environments (62). Metagenomic analyses of bacterial populations from hospitals, agricultural sites, wastewater treatment plants, and similar environmental sources have revealed many complete integrons with r gene cassettes, underlining the universal importance of integron-mediated gene pickup in resistance evolution. The origins of integrons are not known, although the sequence similarity between the integrases and bacteriophage recombinases suggests an evolutionary relationship.

Finally, it should be noted that the evolution of different types of antibiotic resistance elements in different clinical and natural environments probably involves a variety of integrated genetic processes. Other acquisition and transfer mechanisms have been identified, and the combinatorial nature of the process of resistance development should not be underestimated (46, 139, 148).


Abstract

The human population displays wide variation in demographic history, ancestry, content of DNA derived from archaic hominins or ancient populations, adaptation, traits, copy number variation, drug response, and more. These polymorphisms are of broad interest to population geneticists, forensic investigators, and medical professionals. Historically, much of that knowledge was gained from population survey projects. Although many commercial arrays exist for genome-wide single-nucleotide polymorphism genotyping, their design specifications are limited and they do not allow a full exploration of biodiversity. We therefore aimed to design the Diversity of REcent and Ancient huMan (DREAM) array, an all-inclusive microarray that allows both identification of known associations and exploration of standing questions in genetic anthropology, forensics, and personalized medicine. DREAM includes probes to interrogate ancestry-informative markers obtained from over 450 human populations, over 200 ancient genomes, and 10 archaic hominins. DREAM can identify 94% and 61% of all known Y and mitochondrial haplogroups, respectively, and was vetted to avoid interrogation of clinically relevant markers. To demonstrate its capabilities, we compared its FST distributions with those of the 1000 Genomes Project and commercial arrays. Although all arrays yielded similarly shaped (inverse-J) FST distributions, DREAM’s autosomal and X-chromosomal distributions had the highest mean FST, attesting to its ability to discern subpopulations. DREAM’s performance is further illustrated in biogeographical, identity-by-descent, and copy number variation analyses. In summary, with approximately 800,000 markers spanning nearly 2,000 genes, DREAM is a useful tool for genetic anthropology, forensic, and personalized medicine studies.
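
To make the FST comparison described above concrete, here is a minimal Python sketch of one standard way to estimate per-SNP FST from allele frequencies in two populations (Hudson's estimator) and to summarize a marker panel by its mean value. This is not the DREAM analysis pipeline; the function name, sample sizes, and allele frequencies below are invented for illustration.

import numpy as np

def hudson_fst(p1, p2, n1, n2):
    """Per-SNP Hudson estimator of FST from allele frequencies.
    p1, p2: alternate-allele frequencies in populations 1 and 2.
    n1, n2: haploid sample sizes, used to correct within-population diversity.
    """
    p1 = np.asarray(p1, dtype=float)
    p2 = np.asarray(p2, dtype=float)
    num = (p1 - p2) ** 2 - p1 * (1 - p1) / (n1 - 1) - p2 * (1 - p2) / (n2 - 1)
    den = p1 * (1 - p2) + p2 * (1 - p1)
    return num / den

# Invented allele frequencies at five hypothetical bi-allelic markers.
p_pop1 = [0.10, 0.50, 0.80, 0.30, 0.95]
p_pop2 = [0.15, 0.45, 0.20, 0.35, 0.90]

fst = hudson_fst(p_pop1, p_pop2, n1=200, n2=200)
print("per-SNP FST:", np.round(fst, 3))
print("mean per-SNP FST:", round(float(np.mean(fst)), 3))

A panel enriched for ancestry-informative markers, as DREAM is, would be expected to show a higher mean FST than a panel of randomly chosen SNPs, which is the pattern the abstract reports.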


The 20 Biggest Advances in Tech Over the Last 20 Years

Another decade is over. With the 2020s upon us, now is the perfect time to reflect on the immense technological advancements that humanity has made since the dawn of the new millennium.

This article explores, in no particular order, 20 of the most significant technological advancements we have made in the last 20 years.

  1. Smartphones: Mobile phones existed before the 21st century. However, in the past 20 years, their capabilities have improved enormously. In June 2007, Apple released the iPhone, the first touchscreen smartphone with mass-market appeal, and many other companies took inspiration from it. As a consequence, smartphones have become an integral part of day-to-day life for billions of people around the world. Today we take pictures, navigate without paper maps, order food, play games, message friends, and listen to music, all on our smartphones. Oh, and you can also use them to call people.
  2. Flash Drives: First sold by IBM in 2000, the USB flash drive allows you to easily store files, photos, or videos, with a storage capacity so large that it would have been unfathomable just a few decades ago. Today, a 128GB flash drive, available for less than $20 on Amazon, has more than 80,000 times the storage capacity of a 1.44MB floppy disk, which was the most popular type of storage disk in the 1990s.
  3. Skype: Launched in August 2003, Skype transformed the way that people communicate across borders. Before Skype, calling friends or family abroad cost huge amounts of money. Today, speaking to people on the other side of the world, or even video calling with them, is practically free.
  4. Google: Google’s search engine actually premiered in the late 1990s, but the company went public in 2004, leading to its colossal growth. Google revolutionized the way people search for information online; every hour, it handles more than 228 million searches. Today Google is part of Alphabet Inc., a company that offers dozens of services, such as translation, Gmail, Docs, the Chrome web browser, and more.
  5. Google Maps: In February 2005, Google launched its mapping service, which changed the way that many people travel. With the app available on virtually all smartphones, Google Maps has made getting lost virtually impossible. It’s easy to forget that just two decades ago, most travel involved extensive route planning, with paper maps nearly always necessary when venturing to unfamiliar places.
  6. Human Genome Project: In April 2003, scientists successfully sequenced the entire human genome. Through the sequencing of our roughly 23,000 genes, the project shed light on many different scientific fields, including disease treatment, human migration, evolution, and molecular medicine.
  7. YouTube: In May 2005, the first video was uploaded to what today is the world’s most popular video-sharing website. From Harvard University lectures on quantum mechanics and favorite T.V. episodes to “how-to” tutorials and funny cat videos, billions of pieces of content can be streamed on YouTube for free.
  8. Graphene: In 2004, researchers at the University of Manchester became the first scientists to isolate graphene. Graphene is an atom-thin carbon allotrope that can be isolated from graphite, the soft, flaky material used in pencil lead. Although humans have been using graphite since the Neolithic era, isolating graphene was previously impossible. With its unique conductive, transparent, and flexible properties, graphene has enormous potential to create more efficient solar panels, water filtration systems, and even defenses against mosquitos.
  9. Bluetooth: While Bluetooth technology was officially unveiled in 1999, it was only in the early 2000s that manufacturers began to adopt Bluetooth for use in computers and mobile phones. Today, Bluetooth is featured in a wide range of devices and has become an integral part of many people’s day-to-day lives.
  10. Facebook: First developed in 2004, Facebook was not the first social media website. Due to its ease of use, however, Facebook quickly overtook existing social networking sites like Friendster and Myspace. With 2.41 billion monthly active users (almost a third of the world’s population), Facebook has transformed the way billions of people share news and personal experiences with one another.
  11. Curiosity, the Mars Rover: First launched in November 2011, Curiosity is looking for signs of habitability on Mars. In 2014, the rover made one of the biggest space discoveries of the millennium when it found evidence of water beneath the surface of the red planet. Curiosity’s work could help humans become an interplanetary species in just a few decades’ time.
  12. Electric Cars: Although electric cars are not a 21st-century invention, it wasn’t until the 2000s that these vehicles were built on a large scale. Commercially available electric cars, such as the Tesla Roadster or the Nissan Leaf, can be plugged into any electrical socket to charge. They do not require fossil fuels to run. Although still considered a fad by some, electric cars are becoming ever more popular, with more than 1.5 million units sold in 2018.
  13. Driverless Cars: In August 2012, Google announced that its automated vehicles had completed over 300,000 miles of driving, accident-free. Although Google’s self-driving cars are the best known at the moment, almost all car manufacturers have created or are planning to develop automated cars. Currently, these cars are in the testing stage, but provided that the technology is not hindered by overzealous regulation, automated cars will likely be commercially available in the next few years.
  14. The Large Hadron Collider (LHC): With its first test run in 2008, the LHC became the world’s largest and most powerful particle accelerator, as well as the world’s largest single machine. The LHC allows scientists to run experiments on some of the most complex theories in physics. Its most important finding so far is the Higgs boson, whose discovery lends strong support to the Standard Model of particle physics, which describes most of the fundamental forces in the universe.
  15. AbioCor Artificial Heart: In 2001, the AbioCor artificial heart, created by the Massachusetts-based company Abiomed, became the first self-contained artificial heart to successfully replace a human heart in a transplant procedure. Because it carries its own power supply, it does not need the intrusive wires of earlier artificial hearts, which heightened the likelihood of infection and death.
  16. 3D Printing: Although 3D printing as we know it today dates back to the 1980s, the development of cheaper manufacturing methods and open-source software contributed to a 3D printing revolution over the last two decades. Today, 3D printers are being used to print spare parts, whole houses, medicines, bionic limbs, and even entire human organs.
  17. Amazon Kindle: In November 2007, Amazon released the Kindle. Since then, a plethora of e-readers has changed the way millions of people read. Thanks to e-readers, people don’t need to carry around heavy stacks of books, and independent authors can get their books to an audience of millions of people without going through a publisher.
  18. Stem Cell Research: Previously the stuff of science fiction, stem cells (i.e., basic cells that can become almost any type of cell in the body) are being used to grow, among other things, kidney, lung, brain, and heart tissue. This technology will likely save millions of lives in the coming decades as it means that patients will no longer have to wait for donor organs or take harsh medicines to treat their ailments.
  19. Multi-Use Rockets: In November and December of 2015, two separate private companies, Blue Origin and SpaceX, successfully landed reusable rockets. This development greatly reduces the cost of getting to space and brings commercial space travel one step closer to reality.
  20. Gene Editing: In 2012, researchers from Harvard University, the University of California at Berkeley, and the Broad Institute each independently discovered that a bacterial immune system known as CRISPR could be used as a gene-editing tool to change an organism’s DNA. By cutting out pieces of harmful DNA, gene-editing technology will likely change the future of medicine and could eventually eradicate some major diseases.

However you choose to celebrate this new year, take a moment to think about the immense technological advancements of the last 20 years, and remember that despite what you may read in the newspapers or see on TV, humans continue to reach new heights of prosperity.



1. Evolution of life on Earth

4. Agricultural farming and settlements

8. Technological Revolution

9. Sustainability Revolution

Ages: following the big bang 13.8 billion years ago, time passed two-thirds of the way to the present before the formation of the Sun 4.57 billion years ago. Rescaled to a calendar year, starting with the big bang at 00:00:00 on 1 January, the Sun forms on 1 September, the Earth on 2 September, earliest signs of life appear on 13 September, earliest true mammals on 26 December, and humans just 2 hours before year’s end. For a year that starts with the earliest true mammals, the dinosaurs go extinct on 17 August, earliest primates appear on 9 September, and humans at dawn on 25 December. For a year that starts with the earliest humans, our own species appears on 19 November, the first built constructions on 8 December, and agricultural farming begins at midday on 29 December.
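
The rescaling above is simple proportion: an event that happened A years ago, within a span of S years ending at the present, falls a fraction (S - A)/S of the way through the compressed 365-day year. The short Python sketch below reproduces several of the quoted dates for the year that starts at the big bang; the ages assumed for the later events (earliest life, earliest true mammals, humans) are approximate round numbers chosen here for illustration rather than values taken from the source.

from datetime import datetime, timedelta

def calendar_date(age_years, span_years, year=2021):
    """Map an event 'age_years' before present onto a 365-day calendar year
    that begins at the start of 'span_years' and ends at the present."""
    fraction_elapsed = (span_years - age_years) / span_years
    return datetime(year, 1, 1) + timedelta(days=365 * fraction_elapsed)

SPAN = 13.8e9  # years since the big bang
events = {
    "Sun forms": 4.57e9,              # age given in the text above
    "Earth forms": 4.54e9,            # assumed approximate age
    "earliest signs of life": 4.1e9,  # assumed approximate age
    "earliest true mammals": 225e6,   # assumed approximate age
    "humans (genus Homo)": 3e6,       # assumed approximate age
}
for name, age in events.items():
    print(f"{name}: {calendar_date(age, SPAN):%d %B, %H:%M}")

Run as-is, this prints dates within about a day of those quoted (early September for the Sun and Earth, mid-September for the earliest life, 26 December for the earliest true mammals, and late evening on 31 December for humans); the small residual differences come from the rounding of the ages.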

Quantities: 1 thousand = 10³; 1 million = 10⁶; 1 billion = 10⁹; 1 trillion = 10¹².

Units: metre (m); kilometre (km); hectare (ha); kilogramme (kg); kilowatt-hour (kWh).

Distances: 10⁹ nanometres in 1 m; 1,000 m in 1 km; 9.46 trillion km in 1 light-year. For example: 0.1-nanometre diameter of a hydrogen atom; 40,000-km circumference of Earth; 150 million km from Earth to the Sun; 300,000 km travelled by light in 1 second, and almost 10 trillion km in 1 year; 27 thousand light-years from Earth to the Galactic Center of the Milky Way; 46.5 billion light-years from Earth to the edge of the observable universe.

Areas: 100 × 100 m or 10,000 m² in 1 ha; 100 ha in 1 km². For example: 3 million ha (30,000 km²) area of Belgium; 4 billion ha (40 million km²) of livestock grazing on Earth; 14.9 billion ha (149 million km²) of global land area.

Volumes: 1 billion m³ in 1 km³; 1 trillion m³ in 1,000 km³. For example: 1 billion grains in 1 m³ of sand; 2.5 trillion m³ (2,500 km³) of water in Lake Victoria.

Masses: 1,000 g in 1 kg; 1,000 kg in 1 tonne. For example: 100-tonne mass of a blue whale; 500 million tonnes of global human biomass.

Power and Energy: 1 watt of power uses 1 joule of energy per second; about 740 watts in 1 horsepower; 3,600 kilojoules (kJ) or 860 kilocalories (kcal) in 1 kWh of energy, sustaining 1,000 watts for 1 hour. For example: a 100-watt incandescent light bulb illuminates a room; 80 watts sustain human basal metabolic rate, using 6,900 kJ or 1,650 kcal or 1.92 kWh of energy per day; rice and maize have an energy value per kg of 15,280 kJ or 3,650 kcal or 4.25 kWh, wheat has nine-tenths this energy, and beef has two-thirds; crude oil and natural gas provide a heat value per kg of about 45,000 kJ or 10,750 kcal or 12.5 kWh, coal gives nearly half this heat, and firewood a third.
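
As a quick arithmetic check of some of the conversions quoted above, the short Python sketch below verifies the kilowatt-hour, basal-metabolic-rate, and light-year figures. The thermochemical calorie (4.184 J) and a 365.25-day year are assumptions made here; the printed values round to the figures given in the text.

# Sanity-check a few of the rounded conversions quoted above.
SECONDS_PER_HOUR = 3_600
KJ_PER_KWH = 3_600              # 1,000 W sustained for 1 hour = 3,600 kJ
KCAL_PER_KJ = 1 / 4.184         # thermochemical calorie assumed

# 1 kWh in kilojoules and kilocalories (~3,600 kJ, ~860 kcal).
print("1 kWh =", KJ_PER_KWH, "kJ =", round(KJ_PER_KWH * KCAL_PER_KJ), "kcal")

# 80 W of basal metabolism over one day (~6,900 kJ, ~1,650 kcal, ~1.92 kWh).
kj_per_day = 80 * 24 * SECONDS_PER_HOUR / 1_000   # W x s -> J, then J -> kJ
print("80 W for a day =", round(kj_per_day), "kJ =",
      round(kj_per_day * KCAL_PER_KJ), "kcal =",
      round(kj_per_day / KJ_PER_KWH, 2), "kWh")

# One light-year in kilometres from the speed of light (~9.46 trillion km).
c_km_per_s = 299_792.458
ly_km = c_km_per_s * SECONDS_PER_HOUR * 24 * 365.25
print("1 light-year =", round(ly_km / 1e12, 2), "trillion km")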

Inspiration: “Das ewig Unbegreifliche an der Welt ist ihre Begreiflichkeit [The eternal mystery of the world is its comprehensibility].” Albert Einstein (1936).

C. Patrick Doncaster, 13 June 2021, one of the then 7,769,092,270 (rising by 151 per minute, 79 million per year)

