
Types of stem cells and their current uses | Europe’s stem …

October 26th, 2016 1:42 am

Types of stem cells

Not all stem cells come from an early embryo. In fact, we have stem cells in our bodies all our lives. One way to think about stem cells is to divide them into three categories: embryonic stem cells, tissue (adult) stem cells and induced pluripotent stem cells (iPSCs).

You can read in detail about the properties of these different types of stem cells and current research work in our other fact sheets. Here, we compare the progress made towards therapies for patients using different stem cell types, and the challenges or limitations that still need to be addressed.

Embryonic stem cells (ESCs) have unlimited potential to produce specialised cells of the body, which suggests enormous possibilities for disease research and for providing new therapies. Human ESCs were first grown in the lab in 1998. Recently, human ESCs that meet the strict quality requirements for use in patients have been produced. These clinical-grade human ESCs have been approved for use in a very small number of early clinical trials. One example is a clinical trial carried out by The London Project to Cure Blindness, using ESCs to produce a particular type of eye cell for treatment of patients with age-related macular degeneration. The biotechnology company ACT is also using human ESCs to make cells for patients with an eye disease: Stargardt's macular dystrophy.

Current challenges facing ESC research include ethical considerations and the need to ensure that ESCs fully differentiate into the required specialised cells before transplantation into patients. If the initial clinical trials are successful in terms of safety and patient benefit, ESC research may soon begin to deliver its first clinical applications.

Many tissues in the human body are maintained and repaired throughout life by stem cells. These tissue stem cells are very different from embryonic stem cells.

Blood and skin stem cells: therapy pioneers

Stem cell therapy has been in routine use since the 1970s! Bone marrow transplants are able to replace a patient's diseased blood system for life, thanks to the properties of blood stem cells. Many thousands of patients benefit from this kind of treatment every year, although some do suffer from complications: the donor's immune cells sometimes attack the patient's tissues (graft-versus-host disease, or GVHD), and there is a risk of infection during the treatment because the patient's own bone marrow cells must be killed with chemotherapy before the transplant can take place.

Skin stem cells have been used since the 1980s to grow sheets of new skin in the lab for severe burn patients. However, the new skin has no hair follicles, sweat glands or sebaceous (oil) glands, so the technique is far from perfect and further research is needed to improve it. Currently, the technique is mainly used to save the lives of patients who have third degree burns over very large areas of their bodies and is only carried out in a few clinical centres.

Cord blood stem cells

Cord blood stem cells can be harvested from the umbilical cord of a baby after birth. The cells can be frozen (cryopreserved) in cell banks and are currently used to treat children with cancerous blood disorders such as leukaemia, as well as genetic blood diseases like Fanconi anaemia. Treatment of adults has so far been more challenging, but adults have been successfully treated with double cord transplants. The most commonly held view is that success in adults is restricted by the number of cells that can be obtained from one umbilical cord, but immune response may also play a role. One advantage of cord blood transplants is that they appear to be less likely than conventional bone marrow transplants to be rejected by the immune system, or to result in a reaction such as graft-versus-host disease. Nevertheless, cord blood must still be matched to the patient to be successful.

There are limitations to the types of disease that can be treated: cord blood stem cells can only be used to make new blood cells for blood disease therapies. Although some studies have suggested cord blood may contain stem cells that can produce other types of specialised cells not related to the blood, none of this research has yet been widely reproduced and confirmed. No therapies for non-blood-related diseases have yet been developed using blood stem cells from either cord blood or the adult bone marrow.

Mesenchymal stem cells

Mesenchymal stem cells (MSCs) are found in the bone marrow and are responsible for bone and cartilage repair. They also produce fat cells. Early research suggested that MSCs could differentiate into many other types of cells, but it is now clear that this is not the case. MSCs, like all tissue stem cells, are not pluripotent but multipotent: they can make a limited number of types of cells, but not all the cell types of the body. Claims have also been made that MSCs can be obtained from a wide variety of tissues in addition to bone marrow. These claims have not been confirmed, and scientists are still debating the exact nature of cells obtained from these other tissues.

No treatments using mesenchymal stem cells are yet proven. Some clinical trials are investigating the safety and effectiveness of MSC treatments for repairing bone or cartilage. Other trials are investigating whether MSCs might help repair blood vessel damage linked to heart attacks or diseases such as critical limb ischaemia, but it is not yet clear whether these treatments will be effective. MSCs do not themselves produce blood vessel cells but might support other cells to repair damage. Indeed MSCs appear to play a crucial role in supporting blood stem cells.

Several claims have been made that MSCs can avoid detection by the immune system and that MSCs taken from one person can be transplanted into another with little or no risk of rejection by the body. The results of other studies have not supported these claims. It has also been suggested that MSCs may be able to affect immune responses in the body to reduce inflammation and help treat transplant rejection or autoimmune diseases. Again, this has yet to be conclusively proven but is an area of ongoing investigation.

Stem cells in the eye

Clinical studies in patients have shown that tissue stem cells taken from an area of the eye called the limbus can be used to repair damage to the cornea, the transparent layer at the front of the eye. If the cornea is severely damaged, for example by a chemical burn, limbal stem cells can be taken from the patient, multiplied in the lab and transplanted back onto the patient's damaged eye(s) to restore sight. However, this can only help patients who have some undamaged limbal stem cells remaining in one of their eyes. The treatment has been shown to be safe and effective in early-stage trials. Further studies with larger numbers of patients must now be carried out before this therapy can be approved by regulatory authorities for widespread use in Europe.

A relatively recent breakthrough in stem cell research is the discovery that specialised adult cells can be reprogrammed into cells that behave like embryonic stem cells, termed induced pluripotent stem cells (iPSCs). The generation of iPSCs has huge implications for disease research and drug development. For example, researchers have generated brain cells from iPSCs made from skin samples belonging to patients with neurological disorders such as Down's syndrome or Parkinson's disease. These lab-grown brain cells show signs of the patients' diseases. This has implications for understanding how the diseases actually happen (researchers can watch the process in a dish) and for searching for and testing new drugs. Such studies give a taste of the wide range of disease research being carried out around the world using iPSCs.

The discovery of iPSCs also raised hopes that cells could be made from a patient's own skin in order to treat their disease, avoiding the risk of immune rejection. However, use of iPSCs in cell therapy is theoretical at the moment. The technology is very new and the reprogramming process is not yet well understood. Scientists need to find ways to produce iPSCs safely: current techniques involve genetic modification, which can sometimes result in the cells forming tumours. The cells must also be shown to completely and reproducibly differentiate into the required types of specialised cells to meet standards suitable for use in patients.

Stem cells are important tools for disease research and offer great potential for use in the clinic. Some adult stem cell sources are currently used for therapy, although they have limitations. The first clinical trials using cells made from embryonic stem cells are just beginning. Meanwhile, induced pluripotent stem cells are already of great use in research, but a lot of work is needed before they can be considered for use in the clinic. An additional avenue of current research is transdifferentiation: converting one type of specialised cell directly into another.

All these different research approaches are important if stem cell research is to achieve its potential for delivering therapies for many debilitating diseases. The table below gives a brief overview of the different types of stem cells and their uses. You can also download this table as a pdf.


Animal Biotechnology | Bioscience Topics | About Bioscience

October 25th, 2016 10:40 am

Related Links

http://www.bbsrc.ac.uk

The Biotechnology and Biological Sciences Research Council (BBSRC) is the United Kingdom's principal funder of basic and strategic biological research. To deliver its mission, the BBSRC supports research and training in universities and research centers and promotes knowledge transfer from research to applications in business, industry and policy, and public engagement in the biosciences. The site contains extensive articles on the ethical and social issues involved in animal biotechnology.

The Department of Agriculture (USDA) provides leadership on food, agriculture, natural resources and related issues through public policy, the best available science and efficient management. The National Institute of Food and Agriculture is part of the USDA; its site contains information about the science behind animal biotechnology and a glossary of terms. Related topics also are searchable, including animal breeding, genetics and many others.

The Pew Initiative on Food and Biotechnology is an independent, objective source of information on agricultural biotechnology. Funded by a grant from the Pew Charitable Trusts to the University of Richmond, it advocates neither for nor against agricultural biotechnology. Instead, the initiative is committed to providing information and encouraging dialogue so consumers and policy-makers can make their own informed decisions.

Animal biotechnology is the use of science and engineering to modify living organisms. The goal is to make products, to improve animals and to develop microorganisms for specific agricultural uses.

Examples of animal biotechnology include creating transgenic animals (animals with one or more genes introduced by human intervention), using gene knock out technology to make animals with a specific inactivated gene and producing nearly identical animals by somatic cell nuclear transfer (or cloning).

The animal biotechnology in use today is built on a long history. Some of the first biotechnology in use includes traditional breeding techniques that date back to 5000 B.C.E. Such techniques include crossing diverse strains of animals (known as hybridizing) to produce greater genetic variety. The offspring from these crosses then are bred selectively to produce the greatest number of desirable traits. For example, female horses have been bred with male donkeys to produce mules, and male horses have been bred with female donkeys to produce hinnies, for use as work animals, for the past 3,000 years. This method continues to be used today.

The modern era of biotechnology began in 1953, when American biochemist James Watson and British biophysicist Francis Crick presented their double-helix model of DNA. That was followed by Swiss microbiologist Werner Arber's discovery in the 1960s of special enzymes, called restriction enzymes, in bacteria. These enzymes cut the DNA strands of any organism at precise points. In 1973, American geneticist Stanley Cohen and American biochemist Herbert Boyer removed a specific gene from one bacterium and inserted it into another using restriction enzymes. That event marked the beginning of recombinant DNA technology, or genetic engineering. In 1977, genes from other organisms were transferred to bacteria, an achievement that led eventually to the first transfer of a human gene.

Animal biotechnology in use today is based on the science of genetic engineering. Under the umbrella of genetic engineering exist other technologies, such as transgenics and cloning, that also are used in animal biotechnology.

Transgenics (also known as recombinant DNA technology) is the transfer of a specific gene from one organism to another. Gene splicing is used to introduce one or more genes of an organism into a second organism. A transgenic animal is created once the second organism incorporates the new DNA into its own genetic material.

In gene splicing, DNA cannot be transferred directly from its original organism, the donor, to the recipient organism, or host. Instead, the donor DNA must be cut and pasted, or recombined, into a compatible fragment of DNA from a vector, an organism that can carry the donor DNA into the host. The host organism often is a rapidly multiplying microorganism such as a harmless bacterium, which serves as a factory where the recombined DNA can be duplicated in large quantities. The subsequently produced protein then can be removed from the host and used as a genetically engineered product in humans, other animals, plants, bacteria or viruses. The donor DNA can also be introduced directly into an organism, by techniques such as injection through the cell walls of plants or into the fertilized egg of an animal.
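To make the cut-and-paste idea concrete, here is a minimal Python sketch that treats DNA as plain text and mimics what a restriction enzyme and the subsequent ligation step do. The EcoRI recognition site (GAATTC) is real, but the vector and donor sequences and the function names are invented purely for illustration; actual gene splicing is a biochemical process, not a string operation.

```python
# Toy model of recombinant DNA "cut and paste" with DNA written as strings.
# EcoRI really does recognise GAATTC and cuts after the first base (G / AATTC);
# the vector and donor sequences below are hypothetical examples.

ECORI_SITE = "GAATTC"   # recognition sequence of the EcoRI restriction enzyme
CUT_OFFSET = 1          # EcoRI cuts between G and AATTC

def cut(dna):
    """Cut a DNA string at its first EcoRI site and return the two fragments."""
    pos = dna.index(ECORI_SITE) + CUT_OFFSET
    return dna[:pos], dna[pos:]

def splice(vector, donor_gene):
    """Open the vector at its EcoRI site and join the donor gene into the gap (ligation)."""
    left, right = cut(vector)
    return left + donor_gene + right

vector = "ATGCGAATTCGGTA"   # hypothetical plasmid-like carrier with one EcoRI site
donor = "AAATTTGGGCCC"      # hypothetical gene fragment to be transferred

print(splice(vector, donor))  # -> ATGCGAAATTTGGGCCCAATTCGGTA
```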

This transfer of genes alters the characteristics of the organism by changing its protein makeup. Proteins, including enzymes and hormones, perform many vital functions in organisms. Individual genes direct an animal's characteristics through the production of proteins.
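The gene-to-protein step can be sketched the same way: the cell reads DNA three bases (one codon) at a time, and each codon specifies an amino acid. The few codon assignments listed below are real, but the table is deliberately tiny (the full genetic code has 64 codons) and the example sequence is hypothetical; the sketch only illustrates why changing the DNA changes the protein.

```python
# Toy illustration of how a gene's DNA sequence determines a protein.
# Only a handful of the 64 real codons are listed here.
CODON_TABLE = {
    "ATG": "Met",   # methionine, also the usual start codon
    "AAA": "Lys",   # lysine
    "TGG": "Trp",   # tryptophan
    "TTT": "Phe",   # phenylalanine
    "TAA": "STOP",  # stop codon: end of the protein
}

def translate(dna):
    """Read codons left to right, mapping each to an amino acid until a stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino_acid = CODON_TABLE.get(dna[i:i + 3], "?")
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

# Hypothetical gene fragment: four codons followed by a stop codon.
print(translate("ATGAAATGGTTTTAA"))  # ['Met', 'Lys', 'Trp', 'Phe']
```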

Scientists use reproductive cloning techniques to produce multiple copies of mammals that are nearly identical copies of other animals, including transgenic animals, genetically superior animals and animals that produce high quantities of milk or have some other desirable trait. To date, cattle, sheep, pigs, goats, horses, mules, cats, rats and mice have been cloned, beginning with the first cloned animal, a sheep named Dolly, in 1996.

Reproductive cloning begins with somatic cell nuclear transfer (SCNT). In SCNT, scientists remove the nucleus from an egg cell (oocyte) and replace it with a nucleus from a donor adult somatic cell, which is any cell in the body except for an oocyte or sperm. For reproductive cloning, the embryo is implanted into a uterus of a surrogate female, where it can develop into a live being.

In addition to the use of transgenics and cloning, scientists can use gene knock out technology to inactivate, or knock out, a specific gene. It is this technology that creates a possible source of replacement organs for humans. The process of transplanting cells, tissues or organs from one species to another is referred to as xenotransplantation. Currently, the pig is the major animal being considered as a viable organ donor to humans. Unfortunately, pig cells and human cells are not immunologically compatible. Pigs, like almost all mammals, have markers on their cells that enable the human immune system to recognize them as foreign and reject them. Genetic engineering is used to knock out the pig gene responsible for the protein that forms the marker to the pig cells.

Animal biotechnology has many potential uses. Since the early 1980s, transgenic animals have been created with increased growth rates, enhanced lean muscle mass, enhanced resistance to disease or improved use of dietary phosphorus to lessen the environmental impacts of animal manure. Transgenic poultry, swine, goats and cattle that generate large quantities of human proteins in eggs, milk, blood or urine also have been produced, with the goal of using these products as human pharmaceuticals. Human pharmaceutical proteins include enzymes, clotting factors, albumin and antibodies. The major factor limiting the widespread use of transgenic animals in agricultural production systems is their relatively inefficient production rate (a success rate of less than 10 percent).

A specific example of these particular applications of animal biotechnology is the transfer of the growth hormone gene of rainbow trout directly into carp eggs. The resulting transgenic carp produce both carp and rainbow trout growth hormones and grow to be one-third larger than normal carp. Another example is the use of transgenic animals to clone large quantities of the gene responsible for a cattle growth hormone. The hormone is extracted from the bacterium, is purified and is injected into dairy cows, increasing their milk production by 10 to 15 percent. That growth hormone is called bovine somatotropin or BST.

Another major application of animal biotechnology is the use of animal organs in humans. Pigs currently are used to supply heart valves for insertion into humans, but they also are being considered as a potential solution to the severe shortage in human organs available for transplant procedures.

While predicting the future is inherently risky, some things can be said with certainty about the future of animal biotechnology. The government agencies involved in the regulation of animal biotechnology, mainly the Food and Drug Administration (FDA), likely will rule on pending policies and establish processes for the commercial uses of products created through the technology. In fact, as of March 2006, the FDA was expected to rule in the next few months on whether to approve meat and dairy products from cloned animals for sale to the public. If these animals and animal products are approved for human consumption, several companies reportedly are ready to sell milk, and perhaps meat, from cloned animals, most likely cattle and swine. It also is expected that technologies will continue to be developed in the field, with much hope for advances in the use of animal organs in human transplant operations.

The potential benefits of animal biotechnology are numerous and include enhanced nutritional content of food for human consumption; a more abundant, cheaper and varied food supply; agricultural land-use savings; a decrease in the number of animals needed for the food supply; improved health of animals and humans; development of new, low-cost disease treatments for humans; and increased understanding of human disease.

Yet despite these potential benefits, several areas of concern exist around the use of biotechnology in animals. To date, a majority of the American public is uncomfortable with genetic modifications to animals.

According to a survey conducted by the Pew Initiative on Food and Biotechnology, 58 percent of those polled said they opposed scientific research on the genetic engineering of animals. And in a Gallup poll conducted in May 2004, 64 percent of Americans polled said they thought it was morally wrong to clone animals.

Concerns surrounding the use of animal biotechnology include the unknown potential health effects to humans from food products created by transgenic or cloned animals, the potential effects on the environment and the effects on animal welfare.

Before animal biotechnology will be used widely by animal agriculture production systems, additional research will be needed to determine if the benefits of animal biotechnology outweigh these potential risks.

The main question posed about the safety of food produced through animal biotechnology for human consumption is, "Is it safe to eat?" But answering that question isn't simple. Other questions must be answered first, such as, "What substances expressed as a result of the genetic modification are likely to remain in food?" Despite these questions, the National Academies of Science (NAS) released a report titled "Animal Biotechnology: Science-Based Concerns" stating that the overall concern level for food safety was determined to be low. Specifically, the report listed three food concerns: allergens, bioactivity and the toxicity of unintended expression products.

The potential for new allergens to be expressed in the process of creating foods from genetically modified animals is a real and valid concern, because the process introduces new proteins. While food allergens are not a new issue, the difficulty comes in how to anticipate these adequately, because they only can be detected once a person is exposed and experiences a reaction.

Another food safety issue, bioactivity, asks, "Will putting a functional protein like a growth hormone in an animal affect the person who consumes food from that animal?" As of May 2006, scientists could not say for sure whether such proteins will.

Finally, concern exists about the toxicity of unintended expression products in the animal biotechnology process. While the risk is considered low, no data are available. The NAS report stated it still needs to be proven that the nutritional profile of these foods does not change and that no unintended and potentially harmful expression products appear.

Another major concern surrounding the use of animal biotechnology is the potential for negative impact to the environment. These potential harms include the alteration of the ecologic balance regarding feed sources and predators, the introduction of transgenic animals that alter the health of existing animal populations and the disruption of reproduction patterns and their success.

To assess the risk of these environmental harms, many more questions must be answered, such as: What is the possibility the altered animal will enter the environment? Will the animal's introduction change the ecological system? Will the animal become established in the environment? And will it interact with and affect the success of other animals in the new community? Because of the many uncertainties involved, it is challenging to make an assessment.

To illustrate a potential environmental harm, consider that if transgenic salmon with genes engineered to accelerate growth were released into the natural environment, they could compete more successfully for food and mates than wild salmon. Thus, there also is concern that genetically engineered organisms will escape and reproduce in the natural environment. It is feared existing species could be eliminated, thus upsetting the natural balance of organisms.

The regulation of animal biotechnology currently is performed under existing government agencies. To date, no new regulations or laws have been enacted to deal with animal biotechnology and related issues. The main governing body for animal biotechnology and its products is the FDA. Specifically, these products fall under the new animal drug provisions of the Food, Drug, and Cosmetic Act (FDCA). In this use, the introduced genetic construct is considered the drug. This lack of concrete regulatory guidance has produced many questions, especially because the process for bringing genetically engineered animals to market remains unknown.

Currently, the only genetically engineered animal on the market is the GloFish, a transgenic aquarium fish engineered to glow in the dark. It has not been subject to regulation by the FDA, however, because it is not believed to be a threat to the environment.

Many people question the use of an agency that was designed specifically for drugs to regulate live animals. The agency's strict confidentiality provisions and lack of an environmental mandate in the FDCA also are concerns. It still is unclear how the agency's provisions will be interpreted for animals and how multiple agencies will work together in the regulatory system.

When animals are genetically engineered for biomedical research purposes (as pigs are, for example, in organ transplantation studies), their care and use is carefully regulated by the Department of Agriculture. In addition, if federal funds are used to support the research, the work further is regulated by the Public Health Service Policy on Humane Care and Use of Laboratory Animals.

Whether products generated from genetically engineered animals should be labeled is yet another controversy surrounding animal biotechnology. Those opposed to mandatory labeling say it violates the government's traditional focus on regulating products, not processes. If a product of animal biotechnology has been proven scientifically by the FDA to be safe for human consumption and the environment and not materially different from similar products produced via conventional means, these individuals say it is unfair and without scientific rationale to single out that product for labeling solely because of the process by which it was made.

On the other hand, those in favor of mandatory labeling argue that labeling is a consumer right-to-know issue. They say consumers need full information about products in the marketplace, including the processes used to make those products, not for food safety or scientific reasons, but so they can make choices in line with their personal ethics.

On average, it takes seven to nine years and an investment of about $55 million to develop, test and market a new genetically engineered product. Consequently, nearly all researchers involved in animal biotechnology are protecting their investments and intellectual property through the patent system. In 1988, the first patent was issued on a transgenic animal, a strain of laboratory mice whose cells were engineered to contain a cancer-predisposing gene. Some people, however, are opposed ethically to the patenting of life forms, because it makes organisms the property of companies. Other people are concerned about its impact on small farmers. Those opposed to using the patent system for animal biotechnology have suggested using breed registries to protect intellectual property.

Ethical and social considerations surrounding animal biotechnology are of significant importance. This is especially true because researchers and developers worry that the future market success of any products derived from cloned or genetically engineered animals will depend partly on the public's acceptance of those products.

Animal biotechnology clearly has its skeptics as well as its outright opponents. Strict opponents think there is something fundamentally immoral about the processes of transgenics and cloning. They liken it to playing God. Moreover, they often oppose animal biotechnology on the grounds that it is unnatural. Its processes, they say, go against nature and, in some cases, cross natural species boundaries.

Still others question the need to genetically engineer animals. Some wonder if it is done so companies can increase profits and agricultural production. They believe a compelling need should exist for the genetic modification of animals and that we should not use animals only for our own wants and needs. And yet others believe it is unethical to stifle technology with the potential to save human lives.

While the field of ethics presents more questions than it answers, it is clear animal biotechnology creates much discussion and debate among scientists, researchers and the American public. Two main areas of debate focus on the welfare of animals involved and the religious issues related to animal biotechnology.

Perhaps the most controversy and debate regarding animal biotechnology surrounds the animals themselves. While it has been noted that animals might, in fact, benefit from the use of animal biotechnology (through improved health, for example), the majority of discussion is about the known and unknown potential negative impacts on animal welfare through the process.

For example, calves and lambs produced through in vitro fertilization or cloning tend to have higher birth weights and longer gestation periods, which lead to difficult births that often require cesarean sections. In addition, some of the biotechnology techniques in use today are extremely inefficient at producing fetuses that survive. Of the transgenic animals that do survive, many do not express the inserted gene properly, often resulting in anatomical, physiological or behavioral abnormalities. There also is a concern that proteins designed to produce a pharmaceutical product in the animal's milk might find their way to other parts of the animal's body, possibly causing adverse effects.

Animal telos is a concept derived from Aristotle and refers to an animal's fundamental nature. Disagreement exists as to whether it is ethical to change an animal's telos through transgenesis. For example, is it ethical to create genetically modified chickens that can tolerate living in small cages? Those opposed to the concept say it is a clear sign we have gone too far in changing that animal.

Those unopposed to changing an animal's telos, however, argue it could benefit animals by fitting them for living conditions for which they are not naturally suited. In this way, scientists could create animals that feel no pain.

Religion plays a crucial part in the way some people view animal biotechnology. For some people, these technologies are considered blasphemous. In effect, God has created a perfect, natural order, they say, and it is sinful to try to improve that order by manipulating the basic ingredient of all life, DNA. Some religions place great importance on the integrity of species, and as a result, those religions' followers strongly oppose any effort to change animals through genetic modification.

Not all religious believers make these assertions, however, and different believers of the same religion might hold differing views on the subject. For example, Christians do not oppose animal biotechnology unanimously. In fact, some Christians support animal biotechnology, saying the Bible teaches humanity's dominion over nature. Some modern theologians even see biotechnology as a challenging, positive opportunity for us to work with God as co-creators.

Transgenic animals can pose problems for some religious groups. For example, Muslims, Sikhs and Hindus are forbidden to eat certain foods. Such religious requirements raise basic questions about the identity of animals and their genetic makeup. If, for example, a small amount of genetic material from a fish is introduced into a melon (in order to allow it to grow in lower temperatures), does that melon become fishy in any meaningful sense? Some would argue all organisms share common genetic material, so the melon would not contain any of the fish's identity. Others, however, believe the transferred genes are exactly what make the animal distinctive; therefore, the melon would be forbidden to be eaten as well.


History of biotechnology – Wikipedia

October 21st, 2016 6:41 pm

Biotechnology is the application of scientific and engineering principles to the processing of materials by biological agents to provide goods and services.[1] From its inception, biotechnology has maintained a close relationship with society. Although now most often associated with the development of drugs, historically biotechnology has been principally associated with food, addressing such issues as malnutrition and famine. The history of biotechnology begins with zymotechnology, which commenced with a focus on brewing techniques for beer. By World War I, however, zymotechnology would expand to tackle larger industrial issues, and the potential of industrial fermentation gave rise to biotechnology. However, both the single-cell protein and gasohol projects failed to progress due to varying issues including public resistance, a changing economic scene, and shifts in political power.

Yet the formation of a new field, genetic engineering, would soon bring biotechnology to the forefront of science in society, and the intimate relationship between the scientific community, the public, and the government would ensue. These debates gained exposure in 1975 at the Asilomar Conference, where Joshua Lederberg was the most outspoken supporter for this emerging field in biotechnology. By as early as 1978, with the development of synthetic human insulin, Lederberg's claims would prove valid, and the biotechnology industry grew rapidly. Each new scientific advance became a media event designed to capture public support, and by the 1980s, biotechnology grew into a promising real industry. In 1988, only five proteins from genetically engineered cells had been approved as drugs by the United States Food and Drug Administration (FDA), but this number would skyrocket to over 125 by the end of the 1990s.

The field of genetic engineering remains a heated topic of discussion in today's society with the advent of gene therapy, stem cell research, cloning, and genetically modified food. While it seems only natural nowadays to link pharmaceutical drugs as solutions to health and societal problems, this relationship of biotechnology serving social needs began centuries ago.

Biotechnology arose from the field of zymotechnology or zymurgy, which began as a search for a better understanding of industrial fermentation, particularly beer. Beer was an important industrial, and not just social, commodity. In late 19th-century Germany, brewing contributed as much to the gross national product as steel, and taxes on alcohol proved to be significant sources of revenue to the government.[2] In the 1860s, institutes and remunerative consultancies were dedicated to the technology of brewing. The most famous was the private Carlsberg Institute, founded in 1875, which employed Emil Christian Hansen, who pioneered the pure yeast process for the reliable production of consistent beer. Less well known were private consultancies that advised the brewing industry. One of these, the Zymotechnic Institute, was established in Chicago by the German-born chemist John Ewald Siebel.

The heyday and expansion of zymotechnology came in World War I in response to industrial needs to support the war. Max Delbrück grew yeast on an immense scale during the war to meet 60 percent of Germany's animal feed needs.[2] Compounds of another fermentation product, lactic acid, made up for a lack of hydraulic fluid, glycerol. On the Allied side, the Russian chemist Chaim Weizmann used starch to eliminate Britain's shortage of acetone, a key raw material for cordite, by fermenting maize to acetone.[3] The industrial potential of fermentation was outgrowing its traditional home in brewing, and "zymotechnology" soon gave way to "biotechnology."

With food shortages spreading and resources fading, some dreamed of a new industrial solution. The Hungarian Károly Ereky coined the word "biotechnology" in Hungary during 1919 to describe a technology based on converting raw materials into a more useful product. He built a slaughterhouse for a thousand pigs and also a fattening farm with space for 50,000 pigs, raising over 100,000 pigs a year. The enterprise was enormous, becoming one of the largest and most profitable meat and fat operations in the world. In a book entitled Biotechnologie, Ereky further developed a theme that would be reiterated through the 20th century: biotechnology could provide solutions to societal crises, such as food and energy shortages. For Ereky, the term "biotechnologie" indicated the process by which raw materials could be biologically upgraded into socially useful products.[4]

This catchword spread quickly after the First World War, as "biotechnology" entered German dictionaries and was taken up abroad by business-hungry private consultancies as far away as the United States. In Chicago, for example, the coming of prohibition at the end of World War I encouraged biological industries to create opportunities for new fermentation products, in particular a market for nonalcoholic drinks. Emil Siebel, the son of the founder of the Zymotechnic Institute, broke away from his father's company to establish his own called the "Bureau of Biotechnology," which specifically offered expertise in fermented nonalcoholic drinks.[1]

The belief that the needs of an industrial society could be met by fermenting agricultural waste was an important ingredient of the "chemurgic movement."[4] Fermentation-based processes generated products of ever-growing utility. In the 1940s, penicillin was the most dramatic. While it was discovered in England, it was produced industrially in the U.S. using a deep fermentation process originally developed in Peoria, Illinois.[5] The enormous profits and the public expectations penicillin engendered caused a radical shift in the standing of the pharmaceutical industry. Doctors used the phrase "miracle drug", and the historian of its wartime use, David Adams, has suggested that to the public penicillin represented the perfect health that went together with the car and the dream house of wartime American advertising.[2] Beginning in the 1950s, fermentation technology also became advanced enough to produce steroids on industrially significant scales.[6] Of particular importance was the improved semisynthesis of cortisone which simplified the old 31 step synthesis to 11 steps.[7] This advance was estimated to reduce the cost of the drug by 70%, making the medicine inexpensive and available.[8] Today biotechnology still plays a central role in the production of these compounds and likely will for years to come.[9][10]

Even greater expectations of biotechnology were raised during the 1960s by a process that grew single-cell protein. When the so-called protein gap threatened world hunger, producing food locally by growing it from waste seemed to offer a solution. It was the possibilities of growing microorganisms on oil that captured the imagination of scientists, policy makers, and commerce.[1] Major companies such as British Petroleum (BP) staked their futures on it. In 1962, BP built a pilot plant at Cap de Lavera in Southern France to publicize its product, Toprina.[1] Initial research work at Lavera was done by Alfred Champagnat.[11] In 1963, construction started on BP's second pilot plant, at Grangemouth Oil Refinery in Britain.[11]

As there was no well-accepted term to describe the new foods, in 1966 the term "single-cell protein" (SCP) was coined at MIT to provide an acceptable and exciting new title, avoiding the unpleasant connotations of microbial or bacterial.[1]

The "food from oil" idea became quite popular by the 1970s, when facilities for growing yeast fed by n-paraffins were built in a number of countries. The Soviets were particularly enthusiastic, opening large "BVK" (belkovo-vitaminny kontsentrat, i.e., "protein-vitamin concentrate") plants next to their oil refineries in Kstovo (1973) [12][13] and Kirishi (1974).[citation needed]

By the late 1970s, however, the cultural climate had completely changed, as the growth in SCP interest had taken place against a shifting economic and cultural scene (136). First, the price of oil rose catastrophically in 1974, so that its cost per barrel was five times greater than it had been two years earlier. Second, despite continuing hunger around the world, anticipated demand also began to shift from humans to animals. The program had begun with the vision of growing food for Third World people, yet the product was instead launched as an animal food for the developed world. The rapidly rising demand for animal feed made that market appear economically more attractive. The ultimate downfall of the SCP project, however, came from public resistance.[1]

This was particularly vocal in Japan, where production came closest to fruition. For all their enthusiasm for innovation and traditional interest in microbiologically produced foods, the Japanese were the first to ban the production of single-cell proteins. The Japanese ultimately were unable to separate the idea of their new "natural" foods from the far from natural connotation of oil.[1] These arguments were made against a background of suspicion of heavy industry in which anxiety over minute traces of petroleum was expressed. Thus, public resistance to an unnatural product led to the end of the SCP project as an attempt to solve world hunger.

Also, in 1989 in the USSR, public environmental concerns led the government to decide to close down (or convert to different technologies) all 8 paraffin-fed yeast plants that the Soviet Ministry of Microbiological Industry had by that time.[citation needed]

In the late 1970s, biotechnology offered another possible solution to a societal crisis. The escalation in the price of oil in 1974 increased the cost of the Western world's energy tenfold.[1] In response, the U.S. government promoted the production of gasohol, gasoline with 10 percent alcohol added, as an answer to the energy crisis.[2] In 1979, when the Soviet Union sent troops to Afghanistan, the Carter administration cut off supplies of agricultural produce to the USSR in retaliation, creating an agricultural surplus in the U.S. As a result, fermenting the agricultural surpluses to synthesize fuel seemed to be an economical solution to the shortage of oil threatened by the Iran-Iraq war. Before the new direction could be taken, however, the political wind changed again: the Reagan administration came to power in January 1981 and, with the declining oil prices of the 1980s, ended support for the gasohol industry before it was born.[1]

Biotechnology seemed to be the solution for major social problems, including world hunger and energy crises. In the 1960s, radical measures seemed necessary to address world hunger, and biotechnology appeared to provide an answer. However, the solutions proved to be too expensive and socially unacceptable, and solving world hunger through SCP food was dismissed. In the 1970s, the food crisis was succeeded by the energy crisis, and here too biotechnology seemed to provide an answer. But once again, costs proved prohibitive as oil prices slumped in the 1980s. Thus, in practice, the implications of biotechnology were not fully realized in these situations. But this would soon change with the rise of genetic engineering.

The origins of biotechnology culminated with the birth of genetic engineering. There were two key events that have come to be seen as scientific breakthroughs beginning the era that would unite genetics with biotechnology. One was the 1953 discovery of the structure of DNA, by Watson and Crick, and the other was the 1973 discovery by Cohen and Boyer of a recombinant DNA technique by which a section of DNA was cut from the plasmid of an E. coli bacterium and transferred into the DNA of another.[14] This approach could, in principle, enable bacteria to adopt the genes and produce proteins of other organisms, including humans. Popularly referred to as "genetic engineering," it came to be defined as the basis of new biotechnology.

Genetic engineering proved to be a topic that thrust biotechnology into the public scene, and the interaction between scientists, politicians, and the public defined the work that was accomplished in this area. Technical developments during this time were revolutionary and at times frightening. In December 1967, the first heart transplant by Christiaan Barnard reminded the public that the physical identity of a person was becoming increasingly problematic. While poetic imagination had always seen the heart at the center of the soul, now there was the prospect of individuals being defined by other people's hearts.[1] During the same month, Arthur Kornberg announced that he had managed to biochemically replicate a viral gene. "Life had been synthesized," said the head of the National Institutes of Health.[1] Genetic engineering was now on the scientific agenda, as it was becoming possible to identify genetic characteristics with diseases such as beta thalassemia and sickle-cell anemia.

Responses to scientific achievements were colored by cultural skepticism. Scientists and their expertise were looked upon with suspicion. In 1968, an immensely popular work, The Biological Time Bomb, was written by the British journalist Gordon Rattray Taylor. The author's preface saw Kornberg's discovery of replicating a viral gene as a route to lethal doomsday bugs. The publisher's blurb for the book warned that within ten years, "You may marry a semi-artificial man or woman... choose your children's sex... tune out pain... change your memories... and live to be 150 if the scientific revolution doesn't destroy us first."[1] The book ended with a chapter called "The Future - If Any". While it is rare for current science to be represented in the movies, in this period of "Star Trek", science fiction and science fact seemed to be converging. "Cloning" became a popular word in the media. Woody Allen satirized the cloning of a person from a nose in his 1973 movie Sleeper, and cloning Adolf Hitler from surviving cells was the theme of Ira Levin's 1976 novel The Boys from Brazil.[1]

In response to these public concerns, scientists, industry, and governments increasingly linked the power of recombinant DNA to the immensely practical functions that biotechnology promised. One of the key scientific figures that attempted to highlight the promising aspects of genetic engineering was Joshua Lederberg, a Stanford professor and Nobel laureate. While in the 1960s "genetic engineering" described eugenics and work involving the manipulation of the human genome, Lederberg stressed research that would involve microbes instead.[1] Lederberg emphasized the importance of focusing on curing living people. Lederberg's 1963 paper, "Biological Future of Man" suggested that, while molecular biology might one day make it possible to change the human genotype, "what we have overlooked is euphenics, the engineering of human development."[1] Lederberg constructed the word "euphenics" to emphasize changing the phenotype after conception rather than the genotype which would affect future generations.

With the discovery of recombinant DNA by Cohen and Boyer in 1973, the idea that genetic engineering would have major human and societal consequences was born. In July 1974, a group of eminent molecular biologists headed by Paul Berg wrote to Science suggesting that the consequences of this work were so potentially destructive that there should be a pause until its implications had been thought through.[1] This suggestion was explored at a meeting in February 1975 at California's Monterey Peninsula, forever immortalized by the location, Asilomar. Its historic outcome was an unprecedented call for a halt in research until it could be regulated in such a way that the public need not be anxious, and it led to a 16-month moratorium until National Institutes of Health (NIH) guidelines were established.

Joshua Lederberg was the leading exception in emphasizing, as he had for years, the potential benefits. At Asilomar, in an atmosphere favoring control and regulation, he circulated a paper countering the pessimism and fears of misuses with the benefits conferred by successful use. He described "an early chance for a technology of untold importance for diagnostic and therapeutic medicine: the ready production of an unlimited variety of human proteins. Analogous applications may be foreseen in fermentation process for cheaply manufacturing essential nutrients, and in the improvement of microbes for the production of antibiotics and of special industrial chemicals."[1] In June 1976, the 16-month moratorium on research expired with the Director's Advisory Committee (DAC) publication of the NIH guidelines of good practice. They defined the risks of certain kinds of experiments and the appropriate physical conditions for their pursuit, as well as a list of things too dangerous to perform at all. Moreover, modified organisms were not to be tested outside the confines of a laboratory or allowed into the environment.[14]

Atypical as Lederberg was at Asilomar, his optimistic vision of genetic engineering would soon lead to the development of the biotechnology industry. Over the next two years, as public concern over the dangers of recombinant DNA research grew, so too did interest in its technical and practical applications. Curing genetic diseases remained in the realms of science fiction, but it appeared that producing simple human proteins could be good business. Insulin, one of the smaller, best characterized and understood proteins, had been used in treating type 1 diabetes for half a century. It had been extracted from animals in a chemically slightly different form from the human product. Yet, if one could produce synthetic human insulin, one could meet an existing demand with a product whose approval would be relatively easy to obtain from regulators. In the period 1975 to 1977, synthetic "human" insulin represented the aspirations for new products that could be made with the new biotechnology. Microbial production of synthetic human insulin was finally announced in September 1978 by a startup company, Genentech.[15] That company did not commercialize the product itself; instead, it licensed the production method to Eli Lilly and Company. 1978 also saw the first application for a patent on a gene, the gene which produces human growth hormone, filed by the University of California, thus introducing the legal principle that genes could be patented. Since that filing, almost 20% of the more than 20,000 genes in human DNA have been patented.[citation needed]

The radical shift in the connotation of "genetic engineering" from an emphasis on the inherited characteristics of people to the commercial production of proteins and therapeutic drugs was nurtured by Joshua Lederberg. His broad concerns since the 1960s had been stimulated by enthusiasm for science and its potential medical benefits. Countering calls for strict regulation, he expressed a vision of potential utility. Against a belief that new techniques would entail unmentionable and uncontrollable consequences for humanity and the environment, a growing consensus on the economic value of recombinant DNA emerged.[citation needed]

With ancestral roots in industrial microbiology that date back centuries, the new biotechnology industry grew rapidly beginning in the mid-1970s. Each new scientific advance became a media event designed to capture investment confidence and public support.[15] Although market expectations and social benefits of new products were frequently overstated, many people were prepared to see genetic engineering as the next great advance in technological progress. By the 1980s, biotechnology characterized a nascent real industry, providing titles for emerging trade organizations such as the Biotechnology Industry Organization (BIO).

The main focus of attention after insulin was the potential profit makers in the pharmaceutical industry: human growth hormone and what promised to be a miraculous cure for viral diseases, interferon. Cancer was a central target in the 1970s because increasingly the disease was linked to viruses.[14] By 1980, a new company, Biogen, had produced interferon through recombinant DNA. The emergence of interferon and the possibility of curing cancer raised money in the community for research and increased the enthusiasm of an otherwise uncertain and tentative society. Moreover, the 1980s added AIDS to the 1970s plight of cancer, offering an enormous potential market for a successful therapy, and more immediately, a market for diagnostic tests based on monoclonal antibodies.[16] By 1988, only five proteins from genetically engineered cells had been approved as drugs by the United States Food and Drug Administration (FDA): synthetic insulin, human growth hormone, hepatitis B vaccine, alpha-interferon, and tissue plasminogen activator (tPA), for lysis of blood clots. By the end of the 1990s, however, 125 more genetically engineered drugs would be approved.[16]

Genetic engineering also reached the agricultural front. There has been tremendous progress since the market introduction of the genetically engineered Flavr Savr tomato in 1994.[16] Ernst & Young reported that in 1998, 30% of the U.S. soybean crop was expected to be from genetically engineered seeds. In 1998, about 30% of the U.S. cotton and corn crops were also expected to be products of genetic engineering.[16]

Genetic engineering in biotechnology stimulated hopes for both therapeutic proteins, drugs and biological organisms themselves, such as seeds, pesticides, engineered yeasts, and modified human cells for treating genetic diseases. From the perspective of its commercial promoters, scientific breakthroughs, industrial commitment, and official support were finally coming together, and biotechnology became a normal part of business. The proponents of the economic and technological significance of biotechnology were no longer iconoclasts.[1] Their message had finally become accepted and incorporated into the policies of governments and industry.

According to Burrill and Company, an industry investment bank, over $350 billion has been invested in biotech since the emergence of the industry, and global revenues rose from $23 billion in 2000 to more than $50 billion in 2005. The greatest growth has been in Latin America but all regions of the world have shown strong growth trends. By 2007 and into 2008, though, a downturn in the fortunes of biotech emerged, at least in the United Kingdom, as the result of declining investment in the face of failure of biotech pipelines to deliver and a consequent downturn in return on investment.[17]


Veterinary medicine – Wikipedia

October 20th, 2016 7:45 pm

Veterinary medicine is the branch of medicine that deals with the prevention, diagnosis and treatment of disease, disorder and injury in non-human animals. The scope of veterinary medicine is wide, covering all animal species, both domesticated and wild, with a wide range of conditions which can affect different species.

Veterinary medicine is widely practiced, both with and without professional supervision. Professional care is most often led by a veterinary physician (also known as a vet, veterinary surgeon or veterinarian), but also by paraveterinary workers such as veterinary nurses or technicians. This can be augmented by other paraprofessionals with specific specialisms such as animal physiotherapy or dentistry, and species relevant roles such as farriers.

Veterinary science helps human health through the monitoring and control of zoonotic disease (infectious disease transmitted from non-human animals to humans), food safety, and indirectly through human applications of basic medical research. Veterinary scientists also help to maintain the food supply through livestock health monitoring and treatment, and support mental health by keeping pets healthy and long-lived. Veterinary scientists often collaborate with epidemiologists and other health or natural scientists, depending on the type of work. Ethically, veterinarians are usually obliged to look after animal welfare.

The Egyptian Papyrus of Kahun (1900 BCE) and Vedic literature in ancient India offer some of the first written records of veterinary medicine (see also Shalihotra). An edict of Asoka, the first Buddhist Emperor of India, reads: "Everywhere King Piyadasi (Asoka) made two kinds of medicine available, medicine for people and medicine for animals. Where there were no healing herbs for people and animals, he ordered that they be bought and planted."

The first attempts to organize and regulate the practice of treating animals tended to focus on horses because of their economic significance. In the Middle Ages from around 475 CE, farriers combined their work in horseshoeing with the more general task of "horse doctoring". In 1356, the Lord Mayor of London, concerned at the poor standard of care given to horses in the city, requested that all farriers operating within a seven-mile radius of the City of London form a "fellowship" to regulate and improve their practices. This ultimately led to the establishment of the Worshipful Company of Farriers in 1674.[3]

Meanwhile, Carlo Ruini's book Anatomia del Cavallo (Anatomy of the Horse) was published in 1598. It was the first comprehensive treatise on the anatomy of a non-human species.[4]

The first veterinary college was founded in Lyon, France in 1762 by Claude Bourgelat.[5] According to Lupton, after observing the devastation being caused by cattle plague to the French herds, Bourgelat devoted his time to seeking out a remedy. This resulted in his founding a veterinary college in Lyon in 1761, from which establishment he dispatched students to combat the disease; in a short time, the plague was stayed and the health of stock restored, through the assistance rendered to agriculture by veterinary science and art.[6]

The Odiham Agricultural Society was founded in 1783 in England to promote agriculture and industry,[7] and played an important role in the foundation of the veterinary profession in Britain. A founding member, Thomas Burgess, began to take up the cause of animal welfare and campaign for the more humane treatment of sick animals.[8] A 1785 Society meeting resolved to "promote the study of Farriery upon rational scientific principles."

The physician James Clark wrote a treatise entitled Prevention of Disease in which he argued for the professionalization of the veterinary trade, and the establishment of veterinary colleges. This was finally achieved in 1790, through the campaigning of Granville Penn, who persuaded the Frenchman, Benoit Vial de St. Bel to accept the professorship of the newly established Veterinary College in London.[7] The Royal College of Veterinary Surgeons was established by royal charter in 1844. Veterinary science came of age in the late 19th century, with notable contributions from Sir John McFadyean, credited by many as having been the founder of modern Veterinary research.[9]

In the United States, the first schools were established in the early 19th century in Boston, New York and Philadelphia. In 1879, Iowa Agricultural College became the first land grant college to establish a school of veterinary medicine.[10]

Veterinary care and management is usually led by a veterinary physician (usually called a vet, veterinary surgeon or veterinarian). This role is the equivalent of a doctor in human medicine, and usually involves post-graduate study and qualification.

In many countries, the local nomenclature for a vet is a protected term, meaning that people without the prerequisite qualifications and/or registration are not able to use the title, and in many cases the activities that may be undertaken by a vet (such as animal treatment or surgery) are restricted to those people who are registered as vets. For instance, in the United Kingdom, as in other jurisdictions, animal treatment may only be performed by registered vets (with a few designated exceptions, such as paraveterinary workers), and it is illegal for any person who is not registered to call themselves a vet or perform any treatment.

Most vets work in clinical settings, treating animals directly. These vets may be involved in a general practice, treating animals of all types; may be specialized in a specific group of animals such as companion animals, livestock, laboratory animals, zoo animals or horses; or may specialize in a narrow medical discipline such as surgery, dermatology, laboratory animal medicine, or internal medicine.

As with healthcare professionals, vets face ethical decisions about the care of their patients. Current debates within the profession include the ethics of purely cosmetic procedures on animals, such as the declawing of cats and the docking of tails, cropping of ears and debarking of dogs.

Paraveterinary workers, including veterinary nurses, technicians and assistants, either assist vets in their work, or may work within their own scope of practice, depending on skills and qualifications, including in some cases, performing minor surgery.

The role of paraveterinary workers is less homogeneous globally than that of a vet, and qualification levels, and the associated skill mix, vary widely.

A number of professions exist within the scope of veterinary medicine that are not necessarily performed by vets or veterinary nurses. These include practitioners in roles also found in human medicine, such as those dealing with musculoskeletal disorders, including osteopaths, chiropractors and physiotherapists.

There are also roles which are specific to animals, but which have parallels in human society, such as animal grooming and animal massage.

Some roles are specific to a species or group of animals, such as farriers, who are involved in the shoeing of horses, and in many cases have a major role to play in ensuring the medical fitness of the horse.

Exotic veterinary care covers the treatment, diagnosis and care of nontraditional domesticated animals. An exotic animal can be briefly described as one that is not normally domesticated or owned, hence "exotic". Veterinary research and study address this form of treatment and care only on a smaller scale, owing to the limited demand and resources available for this field of work.

Veterinary research includes research on prevention, control, diagnosis, and treatment of diseases of animals and on the basic biology, welfare, and care of animals. Veterinary research transcends species boundaries and includes the study of spontaneously occurring and experimentally induced models of both human and animal disease and research at human-animal interfaces, such as food safety, wildlife and ecosystem health, zoonotic diseases, and public policy.[11]

As in human medicine, randomized controlled trials are fundamental in veterinary medicine for establishing the effectiveness of a treatment.[12] However, clinical veterinary research lags far behind human medical research, with fewer randomized controlled trials, which tend to be of lower quality and are mostly focused on research animals.[13] One possible improvement is the creation of networks that include private veterinary practices in randomized controlled trials.

Read the original:
Veterinary medicine - Wikipedia

Read More...

Nanomedicine – Wikipedia

October 20th, 2016 7:43 pm

Nanomedicine is the medical application of nanotechnology.[1] Nanomedicine ranges from the medical applications of nanomaterials and biological devices, to nanoelectronic biosensors, and even possible future applications of molecular nanotechnology such as biological machines. Current problems for nanomedicine involve understanding the issues related to toxicity and environmental impact of nanoscale materials (materials whose structure is on the scale of nanometers, i.e. billionths of a meter).

Functionalities can be added to nanomaterials by interfacing them with biological molecules or structures. The size of nanomaterials is similar to that of most biological molecules and structures; therefore, nanomaterials can be useful for both in vivo and in vitro biomedical research and applications. Thus far, the integration of nanomaterials with biology has led to the development of diagnostic devices, contrast agents, analytical tools, physical therapy applications, and drug delivery vehicles.

Nanomedicine seeks to deliver a valuable set of research tools and clinically useful devices in the near future.[2][3] The National Nanotechnology Initiative expects new commercial applications in the pharmaceutical industry that may include advanced drug delivery systems, new therapies, and in vivo imaging.[4] Nanomedicine research is receiving funding from the US National Institutes of Health, including the funding in 2005 of a five-year plan to set up four nanomedicine centers.

Nanomedicine sales reached $16 billion in 2015, with a minimum of $3.8 billion in nanotechnology R&D being invested every year. Global funding for emerging nanotechnology increased by 45% per year in recent years, with product sales exceeding $1 trillion in 2013.[5] As the nanomedicine industry continues to grow, it is expected to have a significant impact on the economy.

Nanotechnology has provided the possibility of delivering drugs to specific cells using nanoparticles.

The overall drug consumption and side-effects may be lowered significantly by depositing the active agent in the morbid region only and in no higher dose than needed. Targeted drug delivery is intended to reduce the side effects of drugs with concomitant decreases in consumption and treatment expenses. Drug delivery focuses on maximizing bioavailability both at specific places in the body and over a period of time. This can potentially be achieved by molecular targeting by nanoengineered devices.[6][7] More than $65 billion are wasted each year due to poor bioavailability.[citation needed] A benefit of using nanoscale for medical technologies is that smaller devices are less invasive and can possibly be implanted inside the body, plus biochemical reaction times are much shorter. These devices are faster and more sensitive than typical drug delivery.[8] The efficacy of drug delivery through nanomedicine is largely based upon: a) efficient encapsulation of the drugs, b) successful delivery of drug to the targeted region of the body, and c) successful release of the drug.[citation needed]

Drug delivery systems, lipid- [9] or polymer-based nanoparticles,[10] can be designed to improve the pharmacokinetics and biodistribution of the drug.[11][12][13] However, the pharmacokinetics and pharmacodynamics of nanomedicine is highly variable among different patients.[14] When designed to avoid the body's defence mechanisms,[15] nanoparticles have beneficial properties that can be used to improve drug delivery. Complex drug delivery mechanisms are being developed, including the ability to get drugs through cell membranes and into cell cytoplasm. Triggered response is one way for drug molecules to be used more efficiently. Drugs are placed in the body and only activate on encountering a particular signal. For example, a drug with poor solubility will be replaced by a drug delivery system where both hydrophilic and hydrophobic environments exist, improving the solubility.[16] Drug delivery systems may also be able to prevent tissue damage through regulated drug release; reduce drug clearance rates; or lower the volume of distribution and reduce the effect on non-target tissue. However, the biodistribution of these nanoparticles is still imperfect due to the complex host's reactions to nano- and microsized materials[15] and the difficulty in targeting specific organs in the body. Nevertheless, a lot of work is still ongoing to optimize and better understand the potential and limitations of nanoparticulate systems. While advancement of research proves that targeting and distribution can be augmented by nanoparticles, the dangers of nanotoxicity become an important next step in further understanding of their medical uses.[17]

Nanoparticles can be used in combination therapy for decreasing antibiotic resistance or for their antimicrobial properties.[18][19][20] Nanoparticles might also be used to circumvent multidrug resistance (MDR) mechanisms.[21]

Two forms of nanomedicine that have already been tested in mice and are awaiting human trials are the use of gold nanoshells to help diagnose and treat cancer,[22] and the use of liposomes as vaccine adjuvants and as vehicles for drug transport.[23][24] Similarly, drug detoxification is another application of nanomedicine which has shown promising results in rats.[25] Advances in lipid nanotechnology were also instrumental in engineering medical nanodevices and novel drug delivery systems, as well as in developing sensing applications.[26] Other examples can be found in dendrimers and nanoporous materials, and in block co-polymers, which form micelles for drug encapsulation.[10]

Polymeric nano-particles are a competing technology to lipidic (based mainly on phospholipids) nano-particles. There is an additional risk of toxicity associated with polymers that are not widely studied or understood. The major advantages of polymers are stability, lower cost and predictable characterisation. However, in the patient's body this very stability (slow degradation) is a negative factor. Phospholipids, on the other hand, are membrane lipids (already present in the body and surrounding each cell), have GRAS (Generally Recognised As Safe) status from the FDA and are derived from natural sources without any complex chemistry involved. They are not metabolised but rather absorbed by the body, and the degradation products are themselves nutrients (fats or micronutrients).[citation needed]

Proteins and peptides exert multiple biological actions in the human body and have been identified as showing great promise for the treatment of various diseases and disorders. These macromolecules are called biopharmaceuticals. Targeted and/or controlled delivery of these biopharmaceuticals using nanomaterials like nanoparticles and dendrimers is an emerging field called nanobiopharmaceutics, and these products are called nanobiopharmaceuticals.[citation needed]

Another highly efficient system for microRNA delivery, for example, is nanoparticles formed by the self-assembly of two different microRNAs deregulated in cancer.[27]

Another vision is based on small electromechanical systems; nanoelectromechanical systems are being investigated for the active release of drugs. Some potentially important applications include cancer treatment with iron nanoparticles or gold shells. Nanotechnology is also opening up new opportunities in implantable delivery systems, which are often preferable to the use of injectable drugs, because the latter frequently display first-order kinetics (the blood concentration goes up rapidly, but drops exponentially over time). This rapid rise may cause difficulties with toxicity, and drug efficacy can diminish as the drug concentration falls below the targeted range.[citation needed]

Some nanotechnology-based drugs that are commercially available or in human clinical trials include:

Existing and potential drug nanocarriers have been reviewed.[38][39][40][41]

Nanoparticles have a high surface area to volume ratio. This allows many functional groups to be attached to a nanoparticle, which can seek out and bind to certain tumor cells. Additionally, the small size of nanoparticles (10 to 100 nanometers) allows them to preferentially accumulate at tumor sites (because tumors lack an effective lymphatic drainage system).[42] Limitations to conventional cancer chemotherapy include drug resistance, lack of selectivity, and lack of solubility. Nanoparticles have the potential to overcome these problems.[43]
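The scaling behind this is simple geometry. As a rough illustration (assuming an idealized spherical particle, which real nanocarriers only approximate), the surface-area-to-volume ratio of a sphere is 3/r, so shrinking the radius by a factor of 100 raises the relative surface available for functional groups by the same factor:

```python
# Illustrative only: surface-area-to-volume ratio of an idealized spherical particle.
# For a sphere of radius r, SA/V = (4*pi*r^2) / ((4/3)*pi*r^3) = 3/r.
def surface_to_volume_ratio(radius_m: float) -> float:
    """Return SA/V (in 1/m) for a sphere of the given radius in meters."""
    return 3.0 / radius_m

for label, radius_nm in [("50 nm nanoparticle", 50), ("5 um microparticle", 5000)]:
    radius_m = radius_nm * 1e-9
    print(f"{label}: SA/V = {surface_to_volume_ratio(radius_m):.2e} per meter")

# The 50 nm particle has a surface-to-volume ratio 100 times that of the
# 5 um particle, i.e. far more surface per unit of encapsulated payload.
```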

In photodynamic therapy, a particle is placed within the body and is illuminated with light from the outside. The light gets absorbed by the particle and, if the particle is metal, energy from the light will heat the particle and surrounding tissue. Light may also be used to produce high energy oxygen molecules which will chemically react with and destroy most organic molecules that are next to them (like tumors). This therapy is appealing for many reasons. Unlike chemotherapy, it does not leave a "toxic trail" of reactive molecules throughout the body, because it acts only where the light is shone and the particles exist. Photodynamic therapy has potential as a noninvasive procedure for dealing with diseases, growths and tumors. Kanzius RF therapy is one example of such therapy (nanoparticle hyperthermia).[citation needed] Also, gold nanoparticles have the potential to join numerous therapeutic functions into a single platform, by targeting specific tumor cells, tissues and organs.[44][45]

In vivo imaging is another area where tools and devices are being developed. Using nanoparticle contrast agents, images such as ultrasound and MRI have a favorable distribution and improved contrast. This might be accomplished by self assembled biocompatible nanodevices that will detect, evaluate, treat and report to the clinical doctor automatically.[citation needed]

The small size of nanoparticles endows them with properties that can be very useful in oncology, particularly in imaging. Quantum dots (nanoparticles with quantum confinement properties, such as size-tunable light emission), when used in conjunction with MRI (magnetic resonance imaging), can produce exceptional images of tumor sites. Nanoparticles of cadmium selenide (quantum dots) glow when exposed to ultraviolet light. When injected, they seep into cancer tumors. The surgeon can see the glowing tumor and use it as a guide for more accurate tumor removal. These nanoparticles are much brighter than organic dyes and only need one light source for excitation. This means that the use of fluorescent quantum dots could produce a higher contrast image and at a lower cost than today's organic dyes used as contrast media. The downside, however, is that quantum dots are usually made of quite toxic elements.[citation needed]
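The size-tunable emission mentioned above comes from quantum confinement. A common first-order, particle-in-a-sphere approximation (offered here only as a sketch; it ignores the exciton binding correction) writes the effective band gap of a dot of radius R as

```latex
E_{\mathrm{gap}}(R) \;\approx\; E_{\mathrm{gap}}^{\mathrm{bulk}}
  + \frac{\hbar^{2}\pi^{2}}{2R^{2}}\left(\frac{1}{m_{e}^{*}} + \frac{1}{m_{h}^{*}}\right),
```

where m_e* and m_h* are the effective electron and hole masses. Because the confinement term grows as 1/R^2, smaller dots have a wider gap and emit at shorter (bluer) wavelengths, which is why the emission color can be tuned simply by selecting the particle size.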

Tracking movement can help determine how well drugs are being distributed or how substances are metabolized. It is difficult to track a small group of cells throughout the body, so scientists used to dye the cells. These dyes needed to be excited by light of a certain wavelength in order for them to light up. While different color dyes absorb different frequencies of light, there was a need for as many light sources as cells. A way around this problem is with luminescent tags. These tags are quantum dots attached to proteins that penetrate cell membranes. The dots can be random in size, can be made of bio-inert material, and they demonstrate the nanoscale property that color is size-dependent. As a result, sizes are selected so that the frequency of light used to make a group of quantum dots fluoresce is an even multiple of the frequency required to make another group incandesce. Then both groups can be lit with a single light source. Researchers have also found a way to insert nanoparticles[46] into affected parts of the body so that those parts will glow, showing tumor growth or shrinkage as well as organ trouble.[47]

Nanotechnology-on-a-chip is one more dimension of lab-on-a-chip technology. Magnetic nanoparticles, bound to a suitable antibody, are used to label specific molecules, structures or microorganisms. Gold nanoparticles tagged with short segments of DNA can be used for detection of genetic sequence in a sample. Multicolor optical coding for biological assays has been achieved by embedding different-sized quantum dots into polymeric microbeads. Nanopore technology for analysis of nucleic acids converts strings of nucleotides directly into electronic signatures.[citation needed]

Sensor test chips containing thousands of nanowires, able to detect proteins and other biomarkers left behind by cancer cells, could enable the detection and diagnosis of cancer in the early stages from a few drops of a patient's blood.[48] Nanotechnology is helping to advance the use of arthroscopes, which are pencil-sized devices used in surgeries with lights and cameras so surgeons can operate through smaller incisions. The smaller the incisions, the faster the healing time, which is better for the patients. It is also helping to find a way to make an arthroscope smaller than a strand of hair.[49]

Research on nanoelectronics-based cancer diagnostics could lead to tests that can be done in pharmacies. The results promise to be highly accurate and the product promises to be inexpensive. Such devices could take a very small amount of blood and detect cancer anywhere in the body in about five minutes, with a sensitivity that is a thousand times better than in a conventional laboratory test. These devices are built with nanowires to detect cancer proteins; each nanowire detector is primed to be sensitive to a different cancer marker. The biggest advantage of the nanowire detectors is that they could test for anywhere from ten to one hundred similar medical conditions without adding cost to the testing device.[50] Nanotechnology has also helped to personalize oncology for the detection, diagnosis, and treatment of cancer. Treatment can now be tailored to each individual's tumor for better performance. Researchers have also found ways to target a specific part of the body that is being affected by cancer.[51]

Magnetic microparticles are proven research instruments for the separation of cells and proteins from complex media. The technology is available under the name Magnetic-activated cell sorting or Dynabeads, among others. More recently it was shown in animal models that magnetic nanoparticles can be used for the removal of various noxious compounds including toxins, pathogens, and proteins from whole blood in an extracorporeal circuit similar to dialysis.[52][53] In contrast to dialysis, which works on the principle of the size-related diffusion of solutes and ultrafiltration of fluid across a semi-permeable membrane, purification with nanoparticles allows specific targeting of substances. Additionally, larger compounds which are commonly not dialyzable can be removed.[citation needed]

The purification process is based on functionalized iron oxide or carbon-coated metal nanoparticles with ferromagnetic or superparamagnetic properties.[54] Binding agents such as proteins,[53] antibodies,[52] antibiotics,[55] or synthetic ligands[56] are covalently linked to the particle surface. These binding agents are able to interact with target species, forming an agglomerate. Applying an external magnetic field gradient allows a force to be exerted on the nanoparticles. Hence the particles can be separated from the bulk fluid, thereby cleaning it of contaminants.[57][58]
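The force at work here can be sketched with the standard magnetophoresis expression for a small magnetizable bead suspended in a fluid (assuming a superparamagnetic particle well below saturation; the symbols are generic and not taken from the cited studies):

```latex
\vec{F}_{m} \;=\; \frac{V_{p}\,\Delta\chi}{\mu_{0}}\,\left(\vec{B}\cdot\nabla\right)\vec{B}
```

where V_p is the particle volume, Δχ the difference in magnetic susceptibility between particle and fluid, and μ0 the vacuum permeability. The force scales with the field gradient rather than with the field strength alone, which is why a sharply varying field from a permanent magnet suffices to pull particle-bound contaminants out of flowing blood.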

The small size (<100 nm) and large surface area of functionalized nanomagnets lead to advantageous properties compared to hemoperfusion, a clinically used technique for the purification of blood based on surface adsorption. These advantages are high loading and accessibility of the binding agents, high selectivity towards the target compound, fast diffusion, small hydrodynamic resistance, and low dosage.[59]

This approach offers new therapeutic possibilities for the treatment of systemic infections such as sepsis by directly removing the pathogen. It can also be used to selectively remove cytokines or endotoxins[55] or for the dialysis of compounds which are not accessible by traditional dialysis methods. However the technology is still in a preclinical phase and first clinical trials are not expected before 2017.[60]

Nanotechnology may be used as part of tissue engineering to help reproduce or repair or reshape damaged tissue using suitable nanomaterial-based scaffolds and growth factors. Tissue engineering if successful may replace conventional treatments like organ transplants or artificial implants. Nanoparticles such as graphene, carbon nanotubes, molybdenum disulfide and tungsten disulfide are being used as reinforcing agents to fabricate mechanically strong biodegradable polymeric nanocomposites for bone tissue engineering applications. The addition of these nanoparticles in the polymer matrix at low concentrations (~0.2 weight%) leads to significant improvements in the compressive and flexural mechanical properties of polymeric nanocomposites.[61][62] Potentially, these nanocomposites may be used as a novel, mechanically strong, light weight composite as bone implants.[citation needed]

For example, a flesh welder was demonstrated to fuse two pieces of chicken meat into a single piece using a suspension of gold-coated nanoshells activated by an infrared laser. This could be used to weld arteries during surgery.[63] Another example is nanonephrology, the use of nanomedicine on the kidney.

Neuro-electronic interfacing is a visionary goal dealing with the construction of nanodevices that will permit computers to be joined and linked to the nervous system. This idea requires the building of a molecular structure that will permit control and detection of nerve impulses by an external computer. A refuelable strategy implies energy is refilled continuously or periodically with external sonic, chemical, tethered, magnetic, or biological electrical sources, while a nonrefuelable strategy implies that all power is drawn from internal energy storage which would stop when all energy is drained. A nanoscale enzymatic biofuel cell for self-powered nanodevices has been developed that uses glucose from biofluids including human blood and watermelons.[64] One limitation to this innovation is that electrical interference, leakage, or overheating from power consumption is possible. The wiring of the structure is also extremely difficult, because the wires must be positioned precisely in the nervous system. The structures that will provide the interface must also be compatible with the body's immune system.[65]

Molecular nanotechnology is a speculative subfield of nanotechnology regarding the possibility of engineering molecular assemblers, machines which could re-order matter at a molecular or atomic scale. Nanomedicine would make use of these nanorobots, introduced into the body, to repair or detect damage and infections. Molecular nanotechnology is highly theoretical, seeking to anticipate what inventions nanotechnology might yield and to propose an agenda for future inquiry. The proposed elements of molecular nanotechnology, such as molecular assemblers and nanorobots, are far beyond current capabilities.[1][65][66][67] Future advances in nanomedicine could give rise to life extension through the repair of many processes thought to be responsible for aging. K. Eric Drexler, one of the founders of nanotechnology, postulated cell repair machines, including ones operating within cells and utilizing as yet hypothetical molecular machines, in his 1986 book Engines of Creation, with the first technical discussion of medical nanorobots by Robert Freitas appearing in 1999.[1] Raymond Kurzweil, a futurist and transhumanist, stated in his book The Singularity Is Near that he believes that advanced medical nanorobotics could completely remedy the effects of aging by 2030.[68] According to Richard Feynman, it was his former graduate student and collaborator Albert Hibbs who originally suggested to him (circa 1959) the idea of a medical use for Feynman's theoretical micromachines (see nanotechnology). Hibbs suggested that certain repair machines might one day be reduced in size to the point that it would, in theory, be possible to (as Feynman put it) "swallow the doctor". The idea was incorporated into Feynman's 1959 essay There's Plenty of Room at the Bottom.[69]

Read more from the original source:
Nanomedicine - Wikipedia

Read More...

Longevity – Wikipedia

October 20th, 2016 7:43 pm

The word "longevity" is sometimes used as a synonym for "life expectancy" in demography - however, the term "longevity" is sometimes meant to refer only to especially long-lived members of a population, whereas "life expectancy" is always defined statistically as the average number of years remaining at a given age. For example, a population's life expectancy at birth is the same as the average age at death for all people born in the same year (in the case of cohorts). Longevity is best thought of as a term for general audiences meaning 'typical length of life' and specific statistical definitions should be clarified when necessary.

Reflections on longevity have usually gone beyond acknowledging the brevity of human life and have included thinking about methods to extend life. Longevity has been a topic not only for the scientific community but also for writers of travel, science fiction, and utopian novels.

There are many difficulties in authenticating the longest human life span ever by modern verification standards, owing to inaccurate or incomplete birth statistics. Fiction, legend, and folklore have proposed or claimed life spans in the past or future vastly longer than those verified by modern standards, and longevity narratives and unverified longevity claims frequently speak of their existence in the present.

A life annuity is a form of longevity insurance.

Various factors contribute to an individual's longevity. Significant factors in life expectancy include gender, genetics, access to health care, hygiene, diet and nutrition, exercise, lifestyle, and crime rates. Below is a list of life expectancies in different types of countries:[3]

Population longevities are increasing as life expectancies around the world grow:[1][4]

The Gerontology Research Group validates current longevity records by modern standards, and maintains a list of supercentenarians; many other unvalidated longevity claims exist. Record-holding individuals include:[citation needed]

Evidence-based studies indicate that longevity is based on two major factors, genetics and lifestyle choices.[5]

Twin studies have estimated that approximately 20-30% of the variation in human lifespan can be related to genetics, with the rest due to individual behaviors and environmental factors which can be modified.[6] Although over 200 gene variants have been associated with longevity according to a US-Belgian-UK research database of human genetic variants,[7] these explain only a small fraction of the heritability.[8] A 2012 study found that even modest amounts of leisure-time physical exercise can extend life expectancy by as much as 4.5 years.[9]

Lymphoblastoid cell lines established from blood samples of centenarians have significantly higher activity of the DNA repair protein PARP (Poly ADP ribose polymerase) than cell lines from younger (20 to 70 year old) individuals.[10] The lymphocytic cells of centenarians have characteristics typical of cells from young people, both in their capability of priming the mechanism of repair after H2O2 sublethal oxidative DNA damage and in their PARP gene expression.[11] These findings suggest that elevated PARP gene expression contributes to the longevity of centenarians, consistent with the DNA damage theory of aging.[12]

A study of the regions of the world known as blue zones, where people commonly live active lives past 100 years of age, speculated that longevity is related to a healthy social and family life, not smoking, eating a plant-based diet, frequent consumption of legumes and nuts, and engaging in regular physical activity.[13] In a cohort study, the combination of a plant-based diet, normal BMI, and not smoking accounted for differences of up to 15 years in life expectancy.[14] Korean court records going back to 1392 indicate that the average lifespan of eunuchs was 70.0 ± 1.76 years, which was 14.4–19.1 years longer than the lifespan of non-castrated men of similar socio-economic status.[15] The Alameda County Study hypothesized three additional lifestyle characteristics that promote longevity: limiting alcohol consumption, sleeping 7 to 8 hours per night, and not snacking (eating between meals), although the study found the association between these characteristics and mortality is "weak at best".[16] There are however many other possible factors potentially affecting longevity, including the impact of high peer competition, which is typically experienced in large cities.[17]

In preindustrial times, deaths at young and middle age were more common than they are today. This is not due to genetics, but because of environmental factors such as disease, accidents, and malnutrition, especially since the former were not generally treatable with pre-20th century medicine. Deaths from childbirth were common in women, and many children did not live past infancy. In addition, most people who did attain old age were likely to die quickly from the above-mentioned untreatable health problems. Despite this, we do find many examples of pre-20th century individuals attaining lifespans of 75 years or greater, including Benjamin Franklin, Thomas Jefferson, John Adams, Cato the Elder, Thomas Hobbes, Eric of Pomerania, Christopher Polhem, and Michelangelo. This was also true for poorer people like peasants or laborers. Genealogists will almost certainly find ancestors living to their 70s, 80s and even 90s several hundred years ago.

For example, an 1871 census in the UK (the first of its kind, but personal data from other censuses dates back to 1841 and numerical data back to 1801) found the average male life expectancy to be 44, but if infant mortality is subtracted, males who lived to adulthood averaged 75 years. The present life expectancy in the UK is 77 years for males and 81 for females, while the United States averages 74 for males and 80 for females.
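The gap between those two 1871 figures is an arithmetic effect of averaging over infant deaths. A minimal sketch with made-up illustrative numbers (not the actual census data) shows how a high infant mortality rate pulls down life expectancy at birth even when most survivors reach old age:

```python
# Illustrative cohort, not real census data: 25% of births die in infancy
# (at an average age of 1), the rest die at an average age of 70.
infant_mortality = 0.25
mean_age_infant_death = 1.0
mean_age_adult_death = 70.0

life_expectancy_at_birth = (infant_mortality * mean_age_infant_death
                            + (1 - infant_mortality) * mean_age_adult_death)
life_expectancy_of_survivors = mean_age_adult_death

print(f"Life expectancy at birth:          {life_expectancy_at_birth:.1f} years")
print(f"Average age at death, adults only: {life_expectancy_of_survivors:.1f} years")
# Prints roughly 52.8 vs 70.0: the headline average is low even though people
# who survive childhood routinely live to old age.
```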

Studies have shown that black American males have the shortest lifespans of any group of people in the US, averaging only 69 years (Asian-American females average the longest).[18] This reflects overall poorer health and greater prevalence of heart disease, obesity, diabetes, and cancer among black American men.

Women normally outlive men, and this was as true in pre-industrial times as today. Theories for this include smaller bodies (and thus less stress on the heart), a stronger immune system (since testosterone acts as an immunosuppressant), and less tendency to engage in physically dangerous activities.

There is a current debate as to whether or not the pursuit of longevity is a worthwhile health care goal for the United States. Bioethicist Ezekiel Emanuel, who is also one of the architects of ObamaCare, has stated that the pursuit of longevity via the compression of morbidity explanation is a "fantasy" and that life is not worth living after age 75; therefore longevity should not be a goal of health care policy.[19] This has been refuted by neurosurgeon Miguel Faria, who states that life can be worthwhile in healthy old age; that the compression of morbidity is a real phenomenon; that longevity should be pursued in association with quality of life.[20] Faria has discussed how longevity in association with leading healthy lifestyles can lead to the postponement of senescence as well as happiness and wisdom in old age.[21]

All biological organisms have a limited longevity, and different species of animals and plants have different potentials of longevity. The misrepair-accumulation aging theory[22][23] suggests that the potential longevity of an organism is related to its structural complexity.[24] Limited longevity is due to the limited structural complexity of the organism. If a species of organism had too high a structural complexity, most of its individuals would die before the reproductive age, and the species could not survive. This theory suggests that limited structural complexity and limited longevity are essential for the survival of a species.

Longevity traditions are traditions about long-lived people (generally supercentenarians), and practices that have been believed to confer longevity.[25][26] A comparison and contrast of "longevity in antiquity" (such as the Sumerian King List, the genealogies of Genesis, and the Persian Shahnameh) with "longevity in historical times" (common-era cases through twentieth-century news reports) is elaborated in detail in Lucian Boia's 2004 book Forever Young: A Cultural History of Longevity from Antiquity to the Present and other sources.[27]

The Fountain of Youth reputedly restores the youth of anyone who drinks of its waters. The New Testament, following older Jewish tradition, attributes healing to the Pool of Bethesda when the waters are "stirred" by an angel.[28] After the death of Juan Ponce de León, Gonzalo Fernández de Oviedo y Valdés wrote in Historia General y Natural de las Indias (1535) that Ponce de León was looking for the waters of Bimini to cure his aging.[29] Traditions that have been believed to confer greater human longevity also include alchemy,[30] such as that attributed to Nicolas Flamel. In the modern era, the Okinawa diet has some reputation of linkage to exceptionally high ages.[31]

More recent longevity claims are subcategorized by many editions of Guinness World Records into four groups: "In late life, very old people often tend to advance their ages at the rate of about 17 years per decade .... Several celebrated super-centenarians (over 110 years) are believed to have been double lives (father and son, relations with the same names or successive bearers of a title) .... A number of instances have been commercially sponsored, while a fourth category of recent claims are those made for political ends ...."[32] The estimate of 17 years per decade was corroborated by the 1901 and 1911 British censuses.[32] Mazess and Forman also discovered in 1978 that inhabitants of Vilcabamba, Ecuador, claimed excessive longevity by using their fathers' and grandfathers' baptismal entries.[32][33] Time magazine considered that, in the Soviet Union, longevity had been elevated to a state-supported "Methuselah cult".[34] Robert Ripley regularly reported supercentenarian claims in Ripley's Believe It or Not!, usually citing his own reputation as a fact-checker to claim reliability.[35]

The U.S. Census Bureau view on the future of longevity is that life expectancy in the United States will be in the mid-80s by 2050 (up from 77.85 in 2006) and will top out eventually in the low 90s, barring major scientific advances that can change the rate of human aging itself, as opposed to merely treating the effects of aging as is done today. The Census Bureau also predicted that the United States would have 5.3 million people aged over 100 in 2100. The United Nations has also made projections far out into the future, up to 2300, at which point it projects that life expectancies in most developed countries will be between 100 and 106 years and still rising, though more and more slowly than before. These projections also suggest that life expectancies in poor countries will still be less than those in rich countries in 2300, in some cases by as much as 20 years. The UN itself mentioned that gaps in life expectancy so far in the future may well not exist, especially since the exchange of technology between rich and poor countries and the industrialization and development of poor countries may cause their life expectancies to converge fully with those of rich countries long before that point, similarly to the way life expectancies between rich and poor countries have already been converging over the last 60 years as better medicine, technology, and living conditions became accessible to many people in poor countries. The UN has warned that these projections are uncertain, and cautions that any change or advancement in medical technology could invalidate such projections.[36]

Recent increases in the rates of lifestyle diseases, such as obesity, diabetes, hypertension, and heart disease, may eventually slow or reverse this trend toward increasing life expectancy in the developed world, but have not yet done so. The average age of the US population is getting higher[37] and these diseases show up in older people.[38]

Jennifer Couzin-Frankel examined how much mortality from various causes would have to drop in order to boost life expectancy and concluded that most of the past increases in life expectancy occurred because of improved survival rates for young people. She states that it seems unlikely that life expectancy at birth will ever exceed 85 years.[39] Michio Kaku argues that genetic engineering, nanotechnology and future breakthroughs will accelerate the rate of life expectancy increase indefinitely.[40] Already genetic engineering has allowed the life expectancy of certain primates to be doubled, and for human skin cells in labs to divide and live indefinitely without becoming cancerous.[41]

However, since 1840, record life expectancy has risen linearly for men and women, albeit more slowly for men. For women the increase has been almost three months per year, for men almost 2.7 months per year. In light of steady increase, without any sign of limitation, the suggestion that life expectancy will top out must be treated with caution. Scientists Oeppen and Vaupel observe that experts who assert that "life expectancy is approaching a ceiling ... have repeatedly been proven wrong." It is thought that life expectancy for women has increased more dramatically owing to the considerable advances in medicine related to childbirth.[42]

Mice have been genetically engineered to live twice as long as ordinary mice. Drugs such as deprenyl are part of the prescribing pharmacopoeia of veterinarians specifically to increase mammal lifespan. A large number of research chemicals that increase the lifespan of various species have been described in the scientific literature.

Some argue that molecular nanotechnology will greatly extend human life spans. If the rate of increase of life span can be raised with these technologies to a level of twelve months increase per year, this is defined as effective biological immortality and is the goal of radical life extension.

Currently living:

Non-living:

Certain exotic organisms do not seem to be subject to aging and can live indefinitely. Examples include Tardigrades and Hydras. That is not to say that these organisms cannot die, merely that they only die as a result of disease or injury rather than age-related deterioration (and that they are not subject to the Hayflick limit).


Here is the original post:
Longevity - Wikipedia

Read More...

Alternative medicine – Wikipedia

October 20th, 2016 7:42 pm

Alternative or fringe medicine is any practice that is claimed to have the healing effects of medicine but is proven not to work, has no scientific evidence showing that it works, or is solely harmful.[n 1][n 2][n 3] Alternative medicine is not a part of medicine[n 1][n 4][n 5][n 6] or science-based healthcare systems.[1][2][4] It consists of a wide variety of practices, products, and therapies, ranging from those that are biologically plausible but not well tested to those with known harmful and toxic effects.[n 4][5][6][7][8][9] Despite significant costs in testing alternative medicine, including $2.5 billion spent by the United States government, almost none have shown any effectiveness beyond that of false treatments (placebo).[10][11] Perceived effects of alternative medicine are caused by the placebo effect, decreased effects of functional treatment (and thus also decreased side-effects), and regression toward the mean, where spontaneous improvement is credited to alternative therapies.

Complementary medicine or integrative medicine is when alternative medicine is used together with functional medical treatment, in a belief that it "complements" (improves the efficacy of) the treatment.[n 7][13][14][15][16] However, significant drug interactions caused by alternative therapies may instead negatively influence the treatment, making treatments less effective, notably in cancer therapy.[17][18] CAM is an abbreviation of complementary and alternative medicine.[19][20] It has also been called sCAM or SCAM for "so-called complementary and alternative medicine" or "supplements and complementary and alternative medicine".[21][22] Holistic health or holistic medicine, a similar concept, claims to take into account the "whole" person, including spirituality in its treatments. Due to its many names, the field has been criticized for intense rebranding of what are essentially the same practices: as soon as one name is declared synonymous with quackery, a new one is chosen.

Alternative medical diagnoses and treatments are not included in the science-based treatments taught in medical schools, and are not used in medical practice where treatments are based on scientific knowledge. Alternative therapies are either unproven, disproved, or impossible to prove,[n 8][5][13][24][25] and are often based on religion, tradition, superstition, belief in supernatural energies, pseudoscience, errors in reasoning, propaganda, or fraud.[5][26][6][13] Regulation and licensing of alternative medicine and health care providers varies between and within countries. Marketing alternative therapies as treating or preventing cancer is illegal in many countries including the United States and most parts of the European Union.

Alternative medicine has been criticized for being based on misleading statements, quackery, pseudoscience, antiscience, fraud, or poor scientific methodology. Promoting alternative medicine has been called dangerous and unethical.[n 9][28] Testing alternative medicines that have no scientific basis has been called a waste of scarce medical research resources.[29][30] Critics have said "there is really no such thing as alternative medicine, just medicine that works and medicine that doesn't",[31] and the problem is not only that it does not work, but that the "underlying logic is magical, childish or downright absurd".[32] It has also been argued that the concept of any alternative medicine that works is paradoxical, as any treatment proven to work is simply "medicine".[33]

Alternative medicine consists of a wide range of health care practices, products, and therapies. The shared feature is a claim to heal that is not based on the scientific method. Alternative medicine practices are diverse in their foundations and methodologies.[1] Alternative medicine practices may be classified by their cultural origins or by the types of beliefs upon which they are based.[5][26][1][13] Methods may incorporate or be based on traditional medicinal practices of a particular culture, folk knowledge, superstition, spiritual beliefs, belief in supernatural energies (antiscience), pseudoscience, errors in reasoning, propaganda, fraud, new or different concepts of health and disease, and any bases other than being proven by scientific methods.[5][26][6][13] Different cultures may have their own unique traditional or belief-based practices, developed recently or over thousands of years, and specific practices or entire systems of practices.

Alternative medicine, such as using naturopathy or homeopathy in place of conventional medicine, is based on belief systems not grounded in science.[1]

Homeopathy is a system developed in a belief that a substance that causes the symptoms of a disease in healthy people cures similar symptoms in sick people.[n 10] It was developed before knowledge of atoms and molecules, and of basic chemistry, which shows that repeated dilution as practiced in homeopathy produces only water, and that homeopathy is scientifically implausible.[36][37][38][39] Homeopathy is considered quackery in the medical community.[40]

Naturopathic medicine is based on a belief that the body heals itself using a supernatural vital energy that guides bodily processes,[41] a view in conflict with the paradigm of evidence-based medicine.[42] Many naturopaths have opposed vaccination,[43] and "scientific evidence does not support claims that naturopathic medicine can cure cancer or any other disease".[44]

Alternative medical systems may be based on traditional medicine practices, such as traditional Chinese medicine, Ayurveda in India, or practices of other cultures around the world.[1]

Traditional Chinese medicine is a combination of traditional practices and beliefs developed over thousands of years in China, together with modifications made by the Communist party. Common practices include herbal medicine, acupuncture (insertion of needles in the body at specified points), massage (Tui na), exercise (qigong), and dietary therapy. The practices are based on belief in a supernatural energy called qi, considerations of Chinese astrology and Chinese numerology, traditional use of herbs and other substances found in China, a belief that the tongue contains a map of the body that reflects changes in the body, and an incorrect model of the anatomy and physiology of internal organs.[5][45][46][47][48][49]

The Chinese Communist Party Chairman Mao Zedong, in response to a lack of modern medical practitioners, revived acupuncture and had its theory rewritten to adhere to the political, economic, and logistic necessities of providing for the medical needs of China's population.[50][page needed] In the 1950s the "history" and theory of traditional Chinese medicine was rewritten as communist propaganda, at Mao's insistence, to correct the supposed "bourgeois thought of Western doctors of medicine". Acupuncture gained attention in the United States when President Richard Nixon visited China in 1972, and the delegation was shown a patient undergoing major surgery while fully awake, ostensibly receiving acupuncture rather than anesthesia. Later it was found that the patients selected for the surgery had both a high pain tolerance and received heavy indoctrination before the operation; these demonstration cases were also frequently receiving morphine surreptitiously through an intravenous drip that observers were told contained only fluids and nutrients.[45] Cochrane reviews found acupuncture is not effective for a wide range of conditions.[52] A systematic review of systematic reviews found that for reducing pain, real acupuncture was no better than sham acupuncture.[53] However, other reviews have found that acupuncture successfully reduces chronic pain, whereas sham acupuncture was not found to be better than placebo or no-acupuncture groups.[54]

Ayurvedic medicine is a traditional medicine of India. Ayurveda believes in the existence of three elemental substances, the doshas (called Vata, Pitta and Kapha), and states that a balance of the doshas results in health, while imbalance results in disease. Such disease-inducing imbalances can be adjusted and balanced using traditional herbs, minerals and heavy metals. Ayurveda stresses the use of plant-based medicines and treatments, with some animal products, and added minerals, including sulfur, arsenic, lead, copper sulfate.[citation needed]

Safety concerns have been raised about Ayurveda, with two U.S. studies finding about 20 percent of Ayurvedic Indian-manufactured patent medicines contained toxic levels of heavy metals such as lead, mercury and arsenic. Other concerns include the use of herbs containing toxic compounds and the lack of quality control in Ayurvedic facilities. Incidents of heavy metal poisoning have been attributed to the use of these compounds in the United States.[8][57][58][59]

Bases of belief may include belief in existence of supernatural energies undetected by the science of physics, as in biofields, or in belief in properties of the energies of physics that are inconsistent with the laws of physics, as in energy medicine.[1]

Biofield therapies are intended to influence energy fields that, it is purported, surround and penetrate the body.[1] Writers such as the noted astrophysicist and advocate of scientific skepticism Carl Sagan (1934–1996) have described the lack of empirical evidence to support the existence of the putative energy fields on which these therapies are predicated.

Acupuncture is a component of traditional Chinese medicine. Proponents of acupuncture believe that a supernatural energy called qi flows through the universe and through the body, and helps propel the bloodand that blockage of this energy leads to disease.[46] They believe that inserting needles in various parts of the body, determined by astrological calculations, can restore balance to the blocked flows and thereby cure disease.[46]

Chiropractic was developed in the belief that manipulating the spine affects the flow of a supernatural vital energy and thereby affects health and disease.

In the western version of Japanese Reiki, practitioners place their palms on the patient near Chakras that they believe are centers of supernatural energies, and believe that these supernatural energies can transfer from the practitioner's palms to heal the patient.

Bioelectromagnetic-based therapies use verifiable electromagnetic fields, such as pulsed fields, alternating-current, or direct-current fields in an unconventional manner.[1] Magnetic healing does not claim existence of supernatural energies, but asserts that magnets can be used to defy the laws of physics to influence health and disease.

Mind-body medicine takes a holistic approach to health that explores the interconnection between the mind, body, and spirit. It works under the premise that the mind can affect "bodily functions and symptoms".[1] Mind-body medicine includes healing claims made in yoga, meditation, deep-breathing exercises, guided imagery, hypnotherapy, progressive relaxation, qi gong, and tai chi.[1]

Yoga, a method of traditional stretches, exercises, and meditations in Hinduism, may also be classified as an energy medicine insofar as its healing effects are believed to be due to a healing "life energy" that is absorbed into the body through the breath, and is thereby believed to treat a wide variety of illnesses and complaints.[61]

Since the 1990s, tai chi (t'ai chi ch'uan) classes that purely emphasise health have become popular in hospitals, clinics, as well as community and senior centers. This has occurred as the baby boomers generation has aged and the art's reputation as a low-stress training method for seniors has become better known.[62][63] There has been some divergence between those that say they practice t'ai chi ch'uan primarily for self-defence, those that practice it for its aesthetic appeal (see wushu below), and those that are more interested in its benefits to physical and mental health.

Qigong, chi kung, or chi gung, is a practice of aligning body, breath, and mind for health, meditation, and martial arts training. With roots in traditional Chinese medicine, philosophy, and martial arts, qigong is traditionally viewed as a practice to cultivate and balance qi (chi) or what has been translated as "life energy".[64]

Substance-based practices use substances found in nature such as herbs, foods, non-vitamin supplements and megavitamins, animal and fungal products, and minerals, including the use of these products in traditional medical practices that may also incorporate other methods.[1][11][65] Examples include healing claims for nonvitamin supplements, fish oil, omega-3 fatty acids, glucosamine, echinacea, flaxseed oil, and ginseng.[66] Herbal medicine, or phytotherapy, includes not just the use of plant products, but may also include the use of animal and mineral products.[11] It is among the most commercially successful branches of alternative medicine, and includes the tablets, powders and elixirs that are sold as "nutritional supplements".[11] Only a very small percentage of these have been shown to have any efficacy, and there is little regulation as to standards and safety of their contents.[11] This may include use of known toxic substances, such as use of the poison lead in traditional Chinese medicine.[66]

Manipulative and body-based practices feature the manipulation or movement of body parts, such as is done in bodywork and chiropractic manipulation.

Osteopathic manipulative medicine, also known as osteopathic manipulative treatment, is a core set of techniques of osteopathy and osteopathic medicine distinguishing these fields from mainstream medicine.[67]

Religion-based healing practices, such as the use of prayer and the laying on of hands in Christian faith healing, and shamanism, rely on belief in divine or spiritual intervention for healing.

Shamanism is a practice of many cultures around the world, in which a practitioner reaches an altered state of consciousness in order to encounter and interact with the spirit world or channel supernatural energies in the belief that they can heal.[68]

Some alternative medicine practices may be based on pseudoscience, ignorance, or flawed reasoning.[69] This can lead to fraud.[5]

Practitioners of electricity and magnetism based healing methods may deliberately exploit a patient's ignorance of physics to defraud them.[13]

"Alternative medicine" is a loosely defined set of products, practices, and theories that are believed or perceived by their users to have the healing effects of medicine,[n 2][n 4] but whose effectiveness has not been clearly established using scientific methods,[n 2][n 3][5][6][23][25] whose theory and practice is not part of biomedicine,[n 4][n 1][n 5][n 6] or whose theories or practices are directly contradicted by scientific evidence or scientific principles used in biomedicine.[5][26][6] "Biomedicine" is that part of medical science that applies principles of biology, physiology, molecular biology, biophysics, and other natural sciences to clinical practice, using scientific methods to establish the effectiveness of that practice. Alternative medicine is a diverse group of medical and health care systems, practices, and products that originate outside of biomedicine,[n 1] are not considered part of biomedicine,[1] are not widely used by the biomedical healthcare professions,[74] and are not taught as skills practiced in biomedicine.[74] Unlike biomedicine,[n 1] an alternative medicine product or practice does not originate from the sciences or from using scientific methodology, but may instead be based on testimonials, religion, tradition, superstition, belief in supernatural energies, pseudoscience, errors in reasoning, propaganda, fraud, or other unscientific sources.[n 3][5][6][13] The expression "alternative medicine" refers to a diverse range of related and unrelated products, practices, and theories, originating from widely varying sources, cultures, theories, and belief systems, and ranging from biologically plausible practices and products and practices with some evidence, to practices and theories that are directly contradicted by basic science or clear evidence, and products that have proven to be ineffective or even toxic and harmful.[n 4][7][8]

Alternative medicine, complementary medicine, holistic medicine, natural medicine, unorthodox medicine, fringe medicine, unconventional medicine, and new age medicine are used interchangeably as synonyms in some contexts,[75][76][77] but may have different meanings in other contexts; for example, unorthodox medicine may refer to biomedicine that is different from what is commonly practiced, and fringe medicine may refer to biomedicine that is based on fringe science, which may be scientifically valid but is not mainstream.

The meaning of the term "alternative" in the expression "alternative medicine" is not that it is an actual effective alternative to medical science, although some alternative medicine promoters may use the loose terminology to give the appearance of effectiveness.[5] Marcia Angell stated that "alternative medicine" is "a new name for snake oil. There's medicine that works and medicine that doesn't work."[78] Loose terminology may also be used to suggest that a dichotomy exists when it does not, e.g., the use of the expressions "western medicine" and "eastern medicine" to suggest that the difference is a cultural difference between the Asiatic east and the European west, rather than that the difference is between evidence-based medicine and treatments that don't work.[5]

"Complementary medicine" refers to use of alternative medical treatments alongside conventional medicine, in the belief that it increases the effectiveness of the science-based medicine.[79][80][81] An example of "complementary medicine" is use of acupuncture (sticking needles in the body to influence the flow of a supernatural energy), along with using science-based medicine, in the belief that the acupuncture increases the effectiveness or "complements" the science-based medicine.[81] "CAM" is an abbreviation for "complementary and alternative medicine".

The expression "Integrative medicine" (or "integrated medicine") is used in two different ways. One use refers to a belief that medicine based on science can be "integrated" with practices that are not. Another use refers only to a combination of alternative medical treatments with conventional treatments that have some scientific proof of efficacy, in which case it is identical with CAM.[16] "holistic medicine" (or holistic health) is an alternative medicine practice that claims to treat the "whole person" and not just the illness.

"Traditional medicine" and "folk medicine" refer to prescientific practices of a culture, not to what is traditionally practiced in cultures where medical science dominates. "Eastern medicine" typically refers to prescientific traditional medicines of Asia. "Western medicine", when referring to modern practice, typically refers to medical science, and not to alternative medicines practiced in the west (Europe and the Americas). "Western medicine", "biomedicine", "mainstream medicine", "medical science", "science-based medicine", "evidence-based medicine", "conventional medicine", "standard medicine", "orthodox medicine", "allopathic medicine", "dominant health system", and "medicine", are sometimes used interchangeably as having the same meaning, when contrasted with alternative medicine, but these terms may have different meanings in some contexts, e.g., some practices in medical science are not supported by rigorous scientific testing so "medical science" is not strictly identical with "science-based medicine", and "standard medical care" may refer to "best practice" when contrasted with other biomedicine that is less used or less recommended.[n 11][84]

Prominent members of the science[31][85] and biomedical science community[24] assert that it is not meaningful to define an alternative medicine that is separate from a conventional medicine, and that the expressions "conventional medicine", "alternative medicine", "complementary medicine", "integrative medicine", and "holistic medicine" do not refer to anything at all.[24][31][85][86] Their criticisms of trying to make such artificial definitions include: "There's no such thing as conventional or alternative or complementary or integrative or holistic medicine. There's only medicine that works and medicine that doesn't;"[24][31][85] "By definition, alternative medicine has either not been proved to work, or been proved not to work. You know what they call alternative medicine that's been proved to work? Medicine;"[33] "There cannot be two kinds of medicine, conventional and alternative. There is only medicine that has been adequately tested and medicine that has not, medicine that works and medicine that may or may not work. Once a treatment has been tested rigorously, it no longer matters whether it was considered alternative at the outset. If it is found to be reasonably safe and effective, it will be accepted;"[24] and "There is no alternative medicine. There is only scientifically proven, evidence-based medicine supported by solid data or unproven medicine, for which scientific evidence is lacking."[86]

Others in both the biomedical and CAM communities point out that CAM cannot be precisely defined because of the diversity of theories and practices it includes, and because the boundaries between CAM and biomedicine overlap, are porous, and change. The expression "complementary and alternative medicine" (CAM) resists easy definition because the health systems and practices it refers to are diffuse, and its boundaries poorly defined.[7][n 12] Healthcare practices categorized as alternative may differ in their historical origin, theoretical basis, diagnostic technique, therapeutic practice and in their relationship to the medical mainstream. Some alternative therapies, including traditional Chinese medicine (TCM) and Ayurveda, have ancient origins in East or South Asia and are entirely alternative medical systems;[91] others, such as homeopathy and chiropractic, have origins in Europe or the United States and emerged in the eighteenth and nineteenth centuries. Some, such as osteopathy and chiropractic, employ manipulative physical methods of treatment; others, such as meditation and prayer, are based on mind-body interventions. Treatments considered alternative in one location may be considered conventional in another.[94] Thus, chiropractic is not considered alternative in Denmark and likewise osteopathic medicine is no longer thought of as an alternative therapy in the United States.[94]

One common feature of all definitions of alternative medicine is its designation as "other than" conventional medicine. For example, the widely referenced descriptive definition of complementary and alternative medicine devised by the US National Center for Complementary and Integrative Health (NCCIH) of the National Institutes of Health (NIH), states that it is "a group of diverse medical and health care systems, practices, and products that are not generally considered part of conventional medicine."[1] Even when such practices are taken up by conventional medical practitioners, it does not necessarily follow that either the practices or their practitioners would no longer be considered alternative.[n 13]

Some definitions seek to specify alternative medicine in terms of its social and political marginality to mainstream healthcare.[99] This can refer to the lack of support that alternative therapies receive from the medical establishment and related bodies regarding access to research funding, sympathetic coverage in the medical press, or inclusion in the standard medical curriculum.[99] In 1993, the British Medical Association (BMA), one of many professional organizations that have attempted to define alternative medicine, stated that it[n 14] referred to "...those forms of treatment which are not widely used by the conventional healthcare professions, and the skills of which are not taught as part of the undergraduate curriculum of conventional medical and paramedical healthcare courses."[74] In a US context, an influential definition coined in 1993 by the Harvard-based physician,[100] David M. Eisenberg,[101] characterized alternative medicine "as interventions neither taught widely in medical schools nor generally available in US hospitals".[102] These descriptive definitions are inadequate in the present day, when some conventional doctors offer alternative medical treatments and CAM introductory courses or modules can be offered as part of standard undergraduate medical training;[103] alternative medicine is taught in more than 50 per cent of US medical schools, and US health insurers are increasingly willing to provide reimbursement for CAM therapies. In 1999, 7.7% of US hospitals reported using some form of CAM therapy; this proportion had risen to 37.7% by 2008.[105]

An expert panel at a conference hosted in 1995 by the US Office for Alternative Medicine (OAM),[106][n 15] devised a theoretical definition[106] of alternative medicine as "a broad domain of healing resources... other than those intrinsic to the politically dominant health system of a particular society or culture in a given historical period."[107] This definition has been widely adopted by CAM researchers,[106] cited by official government bodies such as the UK Department of Health,[108] attributed as the definition used by the Cochrane Collaboration,[109] and, with some modification, was preferred in the 2005 consensus report of the US Institute of Medicine, Complementary and Alternative Medicine in the United States.[n 4]

The 1995 OAM conference definition, an expansion of Eisenberg's 1993 formulation, is silent regarding questions of the medical effectiveness of alternative therapies.[110] Its proponents hold that it thus avoids relativism about differing forms of medical knowledge and, while it is an essentially political definition, this should not imply that the dominance of mainstream biomedicine is solely due to political forces.[110] According to this definition, alternative and mainstream medicine can only be differentiated with reference to what is "intrinsic to the politically dominant health system of a particular society or culture".[111] However, there is neither a reliable method to distinguish between cultures and subcultures, nor to attribute them as dominant or subordinate, nor any accepted criteria to determine the dominance of a cultural entity.[111] If the culture of a politically dominant healthcare system is held to be equivalent to the perspectives of those charged with the medical management of leading healthcare institutions and programs, the definition fails to recognize the potential for division either within such an elite or between a healthcare elite and the wider population.[111]

Normative definitions distinguish alternative medicine from the biomedical mainstream in its provision of therapies that are unproven, unvalidated, or ineffective and support of theories with no recognized scientific basis. These definitions characterize practices as constituting alternative medicine when, used independently or in place of evidence-based medicine, they are put forward as having the healing effects of medicine, but are not based on evidence gathered with the scientific method.[1][13][24][79][80][113] Exemplifying this perspective, a 1998 editorial co-authored by Marcia Angell, a former editor of the New England Journal of Medicine, argued that there cannot be two kinds of medicine, conventional and alternative, only medicine that has been adequately tested and medicine that has not (as quoted above).

This line of division has been subject to criticism, however, as not all forms of standard medical practice have adequately demonstrated evidence of benefit, [n 1][84] and it is also unlikely in most instances that conventional therapies, if proven to be ineffective, would ever be classified as CAM.[106]

Public information websites maintained by the governments of the US and of the UK make a distinction between "alternative medicine" and "complementary medicine", but mention that these two overlap. The National Center for Complementary and Integrative Health (NCCIH) of the National Institutes of Health (NIH) (a part of the US Department of Health and Human Services) states that "alternative medicine" refers to using a non-mainstream approach in place of conventional medicine and that "complementary medicine" generally refers to using a non-mainstream approach together with conventional medicine, and comments that the boundaries between complementary and conventional medicine overlap and change with time.[1]

The National Health Service (NHS) website NHS Choices (owned by the UK Department of Health), adopting the terminology of NCCIH, states that when a treatment is used alongside conventional treatments, to help a patient cope with a health condition, and not as an alternative to conventional treatment, this use of treatments can be called "complementary medicine"; but when a treatment is used instead of conventional medicine, with the intention of treating or curing a health condition, the use can be called "alternative medicine".[115]

Similarly, the public information website maintained by the National Health and Medical Research Council (NHMRC) of the Commonwealth of Australia uses the acronym "CAM" for a wide range of health care practices, therapies, procedures and devices not within the domain of conventional medicine. In the Australian context this is stated to include acupuncture; aromatherapy; chiropractic; homeopathy; massage; meditation and relaxation therapies; naturopathy; osteopathy; reflexology; traditional Chinese medicine; and the use of vitamin supplements.[116]

The Danish National Board of Health's "Council for Alternative Medicine" (Sundhedsstyrelsens Råd for Alternativ Behandling (SRAB)), an independent institution under the National Board of Health (Danish: Sundhedsstyrelsen), uses the term "alternative medicine" for:

In General Guidelines for Methodologies on Research and Evaluation of Traditional Medicine, published in 2000 by the World Health Organization (WHO), complementary and alternative medicine were defined as a broad set of health care practices that are not part of that country's own tradition and are not integrated into the dominant health care system.[118] Some herbal therapies are mainstream in Europe but are alternative in the US.[120]

The history of alternative medicine may refer to the history of a group of diverse medical practices that were collectively promoted as "alternative medicine" beginning in the 1970s, to the collection of individual histories of members of that group, or to the history of western medical practices that were labeled "irregular practices" by the western medical establishment.[5][121][122][123][124] It includes the histories of complementary medicine and of integrative medicine. Before the 1970s, western practitioners that were not part of the increasingly science-based medical establishment were referred to as "irregular practitioners", and were dismissed by the medical establishment as unscientific and as practicing quackery.[121][122] Until the 1970s, irregular practice became increasingly marginalized as quackery and fraud, as western medicine increasingly incorporated scientific methods and discoveries, and had a corresponding increase in success of its treatments.[123] In the 1970s, irregular practices were grouped with traditional practices of nonwestern cultures and with other unproven or disproven practices that were not part of biomedicine, with the entire group collectively marketed and promoted under the single expression "alternative medicine".[5][121][122][123][125]

Use of alternative medicine in the west began to rise following the counterculture movement of the 1960s, as part of the rising new age movement of the 1970s.[5][126][127] This was due to misleading mass marketing of "alternative medicine" as an effective "alternative" to biomedicine, changing social attitudes about not using chemicals and challenging the establishment and authority of any kind, sensitivity to giving equal measure to the beliefs and practices of other cultures (cultural relativism), and growing frustration and desperation by patients about the limitations and side effects of science-based medicine.[5][122][123][124][125][127][128] At the same time, in 1975, the American Medical Association, which played the central role in fighting quackery in the United States, abolished its quackery committee and closed down its Department of Investigation.[121]:xxi[128] By the early to mid 1970s the expression "alternative medicine" came into widespread use, and the expression became mass marketed as a collection of "natural" and effective treatment "alternatives" to science-based biomedicine.[5][128][129][130] By 1983, mass marketing of "alternative medicine" was so pervasive that the British Medical Journal (BMJ) pointed to "an apparently endless stream of books, articles, and radio and television programmes urg[ing] on the public the virtues of (alternative medicine) treatments ranging from meditation to drilling a hole in the skull to let in more oxygen".[128] In this 1983 article, the BMJ wrote, "one of the few growth industries in contemporary Britain is alternative medicine", noting that by 1983, "33% of patients with rheumatoid arthritis and 39% of those with backache admitted to having consulted an alternative practitioner".[128]

By about 1990, the American alternative medicine industry had grown to $27 billion per year, with polls showing 30% of Americans were using it.[127][131] Moreover, polls showed that Americans made more visits for alternative therapies than the total number of visits to primary care doctors, and American out-of-pocket spending (non-insurance spending) on alternative medicine was about equal to spending on biomedical doctors.[121]:172 In 1991, Time magazine ran a cover story, "The New Age of Alternative Medicine: Why New Age Medicine Is Catching On".[127][131] In 1993, the New England Journal of Medicine reported one in three Americans as using alternative medicine.[127] In 1993, the Public Broadcasting System ran a Bill Moyers special, Healing and the Mind, with Moyers commenting that "...people by the tens of millions are using alternative medicine. If established medicine does not understand that, they are going to lose their clients."[127]

Another period of explosive growth began in the 1990s, when senior level political figures began promoting alternative medicine, investing large sums of government medical research funds into testing alternative medicine, including testing of scientifically implausible treatments, and relaxing government regulation of alternative medicine products as compared to biomedical products.[5][121]:xxi[122][123][124][125][132][133] Beginning with a 1991 appropriation of $2 million for funding research of alternative medicine, federal spending grew to a cumulative total of about $2.5 billion by 2009, with 50% of Americans using alternative medicine by 2013.[10][134]

In 1991, pointing to a need for testing because of the widespread use of alternative medicine without authoritative information on its efficacy, United States Senator Tom Harkin used $2 million of his discretionary funds to create the Office for the Study of Unconventional Medical Practices (OSUMP), later renamed the Office of Alternative Medicine (OAM).[121]:170[135][136] The OAM was created within the National Institutes of Health (NIH), the scientifically prestigious primary agency of the United States government responsible for biomedical and health-related research.[121]:170[135][136] The mandate was to investigate, evaluate, and validate effective alternative medicine treatments, and to alert the public to the results of testing their efficacy.[131][135][136][137]

Sen. Harkin had become convinced his allergies were cured by taking bee pollen pills, and was urged to make the spending by two of his influential constituents, Bedell and Wiewel.[131][135][136] Bedell, a longtime friend of Sen. Harkin, was a former member of the United States House of Representatives who believed that alternative medicine had twice cured him of diseases after mainstream medicine had failed, claiming that cow's milk colostrum cured his Lyme disease, and an herbal derivative from camphor had prevented post surgical recurrence of his prostate cancer.[121][131] Wiewel was a promoter of unproven cancer treatments involving a mixture of blood sera that the Food and Drug Administration had banned from being imported.[131] Both Bedell and Wiewel became members of the advisory panel for the OAM. The company that sold the bee pollen was later fined by the Federal Trade Commission for making false health claims about their bee-pollen products reversing the aging process, curing allergies, and helping with weight loss.[138]

In 1993, Britain's Prince Charles, who claimed that homeopathy and other alternative medicine was an effective alternative to biomedicine, established the Foundation for Integrated Health (FIH), as a charity to explore "how safe, proven complementary therapies can work in conjunction with mainstream medicine".[139] The FIH received government funding through grants from Britain's Department of Health.[139]

In 1994, Sen. Harkin (D) and Senator Orrin Hatch (R) introduced the Dietary Supplement Health and Education Act (DSHEA).[140][141] The act reduced the authority of the FDA to monitor products sold as "natural" treatments.[140] Labeling standards were reduced to allow health claims for supplements based only on unconfirmed preliminary studies that were not subjected to scientific peer review, and the act made it more difficult for the FDA to promptly seize products or demand proof of safety where there was evidence of a product being dangerous.[141] The Act became known as "The 1993 Snake Oil Protection Act" following a New York Times editorial under that name.[140]

Senator Harkin complained about the "unbendable rules of randomized clinical trials", citing his use of bee pollen to treat his allergies, which he claimed to be effective even though it was biologically implausible and efficacy was not established using scientific methods.[135][142] Sen. Harkin asserted that claims of alternative medicine efficacy should be accepted not only without conventional scientific testing but even when the claims are biologically implausible: "It is not necessary for the scientific community to understand the process before the American public can benefit from these therapies."[140] Following passage of the act, sales rose from about $4 billion in 1994, to $20 billion by the end of 2000, at the same time as evidence of their lack of efficacy or harmful effects grew.[140] Senator Harkin came into open public conflict with the first OAM Director Joseph M. Jacobs and OAM board members from the scientific and biomedical community.[136] Jacobs' insistence on rigorous scientific methodology caused friction with Senator Harkin.[135][142][143] Increasing political resistance to the use of scientific methodology was publicly criticized by Dr. Jacobs, and another OAM board member complained that "nonsense has trickled down to every aspect of this office".[135][142] In 1994, Senator Harkin appeared on television with cancer patients who blamed Dr. Jacobs for blocking their access to untested cancer treatment, leading Jacobs to resign in frustration.[135][142]

In 1995, Wayne Jonas, a promoter of homeopathy and political ally of Senator Harkin, became the director of the OAM, and continued in that role until 1999.[144] In 1997, the OAM budget was increased from $12 million to $20 million annually.[145] From 1990 to 1997, use of alternative medicine in the US increased by 25%, with a corresponding 50% increase in expenditures.[146] The OAM drew increasing criticism from eminent members of the scientific community with letters to the Senate Appropriations Committee when discussion of renewal of funding OAM came up.[121]:175 Nobel laureate Paul Berg wrote that the prestigious NIH should not be degraded to act as a cover for quackery, calling the OAM "an embarrassment to serious scientists."[121]:175[145] The president of the American Physical Society wrote complaining that the government was spending money on testing products and practices that "violate basic laws of physics and more clearly resemble witchcraft".[121]:175[145] In 1998, the President of the North Carolina Medical Association publicly called for shutting down the OAM.[147]

In 1998, NIH director and Nobel laureate Harold Varmus came into conflict with Senator Harkin by pushing to have more NIH control of alternative medicine research.[148] The NIH Director placed the OAM under more strict scientific NIH control.[145][148] Senator Harkin responded by elevating OAM into an independent NIH "center", just short of being its own "institute", and renamed to be the National Center for Complementary and Alternative Medicine (NCCAM). NCCAM had a mandate to promote a more rigorous and scientific approach to the study of alternative medicine, research training and career development, outreach, and "integration". In 1999, the NCCAM budget was increased from $20 million to $50 million.[147][148] The United States Congress approved the appropriations without dissent. In 2000, the budget was increased to about $68 million, in 2001 to $90 million, in 2002 to $104 million, and in 2003, to $113 million.[147]

In 2004, modifications of the European Parliament's 2001 Directive 2001/83/EC, regulating all medicinal products, were made with the expectation of influencing development of the European market for alternative medicine products.[149] Regulation of alternative medicine in Europe was loosened with "a simplified registration procedure" for traditional herbal medicinal products.[149][150] Plausible "efficacy" for traditional medicine was redefined to be based on long term popularity and testimonials ("the pharmacological effects or efficacy of the medicinal product are plausible on the basis of long-standing use and experience."), without scientific testing.[149][150] The Committee on Herbal Medicinal Products (HMPC) was created within the European Medicines Agency (EMEA) in London. A special working group was established for homeopathic remedies under the Heads of Medicines Agencies.[149]

Through 2004, alternative medicine that was traditional to Germany continued to be a regular part of the health care system, including homeopathy and anthroposophic medicine.[149] The German Medicines Act mandated that science-based medical authorities consider the "particular characteristics" of complementary and alternative medicines.[149] By 2004, homeopathy had grown to be the most used alternative therapy in France, growing from 16% of the population using homeopathic medicine in 1982, to 29% by 1987, 36% by 1992, and 62% of French mothers using homeopathic medicines by 2004, with 94.5% of French pharmacists advising pregnant women to use homeopathic remedies.[151] As of 2004, 100 million people in India depended solely on traditional German homeopathic remedies for their medical care.[152] As of 2010, homeopathic remedies continued to be the leading alternative treatment used by European physicians.[151] By 2005, sales of homeopathic remedies and anthroposophical medicine had grown to 930 million euros, a 60% increase from 1995.[151][153]

In 2008, London's The Times published a letter from Edzard Ernst that asked the FIH to recall two guides promoting alternative medicine, saying: "the majority of alternative therapies appear to be clinically ineffective, and many are downright dangerous." In 2010, Britain's FIH closed after allegations of fraud and money laundering led to arrests of its officials.[139]

In 2009, after a history of 17 years of government testing and spending of nearly $2.5 billion on research had produced almost no clearly proven efficacy of alternative therapies, Senator Harkin complained, "One of the purposes of this center was to investigate and validate alternative approaches. Quite frankly, I must say publicly that it has fallen short. I think quite frankly that in this center and in the office previously before it, most of its focus has been on disproving things rather than seeking out and approving."[148][154][155] Members of the scientific community criticized this comment as showing Senator Harkin did not understand the basics of scientific inquiry, which tests hypotheses, but never intentionally attempts to "validate approaches".[148] Members of the scientific and biomedical communities complained that after a history of 17 years of being tested, at a cost of over $2.5 billion on testing scientifically and biologically implausible practices, almost no alternative therapy showed clear efficacy.[10] In 2009, the NCCAM's budget was increased to about $122 million.[148] Overall NIH funding for CAM research increased to $300 million by 2009.[148] By 2009, Americans were spending $34 billion annually on CAM.[156]

Since 2009, according to Art. 118a of the Swiss Federal Constitution, the Swiss Confederation and the Cantons of Switzerland shall within the scope of their powers ensure that consideration is given to complementary medicine.[157]

In 2012, the Journal of the American Medical Association (JAMA) published a criticism that study after study had been funded by NCCAM, but "failed to prove that complementary or alternative therapies are anything more than placebos".[158] The JAMA criticism pointed to large wasting of research money on testing scientifically implausible treatments, citing "NCCAM officials spending $374,000 to find that inhaling lemon and lavender scents does not promote wound healing; $750,000 to find that prayer does not cure AIDS or hasten recovery from breast-reconstruction surgery; $390,000 to find that ancient Indian remedies do not control type 2 diabetes; $700,000 to find that magnets do not treat arthritis, carpal tunnel syndrome, or migraine headaches; and $406,000 to find that coffee enemas do not cure pancreatic cancer."[158] It was pointed out that negative results from testing were generally ignored by the public, that people continue to "believe what they want to believe, arguing that it does not matter what the data show: They know what works for them".[158] Continued increasing use of CAM products was also blamed on the lack of FDA ability to regulate alternative products, where negative studies do not result in FDA warnings or FDA-mandated changes on labeling, and few consumers are aware that the claims of many supplements have been found to be unsupported.[158]

By 2013, 50% of Americans were using CAM.[134] As of 2013, CAM medicinal products in Europe continued to be exempted from documented efficacy standards required of other medicinal products.[159]

In 2014 the NCCAM was renamed the National Center for Complementary and Integrative Health (NCCIH), with a new charter requiring that 12 of the 18 council members be selected with a preference for leading representatives of complementary and alternative medicine, that 9 of the members be licensed practitioners of alternative medicine, that 6 be general public leaders in the fields of public policy, law, health policy, economics, and management, and that 3 represent the interests of individual consumers of complementary and alternative medicine.[160]

Much of what is now categorized as alternative medicine was developed as independent, complete medical systems. These were developed long before biomedicine and use of scientific methods. Each system was developed in relatively isolated regions of the world where there was little or no medical contact with pre-scientific western medicine, or with each other's systems. Examples are traditional Chinese medicine and the Ayurvedic medicine of India.

Other alternative medicine practices, such as homeopathy, were developed in western Europe and in opposition to western medicine, at a time when western medicine was based on unscientific theories that were dogmatically imposed by western religious authorities. Homeopathy was developed prior to discovery of the basic principles of chemistry, which proved homeopathic remedies contained nothing but water. But homeopathy, with its remedies made of water, was harmless compared to the unscientific and dangerous orthodox western medicine practiced at that time, which included use of toxins and draining of blood, often resulting in permanent disfigurement or death.[122]

Other alternative practices such as chiropractic and osteopathic manipulative medicine were developed in the United States at a time when western medicine was beginning to incorporate scientific methods and theories, but the biomedical model was not yet totally dominant. Practices such as chiropractic and osteopathy, each considered to be irregular practices by the western medical establishment, also opposed each other, both rhetorically and politically with licensing legislation. Osteopathic practitioners added the courses and training of biomedicine to their licensing, and holders of the Doctor of Osteopathic Medicine degree gradually abandoned the unscientific origins of the field. Stripped of its original nonscientific practices and theories, osteopathic medicine is now considered the same as biomedicine.


Until the 1970s, western practitioners that were not part of the medical establishment were referred to as "irregular practitioners", and were dismissed by the medical establishment as unscientific and as practicing quackery.[122] Irregular practice became increasingly marginalized as quackery and fraud, as western medicine increasingly incorporated scientific methods and discoveries, and had a corresponding increase in success of its treatments.

Dating from the 1970s, medical professionals, sociologists, anthropologists and other commentators noted the increasing visibility of a wide variety of health practices that had neither derived directly from nor been verified by biomedical science.[161] Since that time, those who have analyzed this trend have deliberated over the most apt language with which to describe this emergent health field.[161] A variety of terms have been used, including heterodox, irregular, fringe and alternative medicine while others, particularly medical commentators, have been satisfied to label them as instances of quackery.[161] The most persistent term has been alternative medicine but its use is problematic as it assumes a value-laden dichotomy between a medical fringe, implicitly of borderline acceptability at best, and a privileged medical orthodoxy, associated with validated medico-scientific norms.[162] The use of the category of alternative medicine has also been criticized as it cannot be studied as an independent entity but must be understood in terms of a regionally and temporally specific medical orthodoxy.[163] Its use can also be misleading as it may erroneously imply that a real medical alternative exists.[164] As with near-synonymous expressions, such as unorthodox, complementary, marginal, or quackery, these linguistic devices have served, in the context of processes of professionalisation and market competition, to establish the authority of official medicine and police the boundary between it and its unconventional rivals.[162]

An early instance of the influence of this modern, or western, scientific medicine outside Europe and North America is Peking Union Medical College.[165][n 16][n 17]

From a historical perspective, the emergence of alternative medicine, if not the term itself, is typically dated to the 19th century.[166] This is despite the fact that there are variants of Western non-conventional medicine that arose in the late-eighteenth century or earlier and some non-Western medical traditions, currently considered alternative in the West and elsewhere, which boast extended historical pedigrees.[162] Alternative medical systems, however, can only be said to exist when there is an identifiable, regularized and authoritative standard medical practice, such as arose in the West during the nineteenth century, to which they can function as an alternative.

During the late eighteenth and nineteenth centuries regular and irregular medical practitioners became more clearly differentiated throughout much of Europe and,[168] as the nineteenth century progressed, most Western states converged in the creation of legally delimited and semi-protected medical markets.[169] It is at this point that an "official" medicine, created in cooperation with the state and employing a scientific rhetoric of legitimacy, emerges as a recognizable entity and that the concept of alternative medicine as a historical category becomes tenable.[170]

As part of this process, professional adherents of mainstream medicine in countries such as Germany, France, and Britain increasingly invoked the scientific basis of their discipline as a means of engendering internal professional unity and of external differentiation in the face of sustained market competition from homeopaths, naturopaths, mesmerists and other nonconventional medical practitioners, finally achieving a degree of imperfect dominance through alliance with the state and the passage of regulatory legislation.[162][164] In the US the Johns Hopkins University School of Medicine, based in Baltimore, Maryland, opened in 1893, with William H. Welch and William Osler among the founding physicians, and was the first medical school devoted to teaching "German scientific medicine".[171]

Buttressed by increased authority arising from significant advances in the medical sciences of the late 19th century onwards, including the development and application of the germ theory of disease by the chemist Louis Pasteur and the surgeon Joseph Lister, of microbiology co-founded by Robert Koch (in 1885 appointed professor of hygiene at the University of Berlin), and of the use of X-rays (Röntgen rays), the 1910 Flexner Report called upon American medical schools to follow the model of the Johns Hopkins School of Medicine, and adhere to mainstream science in their teaching and research. This was in a belief, mentioned in the Report's introduction, that the preliminary and professional training then prevailing in medical schools should be reformed, in view of the new means for diagnosing and combating disease made available by the sciences on which medicine depended.[n 18][173]

Putative medical practices at the time that later became known as "alternative medicine" included homeopathy (founded in Germany in the early 19th century) and chiropractic (founded in North America in the late 19th century). These conflicted in principle with the developments in medical science upon which the Flexner reforms were based, and they have not become compatible with further advances of medical science such as those listed in Timeline of medicine and medical technology, 1900-1999 and 2000-present, nor have Ayurveda, acupuncture or other kinds of alternative medicine.

At the same time "Tropical medicine" was being developed as a specialist branch of western medicine in research establishments such as Liverpool School of Tropical Medicine founded in 1898 by Alfred Lewis Jones, London School of Hygiene & Tropical Medicine, founded in 1899 by Patrick Manson and Tulane University School of Public Health and Tropical Medicine, instituted in 1912. A distinction was being made between western scientific medicine and indigenous systems. An example is given by an official report about indigenous systems of medicine in India, including Ayurveda, submitted by Mohammad Usman of Madras and others in 1923. This stated that the first question the Committee considered was "to decide whether the indigenous systems of medicine were scientific or not".[174][175]

By the later twentieth century the term 'alternative medicine' entered public discourse,[n 19][178] but it was not always being used with the same meaning by all parties. Arnold S. Relman remarked in 1998 that in the best kind of medical practice, all proposed treatments must be tested objectively, and that in the end there will only be treatments that pass and those that do not, those that are proven worthwhile and those that are not. He asked 'Can there be any reasonable "alternative"?'[179] But also in 1998 the then Surgeon General of the United States, David Satcher,[180] issued public information about eight common alternative treatments (including acupuncture, holistic and massage), together with information about common diseases and conditions, on nutrition, diet, and lifestyle changes, and about helping consumers to decipher fraud and quackery, and to find healthcare centers and doctors who practiced alternative medicine.[181]

By 1990, approximately 60 million Americans had used one or more complementary or alternative therapies to address health issues, according to a nationwide survey in the US published in 1993 by David Eisenberg.[182] A study published in the November 11, 1998 issue of the Journal of the American Medical Association reported that 42% of Americans had used complementary and alternative therapies, up from 34% in 1990.[146] However, despite the growth in patient demand for complementary medicine, most of the early alternative/complementary medical centers failed.[183]

Mainly as a result of reforms following the Flexner Report of 1910,[184] medical education in established medical schools in the US has generally not included alternative medicine as a teaching topic.[n 20] Typically, their teaching is based on current practice and scientific knowledge about: anatomy, physiology, histology, embryology, neuroanatomy, pathology, pharmacology, microbiology and immunology.[186] Medical schools' teaching includes such topics as doctor-patient communication, ethics, the art of medicine,[187] and engaging in complex clinical reasoning (medical decision-making).[188] Writing in 2002, Snyderman and Weil remarked that by the early twentieth century the Flexner model had helped to create the 20th-century academic health center, in which education, research, and practice were inseparable. While this had much improved medical practice by defining with increasing certainty the pathophysiological basis of disease, a single-minded focus on the pathophysiological had diverted much of mainstream American medicine from clinical conditions that were not well understood in mechanistic terms, and were not effectively treated by conventional therapies.[189]

By 2001 some form of CAM training was being offered by at least 75 out of 125 medical schools in the US.[190] Exceptionally, the School of Medicine of the University of Maryland, Baltimore includes a research institute for integrative medicine (a member entity of the Cochrane Collaboration).[191][192] Medical schools are responsible for conferring medical degrees, but a physician typically may not legally practice medicine until licensed by the local government authority. Licensed physicians in the US who have attended one of the established medical schools there have usually graduated Doctor of Medicine (MD).[193] All states require that applicants for MD licensure be graduates of an approved medical school and complete the United States Medical Licensing Exam (USMLE).[193]

The British Medical Association, in its publication Complementary Medicine, New Approach to Good Practice (1993), gave as a working definition of non-conventional therapies (including acupuncture, chiropractic and homeopathy): "...those forms of treatment which are not widely used by the orthodox health-care professions, and the skills of which are not part of the undergraduate curriculum of orthodox medical and paramedical health-care courses." By 2000 some medical schools in the UK were offering CAM familiarisation courses to undergraduate medical students while some were also offering modules specifically on CAM.[195]

The Cochrane Collaboration Complementary Medicine Field explains its "Scope and Topics" by giving a broad and general definition for complementary medicine as including practices and ideas outside the domain of conventional medicine in several countries, and defined by its users as preventing or treating illness, or promoting health and well-being, and which complement mainstream medicine in three ways: by contributing to a common whole, by satisfying a demand not met by conventional practices, and by diversifying the conceptual framework of medicine.[196]

Proponents of an evidence-base for medicine[n 21][198][199][200][201] such as the Cochrane Collaboration (founded in 1993 and from 2011 providing input for WHO resolutions) take a position that all systematic reviews of treatments, whether "mainstream" or "alternative", ought to be held to the current standards of scientific method.[192] In a study titled Development and classification of an operational definition of complementary and alternative medicine for the Cochrane Collaboration (2011) it was proposed that indicators that a therapy is accepted include government licensing of practitioners, coverage by health insurance, statements of approval by government agencies, and recommendation as part of a practice guideline; and that if something is currently a standard, accepted therapy, then it is not likely to be widely considered as CAM.[106]

That alternative medicine has been on the rise "in countries where Western science and scientific method generally are accepted as the major foundations for healthcare, and 'evidence-based' practice is the dominant paradigm" was described as an "enigma" in the Medical Journal of Australia.[202]

Critics in the US say the expression is deceptive because it implies there is an effective alternative to science-based medicine, and that complementary is deceptive because it implies that the treatment increases the effectiveness of (complements) science-based medicine, while alternative medicines that have been tested nearly always have no measurable positive effect compared to a placebo.[5][203][204][205]

Some opponents, focused upon health fraud, misinformation, and quackery as public health problems in the US, are highly critical of alternative medicine, notably Wallace Sampson and Paul Kurtz, founders of Scientific Review of Alternative Medicine, and Stephen Barrett, co-founder of The National Council Against Health Fraud and webmaster of Quackwatch.[206] Grounds for opposing alternative medicine stated in the US and elsewhere include that:

Paul Offit proposed that "alternative medicine becomes quackery" in four ways.[85]

A United States government agency, the National Center for Complementary and Integrative Health (NCCIH), created its own classification system for branches of complementary and alternative medicine that divides them into five major groups. These groups have some overlap, and distinguish two types of energy medicine: veritable, which involves scientifically observable energy (including magnet therapy, colorpuncture and light therapy), and putative, which invokes physically undetectable or unverifiable energy.[215]

Alternative medicine practices and beliefs are diverse in their foundations and methodologies. The wide range of treatments and practices referred to as alternative medicine includes some stemming from nineteenth century North America, such as chiropractic and naturopathy, others, mentioned by Jütte, that originated in eighteenth- and nineteenth-century Germany, such as homeopathy and hydropathy,[164] and some that have originated in China or India, while African, Caribbean, Pacific Island, Native American, and other regional cultures have traditional medical systems as diverse as their diversity of cultures.[1]

Examples of CAM as a broader term for unorthodox treatment and diagnosis of illnesses, disease, infections, etc.,[216] include yoga, acupuncture, aromatherapy, chiropractic, herbalism, homeopathy, hypnotherapy, massage, osteopathy, reflexology, relaxation therapies, spiritual healing and tai chi.[216] CAM differs from conventional medicine in that it is normally private and not covered by health insurance;[216] it is paid for out of pocket by the patient and is an expensive form of treatment.[216] CAM tends to be used by upper-class or more educated people.[146]

The groups of the NCCIH classification system are described below.

Alternative therapies based on electricity or magnetism use verifiable electromagnetic fields, such as pulsed fields, alternating-current, or direct-current fields in an unconventional manner rather than claiming the existence of imponderable or supernatural energies.[1]

Substance based practices use substances found in nature such as herbs, foods, non-vitamin supplements and megavitamins, and minerals, and include traditional herbal remedies made with herbs specific to the regions in which the cultural practices arose.[1] Non-vitamin supplements include fish oil, omega-3 fatty acids, glucosamine, echinacea, flaxseed oil or pills, and ginseng, when used under a claim to have healing effects.[66]

Mind-body interventions, working under the premise that the mind can affect "bodily functions and symptoms",[1] include healing claims made in hypnotherapy,[217] and in guided imagery, meditation, progressive relaxation, qi gong, tai chi and yoga.[1] Meditation practices including mantra meditation, mindfulness meditation, yoga, tai chi, and qi gong have many uncertainties. According to an AHRQ review, the available evidence on meditation practices through September 2005 is of poor methodological quality and definite conclusions on the effects of meditation in healthcare cannot be made using existing research.[218][219]

Naturopathy is based on a belief in vitalism, which posits that a special energy called vital energy or vital force guides bodily processes such as metabolism, reproduction, growth, and adaptation.[41] The term was coined in 1895[220] by John Scheel and popularized by Benedict Lust, the "father of U.S. naturopathy".[221] Today, naturopathy is primarily practiced in the United States and Canada.[222] Naturopaths in unregulated jurisdictions may use the Naturopathic Doctor designation or other titles regardless of level of education.[223]

Read more from the original source:
Alternative medicine - Wikipedia


Genetics – Wikipedia

October 20th, 2016 7:41 pm

This article is about the general scientific term. For the scientific journal, see Genetics (journal).

Genetics is the study of genes, genetic variation, and heredity in living organisms.[1][2] It is generally considered a field of biology, but it intersects frequently with many of the life sciences and is strongly linked with the study of information systems.

The father of genetics is Gregor Mendel, a late 19th-century scientist and Augustinian friar. Mendel studied 'trait inheritance', patterns in the way traits were handed down from parents to offspring. He observed that organisms (pea plants) inherit traits by way of discrete "units of inheritance". This term, still used today, is a somewhat ambiguous definition of what is referred to as a gene.

Trait inheritance and molecular inheritance mechanisms of genes are still primary principles of genetics in the 21st century, but modern genetics has expanded beyond inheritance to studying the function and behavior of genes. Gene structure and function, variation, and distribution are studied within the context of the cell, the organism (e.g. dominance) and within the context of a population. Genetics has given rise to a number of sub-fields including epigenetics and population genetics. Organisms studied within the broad field span the domain of life, including bacteria, plants, animals, and humans.

Genetic processes work in combination with an organism's environment and experiences to influence development and behavior, often referred to as nature versus nurture. The intra- or extra-cellular environment of a cell or organism may switch gene transcription on or off. A classic example is two seeds of genetically identical corn, one placed in a temperate climate and one in an arid climate. While the average height of the two corn stalks may be genetically determined to be equal, the one in the arid climate only grows to half the height of the one in the temperate climate due to lack of water and nutrients in its environment.

The word genetics stems from the Ancient Greek genetikos meaning "genitive"/"generative", which in turn derives from genesis meaning "origin".[3][4][5]

The observation that living things inherit traits from their parents has been used since prehistoric times to improve crop plants and animals through selective breeding.[6] The modern science of genetics, seeking to understand this process, began with the work of Gregor Mendel in the mid-19th century.[7]

Imre Festetics, a Hungarian noble who lived in Kőszeg, was the first to use the word "genetics", before Mendel. He described several rules of genetic inheritance in his work The genetic law of Nature (Die genetische Gesetze der Natur, 1819). His second law is the same as what Mendel published. In his third law, he developed the basic principles of mutation (he can be considered a forerunner of Hugo de Vries).[8]

Other theories of inheritance preceded his work. A popular theory during Mendel's time was the concept of blending inheritance: the idea that individuals inherit a smooth blend of traits from their parents.[9] Mendel's work provided examples where traits were definitely not blended after hybridization, showing that traits are produced by combinations of distinct genes rather than a continuous blend. Blending of traits in the progeny is now explained by the action of multiple genes with quantitative effects. Another theory that had some support at that time was the inheritance of acquired characteristics: the belief that individuals inherit traits strengthened by their parents. This theory (commonly associated with Jean-Baptiste Lamarck) is now known to be wrong: the experiences of individuals do not affect the genes they pass to their children,[10] although evidence in the field of epigenetics has revived some aspects of Lamarck's theory.[11] Other theories included the pangenesis of Charles Darwin (which had both acquired and inherited aspects) and Francis Galton's reformulation of pangenesis as both particulate and inherited.[12]

Modern genetics started with Gregor Johann Mendel, a scientist and Augustinian friar who studied the nature of inheritance in plants. In his paper "Versuche über Pflanzenhybriden" ("Experiments on Plant Hybridization"), presented in 1865 to the Naturforschender Verein (Society for Research in Nature) in Brünn, Mendel traced the inheritance patterns of certain traits in pea plants and described them mathematically.[13] Although this pattern of inheritance could only be observed for a few traits, Mendel's work suggested that heredity was particulate, not acquired, and that the inheritance patterns of many traits could be explained through simple rules and ratios.

The importance of Mendel's work did not gain wide understanding until the 1890s, after his death, when other scientists working on similar problems re-discovered his research. William Bateson, a proponent of Mendel's work, coined the word genetics in 1905.[14][15] (The adjective genetic, derived from the Greek word genesis, "origin", predates the noun and was first used in a biological sense in 1860.)[16] Bateson both acted as a mentor and was aided significantly by the work of women scientists from Newnham College at Cambridge, specifically the work of Becky Saunders, Nora Darwin Barlow, and Muriel Wheldale Onslow.[17] Bateson popularized the usage of the word genetics to describe the study of inheritance in his inaugural address to the Third International Conference on Plant Hybridization in London, England, in 1906.[18]

After the rediscovery of Mendel's work, scientists tried to determine which molecules in the cell were responsible for inheritance. In 1911, Thomas Hunt Morgan argued that genes are on chromosomes, based on observations of a sex-linked white eye mutation in fruit flies.[19] In 1913, his student Alfred Sturtevant used the phenomenon of genetic linkage to show that genes are arranged linearly on the chromosome.[20]

Although genes were known to exist on chromosomes, chromosomes are composed of both protein and DNA, and scientists did not know which of the two is responsible for inheritance. In 1928, Frederick Griffith discovered the phenomenon of transformation (see Griffith's experiment): dead bacteria could transfer genetic material to "transform" other still-living bacteria. Sixteen years later, in 1944, the Avery-MacLeod-McCarty experiment identified DNA as the molecule responsible for transformation.[21] The role of the nucleus as the repository of genetic information in eukaryotes had been established by Hämmerling in 1943 in his work on the single-celled alga Acetabularia.[22] The Hershey-Chase experiment in 1952 confirmed that DNA (rather than protein) is the genetic material of the viruses that infect bacteria, providing further evidence that DNA is the molecule responsible for inheritance.[23]

James Watson and Francis Crick determined the structure of DNA in 1953, using the X-ray crystallography work of Rosalind Franklin and Maurice Wilkins that indicated DNA had a helical structure (i.e., shaped like a corkscrew).[24][25] Their double-helix model had two strands of DNA with the nucleotides pointing inward, each matching a complementary nucleotide on the other strand to form what looks like rungs on a twisted ladder.[26] This structure showed that genetic information exists in the sequence of nucleotides on each strand of DNA. The structure also suggested a simple method for replication: if the strands are separated, new partner strands can be reconstructed for each based on the sequence of the old strand. This property is what gives DNA its semi-conservative nature where one strand of new DNA is from an original parent strand.[27]

Although the structure of DNA showed how inheritance works, it was still not known how DNA influences the behavior of cells. In the following years, scientists tried to understand how DNA controls the process of protein production.[28] It was discovered that the cell uses DNA as a template to create matching messenger RNA, molecules with nucleotides very similar to DNA. The nucleotide sequence of a messenger RNA is used to create an amino acid sequence in protein; this translation between nucleotide sequences and amino acid sequences is known as the genetic code.[29]

With the newfound molecular understanding of inheritance came an explosion of research.[30] A notable theory arose from Tomoko Ohta in 1973 with her amendment to the neutral theory of molecular evolution through publishing the nearly neutral theory of molecular evolution. In this theory, Ohta stressed the importance of natural selection and the environment to the rate at which genetic evolution occurs.[31] One important development was chain-termination DNA sequencing in 1977 by Frederick Sanger. This technology allows scientists to read the nucleotide sequence of a DNA molecule.[32] In 1983, Kary Banks Mullis developed the polymerase chain reaction, providing a quick way to isolate and amplify a specific section of DNA from a mixture.[33] The publicly funded Human Genome Project (supported by the Department of Energy and the NIH) and the parallel private effort by Celera Genomics led to the sequencing of the human genome in 2003.[34]

At its most fundamental level, inheritance in organisms occurs by passing discrete heritable units, called genes, from parents to progeny.[35] This property was first observed by Gregor Mendel, who studied the segregation of heritable traits in pea plants.[13][36] In his experiments studying the trait for flower color, Mendel observed that the flowers of each pea plant were either purple or white, but never an intermediate between the two colors. These different, discrete versions of the same gene are called alleles.

In the case of the pea, which is a diploid species, each individual plant has two copies of each gene, one copy inherited from each parent.[37] Many species, including humans, have this pattern of inheritance. Diploid organisms with two copies of the same allele of a given gene are called homozygous at that gene locus, while organisms with two different alleles of a given gene are called heterozygous.

The set of alleles for a given organism is called its genotype, while the observable traits of the organism are called its phenotype. When organisms are heterozygous at a gene, often one allele is called dominant as its qualities dominate the phenotype of the organism, while the other allele is called recessive as its qualities recede and are not observed. Some alleles do not have complete dominance and instead have incomplete dominance by expressing an intermediate phenotype, or codominance by expressing both alleles at once.[38]

When a pair of organisms reproduce sexually, their offspring randomly inherit one of the two alleles from each parent. These observations of discrete inheritance and the segregation of alleles are collectively known as Mendel's first law or the Law of Segregation.

Geneticists use diagrams and symbols to describe inheritance. A gene is represented by one or a few letters. Often a "+" symbol is used to mark the usual, non-mutant allele for a gene.[39]

In fertilization and breeding experiments (and especially when discussing Mendel's laws) the parents are referred to as the "P" generation and the offspring as the "F1" (first filial) generation. When the F1 offspring mate with each other, the offspring are called the "F2" (second filial) generation. One of the common diagrams used to predict the result of cross-breeding is the Punnett square.
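
As an illustration of how a Punnett square predicts the outcome of a monohybrid cross, the short Python sketch below enumerates the allele combinations of two heterozygous parents. The function name and the allele symbols "A"/"a" are illustrative choices, not anything taken from the text above.

```python
from collections import Counter
from itertools import product

def punnett_square(parent1, parent2):
    """Enumerate all offspring genotypes from two parental allele pairs."""
    offspring = []
    for a, b in product(parent1, parent2):
        # Sort the letters so that "aA" and "Aa" count as the same genotype.
        offspring.append("".join(sorted(a + b)))
    return Counter(offspring)

# Monohybrid cross of two heterozygous (Aa) parents, as in an F1 x F1 cross.
print(punnett_square("Aa", "Aa"))
# Counter({'Aa': 2, 'AA': 1, 'aa': 1}) -> the familiar 1:2:1 genotype ratio
```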

When studying human genetic diseases, geneticists often use pedigree charts to represent the inheritance of traits.[40] These charts map the inheritance of a trait in a family tree.

Organisms have thousands of genes, and in sexually reproducing organisms these genes generally assort independently of each other. This means that the inheritance of an allele for yellow or green pea color is unrelated to the inheritance of alleles for white or purple flowers. This phenomenon, known as "Mendel's second law" or the "Law of independent assortment", means that the alleles of different genes get shuffled between parents to form offspring with many different combinations. (Some genes do not assort independently, demonstrating genetic linkage, a topic discussed later in this article.)

Often different genes can interact in a way that influences the same trait. In the Blue-eyed Mary (Omphalodes verna), for example, there exists a gene with alleles that determine the color of flowers: blue or magenta. Another gene, however, controls whether the flowers have color at all or are white. When a plant has two copies of this white allele, its flowers are whiteregardless of whether the first gene has blue or magenta alleles. This interaction between genes is called epistasis, with the second gene epistatic to the first.[41]
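
A minimal sketch of the epistasis just described, assuming hypothetical genotype encodings (the allele symbols and the choice of blue as dominant over magenta are illustrative assumptions, not taken from the source):

```python
def flower_color(color_alleles, white_alleles):
    """Return the phenotype given alleles at the colour gene and at the epistatic gene.

    color_alleles: e.g. "BB" or "Bb" (blue assumed dominant) or "bb" (magenta).
    white_alleles: "ww" means two copies of the white allele, which masks colour entirely.
    """
    if white_alleles == "ww":          # epistatic gene: homozygous white masks the colour gene
        return "white"
    return "blue" if "B" in color_alleles else "magenta"

print(flower_color("Bb", "Ww"))  # blue
print(flower_color("bb", "ww"))  # white, regardless of the colour-gene alleles
```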

Many traits are not discrete features (e.g. purple or white flowers) but are instead continuous features (e.g. human height and skin color). These complex traits are products of many genes.[42] The influence of these genes is mediated, to varying degrees, by the environment an organism has experienced. The degree to which an organism's genes contribute to a complex trait is called heritability.[43] Measurement of the heritability of a trait is relativein a more variable environment, the environment has a bigger influence on the total variation of the trait. For example, human height is a trait with complex causes. It has a heritability of 89% in the United States. In Nigeria, however, where people experience a more variable access to good nutrition and health care, height has a heritability of only 62%.[44]

The molecular basis for genes is deoxyribonucleic acid (DNA). DNA is composed of a chain of nucleotides, of which there are four types: adenine (A), cytosine (C), guanine (G), and thymine (T). Genetic information exists in the sequence of these nucleotides, and genes exist as stretches of sequence along the DNA chain.[45] Viruses are the only exception to this rule; sometimes viruses use the very similar molecule, RNA, instead of DNA as their genetic material.[46] Viruses cannot reproduce without a host and are unaffected by many genetic processes, so tend not to be considered living organisms.

DNA normally exists as a double-stranded molecule, coiled into the shape of a double helix. Each nucleotide in DNA preferentially pairs with its partner nucleotide on the opposite strand: A pairs with T, and C pairs with G. Thus, in its two-stranded form, each strand effectively contains all necessary information, redundant with its partner strand. This structure of DNA is the physical basis for inheritance: DNA replication duplicates the genetic information by splitting the strands and using each strand as a template for synthesis of a new partner strand.[47]
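
The base-pairing rule described above (A with T, C with G) can be expressed directly in code. The sketch below builds the strand that pairs with a given template strand, which is essentially what happens when each strand serves as a template during replication; the dictionary and function names are my own, and the antiparallel orientation of the two strands is ignored for simplicity.

```python
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement_strand(template):
    """Return the strand that pairs with the given template, base by base."""
    return "".join(PAIR[base] for base in template)

template = "ATGCGTA"
print(complement_strand(template))  # TACGCAT
```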

Genes are arranged linearly along long chains of DNA base-pair sequences. In bacteria, each cell usually contains a single circular genophore, while eukaryotic organisms (such as plants and animals) have their DNA arranged in multiple linear chromosomes. These DNA strands are often extremely long; the largest human chromosome, for example, is about 247 million base pairs in length.[48] The DNA of a chromosome is associated with structural proteins that organize, compact and control access to the DNA, forming a material called chromatin; in eukaryotes, chromatin is usually composed of nucleosomes, segments of DNA wound around cores of histone proteins.[49] The full set of hereditary material in an organism (usually the combined DNA sequences of all chromosomes) is called the genome.

While haploid organisms have only one copy of each chromosome, most animals and many plants are diploid, containing two of each chromosome and thus two copies of every gene.[37] The two alleles for a gene are located on identical loci of the two homologous chromosomes, each allele inherited from a different parent.

Many species have so-called sex chromosomes that determine the sex of each organism.[50] In humans and many other animals, the Y chromosome contains the gene that triggers the development of specifically male characteristics. Over the course of evolution, this chromosome has lost most of its content and most of its genes, while the X chromosome is similar to the other chromosomes and contains many genes. The X and Y chromosomes form a strongly heterogeneous pair.

When cells divide, their full genome is copied and each daughter cell inherits one copy. This process, called mitosis, is the simplest form of reproduction and is the basis for asexual reproduction. Asexual reproduction can also occur in multicellular organisms, producing offspring that inherit their genome from a single parent. Offspring that are genetically identical to their parents are called clones.

Eukaryotic organisms often use sexual reproduction to generate offspring that contain a mixture of genetic material inherited from two different parents. The process of sexual reproduction alternates between forms that contain single copies of the genome (haploid) and double copies (diploid).[37] Haploid cells fuse and combine genetic material to create a diploid cell with paired chromosomes. Diploid organisms form haploids by dividing, without replicating their DNA, to create daughter cells that randomly inherit one of each pair of chromosomes. Most animals and many plants are diploid for most of their lifespan, with the haploid form reduced to single cell gametes such as sperm or eggs.

Although they do not use the haploid/diploid method of sexual reproduction, bacteria have many methods of acquiring new genetic information. Some bacteria can undergo conjugation, transferring a small circular piece of DNA to another bacterium.[51] Bacteria can also take up raw DNA fragments found in the environment and integrate them into their genomes, a phenomenon known as transformation.[52] These processes result in horizontal gene transfer, transmitting fragments of genetic information between organisms that would be otherwise unrelated.

The diploid nature of chromosomes allows for genes on different chromosomes to assort independently or be separated from their homologous pair during sexual reproduction wherein haploid gametes are formed. In this way new combinations of genes can occur in the offspring of a mating pair. Genes on the same chromosome would theoretically never recombine. However, they do via the cellular process of chromosomal crossover. During crossover, chromosomes exchange stretches of DNA, effectively shuffling the gene alleles between the chromosomes.[53] This process of chromosomal crossover generally occurs during meiosis, a series of cell divisions that creates haploid cells.

The first cytological demonstration of crossing over was performed by Harriet Creighton and Barbara McClintock in 1931. Their research and experiments on corn provided cytological evidence for the genetic theory that linked genes on paired chromosomes do in fact exchange places from one homolog to the other.

The probability of chromosomal crossover occurring between two given points on the chromosome is related to the distance between the points. For an arbitrarily long distance, the probability of crossover is high enough that the inheritance of the genes is effectively uncorrelated.[54] For genes that are closer together, however, the lower probability of crossover means that the genes demonstrate genetic linkage; alleles for the two genes tend to be inherited together. The amounts of linkage between a series of genes can be combined to form a linear linkage map that roughly describes the arrangement of the genes along the chromosome.[55]
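
Linkage between two genes is commonly quantified as the recombination frequency among offspring, and for short distances that frequency in percent is read as a map distance in centimorgans. The sketch below is a worked example with invented offspring counts; the function name and numbers are assumptions for illustration only.

```python
def recombination_frequency(recombinant_offspring, total_offspring):
    """Fraction of offspring whose allele combination differs from both parental types."""
    return recombinant_offspring / total_offspring

# Hypothetical test cross: 18 recombinant offspring out of 200 scored.
rf = recombination_frequency(18, 200)
print(f"recombination frequency = {rf:.2%}")        # 9.00%
print(f"approximate map distance = {rf * 100:.1f} cM")  # ~9 map units; only a good approximation for short distances
```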

Genes generally express their functional effect through the production of proteins, which are complex molecules responsible for most functions in the cell. Proteins are made up of one or more polypeptide chains, each of which is composed of a sequence of amino acids, and the DNA sequence of a gene (through an RNA intermediate) is used to produce a specific amino acid sequence. This process begins with the production of an RNA molecule with a sequence matching the gene's DNA sequence, a process called transcription.

This messenger RNA molecule is then used to produce a corresponding amino acid sequence through a process called translation. Each group of three nucleotides in the sequence, called a codon, corresponds either to one of the twenty possible amino acids in a protein or an instruction to end the amino acid sequence; this correspondence is called the genetic code.[56] The flow of information is unidirectional: information is transferred from nucleotide sequences into the amino acid sequence of proteins, but it never transfers from protein back into the sequence of DNAa phenomenon Francis Crick called the central dogma of molecular biology.[57]
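
To make the codon-to-amino-acid mapping concrete, the sketch below translates a short messenger RNA using a deliberately tiny codon table containing only the codons needed for the example (the full genetic code has 64 codons). The codon assignments shown (AUG = Met, UUU = Phe, GGC = Gly, UAA = stop) are standard; the function itself is a simplified illustration.

```python
# A deliberately tiny subset of the standard genetic code (mRNA codons).
CODON_TABLE = {
    "AUG": "Met",  # also the usual start codon
    "UUU": "Phe",
    "GGC": "Gly",
    "UAA": "STOP",
}

def translate(mrna):
    """Read the mRNA three nucleotides at a time until a stop codon is reached."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE[mrna[i:i + 3]]
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return "-".join(protein)

print(translate("AUGUUUGGCUAA"))  # Met-Phe-Gly
```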

The specific sequence of amino acids results in a unique three-dimensional structure for that protein, and the three-dimensional structures of proteins are related to their functions.[58][59] Some are simple structural molecules, like the fibers formed by the protein collagen. Proteins can bind to other proteins and simple molecules, sometimes acting as enzymes by facilitating chemical reactions within the bound molecules (without changing the structure of the protein itself). Protein structure is dynamic; the protein hemoglobin bends into slightly different forms as it facilitates the capture, transport, and release of oxygen molecules within mammalian blood.

A single nucleotide difference within DNA can cause a change in the amino acid sequence of a protein. Because protein structures are the result of their amino acid sequences, some changes can dramatically change the properties of a protein by destabilizing the structure or changing the surface of the protein in a way that changes its interaction with other proteins and molecules. For example, sickle-cell anemia is a human genetic disease that results from a single base difference within the coding region for the β-globin section of hemoglobin, causing a single amino acid change that changes hemoglobin's physical properties.[60] Sickle-cell versions of hemoglobin stick to themselves, stacking to form fibers that distort the shape of red blood cells carrying the protein. These sickle-shaped cells no longer flow smoothly through blood vessels, having a tendency to clog or degrade, causing the medical problems associated with this disease.
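
The sickle-cell example reduces to a single codon: the change generally cited is GAG (glutamate) to GUG (valine) at the sixth codon of the β-globin messenger RNA. The sketch below shows how one substituted base flips the encoded amino acid; the variable names are my own and the codon table holds only the two codons needed.

```python
CODON_TO_AA = {"GAG": "Glu", "GUG": "Val"}

normal_codon = "GAG"                                      # codon 6 of beta-globin mRNA in the usual allele
sickle_codon = normal_codon[0] + "U" + normal_codon[2]    # a single A -> U substitution (A -> T in the DNA)

print(CODON_TO_AA[normal_codon], "->", CODON_TO_AA[sickle_codon])  # Glu -> Val
```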

Some DNA sequences are transcribed into RNA but are not translated into protein productssuch RNA molecules are called non-coding RNA. In some cases, these products fold into structures which are involved in critical cell functions (e.g. ribosomal RNA and transfer RNA). RNA can also have regulatory effects through hybridization interactions with other RNA molecules (e.g. microRNA).

Although genes contain all the information an organism uses to function, the environment plays an important role in determining the ultimate phenotypes an organism displays. This is the complementary relationship often referred to as "nature and nurture". The phenotype of an organism depends on the interaction of genes and the environment. An interesting example is the coat coloration of the Siamese cat. In this case, the body temperature of the cat plays the role of the environment. The cat's genes code for dark hair, so the hair-producing cells in the cat make cellular proteins resulting in dark hair. But these dark hair-producing proteins are sensitive to temperature (i.e. they carry a mutation causing temperature sensitivity) and denature in higher-temperature environments, failing to produce dark-hair pigment in areas where the cat has a higher body temperature. In a low-temperature environment, however, the protein's structure is stable and produces dark-hair pigment normally. The protein remains functional in areas of skin that are colder, such as the legs, ears, tail and face, so the cat has dark hair at its extremities.[61]

Environment plays a major role in effects of the human genetic disease phenylketonuria.[62] The mutation that causes phenylketonuria disrupts the ability of the body to break down the amino acid phenylalanine, causing a toxic build-up of an intermediate molecule that, in turn, causes severe symptoms of progressive mental retardation and seizures. However, if someone with the phenylketonuria mutation follows a strict diet that avoids this amino acid, they remain normal and healthy.

A popular method for determining how genes and environment ("nature and nurture") contribute to a phenotype involves studying identical and fraternal twins, or other siblings of multiple births.[63] Because identical siblings come from the same zygote, they are genetically the same. Fraternal twins are as genetically different from one another as normal siblings. By comparing how often a certain disorder occurs in a pair of identical twins to how often it occurs in a pair of fraternal twins, scientists can determine whether that disorder is caused by genetic or postnatal environmental factors, that is, whether it has "nature" or "nurture" causes. One famous example is the multiple birth study of the Genain quadruplets, who were identical quadruplets all diagnosed with schizophrenia.[64] However, such tests cannot separate genetic factors from environmental factors affecting fetal development.
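
One common, highly simplified way such twin data are used is Falconer's formula, which estimates heritability as twice the difference between the trait correlation in identical twins and that in fraternal twins. The correlations in the sketch below are invented for illustration.

```python
def falconer_heritability(r_identical, r_fraternal):
    """Rough heritability estimate from twin data: H^2 is approximately 2 * (r_MZ - r_DZ)."""
    return 2 * (r_identical - r_fraternal)

# Hypothetical trait correlations: 0.80 between identical twins, 0.50 between fraternal twins.
h2 = falconer_heritability(0.80, 0.50)
print(f"estimated heritability: {h2:.2f}")  # 0.60, i.e. roughly 60% of trait variation attributed to genes
```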

The genome of a given organism contains thousands of genes, but not all these genes need to be active at any given moment. A gene is expressed when it is being transcribed into mRNA and there exist many cellular methods of controlling the expression of genes such that proteins are produced only when needed by the cell. Transcription factors are regulatory proteins that bind to DNA, either promoting or inhibiting the transcription of a gene.[65] Within the genome of Escherichia coli bacteria, for example, there exists a series of genes necessary for the synthesis of the amino acid tryptophan. However, when tryptophan is already available to the cell, these genes for tryptophan synthesis are no longer needed. The presence of tryptophan directly affects the activity of the genes: tryptophan molecules bind to the tryptophan repressor (a transcription factor), changing the repressor's structure such that the repressor binds to the genes. The tryptophan repressor blocks the transcription and expression of the genes, thereby creating negative feedback regulation of the tryptophan synthesis process.[66]
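
A toy simulation (not a quantitative model) of the negative feedback just described: while tryptophan is scarce the synthesis genes are expressed, and once it is abundant the repressor switches them off, so the level hovers near a set point instead of rising without bound. Every name, threshold and rate constant below is an invented illustration.

```python
def simulate_trp_feedback(steps=20, threshold=50.0, synthesis_rate=10.0, usage_rate=4.0):
    """Toy negative-feedback loop: synthesis genes are expressed only while tryptophan is scarce."""
    tryptophan = 0.0
    history = []
    for _ in range(steps):
        genes_on = tryptophan < threshold   # repressor is inactive while tryptophan is scarce
        if genes_on:
            tryptophan += synthesis_rate    # operon expressed -> tryptophan is synthesized
        tryptophan -= usage_rate            # the cell constantly consumes tryptophan
        tryptophan = max(tryptophan, 0.0)
        history.append(round(tryptophan, 1))
    return history

# Levels climb at first, then hover near the threshold rather than growing indefinitely.
print(simulate_trp_feedback())
```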

Differences in gene expression are especially clear within multicellular organisms, where cells all contain the same genome but have very different structures and behaviors due to the expression of different sets of genes. All the cells in a multicellular organism derive from a single cell, differentiating into variant cell types in response to external and intercellular signals and gradually establishing different patterns of gene expression to create different behaviors. As no single gene is responsible for the development of structures within multicellular organisms, these patterns arise from the complex interactions between many cells.

Within eukaryotes, there exist structural features of chromatin that influence the transcription of genes, often in the form of modifications to DNA and chromatin that are stably inherited by daughter cells.[67] These features are called "epigenetic" because they exist "on top" of the DNA sequence and retain inheritance from one cell generation to the next. Because of epigenetic features, different cell types grown within the same medium can retain very different properties. Although epigenetic features are generally dynamic over the course of development, some, like the phenomenon of paramutation, have multigenerational inheritance and exist as rare exceptions to the general rule of DNA as the basis for inheritance.[68]

During the process of DNA replication, errors occasionally occur in the polymerization of the second strand. These errors, called mutations, can affect the phenotype of an organism, especially if they occur within the protein coding sequence of a gene. Error rates are usually very low (1 error in every 10 to 100 million bases) due to the "proofreading" ability of DNA polymerases.[69][70] Processes that increase the rate of changes in DNA are called mutagenic: mutagenic chemicals promote errors in DNA replication, often by interfering with the structure of base-pairing, while UV radiation induces mutations by causing damage to the DNA structure.[71] Chemical damage to DNA occurs naturally as well and cells use DNA repair mechanisms to repair mismatches and breaks. The repair does not, however, always restore the original sequence.
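
A quick arithmetic check of what those error rates mean per cell division, assuming (purely for illustration) a roughly 3 billion base-pair genome and the 1-in-10-million to 1-in-100-million range quoted above, before the further corrections made by mismatch repair:

```python
genome_size = 3_000_000_000  # approximate haploid human genome, in base pairs

for error_rate in (1 / 10_000_000, 1 / 100_000_000):
    expected_errors = genome_size * error_rate
    print(f"error rate {error_rate:.0e}: ~{expected_errors:.0f} uncorrected errors per genome copy")
# error rate 1e-07: ~300 uncorrected errors per genome copy
# error rate 1e-08: ~30 uncorrected errors per genome copy
```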

In organisms that use chromosomal crossover to exchange DNA and recombine genes, errors in alignment during meiosis can also cause mutations.[72] Errors in crossover are especially likely when similar sequences cause partner chromosomes to adopt a mistaken alignment; this makes some regions in genomes more prone to mutating in this way. These errors create large structural changes in DNA sequence: duplications, inversions, deletions of entire regions, or the accidental exchange of whole parts of sequences between different chromosomes (chromosomal translocation).

Mutations alter an organism's genotype and occasionally this causes different phenotypes to appear. Most mutations have little effect on an organism's phenotype, health, or reproductive fitness.[73] Mutations that do have an effect are usually deleterious, but occasionally some can be beneficial.[74] Studies in the fly Drosophila melanogaster suggest that if a mutation changes a protein produced by a gene, about 70 percent of these mutations will be harmful with the remainder being either neutral or weakly beneficial.[75]

Population genetics studies the distribution of genetic differences within populations and how these distributions change over time.[76] Changes in the frequency of an allele in a population are mainly influenced by natural selection, where a given allele provides a selective or reproductive advantage to the organism,[77] as well as other factors such as mutation, genetic drift, genetic draft,[78] artificial selection and migration.[79]
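
A minimal sketch of how natural selection changes allele frequencies across generations, using the standard single-locus recursion for a beneficial dominant allele; the starting frequency, the selection coefficient, and the number of generations are arbitrary assumptions for illustration.

```python
def next_generation(p, s):
    """One generation of selection for a dominant beneficial allele A.

    Genotype fitnesses are assumed to be AA = Aa = 1 and aa = 1 - s.
    Returns the new frequency of allele A.
    """
    q = 1 - p
    mean_fitness = 1 - s * q * q
    return p / mean_fitness

p = 0.01   # initially rare beneficial allele
s = 0.10   # 10% fitness cost of the aa genotype
for generation in range(100):
    p = next_generation(p, s)

# After 100 generations the allele has become substantially more common.
print(round(p, 3))
```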

Over many generations, the genomes of organisms can change significantly, resulting in evolution. In the process called adaptation, selection for beneficial mutations can cause a species to evolve into forms better able to survive in their environment.[80] New species are formed through the process of speciation, often caused by geographical separations that prevent populations from exchanging genes with each other.[81] The application of genetic principles to the study of population biology and evolution is known as the "modern synthesis".

By comparing the homology between different species' genomes, it is possible to calculate the evolutionary distance between them and when they may have diverged. Genetic comparisons are generally considered a more accurate method of characterizing the relatedness between species than the comparison of phenotypic characteristics. The evolutionary distances between species can be used to form evolutionary trees; these trees represent the common descent and divergence of species over time, although they do not show the transfer of genetic material between unrelated species (known as horizontal gene transfer and most common in bacteria).[82]

Although geneticists originally studied inheritance in a wide range of organisms, researchers began to specialize in studying the genetics of a particular subset of organisms. The fact that significant research already existed for a given organism would encourage new researchers to choose it for further study, and so eventually a few model organisms became the basis for most genetics research.[83] Common research topics in model organism genetics include the study of gene regulation and the involvement of genes in development and cancer.

Organisms were chosen, in part, for convenienceshort generation times and easy genetic manipulation made some organisms popular genetics research tools. Widely used model organisms include the gut bacterium Escherichia coli, the plant Arabidopsis thaliana, baker's yeast (Saccharomyces cerevisiae), the nematode Caenorhabditis elegans, the common fruit fly (Drosophila melanogaster), and the common house mouse (Mus musculus).

Medical genetics seeks to understand how genetic variation relates to human health and disease.[84] When searching for an unknown gene that may be involved in a disease, researchers commonly use genetic linkage and genetic pedigree charts to find the location on the genome associated with the disease. At the population level, researchers take advantage of Mendelian randomization to look for locations in the genome that are associated with diseases, a method especially useful for multigenic traits not clearly defined by a single gene.[85] Once a candidate gene is found, further research is often done on the corresponding gene (the orthologous gene) in model organisms. In addition to studying genetic diseases, the increased availability of genotyping methods has led to the field of pharmacogenetics: the study of how genotype can affect drug responses.[86]

Individuals differ in their inherited tendency to develop cancer,[87] and cancer is a genetic disease.[88] The process of cancer development in the body is a combination of events. Mutations occasionally occur within cells in the body as they divide. Although these mutations will not be inherited by any offspring, they can affect the behavior of cells, sometimes causing them to grow and divide more frequently. There are biological mechanisms that attempt to stop this process; signals are given to inappropriately dividing cells that should trigger cell death, but sometimes additional mutations occur that cause cells to ignore these messages. An internal process of natural selection occurs within the body and eventually mutations accumulate within cells to promote their own growth, creating a cancerous tumor that grows and invades various tissues of the body.

Normally, a cell divides only in response to signals called growth factors and stops growing once in contact with surrounding cells and in response to growth-inhibitory signals. It usually then divides a limited number of times and dies, staying within the epithelium where it is unable to migrate to other organs. To become a cancer cell, a cell has to accumulate mutations in a number of genes (3–7) that allow it to bypass this regulation: it no longer needs growth factors to divide, it continues growing when in contact with neighboring cells and ignores inhibitory signals, it keeps growing indefinitely and is immortal, and it can escape from the epithelium, ultimately leaving the primary tumor, crossing the endothelium of a blood vessel, travelling in the bloodstream and colonizing a new organ to form a deadly metastasis. Although there are genetic predispositions in a small fraction of cancers, the major fraction is due to a set of new genetic mutations that originally appear and accumulate in one or a small number of cells that will divide to form the tumor and are not transmitted to the progeny (somatic mutations). The most frequent mutations are loss of function of the p53 protein, a tumor suppressor, or of other components of the p53 pathway, and gain-of-function mutations in the ras proteins or in other oncogenes.

DNA can be manipulated in the laboratory. Restriction enzymes are commonly used enzymes that cut DNA at specific sequences, producing predictable fragments of DNA.[89] DNA fragments can be visualized through use of gel electrophoresis, which separates fragments according to their length.
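
An illustrative sketch of what a restriction digest does at the sequence level: EcoRI recognizes the site GAATTC, so splitting a sequence at every occurrence of that site gives the fragment lengths one would expect to resolve on a gel. The cut is modelled here as a simple split at the recognition site (the real enzyme leaves staggered ends), and the input sequence is invented.

```python
def digest(sequence, site="GAATTC"):
    """Cut a DNA sequence at every occurrence of the recognition site."""
    fragments = sequence.split(site)
    # Re-attach the site to the fragment preceding each cut, purely for length bookkeeping.
    return [frag + site for frag in fragments[:-1]] + [fragments[-1]]

dna = "TTACGAATTCGGGGAATTCCATG"   # invented sequence containing two EcoRI sites
pieces = digest(dna)
print([len(p) for p in pieces])   # [10, 9, 4]: fragment lengths, as separated by gel electrophoresis
```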

The use of ligation enzymes allows DNA fragments to be connected. By binding ("ligating") fragments of DNA together from different sources, researchers can create recombinant DNA, the DNA often associated with genetically modified organisms. Recombinant DNA is commonly used in the context of plasmids: short circular DNA molecules with a few genes on them. In the process known as molecular cloning, researchers can amplify the DNA fragments by inserting plasmids into bacteria and then culturing them on plates of agar (to isolate clones of bacteria cells). ("Cloning" can also refer to the various means of creating cloned ("clonal") organisms.)

DNA can also be amplified using a procedure called the polymerase chain reaction (PCR).[90] By using specific short sequences of DNA, PCR can isolate and exponentially amplify a targeted region of DNA. Because it can amplify from extremely small amounts of DNA, PCR is also often used to detect the presence of specific DNA sequences.
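
The exponential character of PCR is simple arithmetic: each cycle roughly doubles the number of copies of the target region, so even a handful of starting molecules yields an enormous number after the usual 30 or so cycles. The figures below assume perfect doubling every cycle, which real reactions only approximate.

```python
starting_copies = 10   # a tiny amount of target DNA
cycles = 30

copies = starting_copies * 2 ** cycles
print(f"{copies:,} copies after {cycles} cycles")  # 10,737,418,240 copies after 30 cycles
```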

DNA sequencing, one of the most fundamental technologies developed to study genetics, allows researchers to determine the sequence of nucleotides in DNA fragments. The technique of chain-termination sequencing, developed in 1977 by a team led by Frederick Sanger, is still routinely used to sequence DNA fragments.[91] Using this technology, researchers have been able to study the molecular sequences associated with many human diseases.

As sequencing has become less expensive, researchers have sequenced the genomes of many organisms, using a process called genome assembly, which utilizes computational tools to stitch together sequences from many different fragments.[92] These technologies were used to sequence the human genome in the Human Genome Project completed in 2003.[34] New high-throughput sequencing technologies are dramatically lowering the cost of DNA sequencing, with many researchers hoping to bring the cost of resequencing a human genome down to a thousand dollars.[93]
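
A highly simplified sketch of the idea behind genome assembly: overlapping reads are stitched together by finding a suffix of one read that matches a prefix of the next. Real assemblers handle sequencing errors, repeats and millions of reads; this toy merges just two invented reads, and the function name and overlap threshold are my own.

```python
def merge_reads(left, right, min_overlap=4):
    """Join two reads by the longest suffix of `left` that is also a prefix of `right`."""
    for overlap in range(min(len(left), len(right)), min_overlap - 1, -1):
        if left.endswith(right[:overlap]):
            return left + right[overlap:]
    return None  # no sufficiently long overlap found

read1 = "ATGGCGTACGTT"
read2 = "TACGTTAGCAAC"
print(merge_reads(read1, read2))  # ATGGCGTACGTTAGCAAC
```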

Next generation sequencing (or high-throughput sequencing) came about due to the ever-increasing demand for low-cost sequencing. These sequencing technologies allow the production of potentially millions of sequences concurrently.[94][95] The large amount of sequence data available has created the field of genomics, research that uses computational tools to search for and analyze patterns in the full genomes of organisms. Genomics can also be considered a subfield of bioinformatics, which uses computational approaches to analyze large sets of biological data. A problem common to these fields of research is how to manage and share data that deal with human subjects and personally identifiable information. See also genomics data sharing.

On 19 March 2015, a leading group of biologists urged a worldwide ban on clinical use of methods, particularly the use of CRISPR and zinc finger, to edit the human genome in a way that can be inherited.[96][97][98][99] In April 2015, Chinese researchers reported results of basic research to edit the DNA of non-viable human embryos using CRISPR.[100][101]

See the original post here:
Genetics - Wikipedia

Read More...

Biotechnology – Wikipedia

October 20th, 2016 7:41 pm

"Bioscience" redirects here. For the scientific journal, see BioScience. For life sciences generally, see life science.

Biotechnology is the use of living systems and organisms to develop or make products, or "any technological application that uses biological systems, living organisms or derivatives thereof, to make or modify products or processes for specific use" (UN Convention on Biological Diversity, Art. 2).[1] Depending on the tools and applications, it often overlaps with the (related) fields of bioengineering, biomedical engineering, biomanufacturing, molecular engineering, etc.

For thousands of years, humankind has used biotechnology in agriculture, food production, and medicine.[2] The term is largely believed to have been coined in 1919 by Hungarian engineer Károly Ereky. In the late 20th and early 21st century, biotechnology has expanded to include new and diverse sciences such as genomics, recombinant gene techniques, applied immunology, and development of pharmaceutical therapies and diagnostic tests.[2]

The wide concept of "biotech" or "biotechnology" encompasses a wide range of procedures for modifying living organisms according to human purposes, going back to domestication of animals, cultivation of plants, and "improvements" to these through breeding programs that employ artificial selection and hybridization. Modern usage also includes genetic engineering as well as cell and tissue culture technologies. The American Chemical Society defines biotechnology as the application of biological organisms, systems, or processes by various industries to learning about the science of life and the improvement of the value of materials and organisms such as pharmaceuticals, crops, and livestock.[3] According to the European Federation of Biotechnology, biotechnology is the integration of natural science and organisms, cells, parts thereof, and molecular analogues for products and services.[4] Biotechnology also draws on the pure biological sciences (animal cell culture, biochemistry, cell biology, embryology, genetics, microbiology, and molecular biology). In many instances, it is also dependent on knowledge and methods from outside the sphere of biology.

Conversely, modern biological sciences (including even concepts such as molecular ecology) are intimately entwined and heavily dependent on the methods developed through biotechnology and what is commonly thought of as the life sciences industry. Biotechnology is research and development in the laboratory that uses bioinformatics for exploration, extraction, exploitation and production from any living organisms and any source of biomass by means of biochemical engineering. High value-added products can be planned (reproduced by biosynthesis, for example), forecasted, formulated, developed, manufactured and marketed for the purpose of sustainable operations (recouping the large initial investment in R & D) and gaining durable patent rights (exclusive rights for sales, granted after national and international approval based on animal and human trials, especially in the pharmaceutical branch of biotechnology, to prevent any undetected side-effects or safety concerns with the products).[5][6][7]

By contrast, bioengineering is generally thought of as a related field that more heavily emphasizes higher systems approaches (not necessarily the altering or using of biological materials directly) for interfacing with and utilizing living things. Bioengineering is the application of the principles of engineering and natural sciences to tissues, cells and molecules. This can be considered as the use of knowledge from working with and manipulating biology to achieve a result that can improve functions in plants and animals.[8] Relatedly, biomedical engineering is an overlapping field that often draws upon and applies biotechnology (by various definitions), especially in certain sub-fields of biomedical and/or chemical engineering such as tissue engineering, biopharmaceutical engineering, and genetic engineering.

Although not normally what first comes to mind, many forms of human-derived agriculture clearly fit the broad definition of "utilizing a biotechnological system to make products". Indeed, the cultivation of plants may be viewed as the earliest biotechnological enterprise.

Agriculture has been theorized to have become the dominant way of producing food since the Neolithic Revolution. Through early biotechnology, the earliest farmers selected and bred the best suited crops, having the highest yields, to produce enough food to support a growing population. As crops and fields became increasingly large and difficult to maintain, it was discovered that specific organisms and their by-products could effectively fertilize, restore nitrogen, and control pests. Throughout the history of agriculture, farmers have inadvertently altered the genetics of their crops through introducing them to new environments and breeding them with other plants one of the first forms of biotechnology.

These processes also were included in early fermentation of beer.[9] These processes were introduced in early Mesopotamia, Egypt, China and India, and still use the same basic biological methods. In brewing, malted grains (containing enzymes) convert starch from grains into sugar, and specific yeasts are then added to produce beer. In this process, carbohydrates in the grains were broken down into alcohols such as ethanol. Later, other cultures developed the process of lactic acid fermentation, which allowed the fermentation and preservation of other forms of food, such as soy sauce. Fermentation was also used in this time period to produce leavened bread. Although the process of fermentation was not fully understood until Louis Pasteur's work in 1857, it is still the first use of biotechnology to convert a food source into another form.

Before the time of Charles Darwin's work and life, animal and plant scientists had already used selective breeding. Darwin added to that body of work with his scientific observations about the ability of science to change species. These accounts contributed to Darwin's theory of natural selection.[10]

For thousands of years, humans have used selective breeding to improve production of crops and livestock to use them for food. In selective breeding, organisms with desirable characteristics are mated to produce offspring with the same characteristics. For example, this technique was used with corn to produce the largest and sweetest crops.[11]

In the early twentieth century scientists gained a greater understanding of microbiology and explored ways of manufacturing specific products. In 1917, Chaim Weizmann first used a pure microbiological culture in an industrial process, fermenting corn starch with Clostridium acetobutylicum to produce acetone, which the United Kingdom desperately needed to manufacture explosives during World War I.[12]

Biotechnology has also led to the development of antibiotics. In 1928, Alexander Fleming discovered the mold Penicillium. His work led to the purification of the antibiotic compound formed by the mold by Howard Florey, Ernst Boris Chain and Norman Heatley to form what we today know as penicillin. In 1940, penicillin became available for medicinal use to treat bacterial infections in humans.[11]

The field of modern biotechnology is generally thought of as having been born in 1971 when Paul Berg's (Stanford) experiments in gene splicing had early success. Herbert W. Boyer (Univ. Calif. at San Francisco) and Stanley N. Cohen (Stanford) significantly advanced the new technology in 1972 by transferring genetic material into a bacterium, such that the imported material would be reproduced. The commercial viability of a biotechnology industry was significantly expanded on June 16, 1980, when the United States Supreme Court ruled that a genetically modified microorganism could be patented in the case of Diamond v. Chakrabarty.[13] Indian-born Ananda Chakrabarty, working for General Electric, had modified a bacterium (of the Pseudomonas genus) capable of breaking down crude oil, which he proposed to use in treating oil spills. (Chakrabarty's work did not involve gene manipulation but rather the transfer of entire organelles between strains of the Pseudomonas bacterium.)

Revenue in the industry is expected to grow by 12.9% in 2008. Another factor influencing the biotechnology sector's success is improved intellectual property rights legislation and enforcement worldwide, as well as strengthened demand for medical and pharmaceutical products to cope with an ageing, and ailing, U.S. population.[14]

Rising demand for biofuels is expected to be good news for the biotechnology sector, with the Department of Energy estimating ethanol usage could reduce U.S. petroleum-derived fuel consumption by up to 30% by 2030. The biotechnology sector has allowed the U.S. farming industry to rapidly increase its supply of corn and soybeansthe main inputs into biofuelsby developing genetically modified seeds which are resistant to pests and drought. By boosting farm productivity, biotechnology plays a crucial role in ensuring that biofuel production targets are met.[15]

Biotechnology has applications in four major industrial areas, including health care (medical), crop production and agriculture, non food (industrial) uses of crops and other products (e.g. biodegradable plastics, vegetable oil, biofuels), and environmental uses.

For example, one application of biotechnology is the directed use of organisms for the manufacture of organic products (examples include beer and milk products). Another example is using naturally present bacteria by the mining industry in bioleaching. Biotechnology is also used to recycle, treat waste, clean up sites contaminated by industrial activities (bioremediation), and also to produce biological weapons.

A series of derived terms have been coined to identify several branches of biotechnology; for example:

The investment and economic output of all of these types of applied biotechnologies is termed the "bioeconomy".

In medicine, modern biotechnology finds applications in areas such as pharmaceutical drug discovery and production, pharmacogenomics, and genetic testing (or genetic screening).

Pharmacogenomics (a combination of pharmacology and genomics) is the technology that analyses how genetic makeup affects an individual's response to drugs.[17] It deals with the influence of genetic variation on drug response in patients by correlating gene expression or single-nucleotide polymorphisms with a drug's efficacy or toxicity.[18] By doing so, pharmacogenomics aims to develop rational means to optimize drug therapy, with respect to the patients' genotype, to ensure maximum efficacy with minimal adverse effects.[19] Such approaches promise the advent of "personalized medicine"; in which drugs and drug combinations are optimized for each individual's unique genetic makeup.[20][21]

Biotechnology has contributed to the discovery and manufacturing of traditional small molecule pharmaceutical drugs as well as drugs that are the product of biotechnology (biopharmaceutics). Modern biotechnology can be used to manufacture existing medicines relatively easily and cheaply. The first genetically engineered products were medicines designed to treat human diseases. To cite one example, in 1978 Genentech developed synthetic humanized insulin by joining its gene with a plasmid vector inserted into the bacterium Escherichia coli. Insulin, widely used for the treatment of diabetes, was previously extracted from the pancreas of abattoir animals (cattle and/or pigs). The resulting genetically engineered bacterium enabled the production of vast quantities of synthetic human insulin at relatively low cost.[22][23] Biotechnology has also enabled emerging therapeutics like gene therapy. The application of biotechnology to basic science (for example through the Human Genome Project) has also dramatically improved our understanding of biology and as our scientific knowledge of normal and disease biology has increased, our ability to develop new medicines to treat previously untreatable diseases has increased as well.[23]

Genetic testing allows the genetic diagnosis of vulnerabilities to inherited diseases, and can also be used to determine a child's parentage (genetic mother and father) or in general a person's ancestry. In addition to studying chromosomes to the level of individual genes, genetic testing in a broader sense includes biochemical tests for the possible presence of genetic diseases, or mutant forms of genes associated with increased risk of developing genetic disorders. Genetic testing identifies changes in chromosomes, genes, or proteins.[24] Most of the time, testing is used to find changes that are associated with inherited disorders. The results of a genetic test can confirm or rule out a suspected genetic condition or help determine a person's chance of developing or passing on a genetic disorder. As of 2011 several hundred genetic tests were in use.[25][26] Since genetic testing may open up ethical or psychological problems, genetic testing is often accompanied by genetic counseling.

Genetically modified crops ("GM crops", or "biotech crops") are plants used in agriculture, the DNA of which has been modified with genetic engineering techniques. In most cases the aim is to introduce a new trait to the plant which does not occur naturally in the species.

Examples in food crops include resistance to certain pests,[27] diseases,[28] stressful environmental conditions,[29] resistance to chemical treatments (e.g. resistance to a herbicide[30]), reduction of spoilage,[31] or improving the nutrient profile of the crop.[32] Examples in non-food crops include production of pharmaceutical agents,[33] biofuels,[34] and other industrially useful goods,[35] as well as for bioremediation.[36][37]

Farmers have widely adopted GM technology. Between 1996 and 2011, the total surface area of land cultivated with GM crops had increased by a factor of 94, from 17,000 square kilometers (4,200,000 acres) to 1,600,000 km2 (395 million acres).[38] 10% of the world's crop lands were planted with GM crops in 2010.[38] As of 2011, 11 different transgenic crops were grown commercially on 395 million acres (160 million hectares) in 29 countries such as the USA, Brazil, Argentina, India, Canada, China, Paraguay, Pakistan, South Africa, Uruguay, Bolivia, Australia, Philippines, Myanmar, Burkina Faso, Mexico and Spain.[38]

Genetically modified foods are foods produced from organisms that have had specific changes introduced into their DNA with the methods of genetic engineering. These techniques have allowed for the introduction of new crop traits as well as a far greater control over a food's genetic structure than previously afforded by methods such as selective breeding and mutation breeding.[39] Commercial sale of genetically modified foods began in 1994, when Calgene first marketed its Flavr Savr delayed ripening tomato.[40] To date, genetic modification of foods has primarily focused on cash crops in high demand by farmers, such as soybean, corn, canola, and cotton seed oil. These have been engineered for resistance to pathogens and herbicides and for better nutrient profiles. GM livestock have also been experimentally developed, although as of November 2013 none were on the market.[41]

There is a scientific consensus[42][43][44][45] that currently available food derived from GM crops poses no greater risk to human health than conventional food,[46][47][48][49][50] but that each GM food needs to be tested on a case-by-case basis before introduction.[51][52][53] Nonetheless, members of the public are much less likely than scientists to perceive GM foods as safe.[54][55][56][57] The legal and regulatory status of GM foods varies by country, with some nations banning or restricting them, and others permitting them with widely differing degrees of regulation.[58][59][60][61]

GM crops also provide a number of ecological benefits, if not used in excess.[62] However, opponents have objected to GM crops per se on several grounds, including environmental concerns, whether food produced from GM crops is safe, whether GM crops are needed to address the world's food needs, and economic concerns raised by the fact these organisms are subject to intellectual property law.

Industrial biotechnology (known mainly in Europe as white biotechnology) is the application of biotechnology for industrial purposes, including industrial fermentation. It includes the practice of using cells such as micro-organisms, or components of cells like enzymes, to generate industrially useful products in sectors such as chemicals, food and feed, detergents, paper and pulp, textiles and biofuels.[63] In doing so, biotechnology uses renewable raw materials and may contribute to lowering greenhouse gas emissions and moving away from a petrochemical-based economy.[64]

The environment can be affected by biotechnologies, both positively and adversely. Vallero and others have argued that the difference between beneficial biotechnology (e.g. bioremediation to clean up an oil spill or hazardous chemical leak) versus the adverse effects stemming from biotechnological enterprises (e.g. flow of genetic material from transgenic organisms into wild strains) can be seen as applications and implications, respectively.[65] Cleaning up environmental wastes is an example of an application of environmental biotechnology; whereas loss of biodiversity or loss of containment of a harmful microbe are examples of environmental implications of biotechnology.

The regulation of genetic engineering concerns approaches taken by governments to assess and manage the risks associated with the use of genetic engineering technology, and the development and release of genetically modified organisms (GMO), including genetically modified crops and genetically modified fish. There are differences in the regulation of GMOs between countries, with some of the most marked differences occurring between the USA and Europe.[66] Regulation varies in a given country depending on the intended use of the products of the genetic engineering. For example, a crop not intended for food use is generally not reviewed by authorities responsible for food safety.[67] The European Union differentiates between approval for cultivation within the EU and approval for import and processing. While only a few GMOs have been approved for cultivation in the EU, a number of GMOs have been approved for import and processing.[68] The cultivation of GMOs has triggered a debate about coexistence of GM and non-GM crops. Depending on the coexistence regulations, incentives for cultivation of GM crops differ.[69]

In 1988, after prompting from the United States Congress, the National Institute of General Medical Sciences (National Institutes of Health) (NIGMS) instituted a funding mechanism for biotechnology training. Universities nationwide compete for these funds to establish Biotechnology Training Programs (BTPs). Each successful application is generally funded for five years then must be competitively renewed. Graduate students in turn compete for acceptance into a BTP; if accepted, then stipend, tuition and health insurance support is provided for two or three years during the course of their Ph.D. thesis work. Nineteen institutions offer NIGMS supported BTPs.[70] Biotechnology training is also offered at the undergraduate level and in community colleges.

The literature about biodiversity and GE food/feed consumption has sometimes resulted in animated debate regarding the suitability of the experimental designs, the choice of the statistical methods or the public accessibility of data. Such debate, even if positive and part of the natural process of review by the scientific community, has frequently been distorted by the media and often used politically and inappropriately in anti-GE crops campaigns.

Domingo, José L.; Bordonaba, Jordi Giné (2011). "A literature review on the safety assessment of genetically modified plants" (PDF). Environment International. 37: 734–742. doi:10.1016/j.envint.2011.01.003. In spite of this, the number of studies specifically focused on safety assessment of GM plants is still limited. However, it is important to remark that for the first time, a certain equilibrium in the number of research groups suggesting, on the basis of their studies, that a number of varieties of GM products (mainly maize and soybeans) are as safe and nutritious as the respective conventional non-GM plant, and those raising still serious concerns, was observed. Moreover, it is worth mentioning that most of the studies demonstrating that GM foods are as nutritional and safe as those obtained by conventional breeding, have been performed by biotechnology companies or associates, which are also responsible of commercializing these GM plants. Anyhow, this represents a notable advance in comparison with the lack of studies published in recent years in scientific journals by those companies.

Krimsky, Sheldon (2015). "An Illusory Consensus behind GMO Health Assessment" (PDF). Science, Technology, & Human Values: 1–32. doi:10.1177/0162243915598381. I began this article with the testimonials from respected scientists that there is literally no scientific controversy over the health effects of GMOs. My investigation into the scientific literature tells another story.

And contrast:

Panchin, Alexander Y.; Tuzhikov, Alexander I. (January 14, 2016). "Published GMO studies find no evidence of harm when corrected for multiple comparisons". Critical Reviews in Biotechnology. doi:10.3109/07388551.2015.1130684. ISSN 0738-8551. Here, we show that a number of articles, some of which have strongly and negatively influenced the public opinion on GM crops and even provoked political actions, such as GMO embargo, share common flaws in the statistical evaluation of the data. Having accounted for these flaws, we conclude that the data presented in these articles does not provide any substantial evidence of GMO harm.

The presented articles suggesting possible harm of GMOs received high public attention. However, despite their claims, they actually weaken the evidence for the harm and lack of substantial equivalency of studied GMOs. We emphasize that with over 1783 published articles on GMOs over the last 10 years it is expected that some of them should have reported undesired differences between GMOs and conventional crops even if no such differences exist in reality.

and

Yang, Y.T.; Chen, B. (2016). "Governing GMOs in the USA: science, law and public health". Journal of the Science of Food and Agriculture. 96: 1851–1855. doi:10.1002/jsfa.7523. It is therefore not surprising that efforts to require labeling and to ban GMOs have been a growing political issue in the USA (citing Domingo and Bordonaba, 2011).

Overall, a broad scientific consensus holds that currently marketed GM food poses no greater risk than conventional food... Major national and international science and medical associations have stated that no adverse human health effects related to GMO food have been reported or substantiated in peer-reviewed literature to date.

Despite various concerns, today, the American Association for the Advancement of Science, the World Health Organization, and many independent international science organizations agree that GMOs are just as safe as other foods. Compared with conventional breeding techniques, genetic engineering is far more precise and, in most cases, less likely to create an unexpected outcome.

Pinholster, Ginger (October 25, 2012). "AAAS Board of Directors: Legally Mandating GM Food Labels Could "Mislead and Falsely Alarm Consumers"". American Association for the Advancement of Science. Retrieved February 8, 2016.

"REPORT 2 OF THE COUNCIL ON SCIENCE AND PUBLIC HEALTH (A-12): Labeling of Bioengineered Foods" (PDF). American Medical Association. 2012. Retrieved March 19, 2016. Bioengineered foods have been consumed for close to 20 years, and during that time, no overt consequences on human health have been reported and/or substantiated in the peer-reviewed literature.

GM foods currently available on the international market have passed safety assessments and are not likely to present risks for human health. In addition, no effects on human health have been shown as a result of the consumption of such foods by the general population in the countries where they have been approved. Continuous application of safety assessments based on the Codex Alimentarius principles and, where appropriate, adequate post market monitoring, should form the basis for ensuring the safety of GM foods.

"Genetically modified foods and health: a second interim statement" (PDF). British Medical Association. March 2004. Retrieved March 21, 2016. In our view, the potential for GM foods to cause harmful health effects is very small and many of the concerns expressed apply with equal vigour to conventionally derived foods. However, safety concerns cannot, as yet, be dismissed completely on the basis of information currently available.

When seeking to optimise the balance between benefits and risks, it is prudent to err on the side of caution and, above all, learn from accumulating knowledge and experience. Any new technology such as genetic modification must be examined for possible benefits and risks to human health and the environment. As with all novel foods, safety assessments in relation to GM foods must be made on a case-by-case basis.

Members of the GM jury project were briefed on various aspects of genetic modification by a diverse group of acknowledged experts in the relevant subjects. The GM jury reached the conclusion that the sale of GM foods currently available should be halted and the moratorium on commercial growth of GM crops should be continued. These conclusions were based on the precautionary principle and lack of evidence of any benefit. The Jury expressed concern over the impact of GM crops on farming, the environment, food safety and other potential health effects.

The Royal Society review (2002) concluded that the risks to human health associated with the use of specific viral DNA sequences in GM plants are negligible, and while calling for caution in the introduction of potential allergens into food crops, stressed the absence of evidence that commercially available GM foods cause clinical allergic manifestations. The BMA shares the view that there is no robust evidence to prove that GM foods are unsafe, but we endorse the call for further research and surveillance to provide convincing evidence of safety and benefit.

See more here:
Biotechnology - Wikipedia

Read More...

Arthritis – Wikipedia

October 19th, 2016 3:40 pm

Arthritis is a term often used to mean any disorder that affects joints.[1] Symptoms generally include joint pain and stiffness.[1] Other symptoms may include redness, warmth, swelling, and decreased range of motion of the affected joints.[1][2] In some types other organs are also affected.[3] Onset can be gradual or sudden.[4]

There are over 100 types of arthritis.[5][4] The most common forms are osteoarthritis (degenerative joint disease) and rheumatoid arthritis. Osteoarthritis usually occurs with age and affects the fingers, knees, and hips. Rheumatoid arthritis is an autoimmune disorder that often affects the hands and feet.[3] Other types include gout, lupus, fibromyalgia, and septic arthritis.[3][6] They are all types of rheumatic disease.[1]

Treatment may include resting the joint and alternating between applying ice and heat. Weight loss and exercise may also be useful.[3] Pain medications such as ibuprofen and acetaminophen (paracetamol) may be used.[7] In some cases, a joint replacement may be useful.[3]

Osteoarthritis affects more than 3.8% of people while rheumatoid arthritis affects about 0.24% of people.[8] Gout affects about 1 to 2% of the Western population at some point in their lives.[9] In Australia and the United States more than 20% of people have a type of arthritis.[6][10] Overall the disease becomes more common with age.[6] Arthritis is a common reason that people miss work and can result in a decreased quality of life.[7] The term is from Greek arthro- meaning joint and -itis meaning inflammation.[11]

There are several diseases in which joint pain is primary and is considered the main feature. Generally, when a person has "arthritis" it means that they have one of these diseases.

Joint pain can also be a symptom of other diseases; in that case, the arthritis is considered to be secondary to the main disease.

An undifferentiated arthritis is an arthritis that does not fit into well-known clinical disease categories, possibly being an early stage of a definite rheumatic disease.[16]

Disability due to musculoskeletal disorders increased by 45% from 1990 to 2010. Of these, osteoarthritis is the fastest-increasing major health condition.[17] Among the many reports on the increased prevalence of musculoskeletal conditions, data from Africa are lacking, and prevalence there is probably underestimated. A systematic review assessed the prevalence of arthritis in Africa and included twenty population-based and seven hospital-based studies.[18] The majority of the studies, twelve, were from South Africa; nine were well conducted, eleven were of moderate quality, and seven were conducted poorly.

Pain, which can vary in severity, is a common symptom in virtually all types of arthritis. Other symptoms include swelling, joint stiffness and aching around the joint(s). Arthritic disorders like lupus and rheumatoid arthritis can also affect other organs in the body, leading to a variety of further symptoms.[20]

It is common in advanced arthritis for significant secondary changes to occur. For example, arthritic symptoms might make it difficult for a person to move around and/or exercise, which can in turn lead to secondary effects.

These changes, in addition to the primary symptoms, can have a huge impact on quality of life.

Arthritis is the most common cause of disability in the USA. More than 20 million individuals with arthritis have severe limitations in function on a daily basis.[21] Absenteeism and frequent visits to the physician are common in individuals who have arthritis. Arthritis can make it very difficult for individuals to be physically active and some become homebound.

It is estimated that the total cost of arthritis cases is close to $100 billion, of which almost 50% is from lost earnings. Each year, arthritis results in nearly 1 million hospitalizations and close to 45 million outpatient visits to health care centers.[22]

Decreased mobility, in combination with the above symptoms, can make it difficult for an individual to remain physically active, contributing to an increased risk of obesity, high cholesterol or vulnerability to heart disease.[23] People with arthritis are also at increased risk of depression, which may be a response to numerous factors, including fear of worsening symptoms.[24]

Diagnosis is made by clinical examination from an appropriate health professional, and may be supported by other tests such as radiology and blood tests, depending on the type of suspected arthritis.[25] All arthritides potentially feature pain. Pain patterns may differ depending on the arthritides and the location. Rheumatoid arthritis is generally worse in the morning and associated with stiffness; in the early stages, patients often have no symptoms after a morning shower. Osteoarthritis, on the other hand, tends to be worse after exercise. In the aged and children, pain might not be the main presenting feature; the aged patient simply moves less, the infantile patient refuses to use the affected limb.[citation needed]

Elements of the history of the disorder guide diagnosis. Important features are speed and time of onset, pattern of joint involvement, symmetry of symptoms, early morning stiffness, tenderness, gelling or locking with inactivity, aggravating and relieving factors, and other systemic symptoms. Physical examination may confirm the diagnosis, or may indicate systemic disease. Radiographs are often used to follow progression or help assess severity.

Blood tests and X-rays of the affected joints often are performed to make the diagnosis. Screening blood tests are indicated if certain arthritides are suspected. These might include: rheumatoid factor, antinuclear factor (ANF), extractable nuclear antigen, and specific antibodies.

Osteoarthritis is the most common form of arthritis.[26] It can affect both the larger and the smaller joints of the body, including the hands, wrists, feet, back, hip, and knee. The disease is essentially one acquired from daily wear and tear of the joint; however, osteoarthritis can also occur as a result of injury. In recent years, some joint or limb deformities, such as knock-knee or acetabular overcoverage or dysplasia, have also been considered as a predisposing factor for knee or hip osteoarthritis. Osteoarthritis begins in the cartilage and eventually causes the two opposing bones to erode into each other. The condition starts with minor pain during physical activity, but soon the pain can be continuous and even occur while in a state of rest. The pain can be debilitating and prevent one from doing some activities. Osteoarthritis typically affects the weight-bearing joints, such as the back, knee and hip. Unlike rheumatoid arthritis, osteoarthritis is most commonly a disease of the elderly. More than 30 percent of women have some degree of osteoarthritis by age 65. Risk factors for osteoarthritis include prior joint trauma, obesity, and a sedentary lifestyle.

Rheumatoid arthritis (RA) is a disorder in which the body's own immune system starts to attack body tissues. The attack is not only directed at the joint but to many other parts of the body. In rheumatoid arthritis, most damage occurs to the joint lining and cartilage which eventually results in erosion of two opposing bones. RA often affects joints in the fingers, wrists, knees and elbows, is symmetrical (appears on both sides of the body), and can lead to severe deformity in a few years if not treated. RA occurs mostly in people aged 20 and above. In children, the disorder can present with a skin rash, fever, pain, disability, and limitations in daily activities. With earlier diagnosis and aggressive treatment, many individuals can lead a better quality of life than if going undiagnosed for long after RA's onset. The drugs to treat RA range from corticosteroids to monoclonal antibodies given intravenously. Treatments also include analgesics such as NSAIDs and disease-modifying antirheumatic drugs (DMARDs), while in rare cases, surgery may be required to replace joints, but there is no cure for the disease.[27]

Treatment with DMARDs is designed to dampen the adaptive immune response that drives RA, mediated in part by CD4+ T helper (Th) cells, specifically Th17 cells.[28] Th17 cells are present in higher quantities at the site of bone destruction in joints and produce inflammatory cytokines associated with inflammation, such as interleukin-17 (IL-17).[29]

Bone erosion is a central feature of rheumatoid arthritis. Bone continuously undergoes remodeling by actions of bone resorbing osteoclasts and bone forming osteoblasts. One of the main triggers of bone erosion in the joints in rheumatoid arthritis is inflammation of the synovium, caused in part by the production of pro-inflammatory cytokines and receptor activator of nuclear factor kappa B ligand (RANKL), a cell surface protein present in Th17 cells and osteoblasts.[29] Osteoclast activity can be directly induced by osteoblasts through the RANK/RANKL mechanism.[30]

Lupus is a common collagen vascular disorder that can be present with severe arthritis. Other features of lupus include a skin rash, extreme photosensitivity, hair loss, kidney problems, lung fibrosis and constant joint pain.[31]

Gout is caused by deposition of uric acid crystals in the joint, causing inflammation. There is also an uncommon form of gouty arthritis caused by the formation of rhomboid crystals of calcium pyrophosphate known as pseudogout. In the early stages, the gouty arthritis usually occurs in one joint, but with time, it can occur in many joints and be quite crippling. The joints in gout can often become swollen and lose function. Gouty arthritis can become particularly painful and potentially debilitating when gout cannot successfully be treated.[32] When uric acid levels and gout symptoms cannot be controlled with standard gout medicines that decrease the production of uric acid (e.g., allopurinol, febuxostat) or increase uric acid elimination from the body through the kidneys (e.g., probenecid), this can be referred to as refractory chronic gout or RCG.[33]

Infectious arthritis is another severe form of arthritis. It presents with sudden onset of chills, fever and joint pain. The condition is caused by bacteria elsewhere in the body. Infectious arthritis must be rapidly diagnosed and treated promptly to prevent irreversible joint damage.[37]

Psoriasis can develop into psoriatic arthritis. With psoriatic arthritis, most individuals develop the skin problem first and then the arthritis. The typical features are of continuous joint pains, stiffness and swelling. The disease does recur with periods of remission but there is no cure for the disorder. A small percentage develop a severe painful and destructive form of arthritis which destroys the small joints in the hands and can lead to permanent disability and loss of hand function.[38]

There is no known cure for either rheumatoid or osteoarthritis. Treatment options vary depending on the type of arthritis and include physical therapy, lifestyle changes (including exercise and weight control), orthopedic bracing, and medications. Joint replacement surgery may be required in eroding forms of arthritis. Medications can help reduce inflammation in the joint which decreases pain. Moreover, by decreasing inflammation, the joint damage may be slowed.

In general, studies have shown that physical exercise of the affected joint can noticeably improve long-term pain relief. Furthermore, exercise of the arthritic joint is encouraged to maintain the health of the particular joint and the overall body of the person.[39]

Individuals with arthritis can benefit from both physical and occupational therapy. In arthritis the joints become stiff and the range of movement can be limited. Physical therapy has been shown to significantly improve function, decrease pain, and delay need for surgical intervention in advanced cases.[40] Exercise prescribed by a physical therapist has been shown to be more effective than medications in treating osteoarthritis of the knee. Exercise often focuses on improving muscle strength, endurance and flexibility. In some cases, exercises may be designed to train balance. Occupational therapy can provide assistance with activities as well as equipment.

There are several types of medications that are used for the treatment of arthritis. Treatment typically begins with medications that have the fewest side effects with further medications being added if insufficiently effective.[41]

Depending on the type of arthritis, the medications that are given may be different. For example, the first-line treatment for osteoarthritis is acetaminophen (paracetamol) while for inflammatory arthritis it involves non-steroidal anti-inflammatory drugs (NSAIDs) like ibuprofen. Opioids and NSAIDs are less well tolerated.[42]

Rheumatoid arthritis (RA) is autoimmune, so in addition to pain medications and anti-inflammatory drugs, it is treated with another category of drugs called disease-modifying anti-rheumatic drugs (DMARDs). An example of this type of drug is methotrexate. These drugs act on the immune system and slow down the progression of RA.

A number of rheumasurgical interventions have been incorporated in the treatment of arthritis since the 1950s. Arthroscopic surgery for osteoarthritis of the knee provides no additional benefit to optimized physical and medical therapy.[43]

A Cochrane review in 2000 concluded that transcutaneous electrical nerve stimulation (TENS) for knee osteoarthritis was more effective in pain control than placebo.[44][needs update] Low-level laser therapy may be considered for relief of pain and stiffness associated with arthritis.[45] Evidence of benefit is tentative.[46][47]

Pulsed electromagnetic field therapy has tentative evidence supporting improved functioning but no evidence of improved pain in osteoarthritis.[48] The FDA has not approved PEMF for the treatment of arthritis. In Canada, PEMF devices are legally licensed by Health Canada for the treatment of pain associated with arthritic conditions.

Arthritis is predominantly a disease of the elderly, but children can also be affected by the disease. More than 70% of individuals in North America affected by arthritis are over the age of 65.[citation needed] Arthritis is more common in women than men at all ages and affects all races, ethnic groups and cultures. In the United States, a CDC survey based on data from 2007–2009 showed that 22.2% (49.9 million) of adults aged 18 years and older had self-reported doctor-diagnosed arthritis, and 9.4% (21.1 million, or 42.4% of those with arthritis) had arthritis-attributable activity limitation (AAAL). With an aging population, this number is expected to increase.[49]
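As a quick sanity check on the arithmetic behind these survey figures, the short sketch below (illustrative only, using nothing beyond the numbers quoted above) confirms that the reported subtotals are internally consistent.

# Consistency check of the CDC 2007-2009 figures quoted above (illustrative only).

adults_with_arthritis = 49.9e6   # reported as 22.2% of US adults
prevalence = 0.222
adults_with_aaal = 21.1e6        # arthritis-attributable activity limitation (AAAL)

implied_adult_population = adults_with_arthritis / prevalence        # about 225 million adults
aaal_share_of_arthritis = adults_with_aaal / adults_with_arthritis   # about 0.423

print(f"Implied US adult population: {implied_adult_population / 1e6:.0f} million")
print(f"AAAL share among adults with arthritis: {aaal_share_of_arthritis:.1%}")  # close to the quoted 42.4%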

While evidence of primary ankle osteoarthritis has been discovered in dinosaurs,[50] the first known traces of human arthritis date back as far as 4500 BC. In early reports, arthritis was frequently referred to as the most common ailment of prehistoric peoples.[51] It was noted in skeletal remains of Native Americans found in Tennessee and parts of what is now Olathe, Kansas. Evidence of arthritis has been found throughout history, from Ötzi, a mummy (circa 3000 BC) found along the border of modern Italy and Austria, to the Egyptian mummies circa 2590 BC.[52]

In 1715, William Musgrave published the second edition of his most important medical work, De arthritide symptomatica, which concerned arthritis and its effects.[53]

Continue reading here:
Arthritis - Wikipedia

Read More...

Russia – Wikipedia, the free encyclopedia

October 17th, 2016 4:44 pm

Coordinates: 60°N 90°E

Russia (Russian: Россия, tr. Rossija; IPA: [rɐˈsʲijə]; from the Greek: Ρωσία (Rus')), also officially known as the Russian Federation[12] (Russian: Российская Федерация, tr. Rossijskaja Federacija; IPA: [rɐˈsʲijskəjə fʲɪdʲɪˈratsɨjə]), is a transcontinental country in Eurasia.[13] At 17,075,200 square kilometres (6,592,800 sq mi),[14] Russia is the largest country in the world, covering more than one eighth of Earth's inhabited land area,[15][16][17] and the ninth most populous, with over 146.6 million people at the end of March 2016.[6][7] Extending across the entirety of northern Asia and much of Eastern Europe, Russia spans eleven time zones and incorporates a wide range of environments and landforms. From northwest to southeast, Russia shares land borders with Norway, Finland, Estonia, Latvia, Lithuania and Poland (both with Kaliningrad Oblast), Belarus, Ukraine, Georgia, Azerbaijan, Kazakhstan, China, Mongolia, and North Korea. It shares maritime borders with Japan by the Sea of Okhotsk and the U.S. state of Alaska across the Bering Strait.

The nation's history began with that of the East Slavs, who emerged as a recognizable group in Europe between the 3rd and 8th centuries AD.[18] Founded and ruled by a Varangian warrior elite and their descendants, the medieval state of Rus arose in the 9th century. In 988 it adopted Orthodox Christianity from the Byzantine Empire,[19] beginning the synthesis of Byzantine and Slavic cultures that defined Russian culture for the next millennium.[19] Rus' ultimately disintegrated into a number of smaller states; most of the Rus' lands were overrun by the Mongol invasion and became tributaries of the nomadic Golden Horde in the 13th century.[20] The Grand Duchy of Moscow gradually reunified the surrounding Russian principalities, achieved independence from the Golden Horde, and came to dominate the cultural and political legacy of Kievan Rus'. By the 18th century, the nation had greatly expanded through conquest, annexation, and exploration to become the Russian Empire, which was the third largest empire in history, stretching from Poland on the west to Alaska on the east.[21][22]

Following the Russian Revolution, the Russian Soviet Federative Socialist Republic became the largest and leading constituent of the Union of Soviet Socialist Republics, the world's first constitutionally socialist state.[23] The Soviet Union played a decisive role in the Allied victory in World War II,[24][25] and emerged as a recognized superpower and rival to the United States during the Cold War. The Soviet era saw some of the most significant technological achievements of the 20th century, including the world's first human-made satellite and the launching of the first humans in space. By the end of 1990, the Soviet Union had the world's second largest economy, largest standing military in the world and the largest stockpile of weapons of mass destruction.[26][27][28] Following the partition of the Soviet Union in 1991, fourteen independent republics emerged from the USSR; as the largest, most populous, and most economically developed republic, the Russian SFSR reconstituted itself as the Russian Federation and is recognized as the continuing legal personality and sole successor state of the Soviet Union.[29] It is governed as a federal semi-presidential republic.

The Russian economy ranks as the twelfth largest by nominal GDP and sixth largest by purchasing power parity in 2015.[30] Russia's extensive mineral and energy resources are the largest such reserves in the world,[31] making it one of the leading producers of oil and natural gas globally.[32][33] The country is one of the five recognized nuclear weapons states and possesses the largest stockpile of weapons of mass destruction.[34] Russia is a great power and a permanent member of the United Nations Security Council, as well as a member of the G20, the Council of Europe, the Asia-Pacific Economic Cooperation (APEC), the Shanghai Cooperation Organisation (SCO), the Organization for Security and Co-operation in Europe (OSCE), and the World Trade Organization (WTO), as well as being the leading member of the Commonwealth of Independent States (CIS), the Collective Security Treaty Organization (CSTO) and one of the five members of the Eurasian Economic Union (EEU), along with Armenia, Belarus, Kazakhstan, and Kyrgyzstan.

The name Russia is derived from Rus, a medieval state populated mostly by the East Slavs. However, this proper name became more prominent in later history, and the country was typically called by its inhabitants «Русская земля» (russkaja zemlja), which can be translated as "Russian Land" or "Land of Rus'". In order to distinguish this state from other states derived from it, it is denoted as Kievan Rus' by modern historiography. The name Rus itself comes from the Rus people, a group of Varangians (possibly Swedish Vikings)[35][36] who founded the state of Rus (Русь).

An old Latin version of the name Rus' was Ruthenia, mostly applied to the western and southern regions of Rus' that were adjacent to Catholic Europe. The current name of the country, Россия (Rossija), comes from the Byzantine Greek designation of the Kievan Rus', Ρωσσία, spelt Ρωσία (pronounced [roˈsia]) in Modern Greek.[37]

The standard way to refer to citizens of Russia is "Russians".[38]

Nomadic pastoralism developed in the Pontic-Caspian steppe beginning in the Chalcolithic.[39]

In classical antiquity, the Pontic Steppe was known as Scythia. Beginning in the 8th century BC, Ancient Greek traders brought their civilization to the trade emporiums in Tanais and Phanagoria. The Romans settled on the western part of the Caspian Sea, where their empire stretched towards the east.[dubious discuss][40] In the 3rd to 4th centuries AD a semi-legendary Gothic kingdom of Oium existed in Southern Russia until it was overrun by Huns. Between the 3rd and 6th centuries AD, the Bosporan Kingdom, a Hellenistic polity which succeeded the Greek colonies,[41] was also overwhelmed by nomadic invasions led by warlike tribes, such as the Huns and Eurasian Avars.[42] A Turkic people, the Khazars, ruled the lower Volga basin steppes between the Caspian and Black Seas until the 10th century.[43]

The ancestors of modern Russians are the Slavic tribes, whose original home is thought by some scholars to have been the wooded areas of the Pinsk Marshes.[44] The East Slavs gradually settled Western Russia in two waves: one moving from Kiev toward present-day Suzdal and Murom and another from Polotsk toward Novgorod and Rostov. From the 7th century onwards, the East Slavs constituted the bulk of the population in Western Russia[45] and assimilated the native Finno-Ugric peoples, including the Merya, the Muromians, and the Meshchera.

The establishment of the first East Slavic states in the 9th century coincided with the arrival of Varangians, the traders, warriors and settlers from the Baltic Sea region. Primarily they were Vikings of Scandinavian origin, who ventured along the waterways extending from the eastern Baltic to the Black and Caspian Seas.[46] According to the Primary Chronicle, a Varangian from Rus' people, named Rurik, was elected ruler of Novgorod in 862. In 882 his successor Oleg ventured south and conquered Kiev,[47] which had been previously paying tribute to the Khazars, founding Kievan Rus'. Oleg, Rurik's son Igor and Igor's son Sviatoslav subsequently subdued all local East Slavic tribes to Kievan rule, destroyed the Khazar khaganate and launched several military expeditions to Byzantium and Persia.

In the 10th to 11th centuries Kievan Rus' became one of the largest and most prosperous states in Europe.[48] The reigns of Vladimir the Great (980–1015) and his son Yaroslav the Wise (1019–1054) constitute the Golden Age of Kiev, which saw the acceptance of Orthodox Christianity from Byzantium and the creation of the first East Slavic written legal code, the Russkaya Pravda.

In the 11th and 12th centuries, constant incursions by nomadic Turkic tribes, such as the Kipchaks and the Pechenegs, caused a massive migration of Slavic populations to the safer, heavily forested regions of the north, particularly to the area known as Zalesye.[49]

The age of feudalism and decentralization was marked by constant in-fighting between members of the Rurik Dynasty that ruled Kievan Rus' collectively. Kiev's dominance waned, to the benefit of Vladimir-Suzdal in the north-east, Novgorod Republic in the north-west and Galicia-Volhynia in the south-west.

Ultimately Kievan Rus' disintegrated, with the final blow being the Mongol invasion of 1237–40[50] that resulted in the destruction of Kiev[51] and the death of about half the population of Rus'.[52] The invading Mongol elite, together with their conquered Turkic subjects (Cumans, Kipchaks, Bulgars), became known as Tatars, forming the state of the Golden Horde, which pillaged the Russian principalities; the Mongols ruled the Cuman-Kipchak confederation and Volga Bulgaria (modern-day southern and central expanses of Russia) for over two centuries.[53]

Galicia-Volhynia was eventually assimilated by the Kingdom of Poland, while the Mongol-dominated Vladimir-Suzdal and Novgorod Republic, two regions on the periphery of Kiev, established the basis for the modern Russian nation.[19] Novgorod and Pskov retained some degree of autonomy during the time of the Mongol yoke and were largely spared the atrocities that affected the rest of the country. Led by Prince Alexander Nevsky, Novgorodians repelled the invading Swedes in the Battle of the Neva in 1240, as well as the Germanic crusaders in the Battle of the Ice in 1242, breaking their attempts to colonize the Northern Rus'.

The most powerful state to eventually arise after the destruction of Kievan Rus' was the Grand Duchy of Moscow ("Muscovy" in the Western chronicles), initially a part of Vladimir-Suzdal. While still under the domain of the Mongol-Tatars and with their connivance, Moscow began to assert its influence in the Central Rus' in the early 14th century, gradually becoming the leading force in the process of the Rus' lands' reunification and expansion of Russia.[citation needed] Moscow's last rival, the Novgorod Republic, prospered as the chief fur trade center and the easternmost port of the Hanseatic League.

Times remained difficult, with frequent Mongol-Tatar raids. Agriculture suffered from the beginning of the Little Ice Age. As in the rest of Europe, plague was a frequent occurrence between 1350 and 1490.[54] However, because of the lower population density and better hygiene (the widespread practice of banya, a wet steam bath), the death rate from plague was not as severe as in Western Europe,[55] and population numbers recovered by 1500.[54]

Led by Prince Dmitry Donskoy of Moscow and helped by the Russian Orthodox Church, the united army of Russian principalities inflicted a milestone defeat on the Mongol-Tatars in the Battle of Kulikovo in 1380. Moscow gradually absorbed the surrounding principalities, including formerly strong rivals such as Tver and Novgorod.

IvanIII ("the Great") finally threw off the control of the Golden Horde and consolidated the whole of Central and Northern Rus' under Moscow's dominion. He was also the first to take the title "Grand Duke of all the Russias".[56] After the fall of Constantinople in 1453, Moscow claimed succession to the legacy of the Eastern Roman Empire. IvanIII married Sophia Palaiologina, the niece of the last Byzantine emperor ConstantineXI, and made the Byzantine double-headed eagle his own, and eventually Russia's, coat-of-arms.

In development of the Third Rome ideas, the Grand Duke Ivan IV (the "Terrible")[57] was officially crowned the first Tsar ("Caesar") of Russia in 1547. The Tsar promulgated a new code of laws (Sudebnik of 1550), established the first Russian feudal representative body (Zemsky Sobor) and introduced local self-management into the rural regions.[58][59]

During his long reign, Ivan the Terrible nearly doubled the already large Russian territory by annexing the three Tatar khanates (parts of the disintegrated Golden Horde): Kazan and Astrakhan along the Volga River, and the Siberian Khanate in southwestern Siberia. Thus, by the end of the 16th century Russia was transformed into a multiethnic, multidenominational and transcontinental state.

However, the Tsardom was weakened by the long and unsuccessful Livonian War against the coalition of Poland, Lithuania, and Sweden for access to the Baltic coast and sea trade.[60] At the same time, the Tatars of the Crimean Khanate, the only remaining successor to the Golden Horde, continued to raid Southern Russia.[61] In an effort to restore the Volga khanates, Crimeans and their Ottoman allies invaded central Russia and were even able to burn down parts of Moscow in 1571.[62] But in the next year the large invading army was thoroughly defeated by Russians in the Battle of Molodi, forever eliminating the threat of an Ottoman–Crimean expansion into Russia. The slave raids of Crimeans, however, did not cease until the late 17th century, though the construction of new fortification lines across Southern Russia, such as the Great Abatis Line, constantly narrowed the area accessible to incursions.[63]

The death of Ivan's sons marked the end of the ancient Rurik Dynasty in 1598, and in combination with the famine of 1601–03[64] led to civil war, the rule of pretenders, and foreign intervention during the Time of Troubles in the early 17th century.[65] The Polish-Lithuanian Commonwealth occupied parts of Russia, including Moscow. In 1612, the Poles were forced to retreat by the Russian volunteer corps, led by two national heroes, merchant Kuzma Minin and Prince Dmitry Pozharsky. The Romanov Dynasty acceded to the throne in 1613 by the decision of Zemsky Sobor, and the country started its gradual recovery from the crisis.

Russia continued its territorial growth through the 17th century, which was the age of Cossacks. Cossacks were warriors organized into military communities, resembling pirates and pioneers of the New World. In 1648, the peasants of Ukraine joined the Zaporozhian Cossacks in rebellion against Poland-Lithuania during the Khmelnytsky Uprising in reaction to the social and religious oppression they had been suffering under Polish rule. In 1654, the Ukrainian leader, Bohdan Khmelnytsky, offered to place Ukraine under the protection of the Russian Tsar, Aleksey I. Aleksey's acceptance of this offer led to another Russo-Polish War. Finally, Ukraine was split along the Dnieper River, leaving the western part, right-bank Ukraine, under Polish rule and the eastern part (Left-bank Ukraine and Kiev) under Russian rule. Later, in 1670–71, the Don Cossacks led by Stenka Razin initiated a major uprising in the Volga Region, but the Tsar's troops were successful in defeating the rebels.

In the east, the rapid Russian exploration and colonisation of the huge territories of Siberia was led mostly by Cossacks hunting for valuable furs and ivory. Russian explorers pushed eastward primarily along the Siberian River Routes, and by the mid-17th century there were Russian settlements in Eastern Siberia, on the Chukchi Peninsula, along the Amur River, and on the Pacific coast. In 1648, the Bering Strait between Asia and North America was passed for the first time by Fedot Popov and Semyon Dezhnyov.

Under Peter the Great, Russia was proclaimed an Empire in 1721 and became recognized as a world power. Ruling from 1682 to 1725, Peter defeated Sweden in the Great Northern War, forcing it to cede West Karelia and Ingria (two regions lost by Russia in the Time of Troubles),[66] as well as Estland and Livland, securing Russia's access to the sea and sea trade.[67] On the Baltic Sea Peter founded a new capital called Saint Petersburg, later known as Russia's "Window to Europe". Peter the Great's reforms brought considerable Western European cultural influences to Russia.

The reign of Peter I's daughter Elizabeth in 1741–62 saw Russia's participation in the Seven Years' War (1756–63). During this conflict Russia annexed East Prussia for a while and even took Berlin. However, upon Elizabeth's death, all these conquests were returned to the Kingdom of Prussia by the pro-Prussian Peter III of Russia.

CatherineII ("the Great"), who ruled in 176296, presided over the Age of Russian Enlightenment. She extended Russian political control over the Polish-Lithuanian Commonwealth and incorporated most of its territories into Russia during the Partitions of Poland, pushing the Russian frontier westward into Central Europe. In the south, after successful Russo-Turkish Wars against Ottoman Turkey, Catherine advanced Russia's boundary to the Black Sea, defeating the Crimean Khanate. As a result of victories over Qajar Iran through the Russo-Persian Wars, by the first half of the 19th century Russia also made significant territorial gains in Transcaucasia and the North Caucasus, forcing the former to irrevocably cede what is nowadays Georgia, Dagestan, Azerbaijan and Armenia to Russia.[68][69] This continued with AlexanderI's (180125) wresting of Finland from the weakened kingdom of Sweden in 1809 and of Bessarabia from the Ottomans in 1812. At the same time, Russians colonized Alaska and even founded settlements in California, such as Fort Ross.

In 1803–1806, the first Russian circumnavigation was made, later followed by other notable Russian sea exploration voyages. In 1820, a Russian expedition discovered the continent of Antarctica.

In alliances with various European countries, Russia fought against Napoleon's France. The French invasion of Russia at the height of Napoleon's power in 1812 failed miserably as the obstinate resistance in combination with the bitterly cold Russian winter led to a disastrous defeat of invaders, in which more than 95% of the pan-European Grande Armée perished.[70] Led by Mikhail Kutuzov and Barclay de Tolly, the Russian army ousted Napoleon from the country and drove through Europe in the war of the Sixth Coalition, finally entering Paris. Alexander I headed Russia's delegation at the Congress of Vienna that defined the map of post-Napoleonic Europe.

The officers of the Napoleonic Wars brought ideas of liberalism back to Russia with them and attempted to curtail the tsar's powers during the abortive Decembrist revolt of 1825. At the end of the conservative reign of Nicholas I (1825–55), a zenith period of Russia's power and influence in Europe was disrupted by defeat in the Crimean War. Between 1847 and 1851, about one million people died of Asiatic cholera.[71]

Nicholas's successor Alexander II (1855–81) enacted significant changes in the country, including the emancipation reform of 1861. These Great Reforms spurred industrialization and modernized the Russian army, which had successfully liberated Bulgaria from Ottoman rule in the 1877–78 Russo-Turkish War.

The late 19th century saw the rise of various socialist movements in Russia. Alexander II was killed in 1881 by revolutionary terrorists, and the reign of his son Alexander III (1881–94) was less liberal but more peaceful. The last Russian Emperor, Nicholas II (1894–1917), was unable to prevent the events of the Russian Revolution of 1905, triggered by the unsuccessful Russo-Japanese War and the demonstration incident known as Bloody Sunday. The uprising was put down, but the government was forced to concede major reforms, including granting the freedoms of speech and assembly, the legalization of political parties, and the creation of an elected legislative body, the State Duma of the Russian Empire. The Stolypin agrarian reform led to a massive peasant migration and settlement into Siberia. More than four million settlers arrived in that region between 1906 and 1914.[72]

In 1914, Russia entered World War I in response to Austria-Hungary's declaration of war on Russia's ally Serbia, and fought across multiple fronts while isolated from its Triple Entente allies. In 1916, the Brusilov Offensive of the Russian Army almost completely destroyed the military of Austria-Hungary. However, the already-existing public distrust of the regime was deepened by the rising costs of war, high casualties, and rumors of corruption and treason. All this formed the climate for the Russian Revolution of 1917, carried out in two major acts.

The February Revolution forced Nicholas II to abdicate; he and his family were imprisoned and later executed in Yekaterinburg during the Russian Civil War. The monarchy was replaced by a shaky coalition of political parties that declared itself the Provisional Government. An alternative socialist establishment existed alongside, the Petrograd Soviet, wielding power through the democratically elected councils of workers and peasants, called Soviets. The rule of the new authorities only aggravated the crisis in the country, instead of resolving it. Eventually, the October Revolution, led by Bolshevik leader Vladimir Lenin, overthrew the Provisional Government and gave full governing power to the Soviets, leading to the creation of the world's first socialist state.

Following the October Revolution, a civil war broke out between the anti-Communist White movement and the new Soviet regime with its Red Army. Bolshevist Russia lost its Ukrainian, Polish, Baltic, and Finnish territories by signing the Treaty of Brest-Litovsk that concluded hostilities with the Central Powers of World War I. The Allied powers launched an unsuccessful military intervention in support of anti-Communist forces. In the meantime both the Bolsheviks and White movement carried out campaigns of deportations and executions against each other, known respectively as the Red Terror and White Terror. By the end of the civil war, Russia's economy and infrastructure were heavily damaged. Millions became White émigrés,[73] and the Povolzhye famine of 1921 claimed up to 5 million victims.[74]

The Russian Soviet Federative Socialist Republic (called Russian Socialist Federative Soviet Republic at the time), together with the Ukrainian, Byelorussian, and Transcaucasian Soviet Socialist Republics, formed the Union of Soviet Socialist Republics (USSR), or Soviet Union, on 30 December 1922. Out of the 15 republics that would make up the USSR, the largest in size and over half of the total USSR population was the Russian SFSR, which came to dominate the union for its entire 69-year history.

Following Lenin's death in 1924, a troika was designated to govern the Soviet Union. However, Joseph Stalin, an elected General Secretary of the Communist Party, managed to suppress all opposition groups within the party and consolidate power in his hands. Leon Trotsky, the main proponent of world revolution, was exiled from the Soviet Union in 1929, and Stalin's idea of Socialism in One Country became the primary line. The continued internal struggle in the Bolshevik party culminated in the Great Purge, a period of mass repressions in 1937–38, during which hundreds of thousands of people were executed, including original party members and military leaders accused of coup d'état plots.[75]

Under Stalin's leadership, the government launched a planned economy, industrialisation of the largely rural country, and collectivization of its agriculture. During this period of rapid economic and social change, millions of people were sent to penal labor camps,[76] including many political convicts for their opposition to Stalin's rule; millions were deported and exiled to remote areas of the Soviet Union.[76] The transitional disorganisation of the country's agriculture, combined with the harsh state policies and a drought, led to the Soviet famine of 1932–1933.[77] The Soviet Union, though at a heavy price, was transformed from a largely agrarian economy to a major industrial powerhouse in a short span of time.

The Appeasement policy of Great Britain and France towards Adolf Hitler's annexation of Austria and Czechoslovakia did not stem the increase in the power of Nazi Germany and created a threat of war to the Soviet Union.[citation needed] Around the same time, the Third Reich allied with the Empire of Japan, a rival of the USSR in the Far East and an open enemy of the USSR in the Soviet–Japanese Border Wars in 1938–39.

In August 1939, after another failure of attempts to establish an anti-Nazi alliance with Britain and France,[citation needed] the Soviet government decided to improve relations with Germany by concluding the Molotov-Ribbentrop Pact, pledging non-aggression between the two countries and dividing Eastern Europe into their respective spheres of influence. While Hitler conquered Poland, France and other countries on a single front at the start of World War II, the USSR was able to build up its military and claim some of the former territories of the Russian Empire, Western Ukraine, Hertza region and Northern Bukovina as a result of the Soviet invasion of Poland, Winter War, occupation of the Baltic states and Soviet occupation of Bessarabia and Northern Bukovina.

On 22 June 1941, Nazi Germany broke the non-aggression treaty and invaded the Soviet Union with the largest and most powerful invasion force in human history,[78] opening the largest theater of World War II. Although the German army had considerable early success, their attack was halted in the Battle of Moscow. Subsequently, the Germans were dealt major defeats first at the Battle of Stalingrad in the winter of 1942–43,[79] and then in the Battle of Kursk in the summer of 1943. Another German failure was the Siege of Leningrad, in which the city was fully blockaded on land between 1941 and 1944 by German and Finnish forces, and suffered starvation and more than a million deaths, but never surrendered.[80] Under Stalin's administration and the leadership of such commanders as Georgy Zhukov and Konstantin Rokossovsky, Soviet forces took Eastern Europe in 1944–45 and captured Berlin in May 1945. In August 1945 the Soviet Army ousted the Japanese from China's Manchukuo and North Korea, contributing to the allied victory over Japan.

The 1941–45 period of World War II is known in Russia as the "Great Patriotic War". The Soviet Union, together with the United States, the United Kingdom and China, were considered the Big Four of the Allied powers in World War II[81] and later became the Four Policemen, which formed the foundation of the United Nations Security Council.[82] During this war, which included many of the most lethal battle operations in human history, Soviet military and civilian deaths were 10.6 million and 15.9 million respectively,[83] accounting for about a third of all World War II casualties. The full demographic loss to the Soviet peoples was even greater.[84] The Soviet economy and infrastructure suffered massive devastation, which caused the Soviet famine of 1946–47,[85] but the Soviet Union emerged as an acknowledged military superpower on the continent.

After the war, Eastern and Central Europe, including East Germany and part of Austria, was occupied by the Red Army according to the Potsdam Conference. Dependent socialist governments were installed in the Eastern Bloc satellite states. Becoming the world's second nuclear weapons power, the USSR established the Warsaw Pact alliance and entered into a struggle for global dominance, known as the Cold War, with the United States and NATO. The Soviet Union supported revolutionary movements across the world, including the newly formed People's Republic of China, the Democratic People's Republic of Korea and, later on, the Republic of Cuba. Significant amounts of Soviet resources were allocated in aid to the other socialist states.[86]

After Stalin's death and a short period of collective rule, the new leader Nikita Khrushchev denounced the cult of personality of Stalin and launched the policy of de-Stalinization. The penal labor system was reformed and many prisoners were released and rehabilitated (many of them posthumously).[87] The general easement of repressive policies became known later as the Khrushchev Thaw. At the same time, tensions with the United States heightened when the two rivals clashed over the deployment of the United States Jupiter missiles in Turkey and Soviet missiles in Cuba.

In 1957, the Soviet Union launched the world's first artificial satellite, Sputnik 1, thus starting the Space Age. Russian cosmonaut Yuri Gagarin became the first human to orbit the Earth, aboard the Vostok 1 manned spacecraft on 12 April 1961.

Following the ousting of Khrushchev in 1964, another period of collective rule ensued, until Leonid Brezhnev became the leader. The era of the 1970s and the early 1980s was designated later as the Era of Stagnation, a period when economic growth slowed and social policies became static. The 1965 Kosygin reform aimed for partial decentralization of the Soviet economy and shifted the emphasis from heavy industry and weapons to light industry and consumer goods but was stifled by the conservative Communist leadership.

In 1979, after a Communist-led revolution in Afghanistan, Soviet forces entered that country at the request of the new regime. The occupation drained economic resources and dragged on without achieving meaningful political results. Ultimately, the Soviet Army was withdrawn from Afghanistan in 1989 due to international opposition, persistent anti-Soviet guerilla warfare, and a lack of support by Soviet citizens.

From 1985 onwards, the last Soviet leader Mikhail Gorbachev, who sought to enact liberal reforms in the Soviet system, introduced the policies of glasnost (openness) and perestroika (restructuring) in an attempt to end the period of economic stagnation and to democratise the government. This, however, led to the rise of strong nationalist and separatist movements. Prior to 1991, the Soviet economy was the second largest in the world,[88] but during its last years it was afflicted by shortages of goods in grocery stores, huge budget deficits, and explosive growth in the money supply leading to inflation.[89]

By 1991, economic and political turmoil began to boil over, as the Baltic republics chose to secede from the Soviet Union. On 17 March, a referendum was held, in which the vast majority of participating citizens voted in favour of changing the Soviet Union into a renewed federation. In August 1991, a coup d'état attempt by members of Gorbachev's government, directed against Gorbachev and aimed at preserving the Soviet Union, instead led to the end of the Communist Party of the Soviet Union. On 25 December 1991, the USSR was dissolved into 15 post-Soviet states.

In June 1991, Boris Yeltsin became the first directly elected President in Russian history when he was elected President of the Russian Soviet Federative Socialist Republic, which became the independent Russian Federation in December of that year. During and after the disintegration of the Soviet Union, wide-ranging reforms including privatization and market and trade liberalization were undertaken,[90] including radical changes along the lines of "shock therapy" as recommended by the United States and the International Monetary Fund.[91] All this resulted in a major economic crisis, characterized by a 50% decline in both GDP and industrial output between 1990 and 1995.[90][92]

The privatization largely shifted control of enterprises from state agencies to individuals with inside connections in the government. Many of the newly rich moved billions in cash and assets outside of the country in an enormous capital flight.[93] The depression of the economy led to the collapse of social services; the birth rate plummeted while the death rate skyrocketed.[94] Millions plunged into poverty, from a level of 1.5% in the late Soviet era to 39–49% by mid-1993.[95] The 1990s saw extreme corruption and lawlessness, the rise of criminal gangs and violent crime.[96]

The 1990s were plagued by armed conflicts in the North Caucasus, both local ethnic skirmishes and separatist Islamist insurrections. From the time Chechen separatists declared independence in the early 1990s, an intermittent guerrilla war has been fought between the rebel groups and the Russian military. Terrorist attacks against civilians carried out by separatists, most notably the Moscow theater hostage crisis and Beslan school siege, caused hundreds of deaths and drew worldwide attention.

Russia took up the responsibility for settling the USSR's external debts, even though its population made up just half of the population of the USSR at the time of its dissolution.[97] High budget deficits caused the 1998 Russian financial crisis[98] and resulted in a further GDP decline.[90]

On 31 December 1999, President Yeltsin unexpectedly resigned, handing the post to the recently appointed Prime Minister, Vladimir Putin, who then won the 2000 presidential election. Putin suppressed the Chechen insurgency, although sporadic violence still occurs throughout the Northern Caucasus. High oil prices and the initially weak currency, followed by increasing domestic demand, consumption, and investments, helped the economy grow for nine straight years, improving the standard of living and increasing Russia's influence on the world stage.[99] However, since the world economic crisis of 2008 and a subsequent drop in oil prices, Russia's economy has stagnated and poverty has again started to rise.[100] While many reforms made during the Putin presidency have been generally criticized by Western nations as undemocratic,[101] Putin's leadership over the return of order, stability, and progress has won him widespread admiration in Russia.[102]

On 2 March 2008, Dmitry Medvedev was elected President of Russia while Putin became Prime Minister. Putin returned to the presidency following the 2012 presidential elections, and Medvedev was appointed Prime Minister.

In 2014, after President Viktor Yanukovych of Ukraine fled as a result of a revolution, Putin requested and received authorization from the Russian Parliament to deploy Russian troops to Ukraine.[103][104][105][106][107] Following a Crimean referendum in which separation was favored by a large majority of voters, but not accepted internationally,[108][109][110][111][112][113] the Russian leadership announced the accession of Crimea into the Russian Federation. On 27 March the United Nations General Assembly voted in favor of a non-binding resolution opposing the Russian annexation of Crimea by a vote of 100 in favour, 11 against and 58 abstentions.[114]

In September 2015, Russia started military intervention in the Syrian Civil War, consisting of air strikes against militant groups of the Islamic State, al-Nusra Front (al-Qaeda in the Levant), and the Army of Conquest.

Russia could face war crimes charges over its bombardment of the Syrian city of Aleppo, and it has also been accused of being behind the Hillary Clinton e-mail hack.

According to the Constitution of Russia, the country is a federation and semi-presidential republic, wherein the President is the head of state[115] and the Prime Minister is the head of government. The Russian Federation is fundamentally structured as a multi-party representative democracy, with the federal government composed of three branches: legislative, executive, and judicial.

The president is elected by popular vote for a six-year term (eligible for a second term, but not for a third consecutive term).[116] Ministries of the government are composed of the Premier and his deputies, ministers, and selected other individuals; all are appointed by the President on the recommendation of the Prime Minister (whereas the appointment of the latter requires the consent of the State Duma). Leading political parties in Russia include United Russia, the Communist Party, the Liberal Democratic Party, and A Just Russia. In 2013, Russia was ranked as 122nd of 167 countries in the Democracy Index, compiled by The Economist Intelligence Unit,[117] while the World Justice Project currently ranks Russia 80th of 99 countries surveyed in terms of rule of law.[118]

The Russian Federation is recognized in international law as a successor state of the former Soviet Union.[29] Russia continues to implement the international commitments of the USSR, and has assumed the USSR's permanent seat in the UN Security Council, membership in other international organisations, the rights and obligations under international treaties, and property and debts. Russia has a multifaceted foreign policy. As of 2009, it maintains diplomatic relations with 191 countries and has 144 embassies. The foreign policy is determined by the President and implemented by the Ministry of Foreign Affairs of Russia.[119]

As the successor to a former superpower, Russia's geopolitical status has often been debated, particularly in relation to unipolar and multipolar views on the global political system. While Russia is commonly accepted to be a great power, in recent years it has been characterized by a number of world leaders,[120][121] scholars,[122] commentators and politicians[123] as a currently reinstating or potential superpower.[124][125][126]

As one of five permanent members of the UN Security Council, Russia plays a major role in maintaining international peace and security. The country participates in the Quartet on the Middle East and the Six-party talks with North Korea. Russia is a member of the G8 industrialized nations, the Council of Europe, OSCE, and APEC. Russia usually takes a leading role in regional organisations such as the CIS, EurAsEC, CSTO, and the SCO.[127] Russia became the 39th member state of the Council of Europe in 1996.[128] In 1998, Russia ratified the European Convention on Human Rights. The legal basis for EU relations with Russia is the Partnership and Cooperation Agreement, which came into force in 1997. The Agreement recalls the parties' shared respect for democracy and human rights, political and economic freedom and commitment to international peace and security.[129] In May 2003, the EU and Russia agreed to reinforce their cooperation on the basis of common values and shared interests.[130] Former President Vladimir Putin had advocated a strategic partnership with close integration in various dimensions including establishment of EU-Russia Common Spaces.[131] Since the dissolution of the Soviet Union, Russia has developed a friendlier relationship with the United States and NATO. The NATO-Russia Council was established in 2002 to allow the United States, Russia and the 27 allies in NATO to work together as equal partners to pursue opportunities for joint collaboration.[132]

Russia maintains strong and positive relations with other BRIC countries. India is the largest customer of Russian military equipment and the two countries share extensive defense and strategic relations.[133] In recent years, the country has strengthened bilateral ties especially with the People's Republic of China by signing the Treaty of Friendship as well as building the Trans-Siberian oil pipeline and gas pipeline from Siberia to China.[134][135]

An important aspect of Russia's relations with the West is criticism of Russia's political system and human rights record (including LGBT rights, media freedom, and reports of killed journalists) by Western governments, the mass media and the leading democracy and human rights watchdogs. In particular, organisations such as Amnesty International and Human Rights Watch consider Russia to lack sufficient democratic attributes and to allow few political rights and civil liberties to its citizens.[136][137] Freedom House, an international organisation funded by the United States, ranks Russia as "not free", citing "carefully engineered elections" and "absence" of debate.[138] Russian authorities dismiss these claims and especially criticise Freedom House. The Russian Ministry of Foreign Affairs has called the 2006 Freedom in the World report "prefabricated", stating that human rights issues have been turned into a political weapon, in particular by the United States. The ministry also claims that such organisations as Freedom House and Human Rights Watch extrapolate "isolated facts that of course can be found in any country" into dominant tendencies.[139]

The Russian military is divided into the Ground Forces, Navy, and Air Force. There are also three independent arms of service: Strategic Missile Troops, Aerospace Defence Forces, and the Airborne Troops. In 2006, the military had 1.037 million personnel on active duty.[140] It is mandatory for all male citizens aged 18 to 27 to be drafted for a year of service in the Armed Forces.[99]

Russia has the largest stockpile of nuclear weapons in the world. It has the second largest fleet of ballistic missile submarines and is the only country apart from the United States with a modern strategic bomber force.[34][141] Russia's tank force is the largest in the world, and its surface navy and air force are among the world's largest.

The country has a large and fully indigenous arms industry, producing most of its own military equipment, with only a few types of weapons imported. Russia is one of the world's top suppliers of arms, a position it has held since 2001, accounting for around 30% of worldwide weapons sales[142] and exporting weapons to about 80 countries.[143] The Stockholm International Peace Research Institute (SIPRI) found that Russia was the second biggest exporter of arms in 2010-14, increasing its exports by 37 per cent from the period 2005-2009. In 2010-14, Russia delivered weapons to 56 states and to rebel forces in eastern Ukraine.[144]

The Russian government's published 2014 military budget is about 2.49 trillion rubles (approximately US$69.3 billion), the third largest in the world behind the US and China. The official budget is set to rise to 3.03 trillion rubles (approximately US$83.7 billion) in 2015, and 3.36 trillion rubles (approximately US$93.9 billion) in 2016.[145] However, unofficial estimates put the budget significantly higher; for example, the Stockholm International Peace Research Institute (SIPRI) 2013 Military Expenditure Database estimated Russia's military expenditure in 2012 at US$90.749 billion.[146] This estimate is an increase of more than US$18 billion on SIPRI's estimate of the Russian military budget for 2011 (US$71.9 billion).[147] As of 2014, Russia's military budget is higher than that of any other European nation.

According to the 2012 Global Peace Index, Russia is the sixth least peaceful of 162 countries in the world, principally because of its defense industry. Russia has ranked low on the index since its inception in 2007.[148]

According to the Constitution, the country comprises eighty-five federal subjects,[149] including the Republic of Crimea and the federal city of Sevastopol, whose recent establishment is internationally disputed and criticized as an illegal annexation.[150] In 1993, when the Constitution was adopted, there were eighty-nine federal subjects listed, but later some of them were merged. These subjects have equal representation (two delegates each) in the Federation Council.[151] However, they differ in the degree of autonomy they enjoy.

Federal subjects are grouped into eight federal districts, each administered by an envoy appointed by the President of Russia.[154] Unlike the federal subjects, the federal districts are not a subnational level of government, but are a level of administration of the federal government. Federal districts' envoys serve as liaisons between the federal subjects and the federal government and are primarily responsible for overseeing the compliance of the federal subjects with the federal laws.

Russia is the largest country in the world; its total area is 17,125,200 square kilometres (6,612,100 sq mi).[155][156] There are 23 UNESCO World Heritage Sites in Russia, 40 UNESCO biosphere reserves,[157] 41 national parks and 101 nature reserves. It lies between latitudes 41° and 82° N, and longitudes 19° E and 169° W.

Russia's territorial expansion was achieved largely in the late 16th century under the Cossack Yermak Timofeyevich during the reign of Ivan the Terrible, at a time when competing city-states in the western regions of Russia had banded together to form one country. Yermak mustered an army and pushed eastward where he conquered nearly all the lands once belonging to the Mongols, defeating their ruler, Khan Kuchum.[158]

Russia has a wide natural resource base, including major deposits of timber, petroleum, natural gas, coal, ores and other mineral resources.

The two most widely separated points in Russia are about 8,000 km (4,971 mi) apart along a geodesic line. These points are: the 60 km (37 mi) long Vistula Spit on the boundary with Poland, separating the Gdańsk Bay from the Vistula Lagoon, and the most southeastern point of the Kuril Islands. The points which are farthest separated in longitude are 6,600 km (4,101 mi) apart along a geodesic line. These points are: in the west, the same spit on the boundary with Poland, and in the east, the Big Diomede Island. The Russian Federation spans nine time zones.

Most of Russia consists of vast stretches of plains that are predominantly steppe to the south and heavily forested to the north, with tundra along the northern coast. Russia possesses 10% of the world's arable land.[159] Mountain ranges are found along the southern borders, such as the Caucasus (containing Mount Elbrus, which at 5,642 m (18,510 ft) is the highest point in both Russia and Europe) and the Altai (containing Mount Belukha, which at 4,506 m (14,783 ft) is the highest point of Siberia outside of the Russian Far East); and in the eastern parts, such as the Verkhoyansk Range or the volcanoes of the Kamchatka Peninsula (containing Klyuchevskaya Sopka, which at 4,750 m (15,584 ft) is the highest active volcano in Eurasia as well as the highest point of Asian Russia). The Ural Mountains, rich in mineral resources, form a north-south range that divides Europe and Asia.

Russia has an extensive coastline of over 37,000 km (22,991 mi) along the Arctic and Pacific Oceans, as well as along the Baltic Sea, Sea of Azov, Black Sea and Caspian Sea.[99] The Barents Sea, White Sea, Kara Sea, Laptev Sea, East Siberian Sea, Chukchi Sea, Bering Sea, Sea of Okhotsk, and the Sea of Japan are linked to Russia via the Arctic and Pacific. Russia's major islands and archipelagos include Novaya Zemlya, Franz Josef Land, Severnaya Zemlya, the New Siberian Islands, Wrangel Island, the Kuril Islands, and Sakhalin. The Diomede Islands (one controlled by Russia, the other by the United States) are just 3 km (1.9 mi) apart, and Kunashir Island is about 20 km (12.4 mi) from Hokkaido, Japan.

Russia has thousands of rivers and inland bodies of water, providing it with one of the world's largest surface water resources. Its lakes contain approximately one-quarter of the world's liquid fresh water.[160] The largest and most prominent of Russia's bodies of fresh water is Lake Baikal, the world's deepest, purest, oldest and most capacious fresh water lake.[161] Baikal alone contains over one-fifth of the world's fresh surface water.[160] Other major lakes include Ladoga and Onega, two of the largest lakes in Europe. Russia is second only to Brazil in volume of the total renewable water resources. Of the country's 100,000 rivers,[162] the Volga is the most famous, not only because it is the longest river in Europe, but also because of its major role in Russian history.[99] The Siberian rivers Ob, Yenisey, Lena and Amur are among the longest rivers in the world.

The enormous size of Russia and the remoteness of many areas from the sea result in the dominance of the humid continental climate, which is prevalent in all parts of the country except for the tundra and the extreme southeast. Mountains in the south obstruct the flow of warm air masses from the Indian Ocean, while the plain of the west and north makes the country open to Arctic and Atlantic influences.[163]

Most of Northern European Russia and Siberia has a subarctic climate, with extremely severe winters in the inner regions of Northeast Siberia (mostly the Sakha Republic, where the Northern Pole of Cold is located, with the record low temperature of −71.2 °C or −96.2 °F), and more moderate winters elsewhere. Both the strip of land along the shore of the Arctic Ocean and the Russian Arctic islands have a polar climate.

The coastal part of Krasnodar Krai on the Black Sea, most notably in Sochi, possesses a humid subtropical climate with mild and wet winters. In many regions of East Siberia and the Far East, winter is dry compared to summer; other parts of the country experience more even precipitation across seasons. Winter precipitation in most parts of the country usually falls as snow. The region along the Lower Volga and Caspian Sea coast, as well as some areas of southernmost Siberia, possesses a semi-arid climate.

Throughout much of the territory there are only two distinct seasons, winter and summer, as spring and autumn are usually brief periods of change between extremely low and extremely high temperatures.[163] The coldest month is January (February on the coastline); the warmest is usually July. Great ranges of temperature are typical. In winter, temperatures get colder both from south to north and from west to east. Summers can be quite hot, even in Siberia.[165] The continental interiors are the driest areas.

From north to south the East European Plain, also known as Russian Plain, is clad sequentially in Arctic tundra, coniferous forest (taiga), mixed and broad-leaf forests, grassland (steppe), and semi-desert (fringing the Caspian Sea), as the changes in vegetation reflect the changes in climate. Siberia supports a similar sequence but is largely taiga. Russia has the world's largest forest reserves,[166] known as "the lungs of Europe",[167] second only to the Amazon Rainforest in the amount of carbon dioxide it absorbs.

There are 266 mammal species and 780 bird species in Russia. A total of 415 animal species have been included in the Red Data Book of the Russian Federation as of 1997 and are now protected.[168]

Russia has a developed, high-income market economy with enormous natural resources, particularly oil and natural gas. It has the 15th largest economy in the world by nominal GDP and the 6th largest by purchasing power parity (PPP). Since the turn of the 21st century, higher domestic consumption and greater political stability have bolstered economic growth in Russia. The country ended 2008 with its ninth straight year of growth, but growth has slowed with the decline in the price of oil and gas. Real GDP per capita (PPP, current international dollars) was $19,840 in 2010.[169] Growth was primarily driven by non-traded services and goods for the domestic market, as opposed to oil or mineral extraction and exports.[99] The average nominal salary in Russia was $967 per month in early 2013, up from $80 in 2000.[170][171] In March 2014 the average nominal monthly wage reached 30,000 rubles (about US$980),[172][173] while tax on the income of individuals is payable at a rate of 13% on most incomes.[174] Approximately 12.8% of Russians lived below the national poverty line in 2011,[175] significantly down from 40% in 1998 at the worst point of the post-Soviet collapse.[95] Unemployment in Russia was 5.4% in 2014, down from about 12.4% in 1999.[176] The middle class has grown from just 8 million people in 2000 to 104 million in 2013.[177][178] However, after the United States-led sanctions of 2014 and a collapse in oil prices, the proportion of the middle class could halve to 20%.[179] Sugar imports reportedly dropped 82% between 2012 and 2013 as a result of the increase in domestic output.[180]

Oil, natural gas, metals, and timber account for more than 80% of Russian exports abroad.[99] Since 2003, exports of natural resources have decreased in economic importance as the internal market has strengthened considerably. Despite higher energy prices, oil and gas only contribute 5.7% of Russia's GDP, and the government predicts this will be 3.7% by 2011.[181] Oil export earnings allowed Russia to increase its foreign reserves from $12 billion in 1999 to $597.3 billion on 1 August 2008, the third largest foreign exchange reserves in the world.[182] The macroeconomic policy under Finance Minister Alexei Kudrin was prudent and sound, with excess income being stored in the Stabilization Fund of Russia.[183] In 2006, Russia repaid most of its formerly massive debts,[184] leaving it with one of the lowest foreign debts among major economies.[185] The Stabilization Fund helped Russia to come out of the global financial crisis in a much better state than many experts had expected.[183]

A simpler, more streamlined tax code adopted in 2001 reduced the tax burden on people and dramatically increased state revenue.[186] Russia has a flat tax rate of 13%. This ranks it as the country with the second most attractive personal tax system for single managers in the world after the United Arab Emirates.[187] According to Bloomberg, Russia is considered well ahead of most other resource-rich countries in its economic development, with a long tradition of education, science, and industry.[188] The country has a higher proportion of higher education graduates than any other country in Eurasia.[189]

The economic development of the country has been geographically uneven, with the Moscow region contributing a very large share of the country's GDP.[190] Inequality of household income and wealth has also been noted, with Credit Suisse finding Russian wealth distribution so much more extreme than other countries studied that it "deserves to be placed in a separate category."[191][192] Another problem is modernisation of infrastructure, ageing and inadequate after years of neglect in the 1990s; the government has said $1 trillion will be invested in development of infrastructure by 2020.[193] In December 2011, Russia joined the World Trade Organisation, allowing it greater access to overseas markets. Some analysts estimate that WTO membership could bring the Russian economy a bounce of up to 3% annually.[194] Russia ranks as the second-most corrupt country in Europe (after Ukraine), according to the Corruption Perceptions Index. The Norwegian-Russian Chamber of Commerce also states that "[c]orruption is one of the biggest problems both Russian and international companies have to deal with".[195] The high rate of corruption acts as a hidden tax, as businesses and individuals often have to pay money that is not part of the official tax rate. Corruption is estimated to cost the Russian economy around $2 billion (80 billion rubles) per year.[196] In 2014, a book-length study by Professor Karen Dawisha was published concerning corruption in Russia under Putin's government.[197]

The Russian central bank announced plans in 2013 to free-float the Russian ruble in 2015. According to a stress test conducted by the central bank, the Russian financial system would be able to handle a currency decline of 25%-30% without major central bank intervention. However, the Russian economy began stagnating in late 2013 and, in combination with the War in Donbass, is in danger of entering stagflation (slow growth combined with high inflation). The Russian ruble fell by 24% from October 2013 to October 2014, reaching the level at which the central bank may need to intervene to strengthen the currency. Moreover, after inflation had been brought down to 3.6% in 2012, the lowest rate since the dissolution of the Soviet Union, it jumped to nearly 7.5% in 2014, causing the central bank to increase its lending rate to 8% from the 5.5% set in 2013.[198][199][200] In an October 2014 article in Bloomberg Businessweek, it was reported that Russia had begun significantly shifting its economy towards China in response to increasing financial tensions following its annexation of Crimea and the subsequent Western economic sanctions.[201]

Russia's total area of cultivated land is estimated at 1,237,294 square kilometres (477,722 sq mi), the fourth largest in the world.[202] From 1999 to 2009, Russia's agriculture grew steadily,[203] and the country turned from a grain importer to the third largest grain exporter after the EU and the United States.[204] The production of meat has grown from 6,813,000 tonnes in 1999 to 9,331,000 tonnes in 2008, and continues to grow.[205]

This restoration of agriculture was supported by the government's credit policy, helping both individual farmers and large privatized corporate farms that once were Soviet kolkhozes and which still own a significant share of agricultural land.[206] While large farms concentrate mainly on grain production and husbandry products, small private household plots produce most of the country's potatoes, vegetables and fruits.[207]

Since Russia borders three oceans (the Atlantic, Arctic, and Pacific), Russian fishing fleets are a major world fish supplier. Russia captured 3,191,068 tons of fish in 2005.[208] Both exports and imports of fish and sea products grew significantly in recent years, reaching US$2,415 million and US$2,036 million, respectively, in 2008.[209]

Sprawling from the Baltic Sea to the Pacific Ocean, Russia has more than a fifth of the world's forests, which makes it the largest forest country in the world.[166][210] However, according to a 2012 study by the Food and Agriculture Organization of the United Nations and the Government of the Russian Federation,[211] the considerable potential of Russian forests is underutilized and Russia's share of the global trade in forest products is less than four percent.[212]

In recent years, Russia has frequently been described in the media as an energy superpower.[213][214] The country has the world's largest natural gas reserves,[215] the 8th largest oil reserves,[216] and the second largest coal reserves.[217] Russia is the world's leading natural gas exporter[218] and second largest natural gas producer,[33] while also the largest oil exporter and the largest oil producer.[32]

Read this article:
Russia - Wikipedia, the free encyclopedia

Read More...

Molecular Genetics – DNA, RNA, & Protein

October 17th, 2016 4:42 pm

MOLECULAR GENETICS - the molecular basis of inheritance. Genes ---> Enzymes ---> Metabolism (phenotype). Central Dogma of Molecular Biology: DNA --transcription--> RNA --translation--> Protein. (Concept Activity 17.1 - Overview of Protein Synthesis - Information Flow)

What is a GENE? DNA is the genetic material... [though some viruses, such as the retrovirus HIV and TMV, carry RNA instead]. A gene is a discrete piece of deoxyribonucleic acid: a linear polymer of repeating nucleotide monomers. The nucleotides are A (adenine), C (cytosine), T (thymine), and G (guanine), joined into a polynucleotide.

Technology with a Twist - Understanding Genetics

INFORMATION PROCESSING & the CENTRAL DOGMA - the letters of the genetic alphabet are the nucleotides A, T, G, & C of DNA. The unit of information is the CODON, the genetic 'word': a triplet sequence of nucleotides (e.g. 'CAT') in a polynucleotide. 3 nucleotides = 1 codon (word) = 1 amino acid in a polypeptide; a (codon) word is defined as one amino acid. Size of the human genome: 3,000,000,000 base pairs, or about 1.5 billion bases read along a single strand, giving roughly 500,000,000 possible codons (words, or amino acids). An average page of your textbook holds approximately 850 words, so the human genome corresponds to about 588,000 pages, or roughly 470 copies of a biology textbook; reading at 3 bases/sec it would take you about 47.6 years at 8 hours/day, 7 days/week. WOW... extreme nanotechnology. Mice & humans (indeed, most or all mammals including dogs, cats, rabbits, monkeys, & apes) have roughly the same number of nucleotides in their genomes -- about 3 billion bp. It is estimated that 99.9% of the 3 billion nucleotides of the human genome is the same from person to person.
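To sanity-check the arithmetic above, here is a minimal sketch in Python; the page-size, reading-speed and 1.5-billion-base figures are simply the ones quoted in the note, and the rest is plain arithmetic:

```python
# Sanity check of the genome "size in words" arithmetic from the note above.
bases_single_strand = 1_500_000_000     # 1.5 billion bases (one strand, as stated)
codons = bases_single_strand // 3       # 3 bases per codon -> 500,000,000 "words"

words_per_page = 850                    # average textbook page, per the note
pages = codons / words_per_page         # ~588,000 pages

seconds = bases_single_strand / 3       # reading 3 bases per second
hours_per_day = 8
years = seconds / 3600 / hours_per_day / 365  # 7 days/week, so 365 reading days/year

print(f"{codons:,} codons ~ {pages:,.0f} pages ~ {years:.1f} years of reading")
# -> 500,000,000 codons ~ 588,235 pages ~ 47.6 years
```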

Experimental Proof of DNA as Genetic Material...

1. Transformation experiments of Fred Griffith (1920s): Streptococcus pneumoniae - a pathogenic S strain & a benign R strain; the transforming 'principle' (converting R cells to S cells) is the genetic element. 2. Oswald Avery, Colin MacLeod, & Maclyn McCarty (1940s) suggest the transforming substance is DNA, but... 3. Alfred Hershey & Martha Chase's 1952 bacteriophage experiments: VIRAL REPLICATION [phage infection & the lytic/lysogenic cycles] is a genetically controlled biological activity (viral reproduction). They did a novel experiment - the first real use of radioisotopes in biology. CONCLUSION - DNA is the genetic material, because the (32P-labelled) nucleic acid, not the (35S-labelled) protein, guides viral replication. Sumanas, Inc. animation - Lifecycle of HIV virus

Structure of DNA... Discovery of the Double Helix... Watson's book; Nobel prize - J.D. Watson, Francis Crick, Maurice Wilkins [but also Erwin Chargaff & Rosalind Franklin]... Race for the Double Helix / "Life Story" - a BBC dramatization of the discovery of DNA. Two approaches were used to decipher the structure: 1. model building (are the bases in or out; are the sugar-phosphates in or out?); 2. x-ray diffraction patterns, which favor a DNA helix of constant diameter. We now know: DNA is a double-stranded, helical polynucleotide made of 4 nucleotides - A, T, G, C (purines & pyrimidines) - in 2 polynucleotide strands (polymer chains) with head-tail polarity [5'-----3']; the strands run antiparallel and are held together via weak H-bonds & complementary pairing - Chargaff's rule: A pairs with T, G pairs with C, and (A + G) / (T + C) = 1.0. Figures: sugar-phosphate backbone, base pairing, dimensions, models of DNA structure; john kyrk's animation of DNA & Quicktime movie of DNA structure; literature references & myDNAi timeline
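As a small illustration of complementary pairing and Chargaff's ratios, the sketch below builds the antiparallel partner of a made-up sequence and checks that A:T, G:C and (A+G)/(T+C) all come out to 1.0 over the duplex; the sequence itself is hypothetical:

```python
# Complementary pairing (A-T, G-C) on antiparallel strands, and Chargaff's check.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(strand: str) -> str:
    """Return the antiparallel complementary strand, written 5'->3'."""
    return "".join(COMPLEMENT[base] for base in reversed(strand))

top = "GAATTCCATGGCAT"              # hypothetical 5'->3' sequence
bottom = reverse_complement(top)    # the paired strand, also written 5'->3'

duplex = top + bottom               # count bases over both strands of the duplex
a, t, g, c = (duplex.count(b) for b in "ATGC")
print(bottom)
print("A:T =", a / t, " G:C =", g / c, " (A+G)/(T+C) =", (a + g) / (t + c))
# In a double helix every A pairs with a T and every G with a C,
# so all three ratios come out to 1.0 -- Chargaff's rule.
```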

Replication of DNA... (Arthur Kornberg - 1959 Nobel - died 10/26/07). The copying of DNA into DNA looks structurally obvious from base pairing - but is it? Patterns of replication = conservative, semi-conservative, & dispersive. Matt Meselson & Frank Stahl (1958) - experimental design: can we separate 15N-DNA from 14N-DNA (OLD DNA from NEW DNA)? Sedimentation of DNAs (sucrose gradients --> CsCl gradients); we can predict the results expected under each model. Sumanas, Inc. animation - Meselson-Stahl DNA Replication

The model of replication is bacterial, with DNA polymerase III... Several enzymes form a Replication Complex (Replisome), including: helicase - untwists DNA; topoisomerase [DNA gyrase] - removes supercoils; single-strand binding proteins - stabilize the replication fork; primase - makes the RNA primer; Pol III - synthesizes the new DNA strands; DNA polymerase I - removes the RNA primer one base at a time and adds DNA bases; DNA ligase - repairs Okazaki fragments (seals the 3' nicks left on the lagging strand). Concept Activity - DNA Replication Review. DNA polymerase III copies both strands simultaneously as the DNA is threaded through the Replisome - a "replication machine", which may be stationary, anchored in the nuclear matrix. Continuous & discontinuous replication occur simultaneously on the two strands.

EVENTS: 1. DNA pol III binds at the origin-of-replication site on the template strand. 2. DNA is unwound by the replisome complex using helicase & topoisomerase. 3. All polymerases require a preexisting strand (PRIMER) to start replication, so primase adds a single short primer to the LEADING strand and many primers to the LAGGING strand. 4. DNA pol III is a dimer adding new nucleotides to both strands' primers; the direction of reading is 3' ---> 5' on the template, the direction of synthesis of the new strand is 5' ---> 3', and the rate of synthesis is substantial, about 400 nucleotides/sec. 5. DNA pol I removes the primer at the 5' end, replacing it with DNA bases and leaving a 3' nick. 6. DNA ligase seals the 3' nicks of the Okazaki fragments on the lagging strand. (See the sequence of events in detail, and DNA Repair; myDNAi movie of replication.) Rates of DNA synthesis: native polymerase, 400 bases/sec with 1 error per 10^9 bases; artificial synthesis, the phosphoramidite method (Marvin Caruthers, U. Colorado) - ssDNA synthesis on a polystyrene bead at 1 base per 300 sec with an error rate of about 1 per 100 bases.
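Putting the two synthesis rates side by side makes the contrast concrete. This sketch uses the rates and error frequencies quoted above; the ~4.6 million bp E. coli genome length is an outside figure used only for scale and is an assumption of the example:

```python
# Compare natural DNA polymerase with chemical (phosphoramidite) synthesis,
# using the rates and error frequencies quoted in the note above.
genome = 4_600_000                      # ~E. coli genome, in bases (illustrative)

pol_rate, pol_error = 400, 1e-9         # polymerase: 400 bases/sec, 1 error per 10^9
chem_rate, chem_error = 1 / 300, 1e-2   # machine: 1 base per 300 sec, ~1 error per 100

print(f"polymerase: {genome / pol_rate / 3600:.1f} hours, "
      f"~{genome * pol_error:.3f} expected errors")
print(f"chemical  : {genome / chem_rate / 3600 / 24 / 365:.0f} years, "
      f"~{genome * chem_error:,.0f} expected errors")
```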

GENE EXPRESSION: the Central Dogma of Molecular Biology depicts the flow of genetic information. Transcription - copying of a DNA sequence into RNA; Translation - copying of an RNA sequence into protein. DNA sequence -------> RNA sequence -----> amino acid sequence: TAC --> AUG --> MET. A triplet sequence in DNA --> a codon in mRNA --> an amino acid in protein. Information: the triplet sequence in DNA is the genetic word [codon]. Compare events: Prokaryotes vs. Eukaryotes = separation of labor. Note the differences between DNA and RNA (bases & sugars) and that RNA is single stranded. Flow of gene information: One Gene - One Enzyme (Beadle & Tatum). 18.3 - Overview: Control of Gene Expression

Transcription - RNA polymerase. Concept Activity 17.2 - Transcription. RNA polymerase: in bacteria a sigma factor binds the promoter & initiates copying; transcription factors are needed to recognize a specific DNA sequence [motif] and bind to the promoter region [activators & transcription factors]. The polymerase makes a complementary copy of one of the two DNA strands (the template strand), so the RNA matches the sense strand. Quicktime movie of transcription; myDNAi Roger Kornberg's movie of transcription (2006 Nobel). Kinds of RNA [table]: tRNA - small, ~80 nucleotides, with an anticodon sequence; a single strand with secondary structure; function = picks up an amino acid & transports it to the ribosome. rRNA - 3 individual pieces of RNA that make up the organelle = RIBOSOME; the primary transcript is processed into the 3 rRNA pieces (picture) - recall the structure of the ribosome.
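Transcription can be pictured as simple string substitution over the template strand just described: read it 3'->5' and emit the complementary ribonucleotide (U in place of T) 5'->3'. A minimal sketch with an invented template sequence:

```python
# Transcription: copy the template DNA strand into complementary mRNA (U replaces T).
RNA_COMPLEMENT = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(template_3to5: str) -> str:
    """Given the template strand written 3'->5', return the mRNA 5'->3'."""
    return "".join(RNA_COMPLEMENT[base] for base in template_3to5)

template = "TACGGTCACTTT"          # hypothetical template strand, 3'->5'
mrna = transcribe(template)
print(mrna)                        # AUGCCAGUGAAA -- starts with the AUG initiator codon
```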

Other classes of RNA: small nuclear RNA (snRNPs) - plays a structural and catalytic role in the spliceosome; there are 5 snRNPs making up a spliceosome [U1, U2, U4, U5, & U6], and they participate in several RNA-RNA and RNA-protein interactions.

SRP (signal recognition particle) - srpRNA is a component of the protein-RNA complex that recognizes the signal sequence of polypeptides targeted to the ER.

small nucleolar RNA (snoRNA) - aids in processing of pre-rRNA transcripts for ribosome subunit formation in the nucleolus

micro RNAs (miRNA) - also called antisense RNA & interfering RNA; short (20-24 nucleotide) RNAs that bind to mRNA, inhibiting it. Present in MODEL eukaryotic organisms such as roundworms, fruit flies, mice, humans, & plants (Arabidopsis); they seem to help regulate gene expression by controlling the timing of developmental events through their action on mRNA, and they also inhibit translation of target mRNAs. Example: siRNA --> [Barr body]

TRANSLATION - Making a Protein: the process of making a protein with a specific amino acid sequence from a unique mRNA sequence [E.M. picture]. Polypeptides are built on the ribosome, on a polysome [animation]. Sequence of 4 steps in translation [glossary]: 1. add an amino acid to tRNA --> aa-tRNA - ACTIVATION; 2. assemble the players [ribosome, mRNA, aa-tRNA] - INITIATION; 3. add new amino acids via peptidyl transferase - ELONGATION; 4. stop the process - TERMINATION. Concept CD Activity - 17.4 Events in Translation. Review the processes - initiation, elongation, & termination; myDNAi real-time movie of translation & Quicktime movie of translation. Review figures & parts: summary figure [components, locations, the A (aminoacyl) site, & advanced animation] [Nobel Committee static animations of the Central Dogma]

GENETIC CODE... is the sequence of nucleotides in DNA, but it is routinely shown as the mRNA code; it specifies the sequence of amino acids to be linked into the protein. Coding ratio - how many nucleotides specify 1 amino acid? 1 nucleotide gives 4 singlets, 2 give 16 doublets, 3 give 64 triplets. Student CD Activity - 11.2 - Triplet Coding. S. Ochoa (1959 Nobel) - polynucleotide phosphorylase can make SYNTHETIC mRNA: Np-Np-Np-Np <----> Np-Np-Np + Np. Marshall Nirenberg (1968 Nobel) - synthetic mRNAs used in an in vitro system: poly-U (5'-UUU-3') gives phenylalanine; a U + C mixture gives UUU, UUC, UCC, CCC, UCU, CUC, CCU, CUU. The genetic CODE - 64 triplet codons [61 = amino acids & 3 stop codons]; universal (with some anomalies), 1 initiator codon (AUG), redundant but non-ambiguous, and it exhibits "wobble".
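The coding-ratio argument (4^1 = 4, 4^2 = 16, 4^3 = 64) and the reading of non-overlapping triplets are easy to make concrete in code. The sketch below uses only a tiny excerpt of the 64-codon table, just enough to translate the example mRNA, which is invented for the illustration:

```python
# The coding ratio: with 4 nucleotides, n-letter words give 4**n possibilities.
for n in (1, 2, 3):
    print(n, "->", 4 ** n)          # 4 singlets, 16 doublets, 64 triplets

# Translating an mRNA by reading triplet codons (tiny excerpt of the 64-codon table;
# the full table has 61 amino-acid codons plus the 3 stop codons UAA, UAG, UGA).
CODE = {"AUG": "Met", "UUU": "Phe", "UUC": "Phe", "CCA": "Pro",
        "GUG": "Val", "AAA": "Lys", "UCA": "Ser",
        "UAA": "STOP", "UAG": "STOP", "UGA": "STOP"}

def translate(mrna: str) -> list[str]:
    peptide = []
    for i in range(0, len(mrna) - 2, 3):      # non-overlapping triplets, 5'->3'
        aa = CODE.get(mrna[i:i + 3], "???")
        if aa == "STOP":
            break
        peptide.append(aa)
    return peptide

print(translate("AUGCCAGUGAAAUAA"))           # ['Met', 'Pro', 'Val', 'Lys']
```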

GENETIC CHANGE - a change in the DNA nucleotide sequence (= a change in the mRNA) - occurs in 2 significant ways: mutation & recombination [glossary]. 1. MUTATION - a permanent change in an organism's DNA that results in a different codon = a different amino acid sequence. Point mutation - a change of a single (or a few) nucleotides: deletions, insertions, frame-shift mutations [the CAT example], and single-nucleotide base substitutions: nonsense = change to no amino acid (a STOP codon), e.g. UCA --> UAA, ser to none; missense = a different amino acid, e.g. UCA --> UUA, ser to leu. Sickle Cell Anemia - a missense mutation (SCA - pleiotropy); thalassemia is another point-mutation blood disease. Effects = no effect, detrimental (lethal), +/- functionality, or beneficial.
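The difference between a base substitution and a frame-shift is clearest when the mRNA is chopped into codons. A minimal sketch using the UCA (Ser) codon from the examples above; the nine-base message itself is hypothetical:

```python
# Point mutations at the codon level: a substitution changes one word,
# while a single-base deletion shifts every downstream word (frame-shift).
def codons(seq: str) -> list[str]:
    return [seq[i:i + 3] for i in range(0, len(seq) - 2, 3)]

normal     = "UCAUCAUCA"                       # Ser-Ser-Ser (hypothetical mRNA stretch)
missense   = normal.replace("UCA", "UUA", 1)   # UCA -> UUA: Ser becomes Leu
nonsense   = normal.replace("UCA", "UAA", 1)   # UCA -> UAA: premature STOP codon
frameshift = normal[1:]                        # delete the first base: frame shifts

for label, seq in [("normal", normal), ("missense", missense),
                   ("nonsense", nonsense), ("frameshift", frameshift)]:
    print(f"{label:10s}", codons(seq))
```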

2. RECOMBINATION (recombinant DNA) - newly combined DNAs [glossary] that can change the genotype via insertion of NEW (foreign) DNA molecules into a recipient cell: 1. fertilization - a sperm inserted into a recipient egg cell --> zygote [n + n = 2n]; 2. exchange of homologous chromatids via crossing over = new gene combinations; 3. transformation - absorption of 'foreign' DNA by recipient cells changes the cell; 4. BACTERIAL CONJUGATION - involves DNA plasmids (F+ & R = resistance); conjugation may be a primitive sex-like reproduction in bacteria [Hfr]; 5. VIRAL TRANSDUCTION - insertion via a viral vector (lysogeny & TRANSDUCTION): general transduction - pieces of bacterial DNA are packaged with viral DNA during viral replication; restricted transduction - a temperate phage goes lytic, carrying adjacent bacterial DNA into the virus particle; 6. DESIGNER GENES - man-made recombinant DNA molecules.

Designer Genes - Genetic Engineering - Biotechnology

RECOMBINANT DNA TECHNOLOGY... a collection of experimental techniques which allow for the isolation, copying, & insertion of new DNA sequences into host (recipient) cells by a number of laboratory protocols & methodologies.

Restriction endonucleases [glossary]... make staggered (offset) cuts at unique DNA sequences - e.g. EcoRI - mostly at palindromes [never odd or even]: 5' GAATTC 3' / 3' CTTAAG 5' is cut into 5' G + AATTC 3' and 3' CTTAA + G 5'. DNAs cut this way have STICKY (complementary) ENDS & can be reannealed or spliced with other DNA molecules to produce new gene combinations, then sealed via DNA ligase. myDNAi movie of restriction enzyme action
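A minimal sketch of how an EcoRI digest can be modelled as string searching: find each GAATTC site and cut one base in from the 5' end, so every internal fragment starts with the AATT overhang. The plasmid sequence is invented for the example:

```python
# EcoRI recognizes the palindrome GAATTC and cuts between G and A on each strand,
# leaving 5' AATT overhangs ("sticky ends") that can re-anneal with any other
# EcoRI-cut fragment before DNA ligase seals the backbone.
SITE, CUT_OFFSET = "GAATTC", 1          # cut after the first base of the site

def ecori_fragments(seq: str) -> list[str]:
    fragments, start = [], 0
    pos = seq.find(SITE)
    while pos != -1:
        fragments.append(seq[start:pos + CUT_OFFSET])   # ...G
        start = pos + CUT_OFFSET                         # AATTC... carries the overhang
        pos = seq.find(SITE, pos + 1)
    fragments.append(seq[start:])
    return fragments

plasmid = "ATGCGAATTCTTAGGGAATTCCA"      # hypothetical sequence with two EcoRI sites
print(ecori_fragments(plasmid))          # ['ATGCG', 'AATTCTTAGGG', 'AATTCCA']
```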

Procedures of Biotechnology [Genome Biology Research]: A. Technology involved in cloning a gene [animation & the tools of genetic analysis] - making copies of a gene's DNA: 1. via a plasmid [figure; human shotgun plasmid cloning; My DNAi movie]; 2. libraries [library figure; BACs; Sumanas animation - DNA fingerprint library]; 3. probes [cDNA & reverse transcriptase & DNA probe hybridization; cDNA figure & cDNA library & a probe for a gene of interest; finding a gene with a probe in a library]; 4. the Polymerase Chain Reaction (PCR) [figure 20.7; animations; the PCR song; PCR reaction protocol; "Xeroxing" DNA with Taq polymerase].
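PCR itself reduces to a doubling argument: each cycle of denaturation, primer annealing and extension roughly doubles the number of target molecules, so n ideal cycles give about 2^n copies. A quick illustration:

```python
# PCR: each cycle (denature, anneal primers, extend with Taq polymerase)
# roughly doubles the target, so n cycles of perfect doubling give 2**n copies.
start_copies = 1
for cycle in (10, 20, 30):
    print(f"after {cycle} cycles: ~{start_copies * 2 ** cycle:,} copies")
# after 10 cycles: ~1,024 copies
# after 20 cycles: ~1,048,576 copies
# after 30 cycles: ~1,073,741,824 copies
```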

Read more:
Molecular Genetics - DNA, RNA, & Protein

Read More...

MCW: Microbiology and Molecular Genetics Department

October 17th, 2016 4:42 pm

The mission of our faculty is to conduct innovative and impactful research in Microbiology, Immunology, and Molecular Genetics and to train students and postdoctoral fellows for careers as biomedical scientists. Our faculty also instruct in the Graduate School of Biomedical Sciences and the Medical School and often collaborate with clinical scientists to facilitate the translation of bench to bedside therapies to treat human diseases. Our students acquire professional training while carrying out independent research projects in microbial pathogenesis and physiology, the immune response, and host interactions with microbial pathogens. Our administrative and research staff strive to support the research, teaching and service activities of our students and faculty.

Contact information for faculty members in the department, including email addresses and room numbers, can be found on the faculty pages.

Medical College of Wisconsin Department of Microbiology and Molecular Genetics BSB - 2nd Floor - Room 273 8701 Watertown Plank Road Milwaukee, WI 53226

(414) 955-8253 | (414) 955-6535 (fax)

The department is located on the second floor of the Basic Science Building at 8701 W. Watertown Plank Road.

Read this article:
MCW: Microbiology and Molecular Genetics Department

Read More...

Stem-cell therapy – Wikipedia

October 17th, 2016 4:40 pm

This article is about the medical therapy. For the cell type, see Stem cell.

Stem-cell therapy is the use of stem cells to treat or prevent a disease or condition.

Bone marrow transplant is the most widely used stem-cell therapy, but some therapies derived from umbilical cord blood are also in use. Research is underway to develop various sources for stem cells, and to apply stem-cell treatments for neurodegenerative diseases and conditions such as diabetes, heart disease, and other conditions.

Stem-cell therapy has become controversial following developments such as the ability of scientists to isolate and culture embryonic stem cells, to create stem cells using somatic cell nuclear transfer, and to create induced pluripotent stem cells. This controversy is often related to abortion politics and to human cloning. Additionally, efforts to market treatments based on transplant of stored umbilical cord blood have been controversial.

For over 30 years, bone marrow has been used to treat cancer patients with conditions such as leukaemia and lymphoma; this is the only form of stem-cell therapy that is widely practiced.[1][2][3] During chemotherapy, most growing cells are killed by the cytotoxic agents. These agents, however, cannot discriminate between the leukaemia or neoplastic cells, and the hematopoietic stem cells within the bone marrow. It is this side effect of conventional chemotherapy strategies that the stem-cell transplant attempts to reverse; a donor's healthy bone marrow reintroduces functional stem cells to replace the cells lost in the host's body during treatment. The transplanted cells also generate an immune response that helps to kill off the cancer cells; this process can go too far, however, leading to graft vs host disease, the most serious side effect of this treatment.[4]

Another stem-cell therapy, called Prochymal, was conditionally approved in Canada in 2012 for the management of acute graft-vs-host disease in children who are unresponsive to steroids.[5] It is an allogeneic stem-cell therapy based on mesenchymal stem cells (MSCs) derived from the bone marrow of adult donors. MSCs are purified from the marrow, cultured and packaged, with up to 10,000 doses derived from a single donor. The doses are stored frozen until needed.[6]

The FDA has approved five hematopoietic stem-cell products derived from umbilical cord blood, for the treatment of blood and immunological diseases.[7]

In 2014, the European Medicines Agency recommended approval of Holoclar, a treatment involving stem cells, for use in the European Union. Holoclar is used for people with severe limbal stem cell deficiency due to burns in the eye.[8]

In March 2016, GlaxoSmithKline's Strimvelis (GSK2696273) therapy for the treatment of ADA-SCID was recommended for EU approval.[9]

Stem cells are being studied for a number of reasons. The molecules and exosomes released from stem cells are also being studied in an effort to make medications.[10]

Research has been conducted on the effects of stem cells on animal models of brain degeneration, such as in Parkinson's, Amyotrophic lateral sclerosis, and Alzheimer's disease.[11][12][13] There have been preliminary studies related to multiple sclerosis.[14][15]

Healthy adult brains contain neural stem cells which divide to maintain general stem-cell numbers, or become progenitor cells. In healthy adult laboratory animals, progenitor cells migrate within the brain and function primarily to maintain neuron populations for olfaction (the sense of smell). Pharmacological activation of endogenous neural stem cells has been reported to induce neuroprotection and behavioral recovery in adult rat models of neurological disorder.[16][17][18]

Stroke and traumatic brain injury lead to cell death, characterized by a loss of neurons and oligodendrocytes within the brain. A small clinical trial was underway in Scotland in 2013, in which stem cells were injected into the brains of stroke patients.[19]

Clinical and animal studies have been conducted into the use of stem cells in cases of spinal cord injury.[20][21][22]

The pioneering work[23] by Bodo-Eckehard Strauer has now been discredited by the identification of hundreds of factual contradictions.[24] Among several clinical trials that have reported that adult stem-cell therapy is safe and effective, powerful effects have been reported from only a few laboratories, but this has covered old[25] and recent[26] infarcts as well as heart failure not arising from myocardial infarction.[27] While initial animal studies demonstrated remarkable therapeutic effects,[28][29] later clinical trials achieved only modest, though statistically significant, improvements.[30][31] Possible reasons for this discrepancy are patient age,[32] timing of treatment[33] and the recent occurrence of a myocardial infarction.[34] It appears that these obstacles may be overcome by additional treatments which increase the effectiveness of the treatment[35] or by optimizing the methodology although these too can be controversial. Current studies vary greatly in cell-procuring techniques, cell types, cell-administration timing and procedures, and studied parameters, making it very difficult to make comparisons. Comparative studies are therefore currently needed.

Stem-cell therapy for treatment of myocardial infarction usually makes use of autologous bone-marrow stem cells (a specific type or all); however, other types of adult stem cells may be used, such as adipose-derived stem cells.[36] Adult stem cell therapy for treating heart disease was commercially available in at least five continents as of 2007.[citation needed]

Possible mechanisms of recovery include:[11]

It may be possible to have adult bone-marrow cells differentiate into heart muscle cells.[11]

The first successful integration of human embryonic stem cell derived cardiomyocytes in guinea pigs (mouse hearts beat too fast) was reported in August 2012. The contraction strength was measured four weeks after the guinea pigs underwent simulated heart attacks and cell treatment. The cells contracted synchronously with the existing cells, but it is unknown if the positive results were produced mainly from paracrine as opposed to direct electromechanical effects from the human cells. Future work will focus on how to get the cells to engraft more strongly around the scar tissue. Whether treatments from embryonic or adult bone marrow stem cells will prove more effective remains to be seen.[37]

In 2013 the pioneering reports of powerful beneficial effects of autologous bone marrow stem cells on ventricular function were found to contain "hundreds" of discrepancies.[38] Critics report that of 48 reports there seemed to be just five underlying trials, and that in many cases whether they were randomized or merely observational accepter-versus-rejecter was contradictory between reports of the same trial. One pair of reports with identical baseline characteristics and final results was presented in two publications as, respectively, a 578-patient randomized trial and a 391-patient observational study. Other reports required (impossible) negative standard deviations in subsets of patients, or contained fractional patients and negative NYHA classes. Overall, many more patients were published as having received stem cells in trials than the number of stem-cell preparations processed in the hospital's laboratory during that time. A university investigation, closed in 2012 without reporting, was reopened in July 2013.[39]

One of the most promising benefits of stem cell therapy is the potential for cardiac tissue regeneration to reverse the tissue loss underlying the development of heart failure after cardiac injury.[40]

Initially, the observed improvements were attributed to a transdifferentiation of BM-MSCs into cardiomyocyte-like cells.[28] Given the apparent inadequacy of unmodified stem cells for heart tissue regeneration, a more promising modern technique involves treating these cells to create cardiac progenitor cells before implantation to the injured area.[41]

The specificity of the human immune-cell repertoire is what allows the human body to defend itself from rapidly adapting antigens. However, the immune system is vulnerable to degradation upon the pathogenesis of disease, and because of the critical role that it plays in overall defense, its degradation is often fatal to the organism as a whole. Diseases of hematopoietic cells are diagnosed and classified via a subspecialty of pathology known as hematopathology. The specificity of the immune cells is what allows recognition of foreign antigens, causing further challenges in the treatment of immune disease. Identical matches between donor and recipient must be made for successful transplantation treatments, but matches are uncommon, even between first-degree relatives. Research using both hematopoietic adult stem cells and embryonic stem cells has provided insight into the possible mechanisms and methods of treatment for many of these ailments.[citation needed]

Fully mature human red blood cells may be generated ex vivo from hematopoietic stem cells (HSCs), which are precursors of red blood cells. In this process, HSCs are grown together with stromal cells, creating an environment that mimics the conditions of bone marrow, the natural site of red-blood-cell growth. Erythropoietin, a growth factor, is added, coaxing the stem cells to complete terminal differentiation into red blood cells.[42] Further research into this technique should have potential benefits for gene therapy, blood transfusion, and topical medicine.

In 2004, scientists at King's College London discovered a way to cultivate a complete tooth in mice[43] and were able to grow bioengineered teeth stand-alone in the laboratory. Researchers are confident that the tooth regeneration technology can be used to grow live teeth in human patients.

In theory, stem cells taken from the patient could be coaxed in the lab into forming a tooth bud which, when implanted in the gums, would give rise to a new tooth, expected to grow within about three weeks.[44] It would fuse with the jawbone and release chemicals that encourage nerves and blood vessels to connect with it. The process is similar to what happens when humans grow their original adult teeth. Many challenges remain, however, before stem cells could be a choice for the replacement of missing teeth in the future.[45][46]

Research is ongoing in different fields; alligators, which are polyphyodonts, can replace each tooth up to 50 times, growing a small successional (replacement) tooth under each mature functional tooth, with replacement about once a year.[47]

Heller has reported success in re-growing cochlea hair cells with the use of embryonic stem cells.[48]

Since 2003, researchers have successfully transplanted corneal stem cells into damaged eyes to restore vision. "Sheets of retinal cells used by the team are harvested from aborted fetuses, which some people find objectionable." When these sheets are transplanted over the damaged cornea, the stem cells stimulate renewed repair, eventually restoring vision.[49] The latest such development was in June 2005, when researchers at the Queen Victoria Hospital of Sussex, England were able to restore the sight of forty patients using the same technique. The group, led by Sheraz Daya, was able to successfully use adult stem cells obtained from the patient, a relative, or even a cadaver. Further rounds of trials are ongoing.[50]

In April 2005, doctors in the UK transplanted corneal stem cells from an organ donor to the cornea of Deborah Catlyn, a woman who was blinded in one eye when acid was thrown in her eye at a nightclub. The cornea, which is the transparent window of the eye, is a particularly suitable site for transplants. In fact, the first successful human transplant was a cornea transplant. The absence of blood vessels within the cornea makes this area a relatively easy target for transplantation. The majority of corneal transplants carried out today are due to a degenerative disease called keratoconus.

The University Hospital of New Jersey reports that the success rate for growth of new cells from transplanted stem cells varies from 25 percent to 70 percent.[51]

In 2014, researchers demonstrated that stem cells collected as biopsies from donor human corneas can prevent scar formation without provoking a rejection response in mice with corneal damage.[52]

In January 2012, The Lancet published a paper by Steven Schwartz, at UCLA's Jules Stein Eye Institute, reporting two women who had gone legally blind from macular degeneration had dramatic improvements in their vision after retinal injections of human embryonic stem cells.[53]

In June 2015, the Stem Cell Ophthalmology Treatment Study (SCOTS), the largest adult stem cell study in ophthalmology ( http://www.clinicaltrials.gov NCT # 01920867) published initial results on a patient with optic nerve disease who improved from 20/2000 to 20/40 following treatment with bone marrow derived stem cells.[54]

Diabetes patients lose the function of insulin-producing beta cells within the pancreas.[55] In recent experiments, scientists have been able to coax embryonic stem cells to turn into beta cells in the lab. In theory, if the beta cells are transplanted successfully, they will be able to replace the malfunctioning ones in a diabetic patient.[56]

Human embryonic stem cells may be grown in cell culture and stimulated to form insulin-producing cells that can be transplanted into the patient.

However, clinical success is highly dependent on the development of the following procedures:[11]

Clinical case reports in the treatment of orthopaedic conditions have been reported. To date, the focus in the literature for musculoskeletal care appears to be on mesenchymal stem cells. Centeno et al. have published MRI evidence of increased cartilage and meniscus volume in individual human subjects.[57][58] The results of trials that include a large number of subjects are yet to be published. However, a published safety study conducted in a group of 227 patients over a 3- to 4-year period shows adequate safety and minimal complications associated with mesenchymal cell transplantation.[59]

Wakitani has also published a small case series of nine defects in five knees involving surgical transplantation of mesenchymal stem cells with coverage of the treated chondral defects.[60]

Stem cells can also be used to stimulate the growth of human tissues. In an adult, wounded tissue is most often replaced by scar tissue, which is characterized in the skin by disorganized collagen structure, loss of hair follicles and irregular vascular structure. In the case of wounded fetal tissue, however, wounded tissue is replaced with normal tissue through the activity of stem cells.[61] A possible method for tissue regeneration in adults is to place adult stem cell "seeds" inside a tissue bed "soil" in a wound bed and allow the stem cells to stimulate differentiation in the tissue bed cells. This method elicits a regenerative response more similar to fetal wound-healing than adult scar tissue formation.[61] Researchers are still investigating different aspects of the "soil" tissue that are conducive to regeneration.[61]

Culture of human embryonic stem cells in mitotically inactivated porcine ovarian fibroblasts (POF) causes differentiation into germ cells (precursor cells of oocytes and spermatozoa), as evidenced by gene expression analysis.[62]

Human embryonic stem cells have been stimulated to form Spermatozoon-like cells, yet still slightly damaged or malformed.[63] It could potentially treat azoospermia.

In 2012, oogonial stem cells were isolated from adult mouse and human ovaries and demonstrated to be capable of forming mature oocytes.[64] These cells have the potential to treat infertility.

Destruction of the immune system by the HIV is driven by the loss of CD4+ T cells in the peripheral blood and lymphoid tissues. Viral entry into CD4+ cells is mediated by the interaction with a cellular chemokine receptor, the most common of which are CCR5 and CXCR4. Because subsequent viral replication requires cellular gene expression processes, activated CD4+ cells are the primary targets of productive HIV infection.[65] Recently scientists have been investigating an alternative approach to treating HIV-1/AIDS, based on the creation of a disease-resistant immune system through transplantation of autologous, gene-modified (HIV-1-resistant) hematopoietic stem and progenitor cells (GM-HSPC).[66]

On 23 January 2009, the US Food and Drug Administration gave clearance to Geron Corporation for the initiation of the first clinical trial of an embryonic stem-cell-based therapy in humans. The trial aimed to evaluate the drug GRNOPC1, embryonic stem cell-derived oligodendrocyte progenitor cells, in patients with acute spinal cord injury. The trial was discontinued in November 2011 so that the company could focus on therapies in the "current environment of capital scarcity and uncertain economic conditions".[67] In 2013, the biotechnology and regenerative medicine company BioTime (NYSE MKT: BTX) acquired Geron's stem cell assets in a stock transaction, with the aim of restarting the clinical trial.[68]

Scientists have reported that MSCs transfused within a few hours of thawing may show reduced function or decreased efficacy in treating diseases compared with MSCs that are in the log phase of cell growth (fresh). Cryopreserved MSCs should therefore be brought back into log-phase growth in in vitro culture before they are administered in clinical trials or experimental therapies; re-culturing helps the cells recover from the stress of freezing and thawing. Various clinical trials that used cryopreserved MSCs immediately post-thaw have failed, compared with clinical trials that used fresh MSCs.[69]

There is widespread controversy over the use of human embryonic stem cells. This controversy primarily targets the techniques used to derive new embryonic stem cell lines, which often requires the destruction of the blastocyst. Opposition to the use of human embryonic stem cells in research is often based on philosophical, moral, or religious objections.[110] There is other stem cell research that does not involve the destruction of a human embryo, and such research involves adult stem cells, amniotic stem cells, and induced pluripotent stem cells.

Stem-cell research and treatment have been practiced in the People's Republic of China. The Ministry of Health of the People's Republic of China has permitted the use of stem-cell therapy for conditions beyond those approved in Western countries. The Western world has scrutinized China for its failure to meet international documentation standards for these trials and procedures.[111]

State-funded companies based in the Shenzhen Hi-Tech Industrial Zone treat the symptoms of numerous disorders with adult stem-cell therapy. Development companies are currently focused on the treatment of neurodegenerative and cardiovascular disorders. The most radical successes of Chinese adult stem cell therapy have been in treating the brain. These therapies administer stem cells directly to the brain of patients with cerebral palsy, Alzheimer's, and brain injuries.[citation needed]

Since 2008, many universities, centers and doctors have tried a diversity of methods. In Lebanon, in-vivo and in-vitro stem-cell proliferation techniques were used, and the country is considered the launching place of the Regentime[112] procedure (http://www.researchgate.net/publication/281712114_Treatment_of_Long_Standing_Multiple_Sclerosis_with_Regentime_Stem_Cell_Technique). Regenerative medicine has also been practiced in Jordan and Egypt.[citation needed]

Stem-cell treatment is currently being practiced at a clinical level in Mexico. An International Health Department Permit (COFEPRIS) is required; this permit allows the use of stem cells. Authorized centers are found in Tijuana, Guadalajara and Cancun, and Los Cabos is currently undergoing the approval process.[citation needed]

In 2005, South Korean scientists claimed to have generated stem cells that were tailored to match the recipient. Each of the 11 new stem cell lines was developed using somatic cell nuclear transfer (SCNT) technology. The resultant cells were thought to match the genetic material of the recipient, thus suggesting minimal to no cell rejection.[113]

As of 2013, Thailand still considers hematopoietic stem cell transplants experimental. Kampon Sriwatanakul began a clinical trial in October 2013 with 20 patients: 10 are to receive stem-cell therapy for type-2 diabetes and the other 10 stem-cell therapy for emphysema. Chotinantakul's research is on hematopoietic cells and their role in hematopoietic system function in homeostasis and the immune response.[114]

Today, Ukraine is permitted to perform clinical trials of stem-cell treatments (Order of the MH of Ukraine No. 630 "About carrying out clinical trials of stem cells", 2008) for the treatment of these pathologies: pancreatic necrosis, cirrhosis, hepatitis, burn disease, diabetes, multiple sclerosis, and critical lower limb ischemia. The first medical institution granted the right to conduct clinical trials was the Institute of Cell Therapy (Kiev).

Other countries where doctors have carried out stem-cell research, trials, manipulation, storage, or therapy include Brazil, Cyprus, Germany, Italy, Israel, Japan, Pakistan, the Philippines, Russia, Switzerland, Turkey, the United Kingdom, India, and many others.

See the original post:
Stem-cell therapy - Wikipedia

Read More...

AJRCCM – Home (ATS Journals)

October 16th, 2016 6:43 am

This site uses cookies to improve performance. If your browser does not accept cookies, you cannot view this site.

There are many reasons why a cookie could not be set correctly. Below are the most common reasons:

This site uses cookies to improve performance by remembering that you are logged in when you go from page to page. To provide access without cookies would require the site to create a new session for every page you visit, which slows the system down to an unacceptable level.

This site stores nothing other than an automatically generated session ID in the cookie; no other information is captured.

In general, only the information that you provide, or the choices you make while visiting a web site, can be stored in a cookie. For example, the site cannot determine your email name unless you choose to type it. Allowing a website to create a cookie does not give that or any other site access to the rest of your computer, and only the site that created the cookie can read it.
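As an illustration of what such an automatically generated session ID amounts to, here is a minimal sketch in Python; the cookie name and the in-memory session store are assumptions for the example, not a description of how this particular site is implemented:

```python
# A minimal sketch of an "automatically generated session ID" cookie.
# The cookie carries only an opaque random token; any real state stays on the server.
import secrets
from http.cookies import SimpleCookie

sessions = {}                              # server-side store keyed by the token

def issue_session_cookie() -> str:
    token = secrets.token_hex(16)          # random, unguessable session ID
    sessions[token] = {"logged_in": True}  # whatever the site needs to remember
    cookie = SimpleCookie()
    cookie["SESSIONID"] = token            # hypothetical cookie name
    cookie["SESSIONID"]["httponly"] = True # not readable by page scripts
    cookie["SESSIONID"]["path"] = "/"
    return cookie.output()                 # "Set-Cookie: SESSIONID=...; HttpOnly; Path=/"

print(issue_session_cookie())
```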

Read more:
AJRCCM - Home (ATS Journals)

Read More...

Entertainment – CBC News

October 16th, 2016 6:43 am

TELEVISION

Breaking new ground: Kim's Convenience to be Canada's 1st sitcom led by Asians

TELEVISION

Fresh start for Steven Sabados, 'sexy' crime thriller Shoot the Messenger and more debut on CBC-TV

Italian journalist claims to reveal the true identity of Elena Ferrante

Robin Williams was fighting 'terrorist within his brain,' widow says in essay

'Indian Group of Seven' artist Daphne Odjig dead at 97

MOVIE REVIEW

Deepwater Horizon, Queen of Katwe and more

VISUAL ART

VR an eye-popping new canvas for artists using Tilt Brush

Video

Queen of Katwe a refreshingly positive African story

FILM

Deepwater Horizon explores riggers' side of the story

Lawren Harris mountainscape featured in Steve Martin exhibit set for auction

Esports franchise Team Liquid sold to Magic Johnson, NBA co-owners group

Pokemon Go fervour has cooled, but the game isn't dead yet

Emma Donoghue, Madeleine Thien shortlisted for $100K Giller Prize

Contenders for the Turner Prize include a train, a brick suit and giant buttocks

Inuk artist Annie Pootoogook found dead in Ottawa

From darkness to light: Inside D.C.'s new African-American museum

FILM REVIEW

Storks a surprisingly snappy and contemporary comedy, says CBC's Eli Glasner

FILM

Xavier Dolan's It's Only the End of the World explores imperfect family relations

The Magnificent Seven 'like a jazz band,' says director Antoine Fuqua

TELEVISION

Does loosening Cancon rules hobble Canadian TV creators?

Disney pulls boy's costume critics lambasted as 'Polyface'

MUSIC

'I have no regrets,' rogue Tenor Remigio Pereira says after O Canada stunt

Winnipeg artist 'blown away' by $25K national prize win

CBC BOOKS

Anosh Irani, Katherena Vermette make Rogers Writers' Trust Fiction Prize shortlist

Read more here:
Entertainment - CBC News

Read More...

How Light Works | HowStuffWorks

October 16th, 2016 6:43 am

Light is at once both obvious and mysterious. We are bathed in yellow warmth every day and stave off the darkness with incandescent and fluorescent bulbs. But what exactly is light? We catch glimpses of its nature when a sunbeam angles through a dust-filled room, when a rainbow appears after a storm or when a drinking straw in a glass of water looks disjointed. These glimpses, however, only lead to more questions. Does light travel as a wave, a ray or a stream of particles? Is it a single color or many colors mixed together? Does it have a frequency like sound? And what are some of the common properties of light, such as absorption, reflection, refraction and diffraction?

You might think scientists know all the answers, but light continues to surprise them. Here's an example: We've always taken for granted that light travels faster than anything in the universe. Then, in 1999, researchers at Harvard University were able to slow a beam of light down to 38 miles an hour (61 kilometers per hour) by passing it through a state of matter known as a Bose-Einstein condensate. That's almost 18 million times slower than normal! No one would have thought such a feat possible just a few years ago, yet this is the capricious way of light. Just when you think you have it figured out, it defies your efforts and seems to change its nature.
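As a quick sanity check on those figures, here is a short back-of-the-envelope calculation (a Python sketch; the only inputs are the speed of light in vacuum and the 61 km/h value quoted above):

```python
# Back-of-the-envelope check of the "almost 18 million times slower" claim.
SPEED_OF_LIGHT_M_S = 299_792_458        # speed of light in vacuum, m/s

slowed_km_h = 61                        # slowed beam in the 1999 experiment (~38 mph)
slowed_m_s = slowed_km_h * 1000 / 3600  # convert km/h to m/s (~16.9 m/s)

ratio = SPEED_OF_LIGHT_M_S / slowed_m_s
print(f"Slow-down factor: {ratio:,.0f}")  # ~17,700,000 -- "almost 18 million"
```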

Still, we've come a long way in our understanding. Some of the brightest minds in the history of science have focused their powerful intellects on the subject. Albert Einstein tried to imagine what it would be like to ride on a beam of light. "What if one were to run after a ray of light?" he asked. "What if one were riding on the beam? If one were to run fast enough, would it no longer move at all?"

Einstein, though, is getting ahead of the story. To appreciate how light works, we have to put it in its proper historical context. Our first stop is the ancient world, where some of the earliest scientists and philosophers pondered the true nature of this mysterious substance that stimulates sight and makes things visible.

Read more:
How Light Works | HowStuffWorks

Read More...

Hematology Conferences | Blood Disorder Conferences | USA …

October 13th, 2016 9:46 am

9th International Conference on Hematology

Date: November 02-04, 2017

Venue: Las Vegas, USA

Hematology 2016 has been designed with many interesting and informative scientific sessions, covering all major aspects of hematology research.

Hematology

Erythrocytes, also known as red blood cells, carry oxygen to the body and collect carbon dioxide from it by means of hemoglobin; their life span is about 120 days. Alongside them, leucocytes (white blood cells) act as the body's defending cells, protecting the immune system from foreign cells; they derive from multipotent cells in the bone marrow and have a life span of about 3-4 days. Thrombocytes (platelets) are small, irregularly shaped cell fragments seen mostly in mammals, with a life span of 5-9 days; they help the blood to clot by forming fibrin. Unwanted clotting (thrombosis) can lead to stroke or to blockage of blood flow, most often in the arms and legs. A complete blood count (CBC) is performed to determine the number of each cell type in the blood; it was traditionally done by laboratory technicians and is now usually run on automated analysers, and abnormally high or low counts point to many diseases. A decrease in red blood cells causes anemia, which leads to weakness, tiredness, shortness of breath and noticeable pallor. The formation of blood cellular components is called hematopoiesis, and all cellular blood components are derived from hematopoietic stem cells; in a healthy individual roughly 10^11-10^12 new blood cells are produced each day, maintaining a steady peripheral circulation. An increase in red blood cells causes polycythemia, which can be detected through the hematocrit level.
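To illustrate how a result from an automated analyser might be flagged against a reference range, here is a minimal sketch; the hematocrit ranges below are illustrative assumptions, not clinical thresholds:

```python
# Minimal sketch: flag a hematocrit value against an assumed reference range.
# The ranges are illustrative only and are not clinical guidance.
REFERENCE_RANGE = {"male": (0.40, 0.54), "female": (0.36, 0.48)}  # hematocrit fraction

def flag_hematocrit(value: float, sex: str) -> str:
    low, high = REFERENCE_RANGE[sex]
    if value < low:
        return "low (possible anemia)"
    if value > high:
        return "high (possible polycythemia)"
    return "within reference range"

print(flag_hematocrit(0.31, "female"))  # low (possible anemia)
print(flag_hematocrit(0.58, "male"))    # high (possible polycythemia)
```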

Blood Disorders

Hemophilia A is a genetic deficiency in clotting factor VIII, which causes increased bleeding and usually affects males. About 70% of the time it is inherited as an X-linked recessive trait, but around 30% of cases arise from spontaneous mutations. Hemophilia B is a blood clotting disorder caused by a mutation of the factor IX gene, leading to a deficiency of factor IX. It is the second most common form of haemophilia, rarer than haemophilia A; it is sometimes called Christmas disease, named after Stephen Christmas, the first patient described with the disease, and the first report of its identification was published in the Christmas edition of the British Medical Journal. Hemophilia C is a mild form of haemophilia affecting both sexes, occurring predominantly in Jews of Ashkenazi descent. It is the fourth most common coagulation disorder after von Willebrand's disease and haemophilias A and B; in the USA it is thought to affect 1 in 100,000 of the adult population, making it 10% as common as haemophilia A. Idiopathic thrombocytopenic purpura (ITP), also known as immune thrombocytopenia, primary immune thrombocytopenia, primary immune thrombocytopenic purpura or autoimmune thrombocytopenic purpura, is defined as an isolated low platelet count (thrombocytopenia) with normal bone marrow and the absence of other causes of thrombocytopenia. Von Willebrand disease is the most common hereditary coagulation abnormality described in humans. Platelets, also called thrombocytes, are blood cells whose function (along with the coagulation factors) is to stop bleeding by clumping and clogging blood vessel injuries; platelets have no cell nucleus. Coagulation is highly conserved throughout biology; in all mammals it involves both a cellular (platelet) and a protein (coagulation factor) component, and defects in these components can arise from genetic blood disorders.

Hematologic Malignancies

Lymphocytic leukemias affect the white blood cells and are closely related to the lymphomas; some, such as adult T-cell leukemia, are classed among the lymphoproliferative disorders. Most involve the B-cell subtype of lymphocytes. Myeloid leukemia arises from granulocyte precursors in the bone marrow and produces abnormal growth of blood cells from bone marrow tissue. Leukemias are mainly diseases of hematopoietic cells and are subdivided into acute and chronic forms. Acute leukemia rapidly produces immature blood cells; because these accumulate in bulk, healthy cells are no longer produced in the bone marrow, and the abnormal cells spill over into the bloodstream and spread to other parts of the body. In chronic leukemia, large numbers of relatively mature but still abnormal white cells build up; it progresses more slowly, is mostly seen in older people, and does not always require immediate treatment. Cancers that originate from lymphocytes are called lymphomas; Hodgkin lymphoma is treated by radiation and chemotherapy, or by hematopoietic stem cell transplantation. Lymphomas that lack the Hodgkin pattern are grouped as non-Hodgkin lymphomas and typically arise in the lymph nodes. When the bone marrow produces too many abnormal plasma cells, the result is multiple myeloma. Further details on these malignancies were discussed at the Hematology Oncology conference 2015.

Hematology and immunology

Blood groups are of the ABO type, but Rh blood grouping is also important: the Rh system comprises about 50 well-defined antigens, of which five (D, C, c, E and e) matter most, and Rh-positive or Rh-negative status refers to the D antigen. Determining D-antigen status helps prevent erythroblastosis fetalis: blood lacking the Rh antigen is defined as Rh-negative, blood carrying it as Rh-positive, and a mismatch leads to Rh incompatibility. Hematology is the prevention and treatment of diseases related to the blood, and hematologists also work on cancer. Disorders of the immune system leading to hypersensitivity fall under clinical immunology; an abnormal response to infection is known as inflammation, and an abnormal immune response against the body, or immune suppression, is known as an autoimmune disorder. Stem cell therapy is used to treat or prevent disease; bone marrow stem cell therapy is the most common form, and umbilical cord therapy has been used more recently. Stem cell transplantation remains a dangerous procedure with many possible complications and is reserved for patients with life-threatening diseases.
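As a rough illustration of the ABO/RhD logic described above, the toy function below applies the textbook simplification that donor red-cell antigens must already be present in the recipient and that RhD-negative recipients need RhD-negative blood; real transfusion practice involves far more extensive matching and cross-matching:

```python
# Toy ABO/RhD red-cell compatibility check (textbook simplification only).
def rbc_compatible(donor: str, recipient: str) -> bool:
    d_abo, d_rh = donor[:-1], donor[-1]        # e.g. "O-" -> ("O", "-")
    r_abo, r_rh = recipient[:-1], recipient[-1]
    # Donor A/B antigens must already be present on the recipient's cells.
    abo_ok = set(d_abo.replace("O", "")) <= set(r_abo.replace("O", ""))
    # RhD-negative recipients should not receive RhD-positive blood.
    rh_ok = not (d_rh == "+" and r_rh == "-")
    return abo_ok and rh_ok

print(rbc_compatible("O-", "AB+"))  # True: universal red-cell donor
print(rbc_compatible("A+", "A-"))   # False: RhD mismatch
```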

Blood Transplantation

The umbilical cord is a conduit between the developing embryo or fetus and the placenta; the umbilical vein supplies the fetus with nutrient-rich blood from the placenta. In a hematopoietic bone marrow transplant, the HSCs are removed from a large bone of the donor, typically the pelvis, through a large needle that reaches the centre of the bone. Acute myeloid leukemia is a cancer of the myeloid line of blood cells, characterized by the rapid growth of abnormal white blood cells that accumulate in the bone marrow and interfere with the production of normal blood cells. Thrombosis is the formation of a blood clot inside a blood vessel, obstructing the flow of blood through the circulatory system, whereas hemostasis is the process that causes bleeding to stop, keeping blood within a damaged vessel as the first stage of wound healing. Metabolic syndrome is a disorder of energy utilization and storage, diagnosed when three out of five of the following conditions co-occur: obesity, elevated blood pressure, elevated fasting plasma glucose, high serum triglycerides, and low high-density lipoprotein (HDL) levels. Metabolic syndrome increases the risk of developing cardiovascular disease and diabetes.
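The "three out of five" rule for metabolic syndrome can be written out directly. The sketch below uses the criteria named above; how each criterion is actually measured and thresholded is omitted here and would be an assumption:

```python
# Minimal sketch of the 3-of-5 co-occurrence rule for metabolic syndrome.
CRITERIA = [
    "obesity",
    "elevated blood pressure",
    "elevated fasting plasma glucose",
    "high serum triglycerides",
    "low HDL cholesterol",
]

def meets_metabolic_syndrome(findings: set) -> bool:
    present = sum(1 for criterion in CRITERIA if criterion in findings)
    return present >= 3  # diagnosis requires at least three of the five criteria

print(meets_metabolic_syndrome({"obesity",
                                "elevated blood pressure",
                                "high serum triglycerides"}))        # True
print(meets_metabolic_syndrome({"obesity", "low HDL cholesterol"}))  # False
```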

Diagnosis and Treatment

Palliative care is a multidisciplinary approach to specialised medical care for people with serious illnesses. The spleen, similar in structure to a large lymph node, acts as a blood filter. Anticoagulants (antithrombotics) are a class of drugs that work to prevent the coagulation (clotting) of blood; some anticoagulants are used in medical equipment such as test tubes, blood transfusion bags, and renal dialysis equipment. A vena cava filter is a type of vascular filter, a medical device implanted by interventional radiologists or vascular surgeons into the inferior vena cava to help prevent life-threatening pulmonary emboli. Radiation therapy uses ionizing radiation, generally as part of cancer treatment, to control or kill malignant cells; it may be curative in a number of types of cancer if they are localized to one area of the body, and the subspecialty of oncology that focuses on radiotherapy is called radiation oncology. Translational research, also called translational science, aims to apply knowledge from basic science to clinical problems; doing so remains a major stumbling block, partly because of compartmentalization within science. Targeted drug delivery is a method of delivering medication to a patient in a manner that increases the concentration of the medication in some parts of the body relative to others.

New Drug Development in Haematology

The development of antibiotic resistance in particular stems from drugs targeting only specific bacterial molecules: because the drug is so specific, any mutation in these molecules will interfere with or negate its destructive effect, resulting in antibiotic resistance. Conditions treated with combination therapy include tuberculosis, leprosy, cancer, malaria, and HIV/AIDS. One major benefit of combination therapies is that they reduce the development of drug resistance, since a pathogen or tumour is less likely to have resistance to multiple drugs simultaneously; artemisinin-based monotherapies for malaria are explicitly discouraged to avoid the development of resistance to the newer treatment. Drug-induced blood disorders also occur, producing anaemia and pale skin, and non-steroidal anti-inflammatory drugs can cause ulcers. Using drug repositioning, pharmaceutical companies have achieved a number of successes, for example Pfizer's Viagra in erectile dysfunction and Celgene's thalidomide in severe erythema nodosum leprosum. Smaller companies, including Ore Pharmaceuticals, Biovista, Numedicus, Melior Discovery and SOM Biotech, are also performing drug repositioning on a systematic basis, using a combination of approaches including in silico biology and in vivo/in vitro experimentation to assess a compound and to develop and confirm hypotheses concerning its use for new indications.
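Why combination therapy suppresses resistance can be shown with a toy probability calculation, assuming independent per-drug resistance probabilities (the numbers are purely illustrative, not measured mutation rates):

```python
# Toy illustration: chance of pre-existing resistance to one drug vs. two drugs,
# assuming independence (illustrative numbers only).
p_resist_a = 1e-6                      # probability a pathogen cell resists drug A
p_resist_b = 1e-6                      # probability it resists drug B

p_resist_both = p_resist_a * p_resist_b
print(f"Resistant to A alone: {p_resist_a:.0e}")     # 1e-06
print(f"Resistant to A and B: {p_resist_both:.0e}")  # 1e-12, vastly rarer
```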

Hematology Research

Lymphatic diseases include cancers of the lymphatic system, which can start almost anywhere in the body. Risk factors are believed to include HIV, Epstein-Barr virus infection, age and family history; symptoms include weight loss, fever, swollen lymph nodes, night sweats, itchy skin, fatigue, chest pain, coughing and/or trouble swallowing. The lymphatic system is part of the circulatory system, comprising a network of lymphatic vessels that carry a clear fluid called lymph directionally towards the heart. It was first described in the seventeenth century independently by Olaus Rudbeck and Thomas Bartholin. Unlike the cardiovascular system, the lymphatic system is not a closed system. The human circulatory system processes an average of 20 litres of blood per day through capillary filtration, which removes plasma while leaving the blood cells; roughly 17 litres of the filtered plasma are reabsorbed directly into the blood vessels, while the remaining 3 litres are left behind in the interstitial fluid. One of the main functions of the lymph system is to provide an accessory return route to the blood for this surplus 3 litres. Lymphatic cancers include non-Hodgkin's lymphoma and Hodgkin lymphoma. The thymus is a specialized primary lymphoid organ of the immune system; within the thymus, T cells (T lymphocytes) mature. T cells are critical to the adaptive immune system, where the body adapts specifically to foreign invaders. The spread of a tumour into the lymph nodes is studied within oncology.
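The daily fluid balance quoted above can be checked with simple arithmetic:

```python
# Daily fluid balance of capillary filtration, litres per day (figures from the text).
filtered = 20     # plasma filtered out of the capillaries
reabsorbed = 17   # reabsorbed directly into the blood vessels
via_lymph = filtered - reabsorbed
print(via_lymph)  # 3 litres/day returned to the blood by the lymphatic system
```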

Various Aspects of Haematology

Pediatric Hematology and Oncology is an international peer-reviewed medical journal that covers all aspects of pediatric hematology and oncology. The journal covers immunology, pathology and pharmacology in relation to blood diseases and cancer in children and shows how basic experimental research can contribute to the understanding of clinical problems. Physicians specialized in hematology are known as hematologists or haematologists. Their routine work mainly includes the care and treatment of patients with hematological diseases, although some may also work in the hematology laboratory, viewing blood films and bone marrow slides under the microscope and interpreting various hematological test and blood clotting test results. In some institutions, hematologists also manage the hematology laboratory. Physicians who work in hematology laboratories, and most commonly manage them, are pathologists specialized in the diagnosis of hematological diseases, referred to as hematopathologists or haematopathologists. Experimental Hematology is a peer-reviewed medical journal of hematology which publishes original research articles and reviews, as well as the abstracts of the annual proceedings of the Society for Hematology and Stem Cells; such work should be carried out under established hematology guidelines.

Blood Based Products

A blood substitute is a substance used to mimic and fulfill some functions of biological blood. It aims to provide an alternative to blood transfusion, which is transferring blood or blood-based products from one person into another. Thus far, there are no well-accepted oxygen-carrying blood substitutes, which is the typical objective of a red blood cell transfusion; however, there are widely available non-blood volume expanders for cases where only volume restoration is required. These are helping doctors and surgeons avoid the risks of disease transmission and immune suppression, address the chronic blood donor shortage, and address the concerns of Jehovah's Witnesses and others who have religious objections to receiving transfused blood. Pathogen reduction using riboflavin and UV light is a method by which infectious pathogens in blood for transfusion are inactivated by adding riboflavin and irradiating with UV light. This method reduces the infectious levels of disease-causing agents that may be found in donated blood components, while still maintaining good-quality blood components for transfusion; this approach to increasing blood safety is also known in the industry as pathogen inactivation. An artificial cell or minimal cell is an engineered particle that mimics one or many functions of a biological cell. The term does not refer to a specific physical entity, but rather to the idea that certain functions or structures of biological cells can be replaced or supplemented with a synthetic entity. Often, artificial cells are biological or polymeric membranes which enclose biologically active materials; as such, nanoparticles, liposomes, polymersomes, microcapsules and a number of other particles have qualified as artificial cells. Semi-synthetic drug products manufactured in this way are known as therapeutic biological products. Anticoagulants (antithrombotics) are a class of drugs that work to prevent the coagulation (clotting) of blood; such substances occur naturally in leeches and blood-sucking insects.

See the original post:
Hematology Conferences | Blood Disorder Conferences | USA ...

Read More...

Blindness by Jose Saramago – Powell’s Books

October 9th, 2016 1:43 pm

Awards

Winner of the 1998 Nobel Prize for Literature
A New York Times Notable Book of the Year
A Los Angeles Times Best Book of the Year

A devastating and often horrific look at societal breakdown, Blindness is one of the most acclaimed novels from José Saramago, Portugal's only Nobel laureate for literature. Far more than a mere dystopian plague novel, Blindness is a metaphorical account of society's basest tendencies in the face of catastrophe. Saramago's magnificently wending sentences and trademark style lend grace and beauty to an otherwise gruesome tale of epidemic chaos. Recommended By Jeremy G., Powells.com

A city is hit by an epidemic of "white blindness" which spares no one. Authorities confine the blind to an empty mental hospital, but there the criminal element holds everyone captive, stealing food rations and raping women. There is one eyewitness to this nightmare who guides seven strangers, among them a boy with no mother, a girl with dark glasses, and a dog of tears, through the barren streets, and the procession becomes as uncanny as the surroundings are harrowing. A magnificent parable of loss and disorientation and a vivid evocation of the horrors of the twentieth century, Blindness has swept the reading public with its powerful portrayal of man's worst appetites and weaknesses and man's ultimately exhilarating spirit. It is a stunningly powerful novel of man's will to survive against all odds, by the winner of the 1998 Nobel Prize for Literature.

"Beautifully written in a concise, haunting prose...this unsettling, highly original work is essential reading." Library Journal

"Saramago's Blindness is the best novel I've read since Gabriel Garcia Marquez' Love in the Time of Cholera. It is a novel of enormous skill and authority....Like all great books it is simultaneously contemporary and timeless, and ambitiously confronts the human condition without a false note struck anywhere. Saramago is one of the great writers of our time, and Blindness, ironically is the product of his extraordinary vision." David Guterson, author of Snow Falling on Cedars

"Blindness may be as revolutionary in its own way and time as were, say, The Trial and The Plague were in theirs. Another masterpiece." Kirkus Reviews, starred review

"Saramago writes phantasmagoria in the midst of the most astonishing fantasy he has a meticulous sense of detail. It's very eloquent stuff." Harold Bloom, author of The Western Canon

"It is the voice of Blindness that gives it its charm. By turns ironic, humorous and frank, there is a kind of wink of humor between author and reader that is perfectly imbued with fury at the excesses of the current century. Blindness reminds me of Kafka roaring with laughter as he read his stories to his friends....Blindness' impact carries the force of an author whose sensibility is significant." The Washington Post

"Blindness is a shattering work by a literary master." The Boston Globe

"More frightening than Stephen King, as unrelenting as a bad dream, Jos Saramago's Blindness politely rubs our faces in apocalypse....A metaphor like 'white blindness' might easily seem forced or labored, but Saramago makes it live by focusing on the stubbornly literal; his account of a clump of newly blind people trying to find their way to food or to the bathroom provides some surprisingly gripping passages. While this epidemic has a clear symbolic burden, it's also a real and very inconvenient affliction." Salon

In Blindness, a city is overcome by an epidemic of blindness that spares only one woman. She becomes a guide for a group of seven strangers and serves as the eyes and ears for the reader in this profound parable of loss and disorientation. We return to the city years later in Saramago's Seeing, a satirical commentary on government in general and democracy in particular. Together here for the first time, this beautiful edition will be a welcome addition to the library of any Saramago fan.

José Saramago (1922-2010) was the author of many novels, among them Blindness, All the Names, Baltasar and Blimunda, and The Year of the Death of Ricardo Reis. In 1998 he was awarded the Nobel Prize for Literature.

See the original post here:
Blindness by Jose Saramago - Powell's Books

Read More...

Home | EMBO Reports

October 8th, 2016 5:48 pm

Article

These two authors contributed equally to this work

Using a histone FRAP method, this study identifies Gadd45a as a chromatin relaxer and somatic cell reprogramming enhancer. Gadd45a destabilizes histone-DNA interactions and facilitates the binding of Yamanaka factors to their targets.

FRAP is used to assess heterochromatin/euchromatin dynamics in somatic cell reprogramming.

Gadd45a is a chromatin relaxer and improves somatic cell reprogramming.

Gadd45a destabilizes histone-DNA interactions and facilitates the binding of Yamanaka factors to their targets.

Keshi Chen, Qi Long, Tao Wang, Danyun Zhao, Yanshuang Zhou, Juntao Qi, Yi Wu, Shengbiao Li, Chunlan Chen, Xiaoming Zeng, Jianguo Yang, Zisong Zhou, Weiwen Qin, Xiyin Liu, Yuxing Li, Yingying Li, Xiaofen Huang, Dajiang Qin, Jiekai Chen, Guangjin Pan, Hans R Schöler, Guoliang Xu, Xingguo Liu, Duanqing Pei

The rest is here:
Home | EMBO Reports

Read More...


