


Enbrel (etanercept)

November 8th, 2016 9:44 pm

IMPORTANT SAFETY INFORMATION

What is the most important information I should know about ENBREL?

ENBREL is a medicine that affects your immune system. ENBREL can lower the ability of your immune system to fight infections. Serious infections have happened in patients taking ENBREL. These infections include tuberculosis (TB) and infections caused by viruses, fungi, or bacteria that have spread throughout the body. Some patients have died from these infections. Your healthcare provider should test you for TB before you take ENBREL and monitor you closely for TB before, during, and after ENBREL treatment, even if you have tested negative for TB.

There have been some cases of unusual cancers reported in children and teenage patients who started using tumor necrosis factor (TNF) blockers before 18 years of age. Also, for children, teenagers, and adults taking TNF blockers, including ENBREL, the chances of getting lymphoma or other cancers may increase. Patients with RA may be more likely to get lymphoma.

Before starting ENBREL, tell your healthcare provider if you:

What are the possible side effects of ENBREL?

ENBREL can cause serious side effects including: New infections or worsening of infections you already have; hepatitis B can become active if you already have had it; nervous system problems, such as multiple sclerosis, seizures, or inflammation of the nerves of the eyes; blood problems (some fatal); new or worsening heart failure; new or worsening psoriasis; allergic reactions; autoimmune reactions, including a lupus-like syndrome and autoimmune hepatitis.

Common side effects include: Injection site reactions, upper respiratory infections (sinus infections), and headache.

In general, side effects in children were similar in frequency and type as those seen in adult patients. The types of infections reported were generally mild and similar to those usually seen in children.

These are not all the side effects with ENBREL. Tell your healthcare provider about any side effect that bothers you or does not go away.

If you have any questions about this information, be sure to discuss them with your healthcare provider. You are encouraged to report negative side effects of prescription drugs to the FDA. Visit http://www.fda.gov/medwatch, or call 1-800-FDA-1088.

Please see Prescribing Information and Medication Guide.

INDICATIONS

Moderate to Severe Rheumatoid Arthritis (RA)

ENBREL is indicated for reducing signs and symptoms, keeping joint damage from getting worse, and improving physical function in patients with moderately to severely active rheumatoid arthritis. ENBREL can be taken with methotrexate or used alone.

Moderately to Severely Active Polyarticular Juvenile Idiopathic Arthritis (JIA)

ENBREL is indicated for reducing signs and symptoms of moderately to severely active polyarticular juvenile idiopathic arthritis (JIA) in children ages 2 years and older.

Psoriatic Arthritis

ENBREL is indicated for reducing signs and symptoms, keeping joint damage from getting worse, and improving physical function in patients with psoriatic arthritis. ENBREL can be used with or without methotrexate.

Ankylosing Spondylitis (AS)

ENBREL is indicated for reducing signs and symptoms in patients with active ankylosing spondylitis.

Moderate to Severe Plaque Psoriasis

ENBREL is indicated for chronic moderate to severe plaque psoriasis (PsO) in children 4 years and older and adults who may benefit from taking injections or pills (systemic therapy) or phototherapy (ultraviolet light).

At Enbrel.com, you can learn about Enbrel (etanercept), a self-injected biologic medicine used to treat inflammatory diseases with long-term effects. You can find information about moderate to severe rheumatoid arthritis (RA), moderate to severe plaque psoriasis, psoriatic arthritis, moderately to severely active polyarticular juvenile idiopathic arthritis (JIA), and ankylosing spondylitis (AS). You can learn about symptoms, treatment, how Enbrel (etanercept) works for each condition, results for each condition, and safety information.

Enbrel.com supports you and your loved ones from diagnosis to treatment. You can find resources like injection demonstrations, patient testimonial videos, questions to ask your doctor, and even help with finding a rheumatologist or dermatologist near you.

Enbrel.com also provides ongoing assistance with ENBREL Support™, a patient support program to help with out-of-pocket costs and connect you with registered nurses and ENBREL Nurse Partners. The resources available will help you get started; they include the ENBREL Starter Kit, injection and medicine refill reminders, free needle disposal containers, travel packs, and ongoing education.


Genetics & Medicine – Site Guide – NCBI

November 7th, 2016 7:49 pm

Bookshelf

A collection of biomedical books that can be searched directly or from linked data in other NCBI databases. The collection includes biomedical textbooks, other scientific titles, genetic resources such as GeneReviews, and NCBI help manuals.

A resource to provide a public, tracked record of reported relationships between human variation and observed health status with supporting evidence. Related information in the NIH Genetic Testing Registry (GTR), MedGen, Gene, OMIM, PubMed, and other sources is accessible through hyperlinks on the records.

A registry and results database of publicly- and privately-supported clinical studies of human participants conducted around the world.

An archive and distribution center for the description and results of studies which investigate the interaction of genotype and phenotype. These studies include genome-wide association (GWAS), medical resequencing, molecular diagnostic assays, as well as association between genotype and non-clinical traits.

An open, publicly accessible platform where the HLA community can submit, edit, view, and exchange data related to the human major histocompatibility complex. It consists of an interactive Alignment Viewer for HLA and related genes, an MHC microsatellite database, a sequence interpretation site for Sequencing Based Typing (SBT), and a Primer/Probe database.

A searchable database of genes, focusing on genomes that have been completely sequenced and that have an active research community to contribute gene-specific data. Information includes nomenclature, chromosomal localization, gene products and their attributes (e.g., protein interactions), associated markers, phenotypes, interactions, and links to citations, sequences, variation details, maps, expression reports, homologs, protein domain content, and external databases.

A collection of expert-authored, peer-reviewed disease descriptions on the NCBI Bookshelf that apply genetic testing to the diagnosis, management, and genetic counseling of patients and families with specific inherited conditions.

Summaries of information for selected genetic disorders with discussions of the underlying mutation(s) and clinical features, as well as links to related databases and organizations.

A voluntary registry of genetic tests and laboratories, with detailed information about the tests such as what is measured and analytic and clinical validity. GTR also is a nexus for information about genetic conditions and provides context-specific links to a variety of resources, including practice guidelines, published literature, and genetic data/information. The initial scope of GTR includes single gene tests for Mendelian disorders, as well as arrays, panels and pharmacogenetic tests.

A database of known interactions of HIV-1 proteins with proteins from human hosts. It provides annotated bibliographies of published reports of protein interactions, with links to the corresponding PubMed records and sequence data.

A compilation of data from the NIAID Influenza Genome Sequencing Project and GenBank. It provides tools for flu sequence analysis, annotation and submission to GenBank. This resource also has links to other flu sequence resources, and publications and general information about flu viruses.

A portal to information about medical genetics. MedGen includes term lists from multiple sources and organizes them into concept groupings and hierarchies. Links are also provided to information related to those concepts in the NIH Genetic Testing Registry (GTR), ClinVar, Gene, OMIM, PubMed, and other sources.

A project involving the collection and analysis of bacterial pathogen genomic sequences originating from food, environmental and patient isolates. Currently, an automated pipeline clusters and identifies sequences supplied primarily by public health laboratories to assist in the investigation of foodborne disease outbreaks and discover potential sources of food contamination.

A database of human genes and genetic disorders. NCBI maintains current content and continues to support its searching and integration with other NCBI databases. However, OMIM now has a new home at omim.org, and users are directed to this site for full record displays.

A database of citations and abstracts for biomedical literature from MEDLINE and additional life science journals. Links are provided when full text versions of the articles are available via PubMed Central (described below) or other websites.

A digital archive of full-text biomedical and life sciences journal literature, including clinical medicine and public health.

A collection of clinical effectiveness reviews and other resources to help consumers and clinicians use and understand clinical research results. These are drawn from the NCBI Bookshelf and PubMed, including published systematic reviews from organizations such as the Agency for Health Care Research and Quality, The Cochrane Collaboration, and others (see complete listing). Links to full text articles are provided when available.

A collection of resources specifically designed to support the research of retroviruses, including a genotyping tool that uses the BLAST algorithm to identify the genotype of a query sequence; an alignment tool for global alignment of multiple sequences; an HIV-1 automatic sequence annotation tool; and annotated maps of numerous retroviruses viewable in GenBank, FASTA, and graphic formats, with links to associated sequence records.

A summary of data for the SARS coronavirus (CoV), including links to the most recent sequence data and publications, links to other SARS related resources, and a pre-computed alignment of genome sequences from various isolates.

An extension of the Influenza Virus Resource to other organisms, providing an interface to download sequence sets of selected viruses, analysis tools, including virus-specific BLAST pages, and genome annotation pipelines.


Legal issues in predictive genetic testing programs.

November 6th, 2016 3:45 am

This article reviews aspects of predictive genetic testing to which the general law of doctor-patient relations applies and identifies peculiarities of such testing that raise more specialized legal issues. Where testing programs are experimental in character, investigators bear legal responsibilities to inform their subjects adequately and separate duties to submit their proposals to ethical review. Access to routine care and counseling and to specialized testing programs is addressed in the contexts of antidiscrimination laws and patient protection. The law on patients' adequately informed and free decision-making regarding testing is reviewed, with particular attention to reproductive counseling and planning for future inability to make or express decisions about care. Modern perceptions of the legal nature of medical confidentiality are applied to results of predictive genetic testing, and distinctions are illustrated between justified and excusable breaches of confidentiality, particularly with regard to familial disorders. Attention is given to patients' directions that their medical information be made available to third parties and to themselves. Finally, legal issues are considered regarding control of tissue samples that patients give for genetic diagnosis.


10 Benefits to Drinking Warm Lemon Water Every Morning …

November 4th, 2016 5:41 pm

Something that has been very important for my body during this 7-Day Spring Cleanse, but has also been a part of my daily routine for a few months now, is drinking warm lemon water. I have started (almost) every day with a glass of warm lemon water, and it has made a huge difference for me. Warm lemon water in the morning helps kickstart the digestion process for the day. According to Ayurvedic philosophy, the choices you make regarding your daily routine either build up resistance to disease or tear it down. Ayurveda invites us to get a jump-start on the day by focusing on morning rituals that work to align the body with nature's rhythms, balance the doshas, and foster self-esteem alongside self-discipline.

There are many health benefits of lemons that have been known for centuries. The two biggest are lemons' strong antibacterial, antiviral, and immune-boosting powers, and their use as a weight loss aid, because lemon juice is a digestive aid and liver cleanser. Lemons contain many substances, notably citric acid, calcium, magnesium, vitamin C, bioflavonoids, pectin, and limonene, that promote immunity and fight infection.

You should use purified water, and it should be lukewarm, not scalding hot. You want to avoid ice-cold water, since that can be a lot for your body to process; it takes more energy to process ice-cold water than warm. Always use fresh lemons, organic if possible, never bottled lemon juice. I squeeze half a lemon into each glass and drink it down first thing, before I eat a single thing, work out, etc.

BONUS: try adding freshly grated ginger or a little cayenne for a boost.

1) Aids Digestion. Lemon juice flushes out unwanted materials and toxins from the body. Its atomic composition is similar to saliva and the hydrochloric acid of digestive juices. It encourages the liver to produce bile which is an acid that is required for digestion. Lemons are also high in minerals and vitamins and help loosen ama, or toxins, in the digestive tract. The digestive qualities of lemon juice help to relieve symptoms of indigestion, such as heartburn, belching and bloating. The American Cancer Society actually recommends offering warm lemon water to cancer sufferers to help stimulate bowel movements.

2) Cleanses Your System / is a Diuretic. Lemon juice helps flush out unwanted materials in part because lemons increase the rate of urination in the body. Therefore toxins are released at a faster rate which helps keep your urinary tract healthy. The citric acid in lemons helps maximize enzyme function, which stimulates the liver and aids in detoxification.

3) Boosts Your Immune System. Lemons are high in vitamin C, which is great for fighting colds. They're high in potassium, which stimulates brain and nerve function. Potassium also helps control blood pressure. Ascorbic acid (vitamin C) found in lemons demonstrates anti-inflammatory effects and is used as complementary support for asthma and other respiratory symptoms; plus, it enhances iron absorption in the body, and iron plays an important role in immune function. Lemons also contain saponins, which show antimicrobial properties that may help keep colds and flu at bay. Lemons also reduce the amount of phlegm produced by the body.

4) Balances pH Levels. Lemons are one of the most alkalizing foods for the body. Sure, they are acidic on their own, but inside our bodies they're alkaline (the citric acid does not create acidity in the body once metabolized). Lemons contain both citric and ascorbic acid, weak acids easily metabolized by the body, allowing the mineral content of lemons to help alkalize the blood. Disease states only occur when the body pH is acidic. Drinking lemon water regularly can help to remove overall acidity in the body, including uric acid in the joints, which is one of the primary causes of pain and inflammation.

5) Clears Skin. The vitamin C component, as well as other antioxidants, helps decrease wrinkles and blemishes and combats free radical damage. Vitamin C is vital for healthy, glowing skin, while its alkaline nature kills some types of bacteria known to cause acne. It can actually be applied directly to scars or age spots to help reduce their appearance. Since lemon water purges toxins from your blood, it also helps keep your skin clear of blemishes from the inside out. The vitamin C contained in the lemon rejuvenates the skin from within your body.

6) Energizes You and Enhances Your Mood. The energy a human receives from food comes from the atoms and molecules in that food. A reaction occurs when positively charged ions from food enter the digestive tract and interact with negatively charged enzymes. Lemon is one of the few foods that contain more negatively charged ions, providing your body with more energy when it enters the digestive tract. The scent of lemon also has mood-enhancing and energizing properties. The smell of lemon juice can brighten your mood and help clear your mind. Lemon can also help reduce anxiety and depression.

7) Promotes Healing. Ascorbic acid (vitamin C), found in abundance in lemons, promotes wound healing and is an essential nutrient in the maintenance of healthy bones, connective tissue, and cartilage. As noted previously, vitamin C also displays anti-inflammatory properties. In combination, these effects make vitamin C essential to the maintenance of good health and recovery from stress and injury.

8) Freshens Breath. Besides freshening breath, lemons have been known to help relieve tooth pain and gingivitis. Be aware that citric acid can erode tooth enamel, so you should be mindful of this: do not brush your teeth just after drinking your lemon water. It is best to brush your teeth first, then drink your lemon water, or to wait a significant amount of time afterward before brushing. Additionally, you can rinse your mouth with purified water after you finish your lemon water.

9) Hydrates Your Lymph System. Warm water and lemon juice support the immune system by hydrating and replacing fluids lost by your body. When your body is deprived of water, you can definitely feel the side effects, which include feeling tired and sluggish, decreased immune function, constipation, lack of energy, low/high blood pressure, lack of sleep, lack of mental clarity, and feeling stressed, just to name a few.

10) Aids in Weight Loss. Lemons are high in pectin fiber, which helps fight hunger cravings. Studies have shown that people who maintain a more alkaline diet do in fact lose weight faster. I personally find myself making better choices throughout the day if I start my day off right by making a health-conscious choice to drink warm lemon water first thing every morning.

Do you drink warm lemon water every morning? What are your favorite benefits?

I always zest my lemons before I juice them for my daily warm lemon water. I keep a container in the freezer and just keep adding to it. It's great to toss into pasta dishes, salad dressings, etc.

Tagged as: 10 Benefits to Drinking Warm Lemon Water Every Morning, ayurveda, benefits, breath, energy, fresh, healing, Health, lemon, lemon juice, lymph, mood enhancing, pH balance, reasons to drink warm lemon water, skin, tasty yummies, weight loss


genetics facts, information, pictures | Encyclopedia.com …

November 3rd, 2016 5:49 pm

I. Genetics and Behavior, by P. L. Broadhurst

BIBLIOGRAPHY

II. Demography and Population Genetics, by Jean Sutter

BIBLIOGRAPHY

III. Race and Genetics, by J. N. Spuhler

BIBLIOGRAPHY

Behavior genetics is a relatively new cross-disciplinary specialization between genetics and psychology. It is so new that it hardly knows what to call itself. The term behavior genetics is gaining currency in the United States; but in some quarters there, and certainly elsewhere, the term psychogenetics is favored. Logically, the best name would be genetical psychology, since the emphasis is on the use of the techniques of genetics in the analysis of behavior rather than vice versa; but the inevitable ambiguity of that term is apparent. Psychologists generally use the terms genetic or genetical in two senses: in the first and older sense of developmental, or ontogenetic; and in the second, more recent usage relating to the analysis of inheritance. The psychologist G. Stanley Hall coined the term genetic before the turn of the century to denote developmental studies (witness the Journal of Genetic Psychology), and Alfred Binet even used the term psychogenetic in this sense. But with the rapid rise of the discipline now known as genetics after the rediscovery of the Mendelian laws in 1900, William Bateson, one of the founders of this new science, pre-empted the term genetic in naming it, thereby investing genetic with the double meaning that causes the current confusion. Psychological genetics, with its obvious abbreviation, psychogenetics, is probably the best escape from the dilemma.

Importance of genetics in behavior. The importance of psychogenetics lies in the fundamental nature of the biological processes in our understanding of human social behavior. The social sciences, and psychology in particular, have long concentrated on environmental determinants of behavior and neglected hereditary ones. But it is clear that in many psychological functions a substantial portion of the observed variation, roughly of the order of 50 per cent for many traits, can be ascribed to hereditary causation. To ignore this hereditary contribution is to impede both action and thought in this area.

This manifold contribution to behavioral variation is not a static affair. Heredity and environment interact, and behavior is the product, rather than the sum, of their respective contributions. The number of sources of variability in both heredity and environment is large, and the consequent number of such possible products even larger. Nevertheless, these outcomes are not incalculable, and experimental and other analyses of their limits are of immense potential interest to the behavioral scientist. The chief theoretical interest lies in the analysis of the evolution of behavior; and the chief practical significance, so far as can be envisaged at present, lies in the possibilities psychogenetics has for the optimization of genetic potential by manipulation of the environmental expression of it.

Major current approaches. The major approaches to the study of psychogenetics can be characterized as the direct, or experimental, and the indirect, or observational. The former derive principally from the genetical parent of this hybrid discipline and involve the manipulation of the heredity of experimental subjects, usually by restricting the choice of mates in some specially defined way. Since such techniques are not possible with human subjects, a second major approach exists, the indirect or observational, with its techniques derived largely from psychology and sociology. The two approaches are largely complementary in the case of natural genetic experiments in human populations, such as twinning or cousin marriages. Thus, the distinction between the two is based on the practicability of controlling in some way the essentially immutable genetic endowment (in a word, the genotype) of the individuals subject to investigation. With typical experimental animals (rats, mice, etc.) and other organisms used by the geneticist, such as the fruit fly and many microorganisms, the genotype can often be specified in advance and populations constructed by the hybridization of suitable strains to meet this specification with a high degree of accuracy. Not so with humans, where the genotype must remain as given, and indeed where its details can rarely be specified with any degree of accuracy except for certain physical characteristics, such as blood groups. Observational, demographic, and similar techniques are therefore all that are available here. The human field has another disadvantage in rigorous psychogenetic work: the impossibility of radically manipulating the environment, for example, by rearing humans in experimental environments from birth in the way that can easily be done with animals in the laboratory.

Since in psychogenetics, as in all branches of genetics, one deals with a phenotype (in this case, behavior), and since the phenotype is the end product of the action, or better still, the interaction, of genotype and environment, human psychogenetics is fraught with double difficulty. Analytical techniques to be mentioned later can assist in resolving some of these difficulties.

Definition. To define psychogenetics as the study of the inheritance of behavior is to adopt a misleadingly narrow definition of the area of study, and one which is unduly restrictive in its emphasis on the hereditarian point of view. Just as the parent discipline of genetics is the analysis not only of the similarities between individuals but also of the differences between them, so psychogenetics seeks to understand the basis of individual differences in behavior. Any psychogenetic analysis must therefore be concerned with the environmental determinants of behavior (conventionally implicated in the genesis of differences) in addition to the hereditary ones (the classic source of resemblances). But manifestly this dichotomy does not always operate, so that for this reason alone the analysis of environmental effects must go hand in hand with the search for genetic causation. This is true even if the intention is merely to exclude the influence of the one the better to study the other; but the approach advocated here is to study the two in tandem, as it were, and to determine the extent to which the one interacts with the other. Psychogenetics is best viewed as that specialization which concerns itself with the interaction of heredity and environment, insofar as they affect behavior. To attempt greater precision is to become involved in subtle semantic problems about the meanings of terms.

At first sight many would tend to restrict environmental effects to those operating after the birth of the organism, but to do so would be to exclude prenatal environmental effects that have been shown to be influential in later behavior. On the other hand, broadening the concept of environment to include all influences after fertilization (the point in time at which the genotype is fixed) permits consideration of the reciprocal influence of parts of the genotype upon each other. Can environment include the rest of the genotype, other than that part which is more or less directly concerned with the phenotype under consideration? This point assumes some importance since there are characteristics (none of them behavioral, at least, have so far been reported) whose expression depends on the nature of the other genes present in the organism. In the absence of some of them, or rather certain alleles of the gene pairs, the value phenotypically observed would be different from what it would be if they were present. That is, different components of the genotype, in interplay with one another, modify phenotypic expression of the characteristic they influence. Can such indirect action, which recalls that of a chemical catalyst, best be considered as environmental or innate? It would be preferable to many to regard this mechanism as a genetic effect rather than an environmental one in the usually accepted sense. Hence, the definition of the area of study as one involving the interaction of heredity and environment, while apparently adding complexity, in fact serves to reduce confusion.

It must be conceded that this view has not as yet gained general acceptance. In some of the work reviewed in the necessarily brief survey of the major findings in this area, attempts have been made to retain a rather rigid dichotomy between heredity and environment (nature versus nurture), in fact an either/or proposition that the facts do not warrant. The excesses of both sides in the controversies of the 1920s, for example the famous debate between Watson and McDougall over the relative importance of learned (environmental) and instinctive (genetic) determinants of behavior, show the fallacies that extreme protagonists on either side can entertain if the importance of the interaction effect is ignored.

Gene action. The nature of gene action as such is essentially conducive to interaction with the environment, since the behavioral phenotype we observe is the end product of a long chain of action, principally biochemical, originating in the chromosome within the individual cell. A chromosome has a complex structure, involving DNA (deoxyribonucleic acid) and the connections of DNA with various proteins, and may be influenced in turn by another nucleic acid, RNA (ribonucleic acid), also within the cell but external to the nucleus. There are complex structures and sequences of processes, anatomical, physiological, and hormonal, which underlie normal development and differentiation of structure and function in the growth, development, and maturation of the organism. Much of this influence is determined genetically in the sense that the genotype of the organism, fixed at conception, determines how it proceeds under normal environmental circumstances. But it would be a mistake to regard any such sequence as rigid or immutable, as we shall see.

The state of affairs that arises when a number of genetically determined biochemical abnormalities affect behavior is illustrative of the argument. Many of these biochemical deficiencies or inborn errors of metabolism in humans are the outcome of a chain of causation starting with genic structures, some of them having known chromosomal locations. Their effects on the total personality (that is, the sum total of behavioral variation that makes the individual unique) can range from the trivial to the intense. The facility with which people can taste a solution of phenylthiocarbamide (PTC), a synthetic substance not found in nature, varies in a relatively simple genetical way: people are either tasters or nontasters in certain rather well-defined proportions, with a pattern of inheritance probably determined by one gene of major effect. But being taste blind or not is a relatively unimportant piece of behavior, since one is never likely to encounter it outside a genetical experiment. (It should perhaps be added that there is some evidence that the ability to taste PTC may be linked with other characteristics of some importance, such as susceptibility to thyroid disease.) Nevertheless, this example is insignificant compared with the psychological effect of the absence of a biochemical link in patients suffering from phenylketonuria. They are unable to metabolize phenylalanine to tyrosine in the liver, with the result that phenylalanine accumulates and the patient suffers multiple defects, among which is usually gross intellectual defect, with an IQ typically on the order of 30. This gross biochemical failure is mediated by a single recessive gene that may be passed on in a family unnoticed in heterozygous (single dose) form but becomes painfully apparent in the unfortunate individual who happens to receive a double dose and is consequently homozygous for the defect.

Alternatively, a normal dominant gene may mutate to the recessive form and so give rise to the trouble. While mutation is individually a relatively rare event, the number of genes in each individual (probably on the order of ten thousand) and the number of individuals in a population make it statistically a factor to be reckoned with. One of the best-documented cases of a deleterious mutation of this kind giving rise to a major defect relates to the hemophilia transmitted, with certain important political consequences, to some of the descendants of Queen Victoria of England. The dependence of the last tsarina of Russia on the monk Rasputin was said to be based in part on the beneficial therapeutic effect of his hypnotic techniques on the uncontrollable bleeding of the Tsarevitch Alexis. Victoria was almost certainly heterozygous for hemophilia and, in view of the absence of any previous record of the defect in the Hanoverian dynasty, it seems likely that the origin of the trouble was a mutation in one of the germ cells in a testicle of Victoria's father, the duke of Kent, before Victoria was conceived in August 1818.

But however it comes about, a defect such as phenylketonuria can be crippling. Fortunately, its presence can be diagnosed in very early life by a simple urine test for phenyl derivatives. The dependence of the expression of the genetic defect on environmental circumstances is such that its effect can be mitigated by feeding the afflicted infant a specially composed diet low in the phenylalanine with which the patient's biochemical make-up cannot cope. Here again, therefore, one sees the interaction of genotype and environment, in this case the type of food eaten. Many of the human biochemical defects that have been brought to light in recent years are rather simply determined genetically, in contrast with the prevailing beliefs about the bases of many behavioral characteristics, including intelligence, personality, and most psychotic and neurotic disorders. This is also true of several chromosomal aberrations that have been much studied recently and that are now known to be implicated in various conditions of profound behavioral importance. Prominent among these is Down's syndrome (mongolism), with, again, effects including impairment of cognitive power. [See Intelligence and Intelligence Testing; Mental Disorders, articles on Biological Aspects and Genetic Aspects.]

Sex as a genetic characteristic. The sex difference is perhaps the most striking genetically determined difference in behavior and the one that is most often ignored in this connection. Primary sex is completely determined genetically at the moment of fertilization of the ovum; in mammals sex depends on whether the spermatozoon effecting fertilization bears an X or a Y chromosome to combine with the X chromosome inevitably contributed by the ovum. The resulting zygote then develops into an XX (female) or an XY (male) individual. This difference penetrates every cell of every tissue of the resulting individual and in turn is responsible for the observable gross differences in morphology. These, in turn, subserve differences of physiological function, metabolism, and endocrine function which profoundly influence not only those aspects of behavior relating to sexual behavior and reproductive function in the two sexes but many other aspects as well. But behavior is also influenced by social and cultural pressures, so that the resulting sex differences in behavior as observed by the psychologist are especially good examples of a phenotype that must be the end product of both genetic and environmental forces. There is a large literature on sex differences in human behavior and a sizable one on such differences in animal behavior, but there has been little attempt to assess this pervasive variation in terms of the relative contribution of genetic and environmental determinants. This is partly because of the technical difficulties of the problem, in the sense that all subjects must be of either one sex or the other (crossing males with females will always result in the same groups as those one started with, either males or females), there being, in general, no genetically intermediate sex against which to evaluate either, and identical twins being inevitably of like sex.
It is also partly because the potential of genetic analyses that do not involve direct experimentation has not been realized. This is especially so since the causal routes whereby the genetic determinants of sex influence many of the behavioral phenotypes observed are often better understood than in other cases, where the genetic determinants underlying individual differences manifest in a population are not so clear-cut. [See Individual Differences, article on Sex Differences.]

Sex linkage. There is one exception to the general lack of interest in the biometrical analysis of sex differences having behavioral connotations: sex-linked conditions. That is to say, it is demonstrated or postulated that the gene or genes responsible for the behavior (often a defect, as in the case of color blindness, which has a significantly greater incidence in males than in females) are linked with the sex difference by virtue of their location on the sex chromosome determining genetic sex. Thus sex can be thought of as a chromosomal difference of regular occurrence, as opposed to aberrations of the sort which give rise to pathological conditions, such as Down's syndrome. Indeed, there are also various anomalies of genetic sex that give rise to problems of sexual identity, in which the psychological and overt behavioral consequences can be of major importance for the individual. While the evidence in such cases of environmental modification of the causative genetic conditions is less dramatic than in phenylketonuria, interaction undoubtedly exists, since these chromosomal defects of sex differentiation can in some cases be alleviated by appropriate surgical and hormonal treatment. [See Sexual Behavior, article on Sexual Deviation: Psychological Aspects; and Vision, article on Color Vision and Color Blindness.]

Human psychogenetics. It is abundantly clear that most of the phenotypes the behavioral scientist is interested in are multidetermined, both environmentally and genetically. The previous examples, however, are the exception rather than the rule, and their prominence bears witness that our understanding of genetics and behavior is as yet so little advanced that the simpler modes of genetic expression have been the first to be explored. In genetics itself, the striking differences in seed configuration used by Mendel in his classic crosses of garden peas are determined by major genes with full dominance acting simply. But such clear-cut expression, especially of dominance, is unusual in human psychogenetics, and more complex statistical techniques are necessary to evaluate the multiple genetic and environmental effects acting to produce the observed phenotype.

Whatever the analysis applied to data gathered in other fields, in human psychogenetics the method employed cannot be the straightforward Mendelian one of crossbreeding, which, in various elaborations, remains the basic tool of the geneticist today. Neither can it be the method of selection (artificial, as opposed to natural) that is otherwise known as selective breeding. Indeed, none of the experimental techniques that can be applied to any other organism, whatever the phenotype being measured, is applicable to man, since experimental mating is effectively ruled out as a permissible technique in current cultures. It may be remarked in passing that such has not always been the case. The experiment of the Mogul emperor Akbar, who reared children in isolation to determine their natural religion (and merely produced mutes), and the eugenics program of J. H. Noyes at the Oneida Community in New York State in the nineteenth century are cases in point. The apparent inbreeding of brother with sister among the rulers of ancient Egypt in the eighteenth dynasty (sixteenth to fourteenth century B.C.), which is often quoted as an example of the absence in humans of the deleterious effects of inbreeding (inbreeding depression), may not be all it seems. It is likely that the definition of sister and brother in this context did not necessarily have the same biological relevance that it has today but was rather a cultural role that could be defined, at least in this case, at will.

Twin study. In the absence of the possibility of an experimental approach, contemporary research in human psychogenetics must rely on natural genetic experiments. Of these, the one most widely used and most industriously studied is the phenomenon of human twinning. Credit for the recognition of the value of observations on twins can be given to the nineteenth-century English scientist Francis Galton, who pioneered many fields of inquiry. He may justly be regarded as the father of psychogenetics for the practical methods he introduced into this field, such as the method of twin study, as well as for his influence, which extended, although indirectly, even to the American experimenters in psychogenetics during the early decades of the present century.

Twin births are relatively rare in humans and vary in frequency with the ethnic group. However, the extent to which such ethnic groups differ among themselves behaviorally as a result of the undoubted genetic differences, of which the incidence of multiple births is but one example, is controversial. As is well known, there are two types of twins: the monozygotic or so-called identical twins, derived from a single fertilized ovum that has split into two at an early stage in development, and the dizygotic or so-called fraternal twins, developed from two separate ova fertilized by different spermatozoa. These two physical types are not always easy to differentiate, although this difficulty is relatively minor in twin study. Nonetheless, they have led to two kinds of investigation. The first relates to differences in monozygotic twins who have identical hereditary make-up but who have been reared apart and thus subjected to different environmental influences during childhood; the second relates to the comparison of the two types of twins, usually restricted to like-sex pairs, since fraternal twins can differ in sex. The latter method supposes all differences between monozygotic pairs to be of environmental origin, whereas the (greater) difference between dizygotic pairs is of environmental plus genetic origin. Thus, the relative contribution of the two sources of variation can be evaluated.

Findings obtained from either method have not been especially clear-cut, both because of intractable problems regarding the relative weight to be placed upon differences in the environment in which the twins have been reared and because of the sampling difficulties, which are likely to be formidable in any twin study. Nevertheless, interesting inferences can be drawn from twin study. The investigation of separated monozygotic twins has shown that, while even with their identical heredity they can differ quite widely, there exists a significant resemblance in basic aspects of personality, including intelligence, introversion, and neurotic tendencies, and that these resemblances can persist despite the widely different environments in which the members of a pair are reared. Such findings emphasize the need to consider the contribution of genotype and environment in an interactive sense: clearly some genotypes represented in the personality of monozygotic twin pairs are sensitive to environmentally induced variation, whereas others are resistant to it.

Comparisons between monozygotic and dizygotic twins reared together suggest that monozygotic twins more closely resemble each other in many aspects of personality, especially those defining psychological factors such as neuroticism and introversion-extroversion. The increase in the differences between the two types of twins when factor measures are used (as opposed to simple test scores) suggests that a more basic biological stratum is tapped by factor techniques, since the genetic determination seems greater than where individual tests are employed. Here again, the degree to which any phenotype is shown to be hereditary in origin is valid only for the environment in which it developed and was measured; different environments may well yield different results. The problems of environmental control in human samples are so intractable that some students of the subject have questioned whether the effort and undoubted skill devoted to twin study have been well invested, in view of the inherent and persisting equivocality of the outcome.
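The arithmetic behind such monozygotic-dizygotic comparisons was later given a standard form, often attributed to Falconer: heritability is estimated from the gap between the two twin correlations. The sketch below is illustrative only; the correlation values are assumed for the example and are not taken from any study discussed here.

```python
def falconer_estimates(r_mz, r_dz):
    """Partition phenotypic variance from twin correlations.

    Assumes the classical twin model: additive genetic effects,
    shared (family) environment, and unique environment, with
    shared environments equal for MZ and DZ pairs reared together.
    """
    h2 = 2 * (r_mz - r_dz)   # heritability: genetic share of variance
    c2 = 2 * r_dz - r_mz     # shared-environment share
    e2 = 1 - r_mz            # unique environment (plus measurement error)
    return h2, c2, e2

# Hypothetical correlations for a personality factor score.
h2, c2, e2 = falconer_estimates(r_mz=0.70, r_dz=0.40)
# h2 = 0.6, c2 = 0.1, e2 = 0.3; the three shares sum to 1.
```

The estimate inherits every caveat noted in the text: it is valid only for the environments actually sampled, and unequal environments for the two kinds of pairs bias it.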

Multivariate methods. Methods of twin study, introduced largely to improve upon the earlier methods of familial correlation (parents with offspring, sib with sib, etc.), have been combined with them. Familial correlation methods themselves have not been dealt with here, since within-family environments are bound to be even greater contaminants in determining the observed behavior than the environments in twin study methods. Nevertheless, used on a large scale and in conjunction with twin study and with control subjects selected at random from a population, multivariate methods show promise for defining the limits of environmental and genotypic interaction. So far, the solutions to the problems of biometrical analysis posed by this type of investigation have been only partial, and the sheer weight of effort involved in locating and testing the requisite numbers of subjects standing in the required relationships has deterred all but a few pioneers. Despite the undoubtedly useful part such investigations have played in defining the problems involved, the absence of the possibility of experimental breeding has proved a drawback in the provision of socially useful data.

Animal psychogenetics. Recourse has often been had to nonhuman subjects. The additional problem thereby incurred of the relevance of comparative data to human behavior is probably balanced by the double refinements of the control of both the heredity and the environment of the experimental subjects. Two major methods of genetics have been employed, both intended to produce subjects of predetermined genotype: the crossbreeding of strains of animals of known genotype; and phenotypic selection, the mating of like with like to increase a given characteristic in a population.

Selection. Behavioral phenotypes of interest have been studied by the above methods, often using laboratory rodents. For example, attributes such as intelligence, activity, speed of conditioning, and emotionality have been selectively bred in rats.

Selection for emotional behavior in the rat will serve as an example of the techniques used and the results achieved. Rats, in common with many other mammalian species, defecate when afraid. A technique of measuring individual differences in emotional arousal is based on this propensity. The animal under test is exposed to mildly stressful noise and light stimulation in an open field or arena. The number of fecal pellets deposited serves as an index of disturbance, and in this way the extremes among a large group of rats can be characterized as high or low in emotional responsiveness. Continued selection from within the high and low groups will in time produce two distinct strains. Control of environmental variables is achieved by a rigid standardization of the conditions under which the animals are reared before being subjected to the test as adults. Careful checks on maternal effects, both prenatal and postnatal, reveal these effects to be minimal.
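The expected outcome of one round of such selection can be sketched with the breeder's equation, R = h²S, where S is the selection differential (the gap between the mean of the selected parents and the mean of the whole group). Everything in the sketch below, scores, heritability, and selected fraction, is invented for illustration and is not Broadhurst's data.

```python
import random

def expected_offspring_mean(scores, h2, top_fraction):
    """One round of truncation selection for high scores.

    The breeder's equation predicts the offspring mean as the
    population mean plus h2 times the selection differential
    (selected-parent mean minus population mean).
    """
    mean_all = sum(scores) / len(scores)
    k = max(1, int(len(scores) * top_fraction))
    selected = sorted(scores, reverse=True)[:k]
    diff = sum(selected) / k - mean_all   # selection differential S
    return mean_all + h2 * diff

# Hypothetical open-field defecation scores for 200 rats.
random.seed(0)
scores = [random.gauss(4.0, 1.5) for _ in range(200)]
predicted = expected_offspring_mean(scores, h2=0.6, top_fraction=0.2)
```

Repeating the step within the high and low extremes, as in the experiment described, is what gradually pulls the two lines apart.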

Such an experiment does little beyond establishing the importance of the genetic effect on the given strains in the given environment. While there are techniques for assessing the relative importance of the genetic and environmental contributions to the variation observed under selection, they are better suited to the analysis of the outcome of experiments using the alternative major genetical method, that of crossbreeding of inbred strains.

Crossbreeding. Strains used in crossbreeding experiments have usually been inbred for a phenotypic character of interest, although not usually a behavioral one. However, this does not preclude the use of these inbred strains for behavioral studies, since linkage relationships among genes ensure that selection for factors that are genetically multidetermined often involves multiple changes in characteristics other than those selected for, and behavior is no exception to this rule. Moreover, the existence of such inbred strains constitutes perhaps the most important single advantage of animals as subjects, since it enables simplifying assumptions regarding the homozygosity, or genetic uniformity, of such strains to be made in analyzing the outcome of crosses. Members of inbred strains are theoretically as alike as monozygotic twin pairs, so that genetic relationships (which in human populations can be investigated only after widespread efforts to find them) can be multiplied at will in laboratory animals.

This approach allows a more sensitive analysis of the determinants, both environmental and genetic, of the behavioral phenotype under observation. In addition, the nature of the genetic forces can be further differentiated into considerations of the average dominance effects of the genes involved, the extent to which they tend to increase or decrease the metrical expression of the behavioral phenotype, and the extent to which the different strains involved possess such increasers or decreasers. Finally, rough estimates of the number of these genes can be given. But the analysis depends upon meeting requirements regarding the scaling of the metric upon which the behavior is measured and is essentially a statistical one. That is, only the average effects of the cumulative action of the relatively large number of genes postulated as involved can be detected. Gone are the elegantly simple statistics derived from the classical Mendelian analyses of genes of major effect, often displaying dominance, like those encountered in certain human inborn errors of metabolism. There is little evidence of the existence of comparable genes of major effect mediating behavior in laboratory animals, although some have been studied in insects, especially the fruit fly.

A typical investigation of a behavioral phenotype might take the form of identifying two inbred strains known to differ in a behavioral trait, measuring individuals from these strains, and then systematically crossing them and measuring all offspring. When this was done for the runway performance of mice, an attribute related to their temperamental wildness, the results, analyzed by the techniques of biometrical genetics, showed that the behavior was controlled by at least three groups of genes (a probable underestimate). The contributions of these groups were additive to each other and independent of the environment when measured on a logarithmic scale but interacted with each other and with the environment on a linear scale. These genes showed a significant average dominance effect, and there was a preponderance of dominant genes in the direction of greater wildness. The heritability ratio of the contributions of nature and nurture was around seven to three.
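A heritability ratio of this kind rests on a variance partition of roughly the following form: the inbred parental strains and their F1 are genetically uniform, so their phenotypic variance is taken as environmental, and the excess variance of the segregating F2 is attributed to the genes. This is only a schematic sketch with invented variances, not the figures from the mouse study.

```python
def broad_heritability(var_p1, var_p2, var_f1, var_f2):
    """Crude variance-partition estimate from a cross of two inbred lines.

    var_p1, var_p2: variances within the two (genetically uniform)
    parental strains; var_f1: variance within the (also uniform) F1;
    var_f2: variance within the segregating F2 generation.
    """
    # Environmental variance, estimated from the non-segregating
    # groups; one common convention weights the F1 double, as here.
    var_env = (var_p1 + var_p2 + 2 * var_f1) / 4
    var_gen = var_f2 - var_env    # genetic variance released in the F2
    return var_gen / var_f2       # broad-sense heritability

# Invented variances giving a nature:nurture split of about 7:3.
h2 = broad_heritability(var_p1=0.9, var_p2=1.1, var_f1=1.0, var_f2=3.3)
```

As the text stresses, the estimate depends on the scale of measurement; on an unsuitable scale, genotype-environment interaction inflates or deflates the apparent genetic share.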

The use of inbred lines may be restricted to first filial crosses if a number of such crosses are made from several different lines. This increases the precision of analysis in addition to allowing a proportionate decrease in the amount of laboratory work. One investigation examined the exploratory behavior of six different strains of rats in an open field of the kind used for the selection mentioned above. On a linear scale there were no untoward environmental effects, including specifically prenatal maternal ones. The heritability ratio was high, around nine to one; and while there was a significant average dominance component among the genes determining exploration, there was no preponderance of dominantly or recessively acting genes among increasers or decreasers. The relative standing in this respect of the parental strains could be established with some precision.

Limitations. While the methods described above have allowed the emergence of results that ultimately may assist our understanding of the mechanisms of behavioral inheritance, it cannot be said that much substantial progress has yet been made. Until experiments explore the effect of a range of different genotypes interacting with a range of environments of psychological interest and consequence, little more can be expected. Manipulating heredity in a single standard environment or manipulating the environment of a single standard genotype can only provide conclusions so limited to both the genotypes and conditions employed that they have little usefulness in a wider context. When better experiments are performed, as seems likely in the next few decades, then problems of some sociological importance and interest will arise in the application of these experiments to the tasks of maximizing genetic potential and perfecting environmental control for the purpose of so doing. A new eugenics may well develop, but grappling with the problems of its impact on contemporary society had best be left to future generations.

P. L. Broadhurst

[Directly related are the entries Eugenics; Evolution; Mental Disorders, article on Genetic Aspects. Other relevant material may be found in Individual Differences, article on Sex Differences; Instinct; Intelligence and Intelligence Testing; Mental Retardation; Psychology, article on Constitutional Psychology.]

Broadhurst, P. L. 1960 Experiments in Psychogenetics: Applications of Biometrical Genetics to the Inheritance of Behavior. Pages 1-102 in Hans J. Eysenck (editor), Experiments in Personality. Volume 1: Psychogenetics and Psychopharmacology. London: Routledge. Selection and crossbreeding methods applied to laboratory rats.

Cattell, Raymond B.; Stice, Glen F.; and Kristy, Norton F. 1957 A First Approximation to Nature-Nurture Ratios for Eleven Primary Personality Factors in Objective Tests. Journal of Abnormal and Social Psychology 54:143-159. Pioneer multivariate analysis combining twin study and familial correlations.

Fuller, John L.; and Thompson, W. Robert 1960 Behavior Genetics. New York: Wiley. A comprehensive review of the field.

Mather, Kenneth 1949 Biometrical Genetics: The Study of Continuous Variation. New York: Dover. The classic work on the analysis of quantitative characteristics.

Shields, James 1962 Monozygotic Twins Brought Up Apart and Brought Up Together: An Investigation Into the Genetic and Environmental Causes of Variation in Personality. Oxford Univ. Press.

The best available definition of population genetics is doubtless that of Malécot: it is "the totality of mathematical models that can be constructed to represent the evolution of the structure of a population classified according to the distribution of its Mendelian genes" (1955, p. 240). This definition, by a probabilist mathematician, gives a correct idea of the constructed and abstract side of this branch of genetics; it also makes intelligible the rapid development of population genetics since the advent of Mendelism.

In its formal aspect this branch of genetics might even seem to be a science that is almost played out. Indeed, it is not unthinkable that mathematicians have exhausted all the structural possibilities for building models, both within the context of general genetics and within that of the hypothesesmore or less complex and abstractthat enable us to characterize the state of a population.

Two major categories of models can be distinguished. Determinist models are those in which variations in population composition over time are rigorously determined by (a) a known initial state of the population and (b) a known number of forces or pressures operating, in the course of generations, in an unambiguously defined fashion (Malécot 1955, p. 240). These pressures involve mutation, selection, and preferential marriages (by consanguinity, for instance). Determinist models, based on ratios that have been exactly ascertained from preceding phenomena, can be expressed only in terms of populations that are infinite in the mathematical sense. In fact, it is only in this type of population that statistical regularities can emerge (Malécot 1955). In these models the composition of each generation is perfectly defined by the composition of the preceding generation.

Stochastic models, in contrast to determinist ones, involve only finite populations, in which the gametes that, beginning with the first generation, actually give birth to the new generation represent only a finite number among all possible gametes. The result is that among these active, or useful, gametes (Malécot 1959), male or female, the actual frequency of a gene will differ from the probability that each gamete had of carrying it at the outset.

The effect of chance will play a prime role, and the frequencies of the genes will be able to drift from one generation to the other. The effects of random drift and of genetic drift become, under these conditions, the focal points for research.
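A minimal simulation in the Wright-Fisher style shows what such a stochastic model means in practice: in a finite population the frequency of a gene wanders by sampling chance alone, with no mutation, selection, or migration. The population size, generation count, and seed below are arbitrary choices for illustration.

```python
import random

def drift(p0, pop_size, generations, seed=1):
    """Sketch of random genetic drift in a finite population.

    Each generation, 2N "useful" gametes are drawn at random from a
    pool in which the gene has frequency p; the realized frequency
    among the drawn gametes then replaces p for the next generation.
    """
    rng = random.Random(seed)
    p = p0
    history = [p]
    for _ in range(generations):
        carriers = sum(rng.random() < p for _ in range(2 * pop_size))
        p = carriers / (2 * pop_size)
        history.append(p)
    return history

# A small island population: frequencies can wander far from 0.5
# and may eventually fix at 0 or 1 by chance alone.
trajectory = drift(p0=0.5, pop_size=10, generations=50)
```

Rerunning with a large `pop_size` shows the deterministic limit: the trajectory barely moves, which is why determinist models require mathematically infinite populations.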

The body of research completed on these assumptions does indeed form a coherent whole, but these results, in spite of their brilliance, are marked by a very noticeable formalism. In reality, the models, although of great importance at the conceptual level, are often too far removed from the facts. In the study of man, particularly, the problems posed are often too complex for the solutions taken directly from the models to describe concrete reality.

Not all these models, however, are the result of purely abstract speculation; the construction of some of them has been facilitated by experimental data. To illustrate this definition of population genetics and the problems that it raises, this article will limit itself to explaining one determinist model, both because it is one of the oldest and simplest to understand and because it is one of those most often verified by observation.

A determinist model. Let us take the case of a particular human population: the inhabitants of an island cut off from outside contacts. It is obvious that great variability exists among the genes carried by the different inhabitants of this island. The genotypes differ materially from one another; in other words, there is a certain polymorphism in the population, a polymorphism that we can define in genetic terms with the help of a simple example.

Let us take the case of an autosomal (not connected with sex) gene a, transmitted in a monohybrid, diallelic system. In relation to it, individuals can be classified in three categories: homozygotes whose two alleles are a (a/a); heterozygotes, carriers of a and its allele a′ (a/a′); and homozygotes who are noncarriers of a (a′/a′). At any given moment, or during any given generation, these three categories of individuals exist within the population in certain proportions relative to each other.

Now, according to Mendel's law of segregation, the second generation bred from a cross between an individual homozygous for a (a/a) and an individual homozygous for a′ (a′/a′) will include individuals a/a, a/a′, and a′/a′ in the following proportions: one-fourth a/a, one-half a/a′, and one-fourth a′/a′. In this population the alleles a and a′ have the same frequency, one-half, and each sex produces half a and half a′ gametes. If these individuals are mated randomly, a simple algebraic calculation quickly demonstrates that individuals of the following generation will be quantitatively distributed in the same fashion: one-fourth a/a, one-half a/a′, and one-fourth a′/a′. It will be the same for succeeding generations.

It can therefore be stated that the genetic structure of such a population does not vary from one generation to the other. If we designate by p the initial frequency of the allele a and by q that of a′, we get p + q = 1, accounting for the totality of the genes at this locus in the population. Applying this system of symbols to the preceding facts, it can easily be shown that the proportions of individuals of the three categories in the first generation born from a/a and a′/a′ are p², 2pq, and q². In the second and third generations the frequencies of individuals will remain the same: p², 2pq, q².

Until this point, we have remained at the level of individuals. If we proceed to that of the gametes carrying a or a′, and to that of the genes a and a′ themselves, we observe that their frequencies, p and q, likewise remain constant from one generation to the next. This model, which can be regarded as a formalization of the Hardy-Weinberg law, has other properties, but our study of it will stop here. (For a discussion of the study of isolated populations, see Sutter & Tabah 1951.)
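The constancy just described can be checked in a few lines. Writing p for the frequency of a and q = 1 - p for the alternative allele (called a′ in the text), random mating reproduces the same genotype proportions generation after generation. This is a sketch of the Hardy-Weinberg computation itself, not of any particular population.

```python
def genotype_frequencies(p):
    """Genotype proportions under random mating, given allele
    frequency p for a (and q = 1 - p for the alternative allele)."""
    q = 1 - p
    return p * p, 2 * p * q, q * q

def allele_freq_from_genotypes(homozygote, heterozygote):
    """Frequency of a in the next generation: each a/a individual
    contributes two a alleles, each heterozygote contributes one."""
    return homozygote + heterozygote / 2

# Starting from the cross in the text: both alleles at frequency 1/2.
g1 = genotype_frequencies(0.5)                 # (0.25, 0.5, 0.25)
p_next = allele_freq_from_genotypes(g1[0], g1[1])
g2 = genotype_frequencies(p_next)              # unchanged: (0.25, 0.5, 0.25)
```

The equilibrium holds only under the panmixia assumptions listed below; relaxing any of them (differential fertility, migration, assortative mating, mutation) makes the frequencies move.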

Model construction and demographic reality. The Hardy-Weinberg law has been verified by numerous studies of both plant and animal species. The findings in the field of human blood groups have also long been studied from a viewpoint derived implicitly from this law, especially in connection with their geographic distribution. Under sexual reproduction, a generation renews itself through the encounter of the sexual cells (gametes) produced by individuals of both sexes belonging to the living generation. In the human species it can be said that this encounter takes place at random. One can imagine the advantage that formal population genetics can take of this circumstance, which can be compared to drawing marked balls by lot from two different urns. Model construction, already favored by these circumstances, is favored even further if the characteristics of the population utilized are artificially defined with the help of a certain number of hypotheses, of which the following is a summary description:

(1) Fertility is identical for all couples; there is no differential fertility.

(2) The population is closed; it cannot, therefore, be the locus of migrations (whether immigration or emigration).

(3) Marriages take place at random; there is no assortative mating.

(4) There are no systematic preferential marriages (for instance, because of consanguinity).

(5) Possible mutations are not taken into consideration.

(6) The size of the population is clearly defined.

On the basis of these working hypotheses, the whole of which constitutes panmixia, it was possible, not long after the rediscovery of Mendel's laws, to construct the first mathematical models. Thus population genetics took its first steps forward, one of which was undoubtedly the Hardy-Weinberg law.

Mere inspection of the preceding hypotheses will enable the reader to judge how, taken one by one, they conflict with reality. In fact, no human population can be panmictic in the way the models are.

The following evidence can be cited in favor of this conclusion:

(1) Fertility is never the same for all couples. In fact, differential fertility is the rule in human populations. There is always a far from negligible sterility rate, about 18 per cent among the large populations of Western civilization. On the other hand, the part played by large families in keeping up the numbers of these populations is extremely important; we can therefore generalize by emphasizing that, for one reason or another, individuals carrying a certain assortment of genes reproduce themselves more or less than the average couple. This means that in each population there is always a certain degree of selection. Hypothesis (1) above, essential to the construction of models, is therefore very far removed from reality.

(2) Closed populations are extremely rare. Even among the most primitive peoples there is always a minimum of emigration or immigration. The only cases where one could hope to see this condition fulfilled at the present time would be those of island populations that have remained extremely primitive.

(3) With assortative mating we touch on a point that is still obscure; but even if these phenomena remain poorly understood, it can nevertheless be said that they appear to be crucial in determining the genetic composition of populations. This choice can be positive: the carriers of a given characteristic marry among each other more often than chance would warrant. The fact was demonstrated in England by Pearson and Lee (1903): very tall individuals have a tendency to marry each other, and so do very short ones. Willoughby (1933) has reported on this question with respect to a great number of somatic characteristics other than height, for example, coloring of hair, eyes, and skin, intelligence quotient, and so forth. Inversely, negative choice makes individuals with the same characteristics avoid marrying one another. This mechanism is much less well known than the above. The example of persons of violent nature (Dahlberg 1943) and of red-headed individuals has been cited many times, although it has not been possible to establish valid statistics to support it.

(4) The case of preferential marriages is not at all negligible. There are still numerous areas where marriages between relatives (consanguineous marriages) occur much more frequently than they would as the result of simple random encounters. In addition, recent studies on the structures of kinship have shown that numerous populations that do not do so today used to practice preferential marriage, most often in a matrilinear sense. These social phenomena have wide repercussions on the genetic structure of populations and are capable of modifying it considerably from one generation to the other.

(5) Although we do not know exactly what the real rates of mutation are, it can be admitted that their frequency is not negligible. If one or several genes mutate at a given moment in one or several individuals, the nature of the gene or genes is in this way modified; its stability in the population undergoes a disturbance that can considerably transform the composition of that population.

(6) The size of the population and its limits have to be taken into account. We have seen that this is one of the essential characteristics important in differentiating two large categories of models.

The above examination brings us into contact with the realities of population: fertility, fecundity, nuptiality, mortality, migration, and size are the elements that are the concern of demography and are studied not only by this science but also very often as part of administrative routine. Leaving aside the influence of size, which by definition is of prime importance in the technique of the models, there remain five factors to be examined from the demographic point of view. Mutation can be ruled out of consideration, because, although its importance is great, it is felt only after the passage of a certain number of generations. It can therefore be admitted that it is not of immediate interest.

We can also set aside choice of a mate, because the importance of this factor in practice is still unknown. Accordingly, there remain three factors of prime importance: fertility, migration, and preferential marriage. Over the last decade the progressive disappearance of consanguineous marriage has been noted everywhere but in Asia. In many civilized countries marriage between cousins has practically disappeared. It can be stated, therefore, that this factor has in recent years become considerably less important.

Migrations remain very important on the genetic level, but, unfortunately, precise demographic data about them are rare, and most of the data are of doubtful validity. For instance, it is hard to judge how their influence on a population of Western culture could be estimated.

The only remaining factor, fertility (which to geneticists seems essential), has fortunately been studied in satisfactory fashion by demographers. To show the importance of differential fertility in human populations, let us recall a well-known calculation made by Karl Pearson in connection with Denmark. In 1830, 50 per cent of the children in that country were born of 25 per cent of the parents. If that fertility had been maintained at the same rate, 73 per cent of the second-generation Danes and 97 per cent of the third generation would have been descended from the first 25 per cent. Similarly, before World War I, Charles B. Davenport calculated, on the basis of differential fertility, that 1,000 Harvard graduates would have only 50 descendants after two centuries, while 1,000 Rumanian emigrants living in Boston would have become 100,000.

Human reproduction involves both fecundity (capacity for reproduction) and fertility (actual reproductive performance). These can be estimated for males, females, and married couples treated as a reproductive unit. Let us rapidly review the measurements that demography provides for geneticists in this domain.

Crude birth rate. The number of living births in a calendar year per thousand of the average population in the same year is known as the crude birth rate. The rate does not seem a very useful one for geneticists: there are too many different groups of childbearing age; marriage rates are too variable from one population to another; birth control is not uniformly diffused, and so forth.

General fertility rate. The ratio of the number of live births in one year to the average number of women capable of bearing children (usually defined as those women aged 15 to 49) is known as the general fertility rate. Its genetic usefulness is no greater than that of the preceding figure. Moreover, experience shows that this figure is not very different from the crude birth rate.
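As a minimal sketch of the two rates just defined; the population figures are invented for illustration.

```python
# Hedged sketch: the crude birth rate and the general fertility rate as
# defined in the text. All input figures are invented.

def crude_birth_rate(live_births, mean_population):
    """Live births in a calendar year per 1,000 of the average population."""
    return 1000.0 * live_births / mean_population

def general_fertility_rate(live_births, women_15_to_49):
    """Live births in a year per 1,000 women of childbearing age (15-49)."""
    return 1000.0 * live_births / women_15_to_49

print(crude_birth_rate(14_000, 1_000_000))                # 14.0
print(round(general_fertility_rate(14_000, 240_000), 1))  # 58.3
```

With these invented figures the two rates differ because women aged 15-49 are only a fraction of the whole population; on real national data, as the text notes, the contrast is usually less dramatic.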

Age-specific fertility rates. Fertility rates according to the age reached by the mother during the year under consideration are known as age-specific fertility rates. Demographic experience shows that great differences are observed here, depending on whether or not the populations are Malthusian; in other words, whether or not they practice birth control. In the case of a population where fertility is natural, knowledge of the mother's age is sufficient. In cases where the population is Malthusian, the figure becomes interesting when it is calculated both by age and by age group of the mothers at time of marriage, thus combining the mother's age at the birth of her child and her age at marriage. This is generally known as the age-specific marital fertility rate. If we are dealing with a Malthusian population, it is preferable, in choosing the sample to be studied, to take into consideration the age at marriage rather than the age at the child's birth. Thus, while the age at birth is sufficient for natural populations, these techniques cannot be applied indiscriminately to all populations.

Family histories. Fertility rates can also be calculated on the basis of family histories, which can be reconstructed from such sources as parish registries (Fleury & Henry 1965) or, in some countries, from systematic family registrations (for instance, the Japanese koseki or honseki). The method for computing the fertility rate for, say, the 25-29-year-old age group from this kind of data is first to determine the number of legitimate births in the group. It is then necessary to make a rigorous count of the number of years lived in wedlock between their 25th and 30th birthdays by all the women in the group; this quantity is known as the group's total woman-years. The number of births is then divided by the number of woman-years to obtain the group's fertility rate. This method is very useful in the study of historical problems in genetics, since it is often the only one that can be applied to the available data.
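The woman-years computation described above can be sketched as follows; the family records are invented for illustration.

```python
# Hedged sketch of the family-history method: the fertility rate of the
# 25-29 age group is the number of legitimate births to women at ages
# 25-29 divided by the total woman-years those women lived in wedlock
# between their 25th and 30th birthdays. The records are invented.

records = [
    # (woman-years in wedlock between 25th and 30th birthdays,
    #  legitimate births while aged 25-29)
    (5.0, 2),
    (3.5, 1),
    (5.0, 0),
    (1.2, 1),
]

woman_years = sum(years for years, _ in records)
births = sum(b for _, b in records)
fertility_rate_25_29 = births / woman_years
print(round(fertility_rate_25_29, 3))  # 4 births over 14.7 woman-years: 0.272
```

Counting exposure in woman-years rather than in women is what lets the method handle marriages that begin or end partway through the age interval, which is exactly the situation in parish-registry data.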

Let us leave fertility rates in order to examine rates of reproduction. Here we return to more purely genetic considerations, since we are looking for the mechanism whereby one generation is replaced by the one that follows it. Starting with a series of fertility rates by age group, a gross reproduction rate can be calculated that gives the average number of female progeny that would be born to an age cohort of women, all of whom live through their entire reproductive period and continue to give birth at the rates prevalent when they themselves were born. The gross reproduction rate obtaining for a population at any one time can be derived by combining the rates for the different age cohorts.

A gross reproduction rate for a real generation can also be determined by calculating the average number of live female children ever born to women of fifty or over. As explained above, this rate is higher for non-Malthusian than for Malthusian populations and can be refined by taking into consideration the length of marriage.
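A gross reproduction rate of the first kind, built from a schedule of age-specific rates as in the preceding paragraphs, can be sketched as follows. The rates and the share of female births are invented for illustration.

```python
# Hedged sketch: a gross reproduction rate computed from age-specific
# fertility rates, i.e. the average number of daughters a woman would
# bear if she survived her whole reproductive period at these rates.
# All figures are invented.

asfr = {  # births per woman per year, by five-year age group of the mother
    "15-19": 0.020, "20-24": 0.120, "25-29": 0.140,
    "30-34": 0.100, "35-39": 0.050, "40-44": 0.020, "45-49": 0.005,
}
PROPORTION_FEMALE_BIRTHS = 0.488  # assumed share of births that are girls

total_fertility_rate = 5 * sum(asfr.values())  # each group spans 5 years
gross_reproduction_rate = PROPORTION_FEMALE_BIRTHS * total_fertility_rate
print(round(gross_reproduction_rate, 2))  # 1.11
```

A value above 1 means each generation of women would more than replace itself if none died before age 50; the net reproduction rate, which allows for mortality, would be lower.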

We have seen that, in order to be correct, the description of fertility in Malthusian populations must be closely related to the date of marriage. Actually, when a family reaches the size that the parents prefer, fertility tends to approach zero. The preferred size is evidently related to length of marriage, in such a manner that fertility is more closely linked with length of marriage than with age at marriage. In recent years great progress has been made in the demographic analysis of fertility based on this kind of data. This should enable geneticists to be more circumspect in their choice of sections of the population to be studied.

Americans talk of cohort analysis; the French, of analysis by promotion (a term meaning year or class, as we might speak of the class of 1955). A cohort, or promotion, includes all women born within a 12-month period; to estimate fertility or mortality, it is supposed that these women are all born at the same moment, on the first of January of that year. Thus, women born between January 1, 1900, and January 1, 1901, are considered to be exactly 15 years old on January 1, 1915; exactly 47 years old on January 1, 1947; and so forth.

The research done along these lines has resulted in the construction of tables that are extremely useful in estimating fertility in a human population. As we have seen, it is more useful to draw up cohorts based on age at marriage than on age at birth. A fertility table set up in this way gives for each cohort the cumulative birth rate, by order of birth and single age of mother, for every woman surviving at each age, from 15 to 49. One can imagine the progress population genetics could make in knowing real gene frequencies if it could concentrate its research on any particular cohort and its descendants.

This rapid examination of the facts that demography can now provide in connection with fertility clearly reveals the variables that population genetics can use to make its models coincide with reality. The models retain their validity for genetics because they are still derived from basic genetic concepts; their application to actual problems, however, should be based on the kind of data mentioned above. We have voluntarily limited ourselves to the problem of fertility, since it is the most important factor in genetics research.

The close relationship between demography and population genetics that now appears can be illustrated by the field of research into blood groups. Although researchers concede that blood groups are independent of both age and sex, they do not explore the full consequences of this, since their measures are applied to samples of the population that are representative only in a demographic sense. We must deplore the fact that this method has spread to the other branches of genetics, since it is open to criticism not only from the demographic but also from the genetic point of view. By proceeding in this way, a most important factor is overlooked: that of gene frequencies.

Let us admit that the choice of a blood group to be studied is of little importance when the characteristic is widely distributed throughout the population; for instance, if each individual is the carrier of a gene taken into account in the system being studied (e.g., a system made up of groups A, B, and O). But this is no longer the case if the gene is carried only by a few individuals, in other words, if its frequency attains 0.1 per cent or less. In this case (and cases like this are common in human genetics) the structure of the sample examined begins to take on prime importance.

A brief example must serve to illustrate this cardinal point. We have seen that in the case of rare recessive genes the importance of consanguineous marriages is considerable. The scarcer that carriers of recessive genes become in the population as a whole, the greater the proportion of such carriers produced by consanguineous marriages. Thus, if as many as 25 per cent of all individuals in a population are carriers of recessive genes, and if one per cent of all marriages in that population are marriages between first cousins, then this one per cent of consanguineous unions will produce 1.12 times as many carriers of recessive genes as will be produced by all the unions of persons not so related. But if recessive genes are carried by only one per cent of the total population, then the same proportion of marriages between first cousins will produce 2.13 times as many carriers as will be produced by all other marriages. This production ratio increases to 4.9 if the total frequency of carriers is .01 per cent, to 20.2 if it is .005 per cent, and to 226 if it is .0001 per cent. Under these conditions, one can see the importance of the sampling method used to estimate the frequency within a population not only of the individuals who are carriers but of the gametes and genes themselves.
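The standard calculation behind this effect can be sketched as follows. This uses the textbook inbreeding coefficient F = 1/16 for children of first cousins, under which the chance of a recessive homozygote is q² + Fpq instead of q²; it is not a reproduction of the article's exact figures, whose underlying assumptions are not stated.

```python
# Hedged sketch: relative risk of a recessive homozygote among children
# of first cousins versus children of unrelated parents. This is the
# standard population-genetics calculation, not a reproduction of the
# production ratios quoted in the text.

F = 1.0 / 16.0  # inbreeding coefficient for offspring of first cousins

def relative_risk(q):
    """(q^2 + F*p*q) / q^2 = 1 + F*p/q, for recessive gene frequency q."""
    p = 1.0 - q
    return (q * q + F * p * q) / (q * q)

for q in (0.1, 0.01, 0.001):
    print(q, round(relative_risk(q), 2))
```

The ratio grows roughly as 1/q, which is the quantitative content of the text's observation: the rarer the gene, the larger the share of affected individuals contributed by consanguineous unions.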

Genealogical method. It should be emphasized that genetic studies based on genealogies remain the least controversial. Studying a population where the degrees of relationship connecting individuals are known presents an obvious interest. Knowing one or several characteristics of certain parents, we can follow what becomes of these in the descendants. Their evolution can also be considered from the point of view of such properties of genes as dominance, recessiveness, expressivity, and penetrance. But above all, we can follow the evolution of these characteristics in the population over time and thus observe the effects of differential fertility. Until now the genealogical method was applicable only to a numerically sparse population, but progress in electronic methods of data processing permits us to anticipate its application to much larger populations (Sutter & Tabah 1956).

Dynamic studies. In very large modern populations it would appear that internal analysis of cohorts and their descendants will bring in the future a large measure of certainty to research in population genetics. In any case, it is a sure way to a dynamic genetics based on demographic reality. For instance, it has been recommended that blood groups should be studied according to age groups; but if we proceed to do so without regard for demographic factors, we cannot make our observations dynamic. Thus, a study that limits itself to, let us say, the fifty- to sixty-year-old age group will have to deal with a universe that includes certain genetically dead elements, such as unmarried and sterile persons, which have no meaning from the dynamic point of view. But if a study is made of this same fifty- to sixty-year-old age group and then of the twenty- to thirty-year-old age group, and if in the older group only those individuals are considered who have descendants in the younger group, the dynamic potential of the data is maximized. It is quite possible to subject demographic cohorts to this sort of interpretation, because in many countries demographic statistics supply series of individuals classified according to the mothers age at their birth.

This discussion would not be complete if we did not stress another aspect of the genetic importance of certain demographic factors, revealed by modern techniques, which have truly created a demographic biology. Particularly worthy of note are the mothers age, order of birth, spacing between births, and size of family.

The mother's age has a great influence on fecundity. A certain number of couples become incapable of having a second child after the birth of the first; a third child after the second; a fourth after the third; and so forth. This sterility increases with the length of a marriage, and especially after the age of 35. It is very important to realize this when, for instance, natural selection and its effects are being studied.

The mother's age also strongly influences the frequency of twin births (monozygotic or dizygotic), spontaneous abortions, stillborn or abnormal births, and so on. Many examples can also be given of the influence of the order of birth, the interval between births, and the size of the family on such things as fertility, mortality, morbidity, and malformations.

It has been demonstrated above how seriously demographic factors must be taken into consideration when we wish to study the genetic structure of populations. We will leave aside the possible environmental influences, such as social class and marital status, since they have previously been codified by Osborn (1956-1957) and Larsson (1956-1957), among others. At the practical level, however, the continuing efforts to utilize vital statistics for genetic purposes should be pointed out. In this connection, the research of H. B. Newcombe and his colleagues (1965), who are attempting to organize Canadian national statistics for use in genetics, cannot be too highly praised. The United Nations itself posed the problem on the world level at a seminar organized in Geneva in 1960. The question of the relation between demography and genetics is therefore being posed in an acute form.

These problems also impinge in an important way on more general philosophical issues, as has been demonstrated by Haldane (1932), Fisher (1930), and Wright (1951). It must be recognized, however, that their form of Neo-Darwinism, although it is based on Mendelian genetics, too often neglects demographic considerations. In the future these seminal developments should be renewed in full confrontation with demographic reality.

Jean Sutter

[Directly related are the entries Cohort Analysis; Fertility; Fertility Control. Other relevant material may be found in Nuptiality; Race; Social Behavior, Animal, article on The Regulation of Animal Populations.]

Barclay, George W. 1958 Techniques of Population Analysis. New York: Wiley.

Dahlberg, Gunnar (1943) 1948 Mathematical Methods for Population Genetics. New York and London: Interscience. First published in German.

Dunn, Leslie C. (editor) 1951 Genetics in the Twentieth Century: Essays on the Progress of Genetics During Its First Fifty Years. New York: Macmillan.

Fisher, R. A. (1930) 1958 The Genetical Theory of Natural Selection. 2d ed., rev. New York: Dover.

See original here:
genetics facts, information, pictures | Encyclopedia.com ...



Greenpeace USA

November 2nd, 2016 10:47 am

The world is watching: tell President Obama to stop the Dakota Access Pipeline!


30M: number of supporters worldwide

$0: amount of money we've accepted from corporations

55: number of countries in which we operate

These Are Our Prayers in Action: A Look at Life in the #NoDAPL Resistance Camps

For months, the Standing Rock Sioux and allies have been protecting their water by resisting construction of the Dakota Access Pipeline, which would carry 500,000 barrels of oil a day from North Dakota to Illinois. Peter Dakota Molof spent a week supporting water protectors at resistance camps set up along Lake Oahe; this is what he saw.

As I turn off the two-lane highway that courses through the Standing Rock Indian Reservation into Oceti Sakowin Camp (technically an overflow camp from the original Camp of the Sacred Stones that formed in April of this year), I am bursting with feelings. I've been on the road for three days in Greenpeace's Rolling Sunlight to provide solar power to #NoDAPL resistance efforts. Without strong cell reception, it's been hard to know what to expect when I arrive, so I've spent long days anxiously trying to imagine what it will be like at camp. But I don't think there's any way to prepare for a place like this. There isn't any way to prepare to witness history in the making. From the road, the valley flat provides an incredible view of the expanse of Oceti Sakowin, the surrounding camps, and the mass of protectors who have come from Nations far and wide to defend water from the Dakota Access Pipeline.

After a brief chat with some helpful camp security, we begin pulling our 13-ton truck down the avenue of flags representing the Indigenous nations who have lent their support. I will spend the next week working with the hundreds of people who have pledged to peacefully and prayerfully stop the Dakota Access Pipeline. Each day, there are non-violent direct action or peace-keeper trainings designed to ground us all in the principles of camp and our purpose here. The conversations are rich, delving into what the role of a protector is versus a protester, and how to hold each other accountable to the principles we've agreed to.

https://twitter.com/RuthHHopkins/status/779511223154937856?lang=en

I am struck by how unique this moment is: to be training with members of so many nations, with so many relatives from so many different places, and with so many people who have never before taken action on their principles in this way. These are our prayers in action. Among us are also leaders from other historic moments of Indigenous resistance, like Wounded Knee II and Alcatraz. We listen humbly to our Elders as they remind us that we are responsible for one another's actions as much as we are responsible for our own. The days are long and the weather is turning cold. There is talk of what will happen when winter really hits, and protectors who have been here since last April recount how relentless the snow was last year. But no one is talking about leaving.


We believe in the public's right to know about what's happening to our planet. Our investigations expose environmental crimes and the people, companies, and governments that need to be held responsible.

Each one of us can make small changes in our lives, but together we can change the world. Greenpeace connects people from all over the globe. We bring together diverse perspectives, and help communities and individuals to come together.

We have the courage to take action and stand up for our beliefs. We work together to stop the destruction of the environment using peaceful direct action and creative communication. We dont just identify problems, we create solutions.

Environmental issues often impact Indigenous people first and hardest; in the end they will affect us all.

For months, the Standing Rock Sioux and allies have been protecting their water by resisting construction of the Dakota Access Pipeline, which would carry 500,000 barrels of oil a day from North Dakota to Illinois. Peter Dakota Molof spent a week supporting water protectors at resistance camps set up along Lake Oahe this is what he saw.


Integrative Medicine – ynhh.org

November 1st, 2016 5:49 am

Integrative medicine reaffirms the importance of the relationship between practitioner and patient, focuses on the whole person, is informed by evidence, and makes use of all appropriate therapeutic and lifestyle approaches, and healthcare disciplines, to achieve optimal health and healing.

Smilow Cancer Hospital's approach to integrative medicine provides evidence-based guidance about complementary therapies commonly used by cancer patients and survivors. We work to optimize mainstream care, to address the serious physical and emotional symptoms often experienced by patients before, during, and after therapy, and to avoid adverse interactions with conventional care. Our team has expertise in the practice and scientific evaluation of complementary medicine and can guide patients to make effective decisions about the most helpful integrative therapies throughout their treatment program and beyond. We collaborate closely with your oncology team to provide safe and effective care.

The program is located in the Integrative Medicine/Rehabilitation Services area on the first floor of Smilow Cancer Hospital, room 1402. For inpatients, many of the services can be provided on your floor or in your room. All services are offered free of charge to patients undergoing cancer treatment at Smilow Cancer Hospital.

Office Hours: Monday - Friday, 8 am - 4 pm; closed all major holidays

Integrative Medicine clinical consultations provide guidance for patients in the safe use of dietary supplements/natural products, acupuncture, massage, meditation, and other complementary therapies. Dr. Ali has extensive experience in the integrative management of chronic disease for patients, as well as teaching patients to optimize their health from a holistic perspective. He is trained in naturopathic medicine, integrative medicine, epidemiology, and patient-oriented research.

Art Expression offers a variety of creative outlets that provide a unique therapeutic experience. A broad spectrum of engaging classes and workshops, taught by visiting artists, provide patients the opportunity to learn various art techniques and to participate in collaborative installations and projects.

Plant-based oils are used to promote relaxation, relieve stress and anxiety, and help control insomnia, nausea, and pain. Essential oils can be incorporated into other complementary approaches.

Experienced and licensed therapists are trained in oncology massage, focused on improving side effects from cancer and its treatment. Research has shown that massage therapy may reduce pain, promote relaxation, and boost mood in cancer patients.

Reiki is a complementary health approach in which practitioners place their hands lightly on or just above a person, with the goal of facilitating the person's own healing response. Patients often report relaxation and stress-reduction effects.

Yoga is a mind and body practice with origins in ancient Indian philosophy, combining breathing techniques, physical postures, meditation, and relaxation. Patients can receive individual bedside yoga therapy, adapted to patient needs and limitations.

Patients, caregivers, staff and volunteers are invited to join voices and experience the benefits of singing together.

Group classes incorporate breathing techniques, physical postures, meditation, and relaxation, adapted to patient needs and limitations.

Qi gong is a centuries-old mind and body practice that involves certain postures and gentle movements with mental focus, breathing, and relaxation. The movements can be adapted or practiced while walking, standing, or sitting. Practicing Qi gong may reduce pain, reduce anxiety, and improve general quality of life.

Walking a labyrinth is an experience that allows contemplation as well as a place to retreat, regroup and renew in support of each individual journey. Labyrinths provide a quiet walking meditation and take 5-10 minutes to complete.


Patients are invited to work with an experienced mentor on a written piece of their choice. Individuals can contribute to an annual anthology of written works.

A gentler form of Zumba, designed for all populations and all fitness levels, it blends easy-to-follow dance rhythms with music. Chair-based options are available.

203-200-6129


Immune system – Wikipedia

November 1st, 2016 5:48 am

The immune system is a host defense system comprising many biological structures and processes within an organism that protects against disease. To function properly, an immune system must detect a wide variety of agents, known as pathogens, from viruses to parasitic worms, and distinguish them from the organism's own healthy tissue. In many species, the immune system can be classified into subsystems, such as the innate immune system versus the adaptive immune system, or humoral immunity versus cell-mediated immunity. In humans, the blood–brain barrier, blood–cerebrospinal fluid barrier, and similar fluid–brain barriers separate the peripheral immune system from the neuroimmune system which protects the brain.

Pathogens can rapidly evolve and adapt, and thereby avoid detection and neutralization by the immune system; however, multiple defense mechanisms have also evolved to recognize and neutralize pathogens. Even simple unicellular organisms such as bacteria possess a rudimentary immune system, in the form of enzymes that protect against bacteriophage infections. Other basic immune mechanisms evolved in ancient eukaryotes and remain in their modern descendants, such as plants and invertebrates. These mechanisms include phagocytosis, antimicrobial peptides called defensins, and the complement system. Jawed vertebrates, including humans, have even more sophisticated defense mechanisms,[1] including the ability to adapt over time to recognize specific pathogens more efficiently. Adaptive (or acquired) immunity creates immunological memory after an initial response to a specific pathogen, leading to an enhanced response to subsequent encounters with that same pathogen. This process of acquired immunity is the basis of vaccination.

Disorders of the immune system can result in autoimmune diseases, inflammatory diseases and cancer.[2] Immunodeficiency occurs when the immune system is less active than normal, resulting in recurring and life-threatening infections. In humans, immunodeficiency can either be the result of a genetic disease such as severe combined immunodeficiency, acquired conditions such as HIV/AIDS, or the use of immunosuppressive medication. In contrast, autoimmunity results from a hyperactive immune system attacking normal tissues as if they were foreign organisms. Common autoimmune diseases include Hashimoto's thyroiditis, rheumatoid arthritis, diabetes mellitus type 1, and systemic lupus erythematosus. Immunology covers the study of all aspects of the immune system.

Immunology is a science that examines the structure and function of the immune system. It originates from medicine and early studies on the causes of immunity to disease. The earliest known reference to immunity was during the plague of Athens in 430 BC. Thucydides noted that people who had recovered from a previous bout of the disease could nurse the sick without contracting the illness a second time.[3] In the 18th century, Pierre-Louis Moreau de Maupertuis made experiments with scorpion venom and observed that certain dogs and mice were immune to this venom.[4] This and other observations of acquired immunity were later exploited by Louis Pasteur in his development of vaccination and his proposed germ theory of disease.[5] Pasteur's theory was in direct opposition to contemporary theories of disease, such as the miasma theory. It was not until Robert Koch's 1891 proofs, for which he was awarded a Nobel Prize in 1905, that microorganisms were confirmed as the cause of infectious disease.[6] Viruses were confirmed as human pathogens in 1901, with the discovery of the yellow fever virus by Walter Reed.[7]

Immunology made a great advance towards the end of the 19th century, through rapid developments, in the study of humoral immunity and cellular immunity.[8] Particularly important was the work of Paul Ehrlich, who proposed the side-chain theory to explain the specificity of the antigen-antibody reaction; his contributions to the understanding of humoral immunity were recognized by the award of a Nobel Prize in 1908, which was jointly awarded to the founder of cellular immunology, Elie Metchnikoff.[9]

The immune system protects organisms from infection with layered defenses of increasing specificity. In simple terms, physical barriers prevent pathogens such as bacteria and viruses from entering the organism. If a pathogen breaches these barriers, the innate immune system provides an immediate, but non-specific response. Innate immune systems are found in all plants and animals.[10] If pathogens successfully evade the innate response, vertebrates possess a second layer of protection, the adaptive immune system, which is activated by the innate response. Here, the immune system adapts its response during an infection to improve its recognition of the pathogen. This improved response is then retained after the pathogen has been eliminated, in the form of an immunological memory, and allows the adaptive immune system to mount faster and stronger attacks each time this pathogen is encountered.[11][12]

Both innate and adaptive immunity depend on the ability of the immune system to distinguish between self and non-self molecules. In immunology, self molecules are those components of an organism's body that can be distinguished from foreign substances by the immune system.[13] Conversely, non-self molecules are those recognized as foreign molecules. One class of non-self molecules are called antigens (short for antibody generators) and are defined as substances that bind to specific immune receptors and elicit an immune response.[14]

Microorganisms or toxins that successfully enter an organism encounter the cells and mechanisms of the innate immune system. The innate response is usually triggered when microbes are identified by pattern recognition receptors, which recognize components that are conserved among broad groups of microorganisms,[15] or when damaged, injured or stressed cells send out alarm signals, many of which (but not all) are recognized by the same receptors as those that recognize pathogens.[16] Innate immune defenses are non-specific, meaning these systems respond to pathogens in a generic way.[14] This system does not confer long-lasting immunity against a pathogen. The innate immune system is the dominant system of host defense in most organisms.[10]

Several barriers protect organisms from infection, including mechanical, chemical, and biological barriers. The waxy cuticle of many leaves, the exoskeleton of insects, the shells and membranes of externally deposited eggs, and skin are examples of mechanical barriers that are the first line of defense against infection.[14] However, as organisms cannot be completely sealed from their environments, other systems act to protect body openings such as the lungs, intestines, and the genitourinary tract. In the lungs, coughing and sneezing mechanically eject pathogens and other irritants from the respiratory tract. The flushing action of tears and urine also mechanically expels pathogens, while mucus secreted by the respiratory and gastrointestinal tract serves to trap and entangle microorganisms.[17]

Chemical barriers also protect against infection. The skin and respiratory tract secrete antimicrobial peptides such as the β-defensins.[18] Enzymes such as lysozyme and phospholipase A2 in saliva, tears, and breast milk are also antibacterials.[19][20] Vaginal secretions serve as a chemical barrier following menarche, when they become slightly acidic, while semen contains defensins and zinc to kill pathogens.[21][22] In the stomach, gastric acid and proteases serve as powerful chemical defenses against ingested pathogens.

Within the genitourinary and gastrointestinal tracts, commensal flora serve as biological barriers by competing with pathogenic bacteria for food and space and, in some cases, by changing the conditions in their environment, such as pH or available iron.[23] This reduces the probability that pathogens will reach sufficient numbers to cause illness. However, since most antibiotics non-specifically target bacteria and do not affect fungi, oral antibiotics can lead to an "overgrowth" of fungi and cause conditions such as a vaginal candidiasis (a yeast infection).[24] There is good evidence that re-introduction of probiotic flora, such as pure cultures of the lactobacilli normally found in unpasteurized yogurt, helps restore a healthy balance of microbial populations in intestinal infections in children, and there are encouraging preliminary data from studies on bacterial gastroenteritis, inflammatory bowel diseases, urinary tract infection and post-surgical infections.[25][26][27]

Inflammation is one of the first responses of the immune system to infection.[28] The symptoms of inflammation are redness, swelling, heat, and pain, which are caused by increased blood flow into tissue. Inflammation is produced by eicosanoids and cytokines, which are released by injured or infected cells. Eicosanoids include prostaglandins that produce fever and the dilation of blood vessels associated with inflammation, and leukotrienes that attract certain white blood cells (leukocytes).[29][30] Common cytokines include interleukins that are responsible for communication between white blood cells; chemokines that promote chemotaxis; and interferons that have anti-viral effects, such as shutting down protein synthesis in the host cell.[31] Growth factors and cytotoxic factors may also be released. These cytokines and other chemicals recruit immune cells to the site of infection and promote healing of any damaged tissue following the removal of pathogens.[32]

The complement system is a biochemical cascade that attacks the surfaces of foreign cells. It contains over 20 different proteins and is named for its ability to "complement" the killing of pathogens by antibodies. Complement is the major humoral component of the innate immune response.[33][34] Many species have complement systems, including non-mammals like plants, fish, and some invertebrates.[35]

In humans, this response is activated by complement binding to antibodies that have attached to these microbes or the binding of complement proteins to carbohydrates on the surfaces of microbes. This recognition signal triggers a rapid killing response.[36] The speed of the response is a result of signal amplification that occurs following sequential proteolytic activation of complement molecules, which are also proteases. After complement proteins initially bind to the microbe, they activate their protease activity, which in turn activates other complement proteases, and so on. This produces a catalytic cascade that amplifies the initial signal by controlled positive feedback.[37] The cascade results in the production of peptides that attract immune cells, increase vascular permeability, and opsonize (coat) the surface of a pathogen, marking it for destruction. This deposition of complement can also kill cells directly by disrupting their plasma membrane.[33]
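The sequential proteolytic activation described here is, in essence, geometric amplification: each activated protease activates several more. As an illustrative sketch only (the per-step gain `k` and the number of steps are made-up parameters, not measured values), a few lines of Python show how a single binding event can yield thousands of active molecules within a few steps:

```python
def cascade_output(k: int, steps: int) -> int:
    """Active molecules after `steps` rounds of a cascade in which each
    activated protease activates k downstream proteases (geometric growth)."""
    active = 1  # a single initial complement-binding event
    for _ in range(steps):
        active *= k
    return active

# With a modest per-step gain, amplification is rapid:
print(cascade_output(k=10, steps=4))  # 10000
```

This is why the text describes the response speed as a result of "controlled positive feedback": the output grows multiplicatively with each round of activation.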

Leukocytes (white blood cells) act like independent, single-celled organisms and are the second arm of the innate immune system.[14] The innate leukocytes include the phagocytes (macrophages, neutrophils, and dendritic cells), innate lymphoid cells, mast cells, eosinophils, basophils, and natural killer cells. These cells identify and eliminate pathogens, either by attacking larger pathogens through contact or by engulfing and then killing microorganisms.[35] Innate cells are also important mediators in lymphoid organ development and the activation of the adaptive immune system.[38]

Phagocytosis is an important feature of cellular innate immunity performed by cells called 'phagocytes' that engulf, or eat, pathogens or particles. Phagocytes generally patrol the body searching for pathogens, but can be called to specific locations by cytokines.[14] Once a pathogen has been engulfed by a phagocyte, it becomes trapped in an intracellular vesicle called a phagosome, which subsequently fuses with another vesicle called a lysosome to form a phagolysosome. The pathogen is killed by the activity of digestive enzymes or following a respiratory burst that releases free radicals into the phagolysosome.[39][40] Phagocytosis evolved as a means of acquiring nutrients, but this role was extended in phagocytes to include engulfment of pathogens as a defense mechanism.[41] Phagocytosis probably represents the oldest form of host defense, as phagocytes have been identified in both vertebrate and invertebrate animals.[42]

Neutrophils and macrophages are phagocytes that travel throughout the body in pursuit of invading pathogens.[43] Neutrophils are normally found in the bloodstream and are the most abundant type of phagocyte, normally representing 50% to 60% of the total circulating leukocytes.[44] During the acute phase of inflammation, particularly as a result of bacterial infection, neutrophils migrate toward the site of inflammation in a process called chemotaxis, and are usually the first cells to arrive at the scene of infection. Macrophages are versatile cells that reside within tissues and produce a wide array of chemicals including enzymes, complement proteins, and cytokines; they can also act as scavengers that rid the body of worn-out cells and other debris, and as antigen-presenting cells that activate the adaptive immune system.[45]

Dendritic cells (DC) are phagocytes in tissues that are in contact with the external environment; therefore, they are located mainly in the skin, nose, lungs, stomach, and intestines.[46] They are named for their resemblance to neuronal dendrites, as both have many spine-like projections, but dendritic cells are in no way connected to the nervous system. Dendritic cells serve as a link between the bodily tissues and the innate and adaptive immune systems, as they present antigens to T cells, one of the key cell types of the adaptive immune system.[46]

Mast cells reside in connective tissues and mucous membranes, and regulate the inflammatory response.[47] They are most often associated with allergy and anaphylaxis.[44] Basophils and eosinophils are related to neutrophils. They secrete chemical mediators that are involved in defending against parasites and play a role in allergic reactions, such as asthma.[48] Natural killer (NK) cells are leukocytes that attack and destroy tumor cells, or cells that have been infected by viruses.[49]

Natural killer cells, or NK cells, are a component of the innate immune system which does not directly attack invading microbes. Rather, NK cells destroy compromised host cells, such as tumor cells or virus-infected cells, recognizing such cells by a condition known as "missing self." This term describes cells with low levels of a cell-surface marker called MHC I (major histocompatibility complex), a situation that can arise in viral infections of host cells.[35] They were named "natural killer" because of the initial notion that they do not require activation in order to kill cells that are "missing self." For many years it was unclear how NK cells recognize tumor cells and infected cells. It is now known that the MHC makeup on the surface of those cells is altered and the NK cells become activated through recognition of "missing self". Normal body cells are not recognized and attacked by NK cells because they express intact self MHC antigens. Those MHC antigens are recognized by killer cell immunoglobulin receptors (KIR) which essentially put the brakes on NK cells.[50]

The adaptive immune system evolved in early vertebrates and allows for a stronger immune response as well as immunological memory, where each pathogen is "remembered" by a signature antigen.[51] The adaptive immune response is antigen-specific and requires the recognition of specific "non-self" antigens during a process called antigen presentation. Antigen specificity allows for the generation of responses that are tailored to specific pathogens or pathogen-infected cells. The ability to mount these tailored responses is maintained in the body by "memory cells". Should a pathogen infect the body more than once, these specific memory cells are used to quickly eliminate it.

The cells of the adaptive immune system are special types of leukocytes, called lymphocytes. B cells and T cells are the major types of lymphocytes and are derived from hematopoietic stem cells in the bone marrow.[35] B cells are involved in the humoral immune response, whereas T cells are involved in cell-mediated immune response.

Both B cells and T cells carry receptor molecules that recognize specific targets. T cells recognize a "non-self" target, such as a pathogen, only after antigens (small fragments of the pathogen) have been processed and presented in combination with a "self" receptor called a major histocompatibility complex (MHC) molecule. There are two major subtypes of T cells: the killer T cell and the helper T cell. In addition there are regulatory T cells which have a role in modulating immune response. Killer T cells only recognize antigens coupled to Class I MHC molecules, while helper T cells and regulatory T cells only recognize antigens coupled to Class II MHC molecules. These two mechanisms of antigen presentation reflect the different roles of the two types of T cell. A third, minor subtype are the γδ T cells that recognize intact antigens that are not bound to MHC receptors.[52]

In contrast, the B cell antigen-specific receptor is an antibody molecule on the B cell surface, and recognizes whole pathogens without any need for antigen processing. Each lineage of B cell expresses a different antibody, so the complete set of B cell antigen receptors represent all the antibodies that the body can manufacture.[35]

Killer T cells are a sub-group of T cells that kill cells that are infected with viruses (and other pathogens), or are otherwise damaged or dysfunctional.[53] As with B cells, each type of T cell recognizes a different antigen. Killer T cells are activated when their T cell receptor (TCR) binds to this specific antigen in a complex with the MHC Class I receptor of another cell. Recognition of this MHC:antigen complex is aided by a co-receptor on the T cell, called CD8. The T cell then travels throughout the body in search of cells where the MHC I receptors bear this antigen. When an activated T cell contacts such cells, it releases cytotoxins, such as perforin, which form pores in the target cell's plasma membrane, allowing ions, water and toxins to enter. The entry of another toxin called granulysin (a protease) induces the target cell to undergo apoptosis.[54] T cell killing of host cells is particularly important in preventing the replication of viruses. T cell activation is tightly controlled and generally requires a very strong MHC/antigen activation signal, or additional activation signals provided by "helper" T cells (see below).[54]

Helper T cells regulate both the innate and adaptive immune responses and help determine which immune responses the body makes to a particular pathogen.[55][56] These cells have no cytotoxic activity and do not kill infected cells or clear pathogens directly. They instead control the immune response by directing other cells to perform these tasks.

Helper T cells express T cell receptors (TCR) that recognize antigen bound to Class II MHC molecules. The MHC:antigen complex is also recognized by the helper cell's CD4 co-receptor, which recruits molecules inside the T cell (e.g., Lck) that are responsible for the T cell's activation. Helper T cells have a weaker association with the MHC:antigen complex than observed for killer T cells, meaning many receptors (around 200–300) on the helper T cell must be bound by an MHC:antigen in order to activate the helper cell, while killer T cells can be activated by engagement of a single MHC:antigen molecule. Helper T cell activation also requires longer duration of engagement with an antigen-presenting cell.[57] The activation of a resting helper T cell causes it to release cytokines that influence the activity of many cell types. Cytokine signals produced by helper T cells enhance the microbicidal function of macrophages and the activity of killer T cells.[14] In addition, helper T cell activation causes an upregulation of molecules expressed on the T cell's surface, such as CD40 ligand (also called CD154), which provide extra stimulatory signals typically required to activate antibody-producing B cells.[58]
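The contrast in activation thresholds can be caricatured as a simple receptor-count check. Treating activation as a hard threshold is a deliberate oversimplification for illustration; the roughly 200-300 figure for helper T cells comes from the text above, and everything else in this sketch is assumed:

```python
KILLER_T_THRESHOLD = 1    # a single MHC:antigen engagement can suffice
HELPER_T_THRESHOLD = 200  # low end of the ~200-300 range cited above

def is_activated(engaged_receptors: int, threshold: int) -> bool:
    """Toy rule: a T cell activates once enough receptors are engaged."""
    return engaged_receptors >= threshold

# The same weak stimulus activates a killer T cell but not a helper T cell:
print(is_activated(5, KILLER_T_THRESHOLD))  # True
print(is_activated(5, HELPER_T_THRESHOLD))  # False
```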

Gamma delta T cells (γδ T cells) possess an alternative T cell receptor (TCR) as opposed to CD4+ and CD8+ (αβ) T cells and share the characteristics of helper T cells, cytotoxic T cells and NK cells. The conditions that produce responses from γδ T cells are not fully understood. Like other 'unconventional' T cell subsets bearing invariant TCRs, such as CD1d-restricted Natural Killer T cells, γδ T cells straddle the border between innate and adaptive immunity.[59] On one hand, γδ T cells are a component of adaptive immunity as they rearrange TCR genes to produce receptor diversity and can also develop a memory phenotype. On the other hand, the various subsets are also part of the innate immune system, as restricted TCR or NK receptors may be used as pattern recognition receptors. For example, large numbers of human Vγ9/Vδ2 T cells respond within hours to common molecules produced by microbes, and highly restricted Vδ1+ T cells in epithelia respond to stressed epithelial cells.[52]

A B cell identifies pathogens when antibodies on its surface bind to a specific foreign antigen.[61] This antigen/antibody complex is taken up by the B cell and processed by proteolysis into peptides. The B cell then displays these antigenic peptides on its surface MHC class II molecules. This combination of MHC and antigen attracts a matching helper T cell, which releases lymphokines and activates the B cell.[62] As the activated B cell then begins to divide, its offspring (plasma cells) secrete millions of copies of the antibody that recognizes this antigen. These antibodies circulate in blood plasma and lymph, bind to pathogens expressing the antigen and mark them for destruction by complement activation or for uptake and destruction by phagocytes. Antibodies can also neutralize challenges directly, by binding to bacterial toxins or by interfering with the receptors that viruses and bacteria use to infect cells.[63]

Evolution of the adaptive immune system occurred in an ancestor of the jawed vertebrates. Many of the classical molecules of the adaptive immune system (e.g., immunoglobulins and T cell receptors) exist only in jawed vertebrates. However, a distinct lymphocyte-derived molecule has been discovered in primitive jawless vertebrates, such as the lamprey and hagfish. These animals possess a large array of molecules called Variable lymphocyte receptors (VLRs) that, like the antigen receptors of jawed vertebrates, are produced from only a small number (one or two) of genes. These molecules are believed to bind pathogenic antigens in a similar way to antibodies, and with the same degree of specificity.[64]

When B cells and T cells are activated and begin to replicate, some of their offspring become long-lived memory cells. Throughout the lifetime of an animal, these memory cells remember each specific pathogen encountered and can mount a strong response if the pathogen is detected again. This is "adaptive" because it occurs during the lifetime of an individual as an adaptation to infection with that pathogen and prepares the immune system for future challenges. Immunological memory can be in the form of either passive short-term memory or active long-term memory.

Newborn infants have no prior exposure to microbes and are particularly vulnerable to infection. Several layers of passive protection are provided by the mother. During pregnancy, a particular type of antibody, called IgG, is transported from mother to baby directly across the placenta, so human babies have high levels of antibodies even at birth, with the same range of antigen specificities as their mother.[65] Breast milk or colostrum also contains antibodies that are transferred to the gut of the infant and protect against bacterial infections until the newborn can synthesize its own antibodies.[66] This is passive immunity because the fetus does not actually make any memory cells or antibodies; it only borrows them. This passive immunity is usually short-term, lasting from a few days up to several months. In medicine, protective passive immunity can also be transferred artificially from one individual to another via antibody-rich serum.[67]
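The short-term character of passive immunity can be illustrated with a toy first-order decay model. The roughly 21-day half-life of IgG used below is a commonly cited figure but is an assumption of this sketch, not a number from the article:

```python
# Toy model: exponential decay of maternal IgG in an infant.
# HALF_LIFE_DAYS (~21 days for IgG) is an assumed parameter for illustration.
HALF_LIFE_DAYS = 21.0

def igg_remaining(initial_fraction: float, days: float) -> float:
    """Fraction of the birth IgG titer remaining after `days`,
    assuming simple first-order exponential decay."""
    return initial_fraction * 0.5 ** (days / HALF_LIFE_DAYS)

# Roughly half is gone every three weeks; after ~3 months little remains:
print(round(igg_remaining(1.0, 90), 3))  # 0.051
```

Under these assumptions the borrowed protection fades within months, which is consistent with the "few days up to several months" window described above.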

Long-term active memory is acquired following infection by activation of B and T cells. Active immunity can also be generated artificially, through vaccination. The principle behind vaccination (also called immunization) is to introduce an antigen from a pathogen in order to stimulate the immune system and develop specific immunity against that particular pathogen without causing disease associated with that organism.[14] This deliberate induction of an immune response is successful because it exploits the natural specificity of the immune system, as well as its inducibility. With infectious disease remaining one of the leading causes of death in the human population, vaccination represents the most effective manipulation of the immune system mankind has developed.[35][68]

Most viral vaccines are based on live attenuated viruses, while many bacterial vaccines are based on acellular components of micro-organisms, including harmless toxin components.[14] Since many antigens derived from acellular vaccines do not strongly induce the adaptive response, most bacterial vaccines are provided with additional adjuvants that activate the antigen-presenting cells of the innate immune system and maximize immunogenicity.[69]

The immune system is a remarkably effective structure that incorporates specificity, inducibility and adaptation. Failures of host defense do occur, however, and fall into three broad categories: immunodeficiencies, autoimmunity, and hypersensitivities.

Immunodeficiencies occur when one or more of the components of the immune system are inactive. The ability of the immune system to respond to pathogens is diminished in both the young and the elderly, with immune responses beginning to decline at around 50 years of age due to immunosenescence.[70][71] In developed countries, obesity, alcoholism, and drug use are common causes of poor immune function.[71] However, malnutrition is the most common cause of immunodeficiency in developing countries.[71] Diets lacking sufficient protein are associated with impaired cell-mediated immunity, complement activity, phagocyte function, IgA antibody concentrations, and cytokine production. Additionally, the loss of the thymus at an early age through genetic mutation or surgical removal results in severe immunodeficiency and a high susceptibility to infection.[72]

Immunodeficiencies can also be inherited or 'acquired'.[14] Chronic granulomatous disease, where phagocytes have a reduced ability to destroy pathogens, is an example of an inherited, or congenital, immunodeficiency. AIDS and some types of cancer cause acquired immunodeficiency.[73][74]

Overactive immune responses comprise the other end of immune dysfunction, particularly the autoimmune disorders. Here, the immune system fails to properly distinguish between self and non-self, and attacks part of the body. Under normal circumstances, many T cells and antibodies react with "self" peptides.[75] One of the functions of specialized cells (located in the thymus and bone marrow) is to present young lymphocytes with self antigens produced throughout the body and to eliminate those cells that recognize self-antigens, preventing autoimmunity.[61]

Hypersensitivity is an immune response that damages the body's own tissues. Hypersensitivity reactions are divided into four classes (Types I to IV) based on the mechanisms involved and the time course of the hypersensitive reaction. Type I hypersensitivity is an immediate or anaphylactic reaction, often associated with allergy. Symptoms can range from mild discomfort to death. Type I hypersensitivity is mediated by IgE, which triggers degranulation of mast cells and basophils when cross-linked by antigen.[76] Type II hypersensitivity occurs when antibodies bind to antigens on the patient's own cells, marking them for destruction. This is also called antibody-dependent (or cytotoxic) hypersensitivity, and is mediated by IgG and IgM antibodies.[76] Immune complexes (aggregations of antigens, complement proteins, and IgG and IgM antibodies) deposited in various tissues trigger Type III hypersensitivity reactions.[76] Type IV hypersensitivity (also known as cell-mediated or delayed-type hypersensitivity) usually takes between two and three days to develop. Type IV reactions are involved in many autoimmune and infectious diseases, but may also involve contact dermatitis (poison ivy). These reactions are mediated by T cells, monocytes, and macrophages.[76]

It is likely that a multicomponent, adaptive immune system arose with the first vertebrates, as invertebrates do not generate lymphocytes or an antibody-based humoral response.[1] Many species, however, utilize mechanisms that appear to be precursors of these aspects of vertebrate immunity. Immune systems appear even in the structurally most simple forms of life, with bacteria using a unique defense mechanism, called the restriction modification system, to protect themselves from viral pathogens, called bacteriophages.[77] Prokaryotes also possess acquired immunity, through a system that uses CRISPR sequences to retain fragments of the genomes of phage that they have come into contact with in the past, which allows them to block virus replication through a form of RNA interference.[78][79] Offensive elements of the immune systems are also present in unicellular eukaryotes, but studies of their roles in defense are few.[80]

Pattern recognition receptors are proteins used by nearly all organisms to identify molecules associated with pathogens. Antimicrobial peptides called defensins are an evolutionarily conserved component of the innate immune response found in all animals and plants, and represent the main form of invertebrate systemic immunity.[1] The complement system and phagocytic cells are also used by most forms of invertebrate life. Ribonucleases and the RNA interference pathway are conserved across all eukaryotes, and are thought to play a role in the immune response to viruses.[81]

Unlike animals, plants lack phagocytic cells, but many plant immune responses involve systemic chemical signals that are sent through a plant.[82] Individual plant cells respond to molecules associated with pathogens known as pathogen-associated molecular patterns (PAMPs).[83] When a part of a plant becomes infected, the plant produces a localized hypersensitive response, whereby cells at the site of infection undergo rapid apoptosis to prevent the spread of the disease to other parts of the plant. Systemic acquired resistance (SAR) is a type of defensive response used by plants that renders the entire plant resistant to a particular infectious agent.[82] RNA silencing mechanisms are particularly important in this systemic response as they can block virus replication.[84]

Another important role of the immune system is to identify and eliminate tumors. This is called immune surveillance. The transformed cells of tumors express antigens that are not found on normal cells. To the immune system, these antigens appear foreign, and their presence causes immune cells to attack the transformed tumor cells. The antigens expressed by tumors have several sources;[86] some are derived from oncogenic viruses like human papillomavirus, which causes cervical cancer,[87] while others are the organism's own proteins that occur at low levels in normal cells but reach high levels in tumor cells. One example is an enzyme called tyrosinase that, when expressed at high levels, transforms certain skin cells (e.g. melanocytes) into tumors called melanomas.[88][89] A third possible source of tumor antigens are proteins normally important for regulating cell growth and survival, that commonly mutate into cancer inducing molecules called oncogenes.[86][90][91]

The main response of the immune system to tumors is to destroy the abnormal cells using killer T cells, sometimes with the assistance of helper T cells.[89][92] Tumor antigens are presented on MHC class I molecules in a similar way to viral antigens. This allows killer T cells to recognize the tumor cell as abnormal.[93] NK cells also kill tumorous cells in a similar way, especially if the tumor cells have fewer MHC class I molecules on their surface than normal; this is a common phenomenon with tumors.[94] Sometimes antibodies are generated against tumor cells allowing for their destruction by the complement system.[90]

Clearly, some tumors evade the immune system and go on to become cancers.[95] Tumor cells often have a reduced number of MHC class I molecules on their surface, thus avoiding detection by killer T cells.[93] Some tumor cells also release products that inhibit the immune response; for example by secreting the cytokine TGF-β, which suppresses the activity of macrophages and lymphocytes.[96] In addition, immunological tolerance may develop against tumor antigens, so the immune system no longer attacks the tumor cells.[95]

Paradoxically, macrophages can promote tumor growth [97] when tumor cells send out cytokines that attract macrophages, which then generate cytokines and growth factors that nurture tumor development. In addition, a combination of hypoxia in the tumor and a cytokine produced by macrophages induces tumor cells to decrease production of a protein that blocks metastasis and thereby assists spread of cancer cells.

Hormones can act as immunomodulators, altering the sensitivity of the immune system. For example, female sex hormones are known immunostimulators of both adaptive[98] and innate immune responses.[99] Some autoimmune diseases such as lupus erythematosus strike women preferentially, and their onset often coincides with puberty. By contrast, male sex hormones such as testosterone seem to be immunosuppressive.[100] Other hormones appear to regulate the immune system as well, most notably prolactin, growth hormone and vitamin D.[101][102]

When a T cell encounters a foreign pathogen, it extends a vitamin D receptor. This is essentially a signaling device that allows the T cell to bind to the active form of vitamin D, the steroid hormone calcitriol. The T cell also expresses the gene CYP27B1, which encodes the enzyme that converts the pre-hormone form of vitamin D, calcidiol, into the steroid hormone form, calcitriol. Only after binding to calcitriol can T cells perform their intended function. Other immune system cells known to express CYP27B1, and thus to activate the vitamin D precursor calcidiol, are dendritic cells, keratinocytes and macrophages.[103][104]

It is conjectured that a progressive decline in hormone levels with age is partially responsible for weakened immune responses in aging individuals.[105] Conversely, some hormones are regulated by the immune system, notably thyroid hormone activity.[106] The age-related decline in immune function is also related to decreasing vitamin D levels in the elderly. As people age, two things happen that negatively affect their vitamin D levels. First, they stay indoors more due to decreased activity levels. This means that they get less sun and therefore produce less cholecalciferol via UVB radiation. Second, as a person ages the skin becomes less adept at producing vitamin D.[107]

The immune system is affected by sleep and rest,[108] and sleep deprivation is detrimental to immune function.[109] Complex feedback loops involving cytokines, such as interleukin-1 and tumor necrosis factor-α produced in response to infection, appear to also play a role in the regulation of non-rapid eye movement (NREM) sleep.[110] Thus the immune response to infection may result in changes to the sleep cycle, including an increase in slow-wave sleep relative to REM sleep.[111]

When suffering from sleep deprivation, active immunizations may have a diminished effect and may result in lower antibody production, and a lower immune response, than would be noted in a well-rested individual. Additionally, proteins such as NFIL3, which have been shown to be closely intertwined with both T-cell differentiation and our circadian rhythms, can be affected through the disturbance of natural light and dark cycles through instances of sleep deprivation, shift work, etc. As a result, these disruptions can lead to an increase in chronic conditions such as heart disease, chronic pain, and asthma.[112]

In addition to the negative consequences of sleep deprivation, sleep and the intertwined circadian system have been shown to have strong regulatory effects on immunological functions affecting both innate and adaptive immunity. First, during the early slow-wave-sleep stage, a sudden drop in blood levels of cortisol, epinephrine, and norepinephrine induces increased blood levels of the hormones leptin, pituitary growth hormone, and prolactin. These signals induce a pro-inflammatory state through the production of the pro-inflammatory cytokines interleukin-1, interleukin-12, TNF-alpha and IFN-gamma. These cytokines then stimulate immune functions such as immune cell activation, proliferation, and differentiation. It is during this time that undifferentiated, or less differentiated, cells like naïve and central memory T cells peak (i.e. during a time of a slowly evolving adaptive immune response). In addition to these effects, the milieu of hormones produced at this time (leptin, pituitary growth hormone, and prolactin) supports the interactions between APCs and T cells, a shift of the Th1/Th2 cytokine balance towards one that favors Th1, an increase in overall Th cell proliferation, and naïve T cell migration to lymph nodes. This milieu is also thought to support the formation of long-lasting immune memory through the initiation of Th1 immune responses.[113]

In contrast, during wake periods differentiated effector cells, such as cytotoxic natural killer cells and CTLs (cytotoxic T lymphocytes), peak in order to elicit an effective response against any intruding pathogens. Anti-inflammatory molecules, such as cortisol and catecholamines, also peak during active wake times. There are two theories as to why the pro-inflammatory state is reserved for sleep time. First, inflammation would cause serious cognitive and physical impairments if it were to occur during wake times. Second, inflammation may occur during sleep times due to the presence of melatonin. Inflammation causes a great deal of oxidative stress, and the presence of melatonin during sleep times could actively counteract free radical production during this time.[113][114]

Overnutrition is associated with diseases such as diabetes and obesity, which are known to affect immune function. More moderate malnutrition, as well as certain specific trace mineral and nutrient deficiencies, can also compromise the immune response.[115]

Foods rich in certain fatty acids may foster a healthy immune system.[116] Likewise, fetal undernourishment can cause a lifelong impairment of the immune system.[117]

The immune response can be manipulated to suppress unwanted responses resulting from autoimmunity, allergy, and transplant rejection, and to stimulate protective responses against pathogens that largely elude the immune system (see immunization) or cancer.

Immunosuppressive drugs are used to control autoimmune disorders or inflammation when excessive tissue damage occurs, and to prevent transplant rejection after an organ transplant.[35][118]

Anti-inflammatory drugs are often used to control the effects of inflammation. Glucocorticoids are the most powerful of these drugs; however, they can have many undesirable side effects, such as central obesity, hyperglycemia, and osteoporosis, and their use must be tightly controlled.[119] Lower doses of anti-inflammatory drugs are often used in conjunction with cytotoxic or immunosuppressive drugs such as methotrexate or azathioprine. Cytotoxic drugs inhibit the immune response by killing dividing cells such as activated T cells. However, the killing is indiscriminate, and other constantly dividing cells and their organs are affected, which causes toxic side effects.[118] Immunosuppressive drugs such as cyclosporin prevent T cells from responding to signals correctly by inhibiting signal transduction pathways.[120]

Cancer immunotherapy covers the medical approaches used to stimulate the immune system to attack tumors.

Immunology is strongly experimental in everyday practice but is also characterized by an ongoing theoretical attitude. Many theories have been suggested in immunology from the end of the nineteenth century up to the present time. The end of the 19th century and the beginning of the 20th century saw a battle between "cellular" and "humoral" theories of immunity. According to the cellular theory of immunity, represented in particular by Elie Metchnikoff, it was cells (more precisely, phagocytes) that were responsible for immune responses. In contrast, the humoral theory of immunity, held, among others, by Robert Koch and Emil von Behring, stated that the active immune agents were soluble components (molecules) found in the organism's humors rather than its cells.[121][122][123]

In the mid-1950s, Frank Burnet, inspired by a suggestion made by Niels Jerne,[124] formulated the clonal selection theory (CST) of immunity.[125] On the basis of CST, Burnet developed a theory of how an immune response is triggered according to the self/nonself distinction: "self" constituents (constituents of the body) do not trigger destructive immune responses, while "nonself" entities (pathogens, an allograft) trigger a destructive immune response.[126] The theory was later modified to reflect new discoveries regarding histocompatibility or the complex "two-signal" activation of T cells.[127] The self/nonself theory of immunity and the self/nonself vocabulary have been criticized,[123][128][129] but remain very influential.[130][131]

More recently, several theoretical frameworks have been suggested in immunology, including "autopoietic" views,[132] "cognitive immune" views,[133] the "danger model" (or "danger theory"),[128] and the "discontinuity" theory.[134][135] The danger model, suggested by Polly Matzinger and colleagues, has been very influential, arousing many comments and discussions.[136][137][138][139]

Larger drugs (>500 Da) can provoke a neutralizing immune response, particularly if the drugs are administered repeatedly, or in larger doses. This limits the effectiveness of drugs based on larger peptides and proteins (which are typically larger than 6000 Da). In some cases, the drug itself is not immunogenic, but may be co-administered with an immunogenic compound, as is sometimes the case for Taxol. Computational methods have been developed to predict the immunogenicity of peptides and proteins, which are particularly useful in designing therapeutic antibodies, assessing likely virulence of mutations in viral coat particles, and validation of proposed peptide-based drug treatments. Early techniques relied mainly on the observation that hydrophilic amino acids are overrepresented in epitope regions relative to hydrophobic amino acids;[140] however, more recent developments rely on machine learning techniques using databases of existing known epitopes, usually on well-studied virus proteins, as a training set.[141] A publicly accessible database has been established for the cataloguing of epitopes from pathogens known to be recognizable by B cells.[142] The emerging field of bioinformatics-based studies of immunogenicity is referred to as immunoinformatics.[143] Immunoproteomics is the study of large sets of proteins (proteomics) involved in the immune response.
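The early hydrophilicity-based techniques mentioned above can be sketched as a sliding-window score over a protein sequence. The following is an illustrative toy, not any published prediction tool: the window size, threshold, and example sequence are assumptions, though the per-residue values follow the commonly cited Hopp-Woods hydrophilicity scale.

```python
# Toy B-cell epitope candidate finder: score each window of a protein
# sequence by mean Hopp-Woods hydrophilicity, and flag windows whose
# score exceeds a (hypothetical) threshold. Real tools use trained
# models over known-epitope databases; this only shows the idea.

HOPP_WOODS = {
    "A": -0.5, "R": 3.0, "N": 0.2, "D": 3.0, "C": -1.0,
    "Q": 0.2, "E": 3.0, "G": 0.0, "H": -0.5, "I": -1.8,
    "L": -1.8, "K": 3.0, "M": -1.3, "F": -2.5, "P": 0.0,
    "S": 0.3, "T": -0.4, "W": -3.4, "Y": -2.3, "V": -1.5,
}

def hydrophilicity_profile(sequence: str, window: int = 6) -> list:
    """Mean hydrophilicity of each length-`window` stretch of the sequence."""
    scores = [HOPP_WOODS[aa] for aa in sequence]
    return [
        sum(scores[i:i + window]) / window
        for i in range(len(scores) - window + 1)
    ]

def candidate_epitopes(sequence: str, window: int = 6, threshold: float = 1.0):
    """(start index, subsequence, score) for windows above the threshold."""
    return [
        (i, sequence[i:i + window], score)
        for i, score in enumerate(hydrophilicity_profile(sequence, window))
        if score > threshold
    ]

# Charged, hydrophilic stretches (D/E/K/R-rich) score high and are flagged;
# hydrophobic stretches (V/L/F/W-rich) score low and are skipped.
print(candidate_epitopes("AILKDEDRKSGGVLFW"))
```

In practice the machine-learning methods cited above replace the fixed scale and threshold with models trained on curated epitope databases, but the input representation (windows over the sequence) is similar.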

The success of any pathogen depends on its ability to elude host immune responses. Therefore, pathogens evolved several methods that allow them to successfully infect a host, while evading detection or destruction by the immune system.[144] Bacteria often overcome physical barriers by secreting enzymes that digest the barrier, for example, by using a type II secretion system.[145] Alternatively, using a type III secretion system, they may insert a hollow tube into the host cell, providing a direct route for proteins to move from the pathogen to the host. These proteins are often used to shut down host defenses.[146]

An evasion strategy used by several pathogens to avoid the innate immune system is to hide within the cells of their host (also called intracellular pathogenesis). Here, a pathogen spends most of its life-cycle inside host cells, where it is shielded from direct contact with immune cells, antibodies and complement. Some examples of intracellular pathogens include viruses, the food poisoning bacterium Salmonella and the eukaryotic parasites that cause malaria (Plasmodium falciparum) and leishmaniasis (Leishmania spp.). Other bacteria, such as Mycobacterium tuberculosis, live inside a protective capsule that prevents lysis by complement.[147] Many pathogens secrete compounds that diminish or misdirect the host's immune response.[144] Some bacteria form biofilms to protect themselves from the cells and proteins of the immune system. Such biofilms are present in many successful infections, e.g., the chronic Pseudomonas aeruginosa and Burkholderia cenocepacia infections characteristic of cystic fibrosis.[148] Other bacteria generate surface proteins that bind to antibodies, rendering them ineffective; examples include Streptococcus (protein G), Staphylococcus aureus (protein A), and Peptostreptococcus magnus (protein L).[149]

The mechanisms used to evade the adaptive immune system are more complicated. The simplest approach is to rapidly change non-essential epitopes (amino acids and/or sugars) on the surface of the pathogen, while keeping essential epitopes concealed. This is called antigenic variation. An example is HIV, which mutates rapidly, so the proteins on its viral envelope that are essential for entry into its host target cell are constantly changing. These frequent changes in antigens may explain the failures of vaccines directed at this virus.[150] The parasite Trypanosoma brucei uses a similar strategy, constantly switching one type of surface protein for another, allowing it to stay one step ahead of the antibody response.[151] Masking antigens with host molecules is another common strategy for avoiding detection by the immune system. In HIV, the envelope that covers the virion is formed from the outermost membrane of the host cell; such "self-cloaked" viruses make it difficult for the immune system to identify them as "non-self" structures.[152]

Follow this link:
Immune system - Wikipedia

Personalized Regenerative Medicine – Stem Cell Therapy San …

October 31st, 2016 7:44 am

DISCLAIMERS

ABOUT DR. STEENBOCKS TREATMENTS

Dr. Steenbock's clinic uses a combination of treatments in a unique approach which he calls Personalized Regenerative Therapy. The clinic usually combines multiple therapeutic modalities, including stem cells (via bone marrow transplant), various medical devices (such as hyperbaric chambers and electrical pulse generators), IV medications (such as chelation), and proprietary nutritional supplements.

Most or all of these therapeutic modalities are given off-label, which means that while the drugs and devices are approved by the U.S. Food and Drug Administration, these therapies have not been approved for your specific disease. Federal law permits physicians to make off-label use of drugs and devices based on their medical judgment.

Dr. Steenbock's use of these cutting-edge treatments and their combinations is based on his many years treating patients. His treatment approach would not be considered standard or conventional medicine, and is likely not offered any place else in the United States. Many of the diseases which Dr. Steenbock treats, such as ALS and cerebral palsy, are considered incurable by conventional means.

Dr. Steenbock and his clinic do not and cannot promise to cure or even improve any patient's disease or medical condition. Despite his best efforts, patients may not respond to his treatment approach, and in some cases a patient may even get worse. Testimonials and patient stories contained in the website are examples only and do not mean that a person with the same medical condition will achieve the same, similar, or any beneficial result from the treatment.

ABOUT THE PURPOSE OF THIS WEB SITE

Medical information or statements made within this site are not intended for use in or as a substitute for the diagnosis or treatment of any health or physical condition or as a substitute for a physician-patient relationship which has been established by an in-person evaluation of a patient. This website does not provide specific medical advice and does not endorse any medical or professional service or services obtained through information provided on this site or any links to or from this site. The information and advice published or made available through this website is not intended to replace the services of a physician or a health care professional acting under a physician's supervision, nor does it constitute a doctor-patient relationship. Each individual's treatment and/or results may vary based upon the circumstances, the patient's specific situation, as well as the health care provider's medical judgment, and only after further discussion of the patient's specific situation, goals, risks and benefits and other relevant medical discussion. Testimonials or statements made by any person(s) within this site are not intended to substitute for this discussion or evaluation or as a guarantee as to outcomes. Whether to accept any treatment should be assessed by the patient as to the risks and benefits of such procedures and only after consultation with a health care professional. Some links within this website may lead to other websites, including those maintained and operated by third parties. These links are included solely as a convenience to you, and the presence of such a link does not imply an endorsement or approval of the content of the linked site. Use of this site constitutes acknowledgement and acceptance of these limitations and disclaimers.

See more here:
Personalized Regenerative Medicine - Stem Cell Therapy San ...

Visual impairment – Wikipedia

October 31st, 2016 7:42 am

Visual impairment, also known as vision impairment or vision loss, is a decreased ability to see to a degree that causes problems not fixable by usual means, such as glasses.[1][2] Some also include those who have a decreased ability to see because they do not have access to glasses or contact lenses.[1] Visual impairment is often defined as a best corrected visual acuity of worse than either 20/40 or 20/60.[3] The term blindness is used for complete or nearly complete vision loss.[3] Visual impairment may cause people difficulties with normal daily activities such as driving, reading, socializing, and walking.[2]

The most common causes of visual impairment globally are uncorrected refractive errors (43%), cataracts (33%), and glaucoma (2%).[4] Refractive errors include near-sightedness, far-sightedness, presbyopia, and astigmatism.[4] Cataracts are the most common cause of blindness.[4] Other disorders that may cause visual problems include age-related macular degeneration, diabetic retinopathy, corneal clouding, childhood blindness, and a number of infections.[5] Visual impairment can also be caused by problems in the brain due to stroke, prematurity, or trauma, among others.[6] These cases are known as cortical visual impairment.[6] Screening for vision problems in children may improve future vision and educational achievement.[7] Screening adults without symptoms is of uncertain benefit.[8] Diagnosis is by an eye exam.[2]

The World Health Organization (WHO) estimates that 80% of visual impairment is either preventable or curable with treatment.[4] This includes cataracts, the infections river blindness and trachoma, glaucoma, diabetic retinopathy, uncorrected refractive errors, and some cases of childhood blindness.[9] Many people with significant visual impairment benefit from vision rehabilitation, changes in their environment, and assistive devices.[2]

As of 2012, 285 million people were visually impaired, of whom 246 million had low vision and 39 million were blind.[4] The majority of people with poor vision are in the developing world and are over the age of 50 years.[4] Rates of visual impairment have decreased since the 1990s.[4] Visual impairments have considerable economic costs, both directly due to the cost of treatment and indirectly due to decreased ability to work.[10]

The definition of visual impairment is reduced vision not corrected by glasses or contact lenses. The World Health Organization uses the following classifications of visual impairment. When the vision in the better eye with best possible glasses correction is:

Blindness is defined by the World Health Organization as vision in a person's best eye with best correction of less than 20/500 or a visual field of less than 10 degrees.[3] This definition was set in 1972, and there is ongoing discussion as to whether it should be altered to officially include uncorrected refractive errors.[1]

Severely sight impaired

Sight impaired

Low vision

In the UK, the Certificate of Vision Impairment (CVI) is used to certify patients as severely sight impaired or sight impaired.[12] The accompanying guidance for clinical staff states: "The National Assistance Act 1948 states that a person can be certified as severely sight impaired if they are 'so blind as to be unable to perform any work for which eye sight is essential'. The test is whether a person cannot do any work for which eyesight is essential, not just his or her normal job or one particular job."[13]

In practice, the definition depends on individuals' visual acuity and the extent to which their field of vision is restricted. The Department of Health identifies three groups of people who may be classified as severely visually impaired.[13]

The Department of Health also states that a person is more likely to be classified as severely visually impaired if their eyesight has failed recently or if they are an older individual, both groups being perceived as less able to adapt to their vision loss.[13]

In the United States, any person with vision that cannot be corrected to better than 20/200 in the best eye, or who has 20 degrees (diameter) or less of visual field remaining, is considered legally blind or eligible for disability classification and possible inclusion in certain government sponsored programs.

In the United States, the terms partially sighted, low vision, legally blind and totally blind are used by schools, colleges, and other educational institutions to describe students with visual impairments.[14] They are defined as follows:

In 1934, the American Medical Association adopted the following definition of blindness:

Central visual acuity of 20/200 or less in the better eye with corrective glasses or central visual acuity of more than 20/200 if there is a visual field defect in which the peripheral field is contracted to such an extent that the widest diameter of the visual field subtends an angular distance no greater than 20 degrees in the better eye.[15]

The United States Congress included this definition as part of the Aid to the Blind program in the Social Security Act passed in 1935.[15][16] In 1972, the Aid to the Blind program and two others combined under Title XVI of the Social Security Act to form the Supplemental Security Income program[17] which states:

An individual shall be considered to be blind for purposes of this title if he has central visual acuity of 20/200 or less in the better eye with the use of a correcting lens. An eye which is accompanied by a limitation in the fields of vision such that the widest diameter of the visual field subtends an angle no greater than 20 degrees shall be considered for purposes of the first sentence of this subsection as having a central visual acuity of 20/200 or less. An individual shall also be considered to be blind for purposes of this title if he is blind as defined under a State plan approved under title X or XVI as in effect for October 1972 and received aid under such plan (on the basis of blindness) for December 1973, so long as he is continuously blind as so defined.[18]
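The statutory test quoted above reduces to two numeric criteria: best-corrected acuity of 20/200 or worse in the better eye, or a visual field whose widest diameter subtends 20 degrees or less (which the statute treats as equivalent to 20/200 acuity). A minimal sketch, with function and parameter names that are illustrative rather than drawn from any statute or library:

```python
# Hypothetical encoding of the Supplemental Security Income blindness
# criteria quoted above. Acuity is given as the denominator of a 20/x
# Snellen fraction for the better eye with best correction; the field
# is the widest remaining diameter of the visual field in degrees.

def meets_us_legal_blindness(acuity_denominator: int,
                             field_diameter_degrees: float) -> bool:
    low_acuity = acuity_denominator >= 200          # 20/200 or worse
    narrow_field = field_diameter_degrees <= 20.0   # counted as 20/200 or worse
    return low_acuity or narrow_field

print(meets_us_legal_blindness(200, 90))   # 20/200 acuity, full field -> True
print(meets_us_legal_blindness(40, 15))    # good acuity, tunnel vision -> True
print(meets_us_legal_blindness(40, 90))    # 20/40 acuity, full field -> False
```

Note that the real determination also involves the state-plan grandfather clause in the statute's final sentence, which is not a function of these two measurements and is omitted here.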

Kuwait is one of many nations that share the 6/60 criteria for legal blindness.[19]

Visual impairments may take many forms and be of varying degrees. Visual acuity alone is not always a good predictor of the degree of problems a person may have. Someone with relatively good acuity (e.g., 20/40) can have difficulty with daily functioning, while someone with worse acuity (e.g., 20/200) may function reasonably well if their visual demands are not great.

The American Medical Association has estimated that the loss of one eye equals 25% impairment of the visual system and 24% impairment of the whole person;[20][21] total loss of vision in both eyes is considered to be 100% visual impairment and 85% impairment of the whole person.[20]

Some people who fall into this category can use their considerable residual vision (their remaining sight) to complete daily tasks without relying on alternative methods. The role of a low vision specialist (optometrist or ophthalmologist) is to maximize the functional level of a patient's vision by optical or non-optical means. Primarily, this is by use of magnification in the form of telescopic systems for distance vision and optical or electronic magnification for near tasks.

People with significantly reduced acuity may benefit from training conducted by individuals trained in the provision of technical aids. Low vision rehabilitation professionals, some of whom are connected to an agency for the blind, can provide advice on lighting and contrast to maximize remaining vision. These professionals also have access to non-visual aids, and can instruct patients in their uses.

In one study, the subjects who made the most use of rehabilitation instruments, lived alone, and preserved their own mobility and occupation were the least depressed, with the lowest risk of suicide and the highest level of social integration.

Those with worsening sight and the prognosis of eventual blindness are at comparatively high risk of suicide and thus may be in need of supportive services. These observations advocate the establishment and extension of therapeutic and preventative programs to include patients with impending and current severe visual impairment who do not qualify for services for the blind. Ophthalmologists should be made aware of these potential consequences and incorporate a place for mental health professionals in their treatment of these types of patients, with a view to preventing the onset of depressive symptomatology, avoiding self-destructive behavior, and improving the quality of life of these patients. Such intervention should occur in the early stages of diagnosis, particularly as many studies have demonstrated how rapid acceptance of the serious visual handicap has led to a better, more productive compliance with rehabilitation programs. Moreover, psychological distress has been reported (and is exemplified by a psychological autopsy study) to be at its highest when sight loss is not complete but the prognosis is unfavorable. Therefore, early intervention is imperative for enabling successful psychological adjustment.[22]

Blindness can occur in combination with such conditions as intellectual disability, autism spectrum disorders, cerebral palsy, hearing impairments, and epilepsy.[23][24] Blindness in combination with hearing loss is known as deafblindness.

It has been estimated that over half of totally blind people have non-24-hour sleep-wake disorder, a condition in which a person's circadian rhythm, normally slightly longer than 24 hours, is not entrained (reset) to the light/dark cycle.[25][26]
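The consequence of a non-entrained rhythm can be sketched numerically: each day, sleep onset drifts later by the amount the free-running period exceeds 24 hours, eventually wrapping all the way around the clock. This is an illustrative calculation only, not a clinical model; the 24.5-hour period and the function name are assumptions.

```python
def sleep_onset_times(period_hours: float, start_hour: float, days: int):
    """Clock time (hours, mod 24) of sleep onset on each successive day,
    for a free-running circadian period that nothing resets."""
    drift_per_day = period_hours - 24.0
    return [(start_hour + d * drift_per_day) % 24 for d in range(days)]

onsets = sleep_onset_times(period_hours=24.5, start_hour=23.0, days=4)
print(onsets)  # each onset 0.5 h later: [23.0, 23.5, 0.0, 0.5]
```

With a 24.5-hour period the rhythm cycles around the clock in 48 days, which is why affected individuals experience recurring stretches of daytime sleepiness and nighttime wakefulness.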

The most common causes of visual impairment globally in 2010 were:

The most common causes of blindness in 2010 were:

About 90% of people who are visually impaired live in the developing world.[4] Age-related macular degeneration, glaucoma, and diabetic retinopathy are the leading causes of blindness in the developed world.[27]

Of these, cataract is responsible for >65%, or more than 22 million cases of blindness, and glaucoma is responsible for 6 million cases.

Cataract is a clouding or opacity of the crystalline lens. Congenital and pediatric cataracts are most commonly caused by intrauterine infections, metabolic disorders, and genetically transmitted syndromes.[28] Cataracts are the leading cause of child and adult blindness, and their prevalence roughly doubles with every ten years of age after 40.[29] Consequently, today cataracts are more common among adults than in children;[28] that is, people face higher chances of developing cataracts as they age. Nonetheless, cataracts tend to have a greater financial and emotional toll upon children, as they must undergo expensive diagnosis, long-term rehabilitation, and visual assistance.[30] Also, according to the Saudi Journal for Health Sciences, patients sometimes experience irreversible amblyopia[28] after pediatric cataract surgery, because the cataract prevented the normal maturation of vision prior to the operation.[31] Despite great progress in treatment, cataracts remain a global problem in both economically developed and developing countries.[32] At present, given the variable outcomes and unequal access to cataract surgery, the best way to reduce the risk of developing cataracts is to avoid smoking and extensive exposure to sunlight (i.e., UV-B rays).[29]

Glaucoma is a congenital and pediatric eye disease characterized by increased pressure within the eye, or intraocular pressure (IOP).[33] Glaucoma causes visual field loss and damages the optic nerve.[34] Early diagnosis and treatment of glaucoma is imperative because glaucoma is triggered by non-specific levels of IOP.[34] Another challenge in accurately diagnosing glaucoma is that the disease has four etiologies: 1) inflammatory ocular hypertension syndrome (IOHS); 2) severe uveitic angle closure; 3) corticosteroid-induced; and 4) a heterogeneous mechanism associated with structural change and chronic inflammation.[33] In addition, pediatric glaucoma often differs greatly in etiology and management from the glaucoma developed by adults.[35] Currently, the best sign of pediatric glaucoma is an IOP of 21 mm Hg or greater in a child.[35] One of the most common causes of pediatric glaucoma is cataract removal surgery, which leads to an incidence rate of about 12.2% among infants and 58.7% among 10-year-olds.[35]

Childhood blindness can be caused by conditions related to pregnancy, such as congenital rubella syndrome and retinopathy of prematurity. Leprosy and onchocerciasis each blind approximately 1 million individuals in the developing world.

The number of individuals blind from trachoma has decreased in the past 10 years from 6 million to 1.3 million, putting it in seventh place on the list of causes of blindness worldwide.

Central corneal ulceration is also a significant cause of monocular blindness worldwide, accounting for an estimated 850,000 cases of corneal blindness every year in the Indian subcontinent alone. As a result, corneal scarring from all causes is now the fourth greatest cause of global blindness.[36]

Eye injuries, most often occurring in people under 30, are the leading cause of monocular blindness (vision loss in one eye) throughout the United States. Injuries and cataracts affect the eye itself, while abnormalities such as optic nerve hypoplasia affect the nerve bundle that sends signals from the eye to the back of the brain, which can lead to decreased visual acuity.

Cortical blindness results from injuries to the occipital lobe of the brain that prevent the brain from correctly receiving or interpreting signals from the optic nerve. Symptoms of cortical blindness vary greatly across individuals and may be more severe in periods of exhaustion or stress. It is common for people with cortical blindness to have poorer vision later in the day.

Blinding has been used as an act of vengeance and torture in some instances, to deprive a person of a major sense by which they can navigate or interact within the world, act fully independently, and be aware of events surrounding them. An example from the classical realm is Oedipus, who gouges out his own eyes after realizing that he fulfilled the awful prophecy spoken of him. Having crushed the Bulgarians, the Byzantine Emperor Basil II blinded as many as 15,000 prisoners taken in the battle, before releasing them.[37] Contemporary examples include the addition of methods such as acid throwing as a form of disfigurement.

People with albinism often have vision loss to the extent that many are legally blind, though few of them actually cannot see. Leber's congenital amaurosis can cause total blindness or severe sight loss from birth or early childhood.

Recent advances in mapping of the human genome have identified other genetic causes of low vision or blindness. One such example is Bardet-Biedl syndrome.

Rarely, blindness is caused by the intake of certain chemicals. A well-known example is methanol, which is itself only mildly toxic and minimally intoxicating, but breaks down into formaldehyde and formic acid, which in turn can cause blindness, an array of other health complications, and death.[38] When methanol competes with ethanol for metabolism, ethanol is metabolized first and the onset of toxicity is delayed. Methanol is commonly found in methylated spirits (denatured ethyl alcohol), to which it is added so that the product is exempt from the taxes on ethanol intended for human consumption. Methylated spirits are sometimes used by alcoholics as a desperate and cheap substitute for regular alcoholic beverages.

It is important that people be examined by someone specializing in low vision care prior to other rehabilitation training, to rule out potential medical or surgical correction for the problem and to establish a careful baseline refraction and prescription of both normal and low vision glasses and optical aids. Only a doctor is qualified to evaluate visual functioning of a compromised visual system effectively.[45] The American Medical Association provides an approach to evaluating visual loss as it affects an individual's ability to perform activities of daily living.[20]

Screening adults who have no symptoms is of uncertain benefit.[8]

The World Health Organization estimates that 80% of visual loss is either preventable or curable with treatment.[4] This includes cataracts, onchocerciasis, trachoma, glaucoma, diabetic retinopathy, uncorrected refractive errors, and some cases of childhood blindness.[9] The Center for Disease Control and Prevention estimates that half of blindness in the United States is preventable.[2]

Aside from medical help, various sources provide information, rehabilitation, education, and work and social integration.

Many people with serious visual impairments can travel independently, using a wide range of tools and techniques. Orientation and mobility specialists are professionals who are specifically trained to teach people with visual impairments how to travel safely, confidently, and independently in the home and the community. These professionals can also help blind people to practice travelling on specific routes which they may use often, such as the route from one's house to a convenience store. Becoming familiar with an environment or route can make it much easier for a blind person to navigate successfully.

Tools such as the white cane with a red tip, the international symbol of blindness, may also be used to improve mobility. A long cane is used to extend the user's range of touch sensation. It is usually swung in a low sweeping motion, across the intended path of travel, to detect obstacles. However, techniques for cane travel can vary depending on the user and/or the situation. Some visually impaired persons do not carry these kinds of canes, opting instead for the shorter, lighter identification (ID) cane. Still others require a support cane. The choice depends on the individual's vision, motivation, and other factors.

A small number of people employ guide dogs to assist in mobility. These dogs are trained to navigate around various obstacles, and to indicate when it becomes necessary to go up or down a step. However, the helpfulness of guide dogs is limited by the inability of dogs to understand complex directions. The human half of the guide dog team does the directing, based upon skills acquired through previous mobility training. In this sense, the handler might be likened to an aircraft's navigator, who must know how to get from one place to another, and the dog to the pilot, who gets them there safely.

GPS devices can also be used as a mobility aid. Such software can assist blind people with orientation and navigation, but it is not a replacement for traditional mobility tools such as white canes and guide dogs.

Some blind people are skilled at echolocating silent objects simply by producing mouth clicks and listening to the returning echoes. It has been shown that blind echolocation experts use what is normally the "visual" part of their brain to process the echoes.[46][47]

Government actions are sometimes taken to make public places more accessible to blind people. Public transportation is freely available to the blind in many cities. Tactile paving and audible traffic signals can make it easier and safer for visually impaired pedestrians to cross streets. In addition to making rules about who can and cannot use a cane, some governments mandate the right-of-way be given to users of white canes or guide dogs.

Most visually impaired people who are not totally blind read print, either of a regular size or enlarged by magnification devices. Many also read large-print, which is easier for them to read without such devices. A variety of magnifying glasses, some handheld, and some on desktops, can make reading easier for them.

Others read Braille (or the infrequently used Moon type), or rely on talking books and readers or reading machines, which convert printed text to speech or Braille. They use computers with special hardware such as scanners and refreshable Braille displays as well as software written specifically for the blind, such as optical character recognition applications and screen readers.

Some people access these materials through agencies for the blind, such as the National Library Service for the Blind and Physically Handicapped in the United States, the National Library for the Blind or the RNIB in the United Kingdom.

Closed-circuit televisions, equipment that enlarges textual items and enhances their contrast, are a more high-tech alternative to traditional magnification devices.

There are also over 100 radio reading services throughout the world that provide people with vision impairments with readings from periodicals over the radio. The International Association of Audio Information Services provides links to all of these organizations.

Access technology such as screen readers, screen magnifiers and refreshable Braille displays enable the blind to use mainstream computer applications and mobile phones. The availability of assistive technology is increasing, accompanied by concerted efforts to ensure the accessibility of information technology to all potential users, including the blind. Later versions of Microsoft Windows include an Accessibility Wizard & Magnifier for those with partial vision, and Microsoft Narrator, a simple screen reader. Linux distributions (as live CDs) for the blind include Oralux and Adriane Knoppix, the latter developed in part by Adriane Knopper who has a visual impairment. Mac OS also comes with a built-in screen reader, called VoiceOver.

The movement towards greater web accessibility is opening a far wider number of websites to adaptive technology, making the web a more inviting place for visually impaired surfers.

Experimental approaches in sensory substitution are beginning to provide access to arbitrary live views from a camera.

Modified visual output that includes large print and/or clear simple graphics can be of benefit to users with some residual vision.[48]

Blind people may use talking equipment such as thermometers, watches, clocks, scales, calculators, and compasses. They may also enlarge or mark dials on devices such as ovens and thermostats to make them usable. Other techniques used by blind people to assist them in daily activities include:

Most people, once they have been visually impaired for long enough, devise their own adaptive strategies in all areas of personal and professional management.

For the blind, there are books in braille, audio-books, and text-to-speech computer programs, machines and e-book readers. Low vision people can make use of these tools as well as large-print reading materials and e-book readers that provide large font sizes.

Computers are important tools of integration for the visually impaired person. They allow, using standard or specific programs, screen magnification and conversion of text into sound or touch (Braille line), and are useful for all levels of visual handicap. OCR scanners can, in conjunction with text-to-speech software, read the contents of books and documents aloud via computer. Vendors also build closed-circuit televisions that electronically magnify paper, and even change its contrast and color, for visually impaired users. For more information, consult Assistive technology.

In adults with low vision there is no conclusive evidence supporting one form of reading aid over another.[50] In several studies stand-based closed-circuit television and hand-held closed-circuit television allowed faster reading than optical aids.[50] While electronic aids may allow faster reading for individuals with low vision, portability, ease of use, and affordability must be considered for people.[50]

Children with low vision sometimes have reading delays, but they do benefit from phonics-based beginning reading instruction. Engaging phonics instruction is multisensory, highly motivating, and hands-on. Typically, students are first taught the most frequent sounds of the alphabet letters, especially the so-called short vowel sounds, and then taught to blend sounds into three-letter consonant-vowel-consonant words such as cat, red, sit, hot, and sun. Hands-on (or kinesthetically appealing) and very enlarged print materials, such as those found in "The Big Collection of Phonics Flipbooks" by Lynn Gordon (Scholastic, 2010), are helpful for teaching word families and blending skills to beginning readers with low vision. Beginning reading instructional materials should focus primarily on lower-case letters, not capital letters (even though they are larger), because reading text requires familiarity mostly with lower-case letters. Phonics-based beginning reading should also be supplemented with phonemic awareness lessons, writing opportunities, and frequent read-alouds (literature read to children daily) to stimulate motivation, vocabulary development, concept development, and comprehension skill development.

Many children with low vision can be successfully included in regular education environments. Parents may need to be vigilant to ensure that the school provides the teacher and students with appropriate low vision resources, for example technology in the classroom, classroom aide time, modified educational materials, and consultation assistance from low vision experts.

Communication with the visually impaired can be more difficult than communicating with someone who doesn't have vision loss. However, many people are uncomfortable with communicating with the blind, and this can cause communication barriers. One of the biggest obstacles in communicating with visually impaired individuals comes from face-to-face interactions.[51] There are many factors that can cause the sighted to become uncomfortable while communicating face to face. There are many non-verbal factors that hinder communication between the visually impaired and the sighted, more often than verbal factors do. These factors, which Rivka Bialistock[51] mentions in her article, include:

The blind person sends these signals or types of non-verbal communication without being aware that they are doing so. These factors can all affect the way an individual would feel about communicating with the visually impaired. This leaves the visually impaired feeling rejected and lonely.

In the article "Towards better communication, from the interest point of view. Or skills of sight-glish for the blind and visually impaired," the author, Rivka Bialistock,[51] proposes a method to make individuals less uncomfortable with communicating with the visually impaired. This method is called blind-glish or sight-glish, a language for the blind analogous to English. For example, babies, who are not yet able to talk, communicate through sight-glish: simply seeing everything and communicating non-verbally. This comes naturally to sighted babies, and teaching the same method to babies with a visual impairment can improve their ability to communicate from the very beginning.

To avoid the rejected feeling of the visually impaired, people need to treat the blind the same way they would treat anyone else, rather than treating them like they have a disability, and need special attention. People may feel that it is improper to, for example, tell their blind child to look at them when they are speaking. However, this contributes to the sight-glish method.[51] It is important to disregard any mental fears or uncomfortable feelings people have while communicating (verbally and non-verbally) face-to-face.

Individuals with a visual disability not only have to find ways to communicate effectively with the people around them, but their environment as well. The blind or visually impaired rely largely on their other senses such as hearing, touch, and smell in order to understand their surroundings.[52]

Sound is one of the most important senses that blind or visually impaired people use to locate objects in their surroundings. They use a form of echolocation, similar to that of a bat.[53] In human echolocation, sound waves generated by speech or by other noises such as cane tapping reflect off objects and return to the person, giving them a rough idea of where the object is. This does not mean they can make out details by sound, but rather where objects are, in order to interact with or avoid them. Increases in atmospheric pressure and humidity improve a person's ability to use sound in this way, while wind or any form of background noise impairs it.[52]

Touch is also an important aspect of how blind or visually impaired people perceive the world. Touch provides an immense amount of information about a person's immediate surroundings. Feeling an object in detail conveys its shape, size, texture, temperature, and many other qualities. Touch also helps with communication: braille is a form of communication in which people use their fingers to feel elevated bumps on a surface and understand what is meant to be interpreted.[54] There are some issues and limitations with touch, as not all objects are accessible to feel, which makes it difficult to perceive the actual object. Another limiting factor is that identifying objects by touch is much slower than identifying them by sight, because the object must be approached and carefully felt until a rough idea of it can be constructed in the brain.[52]

Certain smells can be associated with specific areas and help a person with vision problems remember a familiar area, improving the chance of recognizing an area's layout and navigating through it. The same can be said for people: some people have a distinctive odor that a person with a more trained sense of smell can pick up, allowing a person with a visual impairment to recognize people within their vicinity without them saying a word.[52]

Visual impairment can have profound effects on the development of infant and child communication. The language and social development of a child or infant can be very delayed by the inability to see the world around them.

Social development includes interactions with the people surrounding the infant in the beginning of its life. To a child with vision, a smile from a parent is the first symbol of recognition and communication, and is almost an instant factor of communication. For a visually impaired infant, recognition of a parent's voice will be noticed at approximately two months old, but a smile will only be evoked through touch between parent and baby. This primary form of communication is greatly delayed for the child and will prevent other forms of communication from developing. Social interactions are more complicated because subtle visual cues are missing and facial expressions from others are lost.

Due to delays in communication development, the child may appear disinterested in social activity with peers, non-communicative, and uneducated in how to communicate with other people. This may cause the child to be avoided by peers and consequently overprotected by family members.

With sight, much of what a child learns is learned through imitation of others, whereas a visually impaired child needs very planned instruction directed at the development of postponed imitation. A visually impaired infant may jabber and imitate words sooner than a sighted child, but may show delays when combining words into sentences of their own; such a child tends to initiate few questions, and their use of adjectives is infrequent. Because the child's sensory experiences are not readily coded into language, they may store phrases and sentences in memory and repeat them out of context. The language of the blind child does not seem to mirror their developing knowledge of the world, but rather their knowledge of the language of others.

A visually impaired child may also be hesitant to explore the world around them due to fear of the unknown and also may be discouraged from exploration by overprotective family members. Without concrete experiences, the child is not able to develop meaningful concepts or the language to describe or think about them.[55]

Visual impairment can have profound consequences for health and well-being, and it is increasing, especially among older people. It is recognized that individuals with visual impairment are likely to have limited access to information and healthcare facilities, and may not receive the best care possible because not all health care professionals are aware of specific needs related to vision.

The WHO estimates that in 2012 there were 285 million visually impaired people in the world, of which 246 million had low vision and 39 million were blind.[4]

Of those who are blind 90% live in the developing world.[56] Worldwide for each blind person, an average of 3.4 people have low vision, with country and regional variation ranging from 2.4 to 5.5.[57]

By age: Visual impairment is unequally distributed across age groups. More than 82% of all people who are blind are 50 years of age and older, although they represent only 19% of the world's population. Due to the expected number of years lived in blindness (blind years), childhood blindness remains a significant problem, with an estimated 1.4 million blind children below age 15.

By gender: Available studies consistently indicate that in every region of the world, and at all ages, females have a significantly higher risk of being visually impaired than males.

By geography: Visual impairment is not distributed uniformly throughout the world. More than 90% of the world's visually impaired live in developing countries.[57]

Since the estimates of the 1990s, new data based on the 2002 global population show a reduction in the number of people who are blind or visually impaired, and those who are blind from the effects of infectious diseases, but an increase in the number of people who are blind from conditions related to longer life spans.[57]

In 1987, it was estimated that 598,000 people in the United States met the legal definition of blindness.[58] Of this number, 58% were over the age of 65.[58] In 1994-1995, 1.3 million Americans reported legal blindness.[59]

To determine which people qualify for special assistance because of their visual disabilities, various governments have specific definitions for legal blindness.[60] In North America and most of Europe, legal blindness is defined as visual acuity (vision) of 20/200 (6/60) or less in the better eye with the best correction possible. This means that a legally blind individual would have to stand 20 feet (6.1 m) from an object to see it (with corrective lenses) with the same degree of clarity as a normally sighted person could from 200 feet (61 m). In many areas, people with average acuity who nonetheless have a visual field of less than 20 degrees (the norm being 180 degrees) are also classified as legally blind. Approximately ten percent of those deemed legally blind, by any measure, have no vision at all. The rest have some vision, from light perception alone to relatively good acuity. Low vision is sometimes used to describe visual acuities from 20/70 to 20/200.[61]
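The two-pronged rule paraphrased above (reduced acuity or a constricted visual field) can be expressed as a small predicate. This is an illustrative sketch of the definition as described here, not code from any statute or library; the function and parameter names are hypothetical.

```python
from fractions import Fraction

def is_legally_blind(acuity_better_eye: Fraction,
                     field_diameter_deg: float) -> bool:
    """North American / European legal-blindness rule, as paraphrased here:
    best-corrected acuity of 20/200 or less in the better eye, OR a visual
    field whose widest diameter subtends 20 degrees or less."""
    low_acuity = acuity_better_eye <= Fraction(20, 200)
    narrow_field = field_diameter_deg <= 20
    return low_acuity or narrow_field

print(is_legally_blind(Fraction(20, 200), 180))  # True: acuity criterion met
print(is_legally_blind(Fraction(20, 40), 15))    # True: field criterion met
print(is_legally_blind(Fraction(20, 40), 180))   # False: neither criterion met
```

Note that the second criterion is why a person with good central acuity but severe tunnel vision can still be classified as legally blind.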

The Moche people of ancient Peru depicted the blind in their ceramics.[62]

In Greek myth, Tiresias was a prophet famous for his clairvoyance. According to one myth, he was blinded by the gods as punishment for revealing their secrets, while another holds that he was blinded as punishment after he saw Athena naked while she was bathing. In the Odyssey, the one-eyed Cyclops Polyphemus captures Odysseus, who blinds Polyphemus to escape. In Norse mythology, Loki tricks the blind god Höðr into killing his brother Baldr, the god of happiness.

The New Testament contains numerous instances of Jesus performing miracles to heal the blind. According to the Gospels, Jesus healed the two blind men of Galilee, the blind man of Bethsaida, the blind man of Jericho and the man who was born blind.

The parable of the blind men and an elephant has crossed between many religious traditions and is part of Jain, Buddhist, Sufi and Hindu lore. In various versions of the tale, a group of blind men (or men in the dark) touch an elephant to learn what it is like. Each one feels a different part, but only one part, such as the side or the tusk. They then compare notes and learn that they are in complete disagreement.

"Three Blind Mice" is a medieval English nursery rhyme about three blind mice whose tails are cut off after chasing the farmer's wife. The work is explicitly incongruous, ending with the comment "Did you ever see such a sight in your life, / As three blind mice?"

Poet John Milton, who went blind in mid-life, composed "On His Blindness," a sonnet about coping with blindness. The work posits that "[those] who best / Bear [God]'s mild yoke, they serve him best."

The Dutch painter and engraver Rembrandt often depicted scenes from the apocryphal Book of Tobit, which tells the story of a blind patriarch who is healed by his son, Tobias, with the help of the archangel Raphael.[63]

Slaver-turned-abolitionist John Newton composed the hymn Amazing Grace about a wretch who "once was lost, but now am found, Was blind, but now I see." Blindness, in this sense, is used both metaphorically (to refer to someone who was ignorant but later became knowledgeable) and literally, as a reference to those healed in the Bible. In the later years of his life, Newton himself would go blind.

H. G. Wells' story "The Country of the Blind" explores what would happen if a sighted man found himself trapped in a country of blind people to emphasise society's attitude to blind people by turning the situation on its head.

Bob Dylan's anti-war song "Blowin' in the Wind" twice alludes to metaphorical blindness: "How many times can a man turn his head // and pretend that he just doesn't see ... How many times must a man look up // Before he can see the sky?"

Contemporary fiction contains numerous well-known blind characters. Some of these characters can "see" by means of fictitious devices, such as the Marvel Comics superhero Daredevil, who can "see" via his super-human hearing acuity, or Star Trek's Geordi La Forge, who can see with the aid of a VISOR, a fictitious device that transmits optical signals to his brain.

Visit link:
Visual impairment - Wikipedia


Ophthalmology – Wikipedia

October 30th, 2016 5:44 pm

Ophthalmology[1] is the branch of medicine that deals with the anatomy, physiology and diseases of the eye.[2] An ophthalmologist is a specialist in medical and surgical eye problems. Since ophthalmologists perform operations on eyes, they are both surgical and medical specialists. A multitude of diseases and conditions can be diagnosed from the eye.[3]

The Greek roots of the word ophthalmology are ὀφθαλμός (ophthalmos, "eye") and -λογία (-logia, "study, discourse"),[4][5] i.e., "the study of eyes". The discipline applies to all animal eyes, whether human or not, since the practice and procedures are quite similar with respect to disease processes, while differences in anatomy or disease prevalence, whether subtle or substantial, may differentiate the two.[citation needed]

The Indian surgeon Sushruta wrote the Sushruta Samhita in Sanskrit in about 800 BC; it describes 76 ocular diseases (51 of them surgical) as well as several ophthalmological surgical instruments and techniques.[6][7] His description of cataract surgery was more akin to extracapsular lens extraction than to couching.[8] He has been described as the first cataract surgeon.[9][10]

The pre-Hippocratics largely based their anatomical conceptions of the eye on speculation rather than empiricism.[11] They recognized the sclera and transparent cornea running flushly as the outer coating of the eye, with an inner layer with pupil, and a fluid at the centre. It was believed, by Alcmaeon and others, that this fluid was the medium of vision and flowed from the eye to the brain through a tube. Aristotle advanced such ideas with empiricism. Dissecting the eyes of animals, he discovered three layers (not two), found that the fluid was of a constant consistency with the lens forming (or congealing) after death, and saw that the surrounding layers were juxtaposed. He and his contemporaries further put forth the existence of three tubes leading from the eye, not one. One tube from each eye met within the skull.

Rufus of Ephesus recognised a more modern eye, with conjunctiva, extending as a fourth epithelial layer over the eye.[12] Rufus was the first to recognise a two-chambered eye, with one chamber from cornea to lens (filled with water), the other from lens to retina (filled with an egg white-like substance). The Greek physician Galen remedied some mistakes including the curvature of the cornea and lens, the nature of the optic nerve, and the existence of a posterior chamber.

Though this model was a roughly correct modern model of the eye, it contained errors. Still, it was not advanced upon again until after Vesalius. A ciliary body was then discovered and the sclera, retina, choroid, and cornea were seen to meet at the same point. The two chambers were seen to hold the same fluid, as well as the lens being attached to the choroid. Galen continued the notion of a central canal, but he dissected the optic nerve and saw that it was solid. He mistakenly counted seven optical muscles, one too many. He also knew of the tear ducts.

Medieval Islamic Arabic and Persian scientists (unlike their classical predecessors) considered it normal to combine theory and practice, including the crafting of precise instruments, and therefore found it natural to combine the study of the eye with the practical application of that knowledge.[13] Hunain ibn Ishaq, and others beginning with the medieval Arabic period, taught that the crystalline lens is in the exact center of the eye.[14] This idea was propagated until the end of the 1500s.[14]

Ibn al-Haytham (Alhazen), an Arab scientist with Islamic beliefs, wrote extensively on optics and the anatomy of the eye in his Book of Optics (1021).

Ibn al-Nafis, an Arabic native of Damascus, wrote a large textbook, The Polished Book on Experimental Ophthalmology, divided into two parts, On the Theory of Ophthalmology and Simple and Compounded Ophthalmic Drugs.[15]

In the 17th and 18th centuries, hand lenses were used by Malpighi, microscopes by van Leeuwenhoek, preparations for fixing the eye for study by Ruysch, and later the freezing of the eye by Petit. This allowed for detailed study of the eye and an advanced model. Some mistakes persisted, such as why the pupil changed size (seen to be vessels of the iris filling with blood), the existence of the posterior chamber, and the nature of the retina. In 1722, van Leeuwenhoek noted the existence of rods and cones,[citation needed] though they were not properly discovered until Gottfried Reinhold Treviranus in 1834 by use of a microscope.

Georg Joseph Beer (1763–1821) was an Austrian ophthalmologist and leader of the First Viennese School of Medicine. He introduced a flap operation for treatment of cataracts (Beer's operation), as well as popularizing the instrument used to perform the surgery (Beer's knife).[16]

The first ophthalmic surgeon in Great Britain was John Freke, appointed to the position by the Governors of St Bartholomew's Hospital in 1727. A major breakthrough came with the appointment of Baron Michael Johann Baptist de Wenzel (1724–90), a German who became oculist to King George III of England in 1772. His skill at removing cataracts legitimized the field.[17] The first dedicated ophthalmic hospital opened in 1805 in London; it is now called Moorfields Eye Hospital. Clinical developments at Moorfields and the founding of the Institute of Ophthalmology (now part of the University College London) by Sir Stewart Duke-Elder established the site as the largest eye hospital in the world and a nexus for ophthalmic research.[18]

The prominent opticians of the late 19th and early 20th centuries included Ernst Abbe (1840–1905), a co-owner of the Zeiss Jena factories in Germany, where he developed numerous optical instruments. Hermann von Helmholtz (1821–1894) was a polymath who made contributions to many fields of science and invented the ophthalmoscope in 1851. Both made theoretical calculations on image formation in optical systems and also studied the optics of the eye.

Numerous ophthalmologists fled Germany after 1933 as the Nazis began to persecute those of Jewish descent. A representative leader was Joseph Igersheimer (1879–1965), best known for his discoveries with arsphenamine for the treatment of syphilis. He fled to Turkey in 1933. As one of eight emigrant directors in the Faculty of Medicine at the University of Istanbul, he built a modern clinic and trained students. In 1939, he went to the United States, becoming a professor at Tufts University.[19]

Polish ophthalmology dates to the 13th century. The Polish Ophthalmological Society was founded in 1911. A representative leader was Adam Zamenhof (1888–1940), who introduced certain diagnostic, surgical, and nonsurgical eye-care procedures and was shot by the Nazis in 1940.[20] Zofia Falkowska (1915–93), head of the Faculty and Clinic of Ophthalmology in Warsaw from 1963 to 1976, was the first to use lasers in her practice.

Ophthalmologists are physicians (MD/MBBS or D.O., not OD or BOptom) who have completed a college degree, medical school, and residency in ophthalmology. Ophthalmology training equips eye specialists to provide the full spectrum of eye care, including the prescription of glasses and contact lenses, medical treatment, and complex microsurgery. In many countries, ophthalmologists also undergo additional specialized training in one of the many subspecialties. Ophthalmology was the first branch of medicine to offer board certification, now a standard practice among all specialties.

In Australia and New Zealand, the FRACO/FRANZCO is the equivalent postgraduate specialist qualification. Entry into training is highly competitive, and a closely monitored, structured training system is in place over the five years of postgraduate training. Overseas-trained ophthalmologists are assessed using the pathway published on the RANZCO website. Those who have completed their formal training in the UK and hold the CCST/CCT are usually deemed comparable.

In Bangladesh, the basic degree required to become an ophthalmologist is the MBBS, followed by a postgraduate degree or diploma in ophthalmology: the Diploma in Ophthalmology, the Diploma in Community Ophthalmology, Fellowship or Membership of the College of Physicians and Surgeons in ophthalmology, or a Master of Science in ophthalmology.

In Canada, an ophthalmology residency is undertaken after medical school. The residency lasts a minimum of five years after the MD degree and culminates in fellowship of the Royal College of Surgeons of Canada (FRCSC). Subspecialty training is undertaken by about 30% of fellows (FRCSC) in a variety of fields: anterior segment, cornea, glaucoma, visual rehabilitation, uveitis, oculoplastics, medical and surgical retina, ocular oncology, ocular pathology, or neuro-ophthalmology. About 35 vacancies open per year for ophthalmology residency training in all of Canada, though this number fluctuates from year to year, ranging from 30 to 37 spots. Of these, up to seven are often dedicated to French-speaking universities in Quebec, while the remaining English-speaking spots are competed for by hundreds of applicants each year. At the end of the five years, the graduating ophthalmologist must pass the oral and written portions of the Royal College exam.

In Finland, physicians willing to become ophthalmologists must undergo a five-year specialization which includes practical training and theoretical studies.

In India, postgraduate study in ophthalmology is required after completing the MBBS degree. The degrees are Doctor of Medicine, Master of Surgery, Diploma in Ophthalmic Medicine and Surgery, and Diplomate of National Board. The concurrent training and work experience takes the form of a junior residency at a medical college, eye hospital, or institution under the supervision of experienced faculty. Further work experience in the form of a fellowship, registrar, or senior resident post refines the skills of these eye surgeons. The All India Ophthalmological Society and various state-level ophthalmological societies hold regular conferences and actively promote continuing medical education.

In Nepal, three years of postgraduate study are required after completing the MBBS degree to become an ophthalmologist. The postgraduate degree in ophthalmology is called the MD in Ophthalmology. This degree is currently provided by the Tilganga Institute of Ophthalmology, Tilganga, Kathmandu; BPKLCO, Institute of Medicine, TU, Kathmandu; BP Koirala Institute of Health Sciences, Dharan; Kathmandu University, Dhulikhel; and the National Academy of Medical Science, Kathmandu. A few Nepalese citizens also study the subject in Bangladesh, China, India, Pakistan, and other countries. All graduates must pass the Nepal Medical Council Licensing Exam to become registered ophthalmologists in Nepal. The concurrent residency training takes the form of a PG student (resident) post at a medical college, eye hospital, or institution according to the degree-providing university's rules and regulations. The Nepal Ophthalmic Society holds regular conferences and actively promotes continuing medical education.

In Ireland, the Royal College of Surgeons of Ireland grants Membership (MRCSI (Ophth)) and Fellowship (FRCSI (Ophth)) qualifications in conjunction with the Irish College of Ophthalmologists. Total postgraduate training involves an intern year, a minimum of three years of basic surgical training and a further 4.5 years of higher surgical training. Clinical training takes place within public, Health Service Executive-funded hospitals in Dublin, Sligo, Limerick, Galway, Waterford, and Cork. A minimum of 8.5 years of training is required before eligibility to work in consultant posts. Some trainees take extra time to obtain MSc, MD or PhD degrees and to undertake clinical fellowships in the UK, Australia and the United States.

In Pakistan, after the MBBS, a four-year full-time residency program leads to an exit-level FCPS examination in ophthalmology, held under the auspices of the College of Physicians and Surgeons, Pakistan. The demanding examination is assessed by both highly qualified Pakistani and eminent international ophthalmic consultants. As a prerequisite to the final examinations, an intermediate module, an optics and refraction module, and a dissertation on a research project carried out under supervision are also assessed. Moreover, a two-and-a-half-year residency program leads to an MCPS, while a two-year DOMS training is also offered.[21] For candidates in the military, a stringent two-year graded course, with quarterly assessments, is held under the Armed Forces Post Graduate Medical Institute in Rawalpindi. The M.S. in ophthalmology is also one of the specialty programs. In addition to programs for doctors, various diplomas and degrees for allied eyecare personnel are also offered to produce competent optometrists, orthoptists, ophthalmic nurses, ophthalmic technologists, and ophthalmic technicians, notably by the College of Ophthalmology and Allied Vision Sciences in Lahore and the Pakistan Institute of Community Ophthalmology in Peshawar.[22] Subspecialty fellowships are also offered in pediatric ophthalmology and vitreoretinal ophthalmology. King Edward Medical University, Al Shifa Trust Eye Hospital Rawalpindi, and Al-Ibrahim Eye Hospital Karachi have also started degree programs in this field.

In the Philippines, ophthalmology is considered a medical specialty that uses medicine and surgery to treat diseases of the eye. There are two professional organizations in the country: the Philippine Academy of Ophthalmology (PAO)[23] and the Philippine Academy of Medical Specialists, Discipline in Ophthalmology (PAMS Ophtha). Individually, they regulate ophthalmology residency programs and board certification through their respective accrediting agencies. To become a general ophthalmologist in the Philippines, a candidate must have completed a Doctor of Medicine degree (MD) or its equivalent (e.g. MBBS), completed an internship in medicine, passed the physician licensure exam, and completed residency training at a hospital accredited by the Philippine Board of Ophthalmology (the accrediting arm of the PAO)[24] or by PAMS Ophtha. Attainment of board certification in ophthalmology from either the PBO or PAMS Ophtha is optional, but preferred, in acquiring privileges in most major health institutions. Graduates of residency programs can receive further training in ophthalmology subspecialties, such as neuro-ophthalmology and retina, by completing a fellowship program that varies in length depending on each program's requirements.

In the United Kingdom, three colleges grant postgraduate degrees in ophthalmology. The Royal College of Ophthalmologists (RCOphth) grants the MRCOphth/FRCOphth, the Royal College of Surgeons of Edinburgh grants the MRCSEd/FRCSEd (although membership is no longer a prerequisite for fellowship), and the Royal College of Physicians and Surgeons of Glasgow grants the FRCS. Postgraduate work as a specialist registrar and one of these degrees is required for specialization in eye diseases. Such clinical work is within the NHS, with supplementary private work for some consultants. Only 2.3 ophthalmologists exist per 100,000 population in the UK, fewer pro rata than in any other nation in the European Union.[25]

In the United States, four years of residency training after medical school are required, with the first year being an internship in surgery, internal medicine, pediatrics, or a general transition year. Optional fellowships in advanced topics may be pursued for several years after residency. Most currently practicing ophthalmologists train in medical residency programs accredited by the Accreditation Council for Graduate Medical Education or the American Osteopathic Association and are board-certified by the American Board of Ophthalmology or the American Osteopathic Board of Ophthalmology and Otolaryngology. United States physicians who train in osteopathic medical schools hold the Doctor of Osteopathic Medicine (DO) degree rather than an MD degree. The same residency and certification requirements for ophthalmology training must be fulfilled by osteopathic physicians.

Physicians must complete the requirements of continuing medical education to maintain licensure and for recertification. Professional bodies like the American Academy of Ophthalmology and American Society of Cataract and Refractive Surgery organize conferences, help physician members through continuing medical education programs for maintaining board certification, and provide political advocacy and peer support.

Ophthalmology includes subspecialities that deal either with certain diseases or with diseases of certain parts of the eye.

Read the rest here:
Ophthalmology - Wikipedia


Annual Reviews – Home

October 30th, 2016 5:43 pm

This site uses cookies to improve performance. If your browser does not accept cookies, you cannot view this site.

There are many reasons why a cookie could not be set correctly. Below are the most common reasons:

This site uses cookies to improve performance by remembering that you are logged in when you go from page to page. To provide access without cookies would require the site to create a new session for every page you visit, which slows the system down to an unacceptable level.

This site stores nothing other than an automatically generated session ID in the cookie; no other information is captured.

In general, only the information that you provide, or the choices you make while visiting a web site, can be stored in a cookie. For example, the site cannot determine your email name unless you choose to type it. Allowing a website to create a cookie does not give that or any other site access to the rest of your computer, and only the site that created the cookie can read it.
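The session-cookie mechanism described above can be sketched in a few lines of Python using only the standard library. This is an illustrative example, not the site's actual implementation; the cookie name `session_id` and the attributes chosen here are assumptions.

```python
import secrets
from http.cookies import SimpleCookie

def make_session_cookie() -> SimpleCookie:
    """Build a cookie that carries only an automatically generated
    session ID -- no other information is captured in it."""
    cookie = SimpleCookie()
    cookie["session_id"] = secrets.token_hex(16)  # opaque random identifier
    cookie["session_id"]["path"] = "/"            # sent on every page, so the
                                                  # server recognizes the session
    cookie["session_id"]["httponly"] = True       # hidden from page scripts
    return cookie

# The header a server would send on a visitor's first request:
print(make_session_cookie().output(header="Set-Cookie:"))
```

Because only the random ID travels back and forth, the server can keep any per-visitor state (such as login status) on its own side, keyed by that ID, which is why disabling cookies forces it to create a new session for every page.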

Read more:
Annual Reviews - Home


Mount Sinai Health System – New York City | Mount Sinai …

October 29th, 2016 12:42 pm


See the original post:
Mount Sinai Health System - New York City | Mount Sinai ...


American Psychological Association (APA)

October 29th, 2016 12:41 pm

APA presidential election

Get to know the candidates. Log in to MyAPA and submit your ballot by Oct. 31, 2016.

Psychology's superhero

What's psychology's connection to Wonder Woman, the first feminist crusader?

Eating disorders

Learn about the major kinds of eating disorders and how a psychologist can help.

The discipline gap

Black students feel less welcome at schools with excessive suspensions, study finds.

Join APA

Join our community of researchers, teachers, practitioners and students.

MyAPA

Access your MyAPA account, subscriptions, products and more.


APA Actions in Response to Independent Review.

Practice Central: Resources for practitioners from the APA Practice Organization


APA is the leading scientific and professional organization representing psychology in the United States. Our mission is to advance the creation, communication and application of psychological knowledge to benefit society and improve people's lives.

The only cure for OCD is expensive, elusive, and scary

October 27, 2016, The Atlantic

Autism study shows benefits when parents get involved

October 26, 2016, CNN

7 ways to know if you're on the right career path

October 25, 2016, Forbes

Teen hackers study considers link to addiction

October 24, 2016, BBC News

Can mental illness be prevented in the womb?

October 22, 2016, NPR

Talking to your therapist about election anxiety

October 20, 2016, The New York Times

More:
American Psychological Association (APA)


Regenerative medicine – Wikipedia

October 28th, 2016 12:44 am

Regenerative medicine is a branch of translational research[1] in tissue engineering and molecular biology which deals with the "process of replacing, engineering or regenerating human cells, tissues or organs to restore or establish normal function".[2] This field holds the promise of engineering damaged tissues and organs by stimulating the body's own repair mechanisms to functionally heal previously irreparable tissues or organs.[3]

Regenerative medicine also includes the possibility of growing tissues and organs in the laboratory and implanting them when the body cannot heal itself. If a regenerated organ's cells were derived from the patient's own tissue or cells, this would potentially solve both the shortage of organs available for donation and the problem of organ transplant rejection.[4][5][6]

Some of the biomedical approaches within the field of regenerative medicine may involve the use of stem cells.[7] Examples include the injection of stem cells or progenitor cells obtained through directed differentiation (cell therapies); the induction of regeneration by biologically active molecules administered alone or as a secretion by infused cells (immunomodulation therapy); and transplantation of in vitro grown organs and tissues (tissue engineering).[8][9]

The term "regenerative medicine" was first used in a 1992 article on hospital administration by Leland Kaiser. Kaisers paper closes with a series of short paragraphs on future technologies that will impact hospitals. One paragraph had "Regenerative Medicine" as a bold print title and stated, "A new branch of medicine will develop that attempts to change the course of chronic disease and in many instances will regenerate tired and failing organ systems."[10][11]

The widespread use of the term regenerative medicine is attributed to William A. Haseltine (founder of Human Genome Sciences).[12] Haseltine was briefed on the project to isolate human embryonic stem cells and embryonic germ cells at Geron Corporation in collaboration with researchers at the University of Wisconsin-Madison and Johns Hopkins School of Medicine. He recognized that these cells' unique ability to differentiate into all the cell types of the human body (pluripotency) had the potential to develop into a new kind of regenerative therapy.[13][14] Explaining the new class of therapies that such cells could enable, he used the term "regenerative medicine" in the way that it is used today: "an approach to therapy that ... employs human genes, proteins and cells to re-grow, restore or provide mechanical replacements for tissues that have been injured by trauma, damaged by disease or worn by time" and "offers the prospect of curing diseases that cannot be treated effectively today, including those related to aging." [15] From 1995 to 1998 Michael D. West, PhD, organized and managed the research between Geron Corporation and its academic collaborators James Thomson at the University of Wisconsin-Madison and John Gearhart of Johns Hopkins University that led to the first isolation of human embryonic stem and human embryonic germ cells, respectively.[16]

Dr. Stephen Badylak, a Research Professor in the Department of Surgery and director of Tissue Engineering at the McGowan Institute for Regenerative Medicine at the University of Pittsburgh, developed a process for scraping cells from the lining of a pig's bladder, decellularizing (removing cells to leave a clean extracellular structure) the tissue and then drying it to become a sheet or a powder. This extracellular matrix powder was used to regrow the finger of Lee Spievak, who had severed half an inch of his finger after getting it caught in a propeller of a model plane.[17][18][19] As of 2011, this new technology is being employed by the military on U.S. war veterans in Texas, as well as for some civilian patients. Nicknamed "pixie-dust," the powdered extracellular matrix is being used to successfully regenerate tissue lost and damaged due to traumatic injuries.[20]

In June 2008, at the Hospital Clínic de Barcelona, Professor Paolo Macchiarini and his team from the University of Barcelona performed the first tissue-engineered trachea (windpipe) transplantation. Adult stem cells were extracted from the patient's bone marrow, grown into a large population, and matured into cartilage cells, or chondrocytes, using an adaptive method originally devised for treating osteoarthritis. The team then seeded the newly grown chondrocytes, as well as epithelial cells, into a decellularised (free of donor cells) tracheal segment that was donated from a 51-year-old transplant donor who had died of cerebral hemorrhage. After four days of seeding, the graft was used to replace the patient's left main bronchus. After one month, a biopsy elicited local bleeding, indicating that the blood vessels had already grown back successfully.[21][22]

In 2009, the SENS Foundation was launched, with its stated aim as "the application of regenerative medicine defined to include the repair of living cells and extracellular material in situ to the diseases and disabilities of ageing." [23]

In 2012, Professor Paolo Macchiarini and his team improved upon the 2008 implant by transplanting a laboratory-made trachea seeded with the patient's own cells.[24]

On September 12, 2014, surgeons at the Institute of Biomedical Research and Innovation Hospital in Kobe, Japan, transplanted a 1.3 by 3.0 millimeter sheet of retinal pigment epithelium cells, differentiated from iPS cells through directed differentiation, into an eye of an elderly woman suffering from age-related macular degeneration.[25]

Because a person's own (autologous) cord blood stem cells can be safely infused back into that individual without being rejected by the body's immune system, and because they have unique characteristics compared to other sources of stem cells, they are an increasing focus of regenerative medicine research.

The use of cord blood stem cells in treating conditions such as brain injury[26] and Type 1 Diabetes[27] is already being studied in humans, and earlier stage research is being conducted for treatments of stroke,[28][29] and hearing loss.[30]

Current estimates indicate that approximately 1 in 3 Americans could benefit from regenerative medicine.[31] With autologous (the person's own) cells, there is no risk of the immune system rejecting the cells.

Researchers are exploring the use of cord blood stem cells for a spectrum of regenerative medicine applications, including the following:

A clinical trial under way at the University of Florida is examining how an infusion of autologous cord blood stem cells into children with Type 1 diabetes will impact metabolic control over time, as compared to standard insulin treatments. Preliminary results demonstrate that an infusion of cord blood stem cells is safe and may provide some slowing of the loss of insulin production in children with type 1 diabetes.[32]

The stem cells found in a newborn's umbilical cord blood hold great promise in cardiovascular repair. Researchers are noting several positive observations in pre-clinical animal studies. Thus far, in animal models of myocardial infarction, cord blood stem cells have shown the ability to selectively migrate to injured cardiac tissue, improve vascular function and blood flow at the site of injury, and improve overall heart function.[31]

Research has demonstrated convincing evidence in animal models that cord blood stem cells injected intravenously have the ability to migrate to the area of brain injury, alleviating mobility related symptoms.[33][34] Also, administration of human cord blood stem cells into animals with stroke was shown to significantly improve behavior by stimulating the creation of new blood vessels and neurons in the brain.[35]

This research also lends support to the pioneering clinical work at Duke University, focused on evaluating the impact of autologous cord blood infusions in children diagnosed with cerebral palsy and other forms of brain injury. This study is examining whether an infusion of the child's own cord blood stem cells facilitates repair of damaged brain tissue. To date, more than 100 children have participated in the experimental treatment, and many parents report good progress.[36]

Another report published encouraging results in two toddlers with cerebral palsy in whom autologous cord blood infusion was combined with G-CSF.[37]

As these clinical and pre-clinical studies demonstrate, cord blood stem cells will likely be an important resource as medicine advances toward harnessing the body's own cells for treatment. The field of regenerative medicine can be expected to benefit greatly as additional cord blood stem cell applications are researched and more people have access to their own preserved cord blood.[38]

On May 17, 2012, Osiris Therapeutics announced that Canadian health regulators approved Prochymal, a drug for acute graft-versus-host disease in children who have failed to respond to steroid treatment. Prochymal is the first stem cell drug to be approved anywhere in the world for a systemic disease. Graft-versus-host disease, a potentially fatal complication from bone marrow transplant, involves the newly implanted cells attacking the patient's body.[39]


See the article here:
Regenerative medicine - Wikipedia


DNA – Wikipedia

October 28th, 2016 12:42 am

Deoxyribonucleic acid (DNA)[1] is a molecule that carries the genetic instructions used in the growth, development, functioning and reproduction of all known living organisms and many viruses. DNA and RNA are nucleic acids; alongside proteins, lipids and complex carbohydrates (polysaccharides), they are one of the four major types of macromolecules that are essential for all known forms of life. Most DNA molecules consist of two biopolymer strands coiled around each other to form a double helix.

The two DNA strands are termed polynucleotides since they are composed of simpler monomer units called nucleotides.[2][3] Each nucleotide is composed of one of four nitrogen-containing nucleobases (either cytosine (C), guanine (G), adenine (A), or thymine (T)), a sugar called deoxyribose, and a phosphate group. The nucleotides are joined to one another in a chain by covalent bonds between the sugar of one nucleotide and the phosphate of the next, resulting in an alternating sugar-phosphate backbone. The nitrogenous bases of the two separate polynucleotide strands are bound together, according to base pairing rules (A with T, and C with G), with hydrogen bonds to make double-stranded DNA. The total amount of related DNA base pairs on Earth is estimated at 5.0 × 10^37, and weighs 50 billion tonnes.[4] In comparison, the total mass of the biosphere has been estimated to be as much as 4 trillion tons of carbon (TtC).[5]

DNA stores biological information. The DNA backbone is resistant to cleavage, and both strands of the double-stranded structure store the same biological information. This information is replicated as and when the two strands separate. A large part of DNA (more than 98% for humans) is non-coding, meaning that these sections do not serve as patterns for protein sequences.

The two strands of DNA run in opposite directions to each other and are thus antiparallel. Attached to each sugar is one of four types of nucleobases (informally, bases). It is the sequence of these four nucleobases along the backbone that encodes biological information. RNA strands are created using DNA strands as a template in a process called transcription. Under the genetic code, these RNA strands are translated to specify the sequence of amino acids within proteins in a process called translation.

Within eukaryotic cells, DNA is organized into long structures called chromosomes. During cell division these chromosomes are duplicated in the process of DNA replication, providing each cell its own complete set of chromosomes. Eukaryotic organisms (animals, plants, fungi, and protists) store most of their DNA inside the cell nucleus and some of their DNA in organelles, such as mitochondria or chloroplasts.[6] In contrast, prokaryotes (bacteria and archaea) store their DNA only in the cytoplasm. Within the eukaryotic chromosomes, chromatin proteins such as histones compact and organize DNA. These compact structures guide the interactions between DNA and other proteins, helping control which parts of the DNA are transcribed.

DNA was first isolated by Friedrich Miescher in 1869. Its molecular structure was identified by James Watson and Francis Crick in 1953, whose model-building efforts were guided by X-ray diffraction data acquired by Rosalind Franklin. DNA is used by researchers as a molecular tool to explore physical laws and theories, such as the ergodic theorem and the theory of elasticity. The unique material properties of DNA have made it an attractive molecule for material scientists and engineers interested in micro- and nano-fabrication. Among notable advances in this field are DNA origami and DNA-based hybrid materials.[7]

DNA is a long polymer made from repeating units called nucleotides.[8][9] The structure of DNA is non-static;[10] in all species it comprises two helical chains, each coiled round the same axis, and each with a pitch of 34 ångströms (3.4 nanometres) and a radius of 10 ångströms (1.0 nanometre).[11] According to another study, when measured in a particular solution, the DNA chain measured 22 to 26 ångströms wide (2.2 to 2.6 nanometres), and one nucleotide unit measured 3.3 Å (0.33 nm) long.[12] Although each individual repeating unit is very small, DNA polymers can be very large molecules containing millions of nucleotides. For instance, the DNA in the largest human chromosome, chromosome number 1, consists of approximately 220 million base pairs[13] and would be 85 mm long if straightened.

In living organisms DNA does not usually exist as a single molecule, but instead as a pair of molecules that are held tightly together.[14][15] These two long strands entwine like vines, in the shape of a double helix. The nucleotide contains both a segment of the backbone of the molecule (which holds the chain together) and a nucleobase (which interacts with the other DNA strand in the helix). A nucleobase linked to a sugar is called a nucleoside and a base linked to a sugar and one or more phosphate groups is called a nucleotide. A polymer comprising multiple linked nucleotides (as in DNA) is called a polynucleotide.[16]

The backbone of the DNA strand is made from alternating phosphate and sugar residues.[17] The sugar in DNA is 2-deoxyribose, which is a pentose (five-carbon) sugar. The sugars are joined together by phosphate groups that form phosphodiester bonds between the third and fifth carbon atoms of adjacent sugar rings. These asymmetric bonds mean a strand of DNA has a direction. In a double helix, the direction of the nucleotides in one strand is opposite to their direction in the other strand: the strands are antiparallel. The asymmetric ends of DNA strands are said to have a directionality of five prime (5′) and three prime (3′), with the 5′ end having a terminal phosphate group and the 3′ end a terminal hydroxyl group. One major difference between DNA and RNA is the sugar, with the 2-deoxyribose in DNA being replaced by the alternative pentose sugar ribose in RNA.[15]

The DNA double helix is stabilized primarily by two forces: hydrogen bonds between nucleotides and base-stacking interactions among aromatic nucleobases.[19] In the aqueous environment of the cell, the conjugated bonds of nucleotide bases align perpendicular to the axis of the DNA molecule, minimizing their interaction with the solvation shell. The four bases found in DNA are adenine (A), cytosine (C), guanine (G) and thymine (T). These four bases are attached to the sugar-phosphate group to form the complete nucleotide, as shown for adenosine monophosphate. Adenine pairs with thymine and guanine pairs with cytosine, a pairing represented as A-T and G-C base pairs.[20][21]

The nucleobases are classified into two types: the purines, A and G, being fused five- and six-membered heterocyclic compounds, and the pyrimidines, the six-membered rings C and T.[15] A fifth pyrimidine nucleobase, uracil (U), usually takes the place of thymine in RNA and differs from thymine by lacking a methyl group on its ring. In addition to RNA and DNA, many artificial nucleic acid analogues have been created to study the properties of nucleic acids, or for use in biotechnology.[22]

Uracil is not usually found in DNA, occurring only as a breakdown product of cytosine. However, in several bacteriophages, Bacillus subtilis bacteriophages PBS1 and PBS2 and Yersinia bacteriophage piR1-37, thymine has been replaced by uracil.[23] Another phage, Staphylococcal phage S6, has been identified with a genome where thymine has been replaced by uracil.[24]

Base J (beta-d-glucopyranosyloxymethyluracil), a modified form of uracil, is also found in several organisms: the flagellates Diplonema and Euglena, and all the kinetoplastid genera.[25] Biosynthesis of J occurs in two steps: in the first step a specific thymidine in DNA is converted into hydroxymethyldeoxyuridine; in the second HOMedU is glycosylated to form J.[26] Proteins that bind specifically to this base have been identified.[27][28][29] These proteins appear to be distant relatives of the Tet1 oncogene that is involved in the pathogenesis of acute myeloid leukemia.[30] J appears to act as a termination signal for RNA polymerase II.[31][32]

Twin helical strands form the DNA backbone. Another double helix may be found tracing the spaces, or grooves, between the strands. These voids are adjacent to the base pairs and may provide a binding site. As the strands are not symmetrically located with respect to each other, the grooves are unequally sized. One groove, the major groove, is 22 Å wide and the other, the minor groove, is 12 Å wide.[33] The width of the major groove means that the edges of the bases are more accessible in the major groove than in the minor groove. As a result, proteins such as transcription factors that can bind to specific sequences in double-stranded DNA usually make contact with the sides of the bases exposed in the major groove.[34] This situation varies in unusual conformations of DNA within the cell (see below), but the major and minor grooves are always named to reflect the differences in size that would be seen if the DNA is twisted back into the ordinary B form.

In a DNA double helix, each type of nucleobase on one strand bonds with just one type of nucleobase on the other strand. This is called complementary base pairing. Here, purines form hydrogen bonds to pyrimidines, with adenine bonding only to thymine in two hydrogen bonds, and cytosine bonding only to guanine in three hydrogen bonds. This arrangement of two nucleotides binding together across the double helix is called a base pair. As hydrogen bonds are not covalent, they can be broken and rejoined relatively easily. The two strands of DNA in a double helix can thus be pulled apart like a zipper, either by a mechanical force or high temperature.[35] As a result of this base pair complementarity, all the information in the double-stranded sequence of a DNA helix is duplicated on each strand, which is vital in DNA replication. This reversible and specific interaction between complementary base pairs is critical for all the functions of DNA in living organisms.[9]
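
The complementary base-pairing rules described above can be illustrated with a short script. This is a minimal sketch (the helper name `reverse_complement` is ours, not a term from the article): because the strands are antiparallel, the partner strand is both complemented and reversed.

```python
# Watson-Crick pairing: A<->T (two hydrogen bonds), G<->C (three).
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(strand: str) -> str:
    """Return the antiparallel partner strand, read 5' to 3'."""
    # Complement each base, then reverse because the two strands run
    # in opposite directions.
    return "".join(PAIR[base] for base in reversed(strand))

print(reverse_complement("ATGC"))  # -> GCAT
```

Applying the function twice returns the original sequence, reflecting the point made above that each strand carries the full information of the duplex.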

The two types of base pairs form different numbers of hydrogen bonds, AT forming two hydrogen bonds, and GC forming three hydrogen bonds (see figures, right). DNA with high GC-content is more stable than DNA with low GC-content.

As noted above, most DNA molecules are actually two polymer strands, bound together in a helical fashion by noncovalent bonds; this double-stranded structure (dsDNA) is maintained largely by the intrastrand base stacking interactions, which are strongest for G,C stacks. The two strands can come apart, a process known as melting, to form two single-stranded DNA (ssDNA) molecules. Melting occurs at high temperature, low salt and high pH (low pH also melts DNA, but since DNA is unstable due to acid depurination, low pH is rarely used).

The stability of the dsDNA form depends not only on the GC-content (% G,C basepairs) but also on sequence (since stacking is sequence specific) and also length (longer molecules are more stable). The stability can be measured in various ways; a common way is the "melting temperature", which is the temperature at which 50% of the ds molecules are converted to ss molecules; melting temperature is dependent on ionic strength and the concentration of DNA. As a result, it is both the percentage of GC base pairs and the overall length of a DNA double helix that determines the strength of the association between the two strands of DNA. Long DNA helices with a high GC-content have stronger-interacting strands, while short helices with high AT content have weaker-interacting strands.[36] In biology, parts of the DNA double helix that need to separate easily, such as the TATAAT Pribnow box in some promoters, tend to have a high AT content, making the strands easier to pull apart.[37]
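
The GC-content effect can be made concrete with a rough calculation. The sketch below uses the Wallace rule, a crude rule of thumb that assigns 2 °C per A-T pair and 4 °C per G-C pair; it applies only to very short oligonucleotides and, as noted above, ignores the sequence, length, and ionic-strength dependence of real melting temperatures.

```python
def gc_content(seq: str) -> float:
    """Fraction of G and C bases in a sequence."""
    return (seq.count("G") + seq.count("C")) / len(seq)

def wallace_tm(seq: str) -> int:
    """Wallace-rule melting temperature (deg C) for short oligos (< ~14 nt).

    A-T pairs (two hydrogen bonds) contribute less stability than
    G-C pairs (three hydrogen bonds), hence the 2 vs 4 weighting.
    """
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc

seq = "ATGCGC"
print(round(gc_content(seq), 2))  # 0.67
print(wallace_tm(seq))            # 2*2 + 4*4 = 20
```

Comparing an all-GC and an all-AT oligo of the same length shows the stability ordering the text describes: the GC-rich duplex melts at a higher temperature.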

In the laboratory, the strength of this interaction can be measured by finding the temperature necessary to break the hydrogen bonds, their melting temperature (also called Tm value). When all the base pairs in a DNA double helix melt, the strands separate and exist in solution as two entirely independent molecules. These single-stranded DNA molecules (ssDNA) have no single common shape, but some conformations are more stable than others.[38]

A DNA sequence is called "sense" if its sequence is the same as that of a messenger RNA copy that is translated into protein.[39] The sequence on the opposite strand is called the "antisense" sequence. Both sense and antisense sequences can exist on different parts of the same strand of DNA (i.e. both strands can contain both sense and antisense sequences). In both prokaryotes and eukaryotes, antisense RNA sequences are produced, but the functions of these RNAs are not entirely clear.[40] One proposal is that antisense RNAs are involved in regulating gene expression through RNA-RNA base pairing.[41]

A few DNA sequences in prokaryotes and eukaryotes, and more in plasmids and viruses, blur the distinction between sense and antisense strands by having overlapping genes.[42] In these cases, some DNA sequences do double duty, encoding one protein when read along one strand, and a second protein when read in the opposite direction along the other strand. In bacteria, this overlap may be involved in the regulation of gene transcription,[43] while in viruses, overlapping genes increase the amount of information that can be encoded within the small viral genome.[44]

DNA can be twisted like a rope in a process called DNA supercoiling. With DNA in its "relaxed" state, a strand usually circles the axis of the double helix once every 10.4 base pairs, but if the DNA is twisted the strands become more tightly or more loosely wound.[45] If the DNA is twisted in the direction of the helix, this is positive supercoiling, and the bases are held more tightly together. If they are twisted in the opposite direction, this is negative supercoiling, and the bases come apart more easily. In nature, most DNA has slight negative supercoiling that is introduced by enzymes called topoisomerases.[46] These enzymes are also needed to relieve the twisting stresses introduced into DNA strands during processes such as transcription and DNA replication.[47]
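
Supercoiling is commonly quantified by the superhelical density σ = (Lk − Lk0)/Lk0, where Lk is the linking number and Lk0 = N/10.4 is its relaxed value for an N-base-pair B-DNA circle (using the one-turn-per-10.4-bp figure above). The following sketch computes σ; the example plasmid size and the σ ≈ −0.06 figure for natural negative supercoiling are illustrative assumptions, not values from the article.

```python
def superhelical_density(n_base_pairs: int, linking_number: float) -> float:
    """sigma = (Lk - Lk0) / Lk0, with Lk0 = N / 10.4 for relaxed B-DNA."""
    lk0 = n_base_pairs / 10.4
    return (linking_number - lk0) / lk0

# A hypothetical 5,200 bp plasmid has Lk0 = 500 when relaxed; an
# underwound molecule with Lk = 470 is negatively supercoiled.
print(round(superhelical_density(5200, 470), 3))  # -0.06
```

Negative σ (underwinding) matches the text's point that negatively supercoiled bases come apart more easily, which is why topoisomerases maintaining slight negative supercoiling aid transcription and replication.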

DNA exists in many possible conformations that include A-DNA, B-DNA, and Z-DNA forms, although, only B-DNA and Z-DNA have been directly observed in functional organisms.[17] The conformation that DNA adopts depends on the hydration level, DNA sequence, the amount and direction of supercoiling, chemical modifications of the bases, the type and concentration of metal ions, and the presence of polyamines in solution.[48]

The first published reports of A-DNA X-ray diffraction patterns (and also B-DNA) used analyses based on Patterson transforms that provided only a limited amount of structural information for oriented fibers of DNA.[49][50] An alternative analysis was then proposed by Wilkins et al., in 1953, for the in vivo B-DNA X-ray diffraction-scattering patterns of highly hydrated DNA fibers in terms of squares of Bessel functions.[51] In the same journal, James Watson and Francis Crick presented their molecular modeling analysis of the DNA X-ray diffraction patterns to suggest that the structure was a double-helix.[11]

Although the B-DNA form is most common under the conditions found in cells,[52] it is not a well-defined conformation but a family of related DNA conformations[53] that occur at the high hydration levels present in living cells. Their corresponding X-ray diffraction and scattering patterns are characteristic of molecular paracrystals with a significant degree of disorder.[54][55]

Compared to B-DNA, the A-DNA form is a wider right-handed spiral, with a shallow, wide minor groove and a narrower, deeper major groove. The A form occurs under non-physiological conditions in partly dehydrated samples of DNA, while in the cell it may be produced in hybrid pairings of DNA and RNA strands, and in enzyme-DNA complexes.[56][57] Segments of DNA where the bases have been chemically modified by methylation may undergo a larger change in conformation and adopt the Z form. Here, the strands turn about the helical axis in a left-handed spiral, the opposite of the more common B form.[58] These unusual structures can be recognized by specific Z-DNA binding proteins and may be involved in the regulation of transcription.[59]

For many years exobiologists have proposed the existence of a shadow biosphere, a postulated microbial biosphere of Earth that uses radically different biochemical and molecular processes than currently known life. One of the proposals was the existence of lifeforms that use arsenic instead of phosphorus in DNA. This possibility was reported in 2010 for the bacterium GFAJ-1,[60][61] though the research was disputed,[61][62] and evidence suggests the bacterium actively prevents the incorporation of arsenic into the DNA backbone and other biomolecules.[63]

At the ends of the linear chromosomes are specialized regions of DNA called telomeres. The main function of these regions is to allow the cell to replicate chromosome ends using the enzyme telomerase, as the enzymes that normally replicate DNA cannot copy the extreme 3′ ends of chromosomes.[64] These specialized chromosome caps also help protect the DNA ends, and stop the DNA repair systems in the cell from treating them as damage to be corrected.[65] In human cells, telomeres are usually lengths of single-stranded DNA containing several thousand repeats of a simple TTAGGG sequence.[66]

These guanine-rich sequences may stabilize chromosome ends by forming structures of stacked sets of four-base units, rather than the usual base pairs found in other DNA molecules. Here, four guanine bases form a flat plate and these flat four-base units then stack on top of each other, to form a stable G-quadruplex structure.[68] These structures are stabilized by hydrogen bonding between the edges of the bases and chelation of a metal ion in the centre of each four-base unit.[69] Other structures can also be formed, with the central set of four bases coming from either a single strand folded around the bases, or several different parallel strands, each contributing one base to the central structure.

In addition to these stacked structures, telomeres also form large loop structures called telomere loops, or T-loops. Here, the single-stranded DNA curls around in a long circle stabilized by telomere-binding proteins.[70] At the very end of the T-loop, the single-stranded telomere DNA is held onto a region of double-stranded DNA by the telomere strand disrupting the double-helical DNA and base pairing to one of the two strands. This triple-stranded structure is called a displacement loop or D-loop.[68]

In DNA, fraying occurs when non-complementary regions exist at the end of an otherwise complementary double-strand of DNA. However, branched DNA can occur if a third strand of DNA is introduced and contains adjoining regions able to hybridize with the frayed regions of the pre-existing double-strand. Although the simplest example of branched DNA involves only three strands of DNA, complexes involving additional strands and multiple branches are also possible.[71] Branched DNA can be used in nanotechnology to construct geometric shapes, see the section on uses in technology below.

The expression of genes is influenced by how the DNA is packaged in chromosomes, in a structure called chromatin. Base modifications can be involved in packaging, with regions that have low or no gene expression usually containing high levels of methylation of cytosine bases. DNA packaging and its influence on gene expression can also occur by covalent modifications of the histone protein core around which DNA is wrapped in the chromatin structure or else by remodeling carried out by chromatin remodeling complexes (see Chromatin remodeling). There is, further, crosstalk between DNA methylation and histone modification, so they can coordinately affect chromatin and gene expression.[72]

For example, cytosine methylation produces 5-methylcytosine, which is important for X-inactivation of chromosomes.[73] The average level of methylation varies between organisms: the worm Caenorhabditis elegans lacks cytosine methylation, while vertebrates have higher levels, with up to 1% of their DNA containing 5-methylcytosine.[74] Despite the importance of 5-methylcytosine, it can deaminate to leave a thymine base, so methylated cytosines are particularly prone to mutations.[75] Other base modifications include adenine methylation in bacteria, the presence of 5-hydroxymethylcytosine in the brain,[76] and the glycosylation of uracil to produce the "J-base" in kinetoplastids.[77][78]

DNA can be damaged by many sorts of mutagens, which change the DNA sequence. Mutagens include oxidizing agents, alkylating agents and also high-energy electromagnetic radiation such as ultraviolet light and X-rays. The type of DNA damage produced depends on the type of mutagen. For example, UV light can damage DNA by producing thymine dimers, which are cross-links between pyrimidine bases.[80] On the other hand, oxidants such as free radicals or hydrogen peroxide produce multiple forms of damage, including base modifications, particularly of guanosine, and double-strand breaks.[81] A typical human cell contains about 150,000 bases that have suffered oxidative damage.[82] Of these oxidative lesions, the most dangerous are double-strand breaks, as these are difficult to repair and can produce point mutations, insertions, deletions from the DNA sequence, and chromosomal translocations.[83] These mutations can cause cancer. Because of inherent limits in the DNA repair mechanisms, if humans lived long enough, they would all eventually develop cancer.[84][85] DNA damages that are naturally occurring, due to normal cellular processes that produce reactive oxygen species, the hydrolytic activities of cellular water, etc., also occur frequently. Although most of these damages are repaired, in any cell some DNA damage may remain despite the action of repair processes. These remaining DNA damages accumulate with age in mammalian postmitotic tissues. This accumulation appears to be an important underlying cause of aging.[86][87][88]

Many mutagens fit into the space between two adjacent base pairs; this is called intercalation. Most intercalators are aromatic and planar molecules; examples include ethidium bromide, acridines, daunomycin, and doxorubicin. For an intercalator to fit between base pairs, the bases must separate, distorting the DNA strands by unwinding of the double helix. This inhibits both transcription and DNA replication, causing toxicity and mutations.[89] As a result, DNA intercalators may be carcinogens, and in the case of thalidomide, a teratogen.[90] Others such as benzo[a]pyrene diol epoxide and aflatoxin form DNA adducts that induce errors in replication.[91] Nevertheless, due to their ability to inhibit DNA transcription and replication, other similar toxins are also used in chemotherapy to inhibit rapidly growing cancer cells.[92]

DNA usually occurs as linear chromosomes in eukaryotes, and circular chromosomes in prokaryotes. The set of chromosomes in a cell makes up its genome; the human genome has approximately 3 billion base pairs of DNA arranged into 46 chromosomes.[93] The information carried by DNA is held in the sequence of pieces of DNA called genes. Transmission of genetic information in genes is achieved via complementary base pairing. For example, in transcription, when a cell uses the information in a gene, the DNA sequence is copied into a complementary RNA sequence through the attraction between the DNA and the correct RNA nucleotides. Usually, this RNA copy is then used to make a matching protein sequence in a process called translation, which depends on the same interaction between RNA nucleotides. In alternative fashion, a cell may simply copy its genetic information in a process called DNA replication. The details of these functions are covered in other articles; here the focus is on the interactions between DNA and other molecules that mediate the function of the genome.

Genomic DNA is tightly and orderly packed in the process called DNA condensation, to fit the small available volumes of the cell. In eukaryotes, DNA is located in the cell nucleus, with small amounts in mitochondria and chloroplasts. In prokaryotes, the DNA is held within an irregularly shaped body in the cytoplasm called the nucleoid.[94] The genetic information in a genome is held within genes, and the complete set of this information in an organism is called its genotype. A gene is a unit of heredity and is a region of DNA that influences a particular characteristic in an organism. Genes contain an open reading frame that can be transcribed, and regulatory sequences such as promoters and enhancers, which control transcription of the open reading frame.

In many species, only a small fraction of the total sequence of the genome encodes protein. For example, only about 1.5% of the human genome consists of protein-coding exons, with over 50% of human DNA consisting of non-coding repetitive sequences.[95] The reasons for the presence of so much noncoding DNA in eukaryotic genomes and the extraordinary differences in genome size, or C-value, among species represent a long-standing puzzle known as the "C-value enigma".[96] However, some DNA sequences that do not code protein may still encode functional non-coding RNA molecules, which are involved in the regulation of gene expression.[97]

Some noncoding DNA sequences play structural roles in chromosomes. Telomeres and centromeres typically contain few genes, but are important for the function and stability of chromosomes.[65][99] An abundant form of noncoding DNA in humans is pseudogenes, which are copies of genes that have been disabled by mutation.[100] These sequences are usually just molecular fossils, although they can occasionally serve as raw genetic material for the creation of new genes through the process of gene duplication and divergence.[101]

A gene is a sequence of DNA that contains genetic information and can influence the phenotype of an organism. Within a gene, the sequence of bases along a DNA strand defines a messenger RNA sequence, which then defines one or more protein sequences. The relationship between the nucleotide sequences of genes and the amino-acid sequences of proteins is determined by the rules of translation, known collectively as the genetic code. The genetic code consists of three-letter 'words' called codons formed from a sequence of three nucleotides (e.g. ACT, CAG, TTT).

In transcription, the codons of a gene are copied into messenger RNA by RNA polymerase. This RNA copy is then decoded by a ribosome that reads the RNA sequence by base-pairing the messenger RNA to transfer RNA, which carries amino acids. Since there are 4 bases in 3-letter combinations, there are 64 possible codons (4³ combinations). These encode the twenty standard amino acids, giving most amino acids more than one possible codon. There are also three 'stop' or 'nonsense' codons signifying the end of the coding region; these are the TAA, TGA, and TAG codons.
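
The 64-codon genetic code can be built compactly in code. Below is a minimal sketch of translation using the standard code, written with DNA codons so the TAA, TGA, and TAG stop codons named above appear directly; the helper names are ours.

```python
# Standard genetic code, DNA alphabet. Enumerating codons in TCAG-major
# order lines them up with this one-letter amino-acid string ('*' = stop).
BASES = "TCAG"
AMINO_ACIDS = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = {
    a + b + c: AMINO_ACIDS[16 * i + 4 * j + k]
    for i, a in enumerate(BASES)
    for j, b in enumerate(BASES)
    for k, c in enumerate(BASES)
}

def translate(dna: str) -> str:
    """Translate a coding DNA sequence, stopping at the first stop codon."""
    protein = []
    for pos in range(0, len(dna) - 2, 3):      # step through codons
        aa = CODON_TABLE[dna[pos:pos + 3]]
        if aa == "*":                          # TAA, TGA or TAG
            break
        protein.append(aa)
    return "".join(protein)

assert len(CODON_TABLE) == 64                  # 4^3 possible codons
print(translate("ATGTTTCAGTAA"))               # Met-Phe-Gln -> MFQ
```

The table confirms the redundancy described above: 64 codons map onto 20 amino acids plus 3 stops, so most amino acids have several codons.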

Cell division is essential for an organism to grow, but, when a cell divides, it must replicate the DNA in its genome so that the two daughter cells have the same genetic information as their parent. The double-stranded structure of DNA provides a simple mechanism for DNA replication. Here, the two strands are separated and then each strand's complementary DNA sequence is recreated by an enzyme called DNA polymerase. This enzyme makes the complementary strand by finding the correct base through complementary base pairing, and bonding it onto the original strand. As DNA polymerases can only extend a DNA strand in a 5 to 3 direction, different mechanisms are used to copy the antiparallel strands of the double helix.[102] In this way, the base on the old strand dictates which base appears on the new strand, and the cell ends up with a perfect copy of its DNA.

Naked extracellular DNA (eDNA), most of it released by cell death, is nearly ubiquitous in the environment. Its concentration in soil may be as high as 2 μg/L, and its concentration in natural aquatic environments may be as high as 88 μg/L.[103] Various possible functions have been proposed for eDNA: it may be involved in horizontal gene transfer;[104] it may provide nutrients;[105] and it may act as a buffer to recruit or titrate ions or antibiotics.[106] Extracellular DNA acts as a functional extracellular matrix component in the biofilms of several bacterial species. It may act as a recognition factor to regulate the attachment and dispersal of specific cell types in the biofilm;[107] it may contribute to biofilm formation;[108] and it may contribute to the biofilm's physical strength and resistance to biological stress.[109]

All the functions of DNA depend on interactions with proteins. These protein interactions can be non-specific, or the protein can bind specifically to a single DNA sequence. Enzymes can also bind to DNA and of these, the polymerases that copy the DNA base sequence in transcription and DNA replication are particularly important.

Structural proteins that bind DNA are well-understood examples of non-specific DNA-protein interactions. Within chromosomes, DNA is held in complexes with structural proteins. These proteins organize the DNA into a compact structure called chromatin. In eukaryotes this structure involves DNA binding to a complex of small basic proteins called histones, while in prokaryotes multiple types of proteins are involved.[110][111] The histones form a disk-shaped complex called a nucleosome, which contains two complete turns of double-stranded DNA wrapped around its surface. These non-specific interactions are formed through basic residues in the histones, making ionic bonds to the acidic sugar-phosphate backbone of the DNA, and are thus largely independent of the base sequence.[112] Chemical modifications of these basic amino acid residues include methylation, phosphorylation and acetylation.[113] These chemical changes alter the strength of the interaction between the DNA and the histones, making the DNA more or less accessible to transcription factors and changing the rate of transcription.[114] Other non-specific DNA-binding proteins in chromatin include the high-mobility group proteins, which bind to bent or distorted DNA.[115] These proteins are important in bending arrays of nucleosomes and arranging them into the larger structures that make up chromosomes.[116]

A distinct group of DNA-binding proteins are those that specifically bind single-stranded DNA. In humans, replication protein A is the best-understood member of this family and is used in processes where the double helix is separated, including DNA replication, recombination and DNA repair.[117] These binding proteins seem to stabilize single-stranded DNA and protect it from forming stem-loops or being degraded by nucleases.

In contrast, other proteins have evolved to bind to particular DNA sequences. The most intensively studied of these are the various transcription factors, which are proteins that regulate transcription. Each transcription factor binds to one particular set of DNA sequences and activates or inhibits the transcription of genes that have these sequences close to their promoters. The transcription factors do this in two ways. Firstly, they can bind the RNA polymerase responsible for transcription, either directly or through other mediator proteins; this locates the polymerase at the promoter and allows it to begin transcription.[119] Alternatively, transcription factors can bind enzymes that modify the histones at the promoter. This changes the accessibility of the DNA template to the polymerase.[120]

As these DNA targets can occur throughout an organism's genome, changes in the activity of one type of transcription factor can affect thousands of genes.[121] Consequently, these proteins are often the targets of the signal transduction processes that control responses to environmental changes or cellular differentiation and development. The specificity of these transcription factors' interactions with DNA comes from the proteins making multiple contacts to the edges of the DNA bases, allowing them to "read" the DNA sequence. Most of these base-interactions are made in the major groove, where the bases are most accessible.[34]

Nucleases are enzymes that cut DNA strands by catalyzing the hydrolysis of the phosphodiester bonds. Nucleases that hydrolyse nucleotides from the ends of DNA strands are called exonucleases, while endonucleases cut within strands. The most frequently used nucleases in molecular biology are the restriction endonucleases, which cut DNA at specific sequences. For instance, the EcoRV enzyme recognizes the 6-base sequence 5′-GATATC-3′ and makes a blunt cut in the middle of that sequence. In nature, these enzymes protect bacteria against phage infection by digesting the phage DNA when it enters the bacterial cell, acting as part of the restriction modification system.[123] In technology, these sequence-specific nucleases are used in molecular cloning and DNA fingerprinting.
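As a toy illustration of the idea (not a real bioinformatics library), the following sketch locates EcoRV's recognition site in a sequence string and cuts between GAT and ATC, where the enzyme leaves blunt ends:

```python
# Hypothetical sketch: cut a DNA string at every EcoRV site.
SITE = "GATATC"       # 5'-GATATC-3' recognition sequence
CUT_OFFSET = 3        # EcoRV cuts between GAT and ATC (blunt ends)

def digest(dna: str) -> list:
    """Return the fragments produced by cutting at every site."""
    fragments, start = [], 0
    i = dna.find(SITE)
    while i != -1:
        fragments.append(dna[start:i + CUT_OFFSET])
        start = i + CUT_OFFSET
        i = dna.find(SITE, i + 1)
    fragments.append(dna[start:])
    return fragments

print(digest("AAGATATCTTGGGATATCCC"))
# ['AAGAT', 'ATCTTGGGAT', 'ATCCC']
```

A real digest acts on both strands of the double helix at once; this sketch tracks only one strand for brevity.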

Enzymes called DNA ligases can rejoin cut or broken DNA strands.[124] Ligases are particularly important in lagging strand DNA replication, as they join together the short segments of DNA produced at the replication fork into a complete copy of the DNA template. They are also used in DNA repair and genetic recombination.[124]

Topoisomerases are enzymes with both nuclease and ligase activity. These proteins change the amount of supercoiling in DNA. Some of these enzymes work by cutting the DNA helix and allowing one section to rotate, thereby reducing its level of supercoiling; the enzyme then seals the DNA break.[46] Other types of these enzymes are capable of cutting one DNA helix and then passing a second strand of DNA through this break, before rejoining the helix.[125] Topoisomerases are required for many processes involving DNA, such as DNA replication and transcription.[47]

Helicases are proteins that are a type of molecular motor. They use the chemical energy in nucleoside triphosphates, predominantly adenosine triphosphate (ATP), to break hydrogen bonds between bases and unwind the DNA double helix into single strands.[126] These enzymes are essential for most processes where enzymes need to access the DNA bases.

Polymerases are enzymes that synthesize polynucleotide chains from nucleoside triphosphates. The sequence of their products is copied from existing polynucleotide chains, which are called templates. These enzymes function by repeatedly adding a nucleotide to the 3′ hydroxyl group at the end of the growing polynucleotide chain. As a consequence, all polymerases work in a 5′ to 3′ direction.[127] In the active site of these enzymes, the incoming nucleoside triphosphate base-pairs to the template: this allows polymerases to accurately synthesize the complementary strand of their template. Polymerases are classified according to the type of template that they use.
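The template-directed, 5′-to-3′ synthesis described above can be sketched in a few lines (an illustrative model only, not how any real polymerase or software works):

```python
# Watson-Crick pairing rules
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def synthesize(template: str) -> str:
    """Read a template strand (given 5'->3') from its 3' end and
    emit the complementary strand 5'->3', one base at a time."""
    return "".join(PAIR[base] for base in reversed(template))

print(synthesize("ATGC"))  # GCAT: pairs A-T, T-A, G-C, C-G
```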

In DNA replication, DNA-dependent DNA polymerases make copies of DNA polynucleotide chains. To preserve biological information, it is essential that the sequence of bases in each copy is precisely complementary to the sequence of bases in the template strand. Many DNA polymerases have a proofreading activity. Here, the polymerase recognizes the occasional mistakes in the synthesis reaction by the lack of base pairing between the mismatched nucleotides. If a mismatch is detected, a 3′ to 5′ exonuclease activity is activated and the incorrect base is removed.[128] In most organisms, DNA polymerases function in a large complex called the replisome that contains multiple accessory subunits, such as the DNA clamp or helicases.[129]
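Proofreading can be modeled the same way: compare each newly added base against the template and, on a mismatch, excise back from the 3′ end. This is a deliberately simplified sketch of 3′-to-5′ exonuclease activity, trimming from the first mismatch rather than one base at a time:

```python
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def proofread(template: str, new_strand: str) -> str:
    """Trim the growing strand back to the last correctly paired
    base; template and new_strand are both given 5'->3'."""
    for i, (t, n) in enumerate(zip(reversed(template), new_strand)):
        if PAIR[t] != n:           # mismatch detected
            return new_strand[:i]  # excise the incorrect bases
    return new_strand

print(proofread("ATGC", "GCAT"))  # GCAT (no mismatch)
print(proofread("ATGC", "GCAA"))  # GCA  (terminal A excised)
```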

RNA-dependent DNA polymerases are a specialized class of polymerases that copy the sequence of an RNA strand into DNA. They include reverse transcriptase, which is a viral enzyme involved in the infection of cells by retroviruses, and telomerase, which is required for the replication of telomeres.[64][130] Telomerase is an unusual polymerase because it contains its own RNA template as part of its structure.[65]

Transcription is carried out by a DNA-dependent RNA polymerase that copies the sequence of a DNA strand into RNA. To begin transcribing a gene, the RNA polymerase binds to a sequence of DNA called a promoter and separates the DNA strands. It then copies the gene sequence into a messenger RNA transcript until it reaches a region of DNA called the terminator, where it halts and detaches from the DNA. As with human DNA-dependent DNA polymerases, RNA polymerase II, the enzyme that transcribes most of the genes in the human genome, operates as part of a large protein complex with multiple regulatory and accessory subunits.[131]

A DNA helix usually does not interact with other segments of DNA, and in human cells the different chromosomes even occupy separate areas in the nucleus called "chromosome territories".[133] This physical separation of different chromosomes is important for the ability of DNA to function as a stable repository for information, as one of the few times chromosomes interact is during chromosomal crossover in sexual reproduction, when genetic recombination takes place. Chromosomal crossover is when two DNA helices break, swap a section and then rejoin.

Recombination allows chromosomes to exchange genetic information and produces new combinations of genes, which increases the efficiency of natural selection and can be important in the rapid evolution of new proteins.[134] Genetic recombination can also be involved in DNA repair, particularly in the cell's response to double-strand breaks.[135]

The most common form of chromosomal crossover is homologous recombination, where the two chromosomes involved share very similar sequences. Non-homologous recombination can be damaging to cells, as it can produce chromosomal translocations and genetic abnormalities. The recombination reaction is catalyzed by enzymes known as recombinases, such as RAD51.[136] The first step in recombination is a double-stranded break caused by either an endonuclease or damage to the DNA.[137] A series of steps catalyzed in part by the recombinase then leads to joining of the two helices by at least one Holliday junction, in which a segment of a single strand in each helix is annealed to the complementary strand in the other helix. The Holliday junction is a tetrahedral junction structure that can be moved along the pair of chromosomes, swapping one strand for another. The recombination reaction is then halted by cleavage of the junction and re-ligation of the released DNA.[138]

DNA contains the genetic information that allows all modern living things to function, grow and reproduce. However, it is unclear how long in the 4-billion-year history of life DNA has performed this function, as it has been proposed that the earliest forms of life may have used RNA as their genetic material.[139][140] RNA may have acted as the central part of early cell metabolism as it can both transmit genetic information and carry out catalysis as part of ribozymes.[141] This ancient RNA world, where nucleic acid would have been used for both catalysis and genetics, may have influenced the evolution of the current genetic code based on four nucleotide bases. The number of different bases in such an organism is a trade-off: a small number of bases increases replication accuracy, while a large number increases the catalytic efficiency of ribozymes.[142] However, there is no direct evidence of ancient genetic systems, as recovery of DNA from most fossils is impossible because DNA survives in the environment for less than one million years, and slowly degrades into short fragments in solution.[143] Claims for older DNA have been made, most notably a report of the isolation of a viable bacterium from a salt crystal 250 million years old,[144] but these claims are controversial.[145][146]

Building blocks of DNA (adenine, guanine and related organic molecules) may have been formed extraterrestrially in outer space.[147][148][149] Complex DNA and RNA organic compounds of life, including uracil, cytosine, and thymine, have also been formed in the laboratory under conditions mimicking those found in outer space, using starting chemicals, such as pyrimidine, found in meteorites. Pyrimidine, like polycyclic aromatic hydrocarbons (PAHs), the most carbon-rich chemical found in the universe, may have been formed in red giants or in interstellar cosmic dust and gas clouds.[150]

Methods have been developed to purify DNA from organisms, such as phenol-chloroform extraction, and to manipulate it in the laboratory, such as restriction digests and the polymerase chain reaction. Modern biology and biochemistry make intensive use of these techniques in recombinant DNA technology. Recombinant DNA is a man-made DNA sequence that has been assembled from other DNA sequences. Such sequences can be transformed into organisms in the form of plasmids or, in the appropriate format, by using a viral vector.[151] The genetically modified organisms produced can be used to produce products such as recombinant proteins, used in medical research,[152] or be grown in agriculture.[153][154]

Forensic scientists can use DNA in blood, semen, skin, saliva or hair found at a crime scene to identify the DNA of a matching individual, such as a perpetrator. This process is formally termed DNA profiling, but may also be called "genetic fingerprinting". In DNA profiling, the lengths of variable sections of repetitive DNA, such as short tandem repeats and minisatellites, are compared between people. This method is usually an extremely reliable technique for identifying a match.[155] However, identification can be complicated if the scene is contaminated with DNA from several people.[156] DNA profiling was developed in 1984 by British geneticist Sir Alec Jeffreys,[157] and first used in forensic science to convict Colin Pitchfork in the 1988 Enderby murders case.[158]
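The core comparison is conceptually simple: count the repeats of a motif at each locus and compare the counts between samples. The locus and motif below are chosen for illustration only; real profiling uses standardized loci and measures fragment lengths by capillary electrophoresis rather than string matching:

```python
def repeat_count(dna: str, motif: str) -> int:
    """Longest uninterrupted run of `motif` in the sequence."""
    best = run = i = 0
    while i + len(motif) <= len(dna):
        if dna[i:i + len(motif)] == motif:
            run += 1
            best = max(best, run)
            i += len(motif)   # continue the run
        else:
            run = 0
            i += 1            # slide past the mismatch
    return best

# Two samples match when repeat counts agree at every locus tested.
sample = "AAGATAGATAGATACC"
print(repeat_count(sample, "GATA"))  # 3 consecutive GATA repeats
```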

The development of forensic science, and the ability to obtain genetic matching on minute samples of blood, skin, saliva, or hair, has led to the re-examination of many cases. Evidence can now be uncovered that was scientifically impossible to obtain at the time of the original examination. Combined with the removal of the double jeopardy law in some places, this can allow cases to be reopened where prior trials have failed to produce sufficient evidence to convince a jury. People charged with serious crimes may be required to provide a sample of DNA for matching purposes. The most obvious defence to DNA matches obtained forensically is to claim that cross-contamination of evidence has occurred. This has resulted in meticulously strict handling procedures for new cases of serious crime. DNA profiling is also used successfully to positively identify victims of mass casualty incidents,[159] bodies or body parts in serious accidents, and individual victims in mass war graves, via matching to family members.

DNA profiling is also used in DNA paternity testing to determine if someone is the biological parent or grandparent of a child; the probability of parentage is typically 99.99% when the alleged parent is biologically related to the child. Conventional DNA sequencing methods are applied after birth, but newer methods can test paternity while the mother is still pregnant.[160]

Deoxyribozymes, also called DNAzymes or catalytic DNA, were first discovered in 1994.[161] They are mostly single-stranded DNA sequences isolated from a large pool of random DNA sequences through a combinatorial approach called in vitro selection or systematic evolution of ligands by exponential enrichment (SELEX). DNAzymes catalyze a variety of chemical reactions, including RNA/DNA cleavage, RNA/DNA ligation, amino acid phosphorylation and dephosphorylation, and carbon-carbon bond formation. DNAzymes can enhance the catalytic rate of chemical reactions up to 100,000,000,000-fold over the uncatalyzed reaction.[162] The most extensively studied class of DNAzymes are the RNA-cleaving types, which have been used to detect different metal ions and to design therapeutic agents. Several metal-specific DNAzymes have been reported, including the GR-5 DNAzyme (lead-specific),[161] the CA1-3 DNAzymes (copper-specific),[163] the 39E DNAzyme (uranyl-specific) and the NaA43 DNAzyme (sodium-specific).[164] The NaA43 DNAzyme, which is reported to be more than 10,000-fold selective for sodium over other metal ions, was used to make a real-time sodium sensor in living cells.

Bioinformatics involves the development of techniques to store, data mine, search and manipulate biological data, including DNA nucleic acid sequence data. These have led to widely applied advances in computer science, especially string searching algorithms, machine learning and database theory.[165] String searching or matching algorithms, which find an occurrence of a sequence of letters inside a larger sequence of letters, were developed to search for specific sequences of nucleotides.[166] The DNA sequence may be aligned with other DNA sequences to identify homologous sequences and locate the specific mutations that make them distinct. These techniques, especially multiple sequence alignment, are used in studying phylogenetic relationships and protein function.[167] Data sets representing entire genomes' worth of DNA sequences, such as those produced by the Human Genome Project, are difficult to use without the annotations that identify the locations of genes and regulatory elements on each chromosome. Regions of DNA sequence that have the characteristic patterns associated with protein- or RNA-coding genes can be identified by gene finding algorithms, which allow researchers to predict the presence of particular gene products and their possible functions in an organism even before they have been isolated experimentally.[168] Entire genomes may also be compared, which can shed light on the evolutionary history of particular organisms and permit the examination of complex evolutionary events.
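As a minimal example of the string-matching task described above, a naive scan finds every occurrence of a motif; production tools use faster algorithms (Boyer-Moore, suffix arrays and the like) at genome scale:

```python
def find_motif(sequence: str, motif: str) -> list:
    """Return the 0-based start position of every occurrence,
    including overlapping ones."""
    hits = []
    for i in range(len(sequence) - len(motif) + 1):
        if sequence[i:i + len(motif)] == motif:
            hits.append(i)
    return hits

print(find_motif("ATGATCGATG", "ATG"))  # [0, 7]
```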

DNA nanotechnology uses the unique molecular recognition properties of DNA and other nucleic acids to create self-assembling branched DNA complexes with useful properties.[169] DNA is thus used as a structural material rather than as a carrier of biological information. This has led to the creation of two-dimensional periodic lattices (both tile-based and using the DNA origami method) and three-dimensional structures in the shapes of polyhedra.[170] Nanomechanical devices and algorithmic self-assembly have also been demonstrated,[171] and these DNA structures have been used to template the arrangement of other molecules such as gold nanoparticles and streptavidin proteins.[172]

Because DNA collects mutations over time, which are then inherited, it contains historical information, and, by comparing DNA sequences, geneticists can infer the evolutionary history of organisms, their phylogeny.[173] This field of phylogenetics is a powerful tool in evolutionary biology. If DNA sequences within a species are compared, population geneticists can learn the history of particular populations. This can be used in studies ranging from ecological genetics to anthropology. For example, DNA evidence is being used to try to identify the Ten Lost Tribes of Israel.[174][175]

In a paper published in Nature in January 2013, scientists from the European Bioinformatics Institute and Agilent Technologies proposed a mechanism to use DNA's ability to code information as a means of digital data storage. The group was able to encode 739 kilobytes of data into DNA code, synthesize the actual DNA, then sequence the DNA and decode the information back to its original form, with a reported 100% accuracy. The encoded information consisted of text files and audio files. A prior experiment was published in August 2012. It was conducted by researchers at Harvard University, where the text of a 54,000-word book was encoded in DNA.[176][177]
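The general encode/decode idea can be sketched with a naive two-bits-per-base mapping. Note this is not the scheme from the Nature paper, which used a rotating base-3 code with constraints that avoid error-prone homopolymer runs:

```python
TO_BASE = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
FROM_BASE = {v: k for k, v in TO_BASE.items()}

def encode(data: bytes) -> str:
    """Map each byte to four bases, two bits per base."""
    return "".join(TO_BASE[(byte >> shift) & 0b11]
                   for byte in data for shift in (6, 4, 2, 0))

def decode(dna: str) -> bytes:
    """Invert the mapping: four bases back to one byte."""
    out = bytearray()
    for i in range(0, len(dna), 4):
        byte = 0
        for base in dna[i:i + 4]:
            byte = (byte << 2) | FROM_BASE[base]
        out.append(byte)
    return bytes(out)

print(encode(b"DNA"))          # CACACATGCAAC
print(decode(encode(b"DNA")))  # b'DNA'
```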

DNA was first isolated by the Swiss physician Friedrich Miescher who, in 1869, discovered a microscopic substance in the pus of discarded surgical bandages. As it resided in the nuclei of cells, he called it "nuclein".[178][179] In 1878, Albrecht Kossel isolated the non-protein component of "nuclein", nucleic acid, and later isolated its five primary nucleobases.[180][181] In 1919, Phoebus Levene identified the base, sugar and phosphate nucleotide unit.[182] Levene suggested that DNA consisted of a string of nucleotide units linked together through the phosphate groups. Levene thought the chain was short and the bases repeated in a fixed order. In 1937, William Astbury produced the first X-ray diffraction patterns that showed that DNA had a regular structure.[183]

In 1927, Nikolai Koltsov proposed that inherited traits would be inherited via a "giant hereditary molecule" made up of "two mirror strands that would replicate in a semi-conservative fashion using each strand as a template".[184][185] In 1928, Frederick Griffith in his experiment discovered that traits of the "smooth" form of Pneumococcus could be transferred to the "rough" form of the same bacteria by mixing killed "smooth" bacteria with the live "rough" form.[186][187] This system provided the first clear suggestion that DNA carries genetic information (the Avery-MacLeod-McCarty experiment), when Oswald Avery, along with coworkers Colin MacLeod and Maclyn McCarty, identified DNA as the transforming principle in 1943.[188] DNA's role in heredity was confirmed in 1952, when Alfred Hershey and Martha Chase in the Hershey-Chase experiment showed that DNA is the genetic material of the T2 phage.[189]

In 1953, James Watson and Francis Crick suggested what is now accepted as the first correct double-helix model of DNA structure in the journal Nature.[11] Their double-helix, molecular model of DNA was then based on one X-ray diffraction image (labeled as "Photo 51")[190] taken by Rosalind Franklin and Raymond Gosling in May 1952, and the information that the DNA bases are paired.

Experimental evidence supporting the Watson and Crick model was published in a series of five articles in the same issue of Nature.[191] Of these, Franklin and Gosling's paper was the first publication of their own X-ray diffraction data and original analysis method that partly supported the Watson and Crick model;[50][192] this issue also contained an article on DNA structure by Maurice Wilkins and two of his colleagues, whose analysis and in vivo B-DNA X-ray patterns also supported the presence in vivo of the double-helical DNA configurations as proposed by Crick and Watson for their double-helix molecular model of DNA in the prior two pages of Nature.[51] In 1962, after Franklin's death, Watson, Crick, and Wilkins jointly received the Nobel Prize in Physiology or Medicine.[193] Nobel Prizes are awarded only to living recipients. A debate continues about who should receive credit for the discovery.[194]

In an influential presentation in 1957, Crick laid out the central dogma of molecular biology, which foretold the relationship between DNA, RNA, and proteins, and articulated the "adaptor hypothesis".[195] Final confirmation of the replication mechanism that was implied by the double-helical structure followed in 1958 through the Meselson-Stahl experiment.[196] Further work by Crick and coworkers showed that the genetic code was based on non-overlapping triplets of bases, called codons, allowing Har Gobind Khorana, Robert W. Holley and Marshall Warren Nirenberg to decipher the genetic code.[197] These findings represent the birth of molecular biology.


Gene therapy – Wikipedia

October 28th, 2016 12:42 am

Gene therapy is the therapeutic delivery of nucleic acid polymers into a patient's cells as a drug to treat disease.[1] The first attempt at modifying human DNA was performed in 1980 by Martin Cline, but the first successful and approved nuclear gene transfer in humans was performed in May 1989.[2] The first therapeutic use of gene transfer, as well as the first direct insertion of human DNA into the nuclear genome, was performed by French Anderson in a trial starting in September 1990.

Between 1989 and February 2016, over 2,300 clinical trials had been conducted, more than half of them in phase I.[3]

Not all medical procedures that introduce alterations to a patient's genetic makeup can be considered gene therapy. Bone marrow transplantation and organ transplants in general have been found to introduce foreign DNA into patients.[4] Gene therapy is defined by the precision of the procedure and the intention of direct therapeutic effects.

Gene therapy was conceptualized in 1972, by authors who urged caution before commencing human gene therapy studies.

The first attempt, an unsuccessful one, at gene therapy (as well as the first case of medical transfer of foreign genes into humans, not counting organ transplantation) was performed by Martin Cline on 10 July 1980.[5][6] Cline claimed that one of the genes in his patients was active six months later, though he never published this data or had it verified,[7] and even if he is correct, it is unlikely that it produced any significant benefit in treating beta-thalassemia.[8]

After extensive research on animals throughout the 1980s and a 1989 bacterial gene tagging trial on humans, the first gene therapy widely accepted as a success was demonstrated in a trial that started on September 14, 1990, when Ashi DeSilva was treated for ADA-SCID.[9]

The first somatic treatment that produced a permanent genetic change was performed in 1993.[10]

This procedure was referred to sensationally and somewhat inaccurately in the media as a "three parent baby", though mtDNA is not the primary human genome and has little effect on an organism's individual characteristics beyond powering its cells.

Gene therapy is a way to fix a genetic problem at its source. The polymers are either translated into proteins, interfere with target gene expression, or possibly correct genetic mutations.

The most common form uses DNA that encodes a functional, therapeutic gene to replace a mutated gene. The polymer molecule is packaged within a "vector", which carries the molecule inside cells.

Early clinical failures led to dismissals of gene therapy. Clinical successes since 2006 regained researchers' attention, although as of 2014, it was still largely an experimental technique.[11] These include treatment of the retinal diseases Leber's congenital amaurosis[12][13][14][15] and choroideremia,[16] X-linked SCID,[17] ADA-SCID,[18][19] adrenoleukodystrophy,[20] chronic lymphocytic leukemia (CLL),[21] acute lymphocytic leukemia (ALL),[22] multiple myeloma,[23] haemophilia[19] and Parkinson's disease.[24] Between 2013 and April 2014, US companies invested over $600 million in the field.[25]

The first commercial gene therapy, Gendicine, was approved in China in 2003 for the treatment of certain cancers.[26] In 2011 Neovasculgen was registered in Russia as the first-in-class gene-therapy drug for treatment of peripheral artery disease, including critical limb ischemia.[27] In 2012 Glybera, a treatment for a rare inherited disorder, became the first treatment to be approved for clinical use in either Europe or the United States after its endorsement by the European Commission.[11][28]

Following early advances in genetic engineering of bacteria, cells, and small animals, scientists started considering how to apply it to medicine. Two main approaches were considered: replacing or disrupting defective genes.[29] Scientists focused on diseases caused by single-gene defects, such as cystic fibrosis, haemophilia, muscular dystrophy, thalassemia and sickle cell anemia. Glybera treats one such disease, caused by a defect in lipoprotein lipase.[28]

DNA must be administered, reach the damaged cells, enter the cell and express/disrupt a protein.[30] Multiple delivery techniques have been explored. The initial approach incorporated DNA into an engineered virus to deliver the DNA into a chromosome.[31][32] Naked DNA approaches have also been explored, especially in the context of vaccine development.[33]

Generally, efforts focused on administering a gene that causes a needed protein to be expressed. More recently, increased understanding of nuclease function has led to more direct DNA editing, using techniques such as zinc finger nucleases and CRISPR. The vector incorporates genes into chromosomes. The expressed nucleases then knock out and replace genes in the chromosome. As of 2014 these approaches involve removing cells from patients, editing a chromosome and returning the transformed cells to patients.[34]

Gene editing is a potential approach to alter the human genome to treat genetic diseases,[35] viral diseases,[36] and cancer.[37] As of 2016 these approaches were still years from being medicine.[38][39]

Gene therapy may be classified into two types:

In somatic cell gene therapy (SCGT), the therapeutic genes are transferred into any cell other than a gamete, germ cell, gametocyte or undifferentiated stem cell. Any such modifications affect the individual patient only, and are not inherited by offspring. Somatic gene therapy represents mainstream basic and clinical research, in which therapeutic DNA (either integrated in the genome or as an external episome or plasmid) is used to treat disease.

Over 600 clinical trials utilizing SCGT are underway in the US. Most focus on severe genetic disorders, including immunodeficiencies, haemophilia, thalassaemia and cystic fibrosis. Such single gene disorders are good candidates for somatic cell therapy. The complete correction of a genetic disorder or the replacement of multiple genes is not yet possible. Only a few of the trials are in the advanced stages.[40]

In germline gene therapy (GGT), germ cells (sperm or eggs) are modified by the introduction of functional genes into their genomes. Modifying a germ cell causes all the organism's cells to contain the modified gene. The change is therefore heritable and passed on to later generations. Australia, Canada, Germany, Israel, Switzerland and the Netherlands[41] prohibit GGT for application in human beings, for technical and ethical reasons, including insufficient knowledge about possible risks to future generations[41] and higher risks versus SCGT.[42] The US has no federal controls specifically addressing human genetic modification (beyond FDA regulations for therapies in general).[41][43][44][45]

The delivery of DNA into cells can be accomplished by multiple methods. The two major classes are recombinant viruses (sometimes called biological nanoparticles or viral vectors) and naked DNA or DNA complexes (non-viral methods).

In order to replicate, viruses introduce their genetic material into the host cell, tricking the host's cellular machinery into using it as blueprints for viral proteins. Scientists exploit this by substituting a virus's genetic material with therapeutic DNA. (The term 'DNA' may be an oversimplification, as some viruses contain RNA, and gene therapy could take this form as well.) A number of viruses have been used for human gene therapy, including retrovirus, adenovirus, lentivirus, herpes simplex, vaccinia and adeno-associated virus.[3] Like the genetic material (DNA or RNA) in viruses, therapeutic DNA can be designed to simply serve as a temporary blueprint that is degraded naturally or (at least theoretically) to enter the host's genome, becoming a permanent part of the host's DNA in infected cells.

Non-viral methods present certain advantages over viral methods, such as large-scale production and low host immunogenicity. However, non-viral methods initially produced lower levels of transfection and gene expression, and thus lower therapeutic efficacy. Later technology has remedied this deficiency.

Methods for non-viral gene therapy include the injection of naked DNA, electroporation, the gene gun, sonoporation, magnetofection, the use of oligonucleotides, lipoplexes, dendrimers, and inorganic nanoparticles.

Some of the unsolved problems include:

Three patients' deaths have been reported in gene therapy trials, putting the field under close scrutiny. The first was that of Jesse Gelsinger in 1999.[52] One X-SCID patient died of leukemia in 2003.[9] In 2007, a rheumatoid arthritis patient died from an infection; the subsequent investigation concluded that the death was not related to gene therapy.[53]

In 1972 Friedmann and Roblin authored a paper in Science titled "Gene therapy for human genetic disease?"[54] Rogers (1970) was cited for proposing that exogenous good DNA be used to replace the defective DNA in those who suffer from genetic defects.[55]

In 1984 a retrovirus vector system was designed that could efficiently insert foreign genes into mammalian chromosomes.[56]

The first approved gene therapy clinical research in the US took place on 14 September 1990, at the National Institutes of Health (NIH), under the direction of William French Anderson.[57] Four-year-old Ashanti DeSilva received treatment for a genetic defect that left her with ADA-SCID, a severe immune system deficiency. The effects were temporary, but successful.[58]

Cancer gene therapy was introduced in 1992/93 (Trojan et al. 1993).[59] The treatment of glioblastoma multiforme, a malignant brain tumor whose outcome is always fatal, was done using a vector expressing antisense IGF-I RNA (clinical trial approved by the NIH, no. 1602, and by the FDA in 1994). This therapy also represents the beginning of cancer immunogene therapy, a treatment which proves to be effective due to the anti-tumor mechanism of IGF-I antisense, which is related to strong immune and apoptotic phenomena.

In 1992 Claudio Bordignon, working at the Vita-Salute San Raffaele University, performed the first gene therapy procedure using hematopoietic stem cells as vectors to deliver genes intended to correct hereditary diseases.[60] In 2002 this work led to the publication of the first successful gene therapy treatment for adenosine deaminase deficiency (SCID). The success of a multi-center trial for treating children with SCID (severe combined immune deficiency or "bubble boy" disease) between 2000 and 2002 was questioned when two of the ten children treated at the trial's Paris center developed a leukemia-like condition. Clinical trials were halted temporarily in 2002, but resumed after regulatory review of the protocol in the US, the United Kingdom, France, Italy and Germany.[61]

In 1993 Andrew Gobea was born with SCID following prenatal genetic screening. Blood was removed from his mother's placenta and umbilical cord immediately after birth, to acquire stem cells. The allele that codes for adenosine deaminase (ADA) was obtained and inserted into a retrovirus. Retroviruses and stem cells were mixed, after which the viruses inserted the gene into the stem cell chromosomes. Stem cells containing the working ADA gene were injected into Andrew's blood. Injections of the ADA enzyme were also given weekly. For four years T cells (white blood cells), produced by stem cells, made ADA enzymes using the ADA gene. After four years more treatment was needed.

Jesse Gelsinger's death in 1999 impeded gene therapy research in the US.[62][63] As a result, the FDA suspended several clinical trials pending the reevaluation of ethical and procedural practices.[64]

The modified cancer gene therapy strategy of antisense IGF-I RNA (NIH no. 1602),[65] using an antisense/triple-helix anti-IGF-I approach, was registered in 2002 in the Wiley gene therapy clinical trial registry (nos. 635 and 636). The approach has shown promising results in the treatment of six different malignant tumors: glioblastoma and cancers of the liver, colon, prostate, uterus and ovary (Collaborative NATO Science Programme on Gene Therapy, USA, France, Poland, no. LST 980517, conducted by J. Trojan) (Trojan et al., 2012). This antigene antisense/triple-helix therapy has proven efficient due to a mechanism that simultaneously stops IGF-I expression at the translational and transcriptional levels, strengthening anti-tumor immune and apoptotic phenomena.

Sickle-cell disease can be treated in mice.[66] In mice that have essentially the same defect that causes human cases, a viral vector was used to induce production of fetal hemoglobin (HbF), which normally ceases to be produced shortly after birth. In humans, the use of hydroxyurea to stimulate the production of HbF temporarily alleviates sickle cell symptoms. The researchers demonstrated this treatment to be a more permanent means of increasing therapeutic HbF production.[67]

A new gene therapy approach repaired errors in messenger RNA derived from defective genes. This technique has the potential to treat thalassaemia, cystic fibrosis and some cancers.[68]

Researchers created liposomes 25 nanometers across that can carry therapeutic DNA through pores in the nuclear membrane.[69]

In 2003 a research team inserted genes into the brain for the first time. They used liposomes coated in a polymer called polyethylene glycol, which, unlike viral vectors, are small enough to cross the blood–brain barrier.[70]

Short pieces of double-stranded RNA (short, interfering RNAs or siRNAs) are used by cells to degrade RNA of a particular sequence. If a siRNA is designed to match the RNA copied from a faulty gene, then the abnormal protein product of that gene will not be produced.[71]
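The sequence-matching logic behind siRNA design can be sketched in a few lines: the siRNA's antisense ("guide") strand is the reverse complement of the targeted region of the mRNA, so it base-pairs with that region and marks the transcript for degradation. Below is a minimal, purely illustrative Python sketch; the sequence and function name are invented for illustration, not taken from a real gene or library:

```python
# Base-pairing rules for RNA: A pairs with U, G pairs with C.
PAIR = {"A": "U", "U": "A", "G": "C", "C": "G"}

def sirna_guide(target_mrna: str) -> str:
    """Return the antisense (guide) strand for a target mRNA region:
    its reverse complement, which base-pairs with the target end-to-end."""
    return "".join(PAIR[base] for base in reversed(target_mrna.upper()))

# Hypothetical 21-nt region of a faulty gene's mRNA (illustrative only).
target = "AUGGCUAAGCUAGCUAGCUAA"
guide = sirna_guide(target)  # pairs with `target`, directing its degradation

# A guide designed this way matches only transcripts containing `target`,
# so the abnormal protein encoded by the faulty gene is not produced.
```

Because the guide is an exact reverse complement, applying the function twice recovers the original sequence, which serves as a quick sanity check on the pairing rules.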

Gendicine is a cancer gene therapy that delivers the tumor suppressor gene p53 using an engineered adenovirus. In 2003, it was approved in China for the treatment of head and neck squamous cell carcinoma.[26]

In March researchers announced the successful use of gene therapy to treat two adult patients for X-linked chronic granulomatous disease, a disease which affects myeloid cells and damages the immune system. The study is the first to show that gene therapy can treat the myeloid system.[72]

In May a team reported a way to prevent the immune system from rejecting a newly delivered gene.[73] Similar to organ transplantation, gene therapy has been plagued by this problem. The immune system normally recognizes the new gene as foreign and rejects the cells carrying it. The research utilized a newly uncovered network of genes regulated by molecules known as microRNAs. This natural function selectively obscured their therapeutic gene in immune system cells and protected it from discovery. Mice infected with the gene containing an immune-cell microRNA target sequence did not reject the gene.

In August scientists successfully treated metastatic melanoma in two patients using killer T cells genetically retargeted to attack the cancer cells.[74]

In November researchers reported on the use of VRX496, a gene-based immunotherapy for the treatment of HIV that uses a lentiviral vector to deliver an antisense gene against the HIV envelope. In a phase I clinical trial, five subjects with chronic HIV infection who had failed to respond to at least two antiretroviral regimens were treated. A single intravenous infusion of autologous CD4 T cells genetically modified with VRX496 was well tolerated. All patients had stable or decreased viral load; four of the five patients had stable or increased CD4 T cell counts. All five patients had stable or increased immune response to HIV antigens and other pathogens. This was the first evaluation of a lentiviral vector administered in a US human clinical trial.[75][76]

In May researchers announced the first gene therapy trial for inherited retinal disease. The first operation was carried out on a 23-year-old British male, Robert Johnson, in early 2007.[77]

Leber's congenital amaurosis is an inherited blinding disease caused by mutations in the RPE65 gene. The results of a small clinical trial in children were published in April.[12] Delivery of recombinant adeno-associated virus (AAV) carrying RPE65 yielded positive results. In May two more groups reported positive results in independent clinical trials using gene therapy to treat the condition. In all three clinical trials, patients recovered functional vision without apparent side-effects.[12][13][14][15]

In September researchers were able to give trichromatic vision to squirrel monkeys.[78] In November 2009, researchers halted a fatal genetic disorder called adrenoleukodystrophy in two children using a lentivirus vector to deliver a functioning version of ABCD1, the gene that is mutated in the disorder.[79]

An April paper reported that gene therapy addressed achromatopsia (color blindness) in dogs by targeting cone photoreceptors. Cone function and day vision were restored for at least 33 months in two young specimens. The therapy was less efficient for older dogs.[80]

In September it was announced that an 18-year-old male patient in France with beta-thalassemia major had been successfully treated.[81] Beta-thalassemia major is an inherited blood disease in which beta haemoglobin is missing and patients are dependent on regular lifelong blood transfusions.[82] The technique used a lentiviral vector to transduce the human β-globin gene into purified blood and marrow cells obtained from the patient in June 2007.[83] The patient's haemoglobin levels were stable at 9 to 10 g/dL. About a third of the hemoglobin contained the form introduced by the viral vector and blood transfusions were not needed.[83][84] Further clinical trials were planned.[85] Bone marrow transplants are the only cure for thalassemia, but 75% of patients do not find a matching donor.[84]

Cancer immunogene therapy using the modified antigene antisense/triple-helix approach was introduced in South America in 2010/11 at La Sabana University, Bogotá (Ethical Committee 14.12.2010, no. P-004-10). Taking into account the ethical aspects of gene diagnostics and gene therapy targeting IGF-I, IGF-I-expressing tumors, i.e. lung and epidermis cancers, were treated (Trojan et al. 2016).[86][87]

In 2007 and 2008, a man was cured of HIV by repeated hematopoietic stem cell transplantation (see also allogeneic stem cell transplantation, allogeneic bone marrow transplantation, allotransplantation) from a donor homozygous for the CCR5 Δ32 mutation, which disables the CCR5 receptor. This cure was accepted by the medical community in 2011.[88] It required complete ablation of existing bone marrow, which is very debilitating.

In August two of three subjects of a pilot study were confirmed to have been cured from chronic lymphocytic leukemia (CLL). The therapy used genetically modified T cells to attack cells that expressed the CD19 protein to fight the disease.[21] In 2013, the researchers announced that 26 of 59 patients had achieved complete remission and the original patient had remained tumor-free.[89]

Human HGF plasmid DNA therapy of cardiomyocytes is being examined as a potential treatment for coronary artery disease as well as treatment for the damage that occurs to the heart after myocardial infarction.[90][91]

In 2011 Neovasculgen was registered in Russia as the first-in-class gene-therapy drug for treatment of peripheral artery disease, including critical limb ischemia; it delivers the gene encoding for VEGF.[92][27] Neovasculgen is a plasmid encoding the CMV promoter and the 165-amino-acid form of VEGF.[93][94]

The FDA approved Phase 1 clinical trials on thalassemia major patients in the US for 10 participants in July.[95] The study was expected to continue until 2015.[96]

In July 2012, the European Medicines Agency recommended approval of a gene therapy treatment for the first time in either Europe or the United States. The treatment used alipogene tiparvovec (Glybera) to compensate for lipoprotein lipase deficiency, which can cause severe pancreatitis.[97] The recommendation was endorsed by the European Commission in November 2012[11][28][98][99] and commercial rollout began in late 2014.[100]

In December 2012, it was reported that 10 of 13 patients with multiple myeloma were in remission "or very close to it" three months after being injected with a treatment involving genetically engineered T cells to target proteins NY-ESO-1 and LAGE-1, which exist only on cancerous myeloma cells.[23]

In March researchers reported that three of five subjects who had acute lymphocytic leukemia (ALL) had been in remission for five months to two years after being treated with genetically modified T cells which attacked cells with the CD19 protein on their surface, i.e. all B-cells, cancerous or not. The researchers believed that the patients' immune systems would make normal T-cells and B-cells after a couple of months. They were also given bone marrow. One patient relapsed and died and one died of a blood clot unrelated to the disease.[22]

Following encouraging Phase 1 trials, in April, researchers announced they were starting Phase 2 clinical trials (called CUPID2 and SERCA-LVAD) on 250 patients[101] at several hospitals to combat heart disease. The therapy was designed to increase the levels of SERCA2, a protein in heart muscles, improving muscle function.[102] The FDA granted this a Breakthrough Therapy Designation to accelerate the trial and approval process.[103] In 2016 it was reported that no improvement was found from the CUPID2 trial.[104]

In July researchers reported promising results for six children with two severe hereditary diseases who had been treated with a partially deactivated lentivirus to replace a faulty gene, with follow-up of 7–32 months. Three of the children had metachromatic leukodystrophy, which causes children to lose cognitive and motor skills.[105] The other children had Wiskott-Aldrich syndrome, which leaves them open to infection, autoimmune diseases and cancer.[106] Follow-up trials with gene therapy on another six children with Wiskott-Aldrich syndrome were also reported as promising.[107][108]

In October researchers reported that two children born with adenosine deaminase severe combined immunodeficiency disease (ADA-SCID) had been treated with genetically engineered stem cells 18 months previously and that their immune systems were showing signs of full recovery. Another three children were making progress.[19] In 2014 a further 18 children with ADA-SCID were cured by gene therapy.[109] ADA-SCID children have no functioning immune system and are sometimes known as "bubble children."[19]

Also in October researchers reported that they had treated six haemophilia sufferers in early 2011 using an adeno-associated virus. Over two years later all six were producing clotting factor.[19][110]

Data from three trials of topical cystic fibrosis transmembrane conductance regulator gene therapy were reported not to support its clinical use as a mist inhaled into the lungs to treat cystic fibrosis patients with lung infections.[111]

In January researchers reported that six choroideremia patients had been treated with adeno-associated virus with a copy of REP1. Over a six-month to two-year period all had improved their sight.[112][113] By 2016, 32 patients had been treated with positive results and researchers were hopeful the treatment would be long-lasting.[16] Choroideremia is an inherited genetic eye disease with no approved treatment, leading to loss of sight.

In March researchers reported that 12 HIV patients had been treated since 2009 in a trial with a genetically engineered virus with a rare mutation (CCR5 deficiency) known to protect against HIV with promising results.[114][115]

Clinical trials of gene therapy for sickle cell disease were started in 2014[116][117] although one review failed to find any such trials.[118]

In February LentiGlobin BB305, a gene therapy treatment undergoing clinical trials for treatment of beta thalassemia gained FDA "breakthrough" status after several patients were able to forgo the frequent blood transfusions usually required to treat the disease.[119]

In March researchers delivered a recombinant gene encoding a broadly neutralizing antibody into monkeys infected with simian HIV; the monkeys' cells produced the antibody, which cleared them of HIV. The technique is named immunoprophylaxis by gene transfer (IGT). Animal tests for antibodies to ebola, malaria, influenza and hepatitis are underway.[120][121]

In March scientists, including an inventor of CRISPR, urged a worldwide moratorium on germline gene therapy, writing that "scientists should avoid even attempting, in lax jurisdictions, germline genome modification for clinical application in humans" until the full implications are discussed among scientific and governmental organizations.[122][123][124][125]

Also in 2015 Glybera was approved for the German market.[126]

In October, researchers announced that they had treated a baby girl, Layla Richards, with an experimental treatment using donor T-cells genetically engineered to attack cancer cells. Two months after the treatment she was still free of her cancer (a highly aggressive form of acute lymphoblastic leukaemia [ALL]). Children with highly aggressive ALL normally have a very poor prognosis and Layla's disease had been regarded as terminal before the treatment.[127]

In December, scientists of major world academies called for a moratorium on inheritable human genome edits, including those made with CRISPR-Cas9 technologies,[128] but said that basic research, including embryo gene editing, should continue.[129]

In April the Committee for Medicinal Products for Human Use of the European Medicines Agency endorsed a gene therapy treatment called Strimvelis and recommended it be approved.[130][131] This treats children born with ADA-SCID, who have no functioning immune system (sometimes called the "bubble baby" disease). This would be the second gene therapy treatment to be approved in Europe.[132]

Speculated uses for gene therapy include:

Gene therapy techniques have the potential to provide alternative treatments for those with infertility. Recently, successful experimentation on mice has shown that fertility can be restored using the gene-editing tool CRISPR.[133] Spermatogonial stem cells from another organism were transplanted into the testes of an infertile male mouse. The stem cells re-established spermatogenesis and fertility.[134]

Athletes might adopt gene therapy technologies to improve their performance.[135] Gene doping is not known to occur, but multiple gene therapies may have such effects. Kayser et al. argue that gene doping could level the playing field if all athletes receive equal access. Critics claim that any therapeutic intervention for non-therapeutic/enhancement purposes compromises the ethical foundations of medicine and sports.[136]

Genetic engineering could be used to change physical appearance, metabolism, and even improve physical capabilities and mental faculties such as memory and intelligence. Ethical claims about germline engineering include beliefs that every fetus has a right to remain genetically unmodified, that parents hold the right to genetically modify their offspring, and that every child has the right to be born free of preventable diseases.[137][138][139] For adults, genetic engineering could be seen as another enhancement technique to add to diet, exercise, education, cosmetics and plastic surgery.[140][141] Another theorist claims that moral concerns limit but do not prohibit germline engineering.[142]

Possible regulatory schemes include a complete ban, provision to everyone, or professional self-regulation. The American Medical Association's Council on Ethical and Judicial Affairs stated that "genetic interventions to enhance traits should be considered permissible only in severely restricted situations: (1) clear and meaningful benefits to the fetus or child; (2) no trade-off with other characteristics or traits; and (3) equal access to the genetic technology, irrespective of income or other socioeconomic characteristics."[143]

As early in the history of biotechnology as 1990, there have been scientists opposed to attempts to modify the human germline using these new tools,[144] and such concerns have continued as technology progressed.[145] With the advent of new techniques like CRISPR, in March 2015 a group of scientists urged a worldwide moratorium on clinical use of gene editing technologies to edit the human genome in a way that can be inherited.[122][123][124][125] In April 2015, researchers sparked controversy when they reported results of basic research to edit the DNA of non-viable human embryos using CRISPR.[133][146]

Regulations covering genetic modification are part of general guidelines about human-involved biomedical research.

The Helsinki Declaration (Ethical Principles for Medical Research Involving Human Subjects) was amended by the World Medical Association's General Assembly in 2008. This document provides principles physicians and researchers must consider when involving humans as research subjects. The Statement on Gene Therapy Research initiated by the Human Genome Organization (HUGO) in 2001 provides a legal baseline for all countries. HUGO's document emphasizes human freedom and adherence to human rights, and offers recommendations for somatic gene therapy, including the importance of recognizing public concerns about such research.[147]

No federal legislation lays out protocols or restrictions about human genetic engineering. This subject is governed by overlapping regulations from local and federal agencies, including the Department of Health and Human Services, the FDA and NIH's Recombinant DNA Advisory Committee. Researchers seeking federal funds for an investigational new drug application (commonly the case for somatic human genetic engineering) must obey international and federal guidelines for the protection of human subjects.[148]

NIH serves as the main gene therapy regulator for federally funded research. Privately funded research is advised to follow these regulations. NIH provides funding for research that develops or enhances genetic engineering techniques and to evaluate the ethics and quality in current research. The NIH maintains a mandatory registry of human genetic engineering research protocols that includes all federally funded projects.

An NIH advisory committee published a set of guidelines on gene manipulation.[149] The guidelines discuss lab safety as well as human test subjects and various experimental types that involve genetic changes. Several sections specifically pertain to human genetic engineering, including Section III-C-1. This section describes required review processes and other aspects when seeking approval to begin clinical research involving genetic transfer into a human patient.[150] The protocol for a gene therapy clinical trial must be approved by the NIH's Recombinant DNA Advisory Committee prior to any clinical trial beginning; this is different from any other kind of clinical trial.[149]

As with other kinds of drugs, the FDA regulates the quality and safety of gene therapy products and supervises how these products are used clinically. Therapeutic alteration of the human genome falls under the same regulatory requirements as any other medical treatment. Research involving human subjects, such as clinical trials, must be reviewed and approved by the FDA and an Institutional Review Board.[151][152]

Gene therapy is the basis for the plotline of the film I Am Legend[153] and the TV show Will Gene Therapy Change the Human Race?.[154]

Read more: Gene therapy – Wikipedia

Diabetes mellitus – Wikipedia

October 27th, 2016 5:44 am

Diabetes mellitus (DM), commonly referred to as diabetes, is a group of metabolic diseases in which there are high blood sugar levels over a prolonged period.[2] Symptoms of high blood sugar include frequent urination, increased thirst, and increased hunger. If left untreated, diabetes can cause many complications.[3] Acute complications can include diabetic ketoacidosis, nonketotic hyperosmolar coma, or death.[4] Serious long-term complications include heart disease, stroke, chronic kidney failure, foot ulcers, and damage to the eyes.[3]

Diabetes is due to either the pancreas not producing enough insulin or the cells of the body not responding properly to the insulin produced.[5] There are three main types of diabetes mellitus: type 1 DM, which results from the pancreas's failure to produce enough insulin; type 2 DM, which begins with insulin resistance, a condition in which cells fail to respond to insulin properly; and gestational diabetes, which occurs when pregnant women without a previous history of diabetes develop high blood sugar levels.

Prevention and treatment involve maintaining a healthy diet, regular physical exercise, a normal body weight, and avoiding use of tobacco. Control of blood pressure and maintaining proper foot care are important for people with the disease. Type 1 DM must be managed with insulin injections.[3] Type 2 DM may be treated with medications with or without insulin.[7] Insulin and some oral medications can cause low blood sugar.[8] Weight-loss surgery is sometimes an effective measure in those with obesity and type 2 DM.[9] Gestational diabetes usually resolves after the birth of the baby.[10]

As of 2015, an estimated 415 million people had diabetes worldwide,[11] with type 2 DM making up about 90% of the cases.[12][13] This represents 8.3% of the adult population,[13] with equal rates in both women and men.[14] As of 2014, trends suggested the rate would continue to rise.[15] Diabetes at least doubles a person's risk of early death.[3] From 2012 to 2015, approximately 1.5 to 5.0 million deaths each year resulted from diabetes.[7][11] The global economic cost of diabetes in 2014 was estimated to be US$612 billion.[16] In the United States, diabetes cost $245 billion in 2012.[17]

The classic symptoms of untreated diabetes are weight loss, polyuria (increased urination), polydipsia (increased thirst), and polyphagia (increased hunger).[18] Symptoms may develop rapidly (weeks or months) in type 1 DM, while they usually develop much more slowly and may be subtle or absent in type 2 DM.

Several other signs and symptoms can mark the onset of diabetes although they are not specific to the disease. In addition to the known ones above, they include blurry vision, headache, fatigue, slow healing of cuts, and itchy skin. Prolonged high blood glucose can cause glucose absorption in the lens of the eye, which leads to changes in its shape, resulting in vision changes. A number of skin rashes that can occur in diabetes are collectively known as diabetic dermadromes.

Low blood sugar is common in persons with type 1 and type 2 DM. Most cases are mild and are not considered medical emergencies. Effects can range from feelings of unease, sweating, trembling, and increased appetite in mild cases to more serious issues such as confusion, changes in behavior such as aggressiveness, seizures, unconsciousness, and (rarely) permanent brain damage or death in severe cases.[19][20] Moderate hypoglycemia may easily be mistaken for drunkenness;[21] rapid breathing and sweating, cold, pale skin are characteristic of hypoglycemia but not definitive.[22] Mild to moderate cases are self-treated by eating or drinking something high in sugar. Severe cases can lead to unconsciousness and must be treated with intravenous glucose or injections with glucagon.

People (usually with type 1 DM) may also experience episodes of diabetic ketoacidosis, a metabolic disturbance characterized by nausea, vomiting and abdominal pain, the smell of acetone on the breath, deep breathing known as Kussmaul breathing, and in severe cases a decreased level of consciousness.[23]

A rare but equally severe possibility is hyperosmolar nonketotic state, which is more common in type 2 DM and is mainly the result of dehydration.[23]

All forms of diabetes increase the risk of long-term complications. These typically develop after many years (10–20), but may be the first symptom in those who have otherwise not received a diagnosis before that time.

The major long-term complications relate to damage to blood vessels. Diabetes doubles the risk of cardiovascular disease[24] and about 75% of deaths in diabetics are due to coronary artery disease.[25] Other "macrovascular" diseases are stroke, and peripheral vascular disease.

The primary complications of diabetes due to damage in small blood vessels include damage to the eyes, kidneys, and nerves.[26] Damage to the eyes, known as diabetic retinopathy, is caused by damage to the blood vessels in the retina of the eye, and can result in gradual vision loss and blindness.[26] Damage to the kidneys, known as diabetic nephropathy, can lead to tissue scarring, urine protein loss, and eventually chronic kidney disease, sometimes requiring dialysis or kidney transplant.[26] Damage to the nerves of the body, known as diabetic neuropathy, is the most common complication of diabetes.[26] The symptoms can include numbness, tingling, pain, and altered pain sensation, which can lead to damage to the skin. Diabetes-related foot problems (such as diabetic foot ulcers) may occur, and can be difficult to treat, occasionally requiring amputation. Additionally, proximal diabetic neuropathy causes painful muscle wasting and weakness.

There is a link between cognitive deficit and diabetes. Compared to those without diabetes, those with the disease have a 1.2 to 1.5-fold greater rate of decline in cognitive function.[27]

Diabetes mellitus is classified into four broad categories: type 1, type 2, gestational diabetes, and "other specific types".[5] The "other specific types" are a collection of a few dozen individual causes.[5] Diabetes is a more variable disease than once thought and people may have combinations of forms.[29] The term "diabetes", without qualification, usually refers to diabetes mellitus.

Type 1 diabetes mellitus is characterized by loss of the insulin-producing beta cells of the islets of Langerhans in the pancreas, leading to insulin deficiency. This type can be further classified as immune-mediated or idiopathic. The majority of type 1 diabetes is of the immune-mediated nature, in which a T-cell-mediated autoimmune attack leads to the loss of beta cells and thus insulin.[30] It causes approximately 10% of diabetes mellitus cases in North America and Europe. Most affected people are otherwise healthy and of a healthy weight when onset occurs. Sensitivity and responsiveness to insulin are usually normal, especially in the early stages. Type 1 diabetes can affect children or adults, but was traditionally termed "juvenile diabetes" because a majority of these diabetes cases were in children.

"Brittle" diabetes, also known as unstable diabetes or labile diabetes, is a term that was traditionally used to describe the dramatic and recurrent swings in glucose levels, often occurring for no apparent reason in insulin-dependent diabetes. This term, however, has no biologic basis and should not be used.[31] Still, type 1 diabetes can be accompanied by irregular and unpredictable high blood sugar levels, frequently with ketosis, and sometimes with serious low blood sugar levels. Other complications include an impaired counterregulatory response to low blood sugar, infection, gastroparesis (which leads to erratic absorption of dietary carbohydrates), and endocrinopathies (e.g., Addison's disease).[31] These phenomena are believed to occur no more frequently than in 1% to 2% of persons with type 1 diabetes.[32]

Type 1 diabetes is partly inherited, with multiple genes, including certain HLA genotypes, known to influence the risk of diabetes. The increasing incidence of type 1 diabetes reflects the modern lifestyle.[33] In genetically susceptible people, the onset of diabetes can be triggered by one or more environmental factors,[34] such as a viral infection or diet. Several viruses have been implicated, but to date there is no stringent evidence to support this hypothesis in humans.[34][35] Among dietary factors, data suggest that gliadin (a protein present in gluten) may play a role in the development of type 1 diabetes, but the mechanism is not fully understood.[36][37]

Type 2 DM is characterized by insulin resistance, which may be combined with relatively reduced insulin secretion.[5] The defective responsiveness of body tissues to insulin is believed to involve the insulin receptor. However, the specific defects are not known. Diabetes mellitus cases due to a known defect are classified separately. Type 2 DM is the most common type of diabetes mellitus.

In the early stage of type 2, the predominant abnormality is reduced insulin sensitivity. At this stage, high blood sugar can be reversed by a variety of measures and medications that improve insulin sensitivity or reduce the liver's glucose production.

Type 2 DM is due primarily to lifestyle factors and genetics.[38] A number of lifestyle factors are known to be important to the development of type 2 DM, including obesity (defined by a body mass index of greater than 30), lack of physical activity, poor diet, stress, and urbanization.[12] Excess body fat is associated with 30% of cases in those of Chinese and Japanese descent, 60–80% of cases in those of European and African descent, and 100% of Pima Indians and Pacific Islanders.[5] Even those who are not obese often have a high waist–hip ratio.[5]

Dietary factors also influence the risk of developing type 2 DM. Consumption of sugar-sweetened drinks in excess is associated with an increased risk.[39][40] The type of fats in the diet is also important, with saturated fats and trans fatty acids increasing the risk and polyunsaturated and monounsaturated fat decreasing the risk.[38] Eating lots of white rice also may increase the risk of diabetes.[41] A lack of exercise is believed to cause 7% of cases.[42]

Gestational diabetes mellitus (GDM) resembles type 2 DM in several respects, involving a combination of relatively inadequate insulin secretion and responsiveness. It occurs in about 2–10% of all pregnancies and may improve or disappear after delivery.[43] However, after pregnancy approximately 5–10% of women with gestational diabetes are found to have diabetes mellitus, most commonly type 2.[43] Gestational diabetes is fully treatable, but requires careful medical supervision throughout the pregnancy. Management may include dietary changes, blood glucose monitoring, and in some cases, insulin may be required.

Though it may be transient, untreated gestational diabetes can damage the health of the fetus or mother. Risks to the baby include macrosomia (high birth weight), congenital heart and central nervous system abnormalities, and skeletal muscle malformations. Increased levels of insulin in a fetus's blood may inhibit fetal surfactant production and cause respiratory distress syndrome. A high blood bilirubin level may result from red blood cell destruction. In severe cases, perinatal death may occur, most commonly as a result of poor placental perfusion due to vascular impairment. Labor induction may be indicated with decreased placental function. A Caesarean section may be performed if there is marked fetal distress or an increased risk of injury associated with macrosomia, such as shoulder dystocia.

Prediabetes indicates a condition that occurs when a person's blood glucose levels are higher than normal but not high enough for a diagnosis of type 2 DM. Many people destined to develop type 2 DM spend many years in a state of prediabetes.

Latent autoimmune diabetes of adults (LADA) is a condition in which type 1 DM develops in adults. Adults with LADA are frequently initially misdiagnosed as having type 2 DM, based on age rather than etiology.

Some cases of diabetes are caused by the body's tissue receptors not responding to insulin (even when insulin levels are normal, which is what separates it from type 2 diabetes); this form is very uncommon. Genetic mutations (autosomal or mitochondrial) can lead to defects in beta cell function. Abnormal insulin action may also have been genetically determined in some cases. Any disease that causes extensive damage to the pancreas may lead to diabetes (for example, chronic pancreatitis and cystic fibrosis). Diseases associated with excessive secretion of insulin-antagonistic hormones can cause diabetes (which is typically resolved once the hormone excess is removed). Many drugs impair insulin secretion and some toxins damage pancreatic beta cells. The ICD-10 (1992) diagnostic entity, malnutrition-related diabetes mellitus (MRDM or MMDM, ICD-10 code E12), was deprecated by the World Health Organization when the current taxonomy was introduced in 1999.[44]

Other forms of diabetes mellitus include congenital diabetes, which is due to genetic defects of insulin secretion, cystic fibrosis-related diabetes, steroid diabetes induced by high doses of glucocorticoids, and several forms of monogenic diabetes.

"Type 3 diabetes" has been suggested as a term for Alzheimer's disease as the underlying processes may involve insulin resistance by the brain.[45]

The following is a comprehensive list of other causes of diabetes:[46]

Insulin is the principal hormone that regulates the uptake of glucose from the blood into most cells of the body, especially liver, muscle, and adipose tissue. Therefore, deficiency of insulin or the insensitivity of its receptors plays a central role in all forms of diabetes mellitus.[48]

The body obtains glucose from three main places: the intestinal absorption of food, the breakdown of glycogen, the storage form of glucose found in the liver, and gluconeogenesis, the generation of glucose from non-carbohydrate substrates in the body.[49] Insulin plays a critical role in balancing glucose levels in the body. Insulin can inhibit the breakdown of glycogen or the process of gluconeogenesis, it can stimulate the transport of glucose into fat and muscle cells, and it can stimulate the storage of glucose in the form of glycogen.[49]

Insulin is released into the blood by beta cells (β-cells), found in the islets of Langerhans in the pancreas, in response to rising levels of blood glucose, typically after eating. Insulin is used by about two-thirds of the body's cells to absorb glucose from the blood for use as fuel, for conversion to other needed molecules, or for storage. Lower glucose levels result in decreased insulin release from the beta cells and in the breakdown of glycogen to glucose. This process is mainly controlled by the hormone glucagon, which acts in the opposite manner to insulin.[50]

If the amount of insulin available is insufficient, if cells respond poorly to the effects of insulin (insulin insensitivity or insulin resistance), or if the insulin itself is defective, then glucose will not be absorbed properly by the body cells that require it, and it will not be stored appropriately in the liver and muscles. The net effect is persistently high levels of blood glucose, poor protein synthesis, and other metabolic derangements, such as acidosis.[49]

When the glucose concentration in the blood remains high over time, the kidneys will reach a threshold of reabsorption, and glucose will be excreted in the urine (glycosuria).[51] This increases the osmotic pressure of the urine and inhibits reabsorption of water by the kidney, resulting in increased urine production (polyuria) and increased fluid loss. Lost blood volume will be replaced osmotically from water held in body cells and other body compartments, causing dehydration and increased thirst (polydipsia).[49]

Diabetes mellitus is characterized by recurrent or persistent high blood sugar, and is diagnosed by demonstrating any one of the following:[44]

A positive result, in the absence of unequivocal high blood sugar, should be confirmed by a repeat of any of the above methods on a different day. It is preferable to measure a fasting glucose level because of the ease of measurement and the considerable time commitment of formal glucose tolerance testing, which takes two hours to complete and offers no prognostic advantage over the fasting test.[55] According to the current definition, two fasting glucose measurements above 126 mg/dl (7.0 mmol/l) are considered diagnostic for diabetes mellitus.

Per the World Health Organization, people with fasting glucose levels from 6.1 to 6.9 mmol/l (110 to 125 mg/dl) are considered to have impaired fasting glucose.[56] People with plasma glucose at or above 7.8 mmol/l (140 mg/dl), but not over 11.1 mmol/l (200 mg/dl), two hours after a 75 g oral glucose load are considered to have impaired glucose tolerance. Of these two prediabetic states, the latter in particular is a major risk factor for progression to full-blown diabetes mellitus, as well as cardiovascular disease.[57] The American Diabetes Association has, since 2003, used a slightly different range for impaired fasting glucose: 5.6 to 6.9 mmol/l (100 to 125 mg/dl).[58]
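The cut-offs above can be expressed in a short sketch. This is illustrative only (not medical advice, and not from any standard library): the function names are made up here, the conversion factor of roughly 18 mg/dl per mmol/l is the usual approximation for glucose, and the thresholds are the WHO and ADA values quoted in the text.

```python
# Illustrative classification of plasma glucose per the WHO cut-offs
# quoted above. Function names are made up for this sketch.

MGDL_PER_MMOLL = 18.0  # approximate conversion factor for glucose


def mmol_to_mgdl(mmol: float) -> float:
    """Convert a glucose concentration from mmol/l to mg/dl."""
    return mmol * MGDL_PER_MMOLL


def classify_fasting(mgdl: float, ada: bool = False) -> str:
    """WHO impaired fasting glucose: 110-125 mg/dl; ADA uses 100-125."""
    lower = 100 if ada else 110
    if mgdl >= 126:
        return "diabetes range (confirm on a separate day)"
    if mgdl >= lower:
        return "impaired fasting glucose"
    return "normal"


def classify_two_hour(mgdl: float) -> str:
    """Two hours after a 75 g oral glucose load."""
    if mgdl >= 200:
        return "diabetes range (confirm on a separate day)"
    if mgdl >= 140:
        return "impaired glucose tolerance"
    return "normal"
```

Note how the same fasting value of 105 mg/dl is "normal" under the WHO range but "impaired fasting glucose" under the ADA range, which is exactly the discrepancy the paragraph describes.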

Glycated hemoglobin is better than fasting glucose for determining risks of cardiovascular disease and death from any cause.[59]
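One reason glycated hemoglobin is clinically useful is that it tracks average glucose over the preceding weeks. A commonly cited linear relationship between HbA1c and estimated average glucose (eAG) comes from the ADAG study (Nathan et al., 2008); it is not stated in the text above, so it is included here only as a hedged aside:

```python
# Estimated average glucose from HbA1c, per the ADAG study regression
# (an external assumption, not taken from the article text).

def estimated_average_glucose_mgdl(hba1c_percent: float) -> float:
    """eAG (mg/dl) ~= 28.7 * HbA1c(%) - 46.7."""
    return 28.7 * hba1c_percent - 46.7
```

Under this relationship, an HbA1c of 6.5% corresponds to an estimated average glucose of roughly 140 mg/dl.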

The rare disease diabetes insipidus has similar symptoms to diabetes mellitus, but without disturbances in the sugar metabolism (insipidus means "without taste" in Latin) and does not involve the same disease mechanisms. Diabetes is a part of the wider condition known as metabolic syndrome.

There is no known preventive measure for type 1 diabetes.[3] Type 2 diabetes, which accounts for 85-90% of all cases, can often be prevented or delayed by maintaining a normal body weight, engaging in physical exercise, and consuming a healthful diet.[3] Higher levels of physical activity reduce the risk of diabetes by 28%.[60] Dietary changes known to be effective in helping to prevent diabetes include maintaining a diet rich in whole grains and fiber, and choosing good fats, such as the polyunsaturated fats found in nuts, vegetable oils, and fish.[61] Limiting sugary beverages and eating less red meat and other sources of saturated fat can also help prevent diabetes.[61] Tobacco smoking is also associated with an increased risk of diabetes and its complications, so smoking cessation can be an important preventive measure as well.[62]

The relationship between type 2 diabetes and the main modifiable risk factors (excess weight, unhealthy diet, physical inactivity and tobacco use) is similar in all regions of the world. There is growing evidence that the underlying determinants of diabetes are a reflection of the major forces driving social, economic and cultural change: globalization, urbanization, population ageing, and the general health policy environment.[63]

Diabetes mellitus is a chronic disease, for which there is no known cure except in very specific situations.[64] Management concentrates on keeping blood sugar levels as close to normal as possible, without causing low blood sugar. This can usually be accomplished with a healthy diet, exercise, weight loss, and use of appropriate medications (insulin in the case of type 1 diabetes; oral medications, as well as possibly insulin, in type 2 diabetes).

Learning about the disease and actively participating in the treatment is important, since complications are far less common and less severe in people who have well-managed blood sugar levels.[65][66] The usual goal of treatment is an HbA1c level of 6.5%; it should not be lower than that, and it may be set higher.[67] Attention is also paid to other health problems that may accelerate the negative effects of diabetes. These include smoking, elevated cholesterol levels, obesity, high blood pressure, and lack of regular exercise.[67] Specialized footwear is widely used to reduce the risk of ulceration, or re-ulceration, in at-risk diabetic feet. Evidence for the efficacy of this remains equivocal, however.[68]

People with diabetes can benefit from education about the disease and treatment, good nutrition to achieve a normal body weight, and exercise, with the goal of keeping both short-term and long-term blood glucose levels within acceptable bounds. In addition, given the associated higher risks of cardiovascular disease, lifestyle modifications are recommended to control blood pressure.[69]

Medications used to treat diabetes do so by lowering blood sugar levels. There are a number of different classes of anti-diabetic medications. Some are available by mouth, such as metformin, while others are only available by injection, such as GLP-1 agonists. Type 1 diabetes can only be treated with insulin, typically with a combination of regular and NPH insulin, or synthetic insulin analogs.[citation needed]

Metformin is generally recommended as a first line treatment for type 2 diabetes, as there is good evidence that it decreases mortality.[70] It works by decreasing the liver's production of glucose.[71] Several other groups of drugs, mostly given by mouth, may also decrease blood sugar in type 2 DM. These include agents that increase insulin release, agents that decrease absorption of sugar from the intestines, and agents that make the body more sensitive to insulin.[71] When insulin is used in type 2 diabetes, a long-acting formulation is usually added initially, while continuing oral medications.[70] Doses of insulin are then increased to effect.[70][72]

Since cardiovascular disease is a serious complication associated with diabetes, some have recommended blood pressure levels below 130/80 mmHg.[73] However, evidence supports less than or equal to somewhere between 140/90 mmHg and 160/100 mmHg; the only additional benefit found for blood pressure targets beneath this range was an isolated decrease in stroke risk, and this was accompanied by an increased risk of other serious adverse events.[74][75] A 2016 review found potential harm to treating lower than 140 mmHg.[76] Among medications that lower blood pressure, angiotensin converting enzyme inhibitors (ACEIs) improve outcomes in those with DM while the similar medications angiotensin receptor blockers (ARBs) do not.[77] Aspirin is also recommended for people with cardiovascular problems; however, routine use of aspirin has not been found to improve outcomes in uncomplicated diabetes.[78]

A pancreas transplant is occasionally considered for people with type1 diabetes who have severe complications of their disease, including end stage kidney disease requiring kidney transplantation.[79]

Weight loss surgery in those with obesity and type 2 diabetes is often an effective measure.[80] Many are able to maintain normal blood sugar levels with little or no medications following surgery,[81] and long-term mortality is decreased.[82] There is, however, some short-term mortality risk of less than 1% from the surgery.[83] The body mass index cutoffs for when surgery is appropriate are not yet clear.[82] It is recommended that this option be considered in those who are unable to get both their weight and blood sugar under control.[84]

In countries using a general practitioner system, such as the United Kingdom, care may take place mainly outside hospitals, with hospital-based specialist care used only in case of complications, difficult blood sugar control, or research projects. In other circumstances, general practitioners and specialists share care in a team approach. Home telehealth support can be an effective management technique.[85]

[Two world-map figures from the original article were lost in extraction; only their legend bands survive: one apparently showing diabetes prevalence per 1,000 inhabitants (bands from "no data" and under 7.5 up to over 82.5) and one showing deaths from diabetes per million persons (bands from 28-91 up to 405-1,879).]

As of 2016, 422 million people have diabetes worldwide,[86] up from an estimated 382 million people in 2013[13] and from 108 million in 1980.[86] Accounting for the shifting age structure of the global population, the prevalence of diabetes is 8.5% among adults, nearly double the rate of 4.7% in 1980.[86] Type 2 makes up about 90% of the cases.[12][14] Some data indicate rates are roughly equal in women and men,[14] but male excess in diabetes has been found in many populations with higher type 2 incidence, possibly due to sex-related differences in insulin sensitivity, consequences of obesity and regional body fat deposition, and other contributing factors such as high blood pressure, tobacco smoking and alcohol intake.[87][88]

The World Health Organization (WHO) estimates that diabetes mellitus resulted in 1.5 million deaths in 2012, making it the 8th leading cause of death.[7][86] However, another 2.2 million deaths worldwide were attributable to high blood glucose and the increased risks of cardiovascular disease and other associated complications (e.g. kidney failure), which often lead to premature death and are often listed as the underlying cause on death certificates rather than diabetes.[86][89] For example, in 2014, the International Diabetes Federation (IDF) estimated that diabetes resulted in 4.9 million deaths worldwide,[15] using modelling to estimate the total number of deaths that could be directly or indirectly attributed to diabetes.[16]

Diabetes mellitus occurs throughout the world but is more common (especially type 2) in more developed countries. The greatest increase in rates has however been seen in low- and middle-income countries,[86] where more than 80% of diabetic deaths occur.[90] The fastest prevalence increase is expected to occur in Asia and Africa, where most people with diabetes will probably live in 2030.[91] The increase in rates in developing countries follows the trend of urbanization and lifestyle changes, including increasingly sedentary lifestyles, less physically demanding work and the global nutrition transition, marked by increased intake of foods that are high energy-dense but nutrient-poor (often high in sugar and saturated fats, sometimes referred to as the "Western-style" diet).[86][91]

Diabetes was one of the first diseases described,[92] with an Egyptian manuscript from c. 1500 BCE mentioning "too great emptying of the urine".[93] The first described cases are believed to be of type 1 diabetes.[93] Indian physicians around the same time identified the disease and classified it as madhumeha or "honey urine", noting the urine would attract ants.[93] The term "diabetes" or "to pass through" was first used in 230 BCE by the Greek Apollonius of Memphis.[93] The disease was considered rare during the time of the Roman empire, with Galen commenting he had only seen two cases during his career.[93] This is possibly due to the diet and lifestyle of the ancients, or because the clinical symptoms were observed only during the advanced stage of the disease. Galen named the disease "diarrhea of the urine" (diarrhea urinosa). The earliest surviving work with a detailed reference to diabetes is that of Aretaeus of Cappadocia (2nd or early 3rd century CE). He described the symptoms and the course of the disease, which he attributed to moisture and coldness, reflecting the beliefs of the "Pneumatic School". He hypothesized a correlation of diabetes with other diseases, and he discussed differential diagnosis from snakebite, which also provokes excessive thirst. His work remained unknown in the West until 1552, when the first Latin edition was published in Venice.[94]

Type 1 and type 2 diabetes were identified as separate conditions for the first time by the Indian physicians Sushruta and Charaka in 400-500 CE, with type 1 associated with youth and type 2 with being overweight.[93] The term "mellitus" or "from honey" was added by the Briton John Rollo in the late 1700s to separate the condition from diabetes insipidus, which is also associated with frequent urination.[93] Effective treatment was not developed until the early part of the 20th century, when Canadians Frederick Banting and Charles Herbert Best isolated and purified insulin in 1921 and 1922.[93] This was followed by the development of the long-acting insulin NPH in the 1940s.[93]

The word diabetes comes from Latin diabētēs, which in turn comes from Ancient Greek διαβήτης (diabētēs), which literally means "a passer through; a siphon".[95] The ancient Greek physician Aretaeus of Cappadocia (fl. 1st century CE) used that word, with the intended meaning "excessive discharge of urine", as the name for the disease.[96][97] Ultimately, the word comes from Greek διαβαίνειν (diabainein), meaning "to pass through",[95] which is composed of δια- (dia-), meaning "through", and βαίνειν (bainein), meaning "to go".[96] The word "diabetes" is first recorded in English, in the form diabete, in a medical text written around 1425.

The word mellitus comes from the classical Latin word mellītus, meaning "mellite"[98] (i.e. sweetened with honey;[98] honey-sweet[99]). The Latin word comes from mell-, which comes from mel, meaning "honey";[98][99] sweetness;[99] pleasant thing,[99] and the suffix -ītus,[98] whose meaning is the same as that of the English suffix "-ite".[100] It was Thomas Willis who in 1675 added "mellitus" to the word "diabetes" as a designation for the disease, when he noticed the urine of a diabetic had a sweet taste (glycosuria). This sweet taste had been noticed in urine by the ancient Greeks, Chinese, Egyptians, Indians, and Persians.

The 1989 "St. Vincent Declaration"[101][102] was the result of international efforts to improve the care accorded to those with diabetes. Doing so is important not only in terms of quality of life and life expectancy but also economically: expenses due to diabetes have been shown to be a major drain on health- and productivity-related resources for healthcare systems and governments.

Several countries have established national diabetes programmes, with varying degrees of success, to improve treatment of the disease.[103]

People with diabetes who have neuropathic symptoms such as numbness or tingling in feet or hands are twice as likely to be unemployed as those without the symptoms.[104]

In 2010, diabetes-related emergency room (ER) visit rates in the United States were higher among people from the lowest income communities (526 per 10,000 population) than from the highest income communities (236 per 10,000 population). Approximately 9.4% of diabetes-related ER visits were for the uninsured.[105]

The term "type 1 diabetes" has replaced several former terms, including childhood-onset diabetes, juvenile diabetes, and insulin-dependent diabetes mellitus (IDDM). Likewise, the term "type 2 diabetes" has replaced several former terms, including adult-onset diabetes, obesity-related diabetes, and noninsulin-dependent diabetes mellitus (NIDDM). Beyond these two types, there is no agreed-upon standard nomenclature.

Diabetes mellitus is also occasionally known as "sugar diabetes" to differentiate it from diabetes insipidus.[106]

In animals, diabetes is most commonly encountered in dogs and cats. Middle-aged animals are most commonly affected. Female dogs are twice as likely to be affected as males, while according to some sources, male cats are also more prone than females. In both species, all breeds may be affected, but some small dog breeds are particularly likely to develop diabetes, such as Miniature Poodles.[107] The symptoms may relate to fluid loss and polyuria, but the course may also be insidious. Diabetic animals are more prone to infections. The long-term complications recognised in humans are much rarer in animals. The principles of treatment (weight loss, oral antidiabetics, subcutaneous insulin) and management of emergencies (e.g. ketoacidosis) are similar to those in humans.[107]

Inhalable insulin has been developed.[108] The original products were withdrawn due to side effects.[108] Afrezza, under development by the pharmaceuticals company MannKind Corporation, was approved by the FDA for general sale in June 2014.[109] An advantage of inhaled insulin is that it may be more convenient and easier to use.[110]

Transdermal insulin in the form of a cream has been developed and trials are being conducted on people with type 2 diabetes.[111][112]

The rest is here:
Diabetes mellitus - Wikipedia


Biotechnology A.S. Degree

October 27th, 2016 5:44 am

Program Goal:The biotechnology program is designed to prepare students for employment as technicians who will work in a laboratory or industrial setting. Biotechnology is a wide-ranging field encompassing: DNA/RNA and protein isolation, characterization, and sequencing; cell culture; genetic modification of organisms; toxicology; vaccine sterility testing; antibody isolation and production; and the development of diagnostic and therapeutic agents. This hands-on program is designed to meet local, statewide, and national need for laboratory technicians. Graduates are thoroughly grounded in basic laboratory skills and trained in advanced molecular biology techniques. Students are acclimated to both research and industrial environments. The program emphasizes laboratory-based, universal, and scalable technical skills resulting in a thorough and comprehensive understanding of the methodology.

Program Entrance Requirements: To be admitted into the Biotechnology Degree Program, a student must have:

Achieved a level of English and reading proficiency which qualifies the student for entry into ENC 1101 or higher as demonstrated by the standard placement criteria currently in use at State College of Florida, Manatee-Sarasota (SCF)

Achieved a level of mathematics proficiency which qualifies the student for entry into MAC 1105 or higher as demonstrated by the standard placement criteria currently in use at SCF

Achieved a level of chemistry and biological content proficiency equivalent to that covered in CHM 1025C and BSC 1007C as demonstrated by the standard placement criteria currently in use at SCF

Suggested course of study:

[The suggested course-of-study table did not survive extraction. Recoverable details: College Algebra (MAC 1105, 3 credit hours) appears in the first semester; one elective must be an Area III Social or Behavioral Science; and the five listed terms total 12, 13, 11, 13, and 12 credit hours respectively.]

See more here:
Biotechnology A.S. Degree




2025 © StemCell Therapy is proudly powered by WordPress