
3 Biotech – a SpringerOpen journal

January 2nd, 2017 5:46 pm

3 Biotech is a quarterly, peer-reviewed open access journal published under the brand SpringerOpen.

Continuous Article Publishing (CAP)

3 Biotech will be moving to Continuous Article Publishing (CAP) in 2016, in which newly accepted papers will be published online with volume and article numbers shortly after receipt of author proofs. This change will alleviate the significant backlog of accepted articles that are currently available online as "published ahead of time" but are awaiting formal publication with a volume, issue number and page numbers. To achieve a smooth transition to the CAP model, all papers accepted after June 2015 have been held back and will be published with volume and article numbers from January 2016 onwards. We apologize for this short delay in article processing during this important transition phase, which is designed to speed up the process from acceptance to final publication without articles having to wait in a "published ahead of time" queue. In addition, formal rapid publication from 2016 will ensure that all articles in 3 Biotech are immediately available in indexing services for researchers.

3 Biotech publishes the results of the latest research related to the study and application of biotechnology to:

- Medicine and Biomedical Sciences
- Agriculture
- The Environment

The focus on these three technology sectors recognizes that complete Biotechnology applications often require a combination of techniques. 3 Biotech not only presents the latest developments in biotechnology but also addresses the problems and benefits of integrating a variety of techniques for a particular application. 3 Biotech will appeal to scientists and engineers in both academia and industry focused on the safe and efficient application of Biotechnology to Medicine, Agriculture and the Environment.

Articles from a huge variety of biotechnology applications are welcome including:

- Cancer and stem cell research
- Genetic engineering and cloning
- Bioremediation and biodegradation
- Bioinformatics and systems biology
- Biomarkers and biosensors
- Biodiversity and biodiscovery
- Biorobotics and biotoxins
- Analytical biotechnology and the human genome

3 Biotech accepts original and review articles as well as short research reports, protocols and methods, notes to the editor, letters to the editor and book reviews for publication. Up-to-date topical review articles will also be considered. All manuscripts are peer-reviewed for scientific quality and acceptance.

NEW:

3 Biotech has recently received its first Impact Factor and is now covered by a range of A&I services, including:

- Science Citation Index Expanded
- Journal Citation Reports/Science Edition
- Biological Abstracts
- BIOSIS Previews

Best Paper Award: 3 Biotech is supported by King Abdulaziz City for Science and Technology (KACST) in Saudi Arabia. Every year KACST awards the best paper with the KACST Medal and $5,000. The editors of 3 Biotech have selected the best paper from among those published in 2011-2012 and in 2012-2013.

- The 2011-2012 winning paper is:

Nanocrystalline hydroxyapatite and zinc-doped hydroxyapatite as carrier material for controlled delivery of ciprofloxacin

Authors: G. Devanand Venkatasubbu and colleagues at Anna University, India.

- The 2012-2013 winning paper is:

Stress influenced increase in phenolic content and radical scavenging capacity of Rhodotorula glutinis CCY 20-2-26

Authors: Raj Kumar Salar and colleagues at Chaudhary Devi Lal University, India.

Related subjects: Agriculture - Biomaterials - Biotechnology - Cancer Research - Cell Biology - Systems Biology and Bioinformatics

Journal Citation Reports, Thomson Reuters

Science Citation Index Expanded (SciSearch), Journal Citation Reports/Science Edition, PubMed, PubMedCentral, EMBASE, Google Scholar, CAB International, AGRICOLA, Biological Abstracts, BIOSIS, CAB Abstracts, DOAJ, Global Health, OCLC, Summon by ProQuest

See original here:
3 Biotech - a SpringerOpen journal


Genetic code – Wikipedia

December 28th, 2016 7:42 am

The genetic code is the set of rules by which information encoded within genetic material (DNA or mRNA sequences) is translated into proteins by living cells. Translation is accomplished by the ribosome, which links amino acids in an order specified by mRNA, using transfer RNA (tRNA) molecules to carry amino acids and to read the mRNA three nucleotides at a time. The genetic code is highly similar among all organisms and can be expressed in a simple table with 64 entries.

The code defines how sequences of nucleotide triplets, called codons, specify which amino acid will be added next during protein synthesis. With some exceptions,[1] a three-nucleotide codon in a nucleic acid sequence specifies a single amino acid. Because the vast majority of genes are encoded with exactly the same code (see the RNA codon table), this particular code is often referred to as the canonical or standard genetic code, or simply the genetic code, though in fact some variant codes have evolved. For example, protein synthesis in human mitochondria relies on a genetic code that differs from the standard genetic code.

While the "genetic code" determines a protein's amino acid sequence, other genomic regions determine when and where these proteins are produced according to a multitude of more complex "gene regulatory codes".

Serious efforts to understand how proteins are encoded began after the structure of DNA was discovered in 1953. George Gamow postulated that sets of three bases must be employed to encode the 20 standard amino acids used by living cells to build proteins. With four different nucleotides, a code of 2 nucleotides would allow for only a maximum of 4² = 16 amino acids. A code of 3 nucleotides could code for a maximum of 4³ = 64 amino acids.[2]

The Crick, Brenner et al. experiment first demonstrated that codons consist of three DNA bases; Marshall Nirenberg and Heinrich J. Matthaei were the first to elucidate the nature of a codon in 1961 at the National Institutes of Health. They used a cell-free system to translate a poly-uracil RNA sequence (i.e., UUUUU...) and discovered that the polypeptide that they had synthesized consisted of only the amino acid phenylalanine.[3] They thereby deduced that the codon UUU specified the amino acid phenylalanine. This was followed by experiments in Severo Ochoa's laboratory that demonstrated that the poly-adenine RNA sequence (AAAAA...) coded for the polypeptide poly-lysine[4] and that the poly-cytosine RNA sequence (CCCCC...) coded for the polypeptide poly-proline.[5] Therefore, the codon AAA specified the amino acid lysine, and the codon CCC specified the amino acid proline. Using different copolymers most of the remaining codons were then determined. Subsequent work by Har Gobind Khorana identified the rest of the genetic code. Shortly thereafter, Robert W. Holley determined the structure of transfer RNA (tRNA), the adapter molecule that facilitates the process of translating RNA into protein. This work was based upon earlier studies by Severo Ochoa, who received the Nobel Prize in Physiology or Medicine in 1959 for his work on the enzymology of RNA synthesis.[6]

Extending this work, Nirenberg and Philip Leder revealed the triplet nature of the genetic code and deciphered the codons of the standard genetic code. In these experiments, various combinations of mRNA were passed through a filter that contained ribosomes, the components of cells that translate RNA into protein. Unique triplets promoted the binding of specific tRNAs to the ribosome. Leder and Nirenberg were able to determine the sequences of 54 out of 64 codons in their experiments.[7] In 1968, Khorana, Holley and Nirenberg received the Nobel Prize in Physiology or Medicine for their work.[8]

A codon is defined by the initial nucleotide from which translation starts and sets the frame for a run of uninterrupted triplets, which is known as an "open reading frame" (ORF). For example, the string GGGAAACCC, if read from the first position, contains the codons GGG, AAA, and CCC; and, if read from the second position, it contains the codons GGA and AAC; if read starting from the third position, GAA and ACC. Every sequence can, thus, be read in its 5'→3' direction in three reading frames, each of which will produce a different amino acid sequence (in the given example, Gly-Lys-Pro, Gly-Asn, or Glu-Thr, respectively). With double-stranded DNA, there are six possible reading frames, three in the forward orientation on one strand and three reverse on the opposite strand.[9]:330 The actual frame from which a protein sequence is translated is defined by a start codon, usually the first AUG codon in the mRNA sequence.
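
To make the reading-frame arithmetic concrete, here is a minimal Python sketch (not from the original article) that splits the example string GGGAAACCC into codons in each of the three forward frames; it hardcodes only the seven codon assignments this example needs rather than the full 64-entry table.

```python
# Minimal sketch: list the three forward reading frames of the example string
# GGGAAACCC. Only the codons needed for this example are included here;
# a real translator would use the full 64-entry codon table.
CODON_TO_AA = {
    "GGG": "Gly", "AAA": "Lys", "CCC": "Pro",   # frame 1
    "GGA": "Gly", "AAC": "Asn",                 # frame 2
    "GAA": "Glu", "ACC": "Thr",                 # frame 3
}

def forward_frames(seq):
    """Yield (frame number, codons, amino acids) for the three forward frames."""
    for offset in range(3):
        codons = [seq[i:i + 3] for i in range(offset, len(seq) - 2, 3)]
        yield offset + 1, codons, [CODON_TO_AA.get(c, "?") for c in codons]

for frame, codons, aminos in forward_frames("GGGAAACCC"):
    print(f"frame {frame}: {'-'.join(codons)} -> {'-'.join(aminos)}")
# frame 1: GGG-AAA-CCC -> Gly-Lys-Pro
# frame 2: GGA-AAC -> Gly-Asn
# frame 3: GAA-ACC -> Glu-Thr
```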

In eukaryotes, ORFs in exons are often interrupted by introns.

Translation starts with a chain initiation codon or start codon. Unlike stop codons, the codon alone is not sufficient to begin the process. Nearby sequences such as the Shine-Dalgarno sequence in E. coli and initiation factors are also required to start translation. The most common start codon is AUG, which is read as methionine or, in bacteria, as formylmethionine. Alternative start codons depending on the organism include "GUG" or "UUG"; these codons normally represent valine and leucine, respectively, but as start codons they are translated as methionine or formylmethionine.[10]

The three stop codons have been given names: UAG is amber, UGA is opal (sometimes also called umber), and UAA is ochre. "Amber" was named by discoverers Richard Epstein and Charles Steinberg after their friend Harris Bernstein, whose last name means "amber" in German.[11] The other two stop codons were named "ochre" and "opal" in order to keep the "color names" theme. Stop codons are also called "termination" or "nonsense" codons. They signal release of the nascent polypeptide from the ribosome because there is no cognate tRNA that has anticodons complementary to these stop signals, and so a release factor binds to the ribosome instead.[12]

During the process of DNA replication, errors occasionally occur in the polymerization of the second strand. These errors, called mutations, can affect the phenotype of an organism, especially if they occur within the protein coding sequence of a gene. Error rates are usually very low (1 error in every 10-100 million bases) due to the "proofreading" ability of DNA polymerases.[14][15]

Missense mutations and nonsense mutations are examples of point mutations, which can cause genetic diseases such as sickle-cell disease and thalassemia respectively.[16][17][18] Clinically important missense mutations generally change the properties of the coded amino acid residue between being basic, acidic, polar or non-polar, whereas nonsense mutations result in a stop codon.[9]:266

Mutations that disrupt the reading frame sequence by indels (insertions or deletions) of a non-multiple of 3 nucleotide bases are known as frameshift mutations. These mutations usually result in a completely different translation from the original, and are also very likely to cause a stop codon to be read, which truncates the creation of the protein.[19] These mutations may impair the function of the resulting protein, and are thus rare in in vivo protein-coding sequences. One reason inheritance of frameshift mutations is rare is that, if the protein being translated is essential for growth under the selective pressures the organism faces, absence of a functional protein may cause death before the organism is viable.[20] Frameshift mutations may result in severe genetic diseases such as Tay-Sachs disease.[21]

Although most mutations that change protein sequences are harmful or neutral, some mutations have a beneficial effect on an organism.[22] These mutations may enable the mutant organism to withstand particular environmental stresses better than wild type organisms, or reproduce more quickly. In these cases a mutation will tend to become more common in a population through natural selection.[23] Viruses that use RNA as their genetic material have rapid mutation rates,[24] which can be an advantage, since these viruses will evolve constantly and rapidly, and thus evade the defensive responses of e.g. the human immune system.[25] In large populations of asexually reproducing organisms, for example, E. coli, multiple beneficial mutations may co-occur. This phenomenon is called clonal interference and causes competition among the mutations.[26]

Degeneracy is the redundancy of the genetic code. This term was given by Bernfield and Nirenberg. The genetic code has redundancy but no ambiguity (see the codon tables below for the full correlation). For example, although codons GAA and GAG both specify glutamic acid (redundancy), neither of them specifies any other amino acid (no ambiguity). The codons encoding one amino acid may differ in any of their three positions. For example, the amino acid leucine is specified by YUR or CUN (UUA, UUG, CUU, CUC, CUA, or CUG) codons (difference in the first or third position indicated using IUPAC notation), while the amino acid serine is specified by UCN or AGY (UCA, UCG, UCC, UCU, AGU, or AGC) codons (difference in the first, second, or third position).[27]:102–117, 521–522

A practical consequence of redundancy is that errors in the third position of the triplet codon cause only a silent mutation or an error that would not affect the protein because the hydrophilicity or hydrophobicity is maintained by equivalent substitution of amino acids; for example, a codon of NUN (where N = any nucleotide) tends to code for hydrophobic amino acids. NCN yields amino acid residues that are small in size and moderate in hydropathy; NAN encodes average-size hydrophilic residues. The genetic code is so well-structured for hydropathy that a mathematical analysis (Singular Value Decomposition) of 12 variables (4 nucleotides × 3 positions) yields a remarkable correlation (C = 0.95) for predicting the hydropathy of the encoded amino acid directly from the triplet nucleotide sequence, without translation.[28][29] Note in the table, below, eight amino acids are not affected at all by mutations at the third position of the codon, whereas in the figure above, a mutation at the second position is likely to cause a radical change in the physicochemical properties of the encoded amino acid.
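
The degeneracy described above can be checked programmatically. The sketch below (an illustration, not part of the original article) builds the standard genetic code in the DNA alphabet (T in place of U) from the conventional 64-character amino-acid string of NCBI translation table 1, lists the synonymous codons for a few amino acids, and counts how often a single change at the third codon position leaves the encoded amino acid unchanged.

```python
from collections import defaultdict
from itertools import product

# Standard genetic code (NCBI translation table 1). Codons are generated with
# each position cycling through T, C, A, G; '*' marks the three stop codons.
BASES = "TCAG"
AA_STRING = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = {"".join(c): aa for c, aa in zip(product(BASES, repeat=3), AA_STRING)}

# Group synonymous codons: Leu (L), Ser (S) and Arg (R) each have six codons,
# while Met (M) and Trp (W) have only one.
synonyms = defaultdict(list)
for codon, aa in CODON_TABLE.items():
    synonyms[aa].append(codon)
for aa in "LSRMW":
    print(aa, sorted(synonyms[aa]))

# Count single-base substitutions at the third codon position that are silent.
silent = total = 0
for codon, aa in CODON_TABLE.items():
    for base in BASES:
        if base != codon[2]:
            total += 1
            silent += CODON_TABLE[codon[:2] + base] == aa
print(f"silent third-position substitutions: {silent}/{total}")
```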

The frequency of codons, also known as codon usage bias, can vary from species to species with functional implications for the control of translation. The following codon usage table is for the human genome.[30]
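
Codon usage itself is straightforward to tabulate. The following sketch (a toy illustration, not the human-genome table cited above) counts relative codon frequencies in a single in-frame coding sequence; a real analysis would run it over all annotated coding sequences of the genome of interest.

```python
from collections import Counter

def codon_usage(cds):
    """Return relative codon frequencies for an in-frame coding sequence."""
    usable = len(cds) - len(cds) % 3            # drop any trailing partial codon
    codons = [cds[i:i + 3] for i in range(0, usable, 3)]
    counts = Counter(codons)
    total = sum(counts.values())
    return {codon: count / total for codon, count in counts.items()}

# Toy example (hypothetical sequence), just to show the output format.
print(codon_usage("ATGGCCGCCGCTTAA"))
```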

While slight variations on the standard code had been predicted earlier,[31] none were discovered until 1979, when researchers studying human mitochondrial genes discovered they used an alternative code.[32] Many slight variants have been discovered since then,[33] including various alternative mitochondrial codes,[34] and small variants such as translation of the codon UGA as tryptophan in Mycoplasma species, and translation of CUG as a serine rather than a leucine in yeasts of the "CTG clade" (Candida albicans is a member of this group).[35][36][37] Because viruses must use the same genetic code as their hosts, modifications to the standard genetic code could interfere with the synthesis or functioning of viral proteins.[38] However, some viruses (such as totiviruses) have adapted to the genetic code modification of the host.[39] In bacteria and archaea, GUG and UUG are common start codons, but in rare cases, certain proteins may use alternative start codons not normally used by that species.[33]

In certain proteins, non-standard amino acids are substituted for standard stop codons, depending on associated signal sequences in the messenger RNA. For example, UGA can code for selenocysteine and UAG can code for pyrrolysine. Selenocysteine is now viewed as the 21st amino acid, and pyrrolysine is viewed as the 22nd.[33] Unlike selenocysteine, pyrrolysine-encoding UAG is translated with the participation of a dedicated aminoacyl-tRNA synthetase.[40] Both selenocysteine and pyrrolysine may be present in the same organism.[41] Although the genetic code is normally fixed in an organism, the archaeal prokaryote Acetohalobium arabaticum can expand its genetic code from 20 to 21 amino acids (by including pyrrolysine) under different conditions of growth.[42]

Despite these differences, all known naturally occurring codes are very similar to each other, and the coding mechanism is the same for all organisms: three-base codons, tRNA, ribosomes, reading the code in the same direction and translating the code three letters at a time into sequences of amino acids.

Variant genetic codes used by an organism can be inferred by identifying highly conserved genes encoded in that genome, and comparing its codon usage to the amino acids in homologous proteins of other organisms. For example, the program FACIL[43] infers a genetic code by searching which amino acids in homologous protein domains are most often aligned to every codon. The resulting amino acid probabilities for each codon are displayed in a genetic code logo, that also shows the support for a stop codon.

The DNA codon table is essentially identical to that for RNA, but with U replaced by T.

The origin of the genetic code is a part of the question of the origin of life. Under the main hypothesis for the origin of life, the RNA world hypothesis, any model for the emergence of genetic code is intimately related to a model of the transfer from ribozymes (RNA enzymes) to proteins as the principal enzymes in cells. In line with the RNA world hypothesis, transfer RNA molecules appear to have evolved before modern aminoacyl-tRNA synthetases, so the latter cannot be part of the explanation of its patterns.[45]

A consideration of a hypothetical random genetic code further motivates a biochemical or evolutionary model for the origin of the genetic code. If amino acids were randomly assigned to triplet codons, there would be 1.5 × 10⁸⁴ possible genetic codes to choose from.[46]:163 This number is found by calculating how many ways there are to place 21 items (20 amino acids plus one stop) in 64 bins, wherein each item is used at least once.[2] In fact, the distribution of codon assignments in the genetic code is nonrandom.[47] In particular, the genetic code clusters certain amino acid assignments. For example, amino acids that share the same biosynthetic pathway tend to have the same first base in their codons. This could be an evolutionary relic of an earlier, simpler genetic code with fewer amino acids that later diverged to code for a larger set of amino acids.[48] It could also reflect steric and chemical properties that had another effect on the codon during its evolution. Amino acids with similar physical properties also tend to have similar codons,[49][50] reducing the problems caused by point mutations and mistranslations.[47]
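
The 1.5 × 10⁸⁴ figure can be reproduced by counting, via inclusion-exclusion, the surjections from the 64 codons onto 21 meanings (20 amino acids plus stop), as in this short sketch:

```python
from math import comb

# Count assignments of 21 meanings (20 amino acids + stop) to the 64 codons
# such that every meaning is used at least once (surjections), using
# inclusion-exclusion over the meanings that might be left unused.
codons, meanings = 64, 21
surjections = sum(
    (-1) ** k * comb(meanings, k) * (meanings - k) ** codons
    for k in range(meanings + 1)
)
print(f"{surjections:.3e}")  # about 1.5e+84, matching the figure quoted above
```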

Given the non-random genetic triplet coding scheme, it has been suggested that a tenable hypothesis for the origin of genetic code should address multiple aspects of the codon table such as absence of codons for D-amino acids, secondary codon patterns for some amino acids, confinement of synonymous positions to third position, a limited set of only 20 amino acids instead of a number closer to 64, and the relation of stop codon patterns to amino acid coding patterns.[51]

There are three main ideas for the origin of the genetic code, and many models belong to either one of them or to a combination thereof:[52]

Hypotheses for the origin of the genetic code have addressed a variety of scenarios:[56]

Since 2001, 40 non-natural amino acids have been added into proteins by creating a unique codon (recoding) and a corresponding transfer-RNA:aminoacyl-tRNA synthetase pair to encode them. These engineered amino acids, with diverse physicochemical and biological properties, are used as tools for exploring protein structure and function and for creating novel or enhanced proteins.[71][72]

H. Murakami and M. Sisido have extended some codons to have four and five bases. Steven A. Benner constructed a functional 65th (in vivo) codon.[73]

Read this article:
Genetic code - Wikipedia


Biotechnology Journals | Open Access – omicsonline.org

December 27th, 2016 12:42 am

Journal of Biotechnology & Biomaterials is a peer-reviewed journal which publishes high-quality articles reporting original research, reviews, commentaries, opinions, rapid communications, case reports, etc., on all aspects of Biotechnology and Biomaterials. Content areas include Plant/Animal/Microbial Biotechnology, Applied Biotechnology, Red/Medical Biotechnology, Green/Agricultural Biotechnology, Environmental Biotechnology, Blue/Marine Biotechnology, White/Industrial Biotechnology, Food Biotechnology, Orthopedic and Dental Biomaterials, Cardiovascular Biomaterials, Ophthalmologic Biomaterials, Bioelectrodes and Biosensors, Burn Dressings and Skin Substitutes, Sutures, Drug Delivery Systems, etc. This biotechnology journal offers an Open Access option to meet the needs of authors and maximize article visibility.

The journal is an academic publication that provides an opportunity for researchers and scientists to explore the latest research developments in the use of living organisms and bioprocesses in engineering, technology and medicine. The Journal of Biotechnology and Biomaterials maintains high standards of quality and provides a collaborative open access platform for scientists throughout the world in the field of Biotechnology and Biomaterials. Journal of Biotechnology and Biomaterials is a scholarly Open Access journal that aims to publish a complete and reliable source of information on the latest research topics.

The journal is using the Editorial Manager System for quality in the peer-review process. Editorial Manager System is an online submission and review system, where authors can submit manuscripts and track their progress. Reviewers can download manuscripts and submit their opinions. Editors can manage the whole submission, review, revise & publish process. Publishers can see what manuscripts are in the pipeline awaiting publication.

The journal assures a rapid 21-day review process with international peer-review standards and quality reviewers. E-mail is sent automatically to the persons concerned when significant events occur. After publication, articles are freely available online to researchers worldwide without restrictions or subscriptions.

Applied biotechnology offers a major opportunity to study science at the edge of technology and innovation. Applied microbiology and biotechnology focuses on prokaryotic and eukaryotic cells, relevant enzymes and proteins; applied genetics and molecular biotechnology; genomics and proteomics; applied microbial and cell physiology; environmental biotechnology; processes and products; and more.

Related Journals of Applied Biotechnology

Current Opinion in Biotechnology, Biotechnology Advances, Biotechnology for Biofuels, Journal of Bioprocessing & Biotechniques, Journal of Bioterrorism & Biodefense, Molecular Biology, Biology and Medicine, Crop Breeding and Applied Biotechnology, Applied Mycology and Biotechnology, Asian Biotechnology and Development Review, Biotechnology applications Journals, Journal of Applied Biomaterials & Fundamental Materials.

Biomaterials are commonly used in various medical devices and systems such as drug delivery systems, hybrid organs, tissue cultures, synthetic skin, synthetic blood vessels, artificial hearts, screws, plates, cardiac pacemakers, wires and pins for bone treatments, total artificial joint implants, skull reconstruction, and dental and maxillofacial applications. Among these, the application of biomaterials in the cardiovascular system is the most significant. The use of cardiovascular biomaterials (CB) depends on their blood compatibility and their integration with the surrounding environment where they are implanted.

Related Journals of Cardiovascular biomaterials

Journal of Biomimetics Biomaterials and Tissue Engineering, Journal of Advanced Chemical Engineering, Journal of Bioprocessing & Biotechniques, Journal of Biomaterials Science, Polymer Edition, Journal of Biomaterials Applications, Trends in Biomaterials and Artificial Organs, International Journal of Biomaterials and Journal of Biomaterials and Tissue Engineering, Cardiovascular biomaterials Journals.

Biomaterials are used daily in surgery, dental applications and drug delivery. A biomaterial implant is a construct impregnated with pharmaceutical products that can be placed into the body and permits the prolonged release of a drug over an extended period of time. A biomaterial may also be an autograft, allograft or xenograft used as a transplant material.

Related journals of Biomaterial implants

Advanced Functional Materials, Biomaterials, Advanced healthcare materials, Journal of Biomimetics Biomaterials and Tissue Engineering, Journal of Molecular and Genetic Medicine, Journal of Phylogenetics & Evolutionary Biology, Clinical Oral Implants Research, International Journal of Oral and Maxillofacial Implants, Journal of Long-Term Effects of Medical Implants and Cochlear Implants International, Biomaterials Journals, Biomaterial implants Journals.

Animal Biotechnology covers the identification and manipulation of genes and their products, stressing applications in domesticated animals. Animals are used in many ways in biotechnology. Biotechnology provides new tools for improving human health and animal health and welfare and increasing livestock productivity. Biotechnology improves the food we eat - meat, milk and eggs. Biotechnology can also improve an animal's impact on the environment.

Related Journals of Animal biotechnology

Journal of Bioprocessing & Biotechniques, Journal of Molecular and Genetic Medicine, Biology and Medicine, Journal of Advanced Chemical Engineering, Animal Biotechnology, African Journal of Biotechnology, Current Pharmaceutical Biotechnology, Critical Reviews in Biotechnology and Reviews in Environmental Science and Biotechnology, Asian Journal of Microbiology Biotechnology and Environmental Sciences.

A biomaterial is any surface, matter, or construct that interacts with biological systems, and biomaterials science is the study of such materials. Biomaterials science encompasses elements of medicine, biology, chemistry, tissue engineering and materials science. Biomaterials are either derived from nature or synthesized in the laboratory using various types of chemical approaches involving metallic components, polymers, ceramics or composite materials. They are often used for medical applications.

Related Journals of Biomaterials

Biosensors and Bioelectronics, Journal of Bioactive and Compatible Polymers, Journal of Tissue Engineering, Journal of Biomimetics Biomaterials and Tissue Engineering, Journal of Bioterrorism & Biodefense, Fermentation Technology, Journal of Phylogenetics & Evolutionary Biology, International Journal of Nano and Biomaterials, Journal of Biomimetics, Biomaterials, and Tissue Engineering, Journal of Applied Biomaterials and Fundamental Materials, Journal of Biomaterials and Tissue Engineering and International Journal of Biomaterials.

Nanobiotechnology, nanobiology and bionanotechnology are terms that refer to the intersection of nanotechnology and biology. Bionanotechnology and nanobiotechnology serve as blanket terms for various related technologies. This discipline reflects the merger of biological research with various fields of nanotechnology. Concepts enhanced through nanobiology include nanodevices, nanoparticles, and nanoscale phenomena. Nanotechnology, in turn, often takes biological systems as its inspiration.

Related Journals of Nano biotechnology

Biopolymers, Journal of the Mechanical Behavior of Biomedical Materials, Journal of Tissue Engineering and Regenerative Medicine, Journal of Bioprocessing & Biotechniques, Journal of Bioterrorism & Biodefense, Journal of Molecular and Genetic Medicine, Journal of Advanced Chemical Engineering, Journal of Nanobiotechnology, Artificial Cells, Nanomedicine and Biotechnology, IET Nanobiotechnology and Wiley Interdisciplinary Reviews: Nanomedicine and Nanobiotechnology, Australian journal of biotechnology, International Journal of Nano & Biomaterials, Nano biotechnology Journals.

Biocatalysis uses natural catalysts, such as protein enzymes, to perform chemical transformations on organic compounds. Both enzymes that have been more or less isolated and enzymes still residing inside living cells are employed for this task. Since biocatalysis deals with enzymes and microorganisms, it is historically classified separately from "homogeneous catalysis" and "heterogeneous catalysis". However, biocatalysis can simply be regarded as a form of heterogeneous catalysis.

Related Journals of Biocatalysis

Biology and Medicine, Fermentation Technology, Journal of Advanced Chemical Engineering, Biocatalysis and Biotransformation and Biocatalysis and Agricultural Biotechnology.

Agricultural biotechnology is a collection of scientific techniques used to improve plants, animals and microorganisms. Based on the structure and characteristics of DNA, scientists have developed solutions to increase agricultural productivity. Scientists have learned how to move genes from one organism to another. This has been called genetic modification (GM), genetic engineering (GE) or genetic improvement (GI). Regardless of the name, the process allows the transfer of useful characteristics (such as resistance to a disease) into a plant, animal or microorganism by inserting genes from another organism.

Related Journals of Agricultural biotechnology

Journal of Phylogenetics & Evolutionary Biology, Journal of Molecular and Genetic Medicine, Molecular Biology, Journal of Bioprocessing & Biotechniques, Biocatalysis and Agricultural Biotechnology and Chinese Journal of Agricultural Biotechnology, Plant Biotechnology Journal, Plant Biotechnology Journals.

A biomolecule is any molecule present in living organisms; the term covers large macromolecules such as proteins, lipids, polysaccharides, and nucleic acids, as well as small molecules including primary metabolites, secondary metabolites, and natural products. A common name for this class of material is biological materials. Nucleosides are molecules formed by attaching a nucleobase to a ribose or deoxyribose ring. Nucleosides can be phosphorylated by specific kinases in the cell, producing nucleotides.

Related Journals of Bio-molecules

Molecular Biology, Biology and Medicine, Journal of Molecular and Genetic Medicine, Journal of Phylogenetics & Evolutionary Biology, Biomolecules and Therapeutics, Applied Biochemistry and Biotechnology - Part B Molecular Biotechnology, Asia-Pacific Journal of Molecular Biology and Biotechnology, Bio-molecules Journals.

In developing countries, the application of biotechnology to food processing has long been a subject of argument and discussion. Biotechnological studies focus on the development and improvement of traditional fermentation processes. The application of biotechnology to solve environmental problems in the environment and in ecosystems is called environmental biotechnology, which is applied and used to study the natural environment.

Related Journals of Biotechnology applications

Nature Biotechnology, Trends in Biotechnology, Metabolic Engineering, Journal of Bioprocessing & Biotechniques, Journal of Phylogenetics & Evolutionary Biology, Journal of Advanced Chemical Engineering, Applied Microbiology and Biotechnology, Applied Biochemistry and Biotechnology - Part A Enzyme Engineering and Biotechnology, Biotechnology and Applied Biochemistry, Applied Biotechnology Journals, Applied Microbiology and Biotechnology, Systems and Synthetic Biology and IET Synthetic Biology.

Industrial or white biotechnology uses enzymes and micro-organisms to make biobased products in sectors like chemicals, food and feed, detergents, paper and pulp, textiles and bioenergy (such as biofuels or biogas). It uses renewable raw materials and is one of the most promising, newest approaches towards lowering greenhouse gas emissions. Industrial biotechnology application has been proven to make significant contributions towards mitigating the impacts of climate change in these and other sectors.

Related Journals of White/industrial biotechnology

Critical Reviews in Biotechnology, Biotechnology and Bioengineering, Microbial Biotechnology, Journal of Bioprocessing & Biotechniques, Journal of Bioterrorism & Biodefense, Fermentation Technology, Molecular Biology, Journal of Phylogenetics & Evolutionary Biology, Journal of Molecular and Genetic Medicine, Chemical Sciences Journal, Industrial Biotechnology and Journal of Industrial Microbiology and Biotechnology, White/industrial biotechnology Journals.

See original here:
Biotechnology Journals | Open Access - omicsonline.org


Biotechnology Conferences | USA Biotech events …

December 27th, 2016 12:42 am

Session & Tracks

Track 1: Molecular Biotechnology

Molecular biotechnology is the use of laboratory techniques to study and modify nucleic acids and proteins for applications in areas such as human and animal health, agriculture, and the environment. Molecular biotechnology results from the convergence of many areas of research, such as molecular biology, microbiology, biochemistry, immunology, genetics, and cell biology. It is an exciting field fueled by the ability to transfer genetic information between organisms with the goal of understanding important biological processes or creating a useful product.

Related Conferences

11th World Congress on Biotechnology and Biotech Industries Meet, July 28-29, 2016, Berlin, Germany; 10th Asia Pacific Biotech Congress, July 25-27, 2016, Bangkok, Thailand; 13th Biotechnology Congress, Nov 28-30, 2016, San Francisco, USA; Global Biotechnology Congress 2016, May 11th-14th 2016, Boston, MA, USA; BIO Investor Forum, October 20-21, 2015, San Francisco, USA; BIO Latin America Conference, October 14-16, 2015, Rio de Janeiro, Brazil; Bio Pharm America 2015, 8th Annual International Partnering Conference, September 15-17, 2015, Boston, MA, USA.

Track 2: Environmental Biotechnology

Biotechnology is applied and used to study the natural environment. Environmental biotechnology could also imply that one tries to harness biological processes for commercial uses and exploitation. It is "the development, use and regulation of biological systems for remediation of contaminated environments and for environment-friendly processes (green manufacturing technologies and sustainable development)". Environmental biotechnology can simply be described as "the optimal use of nature, in the form of plants, animals, bacteria, fungi and algae, to produce renewable energy, food and nutrients in a synergistic integrated cycle of profit making processes where the waste of each process becomes the feedstock for another process".

Related Conferences

11th World Congress on Biotechnology and Biotech Industries Meet, July 28-29, 2016, Berlin, Germany; 10th Asia Pacific Biotech Congress, July 25-27, 2016, Bangkok, Thailand; 11th Euro Biotechnology Congress, November 07-09, 2016, Alicante, Spain; 13th Biotechnology Congress, Nov 28-30, 2016, San Francisco, USA; Global Biotechnology Congress 2016, May 11th-14th 2016, Boston, MA, USA; Biomarker Summit 2016, March 21-23, 2016, San Diego, CA, USA; 14th Vaccines Research & Development, July 7-8, Boston, USA; Pharmaceutical & Biotech Patent Litigation Forum, Mar 14-15, 2016, Amsterdam, Netherlands.

Track 3: Animal Biotechnology

Animal biotechnology improves the food we eat - meat, milk and eggs - and can improve an animal's impact on the environment. It is the use of science and engineering to modify living organisms. The goal is to make products, to improve animals and to develop microorganisms for specific agricultural uses. It enhances the ability to detect, treat and prevent diseases, including creating transgenic animals (animals with one or more genes introduced by human intervention), using gene knock-out technology to make animals with a specific inactivated gene and producing nearly identical animals by somatic cell nuclear transfer (or cloning).

Related Conferences

11th World Congress on Biotechnology and Biotech Industries Meet, July 28-29, 2016, Berlin, Germany; 10th Asia Pacific Biotech Congress, July 25-27, 2016, Bangkok, Thailand; 11th Euro Biotechnology Congress, November 07-09, 2016, Alicante, Spain; 13th Biotechnology Congress, Nov 28-30, 2016, San Francisco, USA; Global Biotechnology Congress 2016, May 11th-14th 2016, Boston, MA, USA; Biomarker Summit 2016, March 21-23, 2016, San Diego, CA, USA; 14th Vaccines Research & Development, July 7-8, Boston, USA; Pharmaceutical & Biotech Patent Litigation Forum, Mar 14-15, 2016, Amsterdam, Netherlands; 4th Biomarkers in Diagnostics, Oct 07-08, 2015, Berlin, Germany, DEU.

Track 4: Medical Biotechnology and Biomedical Engineering

Medicine makes extensive use of biotechnology techniques in diagnosing and treating different diseases. Biotechnology also gives the population an opportunity to protect themselves from hazardous diseases. The branch of biotechnology known as genetic engineering has introduced techniques such as gene therapy, recombinant DNA technology and the polymerase chain reaction, which employ genes and DNA molecules to diagnose diseases and insert new, healthy genes into the body to replace damaged cells. Several applications of biotechnology are playing their part in the field of medicine and giving good results.

Related Conferences

11th World Congress on Biotechnology and Biotech Industries Meet, July 28-29, 2016, Berlin, Germany; 10th Asia Pacific Biotech Congress, July 25-27, 2016, Bangkok, Thailand; 11th Euro Biotechnology Congress, November 07-09, 2016, Alicante, Spain; 13th Biotechnology Congress, Nov 28-30, 2016, San Francisco, USA; Global Biotechnology Congress 2016, May 11th-14th 2016, Boston, MA, USA; Biomarker Summit 2016, March 21-23, 2016, San Diego, CA, USA; 14th Vaccines Research & Development, July 7-8, Boston, USA; Pharmaceutical & Biotech Patent Litigation Forum, Mar 14-15, 2016, Amsterdam, Netherlands; 4th Biomarkers in Diagnostics, Oct 07-08, 2015, Berlin, Germany, DEU.

Track 5: Agricultural Biotechnology

Biotechnology is being used to address problems in all areas of agricultural production and processing. This includes plant breeding to raise and stabilize yields; to improve resistance to pests, diseases and abiotic stresses such as drought and cold; and to enhance the nutritional content of foods. Modern agricultural biotechnology improves crops in more targeted ways. The best known technique is genetic modification, but the term agricultural biotechnology (or green biotechnology) also covers such techniques as Marker Assisted Breeding, which increases the effectiveness of conventional breeding.

Related Conferences

3rd Global Food Safety Conference, September 01-03, 2016, Atlanta, USA; 10th Asia Pacific Biotech Congress, July 25-27, 2016, Bangkok, Thailand; 11th Euro Biotechnology Congress, November 07-09, 2016, Alicante, Spain; 12th Biotechnology Congress, Nov 14-15, 2016, San Francisco, USA; Biologically Active Compounds in Food, October 15-16, 2015, Lodz, Poland; World Conference on Innovative Animal Nutrition and Feeding, October 15-17, 2015, Budapest, Hungary; 18th International Conference on Food Science and Biotechnology, November 28-29, 2016, Istanbul, Turkey; 18th International Conference on Agricultural Science, Biotechnology, Food and Animal Science, January 7-8, 2016, Singapore; International Indonesia Seafood and Meat, 15-17 October 2016, Jakarta, Indonesia.

Track 6: Industrial Biotechnology and Pharmaceutical Biotechnology

Industrial biotechnology is the application of biotechnology for industrial purposes, including industrial fermentation. It is the practice of using cells such as micro-organisms, or components of cells like enzymes, to generate industrially useful products in sectors such as chemicals, food and feed, detergents, paper and pulp, textiles and biofuels. Industrial biotechnology offers a premier forum bridging basic research and R&D with later-stage commercialization for sustainable bio-based industrial and environmental applications.

Related Conferences

11th World Congress on Biotechnology and Biotech Industries Meet, July 28-29, 2016, Berlin, Germany; 10th Asia Pacific Biotech Congress, July 25-27, 2016, Bangkok, Thailand; 11th Euro Biotechnology Congress, November 07-09, 2016, Alicante, Spain; 13th Biotechnology Congress, Nov 28-30, 2016, San Francisco, USA; Global Biotechnology Congress 2016, May 11th-14th 2016, Boston, MA, USA; Biomarker Summit 2016, March 21-23, 2016, San Diego, CA, USA; 14th Vaccines Research & Development, July 7-8, Boston, USA; Pharmaceutical & Biotech Patent Litigation Forum, Mar 14-15, 2016, Amsterdam, Netherlands; 4th Biomarkers in Diagnostics, Oct 07-08, 2015, Berlin, Germany, DEU.

Track 8: Microbial and Biochemical Technology

Microorganisms have been exploited for their specific biochemical and physiological properties from the earliest times for baking, brewing, and food preservation and more recently for producing antibiotics, solvents, amino acids, feed supplements, and chemical feedstuffs. Over time, there has been continuous selection by scientists of special strains of microorganisms, based on their efficiency to perform a desired function. Progress, however, has been slow, often difficult to explain, and hard to repeat. Recent developments in molecular biology and genetic engineering could provide novel solutions to long-standing problems. Over the past decade, scientists have developed the techniques to move a gene from one organism to another, based on discoveries of how cells store, duplicate, and transfer genetic information.

Related conferences

3rd Global Food Safety Conference, September 01-03, 2016, Atlanta, USA; 10th Asia Pacific Biotech Congress, July 25-27, 2016, Bangkok, Thailand; 11th Euro Biotechnology Congress, November 07-09, 2016, Alicante, Spain; 12th Biotechnology Congress, Nov 14-15, 2016, San Francisco, USA; Biologically Active Compounds in Food, October 15-16, 2015, Lodz, Poland; World Conference on Innovative Animal Nutrition and Feeding, October 15-17, 2015, Budapest, Hungary; 18th International Conference on Food Science and Biotechnology, November 28-29, 2016, Istanbul, Turkey; 18th International Conference on Agricultural Science, Biotechnology, Food and Animal Science, January 7-8, 2016, Singapore; International Indonesia Seafood and Meat, 15-17 October 2016, Jakarta, Indonesia.

Track 9: Food Processing and Technology

Food processing is a process by which non-palatable and easily perishable raw materials are converted to edible and potable foods and beverages, which have a longer shelf life. Biotechnology helps in improving the edibility, texture, and storage of food; in preventing attack on food, mainly dairy, by viruses such as bacteriophage; in producing antimicrobial agents to destroy unwanted microorganisms in food that cause toxicity; and in preventing the formation of toxins and degrading anti-nutritional elements present naturally in food.

Related Conferences

11th World Congress on Biotechnology and Biotech Industries Meet, July 28-29, 2016, Berlin, Germany; 10th Asia Pacific Biotech Congress, July 25-27, 2016, Bangkok, Thailand; 13th Biotechnology Congress, Nov 28-30, 2016, San Francisco, USA; Global Biotechnology Congress 2016, May 11th-14th 2016, Boston, MA, USA; BIO Investor Forum, October 20-21, 2015, San Francisco, USA; BIO Latin America Conference, October 14-16, 2015, Rio de Janeiro, Brazil; Bio Pharm America 2015, 8th Annual International Partnering Conference, September 15-17, 2015, Boston, MA, USA.

Track 10: Genetic Engineering and Molecular Biology

One kind of biotechnology is gene technology, sometimes called 'genetic engineering' or 'genetic modification', where the genetic material of living things is deliberately altered to enhance or remove a particular trait and allow the organism to perform new functions. Genes within a species can be modified, or genes can be moved from one species to another. Genetic engineering has applications in medicine, research and agriculture and can be used on a wide range of plants, animals and microorganisms. It has resulted in a series of medical products. The first two commercially prepared products from recombinant DNA technology were insulin and human growth hormone, both of which were produced in E. coli bacteria.

The field of molecular biology overlaps with biology and chemistry and in particular, genetics and biochemistry. A key area of molecular biology concerns understanding how various cellular systems interact in terms of the way DNA, RNA and protein synthesis function.

Related Conferences

11th World Congress on Biotechnology and Biotech Industries Meet, July 28-29, 2016, Berlin, Germany; 10th Asia Pacific Biotech Congress, July 25-27, 2016, Bangkok, Thailand; 11th Euro Biotechnology Congress, November 07-09, 2016, Alicante, Spain; 13th Biotechnology Congress, Nov 28-30, 2016, San Francisco, USA; Global Biotechnology Congress 2016, May 11th-14th 2016, Boston, MA, USA; Biomarker Summit 2016, March 21-23, 2016, San Diego, CA, USA; 14th Vaccines Research & Development, July 7-8, Boston, USA; Pharmaceutical & Biotech Patent Litigation Forum, Mar 14-15, 2016, Amsterdam, Netherlands; 4th Biomarkers in Diagnostics, Oct 07-08, 2015, Berlin, Germany, DEU.

Track 11: Tissue Science and Engineering

Tissue engineering is emerging as a significant potential alternative or complementary solution, whereby tissue and organ failure is addressed by implanting natural, synthetic, or semisynthetic tissue and organ mimics that are fully functional from the start or that grow into the required functionality. Initial efforts have focused on skin equivalents for treating burns, but an increasing number of tissue types are now being engineered, as well as biomaterials and scaffolds used as delivery systems. A variety of approaches are used to coax differentiated or undifferentiated cells, such as stem cells, into the desired cell type. Notable results include tissue-engineered bone, blood vessels, liver, muscle, and even nerve conduits. As a result of the medical and market potential, there is significant academic and corporate interest in this technology.

Related Conferences

11th World Congress on Biotechnology and Biotech Industries Meet, July 28-29, 2016, Berlin, Germany; 10th Asia Pacific Biotech Congress, July 25-27, 2016, Bangkok, Thailand; 11th Euro Biotechnology Congress, November 07-09, 2016, Alicante, Spain; 13th Biotechnology Congress, Nov 28-30, 2016, San Francisco, USA; Global Biotechnology Congress 2016, May 11th-14th 2016, Boston, MA, USA; Biomarker Summit 2016, March 21-23, 2016, San Diego, CA, USA; 14th Vaccines Research & Development, July 7-8, Boston, USA; Pharmaceutical & Biotech Patent Litigation Forum, Mar 14-15, 2016, Amsterdam, Netherlands; 4th Biomarkers in Diagnostics, Oct 07-08, 2015, Berlin, Germany, DEU.

Track 12: Nano Biotechnology

Nanobiotechnology, bionanotechnology, and nanobiology are terms that refer to the intersection of nanotechnology and biology. Bionanotechnology and nanobiotechnology serve as blanket terms for various related technologies. The most important objectives frequently found in nanobiology involve applying nano tools to relevant medical/biological problems and refining these applications. Developing new tools, such as peptide nanosheets, for medical and biological purposes is another primary objective in nanotechnology.

Related Conferences

8th World Medical Nanotechnology Congress & Expo, June 9-11, Dallas, USA; 6th Global Experts Meeting and Expo on Nanomaterials and Nanotechnology, April 21-23, 2016, Dubai, UAE; 12th Nanotechnology Products Expo, Nov 10-12, 2016, Melbourne, Australia; 5th International Conference on Nanotech and Expo, November 16-18, 2015, San Antonio, USA; 11th International Conference and Expo on Nanoscience and Molecular Nanotechnology, September 26-28, 2016, London, UK; 18th International Conference on Nanotechnology and Biotechnology, February 4-5, 2016, Melbourne, Australia; 16th International Conference on Nanotechnology, August 22-25, 2016, Sendai, Japan; International Conference on Nanoscience and Nanotechnology, 7-11 Feb 2016, Canberra, Australia; 18th International Conference on Nanoscience and Nanotechnology, February 15-16, 2016, Istanbul, Turkey; International Nanotechnology Conference & Expo, April 4-6, 2016, Baltimore, USA.

Track 13: Bioinformatics and Biosensors

Bioinformatics is the application of computer technology to the management of biological information. Computers are used to gather, store, analyze and integrate biological and genetic information which can then be applied to gene-based drug discovery and development. The science of bioinformatics, which is the melding of molecular biology with computer science, is essential to the use of genomic information in understanding human diseases and in the identification of new molecular targets for drug discovery. This interesting field of science has many applications and research areas where it can be applied. It plays an essential role in today's plant science. As the amount of data grows exponentially, there is a parallel growth in the demand for tools and methods in data management, visualization, integration, analysis, modeling, and prediction.

Related conferences

11th World Congress on Biotechnology and Biotech Industries Meet, July 28-29, 2016, Berlin, Germany; 10th Asia Pacific Biotech Congress, July 25-27, 2016, Bangkok, Thailand; 11th Euro Biotechnology Congress, November 07-09, 2016, Alicante, Spain; 12th Biotechnology Congress, Nov 14-15, 2016, San Francisco, USA; BIO IPCC Conference, Cary, North Carolina, USA; World Congress on Industrial Biotechnology, April 17-20, 2016, San Diego, CA; 6th Bio-based Chemicals: Commercialization & Partnering, November 16-17, 2015, San Francisco, CA, USA; The European Forum for Industrial Biotechnology and the Bioeconomy, 27-29 October 2015, Brussels, Belgium; 4th Biotechnology World Congress, February 15th-18th, 2016, Dubai, United Arab Emirates; International Conference on Advances in Bioprocess Engineering and Technology, 20th to 22nd January 2016, Kolkata, India; Global Biotechnology Congress 2016, May 11th-14th 2016, Boston, MA, USA.

Track 14: Biotechnology Investments and Biotech Grants

Every new business needs some startup capital for research, product development and production, permits and licensing and other overhead costs, in addition to what is needed to pay your staff, if you have any. Biotechnology products arise from successful biotech companies. These companies are built by talented individuals in possession of a scientific breakthrough that is translated into a product or service idea, which is ultimately brought into commercialization. At the heart of this effort is the biotech entrepreneur, who forms the company with a vision they believe will benefit the lives and health of countless individuals. Entrepreneurs start biotechnology companies for various reasons, but creating revolutionary products and tools that impact the lives of potentially millions of people is one of the fundamental reasons why all entrepreneurs start biotechnology companies.

10th Asia Pacific Biotech Congress, July 25-27, 2016, Bangkok; 11th Euro Biotechnology Congress, November 7-9, 2016, Alicante, Spain; 11th World Congress on Biotechnology and Biotech Industries Meet, July 28-29, 2016, Berlin, Germany; 13th Biotechnology Congress, November 28-30, 2016, San Francisco, USA; 10th Asia Pacific Biotech Congress, July 25-27, 2016, Bangkok, Thailand; BIO International Convention, June 6-9, 2016, San Francisco, CA; Biotech Japan, May 11-13, 2016, Tokyo, Japan; NANO BIO EXPO 2016, Jan. 27-29, 2016, Tokyo, Japan; ArabLab Expo 2016, March 20-23, Dubai; 14th International exhibition for laboratory technology, chemical analysis, biotechnology and diagnostics, 12-14 Apr 2016, Moscow, Russia.

Read this article:
Biotechnology Conferences | USA Biotech events ...


Masters in Biotechnology Programs and … – Masters PhD Degrees

December 27th, 2016 12:42 am

Considering a Masters in Biotechnology Program or reviewing options for Masters Degrees in Biotechnology? A Masters in Biotechnology can open up exciting...

Biotechnology is a challenging field that can involve a number of facets of both science and business or law. Many biotechnology master's degree programs focus on aspects of biology, cell biology, chemistry, or biological or chemical engineering. In general, biotechnology degrees involve research whether they are at a Masters or PhD level.

Scientific understanding is rapidly evolving, particularly in areas of cellular and molecular systems. Biotechnology master's students can therefore enjoy rich study opportunities particularly in fields such as genetic engineering, the Human Genome project, the production of new medicinal products, and research into the relationship between genetic malfunction and the origin of disease. These are just a few of the many areas that biotechnology students have the opportunity to explore today.

Another focus of biotechnology masters programs may be to equip students with the combination of science and business knowledge they need to help develop products and move them toward production. Today's complex business environment and government regulations require many steps, and people who can both understand and help produce new scientific technologies as well as get them approved and bring them to market.

Master degrees in biotechnology might prepare students to pursue careers in a variety of industries. While many students go on to further research or academic positions, there may also be some demand for biotechnologists outside of academia, both in the government and private sectors. Biotechnologists might pursue careers in anything from research to applied science and manufacturing. Those with specializations in business aspects of biotechnology may be qualified to pursue management positions within organizations attempting to produce and market new biotechnology.

Excerpt from:
Masters in Biotechnology Programs and ... - Masters PhD Degrees


Drosophila melanogaster – Wikipedia

December 25th, 2016 4:44 pm

Drosophila melanogaster is a species of fly (the taxonomic order Diptera) in the family Drosophilidae. The species is known generally as the common fruit fly or vinegar fly. Starting with Charles W. Woodworth's proposal of the use of this species as a model organism, D. melanogaster continues to be widely used for biological research in studies of genetics, physiology, microbial pathogenesis, and life history evolution. It is typically used because it is an animal species that is easy to care for, has four pairs of chromosomes, breeds quickly, and lays many eggs.[2] D. melanogaster is a common pest in homes, restaurants, and other occupied places where food is served.[3]

Flies belonging to the family Tephritidae are also called "fruit flies". This can cause confusion, especially in Australia and South Africa, where the Mediterranean fruit fly Ceratitis capitata is an economic pest.

Wildtype fruit flies are yellow-brown, with brick-red eyes and transverse black rings across the abdomen. They exhibit sexual dimorphism: females are about 2.5 millimeters (0.098 in) long; males are slightly smaller with darker backs. Males are easily distinguished from females based on colour differences, with a distinct black patch at the abdomen, less noticeable in recently emerged flies (see fig.), and the sex combs (a row of dark bristles on the tarsus of the first leg). Furthermore, males have a cluster of spiky hairs (claspers) surrounding the reproductive parts used to attach to the female during mating. There are extensive images at FlyBase.[4]

Egg of D. melanogaster

The D. melanogaster lifespan is about 30 days at 29 °C (84 °F).

The developmental period for D. melanogaster varies with temperature, as with many ectothermic species. The shortest development time (egg to adult), 7 days, is achieved at 28 °C (82 °F).[5][6] Development times increase at higher temperatures (11 days at 30 °C or 86 °F) due to heat stress. Under ideal conditions, the development time at 25 °C (77 °F) is 8.5 days,[5][6][7] at 18 °C (64 °F) it takes 19 days[5][6] and at 12 °C (54 °F) it takes over 50 days.[5][6] Under crowded conditions, development time increases,[8] while the emerging flies are smaller.[8][9] Females lay some 400 eggs (embryos), about five at a time, into rotting fruit or other suitable material such as decaying mushrooms and sap fluxes. The eggs, which are about 0.5 mm long, hatch after 12-15 hours (at 25 °C or 77 °F).[5][6] The resulting larvae grow for about 4 days (at 25 °C) while molting twice (into second- and third-instar larvae), at about 24 and 48 h after hatching.[5][6] During this time, they feed on the microorganisms that decompose the fruit, as well as on the sugar of the fruit itself. The mother puts feces on the egg sacs to establish the same microbial composition in the larvae's guts which has worked positively for herself.[10] Then the larvae encapsulate in the puparium and undergo a four-day-long metamorphosis (at 25 °C), after which the adults eclose (emerge).[5][6]

Females become receptive to courting males at about 8–12 hours after emergence.[11] Specific neuron groups in females have been found to affect copulation behavior and mate choice. One such group in the abdominal nerve cord allows the female fly to pause her body movements to copulate.[12] Activation of these neurons induces the female to cease movement and orient herself towards the male to allow for mounting. If the group is inactivated, the female remains in motion and does not copulate. Various chemical signals such as male pheromones often are able to activate the group.[12]

Female fruit flies prefer a shorter copulation duration, while males prefer it to last longer.[13] Males perform a sequence of five behavioral patterns to court females. First, males orient themselves while playing a courtship song by horizontally extending and vibrating their wings. Soon after, the male positions itself at the rear of the female's abdomen in a low posture to tap and lick the female genitalia. Finally, the male curls its abdomen and attempts copulation. Females can reject males by moving away, kicking, and extruding their ovipositor.[14] Copulation lasts around 15–20 minutes,[15] during which males transfer a few hundred very long (1.76 mm) sperm cells in seminal fluid to the female.[16] Females store the sperm in a tubular receptacle and in two mushroom-shaped spermathecae; sperm from multiple matings compete for fertilization. A last-male precedence is believed to exist, in which the last male to mate with a female sires about 80% of her offspring. This precedence was found to occur through both displacement and incapacitation.[17] The displacement is attributed to sperm handling by the female fly as multiple matings are conducted and is most significant during the first 1–2 days after copulation. Displacement from the seminal receptacle is more significant than displacement from the spermathecae.[17] Incapacitation of first-male sperm by second-male sperm becomes significant 2–7 days after copulation. The seminal fluid of the second male is believed to be responsible for this incapacitation mechanism (without removal of first-male sperm), which takes effect before fertilization occurs.[17] The delay in effectiveness of the incapacitation mechanism is believed to be a protective mechanism that prevents a male fly from incapacitating its own sperm should it mate with the same female fly repetitively. Sensory neurons in the uterus of female D. melanogaster respond to a male protein, sex peptide, which is found in the seminal fluid.[12] This protein makes the female reluctant to copulate for about 10 days after insemination. The signal pathway leading to this change in behavior has been determined. The signal is sent to a brain region that is a homolog of the hypothalamus, which then controls sexual behavior and desire.[12]

D. melanogaster is often used for life extension studies, such as to identify genes purported to increase lifespan when mutated.[18]

D. melanogaster females exhibit mate choice copying. When virgin females are shown other females copulating with a certain type of male, they tend to copulate more with this type of male afterwards than naive females (which have not observed the copulation of others). This behavior is sensitive to environmental conditions, and females copy less in bad weather conditions.[19]

D. melanogaster males exhibit a strong reproductive learning curve. That is, with sexual experience, these flies tend to modify their future mating behavior in multiple ways. These changes include increased selectivity for courting only intraspecifically, as well as decreased courtship times.

Sexually naïve D. melanogaster males are known to spend significant time courting interspecifically, such as with D. simulans flies. Naïve D. melanogaster will also attempt to court females that are not yet sexually mature, and other males. D. melanogaster males show little to no preference for D. melanogaster females over females of other species or even other male flies. However, after D. simulans or other flies incapable of copulation have rejected the males' advances, D. melanogaster males are much less likely to spend time courting nonspecifically in the future. This apparent learned behavior modification seems to be evolutionarily significant, as it allows the males to avoid investing energy into futile sexual encounters.[20]

In addition, males with previous sexual experience modify their courtship dance when attempting to mate with new females: the experienced males spend less time courting and therefore have lower mating latencies, meaning that they are able to reproduce more quickly. This decreased mating latency leads to greater mating efficiency for experienced males over naïve males.[21] This modification also appears to have evolutionary advantages, as increased mating efficiency is favored by natural selection.

Both male and female D. melanogaster act polygamously (having multiple sexual partners at the same time).[22] In both males and females, polygamy results in a decrease in evening activity compared with virgin flies, more so in males than in females.[22] Evening activity consists of the activities that the flies participate in other than mating and finding partners, such as finding food.[23] The reproductive success of males and females differs, because a female only needs to mate once to reach maximum fertility.[23] Mating with multiple partners provides no advantage over mating with one partner, so females exhibit no difference in evening activity between polygamous and monogamous individuals.[23] For males, however, mating with multiple partners increases their reproductive success by increasing the genetic diversity of their offspring.[23] This benefit of genetic diversity is an evolutionary advantage because it increases the chance that some of the offspring will have traits that increase their fitness in their environment.

The difference in evening activity between polygamous and monogamous male flies can be explained by courtship. For polygamous flies, reproductive success increases by having offspring with multiple partners, and they therefore spend more time and energy courting multiple females.[23] On the other hand, monogamous flies only court one female and expend less energy doing so.[23] While it requires more energy for male flies to court multiple females, the overall reproductive benefits have kept polygamy as the preferred sexual choice.[23]

It has been shown that the mechanism that affects courtship behavior in Drosophila is controlled by the oscillator neurons DN1s and LNDs.[24] Oscillation of the DN1 neurons was found to be affected by socio-sexual interactions, and is connected to the mating-related decrease in evening activity.[24]

D. melanogaster was among the first organisms used for genetic analysis, and today it is one of the most widely used and genetically best-known of all eukaryotic organisms. All organisms use common genetic systems; therefore, comprehending processes such as transcription and replication in fruit flies helps in understanding these processes in other eukaryotes, including humans.[25]

Thomas Hunt Morgan began using fruit flies in experimental studies of heredity at Columbia University in 1910 in a laboratory known as the Fly Room. The Fly Room was cramped with eight desks, each occupied by students and their experiments. They started off experiments using milk bottles to rear the fruit flies and handheld lenses for observing their traits. The lenses were later replaced by microscopes, which enhanced their observations. Morgan and his students eventually elucidated many basic principles of heredity, including sex-linked inheritance, epistasis, multiple alleles, and gene mapping.[25]

D. melanogaster is one of the most studied organisms in biological research, particularly in genetics and developmental biology, for several reasons, including its ease of culture, short generation time, and large brood size.

Genetic markers are commonly used in Drosophila research, for example within balancer chromosomes or P-element inserts, and most phenotypes are easily identifiable either with the naked eye or under a microscope. In the list of example common markers below, the allele symbol is followed by the name of the gene affected and a description of its phenotype. (Note: Recessive alleles are in lower case, while dominant alleles are capitalised.)

Drosophila genes are traditionally named after the phenotype they cause when mutated. For example, the absence of a particular gene in Drosophila will result in a mutant embryo that does not develop a heart. Scientists have thus called this gene tinman, after the Wizard of Oz character of the same name.[27] This system of nomenclature results in a wider range of gene names than in other organisms.

The genome of D. melanogaster (sequenced in 2000, and curated at the FlyBase database[26]) contains four pairs of chromosomes: an X/Y pair and three autosomes labeled 2, 3, and 4. The fourth chromosome is so tiny that it is often ignored, aside from its important eyeless gene. The sequenced D. melanogaster genome of 139.5 million base pairs has been annotated[28] and contains around 15,682 genes according to Ensembl release 73. More than 60% of the genome appears to be functional non-protein-coding DNA[29] involved in gene expression control. Determination of sex in Drosophila occurs by the X:A ratio of X chromosomes to autosomes, not by the presence of a Y chromosome as in human sex determination. Although the Y chromosome is entirely heterochromatic, it contains at least 16 genes, many of which are thought to have male-related functions.[30]
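
For orientation, the quoted genome figures imply an average gene density that can be computed in one line. This back-of-the-envelope Python sketch is not from the article; it simply divides the two numbers given above.

    # Rough gene density from the figures quoted above (illustrative).
    genome_bp = 139.5e6        # ~139.5 million base pairs
    gene_count = 15_682        # annotated genes (Ensembl release 73, as quoted)

    bp_per_gene = genome_bp / gene_count
    print(f"~{bp_per_gene / 1000:.1f} kb of sequence per annotated gene")  # ~8.9 kb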

A March 2000 study by the National Human Genome Research Institute comparing the fruit fly and human genomes estimated that about 60% of genes are conserved between the two species.[31] About 75% of known human disease genes have a recognizable match in the genome of fruit flies,[32] and 50% of fly protein sequences have mammalian homologs. An online database called Homophila is available to search for human disease gene homologues in flies and vice versa.[33] Drosophila is being used as a genetic model for several human diseases, including the neurodegenerative disorders Parkinson's, Huntington's, spinocerebellar ataxia, and Alzheimer's disease. The fly is also being used to study mechanisms underlying aging and oxidative stress, immunity, diabetes, and cancer, as well as drug abuse.

Embryogenesis in Drosophila has been extensively studied, as its small size, short generation time, and large brood size makes it ideal for genetic studies. It is also unique among model organisms in that cleavage occurs in a syncytium.

During oogenesis, cytoplasmic bridges called "ring canals" connect the forming oocyte to nurse cells. Nutrients and developmental control molecules move from the nurse cells into the oocyte. In the figure to the left, the forming oocyte can be seen to be covered by follicular support cells.

After fertilization of the oocyte, the early embryo (or syncytial embryo) undergoes rapid DNA replication and 13 nuclear divisions until about 5000 to 6000 nuclei accumulate in the unseparated cytoplasm of the embryo. By the end of the eighth division, most nuclei have migrated to the surface, surrounding the yolk sac (leaving behind only a few nuclei, which will become the yolk nuclei). After the 10th division, the pole cells form at the posterior end of the embryo, segregating the germ line from the syncytium. Finally, after the 13th division, cell membranes slowly invaginate, dividing the syncytium into individual somatic cells. Once this process is completed, gastrulation starts.[34]
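
A quick sanity check on the numbers above: 13 synchronous doublings would give 2^13 = 8192 nuclei if every nucleus divided in every cycle, whereas the text quotes roughly 5000 to 6000. The shortfall is consistent with some nuclei (for example, the yolk nuclei left behind in the interior) dropping out of the divisions; the small Python sketch below only restates that arithmetic and is not from the article.

    # Sanity check on the nuclear-division counts quoted above (illustrative).
    divisions = 13
    max_nuclei = 2 ** divisions          # 8192 if every nucleus divided every cycle
    observed_range = (5000, 6000)        # range quoted in the text
    print(max_nuclei, observed_range)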

Nuclear division in the early Drosophila embryo happens so quickly that no proper checkpoints exist, so mistakes may be made in dividing the DNA. To get around this problem, nuclei that have made a mistake detach from their centrosomes and fall into the centre of the embryo (yolk sac), which will not form part of the fly.

The gene network (transcriptional and protein interactions) governing the early development of the fruit fly embryo is one of the best understood gene networks to date, especially the patterning along the anteroposterior (AP) and dorsoventral (DV) axes (See under morphogenesis).[34]

The embryo undergoes well-characterized morphogenetic movements during gastrulation and early development, including germ-band extension, formation of several furrows, ventral invagination of the mesoderm, and posterior and anterior invagination of endoderm (gut), as well as extensive body segmentation until finally hatching from the surrounding cuticle into a first-instar larva.

During larval development, tissues known as imaginal discs grow inside the larva. Imaginal discs develop to form most structures of the adult body, such as the head, legs, wings, thorax, and genitalia. Cells of the imaginal discs are set aside during embryogenesis and continue to grow and divide during the larval stages, unlike most other cells of the larva, which have differentiated to perform specialized functions and grow without further cell division. At metamorphosis, the larva forms a pupa, inside which the larval tissues are reabsorbed and the imaginal tissues undergo extensive morphogenetic movements to form adult structures.

Drosophila flies have both X and Y chromosomes, as well as autosomes. Unlike humans, the Y chromosome does not confer maleness; rather, it encodes genes necessary for making sperm. Sex is instead determined by the ratio of X chromosomes to autosomes. Furthermore, each cell "decides" whether to be male or female independently of the rest of the organism, resulting in the occasional occurrence of gynandromorphs.

Three major genes are involved in determination of Drosophila sex. These are sex-lethal, sisterless, and deadpan. Deadpan is an autosomal gene which inhibits sex-lethal, while sisterless is carried on the X chromosome and inhibits the action of deadpan. An AAX cell has twice as much deadpan as sisterless, so sex-lethal will be inhibited, creating a male. However, an AAXX cell will produce enough sisterless to inhibit the action of deadpan, allowing the sex-lethal gene to be transcribed to create a female.
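
The dosage logic described above can be summarized as a toy rule: sisterless dose scales with the number of X chromosomes, deadpan dose with the number of autosome sets, and Sex-lethal is activated only when sisterless is not outnumbered. The Python sketch below is a simplified illustration of that rule only; the real pathway involves additional genes and continuous protein levels, and the function names are our own.

    # Minimal boolean sketch of the early sex-determination dosage logic above.
    def early_sex_lethal_active(n_x, n_autosome_sets):
        sisterless_dose = n_x            # sisterless is X-linked
        deadpan_dose = n_autosome_sets   # deadpan is autosomal
        return sisterless_dose >= deadpan_dose  # enough sisterless to overcome deadpan

    print(early_sex_lethal_active(2, 2))  # AAXX -> True  -> female
    print(early_sex_lethal_active(1, 2))  # AAX  -> False -> male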

Later, control by deadpan and sisterless disappears and what becomes important is the form of the sex-lethal gene. A secondary promoter causes transcription in both males and females. Analysis of the cDNA has shown that different forms are expressed in males and females. Sex-lethal has been shown to affect the splicing of its own mRNA. In males, the third exon is included which encodes a stop codon, causing a truncated form to be produced. In the female version, the presence of sex-lethal causes this exon to be missed out; the other seven amino acids are produced as a full peptide chain, again giving a difference between males and females.[35]

The presence or absence of functional sex-lethal protein now goes on to affect the transcription of another protein known as doublesex. In the absence of sex-lethal, doublesex will have the fourth exon removed and be translated up to and including exon 6 (DSX-M[ale]), while in its presence the fourth exon, which encodes a stop codon, will produce a truncated version of the protein (DSX-F[emale]). DSX-F causes transcription of Yolk proteins 1 and 2 in somatic cells, which will be pumped into the oocyte upon its production.

Unlike mammals, Drosophila flies have only innate immunity and lack an adaptive immune response. The D. melanogaster immune system can be divided into two responses: humoral and cell-mediated. The former is a systemic response mediated through the Toll and imd pathways, which are parallel systems for detecting microbes. The Toll pathway in Drosophila is the homologue of Toll-like pathways in mammals. Spätzle, a known ligand for the Toll pathway in flies, is produced in response to Gram-positive bacteria, parasites, and fungal infection. Upon infection, pro-Spätzle is cleaved by the protease SPE (Spätzle processing enzyme) to become active Spätzle, which then binds to the Toll receptor located on the cell surface (fat body, hemocytes) and dimerises for activation of downstream NF-κB signaling pathways. The imd pathway, in contrast, is triggered by Gram-negative bacteria through soluble and surface receptors (PGRP-LE and PGRP-LC, respectively). D. melanogaster has a "fat body", which is thought to be homologous to the human liver. It is the primary secretory organ and produces antimicrobial peptides. These peptides are secreted into the hemolymph and bind infectious bacteria, killing them by forming pores in their cell walls. Drug companies have sought to purify these peptides and use them as antibiotics. Other than the fat body, hemocytes, the blood cells of Drosophila, are the homologue of mammalian monocytes/macrophages and play a significant role in immune responses. It is known from the literature that, in response to immune challenge, hemocytes are able to secrete cytokines, for example Spätzle, to activate downstream signaling pathways in the fat body. However, the mechanism remains unclear.

In 1971, Ron Konopka and Seymour Benzer published "Clock mutants of Drosophila melanogaster", a paper describing the first mutations that affected an animal's behavior. Wild-type flies show an activity rhythm with a frequency of about a day (24 hours). They found mutants with faster and slower rhythms, as well as broken rhythms: flies that move and rest in random spurts. Work over the following 30 years has shown that these mutations (and others like them) affect a group of genes and their products that comprise a biochemical or biological clock. This clock is found in a wide range of fly cells, but the clock-bearing cells that control activity are several dozen neurons in the fly's central brain.

Since then, Benzer and others have used behavioral screens to isolate genes involved in vision, olfaction, audition, learning/memory, courtship, pain, and other processes, such as longevity.

The first learning and memory mutants (dunce, rutabaga, etc.) were isolated by William "Chip" Quinn while in Benzer's lab, and were eventually shown to encode components of an intracellular signaling pathway involving cyclic AMP, protein kinase A, and a transcription factor known as CREB. These molecules were shown to be also involved in synaptic plasticity in Aplysia and mammals.

Male flies sing to the females during courtship using their wings to generate sound, and some of the genetics of sexual behavior have been characterized. In particular, the fruitless gene has several different splice forms, and male flies expressing female splice forms have female-like behavior and vice versa. The TRP channels nompC, nanchung, and inactive are expressed in sound-sensitive Johnston's organ neurons and participate in the transduction of sound.[36][37]

Furthermore, Drosophila has been used in neuropharmacological research, including studies of cocaine and alcohol consumption. Models for Parkinson's disease also exist for flies.[38]

Stereo images of the fly eye

The compound eye of the fruit fly contains 760 unit eyes or ommatidia and is one of the most advanced among insects. Each ommatidium contains eight photoreceptor cells (R1–R8), support cells, pigment cells, and a cornea. Wild-type flies have reddish pigment cells, which serve to absorb excess blue light so the fly is not blinded by ambient light.

Each photoreceptor cell consists of two main sections, the cell body and the rhabdomere. The cell body contains the nucleus, while the 100-µm-long rhabdomere is made up of toothbrush-like stacks of membrane called microvilli. Each microvillus is 1–2 µm in length and about 60 nm in diameter.[39] The membrane of the rhabdomere is packed with about 100 million rhodopsin molecules, the visual protein that absorbs light. The rest of the visual proteins are also tightly packed into the microvillar space, leaving little room for cytoplasm.

The photoreceptors in Drosophila express a variety of rhodopsin isoforms. The R1–R6 photoreceptor cells express rhodopsin 1 (Rh1), which absorbs blue light (480 nm). The R7 and R8 cells express a combination of either Rh3 or Rh4, which absorb UV light (345 nm and 375 nm), and Rh5 or Rh6, which absorb blue (437 nm) and green (508 nm) light, respectively. Each rhodopsin molecule consists of an opsin protein covalently linked to a carotenoid chromophore, 11-cis-3-hydroxyretinal.[40]
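
The quoted peak absorption wavelengths can be collected into a small lookup table. The Python sketch below is an illustration only, not from the article: picking the rhodopsin whose quoted peak lies nearest a given wavelength is a crude heuristic, not a model of absorption spectra, and the names are our own.

    # Peak absorption wavelengths (nm) quoted above for the Drosophila rhodopsins.
    RHODOPSIN_PEAK_NM = {
        "Rh1": 480,  # R1-R6, blue
        "Rh3": 345,  # R7/R8, UV
        "Rh4": 375,  # R7/R8, UV
        "Rh5": 437,  # R7/R8, blue
        "Rh6": 508,  # R7/R8, green
    }

    def closest_rhodopsin(wavelength_nm):
        """Return the rhodopsin whose quoted peak is nearest the given wavelength."""
        return min(RHODOPSIN_PEAK_NM, key=lambda rh: abs(RHODOPSIN_PEAK_NM[rh] - wavelength_nm))

    print(closest_rhodopsin(500))  # Rh6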

As in vertebrate vision, visual transduction in invertebrates occurs via a G protein-coupled pathway. However, in vertebrates, the G protein is transducin, while the G protein in invertebrates is Gq (dgq in Drosophila). When rhodopsin (Rh) absorbs a photon of light its chromophore, 11-cis-3-hydroxyretinal, is isomerized to all-trans-3-hydroxyretinal. Rh undergoes a conformational change into its active form, metarhodopsin. Metarhodopsin activates Gq, which in turn activates a phospholipase C (PLC) known as NorpA.[41]

PLC hydrolyzes phosphatidylinositol (4,5)-bisphosphate (PIP2), a phospholipid found in the cell membrane, into soluble inositol triphosphate (IP3) and diacylglycerol (DAG), which stays in the cell membrane. DAG or a derivative of DAG causes a calcium-selective ion channel known as transient receptor potential (TRP) to open, and calcium and sodium flow into the cell. IP3 is thought to bind to IP3 receptors in the subrhabdomeric cisternae, an extension of the endoplasmic reticulum, and cause release of calcium, but this process does not seem to be essential for normal vision.[41]

Calcium binds to proteins such as calmodulin (CaM) and an eye-specific protein kinase C (PKC) known as InaC. These proteins interact with other proteins and have been shown to be necessary for shutoff of the light response. In addition, proteins called arrestins bind metarhodopsin and prevent it from activating more Gq. A sodium-calcium exchanger known as CalX pumps the calcium out of the cell. It uses the inward sodium gradient to export calcium at a stoichiometry of 3 Na+ : 1 Ca2+.[42]
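
One consequence of the 3 Na+ : 1 Ca2+ stoichiometry quoted above is that each exchange cycle moves net charge across the membrane. The Python sketch below is simple bookkeeping of the quoted numbers, added here only for illustration.

    # Net charge movement per CalX exchange cycle (illustrative arithmetic).
    na_in, na_charge = 3, +1    # three Na+ enter
    ca_out, ca_charge = 1, +2   # one Ca2+ leaves
    net_charge_in = na_in * na_charge - ca_out * ca_charge
    print(net_charge_in)  # +1: one elementary charge enters per cycle, so the
                          # exchanger is electrogenic and relies on the inward Na+ gradient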

TRP, InaC, and PLC form a signaling complex by binding a scaffolding protein called InaD. InaD contains five binding domains called PDZ domain proteins, which specifically bind the C termini of target proteins. Disruption of the complex by mutations in either the PDZ domains or the target proteins reduces the efficiency of signaling. For example, disruption of the interaction between InaC, the protein kinase C, and InaD results in a delay in inactivation of the light response.

Unlike vertebrate metarhodopsin, invertebrate metarhodopsin can be converted back into rhodopsin by absorbing a photon of orange light (580 nm).

About two-thirds of the Drosophila brain is dedicated to visual processing.[43] Although the spatial resolution of their vision is significantly worse than that of humans, their temporal resolution is around 10 times better.

The wings of a fly are capable of beating up to 220 times per second. Flies fly via straight sequences of movement interspersed by rapid turns called saccades.[44] During these turns, a fly is able to rotate 90° in less than 50 milliseconds.[44]
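
For scale, the saccade figures above imply a very high average turning rate. The Python sketch below only restates that arithmetic; the 50 ms figure is an upper bound on duration, so the computed rate is a lower bound.

    # Implied average turning rate for the saccade figures quoted above.
    turn_deg = 90.0
    duration_s = 0.050          # "less than 50 milliseconds", so this is an upper bound
    mean_rate = turn_deg / duration_s
    print(f">= {mean_rate:.0f} deg/s average angular velocity")  # >= 1800 deg/s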

Characteristics of Drosophila flight may be dominated by the viscosity of the air rather than by the inertia of the fly body, although the opposite case, with inertia as the dominant force, may also occur.[44] However, subsequent work showed that while the viscous effects on the insect body during flight may be negligible, the aerodynamic forces on the wings themselves actually cause fruit flies' turns to be damped viscously.[45]

Drosophila is commonly considered a pest due to its tendency to infest habitations and establishments where fruit is found; the flies may collect in homes, restaurants, stores, and other locations.[3] Removal of an infestation can be difficult, as larvae may continue to hatch in nearby fruit even as the adult population is eliminated.

More:
Drosophila melanogaster - Wikipedia

Read More...

Genetics of Skin Cancer (PDQ)Health Professional Version …

December 25th, 2016 4:42 pm

Executive Summary

This executive summary reviews the topics covered in this PDQ summary on the genetics of skin cancer, with hyperlinks to detailed sections below that describe the evidence on each topic.

More than 100 types of tumors are clinically apparent on the skin; many are known to have familial and/or inherited components, either in isolation or as part of a syndrome with other features. Basal cell carcinoma (BCC) and squamous cell carcinoma (SCC), which are known collectively as nonmelanoma skin cancer, are two of the most common malignancies in the United States and are often caused by sun exposure, although several hereditary syndromes and genes are also associated with an increased risk of developing these cancers. Melanoma is less common than nonmelanoma skin cancer, but 5% to 10% of all melanomas arise in multiple-case families and may be inherited in an autosomal dominant fashion.

Several genes and hereditary syndromes are associated with the development of skin cancer. Basal cell nevus syndrome (BCNS, caused by pathogenic variants in PTCH1 and PTCH2) is associated with an increased risk of BCC, while syndromes such as xeroderma pigmentosum (XP), oculocutaneous albinism, epidermolysis bullosa, and Fanconi anemia are associated with an increased risk of SCC. The major tumor suppressor gene associated with melanoma is CDKN2A; pathogenic variants in CDKN2A have been estimated to account for 35% to 40% of all familial melanomas. Pathogenic variants in many other genes, including CDK4, CDK6, BAP1, and BRCA2, have also been found to be associated with melanoma.

Genome-wide searches are showing promise in identifying common, low-penetrance susceptibility alleles for many complex diseases, including melanoma, but the clinical utility of these findings remains uncertain.

Risk-reducing strategies for individuals with an increased hereditary predisposition to skin cancer are similar to recommendations for the general population, and include sun avoidance, use of sunscreen, use of sun-protective clothing, and avoidance of tanning beds. Chemopreventive agents such as isotretinoin and acitretin have been studied for the treatment of BCCs in patients with BCNS and XP and are associated with a significant decrease in the number of tumors per year. Vismodegib has also shown promise in reducing the per-patient annual rate of new BCCs requiring surgery among patients with BCNS. Isotretinoin has also been shown to reduce SCC incidence among patients with XP.

Treatment of hereditary skin cancers is similar to the treatment of sporadic skin cancers. One study in an XP population found therapeutic use of 5-fluorouracil to be efficacious, particularly in the treatment of extensive lesions. In addition to its role as a therapeutic and potential chemopreventive agent, vismodegib is also being studied for potential palliative effects for keratocystic odontogenic tumors in patients with BCNS.

Most of the psychosocial literature about hereditary skin cancers has focused on patients with familial melanoma. In individuals at risk of familial melanoma, psychosocial factors influence decisions about genetic testing for inherited cancer risk and risk-management strategies. Interest in genetic testing for pathogenic variants in CDKN2A is generally high. Perceived benefits among individuals with a strong family history of melanoma include information about the risk of melanoma for themselves and their children and increased motivation for sun-protective behavior. A number of studies have examined risk-reducing and early-detection behaviors in individuals with a family history of melanoma. Overall, these studies indicate inconsistent adoption and maintenance of these behaviors. Intervention studies have targeted knowledge about melanoma, sun protection, and screening behaviors in family members of melanoma patients, with mixed results. Research is ongoing to better understand and address psychosocial and behavioral issues in high-risk families.

[Note: Many of the medical and scientific terms used in this summary are found in the NCI Dictionary of Genetics Terms. When a linked term is clicked, the definition will appear in a separate window.]

[Note: A concerted effort is being made within the genetics community to shift terminology used to describe genetic variation. The shift is to use the term variant rather than the term mutation to describe a difference that exists between the person or group being studied and the reference sequence. Variants can then be further classified as benign (harmless), likely benign, of uncertain significance, likely pathogenic, or pathogenic (disease causing). Throughout this summary, we will use the term pathogenic variant to describe a disease-causing mutation. Refer to the Cancer Genetics Overview summary for more information about variant classification.]

[Note: Many of the genes described in this summary are found in the Online Mendelian Inheritance in Man (OMIM) database. When OMIM appears after a gene name or the name of a condition, click on OMIM for a link to more information.]

The genetics of skin cancer is an extremely broad topic. There are more than 100 types of tumors that are clinically apparent on the skin; many of these are known to have familial components, either in isolation or as part of a syndrome with other features. This is, in part, because the skin itself is a complex organ made up of multiple cell types. Furthermore, many of these cell types can undergo malignant transformation at various points in their differentiation, leading to tumors with distinct histology and dramatically different biological behaviors, such as squamous cell carcinoma (SCC) and basal cell cancer (BCC). These have been called nonmelanoma skin cancers or keratinocyte cancers.

Figure 1 is a simple diagram of normal skin structure. It also indicates the major cell types that are normally found in each compartment. Broadly speaking, there are two large compartments, the avascular epidermis and the vascular dermis, with many cell types distributed in a largely acellular matrix.[1]

Figure 1. Schematic representation of normal skin. The relatively avascular epidermis houses basal cell keratinocytes and squamous epithelial keratinocytes, the source cells for BCC and SCC, respectively. Melanocytes are also present in normal skin and serve as the source cell for melanoma. The separation between epidermis and dermis occurs at the basement membrane zone, located just inferior to the basal cell keratinocytes.

The outer layer or epidermis is made primarily of keratinocytes but has several other minor cell populations. The bottom layer is formed of basal keratinocytes abutting the basement membrane. The basement membrane is formed from products of keratinocytes and dermal fibroblasts, such as collagen and laminin, and is an important anatomical and functional structure. Basal keratinocytes lose contact with the basement membrane as they divide. As basal keratinocytes migrate toward the skin surface, they progressively differentiate to form the spinous cell layer; the granular cell layer; and the keratinized outer layer, or stratum corneum.

The true cytologic origin of BCC remains in question. BCC and basal cell keratinocytes share many histologic similarities, as is reflected in the name. Alternatively, the outer root sheath cells of the hair follicle have also been proposed as the cell of origin for BCC.[2] This is suggested by the fact that BCCs occur predominantly on hair-bearing skin. BCCs rarely metastasize but can invade tissue locally or regionally, sometimes following along nerves. A tendency for superficial necrosis has resulted in the name "rodent ulcer."[3]

Some debate remains about the origin of SCC; however, these cancers are likely derived from epidermal stem cells associated with the hair follicle.[4] A variety of tissues, such as lung and uterine cervix, can give rise to SCC, and this cancer has somewhat differing behavior depending on its source. Even in cancer derived from the skin, SCC from different anatomic locations can have moderately differing aggressiveness; for example, SCC from glabrous (smooth, hairless) skin has a lower metastatic rate than SCC arising from the vermillion border of the lip or from scars.[3]

Additionally, in the epidermal compartment, melanocytes distribute singly along the basement membrane and can undergo malignant transformation into melanoma. Melanocytes are derived from neural crest cells and migrate to the epidermal compartment near the eighth week of gestational age. Langerhans cells, or dendritic cells, are another cell type in the epidermis and have a primary function of antigen presentation. These cells reside in the skin for an extended time and respond to different stimuli, such as ultraviolet radiation or topical steroids, which cause them to migrate out of the skin.[5]

The dermis is largely composed of an extracellular matrix. Prominent cell types in this compartment are fibroblasts, endothelial cells, and transient immune system cells. When transformed, fibroblasts form fibrosarcomas and endothelial cells form angiosarcomas, Kaposi sarcoma, and other vascular tumors. There are a number of immune cell types that move in and out of the skin to blood vessels and lymphatics; these include mast cells, lymphocytes, mononuclear cells, histiocytes, and granulocytes. These cells can increase in number in inflammatory diseases and can form tumors within the skin. For example, urticaria pigmentosa is a condition that arises from mast cells and is occasionally associated with mast cell leukemia; cutaneous T-cell lymphoma is often confined to the skin throughout its course. Overall, 10% of leukemias and lymphomas have prominent expression in the skin.[6]

Epidermal appendages are also found in the dermal compartment. These are derivatives of the epidermal keratinocytes, such as hair follicles, sweat glands, and the sebaceous glands associated with the hair follicles. These structures are generally formed in the first and second trimesters of fetal development. These can form a large variety of benign or malignant tumors with diverse biological behaviors. Several of these tumors are associated with familial syndromes. Overall, there are dozens of different histological subtypes of these tumors associated with individual components of the adnexal structures.[7]

Finally, the subcutis is a layer that extends below the dermis with varying depth, depending on the anatomic location. This deeper boundary can include muscle, fascia, bone, or cartilage. The subcutis can be affected by inflammatory conditions such as panniculitis and malignancies such as liposarcoma.[8]

These compartments give rise to their own malignancies but are also the region of immediate adjacent spread of localized skin cancers from other compartments. The boundaries of each skin compartment are used to define the staging of skin cancers. For example, an in situ melanoma is confined to the epidermis. Once the cancer crosses the basement membrane into the dermis, it is invasive. Internal malignancies also commonly metastasize to the skin. The dermis and subcutis are the most common locations, but the epidermis can also be involved in conditions such as Pagetoid breast cancer.

The skin has a wide variety of functions. First, the skin is an important barrier preventing extensive water and temperature loss and providing protection against minor abrasions. These functions can be aberrantly regulated in cancer. For example, in the erythroderma (reddening of the skin) associated with advanced cutaneous T-cell lymphoma, alterations in the regulations of body temperature can result in profound heat loss. Second, the skin has important adaptive and innate immunity functions. In adaptive immunity, antigen-presenting cells engender T-cell responses consisting of increased levels of TH1, TH2, or TH17 cells.[9] In innate immunity, the immune system produces numerous peptides with antibacterial and antifungal capacity. Consequently, even small breaks in the skin can lead to infection. The skin-associated lymphoid tissue is one of the largest arms of the immune system. It may also be important in immune surveillance against cancer. Immunosuppression, which occurs during organ transplant, is a significant risk factor for skin cancer. The skin is significant for communication through facial expression and hand movements. Unfortunately, areas of specialized function, such as the area around the eyes and ears, are common places for cancer to occur. Even small cancers in these areas can lead to reconstructive challenges and have significant cosmetic and social ramifications.[1]

While the appearance of any one skin cancer can vary, there are general physical presentations that can be used in screening. BCCs most commonly have a pearly rim or can appear somewhat eczematous (see Figure 2 and Figure 3). They often ulcerate (see Figure 2). SCCs frequently have a thick keratin top layer (see Figure 4). Both BCCs and SCCs are associated with a history of sun-damaged skin. Melanomas are characterized by asymmetry, border irregularity, color variation, a diameter of more than 6 mm, and evolution (ABCDE criteria). (Refer to What Does Melanoma Look Like? on NCI's website for more information about the ABCDE criteria.) Photographs representing typical clinical presentations of these cancers are shown below.

Figure 2. Ulcerated basal cell carcinoma (left panel) and ulcerated basal cell carcinoma with characteristic pearly rim (right panel).

Figure 3. Superficial basal cell carcinoma (left panel) and nodular basal cell carcinoma (right panel).

Figure 4. Squamous cell carcinoma on the face with thick keratin top layer (left panel) and squamous cell carcinoma on the leg (right panel).

Figure 5. Melanomas with characteristic asymmetry, border irregularity, color variation, and large diameter.
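A reading aid for the ABCDE criteria described above: the Python sketch below simply lists which criteria a hypothetical lesion description meets. It is our own simplification for illustration, not a diagnostic tool and not part of the PDQ summary; the function name, argument names, and the way thresholds are handled are assumptions.

    # Illustrative checklist based on the ABCDE criteria described above.
    def abcde_flags(asymmetry, border_irregularity, color_variation, diameter_mm, evolving):
        flags = []
        if asymmetry:
            flags.append("A: asymmetry")
        if border_irregularity:
            flags.append("B: border irregularity")
        if color_variation:
            flags.append("C: color variation")
        if diameter_mm > 6:
            flags.append("D: diameter > 6 mm")
        if evolving:
            flags.append("E: evolution")
        return flags

    print(abcde_flags(True, False, True, 7.5, False))  # hypothetical lesion description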

Basal cell carcinoma (BCC) is the most common malignancy in people of European descent, with an associated lifetime risk of 30%.[1] While exposure to ultraviolet (UV) radiation is the risk factor most closely linked to the development of BCC, other environmental factors (such as ionizing radiation, chronic arsenic ingestion, and immunosuppression) and genetic factors (such as family history, skin type, and genetic syndromes) also potentially contribute to carcinogenesis. In contrast to melanoma, metastatic spread of BCC is very rare and typically arises from large tumors that have evaded medical treatment for extended periods of time. BCCs can invade tissue locally or regionally, sometimes following along nerves. A tendency for superficial necrosis has resulted in the name "rodent ulcer." With early detection, the prognosis for BCC is excellent.

This section focuses on risk factors in individuals at increased hereditary risk of developing BCC. (Refer to the PDQ summary on Skin Cancer Prevention for information about risk factors for BCC in the general population.)

Sun exposure is the major known environmental factor associated with the development of skin cancer of all types. There are different patterns of sun exposure associated with each major type of skin cancer (BCC, squamous cell carcinoma [SCC], and melanoma). (Refer to the PDQ summary on Skin Cancer Prevention for more information about sun exposure as a risk factor for skin cancer in the general population.)

The high-risk phenotype consists of individuals with lightly pigmented skin and light (blond or red) hair, as detailed below.

Specifically, people with more highly pigmented skin demonstrate a lower incidence of BCC than do people with lighter pigmented skin. Individuals with Fitzpatrick type I or II skin were shown to have a twofold increased risk of BCC in a small case-control study.[2] (Refer to the Pigmentary characteristics section in the Melanoma section of this summary for a more detailed discussion of skin phenotypes based upon pigmentation.) Blond or red hair color was associated with increased risk of BCC in two large cohorts: the Nurses' Health Study and the Health Professionals Follow-Up Study.[3] In women from the Nurses' Health Study, there was an increased risk of BCC in women with red hair relative to those with light brown hair (adjusted relative risk [RR], 1.30; 95% confidence interval [CI], 1.20–1.40). In men from the Health Professionals Follow-Up Study, the risk of BCC associated with red hair was lower (RR, 1.17; 95% CI, 1.02–1.34) and was not significant after adjustment for melanoma family history and sunburn history.[3] Risk associated with blond hair was also increased for both men and women (RR, pooled analysis, 1.09; 95% CI, 1.02–1.18), and dark brown hair was protective against BCC (RR, pooled analysis, 0.89; 95% CI, 0.87–0.92).

Individuals with BCCs and/or SCCs report a higher frequency of these cancers in their family members than do controls. The importance of this finding is unclear. Apart from defined genetic disorders with an increased risk of BCC, a positive family history of any skin cancer is a strong predictor of the development of BCC. Data from the Nurses' Health Study and the Health Professionals Follow-Up Study indicate that a family history of melanoma in a first-degree relative (FDR) is associated with an increased risk of BCC in both men and women (RR, 1.31; 95% CI, 1.25–1.37; P < .0001).[3] A study of 376 early-onset BCC cases and 383 controls found that a family history of any type of skin cancer increased the risk of early-onset BCC (odds ratio [OR], 2.49; 95% CI, 1.80–3.45). This risk increased when an FDR was diagnosed with skin cancer before age 50 years (OR, 4.79; 95% CI, 2.90–7.90). Individuals who had a family history of both melanoma and nonmelanoma skin cancer (NMSC) had the highest risk (OR, 3.65; 95% CI, 1.79–7.47).[4]
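
For readers unfamiliar with how odds ratios and confidence intervals of the kind quoted above are derived, the following Python sketch shows the standard 2x2-table calculation (Woolf's log method). The counts in the example call are made up for illustration; they are not the data from the cited studies.

    import math

    # Odds ratio with 95% CI from a 2x2 case-control table (illustrative).
    def odds_ratio_ci(a, b, c, d, z=1.96):
        """a/b: exposed/unexposed cases; c/d: exposed/unexposed controls."""
        or_ = (a * d) / (b * c)
        se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
        lo = math.exp(math.log(or_) - z * se_log)
        hi = math.exp(math.log(or_) + z * se_log)
        return or_, lo, hi

    print(odds_ratio_ci(120, 256, 60, 323))  # hypothetical counts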

A study on the heritability of cancer among 80,309 monozygotic and 123,382 dizygotic twins showed that NMSCs have a heritability of 43% (95% CI, 26%–59%), suggesting that almost half of the risk of NMSC is caused by inherited factors.[5] Additionally, the cumulative risk of NMSC was 1.9-fold higher for monozygotic than for dizygotic twins (95% CI, 1.8–2.0).[5]
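
As background on how twin designs yield heritability figures like the 43% quoted above, the classic Falconer estimate compares monozygotic and dizygotic twin correlations. The Python sketch below illustrates that textbook formula only; it is not necessarily the method used in the cited study, and the correlation values in the example are hypothetical.

    # Falconer's estimate of heritability from twin correlations (illustrative).
    def falconer_h2(r_mz, r_dz):
        """h^2 = 2 * (r_MZ - r_DZ)."""
        return 2 * (r_mz - r_dz)

    print(falconer_h2(0.46, 0.245))  # ~0.43, i.e. roughly 43% heritability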

A personal history of BCC or SCC is strongly associated with subsequent BCC or SCC. There is an approximate 20% increased risk of a subsequent lesion within the first year after a skin cancer has been diagnosed. The mean age of occurrence for these NMSCs is the mid-60s.[6-11] In addition, several studies have found that individuals with a history of skin cancer have an increased risk of a subsequent diagnosis of a noncutaneous cancer;[12-15] however, other studies have contradicted this finding.[16-19] In the absence of other risk factors or evidence of a defined cancer susceptibility syndrome, as discussed below, skin cancer patients are encouraged to follow screening recommendations for the general population for sites other than the skin.

Pathogenic variants in the gene coding for the transmembrane receptor protein PTCH1, or PTCH, are associated with basal cell nevus syndrome (BCNS) and sporadic cutaneous BCCs. (Refer to the BCNS section of this summary for more information.) PTCH1, the human homolog of the Drosophila segment polarity gene patched (ptc), is an integral component of the hedgehog signaling pathway, which serves many developmental (appendage development, embryonic segmentation, neural tube differentiation) and regulatory (maintenance of stem cells) roles.

In the resting state, the transmembrane receptor protein PTCH1 acts catalytically to suppress the seven-transmembrane protein Smoothened (Smo), preventing further downstream signal transduction.[20] Binding of the hedgehog ligand to PTCH1 releases inhibition of Smo, with resultant activation of transcription factors (GLI1, GLI2), cell proliferation genes (cyclin D, cyclin E, myc), and regulators of angiogenesis.[21,22] Thus, the balance of PTCH1 (inhibition) and Smo (activation) manages the essential regulatory downstream hedgehog signal transduction pathway. Loss-of-function pathogenic variants of PTCH1 or gain-of-function variants of Smo tip this balance toward activation, a key event in potential neoplastic transformation.

Demonstration of allelic loss on chromosome 9q22 in both sporadic and familial BCCs suggested the potential presence of an associated tumor suppressor gene.[23,24] Further investigation identified a pathogenic variant in PTCH1 that localized to the area of allelic loss.[25] Up to 30% of sporadic BCCs demonstrate PTCH1 pathogenic variants.[26] In addition to BCC, medulloblastoma and rhabdomyosarcoma, along with other tumors, have been associated with PTCH1 pathogenic variants. All three malignancies are associated with BCNS, and most people with clinical features of BCNS demonstrate PTCH1 pathogenic variants, predominantly truncation in type.[27]

Truncating pathogenic variants in PTCH2, a homolog of PTCH1 mapping to chromosome 1p32.1-32.3, have been demonstrated in both BCC and medulloblastoma.[28,29] PTCH2 displays 57% homology to PTCH1.[30] While the exact role of PTCH2 remains unclear, there is evidence to support its involvement in the hedgehog signaling pathway.[28,31]

Pathogenic variants in the BAP1 gene are associated with an increased risk of a variety of cancers, including cutaneous melanoma and uveal melanoma. (Refer to the BAP1 section in the Melanoma section of this summary for more information.) Although the BCC penetrance in individuals with pathogenic variants in BAP1 is yet undescribed, there are several BAP1 families that report diagnoses of BCC.[32,33] In one study, pathogenic variant carriers from four families reported diagnoses of BCC. Tumor evaluation of BAP1 showed loss of BAP1 protein expression by immunohistochemistry in BCCs of two germline BAP1 pathogenic variant carriers but not in 53 sporadic BCCs.[32] A second report noted that four individuals from BAP1 families were diagnosed with a total of 19 BCCs. Complete loss of BAP1 nuclear expression was observed in 17 of 19 BCCs from these individuals but none of 22 control BCC specimens.[34] Loss of BAP1 nuclear expression was also reported in a series of 7 BCCs from individuals with loss of function BAP1 variants, but only in 1 of 31 sporadic BCCs.[35]

BCNS, also known as Gorlin Syndrome, Gorlin-Goltz syndrome, and nevoid BCC syndrome, is an autosomal dominant disorder with an estimated prevalence of 1 in 57,000 individuals.[36] The syndrome is notable for complete penetrance and high levels of variable expressivity, as evidenced by evaluation of individuals with identical genotypes but widely varying phenotypes.[27,37] The clinical features of BCNS differ more among families than within families.[38] BCNS is primarily associated with germline pathogenic variants in PTCH1, but families with this phenotype have also been associated with alterations in PTCH2 and SUFU.[39-41]

As detailed above, PTCH1 provides both developmental and regulatory guidance; spontaneous or inherited germline pathogenic variants of PTCH1 in BCNS may result in a wide spectrum of potentially diagnostic physical findings. The BCNS pathogenic variant has been localized to chromosome 9q22.3-q31, with maximum logarithm of the odds (LOD) scores of 3.597 and 6.457 at markers D9S12 and D9S53.[36] The resulting haploinsufficiency of PTCH1 in BCNS has been associated with structural anomalies such as odontogenic keratocysts, with evaluation of the cyst lining revealing heterozygosity for PTCH1.[42] The development of BCC and other BCNS-associated malignancies is thought to arise from the classic two-hit tumor suppressor gene model: baseline heterozygosity secondary to a germline PTCH1 pathogenic variant as the first hit, with the second hit due to mutagen exposure such as UV or ionizing radiation.[43-47] However, haploinsufficiency or dominant negative isoforms have also been implicated in the inactivation of PTCH1.[48]

The diagnosis of BCNS is typically based upon characteristic clinical and radiologic examination findings. Several sets of clinical diagnostic criteria for BCNS are in use (refer to Table 1 for a comparison of these criteria).[49-52] Although each set of criteria has advantages and disadvantages, none of the sets have a clearly superior balance of sensitivity and specificity for identifying carriers of pathogenic variants. The BCNS Colloquium Group proposed criteria in 2011 that required 1 major criterion with molecular diagnosis, two major criteria without molecular diagnosis, or one major and two minor criteria without molecular diagnosis.[52] PTCH1 pathogenic variants are found in 60% to 85% of patients who meet clinical criteria.[53,54] Most notably, BCNS is associated with the formation of both benign and malignant neoplasms. The strongest benign neoplasm association is with ovarian fibromas, diagnosed in 14% to 24% of females affected by BCNS.[46,50,55] BCNS-associated ovarian fibromas are more likely to be bilateral and calcified than sporadic ovarian fibromas.[56] Ameloblastomas, aggressive tumors of the odontogenic epithelium, have also been proposed as a diagnostic criterion for BCNS, but most groups do not include it at this time.[57]

Other associated benign neoplasms include gastric hamartomatous polyps,[58] congenital pulmonary cysts,[59] cardiac fibromas,[60] meningiomas,[61-63] craniopharyngiomas,[64] fetal rhabdomyomas,[65] leiomyomas,[66] mesenchymomas,[67] and nasal dermoid tumors. Development of meningiomas and ependymomas occurring postradiation therapy has been documented in the general pediatric population; radiation therapy for syndrome-associated intracranial processes may be partially responsible for a subset of these benign tumors in individuals with BCNS.[68-70] In addition, radiation therapy of malignant medulloblastomas in the BCNS population may result in many cutaneous BCCs in the radiation ports. Similarly, treatment of BCC of the skin with radiation therapy may result in induction of large numbers of additional BCCs.[45,46,66]

The diagnostic criteria for BCNS are described in Table 1 below.

Of greatest concern with BCNS are associated malignant neoplasms, the most common of which is BCC. BCC in individuals with BCNS may appear during childhood as small acrochordon-like lesions, while larger lesions demonstrate more classic cutaneous features.[71] Nonpigmented BCCs are more common than pigmented lesions.[72] The age at first BCC diagnosis associated with BCNS ranges from 3 to 53 years, with a mean age of 21.4 years; the vast majority of individuals are diagnosed with their first BCC before age 20 years.[50,55] Most BCCs are located on sun-exposed sites, but individuals with greater than 100 BCCs have a more uniform distribution of BCCs over the body.[72] Case series have suggested that up to 1 in 200 individuals with BCC demonstrate findings supportive of a diagnosis of BCNS.[36] BCNS has rarely been reported in individuals with darker skin pigmentation; however, significantly fewer BCCs are found in individuals of African or Mediterranean ancestry.[50,73,74] Despite the rarity of BCC in this population, reported cases document full expression of the noncutaneous manifestations of BCNS.[74] However, in individuals of African ancestry who have received radiation therapy, significant basal cell tumor burden has been reported within the radiation port distribution.[50,66] Thus, cutaneous pigmentation may protect against the mutagenic effects of UV but not against ionizing radiation.

Variants associated with an increased risk of BCC in the general population appear to modify the age of BCC onset in individuals with BCNS. A study of 125 individuals with BCNS found that a variant in MC1R (Arg151Cys) was associated with an early median age of onset of 27 years (95% CI, 20–34), compared with a median age at first BCC of 34 years (95% CI, 30–40) in individuals who did not carry the risk allele (hazard ratio [HR], 1.64; 95% CI, 1.04–2.58; P = .034). A variant in the TERT-CLPTM1L gene showed a similar effect, with individuals with the risk allele having a median age at first BCC of 31 years (95% CI, 28–37) relative to a median onset of 41 years (95% CI, 32–48) in individuals who did not carry a risk allele (HR, 1.44; 95% CI, 1.08–1.93; P = .014).[75]

Many other malignancies have been associated with BCNS. Medulloblastoma carries the strongest association with BCNS and is diagnosed in 1% to 5% of BCNS cases. While BCNS-associated medulloblastoma is typically diagnosed between ages 2 and 3 years, sporadic medulloblastoma is usually diagnosed later in childhood, between the ages of 6 and 10 years.[46,50,55,76] A desmoplastic phenotype occurring around age 2 years is very strongly associated with BCNS and carries a more favorable prognosis than sporadic classic medulloblastoma.[77,78] Up to three times more males than females with BCNS are diagnosed with medulloblastoma.[79] As with other malignancies, treatment of medulloblastoma with ionizing radiation has resulted in numerous BCCs within the radiation field.[46,61] Other reported malignancies include ovarian carcinoma,[80] ovarian fibrosarcoma,[81,82] astrocytoma,[83] melanoma,[84] Hodgkin disease,[85,86] rhabdomyosarcoma,[87] and undifferentiated sinonasal carcinoma.[88]

Odontogenic keratocysts, or keratocystic odontogenic tumors (KCOTs) as renamed by the World Health Organization working group, are one of the major features of BCNS.[89] Demonstration of clonal loss of heterozygosity (LOH) of common tumor suppressor genes, including PTCH1, supports the transition in terminology to reflect a neoplastic process.[42] Less than one-half of KCOTs from individuals with BCNS show LOH of PTCH1.[48,90] The tumors are lined with a thin squamous epithelium and a thin corrugated layer of parakeratin. Increased mitotic activity in the tumor epithelium and potential budding of the basal layer with formation of daughter cysts within the tumor wall may be responsible for the high rates of recurrence after simple enucleation.[89,91] In a recent case series of 183 consecutively excised KCOTs, 6% of individuals demonstrated an association with BCNS.[89] A study that analyzed the rate of PTCH1 pathogenic variants in BCNS-associated KCOTs found that 11 of 17 individuals carried a germline PTCH1 pathogenic variant and an additional 3 individuals had somatic pathogenic variants in this gene.[92] Individuals with germline PTCH1 pathogenic variants had an early age of KCOT presentation. KCOTs occur in 65% to 100% of individuals with BCNS,[50,93] with higher rates of occurrence in young females.[94]

Palmoplantar pits are another major finding in BCNS and occur in 70% to 80% of affected individuals.[55] When these pits occur together with early-onset BCC and/or KCOTs, they are considered diagnostic for BCNS.[95]

Several characteristic radiologic findings have been associated with BCNS, including lamellar calcification of falx cerebri;[96,97] fused, splayed or bifid ribs;[98] and flame-shaped lucencies or pseudocystic bone lesions of the phalanges, carpal, tarsal, long bones, pelvis, and calvaria.[54] Imaging for rib abnormalities may be useful in establishing the diagnosis in younger children, who may have not yet fully manifested a diagnostic array on physical examination.

Table 2 summarizes the frequency and median age of onset of nonmalignant findings associated with BCNS.

Individuals with PTCH2 pathogenic variants may have a milder phenotype of BCNS than those with PTCH1 variants. Characteristic features such as palmar/plantar pits, macrocephaly, falx calcification, hypertelorism, and coarse face may be absent in these individuals.[99]

A 9p22.3 microdeletion syndrome that includes the PTCH1 locus has been described in ten children.[100] All patients had facial features typical of BCNS, including a broad forehead, but they had other features variably including craniosynostosis, hydrocephalus, macrosomia, and developmental delay. At the time of the report, none had basal cell skin cancer. On the basis of their hemizygosity of the PTCH1 gene, these patients are presumably at an increased risk of basal cell skin cancer.

Germline pathogenic variants in SUFU, a major negative regulator of the hedgehog pathway, have been identified in a small number of individuals with a clinical phenotype resembling that of BCNS.[40,41] These pathogenic variants were first identified in individuals with childhood medulloblastoma,[101] and the incidence of medulloblastoma appears to be much higher in individuals with BCNS associated with SUFU pathogenic variants than in those with PTCH1 variants.[40] SUFU pathogenic variants may also be associated with an increased predisposition to meningioma.[63,102] Conversely, odontogenic jaw keratocysts appear less frequently in this population. Some clinical laboratories offer genetic testing for SUFU pathogenic variants for individuals with BCNS who do not have an identifiable PTCH1 variant.

Rombo syndrome, a very rare, probably autosomal dominant genetic disorder associated with BCC, has been outlined in three case series in the literature.[103-105] The cutaneous examination is within normal limits until age 7 to 10 years, with the development of distinctive cyanotic erythema of the lips, hands, and feet and early atrophoderma vermiculatum of the cheeks, with variable involvement of the elbows and dorsal hands and feet.[103] Development of BCC occurs in the fourth decade.[103] A distinctive grainy texture to the skin, secondary to interspersed small, yellowish, follicular-based papules and follicular atrophy, has been described.[103,105] Missing, irregularly distributed, and/or misdirected eyelashes and eyebrows are another associated finding.[103,104] The genetic basis of Rombo syndrome is not known.

Bazex-Dupré-Christol syndrome, another rare genodermatosis associated with the development of BCC, is more thoroughly documented in the literature than Rombo syndrome. Inheritance is X-linked dominant, with no reported male-to-male transmission.[106-108] Regional assignment of the locus of interest to chromosome Xq24-q27 is associated with a maximum LOD score of 5.26 with the DXS1192 locus.[109] Further work has narrowed the potential location to an 11.4-Mb interval on chromosome Xq25-27; however, the causative gene remains unknown.[110]

Characteristic physical findings include hypotrichosis, hypohidrosis, milia, follicular atrophoderma of the cheeks, and multiple BCCs, which manifest in the late second decade to early third decade.[106] Documented hair changes with Bazex-Dupré-Christol syndrome include reduced density of scalp and body hair, decreased melanization,[111] a twisted/flattened appearance of the hair shaft on electron microscopy,[112] and increased hair shaft diameter on polarizing light microscopy.[108] The milia, which may be quite distinctive in childhood, have been reported to regress or diminish substantially at puberty.[108] Other reported findings in association with this syndrome include trichoepitheliomas; hidradenitis suppurativa; hypoplastic alae; and a prominent columella, the fleshy terminal portion of the nasal septum.[113,114]

A rare subtype of epidermolysis bullosa simplex (EBS), Dowling-Meara (EBS-DM), is primarily inherited in an autosomal dominant fashion and is associated with pathogenic variants in either keratin-5 (KRT5) or keratin-14 (KRT14).[115] EBS-DM is one of the most severe types of EBS and occasionally results in mortality in early childhood.[116] One report cites an incidence of BCC of 44% by age 55 years in this population.[117] Individuals who inherit two EBS pathogenic variants may present with a more severe phenotype.[118] Other less phenotypically severe subtypes of EBS can also be caused by pathogenic variants in either KRT5 or KRT14.[115] Approximately 75% of individuals with a clinical diagnosis of EBS (regardless of subtype) have KRT5 or KRT14 pathogenic variants.[119]

Characteristics of hereditary syndromes associated with a predisposition to BCC are described in Table 3 below.

(Refer to the Brooke-Spiegler Syndrome, Multiple Familial Trichoepithelioma, and Familial Cylindromatosis section in the Rare Skin Cancer Syndromes section of this summary for more information about Brooke-Spiegler syndrome.)

As detailed further below, the U.S. Preventive Services Task Force does not recommend regular screening for the early detection of any cutaneous malignancies, including BCC. However, once BCC is detected, the National Comprehensive Cancer Network guidelines of care for NMSCs recommend complete skin examinations every 6 to 12 months for life.[130]

The BCNS Colloquium Group has proposed guidelines for the surveillance of individuals with BCNS (see Table 4).

Level of evidence: 5

Avoidance of excessive cumulative and sporadic sun exposure is important in reducing the risk of BCC, along with other cutaneous malignancies. Scheduling activities outside of the peak hours of UV radiation, utilizing sun-protective clothing and hats, using sunscreen liberally, and strictly avoiding tanning beds are all reasonable steps towards minimizing future risk of skin cancer.[131] For patients with particular genetic susceptibility (such as BCNS), avoidance or minimization of ionizing radiation is essential to reducing future tumor burden.

Level of evidence: 2aii

The role of various systemic retinoids, including isotretinoin and acitretin, has been explored in the chemoprevention and treatment of multiple BCCs, particularly in BCNS patients. In one study of isotretinoin use in 12 patients with multiple BCCs, including 5 patients with BCNS, tumor regression was noted, with decreasing efficacy as the tumor diameter increased.[132] However, the results were insufficient to recommend the use of systemic retinoids for treatment of BCC. Three additional patients, including one with BCNS, were followed long-term for evaluation of chemoprevention with isotretinoin, demonstrating a significant decrease in the number of tumors per year during treatment.[132] Although the rate of tumor development tends to increase sharply upon discontinuation of systemic retinoid therapy, in some patients the rate remains lower than their pretreatment rate, allowing better management and control of their cutaneous malignancies.[132-134] In summary, the use of systemic retinoids for chemoprevention of BCC is reasonable in high-risk patients, including patients with xeroderma pigmentosum, as discussed in the Squamous Cell Carcinoma section of this summary.

A patient's cumulative and evolving tumor load should be evaluated carefully in light of the potential long-term use of a medication class with cumulative and idiosyncratic side effects. Given the possible side-effect profile, systemic retinoid use is best managed by a practitioner with particular expertise and comfort with the medication class. For all women of childbearing potential, strict avoidance of pregnancy during the systemic retinoid course, and for 1 month after completion of isotretinoin and 3 years after completion of acitretin, is essential to avoid potentially fatal and devastating fetal malformations.

Level of evidence (retinoids): 2aii

In a phase II study of 41 patients with BCNS, vismodegib (an inhibitor of the hedgehog pathway) has been shown to reduce the per-patient annual rate of new BCCs requiring surgery.[135] Existing BCCs also regressed for these patients during daily treatment with 150 mg of oral vismodegib. While patients treated had visible regression of their tumors, biopsy demonstrated residual microscopic malignancies at the site, and tumors progressed after the discontinuation of the therapy. Adverse effects included taste disturbance, muscle cramps, hair loss, and weight loss and led to discontinuation of the medication in 54% of subjects. Based on the side-effect profile and rate of disease recurrence after discontinuation of the medication, additional study regarding optimal dosing of vismodegib is ongoing.

Level of evidence (vismodegib): 1aii

A phase III, double-blind, placebo-controlled clinical trial evaluated the effects of oral nicotinamide (vitamin B3) in 386 individuals with a history of at least two NMSCs within 5 years before study enrollment.[136] After 12 months of treatment, those taking nicotinamide 500 mg twice daily had a 20% reduction in the incidence of new BCCs, although this difference did not reach statistical significance (95% CI, -6% to 39%; P = .12). The rate of new NMSCs was 23% lower in the nicotinamide group than in the placebo group (95% CI, 4% to 38%; P = .02). No clinically significant differences in adverse events were observed between the two groups, and there was no evidence of benefit after discontinuation of nicotinamide. Of note, this study was not conducted in a population with an identified genetic predisposition to BCC.

Level of evidence (nicotinamide): 1aii

Treatment of individual BCCs in BCNS is generally the same as for sporadic basal cell cancers. Due to the large number of lesions on some patients, this can present a surgical challenge. Field therapy with imiquimod or photodynamic therapy is an attractive option, as multiple tumors can be treated simultaneously.[137,138] However, given the radiosensitivity of patients with BCNS, radiation as a therapeutic option for large tumors should be avoided.[50] There are no randomized trials, but isolated case reports suggest that field therapy has results similar to those seen in sporadic basal cell cancer, with higher success rates for superficial cancers than for nodular cancers.[137,138]

Consensus guidelines for the use of methylaminolevulinate photodynamic therapy in BCNS recommend that this modality may best be used for superficial BCC of all sizes and for nodular BCC less than 2 mm thick.[139] Monthly photodynamic therapy may be considered for these patients as clinically indicated.

Level of evidence (imiquimod and photodynamic therapy): 4

Topical treatment with LDE225, a Smoothened antagonist, has also been investigated for the treatment of BCC in a small number of patients with BCNS with promising results;[140] however, this medication is not approved in this formulation by the U.S. Food and Drug Administration.

Level of evidence (LDE225): 1

In addition to its effects on the prevention of BCCs in patients with BCNS, vismodegib may also have a palliative effect on KCOTs found in this population. An initial report indicated that the use of GDC-0449, the hedgehog pathway inhibitor now known as vismodegib, resulted in resolution of KCOTs in one patient with BCNS.[141] Another small study found that four of six patients who took 150 mg of vismodegib daily had a reduction in the size of KCOTs.[142] None of the six patients in this study had new KCOTs or an increase in the size of existing KCOTs while being treated, and one patient had a sustained response that lasted 9 months after treatment was discontinued.

Level of evidence (vismodegib): 3diii

Squamous cell carcinoma (SCC) is the second most common type of skin cancer and accounts for approximately 20% of cutaneous malignancies. Although most cancer registries do not include information on the incidence of nonmelanoma skin cancer (NMSC), annual incidence estimates range from 1 million to 5.4 million cases in the United States.[1,2]

Mortality is rare from this cancer; however, the morbidity and costs associated with its treatment are considerable.

Sun exposure is the major known environmental factor associated with the development of skin cancer of all types; however, different patterns of sun exposure are associated with each major type of skin cancer.

Unlike basal cell carcinoma (BCC), SCC is associated with chronic exposure, rather than intermittent intense exposure to ultraviolet (UV) radiation. Occupational exposure is the characteristic pattern of sun exposure linked with SCC.[3] A case-control study in southern Europe showed increased risk of SCC when lifetime sun exposure exceeded 70,000 hours. People whose lifetime sun exposure equaled or exceeded 200,000 hours had an odds ratio (OR) 8 to 9 times that of the reference group.[4] A Canadian case-control study did not find an association between cumulative lifetime sun exposure and SCC; however, sun exposure in the 10 years before diagnosis and occupational exposure were found to be risk factors.[5]

In addition to environmental radiation, exposure to therapeutic radiation is another risk factor for SCC. Individuals with skin disorders treated with psoralen and ultraviolet-A radiation (PUVA) had a threefold to sixfold increase in SCC.[6] This effect appears to be dose-dependent, as only 7% of individuals who underwent fewer than 200 treatments had SCC, compared with more than 50% of those who underwent more than 400 treatments.[7] Therapeutic use of ultraviolet-B (UVB) radiation has also been shown to cause a mild increase in SCC (adjusted incidence rate ratio, 1.37).[8] Devices such as tanning beds also emit UV radiation and have been associated with increased SCC risk, with a reported OR of 2.5 (95% confidence interval [CI], 1.7–3.8).[9]
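
To make the odds ratios and confidence intervals quoted in this section easier to interpret, the short Python sketch below computes an OR and an approximate 95% CI from a 2x2 case-control table using the standard Woolf (log-odds) method; the counts are hypothetical and are not taken from any of the cited studies.

```python
import math

# Hypothetical 2x2 case-control table (counts are illustrative only,
# not data from the studies cited above).
# Exposure: indoor tanning; outcome: squamous cell carcinoma (SCC).
exposed_cases, unexposed_cases = 60, 140
exposed_controls, unexposed_controls = 30, 170

# Odds ratio = (a * d) / (b * c) for the 2x2 table.
odds_ratio = (exposed_cases * unexposed_controls) / (exposed_controls * unexposed_cases)

# Approximate 95% CI on the log-odds scale (Woolf method).
se_log_or = math.sqrt(1 / exposed_cases + 1 / unexposed_cases +
                      1 / exposed_controls + 1 / unexposed_controls)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f} (95% CI, {ci_low:.2f}-{ci_high:.2f})")
```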

Investigation into the effect of ionizing radiation on SCC carcinogenesis has yielded conflicting results. One population-based case-control study found that patients who had undergone therapeutic radiation therapy had an increased risk of SCC at the site of previous radiation (OR, 2.94), compared with individuals who had not undergone radiation treatments.[10] Cohort studies of radiology technicians, atomic-bomb survivors, and survivors of childhood cancers have not shown an increased risk of SCC, although the incidence of BCC was increased in all of these populations.[11-13] For those who develop SCC at previously radiated sites that are not sun-exposed, the latent period appears to be quite long; these cancers may be diagnosed years or even decades after the radiation exposure.[14]

The effect of other types of radiation, such as cosmic radiation, is also controversial. Pilots and flight attendants have a reported incidence of SCC that ranges between 2.1 and 9.9 times what would be expected; however, the overall cancer incidence is not consistently elevated. Some attribute the high rate of NMSCs in airline flight personnel to cosmic radiation, while others suspect lifestyle factors.[15-20]

Like BCCs, SCCs appear to be associated with exposure to arsenic in drinking water and combustion products.[21,22] However, this association may hold true only for the highest levels of arsenic exposure. Individuals who had toenail concentrations of arsenic above the 97th percentile were found to have an approximately twofold increase in SCC risk.[23] For arsenic, the latency period can be lengthy; invasive SCC has been found to develop at an average of 20 years after exposure.[24]

Current or previous cigarette smoking has been associated with a 1.5-fold to 2-fold increase in SCC risk,[25-27] although one large study showed no change in risk.[28] Available evidence suggests that the effect of smoking on cancer risk seems to be greater for SCC than for BCC.

Additional reports have suggested weak associations between SCC and exposure to insecticides, herbicides, or fungicides.[29]

Like melanoma and BCC, SCC occurs more frequently in individuals with lighter skin than in those with darker skin.[3,30] A case-control study of 415 cases and 415 controls showed similar findings; relative to Fitzpatrick Type I skin, individuals with increasingly darker skin had decreased risks of skin cancer (ORs, 0.6, 0.3, and 0.1, for Fitzpatrick Types II, III, and IV, respectively).[31] (Refer to the Pigmentary characteristics section in the Melanoma section of this summary for a more detailed discussion of skin phenotypes based upon pigmentation.) The same study found that blue eyes and blond/red hair were also associated with increased risks of SCC, with crude ORs of 1.7 (95% CI, 1.2–2.3) for blue eyes, 1.5 (95% CI, 1.1–2.1) for blond hair, and 2.2 (95% CI, 1.5–3.3) for red hair.

However, SCC can also occur in individuals with darker skin. An Asian registry based in Singapore reported an increase in skin cancer in that geographic area, with an incidence rate of 8.9 per 100,000 person-years. Incidence of SCC, however, was shown to be on the decline.[30] SCC is the most common form of skin cancer in black individuals in the United States and in certain parts of Africa; the mortality rate for this disease is relatively high in these populations.[32,33] Epidemiologic characteristics of, and prevention strategies for, SCC in those individuals with darker skin remain areas of investigation.

Freckling of the skin and reaction of the skin to sun exposure have been identified as other risk factors for SCC.[34] Individuals with heavy freckling on the forearm were found to have a 14-fold increase in SCC risk if freckling was present in adulthood, and an almost threefold risk if freckling was present in childhood.[34,35] The degree of SCC risk corresponded to the amount of freckling. In this study, the inability of the skin to tan and its propensity to burn were also significantly associated with risk of SCC (OR of 2.9 for severe burn and 3.5 for no tan).

The presence of scars on the skin can also increase the risk of SCC, although the process of carcinogenesis in this setting may take years or even decades. SCCs arising in chronic wounds are referred to as Marjolin's ulcers. The mean time for development of carcinoma in these wounds is estimated at 26 years.[36] One case report documents the occurrence of cancer in a wound that was incurred 59 years earlier.[37]

Immunosuppression also contributes to the formation of NMSCs. Among solid-organ transplant recipients, the risk of SCC is 65 to 250 times higher, and the risk of BCC is 10 times higher than that observed in the general population, although the risks vary with transplant type.[38-41] NMSCs in high-risk patients (solid-organ transplant recipients and chronic lymphocytic leukemia patients) occur at a younger age, are more common and more aggressive, and have a higher risk of recurrence and metastatic spread than these cancers do in the general population.[42,43] Additionally, there is a high risk of second SCCs.[44,45] In one study, over 65% of kidney transplant recipients developed subsequent SCCs after their first diagnosis.[44] Among patients with an intact immune system, BCCs outnumber SCCs by a 4:1 ratio; in transplant patients, SCCs outnumber BCCs by a 2:1 ratio.

This increased risk has been linked to an interaction between the level of immunosuppression and UV radiation exposure. As the duration and dosage of immunosuppressive agents increase, so does the risk of cutaneous malignancy; this effect is reversed with decreasing the dosage of, or taking a break from, immunosuppressive agents. Heart transplant recipients, requiring the highest rates of immunosuppression, are at much higher risk of cutaneous malignancy than liver transplant recipients, in whom much lower levels of immunosuppression are needed to avoid rejection.[38,46,47] The risk appears to be highest in geographic areas with high UV exposure.[47] When comparing Australian and Dutch organ transplant populations, the Australian patients carried a fourfold increased risk of developing SCC and a fivefold increased risk of developing BCC.[48] This finding underlines the importance of rigorous sun avoidance, particularly among high-risk immunosuppressed individuals.

Link:
Genetics of Skin Cancer (PDQ), Health Professional Version ...


Genetically modified food – Wikipedia

December 24th, 2016 6:43 am

Genetically modified foods or GM foods, also known as genetically engineered foods, are foods produced from organisms that have had changes introduced into their DNA using the methods of genetic engineering. Genetic engineering techniques allow for the introduction of new traits as well as greater control over traits than previous methods such as selective breeding and mutation breeding.[1]

Commercial sale of genetically modified foods began in 1994, when Calgene first marketed its unsuccessful Flavr Savr delayed-ripening tomato.[2][3] Food modifications have primarily focused on cash crops in high demand by farmers, such as soybean, corn, canola, and cotton. Genetically modified crops have been engineered for resistance to pathogens and herbicides and for better nutrient profiles. GM livestock have been developed, although as of November 2013 none were on the market.[4]

There is a scientific consensus[5][6][7][8] that currently available food derived from GM crops poses no greater risk to human health than conventional food,[9][10][11][12][13] but that each GM food needs to be tested on a case-by-case basis before introduction.[14][15][16] Nonetheless, members of the public are much less likely than scientists to perceive GM foods as safe.[17][18][19][20] The legal and regulatory status of GM foods varies by country, with some nations banning or restricting them, and others permitting them with widely differing degrees of regulation.[21][22][23][24]

However, there are ongoing public concerns related to food safety, regulation, labelling, environmental impact, research methods, and the fact that some GM seeds are subject to intellectual property rights owned by corporations.[25]

Genetically modified foods, GM foods or genetically engineered foods, are foods produced from organisms that have had changes introduced into their DNA using the methods of genetic engineering as opposed to traditional cross breeding.[26][27] In the US, the Department of Agriculture (USDA) and the Food and Drug Administration (FDA) favor the use of "genetic engineering" over "genetic modification" as the more precise term; the USDA defines genetic modification to include "genetic engineering or other more traditional methods."[28][29]

According to the World Health Organization, "Genetically modified organisms (GMOs) can be defined as organisms (i.e. plants, animals or microorganisms) in which the genetic material (DNA) has been altered in a way that does not occur naturally by mating and/or natural recombination. The technology is often called 'modern biotechnology' or 'gene technology', sometimes also 'recombinant DNA technology' or 'genetic engineering'. ... Foods produced from or using GM organisms are often referred to as GM foods."[26]

Human-directed genetic manipulation of food began with the domestication of plants and animals through artificial selection at about 10,500 to 10,100 BC.[30]:1 The process of selective breeding, in which organisms with desired traits (and thus with the desired genes) are used to breed the next generation and organisms lacking the trait are not bred, is a precursor to the modern concept of genetic modification (GM).[30]:1[31]:1 With the elucidation of the structure and role of DNA in the 20th century and various advancements in genetic techniques through the 1970s,[32] it became possible to directly alter the DNA and genes within food.

The first genetically modified plant was produced in 1983, using an antibiotic-resistant tobacco plant.[33] Genetically modified microbial enzymes were the first application of genetically modified organisms in food production and were approved in 1988 by the US Food and Drug Administration.[34] In the early 1990s, recombinant chymosin was approved for use in several countries.[34][35] Cheese had typically been made using the enzyme complex rennet that had been extracted from cows' stomach lining. Scientists modified bacteria to produce chymosin, which was also able to clot milk, resulting in cheese curds.[36]

The first genetically modified food approved for release was the Flavr Savr tomato in 1994.[2] Developed by Calgene, it was engineered to have a longer shelf life by inserting an antisense gene that delayed ripening.[37] China was the first country to commercialize a transgenic crop in 1993 with the introduction of virus-resistant tobacco.[38] In 1995, Bacillus thuringiensis (Bt) potato was approved for cultivation, making it the first pesticide-producing crop to be approved in the USA.[39] Other genetically modified crops receiving marketing approval in 1995 were: canola with modified oil composition, Bt maize, cotton resistant to the herbicide bromoxynil, Bt cotton, glyphosate-tolerant soybeans, virus-resistant squash, and another delayed-ripening tomato.[2]

With the creation of golden rice in 2000, scientists had genetically modified food to increase its nutrient value for the first time.[40]

By 2010, 29 countries had planted commercialized biotech crops and a further 31 countries had granted regulatory approval for transgenic crops to be imported.[41] The US was the leading country in the production of GM foods in 2011, with twenty-five GM crops having received regulatory approval.[42] In 2015, 92% of corn, 94% of soybeans, and 94% of cotton produced in the US were genetically modified strains.[43]

The first genetically modified animal to be approved for food use was AquAdvantage salmon in 2015.[44] The salmon were transformed with a growth hormone-regulating gene from a Pacific Chinook salmon and a promoter from an ocean pout enabling it to grow year-round instead of only during spring and summer.[45]

In April 2016, a white button mushroom (Agaricus bisporus) modified using the CRISPR technique received de facto approval in the United States, after the USDA said it would not have to go through the agency's regulatory process. The agency considers the mushroom exempt because the editing process did not involve the introduction of foreign DNA.[46]

The most widely planted GMOs are designed to tolerate herbicides. By 2006 some weed populations had evolved to tolerate some of the same herbicides. Palmer amaranth is a weed that competes with cotton. A native of the southwestern US, it traveled east and was first found resistant to glyphosate in 2006, less than 10 years after GM cotton was introduced.[47][48][49]

Genetically engineered organisms are generated and tested in the laboratory for desired qualities. The most common modification is to add one or more genes to an organism's genome. Less commonly, genes are removed or their expression is increased or silenced or the number of copies of a gene is increased or decreased.

Once satisfactory strains are produced, the producer applies for regulatory approval to field-test them, called a "field release." Field-testing involves cultivating the plants on farm fields or growing animals in a controlled environment. If these field tests are successful, the producer applies for regulatory approval to grow and market the crop. Once approved, specimens (seeds, cuttings, breeding pairs, etc.) are cultivated and sold to farmers. The farmers cultivate and market the new strain. In some cases, the approval covers marketing but not cultivation.

According to the USDA, the number of field releases for genetically engineered organisms has grown from four in 1985 to an average of about 800 per year. Cumulatively, more than 17,000 releases had been approved through September 2013.[50]

Papaya was genetically modified to resist the ringspot virus. 'SunUp' is a transgenic red-fleshed Sunset papaya cultivar that is homozygous for the coat protein gene of PRSV; 'Rainbow' is a yellow-fleshed F1 hybrid developed by crossing 'SunUp' and nontransgenic yellow-fleshed 'Kapoho'.[51] The New York Times stated, "in the early 1990s, Hawaii's papaya industry was facing disaster because of the deadly papaya ringspot virus. Its single-handed savior was a breed engineered to be resistant to the virus. Without it, the state's papaya industry would have collapsed. Today, 80% of Hawaiian papaya is genetically engineered, and there is still no conventional or organic method to control ringspot virus."[52] The GM cultivar was approved in 1998.[53] In China, a transgenic PRSV-resistant papaya was developed by South China Agricultural University and was first approved for commercial planting in 2006; as of 2012, 95% of the papaya grown in Guangdong province and 40% of the papaya grown in Hainan province was genetically modified.[54]

The New Leaf potato, a GM food developed using naturally occurring bacteria found in the soil known as Bacillus thuringiensis (Bt), was made to provide in-plant protection from the yield-robbing Colorado potato beetle.[55] The New Leaf potato, brought to market by Monsanto in the late 1990s, was developed for the fast food market. It was withdrawn in 2001 after retailers rejected it and food processors ran into export problems.[56]

As of 2005, about 13% of the Zucchini (a form of squash) grown in the US was genetically modified to resist three viruses; that strain is also grown in Canada.[57][58]

In 2011, BASF requested the European Food Safety Authority's approval for cultivation and marketing of its Fortuna potato as feed and food. The potato was made resistant to late blight by adding resistant genes blb1 and blb2 that originate from the Mexican wild potato Solanum bulbocastanum.[59][60] In February 2013, BASF withdrew its application.[61]

In 2013, the USDA approved the import of a GM pineapple that is pink in color and that "overexpresses" a gene derived from tangerines and suppresses other genes, increasing production of lycopene. The plant's flowering cycle was changed to provide for more uniform growth and quality. The fruit "does not have the ability to propagate and persist in the environment once they have been harvested," according to USDA APHIS. According to Del Monte's submission, the pineapples are commercially grown in a "monoculture" that prevents seed production, as the plant's flowers aren't exposed to compatible pollen sources. Importation into Hawaii is banned for "plant sanitation" reasons.[62]

In 2014, the USDA approved a genetically modified potato developed by J.R. Simplot Company that contained ten genetic modifications that prevent bruising and produce less acrylamide when fried. The modifications eliminate specific proteins from the potatoes, via RNA interference, rather than introducing novel proteins.[63][64]

In February 2015, Arctic Apples were approved by the USDA,[65] becoming the first genetically modified apple approved for sale in the US.[66] Gene silencing is used to reduce the expression of polyphenol oxidase (PPO), thus preventing the fruit from browning.[67]

Corn used for food and ethanol has been genetically modified to tolerate various herbicides and to express a protein from Bacillus thuringiensis (Bt) that kills certain insects.[68] About 90% of the corn grown in the U.S. was genetically modified in 2010.[69] In the US in 2015, 81% of corn acreage contained the Bt trait and 89% of corn acreage contained the glyphosate-tolerant trait.[43] Corn can be processed into grits, meal and flour as an ingredient in pancakes, muffins, doughnuts, breadings and batters, as well as baby foods, meat products, cereals and some fermented products. Corn-based masa flour and masa dough are used in the production of taco shells, corn chips and tortillas.[70]

Genetically modified soybean has been modified to tolerate herbicides and produce healthier oils.[71] In 2015, 94% of soybean acreage in the U.S. was genetically modified to be glyphosate-tolerant.[43]

Starch or amylum is a polysaccharide produced by all green plants as an energy store. Pure starch is a white, tasteless and odourless powder. It consists of two types of molecules: the linear and helical amylose and the branched amylopectin. Depending on the plant, starch generally contains 20 to 25% amylose and 75 to 80% amylopectin by weight.[72]

Starch can be further modified to create modified starch for specific purposes,[73] including creation of many of the sugars in processed foods.

Lecithin is a naturally occurring lipid. It can be found in egg yolks and oil-producing plants. It is an emulsifier and thus is used in many foods. Corn, soy and safflower oil are sources of lecithin, though the majority of lecithin commercially available is derived from soy.[74][75][76][page needed] Sufficiently processed lecithin is often undetectable with standard testing practices.[72][not in citation given] According to the FDA, no evidence shows or suggests hazard to the public when lecithin is used at common levels. Lecithin added to foods amounts to only 2 to 10 percent of the 1 to 5 g of phosphoglycerides consumed daily on average.[74][75] Nonetheless, consumer concerns about GM food extend to such products.[77][better source needed] This concern led to policy and regulatory changes in Europe in 2000,[citation needed] when Regulation (EC) 50/2000 was passed,[78] which required labelling of food containing additives derived from GMOs, including lecithin.[citation needed] Because of the difficulty of detecting the origin of derivatives like lecithin with current testing practices, European regulations require those who wish to sell lecithin in Europe to employ a comprehensive system of identity preservation (IP).[79][verification needed][80][page needed]

The US imports 10% of its sugar, while the remaining 90% is extracted from sugar beet and sugarcane. After deregulation in 2005, glyphosate-resistant sugar beet was extensively adopted in the United States. 95% of beet acres in the US were planted with glyphosate-resistant seed in 2011.[81] GM sugar beets are approved for cultivation in the US, Canada and Japan; the vast majority are grown in the US. GM beets are approved for import and consumption in Australia, Canada, Colombia, the EU, Japan, Korea, Mexico, New Zealand, the Philippines, the Russian Federation and Singapore.[82] Pulp from the refining process is used as animal feed. The sugar produced from GM sugar beets contains no DNA or protein; it is just sucrose that is chemically indistinguishable from sugar produced from non-GM sugar beets.[72][83] Independent analyses conducted by internationally recognized laboratories found that sugar from Roundup Ready sugar beets is identical to the sugar from comparably grown conventional (non-Roundup Ready) sugar beets. And, like all sugar, sugar from Roundup Ready sugar beets contains no genetic material or detectable protein (including the protein that provides glyphosate tolerance).[84]

Most vegetable oil used in the US is produced from GM crops: canola,[85] corn,[86][87] cotton[88] and soybeans.[89] Vegetable oil is sold directly to consumers as cooking oil, shortening and margarine,[90] and is used in prepared foods. There is a vanishingly small amount of protein or DNA from the original crop in vegetable oil.[72][91] Vegetable oil is made of triglycerides extracted from plants or seeds and then refined, and may be further processed via hydrogenation to turn liquid oils into solids. The refining process[92] removes all, or nearly all, non-triglyceride ingredients.[93] Medium-chain triglycerides (MCTs) offer an alternative to conventional fats and oils. The length of a fatty acid influences its fat absorption during the digestive process. Fatty acids in the middle position on the glycerol molecules appear to be absorbed more easily and influence metabolism more than fatty acids on the end positions. Unlike ordinary fats, MCTs are metabolized like carbohydrates. They have exceptional oxidative stability, and prevent foods from turning rancid readily.[94]

Livestock and poultry are raised on animal feed, much of which is composed of the leftovers from processing crops, including GM crops. For example, approximately 43% of a canola seed is oil. What remains after oil extraction is a meal that becomes an ingredient in animal feed and contains canola protein.[95] Likewise, the bulk of the soybean crop is grown for oil and meal. The high-protein defatted and toasted soy meal becomes livestock feed and dog food. 98% of the US soybean crop goes for livestock feed.[96][97] In 2011, 49% of the US maize harvest was used for livestock feed (including the percentage of waste from distillers grains).[98] "Despite methods that are becoming more and more sensitive, tests have not yet been able to establish a difference in the meat, milk, or eggs of animals depending on the type of feed they are fed. It is impossible to tell if an animal was fed GM soy just by looking at the resulting meat, dairy, or egg products. The only way to verify the presence of GMOs in animal feed is to analyze the origin of the feed itself."[99]

A 2012 literature review of studies evaluating the effect of GM feed on the health of animals did not find evidence that animals were adversely affected, although small biological differences were occasionally found. The studies included in the review ranged from 90 days to two years, with several of the longer studies considering reproductive and intergenerational effects.[100]

Rennet is a mixture of enzymes used to coagulate milk into cheese. Originally it was available only from the fourth stomach of calves, and was scarce and expensive, or was available from microbial sources, which often produced unpleasant tastes. Genetic engineering made it possible to extract rennet-producing genes from animal stomachs and insert them into bacteria, fungi or yeasts to make them produce chymosin, the key enzyme.[101][102] The modified microorganism is killed after fermentation. Chymosin is isolated from the fermentation broth, so that the Fermentation-Produced Chymosin (FPC) used by cheese producers has an amino acid sequence that is identical to bovine rennet.[103] The majority of the applied chymosin is retained in the whey. Trace quantities of chymosin may remain in cheese.[103]

FPC was the first artificially produced enzyme to be approved by the US Food and Drug Administration.[34][35] FPC products have been on the market since 1990 and as of 2015 had yet to be surpassed in commercial markets.[104] In 1999, about 60% of US hard cheese was made with FPC.[105] Its global market share approached 80%.[106] By 2008, approximately 80% to 90% of commercially made cheeses in the US and Britain were made using FPC.[103]

In some countries, recombinant (GM) bovine somatotropin (also called rBST, or bovine growth hormone or BGH) is approved for administration to increase milk production. rBST may be present in milk from rBST treated cows, but it is destroyed in the digestive system and even if directly injected into the human bloodstream, has no observable effect on humans.[107][108][109] The FDA, World Health Organization, American Medical Association, American Dietetic Association and the National Institutes of Health have independently stated that dairy products and meat from rBST-treated cows are safe for human consumption.[110] However, on 30 September 2010, the United States Court of Appeals, Sixth Circuit, analyzing submitted evidence, found a "compositional difference" between milk from rBGH-treated cows and milk from untreated cows.[111][112] The court stated that milk from rBGH-treated cows has: increased levels of the hormone Insulin-like growth factor 1 (IGF-1); higher fat content and lower protein content when produced at certain points in the cow's lactation cycle; and more somatic cell counts, which may "make the milk turn sour more quickly."[112]

Genetically modified livestock are organisms from the group of cattle, sheep, pigs, goats, birds, horses and fish kept for human consumption, whose genetic material (DNA) has been altered using genetic engineering techniques. In some cases, the aim is to introduce a new trait to the animals which does not occur naturally in the species, i.e. transgenesis.

A 2003 review published on behalf of Food Standards Australia New Zealand examined transgenic experimentation on terrestrial livestock species as well as aquatic species such as fish and shellfish. The review examined the molecular techniques used for experimentation, techniques for tracing the transgenes in animals and products, and issues regarding transgene stability.[113]

Some mammals typically used for food production have been modified to produce non-food products, a practice sometimes called Pharming.

A GM salmon, awaiting regulatory approval[114][115][116] since 1997,[117] was approved for human consumption by the American FDA in November 2015, to be raised in specific land-based hatcheries in Canada and Panama.[118]

The use of genetically modified food-grade organisms as recombinant vaccine expression hosts and delivery vehicles can open new avenues for vaccinology. Considering that oral immunization is a beneficial approach in terms of costs, patient comfort, and protection of mucosal tissues, the use of food-grade organisms can lead to highly advantageous vaccines in terms of costs, easy administration, and safety. The organisms currently used for this purpose are bacteria (Lactobacillus and Bacillus), yeasts, algae, plants, and insect species. Several such organisms are under clinical evaluation, and the current adoption of this technology by the industry indicates a potential to benefit global healthcare systems.[119]

There is a scientific consensus[120][121][122][123] that currently available food derived from GM crops poses no greater risk to human health than conventional food,[124][125][126][127][128] but that each GM food needs to be tested on a case-by-case basis before introduction.[129][130][131] Nonetheless, members of the public are much less likely than scientists to perceive GM foods as safe.[132][133][134][135]

Opponents claim that long-term health risks have not been adequately assessed and propose various combinations of additional testing, labeling,[136] or removal from the market.[137][138][139][140] The advocacy group European Network of Scientists for Social and Environmental Responsibility (ENSSER) disputes the claim that "science" supports the safety of current GM foods, proposing that each GM food must be judged on a case-by-case basis.[141] The Canadian Association of Physicians for the Environment called for removing GM foods from the market pending long-term health studies.[137] Multiple disputed studies have claimed health effects relating to GM foods or to the pesticides used with them.[142]

The legal and regulatory status of GM foods varies by country, with some nations banning or restricting them, and others permitting them with widely differing degrees of regulation.[143][144][145][146] Countries such as the United States, Canada, Lebanon and Egypt use substantial equivalence to determine if further testing is required, while many countries such as those in the European Union, Brazil and China only authorize GMO cultivation on a case-by-case basis. In the U.S., the FDA determined that GMOs are "Generally Recognized as Safe" (GRAS) and therefore do not require additional testing if the GMO product is substantially equivalent to the non-modified product.[147] If new substances are found, further testing may be required to satisfy concerns over potential toxicity, allergenicity, possible gene transfer to humans or genetic outcrossing to other organisms.[26]

Government regulation of GMO development and release varies widely between countries. Marked differences separate GMO regulation in the U.S. and GMO regulation in the European Union.[148] Regulation also varies depending on the intended product's use. For example, a crop not intended for food use is generally not reviewed by authorities responsible for food safety.[149]

In the U.S., three government organizations regulate GMOs. The FDA checks the chemical composition of organisms for potential allergens. The United States Department of Agriculture (USDA) supervises field testing and monitors the distribution of GM seeds. The United States Environmental Protection Agency (EPA) is responsible for monitoring pesticide usage, including plants modified to contain proteins toxic to insects. Like the USDA, the EPA also oversees field testing and the distribution of crops that have had contact with pesticides to ensure environmental safety.[150][better source needed] In 2015 the Obama administration announced that it would update the way the government regulated GM crops.[151]

In 1992 FDA published "Statement of Policy: Foods derived from New Plant Varieties." This statement is a clarification of FDA's interpretation of the Food, Drug, and Cosmetic Act with respect to foods produced from new plant varieties developed using recombinant deoxyribonucleic acid (rDNA) technology. FDA encouraged developers to consult with the FDA regarding any bioengineered foods in development. The FDA says developers routinely do reach out for consultations. In 1996 FDA updated consultation procedures.[152][153]

As of 2015, 64 countries require labeling of GMO products in the marketplace.[154]

US and Canadian national policy is to require a label only given significant composition differences or documented health impacts, although some individual US states (Vermont, Connecticut and Maine) enacted laws requiring them.[155][156][157][158] In July 2016, Public Law 114-214 was enacted to regulate labeling of GMO food on a national basis.

In some jurisdictions, the labeling requirement depends on the relative quantity of GMO in the product. A study that investigated voluntary labeling in South Africa found that 31% of products labeled as GMO-free had a GM content above 1.0%.[159]

In Europe all food (including processed food) or feed that contains greater than 0.9% GMOs must be labelled.[160]

Testing on GMOs in food and feed is routinely done using molecular techniques such as PCR and bioinformatics.[161]
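
As a rough illustration of the bioinformatics side of such testing, the Python sketch below performs a toy in silico PCR screen: it looks for a forward primer and the reverse complement of a reverse primer in a template sequence and reports the predicted amplicon. All sequences are invented placeholders, not validated GMO detection primers.

```python
# Toy in silico PCR screen. Sequences are invented placeholders,
# not validated GMO detection primers.

def revcomp(seq: str) -> str:
    """Return the reverse complement of a DNA sequence."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def in_silico_pcr(template: str, fwd: str, rev: str):
    """Return the predicted amplicon if both primers bind the template, else None."""
    start = template.find(fwd)
    if start == -1:
        return None
    end = template.find(revcomp(rev), start + len(fwd))
    if end == -1:
        return None
    return template[start:end + len(rev)]

# Hypothetical transgene junction sequence and primer pair.
template = "TTACGATGCCGTTAGCGGATCCATTCGAATTCGTACGGATTAGC"
fwd_primer = "ATGCCGTTAGC"
rev_primer = revcomp("CGTACGGA")  # reverse primer as it would be synthesized

amplicon = in_silico_pcr(template, fwd_primer, rev_primer)
if amplicon:
    print(f"GM target detected: predicted amplicon of {len(amplicon)} bp")
else:
    print("No amplification expected with this primer pair")
```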

In a January 2010 paper, the extraction and detection of DNA along a complete industrial soybean oil processing chain were described to monitor the presence of Roundup Ready (RR) soybean: "The amplification of soybean lectin gene by end-point polymerase chain reaction (PCR) was successfully achieved in all the steps of extraction and refining processes, until the fully refined soybean oil. The amplification of RR soybean by PCR assays using event-specific primers was also achieved for all the extraction and refining steps, except for the intermediate steps of refining (neutralisation, washing and bleaching) possibly due to sample instability. The real-time PCR assays using specific probes confirmed all the results and proved that it is possible to detect and quantify genetically modified organisms in the fully refined soybean oil. To our knowledge, this has never been reported before and represents an important accomplishment regarding the traceability of genetically modified organisms in refined oils."[162]
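
For the quantification step mentioned in the excerpt, laboratories commonly relate an event-specific real-time PCR signal to a taxon reference gene (such as soybean lectin) and a calibrator of known GM content. The sketch below shows one common way to do this, the 2^-ΔΔCt method; the Ct values, and the assumption of roughly 100% amplification efficiency, are illustrative only and are not taken from the cited study.

```python
# Minimal sketch of relative GMO quantification from real-time PCR Ct values
# using the 2^-ddCt approach. All Ct values are hypothetical and assume
# near-100% amplification efficiency for both assays.

def relative_gm_content(ct_event_sample, ct_ref_sample,
                        ct_event_calibrator, ct_ref_calibrator):
    """Estimate GM content of a sample relative to a calibrator of known GM content.

    ct_event_* : Ct for the event-specific target (e.g., an RR soybean junction)
    ct_ref_*   : Ct for the taxon reference gene (e.g., soybean lectin)
    """
    delta_sample = ct_event_sample - ct_ref_sample
    delta_calibrator = ct_event_calibrator - ct_ref_calibrator
    ddct = delta_sample - delta_calibrator
    return 2 ** (-ddct)  # GM fraction relative to the calibrator

# Hypothetical run: a refined-oil extract measured against a 100% GM calibrator.
fraction = relative_gm_content(ct_event_sample=31.5, ct_ref_sample=24.0,
                               ct_event_calibrator=26.0, ct_ref_calibrator=25.0)
print(f"Estimated GM content: {fraction * 100:.1f}% of the calibrator level")
```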

According to Thomas Redick, detection and prevention of cross-pollination is possible through the suggestions offered by the Farm Service Agency (FSA) and Natural Resources Conservation Service (NRCS). Suggestions include educating farmers on the importance of coexistence, providing farmers with tools and incentives to promote coexistence, conducting research to understand and monitor gene flow, providing assurance of quality and diversity in crops, and providing compensation for actual economic losses for farmers.[163]

The genetically modified foods controversy consists of a set of disputes over the use of food made from genetically modified crops. The disputes involve consumers, farmers, biotechnology companies, governmental regulators, non-governmental organizations, environmental and political activists and scientists. The major disagreements include whether GM foods can be safely consumed, harm the environment and/or are adequately tested and regulated.[138][164] The objectivity of scientific research and publications has been challenged.[137] Farming-related disputes include the use and impact of pesticides, seed production and use, side effects on non-GMO crops/farms,[165] and potential control of the GM food supply by seed companies.[137]

The conflicts have continued since GM foods were invented. They have occupied the media, the courts, local, regional and national governments and international organizations.

The literature about Biodiversity and the GE food/feed consumption has sometimes resulted in animated debate regarding the suitability of the experimental designs, the choice of the statistical methods or the public accessibility of data. Such debate, even if positive and part of the natural process of review by the scientific community, has frequently been distorted by the media and often used politically and inappropriately in anti-GE crops campaigns.

Domingo, José L.; Bordonaba, Jordi Giné (2011). "A literature review on the safety assessment of genetically modified plants" (PDF). Environment International. 37: 734–742. doi:10.1016/j.envint.2011.01.003. PMID 21296423. In spite of this, the number of studies specifically focused on safety assessment of GM plants is still limited. However, it is important to remark that for the first time, a certain equilibrium in the number of research groups suggesting, on the basis of their studies, that a number of varieties of GM products (mainly maize and soybeans) are as safe and nutritious as the respective conventional non-GM plant, and those raising still serious concerns, was observed. Moreover, it is worth mentioning that most of the studies demonstrating that GM foods are as nutritional and safe as those obtained by conventional breeding, have been performed by biotechnology companies or associates, which are also responsible of commercializing these GM plants. Anyhow, this represents a notable advance in comparison with the lack of studies published in recent years in scientific journals by those companies.

Krimsky, Sheldon (2015). "An Illusory Consensus behind GMO Health Assessment" (PDF). Science, Technology, & Human Values. 40: 1–32. doi:10.1177/0162243915598381. I began this article with the testimonials from respected scientists that there is literally no scientific controversy over the health effects of GMOs. My investigation into the scientific literature tells another story.

And contrast:

Panchin, Alexander Y.; Tuzhikov, Alexander I. (January 14, 2016). "Published GMO studies find no evidence of harm when corrected for multiple comparisons". Critical Reviews in Biotechnology: 1–5. doi:10.3109/07388551.2015.1130684. ISSN 0738-8551. PMID 26767435. Here, we show that a number of articles, some of which have strongly and negatively influenced the public opinion on GM crops and even provoked political actions, such as GMO embargo, share common flaws in the statistical evaluation of the data. Having accounted for these flaws, we conclude that the data presented in these articles does not provide any substantial evidence of GMO harm.

The presented articles suggesting possible harm of GMOs received high public attention. However, despite their claims, they actually weaken the evidence for the harm and lack of substantial equivalency of studied GMOs. We emphasize that with over 1783 published articles on GMOs over the last 10 years it is expected that some of them should have reported undesired differences between GMOs and conventional crops even if no such differences exist in reality.

and

Yang, Y.T.; Chen, B. (2016). "Governing GMOs in the USA: science, law and public health". Journal of the Science of Food and Agriculture. 96: 1851–1855. doi:10.1002/jsfa.7523. PMID 26536836. It is therefore not surprising that efforts to require labeling and to ban GMOs have been a growing political issue in the USA (citing Domingo and Bordonaba, 2011).

Overall, a broad scientific consensus holds that currently marketed GM food poses no greater risk than conventional food... Major national and international science and medical associations have stated that no adverse human health effects related to GMO food have been reported or substantiated in peer-reviewed literature to date.

Despite various concerns, today, the American Association for the Advancement of Science, the World Health Organization, and many independent international science organizations agree that GMOs are just as safe as other foods. Compared with conventional breeding techniques, genetic engineering is far more precise and, in most cases, less likely to create an unexpected outcome.

Pinholster, Ginger (October 25, 2012). "AAAS Board of Directors: Legally Mandating GM Food Labels Could "Mislead and Falsely Alarm Consumers"". American Association for the Advancement of Science. Retrieved February 8, 2016.

"REPORT 2 OF THE COUNCIL ON SCIENCE AND PUBLIC HEALTH (A-12): Labeling of Bioengineered Foods" (PDF). American Medical Association. 2012. Retrieved March 19, 2016. Bioengineered foods have been consumed for close to 20 years, and during that time, no overt consequences on human health have been reported and/or substantiated in the peer-reviewed literature.

GM foods currently available on the international market have passed safety assessments and are not likely to present risks for human health. In addition, no effects on human health have been shown as a result of the consumption of such foods by the general population in the countries where they have been approved. Continuous application of safety assessments based on the Codex Alimentarius principles and, where appropriate, adequate post market monitoring, should form the basis for ensuring the safety of GM foods.

"Genetically modified foods and health: a second interim statement" (PDF). British Medical Association. March 2004. Retrieved March 21, 2016. In our view, the potential for GM foods to cause harmful health effects is very small and many of the concerns expressed apply with equal vigour to conventionally derived foods. However, safety concerns cannot, as yet, be dismissed completely on the basis of information currently available.

When seeking to optimise the balance between benefits and risks, it is prudent to err on the side of caution and, above all, learn from accumulating knowledge and experience. Any new technology such as genetic modification must be examined for possible benefits and risks to human health and the environment. As with all novel foods, safety assessments in relation to GM foods must be made on a case-by-case basis.

Members of the GM jury project were briefed on various aspects of genetic modification by a diverse group of acknowledged experts in the relevant subjects. The GM jury reached the conclusion that the sale of GM foods currently available should be halted and the moratorium on commercial growth of GM crops should be continued. These conclusions were based on the precautionary principle and lack of evidence of any benefit. The Jury expressed concern over the impact of GM crops on farming, the environment, food safety and other potential health effects.

The Royal Society review (2002) concluded that the risks to human health associated with the use of specific viral DNA sequences in GM plants are negligible, and while calling for caution in the introduction of potential allergens into food crops, stressed the absence of evidence that commercially available GM foods cause clinical allergic manifestations. The BMA shares the view that there is no robust evidence to prove that GM foods are unsafe but we endorse the call for further research and surveillance to provide convincing evidence of safety and benefit.

The literature about Biodiversity and the GE food/feed consumption has sometimes resulted in animated debate regarding the suitability of the experimental designs, the choice of the statistical methods or the public accessibility of data. Such debate, even if positive and part of the natural process of review by the scientific community, has frequently been distorted by the media and often used politically and inappropriately in anti-GE crops campaigns.

Domingo, Jos L.; Bordonaba, Jordi Gin (2011). "A literature review on the safety assessment of genetically modified plants" (PDF). Environment International. 37: 734742. doi:10.1016/j.envint.2011.01.003. PMID21296423. In spite of this, the number of studies specifically focused on safety assessment of GM plants is still limited. However, it is important to remark that for the first time, a certain equilibrium in the number of research groups suggesting, on the basis of their studies, that a number of varieties of GM products (mainly maize and soybeans) are as safe and nutritious as the respective conventional non-GM plant, and those raising still serious concerns, was observed. Moreover, it is worth mentioning that most of the studies demonstrating that GM foods are as nutritional and safe as those obtained by conventional breeding, have been performed by biotechnology companies or associates, which are also responsible of commercializing these GM plants. Anyhow, this represents a notable advance in comparison with the lack of studies published in recent years in scientific journals by those companies.

Krimsky, Sheldon (2015). "An Illusory Consensus behind GMO Health Assessment" (PDF). Science, Technology, & Human Values. 40: 132. doi:10.1177/0162243915598381. I began this article with the testimonials from respected scientists that there is literally no scientific controversy over the health effects of GMOs. My investigation into the scientific literature tells another story.

And contrast:

Panchin, Alexander Y.; Tuzhikov, Alexander I. (January 14, 2016). "Published GMO studies find no evidence of harm when corrected for multiple comparisons". Critical Reviews in Biotechnology: 15. doi:10.3109/07388551.2015.1130684. ISSN0738-8551. PMID26767435. Here, we show that a number of articles some of which have strongly and negatively influenced the public opinion on GM crops and even provoked political actions, such as GMO embargo, share common flaws in the statistical evaluation of the data. Having accounted for these flaws, we conclude that the data presented in these articles does not provide any substantial evidence of GMO harm.

The presented articles suggesting possible harm of GMOs received high public attention. However, despite their claims, they actually weaken the evidence for the harm and lack of substantial equivalency of studied GMOs. We emphasize that with over 1783 published articles on GMOs over the last 10 years it is expected that some of them should have reported undesired differences between GMOs and conventional crops even if no such differences exist in reality.
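
To make the multiple-comparisons argument in the Panchin and Tuzhikov excerpt concrete, the following short Python sketch (not taken from the paper; the 0.05 significance level is an assumed convention) shows how many chance "positive" findings would be expected from that many independent tests, and how a Bonferroni correction tightens the per-test threshold.

# Illustrative sketch (not from the cited paper): with many independent
# significance tests at a conventional alpha of 0.05 (assumed), some
# "positive" findings are expected by chance; a Bonferroni correction
# shrinks the per-test threshold accordingly.
n_tests = 1783    # number of published GMO articles cited in the passage
alpha = 0.05

expected_chance_findings = n_tests * alpha
bonferroni_threshold = alpha / n_tests

print(f"Expected chance findings: {expected_chance_findings:.0f}")            # ~89
print(f"Bonferroni-corrected per-test threshold: {bonferroni_threshold:.2e}")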

and

Yang, Y.T.; Chen, B. (2016). "Governing GMOs in the USA: science, law and public health". Journal of the Science of Food and Agriculture. 96: 1851–1855. doi:10.1002/jsfa.7523. PMID 26536836. It is therefore not surprising that efforts to require labeling and to ban GMOs have been a growing political issue in the USA (citing Domingo and Bordonaba, 2011).

Overall, a broad scientific consensus holds that currently marketed GM food poses no greater risk than conventional food... Major national and international science and medical associations have stated that no adverse human health effects related to GMO food have been reported or substantiated in peer-reviewed literature to date.

Despite various concerns, today, the American Association for the Advancement of Science, the World Health Organization, and many independent international science organizations agree that GMOs are just as safe as other foods. Compared with conventional breeding techniques, genetic engineering is far more precise and, in most cases, less likely to create an unexpected outcome.

Pinholster, Ginger (October 25, 2012). "AAAS Board of Directors: Legally Mandating GM Food Labels Could "Mislead and Falsely Alarm Consumers"". American Association for the Advancement of Science. Retrieved February 8, 2016.

"REPORT 2 OF THE COUNCIL ON SCIENCE AND PUBLIC HEALTH (A-12): Labeling of Bioengineered Foods" (PDF). American Medical Association. 2012. Retrieved March 19, 2016. Bioengineered foods have been consumed for close to 20 years, and during that time, no overt consequences on human health have been reported and/or substantiated in the peer-reviewed literature.

GM foods currently available on the international market have passed safety assessments and are not likely to present risks for human health. In addition, no effects on human health have been shown as a result of the consumption of such foods by the general population in the countries where they have been approved. Continuous application of safety assessments based on the Codex Alimentarius principles and, where appropriate, adequate post market monitoring, should form the basis for ensuring the safety of GM foods.

"Genetically modified foods and health: a second interim statement" (PDF). British Medical Association. March 2004. Retrieved March 21, 2016. In our view, the potential for GM foods to cause harmful health effects is very small and many of the concerns expressed apply with equal vigour to conventionally derived foods. However, safety concerns cannot, as yet, be dismissed completely on the basis of information currently available.

When seeking to optimise the balance between benefits and risks, it is prudent to err on the side of caution and, above all, learn from accumulating knowledge and experience. Any new technology such as genetic modification must be examined for possible benefits and risks to human health and the environment. As with all novel foods, safety assessments in relation to GM foods must be made on a case-by-case basis.

See the original post:
Genetically modified food - Wikipedia

Read More...

Human eye – Wikipedia

December 24th, 2016 6:42 am

This article is about the human eye. For eyes in general, see Eye.

The human eye is an organ that reacts to light and has several purposes. As a sense organ, the mammalian eye allows vision. Rod and cone cells in the retina allow conscious light perception and vision including color differentiation and the perception of depth. The human eye can distinguish about 10 million colors[1] and is possibly capable of detecting a single photon.[2]

Similar to the eyes of other mammals, the human eye's non-image-forming photosensitive ganglion cells in the retina receive light signals which affect adjustment of the size of the pupil, regulation and suppression of the hormone melatonin and entrainment of the body clock.[3]

The eye is not shaped like a perfect sphere, rather it is a fused two-piece unit, composed of the anterior segment and the posterior segment. The anterior segment is made up of the cornea, iris and lens. The cornea is transparent and more curved, and is linked to the larger posterior segment, composed of the vitreous, retina, choroid and the outer white shell called the sclera. The cornea is typically about 11.5 mm (0.45 in) in diameter, and 0.5 mm (500 μm) in thickness near its center. The posterior segment constitutes the remaining five-sixths; its diameter is typically about 24 mm. The cornea and sclera are connected by an area termed the limbus. The iris is the pigmented circular structure concentrically surrounding the center of the eye, the pupil, which appears to be black. The size of the pupil, which controls the amount of light entering the eye, is adjusted by the iris' dilator and sphincter muscles.

Light energy enters the eye through the cornea, through the pupil and then through the lens. The lens shape is changed for near focus (accommodation) and is controlled by the ciliary muscle. Photons of light falling on the light-sensitive cells of the retina (photoreceptor cones and rods) are converted into electrical signals that are transmitted to the brain by the optic nerve and interpreted as sight and vision.

Dimensions typically differ among adults by only one or two millimetres, remarkably consistent across different ethnicities. The vertical measure, generally less than the horizontal, is about 24 mm. The transverse size of a human adult eye is approximately 24.2 mm and the sagittal size is 23.7 mm, with no significant difference between sexes and age groups. Strong correlation has been found between the transverse diameter and the width of the orbit (r = 0.88).[4] The typical adult eye has an anterior to posterior diameter of 24 millimetres, a volume of six cubic centimetres (0.4 cu. in.),[5] and a mass of 7.5 grams (weight of 0.25 oz.).[citation needed]

The eyeball grows rapidly, increasing from about 16–17 millimetres (about 0.65 inch) at birth to 22.5–23 mm (approx. 0.89 in) by three years of age. By age 13, the eye attains its full size.

The eye is made up of three coats, or layers, enclosing various anatomical structures. The outermost layer, known as the fibrous tunic, is composed of the cornea and sclera. The middle layer, known as the vascular tunic or uvea, consists of the choroid, ciliary body, pigmented epithelium and iris. The innermost is the retina, which gets its oxygenation from the blood vessels of the choroid (posteriorly) as well as the retinal vessels (anteriorly).

The spaces of the eye are filled with the aqueous humour anteriorly, between the cornea and lens, and the vitreous body, a jelly-like substance, behind the lens, filling the entire posterior cavity. The aqueous humour is a clear watery fluid that is contained in two areas: the anterior chamber between the cornea and the iris, and the posterior chamber between the iris and the lens. The lens is suspended from the ciliary body by the suspensory ligament (Zonule of Zinn), made up of hundreds of fine transparent fibers which transmit muscular forces to change the shape of the lens for accommodation (focusing). The vitreous body is a clear substance composed of water and proteins, which give it a jelly-like and sticky composition.[6]

The approximate field of view of an individual human eye (measured from the fixation point, i.e., the point at which one's gaze is directed) varies by facial anatomy, but is typically 30° superior (up, limited by the brow), 45° nasal (limited by the nose), 70° inferior (down), and 100° temporal (towards the temple).[7][8][9] For both eyes combined, the (binocular) visual field is about 100° vertical and 200° horizontal.[10][11] When viewed at large angles from the side, the iris and pupil may still be visible to the viewer, indicating the person has peripheral vision possible at that angle.[12][13][14]

About 15° temporal and 1.5° below the horizontal is the blind spot created by the optic nerve nasally, which is roughly 7.5° high and 5.5° wide.[15]
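
As a rough geometric illustration of the figures above, the angular extent of the blind spot can be converted into an approximate physical size on the retina. The sketch below assumes a posterior nodal distance of about 17 mm, a typical textbook value that is not stated in the text.

import math

# Rough sketch: convert the quoted angular extents of the blind spot into an
# approximate size on the retina, assuming a posterior nodal distance of
# about 17 mm (an assumed textbook value, not stated in the text).
NODAL_DISTANCE_MM = 17.0

def retinal_extent_mm(angle_deg, d=NODAL_DISTANCE_MM):
    """Chord subtended on the retina by a small visual angle."""
    return 2 * d * math.tan(math.radians(angle_deg) / 2)

print(f"Blind spot height: {retinal_extent_mm(7.5):.1f} mm")   # ~2.2 mm
print(f"Blind spot width:  {retinal_extent_mm(5.5):.1f} mm")   # ~1.6 mm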

The retina has a static contrast ratio of around 100:1 (about 6.5 f-stops). As soon as the eye moves rapidly to acquire a target (saccades), it re-adjusts its exposure by adjusting the iris, which adjusts the size of the pupil. Initial dark adaptation takes place in approximately four seconds of profound, uninterrupted darkness; full adaptation through adjustments in retinal rod photoreceptors is 80% complete in thirty minutes. The process is nonlinear and multifaceted, so an interruption by light exposure requires restarting the dark adaptation process over again. Full adaptation is dependent on good blood flow; thus dark adaptation may be hampered by retinal disease, poor vascular circulation and high altitude exposure.[citation needed]

The human eye can detect a luminance range of 10^14, or one hundred trillion (100,000,000,000,000) to one (about 46.5 f-stops), from 10^-6 cd/m2, or one millionth (0.000001) of a candela per square meter, to 10^8 cd/m2, or one hundred million (100,000,000) candelas per square meter.[16][17][18] This range does not include looking at the midday sun (10^9 cd/m2)[19] or lightning discharge.

At the low end of the range is the absolute threshold of vision for a steady light across a wide field of view, about 10^-6 cd/m2 (0.000001 candela per square meter).[20][21] The upper end of the range is given in terms of normal visual performance as 10^8 cd/m2 (100,000,000 or one hundred million candelas per square meter).[22]
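
The f-stop figures quoted above are simply base-2 logarithms of the luminance ratios. A minimal check, in Python:

import math

# Sketch: express the luminance figures quoted above in photographic f-stops
# (each stop is a factor of two in luminance).
def stops(ratio):
    return math.log2(ratio)

total_range = 1e8 / 1e-6     # ~10^14, full operating range of human vision
static_contrast = 100        # ~100:1 static contrast ratio of the retina

print(f"Full range:      {stops(total_range):.1f} stops")     # ~46.5
print(f"Static contrast: {stops(static_contrast):.1f} stops") # ~6.6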

The eye includes a lens similar to lenses found in optical instruments such as cameras and the same physics principles can be applied. The pupil of the human eye is its aperture; the iris is the diaphragm that serves as the aperture stop. Refraction in the cornea causes the effective aperture (the entrance pupil) to differ slightly from the physical pupil diameter. The entrance pupil is typically about 4 mm in diameter, although it can range from 2 mm (f/8.3) in a brightly lit place to 8 mm (f/2.1) in the dark. The latter value decreases slowly with age; older people's eyes sometimes dilate to not more than 5–6 mm in the dark, and may be as small as 1 mm in the light.[23][24]
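
The quoted f-numbers follow from the standard relation N = f/D between focal length and entrance-pupil diameter. The sketch below back-calculates them using an effective focal length of roughly 16.7 mm; that value is inferred from the quoted figures rather than stated in the text.

# Sketch: f-number N = f / D, with f the eye's effective focal length and D
# the entrance-pupil diameter. A focal length of ~16.7 mm is inferred from
# the quoted values, not stated in the text.
focal_length_mm = 16.7

for pupil_mm in (2, 4, 8):
    print(f"{pupil_mm} mm pupil -> f/{focal_length_mm / pupil_mm:.1f}")
# 2 mm -> f/8.3, 4 mm -> f/4.2, 8 mm -> f/2.1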

The visual system in the human brain is too slow to process information if images are slipping across the retina at more than a few degrees per second.[25] Thus, to be able to see while moving, the brain must compensate for the motion of the head by turning the eyes. Frontal-eyed animals have a small area of the retina with very high visual acuity, the fovea centralis. It covers about 2 degrees of visual angle in people. To get a clear view of the world, the brain must turn the eyes so that the image of the object of regard falls on the fovea. Any failure to make eye movements correctly can lead to serious visual degradation.

Having two eyes allows the brain to determine the depth and distance of an object, called stereovision, and gives the sense of three-dimensionality to the vision. Both eyes must point accurately enough that the object of regard falls on corresponding points of the two retinas to stimulate stereovision; otherwise, double vision might occur. Some persons with congenitally crossed eyes tend to ignore one eye's vision, thus do not suffer double vision, and do not have stereovision. The movements of the eye are controlled by six muscles attached to each eye, which allow it to elevate, depress, converge, diverge and roll. These muscles are controlled both voluntarily and involuntarily to track objects and correct for simultaneous head movements.

Each eye has six muscles that control its movements: the lateral rectus, the medial rectus, the inferior rectus, the superior rectus, the inferior oblique, and the superior oblique. When the muscles exert different tensions, a torque is exerted on the globe that causes it to turn, in almost pure rotation, with only about one millimeter of translation.[26] Thus, the eye can be considered as undergoing rotations about a single point in the center of the eye.

Rapid eye movement, REM, typically refers to the sleep stage during which the most vivid dreams occur. During this stage, the eyes move rapidly. It is not in itself a unique form of eye movement.

Saccades are quick, simultaneous movements of both eyes in the same direction controlled by the frontal lobe of the brain. Some irregular drifts, movements, smaller than a saccade and larger than a microsaccade, subtend up to one tenth of a degree.

Even when looking intently at a single spot, the eyes drift around. This ensures that individual photosensitive cells are continually stimulated in different degrees. Without changing input, these cells would otherwise stop generating output. Microsaccades move the eye no more than a total of 0.2° in adult humans.

The vestibulo-ocular reflex is a reflex eye movement that stabilizes images on the retina during head movement by producing an eye movement in the direction opposite to head movement in response to neural input from the vestibular system of the inner ear, thus maintaining the image in the center of the visual field. For example, when the head moves to the right, the eyes move to the left. This applies for head movements up and down, left and right, and tilt to the right and left, all of which give input to the ocular muscles to maintain visual stability.

Eyes can also follow a moving object around. This tracking is less accurate than the vestibulo-ocular reflex, as it requires the brain to process incoming visual information and supply feedback. Following an object moving at constant speed is relatively easy, though the eyes will often make saccadic jerks to keep up. The smooth pursuit movement can move the eye at up to 100°/s in adult humans.

It is more difficult to visually estimate speed in low light conditions or while moving, unless there is another point of reference for determining speed.

The optokinetic reflex is a combination of a saccade and smooth pursuit movement. When, for example, looking out of the window at a moving train, the eyes can focus on a 'moving' train for a short moment (through smooth pursuit), until the train moves out of the field of vision. At this point, the optokinetic reflex kicks in, and moves the eye back to the point where it first saw the train (through a saccade).

The adjustment to close-range vision involves three processes to focus an image on the retina.

When a creature with binocular vision looks at an object, the eyes must rotate around a vertical axis so that the projection of the image is in the centre of the retina in both eyes. To look at a nearby object, the eyes rotate 'towards each other' (convergence), while for an object farther away they rotate 'away from each other' (divergence).

Lenses cannot refract light rays at their edges as well as they can closer to the center. The image produced by any lens is therefore somewhat blurry around the edges (spherical aberration). It can be minimized by screening out peripheral light rays and looking only at the better-focused center. In the eye, the pupil serves this purpose by constricting while the eye is focused on nearby objects. Small apertures also give an increase in depth of field, allowing a broader range of "in focus" vision. In this way the pupil has a dual purpose for near vision: to reduce spherical aberration and increase depth of field.[27]

Changing the curvature of the lens is carried out by the ciliary muscles surrounding the lens; this process is called "accommodation". Accommodation narrows the inner diameter of the ciliary body, which actually relaxes the fibers of the suspensory ligament attached to the periphery of the lens, and allows the lens to relax into a more convex, or globular, shape. A more convex lens refracts light more strongly and focuses divergent light rays from near objects onto the retina, allowing closer objects to be brought into better focus.[27][28]

The human eye contains enough complexity to warrant specialized attention and care beyond the duties of a general practitioner. These specialists, or eye care professionals, serve different functions in different countries. Eye care professionals can have overlap in their patient care privileges: both an ophthalmologist (M.D.) and an optometrist (O.D.) are professionals who diagnose eye disease and can prescribe lenses to correct vision; but, typically, only the ophthalmologist is licensed to perform surgery and complex procedures to correct disease.

Eye irritation has been defined as the magnitude of any stinging, scratching, burning, or other irritating sensation from the eye.[29] It is a common problem experienced by people of all ages. Related eye symptoms and signs of irritation are discomfort, dryness, excess tearing, itching, grating, foreign body sensation, ocular fatigue, pain, scratchiness, soreness, redness, swollen eyelids, and tiredness. These eye symptoms are reported with intensities from mild to severe. It has been suggested that these eye symptoms are related to different causal mechanisms, and symptoms are related to the particular ocular anatomy involved.[30]

Several suspected causal factors in our environment have been studied so far.[29] One hypothesis is that indoor air pollution may cause eye and airway irritation.[31][32] Eye irritation depends somewhat on destabilization of the outer-eye tear film, in which the formation of dry spots on the cornea results in ocular discomfort.[31][33][34] Occupational factors are also likely to influence the perception of eye irritation. Some of these are lighting (glare and poor contrast), gaze position, reduced blink rate, limited number of breaks from visual tasking, and a constant combination of accommodation, musculoskeletal burden, and impairment of the visual nervous system.[35][36] Another factor that may be related is work stress.[37][38] In addition, psychological factors have been found in multivariate analyses to be associated with an increase in eye irritation among VDU users.[39][40] Other risk factors, such as chemical toxins/irritants (e.g. amines, formaldehyde, acetaldehyde, acrolein, N-decane, VOCs, ozone, pesticides and preservatives, allergens, etc.) might cause eye irritation as well.

Certain volatile organic compounds that are both chemically reactive and airway irritants may cause eye irritation. Personal factors (e.g. use of contact lenses, eye make-up, and certain medications) may also affect destabilization of the tear film and possibly result in more eye symptoms.[30] Nevertheless, if airborne particles alone should destabilize the tear film and cause eye irritation, their content of surface-active compounds must be high.[30] An integrated physiological risk model with blink frequency, destabilization, and break-up of the eye tear film as inseparable phenomena may explain eye irritation among office workers in terms of occupational, climate, and eye-related physiological risk factors.[30]

There are two major measures of eye irritation. One is blink frequency, which can be observed in human behavior. The others are physiological reactions such as break-up time, tear flow, hyperemia (redness, swelling), tear fluid cytology, and epithelial damage (vital stains). Blink frequency is defined as the number of blinks per minute and it is associated with eye irritation. Blink frequencies vary between individuals, with mean frequencies ranging from under 2–3 up to 20–30 blinks/minute, and they depend on environmental factors including the use of contact lenses. Dehydration, mental activities, work conditions, room temperature, relative humidity, and illumination all influence blink frequency. Break-up time (BUT) is another major measure of eye irritation and tear film stability.[41] It is defined as the time interval (in seconds) between a blink and the rupture of the tear film. BUT is considered to reflect the stability of the tear film as well. In normal persons, the break-up time exceeds the interval between blinks, and, therefore, the tear film is maintained.[30] Studies have shown that blink frequency is correlated negatively with break-up time. This phenomenon indicates that perceived eye irritation is associated with an increase in blink frequency, since the cornea and conjunctiva both have sensitive nerve endings that belong to the first trigeminal branch.[42][43] Other evaluating methods, such as hyperemia and cytology, have increasingly been used to assess eye irritation.

There are other factors that are related to eye irritation as well. The three factors with the greatest influence are indoor air pollution, contact lenses and gender differences. Field studies have found that the prevalence of objective eye signs is often significantly altered among office workers in comparison with random samples of the general population.[44][45][46][47] These research results might indicate that indoor air pollution has played an important role in causing eye irritation. More and more people now wear contact lenses, and dry eyes appear to be the most common complaint among contact lens wearers.[48][49][50] Although both contact lens wearers and spectacle wearers experience similar eye irritation symptoms, dryness, redness, and grittiness have been reported far more frequently among contact lens wearers and with greater severity than among spectacle wearers.[50] Studies have shown that the incidence of dry eyes increases with age,[51][52] especially among women.[53] Tear film stability (e.g. break-up time) is significantly lower among women than among men. In addition, women have a higher blink frequency while reading.[54] Several factors may contribute to gender differences. One is the use of eye make-up. Another reason could be that the women in the reported studies have done more VDU work than the men, including lower grade work. A third often-quoted explanation is related to the age-dependent decrease of tear secretion, particularly among women after 40 years of age.[53][55][56]

In a study conducted by UCLA, the frequency of reported symptoms in industrial buildings was investigated.[57] The study's results were that eye irritation was the most frequent symptom in industrial building spaces, at 81%. Modern office work with use of office equipment has raised concerns about possible adverse health effects.[58] Since the 1970s, reports have linked mucosal, skin, and general symptoms to work with self-copying paper. Emission of various particulate and volatile substances has been suggested as specific causes. These symptoms have been related to Sick building syndrome (SBS), which involves symptoms such as irritation to the eyes, skin, and upper airways, headache and fatigue.[59]

Many of the symptoms described in SBS and multiple chemical sensitivity (MCS) resemble the symptoms known to be elicited by airborne irritant chemicals.[60] A repeated measurement design was employed in the study of acute symptoms of eye and respiratory tract irritation resulting from occupational exposure to sodium borate dusts.[61] The symptom assessment of the 79 exposed and 27 unexposed subjects comprised interviews before the shift began and then at regular hourly intervals for the next six hours of the shift, four days in a row.[61] Exposures were monitored concurrently with a personal real time aerosol monitor. Two different exposure profiles, a daily average and short term (15 minute) average, were used in the analysis. Exposure-response relations were evaluated by linking incidence rates for each symptom with categories of exposure.[61]

Acute incidence rates for nasal, eye, and throat irritation, and coughing and breathlessness were found to be associated with increased exposure levels of both exposure indices. Steeper exposure-response slopes were seen when short term exposure concentrations were used. Results from multivariate logistic regression analysis suggest that current smokers tended to be less sensitive to the exposure to airborne sodium borate dust.[61]
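
The kind of exposure-response analysis described above is typically fitted as a multivariate logistic regression of symptom incidence on exposure category plus covariates such as smoking. The sketch below is purely illustrative, using synthetic data rather than the study's, and assumes the statsmodels library is available.

import numpy as np
import statsmodels.api as sm

# Hypothetical sketch (synthetic data, not the study's): regress symptom
# incidence on an exposure category and a smoking indicator, as in a
# multivariate logistic exposure-response analysis.
rng = np.random.default_rng(0)
n = 400
exposure = rng.integers(0, 4, size=n)     # exposure category 0-3 (synthetic)
smoker = rng.integers(0, 2, size=n)       # current-smoker indicator (synthetic)
log_odds = -2.0 + 0.6 * exposure - 0.5 * smoker
symptom = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))

X = sm.add_constant(np.column_stack([exposure, smoker]))
fit = sm.Logit(symptom, X).fit(disp=False)
print(np.exp(fit.params))   # odds ratios: intercept, exposure, smoking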

Several actions can be taken to prevent eye irritation.

In addition, other measures are proper lid hygiene, avoidance of eye rubbing,[69] and proper use of personal products and medication. Eye make-up should be used with care.[70]

The paraphilic practice of oculolinctus, or eyeball-licking, may also cause irritations, infections, or damage to the eye.[71]

There are many diseases, disorders, and age-related changes that may affect the eyes and surrounding structures.

As the eye ages, certain changes occur that can be attributed solely to the aging process. Most of these anatomic and physiologic processes follow a gradual decline. With aging, the quality of vision worsens due to reasons independent of diseases of the aging eye. While there are many changes of significance in the non-diseased eye, the most functionally important changes seem to be a reduction in pupil size and the loss of accommodation or focusing capability (presbyopia). The area of the pupil governs the amount of light that can reach the retina. The extent to which the pupil dilates decreases with age, leading to a substantial decrease in light received at the retina. In comparison to younger people, it is as though older persons are constantly wearing medium-density sunglasses. Therefore, for any detailed visually guided tasks on which performance varies with illumination, older persons require extra lighting. Certain ocular diseases can come from sexually transmitted diseases such as herpes and genital warts. If contact between the eye and area of infection occurs, the STD can be transmitted to the eye.[72]

With aging, a prominent white ring develops in the periphery of the cornea called arcus senilis. Aging causes laxity, downward shift of eyelid tissues and atrophy of the orbital fat. These changes contribute to the etiology of several eyelid disorders such as ectropion, entropion, dermatochalasis, and ptosis. The vitreous gel undergoes liquefaction (posterior vitreous detachment or PVD) and its opacities visible as floaters gradually increase in number.

Various eye care professionals, including ophthalmologists, optometrists, and opticians, are involved in the treatment and management of ocular and vision disorders. A Snellen chart is one type of eye chart used to measure visual acuity. At the conclusion of a complete eye examination, the eye doctor might provide the patient with an eyeglass prescription for corrective lenses. Some disorders of the eyes for which corrective lenses are prescribed include myopia (near-sightedness) which affects about one-third[citation needed] of the human population, hyperopia (far-sightedness) which affects about one quarter of the population, astigmatism, and presbyopia (the loss of focusing range during aging).

Macular degeneration is especially prevalent in the U.S. and affects roughly 1.75 million Americans each year.[73] Having lower levels of lutein and zeaxanthin within the macula may be associated with an increase in the risk of age-related macular degeneration.[74][75] Lutein and zeaxanthin act as antioxidants that protect the retina and macula from oxidative damage from high-energy light waves.[76] As light waves enter the eye, they excite electrons that can cause harm to the cells in the eye; lutein and zeaxanthin bind to these electron free radicals and are reduced, rendering the electrons safe before they can cause the oxidative damage that may lead to macular degeneration or cataracts. There are many ways to ensure a diet rich in lutein and zeaxanthin, the best of which is to eat dark green vegetables including kale, spinach, broccoli and turnip greens.[77] Nutrition is an important aspect of the ability to achieve and maintain proper eye health. Lutein and zeaxanthin are two major carotenoids, found in the macula of the eye, that are being researched to identify their role in the pathogenesis of eye disorders such as age-related macular degeneration and cataracts.[78]

[Image gallery: right eye without labels (horizontal section); eye and orbit anatomy with motor nerves; orbita with eye and nerves visible (periocular fat removed); orbita with eye and periocular fat; two labeled views of the structures of the eye.]

Read the original here:
Human eye - Wikipedia

Read More...

Pros and Cons of Cloning – Buzzle

December 22nd, 2016 12:43 am

Cloning is the process of creating a copy of a biological entity. In genetics, it refers to the process of making an identical copy of the DNA of an organism. Are you interested in understanding the pros and cons of cloning?

When Dolly, the first cloned sheep, came into the news, cloning interested the masses. Not only researchers but even common people became interested in knowing how cloning is done and what pros and cons it has. Everyone became more curious about how cloning could benefit the common man. Most of us want to know the pros and cons of cloning, its advantages and its potential risks to mankind. Let us understand them.

Cloning finds applications in genetic fingerprinting, amplification of DNA and alteration of the genetic makeup of organisms. It can be used to bring about desired changes in the genetic makeup of individuals, thereby introducing positive traits in them, as well as to eliminate negative traits. Cloning can also be applied to plants to remove or alter defective genes, thereby making them resistant to diseases. Cloning may find applications in the development of human organs, thus making human life safer. Here we look at some of the potential advantages of cloning.

Organ Replacement

If vital organs of the human body can be cloned, they can serve as backups. Cloning body parts can serve as a lifesaver. When a body organ such as a kidney or heart fails to function, it may be possible to replace it with the cloned body organ.

Substitute for Natural Reproduction

Cloning in human beings can prove to be a solution to infertility. It can serve as an option for producing children. With cloning, it would be possible to produce certain desired traits in human beings. We might be able to produce children with certain qualities. Wouldn't that be close to creating a man-made being?!

Help in Genetic Research

Cloning technologies can prove helpful to researchers in genetics. They might be able to understand the composition of genes and the effects of genetic constituents on human traits, in a better manner. They will be able to alter genetic constituents in cloned human beings, thus simplifying their analysis of genes. Cloning may also help us combat a wide range of genetic diseases.

Obtain Specific Traits in Organisms

Cloning can make it possible for us to obtain customized organisms and harness them for the benefit of society. It can serve as the best means to replicate animals that can be used for research purposes. It can enable the genetic alteration of plants and animals. If positive changes can be brought about in living beings with the help of cloning, it will indeed be a boon to mankind.

Like every coin has two sides, cloning has its flip side too. Though cloning may work wonders in genetics, it has some potential disadvantages. Cloning, as you know, is copying or replicating biological traits in organisms. Thus it might reduce the diversity in nature. Imagine multiple living entities like one another! Another con of cloning is that it is not clear whether we will be able to bring all the potential uses of cloning into reality. Plus, there's a big question of whether the common man will be able to afford harnessing cloning technologies for his benefit. Here we look at the potential disadvantages of cloning.

Detrimental to Genetic Diversity

Cloning creates identical genes. It is a process of replicating a genetic constitution, thus hampering the diversity in genes. In lessening genetic diversity, we weaken our ability to adapt. Cloning is also detrimental to the beauty that lies in diversity.

Invitation to Malpractices

While cloning allows man to tamper with genes in human beings, it also makes deliberate reproduction of undesirable traits a possibility. Cloning of body organs may invite malpractices in society.

Will it Reach the Common Man?

In cloning human organs and using them for transplant, or in cloning human beings themselves, technical and economic barriers will have to be considered. Will cloned organs be cost-effective? Will cloning techniques really reach the common man?

Man, a Man-made Being?

Moreover, cloning will put human and animal rights at stake. Will cloning fit into our ethical and moral principles? It will make man just another man-made being. Won't it devalue mankind? Won't it demean the value of human life?

Cloning is equal to emulating God. Is that easy? Is it risk-free? Many are afraid it is not.

Manali Oak

Last Updated: August 8, 2016

Read more from the original source:
Pros and Cons of Cloning - Buzzle

Read More...

Muscle – Wikipedia

December 19th, 2016 6:43 am

Muscle is a soft tissue found in most animals. Muscle cells contain protein filaments of actin and myosin that slide past one another, producing a contraction that changes both the length and the shape of the cell. Muscles function to produce force and motion. They are primarily responsible for maintaining and changing posture, locomotion, as well as movement of internal organs, such as the contraction of the heart and the movement of food through the digestive system via peristalsis.

Muscle tissues are derived from the mesodermal layer of embryonic germ cells in a process known as myogenesis. There are three types of muscle: skeletal or striated, cardiac, and smooth. Muscle action can be classified as being either voluntary or involuntary. Cardiac and smooth muscles contract without conscious thought and are termed involuntary, whereas the skeletal muscles contract upon command.[1] Skeletal muscles in turn can be divided into fast and slow twitch fibers.

Muscles are predominantly powered by the oxidation of fats and carbohydrates, but anaerobic chemical reactions are also used, particularly by fast twitch fibers. These chemical reactions produce adenosine triphosphate (ATP) molecules that are used to power the movement of the myosin heads.[2]

The term muscle is derived from the Latin musculus meaning "little mouse" perhaps because of the shape of certain muscles or because contracting muscles look like mice moving under the skin.[3][4]

The anatomy of muscles includes gross anatomy, which comprises all the muscles of an organism, and microanatomy, which comprises the structures of a single muscle.

Muscle tissue is a soft tissue, and is one of the four fundamental types of tissue present in animals. There are three types of muscle tissue recognized in vertebrates:

Cardiac and skeletal muscles are "striated" in that they contain sarcomeres that are packed into highly regular arrangements of bundles; the myofibrils of smooth muscle cells are not arranged in sarcomeres and so are not striated. While the sarcomeres in skeletal muscles are arranged in regular, parallel bundles, cardiac muscle sarcomeres connect at branching, irregular angles (called intercalated discs). Striated muscle contracts and relaxes in short, intense bursts, whereas smooth muscle sustains longer or even near-permanent contractions.

Skeletal (voluntary) muscle is further divided into two broad types: slow twitch and fast twitch:

The density of mammalian skeletal muscle tissue is about 1.06 kg/liter.[8] This can be contrasted with the density of adipose tissue (fat), which is 0.9196 kg/liter.[9] This makes muscle tissue approximately 15% denser than fat tissue.
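
A one-line check of that comparison (values from the text):

# Quick check of the density comparison above (values from the text).
muscle_density = 1.06     # kg/liter
fat_density = 0.9196      # kg/liter
print(f"Muscle is about {100 * (muscle_density / fat_density - 1):.0f}% denser than fat")  # ~15%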

All muscles are derived from paraxial mesoderm. The paraxial mesoderm is divided along the embryo's length into somites, corresponding to the segmentation of the body (most obviously seen in the vertebral column).[10] Each somite has three divisions: sclerotome (which forms vertebrae), dermatome (which forms skin), and myotome (which forms muscle). The myotome is divided into two sections, the epimere and hypomere, which form epaxial and hypaxial muscles, respectively. The only epaxial muscles in humans are the erector spinae and small intervertebral muscles, which are innervated by the dorsal rami of the spinal nerves. All other muscles, including those of the limbs, are hypaxial and innervated by the ventral rami of the spinal nerves.[10]

During development, myoblasts (muscle progenitor cells) either remain in the somite to form muscles associated with the vertebral column or migrate out into the body to form all other muscles. Myoblast migration is preceded by the formation of connective tissue frameworks, usually formed from the somatic lateral plate mesoderm. Myoblasts follow chemical signals to the appropriate locations, where they fuse into elongate skeletal muscle cells.[10]

Skeletal muscles are sheathed by a tough layer of connective tissue called the epimysium. The epimysium anchors muscle tissue to tendons at each end, where the epimysium becomes thicker and collagenous. It also protects muscles from friction against other muscles and bones. Within the epimysium are multiple bundles called fascicles, each of which contains 10 to 100 or more muscle fibers collectively sheathed by a perimysium. Besides surrounding each fascicle, the perimysium is a pathway for nerves and the flow of blood within the muscle. The threadlike muscle fibers are the individual muscle cells (myocytes), and each cell is encased within its own endomysium of collagen fibers. Thus, the overall muscle consists of fibers (cells) that are bundled into fascicles, which are themselves grouped together to form muscles. At each level of bundling, a collagenous membrane surrounds the bundle, and these membranes support muscle function both by resisting passive stretching of the tissue and by distributing forces applied to the muscle.[11] Scattered throughout the muscles are muscle spindles that provide sensory feedback information to the central nervous system. (This grouping structure is analogous to the organization of nerves which uses epineurium, perineurium, and endoneurium).

This same bundles-within-bundles structure is replicated within the muscle cells. Within the cells of the muscle are myofibrils, which themselves are bundles of protein filaments. The term "myofibril" should not be confused with "myofiber", which is simply another name for a muscle cell. Myofibrils are complex strands of several kinds of protein filaments organized together into repeating units called sarcomeres. The striated appearance of both skeletal and cardiac muscle results from the regular pattern of sarcomeres within their cells. Although both of these types of muscle contain sarcomeres, the fibers in cardiac muscle are typically branched to form a network. Cardiac muscle fibers are interconnected by intercalated discs,[12] giving that tissue the appearance of a syncytium.

The filaments in a sarcomere are composed of actin and myosin.

The gross anatomy of a muscle is the most important indicator of its role in the body. There is an important distinction seen between pennate muscles and other muscles. In most muscles, all the fibers are oriented in the same direction, running in a line from the origin to the insertion. However, in pennate muscles, the individual fibers are oriented at an angle relative to the line of action, attaching to the origin and insertion tendons at each end. Because the contracting fibers are pulling at an angle to the overall action of the muscle, the change in length is smaller, but this same orientation allows for more fibers (thus more force) in a muscle of a given size. Pennate muscles are usually found where their length change is less important than maximum force, such as the rectus femoris.

Skeletal muscle is arranged in discrete muscles, an example of which is the biceps brachii (biceps). The tough, fibrous epimysium of skeletal muscle is both connected to and continuous with the tendons. In turn, the tendons connect to the periosteum layer surrounding the bones, permitting the transfer of force from the muscles to the skeleton. Together, these fibrous layers, along with tendons and ligaments, constitute the deep fascia of the body.

The muscular system consists of all the muscles present in a single body. There are approximately 650 skeletal muscles in the human body,[13] but an exact number is difficult to define. The difficulty lies partly in the fact that different sources group the muscles differently and partly in that some muscles, such as palmaris longus, are not always present.

A muscular slip is a narrow length of muscle that acts to augment a larger muscle or muscles.

The muscular system is one component of the musculoskeletal system, which includes not only the muscles but also the bones, joints, tendons, and other structures that permit movement.

The three types of muscle (skeletal, cardiac and smooth) have significant differences. However, all three use the movement of actin against myosin to create contraction. In skeletal muscle, contraction is stimulated by electrical impulses transmitted by the nerves, the motoneurons (motor nerves) in particular. Cardiac and smooth muscle contractions are stimulated by internal pacemaker cells which regularly contract, and propagate contractions to other muscle cells they are in contact with. All skeletal muscle and many smooth muscle contractions are facilitated by the neurotransmitter acetylcholine.

The action a muscle generates is determined by the origin and insertion locations. The cross-sectional area of a muscle (rather than volume or length) determines the amount of force it can generate by defining the number of "sarcomeres" which can operate in parallel. Each skeletal muscle contains long units called myofibrils, and each myofibril is a chain of sarcomeres. Since contraction occurs at the same time for all connected sarcomeres in a muscle cell, these chains of sarcomeres shorten together, thus shortening the muscle fiber, resulting in overall length change.[14] The amount of force applied to the external environment is determined by lever mechanics, specifically the ratio of in-lever to out-lever. For example, moving the insertion point of the biceps more distally on the radius (farther from the joint of rotation) would increase the force generated during flexion (and, as a result, the maximum weight lifted in this movement), but decrease the maximum speed of flexion. Moving the insertion point proximally (closer to the joint of rotation) would result in decreased force but increased velocity. This can be most easily seen by comparing the limb of a mole to a horse: in the former, the insertion point is positioned to maximize force (for digging), while in the latter, the insertion point is positioned to maximize speed (for running).
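
The lever trade-off described above can be written as output force = muscle force × (in-lever / out-lever), with output speed scaling by the inverse ratio. The numbers in the sketch below are made up for illustration, not measured values.

# Illustrative sketch of the lever mechanics described above, with made-up
# numbers: moving the insertion farther from the joint increases output force
# but reduces output speed by the same ratio.
def output_force(muscle_force, in_lever, out_lever):
    return muscle_force * in_lever / out_lever

def output_speed(muscle_speed, in_lever, out_lever):
    return muscle_speed * out_lever / in_lever

muscle_force_n = 800       # hypothetical muscle force
shortening_mm_s = 40       # hypothetical fibre shortening speed
out_lever_mm = 350         # joint-to-hand distance (hypothetical)

for in_lever_mm in (30, 50):   # joint-to-insertion distance (hypothetical)
    f = output_force(muscle_force_n, in_lever_mm, out_lever_mm)
    v = output_speed(shortening_mm_s, in_lever_mm, out_lever_mm)
    print(f"insertion at {in_lever_mm} mm: {f:.0f} N at the hand, {v:.0f} mm/s at the hand")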

Muscular activity accounts for much of the body's energy consumption. All muscle cells produce adenosine triphosphate (ATP) molecules which are used to power the movement of the myosin heads. Muscles have a short-term store of energy in the form of creatine phosphate which is generated from ATP and can regenerate ATP when needed with creatine kinase. Muscles also keep a storage form of glucose in the form of glycogen. Glycogen can be rapidly converted to glucose when energy is required for sustained, powerful contractions. Within the voluntary skeletal muscles, the glucose molecule can be metabolized anaerobically in a process called glycolysis which produces two ATP and two lactic acid molecules in the process (note that in aerobic conditions, lactate is not formed; instead pyruvate is formed and transmitted through the citric acid cycle). Muscle cells also contain globules of fat, which are used for energy during aerobic exercise. The aerobic energy systems take longer to produce the ATP and reach peak efficiency, and require many more biochemical steps, but produce significantly more ATP than anaerobic glycolysis. Cardiac muscle, on the other hand, can readily consume any of the three macronutrients (protein, glucose and fat) aerobically without a 'warm up' period and always extracts the maximum ATP yield from any molecule involved. The heart, liver and red blood cells will also consume lactic acid produced and excreted by skeletal muscles during exercise.

At rest, skeletal muscle consumes 54.4 kJ/kg (13.0 kcal/kg) per day. This is larger than adipose tissue (fat) at 18.8 kJ/kg (4.5 kcal/kg), and bone at 9.6 kJ/kg (2.3 kcal/kg).[15]
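
For a sense of scale, the per-kilogram rates above can be multiplied by hypothetical tissue masses; the 30 kg / 15 kg / 10 kg figures below are assumptions, not from the text.

# Sketch: daily resting energy use implied by the per-kilogram rates above,
# for a hypothetical person with 30 kg of muscle, 15 kg of fat and 10 kg of
# bone (assumed masses, not from the text).
rates_kcal_per_kg_day = {"muscle": 13.0, "fat": 4.5, "bone": 2.3}
masses_kg = {"muscle": 30, "fat": 15, "bone": 10}

for tissue, rate in rates_kcal_per_kg_day.items():
    print(f"{tissue}: {rate * masses_kg[tissue]:.0f} kcal/day")
# muscle ~390, fat ~68, bone ~23 kcal/day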

The efferent leg of the peripheral nervous system is responsible for conveying commands to the muscles and glands, and is ultimately responsible for voluntary movement. Nerves move muscles in response to voluntary and autonomic (involuntary) signals from the brain. Deep muscles, superficial muscles, muscles of the face and internal muscles all correspond with dedicated regions in the primary motor cortex of the brain, directly anterior to the central sulcus that divides the frontal and parietal lobes.

In addition, muscles react to reflexive nerve stimuli that do not always send signals all the way to the brain. In this case, the signal from the afferent fiber does not reach the brain, but produces the reflexive movement by direct connections with the efferent nerves in the spine. However, the majority of muscle activity is volitional, and the result of complex interactions between various areas of the brain.

Nerves that control skeletal muscles in mammals correspond with neuron groups along the primary motor cortex of the brain's cerebral cortex. Commands are routed through the basal ganglia and are modified by input from the cerebellum before being relayed through the pyramidal tract to the spinal cord and from there to the motor end plate at the muscles. Along the way, feedback, such as that of the extrapyramidal system, contributes signals to influence muscle tone and response.

Deeper muscles such as those involved in posture often are controlled from nuclei in the brain stem and basal ganglia.

The afferent leg of the peripheral nervous system is responsible for conveying sensory information to the brain, primarily from the sense organs like the skin. In the muscles, the muscle spindles convey information about the degree of muscle length and stretch to the central nervous system to assist in maintaining posture and joint position. The sense of where our bodies are in space is called proprioception, the perception of body awareness. More easily demonstrated than explained, proprioception is the "unconscious" awareness of where the various regions of the body are located at any one time. This can be demonstrated by anyone closing their eyes and waving their hand around. Assuming proper proprioceptive function, at no time will the person lose awareness of where the hand actually is, even though it is not being detected by any of the other senses.

Several areas in the brain coordinate movement and position with the feedback information gained from proprioception. The cerebellum and red nucleus in particular continuously sample position against movement and make minor corrections to assure smooth motion.

The efficiency of human muscle has been measured (in the context of rowing and cycling) at 18% to 26%. The efficiency is defined as the ratio of mechanical work output to the total metabolic cost, as can be calculated from oxygen consumption. This low efficiency is the result of about 40% efficiency of generating ATP from food energy, losses in converting energy from ATP into mechanical work inside the muscle, and mechanical losses inside the body. The latter two losses are dependent on the type of exercise and the type of muscle fibers being used (fast-twitch or slow-twitch). For an overall efficiency of 20 percent, one watt of mechanical power is equivalent to 4.3 kcal per hour. For example, one manufacturer of rowing equipment calibrates its rowing ergometer to count burned calories as equal to four times the actual mechanical work, plus 300 kcal per hour;[16] this amounts to about 20 percent efficiency at 250 watts of mechanical output. The mechanical energy output of a cyclic contraction can depend upon many factors, including activation timing, muscle strain trajectory, and rates of force rise and decay. These can be synthesized experimentally using work loop analysis.
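
The ergometer example can be reproduced with a few lines of arithmetic; this sketch simply restates the calculation implied by the text (4184 J per kcal assumed).

# Sketch of the efficiency arithmetic above: an ergometer that counts burned
# calories as 4x the mechanical work plus 300 kcal/h implies roughly 20%
# efficiency at 250 W of mechanical output (4184 J per kcal assumed).
watts = 250
mech_kcal_per_h = watts * 3600 / 4184            # ~215 kcal/h of mechanical work
burned_kcal_per_h = 4 * mech_kcal_per_h + 300    # the ergometer's calibration
print(f"Implied efficiency: {100 * mech_kcal_per_h / burned_kcal_per_h:.1f}%")  # ~18.5%

# At an assumed 20% efficiency, 1 W of mechanical power costs 5 W of
# metabolic power, i.e. 5 * 3600 / 4184 = about 4.3 kcal per hour.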

A display of "strength" (e.g. lifting a weight) is a result of three factors that overlap: physiological strength (muscle size, cross sectional area, available crossbridging, responses to training), neurological strength (how strong or weak is the signal that tells the muscle to contract), and mechanical strength (muscle's force angle on the lever, moment arm length, joint capabilities).

Vertebrate muscle typically produces approximately 25–33 N (5.6–7.4 lbf) of force per square centimeter of muscle cross-sectional area when isometric and at optimal length.[17] Some invertebrate muscles, such as in crab claws, have much longer sarcomeres than vertebrates, resulting in many more sites for actin and myosin to bind and thus much greater force per square centimeter at the cost of much slower speed. The force generated by a contraction can be measured non-invasively using either mechanomyography or phonomyography, be measured in vivo using tendon strain (if a prominent tendon is present), or be measured directly using more invasive methods.
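
As a worked example of the specific-tension figure above, force scales linearly with cross-sectional area; the 10 cm^2 cross-section below is a hypothetical value chosen for illustration.

# Worked example of the specific-tension range quoted above, for a
# hypothetical muscle with a 10 cm^2 physiological cross-sectional area.
cross_section_cm2 = 10
for specific_tension_n_per_cm2 in (25, 33):
    force_n = specific_tension_n_per_cm2 * cross_section_cm2
    print(f"{specific_tension_n_per_cm2} N/cm^2 -> {force_n} N")
# 250-330 N of isometric force for this hypothetical muscle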

The strength of any given muscle, in terms of force exerted on the skeleton, depends upon length, shortening speed, cross sectional area, pennation, sarcomere length, myosin isoforms, and neural activation of motor units. Significant reductions in muscle strength can indicate underlying pathology.

Since three factors affect muscular strength simultaneously and muscles never work individually, it is misleading to compare strength in individual muscles, and state that one is the "strongest". But below are several muscles whose strength is noteworthy for different reasons.

Humans are genetically predisposed to have a larger percentage of one type of muscle fiber over another. An individual born with a greater percentage of Type I muscle fibers would theoretically be more suited to endurance events, such as triathlons, distance running, and long cycling events, whereas a human born with a greater percentage of Type II muscle fibers would be more likely to excel at sprinting events such as the 100-meter dash.[citation needed]

Exercise is often recommended as a means of improving motor skills, fitness, muscle and bone strength, and joint function. Exercise has several effects upon muscles, connective tissue, bone, and the nerves that stimulate the muscles. One such effect is muscle hypertrophy, an increase in size. This is used in bodybuilding.

Various exercises require a predominance of certain muscle fiber utilization over another. Aerobic exercise involves long, low levels of exertion in which the muscles are used at well below their maximal contraction strength for long periods of time (the most classic example being the marathon). Aerobic events, which rely primarily on the aerobic (with oxygen) system, use a higher percentage of Type I (or slow-twitch) muscle fibers, consume a mixture of fat, protein and carbohydrates for energy, consume large amounts of oxygen and produce little lactic acid. Anaerobic exercise involves short bursts of higher intensity contractions at a much greater percentage of their maximum contraction strength. Examples of anaerobic exercise include sprinting and weight lifting. The anaerobic energy delivery system uses predominantly Type II or fast-twitch muscle fibers, relies mainly on ATP or glucose for fuel, consumes relatively little oxygen, protein and fat, produces large amounts of lactic acid and can not be sustained for as long a period as aerobic exercise. Many exercises are partially aerobic and partially anaerobic; for example, soccer and rock climbing involve a combination of both.

The presence of lactic acid has an inhibitory effect on ATP generation within the muscle; though not producing fatigue, it can inhibit or even stop performance if the intracellular concentration becomes too high. However, long-term training causes neovascularization within the muscle, increasing the ability to move waste products out of the muscles and maintain contraction. Once moved out of muscles with high concentrations within the sarcomere, lactic acid can be used by other muscles or body tissues as a source of energy, or transported to the liver where it is converted back to pyruvate. In addition to increasing the level of lactic acid, strenuous exercise causes the loss of potassium ions in muscle and an increase in potassium ion concentrations close to the muscle fibres, in the interstitium. Acidification by lactic acid may allow recovery of force, so that acidosis may protect against fatigue rather than being a cause of fatigue.[19]

Delayed onset muscle soreness is pain or discomfort that may be felt one to three days after exercising and generally subsides two to three days later. Once thought to be caused by lactic acid build-up, a more recent theory is that it is caused by tiny tears in the muscle fibers caused by eccentric contraction, or unaccustomed training levels. Since lactic acid disperses fairly rapidly, it could not explain pain experienced days after exercise.[20]

Independent of strength and performance measures, muscles can be induced to grow larger by a number of factors, including hormone signaling, developmental factors, strength training, and disease. Contrary to popular belief, the number of muscle fibres cannot be increased through exercise. Instead, muscles grow larger through a combination of muscle cell growth as new protein filaments are added along with additional mass provided by undifferentiated satellite cells alongside the existing muscle cells.[13]

Biological factors such as age and hormone levels can affect muscle hypertrophy. During puberty in males, hypertrophy occurs at an accelerated rate as the levels of growth-stimulating hormones produced by the body increase. Natural hypertrophy normally stops at full growth in the late teens. As testosterone is one of the body's major growth hormones, on average, men find hypertrophy much easier to achieve than women. Taking additional testosterone or other anabolic steroids will increase muscular hypertrophy.

Muscular, spinal and neural factors all affect muscle building. Sometimes a person may notice an increase in strength in a given muscle even though only its opposite has been subject to exercise, such as when a bodybuilder finds her left biceps stronger after completing a regimen focusing only on the right biceps. This phenomenon is called cross education.[citation needed]

Inactivity and starvation in mammals lead to atrophy of skeletal muscle, a decrease in muscle mass that may be accompanied by a smaller number and size of the muscle cells as well as lower protein content.[21] Muscle atrophy may also result from the natural aging process or from disease.

In humans, prolonged periods of immobilization, as in the cases of bed rest or astronauts flying in space, are known to result in muscle weakening and atrophy. Atrophy is of particular interest to the manned spaceflight community, because the weightlessness experienced in spaceflight results in a loss of as much as 30% of mass in some muscles.[22][23] Such consequences are also noted in small hibernating mammals like golden-mantled ground squirrels and brown bats.[24]

During aging, there is a gradual decrease in the ability to maintain skeletal muscle function and mass, known as sarcopenia. The exact cause of sarcopenia is unknown, but it may be due to a combination of the gradual failure of the "satellite cells" that help to regenerate skeletal muscle fibers and a decrease in sensitivity to, or the availability of, critical secreted growth factors that are necessary to maintain muscle mass and satellite cell survival. Sarcopenia is a normal aspect of aging and is not actually a disease state, yet it can be linked to many injuries in the elderly population as well as decreased quality of life.[25]

There are also many diseases and conditions that cause muscle atrophy. Examples include cancer and AIDS, which induce a body wasting syndrome called cachexia. Other syndromes or conditions that can induce skeletal muscle atrophy are congestive heart disease and some diseases of the liver.

Neuromuscular diseases are those that affect the muscles and/or their nervous control. In general, problems with nervous control can cause spasticity or paralysis, depending on the location and nature of the problem. A large proportion of neurological disorders, ranging from cerebrovascular accident (stroke) and Parkinson's disease to Creutzfeldt-Jakob disease, can lead to problems with movement or motor coordination.

Symptoms of muscle diseases may include weakness, spasticity, myoclonus and myalgia. Diagnostic procedures that may reveal muscular disorders include testing creatine kinase levels in the blood and electromyography (measuring electrical activity in muscles). In some cases, muscle biopsy may be done to identify a myopathy, as well as genetic testing to identify DNA abnormalities associated with specific myopathies and dystrophies.

A non-invasive elastography technique that measures muscle noise is undergoing experimentation to provide a way of monitoring neuromuscular disease. The sound produced by a muscle comes from the shortening of actomyosin filaments along the axis of the muscle. During contraction, the muscle shortens along its longitudinal axis and expands across the transverse axis, producing vibrations at the surface.[26]

The evolutionary origin of muscle cells in metazoans is a highly debated topic. One line of thought holds that muscle cells evolved once, and thus all animals with muscle cells have a single common ancestor. The other holds that muscle cells evolved more than once, and that any morphological or structural similarities are due to convergent evolution and to genes that predate the evolution of muscle and even of the mesoderm, the germ layer from which many scientists believe true muscle cells derive.

Schmid and Seipel argue that the origin of muscle cells is a monophyletic trait that occurred concurrently with the development of the digestive and nervous systems of all animals, and that this origin can be traced to a single metazoan ancestor in which muscle cells are present. They argue that the molecular and morphological similarities between the muscle cells in cnidaria and ctenophora are close enough to those of bilaterians that there would be one ancestor in metazoans from which muscle cells derive. In this case, Schmid and Seipel argue that the last common ancestor of bilateria, ctenophora, and cnidaria was a triploblast, or an organism with three germ layers, and that diploblasty, meaning an organism with two germ layers, evolved secondarily, based on the observed lack of mesoderm or muscle in most cnidarians and ctenophores. By comparing the morphology of cnidarians and ctenophores to bilaterians, Schmid and Seipel concluded that there were myoblast-like structures in the tentacles and gut of some species of cnidarians and in the tentacles of ctenophores. Since this is a structure unique to muscle cells, these scientists determined, based on the data collected by their peers, that this is a marker for striated muscles similar to that observed in bilaterians. The authors also remark that the muscle cells found in cnidarians and ctenophores are often contested because these muscle cells originate from the ectoderm rather than the mesoderm or mesendoderm. The origin of true muscle cells is argued by others to be the endodermal portion of the mesoderm and the endoderm. However, Schmid and Seipel counter this skepticism about whether the muscle cells found in ctenophores and cnidarians are true muscle cells by noting that cnidarians develop through both a medusa stage and a polyp stage. They observe that in the hydrozoan medusa stage there is a layer of cells that separate from the distal side of the ectoderm to form the striated muscle cells in a way that seems similar to the mesoderm, and they call this third separated layer of cells the ectocodon. They also argue that not all muscle cells are derived from the mesendoderm in bilaterians, key examples being that both the eye muscles of vertebrates and the muscles of spiralians derive from the ectodermal mesoderm rather than the endodermal mesoderm. Furthermore, Schmid and Seipel argue that since myogenesis does occur in cnidarians with the help of molecular regulatory elements found in the specification of muscle cells in bilaterians, there is evidence for a single origin of striated muscle.[27]

In contrast to this argument for a single origin of muscle cells, Steinmetz et al. argue that molecular markers such as the myosin II protein used to determine this single origin of striated muscle actually predate the formation of muscle cells. These authors use the example of the contractile elements present in the porifera, or sponges, which truly lack striated muscle yet contain this protein. Furthermore, Steinmetz et al. present evidence for a polyphyletic origin of striated muscle cell development through their analysis of morphological and molecular markers that are present in bilaterians and absent in cnidarians and ctenophores. Steinmetz et al. showed that the traditional morphological and regulatory markers, such as actin, the ability to couple phosphorylation of myosin side chains to elevated concentrations of calcium, and other MyHC elements, are present in all metazoans, not just the organisms that have been shown to have muscle cells. Thus, according to Steinmetz et al., the use of any of these structural or regulatory elements to determine whether the muscle cells of cnidarians and ctenophores are similar enough to the muscle cells of bilaterians to confirm a single lineage is questionable. Furthermore, Steinmetz et al. explain that the orthologues of the MyHC genes that have been used to hypothesize the origin of striated muscle arose through a gene duplication event that predates the first true muscle cells (meaning striated muscle), and they show that the MyHC genes are present in sponges that have contractile elements but no true muscle cells. Furthermore, Steinmetz et al. showed that this duplicated set of genes, serving both the formation of striated muscle and cell regulation and movement, had already been separated into striated MyHC and non-muscle MyHC. This separation of the duplicated set of genes is shown through the localization of the striated MyHC to the contractile vacuole in sponges, while the non-muscle MyHC was more diffusely expressed during developmental cell shape and change. Steinmetz et al. found a similar pattern of localization in cnidarians, except that the cnidarian N. vectensis has this striated muscle marker present in the smooth muscle of the digestive tract. Thus, Steinmetz et al. argue that the plesiomorphic trait of the separated orthologues of MyHC cannot be used to determine the monophyly of muscle, and additionally argue that the presence of a striated muscle marker in the smooth muscle of this cnidarian shows a fundamentally different mechanism of muscle cell development and structure in cnidarians.[28]

Steinmetz et al. continue to argue for multiple origins of striated muscle in the metazoans by explaining that a key set of genes used to form the troponin complex for muscle regulation and formation in bilaterians is missing from the cnidarians and ctenophores, and that of 47 structural and regulatory proteins observed, Steinmetz et al. were not able to find even one unique striated muscle cell protein that was expressed in both cnidarians and bilaterians. Furthermore, the Z-disc seemed to have evolved differently even within bilaterians, and there is a great deal of diversity of proteins developed even within this clade, showing a large degree of radiation for muscle cells. Through this divergence of the Z-disc, Steinmetz et al. argue that there are only four common protein components that were present in all bilaterian muscle ancestors, and that of these four necessary Z-disc components only an actin protein, which they have already argued is an uninformative marker through its plesiomorphic state, is present in cnidarians. Through further molecular marker testing, Steinmetz et al. observe that non-bilaterians lack many regulatory and structural components necessary for bilaterian muscle formation, and do not find any set of proteins shared by bilaterians, cnidarians and ctenophores that is not also present in earlier, more primitive animals such as the sponges and amoebozoans. Through this analysis the authors conclude that, due to the lack of elements that bilaterian muscles depend on for structure and usage, non-bilaterian muscles must be of a different origin, with a different set of regulatory and structural proteins.[28]

In another take on the argument, Andrikou and Arnone use the newly available data on gene regulatory networks to look at how the hierarchy of genes, morphogens and other mechanisms of tissue specification diverge or are conserved among early deuterostomes and protostomes. By understanding not only which genes are present in all bilaterians but also the time and place of deployment of these genes, Andrikou and Arnone develop a deeper understanding of the evolution of myogenesis.[29]

In their paper, Andrikou and Arnone argue that to truly understand the evolution of muscle cells the function of transcriptional regulators must be understood in the context of other external and internal interactions. Through their analysis, Andrikou and Arnone found that there were conserved orthologues of the gene regulatory network in both invertebrate bilaterians and in cnidarians. They argue that having this common, general regulatory circuit allowed for a high degree of divergence from a single well-functioning network. Andrikou and Arnone found that the orthologues of genes found in vertebrates had been changed through different types of structural mutations in the invertebrate deuterostomes and protostomes, and they argue that these structural changes in the genes allowed for a large divergence of muscle function and muscle formation in these species. Andrikou and Arnone were able to recognize not only differences due to mutation in the genes found in vertebrates and invertebrates but also the integration of species-specific genes that could also cause divergence from the original gene regulatory network function. Thus, although a common muscle patterning system has been identified, they argue that this could be due to a more ancestral gene regulatory network being co-opted several times across lineages, with additional genes and mutations causing very divergent development of muscles. Thus it seems that the myogenic patterning framework may be an ancestral trait. However, Andrikou and Arnone explain that the basic muscle patterning structure must also be considered in combination with the cis-regulatory elements present at different times during development. In contrast with the high level of conservation of the gene family structure, Andrikou and Arnone found that the cis-regulatory elements were not well conserved in either time or place in the network, which could indicate a large degree of divergence in the formation of muscle cells. Through this analysis, it seems that the myogenic GRN is an ancestral GRN, with actual changes in myogenic function and structure possibly being linked to later co-options of genes at different times and places.[29]

Evolutionarily, specialized forms of skeletal and cardiac muscles predated the divergence of the vertebrate/arthropod evolutionary line.[30] This indicates that these types of muscle developed in a common ancestor sometime before 700 million years ago (mya). Vertebrate smooth muscle was found to have evolved independently from the skeletal and cardiac muscle types.

Read the rest here:
Muscle - Wikipedia

Read More...

Rheumatoid Arthritis – National Library of Medicine – PubMed …

December 19th, 2016 6:42 am

Evidence reviews Antimalarials for treating rheumatoid arthritis

Antimalarials have been used for the treatment of rheumatoid arthritis (RA) for several decades. This review found four trials, with 300 patients receiving hydroxychloroquine and 292 receiving placebo. A benefit was observed in the patients taking hydroxychloroquine compared to placebo. There was no difference between the two groups in terms of those who had to withdraw from trials due to side effects.

The purpose was to examine the effectiveness of patient education interventions on health status (pain, functional disability, psychological wellbeing and disease activity) in patients with rheumatoid arthritis (RA). Patient education had a small beneficial effect at first follow-up for disability, joint counts, patient global assessment, psychological status, and depression. At final follow-up (3-14 months) no evidence of significant benefits was found.

In rheumatoid arthritis (RA), the joints are swollen, stiff and painful. Non-steroidal anti-inflammatory drugs (NSAIDs) such as ibuprofen are often recommended to ease the pain and swelling in the joints. Paracetamol (also known as acetaminophen) is another type of medication to relieve pain in RA.

See all (641)

Antimalarials have been used for the treatment of rheumatoid arthritis (RA) for several decades. This review found four trials, with 300 patients receiving hydroxychloroquine and 292 receiving placebo. A benefit was observed in the patients taking hydroxychloroquine compared to placebo. There was no difference between the two groups in terms of those who had to withdraw from trials due to side effects.

The purpose was to examine the effectiveness of patient education interventions on health status (pain, functional disability, psychological wellbeing and disease activity) in patients with rheumatoid arthritis (RA). Patient education had a small beneficial effect at first follow-up for disability, joint counts, patient global assessment, psychological status, and depression. At final follow-up (3-14 months) no evidence of significant benefits was found.

In rheumatoid arthritis (RA), the joints are swollen, stiff and painful. Non-steroidal anti-inflammatory drugs (NSAIDs) such as ibuprofen are often recommended to ease the pain and swelling in the joints. Paracetamol (also known as acetaminophen) is another type of medication to relieve pain in RA.

See all (126)

Read the original:
Rheumatoid Arthritis - National Library of Medicine - PubMed ...

Read More...

Welcome to the Natural Medicines Research Collaboration

December 18th, 2016 2:43 am

Natural Standard has provided just what the doctor ordered - an evidence-based review to tell us what is known, and what is not. Given the clear imperative to talk with our patients about CAM, here's the evidence summary you need.

Harley Goldberg, DO Medical Director, CAM Kaiser Permanente

Natural Standard provides a critical and transparent review of the evidence regarding herbs and supplements. As such, it is an extremely valuable resource for both clinicians and investigators.

David Eisenberg, MD Director, Osher Institute Division for Research and Education in Complementary & Integrative Medicine Harvard Medical School

The best and most authoritative web site available on herbal medicines.

The World Health Organization (WHO)

At last! An authoritative reference on the many nuances of Alternative Medicine. How to separate the good from the bad and the unknown. An extraordinary piece of work that will become the standard text in this area.

Vincent T. DeVita Jr., MD The Amy and Joseph Perella Professor of Medicine Yale School of Medicine Former Director, National Cancer Institute

Thank you for a great interview; and thanks so much for access to the Natural Standard website. I'm in research heaven!

Angela Hynes Author, Freelance Writer & Editor specializing in health and fitness

Natural Standard is an AAFP recommended resource for development of EB CME content.

American Academy of Family Physicians

"Natural Standard is like having access to the best library in the world so you don't have to look things up in ten locations!"

Jonny Bowden, PhD, CNS Author, The 150 Healthiest Foods on Earth

View original post here:
Welcome to the Natural Medicines Research Collaboration

Read More...

Psoriatic arthritis – Wikipedia

December 14th, 2016 8:42 am

Psoriatic arthritis (also arthritis psoriatica, arthropathic psoriasis or psoriatic arthropathy) is a type of inflammatory arthritis[1][2] that will develop in between 6 and 42% of people who have the chronic skin condition psoriasis.[3] Psoriatic arthritis is classified as a seronegative spondyloarthropathy and therefore occurs more commonly in patients with tissue type HLA-B27.

Pain, swelling, or stiffness in one or more joints is commonly present in psoriatic arthritis.[4] Psoriatic arthritis is inflammatory, and affected joints are generally red or warm to the touch.[4] Asymmetrical oligoarthritis, defined as inflammation affecting one to four joints during the first six months of disease, is present in 70% of cases. However, in 15% of cases the arthritis is symmetrical. The joints of the hand that are involved in psoriasis are the proximal interphalangeal (PIP), the distal interphalangeal (DIP), the metacarpophalangeal (MCP), and the wrist. Involvement of the distal interphalangeal joints (DIP) is a characteristic feature and is present in 15% of cases.

In addition to affecting the joints of the hands and wrists, psoriatic arthritis may affect the fingers, nails, and skin. Sausage-like swelling in the fingers or toes, known as dactylitis, may occur.[4] Psoriasis can also cause changes to the nails, such as pitting or separation from the nail bed,[4] onycholysis, hyperkeratosis under the nails, and horizontal ridging.[5] Psoriasis classically presents with scaly skin lesions, which are most commonly seen over extensor surfaces such as the scalp, natal cleft and umbilicus.

In psoriatic arthritis, pain can occur in the area of the sacrum (the lower back, above the tailbone),[4] as a result of sacroiliitis or spondylitis, which is present in 40% of cases. Pain can occur in and around the feet and ankles, especially enthesitis in the Achilles tendon (inflammation of the Achilles tendon where it inserts into the bone) or plantar fasciitis in the sole of the foot.[4]

Along with the above noted pain and inflammation, there is extreme exhaustion that does not go away with adequate rest. The exhaustion may last for days or weeks without abatement. Psoriatic arthritis may remain mild, or may progress to more destructive joint disease. Periods of active disease, or flares, will typically alternate with periods of remission. In severe forms, psoriatic arthritis may progress to arthritis mutilans[6] which on X-ray gives a "pencil-in-cup" appearance.

Because prolonged inflammation can lead to joint damage, early diagnosis and treatment to slow or prevent joint damage is recommended.[7]

The exact causes are not yet known, but a number of genetic associations have been identified in a genome-wide association study of psoriasis and psoriatic arthritis including HLA-B27.[8][9]

There is no definitive test to diagnose psoriatic arthritis. Symptoms of psoriatic arthritis may closely resemble other diseases, including rheumatoid arthritis. A rheumatologist (a doctor specializing in diseases affecting the joints) may use physical examinations, health history, blood tests and x-rays to accurately diagnose psoriatic arthritis.

Factors that contribute to a diagnosis of psoriatic arthritis include:

Other symptoms that are more typical of psoriatic arthritis than other forms of arthritis include inflammation in the Achilles tendon (at the back of the heel) or the Plantar fascia (bottom of the feet), and dactylitis (sausage-like swelling of the fingers or toes).[10]

Magnetic resonance image of the index finger in psoriatic arthritis (mutilans form). Shown is a T2 weighted fat suppressed sagittal image. Focal increased signal (probable erosion) is seen at the base of the middle phalanx (long thin arrow). There is synovitis at the proximal interphalangeal joint (long thick arrow) plus increased signal in the overlying soft tissues indicating oedema (short thick arrow). There is also diffuse bone oedema (short thin arrows) involving the head of the proximal phalanx and extending distally down the shaft.

Magnetic resonance images of the fingers in psoriatic arthritis. Shown are T1 weighted axial (a) pre-contrast and (b) post-contrast images exhibiting dactylitis due to flexor tenosynovitis at the second finger with enhancement and thickening of the tendon sheath (large arrow). Synovitis is seen in the fourth proximal interphalangeal joint (small arrow).

(a) T1-weighted and (b) short tau inversion recovery (STIR) magnetic resonance images of lumbar and lower thoracic spine in psoriatic arthritis. Signs of active inflammation are seen at several levels (arrows). In particular, anterior spondylitis is seen at level L1/L2 and an inflammatory Andersson lesion at the upper vertebral endplate of L3.

Magnetic resonance images of sacroiliac joints. Shown are T1-weighted semi-coronal magnetic resonance images through the sacroiliac joints (a) before and (b) after intravenous contrast injection. Enhancement is seen at the right sacroiliac joint (arrow, left side of image), indicating active sacroiliitis.

There are five main types of psoriatic arthritis:

The underlying process in psoriatic arthritis is inflammation; therefore, treatments are directed at reducing and controlling inflammation. Milder cases of psoriatic arthritis may be treated with NSAIDs alone; however, there is a trend toward earlier use of disease-modifying antirheumatic drugs or biological response modifiers to prevent irreversible joint destruction.

Typically the medications first prescribed for psoriatic arthritis are NSAIDs such as ibuprofen and naproxen, followed by more potent NSAIDs like diclofenac, indomethacin, and etodolac. NSAIDs can irritate the stomach and intestine, and long-term use can lead to gastrointestinal bleeding.[11][12] Coxibs (COX-2 inhibitors), e.g. celecoxib or etoricoxib, are associated with a statistically significant 50 to 66% relative risk reduction in gastrointestinal ulcers and bleeding complications compared to traditional NSAIDs, but carry an increased rate of cardiovascular events such as myocardial infarction (MI, or heart attack) and stroke.[13][14] Both COX-2 inhibitors and other non-selective NSAIDs have potential adverse effects that include damage to the kidneys.
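The relative-risk arithmetic behind figures like the 50 to 66% reduction quoted above can be made concrete with a short calculation. The sketch below uses hypothetical event counts, not data from any cited trial, purely to show how relative risk and relative risk reduction are derived.

```python
# Illustrative only: hypothetical event counts, not data from any cited trial.

def relative_risk(events_exposed, n_exposed, events_control, n_control):
    """Risk in the exposed (e.g. coxib) group divided by risk in the control (e.g. NSAID) group."""
    risk_exposed = events_exposed / n_exposed
    risk_control = events_control / n_control
    return risk_exposed / risk_control

# Suppose 20 GI bleeds among 2,000 coxib users and 50 among 2,000 traditional NSAID users.
rr = relative_risk(20, 2000, 50, 2000)
rrr = 1 - rr  # relative risk reduction

print(f"Relative risk: {rr:.2f}")             # 0.40
print(f"Relative risk reduction: {rrr:.0%}")  # 60%, within the 50-66% range quoted above
```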

Disease-modifying antirheumatic drugs (DMARDs) are used in persistent symptomatic cases without exacerbation. Rather than just reducing pain and inflammation, this class of drugs helps limit the amount of joint damage that occurs in psoriatic arthritis. Most DMARDs act slowly and may take weeks or even months to take full effect. Drugs such as methotrexate or leflunomide are commonly prescribed; other DMARDs used to treat psoriatic arthritis include cyclosporin, azathioprine, and sulfasalazine. These immunosuppressant drugs can also reduce psoriasis skin symptoms but can lead to liver and kidney problems and an increased risk of serious infection.

The most recent class of treatment, called biological response modifiers or biologics, has been developed using recombinant DNA technology. Biologic medications are derived from living cells cultured in a laboratory. Unlike traditional DMARDs, which affect the entire immune system, biologics target specific parts of the immune system. They are given by injection or intravenous (IV) infusion.

Biologics prescribed for psoriatic arthritis are TNF-α inhibitors, including infliximab, etanercept, golimumab, certolizumab pegol and adalimumab, as well as the IL-12/IL-23 inhibitor ustekinumab.

Biologics may increase the risk of minor and serious infections.[citation needed] More rarely, they may be associated with nervous system disorders, blood disorders or certain types of cancer.[citation needed]

A first-in-class treatment option for the management of psoriatic arthritis, apremilast is a small-molecule phosphodiesterase-4 (PDE4) inhibitor approved for use by the FDA in 2014. By inhibiting PDE4, an enzyme which breaks down cyclic adenosine monophosphate (cAMP), cAMP levels rise, resulting in the down-regulation of various pro-inflammatory factors including TNF-α and the up-regulation of the anti-inflammatory factor interleukin 10.

It is given in tablet form and taken by mouth. Side effects include headache, back pain, nausea, diarrhea, fatigue, nasopharyngitis and upper respiratory tract infections, as well as depression and weight loss.

Apremilast was patented in 2014 and is manufactured by Celgene; there is currently no generic equivalent available on the market.

A review found tentative evidence of benefit of low-level laser therapy and concluded that it could be considered for relief of pain and stiffness associated with RA.[15]

The retinoid etretinate is effective for both arthritis and skin lesions. Photochemotherapy with methoxypsoralen and long-wave ultraviolet light (PUVA) is used for severe skin lesions. Doctors may use joint injections with corticosteroids in cases where one joint is severely affected. In psoriatic arthritis patients with severe joint damage, orthopedic surgery may be implemented to correct joint destruction, usually with the use of a joint replacement. Surgery is effective for pain alleviation, correcting joint disfigurement, and reinforcing joint usefulness and strength.

Seventy percent of people who develop psoriatic arthritis first show signs of psoriasis on the skin, 15 percent develop skin psoriasis and arthritis at the same time, and 15 percent develop skin psoriasis following the onset of psoriatic arthritis.[16]

Psoriatic arthritis can develop in people who have any level of severity of psoriatic skin disease, ranging from mild to very severe.[17]

Psoriatic arthritis tends to appear about 10 years after the first signs of psoriasis. For the majority of people this is between the ages of 30 and 55, but the disease can also affect children. The onset of psoriatic arthritis symptoms before symptoms of skin psoriasis is more common in children than adults.[18]

More than 80% of patients with psoriatic arthritis will have psoriatic nail lesions characterized by nail pitting, separation of the nail from the underlying nail bed, ridging and cracking, or more extremely, loss of the nail itself (onycholysis).[18]

Men and women are equally affected by this condition. Like psoriasis, psoriatic arthritis is more common among Caucasians than Africans or Asians.[19]

Read this article:
Psoriatic arthritis - Wikipedia

Read More...

Induced pluripotent stem cell – Wikipedia

December 13th, 2016 6:42 am

Induced pluripotent stem cells (also known as iPS cells or iPSCs) are a type of pluripotent stem cell that can be generated directly from adult cells. The iPSC technology was pioneered by Shinya Yamanaka's lab in Kyoto, Japan, which showed in 2006 that the introduction of four specific genes encoding transcription factors could convert adult cells into pluripotent stem cells.[1] He was awarded the 2012 Nobel Prize along with Sir John Gurdon "for the discovery that mature cells can be reprogrammed to become pluripotent."[2]

Pluripotent stem cells hold great promise in the field of regenerative medicine. Because they can propagate indefinitely, as well as give rise to every other cell type in the body (such as neurons, heart, pancreatic, and liver cells), they represent a single source of cells that could be used to replace those lost to damage or disease.

The most well-known type of pluripotent stem cell is the embryonic stem cell. However, since the generation of embryonic stem cells involves destruction (or at least manipulation) [3] of the pre-implantation stage embryo, there has been much controversy surrounding their use. Further, because embryonic stem cells can only be derived from embryos, it has so far not been feasible to create patient-matched embryonic stem cell lines.

Since iPSCs can be derived directly from adult tissues, they not only bypass the need for embryos, but can be made in a patient-matched manner, which means that each individual could have their own pluripotent stem cell line. These unlimited supplies of autologous cells could be used to generate transplants without the risk of immune rejection. While the iPSC technology has not yet advanced to a stage where therapeutic transplants have been deemed safe, iPSCs are readily being used in personalized drug discovery efforts and understanding the patient-specific basis of disease.[4]

iPSCs are typically derived by introducing products of a specific set of pluripotency-associated genes, or reprogramming factors, into a given cell type. The original set of reprogramming factors (also dubbed Yamanaka factors) are the transcription factors Oct4 (Pou5f1), Sox2, cMyc, and Klf4. While this combination is most conventional in producing iPSCs, each of the factors can be functionally replaced by related transcription factors, miRNAs, small molecules, or even non-related genes such as lineage specifiers.

iPSC derivation is typically a slow and inefficient process, taking 1-2 weeks for mouse cells and 3-4 weeks for human cells, with efficiencies around 0.01%-0.1%. However, considerable advances have been made in improving the efficiency and the time it takes to obtain iPSCs. Upon introduction of reprogramming factors, cells begin to form colonies that resemble pluripotent stem cells, which can be isolated based on their morphology, conditions that select for their growth, or through expression of surface markers or reporter genes.
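A rough sense of what a 0.01%-0.1% reprogramming efficiency means in practice can be had from a back-of-the-envelope estimate. The sketch below uses illustrative assumptions only (it is not a protocol): every plated cell is assumed to receive the factors, and colonies are assumed to arise independently at the stated per-cell efficiency.

```python
# Back-of-the-envelope estimate of expected iPSC colony yield.
# Assumptions (illustrative, not from any specific protocol):
#   - every plated cell receives the reprogramming factors
#   - colonies arise independently at the stated per-cell efficiency

def expected_colonies(cells_plated, efficiency):
    """Expected number of iPSC colonies at a given per-cell reprogramming efficiency."""
    return cells_plated * efficiency

for eff in (0.0001, 0.001):  # 0.01% and 0.1%, the range quoted above
    print(f"{100_000:,} cells at {eff:.2%} -> ~{expected_colonies(100_000, eff):.0f} colonies")
# 100,000 cells at 0.01% -> ~10 colonies
# 100,000 cells at 0.10% -> ~100 colonies
```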

Induced pluripotent stem cells were first generated by Shinya Yamanaka's team at Kyoto University, Japan, in 2006.[1] They hypothesized that genes important to embryonic stem cell (ESC) function might be able to induce an embryonic state in adult cells. They chose twenty-four genes previously identified as important in ESCs and used retroviruses to deliver these genes to mouse fibroblasts. The fibroblasts were engineered so that any cells reactivating the ESC-specific gene, Fbx15, could be isolated using antibiotic selection.

Upon delivery of all twenty-four factors, ESC-like colonies emerged that reactivated the Fbx15 reporter and could propagate indefinitely. To identify the genes necessary for reprogramming, the researchers removed one factor at a time from the pool of twenty-four. By this process, they identified four factors, Oct4, Sox2, cMyc, and Klf4, which were each necessary and together sufficient to generate ESC-like colonies under selection for reactivation of Fbx15.

Similar to ESCs, these iPSCs had unlimited self-renewal and were pluripotent, contributing to lineages from all three germ layers in the context of embryoid bodies, teratomas, and fetal chimeras. However, the molecular makeup of these cells, including gene expression and epigenetic marks, was somewhere between that of a fibroblast and an ESC, and the cells failed to produce viable chimeras when injected into developing embryos.

In June 2007, three separate research groups, including that of Yamanaka's, a Harvard/University of California, Los Angeles collaboration, and a group at MIT, published studies that substantially improved on the reprogramming approach, giving rise to iPSCs that were indistinguishable from ESCs. Unlike the first generation of iPSCs, these second generation iPSCs produced viable chimeric mice and contributed to the mouse germline, thereby achieving the 'gold standard' for pluripotent stem cells.

These second-generation iPSCs were derived from mouse fibroblasts by retroviral-mediated expression of the same four transcription factors (Oct4, Sox2, cMyc, Klf4). However, instead of using Fbx15 to select for pluripotent cells, the researchers used Nanog, a gene that is functionally important in ESCs. By using this different strategy, the researchers created iPSCs that were functionally identical to ESCs.[5][6][7][8]

Reprogramming of human cells to iPSCs was reported in November 2007 by two independent research groups: Shinya Yamanaka of Kyoto University, Japan, who pioneered the original iPSC method, and James Thomson of University of Wisconsin-Madison who was the first to derive human embryonic stem cells. With the same principle used in mouse reprogramming, Yamanaka's group successfully transformed human fibroblasts into iPSCs with the same four pivotal genes, OCT4, SOX2, KLF4, and C-MYC, using a retroviral system,[9] while Thomson and colleagues used a different set of factors, OCT4, SOX2, NANOG, and LIN28, using a lentiviral system.[10]

Obtaining fibroblasts to produce iPSCs involves a skin biopsy, and there has been a push towards identifying cell types that are more easily accessible.[11][12] In 2008, iPSCs were derived from human keratinocytes, which could be obtained from a single hair pluck.[13][14] In 2010, iPSCs were derived from peripheral blood cells,[15][16] and in 2012, iPSCs were made from renal epithelial cells in the urine.[17]

Other considerations for starting cell type include mutational load (for example, skin cells may harbor more mutations due to UV exposure),[11][12] time it takes to expand the population of starting cells,[11] and the ability to differentiate into a given cell type.[18]


The generation of iPS cells is crucially dependent on the transcription factors used for the induction.

Oct-3/4 and certain products of the Sox gene family (Sox1, Sox2, Sox3, and Sox15) have been identified as crucial transcriptional regulators involved in the induction process whose absence makes induction impossible. Additional genes, however, including certain members of the Klf family (Klf1, Klf2, Klf4, and Klf5), the Myc family (c-myc, L-myc, and N-myc), Nanog, and LIN28, have been identified to increase the induction efficiency.

Although the methods pioneered by Yamanaka and others have demonstrated that adult cells can be reprogrammed to iPS cells, there are still challenges associated with this technology:


One of the main strategies for avoiding problems (1) and (2) has been to use small compounds that can mimic the effects of transcription factors. These compounds can compensate for a reprogramming factor that does not effectively target the genome or fails at reprogramming for another reason; thus they raise reprogramming efficiency. They also avoid the problem of genomic integration, which in some cases contributes to tumor genesis. Key studies using such strategies were conducted in 2008. Melton et al. studied the effects of the histone deacetylase (HDAC) inhibitor valproic acid. They found that it increased reprogramming efficiency 100-fold compared to Yamanaka's traditional transcription factor method.[32] The researchers proposed that this compound was mimicking the signaling usually caused by the transcription factor c-Myc. A similar type of compensation mechanism was proposed to mimic the effects of Sox2. In 2008, Ding et al. used the inhibition of histone methyl transferase (HMT) with BIX-01294 in combination with the activation of calcium channels in the plasma membrane in order to increase reprogramming efficiency.[33] Deng et al. of Beijing University reported in July 2013 that induced pluripotent stem cells can be created without any genetic modification. They used a cocktail of seven small-molecule compounds, including DZNep, to induce mouse somatic cells into stem cells, which they called CiPS cells, with an efficiency of 0.2%, comparable to that of standard iPSC production techniques. The CiPS cells were introduced into developing mouse embryos and were found to contribute to all major cell types, proving their pluripotency.[34][35]

Ding et al. demonstrated an alternative to transcription factor reprogramming through the use of drug-like chemicals. By studying the MET (mesenchymal-epithelial transition) process, in which fibroblasts are pushed to a stem cell-like state, Ding's group identified two chemicals, the ALK5 inhibitor SB431412 and the MEK (mitogen-activated protein kinase kinase) inhibitor PD0325901, which were found to increase the efficiency of the classical genetic method 100-fold. Adding a third compound known to be involved in the cell survival pathway, thiazovivin, further increases the efficiency 200-fold. Using the combination of these three compounds also shortened the reprogramming of human fibroblasts from four weeks to two weeks.[36][37]

In April 2009, it was demonstrated that generation of iPS cells is possible without any genetic alteration of the adult cell: a repeated treatment of the cells with certain proteins channeled into the cells via poly-arginine anchors was sufficient to induce pluripotency.[38] The acronym given for those iPSCs is piPSCs (protein-induced pluripotent stem cells).

Another key strategy for avoiding problems such as tumor genesis and low throughput has been to use alternate forms of vectors: adenovirus, plasmids, and naked DNA and/or protein compounds.

In 2008, Hochedlinger et al. used an adenovirus to transport the requisite four transcription factors into the DNA of skin and liver cells of mice, resulting in cells identical to ESCs. The adenovirus differs from other vectors such as retroviruses in that it does not incorporate any of its own genes into the targeted host, and so avoids the potential for insertional mutagenesis.[39] In 2009, Freed et al. demonstrated successful reprogramming of human fibroblasts to iPS cells.[40] Another advantage of using adenoviruses is that they only need to be present for a brief amount of time for effective reprogramming to take place.

Also in 2008, Yamanaka et al. found that they could transfer the four necessary genes with a plasmid.[41] The Yamanaka group successfully reprogrammed mouse cells by transfection with two plasmid constructs carrying the reprogramming factors; the first plasmid expressed c-Myc, while the second expressed the other three factors (Oct4, Klf4, and Sox2). Although the plasmid methods avoid viruses, they still require cancer-promoting genes to accomplish reprogramming. The other main issue with these methods is that they tend to be much less efficient compared to retroviral methods. Furthermore, transfected plasmids have been shown to integrate into the host genome and therefore they still pose the risk of insertional mutagenesis. Because non-retroviral approaches have demonstrated such low efficiency levels, researchers have attempted to effectively rescue the technique with what is known as the PiggyBac Transposon System. Several studies have demonstrated that this system can effectively deliver the key reprogramming factors without leaving footprint mutations in the host cell genome. The PiggyBac Transposon System involves the re-excision of exogenous genes, which eliminates the issue of insertional mutagenesis. [42]

In January 2014, two articles were published claiming that a type of pluripotent stem cell can be generated by subjecting the cells to certain types of stress (bacterial toxin, a low pH of 5.7, or physical squeezing); the resulting cells were called STAP cells, for stimulus-triggered acquisition of pluripotency.[43]

In light of difficulties that other labs had replicating the results of this surprising study, in March 2014 one of the co-authors called for the articles to be retracted.[44] On 4 June 2014, the lead author, Obokata, agreed to retract both papers,[45] after she was found to have committed research misconduct, as concluded in an investigation by RIKEN on 1 April 2014.[46]

MicroRNAs are short RNA molecules that bind to complementary sequences on messenger RNA and block expression of a gene. Measuring variations in microRNA expression in iPS cells can be used to predict their differentiation potential.[47] Addition of microRNAs can also be used to enhance iPS potential. Several mechanisms have been proposed.[47] ES cell-specific microRNA molecules (such as miR-291, miR-294 and miR-295) enhance the efficiency of induced pluripotency by acting downstream of c-Myc.[48] MicroRNAs can also block expression of repressors of Yamanaka's four transcription factors, and there may be additional mechanisms that induce reprogramming even in the absence of added exogenous transcription factors.[47]

Induced pluripotent stem cells are similar to natural pluripotent stem cells, such as embryonic stem (ES) cells, in many aspects, such as the expression of certain stem cell genes and proteins, chromatin methylation patterns, doubling time, embryoid body formation, teratoma formation, viable chimera formation, and potency and differentiability, but the full extent of their relation to natural pluripotent stem cells is still being assessed.[49]

Gene expression and genome-wide H3K4me3 and H3K27me3 were found to be extremely similar between ES and iPS cells.[50][citation needed] The generated iPSCs were remarkably similar to naturally isolated pluripotent stem cells (such as mouse and human embryonic stem cells, mESCs and hESCs, respectively) in the following respects, thus confirming the identity, authenticity, and pluripotency of iPSCs to naturally isolated pluripotent stem cells:

Recent achievements and future tasks for safe iPSC-based cell therapy are collected in the review of Okano et al.[62]

The task of producing iPS cells continues to be challenging due to the six problems mentioned above. A key tradeoff to overcome is that between efficiency and genomic integration. Most methods that do not rely on the integration of transgenes are inefficient, while those that do rely on the integration of transgenes face the problems of incomplete reprogramming and tumor genesis, although a vast number of techniques and methods have been attempted. Another large set of strategies is to perform a proteomic characterization of iPS cells.[63] Further studies and new strategies should generate optimal solutions to the five main challenges. One approach might attempt to combine the positive attributes of these strategies into an ultimately effective technique for reprogramming cells to iPS cells.

Another approach is the use of iPS cells derived from patients to identify therapeutic drugs able to rescue a phenotype. For instance, iPS cell lines derived from patients affected by ectodermal dysplasia syndrome (EEC), in which the p63 gene is mutated, display abnormal epithelial commitment that could be partially rescued by a small compound[64]

An attractive feature of human iPS cells is the ability to derive them from adult patients to study the cellular basis of human disease. Since iPS cells are self-renewing and pluripotent, they represent a theoretically unlimited source of patient-derived cells which can be turned into any type of cell in the body. This is particularly important because many other types of human cells derived from patients tend to stop growing after a few passages in laboratory culture. iPS cells have been generated for a wide variety of human genetic diseases, including common disorders such as Down syndrome and polycystic kidney disease.[65][66] In many instances, the patient-derived iPS cells exhibit cellular defects not observed in iPS cells from healthy patients, providing insight into the pathophysiology of the disease.[67] An international collaborative project, StemBANCC, was formed in 2012 to build a collection of iPS cell lines for drug screening for a variety of diseases. Managed by the University of Oxford, the effort pooled funds and resources from 10 pharmaceutical companies and 23 universities. The goal is to generate a library of 1,500 iPS cell lines which will be used in early drug testing by providing a simulated human disease environment.[68] Furthermore, combining hiPSC technology and genetically encoded voltage and calcium indicators provided a large-scale and high-throughput platform for cardiovascular drug safety screening.[69]

A proof-of-concept of using induced pluripotent stem cells (iPSCs) to generate a human organ for transplantation was reported by researchers from Japan. Human liver buds (iPSC-LBs) were grown from a mixture of three different kinds of stem cells: hepatocytes (for liver function) coaxed from iPSCs; endothelial stem cells (to form the lining of blood vessels) from umbilical cord blood; and mesenchymal stem cells (to form connective tissue). This new approach allows different cell types to self-organize into a complex organ, mimicking the process in fetal development. After growing in vitro for a few days, the liver buds were transplanted into mice, where the liver quickly connected with the host blood vessels and continued to grow. Most importantly, it performed regular liver functions including metabolizing drugs and producing liver-specific proteins. Further studies will monitor the longevity of the transplanted organ in the host body (ability to integrate or avoid rejection) and whether it will transform into tumors.[70][71] Using this method, cells from one mouse could be used to test 1,000 drug compounds to treat liver disease, and reduce animal use by up to 50,000.[72]

Embryonic cord-blood cells were induced into pluripotent stem cells using plasmid DNA. Using the cell surface endothelial/pericytic markers CD31 and CD146, researchers identified 'vascular progenitors', high-quality multipotent vascular stem cells. After the iPS cells were injected directly into the vitreous of the damaged retina of mice, the stem cells engrafted into the retina, grew, and repaired the damaged blood vessels.[73][74]

Labelled iPSC-derived NSCs injected into laboratory animals with brain lesions were shown to migrate to the lesions, and some motor function improvement was observed.[75]

Although a pint of donated blood contains about two trillion red blood cells, and over 107 million blood donations are collected globally, there is still a critical need for blood for transfusion. In 2014, type O red blood cells were synthesized at the Scottish National Blood Transfusion Service from iPSCs. The cells were induced to become mesoderm, then blood cells, and then red blood cells. The final step was to make them eject their nuclei and mature properly. Type O can be transfused into all patients. Human clinical trials were not expected to begin before 2016.[76]

The first human clinical trial using autologous iPSCs was approved by the Japanese Ministry of Health and was to be conducted in 2014 in Kobe. However, the trial was suspended after Japan's new regenerative medicine laws came into effect in November 2014.[77] iPSCs derived from skin cells of six patients suffering from wet age-related macular degeneration were to be reprogrammed to differentiate into retinal pigment epithelial (RPE) cells. The cell sheet would be transplanted into the affected retina where the degenerated RPE tissue had been excised. Safety and vision restoration monitoring would last one to three years.[78][79] The benefits of using autologous iPSCs are that there is theoretically no risk of rejection and that it eliminates the need to use embryonic stem cells.[79]

See the original post here:
Induced pluripotent stem cell - Wikipedia

Read More...

Communities Voices and Insights – Washington Times

December 8th, 2016 5:45 am

Related Articles

Robert P. George writes, "If Donald Trump keeps his word, his victory over Hillary Clinton will have monumental consequences not only for the Supreme Court but for the entire federal judiciary."

Pope Francis on fake news; Jared Kushner and Israeli settlement; Trump and evangelicals

Soviet dictator Joseph Stalin had his Pulitzer Prize-winning New York Times reporter Walter Duranty to cover up his genocidal crimes. Cuban dictator Fidel Castro had his New York Times reporter Herbert Matthews to deny his Communist fanaticism. And Elon Musk has his New York Times reporter Andrew Ross Sorkin to whitewash his job-killing, crony-capitalist, multi-billion-dollar plunder of American taxpayers.

Billionaire Sheldon Adelson sanctimoniously demanding a federal monopoly on the exploitation of fashionable debaucheries is like a dog walking on his hind legs. It is done awkwardly, but you are surprised to see it done at all.

Along with the joys of the season, the holidays call us to shop at busier than usual stores; attend special parties and events; as well as travel for extended amounts of time with planes, trains and automobiles to visit family and friends.

Gary Sinise at Pearl Harbor; World Vision and Israel, by Luke Moon; Bibles in hotel rooms

I'm sure that Donald Trump and the people who will serve in his administration have high goals for how to "Make America Great Again" -- that phrase borrowed from Reagan's 1980 campaign. The rarest achievement of all, however, might be for Mr. Trump to serve the American people so splendidly that even after eight years in office, voters say, "We'll take some more of that."

The new federal initiative breaks my heart.

Fake news is an old story. It has featured in domestic politics and international affairs since the beginning of time.

Hugh Hewitt on Christians as strangers in the land; Trump and LGBT Americans; Betsy DeVos

As a modern day woman, I count the Jeep Wrangler as my favorite vehicle and there is much good I can say about the 2017 version. Of course, better to let the Wrangler speak for itself.

By Jan. 20, President Obama will be gone and President-elect Donald Trump will have the opportunity to lead America for four or possibly eight years. But there is still a month and a half left for Mr. Obama to do as much damage as he can on the way out.

General Mattis on reading; Chip and Joanna Gaines; Os Guinness on Christians as salt and light

The United States should abandon its propensity for moral sermonizing in the manner of Dickensian schoolmarms about foreign leaders in obedience to the biblical injunction that, "He who is without sin ... let him first cast a stone at her." We need to tend to our own gardens.

As winter approaches, the temperature has gotten exponentially hotter on the Crimean Peninsula and Ukraine's border with Russia.

Evangelical opinion on a Cabinet with Romney; John Heubusch authors a novel of science and faith; book reading in America

By Lawrence J. Fedewa

The presidential election of 2016 has been the most dramatic in memory. Each candidate went up or down every week, shocking revelations came every few days, then a stunning victory - now this. Just when everyone thought it was over, up comes another chapter: recount petitions!

It is that special time of year - where sugar plums not only dance in our heads but also join "Auntie's" favorite pie along with tables filled with tempting delights, at every turn. And if you have concerns about tipping the scales, it is for good reason.

Falwell and Liberty after Trump; Pro-Life Millennials; Ted Cruz and the Castros

A constitutional wall will block President-elect Donald Trump's mean-spirited ambition to swiftly deport up to 3 million undocumented immigrants

More here:
Communities Voices and Insights - Washington Times

Read More...

HIV/AIDS research – Wikipedia

December 8th, 2016 5:45 am

HIV/AIDS research includes all medical research that attempts to prevent, treat, or cure HIV/AIDS, as well as fundamental research about the nature of HIV as an infectious agent and AIDS as the disease caused by HIV.

Examples of particular HIV/AIDS research include drug development, HIV vaccines, pre-exposure prophylaxis, and post-exposure prophylaxis.[1]

A body of scientific evidence has shown that men who are circumcised are less likely to contract HIV than men who are uncircumcised.[2] Research published in 2014 concludes that the sex hormones estrogen and progesterone selectively impact HIV transmission.[3]

"Pre-exposure prophylaxis" refers to the practice of taking some drugs before being exposed to HIV infection, and having a decreased chance of contracting HIV as a result of taking that drug. Post-exposure prophylaxis refers to taking some drugs quickly after being exposed to HIV, while the virus is in a person's body but before the virus has established itself. In both cases, the drugs would be the same as those used to treat persons with HIV, and the intent of taking the drugs would be to eradicate the virus before the person becomes irreversibly infected.

Post-exposure prophylaxis is recommended in anticipated cases of HIV exposure, such as if a nurse somehow has blood-to-blood contact with a patient in the course of work, or if someone without HIV requests the drugs immediately after having unprotected sex with a person who might have HIV. Pre-exposure prophylaxis is sometimes an option for HIV-negative persons who feel that they are at increased risk of HIV infection, such as an HIV-negative person in a serodiscordant relationship with an HIV-positive partner.

Current research on these agents includes drug development, efficacy testing, and practice recommendations for using drugs for HIV prevention.

The within-host dynamics of HIV infection include the spread of the virus in vivo, the establishment of latency, the effects of immune response on the virus, and more.[4][5] Early studies used simple models and only considered the cell-free spreading of HIV, in which virus particles bud from an infected T cell, enter the blood/extracellular fluid, and then infect another T cell.[5] A 2015 study[4] proposes a more realistic model of HIV dynamics that also incorporates the viral cell-to-cell spreading mechanism, in which the virus is passed directly from one cell to another, as well as T cell activation, the cellular immune response, and immune exhaustion as the infection progresses.[4]
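The "simple models" of cell-free spread mentioned above are usually written as a three-compartment system of ordinary differential equations for uninfected target T cells, infected cells and free virus. The sketch below implements only that basic cell-free model; the parameter values are illustrative placeholders, and it deliberately omits the cell-to-cell transmission, T cell activation and immune-exhaustion terms added by the 2015 study cited above.

```python
# Basic cell-free within-host HIV dynamics model (target cells T, infected cells I, virus V):
#   dT/dt = lam - d*T - beta*T*V
#   dI/dt = beta*T*V - delta*I
#   dV/dt = p*I - c*V
# Parameter values below are illustrative placeholders, not fitted to data.

from scipy.integrate import solve_ivp

lam, d = 1e4, 0.01      # T cell production (cells/mL/day) and death rate (1/day)
beta = 2e-7             # infection rate (mL/virion/day)
delta = 0.7             # infected-cell death rate (1/day)
p, c = 100.0, 13.0      # virion production (virions/cell/day) and clearance (1/day)

def hiv_model(t, y):
    T, I, V = y
    dT = lam - d * T - beta * T * V
    dI = beta * T * V - delta * I
    dV = p * I - c * V
    return [dT, dI, dV]

# Start from the uninfected steady state (T = lam/d) plus a small viral inoculum.
y0 = [lam / d, 0.0, 1e-3]
sol = solve_ivp(hiv_model, (0, 100), y0)

T_end, I_end, V_end = sol.y[:, -1]
print(f"After 100 days: T ~ {T_end:.3g}, I ~ {I_end:.3g}, V ~ {V_end:.3g}")
```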

A 2014 study with SIV found that the virus initially establishes a reservoir in the gut. The virus infection provokes an inflammatory response of Paneth cells in the intestine, helping to spread the virus by causing tissue damage. The findings offer new pointers for potential future treatments and testing (biomarkers), and help to explain the virus's resistance to antiviral therapies. The study also identified the bacterial strain Lactobacillus plantarum, which reversed damage by rapidly reducing IL-1β (interleukin-1 beta).[6] Seeding of HIV in the body begins within a few days, during the acute phase of HIV infection.[7]

Research to improve current treatments includes decreasing side effects of current drugs, further simplifying drug regimens to improve adherence, and determining better sequences of regimens to manage drug resistance. There are variations in the health community in recommendations on what treatment doctors should recommend for people with HIV. One question, for example, is determining when a doctor should recommend that a patient take antiretroviral drugs and what drugs a doctor may recommend. This field also includes the development of antiretroviral drugs.

Infection with the Human Immunodeficiency Virus-1 (HIV-1) is associated with clinical symptoms of accelerated aging, as evidenced by increased incidence and diversity of age-related illnesses at relatively young ages. A significant age acceleration effect has been detected in brain (7.4 years) and blood (5.2 years) tissue due to HIV-1 infection,[8] with the help of a biomarker of aging known as the epigenetic clock.
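Figures like the 7.4- and 5.2-year estimates above are age-acceleration values: the difference between the DNA-methylation-predicted ("epigenetic") age and the age expected for that individual, commonly taken as the residual from regressing epigenetic age on chronological age. The sketch below shows only that final residual step, with made-up ages; it does not implement the clock itself, which is a penalized regression over methylation values at many CpG sites.

```python
# Age acceleration = residual of epigenetic age regressed on chronological age.
# The ages below are fabricated for illustration; a real analysis would use
# epigenetic ages produced by a published clock (e.g. Horvath's) for each sample.

import numpy as np

chronological_age = np.array([25.0, 34.0, 41.0, 52.0, 60.0])
epigenetic_age    = np.array([27.0, 40.0, 44.0, 59.0, 66.0])

# Fit epigenetic age ~ chronological age by ordinary least squares.
slope, intercept = np.polyfit(chronological_age, epigenetic_age, deg=1)
predicted = slope * chronological_age + intercept

age_acceleration = epigenetic_age - predicted  # positive = "older" than expected
for chron, accel in zip(chronological_age, age_acceleration):
    print(f"chronological age {chron:4.1f}: age acceleration {accel:+.1f} years")
```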

A long-term nonprogressor is a person who is infected with HIV, but whose body, for whatever reason, naturally controls the virus so that the infection does not progress to the AIDS stage. Such persons are of great interest to researchers, who feel that a study of their physiologies could provide a deeper understanding of the virus and disease.

An HIV vaccine is a vaccine that would be given to a person who does not have HIV, in order to confer protection against subsequent exposures to HIV, thus reducing the likelihood that the person would become infected by HIV. Currently, no effective HIV vaccine exists. Various HIV vaccines have been tested in clinical trials almost since the discovery of HIV.

Only a vaccine is thought to be able to halt the pandemic. This is because a vaccine would cost less, thus being affordable for developing countries, and would not require daily treatment.[9] However, after over 20 years of research, HIV-1 remains a difficult target for a vaccine.[9][10]

In 2003 a clinical trial in Thailand tested an HIV vaccine called RV 144. In 2009, the researchers reported that this vaccine showed some efficacy in protecting recipients from HIV infection. Results of this trial gave the first supporting evidence of any vaccine being effective in lowering the risk of contracting HIV. Another possible vaccine approach comes from a novel gene therapy that alters the CCR5 co-receptor permanently, preventing HIV from entering cells.[11] Other vaccine trials continue worldwide.

A microbicide for sexually transmitted diseases is a gel that would be applied to the skin, perhaps as a rectal microbicide for persons who engage in anal sex or a vaginal microbicide for persons who engage in vaginal sex. If infected body fluid such as blood or semen were to touch the gel, the HIV in that fluid would be destroyed, making the people having sex less likely to spread infection between themselves.

On March 7, 2013, the Washington University in St. Louis website published a report by Julia Evangelou Strait describing ongoing nanoparticle research, which showed that nanoparticles loaded with various compounds could be used to target infectious agents while leaving healthy cells unaffected. In the study detailed by this report, nanoparticles loaded with melittin, a compound found in bee venom, delivered the agent to HIV, causing the breakdown of the virus's outer protein envelope. This, the researchers say, could lead to the production of a vaginal gel that could help prevent infection by disabling the virus.[12] Dr Joshua Hood goes on to explain that beyond preventative measures in the form of a topical gel, he sees "potential for using nanoparticles with melittin as therapy for existing HIV infections, especially those that are drug-resistant. The nanoparticles could be injected intravenously and, in theory, would be able to clear HIV from the blood stream."[12]

In 2007, Timothy Ray Brown,[13] a 40-year-old HIV-positive man, also known as "the Berlin Patient", was given a stem cell transplant as part of his treatment for acute myeloid leukemia (AML).[14] A second transplant was made a year later after a relapse. The donor was chosen not only for genetic compatibility but also for being homozygous for the CCR5-Δ32 mutation that confers resistance to HIV infection.[15][16] After 20 months without antiretroviral drug treatment, it was reported that HIV levels in Brown's blood, bone marrow, and bowel were below the limit of detection.[16] The virus remained undetectable over three years after the first transplant.[14] Although the researchers and some commentators have characterized this result as a cure, others suggest that the virus may remain hidden in tissues[17] such as the brain (which acts as a viral reservoir).[18] Stem cell treatment remains investigational because of its anecdotal nature, the disease and mortality risk associated with stem cell transplants, and the difficulty of finding suitable donors.[17][19]

Complementing efforts to control viral replication, immunotherapies that may assist in the recovery of the immune system have been explored in past and ongoing trials, including IL-2 and IL-7.[20]

The failure of vaccine candidates to protect against HIV infection and progression to AIDS has led to a renewed focus on the biological mechanisms responsible for HIV latency. A limited period of therapy combining antiretrovirals with drugs targeting the latent reservoir may one day allow for total eradication of HIV infection.[21] Researchers have discovered an abzyme that can destroy the gp120 CD4 binding site. This protein is common to all HIV variants, as it is the site by which the virus attaches to CD4 on T lymphocytes, the infection of which subsequently compromises the immune system.[22]

A turning point for HIV research occurred in 2007, following the bone marrow transplant of HIV sufferer Timothy Ray Brown. Brown underwent the procedure after he developed leukaemia, and the donor of the bone marrow possessed a rare genetic mutation that rendered the transplanted cells resistant to HIV. Brown attained the title of "the Berlin Patient" in the HIV research field and is the first man considered to have been cured of the virus. As of April 2013, two primary approaches are being pursued in the search for an HIV cure: the first is gene therapy that aims to develop an HIV-resistant immune system for patients, and the second is being led by Danish scientists, who are conducting clinical trials to strip HIV from human DNA and have it destroyed permanently by the immune system.[23]

Two more cases with similarities to the Brown case have occurred since the 2007 discovery; however, they differ in that the transplanted marrow has not been confirmed as carrying the mutation. The cases were publicized in a July 2013 CNN story relaying the experience of two patients who had taken antiretroviral therapy for years before they developed lymphoma, a cancer of the lymph nodes. They then underwent lymphoma chemotherapy and bone marrow transplantation while remaining on an antiretroviral regimen; while they retained traces of HIV four months afterwards, six to nine months after the transplant the two patients had no detectable trace of HIV in their blood. However, the managing clinician Dr. Timothy Henrich stated at the Malaysian International AIDS Society Conference where the findings were presented:

It's possible, again, that the virus could return in a week, it could return in a month -- in fact, some mathematical modeling predicts that virus could even return one to two years after we stop antiretroviral therapy, so we really don't know what the long-term or full effects of stem cell transplantation and viral persistence is.[24]

In March 2016, researchers at Temple University, Philadelphia, reported that they had used genome editing to delete HIV from T cells. According to the researchers, this approach could lead to a dramatic reduction of the viral load in patient cells.[25][26]

In April 2016, Innovative Bioresearch, a privately held company owned by research scientist Jonathan Fior, reported the results of a pioneering pilot study that explored the infusion of SupT1 cells as a cell-based therapy for HIV in a humanized mouse model.[27][28] This novel cell-based therapy uses irradiated SupT1 cells as a decoy target for HIV, to prevent CD4+ T cell depletion as well as to render the virus less cytopathic. The research showed that animals treated with SupT1 cell infusion had a significantly lower plasma viral load (~10-fold) and potentially preserved CD4+ T cell frequency at Week 1, with one animal showing complete suppression of viral replication and preservation of CD4+ T cell count (no virus detected at Weeks 3 and 4). Interestingly, as also noted in a previous paper written by the same author, in vitro studies of HIV evolution showed that prolonged virus replication in the SupT1 cell line results in a less cytopathic virus with a reduced capacity for syncytium formation, a higher sensitivity to neutralization, improved replication in SupT1 cells, and impaired infection of primary CD4+ T cells.[29] According to the research, this indicates that in vivo virus replication in the infused SupT1 cells should also have a vaccination effect.[28]

Read the rest here:
HIV/AIDS research - Wikipedia

Read More...

Breast Cancer Research | Home page

December 8th, 2016 5:44 am

Dr. Lewis A. Chodosh is a physician-scientist who received a BS in Molecular Biophysics and Biochemistry from Yale University, an MD from Harvard Medical School, and a PhD in Biochemistry from M.I.T. in the laboratory of Dr. Phillip Sharp. He performed his clinical training in Internal Medicine and Endocrinology at the Massachusetts General Hospital, after which he was a postdoctoral research fellow with Dr. Philip Leder at Harvard Medical School. Dr. Chodosh joined the faculty of the University of Pennsylvania in 1994, where he is currently a Professor in the Departments of Cancer Biology, Cell & Developmental Biology, and Medicine. He serves as Chairman of the Department of Cancer Biology, Associate Director for Basic Science of the Abramson Cancer Center, and Director of Cancer Genetics for the Abramson Family Cancer Research Institute at the University of Pennsylvania. Additionally, he is on the scientific advisory board for the Harvard Nurses' Health Studies I and II.

Dr. Chodosh's research focuses on genetic, genomic and molecular approaches to understanding breast cancer susceptibility and pathogenesis.

Continue reading here:
Breast Cancer Research | Home page

Read More...

Ageing – Wikipedia

December 7th, 2016 2:43 pm

Ageing, also spelled aging, is the process of becoming older. The term refers especially to human beings, many animals, and fungi, whereas for example bacteria, perennial plants and some simple animals are potentially immortal. In the broader sense, ageing can refer to single cells within an organism which have ceased dividing (cellular senescence) or to the population of a species (population ageing).

In humans, ageing represents the accumulation of changes in a human being over time,[1] encompassing physical, psychological, and social change. Reaction time, for example, may slow with age, while knowledge of world events and wisdom may expand. Ageing is among the greatest known risk factors for most human diseases:[2] of the roughly 150,000 people who die each day across the globe, about two thirds die from age-related causes.

The causes of ageing are unknown; current theories are assigned to the damage concept, whereby the accumulation of damage (such as DNA breaks, oxidised DNA and/or mitochondrial malfunctions)[3] may cause biological systems to fail, or to the programmed ageing concept, whereby internal processes (such as DNA telomere shortening) may cause ageing. Programmed ageing should not be confused with programmed cell death (apoptosis).

The discovery, in 1934, that calorie restriction can extend lifespan by 50% in rats has motivated research into delaying and preventing ageing.

Human beings and members of other species, especially animals, necessarily experience ageing and mortality. Fungi, too, can age.[4] In contrast, many species can be considered immortal: for example, bacteria fission to produce daughter cells, strawberry plants grow runners to produce clones of themselves, and animals in the genus Hydra have a regenerative ability with which they avoid dying of old age.

Early life forms on Earth, starting at least 3.7 billion years ago,[5] were single-celled organisms. Such single-celled organisms (prokaryotes, protozoans, algae) multiply by fissioning into daughter cells, thus do not age and are innately immortal.[6][7]

Ageing and mortality of the individual organism became possible with the evolution of sexual reproduction,[8] which occurred with the emergence of the fungal/animal kingdoms approximately a billion years ago, and with the evolution of flowering plants 160 million years ago. The sexual organism could henceforth pass on some of its genetic material to produce new individuals and itself could become disposable with regards to the survival of its species.[8] This classic biological idea has however been perturbed recently by the discovery that the bacterium E. coli may split into distinguishable daughter cells, which opens the theoretical possibility of "age classes" among bacteria.[9]

Even within humans and other mortal species, there are cells with the potential for immortality: cancer cells which have lost the ability to die when maintained in cell culture such as the HeLa cell line,[10] and specific stem cells such as germ cells (producing ova and spermatozoa).[11] In artificial cloning, adult cells can be rejuvenated back to embryonic status and then used to grow a new tissue or animal without ageing.[12] Normal human cells however die after about 50 cell divisions in laboratory culture (the Hayflick Limit, discovered by Leonard Hayflick in 1961).[10]

A number of characteristic ageing symptoms are experienced by a majority or by a significant proportion of humans during their lifetimes.

Dementia becomes more common with age.[35] About 3% of people between the ages of 65 and 74 have dementia, 19% between 75 and 84, and nearly half of those over 85 years of age.[36] The spectrum includes mild cognitive impairment and the neurodegenerative diseases of Alzheimer's disease, cerebrovascular disease, Parkinson's disease and Lou Gehrig's disease. Furthermore, many types of memory may decline with ageing, but not semantic memory or general knowledge such as vocabulary definitions, which typically increases or remains steady until late adulthood[37] (see Ageing brain). Intelligence may decline with age, though the rate may vary depending on the type and may in fact remain steady throughout most of the lifespan, dropping suddenly only as people near the end of their lives. Individual variations in rate of cognitive decline may therefore be explained in terms of people having different lengths of life.[38] There might be changes to the brain: after 20 years of age there may be a 10% reduction each decade in the total length of the brain's myelinated axons.[39][40]

Age can result in visual impairment, whereby non-verbal communication is reduced,[41] which can lead to isolation and possible depression. Macular degeneration causes vision loss and increases with age, affecting nearly 12% of those above the age of 80.[42] This degeneration is caused by systemic changes in the circulation of waste products and by growth of abnormal vessels around the retina.[43]

A distinction can be made between "proximal ageing" (age-based effects that come about because of factors in the recent past) and "distal ageing" (age-based differences that can be traced back to a cause early in a person's life, such as childhood poliomyelitis).[38]

Ageing is among the greatest known risk factors for most human diseases.[2] Of the roughly 150,000 people who die each day across the globe, about two thirds (100,000 per day) die from age-related causes. In industrialised nations, the proportion is higher, reaching 90%.[44][45][46]

At present, researchers are only just beginning to understand the biological basis of ageing even in relatively simple and short-lived organisms such as yeast.[47] Less still is known about mammalian ageing, in part due to the much longer lifespans of even small mammals such as the mouse (around 3 years). A primary model organism for studying ageing is the nematode C. elegans, thanks to its short lifespan of two to three weeks, the ability to easily perform genetic manipulations or suppress gene activity with RNA interference, and other factors.[48] Most known mutations and RNA interference targets that extend lifespan were first discovered in C. elegans.[49]

Factors that are proposed to influence biological ageing[50] fall into two main categories, programmed and damage-related. Programmed factors follow a biological timetable, perhaps a continuation of the one that regulates childhood growth and development. This regulation would depend on changes in gene expression that affect the systems responsible for maintenance, repair and defence responses. Damage-related factors include internal and environmental assaults to living organisms that induce cumulative damage at various levels.[51]

There are three main metabolic pathways which can influence the rate of ageing; among them are the mTOR and insulin/IGF-1 signalling pathways discussed below.

It is likely that most of these pathways affect ageing separately, because targeting them simultaneously leads to additive increases in lifespan.[53]

The rate of ageing varies substantially across different species, and this, to a large extent, is genetically based. For example, numerous perennial plants ranging from strawberries and potatoes to willow trees typically produce clones of themselves by vegetative reproduction and are thus potentially immortal, while annual plants such as wheat and watermelons die each year and reproduce by sexual reproduction. In 2008 it was discovered that inactivation of only two genes in the annual plant Arabidopsis thaliana leads to its conversion into a potentially immortal perennial plant.[54]

Clonal immortality apart, there are certain species whose individual lifespans stand out among Earth's life-forms, including the bristlecone pine at 5,062 years[55] (however Hayflick states that the bristlecone pine has no cells older than 30 years), invertebrates like the hard clam (known as quahog in New England) at 508 years,[56] the Greenland shark at 400 years,[57] fish like the sturgeon and the rockfish, and the sea anemone[58] and lobster.[59][60] Such organisms are sometimes said to exhibit negligible senescence.[61] The genetic aspect has also been demonstrated in studies of human centenarians.

In laboratory settings, researchers have demonstrated that selected alterations in specific genes can extend lifespan quite substantially in yeast and roundworms, less so in fruit flies and less again in mice. Some of the targeted genes have homologues across species and in some cases have been associated with human longevity.[62]

Caloric restriction substantially affects lifespan in many animals, including the ability to delay or prevent many age-related diseases.[103] Typically, this involves a caloric intake of 60–70% of what an ad libitum animal would consume, while still maintaining proper nutrient intake.[103] In rodents, this has been shown to increase lifespan by up to 50%;[104] similar effects occur for yeast and Drosophila.[103] No lifespan data exist for humans on a calorie-restricted diet,[76] but several reports support protection from age-related diseases.[105][106] Two major ongoing studies on rhesus monkeys initially revealed disparate results; while one study, by the University of Wisconsin, showed that caloric restriction does extend lifespan,[107] the second study, by the National Institute on Ageing (NIA), found no effects of caloric restriction on longevity.[108] Both studies nevertheless showed improvement in a number of health parameters. Notwithstanding the similarly low calorie intake, the diet composition differed between the two studies (notably a high sucrose content in the Wisconsin study), and the monkeys had different origins (India, China), initially suggesting that genetics and dietary composition, not merely a decrease in calories, are factors in longevity.[76] However, in a comparative analysis in 2014, the Wisconsin researchers found that the allegedly non-starved NIA control monkeys were in fact moderately underweight when compared with other monkey populations, and argued that this was due to the NIA's apportioned feeding protocol, in contrast to Wisconsin's truly unrestricted ad libitum feeding protocol.[109] They concluded that moderate calorie restriction, rather than extreme calorie restriction, is sufficient to produce the observed health and longevity benefits in the studied rhesus monkeys.[110]

In his book How and Why We Age, Hayflick says that caloric restriction may not be effective in humans, citing data from the Baltimore Longitudinal Study of Aging which shows that being thin does not favour longevity.[need quotation to verify][111] Similarly, it is sometimes claimed that moderate obesity in later life may improve survival, but newer research has identified confounding factors such as weight loss due to terminal disease. Once these factors are accounted for, the optimal body weight above age 65 corresponds to a leaner body mass index of 23 to 27.[112]

Alternatively, the benefits of dietary restriction can also be obtained by changing the macronutrient profile to reduce protein intake without any change in calorie level, resulting in similar increases in longevity.[113][114] Dietary protein restriction not only inhibits mTOR activity but also IGF-1, two mechanisms implicated in ageing.[74] Specifically, reducing leucine intake is sufficient to inhibit mTOR activity, achievable through reducing animal food consumption.[115][116]

The Mediterranean diet is credited with lowering the risk of heart disease and early death.[117][118] The major contributors to mortality risk reduction appear to be a higher consumption of vegetables, fish, fruits, nuts and monounsaturated fatty acids, i.e., olive oil.[119]

The amount of sleep has an impact on mortality. People who live the longest report sleeping for six to seven hours each night.[120][121] Lack of sleep (<5 hours) more than doubles the risk of death from cardiovascular disease, but too much sleep (>9 hours) is associated with a doubling of the risk of death, though not primarily from cardiovascular disease.[122] Sleeping more than 7 to 8 hours per day has been consistently associated with increased mortality, though the cause is probably other factors such as depression and socioeconomic status, which would correlate statistically.[123] Sleep monitoring of hunter-gatherer tribes from Africa and from South America has shown similar sleep patterns across continents: their average sleeping duration is 6.4 hours (with a summer/winter difference of 1 hour), afternoon naps (siestas) are uncommon, and insomnia is very rare (tenfold less than in industrial societies).[124]

Physical exercise may increase life expectancy.[125] People who participate in moderate to high levels of physical exercise have a lower mortality rate compared to individuals who are not physically active.[126] Moderate levels of exercise have been correlated with preventing ageing and improving quality of life by reducing inflammatory potential.[127] The majority of the benefits from exercise are achieved with around 3500 metabolic equivalent (MET) minutes per week.[128] For example, climbing stairs for 10 minutes, vacuuming for 15 minutes, gardening for 20 minutes, running for 20 minutes, and walking or bicycling for 25 minutes on a daily basis would together achieve about 3000 MET minutes a week.[128]
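
To make the MET-minute arithmetic concrete, the short sketch below totals the example routine using assumed MET values for each activity (the source does not state which values it used, so these are illustrative guesses); with these assumptions the routine works out to roughly 3,000 MET minutes per week.

```python
# Weekly MET-minute total for the example daily routine above.
# MET values are assumed for illustration, not taken from the cited source.
activities = {                      # activity: (assumed METs, minutes per day)
    "stair climbing":     (8.0, 10),
    "vacuuming":          (3.3, 15),
    "gardening":          (3.5, 20),
    "running":            (7.0, 20),
    "walking or cycling": (4.0, 25),
}

daily = sum(mets * minutes for mets, minutes in activities.values())
weekly = 7 * daily
print(f"daily: {daily:.0f} MET-min, weekly: {weekly:.0f} MET-min")  # ~440 and ~3080
```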

Avoidance of chronic stress (as opposed to acute stress) is associated with a slower loss of telomeres in most but not all studies,[129][130] and with decreased cortisol levels. A chronically high cortisol level compromises the immune system, causes cardiac damage and atherosclerosis, and is associated with facial ageing; the latter in turn is a marker for increased morbidity and mortality.[131][132] Stress can be countered by social connection, spirituality, and (for men more clearly than for women) married life, all of which are associated with longevity.[133][134][135]

The following drugs and interventions have been shown to retard or reverse the biological effects of ageing in animal models, but none has yet been proven to do so in humans.

Evidence in both animals and humans suggests that resveratrol may be a caloric restriction mimetic.[136]

As of 2015, metformin was under study for its potential effect on slowing ageing in the worm C. elegans and the cricket.[137] Its effect on otherwise healthy humans is unknown.[137]

Rapamycin was first shown to extend lifespan in eukaryotes in 2006 by Powers et al. who showed a dose-responsive effect of rapamycin on lifespan extension in yeast cells.[138] In a 2009 study, the lifespans of mice fed rapamycin were increased between 28 and 38% from the beginning of treatment, or 9 to 14% in total increased maximum lifespan. Of particular note, the treatment began in mice aged 20 months, the equivalent of 60 human years.[139] Rapamycin has subsequently been shown to extend mouse lifespan in several separate experiments,[140][141] and is now being tested for this purpose in nonhuman primates (the marmoset monkey).[142]

Cancer geneticist Ronald A. DePinho and his colleagues published research in mice where telomerase activity was first genetically removed. Then, after the mice had prematurely aged, they restored telomerase activity by reactivating the telomerase gene. As a result, the mice were rejuvenated: Shrivelled testes grew back to normal and the animals regained their fertility. Other organs, such as the spleen, liver, intestines and brain, recuperated from their degenerated state. "[The finding] offers the possibility that normal human ageing could be slowed by reawakening the enzyme in cells where it has stopped working" says Ronald DePinho. However, activating telomerase in humans could potentially encourage the growth of tumours.[143]

Most known genetic interventions in C. elegans increase lifespan by 1.5 to 2.5-fold. As of 2009, the record for lifespan extension in C. elegans is a single-gene mutation which increases adult survival by tenfold.[49] The strong conservation of some of the mechanisms of ageing discovered in model organisms implies that they may be useful in the enhancement of human survival. However, the benefits may not be proportional; longevity gains are typically greater in C. elegans than in fruit flies, and greater in fruit flies than in mammals. One explanation for this is that mammals, being much longer-lived, already have many traits which promote lifespan.[49]

Some research effort is directed toward slowing ageing and extending healthy lifespan.[144][145][146]

The US National Institute on Aging currently funds an intervention testing programme, whereby investigators nominate compounds (based on specific molecular ageing theories) to be evaluated with respect to their effects on lifespan and age-related biomarkers in outbred mice.[147] Previous age-related testing in mammals has proved largely irreproducible because of small numbers of animals and lax mouse husbandry conditions.[citation needed] The intervention testing programme aims to address this by conducting parallel experiments at three internationally recognised mouse ageing centres: the Barshop Institute at UTHSCSA, the University of Michigan at Ann Arbor, and the Jackson Laboratory.

Several companies and organisations, such as Google's Calico, Human Longevity (founded by Craig Venter), Gero,[148] the SENS Research Foundation, and Science for Life Extension in Russia,[149] have declared stopping or delaying ageing as their goal.

Prizes for extending lifespan and slowing ageing in mammals exist. The Methuselah Foundation offers the Mprize. Recently, the $1 Million Palo Alto Longevity Prize was launched. It is a research incentive prize to encourage teams from all over the world to compete in an all-out effort to "hack the code" that regulates our health and lifespan. It was founded by Joon Yun.[150][151][152][153][154]

Different cultures express age in different ways. The age of an adult human is commonly measured in whole years since the day of birth. Arbitrary divisions set to mark periods of life may include: juvenile (via infancy, childhood, preadolescence, adolescence), early adulthood, middle adulthood, and late adulthood. More casual terms may include "teenagers," "tweens," "twentysomething", "thirtysomething", etc. as well as "vicenarian", "tricenarian", "quadragenarian", etc.

Most legal systems define a specific age for when an individual is allowed or obliged to do particular activities. These age specifications include voting age, drinking age, age of consent, age of majority, age of criminal responsibility, marriageable age, age of candidacy, and mandatory retirement age. Admission to a movie for instance, may depend on age according to a motion picture rating system. A bus fare might be discounted for the young or old. Each nation, government and non-governmental organisation has different ways of classifying age. In other words, chronological ageing may be distinguished from "social ageing" (cultural age-expectations of how people should act as they grow older) and "biological ageing" (an organism's physical state as it ages).[155]

A UNFPA report about ageing in the 21st century highlighted the need to "Develop a new rights-based culture of ageing and a change of mindset and societal attitudes towards ageing and older persons, from welfare recipients to active, contributing members of society."[156] UNFPA said that this "requires, among others, working towards the development of international human rights instruments and their translation into national laws and regulations and affirmative measures that challenge age discrimination and recognise older people as autonomous subjects."[156] Older persons make contributions to society, including caregiving and volunteering. For example, "A study of Bolivian migrants who [had] moved to Spain found that 69% left their children at home, usually with grandparents. In rural China, grandparents care for 38% of children aged under five whose parents have gone to work in cities."[156]

Population ageing is the increase in the number and proportion of older people in society. Population ageing has three possible causes: migration, longer life expectancy (decreased death rate) and decreased birth rate. Ageing has a significant impact on society. Young people tend to have fewer legal privileges (if they are below the age of majority), they are more likely to push for political and social change, to develop and adopt new technologies, and to need education. Older people have different requirements from society and government, and frequently have differing values as well, such as for property and pension rights.[157]

In the 21st century, one of the most significant population trends is ageing.[158] Currently, over 11% of the world's population are people aged 60 and older, and the United Nations Population Fund (UNFPA) estimates that by 2050 that number will rise to approximately 22%.[156] Ageing has occurred due to development which has enabled better nutrition, sanitation, health care, education and economic well-being. Consequently, fertility rates have continued to decline and life expectancy has risen. Life expectancy at birth is now over 80 in 33 countries. Ageing is a "global phenomenon" that is occurring fastest in developing countries, including those with large youth populations, and poses social and economic challenges which can be overcome with "the right set of policies to equip individuals, families and societies to address these challenges and to reap its benefits."[159]

As life expectancy rises and birth rates decline in developed countries, the median age rises accordingly. According to the United Nations, this process is taking place in nearly every country in the world.[160] A rising median age can have significant social and economic implications, as the workforce gets progressively older and the number of older workers and retirees grows relative to the number of young workers. Older people generally incur more health-related costs than do younger people in the workplace and can also cost more in workers' compensation and pension liabilities.[161] In most developed countries an older workforce is somewhat inevitable. In the United States, for instance, the Bureau of Labor Statistics estimates that one in four American workers will be 55 or older by 2020.[161]

Among the most urgent concerns of older persons worldwide is income security. This poses challenges for governments with ageing populations to ensure that investments in pension systems continue in order to provide economic independence and reduce poverty in old age. These challenges vary for developing and developed countries. UNFPA stated that, "Sustainability of these systems is of particular concern, particularly in developed countries, while social protection and old-age pension coverage remain a challenge for developing countries, where a large proportion of the labour force is found in the informal sector."[156]

The global economic crisis has increased financial pressure to ensure economic security and access to health care in old age. In order to alleviate this pressure, "social protection floors must be implemented in order to guarantee income security and access to essential health and social services for all older persons and provide a safety net that contributes to the postponement of disability and prevention of impoverishment in old age."[156]

It has been argued that population ageing has undermined economic development.[162] Evidence suggests that pensions, while making a difference to the well-being of older persons, also benefit entire families especially in times of crisis when there may be a shortage or loss of employment within households. A study by the Australian Government in 2003 estimated that "women between the ages of 65 and 74 years contribute A$16 billion per year in unpaid caregiving and voluntary work. Similarly, men in the same age group contributed A$10 billion per year."[156]

Due to the increasing share of the elderly in the population, health care expenditures will continue to grow relative to the economy in coming decades. This has been considered a negative phenomenon, and effective strategies such as enhancing labour productivity should be considered to deal with the negative consequences of ageing.[163]

In the field of sociology and mental health, ageing is seen from five different views: ageing as maturity, ageing as decline, ageing as a life-cycle event, ageing as generation, and ageing as survival.[164] Positive correlates with ageing often include economics, employment, marriage, children, education, and sense of control, as well as many others. The social science of ageing includes disengagement theory, activity theory, selectivity theory, and continuity theory. Retirement, a common transition faced by the elderly, may have both positive and negative consequences.[165] As cyborgs are currently on the rise, some theorists argue there is a need to develop new definitions of ageing; for instance, a bio-techno-social definition of ageing has been suggested.[166]

With age, inevitable biological changes occur that increase the risk of illness and disability. UNFPA states:[159]

"A life-cycle approach to health care, one that starts early, continues through the reproductive years and lasts into old age, is essential for the physical and emotional well-being of older persons, and, indeed, all people. Public policies and programmes should additionally address the needs of older impoverished people who cannot afford health care."

Many societies in Western Europe and Japan have ageing populations. While the effects on society are complex, there is a concern about the impact on health care demand. The large number of suggestions in the literature for specific interventions to cope with the expected increase in demand for long-term care in ageing societies can be organised under four headings: improve system performance; redesign service delivery; support informal caregivers; and shift demographic parameters.[167]

However, the annual growth in national health spending is not mainly due to increasing demand from ageing populations, but rather has been driven by rising incomes, costly new medical technology, a shortage of health care workers and informational asymmetries between providers and patients.[168] A number of health problems become more prevalent as people get older. These include mental health problems as well as physical health problems, especially dementia.

It has been estimated that population ageing only explains 0.2 percentage points of the annual growth rate in medical spending of 4.3% since 1970. In addition, certain reforms to the Medicare system in the United States decreased elderly spending on home health care by 12.5% per year between 1996 and 2000.[169]

Positive self-perception of health has been correlated with higher well-being and reduced mortality in the elderly.[170][171] Various reasons have been proposed for this association; people who are objectively healthy may naturally rate their health better than that of their ill counterparts, though this link has been observed even in studies which have controlled for socioeconomic status, psychological functioning and health status.[172] This finding is generally stronger for men than women,[171] though this relationship is not universal across all studies and may only be true in some circumstances.[172]

As people age, subjective health remains relatively stable, even though objective health worsens.[173] In fact, perceived health improves with age when objective health is controlled in the equation.[174] This phenomenon is known as the "paradox of ageing." This may be a result of social comparison;[175] for instance, the older people get, the more they may consider themselves in better health than their same-aged peers.[176] Elderly people often associate their functional and physical decline with the normal ageing process.[177][178]

The concept of successful ageing can be traced back to the 1950s and was popularised in the 1980s. Traditional definitions of successful ageing have emphasised absence of physical and cognitive disabilities.[179] In their 1987 article, Rowe and Kahn characterised successful ageing as involving three components: a) freedom from disease and disability, b) high cognitive and physical functioning, and c) social and productive engagement.[180]

The ancient Greek dramatist Euripides (5th century BC) describes the multiply-headed mythological monster Hydra as having a regenerative capacity which makes it immortal, which is the historical background to the name of the biological genus Hydra. The Book of Job (c. 6th century BC) describes human lifespan as inherently limited and makes a comparison with the innate immortality that a felled tree may have when undergoing vegetative regeneration.[181]

Read the original here:
Ageing - Wikipedia

Read More...

Ashkenazi Jews – Wikipedia

December 7th, 2016 2:43 pm

Ashkenazi Jews (Y'hudey Ashkenaz in Ashkenazi Hebrew)

Total population: 10[1]–11.2[2] million

Regions with significant populations:

- United States: 5–6 million[3]
- Israel: 2.8 million[1][4]
- Russia: 194,000–500,000
- Argentina: 300,000
- United Kingdom: 260,000
- Canada: 240,000
- France: 200,000
- Germany: 200,000
- Ukraine: 150,000
- Australia: 120,000
- South Africa: 80,000
- Belarus: 80,000
- Hungary: 75,000
- Chile: 70,000
- Belgium: 30,000
- Brazil: 30,000
- Netherlands: 30,000
- Moldova: 30,000
- Poland: 25,000
- Mexico: 18,500
- Sweden: 18,000
- Latvia: 10,000
- Romania: 10,000
- Austria: 9,000
- New Zealand: 5,000
- Azerbaijan: 4,300
- Lithuania: 4,000
- Czech Republic: 3,000
- Slovakia: 3,000
- Estonia: 1,000

Languages: Historical: Yiddish. Modern: local languages, primarily English, Hebrew, Russian.

Religion: Judaism; some secular or irreligious.

Related ethnic groups: Sephardi Jews, Mizrahi Jews, Samaritans,[5][6][7] Kurds,[7] other Levantines (Druze, Assyrians,[5][6] Arabs[5][6][8][9]), Mediterranean groups[10][11][12][13][14]

Ashkenazi Jews, also known as Ashkenazic Jews or simply Ashkenazim (singular: Ashkenazi; also Y'hudey Ashkenaz),[15] are a Jewish diaspora population who coalesced as a distinct community in the Holy Roman Empire around the end of the first millennium.[16] The traditional diaspora language of Ashkenazi Jews is Yiddish (which incorporates several dialects), while until recently Hebrew was only used as a sacred language.

The Ashkenazim settled and established communities throughout Central and Eastern Europe, which was their primary region of concentration and residence from the Middle Ages until recent times. They subsequently evolved their own distinctive culture and diasporic identities.[17] Throughout their time in Europe, the Ashkenazim have made many important contributions to philosophy, scholarship, literature, art, music and science.[18][19][20][21]

In the late Middle Ages, the center of gravity of the Ashkenazi population shifted steadily eastward,[22] moving out of the Holy Roman Empire into the Pale of Settlement (comprising parts of present-day Belarus, Latvia, Lithuania, Moldova, Poland, Russia, and Ukraine).[23][24] In the course of the late 18th and 19th centuries, those Jews who remained in or returned to the German lands experienced a cultural reorientation; under the influence of the Haskalah and the struggle for emancipation, as well as the intellectual and cultural ferment in urban centers, they gradually abandoned the use of Yiddish, while developing new forms of Jewish religious life and cultural identity.[25]

The genocidal impact of the Holocaust (the mass murder of approximately six million Jews during World War II) devastated the Ashkenazim and their culture, affecting almost every Jewish family.[26][27] It is estimated that in the 11th century Ashkenazi Jews composed only three percent of the world's Jewish population, while at their peak in 1931 they accounted for 92 percent of the world's Jews. Immediately prior to the Holocaust, the number of Jews in the world stood at approximately 16.7 million.[28] Statistical figures vary for the contemporary demography of Ashkenazi Jews, oscillating between 10 million[1] and 11.2 million.[2] Sergio DellaPergola, in a rough calculation of Sephardic and Mizrahi Jews, implies that Ashkenazim make up less than 74% of Jews worldwide.[29] Other estimates place Ashkenazi Jews as making up about 75% of Jews worldwide.[30]

Genetic studies on Ashkenazim, researching both their paternal and maternal lineages, suggest a significant proportion of West Asian ancestry. Those studies have arrived at diverging conclusions regarding both the degree and the sources of their European ancestry, and have generally focused on the extent of the European genetic origin observed in Ashkenazi maternal lineages.[31] Ashkenazi Jews are popularly contrasted with Sephardi Jews (also called Sephardim), who are descendants of Jews from the Iberian Peninsula (though there are other groups as well). There are some differences in how the two groups pronounce certain Hebrew letters and in points of ritual.

The name Ashkenazi derives from the biblical figure of Ashkenaz, the first son of Gomer, son of Japheth, son of Noah, and a Japhetic patriarch in the Table of Nations (Genesis 10). The name of Gomer has often been linked to the ethnonym Cimmerians. Biblical Ashkenaz is usually derived from Assyrian Aškuza (cuneiform Aškuzai/Iškuzai), a people who expelled the Cimmerians from the Armenian area of the Upper Euphrates,[32] whose name is usually associated with the name of the Scythians.[33][34] The intrusive n in the Biblical name is likely due to a scribal error confusing a waw with a nun.[34][35][36]

In Jeremiah 51:27, Ashkenaz figures as one of three kingdoms in the far north, the others being Minni and Ararat, perhaps corresponding to Urartu, called on by God to resist Babylon.[36][37]

In the Yoma tractate of the Babylonian Talmud the name Gomer is rendered as Germania, which elsewhere in rabbinical literature was identified with Germanikia in northwestern Syria, but later became associated with Germania. Ashkenaz is linked to Scandza/Scanzia, viewed as the cradle of Germanic tribes, as early as a 6th-century gloss to the Historia Ecclesiastica of Eusebius.[38] In the 10th-century History of Armenia of Yovhannes Drasxanakertc'i (1.15) Ashkenaz was associated with Armenia,[39] as it was occasionally in Jewish usage, where its denotation extended at times to Adiabene, Khazaria, Crimea and areas to the east.[40] His contemporary Saadia Gaon identified Ashkenaz with the Saquliba or Slavic territories,[41] and such usage covered also the lands of tribes neighboring the Slavs, and Eastern and Central Europe.[40] In modern times, Samuel Krauss identified the Biblical "Ashkenaz" with Khazaria.[42]

Sometime in the early medieval period, the Jews of central and eastern Europe came to be called by this term.[36] In conformity with the custom of designating areas of Jewish settlement with biblical names, Spain was denominated Sefarad (Obadiah 20), France was called Tsarefat (1 Kings 17:9), and Bohemia was called the Land of Canaan.[43] By the high medieval period, Talmudic commentators like Rashi began to use Ashkenaz/Eretz Ashkenaz to designate Germany, earlier known as Loter,[36][38] where, especially in the Rhineland communities of Speyer, Worms and Mainz, the most important Jewish communities arose.[44] Rashi uses leshon Ashkenaz (Ashkenazi language) to describe German speech, and Byzantine and Syrian Jewish letters referred to the Crusaders as Ashkenazim.[38] Given the close links between the Jewish communities of France and Germany following the Carolingian unification, the term Ashkenazi came to refer to both the Jews of medieval Germany and France.[45]

Outside of their origins in ancient Israel, the history of Ashkenazim is shrouded in mystery,[46] and many theories have arisen speculating on their emergence as a distinct community of Jews.[47] The most well supported theory is the one that details a Jewish migration from Israel through what is now Italy and other parts of southern Europe.[48] The historical record attests to Jewish communities in southern Europe since pre-Christian times.[49] Many Jews were denied full Roman citizenship until 212 CE, when Emperor Caracalla granted all free peoples this privilege. Jews were required to pay a poll tax until the reign of Emperor Julian in 363. In the late Roman Empire, Jews were free to form networks of cultural and religious ties and enter into various local occupations. But, after Christianity became the official religion of Rome and Constantinople in 380, Jews were increasingly marginalized.

The history of Jews in Greece goes back to at least the Archaic Era of Greece, when the classical culture of Greece was undergoing a process of formalization after the Greek Dark Age. The Greek historian Herodotus knew of the Jews, whom he called "Palestinian Syrians",[citation needed] and listed them among the levied naval forces in service of the invading Persians. While Jewish monotheism was not deeply affected by Greek polytheism, the Greek way of living was attractive for many wealthier Jews.[50] The Synagogue in the Agora of Athens is dated to the period between 267 and 396 CE. The Stobi Synagogue in Macedonia was built on the ruins of a more ancient synagogue in the 4th century, while later in the 5th century the synagogue was transformed into a Christian basilica.[51] Hellenistic Judaism thrived in Antioch and Alexandria, and many of these Greek-speaking Jews would convert to Christianity.[52] Sporadic[53] epigraphic evidence from grave site excavations, particularly in Brigetio (Szőny), Aquincum (Óbuda), Intercisa (Dunaújváros), Triccinae (Sárvár), Savaria (Szombathely) and Sopianae (Pécs) in Hungary, and Osijek in Croatia, attests to the presence of Jews after the 2nd and 3rd centuries where Roman garrisons were established.[54] There was a sufficient number of Jews in Pannonia to form communities and build a synagogue. Jewish troops were among the Syrian soldiers transferred there, and replenished from the Middle East, after 175 CE. Jews and especially Syrians came from Antioch, Tarsus and Cappadocia. Others came from Italy and the Hellenized parts of the Roman Empire. The excavations suggest they first lived in isolated enclaves attached to Roman legion camps, and intermarried among other similar oriental families within the military orders of the region.[53] Raphael Patai states that later Roman writers remarked that they differed little in either customs, manner of writing, or names from the people among whom they dwelt; and it was especially difficult to differentiate Jews from the Syrians.[55][56] After Pannonia was ceded to the Huns in 433, the garrison populations were withdrawn to Italy, and only a few, enigmatic traces remain of a possible Jewish presence in the area some centuries later.[57]

No evidence has yet been found of a Jewish presence in antiquity in Germany beyond its Roman border, nor in Eastern Europe. In Gaul and Germany itself, with the possible exception of Trier and Cologne, the archeological evidence suggests at most a fleeting presence of very few Jews, primarily itinerant traders or artisans.[58] A substantial Jewish population emerged in northern Gaul by the Middle Ages,[59] but Jewish communities existed in 465 CE in Brittany, in 524 CE in Valence, and in 533 CE in Orléans.[60] Throughout this period and into the early Middle Ages, some Jews assimilated into the dominant Greek and Latin cultures, mostly through conversion to Christianity.[61][better source needed] King Dagobert I of the Franks expelled the Jews from his Merovingian kingdom in 629. Jews in former Roman territories faced new challenges as harsher anti-Jewish Church rulings were enforced.

Charlemagne's expansion of the Frankish empire around 800, including northern Italy and Rome, brought on a brief period of stability and unity in Francia. This created opportunities for Jewish merchants to settle again north of the Alps. Charlemagne granted the Jews freedoms similar to those once enjoyed under the Roman Empire. In addition, Jews from southern Italy, fleeing religious persecution, began to move into central Europe.[citation needed] Returning to Frankish lands, many Jewish merchants took up occupations in finance and commerce, including money lending, or usury. (Church legislation banned Christians from lending money in exchange for interest.) From Charlemagne's time to the present, Jewish life in northern Europe is well documented. By the 11th century, when Rashi of Troyes wrote his commentaries, Jews in what came to be known as "Ashkenaz" were known for their halakhic learning, and Talmudic studies. They were criticized by Sephardim and other Jewish scholars in Islamic lands for their lack of expertise in Jewish jurisprudence (dinim) and general ignorance of Hebrew linguistics and literature.[62] Yiddish emerged as a result of Judeo-Latin language contact with various High German vernaculars in the medieval period.[63] It is a Germanic language written with Hebrew letters, and heavily influenced by Hebrew and Aramaic, with some elements of Romance and later Slavic languages.[64]

Historical records show evidence of Jewish communities north of the Alps and Pyrenees as early as the 8th and 9th century. By the 11th century Jewish settlers, moving from southern European and Middle Eastern centers, appear to have begun to settle in the north, especially along the Rhine, often in response to new economic opportunities and at the invitation of local Christian rulers. Thus Baldwin V, Count of Flanders, invited Jacob ben Yekutiel and his fellow Jews to settle in his lands; and soon after the Norman Conquest of England, William the Conqueror likewise extended a welcome to continental Jews to take up residence there. Bishop Rüdiger Huzmann called on the Jews of Mainz to relocate to Speyer. In all of these decisions, the idea that Jews had the know-how and capacity to jump-start the economy, improve revenues, and enlarge trade seems to have played a prominent role.[65] Typically Jews relocated close to the markets and churches in town centres, where, though they came under the authority of both royal and ecclesiastical powers, they were accorded administrative autonomy.[65]

In the 11th century, both Rabbinic Judaism and the culture of the Babylonian Talmud that underlies it became established in southern Italy and then spread north to Ashkenaz.[66]

The Jewish communities along the Rhine river from Cologne to Mainz were decimated in the Rhineland massacres of 1096. With the onset of the Crusades in 1095, and the expulsions from England (1290), France (1394), and parts of Germany (15th century), Jewish migration pushed eastward into Poland (10th century), Lithuania (10th century), and Russia (12th century). Over this period of several hundred years, some have suggested, Jewish economic activity was focused on trade, business management, and financial services, due to several presumed factors: Christian European prohibitions restricting certain activities by Jews, preventing certain financial activities (such as "usurious" loans)[67] between Christians, high rates of literacy, near universal male education, and ability of merchants to rely upon and trust family members living in different regions and countries.

By the 15th century, the Ashkenazi Jewish communities in Poland were the largest Jewish communities of the Diaspora.[68] This area, which eventually fell under the domination of Russia, Austria, and Prussia (Germany), would remain the main center of Ashkenazi Jewry until the Holocaust.

The answer to why there was so little assimilation of Jews in central and eastern Europe for so long would seem to lie in part in the probability that the alien surroundings in central and eastern Europe were not conducive to assimilation, though contempt did not prevent some of it. Furthermore, Jews lived almost exclusively in shtetls, maintained a strong system of education for males, heeded rabbinic leadership, and scorned the lifestyle of their neighbors; and all of these tendencies increased with every outbreak of antisemitism.[69]

In the first half of the 11th century, Hai Gaon refers to questions that had been addressed to him from Ashkenaz, by which he undoubtedly means Germany. Rashi in the latter half of the 11th century refers to both the language of Ashkenaz[70] and the country of Ashkenaz.[71] During the 12th century, the word appears quite frequently. In the Mahzor Vitry, the kingdom of Ashkenaz is referred to chiefly in regard to the ritual of the synagogue there, but occasionally also with regard to certain other observances.[72]

In the literature of the 13th century, references to the land and the language of Ashkenaz often occur. Examples include Solomon ben Aderet's Responsa (vol. i., No. 395); the Responsa of Asher ben Jehiel (pp. 4, 6); his Halakot (Berakot i. 12, ed. Wilna, p. 10); the work of his son Jacob ben Asher, Tur Orach Chayim (chapter 59); the Responsa of Isaac ben Sheshet (numbers 193, 268, 270).

In the Midrash compilation, Genesis Rabbah, Rabbi Berechiah mentions Ashkenaz, Riphath, and Togarmah as German tribes or as German lands. It may correspond to a Greek word that may have existed in the Greek dialect of the Jews in Syria Palaestina, or the text is corrupted from "Germanica." This view of Berechiah is based on the Talmud (Yoma 10a; Jerusalem Talmud Megillah 71b), where Gomer, the father of Ashkenaz, is translated by Germamia, which evidently stands for Germany, and which was suggested by the similarity of the sound.

In later times, the word Ashkenaz is used to designate southern and western Germany, the ritual of which sections differs somewhat from that of eastern Germany and Poland. Thus the prayer-book of Isaiah Horowitz, and many others, give the piyyutim according to the Minhag of Ashkenaz and Poland.

According to 16th-century mystic Rabbi Elijah of Chelm, Ashkenazi Jews lived in Jerusalem during the 11th century. The story is told that a German-speaking Jew saved the life of a young German man surnamed Dolberger. So when the knights of the First Crusade came to lay siege to Jerusalem, one of Dolberger's family members who was among them rescued Jews in Palestine and carried them back to Worms to repay the favor.[73] Further evidence of German communities in the holy city comes in the form of halakhic questions sent from Germany to Jerusalem during the second half of the 11th century.[74]

Material relating to the history of German Jews has been preserved in the communal accounts of certain communities on the Rhine, a Memorbuch, and a Liebesbrief, documents that are now part of the Sassoon Collection.[75] Heinrich Graetz has also added to the history of German Jewry in modern times in the abstract of his seminal work, History of the Jews, which he entitled "Volksthümliche Geschichte der Juden."

In an essay on Sephardi Jewry, Daniel Elazar at the Jerusalem Center for Public Affairs[76] summarized the demographic history of Ashkenazi Jews in the last thousand years. At the end of the 11th century, 97% of world Jewry was Sephardic and 3% Ashkenazi. By the end of the 16th century, the "Treaty on the Redemption of Captives" by Gracián of the Mother of God, a Mercy priest who was imprisoned by the Turks, cites a Tunisian Jew named "Simon Escanasi", taken captive on arriving at Gaeta, who aided others with money. In the mid-17th century, "Sephardim still outnumbered Ashkenazim three to two", but by the end of the 18th century, "Ashkenazim outnumbered Sephardim three to two, the result of improved living conditions in Christian Europe versus the Ottoman Muslim world."[76] By 1931, Ashkenazi Jews accounted for nearly 92% of world Jewry.[76] These figures reflect sheer demography, showing the migration patterns of Jews from Southern and Western Europe to Central and Eastern Europe.

In 1740 a family from Lithuania became the first Ashkenazi Jews to settle in the Jewish Quarter of Jerusalem.[77]

In the generations after emigration from the west, Jewish communities in places like Poland, Russia, and Belarus enjoyed a comparatively stable socio-political environment. A thriving publishing industry and the printing of hundreds of biblical commentaries precipitated the development of the Hasidic movement as well as major Jewish academic centers.[78] After two centuries of comparative tolerance in the new nations, massive westward emigration occurred in the 19th and 20th centuries in response to pogroms in the east and the economic opportunities offered in other parts of the world. Ashkenazi Jews have made up the majority of the American Jewish community since 1750.[68]

In the context of the European Enlightenment, Jewish emancipation began in 18th century France and spread throughout Western and Central Europe. Disabilities that had limited the rights of Jews since the Middle Ages were abolished, including the requirements to wear distinctive clothing, pay special taxes, and live in ghettos isolated from non-Jewish communities, and the prohibitions on certain professions. Laws were passed to integrate Jews into their host countries, forcing Ashkenazi Jews to adopt family names (they had formerly used patronymics). Newfound inclusion into public life led to cultural growth in the Haskalah, or Jewish Enlightenment, with its goal of integrating modern European values into Jewish life.[79] As a reaction to increasing antisemitism and assimilation following the emancipation, Zionism was developed in central Europe.[80] Other Jews, particularly those in the Pale of Settlement, turned to socialism. These tendencies would be united in Labor Zionism, the founding ideology of the State of Israel.

Of the estimated 8.8 million Jews living in Europe at the beginning of World War II, the majority of whom were Ashkenazi, about 6 million (more than two-thirds) were systematically murdered in the Holocaust. These included 3 million of 3.3 million Polish Jews (91%); 900,000 of 1.5 million in Ukraine (60%); 50–90% of the Jews of other Slavic nations, Germany, Hungary, and the Baltic states; and over 25% of the Jews in France. Sephardi communities suffered similar depletions in a few countries, including Greece, the Netherlands and the former Yugoslavia.[81] As the large majority of the victims were Ashkenazi Jews, their percentage dropped from nearly 92% of world Jewry in 1931 to nearly 80% of world Jewry today.[76] The Holocaust also effectively put an end to the dynamic development of the Yiddish language in the previous decades, as the vast majority of the Jewish victims of the Holocaust, around 5 million, were Yiddish speakers.[82] Many of the surviving Ashkenazi Jews emigrated to countries such as Israel, Canada, Argentina, Australia, and the United States after the war.

Following the Holocaust, some sources place Ashkenazim today as making up approximately 83–85 percent of Jews worldwide,[83][84][85][86] while Sergio DellaPergola, in a rough calculation of Sephardic and Mizrahi Jews, implies that Ashkenazim make up a notably lower figure, less than 74%.[29] Other estimates place Ashkenazi Jews as making up about 75% of Jews worldwide.[30] Ashkenazi Jews constitute around 35–36% of Israel's total population, or 47.5% of Israel's Jewish population.[87][88]

In Israel, the term Ashkenazi is now used in a manner unrelated to its original meaning, often applied to all Jews who settled in Europe and sometimes including those whose ethnic background is actually Sephardic. Jews of any non-Ashkenazi background, including Mizrahi, Yemenite, Kurdish and others who have no connection with the Iberian Peninsula, have similarly come to be lumped together as Sephardic. Jews of mixed background are increasingly common, partly because of intermarriage between Ashkenazi and non-Ashkenazi, and partly because many do not see such historic markers as relevant to their life experiences as Jews.[89]

Religious Ashkenazi Jews living in Israel are obliged to follow the authority of the chief Ashkenazi rabbi in halakhic matters. In this respect, a religiously Ashkenazi Jew is an Israeli who is more likely to support certain religious interests in Israel, including certain political parties. These political parties result from the fact that a portion of the Israeli electorate votes for Jewish religious parties; although the electoral map changes from one election to another, there are generally several small parties associated with the interests of religious Ashkenazi Jews. The role of religious parties, including small religious parties that play important roles as coalition members, results in turn from Israel's composition as a complex society in which competing social, economic, and religious interests stand for election to the Knesset, a unicameral legislature with 120 seats.[90]

People of Ashkenazi descent constitute around 47.5% of Israeli Jews (and therefore 35–36% of Israelis).[4] They have played a prominent role in the economy, media, and politics[91] of Israel since its founding. During the first decades of Israel as a state, strong cultural conflict occurred between Sephardic and Ashkenazi Jews (mainly east European Ashkenazim). The roots of this conflict, which still exists to a much smaller extent in present-day Israeli society, are chiefly attributed to the concept of the "melting pot".[92] That is to say, all Jewish immigrants who arrived in Israel were strongly encouraged to "melt down" their own particular exilic identities within the general social "pot" in order to become Israeli.[93]

A succession of Ashkenazi Chief Rabbis has served in the Yishuv and, later, the State of Israel.

Religious Jews have Minhagim, customs, in addition to Halakha, or religious law, and different interpretations of law. Different groups of religious Jews in different geographic areas historically adopted different customs and interpretations. On certain issues, Orthodox Jews are required to follow the customs of their ancestors, and do not believe they have the option of picking and choosing. For this reason, observant Jews at times find it important for religious reasons to ascertain who their household's religious ancestors are in order to know what customs their household should follow. These times include, for example, when two Jews of different ethnic background marry, when a non-Jew converts to Judaism and determines what customs to follow for the first time, or when a lapsed or less observant Jew returns to traditional Judaism and must determine what was done in his or her family's past. In this sense, "Ashkenazic" refers both to a family ancestry and to a body of customs binding on Jews of that ancestry. Reform Judaism, which does not necessarily follow those minhagim, did nonetheless originate among Ashkenazi Jews.[94]

In a religious sense, an Ashkenazi Jew is any Jew whose family tradition and ritual follows Ashkenazi practice. When the Ashkenazi community first began to develop in the Early Middle Ages, the centers of Jewish religious authority were in the Islamic world, at Baghdad and in Islamic Spain. Ashkenaz (Germany) was so distant geographically that it developed a minhag of its own. Ashkenazi Hebrew came to be pronounced in ways distinct from other forms of Hebrew.[95]

In this respect, the counterpart of Ashkenazi is Sephardic, since most non-Ashkenazi Orthodox Jews follow Sephardic rabbinical authorities, whether or not they are ethnically Sephardic. By tradition, a Sephardic or Mizrahi woman who marries into an Orthodox or Haredi Ashkenazi Jewish family raises her children to be Ashkenazi Jews; conversely an Ashkenazi woman who marries a Sephardi or Mizrahi man is expected to take on Sephardic practice and the children inherit a Sephardic identity, though in practice many families compromise. A convert generally follows the practice of the beth din that converted him or her. With the integration of Jews from around the world in Israel, North America, and other places, the religious definition of an Ashkenazi Jew is blurring, especially outside Orthodox Judaism.[96]

New developments in Judaism often transcend differences in religious practice between Ashkenazi and Sephardic Jews. In North American cities, social trends such as the chavurah movement, and the emergence of "post-denominational Judaism"[97][98] often bring together younger Jews of diverse ethnic backgrounds. In recent years, there has been increased interest in Kabbalah, which many Ashkenazi Jews study outside of the Yeshiva framework. Another trend is the new popularity of ecstatic worship in the Jewish Renewal movement and the Carlebach style minyan, both of which are nominally of Ashkenazi origin.[99]

Culturally, an Ashkenazi Jew can be identified by the concept of Yiddishkeit, which means "Jewishness" in the Yiddish language.[100] Yiddishkeit is specifically the Jewishness of Ashkenazi Jews.[101] Before the Haskalah and the emancipation of Jews in Europe, this meant the study of Torah and Talmud for men, and a family and communal life governed by the observance of Jewish Law for men and women. From the Rhineland to Riga to Romania, most Jews prayed in liturgical Ashkenazi Hebrew, and spoke Yiddish in their secular lives. But with modernization, Yiddishkeit now encompasses not just Orthodoxy and Hasidism, but a broad range of movements, ideologies, practices, and traditions in which Ashkenazi Jews have participated and somehow retained a sense of Jewishness. Although a far smaller number of Jews still speak Yiddish, Yiddishkeit can be identified in manners of speech, in styles of humor, in patterns of association. Broadly speaking, a Jew is one who associates culturally with Jews, supports Jewish institutions, reads Jewish books and periodicals, attends Jewish movies and theater, travels to Israel, visits historical synagogues, and so forth. It is a definition that applies to Jewish culture in general, and to Ashkenazi Yiddishkeit in particular.

As Ashkenazi Jews moved away from Europe, mostly in the form of aliyah to Israel or emigration to North America and other English-speaking areas such as South Africa, as well as to Europe (particularly France) and Latin America, the geographic isolation that gave rise to the Ashkenazim has given way to mixing with other cultures and with non-Ashkenazi Jews who, similarly, are no longer isolated in distinct geographic locales. Hebrew has replaced Yiddish as the primary Jewish language for many Ashkenazi Jews, although many Hasidic and Haredi groups continue to use Yiddish in daily life. (There are numerous Ashkenazi Jewish anglophones and Russian-speakers as well, although English and Russian are not originally Jewish languages.)

France's blended Jewish community is typical of the cultural recombination that is going on among Jews throughout the world. Although France expelled its original Jewish population in the Middle Ages, by the time of the French Revolution, there were two distinct Jewish populations. One consisted of Sephardic Jews, originally refugees from the Inquisition and concentrated in the southwest, while the other community was Ashkenazi, concentrated in formerly German Alsace, and mainly speaking a German dialect similar to Yiddish. (A third community of Provençal Jews living in Comtat Venaissin were technically outside France, and were later absorbed into the Sephardim.) The two communities were so separate and different that the National Assembly emancipated them separately in 1790 and 1791.[102]

But after emancipation, a sense of a unified French Jewry emerged, especially when France was wracked by the Dreyfus affair in the 1890s. In the 1920s and 1930s, Ashkenazi Jews from Europe arrived in large numbers as refugees from antisemitism, the Russian revolution, and the economic turmoil of the Great Depression. By the 1930s, Paris had a vibrant Yiddish culture, and many Jews were involved in diverse political movements. After the Vichy years and the Holocaust, the French Jewish population was augmented once again, first by Ashkenazi refugees from Central Europe, and later by Sephardi immigrants and refugees from North Africa, many of them francophone.

Then, in the 1990s, yet another Ashkenazi Jewish wave began to arrive from countries of the former Soviet Union and Central Europe. The result is a pluralistic Jewish community that still has some distinct elements of both Ashkenazi and Sephardic culture. But in France, it is becoming much more difficult to sort out the two, and a distinctly French Jewishness has emerged.[103]

In an ethnic sense, an Ashkenazi Jew is one whose ancestry can be traced to the Jews who settled in Central Europe. For roughly a thousand years, the Ashkenazim were a reproductively isolated population in Europe, despite living in many countries, with little inflow or outflow from migration, conversion, or intermarriage with other groups, including other Jews. Human geneticists have identified genetic variations that occur at high frequencies among Ashkenazi Jews but not in the general European population, whether for patrilineal markers (Y-chromosome haplotypes) or for matrilineal markers (mitotypes).[104] Since the middle of the 20th century, many Ashkenazi Jews have intermarried, both with members of other Jewish communities and with people of other nations and faiths.[105]

A 2006 study found Ashkenazi Jews to be a clear, homogeneous genetic subgroup. Strikingly, Ashkenazi Jews can be grouped in the same genetic cohort regardless of their place of origin: whether an Ashkenazi Jew's ancestors came from Poland, Russia, Hungary, Lithuania, or any other place with a historical Jewish population, they belong to the same ethnic group. The research demonstrates the endogamy of the Jewish population in Europe and lends further credence to the idea of Ashkenazi Jews as an ethnic group. Moreover, though intermarriage among Jews of Ashkenazi descent has become increasingly common, many Haredi Jews, particularly members of Hasidic communities, continue to marry exclusively fellow Ashkenazi Jews. This trend keeps Ashkenazi genes prevalent and also helps researchers further study the genes of Ashkenazi Jews with relative ease. It is noteworthy that these Haredi Jews often have extremely large families.[10]

The Halakhic practices of (Orthodox) Ashkenazi Jews may differ from those of Sephardi Jews, particularly in matters of custom. Differences are noted in the Shulkhan Arukh itself, in the gloss of Moses Isserles, and a number of well-known differences exist in practice.

The term Ashkenazi also refers to the nusach Ashkenaz (Hebrew, "liturgical tradition", or rite) used by Ashkenazi Jews in their Siddur (prayer book). A nusach is defined by a liturgical tradition's choice of prayers, order of prayers, text of prayers and melodies used in the singing of prayers. Two other major forms of nusach among Ashkenazic Jews are Nusach Sefard (not to be confused with the Sephardic ritual), which is the general Polish Hasidic nusach, and Nusach Ari, as used by Lubavitch Hasidim.

Several famous people have Ashkenazi as a surname, such as Vladimir Ashkenazy. However, most people with this surname hail from within Sephardic communities, particularly from the Syrian Jewish community. The Sephardic carriers of the surname would have some Ashkenazi ancestors, since the surname was adopted by families who were initially of Ashkenazic origins, moved to Sephardi countries, and joined those communities. Ashkenazi was eventually adopted as the family surname, having started off as a nickname imposed by their adopted communities. Some have shortened the name to Ash.

Relations between Ashkenazim and Sephardim have not always been warm. North African Sephardim and Berber Jews were often looked upon by Ashkenazim as second-class citizens during the first decade after the creation of Israel. This led to protest movements such as the Israeli Black Panthers, led by Saadia Marciano, a Moroccan Jew. Relations have since improved.[107] In some instances, Ashkenazi communities have accepted significant numbers of Sephardi newcomers, sometimes resulting in intermarriage.[108][109]

Ashkenazi Jews have a noted history of achievement in Western societies[110] in the fields of exact and social sciences, literature, finance, politics, media, and others. In those societies where they have been free to enter any profession, they have a record of high occupational achievement, entering professions and fields of commerce where higher education is required.[111] Ashkenazi Jews have won a large number of the Nobel awards.[112][113] While they make up about 2% of the U.S. population,[114] 27% of United States Nobel prize winners in the 20th century,[114] a quarter of Fields Medal winners,[115] 25% of ACM Turing Award winners,[114] half the world's chess champions,[114] including 8% of the top 100 world chess players,[116] and a quarter of Westinghouse Science Talent Search winners[115] have Ashkenazi Jewish ancestry.

Time magazine's person of the 20th century, Albert Einstein,[117] was an Ashkenazi Jew. According to a study performed by Cambridge University, 21% of Ivy League students, 25% of Turing Award winners, 23% of the wealthiest Americans, 38% of Oscar-winning film directors, and 29% of Oslo awardees are Ashkenazi Jews.[118]

Efforts to identify the origins of Ashkenazi Jews through DNA analysis began in the 1990s. Currently, there are three types of genetic origin testing: autosomal DNA (atDNA), mitochondrial DNA (mtDNA), and Y-chromosomal DNA (Y-DNA). Autosomal DNA is a mixture from an individual's entire ancestry; Y-DNA shows a male's lineage only along his strict paternal line; mtDNA shows any person's lineage only along the strict maternal line. Genome-wide association studies have also been employed to yield findings relevant to genetic origins.

Like most DNA studies of human migration patterns, the earliest studies on Ashkenazi Jews focused on the Y-DNA and mtDNA segments of the human genome. Both segments are unaffected by recombination (except for the ends of the Y chromosome, the pseudoautosomal regions known as PAR1 and PAR2), thus allowing direct maternal and paternal lineages to be traced.

These studies revealed that Ashkenazi Jews originate from an ancient (2000 BCE - 700 BCE) population of the Middle East who had spread to Europe.[119] Ashkenazic Jews display the homogeneity of a genetic bottleneck, meaning they descend from a larger population whose numbers were greatly reduced but recovered through a few founding individuals. Although the Jewish people in general were present across a wide geographical area as described, genetic research done by Gil Atzmon of the Longevity Genes Project at Albert Einstein College of Medicine suggests "that Ashkenazim branched off from other Jews around the time of the destruction of the First Temple, 2,500 years ago ... flourished during the Roman Empire but then went through a 'severe bottleneck' as they dispersed, reducing a population of several million to just 400 families who left Northern Italy around the year 1000 for Central and eventually Eastern Europe."[120]

Various studies have arrived at diverging conclusions regarding both the degree and the sources of the non-Levantine admixture in Ashkenazim,[31] particularly with respect to the extent of the non-Levantine genetic origin observed in Ashkenazi maternal lineages, which is in contrast to the predominant Levantine genetic origin observed in Ashkenazi paternal lineages. All studies nevertheless agree that genetic overlap with the Fertile Crescent exists in both lineages, albeit at differing rates. Collectively, Ashkenazi Jews are less genetically diverse than other Jewish ethnic divisions, due to their genetic bottleneck.[121]

The majority of genetic findings to date concerning Ashkenazi Jews conclude that the male line was founded by ancestors from the Middle East.[122][123][124] Others have found a similar genetic line among Greeks and Macedonians.[citation needed]

A study of haplotypes of the Y-chromosome, published in 2000, addressed the paternal origins of Ashkenazi Jews. Hammer et al.[125] found that the Y-chromosome of Ashkenazi and Sephardic Jews contained mutations that are also common among other Middle Eastern peoples, but uncommon in the autochthonous European population. This suggested that the male ancestors of the Ashkenazi Jews could be traced mostly to the Middle East. The proportion of male genetic admixture in Ashkenazi Jews amounts to less than 0.5% per generation over an estimated 80 generations, with "relatively minor contribution of European Y chromosomes to the Ashkenazim," and a total admixture estimate "very similar to Motulsky's average estimate of 12.5%." This supported the finding that "Diaspora Jews from Europe, Northwest Africa, and the Near East resemble each other more closely than they resemble their non-Jewish neighbors." "Past research found that 50–80 percent of DNA from the Ashkenazi Y chromosome, which is used to trace the male lineage, originated in the Near East," Richards said.
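
As an illustration of how these figures relate (this calculation is not taken from Hammer et al.; it assumes a constant admixture rate per generation, which real populations need not follow), the cumulative admixed fraction after G generations at rate m is 1 - (1 - m)^G:

```python
# Illustrative sketch only (not code from any cited study): relate a constant
# per-generation admixture rate m to the cumulative admixed fraction after
# G generations, under the simple model cumulative = 1 - (1 - m)**G.

def cumulative_admixture(m: float, generations: int) -> float:
    """Fraction of ancestry contributed by the admixing population."""
    return 1.0 - (1.0 - m) ** generations

def per_generation_rate(total: float, generations: int) -> float:
    """Constant per-generation rate implied by a cumulative admixture `total`."""
    return 1.0 - (1.0 - total) ** (1.0 / generations)

if __name__ == "__main__":
    G = 80  # the roughly 80 generations cited above
    # A cumulative European contribution of ~12.5% implies well under 0.5% per generation:
    print(f"{per_generation_rate(0.125, G):.3%} per generation")         # ~0.167%
    # An upper bound of 0.5% per generation would accumulate to roughly a third:
    print(f"{cumulative_admixture(0.005, G):.1%} after {G} generations")  # ~33.0%
```

Under this toy model, the quoted 12.5% total corresponds to roughly 0.17% of admixture per generation, comfortably below the 0.5% per-generation ceiling reported above.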

The population has subsequently spread out. Based on accounts such as those of Jewish historian Flavius Josephus, by the time of the destruction of the Second Temple in 70 CE, as many as six million Jews were already living in the Roman Empire, but outside Israel, mainly in Italy and Southern Europe. In contrast, only about 500,000 lived in Judea, said Ostrer, who was not involved in the new study.[126]

A 2001 study by Nebel et al. showed that both Ashkenazi and Sephardic Jewish populations share the same overall paternal Near Eastern ancestries. In comparison with data available from other relevant populations in the region, Jews were found to be more closely related to groups in the north of the Fertile Crescent. The authors also report on Eu 19 (R1a) chromosomes, which are very frequent in Central and Eastern Europeans (54–60%) and appear at elevated frequency (12.7%) in Ashkenazi Jews. They hypothesized that the differences among Ashkenazi Jews could reflect low-level gene flow from surrounding European populations or genetic drift during isolation.[127] A later 2005 study by Nebel et al. found a similar level of 11.5% of male Ashkenazim belonging to R1a1a (M17+), the dominant Y-chromosome haplogroup in Central and Eastern Europeans.[128]

Before 2006, geneticists had largely attributed the ethnogenesis of most of the world's Jewish populations, including Ashkenazi Jews, to Israelite Jewish male migrants from the Middle East and "the women from each local population whom they took as wives and converted to Judaism." Thus, in 2002, in line with this model of origin, David Goldstein, now of Duke University, reported that unlike male Ashkenazi lineages, the female lineages in Ashkenazi Jewish communities "did not seem to be Middle Eastern", and that each community had its own genetic pattern and even that "in some cases the mitochondrial DNA was closely related to that of the host community." In his view this suggested "that Jewish men had arrived from the Middle East, taken wives from the host population and converted them to Judaism, after which there was no further intermarriage with non-Jews."[104]

In 2006, a study by Behar et al.,[129] based on what was at that time high-resolution analysis of haplogroup K (mtDNA), suggested that about 40% of the current Ashkenazi population is descended matrilineally from just four women, or "founder lineages", that were "likely from a Hebrew/Levantine mtDNA pool" originating in the Middle East in the 1st and 2nd centuries CE. Additionally, Behar et al. suggested that the rest of Ashkenazi mtDNA originated from ~150 women, most of whom were also likely of Middle Eastern origin.[129] In reference specifically to Haplogroup K, they suggested that although it is common throughout western Eurasia, "the observed global pattern of distribution renders very unlikely the possibility that the four aforementioned founder lineages entered the Ashkenazi mtDNA pool via gene flow from a European host population".

In 2013, however, a study of Ashkenazi mitochondrial DNA by a team led by Martin B. Richards of the University of Huddersfield in England reached different conclusions, corroborating the pre-2006 origin hypothesis. Testing was performed on the full 16,600 DNA units composing mitochondrial DNA (the 2006 Behar study had only tested 1,000 units) in all their subjects, and the study found that the four main female Ashkenazi founders had descent lines that were established in Europe 10,000 to 20,000 years in the past,[130] while most of the remaining minor founders also have a deep European ancestry. The study states that the great majority of Ashkenazi maternal lineages were not brought from the Near East (i.e., they were non-Israelite), nor were they recruited in the Caucasus (i.e., they were non-Khazar), but instead they were assimilated within Europe, primarily of Italian and Old French origins. Richards summarized the findings on the female line as such: "[N]one [of the mtDNA] came from the North Caucasus, located along the border between Europe and Asia between the Black and Caspian seas. All of our presently available studies, including my own, should thoroughly debunk one of the most questionable, but still tenacious, hypotheses: that most Ashkenazi Jews can trace their roots to the mysterious Khazar Kingdom that flourished during the ninth century in the region between the Byzantine Empire and the Persian Empire."[126] The 2013 study estimated that 80 percent of Ashkenazi maternal ancestry comes from women indigenous to Europe, and only 8 percent from the Near East, while the origin of the remainder is undetermined.[12][130] According to the study these findings "point to a significant role for the conversion of women in the formation of Ashkenazi communities."[12][13][131][132][133][134] Karl Skorecki at Technion criticized the study for perceived flaws in phylogenetic analysis. "While Costa et al have re-opened the question of the maternal origins of Ashkenazi Jewry, the phylogenetic analysis in the manuscript does not 'settle' the question."[135]

A 2014 study by Fernández et al. found that Ashkenazi Jews display a frequency of haplogroup K in their maternal DNA suggesting an ancient Near Eastern origin, similar to the results of Behar. The authors stated that this observation clearly contradicts the results of the study led by Richards that suggested a European source for three exclusively Ashkenazi K lineages.[136]

In genetic epidemiology, a genome-wide association study (GWA study, or GWAS) is an examination of all or most of the genes (the genome) of different individuals of a particular species to see how much the genes vary from individual to individual. These techniques were originally designed for epidemiological uses, to identify genetic associations with observable traits.[137]
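
A minimal sketch of the core computation in such a study is shown below, using entirely hypothetical genotype and case/control data (it is not drawn from any study cited here): each SNP is tested for association with a binary trait using a chi-square test on allele counts, and a multiple-testing correction is applied because many SNPs are tested at once. The helper `allele_count_test` is illustrative, not a standard library function.

```python
# Minimal GWAS-style sketch on hypothetical data (not from any cited study).
# Genotypes are coded 0/1/2 = copies of the alternate allele; the trait is
# binary (case/control). Each SNP is tested with a chi-square test comparing
# allele counts in cases versus controls.
import numpy as np
from scipy.stats import chi2_contingency

def allele_count_test(genotypes: np.ndarray, is_case: np.ndarray) -> float:
    """Return the chi-square p-value for one SNP (genotypes coded 0/1/2)."""
    alt_case = genotypes[is_case].sum()
    ref_case = 2 * is_case.sum() - alt_case
    alt_ctrl = genotypes[~is_case].sum()
    ref_ctrl = 2 * (~is_case).sum() - alt_ctrl
    table = np.array([[alt_case, ref_case], [alt_ctrl, ref_ctrl]])
    return chi2_contingency(table)[1]  # index 1 is the p-value

rng = np.random.default_rng(0)
n_individuals, n_snps = 200, 1000
genotypes = rng.integers(0, 3, size=(n_individuals, n_snps))  # toy genotype matrix
is_case = rng.random(n_individuals) < 0.5                     # toy case/control labels

p_values = [allele_count_test(genotypes[:, j], is_case) for j in range(n_snps)]
significant = sum(p < 0.05 / n_snps for p in p_values)        # Bonferroni correction
print(significant, "of", n_snps, "SNPs pass the corrected threshold")
```

Because the toy data are random, essentially no SNP should pass the corrected threshold; in a real study, SNPs that do pass flag candidate associations with the trait.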

A 2006 study by Seldin et al. used over five thousand autosomal SNPs to demonstrate European genetic substructure. The results showed "a consistent and reproducible distinction between 'northern' and 'southern' European population groups". Most northern, central, and eastern Europeans (Finns, Swedes, English, Irish, Germans, and Ukrainians) showed >90% in the "northern" population group, while most individual participants with southern European ancestry (Italians, Greeks, Portuguese, Spaniards) showed >85% in the "southern" group. Both Ashkenazi Jews as well as Sephardic Jews showed >85% membership in the "southern" group. Referring to the Jews clustering with southern Europeans, the authors state the results were "consistent with a later Mediterranean origin of these ethnic groups".[10]
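
The kind of substructure described above can also be illustrated with a simpler, commonly used technique: principal component analysis of a genotype matrix. The sketch below uses simulated data for two populations with slightly different allele frequencies; it is not the model-based clustering method used by Seldin et al., and the populations and all numbers are hypothetical.

```python
# Illustrative sketch of detecting population substructure from SNP genotypes
# with principal component analysis (PCA), on simulated data for two
# populations whose allele frequencies differ slightly. Not the method of
# Seldin et al.; shown only to make "genetic substructure" concrete.
import numpy as np

rng = np.random.default_rng(1)
n_snps = 2000
freq_a = rng.uniform(0.1, 0.9, n_snps)                               # allele freqs, population A
freq_b = np.clip(freq_a + rng.normal(0, 0.1, n_snps), 0.05, 0.95)    # shifted freqs, population B

def sample_genotypes(freqs: np.ndarray, n: int) -> np.ndarray:
    """Draw n individuals; each genotype is the number of alternate alleles (binomial, 2 trials)."""
    return rng.binomial(2, freqs, size=(n, freqs.size)).astype(float)

G = np.vstack([sample_genotypes(freq_a, 100), sample_genotypes(freq_b, 100)])
G -= G.mean(axis=0)                 # center each SNP
G /= G.std(axis=0) + 1e-9           # normalize each SNP
# The top principal component of the genotype matrix separates the two populations.
U, S, Vt = np.linalg.svd(G, full_matrices=False)
pc1 = U[:, 0] * S[0]
print("mean PC1, population A:", round(pc1[:100].mean(), 2))
print("mean PC1, population B:", round(pc1[100:].mean(), 2))
```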

A 2007 study by Bauchet et al. found that Ashkenazi Jews were most closely clustered with Arabic North African populations when compared to the global population, and that in the European structure analysis they share similarities only with Greeks and Southern Italians, reflecting their east Mediterranean origins.[138][139]

A 2010 study on Jewish ancestry by Atzmon-Ostrer et al. stated "Two major groups were identified by principal component, phylogenetic, and identity by descent (IBD) analysis: Middle Eastern Jews and European/Syrian Jews. The IBD segment sharing and the proximity of European Jews to each other and to southern European populations suggested similar origins for European Jewry and refuted large-scale genetic contributions of Central and Eastern European and Slavic populations to the formation of Ashkenazi Jewry", as both groups (the Middle Eastern Jews and the European/Syrian Jews) shared common ancestors in the Middle East about 2,500 years ago. The study examines genetic markers spread across the entire genome and shows that the Jewish groups (Ashkenazi and non-Ashkenazi) share large swaths of DNA, indicating close relationships, and that each of the Jewish groups in the study (Iranian, Iraqi, Syrian, Italian, Turkish, Greek and Ashkenazi) has its own genetic signature but is more closely related to the other Jewish groups than to their fellow non-Jewish countrymen.[140] Atzmon's team found that the SNP markers in genetic segments of 3 million DNA letters or longer were 10 times more likely to be identical among Jews than non-Jews. Results of the analysis also tally with biblical accounts of the fate of the Jews. The study also found that with respect to non-Jewish European groups, the population most closely related to Ashkenazi Jews are modern-day Italians. The study speculated that the genetic similarity between Ashkenazi Jews and Italians may be due to intermarriage and conversions in the time of the Roman Empire. It was also found that any two Ashkenazi Jewish participants in the study shared about as much DNA as fourth or fifth cousins.[141][142]
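
The idea of long shared segments can be made concrete with a deliberately simplified sketch (this is not the IBD method of Atzmon et al.; real IBD detection works on phased haplotypes and genetic-map distances): scan a pair of genotype vectors for the longest run of identical markers, since long identical runs between unrelated individuals hint at a segment inherited from a recent common ancestor.

```python
# Toy illustration of segment sharing (not the IBD analysis of any cited study):
# find the longest run of SNPs at which two individuals' genotypes agree.
import numpy as np

def longest_identical_run(g1: np.ndarray, g2: np.ndarray) -> int:
    """Length, in SNPs, of the longest run where two genotype vectors agree."""
    best = current = 0
    for a, b in zip(g1, g2):
        current = current + 1 if a == b else 0
        best = max(best, current)
    return best

rng = np.random.default_rng(2)
n_snps = 5000
g1 = rng.integers(0, 3, n_snps)          # hypothetical genotypes, individual 1
g2 = rng.integers(0, 3, n_snps)          # hypothetical genotypes, individual 2
g2[1000:1600] = g1[1000:1600]            # simulate a shared segment of 600 SNPs
print("longest identical run:", longest_identical_run(g1, g2), "SNPs")
```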

A 2010 study by Bray et al., using SNP microarray techniques and linkage analysis, found that when the Druze and Palestinian Arab populations are taken to represent the reference ancestor genome of world Jewry, between 35 and 55 percent of the modern Ashkenazi genome can possibly be of European origin, and that European "admixture is considerably higher than previous estimates by studies that used the Y chromosome" with this reference point. Assuming this reference point, the linkage disequilibrium in the Ashkenazi Jewish population "matches signs of interbreeding or 'admixture' between Middle Eastern and European populations".[143] On the Bray et al. tree, Ashkenazi Jews were found to be a genetically more divergent population than Russians, Orcadians, French, Basques, Italians, Sardinians and Tuscans. The study also observed that Ashkenazim are more diverse than their Middle Eastern relatives, which was counterintuitive because Ashkenazim are supposed to be a subset, not a superset, of their assumed geographical source population. Bray et al. therefore postulate that these results reflect not the antiquity of the population but a history of mixing between genetically distinct populations in Europe. However, it is possible that it was the relaxation of marriage prescriptions in the ancestors of Ashkenazim that drove their heterozygosity up, while the maintenance of the FBD (father's brother's daughter marriage) rule among native Middle Easterners kept their heterozygosity values in check. The distinctiveness of the Ashkenazim found in the Bray et al. study may therefore come from their ethnic endogamy (ethnic inbreeding), which allowed them to "mine" their ancestral gene pool in the context of relative reproductive isolation from European neighbors, and not from clan endogamy (clan inbreeding). Consequently, their higher diversity compared to Middle Easterners stems from the latter's marriage practices, not necessarily from the former's admixture with Europeans.[144]
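
The heterozygosity contrast discussed above can be made concrete with the standard expected-heterozygosity summary, H = 2p(1 - p) per biallelic SNP averaged across sites. The sketch below uses hypothetical allele-frequency spectra and is not taken from Bray et al.; it only illustrates how a population whose variants sit nearer fixation scores lower on this diversity measure.

```python
# Illustrative sketch of expected heterozygosity (not code from Bray et al.).
# For one biallelic SNP with alternate-allele frequency p, expected
# heterozygosity is H = 2p(1-p); averaging over SNPs gives a simple genome-wide
# diversity summary that endogamy or inbreeding tends to reduce.
import numpy as np

def mean_expected_heterozygosity(allele_freqs: np.ndarray) -> float:
    """Average 2p(1-p) across biallelic SNPs with alternate-allele frequencies p."""
    return float(np.mean(2.0 * allele_freqs * (1.0 - allele_freqs)))

rng = np.random.default_rng(3)
# Hypothetical allele-frequency spectra for two populations: one with more
# intermediate-frequency variants (higher diversity), one with a U-shaped
# spectrum whose variants sit near 0 or 1 (lower diversity).
diverse_freqs = rng.uniform(0.05, 0.95, 10_000)
low_diversity_freqs = rng.beta(0.5, 0.5, 10_000)
print("higher-diversity spectrum H:", round(mean_expected_heterozygosity(diverse_freqs), 3))
print("lower-diversity spectrum H:", round(mean_expected_heterozygosity(low_diversity_freqs), 3))
```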

The genome-wide genetic study carried out in 2010 by Behar et al. examined the genetic relationships among all major Jewish groups, including Ashkenazim, as well as the genetic relationship between these Jewish groups and non-Jewish ethnic populations. The study found that contemporary Jews (excluding Indian and Ethiopian Jews) have a close genetic relationship with people from the Levant. The authors explained that "the most parsimonious explanation for these observations is a common genetic origin, which is consistent with an historical formulation of the Jewish people as descending from ancient Hebrew and Israelite residents of the Levant".[145]

The results of a 2015 study by James Xue et al. suggested that 75% of the European ancestry in Ashkenazi Jews is southern European, with the rest mostly eastern European. The time of admixture was inferred to be around 30–40 generations ago, on the eve of Ashkenazi settlement in Eastern Europe.[146]

In the late 19th century, it was proposed that the core of today's Ashkenazi Jewry is genetically descended from a hypothetical Khazarian Jewish diaspora that had migrated westward from modern Russia and Ukraine into modern France and Germany (as opposed to the currently held theory that Jews from France and Germany migrated into Eastern Europe). The hypothesis is not corroborated by historical sources[147] and is unsubstantiated by genetics, but it is still occasionally supported by scholars who have had some success in keeping the theory in the academic consciousness.[148] The theory is associated with antisemitism[149] and anti-Zionism.[150][151]

A 2013 trans-genome study carried out by 30 geneticists from 13 universities and academies in 9 countries, assembling the largest data set available to date for the assessment of Ashkenazi Jewish genetic origins, found no evidence of Khazar origin among Ashkenazi Jews. "Thus, analysis of Ashkenazi Jews together with a large sample from the region of the Khazar Khaganate corroborates the earlier results that Ashkenazi Jews derive their ancestry primarily from populations of the Middle East and Europe, that they possess considerable shared ancestry with other Jewish populations, and that there is no indication of a significant genetic contribution either from within or from north of the Caucasus region", the authors concluded.[152]

There are many references to Ashkenazi Jews in the literature of medical and population genetics. Indeed, much awareness of "Ashkenazi Jews" as an ethnic group or category stems from the large number of genetic studies of disease, including many that are well reported in the media, that have been conducted among Jews. Jewish populations have been studied more thoroughly than most other human populations, for a variety of reasons:

The result is a form of ascertainment bias. This has sometimes created an impression that Jews are more susceptible to genetic disease than other populations.[153] Healthcare professionals are often taught to consider those of Ashkenazi descent to be at increased risk for colon cancer.[154]

Genetic counseling and genetic testing are often undertaken by couples where both partners are of Ashkenazi ancestry. Some organizations, most notably Dor Yeshorim, organize screening programs to prevent homozygosity for the genes that cause related diseases.[155][156]

Visit link:
Ashkenazi Jews - Wikipedia

