
Current State Laws Against Human Embryo Research

Posted: June 22, 2018 at 12:48 am

Harmful experimentation on embryos is a felony in some states

Some members of Congress think that researchers should be able to obtain and destroy live human embryos for federally funded stem cell research. But such destruction of embryos for research appears to be illegal (regardless of the source of funding) in nine states. Proposals for federal funding would therefore force many taxpayers to subsidize destructive cell harvesting that is a felony in their home state.

Louisiana

Louisiana's law recognizes a human embryo outside the womb as a "juridical person," and prohibits the destruction of a viable fertilized ovum. La. Rev. Stat. tit. 9, §§ 123, 129 (West 2000). It further states: "The use of a human ovum fertilized in vitro is solely for the support and contribution of the complete development of human in utero implantation. No in vitro fertilized human ovum will be farmed or cultured solely for research purposes or any other purposes." § 122.

Maine

Maine's law prohibits the "use [of]...any live human fetus, whether intrauterine or extrauterine...for scientific experimentation or for any form of experimentation." Me. Rev. Stat. tit. 22, § 1593 (West 1992). A legal analysis commissioned by the National Bioethics Advisory Commission concluded that this law "ban[s] research on in vitro embryos altogether." NBAC, Ethical Issues in Human Stem Cell Research, vol. II, pages A-4, A-10.

Massachusetts

Massachusetts law prohibits "use [of] any live human fetus whether before or after expulsion from its mother's womb, for scientific, laboratory, research or other kind of experimentation." Mass. Gen. Laws ch. 112, § 12J(a)(I) (West 1996). The section goes on to define "fetus" as including "an embryo." Ch. 112, § 12J(a)(IV).

Michigan

Michigan's law provides that "[a] person shall not use a live human embryo...for nontherapeutic research if...the research substantially jeopardizes the life or health of the embryo..." Mich. Comp. Laws § 333.2685(1) (West 1992). Performing such experimentation is a felony. § 333.2691.

Minnesota

Minnesota's law prohibits using or permitting the use of "a living human conceptus for any type of scientific, laboratory research or other experimentation except to protect the life or health of the conceptus..." Minn. Stat. § 145.422 (West 1998). "Human conceptus" means "any human organism, conceived either in the human body or produced in an artificial environment other than the human body, from fertilization through the first 265 days thereafter." § 145.421.

North Dakota

North Dakota law provides: "A person may not use any live human fetus, whether before or after expulsion from its mother's womb, for scientific, laboratory, research, or other kind of experimentation." N.D. Cent. Code § 14-02.2-01(1) (Michie 1997). A legal analysis commissioned by the National Bioethics Advisory Commission concluded that this law "would ban embryo stem cell research using either IVF embryos or aborted conceptuses." NBAC, Ethical Issues in Human Stem Cell Research, vol. II, page A-4.

Pennsylvania

Pennsylvania's law prohibits "knowingly perform[ing] any type of nontherapeutic experimentation or nontherapeutic medical procedure... upon any unborn child..." Pa. Cons. Stat. tit. 18, § 3216(a) (West 2000). Performing such experimentation is a felony. Id. "Unborn child" means "an individual organism of the species homo sapiens from fertilization until live birth." § 3203.

Rhode Island

Rhode Island prohibits the use of "any live human fetus, whether before or after expulsion from its mother's womb, for scientific, laboratory research, or other kind of experimentation." R.I. Gen. Laws § 11-54-1(a) (Michie 2000). A legal analysis commissioned by the National Bioethics Advisory Commission concluded that this law "ban[s] research on in vitro embryos altogether." NBAC, Ethical Issues in Human Stem Cell Research, vol. II, pages A-4, A-10.

South Dakota

Under a South Dakota law enacted in 2000, it is a crime to "conduct nontherapeutic research that destroys a human embryo," or to "conduct nontherapeutic research that subjects a human embryo to substantial risk of injury or death." S.D. Codified Laws §§ 34-14-16, 34-14-17 (Michie Supp. 2001). It is also unlawful to "use for research purposes cells or tissues that [a] person knows were obtained" by doing such harm to embryos. § 34-14-18. "Human embryo" means "a living organism of the species Homo sapiens at the earliest stages of development (including the single-celled stage) that is not located in a woman's body." § 34-14-20. Thus this law bans not only the destruction of the embryo to obtain stem cells (regardless of the source of funding), but also research using the resulting cells (regardless of whether the cells were harvested in that state or elsewhere).



Mammoth – Wikipedia

Posted: June 22, 2018 at 12:47 am


A mammoth is any species of the extinct genus Mammuthus, proboscideans commonly equipped with long, curved tusks and, in northern species, a covering of long hair. They lived from the Pliocene epoch (from around 5 million years ago) into the Holocene at about 4,500 years ago[1][2] in Africa, Europe, Asia, and North America. They were members of the family Elephantidae, which also contains the two genera of modern elephants and their ancestors. Mammoths stem from an ancestral species called M. africanavus, the African mammoth. These mammoths lived in northern Africa and disappeared about 3 or 4 million years ago. Descendants of these mammoths moved north and eventually covered most of Eurasia. These were M. meridionalis, the 'southern mammoths'.[3]

The earliest known proboscideans, the clade that contains the elephants, existed about 55 million years ago around the Tethys Sea area. The closest relatives of the Proboscidea are the sirenians and the hyraxes. The family Elephantidae is known to have existed six million years ago in Africa, and includes the living elephants and the mammoths. Among many now extinct clades, the mastodon is only a distant relative of the mammoths, and part of the separate Mammutidae family, which diverged 25 million years before the mammoths evolved.[4]

The following cladogram shows the placement of the genus Mammuthus among other proboscideans, based on hyoid characteristics:[5]

Since many remains of each species of mammoth are known from several localities, it is possible to reconstruct the evolutionary history of the genus through morphological studies. Mammoth species can be identified from the number of enamel ridges on their molars; the primitive species had few ridges, and the number increased gradually as new species evolved and replaced the former ones. At the same time, the crowns of the teeth became longer, and the skulls became higher from top to bottom and shorter from back to front over time to accommodate this.[6]

The first known members of the genus Mammuthus are the African species Mammuthus subplanifrons from the Pliocene and Mammuthus africanavus from the Pleistocene. The former is thought to be the ancestor of later forms. Mammoths entered Europe around 3 million years ago; the earliest known type has been named M. rumanus, which spread across Europe and China. Only its molars are known, which show it had 8–10 enamel ridges. A population evolved 12–14 ridges and split off from and replaced the earlier type, becoming M. meridionalis. In turn, this species was replaced by the steppe mammoth, M. trogontherii, with 18–20 ridges, which evolved in East Asia ca. 1 million years ago. Mammoths derived from M. trogontherii evolved molars with 26 ridges 200,000 years ago in Siberia, and became the woolly mammoth, M. primigenius.[6] The Columbian mammoth, M. columbi, evolved from a population of M. trogontherii that had entered North America. A 2011 genetic study showed that two examined specimens of the Columbian mammoth were grouped within a subclade of woolly mammoths. This suggests that the two populations interbred and produced fertile offspring. It also suggested that a North American form known as "M. jeffersonii" may be a hybrid between the two species.[7]

By the late Pleistocene, mammoths in continental Eurasia had undergone a major transformation, including a shortening and heightening of the cranium and mandible, increase in molar hypsodonty index, increase in plate number, and thinning of dental enamel. Due to this change in physical appearance, it became customary to group European mammoths separately into distinguishable clusters:

There is speculation as to what caused this variation within the three chronospecies. Variations in environment, climate change, and migration surely played roles in the evolutionary process of the mammoths. Take M. primigenius for example: woolly mammoths lived in open grassland biomes. The cool steppe-tundra of the Northern Hemisphere was an ideal place for mammoths to thrive because of the resources it supplied. During occasional warmings within the ice age, the changing climate reshaped the landscape, and the resources available to the mammoths changed accordingly.[6][8][9]

The word mammoth was first used in Europe during the early 17th century, when referring to maimanto tusks discovered in Siberia.[10] John Bell,[11] who was on the Ob River in 1722, said that mammoth tusks were well known in the area. They were called "mammon's horn" and were often found in washed-out river banks. Some local people claimed to have seen a living mammoth, but they only came out at night and always disappeared under water when detected. He bought one and presented it to Hans Sloane, who pronounced it an elephant's tooth.

The folklore of some native peoples of Siberia, who would routinely find mammoth bones, and sometimes frozen mammoth bodies, in eroding river banks, had various interesting explanations for these finds. Among the Khanty people of the Irtysh River basin, a belief existed that the mammoth was some kind of a water spirit. According to other Khanty, the mammoth was a creature that lived underground, burrowing its tunnels as it went, and would die if it accidentally came to the surface.[12] The concept of the mammoth as an underground creature was known to the Chinese, who received some mammoth ivory from the Siberian natives; accordingly, the creature was known in China as yǐn shǔ, "the hidden rodent".[13]

Thomas Jefferson, who famously had a keen interest in paleontology, is partially responsible for transforming the word mammoth from a noun describing the prehistoric elephant to an adjective describing anything of surprisingly large size. The first recorded use of the word as an adjective was in a description of a large wheel of cheese (the "Cheshire Mammoth Cheese") given to Jefferson in 1802.[14]

Like their modern relatives, mammoths were quite large. The largest known species reached heights in the region of 4 m (13.1 ft) at the shoulder and weights of up to 8 tonnes (8.8 short tons), while exceptionally large males may have exceeded 12 tonnes (13.2 short tons). However, most species of mammoth were only about as large as a modern Asian elephant (which is about 2.5 m to 3 m high at the shoulder, and rarely exceeds 5 tonnes). Both sexes bore tusks. A first, small set appeared at about the age of six months, and these were replaced at about 18 months by the permanent set. Growth of the permanent set was at a rate of about 2.5 to 15.2 cm (1 to 6 in) per year.[15]

Based on studies of their close relatives, the modern elephants, mammoths probably had a gestation period of 22 months, resulting in a single calf being born. Their social structure was probably the same as that of African and Asian elephants, with females living in herds headed by a matriarch, whilst bulls lived solitary lives or formed loose groups after sexual maturity.[16]

Scientists discovered and studied the remains of a mammoth calf, and found that fat greatly influenced its form, and enabled it to store large amounts of nutrients necessary for survival in temperatures as low as −50 °C (−58 °F).[17] The fat also allowed mammoths to build muscle mass, helping them fight enemies and live longer.[18]

The diet of mammoths differed somewhat by species and location, although all mammoths ate similar things. The Columbian mammoth, M. columbi, was mainly a grazer; American Columbian mammoths fed primarily on cacti leaves, trees, and shrubs. These inferences are based on mammoth feces and mammoth teeth. Mammoths, like modern-day elephants, have hypsodont molars, which, together with the wide availability of grasses and trees, allowed them to exploit an expansive range of habitats.[19]

For the Mongochen mammoth, its diet consisted of herbs, grasses, larch, and shrubs, and possibly alder. These inferences were made through the observation of mammoth feces, which scientists observed contained non-arboreal pollen and moss spores.[20]

European mammoths had a major diet of C3 carbon fixation plants. This was determined by examining the isotopic data from the European mammoth teeth.[21]

The Yamal baby mammoth Lyuba, found in 2007 on the Yamal Peninsula in Western Siberia, suggests that baby mammoths, like modern baby elephants, ate the dung of adult animals. The evidence is that the dentition (teeth) of the baby mammoth had not yet fully developed to chew grass, and that there was an abundance of ascospores of coprophilous fungi in the pollen spectrum of the baby's mother. Coprophilous fungi grow on animal dung and disperse spores into nearby vegetation, which the baby mammoth would then consume; spores might also have entered its stomach while grazing for the first few times. Coprophagy may be an adaptation, serving to populate the infant's gut with the microbiome needed for digestion.

Mammoths alive in the Arctic during the Last Glacial Maximum consumed mainly forbs, such as Artemisia; graminoids were only a minor part of their diet.[22]

The woolly mammoth (M. primigenius) was the last species of the genus. Most populations of the woolly mammoth in North America and Eurasia, as well as all the Columbian mammoths (M. columbi) in North America, died out around the time of the last glacial retreat, as part of a mass extinction of megafauna in northern Eurasia and the Americas. Until recently, the last woolly mammoths were generally assumed to have vanished from Europe and southern Siberia about 12,000 years ago, but new findings show some were still present there about 10,000 years ago. Slightly later, the woolly mammoths also disappeared from continental northern Siberia.[23] A small population survived on St. Paul Island, Alaska, up until 3750 BC,[2][24][25] and the small[26] mammoths of Wrangel Island survived until 1650 BC.[27][28] Recent research of sediments in Alaska indicates mammoths survived on the American mainland until 10,000 years ago.[29]

A definitive explanation for their extinction has yet to be agreed upon. The warming trend (Holocene) that occurred 12,000 years ago, accompanied by a glacial retreat and rising sea levels, has been suggested as a contributing factor. Forests replaced open woodlands and grasslands across the continent. The available habitat would have been reduced for some megafaunal species, such as the mammoth. However, such climate changes were nothing new; numerous very similar warming episodes had occurred previously within the ice age of the last several million years without producing comparable megafaunal extinctions, so climate alone is unlikely to have played a decisive role.[30][31] The spread of advanced human hunters through northern Eurasia and the Americas around the time of the extinctions, however, was a new development, and thus might have contributed significantly.[30][31]

Whether the general mammoth population died out for climatic reasons or due to overhunting by humans is controversial.[32] During the transition from the Late Pleistocene epoch to the Holocene epoch, the mammoth's range shrank because progressive warming at the end of the Pleistocene changed its environment. The mammoth steppe was a periglacial landscape with rich herb and grass vegetation that disappeared along with the mammoth because of environmental changes in the climate. Mammoths retreated to isolated spots in Eurasia, where they disappeared completely. Late Paleolithic and Mesolithic human hunters might also have affected the size of the last mammoth populations in Europe.[citation needed] There is evidence to suggest that humans contributed to the mammoth extinction, although there is no definitive proof. Humans living south of a mammoth steppe learned to adapt to the harsher climates north of the steppe, where mammoths resided; if humans could survive that harsh northern climate, it was possible humans could hunt (and eventually extinguish) mammoths everywhere. Another hypothesis suggests mammoths fell victim to an infectious disease. A combination of climate change and hunting by humans may be a possible explanation for their extinction. Homo erectus is known to have consumed mammoth meat as early as 1.8 million years ago,[33] though this may reflect only successful scavenging rather than actual hunting.
Later humans show greater evidence for hunting mammoths; mammoth bones at a 50,000-year-old site in southern Britain suggest that Neanderthals butchered the animals,[34] while various sites in Eastern Europe dating from 15,000 to 44,000 years old suggest humans (probably Homo sapiens) built dwellings using mammoth bones (the age of some of the earlier structures suggests that Neanderthals began the practice).[35] However, the American Institute of Biological Sciences notes that bones of dead elephants, left on the ground and subsequently trampled by other elephants, tend to bear marks resembling butchery marks, which have allegedly been misinterpreted as such by archaeologists.[36]

Many hypotheses also seek to explain the regional extinction of mammoths in specific areas. Scientists have speculated that the mammoths of Saint Paul Island, an isolated enclave where mammoths survived until about 8,000 years ago, died out as the island shrank by 80–90% when sea levels rose, eventually making it too small to support a viable population.[37] Similarly, genome sequences of the Wrangel Island mammoths indicate a sharp decline in genetic diversity, though the extent to which this played a role in their extinction is still unclear.[38] Another hypothesis, offered for the mammoth extinction in Siberia, is that many may have drowned: while traveling to the Northern River, many of these mammoths broke through the ice and drowned. This would also explain the bone remains found along the Arctic coast and on the islands of the New Siberian group.[citation needed]

Dwarfing occurred with the pygmy mammoth on the outer Channel Islands of California, but at an earlier period. Those animals were very likely killed off by early Paleo-Native Americans and by habitat loss caused by the rising sea level that split Santa Rosae into the outer Channel Islands.[39]

An estimated 150 million mammoths are buried in the frozen Siberian tundra.[40] The use of this preserved genetic material to recreate, or de-extinct, living mammoths has long been discussed theoretically but has only recently become the subject of formal effort, owing to advances in molecular biology techniques. As of 2015, there were several major ongoing projects, such as those of Akira Iritani of Japan and the Long Now Foundation,[41][42] attempting to create a mammoth-elephant hybrid.[43] According to the researchers, a mammoth cannot be recreated outright; instead, they hope eventually to grow, in an "artificial womb", a hybrid elephant with some woolly mammoth traits.[44][45]

In April 2015, Swedish scientists published the complete genome (complete chromosomal DNA sequences) of the woolly mammoth,[41] and now separate projects are working on gradually adding mammoth DNA sequences to elephant cells.[41][42][46] For example, a Harvard University team is already attempting to study the animals' characteristics in vitro by inserting some specific mammoth genes into Asian elephant stem cells.[47] By March 2015, some woolly mammoth genes had been copied into the genome of an Asian elephant, using the new CRISPR DNA editing technique. Genetic segments from frozen mammoth specimens, including genes for the ears, subcutaneous fat, and hair attributes, were copied into the DNA of skin cells from a modern elephant.[47][48]

If a viable hybrid embryo is obtained via gene editing procedures, it may be possible to implant it into a female Asian elephant housed in a zoo,[41] but with current knowledge and technology, it is unlikely that the hybrid embryo would be carried through the two-year gestation.[49][50] To illustrate one of the many difficulties: the genetic differences between the Asian elephant and the African elephant are so great that the two generally cannot interbreed,[51] though on one occasion a pair did produce a live calf, named Motty, which died of organ defects at less than two weeks old.[52]

If any method is ever successful, there is the suggestion to introduce the hybrids to a wildlife reserve in Siberia called the Pleistocene Park.[53] Some biologists question the ethics of such recreation attempts. In addition to the technical problems, there is not much habitat left that would be suitable for elephant-mammoth hybrids. Because both species are (or were) social and gregarious, creating only a few specimens would not be ideal. The time and resources required would be enormous, and the scientific benefits unclear, suggesting these resources might better be used to preserve the extant elephant species, which are endangered.[54][55] The ethics of using elephants as surrogate mothers in cloning attempts has also been questioned, as most embryos would not survive, and it would be impossible to know the exact needs of a hybrid elephant-mammoth calf.[56]


Preventive healthcare – Wikipedia

Posted: June 21, 2018 at 11:47 am

Preventive healthcare (alternately preventive medicine, preventative healthcare/medicine, or prophylaxis) consists of measures taken for disease prevention, as opposed to disease treatment.[1] Just as health comprises a variety of physical and mental states, so do disease and disability, which are affected by environmental factors, genetic predisposition, disease agents, and lifestyle choices. Health, disease, and disability are dynamic processes which begin before individuals realize they are affected. Disease prevention relies on anticipatory actions that can be categorized as primal, primary, secondary, and tertiary prevention.[1][2][3]

Each year, millions of people die of preventable deaths. A 2004 study showed that about half of all deaths in the United States in 2000 were due to preventable behaviors and exposures.[4] Leading causes included cardiovascular disease, chronic respiratory disease, unintentional injuries, diabetes, and certain infectious diseases.[4] This same study estimates that 400,000 people die each year in the United States due to poor diet and a sedentary lifestyle.[4] According to estimates made by the World Health Organization (WHO), about 55 million people died worldwide in 2011, two thirds of this group from non-communicable diseases, including cancer, diabetes, and chronic cardiovascular and lung diseases.[5] This is an increase from the year 2000, during which 60% of deaths were attributed to these diseases.[5] Preventive healthcare is especially important given the worldwide rise in prevalence of chronic diseases and deaths from these diseases.

There are many methods for prevention of disease. It is recommended that adults and children aim to visit their doctor for regular check-ups, even if they feel healthy, to perform disease screening, identify risk factors for disease, discuss tips for a healthy and balanced lifestyle, stay up to date with immunizations and boosters, and maintain a good relationship with a healthcare provider.[6] Some common disease screenings include checking for hypertension (high blood pressure), hyperglycemia (high blood sugar, a risk factor for diabetes mellitus), hypercholesterolemia (high blood cholesterol), screening for colorectal cancer, depression, HIV and other common types of sexually transmitted disease such as chlamydia, syphilis, and gonorrhea, mammography (to screen for breast cancer), a Pap test (to check for cervical cancer), and screening for osteoporosis. Genetic testing can also be performed to screen for mutations that cause genetic disorders or predisposition to certain diseases such as breast or ovarian cancer.[6] However, these measures are not affordable for every individual, and the cost effectiveness of preventive healthcare is still a topic of debate.[7][8]

Preventive healthcare strategies are described as taking place at the primal, primary, secondary, and tertiary prevention levels. In the 1940s, Hugh R. Leavell and E. Gurney Clark coined the term primary prevention. They worked at the Harvard and Columbia University Schools of Public Health, respectively, and later expanded the levels to include secondary and tertiary prevention.[9] Goldston (1987) notes that these levels might be better described as "prevention, treatment, and rehabilitation",[9] though the terms primary, secondary, and tertiary prevention are still in use today. The concept of primal prevention was introduced much more recently, in relation to the new developments in molecular biology over the last fifty years,[10] more particularly in epigenetics, which point to the paramount importance of environmental conditions - both physical and affective - on the organism during its fetal and newborn life (or so-called primal life).[11]

Primordial prevention refers to measures designed to avoid the development of risk factors in the first place, early in life.[13][14]

A separate category of "health promotion" has recently been propounded. This health promotion par excellence is based on the 'new knowledge' in molecular biology, in particular on epigenetic knowledge, which points to how much affective - as well as physical - environment during fetal and newborn life may determine each and every aspect of adult health.[18][19][20] This new way of promoting health is now commonly called primal prevention.[21] It consists mainly in providing future parents with pertinent, unbiased information on primal health and supporting them during their child's primal period of life (i.e., "from conception to first anniversary" according to definition by the Primal Health Research Centre, London). This includes adequate parental leave[22] - ideally for both parents - with kin caregiving[23] and financial help where needed.


Primary prevention consists of traditional "health promotion" and "specific protection."[15] Health promotion activities are current, non-clinical life choices, such as eating nutritious meals and exercising daily, that both prevent disease and create a sense of overall well-being, thereby prolonging life expectancy.[1][15] Health-promotional activities do not target a specific disease or condition but rather promote health and well-being on a very general level.[1] On the other hand, specific protection targets a type or group of diseases and complements the goals of health promotion.[15]

Food is perhaps the most basic tool in preventive health care. The 2011 National Health Interview Survey, performed by the Centers for Disease Control, was the first national survey to include questions about the ability to pay for food. Difficulty paying for food, medicine, or both is a problem facing 1 out of 3 Americans. If better food options were available through food banks, soup kitchens, and other resources for low-income people, obesity and the chronic conditions that come with it would be better controlled.[24] A "food desert" is an area with restricted access to healthy foods due to a lack of supermarkets within a reasonable distance. These are often low-income neighborhoods where the majority of residents lack transportation.[25] There have been several grassroots movements in the past 20 years to encourage urban gardening, such as the GreenThumb organization in New York City. Urban gardening uses vacant lots to grow food for a neighborhood and is cultivated by the local residents.[26] Mobile fresh markets are another resource for residents of a "food desert": specially outfitted buses bringing affordable fresh fruits and vegetables to low-income neighborhoods. These programs often hold educational events as well, such as cooking and nutrition guidance.[27] Programs such as these help provide healthy, affordable foods to the people who need them most.

Scientific advancements in genetics have significantly contributed to the knowledge of hereditary diseases and have facilitated great progress in specific protective measures in individuals who are carriers of a disease gene or have an increased predisposition to a specific disease. Genetic testing has allowed physicians to make quicker and more accurate diagnoses and has allowed for tailored treatments or personalized medicine.[1] Similarly, specific protective measures such as water purification, sewage treatment, and the development of personal hygienic routines (such as regular hand-washing) became mainstream upon the discovery of infectious disease agents such as bacteria. These discoveries have been instrumental in decreasing the rates of communicable diseases that are often spread in unsanitary conditions.[1] Preventing sexually transmitted infections is another form of primary prevention.

Secondary prevention deals with latent diseases and attempts to prevent an asymptomatic disease from progressing to symptomatic disease.[15] Certain diseases can be classified as primary or secondary. This depends on definitions of what constitutes a disease, though, in general, primary prevention addresses the root cause of a disease or injury[15] whereas secondary prevention aims to detect and treat a disease early on.[28] Secondary prevention consists of "early diagnosis and prompt treatment" to contain the disease and prevent its spread to other individuals, and "disability limitation" to prevent potential future complications and disabilities from the disease.[1] For example, early diagnosis and prompt treatment for a syphilis patient would include a course of antibiotics to destroy the pathogen and screening and treatment of any infants born to syphilitic mothers. Disability limitation for syphilitic patients includes continued check-ups on the heart, cerebrospinal fluid, and central nervous system of patients to curb any damaging effects such as blindness or paralysis.[1]

Finally, tertiary prevention attempts to reduce the damage caused by symptomatic disease by focusing on mental, physical, and social rehabilitation. Unlike secondary prevention, which aims to prevent disability, the objective of tertiary prevention is to maximize the remaining capabilities and functions of an already disabled patient.[1] Goals of tertiary prevention include: preventing pain and damage, halting progression and complications from disease, and restoring the health and functions of the individuals affected by disease.[28] For syphilitic patients, rehabilitation includes measures to prevent complete disability from the disease, such as implementing work-place adjustments for the blind and paralyzed or providing counseling to restore normal daily functions to the greatest extent possible.[1]

Tobacco use has been the leading cause of preventable death in the United States. However, poor diet and lack of exercise may soon surpass tobacco as a leading cause of preventable death. These behaviors are modifiable, and public health and prevention efforts could make a difference in reducing these deaths.[4]

The leading causes of preventable death worldwide share similar trends to the United States. There are a few differences between the two, such as malnutrition, pollution, and unsafe sanitation, that reflect health disparities between the developing and developed world.[29]

In 2010, 7.6 million children died before reaching the age of 5. While this is a decrease from 9.6 million in the year 2000,[30] it is still far from the fourth Millennium Development Goal to decrease child mortality by two-thirds by the year 2015.[31] Of these deaths, about 64% were due to infection (including diarrhea, pneumonia, and malaria).[30] About 40% of these deaths occurred in neonates (children ages 1–28 days) due to pre-term birth complications.[31] The highest number of child deaths occurred in Africa and Southeast Asia.[30] In Africa, almost no progress has been made in reducing neonatal death since 1990.[31] India, Nigeria, Democratic Republic of the Congo, Pakistan, and China contributed to almost 50% of global child deaths in 2010. Targeting efforts in these countries is essential to reducing the global child death rate.[30]

Child mortality is caused by a variety of factors including poverty, environmental hazards, and lack of maternal education.[32] The World Health Organization created a list of interventions in the following table that were judged economically and operationally "feasible," based on the healthcare resources and infrastructure in 42 nations that contribute to 90% of all infant and child deaths. The table indicates how many infant and child deaths could have been prevented in the year 2000, assuming universal healthcare coverage.[32]

Obesity is a major risk factor for a wide variety of conditions including cardiovascular diseases, hypertension, certain cancers, and type 2 diabetes. In order to prevent obesity, it is recommended that individuals adhere to a consistent exercise regimen as well as a nutritious and balanced diet. A healthy individual should aim to acquire 10% of their energy from proteins, 15–20% from fat, and over 50% from complex carbohydrates, while avoiding alcohol as well as foods high in fat, salt, and sugar.[33] Sedentary adults should aim for at least half an hour of moderate-level daily physical activity and eventually increase to include at least 20 minutes of intense exercise, three times a week.[34] Preventive health care offers many benefits to those who choose to take an active role in it, whereas the medical system is largely geared toward treating acute symptoms of disease after they have brought patients to the emergency room. Obesity remains an ongoing epidemic in American culture, and eating healthier and routinely exercising play a major role in reducing an individual's risk for type 2 diabetes. About 23.6 million people in the United States have diabetes; of those, 17.9 million are diagnosed and 5.7 million are undiagnosed. Ninety to 95 percent of people with diabetes have type 2 diabetes. Diabetes is the main cause of kidney failure, limb amputation, and new-onset blindness in American adults.[35]

In the case of a sexually transmitted infection (STI) such as syphilis, health promotion activities would include avoiding microorganisms by maintaining personal hygiene, routine check-up appointments with the doctor, and general sex education, whereas specific protective measures would include using prophylactics (such as condoms) during sex and discouraging sexual promiscuity.[1] STIs are common both historically and in today's society. STIs can be asymptomatic or cause a range of symptoms. The use of condoms reduces the risk of acquiring some STIs.[36] Other forms of STI prophylaxis include abstinence, testing and screening of partners, regular health check-ups, and certain medications such as Truvada.

Thrombosis is a serious circulatory disease affecting thousands of people each year, usually older persons undergoing surgical procedures, women taking oral contraceptives, and travelers. Its consequences can include heart attacks and strokes. Prevention measures include exercise, anti-embolism stockings, pneumatic compression devices, and pharmacological treatments.

In recent years, cancer has become a global problem. Low and middle income countries share a majority of the cancer burden largely due to exposure to carcinogens resulting from industrialization and globalization.[37] However, primary prevention of cancer and knowledge of cancer risk factors can reduce over one third of all cancer cases. Primary prevention of cancer can also prevent other diseases, both communicable and non-communicable, that share common risk factors with cancer.[37]

Lung cancer is the leading cause of cancer-related deaths in the United States and Europe and is a major cause of death in other countries.[38] Tobacco is an environmental carcinogen and the major underlying cause of lung cancer.[38] Between 25% and 40% of all cancer deaths and about 90% of lung cancer cases are associated with tobacco use. Other carcinogens include asbestos and radioactive materials.[39] Both smoking and second-hand exposure from other smokers can lead to lung cancer and eventually death.[38] Therefore, prevention of tobacco use is paramount to prevention of lung cancer.

Individual, community, and statewide interventions can prevent or reduce tobacco use. Of adults in the US who have ever smoked, 90% began smoking prior to the age of 20. In-school prevention/educational programs, as well as counseling resources, can help prevent and cease adolescent smoking.[39] Other cessation techniques include group support programs, nicotine replacement therapy (NRT), hypnosis, and self-motivated behavioral change. Studies have shown long-term success rates (>1 year) of 20% for hypnosis and 10–20% for group therapy.[39]

Cancer screening programs serve as effective sources of secondary prevention. The Mayo Clinic, Johns Hopkins, and Memorial Sloan-Kettering hospitals conducted annual x-ray screenings and sputum cytology tests and found that lung cancer was detected at higher rates and earlier stages and had more favorable treatment outcomes, suggesting a potential benefit of widespread investment in such programs.[39]

Legislation can also affect smoking prevention and cessation. In 1992, Massachusetts (United States) voters passed a bill adding an extra 25-cent tax to each pack of cigarettes, despite intense lobbying and $7.3 million spent by the tobacco industry to oppose the bill. Tax revenue goes toward tobacco education and control programs and has led to a decline of tobacco use in the state.[40]

Lung cancer and tobacco smoking are increasing worldwide, especially in China. China is responsible for about one-third of the global consumption and production of tobacco products.[41] Tobacco control policies have been ineffective as China is home to 350 million regular smokers and 750 million passive smokers, and the annual death toll is over 1 million.[41] Recommended actions to reduce tobacco use include: decreasing tobacco supply, increasing tobacco taxes, widespread educational campaigns, decreasing advertising from the tobacco industry, and increasing tobacco cessation support resources.[41] In Wuhan, China, a 1998 school-based program implemented an anti-tobacco curriculum for adolescents and reduced the number of regular smokers, though it did not significantly decrease the number of adolescents who initiated smoking. The program was therefore effective in secondary but not primary prevention, showing that school-based programs have the potential to reduce tobacco use.[42]

Skin cancer is the most common cancer in the United States.[43] The most lethal form of skin cancer, melanoma, leads to over 50,000 annual deaths in the United States.[43] Childhood prevention is particularly important because a significant portion of ultraviolet radiation exposure from the sun occurs during childhood and adolescence and can subsequently lead to skin cancer in adulthood. Furthermore, childhood prevention can lead to the development of healthy habits that continue to prevent cancer for a lifetime.[43]

The Centers for Disease Control and Prevention (CDC) recommends several primary prevention methods including: limiting sun exposure between 10 AM and 4 PM, when the sun is strongest, wearing tighter-weave natural cotton clothing, wide-brim hats, and sunglasses as protective covers, using sunscreens that protect against both UV-A and UV-B rays, and avoiding tanning salons.[43] Sunscreen should be reapplied after sweating, exposure to water (through swimming for example) or after several hours of sun exposure.[43] Since skin cancer is very preventable, the CDC recommends school-level prevention programs including preventive curricula, family involvement, participation and support from the school's health services, and partnership with community, state, and national agencies and organizations to keep children away from excessive UV radiation exposure.[43]

Most skin cancer and sun protection data comes from Australia and the United States.[44] An international study reported that Australians tended to demonstrate higher knowledge of sun protection and skin cancer knowledge, compared to other countries.[44] Of children, adolescents, and adults, sunscreen was the most commonly used skin protection. However, many adolescents purposely used sunscreen with a low sun protection factor (SPF) in order to get a tan.[44] Various Australian studies have shown that many adults failed to use sunscreen correctly; many applied sunscreen well after their initial sun exposure and/or failed to reapply when necessary.[45][46][47] A 2002 case-control study in Brazil showed that only 3% of case participants and 11% of control participants used sunscreen with SPF >15.[48]

Cervical cancer ranks among the top three most common cancers among women in Latin America, sub-Saharan Africa, and parts of Asia. Cervical cytology screening aims to detect abnormal lesions in the cervix so that women can undergo treatment prior to the development of cancer. Given that high quality screening and follow-up care has been shown to reduce cervical cancer rates by up to 80%, most developed countries now encourage sexually active women to undergo a Pap test every 3–5 years. Finland and Iceland have developed effective organized programs with routine monitoring and have managed to significantly reduce cervical cancer mortality while using fewer resources than unorganized, opportunistic programs such as those in the United States or Canada.[49]

In developing nations in Latin America, such as Chile, Colombia, Costa Rica, and Cuba, both public and privately organized programs have offered women routine cytological screening since the 1970s. However, these efforts have not resulted in a significant change in cervical cancer incidence or mortality in these nations, likely due to low quality, inefficient testing. By contrast, Puerto Rico, which has offered early screening since the 1960s, witnessed almost a 50% decline in cervical cancer incidence and almost a four-fold decrease in mortality between 1950 and 1990. Brazil, Peru, India, and several high-risk nations in sub-Saharan Africa, which lack organized screening programs, have a high incidence of cervical cancer.[49]

Colorectal cancer is globally the second most common cancer in women and the third-most common in men,[50] and the fourth most common cause of cancer death after lung, stomach, and liver cancer,[51] having caused 715,000 deaths in 2010.[52]

It is also highly preventable; about 80 percent[53] of colorectal cancers begin as benign growths, commonly called polyps, which can be easily detected and removed during a colonoscopy. Other methods of screening for polyps and cancers include fecal occult blood testing. Lifestyle changes that may reduce the risk of colorectal cancer include increasing consumption of whole grains, fruits and vegetables, and reducing consumption of red meat (see Colorectal cancer).

Access to healthcare and preventive health services is unequal, as is the quality of care received. A study conducted by the Agency for Healthcare Research and Quality (AHRQ) revealed health disparities in the United States. In the United States, elderly adults (>65 years old) received worse care and had less access to care than their younger counterparts. The same trends are seen when comparing all racial minorities (black, Hispanic, Asian) to white patients, and low-income people to high-income people.[54] Common barriers to accessing and utilizing healthcare resources included lack of income and education, language barriers, and lack of health insurance. Minorities were less likely than whites to possess health insurance, as were individuals who completed less education. These disparities made it more difficult for the disadvantaged groups to have regular access to a primary care provider, receive immunizations, or receive other types of medical care.[54] Additionally, uninsured people tend to not seek care until their diseases progress to chronic and serious states and they are also more likely to forgo necessary tests, treatments, and filling prescription medications.[55]

These sorts of disparities and barriers exist worldwide as well. Often, there are decades of gaps in life expectancy between developing and developed countries. For example, Japan has an average life expectancy that is 36 years greater than that in Malawi.[56] Low-income countries also tend to have fewer physicians than high-income countries. In Nigeria and Myanmar, there are fewer than 4 physicians per 100,000 people while Norway and Switzerland have a ratio that is ten-fold higher.[56] Common barriers worldwide include lack of availability of health services and healthcare providers in the region, great physical distance between the home and health service facilities, high transportation costs, high treatment costs, and social norms and stigma toward accessing certain health services.[57]

With lifestyle factors such as diet and exercise rising to the top of preventable death statistics, the economics of healthy lifestyles are a growing concern. There is little question that positive lifestyle choices provide an investment in health throughout life.[58] To gauge success, traditional measures such as the quality-adjusted life year (QALY) are valuable. However, that method does not account for the cost of chronic conditions or future lost earnings because of poor health.[59] Developing economic models that would guide both private and public investments, and drive future policy to evaluate the efficacy of positive lifestyle choices on health, is a major topic for economists globally.

Americans spend over three trillion dollars a year on health care but have a higher rate of infant mortality, shorter life expectancies, and a higher rate of diabetes than other high-income nations because of negative lifestyle choices.[60] Despite these large costs, comparatively little is spent on prevention of lifestyle-caused conditions. The Journal of the American Medical Association estimates that $101 billion was spent in 2013 on the preventable disease of diabetes, and another $88 billion on heart disease.[61] In an effort to encourage healthy lifestyle choices, workplace wellness programs are on the rise, but the economics and effectiveness data are still evolving.[62]

Health insurance coverage impacts lifestyle choices. In a study by Sudano and Baker, even intermittent loss of coverage had negative effects on healthy choices.[63] The potential repeal of the Affordable Care Act (ACA) could significantly impact coverage for many Americans, as well as the Prevention and Public Health Fund, the nation's first and only mandatory funding stream dedicated to improving the public's health.[64] Also covered in the ACA is counseling on lifestyle prevention issues, such as weight management, alcohol use, and treatment for depression.[65] Policy makers can thus have substantial effects on the lifestyle choices made by Americans.

Because chronic illnesses predominate as a cause of death in the US and pathways for treating chronic illnesses are complex and multifaceted, prevention is a best practice approach to chronic disease when possible. In many cases, prevention requires mapping complex pathways[66] to determine the ideal point for intervention. In addition to efficacy, prevention is considered a cost-saving measure. Cost-effectiveness analysis of prevention is achievable, but it is impacted by the length of time it takes to see the effects and outcomes of intervention. This makes prevention efforts difficult to fund, particularly in strained financial contexts. Prevention potentially creates other costs as well, by extending the lifespan and thereby increasing opportunities for illness. In order to establish a reliable economics of prevention[67] for illnesses that are complicated in origin, knowing how best to assess prevention efforts, i.e. developing useful measures and appropriate scope, is required.

Overview

There is no general consensus as to whether or not preventive healthcare measures are cost-effective, but they increase the quality of life dramatically. There are varying views on what constitutes a "good investment." Some argue that preventive health measures should save more money than they cost, when factoring in treatment costs in the absence of such measures. Others argue in favor of "good value," or conferring significant health benefits even if the measures do not save money.[7][68] Furthermore, preventive health services are often described as one entity though they comprise a myriad of different services, each of which can individually lead to net costs, savings, or neither. Greater differentiation of these services is necessary to fully understand both the financial and health effects.[7]

A 2010 study reported that in the United States, vaccinating children, cessation of smoking, daily prophylactic use of aspirin, and screening of breast and colorectal cancers had the most potential to prevent premature death.[7] Preventive health measures that resulted in savings included vaccinating children and adults, smoking cessation, daily use of aspirin, and screening for issues with alcoholism, obesity, and vision failure.[7] These authors estimated that if usage of these services in the United States increased to 90% of the population, there would be net savings of $3.7 billion, which comprised only about 0.2% of the total 2006 United States healthcare expenditure.[7] Despite the potential for decreasing healthcare spending, utilization of healthcare resources in the United States still remains low, especially among Latinos and African-Americans.[69] Overall, preventive services are difficult to implement because healthcare providers have limited time with patients and must integrate a variety of preventive health measures from different sources.[69]

While these specific services bring about small net savings, not every preventive health measure saves more than it costs. A 1970s study showed that preventing heart attacks by treating hypertension early on with drugs actually did not save money in the long run. The money saved by avoiding treatment of heart attacks and strokes only amounted to about a quarter of the cost of the drugs.[70][71] Similarly, it was found that the cost of drugs or dietary changes to decrease high blood cholesterol exceeded the cost of subsequent heart disease treatment.[72][73] Due to these findings, some argue that rather than focusing healthcare reform efforts exclusively on preventive care, the interventions that bring about the highest level of health should be prioritized.[68]

Cohen et al. (2008) outline a few arguments made by skeptics of preventive healthcare. Many argue that preventive measures only cost less than future treatment when the proportion of the population that would become ill in the absence of prevention is fairly large.[8] The Diabetes Prevention Program Research Group conducted a 2012 study evaluating the costs and benefits (in quality-adjusted life-years, or QALYs) of lifestyle changes versus taking the drug metformin. They found that neither method brought about financial savings, but both were cost-effective nonetheless because they brought about an increase in QALYs.[74] In addition to scrutinizing costs, preventive healthcare skeptics also examine the efficiency of interventions. They argue that while many treatments of existing diseases involve use of advanced equipment and technology, in some cases, this is a more efficient use of resources than attempts to prevent the disease.[8] Cohen et al. (2008) suggest that the preventive measures most worth exploring and investing in are those that could benefit a large portion of the population to bring about cumulative and widespread health benefits at a reasonable cost.[8]
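Comparisons of cost and QALY gains like the one above are conventionally summarized by the incremental cost-effectiveness ratio (ICER): the extra cost of an intervention divided by the extra QALYs it yields, judged against a willingness-to-pay threshold. A minimal sketch with entirely hypothetical numbers (not the figures from the studies cited here):

```python
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio: extra dollars per extra QALY."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical example: a prevention program costs more than usual care but
# yields more QALYs. It is deemed cost-effective (though not cost-saving) if
# the ICER falls below a willingness-to-pay threshold, commonly cited as
# $50,000-$100,000 per QALY in the US.
ratio = icer(cost_new=12_000, cost_old=4_000, qaly_new=10.4, qaly_old=10.0)
print(round(ratio))  # 20000 dollars per QALY gained
```

Note the distinction this makes concrete: a positive ICER below the threshold means the intervention is cost-effective without being cost-saving; only a negative numerator (lower cost with equal or better outcomes) would make it cost-saving.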

There are at least four nationally implemented childhood obesity interventions in the United States: the Sugar-Sweetened Beverage excise tax (SSB), the TV AD program, active physical education (Active PE) policies, and early care and education (ECE) policies.[75] They each have similar goals of reducing childhood obesity. The effects of these interventions on BMI have been studied, and the cost-effectiveness analysis (CEA) has led to a better understanding of projected cost reductions and improved health outcomes.[76][77] The Childhood Obesity Intervention Cost-Effectiveness Study (CHOICES) was conducted to evaluate and compare the CEA of these four interventions.[75]

Gortmaker, S.L. et al. (2015) states: "The four initial interventions were selected by the investigators to represent a broad range of nationally scalable strategies to reduce childhood obesity using a mix of both policy and programmatic strategies... 1. an excise tax of $0.01 per ounce of sweetened beverages, applied nationally and administered at the state level (SSB), 2. elimination of the tax deductibility of advertising costs of TV advertisements for "nutritionally poor" foods and beverages seen by children and adolescents (TV AD), 3. state policy requiring all public elementary schools in which physical education (PE) is currently provided to devote 50% of PE class time to moderate and vigorous physical activity (Active PE), and 4. state policy to make early child educational settings healthier by increasing physical activity, improving nutrition, and reducing screen time (ECE)."

The CHOICES study found that the SSB, TV AD, and ECE interventions led to net cost savings. Both SSB and TV AD increased quality-adjusted life years and produced yearly tax revenue of 12.5 billion US dollars and 80 million US dollars, respectively.

Some challenges with evaluating the effectiveness of child obesity interventions include:

The cost-effectiveness of preventive care is a highly debated topic. While some economists argue that preventive care is valuable and potentially cost saving, others believe it is an inefficient waste of resources.[81] Preventive care comprises a variety of clinical services and programs, including annual doctor's check-ups, annual immunizations, and wellness programs.

Clinical Preventive Services & Programs

Research on preventive care addresses the questions of whether it is cost saving or cost effective and whether there is an economic evidence base for health promotion and disease prevention. The need for and interest in preventive care is driven by the imperative to reduce healthcare costs while improving quality of care and the patient experience. Preventive care can lead to improved health outcomes and potential cost savings. Services such as health assessments/screenings, prenatal care, and telehealth and telemedicine can reduce morbidity or mortality at low cost or with cost savings.[82][83] Specifically, health assessments/screenings have cost savings potential, with cost-effectiveness varying by screening and assessment type.[84] Inadequate prenatal care can lead to an increased risk of prematurity, stillbirth, and infant death.[85] Time is itself a scarce resource, and preventive care can help mitigate its costs.[86] Telehealth and telemedicine are options that have gained consumer interest, acceptance, and confidence and can improve quality of care and patient satisfaction.[87]

Understanding the Economics for Investment

There are benefits and trade-offs when considering investment in preventive care versus other types of clinical services. Preventive care can be a good investment as supported by the evidence base and can drive population health management objectives.[8][83] The concepts of cost saving and cost-effectiveness are different and both are relevant to preventive care. For example, preventive care that may not save money may still provide health benefits. Thus, there is a need to compare interventions relative to impact on health and cost.[88]

Preventive care transcends demographics and is applicable to people of every age. The Health Capital Theory underpins the importance of preventive care across the lifecycle and provides a framework for understanding the variances in health and health care that are experienced. It treats health as a stock that provides direct utility. Health depreciates with age, and the aging process can be countered through health investments. The theory further supports that individuals demand good health, that the demand for health investment is a derived demand (i.e. investment in health is due to the underlying demand for good health), and that the efficiency of the health investment process increases with knowledge (i.e. it is assumed that the more educated are more efficient consumers and producers of health).[89]

The prevalence elasticity of demand for prevention can also provide insights into the economics. Demand for preventive care can alter the prevalence rate of a given disease and further reduce or even reverse any further growth of prevalence.[86] Reduction in prevalence subsequently leads to reduction in costs.
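The feedback described above, where demand for prevention rises with disease prevalence and in turn dampens prevalence growth, can be illustrated with a toy simulation. All parameter values below are hypothetical and chosen only to make the dynamic visible; they are not drawn from the cited source:

```python
def simulate(prevalence=0.10, growth=0.05, elasticity=2.0, efficacy=0.5, periods=20):
    """Toy model of prevalence-elastic demand for prevention: each period,
    uptake of preventive care scales with current prevalence, and that
    uptake offsets the disease's underlying growth rate."""
    history = [prevalence]
    for _ in range(periods):
        uptake = min(1.0, elasticity * prevalence)   # demand rises with prevalence
        net_growth = growth - efficacy * uptake      # prevention offsets growth
        prevalence = max(0.0, prevalence * (1 + net_growth))
        history.append(prevalence)
    return history

h = simulate()
print(h[-1] < h[0])  # True: sufficiently elastic demand reverses prevalence growth
```

In this sketch, prevalence declines toward the level at which prevention uptake exactly offsets disease growth, mirroring the claim that prevalence-elastic demand can reduce or even reverse further growth in prevalence.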

Economics for Policy Action

There are a number of organizations and policy actions that are relevant when discussing the economics of preventive care services. The evidence base, viewpoints, and policy briefs from the Robert Wood Johnson Foundation, the Organisation for Economic Co-operation and Development (OECD), and efforts by the U.S. Preventive Services Task Force (USPSTF) all provide examples that improve the health and well-being of populations (e.g. preventive health assessments/screenings, prenatal care, and telehealth/telemedicine). The Patient Protection and Affordable Care Act (PPACA, ACA) has major influence on the provision of preventive care services, although it is currently under heavy scrutiny and review by the new administration. According to the Centers for Disease Control and Prevention (CDC), the ACA makes preventive care affordable and accessible through mandatory coverage of preventive services without a deductible, copayment, coinsurance, or other cost sharing.[90]

The U.S. Preventive Services Task Force (USPSTF), a panel of national experts in prevention and evidence-based medicine, works to improve health of Americans by making evidence-based recommendations about clinical preventive services.[91] They do not consider the cost of a preventive service when determining a recommendation. Each year, the organization delivers a report to Congress that identifies critical evidence gaps in research and recommends priority areas for further review.[92]

The National Network of Perinatal Quality Collaboratives (NNPQC), sponsored by the CDC, supports state-based perinatal quality collaboratives (PQCs) in measuring and improving upon health care and health outcomes for mothers and babies. These PQCs have contributed to improvements such as reduction in deliveries before 39 weeks, reductions in healthcare associated blood stream infections, and improvements in the utilization of antenatal corticosteroids.[93]

Telehealth and telemedicine have seen significant growth and development recently. The Center for Connected Health Policy (The National Telehealth Policy Resource Center) has produced multiple reports and policy briefs on the topic of telehealth and telemedicine and how they contribute to preventive services.[94]

Policy actions and provision of preventive services do not guarantee utilization. Reimbursement has remained a significant barrier to adoption due to variances in payer and state-level reimbursement policies and guidelines through government and commercial payers. Americans use preventive services at about half the recommended rate, and cost-sharing, such as deductibles, co-insurance, or copayments, also reduces the likelihood that preventive services will be used.[90] Further, despite the ACA's enhancement of Medicare benefits and preventive services, there were no effects on preventive service utilization, suggesting that other fundamental barriers exist.[95]

The Patient Protection and Affordable Care Act, also known simply as the Affordable Care Act or Obamacare, was passed and became law in the United States on March 23, 2010.[96] The finalized and newly ratified law was to address many issues in the U.S. healthcare system, including expansion of coverage, insurance market reforms, better quality, and the forecast of efficiency and costs.[97] Under the insurance market reforms, the act required that insurance companies no longer exclude people with pre-existing conditions, allow children to be covered on their parents' plan until the age of 26, and expand appeals dealing with reimbursement denials. The Affordable Care Act also banned the limits on coverage previously imposed by insurance companies, which were now required to include coverage for preventive health care services.[98] The U.S. Preventive Services Task Force has categorized and rated preventive health services as either A or B, and insurance companies must comply and provide full coverage for these services. In addition to grading the preventive health services appropriate for coverage, the U.S. Preventive Services Task Force has provided many recommendations to clinicians and insurers to promote better preventive care, ultimately providing better quality of care and lowering the burden of costs.[99]

Health Insurance and Preventive Care

Healthcare insurance companies are willing to pay for preventive care, despite the fact that patients are not acutely sick, in the hope that it will prevent them from developing a chronic disease later in life.[100] Today, health insurance plans offered through the Marketplace, mandated by the Affordable Care Act, are required to provide certain preventive care services free of charge to patients. Section 2713 of the Affordable Care Act specifies that all private Marketplace plans and all employer-sponsored private plans (except those grandfathered in) are required to cover, free of charge to patients, preventive care services that are ranked A or B by the US Preventive Services Task Force.[101][102] For example, the insurance company UnitedHealthcare publishes patient guidelines at the beginning of the year explaining its preventive care coverage.[103]

Evaluating Incremental Benefits of Preventive Care

Evaluating the incremental benefits of preventive care requires a longer time horizon than evaluating care for acutely ill patients. Model inputs such as the discount rate and time horizon can have significant effects on the results. One controversial subject is the Congressional Budget Office's use of a 10-year time frame to assess the cost-effectiveness of diabetes preventive services.[104]

Preventive care services mainly focus on chronic disease.[105] The Congressional Budget Office has indicated that further research on the economic impacts of obesity in the U.S. is needed before the CBO can estimate the budgetary consequences of preventive care. A bipartisan report published in May 2015 recognizes the potential of preventive care to improve patients' health at the individual and population levels while decreasing healthcare expenditure.[106]

An Economic Case for Preventive Health

Mortality from Modifiable Risk Factors

Chronic diseases such as heart disease, stroke, diabetes, obesity and cancer have become the most common and costly health problems in the United States. In 2014, it was projected that by 2023 the number of chronic disease cases would increase by 42%, resulting in $4.2 trillion in treatment costs and lost economic output.[107] Chronic diseases are also among the top ten leading causes of mortality.[108] They are driven by risk factors that are largely preventable. A sub-analysis of all deaths in the United States in the year 2000 revealed that almost half were attributable to preventable behaviors, including tobacco use, poor diet, physical inactivity and alcohol consumption.[109] More recent analysis reveals that heart disease and cancer alone accounted for nearly 46% of all deaths.[110] Modifiable risk factors are also responsible for a large morbidity burden, resulting in poor quality of life in the present and loss of future earning years. It is further estimated that by 2023, focused efforts on the prevention and treatment of chronic disease may result in 40 million fewer chronic disease cases, potentially reducing treatment costs by $220 billion.[107]

Childhood Vaccinations Reduce Health Care Costs

Childhood immunizations are largely responsible for the increase in life expectancy in the 20th century. From an economic standpoint, childhood vaccines demonstrate a very high return on investment.[109] According to Healthy People 2020, for every birth cohort that receives the routine childhood vaccination schedule, direct health care costs are reduced by $9.9 billion and society saves $33.4 billion in indirect costs.[111] The economic benefits of childhood vaccination extend beyond individual patients to insurance plans and vaccine manufacturers, all while improving the health of the population.[112]

Prevention and Health Capital Theory

The burden of preventable illness extends beyond the healthcare sector, incurring significant costs from lost productivity in the workforce. Indirect costs related to poor health behaviors and associated chronic disease cost U.S. employers billions of dollars each year.

According to the American Diabetes Association (ADA), medical costs for employees with diabetes are twice as high as for workers without diabetes. The indirect costs of diabetes stem from work-related absenteeism ($5 billion), reduced productivity at work ($20.8 billion), inability to work due to illness-related disability ($21.6 billion), and premature mortality ($18.5 billion). Reported estimates of the cost burden from increasingly high levels of overweight and obesity in the workforce vary,[113] with best estimates suggesting 450 million additional missed work days, resulting in $153 billion each year in lost productivity, according to the CDC Healthy Workforce program.

In the field of economics, the Health Capital model explains how individual investments in health can increase earnings by increasing the number of healthy days available to work and to earn income.[114] In this context, health can be treated both as a consumption good, wherein individuals desire health because it improves quality of life in the present, and as an investment good because of its potential to increase attendance and workplace productivity over time. Preventive health behaviors such as healthful diet, regular exercise, access to and use of well-care, avoiding tobacco, and limiting alcohol can be viewed as health inputs that result in both a healthier workforce and substantial cost savings.

Preventive Care and Quality Adjusted Life Years

Health benefits of preventive care measures can be described in terms of quality-adjusted life-years (QALYs) saved. A QALY takes into account length and quality of life, and is used to evaluate the cost-effectiveness of medical and preventive interventions. Classically, one year of perfect health is defined as 1 QALY and a year with any degree of less than perfect health is assigned a value between 0 and 1 QALY.[115] As an economic weighting system, the QALY can be used to inform personal decisions, to evaluate preventive interventions and to set priorities for future preventive efforts.

Cost-saving and cost-effective benefits of preventive care measures are well established. The Robert Wood Johnson Foundation evaluated the prevention cost-effectiveness literature, and found that many preventive measures meet the benchmark of <$100,000 per QALY and are considered to be favorably cost-effective. These include screenings for HIV and chlamydia, cancers of the colon, breast and cervix, vision screening, and screening for abdominal aortic aneurysms in men >60 in certain populations. Alcohol and tobacco screening were found to be cost-saving in some reviews and cost-effective in others. According to the RWJF analysis, two preventive interventions were found to save costs in all reviews: childhood immunizations and counseling adults on the use of aspirin.
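The QALY arithmetic described above can be sketched in a few lines of code. The function names and the screening numbers below are hypothetical, chosen only to illustrate how an intervention is weighed against a cost-per-QALY benchmark; they are not drawn from the RWJF review.

```python
# Illustrative cost-effectiveness calculation using QALYs.
# All specific figures here are made up for demonstration.

def qalys(years: float, quality_weight: float) -> float:
    """QALYs = life-years x quality weight (0 = death, 1 = perfect health)."""
    return years * quality_weight

def cost_per_qaly(incremental_cost: float, qalys_gained: float) -> float:
    """Incremental cost-effectiveness ratio (ICER) in dollars per QALY gained."""
    return incremental_cost / qalys_gained

# A hypothetical screening program costs $1,500 more per patient and raises
# the quality weight over 2 years from 0.80 to 0.85.
gained = qalys(2, 0.85) - qalys(2, 0.80)   # ~0.1 QALYs gained
icer = cost_per_qaly(1500, gained)          # ~$15,000 per QALY

# Judge against the $100,000-per-QALY cost-effectiveness benchmark.
print(f"QALYs gained: {gained:.2f}, ICER: ${icer:,.0f}/QALY")
print("cost-effective" if icer < 100_000 else "not cost-effective")
```

This is how a threshold like "<$100,000 per QALY" turns a clinical outcome into a yes/no coverage argument: the same intervention can flip between cost-effective and not as the quality weights or the time horizon change.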

Prevention in Minority Populations

Health disparities are increasing in the United States for chronic diseases such as obesity, diabetes, cancer, and cardiovascular disease. Populations at heightened risk for these health inequities include the growing proportion of racial and ethnic minorities: African Americans, American Indians, Hispanics/Latinos, Asian Americans, Alaska Natives and Pacific Islanders.[116]

According to Racial and Ethnic Approaches to Community Health (REACH), a national CDC program, non-Hispanic blacks currently have the highest rates of obesity (48%), and the risk of newly diagnosed diabetes is 77% higher among non-Hispanic blacks, 66% higher among Hispanics/Latinos and 18% higher among Asian Americans compared to non-Hispanic whites. Current U.S. population projections predict that more than half of Americans will belong to a minority group by 2044.[117] Without targeted preventive interventions, medical costs from chronic disease inequities will become unsustainable. Broadening health policies designed to improve delivery of preventive services for minority populations may help reduce the substantial medical costs caused by inequities in health care, resulting in a return on investment.

Policies of Prevention

Chronic disease is a population-level issue, and preventing it effectively requires population-level health efforts and national and state public policy, rather than individual-level efforts alone. The United States currently employs many public health policy efforts aligned with the preventive health efforts discussed above. For instance, the Centers for Disease Control and Prevention supports initiatives such as Health in All Policies and HI-5 (Health Impact in 5 Years), collaborative efforts that aim to consider prevention across sectors[118] and to address social determinants of health as a method of primary prevention for chronic disease.[119] Specific examples of programs targeting vaccination and obesity prevention in childhood are discussed in the sections that follow.

Policy Prevention of Obesity

Policies that address the obesity epidemic should be proactive and far-reaching, including a variety of stakeholders both in healthcare and in other sectors. Recommendations from the Institute of Medicine in 2012 suggest that concerted action be taken across and within five environments (physical activity (PA), food and beverage, marketing and messaging, healthcare and worksites, and schools) and all sectors of society (including government, business and industry, schools, child care, urban planning, recreation, transportation, media, public health, agriculture, communities, and home) in order for obesity prevention efforts to truly be successful.[120]

There are dozens of current policies acting at the federal, state, local and school levels. Most states require 150 minutes of physical education per week at school, a policy recommended by the National Association for Sport and Physical Education. Some cities, including Philadelphia, employ a sugary drink tax. Philadelphia's tax, part of an amendment to Title 19 of the Philadelphia Code (Finance, Taxes and Collections; Chapter 19-4100, Sugar-Sweetened Beverage Tax) approved in 2016, establishes an excise tax of $0.015 per fluid ounce on distributors of beverages sweetened with caloric or non-caloric sweeteners.[121] Distributors are required to file a return with the department, and the department can collect the tax, among other responsibilities.
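As a quick illustration of the ordinance's arithmetic, the sketch below applies the stated $0.015-per-fluid-ounce rate to a made-up shipment; the shipment figures and the helper function are hypothetical, not part of the Philadelphia Code.

```python
# Sketch of the Philadelphia sweetened-beverage excise tax arithmetic.
# The rate is the one stated in the ordinance; the shipment is invented.

RATE_PER_FL_OZ = 0.015  # dollars per fluid ounce, per Chapter 19-4100

def tax_due(cases: int, bottles_per_case: int, fl_oz_per_bottle: float) -> float:
    """Excise tax owed by a distributor on one shipment."""
    total_oz = cases * bottles_per_case * fl_oz_per_bottle
    return total_oz * RATE_PER_FL_OZ

# 100 cases of 24 twelve-ounce cans = 28,800 fl oz of taxable beverage.
print(f"${tax_due(100, 24, 12):.2f}")  # prints $432.00
```

Note that because the tax falls on distributors per ounce rather than on retail price, the same dollar amount applies whether the beverage is a premium soda or a discount one.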

These policies can be a source of tax credits. For example, under the Philadelphia policy, businesses can apply for tax credits with the revenue department on a first-come, first-served basis. This applies until the total amount of credits for a particular year reaches one million dollars.[122]

Recently, advertisements for food and beverages directed at children have received much attention. The Children's Food and Beverage Advertising Initiative (CFBAI) is a self-regulatory program of the food industry. Each participating company makes a public pledge detailing its commitment to advertise to children under 12 only foods that meet certain nutritional criteria.[123] The program's policies are written by the Council of Better Business Bureaus. The Robert Wood Johnson Foundation funded research to test the efficacy of the CFBAI; the results showed progress in terms of decreased advertising of food products targeting children and adolescents.[124]

To explore other programs and initiatives related to policies of childhood obesity, visit the following organizations and online databases: the U.S. Department of Agriculture, the Robert Wood Johnson Foundation-supported Bridging the Gap program, the National Association of County and City Health Officials, the Yale Rudd Center for Food Policy & Obesity, the Centers for Disease Control and Prevention's Chronic Disease State Policy Tracking System, the National Conference of State Legislatures, the Prevention Institute's ENACT local policy database, the Organisation for Economic Co-operation and Development (OECD), and the U.S. Preventive Services Task Force (USPSTF).

Childhood Immunization Policies

Despite nationwide controversies over childhood vaccination and immunization, there are policies and programs at the federal, state, local and school levels outlining vaccination requirements. All states require children to be vaccinated against certain communicable diseases as a condition of school attendance, though 18 states currently allow exemptions for philosophical or moral reasons. Diseases for which vaccinations form part of the standard ACIP vaccination schedule include diphtheria, tetanus, pertussis (whooping cough), poliomyelitis (polio), measles, mumps, rubella, Haemophilus influenzae type b, hepatitis B, influenza, and pneumococcal infections.[125] These schedules can be viewed on the CDC website.[126]

The CDC website describes a federally funded program, Vaccines for Children (VFC), which provides vaccines at no cost to children who might not otherwise be vaccinated because of inability to pay. Additionally, the Advisory Committee on Immunization Practices (ACIP)[127] is an expert vaccination advisory board that informs vaccination policy and guides on-going recommendations to the CDC, incorporating the most up-to-date cost-effectiveness and risk-benefit evidence in its recommendations.

An Economic Case Conclusion

There are economic and health-related arguments for preventive healthcare. Direct and indirect medical costs related to preventable chronic disease are high and will continue to rise with an aging and increasingly diverse U.S. population. Governments at the federal, state, local and school levels have acknowledged this and created programs and policies to support chronic disease prevention, notably in childhood, focusing on obesity prevention and vaccination. Economically, by increasing QALYs and decreasing lost productivity over a lifetime, existing and innovative prevention interventions demonstrate a high return on investment and are expected to yield substantial healthcare cost savings over time.

Original post:
Preventive healthcare - Wikipedia

Posted in Preventative Medicine | Comments Off on Preventive healthcare – Wikipedia

Stem Cell Transplantation | Leukemia and Lymphoma Society

Posted: June 21, 2018 at 11:46 am

Stem cell transplantation, sometimes referred to as bone marrow transplant, is a procedure that replaces unhealthy blood-forming cells with healthy cells. Stem cell transplantation allows doctors to give large doses of chemotherapy or radiation therapy to increase the chance of eliminating blood cancer in the marrow and then restoring normal blood cell production. Researchers continue to improve stem cell transplantation procedures, making them an option for more patients.

The basis for stem cell transplantation is that blood cells (red cells, white cells and platelets) and immune cells (lymphocytes) arise from the stem cells, which are present in marrow, peripheral blood and cord blood. Intense chemotherapy or radiation therapy kills the patient's stem cells. This stops the stem cells from making enough blood and immune cells.

The patient receives high-dose chemotherapy and/or radiation therapy, followed by the stem cell transplant. A donor's stem cells are then transfused into the patient's blood. The transplanted stem cells go from the patient's blood to his or her marrow.

The donor is usually a brother or a sister if one is available and if he or she is a match for the patient. Otherwise, an unrelated person with stem cells that match the patient's tissue type can be used. These matched unrelated donors (MUDs) can be found through stem cell donor banks or registries.

The new cells grow and provide a supply of red cells, white cells (including immune cells) and platelets. The donated stem cells make immune cells that are not totally matched with the patient's cells. (Patients and donors are matched to major tissue types but not minor tissue types.) For this reason, the donor immune cells may recognize the patient's cancer cells' minor tissue types as foreign and kill the cancer cells. This is called "graft versus cancer effect."

If you're a candidate for a stem cell transplant, your doctor will usually recommend one of three types:

A fourth type of stem cell transplantation, syngeneic transplantation, is much less common than the other three: it is used only when the donor is the patient's identical twin, so the donor and recipient have identical genetic makeup and tissue type.

Your doctor considers several factors when deciding whether you're a candidate for stem cell transplantation. For allogeneic stem cell transplantation, your doctor takes into account:

When considering whether you're a candidate for an autologous stem cell transplantation, your doctor takes into account:

Allogeneic stem cell transplant is more successful in younger patients than older patients. About three-quarters of the people who develop a blood cancer are older than 50. In general, older individuals are more likely to:

However, the above factors are generalizations, and there's no specific age cutoff for stem cell transplantation.

Other factors and the response of the underlying disease to initial cancer therapy determine when your doctor considers transplant options. Some patients undergo stem cell transplantation in first remission. For other patients, it's recommended later in the course of treatment for relapsed or refractory disease.

Before you undergo stem cell transplantation, you'll need pretreatment, also called conditioning treatment. You'll be given high-dose chemotherapy or radiation therapy to:

Pretreatment for a reduced-intensity allogeneic stem cell transplant involves lower dosages of chemotherapy drugs or radiation than for a standard allogeneic stem cell transplant.

Donor stem cells are transferred to patients by infusion, a procedure similar to a blood transfusion. The cells are delivered through a catheter (a thin, flexible tube) into a large blood vessel, usually in the chest.

Infusing the stem cells usually takes several hours. You'll be checked frequently for signs of fever, chills, hives, a drop in blood pressure or shortness of breath. You may experience side effects such as headache, nausea, flushing and shortness of breath from the cryopreservative used to freeze the stem cells. If so, you'll be treated and then continue infusion.

Go here to see the original:
Stem Cell Transplantation | Leukemia and Lymphoma Society

Posted in Stem Cells | Comments Off on Stem Cell Transplantation | Leukemia and Lymphoma Society

Stem Cell Treatment | Arizona | Stem Cell Rejuvenation Center

Posted: June 21, 2018 at 11:46 am

ADIPOSE STEM CELL THERAPIES AND TREATMENTS

PHOENIX ARIZONA | (602) 439-0000

WE PLAY AN ESSENTIAL ROLE IN IMPROVING THE LIVES OF PATIENTS FROM AROUND THE WORLD


Please Note: Although we have supplied links to the research journals above on the use of stem cells for specific conditions, we are not saying that any of these studies would relate to your particular condition, nor that it would even be an effective treatment. Our Autologous Stem Cell Therapy is not an FDA approved treatment for any condition. We provide stem cell therapy (less than manipulated) as a service and as a practice of medicine only. Please see the FAQ page for more information. These journal articles are for educational purposes only and are not intended to be used to sell or promote our therapy.

MAKING A POSITIVE IMPACT AROUND THE WORLD

2017 Stem Cell Rejuvenation Center

7600 N 15th St., Suite 102, Phoenix, AZ 85020 USA

Telephone: (602) 439-0000, Fax: (602) 439-0021

Here is the original post:
Stem Cell Treatment | Arizona | Stem Cell Rejuvenation Center

Posted in Stem Cells | Comments Off on Stem Cell Treatment | Arizona | Stem Cell Rejuvenation Center

Genetic Counseling – School of Medicine | University of …

Posted: June 21, 2018 at 11:46 am

What does it mean to be a genetic counseling student?

At the University of South Carolina it means you become part of the team from day one: an engaged learner in our genetics center. You'll have experienced faculty who are open-door mentors in your preparation for this career.

You'll have access, in the classroom and in the clinic, to the geneticist and genetic counselor faculty in our clinical rotation network of twelve genetic centers. The world of genetic counseling will unfold for you in two very busy years, preparing you to take on the dozens of roles open to genetic counselors today.

Rigorous coursework, community service, challenging clinical rotations and a research-based thesis will provide opportunity for tremendous professional growth.

We've been perfecting our curriculum for more than 30 years to connect the knowledge with the skills you'll need as a genetic counselor. Our reputation for excellence is known at home and abroad. We carefully review more than 140 applications per year to select the nine students who will graduate from the School of Medicine Genetic Counseling Program. Our alumni are our proudest accomplishment and work in the best genetic centers throughout the country. They build on our foundation to achieve goals in clinical care, education, research and industry beyond what we imagined.

First in the Southeast and tenth in the nation, we are one of 39 accredited programs in the United States. We have graduated more than 200 genetic counselors, many of whom are leading the profession today.

We've received highly acclaimed Commendations for Excellence from the South Carolina Commission on Higher Education. American Board of Genetic Counseling accreditation was achieved in 2000, with reaccreditation in 2006; most recently, Accreditation Council for Genetic Counseling reaccreditation was awarded for 2014-2022.

You'll have the chance to form lifelong partnerships with our core and clinical rotation faculty. Build your professional network with geneticists and genetic counselors throughout the Southeast.

One of our program's greatest assets is our alumni. This dedicated group regularly teaches and mentors our students, serves on our advisory board, and raises money for our endowment. You'll enjoy the instant connection when meeting other USC Genetic Counseling graduates. As a student, you'll benefit from the alumni network and all they have to offer you. Check out our Facebook group.

See the rest here:
Genetic Counseling - School of Medicine | University of ...

Posted in Genetic medicine | Comments Off on Genetic Counseling – School of Medicine | University of …

transhumanism | Definition, Origins, Characteristics …

Posted: June 21, 2018 at 11:45 am

Transhumanism, social and philosophical movement devoted to promoting the research and development of robust human-enhancement technologies. Such technologies would augment or increase human sensory reception, emotive ability, or cognitive capacity as well as radically improve human health and extend human life spans. Such modifications resulting from the addition of biological or physical technologies would be more or less permanent and integrated into the human body.

The term transhumanism was coined by English biologist and philosopher Julian Huxley in his 1957 essay of the same name. Huxley referred principally to improving the human condition through social and cultural change, but the essay and the name have been adopted as seminal by the transhumanist movement, which emphasizes material technology. Huxley held that, although humanity had naturally evolved, it was now possible for social institutions to supplant evolution in refining and improving the species. The ethos of Huxley's essay, if not its letter, can be located in transhumanism's commitment to assuming the work of evolution, but through technology rather than society.

The movement's adherents tend to be libertarian and employed in high technology or in academia. Its principal proponents have been prominent technologists like American computer scientist and futurist Ray Kurzweil and scientists like Austrian-born Canadian computer scientist and roboticist Hans Moravec and American nanotechnology researcher Eric Drexler, with the addition of a small but influential contingent of thinkers such as American philosopher James Hughes and Swedish philosopher Nick Bostrom. The movement has evolved since its beginnings as a loose association of groups dedicated to extropianism (a philosophy devoted to the transcendence of human limits). Transhumanism is principally divided between adherents of two visions of post-humanity: one in which technological and genetic improvements have created a distinct species of radically enhanced humans, and the other in which greater-than-human machine intelligence emerges.

The membership of the transhumanist movement tends to split in an additional way. One prominent strain of transhumanism argues that social and cultural institutions, including national and international governmental organizations, will be largely irrelevant to the trajectory of technological development. Market forces and the nature of technological progress will drive humanity to approximately the same end point regardless of social and cultural influences. That end point is often referred to as the singularity, a metaphor drawn from astrophysics, referring to the point of hyperdense material at the centre of a black hole that generates its intense gravitational pull. Among transhumanists, the singularity is understood as the point at which artificial intelligence surpasses that of humanity, allowing the convergence of human and machine consciousness. That convergence will herald an increase in human consciousness, physical strength, emotional well-being, and overall health, and will greatly extend the length of human lifetimes.

The second strain of transhumanism holds a contrasting view, that social institutions (such as religion, traditional notions of marriage and child rearing, and Western perspectives of freedom) not only can influence the trajectory of technological development but could ultimately retard or halt it. Bostrom and British philosopher David Pearce founded the World Transhumanist Association in 1998 as a nonprofit organization dedicated to working with those social institutions to promote and guide the development of human-enhancement technologies and to combat those social forces seemingly dedicated to halting such technological progress.

Read more from the original source:
transhumanism | Definition, Origins, Characteristics ...

Posted in Transhumanism | Comments Off on transhumanism | Definition, Origins, Characteristics …

genetics | History, Biology, Timeline, & Facts …

Posted: June 21, 2018 at 11:45 am

Genetics, study of heredity in general and of genes in particular. Genetics forms one of the central pillars of biology and overlaps with many other areas, such as agriculture, medicine, and biotechnology.

Since the dawn of civilization, humankind has recognized the influence of heredity and applied its principles to the improvement of cultivated crops and domestic animals. A Babylonian tablet more than 6,000 years old, for example, shows pedigrees of horses and indicates possible inherited characteristics. Other old carvings show cross-pollination of date palm trees. Most of the mechanisms of heredity, however, remained a mystery until the 19th century, when genetics as a systematic science began.

Genetics arose out of the identification of genes, the fundamental units responsible for heredity. Genetics may be defined as the study of genes at all levels, including the ways in which they act in the cell and the ways in which they are transmitted from parents to offspring. Modern genetics focuses on the chemical substance that genes are made of, called deoxyribonucleic acid, or DNA, and the ways in which it affects the chemical reactions that constitute the living processes within the cell. Gene action depends on interaction with the environment. Green plants, for example, have genes containing the information necessary to synthesize the photosynthetic pigment chlorophyll that gives them their green colour. Chlorophyll is synthesized in an environment containing light because the gene for chlorophyll is expressed only when it interacts with light. If a plant is placed in a dark environment, chlorophyll synthesis stops because the gene is no longer expressed.

Genetics as a scientific discipline stemmed from the work of Gregor Mendel in the middle of the 19th century. Mendel suspected that traits were inherited as discrete units, and, although he knew nothing of the physical or chemical nature of genes at the time, his units became the basis for the development of the present understanding of heredity. All present research in genetics can be traced back to Mendel's discovery of the laws governing the inheritance of traits. The word genetics was introduced in 1905 by English biologist William Bateson, who was one of the discoverers of Mendel's work and who became a champion of Mendel's principles of inheritance.


Although scientific evidence for patterns of genetic inheritance did not appear until Mendel's work, history shows that humankind must have been interested in heredity long before the dawn of civilization. Curiosity must first have been based on human family resemblances, such as similarity in body structure, voice, gait, and gestures. Such notions were instrumental in the establishment of family and royal dynasties. Early nomadic tribes were interested in the qualities of the animals that they herded and domesticated and, undoubtedly, bred selectively. The first human settlements that practiced farming appear to have selected crop plants with favourable qualities. Ancient tomb paintings show racehorse breeding pedigrees containing clear depictions of the inheritance of several distinct physical traits in the horses. Despite this interest, the first recorded speculations on heredity did not exist until the time of the ancient Greeks; some aspects of their ideas are still considered relevant today.

Hippocrates (c. 460–c. 375 bce), known as the father of medicine, believed in the inheritance of acquired characteristics, and, to account for this, he devised the hypothesis known as pangenesis. He postulated that all organs of the body of a parent gave off invisible seeds, which were like miniaturized building components and were transmitted during sexual intercourse, reassembling themselves in the mother's womb to form a baby.

Aristotle (384–322 bce) emphasized the importance of blood in heredity. He thought that the blood supplied generative material for building all parts of the adult body, and he reasoned that blood was the basis for passing on this generative power to the next generation. In fact, he believed that the male's semen was purified blood and that a woman's menstrual blood was her equivalent of semen. These male and female contributions united in the womb to produce a baby. The blood contained some type of hereditary essences, but he believed that the baby would develop under the influence of these essences, rather than being built from the essences themselves.

Aristotle's ideas about the role of blood in procreation were probably the origin of the still prevalent notion that somehow the blood is involved in heredity. Today people still speak of certain traits as being "in the blood" and of "blood lines" and "blood ties." The Greek model of inheritance, in which a teeming multitude of substances was invoked, differed from that of the Mendelian model. Mendel's idea was that distinct differences between individuals are determined by differences in single yet powerful hereditary factors. These single hereditary factors were identified as genes. Copies of genes are transmitted through sperm and egg and guide the development of the offspring. Genes are also responsible for reproducing the distinct features of both parents that are visible in their children.

In the two millennia between the lives of Aristotle and Mendel, few new ideas were recorded on the nature of heredity. In the 17th and 18th centuries the idea of preformation was introduced. Scientists using the newly developed microscopes imagined that they could see miniature replicas of human beings inside sperm heads. French biologist Jean-Baptiste Lamarck invoked the idea of the inheritance of acquired characters, not as an explanation for heredity but as a model for evolution. He lived at a time when the fixity of species was taken for granted, yet he maintained that this fixity was only found in a constant environment. He enunciated the law of use and disuse, which states that when certain organs become specially developed as a result of some environmental need, then that state of development is hereditary and can be passed on to progeny. He believed that in this way, over many generations, giraffes could arise from deerlike animals that had to keep stretching their necks to reach high leaves on trees.

British naturalist Alfred Russel Wallace independently postulated the theory of evolution by natural selection. However, Charles Darwin's observations during his circumnavigation of the globe aboard the HMS Beagle (1831–36) provided evidence for natural selection and his suggestion that humans and animals shared a common ancestry. Many scientists at the time believed in a hereditary mechanism that was a version of the ancient Greek idea of pangenesis, and Darwin's ideas did not appear to fit with the theory of heredity that sprang from the experiments of Mendel.

Before Gregor Mendel, theories for a hereditary mechanism were based largely on logic and speculation, not on experimentation. In his monastery garden, Mendel carried out a large number of cross-pollination experiments between variants of the garden pea, which he obtained as pure-breeding lines. He crossed peas with yellow seeds to those with green seeds and observed that the progeny seeds (the first generation, F1) were all yellow. When the F1 individuals were self-pollinated or crossed among themselves, their progeny (F2) showed a ratio of 3:1 (3/4 yellow and 1/4 green). He deduced that, since the F2 generation contained some green individuals, the determinants of greenness must have been present in the F1 generation, although they were not expressed because yellow is dominant over green. From the precise mathematical 3:1 ratio (of which he found several other examples), he deduced not only the existence of discrete hereditary units (genes) but also that the units were present in pairs in the pea plant and that the pairs separated during gamete formation. Hence, the two original lines of pea plants were proposed to be YY (yellow) and yy (green). The gametes from these were Y and y, thereby producing an F1 generation of Yy that were yellow in colour because of the dominance of Y. In the F1 generation, half the gametes were Y and the other half were y, making the F2 generation produced from random mating 1/4 YY, 1/2 Yy, and 1/4 yy, thus explaining the 3:1 ratio. The forms of the pea colour genes, Y and y, are called alleles.
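
Mendel's monohybrid arithmetic can be checked by brute-force counting. The sketch below (plain Python; `cross` is an illustrative helper, not anything Mendel used) enumerates the four equally likely allele pairings of each cross and recovers the 1:2:1 genotype and 3:1 phenotype ratios:

```python
from itertools import product
from collections import Counter

def cross(parent1, parent2):
    """Enumerate the equally likely offspring genotypes of a cross.

    Each parent contributes one of its two alleles per gamete; sorting
    the pair ('Y' sorts before 'y') normalizes Yy and yY to one genotype.
    """
    offspring = Counter()
    for allele1, allele2 in product(parent1, parent2):
        offspring["".join(sorted(allele1 + allele2))] += 1
    return offspring

# F1: pure-breeding yellow (YY) x pure-breeding green (yy) -> all Yy (yellow)
f1 = cross("YY", "yy")

# F2: self-cross of the F1 heterozygotes -> 1 YY : 2 Yy : 1 yy
f2 = cross("Yy", "Yy")

yellow = f2["YY"] + f2["Yy"]  # both genotypes show the dominant yellow phenotype
green = f2["yy"]
print(f"{yellow}:{green}")    # 3:1
```

The 3:1 phenotype ratio falls out because the heterozygote Yy is indistinguishable from YY once dominance masks the y allele.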

Mendel also analyzed pure lines that differed in pairs of characters, such as seed colour (yellow versus green) and seed shape (round versus wrinkled). The cross of yellow round seeds with green wrinkled seeds resulted in an F1 generation that were all yellow and round, revealing the dominance of the yellow and round traits. However, the F2 generation produced by self-pollination of F1 plants showed a ratio of 9:3:3:1 (9/16 yellow round, 3/16 yellow wrinkled, 3/16 green round, and 1/16 green wrinkled; note that a 9:3:3:1 ratio is simply two 3:1 ratios combined). From this result and others like it, he deduced the independent assortment of separate gene pairs at gamete formation.
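
The 9:3:3:1 figure is likewise just counting: with independent assortment, an F1 YyRr plant makes four equally likely gametes (YR, Yr, yR, yr), and the 16 gamete pairings sort into four phenotype classes. A minimal sketch (the symbols R/r for round versus wrinkled are an arbitrary choice made here, not Mendel's notation):

```python
from itertools import product
from collections import Counter

# With independent assortment, an F1 YyRr plant makes gametes YR, Yr, yR, yr.
f1_gametes = ["".join(g) for g in product("Yy", "Rr")]

f2 = Counter()
for g1, g2 in product(f1_gametes, repeat=2):
    colour = "yellow" if "Y" in g1[0] + g2[0] else "green"   # Y dominant over y
    shape = "round" if "R" in g1[1] + g2[1] else "wrinkled"  # R dominant over r
    f2[(colour, shape)] += 1

print(f2[("yellow", "round")], f2[("yellow", "wrinkled")],
      f2[("green", "round")], f2[("green", "wrinkled")])  # 9 3 3 1
```

Because the two genes assort independently, the 9:3:3:1 result is just the product of two independent 3:1 ratios.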

Mendel's success can be attributed in part to his classic experimental approach. He chose his experimental organism well and performed many controlled experiments to collect data. From his results, he developed brilliant explanatory hypotheses and went on to test these hypotheses experimentally. Mendel's methodology established a prototype for genetics that is still used today for gene discovery and understanding the genetic properties of inheritance.

Mendel's genes were only hypothetical entities, factors that could be inferred to exist in order to explain his results. The 20th century saw tremendous strides in the development of the understanding of the nature of genes and how they function. Mendel's publications lay unmentioned in the research literature until 1900, when the same conclusions were reached by several other investigators. Then there followed hundreds of papers showing Mendelian inheritance in a wide array of plants and animals, including humans. It seemed that Mendel's ideas were of general validity. Many biologists noted that the inheritance of genes closely paralleled the inheritance of chromosomes during the nuclear divisions of meiosis, which occur in the cell divisions just prior to gamete formation.

It seemed that genes were parts of chromosomes. In 1910 this idea was strengthened through the demonstration of parallel inheritance of certain Drosophila (a type of fruit fly) genes on sex-determining chromosomes by American zoologist and geneticist Thomas Hunt Morgan. Morgan and one of his students, Alfred Henry Sturtevant, showed not only that certain genes seemed to be linked on the same chromosome but that the distance between genes on the same chromosome could be calculated by measuring the frequency at which new chromosomal combinations arose (these were proposed to be caused by chromosomal breakage and reunion, also known as crossing over). In 1916 another student of Morgan's, Calvin Bridges, used fruit flies with an extra chromosome to prove beyond reasonable doubt that the only way to explain the abnormal inheritance of certain genes was if they were part of the extra chromosome. American geneticist Hermann Joseph Muller showed that new alleles (called mutations) could be produced at high frequencies by treating cells with X-rays, the first demonstration of an environmental mutagenic agent (mutations can also arise spontaneously). In 1931 American botanist Harriet Creighton and American scientist Barbara McClintock demonstrated that new allelic combinations of linked genes were correlated with physically exchanged chromosome parts.
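
Sturtevant's mapping logic is simple arithmetic: the percentage of recombinant offspring between two linked genes is taken as their map distance (in map units, or centimorgans), and distances along a chromosome are roughly additive. A toy illustration with hypothetical testcross counts (not Morgan's actual data):

```python
def map_distance(recombinants, total):
    """Recombination frequency as a percentage = map units (centimorgans)."""
    return 100.0 * recombinants / total

# Hypothetical testcross counts for three linked genes A, B, C
ab = map_distance(85, 1000)    # 8.5 map units between A and B
bc = map_distance(32, 1000)    # 3.2 map units between B and C
ac = map_distance(117, 1000)   # 11.7 map units between A and C

# Because ab + bc is (approximately) ac, the inferred gene order is A - B - C
assert abs((ab + bc) - ac) < 1e-9
```

In real data the A-C distance comes out slightly smaller than the sum, because double crossovers between A and C go undetected; the toy numbers above ignore that.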

In 1908 British physician Archibald Garrod proposed the important idea that the human disease alkaptonuria, and certain other hereditary diseases, were caused by inborn errors of metabolism, suggesting for the first time that genes had molecular action at the cell level. Molecular genetics did not begin in earnest until 1941 when American geneticist George Beadle and American biochemist Edward Tatum showed that the genes they were studying in the fungus Neurospora crassa acted by coding for catalytic proteins called enzymes. Subsequent studies in other organisms extended this idea to show that genes generally code for proteins. Soon afterward, American bacteriologist Oswald Avery, Canadian American geneticist Colin M. MacLeod, and American biologist Maclyn McCarty showed that bacterial genes are made of DNA, a finding that was later extended to all organisms.

A major landmark was attained in 1953 when American geneticist and biophysicist James D. Watson and British biophysicists Francis Crick and Maurice Wilkins devised a double helix model for DNA structure. This model showed that DNA was capable of self-replication by separating its complementary strands and using them as templates for the synthesis of new DNA molecules. Each of the intertwined strands of DNA was proposed to be a chain of chemical groups called nucleotides, of which there were known to be four types. Because proteins are strings of amino acids, it was proposed that a specific nucleotide sequence of DNA could contain a code for an amino acid sequence and hence protein structure. In 1955 American molecular biologist Seymour Benzer, extending earlier studies in Drosophila, showed that the mutant sites within a gene could be mapped in relation to each other. His linear map indicated that the gene itself is a linear structure.
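
The self-replication logic of the model — each strand acts as a template that dictates its complement — is easy to express as code. A minimal sketch using the DNA pairing rules (A pairs with T, G pairs with C):

```python
PAIRING = {"A": "T", "T": "A", "G": "C", "C": "G"}  # Watson-Crick base pairs

def complement_strand(strand):
    """Build the new strand templated by an existing one."""
    return "".join(PAIRING[base] for base in strand)

template = "ATGCCGTA"
new_strand = complement_strand(template)
print(new_strand)                  # TACGGCAT

# Templating the copy regenerates the original sequence, which is why
# strand separation suffices for faithful self-replication:
assert complement_strand(new_strand) == template
```

(The sketch ignores strand directionality, i.e. the antiparallel 5'-to-3' orientation of the two strands, which the full model also specifies.)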

In 1958 the strand-separation method for DNA replication (called the semiconservative method) was demonstrated experimentally for the first time by American molecular biologist Matthew Meselson and American geneticist Franklin W. Stahl. In 1961 Crick and South African biologist Sydney Brenner showed that the genetic code must be read in triplets of nucleotides, called codons. American geneticist Charles Yanofsky showed that the positions of mutant sites within a gene matched perfectly the positions of altered amino acids in the amino acid sequence of the corresponding protein. In 1966 the complete genetic code of all 64 possible triplet coding units (codons), and the specific amino acids they code for, was deduced by American biochemists Marshall Nirenberg and Har Gobind Khorana. Subsequent studies in many organisms showed that the double helical structure of DNA, the mode of its replication, and the genetic code are the same in virtually all organisms, including plants, animals, fungi, bacteria, and viruses. In 1961 French biologist François Jacob and French biochemist Jacques Monod established the prototypical model for gene regulation by showing that bacterial genes can be turned on (initiating transcription into RNA and protein synthesis) and off through the binding action of regulatory proteins to a region just upstream of the coding region of the gene.
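
That the code must use at least triplets is itself a counting argument: pairs of the four nucleotides give only 4² = 16 combinations, too few for 20 amino acids, while triplets give 4³ = 64 — the 64 codons whose amino acid assignments Nirenberg and Khorana completed. The count can be verified directly:

```python
from itertools import product

BASES = "ACGU"  # RNA nucleotides, since the code is read from messenger RNA

doublets = ["".join(p) for p in product(BASES, repeat=2)]
codons = ["".join(p) for p in product(BASES, repeat=3)]

print(len(doublets))  # 16 -- fewer than the 20 amino acids
print(len(codons))    # 64 -- enough for all 20, with redundancy left over
```

The surplus of 64 codons over 20 amino acids is why the code is degenerate: most amino acids are specified by more than one codon.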

Technical advances have played an important role in the advance of genetic understanding. In 1970 American microbiologists Daniel Nathans and Hamilton Othanel Smith discovered a specialized class of enzymes (called restriction enzymes) that cut DNA at specific nucleotide target sequences. That discovery allowed American biochemist Paul Berg in 1972 to make the first artificial recombinant DNA molecule by isolating DNA molecules from different sources, cutting them, and joining them together in a test tube. These advances allowed individual genes to be cloned (amplified to a high copy number) by splicing them into self-replicating DNA molecules, such as plasmids (extragenomic circular DNA elements) or viruses, and inserting these into living bacterial cells. From these methodologies arose the field of recombinant DNA technology that presently dominates molecular genetics. In 1977 two different methods were invented for determining the nucleotide sequence of DNA: one by American molecular biologists Allan Maxam and Walter Gilbert and the other by English biochemist Fred Sanger. Such technologies made it possible to examine the structure of genes directly by nucleotide sequencing, resulting in the confirmation of many of the inferences about genes originally made indirectly.
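
A restriction enzyme's behaviour — scan for a fixed recognition sequence and cut there — can be sketched as simple string processing. This toy version uses the well-known GAATTC recognition sequence of EcoRI, but for simplicity it cuts cleanly at the start of each site rather than making the real enzyme's staggered ("sticky end") cut:

```python
def digest(dna, site):
    """Split a DNA string at every occurrence of a recognition site.

    Toy model: cuts at the start of each site; real restriction enzymes
    cut at a fixed offset within the site, often leaving staggered ends.
    """
    fragments = []
    start = 0
    pos = dna.find(site)
    while pos != -1:
        fragments.append(dna[start:pos])
        start = pos
        pos = dna.find(site, pos + 1)
    fragments.append(dna[start:])
    return fragments

# GAATTC is the recognition sequence of EcoRI
print(digest("CCGAATTCGGTTGAATTCAA", "GAATTC"))
# ['CC', 'GAATTCGGTT', 'GAATTCAA']
```

Because every molecule with the same sequence is cut at the same positions, a digest yields reproducible fragments — the property that made restriction enzymes the workhorse of recombinant DNA work.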

In the 1970s Canadian biochemist Michael Smith revolutionized the art of redesigning genes by devising a method for inducing specifically tailored mutations at defined sites within a gene, creating a technique known as site-directed mutagenesis. In 1983 American biochemist Kary B. Mullis invented the polymerase chain reaction, a method for rapidly detecting and amplifying a specific DNA sequence without cloning it. In the last decade of the 20th century, progress in recombinant DNA technology and in the development of automated sequencing machines led to the elucidation of complete DNA sequences of several viruses, bacteria, plants, and animals. In 2001 the complete sequence of human DNA, approximately three billion nucleotide pairs, was made public.

A time line of important milestones in the history of genetics is provided in the table.

Classical genetics, which remains the foundation for all other areas in genetics, is concerned primarily with the method by which genetic traits are transmitted in plants and animals. These traits are classified as dominant (always expressed), recessive (subordinate to a dominant trait), intermediate (partially expressed), or polygenic (due to multiple genes), and they may be sex-linked (resulting from the action of a gene on the sex, or X, chromosome) or autosomal (resulting from the action of a gene on a chromosome other than a sex chromosome). Classical genetics began with Mendel's study of inheritance in garden peas and continues with studies of inheritance in many different plants and animals. Today a prime reason for performing classical genetics is gene discovery: the finding and assembling of a set of genes that affects a biological property of interest.

Cytogenetics, the microscopic study of chromosomes, blends the skills of cytologists, who study the structure and activities of cells, with those of geneticists, who study genes. Cytologists discovered chromosomes and the way in which they duplicate and separate during cell division at about the same time that geneticists began to understand the behaviour of genes at the cellular level. The close correlation between the two disciplines led to their combination.

Plant cytogenetics early became an important subdivision of cytogenetics because, as a general rule, plant chromosomes are larger than those of animals. Animal cytogenetics became important after the development of the so-called squash technique, in which entire cells are pressed flat on a piece of glass and observed through a microscope; the human chromosomes were numbered using this technique.

Today there are multiple ways to attach molecular labels to specific genes and chromosomes, as well as to specific RNAs and proteins, that make these molecules easily discernible from other components of cells, thereby greatly facilitating cytogenetics research.

Microorganisms were generally ignored by the early geneticists because they are small in size and were thought to lack variable traits and the sexual reproduction necessary for a mixing of genes from different organisms. After it was discovered that microorganisms have many different physical and physiological characteristics that are amenable to study, they became objects of great interest to geneticists because of their small size and the fact that they reproduce much more rapidly than larger organisms. Bacteria became important model organisms in genetic analysis, and many discoveries of general interest in genetics arose from their study. Bacterial genetics is the centre of cloning technology.

Viral genetics is another key part of microbial genetics. The genetics of viruses that attack bacteria were the first to be elucidated. Since then, studies and findings of viral genetics have been applied to viruses pathogenic to plants and animals, including humans. Viruses are also used as vectors (agents that carry and introduce modified genetic material into an organism) in DNA technology.

Molecular genetics is the study of the molecular structure of DNA, its cellular activities (including its replication), and its influence in determining the overall makeup of an organism. Molecular genetics relies heavily on genetic engineering (recombinant DNA technology), which can be used to modify organisms by adding foreign DNA, thereby forming transgenic organisms. Since the early 1980s, these techniques have been used extensively in basic biological research and are also fundamental to the biotechnology industry, which is devoted to the manufacture of agricultural and medical products. Transgenesis forms the basis of gene therapy, the attempt to cure genetic disease by addition of normally functioning genes from exogenous sources.

The development of the technology to sequence the DNA of whole genomes on a routine basis has given rise to the discipline of genomics, which dominates genetics research today. Genomics is the study of the structure, function, and evolutionary comparison of whole genomes. Genomics has made it possible to study gene function at a broader level, revealing sets of genes that interact to impinge on some biological property of interest to the researcher. Bioinformatics is the computer-based discipline that deals with the analysis of such large sets of biological information, especially as it applies to genomic information.

The study of genes in populations of animals, plants, and microbes provides information on past migrations, evolutionary relationships and extents of mixing among different varieties and species, and methods of adaptation to the environment. Statistical methods are used to analyze gene distributions and chromosomal variations in populations.

Population genetics is based on the mathematics of the frequencies of alleles and of genetic types in populations. For example, the Hardy-Weinberg formula, p² + 2pq + q² = 1, predicts the frequency of individuals with the respective homozygous dominant (AA), heterozygous (Aa), and homozygous recessive (aa) genotypes in a randomly mating population. Selection, mutation, and random changes can be incorporated into such mathematical models to explain and predict the course of evolutionary change at the population level. These methods can be used on alleles of known phenotypic effect, such as the recessive allele for albinism, or on DNA segments of any type of known or unknown function.
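
The Hardy-Weinberg formula is directly computable. The sketch below evaluates the three genotype frequencies for a given allele frequency; the q = 0.01 figure used for the rare recessive allele is an illustrative number, not a measured albinism frequency:

```python
def hardy_weinberg(p):
    """Genotype frequencies at equilibrium for allele frequency p (q = 1 - p)."""
    q = 1.0 - p
    return {"AA": p ** 2, "Aa": 2 * p * q, "aa": q ** 2}

# Suppose a recessive allele has frequency q = 0.01 (so p = 0.99):
freqs = hardy_weinberg(p=0.99)
assert abs(sum(freqs.values()) - 1.0) < 1e-12   # p^2 + 2pq + q^2 = 1

print(round(freqs["aa"], 6))  # 0.0001 -> about 1 affected individual in 10,000
print(round(freqs["Aa"], 6))  # 0.0198 -> unaffected carriers are far more common
```

This illustrates a standard population-genetics point: for a rare recessive allele, nearly all copies of the allele reside in heterozygous carriers rather than in affected homozygotes.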

Human population geneticists have traced the origins and migration and invasion routes of modern humans, Homo sapiens. DNA comparisons between the present peoples on the planet have pointed to an African origin of Homo sapiens. Tracing specific forms of genes has allowed geneticists to deduce probable migration routes out of Africa to the areas colonized today. Similar studies show to what degree present populations have been mixed by recent patterns of travel.

Another aspect of genetics is the study of the influence of heredity on behaviour. Many aspects of animal behaviour are genetically determined and can therefore be treated as similar to other biological properties. This is the subject material of behaviour genetics, whose goal is to determine which genes control various aspects of behaviour in animals. Human behaviour is difficult to analyze because of the powerful effects of environmental factors, such as culture. Few cases of genetic determination of complex human behaviour are known. Genomics studies provide a useful way to explore the genetic factors involved in complex human traits such as behaviour.

Some geneticists specialize in the hereditary processes of human genetics. Most of the emphasis is on understanding and treating genetic disease and genetically influenced ill health, areas collectively known as medical genetics. One broad area of activity is laboratory research dealing with the mechanisms of human gene function and malfunction and investigating pharmaceutical and other types of treatments. Since there is a high degree of evolutionary conservation between organisms, research on model organismssuch as bacteria, fungi, and fruit flies (Drosophila)which are easier to study, often provides important insights into human gene function.

Many single-gene diseases, caused by mutant alleles of a single gene, have been discovered. Two well-characterized single-gene diseases are phenylketonuria (PKU) and Tay-Sachs disease. Other diseases, such as heart disease, schizophrenia, and depression, are thought to have more complex heredity components that involve a number of different genes. These diseases are the focus of a great deal of research that is being carried out today.

Another broad area of activity is clinical genetics, which centres on advising parents of the likelihood of their children being affected by genetic disease caused by mutant genes and abnormal chromosome structure and number. Such genetic counseling is based on examining individual and family medical records and on diagnostic procedures that can detect unexpressed, abnormal forms of genes. Counseling is carried out by physicians with a particular interest in this area or by specially trained nonphysicians.


HCG Diet Plan Food List & Meal Plan Menu Guide

Posted: June 21, 2018 at 11:45 am

Trying to stick to a strict diet can be hard, right? I know it is for me!

In this article I'm going to show you how you can create a super healthy meal plan that is tasty and easy to stick to!

In order to be successful on the HCG weight loss protocol, not only is it important to follow the guidelines set forth by Dr. Simeons, but it is imperative that you follow and maintain a very low calorie diet, consuming no more than 500 calories a day. More important, though, is how those 500 calories are made up. While most of us eat more than 500 calories in just one sitting at our favorite restaurant, the HCG diet is very specific in just how to spread those calories out through your day.

While the list of HCG approved foods may seem short, it is compiled carefully so your body can easily lose weight and you can plan your meals easily. Some of the most successful dieters will attribute their success to meal planning and finding recipes that change things up, meaning they don't get bored with their food choices.

What's on the HCG Phase 2 Food List?

Before we go through the list of food you are allowed to eat during the diet, it's important to take your own dietary needs into account. If you are diabetic, or have food allergies, you and your doctor will need to come up with a complementary diet plan that is tailored to your specific needs. If there are foods on the list that you just don't like to eat, you cannot replace them with a food you like, unless it is also on the list.

To ensure you are eating the correct amount you may want to invest in a small kitchen scale, especially for your protein. If possible, your proteins should be organic, and look for grass-fed red meats. For all other approved foods, they should be organic as well, and most of your fruits and vegetables can be found at your local farmers market.

For protein, you can eat up to 200 grams per day, but only 100 grams per meal. Trim the fat, and do not cook anything on the bone.

Vegetables should be on your plate regardless of whether you are on HCG phase 2 or not, but you have a bit more variety when it comes to these. Vegetables should take up only one cup at each of two meals, making two cups per day. Don't eat both cups in one sitting; break them up as the protocol says to.

Many of the fruits we enjoy can contain high amounts of sugar, so although they are a part of a healthy daily diet, there are only a few fruits you can safely eat while on HCG phase 2, and even then you can only eat two servings a day.

Just about everyone likes a little starch throughout their day, whether it's a morning bagel or a warm dinner roll. The HCG diet protocol is very limiting in what starches are allowed, so while you might miss that Danish from your favorite coffee shop, you won't miss the weight you lose.

The only starches allowed are Melba toast or Grissini breadsticks, and these should be used as sparingly as possible.

For everything else you eat throughout the day, there are rules attached as well. While you may not pay much attention to the seasonings you use to add flavor, the HCG diet makes you become keenly aware of everything you eat. Although it is a short list, it does make sense, since the diet is trying to keep you from mistakenly eating foods that could sabotage your progress.

These miscellaneous items can be added to your HCG meal plan throughout the diet:

Instructions for HCG phase 2 can seem limiting, but they work to help your body lose weight quickly and avoid retaining fluids. Sounds tough, right? Trying to keep all these rules in your head certainly can be!

Although we'll be getting into some more detail shortly, you can cut out most of the hard work in remembering all the rules by grabbing a FREE COPY of my Top 7 HCG Diet Recipes. I've even thrown in a little bonus for you. My top list of recipes will help take the hard work out of meal selection and grocery shopping.

How you consume these foods daily is why meal planning really helps. You'll already know what you're eating and when, so you can look forward to your favorite meal, and you don't need to worry about what you can eat later. Most users of the protocol find it best to have two small meals a day, at lunch and dinner, and to drink only coffee or tea at breakfast.

Still, other people find they have better success by spreading their meals out and eating breakfast and dinner, or by dividing up their meals, eating lunch and having a fruit serving at breakfast time. A lunchtime option is especially important to some dieters who may feel pressured to go out to lunch with their coworkers, or don't want to talk about their lack of lunch with friends.

One of the most important things to keep in mind when following the HCG diet is that you should never combine two of any one category in the same meal; that means no apple and orange together at lunch, and no cabbage and celery together at dinner. My free Top 7 HCG Diet Recipes takes all of this into account, so I highly recommend you check it to make sure you're on the right track.
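
The never-two-of-one-category rule is mechanical enough to check in code. Here is a small sketch; the category table and food names are illustrative examples I've chosen, not the official protocol list:

```python
# Hypothetical category table covering a few commonly listed items;
# an illustration only, not the official HCG food list.
FOOD_CATEGORY = {
    "chicken breast": "protein", "whitefish": "protein", "flank steak": "protein",
    "apple": "fruit", "orange": "fruit", "strawberries": "fruit",
    "cabbage": "vegetable", "celery": "vegetable", "asparagus": "vegetable",
    "melba toast": "starch", "grissini": "starch",
}

def check_meal(items):
    """Flag any meal that doubles up a single food category."""
    seen = set()
    problems = []
    for item in items:
        category = FOOD_CATEGORY.get(item)
        if category is None:
            continue  # not on the (illustrative) list; a real checker would reject it
        if category in seen:
            problems.append(f"two {category}s in one meal: {item}")
        seen.add(category)
    return problems

print(check_meal(["chicken breast", "celery", "apple", "melba toast"]))  # []
print(check_meal(["apple", "orange"]))  # ['two fruits in one meal: orange']
```

A checker like this is just a convenience for planning; it doesn't replace weighing portions or counting calories.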

While you are making your way through HCG diet phase 2, you'll find that changing up your daily meals and snacks alleviates the boredom you can sometimes feel when you eat the same thing each day. While a small grilled chicken breast is easy to make and refrigerate for the week, it might not seem appetizing four days later.

A Typical Day On The HCG Diet

Breakfast: Coffee or tea with one serving of fruit

Lunch: 100 grams of any one approved protein, 1 vegetable, 1 fruit, and 1 starch. Some dieters like to eat a heavier protein at lunch time, and a light one, such as fish, for dinner.

Dinner: 100 grams of any one approved protein, 1 vegetable, 1 fruit, and 1 starch.

It is not uncommon, while on the protocol, to find that you do not feel hungry, and you should not feel alarmed. The most important thing to remember about the diet is that you should never exceed 500 calories. So, if you find you're not as hungry one day, then it's all right to skip a fruit, vegetable, or starch. However, do not skip your protein, because your body needs it in order to function properly. If you find yourself weak, or feeling hungry, then do not skimp in any way.

The goal is to always eat to your hunger.

It is through this mechanism that your body will begin to breakdown the fat stored in your body, and you will see changes not only in your weight but in your overall shape.

What HCG Phase 2 Recipes Are Available?

If you search online for HCG recipes you're sure to find hundreds of them. HCG diet recipes are unique in that there are only a few ingredients to work with, you can't add too many vegetables to one plate, and no starches are allowed either. Although it's restrictive, the lack of variety means dieters need to be creative, so you're not likely to run out of HCG recipes anytime during phase 2. Here are a few HCG diet recipes that are most popular:

Roasted Steak and Onions - 100 grams of flank steak, seasoned how you like, in a medium-sized skillet. Brown your steak on both sides to seal in the juices before putting it in a preheated oven set to broil. Add one small sliced onion and a splash of water to the skillet and sauté the onions to desired doneness.

Chicken Apple Wraps - 100 grams diced cooked chicken, cooked with a dash of pepper, salt, and smoked paprika; 1 small apple, diced; sprinkle with 2 tablespoons of lemon juice, 1/8 teaspoon cinnamon, 1/8 teaspoon cardamom, and a sprinkling of stevia.

Whitefish Taco Wraps - 100 grams of whitefish, c. water, 1 tsp. apple cider vinegar, 1 clove crushed garlic, teaspoon ground cumin, teaspoon chili powder, dash of salt; season with cracked black pepper to taste. Bake fish with juice from half a lemon; add the spices until evenly coated. Bake at 350 °F until fluids run clear. Once the fish has been removed from the baking dish, use a fork to lightly break up the fish, spoon into iceberg lettuce leaves, and enjoy!

Roast Tomato Slices - 1 tomato per serving, 1 clove crushed garlic, 1 sprig rosemary (chopped), garlic salt. Slice tomatoes into thin slices, spread out on a parchment-paper-lined cookie sheet, and sprinkle with garlic salt and chopped rosemary. Place the garlic between the slices. Roast for 6 hours at 200 degrees.

Cucumber Dill Salad - 1 medium cucumber, sliced and quartered; 1 tablespoon vinegar, 1 teaspoon dill, black pepper and stevia to taste. Mix vinegar, stevia, and dill, pour over cucumbers, and stir. Pepper to taste.

Oven Roasted Asparagus - 1 bunch of asparagus, 3 tablespoons water, 1 clove minced garlic, 1 teaspoon salt, 1 tablespoon lemon juice, black pepper to taste. Preheat oven to 425 °F, toss asparagus in water and lemon juice to coat, sprinkle with dry ingredients, arrange on a baking sheet in a single layer, and bake for 12-15 minutes until desired tenderness is reached.

For some tasty treats to break up the monotony, these HCG diet phase 2 recipes for drinks can keep you motivated.

Frozen Strawberry Slushy

Blend the ice, strawberries, and vanilla in the blender until a slush forms. Taste, and add stevia to desired sweetness. Pour into a cup and enjoy! Strawberries out of season? No worries! Instead use 4 frozen strawberries and 1 cup of water. Additional alternatives are to use flavored liquid stevia to give yourself some variety and avoid boredom and cravings. Try replacing your vanilla and stevia with chocolate-flavored stevia for a super tasty treat!

Blended Coffee

Blend and enjoy! Once again you can get creative with flavored liquid stevia and use any of these flavors: vanilla, chocolate, hazelnut, cinnamon, or peppermint! Who needs a coffee shop now?

Grapefruit Spritzer

Mix the grapefruit juice and orange stevia drops together, gently stir into sparkling mineral water and pour over crushed ice.

If you think you can't have desserts on the diet, think again. To satisfy chocolate cravings you can use cocoa powder, but it must be sugar-free and all natural. Phase 2, or HCG P2, dessert recipes really rely on using the best cocoa powders, so be sure to read your ingredients carefully.

Chocolate Cheesecake - 100 g of low/no-fat cottage cheese, 1 Tbsp cocoa powder, splash of vanilla extract, liquid stevia to taste. Puree the ingredients together, and then taste to see if you'd like more stevia added. Garnish with strawberry if you'd like, and you can again use flavored stevia for some variety.

Baked Toffee Apples - 1 apple, thinly sliced; 1 tsp. ground cinnamon, 1 tsp stevia, tsp. nutmeg, 1 tbsp. water, English toffee stevia. Combine all dry ingredients in a small mixing bowl and mix well. Place the liquids into a small plastic zip bag along with the apple slices and seal. Shake well, lay flat in a baking dish, and bake at 350 °F for 25-30 minutes until desired doneness.

Check out the video below for a great recipe for chicken wraps that is HCG diet friendly!

When planning HCG diet recipes for phase 2, it can seem like you don't have many options, but you simply need to be a little creative. If you're not in a particularly creative mood, my free Top 7 HCG Diet Recipes can help you out. The HCG diet can be tough, especially if you're a foodie, but it's worth it. Everything you learn from implementing HCG recipes in phase 2 will only help you be more successful as you move into phases 3 and 4. Enjoy, and bon appétit!

Now that you have an idea of what you'll be eating, it's time to decide how you want to take the HCG hormone. The options you have are:


HCG Diet System – Lose Weight Fast | Lose 10KGs in 30 Days

Posted: June 21, 2018 at 11:45 am

If you are tired of struggling to lose weight and keep it off then the HCG DIET SYSTEM SA is definitely the answer for you.

It is the best weight loss solution in record time. We will show you how to lose weight quickly without starving yourself: no crazy exercise regime, no pricey meals, no expensive supplements or pills.

I am sure you have tried endless diets which have left you feeling deprived, hungry, depleted, low on energy, even anxious or irritable. Well, the good news is that you will have none of these side effects whilst using this weight loss program. In fact, most people say they have never felt better.

Many people think fat is fat, but did you know that there are 3 different types of fat in the body? Normal fat, structural fat, and abnormal fat. The body needs normal fat and structural fat. Abnormal fat is the guilty culprit: tummy rolls, saddle bags, love handles, double chins, batwing arms. The HCG attacks this abnormal fat, mobilising it to use as ENERGY and FUEL, thus allowing you to lose weight quickly in the areas where you need it most.

